Hi all,
Currently I am facing a problem caused by a long Flink SQL statement. My SQL is like "insert into tableA select a, b, c ... from sourceTable", with more than 1000 columns in the select list. That is the problem: the Flink code generator generates a RichMapFunction class whose map method exceeds the JVM's method size limit (64 KB). It throws an exception like:

Caused by: java.lang.RuntimeException: Compiling "DataStreamSinkConversion$3055": Code of method "map(Ljava/lang/Object;)Ljava/lang/Object;" of class "DataStreamSinkConversion$3055" grows beyond 64 KB

So is there any best practice for this?

Thanks,
Simon
Hi Simon,

Does your code include the PR [1]? If it does: try setting TableConfig.setMaxGeneratedCodeLength to a smaller value (the default is 64000). If it does not: can you wrap some fields into a nested Row field to reduce the number of top-level fields?
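Something like this (an untested sketch; the table and column names are hypothetical, 4000 is just an example value, and setMaxGeneratedCodeLength only takes effect if your build contains the PR above):

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.java.StreamTableEnvironment;

public class WideSelectWorkaround {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);

        // Option 1: lower the threshold at which generated code is split
        // (default is 64000; requires the PR above).
        tableEnv.getConfig().setMaxGeneratedCodeLength(4000);

        // Option 2: wrap related columns into nested ROWs so the top-level
        // field count shrinks. Assumes tableA and sourceTable are already
        // registered and tableA's schema has matching ROW-typed columns.
        tableEnv.sqlUpdate(
            "INSERT INTO tableA "
          + "SELECT ROW(a, b, c) AS grp1, ROW(d, e, f) AS grp2 "
          + "FROM sourceTable");

        env.execute("wide-select-workaround");
    }
}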
Hi Jingsong,

Thanks for your reply. It seems that wrapping fields is a feasible way for me now. There is also another JIRA, FLINK-8921, that tries to improve this.

Thanks,
Simon
Hi Simon,

Hope you can wrap them simply. In our scenario there are also many jobs with that many columns; the huge generated code not only leads to compile exceptions, but also prevents the code from being optimized by the JIT. We are planning to introduce a Java code splitter (which analyzes the Java code and segments it appropriately at compile time) to solve this problem thoroughly in the blink planner. Maybe it will land in release-1.10.

Best,
JingsongLee
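To illustrate the idea (a hypothetical sketch, not Flink's actual generated code): the JVM limits each method body to 64 KB of bytecode, so a splitter rewrites one oversized generated method into several smaller sub-methods that each stay under the limit:

// Hypothetical illustration of code splitting; real generated code emits
// straight-line, per-field statements, which is what exceeds the limit.
public class SplitMapper {
    public Object[] map(Object[] in) {
        Object[] out = new Object[6];
        mapSplit0(in, out); // handles fields 0..2
        mapSplit1(in, out); // handles fields 3..5
        return out;
    }

    private void mapSplit0(Object[] in, Object[] out) {
        out[0] = in[0];
        out[1] = in[1];
        out[2] = in[2];
    }

    private void mapSplit1(Object[] in, Object[] out) {
        out[3] = in[3];
        out[4] = in[4];
        out[5] = in[5];
    }
}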