Hi everyone,
Can you please explain whether this is expected behaviour?
My environment:
I have a Flink 1.8.1 session cluster and a job written in Scala. I submit the job with the Flink CLI, specifying the JobManager address.
To submit the job I use the CLI from two Flink Docker images, flink:1.8.1-scala_2.11 and flink:1.8.0-scala_2.11, with separate jars built against the respective Flink versions.
When I submit the job using Flink 1.8.0, everything works fine, but when I use 1.8.1 with default settings I get a StackOverflowError from the logging framework.
Here is the log:
https://gist.github.com/Atlaster/a4aae378be81355ae576e849d9eda20a
After that, I followed the documentation diff from this JIRA issue
https://issues.apache.org/jira/browse/FLINK-12297 and set the ClosureCleanerLevel to TOP_LEVEL. That helped, and the job launched without any problems.
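For reference, this is roughly how I set the level on the ExecutionConfig (a simplified sketch, not my exact job code):

```scala
import org.apache.flink.api.common.ExecutionConfig
import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment

val env = StreamExecutionEnvironment.getExecutionEnvironment
// TOP_LEVEL cleans only the outermost closure and does not recurse into its fields
env.getConfig.setClosureCleanerLevel(ExecutionConfig.ClosureCleanerLevel.TOP_LEVEL)
```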
Following that, I marked all ObjectMapper fields as transient, since the loop was clearly related to them, and launched the job with ClosureCleanerLevel.RECURSIVE. That also ran without errors and the job works fine, which is strange, since Jackson's ObjectMapper is supposed to be serializable; or am I missing the point here?
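To illustrate what the transient marking changes, here is a small self-contained sketch using plain Java serialization (with a StringBuilder as a hypothetical stand-in for the ObjectMapper): a transient field is simply skipped when the closure is serialized, so the serializer never walks into its object graph, and the field has to be rebuilt after deserialization.

```scala
import java.io._

// Hypothetical stand-in for a rich function whose field drags extra state
// into the serialized closure (a StringBuilder here, in place of ObjectMapper).
class MapFn extends Serializable {
  // @transient: Java serialization skips this field entirely, so the
  // serializer never recurses into whatever object graph it holds
  @transient var helper: StringBuilder = new StringBuilder("state")

  def apply(s: String): String = {
    // after deserialization the transient field is null; rebuild it lazily
    if (helper == null) helper = new StringBuilder("state")
    s.toUpperCase
  }
}

object TransientDemo {
  // serialize and deserialize an object, as Flink does when shipping a closure
  def roundTrip[T <: Serializable](obj: T): T = {
    val bos = new ByteArrayOutputStream()
    val oos = new ObjectOutputStream(bos)
    oos.writeObject(obj)
    oos.close()
    val ois = new ObjectInputStream(new ByteArrayInputStream(bos.toByteArray))
    ois.readObject().asInstanceOf[T]
  }

  def main(args: Array[String]): Unit = {
    val fn = roundTrip(new MapFn)
    println(fn.apply("ok")) // transient helper was dropped in transit, then rebuilt
  }
}
```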
Thank you in advance.
Best regards,
Alex.