Actually our team has its own stream engine. We tested our engine against Flink and found that when we aggregate stream data, Flink's throughput drops very quickly. So we captured a stack trace and found a deep copy in Flink: between different operators there are calls to org.apache.flink.api.java.typeutils.runtime.kryo.KryoSerializer.copy.

--
Sent from: http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/
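Below is a minimal sketch of the kind of keyed aggregation where this shows up; the Event class and field names are hypothetical, not from the original post. Because Event does not satisfy Flink's POJO rules (no public no-argument constructor, private fields without setters), Flink treats it as a generic type and falls back to the KryoSerializer, whose copy() then deep-copies records handed between chained operators unless object reuse is enabled.

```java
import org.apache.flink.api.common.functions.ReduceFunction;
import org.apache.flink.api.java.functions.KeySelector;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class KryoFallbackAggregation {

    // Hypothetical event type. It has no public no-arg constructor and its
    // fields have no setters, so Flink cannot analyze it as a POJO and
    // falls back to Kryo serialization.
    public static class Event {
        private String key;
        private long value;

        public Event(String key, long value) {
            this.key = key;
            this.value = value;
        }

        public String getKey() { return key; }
        public long getValue() { return value; }
    }

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        env.fromElements(new Event("a", 1), new Event("a", 2), new Event("b", 3))
            .keyBy(new KeySelector<Event, String>() {
                @Override
                public String getKey(Event e) {
                    return e.getKey();
                }
            })
            // Per-key sum; with a generic (Kryo-serialized) type, records are
            // deep-copied between chained operators via KryoSerializer.copy().
            .reduce(new ReduceFunction<Event>() {
                @Override
                public Event reduce(Event a, Event b) {
                    return new Event(a.getKey(), a.getValue() + b.getValue());
                }
            })
            .print();

        env.execute("kryo-fallback-aggregation");
    }
}
```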
Hi,
You can disable those copies via ExecutionConfig.enableObjectReuse(), which you can get from the StreamExecutionEnvironment via getConfig().

Best,
Aljoscha
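A minimal sketch of where that call goes, using the API named above (the rest of the pipeline is omitted):

```java
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class ObjectReuseSetup {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Disable the per-operator deep copies: with object reuse enabled,
        // Flink hands the same record instance to chained operators instead
        // of copying it with the type serializer (e.g. KryoSerializer.copy).
        env.getConfig().enableObjectReuse();

        // ... define sources, aggregations, and sinks as usual ...
        // env.execute("my-job");
    }
}
```

Note that with object reuse enabled, user functions should not hold on to references of input records across invocations, since Flink may reuse or mutate those objects.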
Hello,
You might also be able to make Flink use a better serializer than Kryo. Flink falls back to Kryo when it can't use its own serializers, see here: https://ci.apache.org/projects/flink/flink-docs-release-1.4/dev/types_serialization.html

For example, it might help to make your type a POJO.

Best,
Gábor
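A sketch of that suggestion, under the assumption that the aggregated type can be turned into a Flink POJO (class and field names are hypothetical): a public class with a public no-argument constructor and public fields (or public getters and setters) is handled by Flink's own PojoSerializer instead of falling back to Kryo.

```java
// Hypothetical event type rewritten so that Flink recognizes it as a POJO:
//  - the class is public,
//  - it has a public no-argument constructor,
//  - all fields are public (or have public getters and setters).
public class Event {
    public String key;
    public long value;

    public Event() {}  // required public no-arg constructor

    public Event(String key, long value) {
        this.key = key;
        this.value = value;
    }
}
```

With a POJO type you can also key the stream by field name, e.g. stream.keyBy("key"), and Flink can serialize records with its built-in serializers rather than Kryo.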
@Aljoscha Krettek,
Thanks Aljoscha, I will try this approach and test the performance. The last 7 days were the Chinese Spring Festival, sorry for responding so late.

--
Sent from: http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/
@Gábor Gévay,
Thanks Gábor. I use Flink in our production environment, but the performance is not good, especially for aggregation. At the beginning I used Java serialization, but it did not work well; maybe I did not understand Flink well enough then. I will try changing the serialization method and test again. The last 7 days were the Chinese Spring Festival, sorry for responding so late.

--
Sent from: http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/