How can I handle backpressure with event time.

How can I handle backpressure with event time.

yunfan123
For example, I want to merge two Kafka topics (named topicA and topicB) on a specific key with a maximum timeout.
I use event time and the BoundedOutOfOrdernessTimestampExtractor class to generate watermarks.
When some partitions of topicA are delayed by backpressure and the delay exceeds my maximum timeout,
none of the delayed partitions of topicA (nor the corresponding data in topicB) can be merged.
What I want is that when backpressure happens, the consumers consume based only on my event time.
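For context, the setup being described looks roughly like this — a minimal sketch assuming the Flink 1.x DataStream API, a hypothetical `Event` POJO with `getKey()`/`getTimestamp()`, and a hypothetical `EventSchema` deserializer (none of these names are from the thread). Note the watermark assigner is applied on the DataStream, which is the source of the problem discussed below:

```java
import java.util.Properties;
import org.apache.flink.api.common.functions.JoinFunction;
import org.apache.flink.streaming.api.TimeCharacteristic;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.timestamps.BoundedOutOfOrdernessTimestampExtractor;
import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer010;

public class MergeJobSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092");
        props.setProperty("group.id", "merge-job");

        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);

        // Extract event-time timestamps, tolerating 10 s of out-of-orderness.
        BoundedOutOfOrdernessTimestampExtractor<Event> extractor =
            new BoundedOutOfOrdernessTimestampExtractor<Event>(Time.seconds(10)) {
                @Override
                public long extractTimestamp(Event e) {
                    return e.getTimestamp();
                }
            };

        // Assigner applied on the DataStream (after the source), as the poster describes.
        DataStream<Event> a = env
            .addSource(new FlinkKafkaConsumer010<>("topicA", new EventSchema(), props))
            .assignTimestampsAndWatermarks(extractor);
        DataStream<Event> b = env
            .addSource(new FlinkKafkaConsumer010<>("topicB", new EventSchema(), props))
            .assignTimestampsAndWatermarks(extractor);

        // Merge the two topics by key within an event-time window (the "max timeout").
        a.join(b)
         .where(Event::getKey)
         .equalTo(Event::getKey)
         .window(TumblingEventTimeWindows.of(Time.minutes(5)))
         .apply(new JoinFunction<Event, Event, String>() {
             @Override
             public String join(Event left, Event right) {
                 return left.getKey() + ": merged";
             }
         })
         .print();

        env.execute("merge-sketch");
    }
}
```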

Re: How can I handle backpressure with event time.

Eron Wright
Try setting the assigner on the Kafka consumer, rather than on the DataStream:
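(The code snippet from the original message appears to have been lost in the archive. The idea, sketched against the Flink 1.x Kafka connector with the same hypothetical `Event` type and `EventSchema` deserializer as above, would look roughly like this:)

```java
// Attach the watermark assigner to the Kafka source itself, not to the
// resulting DataStream. The consumer then tracks a watermark per Kafka
// partition and forwards only the minimum across its partitions, so one
// delayed partition holds the watermark back instead of being dropped
// as late data.
FlinkKafkaConsumer010<Event> consumer =
    new FlinkKafkaConsumer010<>("topicA", new EventSchema(), props);

consumer.assignTimestampsAndWatermarks(
    new BoundedOutOfOrdernessTimestampExtractor<Event>(Time.seconds(10)) {
        @Override
        public long extractTimestamp(Event e) {
            return e.getTimestamp();
        }
    });

DataStream<Event> a = env.addSource(consumer);
```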

I believe this will produce a per-partition assigner and forward only the minimum watermark across all partitions.

Hope this helps,
-Eron

On Thu, May 25, 2017 at 3:21 AM, yunfan123 <[hidden email]> wrote:
--
View this message in context: http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/How-can-I-handle-backpressure-with-event-time-tp13313.html
Sent from the Apache Flink User Mailing List archive. mailing list archive at Nabble.com.


Re: How can I handle backpressure with event time.

yunfan123
It seems the consumer can't consume multiple topics with different deserialization
schemas. I use protobuf, and different topics map to different schemas.
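A common workaround (a sketch, not from this thread: it assumes hypothetical protobuf-generated types `ProtoA`/`ProtoB` and hypothetical per-topic `DeserializationSchema` implementations `ProtoASchema`/`ProtoBSchema`) is one consumer per topic, each with its own schema — the per-partition watermark assignment suggested above can still be applied to each consumer individually — then connecting the typed streams for the keyed merge:

```java
// One FlinkKafkaConsumer per topic, each with its own DeserializationSchema,
// so each topic can carry a different protobuf message type.
DataStream<ProtoA> a = env.addSource(
    new FlinkKafkaConsumer010<>("topicA", new ProtoASchema(), props));
DataStream<ProtoB> b = env.addSource(
    new FlinkKafkaConsumer010<>("topicB", new ProtoBSchema(), props));

// connect() pairs the two differently-typed streams; a CoProcessFunction
// (here a hypothetical MergeFunction) sees both sides keyed the same way
// and can buffer one side in state until the match or a timeout arrives.
a.keyBy(ProtoA::getKey)
 .connect(b.keyBy(ProtoB::getKey))
 .process(new MergeFunction());
```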


