Hi all,

I am getting this error with Flink 1.6.0, please help me.

2018-09-23 07:15:08,846 ERROR org.apache.kafka.clients.producer.KafkaProducer - Interrupted while joining ioThread
java.lang.InterruptedException
	at java.lang.Object.wait(Native Method)
	at java.lang.Thread.join(Thread.java:1257)
	at org.apache.kafka.clients.producer.KafkaProducer.close(KafkaProducer.java:703)
	at org.apache.kafka.clients.producer.KafkaProducer.close(KafkaProducer.java:682)
	at org.apache.kafka.clients.producer.KafkaProducer.close(KafkaProducer.java:661)
	at org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerBase.close(FlinkKafkaProducerBase.java:319)
	at org.apache.flink.api.common.functions.util.FunctionUtils.closeFunction(FunctionUtils.java:43)
	at org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.dispose(AbstractUdfStreamOperator.java:117)
	at org.apache.flink.streaming.runtime.tasks.StreamTask.disposeAllOperators(StreamTask.java:477)
	at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:378)
	at org.apache.flink.runtime.taskmanager.Task.run(Task.java:711)
	at java.lang.Thread.run(Thread.java:745)
2018-09-23 07:15:08,847 INFO org.apache.kafka.clients.producer.KafkaProducer - Proceeding to force close the producer since pending requests could not be completed within timeout 9223372036854775807 ms.
2018-09-23 07:15:08,860 ERROR org.apache.flink.streaming.runtime.tasks.StreamTask - Error during disposal of stream operator.
org.apache.kafka.common.KafkaException: Failed to close kafka producer
	at org.apache.kafka.clients.producer.KafkaProducer.close(KafkaProducer.java:734)
	at org.apache.kafka.clients.producer.KafkaProducer.close(KafkaProducer.java:682)
	at org.apache.kafka.clients.producer.KafkaProducer.close(KafkaProducer.java:661)
	at org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerBase.close(FlinkKafkaProducerBase.java:319)
	at org.apache.flink.api.common.functions.util.FunctionUtils.closeFunction(FunctionUtils.java:43)
	at org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.dispose(AbstractUdfStreamOperator.java:117)
	at org.apache.flink.streaming.runtime.tasks.StreamTask.disposeAllOperators(StreamTask.java:477)
	at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:378)
	at org.apache.flink.runtime.taskmanager.Task.run(Task.java:711)
	at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.InterruptedException
	at java.lang.Object.wait(Native Method)
	at java.lang.Thread.join(Thread.java:1257)
	at org.apache.kafka.clients.producer.KafkaProducer.close(KafkaProducer.java:703)
	... 9 more

Thanks
Yubraj Singh
What are you trying to do? Can you share some code? This is the reason for the exception:

Proceeding to force close the producer since pending requests could not be completed within timeout 9223372036854775807 ms.

On Mon, 24 Sep 2018, 9:23 yuvraj singh, <[hidden email]> wrote:
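The 9223372036854775807 ms in that log line is Long.MAX_VALUE: the no-argument KafkaProducer.close() called by FlinkKafkaProducerBase.close() joins the producer's I/O thread with an effectively unbounded timeout, and when Flink interrupts the task thread (typically because the job is being cancelled or is failing over) that join throws the InterruptedException above and Kafka force-closes. A minimal self-contained sketch of that mechanism, using plain threads as a stand-in for the real Kafka I/O thread (class and method names here are hypothetical, not the actual Kafka API):

```java
public class CloseInterruptSketch {
    // Mimics KafkaProducer.close(): join the I/O thread with no timeout
    // (effectively Long.MAX_VALUE ms) and report whether the join was
    // interrupted, as in the Flink log above.
    static boolean joinIoThread(Thread ioThread) {
        try {
            ioThread.join(); // blocks until the I/O thread finishes
            return false;
        } catch (InterruptedException e) {
            ioThread.interrupt(); // "force close": give up on pending requests
            return true;
        }
    }

    public static void main(String[] args) {
        // Stand-in for the producer's I/O thread with pending requests.
        Thread io = new Thread(() -> {
            try { Thread.sleep(10_000); } catch (InterruptedException ignored) {}
        });
        io.start();

        // Stand-in for Flink cancelling the task: interrupt the task thread
        // while it is blocked in the join.
        Thread task = Thread.currentThread();
        new Thread(() -> {
            try { Thread.sleep(100); } catch (InterruptedException ignored) {}
            task.interrupt();
        }).start();

        System.out.println("interrupted while joining ioThread: " + joinIoThread(io));
        // prints "interrupted while joining ioThread: true"
    }
}
```

So the InterruptedException is usually a symptom, not the root cause: something else cancelled or failed the task, and the producer was torn down mid-flush.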
I am processing data and then sending it to Kafka through a Kafka sink. This is where I am producing the data:

nudgeDetailsDataStream.keyBy(NudgeDetails::getCarSugarID).addSink(NudgeCarLevelProducer.getProducer(config))

This is my producer:

public class NudgeCarLevelProducer {

On Mon, Sep 24, 2018 at 12:24 PM miki haiat <[hidden email]> wrote:
I have one more question: if I do keyBy on the stream, will it get partitioned automatically? Because I am getting all the data in the same partition in Kafka.

Thanks
Yubraj Singh

On Mon, Sep 24, 2018 at 12:34 PM yuvraj singh <[hidden email]> wrote:
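On the follow-up question: keyBy only controls how Flink distributes records between its own operators; it does not determine the Kafka partition a record is written to. If I recall the connector defaults correctly, a FlinkKafkaProducer created without a custom FlinkKafkaPartitioner (and without a keyed serialization schema) falls back to FlinkFixedPartitioner, where each parallel sink subtask writes all of its records to one fixed Kafka partition, so a sink with parallelism 1 lands everything in a single partition. A self-contained sketch contrasting the two strategies (method names are hypothetical, and hashCode stands in for Kafka's murmur2 hash, purely for illustration):

```java
public class PartitioningSketch {
    // FlinkFixedPartitioner-style behavior: every record emitted by a given
    // sink subtask goes to ONE fixed partition, regardless of the record key.
    static int fixedPartition(int subtaskIndex, int numPartitions) {
        return subtaskIndex % numPartitions;
    }

    // Key-based alternative: hash the record key so different keys spread
    // across partitions (Kafka's own default partitioner hashes with murmur2;
    // hashCode is used here only to keep the sketch dependency-free).
    static int keyedPartition(String key, int numPartitions) {
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        int partitions = 4;
        // A job whose sink runs with parallelism 1: subtask 0 writes every
        // record to the same partition, matching the behavior described above.
        System.out.println(fixedPartition(0, partitions)); // prints 0
        // Keyed writes spread records by key instead.
        for (String key : new String[]{"car-1", "car-2", "car-3"}) {
            System.out.println(key + " -> " + keyedPartition(key, partitions));
        }
    }
}
```

In other words, to spread data across Kafka partitions you need to either attach keys to the produced records or pass a custom partitioner to the producer; keyBy before addSink is not enough on its own.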