Leader not found

Leader not found

Balaji Rajagopalan
I am facing this exception repeatedly while trying to consume from a Kafka topic. It seems the issue was reported in 1.0.0 and fixed in 1.0.0. How can I be sure the fix is in the Flink version I am using? Does it require me to install patch updates?

Caused by: java.lang.RuntimeException: Unable to find a leader for partitions: [FetchPartition {topic=capi, partition=0, offset=-915623761776}]
at org.apache.flink.streaming.connectors.kafka.internals.LegacyFetcher.findLeaderForPartitions(LegacyFetcher.java:323)
at org.apache.flink.streaming.connectors.kafka.internals.LegacyFetcher.run(LegacyFetcher.java:162)
at org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer08.run(FlinkKafkaConsumer08.java:316)
at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:78)
at org.apache.flink.streaming.runtime.tasks.SourceStreamTask.run(SourceStreamTask.java:56)
at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:224)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:559)
at java.lang.Thread.run(Thread.java:745)

https://issues.apache.org/jira/browse/FLINK-3368

Re: Leader not found

rmetzger0
Hi,

I'm sorry, the documentation in the JIRA issue is a bit incorrect. The issue has been fixed in all versions from 1.0.0 onwards; earlier releases (0.10, 0.9) will fail when the leader changes.
However, you don't necessarily need to upgrade to Flink 1.0.0 to resolve the issue: with checkpointing enabled, your job will fail on a leader change, Flink will then restart the Kafka consumers, and they will find the new leaders.
Starting from Flink 1.0.0, the Kafka consumer handles leader changes without failing.
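
For reference, here is a minimal sketch of a job with checkpointing enabled and the 0.8 consumer. The broker address, ZooKeeper address and group id below are placeholders, not taken from your setup:

import java.util.Properties;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer08;
import org.apache.flink.streaming.util.serialization.SimpleStringSchema;

public class CheckpointedKafkaJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // With checkpointing on, a failed source task is restarted and the
        // consumer looks up the partition leaders again on recovery.
        env.enableCheckpointing(5000);

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "broker1:9092");  // placeholder
        props.setProperty("zookeeper.connect", "zk1:2181");      // placeholder
        props.setProperty("group.id", "my-group");                // placeholder

        env.addSource(new FlinkKafkaConsumer08<String>("capi", new SimpleStringSchema(), props))
           .print();

        env.execute("Kafka 0.8 consumer with checkpointing");
    }
}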

Regards,
Robert

Re: Leader not found

Balaji Rajagopalan
It does not seem to fully work if there is no data in the Kafka stream; the Flink application emits this error and bails out. Could this be a use case that the fix missed?

Re: Leader not found

rmetzger0
Can you provide me with the exact Flink and Kafka versions you are using and the steps to reproduce the issue?

Re: Leader not found

Balaji Rajagopalan
Flink version : 1.0.0
Kafka version : 0.8.2.1

Try using a topic that has had no messages posted to it at the time Flink starts.

Re: Leader not found

rmetzger0
Hi,
I just tried it with Kafka 0.8.2.0 and 0.8.2.1 and for both versions everything worked fine.
How many partitions does your topic have?

Can you send me the full logs of the Kafka consumer?

Re: Leader not found

Balaji Rajagopalan

/usr/share/kafka_2.11-0.8.2.1/bin$ ./kafka-topics.sh --describe --topic capi --zookeeper (someserver)

Topic:capi PartitionCount:1 ReplicationFactor:1 Configs:

Topic: capi Partition: 0 Leader: 0 Replicas: 0 Isr: 0


There are no events to consume from this topic; I confirmed this by running the console consumer.

./kafka-console-consumer.sh --topic topicname --zookeeper (some server)


The Flink connector is the other consumer. This is happening consistently on our pre-production machines; I will also try to reproduce it locally.

java.lang.RuntimeException: Unable to find a leader for partitions: [FetchPartition {topic=location, partition=58, offset=-915623761776}, FetchPartition {topic=location, partition=60, offset=-915623761776}, FetchPartition {topic=location, partition=54, offset=-915623761776}, FetchPartition {topic=location, partition=56, offset=-915623761776}, FetchPartition {topic=location, partition=66, offset=-915623761776}, FetchPartition {topic=location, partition=68, offset=-915623761776}, FetchPartition {topic=location, partition=62, offset=-915623761776}, FetchPartition {topic=location, partition=64, offset=-915623761776}, FetchPartition {topic=location, partition=74, offset=-915623761776}, FetchPartition {topic=location, partition=76, offset=-915623761776}, FetchPartition {topic=location, partition=70, offset=-915623761776}, FetchPartition {topic=location, partition=72, offset=-915623761776}, FetchPartition {topic=location, partition=82, offset=-915623761776}, FetchPartition {topic=location, partition=84, offset=-915623761776}, FetchPartition {topic=location, partition=78, offset=-915623761776}, FetchPartition {topic=location, partition=80, offset=-915623761776}, FetchPartition {topic=location, partition=26, offset=-915623761776}, FetchPartition {topic=location, partition=28, offset=-915623761776}, FetchPartition {topic=location, partition=22, offset=-915623761776}, FetchPartition {topic=location, partition=24, offset=-915623761776}, FetchPartition {topic=location, partition=34, offset=-915623761776}, FetchPartition {topic=location, partition=36, offset=-915623761776}, FetchPartition {topic=location, partition=30, offset=-915623761776}, FetchPartition {topic=location, partition=32, offset=-915623761776}, FetchPartition {topic=location, partition=42, offset=-915623761776}, FetchPartition {topic=location, partition=44, offset=-915623761776}, FetchPartition {topic=location, partition=38, offset=-915623761776}, FetchPartition {topic=location, partition=40, offset=-915623761776}, FetchPartition {topic=location, partition=50, offset=-915623761776}, FetchPartition {topic=location, partition=52, offset=-915623761776}, FetchPartition {topic=location, partition=46, offset=-915623761776}, FetchPartition {topic=location, partition=48, offset=-915623761776}, FetchPartition {topic=location, partition=122, offset=-915623761776}, FetchPartition {topic=location, partition=124, offset=-915623761776}, FetchPartition {topic=location, partition=118, offset=-915623761776}, FetchPartition {topic=location, partition=120, offset=-915623761776}, FetchPartition {topic=location, partition=2, offset=-915623761776}, FetchPartition {topic=location, partition=130, offset=-915623761776}, FetchPartition {topic=location, partition=4, offset=-915623761776}, FetchPartition {topic=location, partition=132, offset=-915623761776}, FetchPartition {topic=location, partition=126, offset=-915623761776}, FetchPartition {topic=location, partition=0, offset=-915623761776}, FetchPartition {topic=location, partition=128, offset=-915623761776}, FetchPartition {topic=location, partition=10, offset=-915623761776}, FetchPartition {topic=location, partition=138, offset=-915623761776}, FetchPartition {topic=location, partition=12, offset=-915623761776}, FetchPartition {topic=location, partition=140, offset=-915623761776}, FetchPartition {topic=location, partition=6, offset=-915623761776}, FetchPartition {topic=location, partition=134, offset=-915623761776}, FetchPartition {topic=location, partition=8, offset=-915623761776}, FetchPartition 
{topic=location, partition=136, offset=-915623761776}, FetchPartition {topic=location, partition=18, offset=-915623761776}, FetchPartition {topic=location, partition=146, offset=-915623761776}, FetchPartition {topic=location, partition=20, offset=-915623761776}, FetchPartition {topic=location, partition=148, offset=-915623761776}, FetchPartition {topic=location, partition=14, offset=-915623761776}, FetchPartition {topic=location, partition=142, offset=-915623761776}, FetchPartition {topic=location, partition=16, offset=-915623761776}, FetchPartition {topic=location, partition=144, offset=-915623761776}, FetchPartition {topic=location, partition=90, offset=-915623761776}, FetchPartition {topic=location, partition=92, offset=-915623761776}, FetchPartition {topic=location, partition=86, offset=-915623761776}, FetchPartition {topic=location, partition=88, offset=-915623761776}, FetchPartition {topic=location, partition=98, offset=-915623761776}, FetchPartition {topic=location, partition=100, offset=-915623761776}, FetchPartition {topic=location, partition=94, offset=-915623761776}, FetchPartition {topic=location, partition=96, offset=-915623761776}, FetchPartition {topic=location, partition=106, offset=-915623761776}, FetchPartition {topic=location, partition=108, offset=-915623761776}, FetchPartition {topic=location, partition=102, offset=-915623761776}, FetchPartition {topic=location, partition=104, offset=-915623761776}, FetchPartition {topic=location, partition=114, offset=-915623761776}, FetchPartition {topic=location, partition=116, offset=-915623761776}, FetchPartition {topic=location, partition=110, offset=-915623761776}, FetchPartition {topic=location, partition=112, offset=-915623761776}]

at org.apache.flink.streaming.connectors.kafka.internals.LegacyFetcher.findLeaderForPartitions(LegacyFetcher.java:323)
at org.apache.flink.streaming.connectors.kafka.internals.LegacyFetcher.run(LegacyFetcher.java:162)
at org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer08.run(FlinkKafkaConsumer08.java:316)
at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:78)
at org.apache.flink.streaming.runtime.tasks.SourceStreamTask.run(SourceStreamTask.java:56)
at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:224)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:559)
at java.lang.Thread.run(Thread.java:745)


Re: Leader not found

Balaji Rajagopalan
Robert,
Sorry, I gave you the information for the wrong topic. Here is the right one.

balajirajagopalan@megatron-server02:/usr/share/kafka_2.11-0.8.2.1/bin$ ./kafka-topics.sh --describe --topic location --zookeeper  (someserver)

Topic:location PartitionCount:150 ReplicationFactor:1 Configs:

Topic: location Partition: 0 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 1 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 2 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 3 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 4 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 5 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 6 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 7 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 8 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 9 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 10 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 11 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 12 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 13 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 14 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 15 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 16 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 17 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 18 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 19 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 20 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 21 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 22 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 23 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 24 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 25 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 26 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 27 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 28 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 29 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 30 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 31 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 32 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 33 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 34 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 35 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 36 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 37 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 38 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 39 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 40 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 41 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 42 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 43 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 44 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 45 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 46 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 47 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 48 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 49 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 50 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 51 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 52 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 53 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 54 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 55 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 56 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 57 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 58 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 59 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 60 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 61 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 62 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 63 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 64 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 65 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 66 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 67 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 68 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 69 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 70 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 71 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 72 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 73 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 74 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 75 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 76 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 77 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 78 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 79 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 80 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 81 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 82 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 83 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 84 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 85 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 86 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 87 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 88 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 89 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 90 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 91 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 92 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 93 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 94 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 95 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 96 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 97 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 98 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 99 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 100 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 101 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 102 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 103 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 104 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 105 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 106 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 107 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 108 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 109 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 110 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 111 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 112 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 113 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 114 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 115 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 116 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 117 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 118 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 119 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 120 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 121 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 122 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 123 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 124 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 125 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 126 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 127 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 128 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 129 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 130 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 131 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 132 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 133 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 134 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 135 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 136 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 137 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 138 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 139 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 140 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 141 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 142 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 143 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 144 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 145 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 146 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 147 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 148 Leader: 0 Replicas: 0 Isr: 0

Topic: location Partition: 149 Leader: 0 Replicas: 0 Isr: 0


Re: Leader not found

rmetzger0
Hi,
sorry for dropping this one. I totally forgot this thread.

Did you find a solution?
One reason this can happen is specifying only one broker in the list of bootstrap servers; could that be the case here?
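
If so, here is a small sketch of how several brokers could be passed to the 0.8 consumer. The broker and ZooKeeper addresses and the group id are placeholders, not your actual setup:

import java.util.Properties;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer08;
import org.apache.flink.streaming.util.serialization.SimpleStringSchema;

Properties props = new Properties();
// Comma-separated list of brokers, so metadata can still be fetched
// if one of them is temporarily unreachable (addresses are placeholders).
props.setProperty("bootstrap.servers", "broker1:9092,broker2:9092,broker3:9092");
props.setProperty("zookeeper.connect", "zk1:2181,zk2:2181,zk3:2181");  // placeholder
props.setProperty("group.id", "my-group");                              // placeholder

FlinkKafkaConsumer08<String> consumer =
    new FlinkKafkaConsumer08<String>("location", new SimpleStringSchema(), props);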
