Hello,
We're running a streaming application that reads data from Kafka (Flink on
YARN, Flink version 1.6.1, Kafka cluster version 0.10.1).
While troubleshooting a performance issue, we noticed a large number of
KafkaExceptions (around 3,500 per minute) with the following stack trace
(captured with Java Flight Recorder):
java.lang.Exception.<init>()
java.io.IOException.<init>()
java.io.EOFException.<init>()
java.io.DataInputStream.readFully(byte[], int, int)
java.io.DataInputStream.readLong()
org.apache.kafka.common.record.RecordsIterator$DataLogInputStream.nextEntry()
org.apache.kafka.common.record.RecordsIterator$DeepRecordsIterator.<init>(LogEntry, boolean, int)
org.apache.kafka.common.record.RecordsIterator.makeNext()
org.apache.kafka.common.record.RecordsIterator.makeNext()
org.apache.kafka.common.utils.AbstractIterator.maybeComputeNext()
org.apache.kafka.common.utils.AbstractIterator.hasNext()
org.apache.kafka.clients.consumer.internals.Fetcher.parseCompletedFetch(Fetcher$CompletedFetch)
org.apache.kafka.clients.consumer.internals.Fetcher.fetchedRecords()
org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(long)
org.apache.kafka.clients.consumer.KafkaConsumer.poll(long)
org.apache.flink.streaming.connectors.kafka.internal.KafkaConsumerThread.run()
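For reference, the consumer is wired up through the standard Flink Kafka
connector. A minimal sketch of the setup (broker addresses, group id, topic
name and schema below are placeholders, not our actual configuration):

import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer010;

public class KafkaReader {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Placeholder connection settings -- not our real brokers / group id
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "broker1:9092");
        props.setProperty("group.id", "my-consumer-group");

        // 0.10 connector matching the 0.10.1 cluster; topic and schema are placeholders
        FlinkKafkaConsumer010<String> consumer =
                new FlinkKafkaConsumer010<>("my-topic", new SimpleStringSchema(), props);

        env.addSource(consumer)
           .print(); // the real job does more processing here

        env.execute("kafka-reader");
    }
}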
However, the TaskManagers didn't log any of these exceptions (neither via the
rootLogger nor on stdout).
How can we get these exceptions, with their full stack traces, logged by the
TaskManagers?
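Would raising the log level for the Kafka client packages in the
TaskManagers' log4j.properties be the right approach, e.g. something along
these lines, or are these exceptions handled internally by the consumer
without ever reaching a logger?

log4j.logger.org.apache.kafka=DEBUG
log4j.logger.org.apache.kafka.clients.consumer.internals.Fetcher=TRACE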
Regards,
Amine