Our experience has been that even when the Kafka cluster is healthy, JVM resource contention in our Flink app, caused by high heap utilization and CPU cycles lost to GC, produced this same issue. Collecting basic JVM metrics such as CPU load, GC times, and heap utilization from your app (we use the Graphite reporter) may help you confirm whether you are hitting the same problem.
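For reference, a minimal sketch of enabling Flink's Graphite reporter in flink-conf.yaml (the hostname, port, and reporter name here are placeholders; check the metrics documentation for your Flink version, and note the reporter jar must be on the classpath):

```yaml
# flink-conf.yaml (sketch): report JVM metrics (CPU, GC, heap) to Graphite.
# "grph" is an arbitrary reporter name; host/port are placeholders.
metrics.reporter.grph.class: org.apache.flink.metrics.graphite.GraphiteReporter
metrics.reporter.grph.host: graphite.example.com
metrics.reporter.grph.port: 2003
metrics.reporter.grph.protocol: TCP
metrics.reporter.grph.interval: 60 SECONDS
```

With this in place, the per-TaskManager Status.JVM.* metric groups (heap usage, GC time) show up in Graphite, which makes it easy to correlate GC pauses with the timeout errors.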
- Ashish
On Thursday, July 5, 2018, 11:46 AM, Ted Yu <[hidden email]> wrote:
Have you tried increasing the request.timeout.ms parameter (Kafka)?
Which Flink / Kafka release are you using?
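If the broker appends are merely slow rather than failing outright, raising the producer timeout can help. A minimal sketch of the producer properties that would be handed to the Flink Kafka producer (the broker address and the values are illustrative placeholders, not tuned recommendations):

```java
import java.util.Properties;

public class ProducerTimeoutConfig {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "broker1:9092"); // placeholder address
        // How long the producer waits before expiring batched records --
        // the timeout behind the "Expiring 1 record(s)" TimeoutException.
        props.setProperty("request.timeout.ms", "60000");
        // Retries give transient broker slowness a chance to recover.
        props.setProperty("retries", "3");
        // These props would then be passed to the FlinkKafkaProducer constructor.
        System.out.println(props.getProperty("request.timeout.ms"));
    }
}
```

Note that in older Kafka client versions (pre-2.1) request.timeout.ms also bounds how long records may sit in the accumulator before being expired, which is why it is the first knob to try for this particular error.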
Cheers
On Thu, Jul 5, 2018 at 5:39 AM Amol S - iProgrammer <
[hidden email]> wrote:
Hello,
I am using flink with kafka and getting below exception.
org.apache.kafka.common.errors.TimeoutException: Expiring 1 record(s) for
helloworld.t-7: 30525 ms has passed since last append
-----------------------------------------------
*Amol Suryawanshi*
Java Developer
[hidden email]
*iProgrammer Solutions Pvt. Ltd.*
*Office 103, 104, 1st Floor Pride Portal,Shivaji Housing Society,
Bahiratwadi,Near Hotel JW Marriott, Off Senapati Bapat Road, Pune - 411016,
MH, INDIA.* *Phone: +91 9689077510 | Skype: amols_iprogrammer*
www.iprogrammer.com <[hidden email]>
------------------------------------------------