Hi,

When using the Elasticsearch connector, is there a way to pick up an IP change of the Elasticsearch cluster? We point the data sink at the cluster's DNS name, e.g. elasticsearch-dev.foo.de. However, when we replace the old Elasticsearch cluster with a new one, the connector can no longer write to the new cluster because the IP behind the name has changed. This feature is important for us because it would spare us restarting the Flink job. The likely cause is that the Flink elasticsearch2 connector resolves the IP from DNS only once. One possible fix: when a write to Elasticsearch reports failure, have the Flink environment create a new data sink. We use the Flink elasticsearch2 connector (for Elasticsearch 2.x) on AWS.

Best,
Sendoh
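A minimal sketch of the suspected root cause: if the sink turns the hostname into an `InetSocketAddress` once at startup, the resolved IP is frozen for the life of the job; re-resolving on every (re)connect picks up the new cluster. The `resolve` helper below is hypothetical, not part of the Flink API.

```java
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.UnknownHostException;

public class Resolver {
    // Hypothetical helper: re-resolve the Elasticsearch hostname on every
    // (re)connect instead of caching an InetSocketAddress built at startup.
    public static InetSocketAddress resolve(String host, int port)
            throws UnknownHostException {
        // InetAddress.getByName performs a lookup (subject to JVM DNS
        // caching, see networkaddress.cache.ttl); building the address
        // fresh here rather than once at sink construction means an IP
        // change behind the DNS name is picked up on the next connect.
        return new InetSocketAddress(InetAddress.getByName(host), port);
    }

    public static void main(String[] args) throws Exception {
        InetSocketAddress addr = resolve("localhost", 9300);
        System.out.println(addr); // a resolved address, never "unresolved"
    }
}
```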
Yes, it looks like the connector creates the connection only once when it starts and fails if the host is no longer reachable. It should be possible to catch that failure and try to re-open the connection. I opened a JIRA for this issue (FLINK-3857). Would you like to implement the improvement?

2016-05-02 9:38 GMT+02:00 Sendoh <[hidden email]>:
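The catch-and-reconnect idea above could be sketched as follows. This is a generic illustration under stated assumptions, not the actual Flink `ElasticsearchSink` implementation or the FLINK-3857 patch: `Client` and its factory are stand-ins, and the factory is assumed to re-resolve DNS each time it builds a client.

```java
import java.io.IOException;
import java.util.function.Supplier;

// Hypothetical sketch: on a failed write, discard the stale client and
// rebuild it from a factory (which re-resolves the hostname), then retry
// the write once before giving up.
public class ReconnectingWriter {

    public interface Client {
        void write(String doc) throws IOException;
        void close();
    }

    private final Supplier<Client> factory; // assumed to re-resolve DNS
    private Client client;

    public ReconnectingWriter(Supplier<Client> factory) {
        this.factory = factory;
        this.client = factory.get();
    }

    public void write(String doc) throws IOException {
        try {
            client.write(doc);
        } catch (IOException e) {
            // First failure: assume the cluster moved. Rebuild the
            // connection once and retry; a second failure propagates.
            client.close();
            client = factory.get();
            client.write(doc);
        }
    }
}
```

A real fix inside the connector would hook this into the bulk-response callback rather than wrapping every write, but the retry-after-rebuild shape is the same.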
Glad to see it's developing. Thank you.

May I ask whether the same feature (reconnecting) would also be useful for the Kafka connector? For example, if the IP of a broker changes.

Best,
Sendoh
I'm not that familiar with the Kafka connector. Can you post your suggestion to the user or dev mailing list?

Thanks, Fabian

2016-05-04 16:53 GMT+02:00 Sendoh <[hidden email]>:
Sorry, I confused the mail threads. We're already on the user list :-) Thanks for the suggestion.

2016-05-04 17:35 GMT+02:00 Fabian Hueske <[hidden email]>:
Hi, the Kafka connector is able to handle leader changes. On Wed, May 4, 2016 at 5:46 PM, Fabian Hueske <[hidden email]> wrote: