Hi,
I'm running Flink 1.9.0 and I'm trying to set the key to be published by the Table API's Kafka Connector. I've searched the documentation but could find no reference to such an ability. Additionally, while browsing the code of KafkaTableSink, it looks like it creates a KeyedSerializationSchemaWrapper which just sets the key to null? Would love some help. Best Regards, Yuval Itzchakov.
Hi Yuval, it looks as if the KafkaTableSink only supports writing out rows without a key. Pulling in Timo for verification. If you want to use a Kafka producer which writes the records out with a key, then please take a look at KafkaSerializationSchema. It supports this functionality. Cheers, Till On Wed, Aug 19, 2020 at 6:36 PM Yuval Itzchakov <[hidden email]> wrote:
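Till's suggestion can be sketched as follows, assuming the record is a (key, value) pair of strings and the topic name is known up front; the class name `KeyedPairSchema` and the string-pair element type are illustrative assumptions, not anything from the thread. This targets the "universal" Kafka connector's `KafkaSerializationSchema` available since Flink 1.9:

```java
// Sketch: a KafkaSerializationSchema that writes keyed records.
// The (key, value) string pairing and topic handling are assumptions.
import java.nio.charset.StandardCharsets;

import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.connectors.kafka.KafkaSerializationSchema;
import org.apache.kafka.clients.producer.ProducerRecord;

public class KeyedPairSchema implements KafkaSerializationSchema<Tuple2<String, String>> {

    private final String topic;

    public KeyedPairSchema(String topic) {
        this.topic = topic;
    }

    @Override
    public ProducerRecord<byte[], byte[]> serialize(Tuple2<String, String> element, Long timestamp) {
        // f0 becomes the Kafka key, so records with the same key land in the
        // same partition and keep their relative order.
        byte[] key = element.f0.getBytes(StandardCharsets.UTF_8);
        byte[] value = element.f1.getBytes(StandardCharsets.UTF_8);
        return new ProducerRecord<>(topic, key, value);
    }
}
```

It plugs into the producer sink directly, e.g. `stream.addSink(new FlinkKafkaProducer<>("my-topic", new KeyedPairSchema("my-topic"), props, FlinkKafkaProducer.Semantic.AT_LEAST_ONCE));` (topic name and semantic are placeholders).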
Hi Yuval, unfortunately setting the key or timestamp (or other metadata) from the SQL API is not supported yet. There is an ongoing discussion to support it [1]. Right now your option would be to change the code of KafkaTableSink and write your own version of KafkaSerializationSchema, as Till mentioned. Best, Dawid
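Until that discussion lands, one possible shape for the workaround (a sketch under assumptions, not code from KafkaTableSink) is to drop down to the DataStream API: convert the Table to an append stream and attach a FlinkKafkaProducer with a key-aware serialization schema. The field positions (0 = key, 1 = value), the topic name, and the broker address below are all assumptions:

```java
// Sketch of the DataStream-level workaround: Table -> DataStream<Row> ->
// keyed Kafka records. Field indices, topic, and brokers are placeholders.
import java.nio.charset.StandardCharsets;
import java.util.Properties;

import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer;
import org.apache.flink.streaming.connectors.kafka.KafkaSerializationSchema;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.java.StreamTableEnvironment;
import org.apache.flink.types.Row;
import org.apache.kafka.clients.producer.ProducerRecord;

public class KeyedTableToKafka {

    public static void sinkWithKey(StreamTableEnvironment tEnv, Table table) {
        // Assumes the table only appends rows; use a retract stream otherwise.
        DataStream<Row> rows = tEnv.toAppendStream(table, Row.class);

        // Assumes field 0 is the key column and field 1 the payload column.
        DataStream<Tuple2<String, String>> pairs = rows
            .map(row -> Tuple2.of((String) row.getField(0), (String) row.getField(1)))
            .returns(Types.TUPLE(Types.STRING, Types.STRING));

        // Inline key-aware schema: f0 becomes the Kafka record key.
        KafkaSerializationSchema<Tuple2<String, String>> schema = (element, timestamp) ->
            new ProducerRecord<>(
                "my-topic",
                element.f0.getBytes(StandardCharsets.UTF_8),
                element.f1.getBytes(StandardCharsets.UTF_8));

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092");

        pairs.addSink(new FlinkKafkaProducer<>(
            "my-topic", schema, props, FlinkKafkaProducer.Semantic.AT_LEAST_ONCE));
    }
}
```

This sidesteps KafkaTableSink entirely, at the cost of leaving the Table API for the final sink step.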
On 20/08/2020 09:26, Till Rohrmann wrote:
Hi Till, KafkaSerializationSchema is only pluggable in the DataStream API, not in the Table API. KafkaTableSink hard-codes a KeyedSerializationSchema that uses a null key, and this behavior can't be overridden. I have to say I was quite surprised by this, as publishing events to Kafka with a key to preserve ordering within a partition is a very common requirement. On Thu, Aug 20, 2020 at 10:26 AM Till Rohrmann <[hidden email]> wrote:
Best Regards, Yuval Itzchakov.
This is indeed not optimal. Could you file a JIRA issue to add this functionality? Thanks a lot, Yuval. Cheers, Till On Thu, Aug 20, 2020 at 9:47 AM Yuval Itzchakov <[hidden email]> wrote: