Hi,

We're using Kinesis as both the input and the output of a job, and we're hitting a parsing exception while reading from the output stream. All streams contain only 1 shard. While investigating the issue I noticed a weird behaviour: records get a PartitionKey I never assigned, and the record Data is wrapped with seemingly random illegal characters. I wrote a very basic program to try to isolate the problem, but I still see this happening:
To verify the records in the Kinesis stream I use the AWS CLI get-records API and see the following:

.......................

Where did PartitionKey "a" come from? Furthermore, if you Base64-decode the data of the records written with this PartitionKey "a", you can see that they are wrapped with weird illegal characters. For example:

$ echo 84mawgoBMBpsCAAaaDc5LUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUEKGmwIABpoODAtQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQQodBmhDDIwmRVeomHOIGlWJ | base64 --decode

While the records with PartitionKey "0" look good:

$ echo ODEtQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQQo= | base64 --decode

I tried using both the 1.4.2 version and 1.6-SNAPSHOT and still see the issue. Here is a link to the gist: https://gist.github.com/aroch/7fb4219e7ada74f30654f1effe9d2f43

Am I missing anything? Has anyone encountered such an issue?

Would appreciate any help,
Rafi
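P.S. For anyone who doesn't want to open the gist, the job is essentially the following trimmed-down sketch. The class name, stream name, and region are placeholders and the payloads are shortened; the real code is in the gist:

import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kinesis.FlinkKinesisProducer;
import org.apache.flink.streaming.connectors.kinesis.config.AWSConfigConstants;

public class KinesisWriteTest {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Credentials come from the default AWS provider chain; only the
        // region is set explicitly here.
        Properties props = new Properties();
        props.put(AWSConfigConstants.AWS_REGION, "us-east-1");

        FlinkKinesisProducer<String> producer =
                new FlinkKinesisProducer<>(new SimpleStringSchema(), props);
        producer.setDefaultStream("test-output");
        producer.setDefaultPartition("0"); // the only partition key I ever set

        env.fromElements("79-AAAA", "80-AAAA", "81-AAAA") // dummy payloads
           .addSink(producer);

        env.execute("kinesis-write-test");
    }
}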
Hi,
Have you tried writing the same records, with exactly the same configuration, to Kinesis but outside of Flink (with some standalone Java application)?

Piotrek
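P.S. Something along these lines with the plain AWS SDK for Java (v1) would do as such a standalone check. This is a sketch; the stream name, region, and payload are made-up placeholders:

import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

import com.amazonaws.services.kinesis.AmazonKinesis;
import com.amazonaws.services.kinesis.AmazonKinesisClientBuilder;
import com.amazonaws.services.kinesis.model.PutRecordResult;

public class StandalonePutRecord {
    public static void main(String[] args) {
        AmazonKinesis kinesis = AmazonKinesisClientBuilder.standard()
                .withRegion("us-east-1")
                .build();

        // One record, explicit partition key "0", no KPL involved.
        PutRecordResult result = kinesis.putRecord(
                "test-output",
                ByteBuffer.wrap("81-AAAA".getBytes(StandardCharsets.UTF_8)),
                "0");

        // Then compare what `aws kinesis get-records` returns for this
        // record against what the Flink job produced.
        System.out.println("shard=" + result.getShardId()
                + " seq=" + result.getSequenceNumber());
    }
}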
Hi,
Thanks Piotr for your response. I've investigated the issue further and found the root cause. There are two possible ways to produce/consume records to/from Kinesis: through the KPL/KCL libraries, which can aggregate records, or through the plain Kinesis API, which cannot.
The FlinkKinesisProducer uses the AWS KPL to push records into Kinesis, for optimized performance. One of the features of the KPL is aggregation: it batches many UserRecords into one Kinesis record to increase producer throughput. The catch is that consumers of that stream need to be aware that the records they consume are aggregated, and deaggregate them accordingly [1][2]. In my case, the output stream is consumed by Druid, so the consumer code is not under my control. My choices are therefore to disable the aggregation feature by passing aggregationEnable: false in the Kinesis producer configuration, or to write my own custom consumer for Druid. I think we should state this in the documentation for the Flink Kinesis connector.

Thanks,
Rafi
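P.S. The "illegal characters" make sense in this light: the first four bytes of the PartitionKey "a" sample above decode to 0xF3 0x89 0x9A 0xC2, which is the KPL's aggregated-record magic number, followed by the protobuf-encoded batch. Relative to the producer sketch earlier in the thread, disabling aggregation should only need one extra property. Note this is a sketch: the KPL's own property name is "AggregationEnabled", so double-check the exact key against the connector version you run, since older versions forward only a subset of the KPL settings:

Properties producerProps = new Properties();
producerProps.put(AWSConfigConstants.AWS_REGION, "us-east-1");
// The KPL builds its KinesisProducerConfiguration from these properties;
// this turns off aggregation so each UserRecord becomes its own
// Kinesis record, readable without a deaggregating consumer.
producerProps.put("AggregationEnabled", "false");

FlinkKinesisProducer<String> producer =
        new FlinkKinesisProducer<>(new SimpleStringSchema(), producerProps);
producer.setDefaultStream("test-output");
producer.setDefaultPartition("0");

With this change each UserRecord goes out as its own Kinesis record, so a consumer you don't control (like Druid's) can read the Data field directly.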
Hi,
I'm glad that you figured it out. Unfortunately, it's almost impossible to mention in our documentation all of the quirks of the connectors that we use, since it would more or less come down to fully copying their documentation :( However, I have created a small PR that mentions this issue. Please feel free to make further comments/suggestions there.

Thanks,
Piotrek