Re: Best pattern for achieving stream enrichment (side-input) from a large static source
Posted by Ken Krugler
URL: http://deprecated-apache-flink-user-mailing-list-archive.369.s1.nabble.com/Best-pattern-for-achieving-stream-enrichment-side-input-from-a-large-static-source-tp25771p25775.html
Hi Nimrod,
One approach is as follows…
1. Key the Kafka stream and the Parquet (enrichment) stream by the same enrichment key, and connect them.
2. Then use a CoMapFunction to “merge” the two connected streams (Kafka and Parquet), and output something like Tuple2<key, Either<Parquet, Kafka>>.
3. In your enrichment function, depending on what’s in the Either<>, either update your enrichment state or process the record from Kafka.
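A dependency-free sketch of the merge in step 2 (in real Flink this would be a CoMapFunction<KafkaRecord, ParquetRecord, Tuple2<K, Either<ParquetRecord, KafkaRecord>>> on the connected streams, using org.apache.flink.types.Either; the Either stand-in and String record types below are illustrative assumptions):

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.Map;

// Stand-in for Flink's org.apache.flink.types.Either: tags a value as
// coming from the Parquet (enrichment) side or the Kafka (event) side.
final class Either<L, R> {
    final L left;   // Parquet enrichment record, or null
    final R right;  // Kafka event record, or null

    private Either(L left, R right) { this.left = left; this.right = right; }
    static <L, R> Either<L, R> left(L l)  { return new Either<>(l, null); }
    static <L, R> Either<L, R> right(R r) { return new Either<>(null, r); }
    boolean isLeft() { return left != null; }
}

// Models the two callbacks of a CoMapFunction on the connected, keyed
// streams: both sides get mapped into a single tagged output type, so
// one downstream operator can consume records from either source.
class MergeFunction {
    // map1: called for each Parquet (enrichment) record
    Map.Entry<String, Either<String, String>> map1(String key, String parquetRecord) {
        return new SimpleEntry<>(key, Either.left(parquetRecord));
    }

    // map2: called for each Kafka (event) record
    Map.Entry<String, Either<String, String>> map2(String key, String kafkaRecord) {
        return new SimpleEntry<>(key, Either.right(kafkaRecord));
    }
}
```

Because both branches share one output type, the downstream enrichment function can be a single keyed operator that switches on which side of the Either<> is populated.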
But I think you might want to add a stateful function in the Parquet stream, so that you can separately track Kafka record state in the enrichment function.
There’s also the potential issue of wanting to buffer Kafka data in the enrichment function, if you need to coordinate with the enrichment data (e.g. you need to get a complete set of updated enrichment data before applying any of it to the incoming Kafka data).
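The buffering idea above can be sketched as follows. In real Flink this would be a KeyedCoProcessFunction whose per-key state lives in ValueState/ListState so it is checkpointed; the plain maps, method names, and String records here are illustrative assumptions, kept only to show the control flow:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Queue;

// Models the per-key state of an enrichment operator that buffers
// Kafka events until enrichment data for their key has arrived.
class BufferingEnricher {
    private final Map<String, String> enrichment = new HashMap<>();      // key -> enrichment record
    private final Map<String, Queue<String>> buffered = new HashMap<>(); // key -> Kafka events awaiting enrichment

    // Called for each enrichment (Parquet-side) record: store it, then
    // flush any Kafka events that were waiting for this key.
    List<String> onEnrichment(String key, String record) {
        enrichment.put(key, record);
        List<String> out = new ArrayList<>();
        Queue<String> waiting = buffered.remove(key);
        if (waiting != null) {
            for (String event : waiting) {
                out.add(event + "+" + record); // "+" stands in for real enrichment logic
            }
        }
        return out;
    }

    // Called for each Kafka event: emit immediately if enrichment for
    // this key is present, otherwise buffer until it arrives.
    List<String> onEvent(String key, String event) {
        String record = enrichment.get(key);
        if (record != null) {
            return List.of(event + "+" + record);
        }
        buffered.computeIfAbsent(key, k -> new ArrayDeque<>()).add(event);
        return List.of();
    }
}
```

If you need a *complete* updated enrichment set before applying any of it, you would additionally hold flushed output until a marker (e.g. an end-of-snapshot record from the Parquet side) is seen, rather than flushing per key as above.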
— Ken
Hello,
We're using Flink on a high velocity data-stream, and we're looking for the best way to enrich our stream using a large static source (originating from Parquet files, which are rarely updated).
The source for the enrichment weighs a few GB, which is why we want to avoid techniques such as broadcast streams, which cannot be keyed and need to be duplicated for every Flink operator that uses them.
We started looking into the possibility of merging streams with datasets, or using the Table API, but any best-practice that's been done before will be greatly appreciated.
I'm attaching a simple chart for convenience,
Thank you very much,
Nimrod.
<flink enrichment flow.png>