Re: data enrichment with SQL use case

Posted by miki haiat on
URL: http://deprecated-apache-flink-user-mailing-list-archive.369.s1.nabble.com/data-enrichment-with-SQL-use-case-tp19520p19555.html

Hi, thanks for the reply. I'll try to break your reply down into the flow execution order.

The first data stream will use AsyncIO to select from the table,
the second stream will be Kafka, and then I can join the streams and map the result?

If that's the case, will I select the table only once, on load?
How can I make sure that my stream table is "fresh"?

I'm also wondering: is there a way to use the Flink state backend (RocksDB) and create a read/write-through
mechanism?
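Something like this read/write-through idea, sketched with a plain map and a TTL (the loader, TTL, and clock are made-up stand-ins for a JDBC lookup and wall-clock time; in Flink, keyed RocksDB state could play the role of the cache):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;
import java.util.function.LongSupplier;

// Read-through cache with a time-to-live: an entry younger than ttlMillis is
// served from the cache, anything older is re-loaded from the source (the DB).
public class TtlCache<K, V> {
    private static class Entry<V> {
        final V value; final long loadedAt;
        Entry(V value, long loadedAt) { this.value = value; this.loadedAt = loadedAt; }
    }

    private final Map<K, Entry<V>> cache = new HashMap<>();
    private final Function<K, V> loader;   // stand-in for a JDBC lookup
    private final long ttlMillis;
    private final LongSupplier clock;      // injectable clock, for testing

    public TtlCache(Function<K, V> loader, long ttlMillis, LongSupplier clock) {
        this.loader = loader; this.ttlMillis = ttlMillis; this.clock = clock;
    }

    public V get(K key) {
        long now = clock.getAsLong();
        Entry<V> e = cache.get(key);
        if (e == null || now - e.loadedAt >= ttlMillis) {
            e = new Entry<>(loader.apply(key), now);   // refresh from the source
            cache.put(key, e);
        }
        return e.value;
    }
}
```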

Thanks 

miki



On Mon, Apr 16, 2018 at 2:45 AM, Ken Krugler <[hidden email]> wrote:
If the SQL data is all (or mostly all) needed to join against the data from Kafka, then I might try a regular join.

Otherwise it sounds like you want to use an AsyncFunction to do ad hoc queries (in parallel) against your SQL DB.
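Roughly, the async part can be sketched with plain CompletableFutures (the in-memory map and key names here are hypothetical stand-ins for a real JDBC query; in Flink this logic would sit inside an AsyncFunction, wired up via AsyncDataStream):

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;

// Sketch: enrich each incoming key with a value fetched asynchronously,
// mirroring what an AsyncFunction would do per record — many lookups
// can be in flight at once instead of blocking per record.
public class AsyncEnricher {
    // Stand-in for the SQL table; in practice this would be a JDBC query.
    private static final Map<String, String> DB =
            Map.of("user-1", "gold", "user-2", "silver");

    public static CompletableFuture<String> enrich(String key) {
        return CompletableFuture.supplyAsync(
                () -> key + ":" + DB.getOrDefault(key, "unknown"));
    }
}
```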


— Ken


On Apr 15, 2018, at 12:15 PM, miki haiat <[hidden email]> wrote:

Hi,

I have a metadata (MD) enrichment use case, and I'm wondering whether my approach is correct:
  1. Input stream from Kafka.
  2. MD in MS SQL.
  3. Map to a new POJO.
I need to extract a key from the Kafka stream and use it to select some values from the SQL table.

So I thought to use the Table/SQL API to select the MD table,
then convert the Kafka stream to a table and join the data by the stream key.

At the end I need to map the joined data to a new POJO and send it to Elasticsearch.
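The per-record join-and-map step I have in mind looks roughly like this (the POJO and its field names are just placeholders):

```java
import java.util.Map;

// Sketch of the enrichment step: take the key extracted from the Kafka
// event, look it up in the metadata table, and build the output POJO.
public class EnrichmentMapper {
    // Hypothetical output POJO to be sent on to Elasticsearch.
    public static class Enriched {
        public final String key; public final String payload; public final String metadata;
        public Enriched(String key, String payload, String metadata) {
            this.key = key; this.payload = payload; this.metadata = metadata;
        }
    }

    // Join one event against the metadata by key; in Flink this would be the
    // body of a MapFunction (or the projection after a table join).
    public static Enriched join(String key, String payload, Map<String, String> metadata) {
        return new Enriched(key, payload, metadata.getOrDefault(key, "n/a"));
    }
}
```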

Any suggestions or different ways to solve this use case?

thanks,
Miki  




--------------------------
Ken Krugler
custom big data solutions & training
Hadoop, Cascading, Cassandra & Solr