Hi,
I want to connect my Flink streaming job to Hive. What is currently the best way to do that? Some features seem to be in development, and some really cool ones are described here: https://fr.slideshare.net/BowenLi9/integrating-flink-with-hive-xuefu-zhang-and-bowen-li-seattle-flink-meetup-feb-2019

My first need is to read and update Hive metadata. As for the Hive data itself, I can store it directly in HDFS (in ORC format) as a first step.

Thanks,
David
Hi David,

Check out the Hive-related documentation. A few notes:

- I just merged a PR restructuring the Hive-related docs today, so the changes should show up on the website in a day or so.
- I didn't find a release-1.9-snapshot doc, so just reference the release-1.10-snapshot doc for now. 1.9 RC2 has been released, and the official 1.9 release should be out soon.
- Hive features are in beta in 1.9.

Feel free to open tickets if you have feature requests.

On Fri, Aug 9, 2019 at 8:00 AM David Morin <[hidden email]> wrote:
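For readers of this thread: the Flink 1.9 docs referenced above describe registering a Hive catalog through the SQL Client's YAML configuration, which covers the "read and update Hive metadata" need. Below is a minimal sketch of such a catalog entry; the catalog name, Hive conf directory, and Hive version are placeholders to adjust for your environment.

```yaml
# Hypothetical fragment of the SQL Client's sql-client-defaults.yaml
# (Flink 1.9, where the Hive integration is in beta).
catalogs:
  - name: myhive                    # placeholder catalog name
    type: hive
    hive-conf-dir: /opt/hive-conf   # directory containing hive-site.xml
    hive-version: 2.3.4             # match your Hive installation
```

With a catalog like this registered, tables defined in the Hive Metastore become visible to Flink SQL queries, so metadata is shared between Hive and Flink rather than duplicated.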
Thanks a lot, Bowen. I've started reading these docs. Really helpful: a good description of the Hive integration in Flink and how to use it. I'll continue my development.

See you soon.

On Mon, Aug 12, 2019 at 8:55 PM, Bowen Li <[hidden email]> wrote: