I am trying to sink data to Hive via Confluent Kafka -> Flink -> Hive using the following code snippet, but I am getting the following error:
I checked the hive-jdbc driver, and it seems that the method in question is not supported by it.
Is there any way we can achieve this using the JDBC driver? Let me know. Thanks in advance. |
Don’t use the JDBC driver to write to Hive. JDBC performance for large volumes is generally poor. Instead, write the data to a file in HDFS in a format supported by Hive, and point the Hive table definition to it.
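For illustration, the external-table approach could look like the DDL below; the table name, columns, and HDFS path are all hypothetical placeholders:

```sql
-- Hypothetical external table: Hive reads whatever ORC files
-- the Flink job drops into the LOCATION directory.
CREATE EXTERNAL TABLE events (
  id BIGINT,
  payload STRING
)
STORED AS ORC
LOCATION 'hdfs:///data/events';
```

Because the table is EXTERNAL, Hive does not manage the files; new ORC files written by the streaming job become queryable as soon as they appear under the location (per-partition, a `MSCK REPAIR TABLE` or `ADD PARTITION` may be needed for partitioned tables).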
|
Thanks. We are getting data in Avro format from Kafka and are planning to write it in ORC format to Hive tables.
1. Is BucketingSink the better option for this use case, or something else?
2. Is there a sample code example we can refer to?
Thanks in advance,
Regards,
SAGAR.
On Sun, Jun 10, 2018 at 10:49 PM, Jörn Franke <[hidden email]> wrote: |
Yes, BucketingSink is the better option. You can start by looking at the BucketingSink Javadocs.
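To make the bucketing idea concrete: BucketingSink's default DateTimeBucketer routes each record into a time-derived subdirectory of a base path. The sketch below reproduces just that path-derivation logic with the JDK standard library (the base path and `yyyy-MM-dd--HH` pattern mirror the defaults, but treat this as an illustrative assumption, not Flink's actual code):

```java
import java.time.ZoneOffset;
import java.time.ZonedDateTime;
import java.time.format.DateTimeFormatter;

public class BucketPath {
    // Derives a bucket directory the way DateTimeBucketer does:
    // basePath + "/" + formatted timestamp (hypothetical re-implementation).
    static String bucketFor(String basePath, ZonedDateTime ts) {
        DateTimeFormatter fmt = DateTimeFormatter.ofPattern("yyyy-MM-dd--HH");
        return basePath + "/" + fmt.format(ts);
    }

    public static void main(String[] args) {
        ZonedDateTime ts = ZonedDateTime.of(2018, 6, 10, 22, 49, 0, 0, ZoneOffset.UTC);
        // All records processed in this hour land in the same directory,
        // which can then back a (partitioned) Hive external table.
        System.out.println(bucketFor("hdfs:///data/events", ts));
    }
}
```

In an actual Flink job you would construct a `BucketingSink`, set a `Bucketer` and a `Writer` for your ORC output, and attach it with `stream.addSink(...)`; see the BucketingSink Javadocs for the exact wiring.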
Please also take a look at this: Alternatively, if you do not need to push a lot of data, you could write your own JDBC sink based on the JDBCAppendTableSink, adjusting it so that it works with Hive’s JDBC client.
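A minimal sketch of what the append logic inside such a custom JDBC sink might do, using only the standard `java.sql` interfaces (the table name, columns, and the decision to execute row-by-row are assumptions; Hive's JDBC driver does not support every JDBC method, so batching calls in particular may need to be avoided):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class HiveJdbcAppend {
    // Builds a parameterized INSERT statement for the given table and columns.
    static String buildInsert(String table, String[] cols) {
        StringBuilder sb = new StringBuilder("INSERT INTO ")
                .append(table).append(" (")
                .append(String.join(", ", cols)).append(") VALUES (");
        for (int i = 0; i < cols.length; i++) {
            sb.append(i == 0 ? "?" : ", ?");
        }
        return sb.append(")").toString();
    }

    // Appends rows through a plain JDBC connection, one execute per row,
    // since some batch-oriented JDBC methods are unsupported by hive-jdbc.
    static void appendRows(Connection conn, String table, String[] cols,
                           Object[][] rows) throws SQLException {
        String sql = buildInsert(table, cols);
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            for (Object[] row : rows) {
                for (int i = 0; i < row.length; i++) {
                    ps.setObject(i + 1, row[i]);
                }
                ps.executeUpdate();
            }
        }
    }

    public static void main(String[] args) {
        System.out.println(buildInsert("events", new String[]{"id", "payload"}));
    }
}
```

In a real Flink sink this logic would live inside a `RichSinkFunction` (open the connection in `open()`, write in `invoke()`, close in `close()`); as noted above, this path only makes sense for low data volumes.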
Piotrek
|