If there is a CassandraSource for Hadoop, you can also use that with the HadoopInputFormatWrapper.
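For Cassandra's CQL-based Hadoop input format, that would look roughly like the sketch below. It is untested and only meant as a starting point: the Cassandra-side pieces (CqlInputFormat, ConfigHelper, the contact point, keyspace and table names) are from memory and just placeholders, so please check them against your Cassandra version.

import com.datastax.driver.core.Row;
import org.apache.cassandra.hadoop.ConfigHelper;
import org.apache.cassandra.hadoop.cql3.CqlInputFormat;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.hadoop.mapreduce.HadoopInputFormat;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.hadoop.mapreduce.Job;

public class CassandraHadoopRead {

    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        // Cassandra's Hadoop input format is configured through the Hadoop Job configuration
        Job job = Job.getInstance();
        ConfigHelper.setInputInitialAddress(job.getConfiguration(), "127.0.0.1");
        ConfigHelper.setInputColumnFamily(job.getConfiguration(), "my_keyspace", "my_table");
        ConfigHelper.setInputPartitioner(job.getConfiguration(), "Murmur3Partitioner");

        // Wrap the Hadoop InputFormat so that Flink can run it as a data source
        HadoopInputFormat<Long, Row> hadoopIF =
                new HadoopInputFormat<>(new CqlInputFormat(), Long.class, Row.class, job);

        // The wrapper exposes the Hadoop key/value pairs as Tuple2 records
        DataSet<Tuple2<Long, Row>> rows = env.createInput(hadoopIF);
        rows.print();
    }
}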
If you want to implement a Flink-specific source, extending InputFormat is the right way to go. A user has started implementing a Cassandra sink in this fork (you may be able to reuse some of the code or testing infrastructure): https://github.com/rzvoncek/flink/tree/zvo/cassandraSink
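If you go the InputFormat route, a minimal skeleton could look like the following. This is only a sketch: the class name, contact point, and query are made up, it reads everything in a single (non-parallel) split, and a proper implementation would create one split per token range. It talks to Cassandra directly through the DataStax Java driver.

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;
import org.apache.flink.api.common.io.DefaultInputSplitAssigner;
import org.apache.flink.api.common.io.InputFormat;
import org.apache.flink.api.common.io.NonParallelInput;
import org.apache.flink.api.common.io.statistics.BaseStatistics;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.core.io.GenericInputSplit;
import org.apache.flink.core.io.InputSplitAssigner;

import java.io.IOException;
import java.util.Iterator;

/** Illustrative, non-parallel Cassandra source that streams the rows of one CQL query. */
public class CassandraInputFormat implements InputFormat<Row, GenericInputSplit>, NonParallelInput {

    private final String contactPoint;
    private final String query;

    private transient Cluster cluster;
    private transient Session session;
    private transient Iterator<Row> rows;

    public CassandraInputFormat(String contactPoint, String query) {
        this.contactPoint = contactPoint;
        this.query = query;
    }

    @Override
    public void configure(Configuration parameters) {
        // nothing to read from the Flink configuration in this sketch
    }

    @Override
    public BaseStatistics getStatistics(BaseStatistics cachedStatistics) {
        return cachedStatistics; // no statistics available
    }

    @Override
    public GenericInputSplit[] createInputSplits(int minNumSplits) {
        // one split for the whole query; a real source would split by token range
        return new GenericInputSplit[] { new GenericInputSplit(0, 1) };
    }

    @Override
    public InputSplitAssigner getInputSplitAssigner(GenericInputSplit[] splits) {
        return new DefaultInputSplitAssigner(splits);
    }

    @Override
    public void open(GenericInputSplit split) throws IOException {
        cluster = Cluster.builder().addContactPoint(contactPoint).build();
        session = cluster.connect();
        ResultSet resultSet = session.execute(query);
        rows = resultSet.iterator();
    }

    @Override
    public boolean reachedEnd() {
        return !rows.hasNext();
    }

    @Override
    public Row nextRecord(Row reuse) {
        return rows.next();
    }

    @Override
    public void close() throws IOException {
        if (session != null) { session.close(); }
        if (cluster != null) { cluster.close(); }
    }
}

You would then use it via env.createInput(new CassandraInputFormat("127.0.0.1", "SELECT * FROM my_keyspace.my_table")). Note that the driver's Row is not a Flink-native type, so it will be handled as a generic type.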
Greetings,
Stephan
On Thu, Jul 2, 2015 at 11:34 AM, tambunanw <[hidden email]> wrote:
Hi All,
I want to know if there's a custom data source available for Cassandra.
From my observation, it seems that we need to implement one by extending
InputFormat. Is there any guide on how to do this robustly?