Re: Question about Apache Flink Use Case

Kostas Kloudas
Hi Suma Cherukuri,

I also replied to your question in the dev list, but I repeat the answer here
just in case you missed it.

From what I understand, you have many small files and you want to
aggregate them into bigger ones containing the logs of the last 24h.

As Max said, the RollingSink will give you exactly-once semantics
when writing your aggregated results to your filesystem.
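
For illustration, a minimal sketch with the RollingSink from the
flink-connector-filesystem module; the output path, bucket format, and
batch size below are just example values, not something from this thread:

    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.connectors.fs.DateTimeBucketer;
    import org.apache.flink.streaming.connectors.fs.RollingSink;

    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    DataStream<String> aggregated = env.fromElements("example log line"); // stand-in for your real stream

    RollingSink<String> sink = new RollingSink<>("hdfs:///logs/aggregated");
    sink.setBucketer(new DateTimeBucketer("yyyy-MM-dd")); // one bucket (directory) per day
    sink.setBatchSize(1024L * 1024L * 128L);              // roll to a new part file at ~128 MB
    aggregated.addSink(sink);

    env.execute("rolling sink sketch");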

As far as reading your input is concerned, Flink recently
integrated functionality to periodically monitor a directory, e.g. your
log directory, and process only the new files as they appear.

This will be part of the 1.1 release, which should come out this week
or next, but you can always find it on the master branch.

The method that you need is:

readFile(FileInputFormat<OUT> inputFormat,
         String filePath,
         FileProcessingMode watchType,
         long interval,
         FilePathFilter filter,
         TypeInformation<OUT> typeInformation)

which allows you to specify the FileProcessingMode (which you should set to
FileProcessingMode.PROCESS_CONTINUOUSLY) and the “interval” at which
Flink is going to monitor the directory (path) for new files.
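
To make that concrete, here is a minimal sketch assuming the 1.1-era
streaming API; the S3 path and the 60-second scan interval are just
example values:

    import org.apache.flink.api.common.typeinfo.BasicTypeInfo;
    import org.apache.flink.api.java.io.TextInputFormat;
    import org.apache.flink.core.fs.Path;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.api.functions.source.FilePathFilter;
    import org.apache.flink.streaming.api.functions.source.FileProcessingMode;

    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

    String logDir = "s3:///my-log-bucket/";               // hypothetical input directory
    TextInputFormat format = new TextInputFormat(new Path(logDir));

    DataStream<String> lines = env.readFile(
            format,
            logDir,
            FileProcessingMode.PROCESS_CONTINUOUSLY,      // keep watching for new files
            60 * 1000L,                                   // re-scan the directory every 60 s
            FilePathFilter.createDefaultFilter(),         // skips "."-prefixed files
            BasicTypeInfo.STRING_TYPE_INFO);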

In addition, you can find some helper methods in the StreamExecutionEnvironment
class that let you omit some of these parameters.
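
For example, if I remember the overloads correctly, you can drop the
TypeInformation argument and let Flink derive it from the input format
(reusing the names from the sketch above):

    DataStream<String> lines = env.readFile(
            format, logDir,
            FileProcessingMode.PROCESS_CONTINUOUSLY, 60 * 1000L,
            FilePathFilter.createDefaultFilter());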

I believe that with the above two features (the RollingSink and the continuous monitoring source)
Flink can be the tool for your job, as both of them also provide exactly-once guarantees.

I hope this helps.

Let us know what you think,
Kostas

> On Jul 22, 2016, at 9:03 PM, Suma Cherukuri <[hidden email]> wrote:
>
> Hi,
>
> Good Afternoon!
>
> I work as an engineer at Symantec. My team works on a multi-tenant event processing system. As high-level background: our customers write data to Kafka brokers through agents like Logstash, and we process the events and save the log data in Elasticsearch and S3.
>
> Use Case: We have a use case wherein we write batches of events to S3 when a file size limit of 1 MB (specific to our case) or a certain time threshold is reached. We are planning on merging the files in a given folder into one single file based on a time limit, such as every 24 hours.
>
> We were considering various options available today and would like to know if Apache Flink can be used to serve the purpose.
>
> Looking forward to hearing from you.
>
> Thank you
> Suma Cherukuri
>