Hi, latency for Flink and Storm is pretty similar. The only reason I could see for Flink having the slight upper hand there is that Storm tracks the progress of every tuple throughout the topology and requires ACKs that have to go back to the sinks. As for throughput, you are right that Flink sends elements in batches. The size of these batches can be controlled, and even reduced to 1, which yields the best latency. These batches are not visible anywhere in the model, so calling them micro-batches is problematic, since that term already refers to a very different concept in Spark Streaming. Cheers, Aljoscha On Mon, 9 May 2016 at 11:06 <[hidden email]> wrote:
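The knob Aljoscha is describing is the network buffer timeout on Flink's StreamExecutionEnvironment. A minimal sketch of how one might tune it (the toy pipeline and job name are illustrative, not from the thread):

```java
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class BufferTimeoutSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // A timeout of 0 ms flushes the network buffer after every record,
        // trading throughput for the lowest possible latency -- this is the
        // "reduced to 1" behavior mentioned above.
        env.setBufferTimeout(0);

        // -1 would mean "flush only when the buffer is full" (max throughput);
        // any positive value in milliseconds is a middle ground, e.g.:
        // env.setBufferTimeout(5);

        env.fromElements(1, 2, 3)
           .map(x -> x * 2)
           .print();

        env.execute("buffer-timeout sketch");
    }
}
```

Setting the timeout per operator is also possible on the stream itself, but the environment-wide setting is the common starting point.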
Hi Leon! I agree with Aljoscha that the term "micro-batches" is confusing in that context. Flink's network layer is "buffer oriented" rather than "record oriented". Buffering is a best-effort attempt to gather some elements in cases where they arrive fast enough that it would not add much latency anyway. Concerning the latency: chaining has a positive effect on latency. Some of the benchmarks show how Flink needs to communicate less with external systems (like Redis) - that is another source of reduced latency. For very simple programs that have no external communication and no chaining, I would expect Flink and Storm to not differ much in latency. Greetings, Stephan On Wed, May 11, 2016 at 9:24 AM, Aljoscha Krettek <[hidden email]> wrote:
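The chaining Stephan credits with reducing latency fuses consecutive operators into a single task, so records pass between them as local method calls instead of going through serialization and the network stack. A hedged sketch of the relevant API (the operators themselves are made up for illustration):

```java
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class ChainingSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // By default Flink chains these operators into one task, so each
        // record is handed from map to filter to map via direct method
        // calls -- no serialization, no network hop, minimal latency.
        env.fromElements("a", "bb", "ccc")
           .map(String::length)
           .filter(len -> len > 1)
           .map(len -> len * 10)
           // Calling .disableChaining() on an operator would break it out
           // of the chain; env.disableOperatorChaining() turns chaining
           // off globally, which is what the "no chaining" comparison
           // above corresponds to.
           .print();

        env.execute("chaining sketch");
    }
}
```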