[DISCUSS] Flink 1.6 features

Re: [DISCUSS] Flink 1.6 features

zhangminglei
Hi, Sagar

Thank you for your review. I will fix it as soon as I can.

2. Will you be able to add more unit tests to the commit? E.g., writing some example data with a simple schema that initializes an OrcWriter object and sinks it to a local HDFS node?

Ans: Yes. I will add more unit tests there.

3. Are there plans to add support for other data types?

Ans: Yes. I have been busy these days, but in a couple of days I will add the remaining data types, along with more tests for them.

Cheers
Zhangminglei


On Jun 19, 2018, at 9:10 AM, sagar loke <[hidden email]> wrote:

Thanks @zhangminglei for replying. 

I agree, hive on Flink would be a big project. 

By the way, i looked at the Jira ticket related to ORC format which you shared. 

A couple of comments/requests about the pull request in the ticket:

1. Sorry for nitpicking, but meatSchema is misspelled. I think it should be metaSchema.

2. Will you be able to add more unit tests to the commit? E.g., writing some example data with a simple schema that initializes an OrcWriter object and sinks it to a local HDFS node?

3. Are there plans to add support for other data types?

Thanks,
Sagar

On Sun, Jun 17, 2018 at 6:45 AM zhangminglei <[hidden email]> wrote:
But if we do Hive on Flink, I think it would be a very big project.


> On Jun 17, 2018, at 9:36 PM, Will Du <[hidden email]> wrote:
>
> Agreed. Two missing pieces that I think could make Flink more competitive against Spark SQL/Streaming and Kafka Streams:
> 1. Flink over Hive or Flink SQL hive table source and sink
> 2. Flink ML on stream
>
>
>> On Jun 17, 2018, at 8:34 AM, zhangminglei <[hidden email]> wrote:
>>
>> Actually, I have an idea: how about supporting Hive on Flink? Lots of business logic is written in Hive SQL, and users want to migrate from MapReduce to Flink without changing the SQL.
>>
>> Zhangminglei
>>
>>
>>
>>> On Jun 17, 2018, at 8:11 PM, zhangminglei <[hidden email]> wrote:
>>>
>>> Hi, Sagar
>>>
>>> There are already related JIRAs for ORC and Parquet; you can take a look here:
>>>
>>> https://issues.apache.org/jira/browse/FLINK-9407 and https://issues.apache.org/jira/browse/FLINK-9411
>>>
>>> The ORC format currently only supports basic data types, such as Long, Boolean, Short, Integer, Float, Double, and String.
>>>
>>> Best
>>> Zhangminglei
>>>
>>>
>>>
>>>> On Jun 17, 2018, at 11:11 AM, sagar loke <[hidden email]> wrote:
>>>>
>>>> We are eagerly waiting for
>>>>
>>>> - Extend Streaming Sinks:
>>>>   - Bucketing Sink should support S3 properly (compensate for eventual consistency), work with Flink's shaded S3 file systems, and efficiently support formats that compress/index across individual rows (Parquet, ORC, ...)
>>>>
>>>> Especially the ORC and Parquet sinks, since we are planning to use Kafka-jdbc to move data from an RDBMS to HDFS.
>>>>
>>>> Thanks,
>>>>
>>>> On Sat, Jun 16, 2018 at 5:08 PM Elias Levy <[hidden email]> wrote:
>>>> One more, since we have to deal with it often:
>>>>
>>>> - Idling sources (Kafka in particular) and proper watermark propagation: FLINK-5018 / FLINK-5479
>>>>
>>>> On Fri, Jun 8, 2018 at 2:58 PM, Elias Levy <[hidden email]> wrote:
>>>> Since wishes are free:
>>>>
>>>> - Standalone cluster job isolation: https://issues.apache.org/jira/browse/FLINK-8886
>>>> - Proper sliding window joins (not overlapping hopping window joins): https://issues.apache.org/jira/browse/FLINK-6243
>>>> - Sharing state across operators: https://issues.apache.org/jira/browse/FLINK-6239
>>>> - Synchronizing streams: https://issues.apache.org/jira/browse/FLINK-4558
>>>>
>>>> Seconded:
>>>> - Atomic cancel-with-savepoint: https://issues.apache.org/jira/browse/FLINK-7634
>>>> - Support dynamically changing CEP patterns: https://issues.apache.org/jira/browse/FLINK-7129
>>>>
>>>>
>>>> On Fri, Jun 8, 2018 at 1:31 PM, Stephan Ewen <[hidden email]> wrote:
>>>> Hi all!
>>>>
>>>> Thanks for the discussion and good input. Many suggestions fit well with the proposal above.
>>>>
>>>> Please bear in mind that with a time-based release model, we would release whatever is mature by end of July.
>>>> The good thing is we could schedule the next release not too far after that, so that the features that did not quite make it will not be delayed too long.
>>>> In some sense, you could read this as a "what to do first" list, rather than "this goes in, other things stay out".
>>>>
>>>> Some thoughts on some of the suggestions
>>>>
>>>> Kubernetes integration: An opaque integration with Kubernetes should be supported through the "as a library" mode. For a deeper integration, I know that some committers have experimented with some PoC code. I would let Till add some thoughts; he has worked the most on the deployment parts recently.
>>>>
>>>> Per-partition watermarks with idleness: Good point. Could one implement that on the current interface, with a periodic watermark extractor?
>>>>
>>>> Atomic cancel-with-savepoint: Agreed, this is important. Making this work with all sources needs a bit more work. We should have this in the roadmap.
>>>>
>>>> Elastic Bloom filters: This seems like an interesting new feature - the above suggested feature set was more about addressing some longer-standing issues/requests. However, nothing should prevent contributors from working on that.
>>>>
>>>> Best,
>>>> Stephan
>>>>
>>>>
>>>> On Wed, Jun 6, 2018 at 6:23 AM, Yan Zhou [FDS Science] <[hidden email]> wrote:
>>>> +1 on https://issues.apache.org/jira/browse/FLINK-5479 ([FLINK-5479] Per-partition watermarks in FlinkKafkaConsumer should consider idle partitions)
>>>> Reported in ML: http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Kafka-topic-partition-skewness-causes-watermark-not-being-emitted-td11008.html It's normally not a common case to have Kafka partitions not producing any data, but it'll probably be good to handle this as well. I ...
>>>>
>>>> From: Rico Bergmann <[hidden email]>
>>>> Sent: Tuesday, June 5, 2018 9:12:00 PM
>>>> To: Hao Sun
>>>> Cc: [hidden email]; user
>>>> Subject: Re: [DISCUSS] Flink 1.6 features
>>>>
>>>> +1 on K8s integration
>>>>
>>>>
>>>>
>>>> On Jun 6, 2018, at 00:01, Hao Sun <[hidden email]> wrote:
>>>>
>>>>> Adding my vote to K8s job mode. Maybe it is this?
>>>>>> Smoothen the integration in Container environment, like "Flink as a Library", and easier integration with Kubernetes services and other proxies.
>>>>>
>>>>>
>>>>>
>>>>> On Mon, Jun 4, 2018 at 11:01 PM Ben Yan <[hidden email]> wrote:
>>>>> Hi Stephan,
>>>>>
>>>>> Will https://issues.apache.org/jira/browse/FLINK-5479 (Per-partition watermarks in FlinkKafkaConsumer should consider idle partitions) be included in 1.6? We are seeing more users with this issue on the mailing lists.
>>>>>
>>>>> Thanks.
>>>>> Ben
>>>>>
>>>>> 2018-06-05 5:29 GMT+08:00 Che Lui Shum <[hidden email]>:
>>>>> Hi Stephan,
>>>>>
>>>>> Will FLINK-7129 (Support dynamically changing CEP patterns) be included in 1.6? There were discussions about possibly including it in 1.6:
>>>>> http://mail-archives.apache.org/mod_mbox/flink-user/201803.mbox/%3cCAMq=OU7gru2O9JtoWXn1Lc1F7NKcxAyN6A3e58kxctb4b508RQ@...%3e
>>>>>
>>>>> Thanks,
>>>>> Shirley Shum
>>>>>
>>>>>
>>>>> From: Stephan Ewen <[hidden email]>
>>>>> To: [hidden email], user <[hidden email]>
>>>>> Date: 06/04/2018 02:21 AM
>>>>> Subject: [DISCUSS] Flink 1.6 features
>>>>>
>>>>>
>>>>>
>>>>> Hi Flink Community!
>>>>>
>>>>> The release of Apache Flink 1.5 has happened (yay!) - so it is a good time to start talking about what to do for release 1.6.
>>>>>
>>>>> == Suggested release timeline ==
>>>>>
>>>>> I would propose to release around end of July (that is 8-9 weeks from now).
>>>>>
>>>>> The rationale behind that: There was a lot of effort in release testing automation (end-to-end tests, scripted stress tests) as part of release 1.5. You may have noticed the big set of new modules under "flink-end-to-end-tests" in the Flink repository. It delayed the 1.5 release a bit, and needs to continue as part of the coming release cycle, but should help make releasing more lightweight from now on.
>>>>>
>>>>> (Side note: There are also some nightly stress tests that we created and run at data Artisans, and where we are looking whether and in which way it would make sense to contribute them to Flink.)
>>>>>
>>>>> == Features and focus areas ==
>>>>>
>>>>> We had a lot of big and heavy features in Flink 1.5, with FLIP-6, the new network stack, recovery, SQL joins and client, ... Following something like a "tick-tock" model, I would suggest focusing the next release more on integrations, tooling, and reducing user friction.
>>>>>
>>>>> Of course, this does not mean that no other pull request gets reviewed, and no other topic will be examined - it is simply meant as a help to understand where to expect more activity during the next release cycle. Note that these are really the coarse focus areas - don't read this as a comprehensive list.
>>>>>
>>>>> This list is my first suggestion, based on discussions with committers, users, and mailing list questions.
>>>>>
>>>>> - Support Java 9 and Scala 2.12
>>>>>
>>>>> - Smoothen the integration in Container environment, like "Flink as a Library", and easier integration with Kubernetes services and other proxies.
>>>>>
>>>>> - Polish the remaining parts of the FLIP-6 rewrite
>>>>>
>>>>> - Improve state backends with asynchronous timer snapshots, efficient timer deletes, state TTL, and broadcast state support in RocksDB.
>>>>>
>>>>> - Extend Streaming Sinks:
>>>>>   - Bucketing Sink should support S3 properly (compensate for eventual consistency), work with Flink's shaded S3 file systems, and efficiently support formats that compress/index across individual rows (Parquet, ORC, ...)
>>>>>   - Support ElasticSearch's new REST API
>>>>>
>>>>> - Smoothen State Evolution to support type conversion on snapshot restore
>>>>>
>>>>> - Enhance Stream SQL and CEP
>>>>>   - Add support for "update by key" Table Sources
>>>>>   - Add more table sources and sinks (Kafka, Kinesis, Files, K/V stores)
>>>>>   - Expand SQL client
>>>>>   - Integrate CEP and SQL, through MATCH_RECOGNIZE clause
>>>>>   - Improve CEP Performance of SharedBuffer on RocksDB
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> Cheers,
>>>> Sagar
>>>
>>
>>


--
Cheers,
Sagar


Re: [DISCUSS] Flink 1.6 features

zhangminglei
In reply to this post by Stephan Ewen
Hi, Community

By the way, there is a very important feature I think we should have. Currently, the BucketingSink does not notify the user when a bucket is ready for use. This becomes very noticeable when Flink works together with an offline end; in business we call that real-time/offline integration. In this case, we should let the user do some extra work when the bucket is ready. And here is the JIRA for this: https://issues.apache.org/jira/browse/FLINK-9609

Cheers
Minglei
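To make the idea concrete, here is a minimal sketch of such a hook, written in plain Python rather than against the actual BucketingSink API; the `on_bucket_ready` callback and the `bucket_assigner`/`roll_count` names are hypothetical, chosen just for illustration:

```python
class NotifyingBucketingSink:
    """Toy bucketing sink: groups records into buckets and invokes a user
    callback once a bucket rolls, i.e. is closed and ready for downstream use."""

    def __init__(self, bucket_assigner, roll_count, on_bucket_ready):
        self.bucket_assigner = bucket_assigner    # maps a record to a bucket id
        self.roll_count = roll_count              # roll a bucket after this many records
        self.on_bucket_ready = on_bucket_ready    # called with (bucket_id, records)
        self.open_buckets = {}

    def write(self, record):
        bucket = self.bucket_assigner(record)
        self.open_buckets.setdefault(bucket, []).append(record)
        if len(self.open_buckets[bucket]) >= self.roll_count:
            self._roll(bucket)

    def close(self):
        # roll whatever is still open when the sink shuts down
        for bucket in list(self.open_buckets):
            self._roll(bucket)

    def _roll(self, bucket):
        records = self.open_buckets.pop(bucket)
        # a real sink would finalize the file here, then fire the notification
        self.on_bucket_ready(bucket, records)
```

A real implementation would roll on size or time and finalize the file before notifying, but the shape of the user-facing callback is the point here.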




Re: [DISCUSS] Flink 1.6 features

Stephan Ewen
Hi all!

A late follow-up with some thoughts:

In principle, all these are good suggestions and are on the roadmap. We are trying to make the release "by time", meaning we aim for a certain date (roughly two weeks from now) and take whatever features are ready into the release.

Looking at the status of the features that you suggested (please take this with a grain of salt; I don't know the status of all issues perfectly):

  - Proper sliding window joins: This is happening, see https://issues.apache.org/jira/browse/FLINK-8478
    At least inner joins will most likely make it into 1.6. Related to that is enrichment joins against time versioned tables, which are being worked on in the Table API: https://issues.apache.org/jira/browse/FLINK-9712

  - Bucketing Sink on S3 and ORC / Parquet Support: There are multiple efforts happening here.
    The biggest one is that the bucketing sink is getting a complete overhaul to support all of the mentioned features, see https://issues.apache.org/jira/browse/FLINK-9749 https://issues.apache.org/jira/browse/FLINK-9752 https://issues.apache.org/jira/browse/FLINK-9753
    Hard to say how much will make it in by the 1.6 feature freeze, but this is happening and will be merged soon.

  - ElasticBloomFilters: Quite a big feature, I added a reply to the discussion thread, looking at whether this can be realized in a more loosely coupled way from the core state abstraction.

  - Per partition Watermark idleness: Noted, let's look at this more. There should also be a way to implement this today, with periodic watermarks (that realize when no records came for a while).
  
  - Dynamic CEP patterns: I don't know if anyone is working on this.
   CEP is getting some love at the moment, though, with SQL integration and better performance on RocksDB for larger patterns. We'll take a note that there are various requests for dynamic patterns.
  
  - Kubernetes Integration: There are some developments going on, both for a "passive" k8s integration (jobs as docker images, Flink being a transparent k8s citizen) and a more "active" integration, where Flink directly talks to k8s to start TaskManagers dynamically. I think the former has a good chance that a first version goes into 1.6; the latter needs more work.

  - Atomic Cancel With Savepoint: Not enough progress on this, yet. It is important, but needs more work.
  - Synchronizing streams: Same as above, acknowledged that this is important, but needs more work.

  - Standalone cluster job isolation: No work on this, yet, as far as I know.

  - Sharing state across operators: This is an interesting and tricky one, I left some questions on the JIRA issue https://issues.apache.org/jira/browse/FLINK-6239
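To illustrate the sliding window join semantics mentioned above: an interval join pairs elements whose timestamps lie within a relative bound of each other, per element, rather than grouping them into a fixed grid of hopping windows. A plain-Python sketch of the semantics (not Flink's API; names are illustrative):

```python
def interval_join(left, right, lower, upper):
    """Join (key, timestamp, value) records: emit a pair whenever a right
    record with the same key has a timestamp within [l_ts + lower, l_ts + upper]
    of a left record -- a per-element sliding bound, not a fixed window grid."""
    out = []
    for l_key, l_ts, l_val in left:
        for r_key, r_ts, r_val in right:
            if l_key == r_key and l_ts + lower <= r_ts <= l_ts + upper:
                out.append((l_key, l_val, r_val))
    return out
```

A streaming implementation would of course buffer state per key and expire it with watermarks instead of nesting loops, but the matching condition is the same.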

Best,
Stephan
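On the per-partition watermark idleness point above, the periodic-watermark workaround could look roughly like the following plain-Python sketch (illustrative names, not Flink's actual assigner interface): a generator that skips partitions that have been silent longer than a timeout when computing the overall watermark.

```python
import time

class IdleAwareWatermarkGenerator:
    """Tracks the max event time per partition and treats a partition as idle
    once it has produced no records for `idle_timeout` seconds, so the overall
    watermark can advance past silent partitions."""

    def __init__(self, partitions, idle_timeout=60.0, clock=time.monotonic):
        self.idle_timeout = idle_timeout
        self.clock = clock
        now = clock()
        self.max_event_time = {p: None for p in partitions}  # latest event time seen
        self.last_seen = {p: now for p in partitions}        # last-activity wall time

    def on_event(self, partition, event_time):
        prev = self.max_event_time[partition]
        self.max_event_time[partition] = event_time if prev is None else max(prev, event_time)
        self.last_seen[partition] = self.clock()

    def current_watermark(self):
        """Minimum event time over *active* partitions; idle ones are skipped.
        Returns None when every partition is idle or has seen no data yet."""
        now = self.clock()
        active = [t for p, t in self.max_event_time.items()
                  if t is not None and now - self.last_seen[p] < self.idle_timeout]
        return min(active) if active else None
```

The clock is injectable so the idleness logic can be tested deterministically; a production version would also have to handle partition discovery and reactivation.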






Re: [DISCUSS] Flink 1.6 features

Vishal Santoshi
I am not sure whether this is on any roadmap, and as someone suggested, wishes are free... TensorFlow on Flink, though ambitious, would be a big win. I am not sure how operator isolation for a hybrid GPU/CPU setup would be achieved, or how repetitive execution could be natively supported by Flink. But it seems that a developer who looks at Flink as filling the ML void that has emerged, and is thus forced to choose between Spark and Flink, needs some futuristic announcement in the ML space, and TensorFlow is the one marquee project that everyone wants to get a handle on.
