Hey, I noticed that the Flink StateFun 2.1.0 release notes had this snippet regarding TTL:
I noticed that the ticket and PR for this have been closed with a reference to commit "289c30e8cdb54d2504ee47a57858a1d179f9a540". Does this mean that if I upgrade to 2.2.2 and set an expiration in my module.yaml, it is now per function ID rather than shared across all instances of that function?

Thanks,
Tim
Hi Timothy,

Starting from StateFun 2.2.x, in the module.yaml file you can set an "expireMode" field for each individual state of a function, whose value can be either "after-invoke" or "after-write". For example:

```
- function:
    meta:
      ...
    spec:
      states:
        - name: state-1
          expireMode: after-write
          expireAfter: 1min
        - name: state-2
          expireMode: after-invoke
          expireAfter: 5sec
```

In earlier versions, expireMode could not be set individually for each state, so 2.2.x is more flexible.

As a somewhat related side note: starting from StateFun 3.0, all state-related configuration will be removed from module.yaml and instead be defined by the language SDKs. This opens up even more flexibility, such as zero-downtime upgrades of remote functions, which allows adding / removing state declarations without restarting the StateFun cluster. We're planning to reach out soon to the language SDK developers we know of (which includes you, for the Haskell SDK ;) ) with a briefing on this change, as there is a change in the remote invocation protocol, and existing SDKs will need to be updated in order to work with StateFun 3.0.

Cheers,
Gordon

On Wed, Feb 24, 2021 at 11:00 PM Timothy Bess <[hidden email]> wrote:
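[Editor's note] One way to picture the difference between the two modes is the toy model below. It is an illustrative sketch only, not StateFun's actual implementation: roughly, "after-write" restarts the TTL timer only when the state is written, while "after-invoke" also restarts it when the state is accessed during an invocation.

```python
import time

class TtlState:
    """Toy model of StateFun's two expire modes (not real internals):
    'after-write' restarts the timer only on writes;
    'after-invoke' also restarts it on every access."""

    def __init__(self, mode, ttl_seconds, clock=time.monotonic):
        assert mode in ("after-write", "after-invoke")
        self.mode = mode
        self.ttl = ttl_seconds
        self.clock = clock       # injectable clock, handy for testing
        self.value = None
        self.deadline = None

    def write(self, value):
        self.value = value
        self.deadline = self.clock() + self.ttl  # both modes reset on write

    def read(self):
        if self.deadline is not None and self.clock() > self.deadline:
            self.value = None    # TTL elapsed: the state is gone
            self.deadline = None
        elif self.mode == "after-invoke" and self.deadline is not None:
            # An access also pushes the deadline out in after-invoke mode.
            self.deadline = self.clock() + self.ttl
        return self.value
```

With a 10-second TTL, a value written at t=0 and read at t=8 survives a second read at t=15 under "after-invoke" (the t=8 access moved the deadline to t=18) but is already gone under "after-write" (the deadline stayed at t=10).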
Hi Gordon,

Ah, so when it said "all registered state" that meant all state keys defined in module.yaml, not all state across all function instances. So expiration has always been _per_ instance, then, and not shared across instances of a function.

Thanks for the heads up; that sounds like a good change! I definitely like the idea of putting more configuration into the SDK so that there aren't two sources that have to be kept up to date. It would be neat if eventually the SDK just hosted a "/spec" endpoint that serves a list of functions and all their configuration options to StateFun on boot.

Btw, I ended up also making a Scala replica of my Haskell library to use at work (some of the examples in the microsite are a bit out of date, need to revisit that): https://github.com/BlueChipFinancial/flink-statefun4s

I know it seems weird not to use an embedded function, but it keeps us from having to deal with mismatched Scala versions, since Flink is still on 2.12, and it generally reduces friction using things in the Scala Cats ecosystem.

Thanks,
Tim

On Wed, Feb 24, 2021 at 11:49 AM Tzu-Li (Gordon) Tai <[hidden email]> wrote:
On Thu, Feb 25, 2021 at 12:06 PM Timothy Bess <[hidden email]> wrote:
Exactly! Expiration happens individually for each function instance per declared state.
Really cool to hear about your efforts on a Scala SDK! I wouldn't say it's weird to implement a Scala SDK for remote functions. In fact, with the changes coming in 3.0, the community is doubling down on remote as the primary deployment mode for functions, and we'd like to have a wider array of supported language SDKs. There's actually a remote Java SDK that was just merged to master, to be released in 3.0 [1].

Cheers,
Gordon

[1] https://github.com/apache/flink-statefun/tree/master/statefun-sdk-java