Performance issues when RocksDB block cache is full


Performance issues when RocksDB block cache is full

Yaroslav Tkachenko
Hello,

I observe throughput degradation when my pipeline reaches the maximum of the allocated block cache. 

The pipeline consumes from a few Kafka topics at a high rate (100k+ rec/s). Almost every processed message results in a (keyed) state read with an optional write. I've enabled native RocksDB metrics and noticed that everything stays stable until the block cache usage reaches its maximum. If I understand correctly, this makes sense: the block cache serves all reads, and a cache miss can mean reading data from disk, which is much slower (I haven't switched to SSDs yet). Does that sound right?
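[Editor's note: the native RocksDB metrics mentioned above are enabled per-option in Flink's configuration; a minimal sketch, with option keys as documented for recent Flink versions (verify against your version, and note each enabled metric adds some overhead):

```yaml
# flink-conf.yaml — expose native RocksDB metrics via Flink's metric system
state.backend.rocksdb.metrics.block-cache-usage: true      # current block cache memory usage
state.backend.rocksdb.metrics.block-cache-capacity: true   # configured block cache capacity
state.backend.rocksdb.metrics.estimate-num-keys: true      # approximate number of keys in state
```

Comparing block-cache-usage against block-cache-capacity over time is what reveals the moment the cache fills up.]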

One thing I know about the messages I consume: I expect very few keys to be active simultaneously, and most of them can be treated as cold. So I'd love the RocksDB block cache to have a TTL option (say, 30 minutes), which, I imagine, could solve this issue by guaranteeing that only active keys are kept in memory. I don't feel like LRU is doing a very good job here... I couldn't find any option like that, but I'm wondering if someone could recommend something similar.
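[Editor's note: while there is no TTL on the block cache itself, Flink does offer TTL at the state level, which expires cold entries from state entirely and so indirectly reduces cache pressure. A hedged sketch using the `StateTtlConfig` API (the state name `perKeyCounter` and `Long` type are made-up placeholders):

```java
import org.apache.flink.api.common.state.StateTtlConfig;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.common.time.Time;

// Sketch: typically placed in a RichFunction's open(). Entries expire
// 30 minutes after the last write and are purged during RocksDB compaction.
// Note: this trims the state itself; it is NOT a TTL on the block cache.
StateTtlConfig ttlConfig = StateTtlConfig
        .newBuilder(Time.minutes(30))
        .setUpdateType(StateTtlConfig.UpdateType.OnCreateAndWrite)
        .setStateVisibility(StateTtlConfig.StateVisibility.NeverReturnExpired)
        .cleanupInRocksdbCompactFilter(1000)  // re-check TTL every 1000 processed entries
        .build();

// "perKeyCounter" is a hypothetical state name for illustration only.
ValueStateDescriptor<Long> descriptor =
        new ValueStateDescriptor<>("perKeyCounter", Long.class);
descriptor.enableTimeToLive(ttlConfig);
```

Whether this helps depends on whether the cold keys can genuinely be dropped from state rather than merely evicted from cache.]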

Thank you!

--
Yaroslav Tkachenko

Re: Performance issues when RocksDB block cache is full

Dawid Wysakowicz-2

Hey Yaroslav,

Unfortunately I don't have enough knowledge to give you an educated reply. The first part certainly does make sense to me, but I am not sure how to mitigate the issue. I am ccing Yun Tang, who has worked more on the RocksDB state backend (it might take him a while to answer, though, as he is on vacation right now).

Best,

Dawid

On 14/02/2021 06:57, Yaroslav Tkachenko wrote:

Re: Performance issues when RocksDB block cache is full

Yun Tang
In reply to this post by Yaroslav Tkachenko
Hi Yaroslav,

Unfortunately, RocksDB does not offer such a TTL option for the block cache. That said, if you really have only a few active keys, the current LRU implementation should work well, since only the most recently used entries are kept in the cache.
What behavior do you see once the cache reaches its maximum? Have you noticed anything different in the RocksDB metrics?
Perhaps you are running into the problem of write buffers being flushed too early [1], in which case the partitioned index [2] might help.
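[Editor's note: in Flink versions released after this thread (1.13+), the partitioned index/filter feature and the write-buffer memory split that Yun Tang alludes to are exposed as configuration options; a sketch with option keys as documented there (verify against your version):

```yaml
# flink-conf.yaml — pin only the top-level index in the block cache, so index
# and filter blocks compete less with data blocks for cache space
state.backend.rocksdb.memory.partitioned-index-filters: true

# Fraction of managed memory reserved for write buffers (default 0.5);
# raising it can delay premature write-buffer flushes
state.backend.rocksdb.memory.write-buffer-ratio: 0.5
```

Both options only take effect when RocksDB memory is managed by Flink (the default).]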


Best
Yun Tang



From: Dawid Wysakowicz
Sent: Monday, February 15, 2021 17:55
To: Yaroslav Tkachenko; [hidden email]
Cc: Yun Tang
Subject: Re: Performance issues when RocksDB block cache is full
