Valkey 9.0 In-Memory Data Store Brings Atomic Slot Migrations, Clustered Databases

Valkey 9.0 in-memory data store debuts atomic slot migrations, cluster-wide databases, and major throughput boosts for modern workloads.

Following the initial 8.0 release, Valkey 9.0 is the second major version of this open-source, Redis-compatible data store. The headline change is atomic slot migration, a fundamental rework of how data moves between nodes in a cluster.

Previously, Valkey used a key-by-key approach: each key was moved, deleted, and reinserted individually. That worked well enough, but under heavy load or with large datasets it could cause performance drops, stalled migrations, and even temporary data inaccessibility until the process finished.

Valkey 9.0 replaces this with a slot-based migration system. Instead of transferring keys one by one, entire slots—each containing a group of keys—are migrated atomically using the Append Only File format. This allows collections like sets or lists to be transferred efficiently without overwhelming the target node’s input buffer. The result is smoother migrations, fewer retries, and no partial data states.
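For background, Valkey inherits Redis Cluster's keyspace model, in which every key maps to one of 16,384 hash slots via a CRC16 checksum; a slot, not an individual key, is the unit that now migrates atomically. A minimal sketch of that mapping (note: the real implementation also honors `{hash-tag}` substrings, omitted here for brevity):

```python
def crc16_xmodem(data: bytes) -> int:
    # CRC16-CCITT (XModem variant): polynomial 0x1021, initial value 0,
    # processed most-significant bit first -- the checksum Redis-style
    # clusters use for key-to-slot mapping.
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if (crc & 0x8000) else (crc << 1)
            crc &= 0xFFFF
    return crc

def key_slot(key: bytes) -> int:
    # All keys that hash to the same slot live on the same node and,
    # in Valkey 9.0, move between nodes together as one atomic unit.
    return crc16_xmodem(key) % 16384
```

Because keys in the same slot always travel together, a slot-level transfer can complete or fail as a whole, which is what eliminates the partial data states of the old key-by-key scheme.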

Moreover, this update brings hash field expiration functionality. Until now, Valkey only allowed expiration at the key level: if one field needed to expire, the entire hash had to go. The new version introduces commands such as HEXPIRE, HEXPIREAT, HGETEX, and HPERSIST, enabling fine-grained control over field-level expiry.
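To illustrate what per-field expiry means in practice, here is a toy Python sketch of the semantics (this is an illustration only, not Valkey's implementation; the class and method names are invented, with an injectable clock so the behavior is testable):

```python
import time

class HashWithFieldTTL:
    """Toy sketch of per-field expiry semantics, HEXPIRE/HPERSIST-style."""

    def __init__(self, clock=time.monotonic):
        self._clock = clock
        self._data = {}    # field -> value
        self._expiry = {}  # field -> absolute expiry deadline

    def hset(self, field, value):
        self._data[field] = value

    def hexpire(self, field, ttl_seconds):
        # Like HEXPIRE: attach a TTL to a single field, not the whole hash.
        if field in self._data:
            self._expiry[field] = self._clock() + ttl_seconds

    def hpersist(self, field):
        # Like HPERSIST: remove the field's TTL while keeping its value.
        self._expiry.pop(field, None)

    def hget(self, field):
        deadline = self._expiry.get(field)
        if deadline is not None and self._clock() >= deadline:
            # Lazily evict only the expired field; siblings survive.
            del self._data[field]
            del self._expiry[field]
            return None
        return self._data.get(field)
```

The key point the sketch captures: one field (say, a session token) can expire while the rest of the hash (the user profile) stays intact, which previously required splitting the data across separate keys.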

Another standout change that breaks from the legacy design of its predecessor is the newly added full support for numbered databases in cluster mode.

Previously, cluster deployments were locked to a single database (db 0), which limited workload isolation. Now users can logically separate workloads and avoid key collisions without giving up cluster functionality, a handy capability for multi-tenant environments or sharded applications.
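Conceptually, a numbered database is just an independent keyspace: the same key name can exist in db 0 and db 1 without colliding. A minimal sketch of that isolation property (invented names, illustrating the semantics rather than Valkey's internals):

```python
class NumberedDatabases:
    """Toy sketch: each numbered database is an isolated keyspace,
    so identical key names in different databases never collide."""

    def __init__(self, num_dbs=16):
        # Valkey defaults to 16 logical databases in standalone mode;
        # 9.0 extends numbered databases to cluster mode as well.
        self._dbs = [dict() for _ in range(num_dbs)]

    def set(self, db, key, value):
        self._dbs[db][key] = value

    def get(self, db, key):
        return self._dbs[db].get(key)
```

For a multi-tenant setup, this means tenant A's `user:1` in db 0 and tenant B's `user:1` in db 1 are entirely separate entries, with no key-prefixing conventions required.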

Beyond the big-ticket features, Valkey 9.0 brings a long list of optimizations aimed at pushing throughput and reducing latency:

  • 1 Billion Requests/Second: Improved cluster resilience enables scaling to 2,000 nodes and handling up to a billion requests per second.
  • Pipeline Memory Prefetch: Boosts throughput by up to 40%.
  • Zero-Copy Responses: Reduce memory overhead and improve handling of large requests.
  • Multipath TCP Support: Lowers latency by around 25% on multipath network links.
  • SIMD Optimizations: Adds vectorized processing for BITCOUNT and HyperLogLog, improving performance by up to 200%.
  • Conditional Delete: A new DELIFEQ command deletes keys only when their values match a given condition.
  • CLIENT LIST Filtering: Enables filtering by flags, names, IPs, and other parameters for more precise monitoring.
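Of the list above, the conditional delete deserves a closer look. DELIFEQ is a compare-and-delete: the key is removed only if its current value equals the one the caller supplies, with the check and the delete happening atomically on the server. A Python sketch of the semantics (the function name and signature are invented for illustration):

```python
def del_if_eq(store: dict, key: str, expected) -> bool:
    """Compare-and-delete sketch of DELIFEQ semantics: delete `key`
    only if its current value equals `expected`. Returns True if the
    key was deleted. (In Valkey the check and delete are one atomic
    server-side operation, so no other client can race between them.)"""
    if store.get(key) == expected:
        del store[key]
        return True
    return False
```

A classic use case is safe lock release: delete the lock key only if it still holds your own token, so you never free a lock that another client has since acquired. Before a command like this, that pattern typically required a small server-side script.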

Finally, the project revisited 25 previously deprecated commands and reinstated them. For more information, see the release announcement.

Bobby Borisov

Bobby, an editor-in-chief at Linuxiac, is a Linux professional with over 20 years of experience. With a strong focus on Linux and open-source software, he has worked as a Senior Linux System Administrator, Software Developer, and DevOps Engineer for small and large multinational companies.
