21.10.0 Release Notes

Yves Lorphelin  |  03 November 2021

We are pleased to announce the official release of EventStoreDB OSS & Commercial version 21.10.0 long-term support (LTS).

This LTS release will be supported for a period of 24 months, until October 2023. This release also marks the end of long-term support for the 5.x versions. Read more about our versioning strategy here.

The complete changelog can be found here. If you need help planning your upgrade or want to discuss support, please contact us here.

EventStoreDB 21.10.0 is available for the following operating systems:

  • Windows
  • Ubuntu 18.04
  • Ubuntu 20.04
  • CentOS 7 (Commercial version)
  • Amazon Linux 2 (Commercial version)
  • Oracle Linux 7 (Commercial version)

Additionally, an experimental build for ARM64 processors will be available shortly as a Docker image.

Note: Ubuntu 16.04 is no longer supported as of this release. More information about Ubuntu's release and support policy can be found here.

Where Can I Get the Packages?

Downloads are available on our website.

The packages can also be installed using the following instructions.

Ubuntu 18.04/20.04 (via packagecloud)

curl -s https://packagecloud.io/install/repositories/EventStore/EventStore-OSS/script.deb.sh | sudo bash
sudo apt-get install eventstore-oss=21.10.0

Windows (via Chocolatey)

choco install eventstore-oss -version 21.10.0

Docker (via docker hub)

docker pull eventstore/eventstore:21.10.0-focal
docker pull eventstore/eventstore:21.10.0-buster-slim

Highlights for this Release

Interpreted runtime for projections

The interpreted runtime for projections introduced in version 21.6.0 is now the default, and the V8 engine has been completely removed from the server (EventStore#3193).
This new runtime is a step toward ARM-based support for the database, as well as an enhanced projections debugging experience in the future.

Improved index performance for large scale instances

Performance when new streams are created

A Bloom filter (the Stream Existence Filter) has been added to EventStoreDB to improve performance when appending events to new streams. When appending to a stream, EventStoreDB looks up the number of the last event written to that stream. If the stream is new, this lookup is expensive because every index file needs to be searched. The Bloom filter allows us to detect, most of the time, that the stream is new and skip the index searches.

You can control the size of the filter using the --stream-existence-filter-size configuration option, which is specified in bytes. We recommend setting it to 1-2x the number of streams expected in the database.

The first time EventStoreDB is started after the upgrade it will need to build the Stream Existence Filter. This can be an expensive operation for large databases, taking approximately as long as it takes to read through the whole index. The filter can be disabled by setting the size to 0.
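
As a rough illustration, the option can be set on the command line or through the corresponding environment variable. This is a sketch only: the eventstored binary name is taken from the Linux packages, the value is purely illustrative, and the EVENTSTORE_-prefixed environment variable form assumes the usual option-to-variable mapping.

# Size the filter for roughly 100-200 million expected streams (value is in bytes)
eventstored --stream-existence-filter-size=200000000

# Disable the filter entirely (skips the initial build, at the cost of the optimisation)
EVENTSTORE_STREAM_EXISTENCE_FILTER_SIZE=0 eventstored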

You can read more details about the rationale and implementation in this blog post.

This feature adds a new directory under the index location, containing two additional files:

/index/stream-existence/streamExistenceFilter.chk
/index/stream-existence/streamExistenceFilter.dat

This impacts file-copy-based backup procedures, as these files need to be part of the backup. It also affects disk provisioning: the additional disk space used by streamExistenceFilter.dat needs to be taken into account. The default size is approximately 260 MB.

Performance when reading streams

In addition to the Stream Existence Filter described above, which tracks which streams exist in the whole database, we have also added a Bloom filter for each index file. The index file Bloom filters track which streams are present in each index file. This allows EventStoreDB to serve stream read requests more quickly by not searching index files that do not contain the stream. The speed-up is most pronounced for streams whose events are contained in a small number of index files (i.e. their events are not spread widely across the log). Our testing found a 3x performance improvement for reading such streams in a 2 billion event database.

The index Bloom filters are stored next to each index file with the same name and a .bloomfilter extension and can be backed up and restored in the same way as the index files themselves.

The index Bloom filters are only created for new index files (i.e. when writing new data, on index merge, on scavenge, or on index rebuild). Therefore the performance will improve over time as older index files are merged together and given Bloom filters. Rebuilding the index will generate the Bloom filters immediately.

If necessary, the index Bloom filters can be disabled by setting --use-index-bloom-filters to false. This setting is a feature flag for safety and will be removed in a subsequent release. Please reach out to Event Store if you find that you need to disable this feature.
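
Should you need to turn it off, a minimal sketch (again assuming the eventstored binary from the Linux packages):

# Disable per-index-file Bloom filters (feature flag, due to be removed in a later release)
eventstored --use-index-bloom-filters=false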

More performance when reading streams

EventStoreDB now has the option to keep an in-memory least-recently-used (LRU) cache per index file to speed up repeated reads. The maximum number of entries per index file can be set with the configuration option:

--index-cache-size

The cache size is set to 0 (off) by default because it has an associated memory overhead and can be detrimental to workloads that produce a lot of cache misses. The cache is, however, well suited to read-heavy workloads on long-lived streams. Our testing found a 2-3x performance improvement for such workloads in a 2 billion event database, making those reads as quick as reading streams whose events are contained in a small number of index files.

The index LRU cache is only created for index files that have Bloom filters.
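
As an illustration, a sketch of switching the cache on (the entry count is arbitrary, and the kebab-case flag mirrors the option name shown above):

# Keep up to 4096 recently used entries per index file (default is 0, i.e. disabled)
eventstored --index-cache-size=4096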

Support for Intermediate CA Certificates

We have introduced support for intermediate CA certificates (EventStore#3176). Intermediate CAs can now be bundled in PEM or PKCS #12 format with the node certificate. Additionally, in order to improve performance, the server will also try to bypass intermediate certificate downloads when the certificates are available on the system in the appropriate locations.
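
For the PEM case, bundling simply means concatenating the certificates into a single file, node (leaf) certificate first. A sketch with illustrative file names; the resulting bundle is the file you point the node's certificate configuration at:

# Build a PEM bundle: node certificate followed by the intermediate CA certificate
cat node.crt intermediate.crt > node.bundle.crt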

Steps on Linux

sudo su eventstore --shell /bin/bash
dotnet tool install --global dotnet-certificate-tool
~/.dotnet/tools/certificate-tool add --file /path/to/intermediate.crt

Steps on Windows

To import the intermediate certificate into the current user's certificate store, run the following PowerShell under the same account that EventStoreDB runs under:

Import-Certificate -FilePath .\path\to\intermediate.crt -CertStoreLocation Cert:\CurrentUser\CA

To import the intermediate certificate in the `Local Computer` store, run the following as `Administrator`:

Import-Certificate -FilePath .\ca.crt -CertStoreLocation Cert:\LocalMachine\CA

Alternatively, you can right-click on the intermediate certificate and click Install (let Windows choose the appropriate location; it should go under Intermediate Certification Authorities -> Certificates in the current user's certificate store). We will be adding support to the es-gencert-cli tool to allow generating self-signed intermediate certificates shortly.

Server Capabilities

With the 21.6 release we added some additional server features. However, to properly support forward and backward compatibility, we needed to add a server capabilities discovery feature to the database. This enables us to update clients to use newer features, as they now have a safe way to detect what the server supports and to produce meaningful error messages if a client tries to use a feature that doesn't exist on the server version it is connecting to. Because of this, some of the features below that were added in 21.6 are only supported from 21.10 onwards, and a database upgrade will need to be performed to take advantage of them.

Persistent Subscriptions to $all

Persistent Subscriptions over gRPC now support subscribing to the $all stream, with an optional filter. These subscriptions can only be created in a gRPC client, not through the UI or the TCP clients.

Persistent Subscriptions to $all were introduced with the 21.6 version. Clients supporting this feature will be released over the next few weeks as they require the server capabilities feature listed above.

Experimental v3 Log Format

This version continues the work begun with 21.6 on the new log format. This new format will enable new features and improved performance for larger databases.

At the moment, the new log format should behave similarly to the existing one. You can enable it by setting the DbLogFormat option to ExperimentalV3 if you want to check it out.

Please be aware that this log format version is not compatible with the log V2 format, and is itself subject to change. As such, it is not meant for running in production, and can only be used for new databases. Tooling for migration will be coming at a later stage.
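
For a throwaway test node, opting in could look like the sketch below (the kebab-case flag is assumed to mirror the DbLogFormat option name, and the data directory is illustrative):

# New databases only: start a test node using the experimental V3 log format
eventstored --db-log-format=ExperimentalV3 --db=/tmp/esdb-v3-test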

BatchAppend for gRPC

Support for a more performant append operation has been added to the gRPC proto. This will make appending large numbers of events much faster. It does come with some restrictions, such as all appends being made with a single user specified at the connection level rather than at the operation level. Further documentation and client support will be added shortly. Clients supporting this feature will be released over the next few weeks, as they require the server capabilities feature listed above.

Auto Configuration on Startup

There are a few configuration options that need to be tuned when running EventStoreDB on larger instances or machines in order to make the most of the available resources. To help with that, some options are now configured automatically at startup based on the system resources.

These options are StreamInfoCacheCapacity, ReaderThreadsCount, and WorkerThreads.

If you want to disable this behavior, you can do so by explicitly setting those options in your configuration, as sketched below.
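
For instance, a sketch of pinning all three options so that the startup auto-configuration leaves them alone (the values are illustrative only; the kebab-case flags mirror the option names):

# Explicit values always take precedence over the auto-configured ones
eventstored \
  --stream-info-cache-capacity=3000000 \
  --reader-threads-count=8 \
  --worker-threads=10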

StreamInfoCacheCapacity

StreamInfoCacheCapacity sets the maximum number of entries to keep in the stream info cache. This cache holds the information about any stream that has recently been read or written to. Having entries in this cache significantly improves write and read performance for cached streams on larger databases.

The cache is configured at startup based on the available free memory at the time. If there is 4GB or more available, it will be configured to take at most 75% of the remaining memory, otherwise, it will take at most 50%. The minimum that it can be set to is 100,000 entries.

ReaderThreadsCount

ReaderThreadsCount configures the number of reader threads available to EventStoreDB. Having more reader threads allows more concurrent reads to be processed.

The reader threads count will be set at startup to twice the number of available processors, with a minimum of 4 and a maximum of 16 threads.

WorkerThreads

WorkerThreads configures the number of threads available to the pool of worker services.

At startup, the number of worker threads will be set to 10 if there are more than 4 reader threads. Otherwise, it will be set to 5 threads.

Gossip on a Single Node

All nodes have gossip enabled by default. You can connect using gossip seeds regardless of whether you have a cluster or not.

Note that the GossipOnSingleNode option has been removed in this version.
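
In practice this means a discovery-style connection string now works against a standalone node as well. A sketch, assuming the gRPC clients' esdb+discover scheme and the default HTTP port:

# Discovery via gossip against a single node
esdb+discover://localhost:2113
# A direct single-node connection string still works too
esdb://localhost:2113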

Heartbeat Timeout Improvements

In a scenario where one side of a connection to EventStoreDB is sending a lot of data and the other side is idle, a false-positive heartbeat timeout can occur for the following reasons:

  • The heartbeat request may need to wait behind a lot of other data on the send queue on the sender’s side or on the receive queue on the receiver’s side before it can be processed.
  • The receiver does not schedule any heartbeat request to the sender as it assumes that the connection is alive.
  • The sender’s heartbeat request can eventually take more time than the heartbeat timeout to reach the receiver and be processed causing a false-positive heartbeat timeout to occur.

In this release, we have extended the heartbeat logic by proactively scheduling a heartbeat request from the receiver to the sender to prevent the heartbeat timeout. This should lower the number of incorrect heartbeat timeouts that occur on busy clusters.

Please see the documentation for more information about heartbeats and how they work.
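
If you still see timeouts on very busy connections, the heartbeat interval and timeout remain configurable. A hedged sketch using the long-standing internal (node-to-node) TCP heartbeat options, with purely illustrative values in milliseconds:

# Give replication heartbeats more headroom on a busy cluster
eventstored --int-tcp-heartbeat-interval=1000 --int-tcp-heartbeat-timeout=2500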

Content Type Validation for Projections

We want to make sure that projections are predictable.
To support coming changes, we have added content type validation for projections. This means the following:

  • If the event is a JSON event, then it must have valid, non-null JSON data.
  • If the event is not a JSON event, then it may have null data.
  • Null metadata is accepted in any scenario.

Events that don’t meet these requirements will be filtered out without erroring the projection.

This change only takes effect either when a projection is created on v21.2.0 or higher, or if a projection is stopped and started again. Projections that were created before the upgrade will not enforce content validation.

Breaking Changes for Community supported gRPC Clients

  • Projections
    The behavior of stopping a projection has been corrected. Setting WriteCheckpoint to true will now stop the projection, whereas setting it to false will abort it.

  • Persistent Subscriptions
    Updating a non-existent persistent subscription will return a PersistentSubscriptionNotFoundException rather than an InvalidOperationException.

  • Streams
    Deleting a stream that does not exist now results in an error.

Changes in Backup Procedure

The introduction of the new index files (Bloom filters) requires a change in the backup procedure if you use file-based backups.
New files and directories have been added in the index directory.
Read the documentation here carefully in order to adapt your backup procedure.

Breaking Changes

Log files rotation

EventStore#3212 added proper log rotation for on-disk logging. To accomplish this, Serilog now adds a date stamp to the file names of both the stats and regular logs, i.e., log-statsYYYYMMDD.json and logYYYYMMDD.json respectively.

Intermediate Certificates

As mentioned earlier, support has been added for intermediate certificates in EventStore#3176. However, if you are currently using intermediate certificates and rely on the AIA URL to build the full chain, the configuration will no longer work and will print the error “Failed to build the certificate chain with the node's own certificate up to the root” on startup.

Upgrade Procedure

To upgrade a cluster from 20.10.x, a usual rolling upgrade can be done:

  • Pick a node (start with follower nodes first, then choose the leader last).
  • Stop the node, upgrade it and start it.

There is no way to perform a rolling upgrade from version 5.x to version 21.10.0 due to changes in the replication protocol and the way nodes gossip and hold elections.

As such, the upgrade process from 5.x is as follows:

  • Take down the cluster.
  • Perform an in-place upgrade of the nodes, ensuring that the relevant configuration and certificates are set up.
  • Bring the nodes back online and wait for them to stabilize.

Contributions by the Community

Thank you to the community members who made contributions to this server release!

Documentation

Documentation for EventStoreDB can be found here.
If you have any questions that aren't covered in these release notes or the docs, please feel free to reach out on Discuss, GitHub, or Slack.

Providing Feedback

If you encounter any issues, please don’t hesitate to open an issue on GitHub if there isn’t one already.
Additionally, there is a fairly active Discuss forum and an #eventstore channel on the DDD-CQRS-ES Slack community.



Yves Lorphelin is Principal Solution Architect at Event Store and helps customers and users reap the benefits of Event Sourcing and EventStoreDB. He has been in the industry for over 20 years, always focusing on finding and solving the actual problems that businesses want to solve with their IT systems.