How to gradually migrate from CRUD to Event Sourcing

Dennis Doomen  |  18 March 2021

Strategies for migrating to Event Sourcing

Let me start this article by saying that Event Sourcing is a great architecture style for high-performance, collaborative domains, and one that warrants the complexity it adds. But as I said before, just like any other principle or practice, even Event Sourcing comes with pros and cons. And it's not a top-level architecture; some parts of your system may benefit from it, but others may not. That being said, if you need Event Sourcing and you have an existing, more traditional (a.k.a. CRUD) application, there are roughly three strategies you can follow:

  1. Keep everything as-is and build only the new parts of the system using Event Sourcing
  2. Shadow an existing sub-system or domain by rebuilding it side-by-side. Then, after the rebuild has completed, switch over all existing consumers and migrate the data automatically.
  3. Do a gradual entity-by-entity migration of an existing domain

About seven years ago, we gradually converted an existing .NET application that was designed using the Command Query Responsibility Segregation (CQRS) pattern to Event Sourcing. Since plenty has been written about the first two scenarios, let me share the recipe we followed for the third.

Migrating entity by entity

Let's begin by establishing the terminology. In a more traditional system, your domain consists of entities. In an Event Sourcing world, you'll often see a couple of those related entities forming a transactional boundary. In Domain-Driven Design, this is called an aggregate. Most event stores use the term stream to capture all of the events that ever happened within that aggregate. And there's usually a single entity within that aggregate serving as the sole entry point. This is the aggregate root, which is identified by a unique number or key, the stream ID. Now that we've got that out of the way, here are some practical steps to get you going.

  1. Figure out whether your current domain relies on transactions across multiple entities and whether or not the event store implementation supports cross-aggregate (or cross-stream) transactions.
  2. Carefully decide which entities are going to form the aggregate. If your aggregate is too big, and you're not ready to adopt event merging techniques, you increase the chance that users are going to run into optimistic concurrency conflicts. If your aggregate is too small, and your event store does not support cross-aggregate transactions, you'll have to handle those business rules in a functional manner, for example, using compensating actions. That's why it's so important to let those invariants help you define the boundaries of your aggregate.
  3. Determine which entity should serve as the aggregate root, the entry point into the aggregate, and add a version to it. Make sure that any change to the entities within the aggregate bumps the version. If you already had a version there, we recommend calculating the new version by adding the number of events to the original version number.
  4. Ensure no other code can mutate the state of the entities within the aggregate without going through the aggregate root first. Replace writeable properties and public methods on the child entities with methods on the root, so the root controls access, protects the business rules, generates unique child IDs and bumps the version (see the first sketch after this list).
  5. Remove direct dependencies between entities across aggregates. For example, in many domains supported by an Object-Relational Mapper, it's common to have lazy-loading properties. You need to refactor any code that relies on that, or introduce and inject repository abstractions.
  6. Ensure entities are persistence ignorant and do not directly access the database. Either move this to command handlers that handle incoming requests from your APIs, or introduce repository abstractions for that as well.
  7. Determine a natural partition key for that aggregate, something that allows you to split the events in case the event store becomes very big and causes performance problems or storage issues. A great partition key is something that separates data in such a way that you don't need to handle business rules across partitions. For example, maybe your domain is organized per geographical region or a company. In a multi-tenant domain, the tenant ID would be a good candidate.
  8. Since you're not supposed to modify history, the concept of deletion is a bit different in Event Sourcing. Although you technically can delete the events from the underlying event store, you typically take a more functional approach and mark the aggregate as deleted using an event. So any query that used to request a specific instance of an entity and was prepared to not find anything must be adapted, either explicitly or through some kind of abstraction. A common solution is to add an IsDeleted property to the aggregate root that a repository implementation can check for (see the second sketch after this list).
  9. Think about data import needs. If you're used to importing data directly through tables, you'll have to change that into something like a CLI or HTTP API. Also decide whether you want to handle that import through the existing "property-changed" events or through a specialized "data-was-imported" event.
  10. Carefully determine how to map the original key of your entity to the stream ID. Most event stores support using strings as stream IDs, but it's not possible to change the ID afterwards without jumping through some more complicated hoops. If your store only works with GUIDs, you can use a deterministic GUID generator (the third sketch after this list shows one way to do that). And don't forget that there's a difference between the internal keys and the ones you expose outside your domain.
  11. Closely related to this is the fact that guaranteeing uniqueness works a bit differently in Event Sourcing. So if your domain relied on the database schema to enforce unique constraints, you'll need to find alternatives (e.g. encoding the unique value in the stream ID).
  12. Introduce the infrastructure for loading and saving aggregates from/to the event store and rehydrate an aggregate from the persisted events (see the fourth sketch after this list). You can find some examples on how to do that as well as a base-class for your aggregate roots in .NET here, here and here. Up to now, we've mostly used those references as examples and not as frameworks to build our domains on.
  13. In case you have a repository abstraction, make sure it knows which entities have been converted and need to be loaded from the event store, and which still need to be loaded from the original tables. We used a marker interface or a .NET attribute for that (see the fifth sketch after this list).
  14. Postpone decisions like snapshotting until you need them. Snapshotting is a valid solution for aggregates that end up having lots and lots of events. But don't go there until you have sufficient performance results to warrant that complexity.
  15. Decide how you're going to convert an existing entity stored in the database into an Event Sourced aggregate. In the past we tried to map each existing record onto the individual, fine-grained "property-changed" events. In retrospect, we should have defined a one-time conversion event (see the sixth sketch after this list).
  16. Determine whether you want to make the projection code transactionally consistent with the events emitted by the aggregate and whether that would give you acceptable performance. If you don't go for that, and all projection tables are built asynchronously, make sure the rest of the code base does not expect queries on the projection tables to be consistent.
  17. Design the strategy to convert existing data into the new Event Sourced model (the last sketch after this list shows the overall loop). For example, this is what we did:
    1. Rename the existing table and its child tables using a temporary name
    2. Read the records one by one and build new aggregates using the events you designed in the earlier steps
    3. Project those new events into a new set of tables with the same names and structure as what it looked like before the migration started
    4. Delete each record from the temporary table as soon as it was converted and projected
    5. Delete the temporary tables
  18. Repeat the previous steps for the remaining entities, but don't hesitate to release intermediate steps to production.
  19. Build more optimized projections depending on your needs. But don't forget, the first goal is to convert your existing code base.
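
To make some of these steps more tangible, here are a few C# sketches. They are illustrations under assumed names and simplified signatures, not drop-in implementations, and none of the types come from a specific library. First, for steps 3 and 4, an aggregate root that owns its child entities, protects the business rules and bumps a version on every change:

```csharp
using System;
using System.Collections.Generic;

public class Order // the aggregate root and sole entry point
{
    private readonly List<OrderLine> lines = new();

    public long Version { get; private set; }

    // Child entities are exposed read-only; all mutations go through the root.
    public IReadOnlyCollection<OrderLine> Lines => lines;

    public Guid AddLine(string productCode, int quantity)
    {
        // The root protects the business rules...
        if (quantity <= 0)
        {
            throw new InvalidOperationException("Quantity must be positive");
        }

        // ...generates unique child IDs...
        var lineId = Guid.NewGuid();
        lines.Add(new OrderLine(lineId, productCode, quantity));

        // ...and bumps the version on every change.
        Version++;

        return lineId;
    }
}

public class OrderLine // child entity without public setters
{
    internal OrderLine(Guid id, string productCode, int quantity)
    {
        Id = id;
        ProductCode = productCode;
        Quantity = quantity;
    }

    public Guid Id { get; }
    public string ProductCode { get; }
    public int Quantity { get; }
}
```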
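
For step 8, deleting becomes raising an event and flagging the aggregate, while a repository translates that flag back into the "not found" semantics the rest of the code base expects. Again, all names here are hypothetical:

```csharp
using System;
using System.Threading.Tasks;

// Instead of removing data, the aggregate records the deletion as an event.
public sealed record ShipmentWasDeleted(Guid ShipmentId);

public class Shipment
{
    public Guid Id { get; private set; }
    public bool IsDeleted { get; private set; }

    // In a full implementation this would go through the same event-applying
    // mechanism shown in the rehydration sketch below.
    public void Delete() => IsDeleted = true;
}

public class ShipmentRepository
{
    // Callers that were prepared to not find anything keep working,
    // because a soft-deleted aggregate is reported as absent.
    public async Task<Shipment> FindByIdAsync(Guid id)
    {
        Shipment shipment = await LoadFromEventStoreAsync(id);
        return shipment is { IsDeleted: false } ? shipment : null;
    }

    private Task<Shipment> LoadFromEventStoreAsync(Guid id) =>
        throw new NotImplementedException("event store access omitted");
}
```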
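
For step 10, a deterministic GUID can be derived by hashing the original key, so the same legacy key always maps to the same stream ID. This is a simplified, name-based scheme, not a full RFC 4122 version-5 implementation:

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

public static class DeterministicGuid
{
    public static Guid Create(string key)
    {
        // Hash the key and take the first 16 bytes as the GUID, so the
        // mapping from legacy key to stream ID is stable across runs.
        using var sha1 = SHA1.Create();
        byte[] hash = sha1.ComputeHash(Encoding.UTF8.GetBytes(key));

        byte[] guidBytes = new byte[16];
        Array.Copy(hash, guidBytes, 16);

        return new Guid(guidBytes);
    }
}

// Usage: the same legacy key always produces the same stream ID.
// Guid streamId = DeterministicGuid.Create("Customer/42");
```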
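
For step 12, the usual shape of that infrastructure is a small base class that replays the persisted events through a When method to rehydrate the aggregate, and that collects new events so a repository can append them to the stream. A minimal sketch, assuming events are plain objects:

```csharp
using System.Collections.Generic;

public abstract class EventSourcedAggregate
{
    private readonly List<object> changes = new();

    public long Version { get; private set; }

    // Rebuilds in-memory state by replaying the events loaded from the stream.
    public void Rehydrate(IEnumerable<object> history)
    {
        foreach (object @event in history)
        {
            When(@event);
            Version++;
        }
    }

    // New behavior records an event and immediately applies it to the state.
    protected void Apply(object @event)
    {
        When(@event);
        Version++;
        changes.Add(@event);
    }

    // The uncommitted events a repository should append to the stream.
    public IReadOnlyCollection<object> GetUncommittedEvents() => changes;

    // Each aggregate mutates its own state in response to a single event.
    protected abstract void When(object @event);
}
```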
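
For step 13, a marker interface lets a hybrid repository decide per entity type where to load from. The names below are made up for illustration:

```csharp
using System;
using System.Threading.Tasks;

// Marks aggregates that have already been migrated to Event Sourcing.
public interface IEventSourced
{
}

public class HybridRepository
{
    public Task<T> LoadAsync<T>(Guid id) where T : class
    {
        // Converted aggregates come from the event store; everything else
        // still loads from the original tables.
        return typeof(IEventSourced).IsAssignableFrom(typeof(T))
            ? LoadFromEventStoreAsync<T>(id)
            : LoadFromTablesAsync<T>(id);
    }

    private Task<T> LoadFromEventStoreAsync<T>(Guid id) where T : class =>
        throw new NotImplementedException("event store access omitted");

    private Task<T> LoadFromTablesAsync<T>(Guid id) where T : class =>
        throw new NotImplementedException("table access omitted");
}
```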
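
For step 15, the one-time conversion event we now favor would snapshot the complete legacy record in a single event that starts every migrated stream, rather than faking a history of fine-grained property changes. Its exact shape depends on your entity; this is just an assumed example:

```csharp
using System;

// Snapshots the legacy row at the moment of migration; every migrated
// stream starts with exactly one of these.
public sealed record CustomerWasMigratedFromLegacyDatabase(
    Guid CustomerId,
    long OriginalVersion,
    string Name,
    string Email,
    DateTimeOffset MigratedAtUtc);
```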
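
Finally, the conversion loop from step 17, condensed into one routine: read each legacy row, start a new stream with a conversion event, project it into the recreated tables, and delete the row. The delegates stand in for whatever table reader, event store and projector you actually have:

```csharp
using System;
using System.Collections.Generic;

public sealed record LegacyOrderRow(Guid Id /* ...remaining columns... */);
public sealed record OrderWasMigrated(Guid OrderId);

public class OrderMigration
{
    private readonly Func<IEnumerable<LegacyOrderRow>> readLegacyRows;
    private readonly Action<Guid, IReadOnlyCollection<object>> appendEvents;
    private readonly Action<IReadOnlyCollection<object>> projectEvents;
    private readonly Action<Guid> deleteLegacyRow;

    public OrderMigration(
        Func<IEnumerable<LegacyOrderRow>> readLegacyRows,
        Action<Guid, IReadOnlyCollection<object>> appendEvents,
        Action<IReadOnlyCollection<object>> projectEvents,
        Action<Guid> deleteLegacyRow)
    {
        this.readLegacyRows = readLegacyRows;
        this.appendEvents = appendEvents;
        this.projectEvents = projectEvents;
        this.deleteLegacyRow = deleteLegacyRow;
    }

    public void Run()
    {
        foreach (LegacyOrderRow row in readLegacyRows())
        {
            // One conversion event captures the legacy record (step 15).
            var events = new object[] { new OrderWasMigrated(row.Id) };

            appendEvents(row.Id, events);  // write the new stream
            projectEvents(events);         // fill the recreated tables
            deleteLegacyRow(row.Id);       // shrink the temporary table
        }
    }
}
```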

Wrap-up

Well, there you have it: a recipe to convert an existing domain into Event Sourcing in small increments. But don't forget, it's just a recipe. Every code base is unique and will require additional ingredients to make it work. It's really a journey that will take some time, but will hopefully deliver on the many strengths of Event Sourcing.



Dennis Doomen specializes in designing enterprise solutions based on the .NET technologies, as well as providing coaching on all aspects of designing, building and maintaining enterprise systems. He is the author of fluentassertions.com, a very popular .NET assertion framework, and liquidprojections.net, a set of libraries for building Event Sourcing architectures, and he has been maintaining coding guidelines for C# on csharpcodingguidelines.com since 2001. He also keeps a blog on his everlasting quest for better solutions at continuousimprover.com. You can reach him on Twitter through @ddoomen.