Event Sourcing and CQRS

The Event Sourcing and CQRS patterns have been around for quite a long time, and they caught my attention while I was reading blogs. I have gone through some materials and prepared a demo project to reinforce my understanding. I won't bore you with lots of in-depth details, which can be found elsewhere; instead, I'll cover the fundamentals and show the key points to better apprehend the topic and its concepts.

Event Sourcing

The fundamental logic in Event Sourcing is to preserve the changed states of a system, which are raised as Domain Events. Ultimately, event sourcing is only about insertion: each event is appended on top of the previous one, and modifications are not allowed. In this scheme, each inserted event is purely atomic and immutable.
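To make the append-only idea concrete, here is a minimal in-memory sketch; the AccountEvent and EventStore names are illustrative and not part of the demo project:

```java
import java.time.Instant;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Illustrative domain event: immutable once created.
final class AccountEvent {
    final String accountId;
    final String type;       // e.g. "DEPOSIT" or "WITHDRAWAL"
    final double amount;
    final Instant occurredAt;

    AccountEvent(String accountId, String type, double amount) {
        this.accountId = accountId;
        this.type = type;
        this.amount = amount;
        this.occurredAt = Instant.now();
    }
}

// Minimal append-only store: events can be appended and read, never updated or deleted.
final class EventStore {
    private final List<AccountEvent> events = new ArrayList<>();

    void append(AccountEvent event) {
        events.add(event); // insertion only, on top of the previous events
    }

    List<AccountEvent> history() {
        return Collections.unmodifiableList(events); // no modification allowed
    }
}
```

The store exposes no update or delete operation at all, which is the whole point: the API itself enforces immutability.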

The idea is similar to a ledger in accounting: the accountant keeps inserting transactions but never modifies them.

Furthermore, corrections are still possible: you insert a new correction event on top of the previous events. This way, modifications are represented without rewriting history, and the data in the system stays consistent.

Additionally, a major benefit of the Event Sourcing pattern is the ability to replay the events, so that you can rebuild the history and inspect the transitions when you are looking for a specific point in the past. On top of that, you can build snapshots as checkpoints if you don't want to traverse the whole history, because the history can become quite large. Finally, I want to highlight a great example from Greg Young's presentation[2], which shows how the Event Sourcing pattern can transform an architecture and preserve the history of actions.
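Replay and snapshots can be sketched as follows, assuming a simple transaction event that carries only a type and an amount: replaying folds every event into the current balance, while a snapshot lets you resume from a checkpoint instead of traversing the whole history.

```java
import java.util.List;

// Illustrative, stripped-down event for this sketch.
final class Tx {
    final String type;   // "DEPOSIT" or "WITHDRAWAL"
    final double amount;
    Tx(String type, double amount) { this.type = type; this.amount = amount; }
}

// A checkpoint of state at a given position in the event stream.
final class Snapshot {
    final double balance;
    final int lastEventIndex; // events up to and including this index are folded in
    Snapshot(double balance, int lastEventIndex) {
        this.balance = balance;
        this.lastEventIndex = lastEventIndex;
    }
}

final class Replay {
    // Rebuild the current balance by replaying every event from the beginning.
    static double replay(List<Tx> events) {
        double balance = 0.0;
        for (Tx e : events) {
            balance += "DEPOSIT".equals(e.type) ? e.amount : -e.amount;
        }
        return balance;
    }

    // Resume from a snapshot instead of traversing the whole past.
    static double replayFrom(Snapshot snapshot, List<Tx> events) {
        double balance = snapshot.balance;
        for (int i = snapshot.lastEventIndex + 1; i < events.size(); i++) {
            Tx e = events.get(i);
            balance += "DEPOSIT".equals(e.type) ? e.amount : -e.amount;
        }
        return balance;
    }
}
```

Both paths must, of course, arrive at the same state for the same stream; the snapshot is purely an optimization.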

With such a transformation, you end up with a series of historical facts that caused the designated events to occur.

In addition to Greg's illustration above, I'd like to quote one of his statements here:

State transitions are an important part of our problem space and should be modeled within our domain. Facts are about the past.


CQRS

The abbreviation of this pattern stands for Command Query Responsibility Segregation. It suggests a strict separation of reads and writes, treating writes as commands and reads as queries. The pattern makes more sense when the ratio of one operation is much higher than the other.

To exemplify: if the ratio of reads is higher than the ratio of writes in your system, you may consider this pattern. In data-intensive applications, this pattern increases performance; the disadvantage is increased complexity.

Event Sourcing and CQRS patterns provide eventual consistency.

When business transactions span multiple microservices, ACID properties are not supported in such distributed systems. In remote communication, we accept the fact that changes performed on the other end of the system are eventually consistent. Patterns like Saga are outside the scope of Event Sourcing and CQRS themselves.

However, the Saga pattern is helpful when administering a business flow, and an Orchestrator Saga can provide a means of rollback by executing compensation operations.
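The compensation idea can be sketched like this, with hypothetical SagaStep and SagaOrchestrator types: the orchestrator remembers completed steps and, when a later step fails, compensates them in reverse order.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative saga step: a forward action paired with its compensation.
interface SagaStep {
    boolean execute();  // returns false on failure
    void compensate();  // undoes the forward action
}

// Minimal orchestrator: runs steps in order; on failure, rolls back completed steps.
final class SagaOrchestrator {
    boolean run(SagaStep... steps) {
        Deque<SagaStep> completed = new ArrayDeque<>();
        for (SagaStep step : steps) {
            if (step.execute()) {
                completed.push(step);
            } else {
                // Rollback scenario: compensate in reverse order of execution.
                while (!completed.isEmpty()) {
                    completed.pop().compensate();
                }
                return false;
            }
        }
        return true;
    }
}
```

A real orchestrator would also have to persist its progress so that compensations survive a crash, but that is beyond this sketch.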

Furthermore, the implementation of the CQRS pattern can initially start in the application layer. You can separate your interfaces in the code as follows:

public interface UserReadRepository {
      User findById(Long id);
      List<User> getAll();
}

public interface UserWriteRepository {
      void deleteById(Long id);
      void create(User user);
}

This is a simple and straightforward head start if you are decomposing a monolith or converting your brand-new data-intensive microservice into individual high-performing services. You'll evaluate the logical separation of services in the upcoming demo.
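To show how the segregated interfaces might be wired, here is a minimal in-memory sketch; the User class and the single backing map are assumptions for the example, since in a full CQRS split the reads and writes would eventually target separate models or services:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

final class User {
    final Long id;
    final String name;
    User(Long id, String name) { this.id = id; this.name = name; }
}

interface UserReadRepository {
    User findById(Long id);
    List<User> getAll();
}

interface UserWriteRepository {
    void deleteById(Long id);
    void create(User user);
}

// One in-memory store implementing both sides; callers still only ever see
// either the read interface or the write interface, never both.
final class InMemoryUserRepository implements UserReadRepository, UserWriteRepository {
    private final Map<Long, User> users = new ConcurrentHashMap<>();

    public User findById(Long id) { return users.get(id); }
    public List<User> getAll() { return new ArrayList<>(users.values()); }
    public void deleteById(Long id) { users.remove(id); }
    public void create(User user) { users.put(user.id, user); }
}
```

Splitting the interfaces first, while keeping a shared store, keeps the refactoring cheap until you are ready to separate the models physically.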

Demo Project

In the demo project, I'll be simulating a series of financial operations in the "ASLAN Bank", which exposes APIs that provide the ability to make transactions and retrieve account details.

Let’s look at the illustration here to evaluate what we will achieve in this demo

Let me introduce the participating parties here:

  • Customer DDoS service: For good measure, this service sends random withdrawal and deposit requests for the existing accounts to the Gateway. It bombards the backend with a high ratio of writes, which makes it a good way of evaluating the system under massive load,
  • ASLAN Bank Gateway Service: As the name suggests, this service is a gateway whose sole purpose is to forward requests:
    1. POST requests to the Account Service,
    2. GET requests to the Account History Service,
  • Account Service: The service takes up only the write requests, via POST, which carry either deposits or withdrawals, including the account information and the designated amount:
    1. The service has its own customer account table, which holds only the basic information of clients, but no history,
    2. On the other hand, this table provides consistency of the balance, which should not go below zero,
    3. Upon the completion of invariant checks, the account balance is updated in the database,
    4. Accordingly, the event is sourced to the Broker along with the current account balance, the designated amount, and the transaction type,
  • Account History Service: The Account History service is responsible for reads only in the system:
    1. Via REST, the service returns the client's account history of transaction types and amounts along with the current account balance,
    2. Asynchronously, the account information is updated whenever an event is retrieved from the sourcing system,
  • Kafka Message Broker: The choice of Kafka is well suited to the purpose of being the "single source of truth" in event sourcing terms. The participating services benefit from both exchanging messages asynchronously and keeping the history of account states in case of service failures.
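The read side described above can be sketched as a simple projection. The TransactionEvent fields and the AccountHistoryProjection class are illustrative assumptions; in the demo, a handler like this would be invoked whenever an event arrives from Kafka:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative event payload, as the Account Service might publish it.
final class TransactionEvent {
    final String accountId;
    final String type;         // "DEPOSIT" or "WITHDRAWAL"
    final double amount;
    final double balanceAfter; // current account balance after the transaction

    TransactionEvent(String accountId, String type, double amount, double balanceAfter) {
        this.accountId = accountId;
        this.type = type;
        this.amount = amount;
        this.balanceAfter = balanceAfter;
    }
}

// Read-side projection kept by the Account History Service: each consumed event
// appends to the history and overwrites the current balance.
final class AccountHistoryProjection {
    private final List<TransactionEvent> history = new ArrayList<>();
    private double currentBalance;

    void on(TransactionEvent event) {
        history.add(event);
        currentBalance = event.balanceAfter;
    }

    List<TransactionEvent> history() { return history; }
    double currentBalance() { return currentBalance; }
}
```

Because the projection is updated asynchronously, a GET may briefly return a stale balance; that is exactly the eventual consistency trade-off discussed earlier.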

Prerequisites and Source Code

The project stack is Java and Spring, and you should have Java 8+ and Maven 3+ installed on your system. On top of the stack sits Apache Kafka as the message broker, which also requires ZooKeeper. For the sake of simplicity, I'll be using dockerized Kafka and ZooKeeper; for that reason, you will also need Docker and Docker Compose installed on your system.

All the home-brewed projects are hosted on my GitHub. Just clone the repository and switch to the folder "CQRSSample", where you will find all the necessary services.

Environment Setup, Running Projects and Resources

Kafka and Zookeeper

I stumbled upon this very useful GitHub repository, whose author has done a great job of making a Kafka broker launchable with no special setup. The project seems to be well maintained and provides ways to run a single Kafka broker or multiple brokers. In addition, you can choose to run an older version as well.

All the steps are explained in the README, but I'll briefly summarize them for a single-node launch:

  1. Clone the repository somewhere convenient in your file system,
  2. Open the “docker-compose-single-broker.yml” configuration file with your text editor,
  3. Add your computer's IP address as the value of the "KAFKA_ADVERTISED_HOST_NAME" key,
  4. Open your terminal and switch to the cloned project folder,
  5. Issue the command to run everything:
sudo docker-compose -f docker-compose-single-broker.yml up

Running Projects

If you prefer the command line, open the projects "AslanBankGatewayService", "AccountService" and "AccountHistoryService" in your terminal and run the command below for each:

mvn spring-boot:run

Alternatively, if you want to evaluate the services under massive load, run the "CustomersDDoSService" after the above services start. As soon as the DDoS service starts up, it will send requests to the Gateway service.


Here you can find the resources to send requests for

getting the account history:

curl http://localhost:8080/accounts/3/history -H "Content-Type: application/json"

making a transaction:

curl -X POST http://localhost:8080/transactions/ -H "Content-Type: application/json" -d "{\"accountId\":\"1\",\"amount\":\"45.00\",\"type\":\"DEPOSIT\"}"


  1. Greg Young – A Decade of DDD, CQRS, Event Sourcing
  2. Greg Young – CQRS and Event Sourcing – Code on the Beach 2014
  3. An Introduction to CQRS and Event Sourcing Patterns – Mathew McLoughlin
  4. Building microservices with event sourcing and CQRS
  5. What do you mean by “Event-Driven”? by Martin Fowler
  6. CQRS by Martin Fowler
  7. Focusing on Events by Martin Fowler
  8. Event Collaboration by Martin Fowler
  9. Event Sourcing by Martin Fowler
  10. Eventually Consistent by Werner Vogels