Implementing Event Sourcing in Spring Boot: Aggregates, CQRS, and Projections

Event Sourcing is one of those architectural patterns that seems abstract until you've hit the exact problem it solves. Most backend applications assume a single ground truth: the database stores the current state. You have an orders table, and when an order is updated, you overwrite the row. Clean. Simple. And, in many real-world scenarios, not enough.

Event Sourcing flips this model. Instead of storing what something is, you store what happened to it. The current state becomes a derived value: something you compute from history, not something you persist directly. This naturally pairs with CQRS (Command Query Responsibility Segregation): commands produce events on the write side, queries read projections on the read side.

This article walks through the core concepts and builds a concrete Event Sourcing implementation in Java with Spring Boot, from domain events to projections.

Why CRUD isn't always enough

Imagine an e-commerce order. In a traditional CRUD model, the lifecycle might look like this:

INSERT order (status = PENDING)
UPDATE order SET status = CONFIRMED
UPDATE order SET status = SHIPPED
UPDATE order SET status = DELIVERED

At the end of this sequence, your database knows the order is DELIVERED. What it doesn't know, and can never tell you, is when it was confirmed, who triggered the shipment, or why it was briefly put on hold between those two states.

You've lost the history. And history, in business-critical applications, is not optional.

Event Sourcing solves this by treating every state change as a first-class fact:

OrderPlaced      { orderId, items, total, timestamp }
OrderConfirmed   { orderId, confirmedBy, timestamp }
OrderShipped     { orderId, trackingCode, timestamp }
OrderDelivered   { orderId, timestamp }

These events are immutable and append-only. Nothing is ever updated or deleted. The current state of an order is the result of replaying all its events in sequence.

Core concepts

Before writing a single line of code, it's worth being precise about the vocabulary.

Event Store: the append-only log where events are persisted. This is your source of truth, not a traditional entity table.

Aggregate: a domain object that produces and consumes events. It encapsulates business logic and ensures invariants are respected. In our example, Order is an aggregate.

Projection: a read model built by consuming events. It's optimized for queries, not for writes. You can have multiple projections of the same event stream: one for the order status page, one for the analytics dashboard, one for the warehouse system.

Eventual Consistency: projections are typically updated asynchronously. For a brief moment after an event is stored, a projection may lag behind it. This is a trade-off you accept in exchange for scalability and flexibility.

The project structure

We'll build a simplified order management system following the classic CQRS split: the write side handles commands and produces events, the read side builds projections from those events.

src/
├── domain/
│   ├── event/          # Immutable event classes
│   ├── aggregate/      # Order aggregate
│   └── command/        # Command classes
├── infrastructure/
│   ├── eventstore/     # Event persistence
│   └── projection/     # Read model builders
└── api/
    └── OrderController.java

Step 1: Define the events

Events are plain, immutable Java records. They carry everything that happened: no more, no less.

public sealed interface OrderEvent permits
    OrderPlaced, OrderConfirmed, OrderShipped, OrderDelivered {

    String orderId();
    Instant occurredOn();
}

public record OrderPlaced(
    String orderId,
    List<OrderItem> items,
    BigDecimal total,
    Instant occurredOn
) implements OrderEvent {}

public record OrderConfirmed(
    String orderId,
    String confirmedBy,
    Instant occurredOn
) implements OrderEvent {}

public record OrderShipped(
    String orderId,
    String trackingCode,
    Instant occurredOn
) implements OrderEvent {}

public record OrderDelivered(
    String orderId,
    Instant occurredOn
) implements OrderEvent {}

Using a sealed interface is deliberate: the compiler enforces exhaustive pattern matching when you process these events. No forgotten cases.
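
To see that guarantee in action, here's a minimal standalone sketch (two toy events rather than the article's full set, with hypothetical names): removing a case from the switch is a compile error, not a runtime surprise.

```java
import java.time.Instant;

public class ExhaustiveSwitchDemo {

    // A tiny sealed hierarchy: the compiler knows Placed and Confirmed
    // are the only possible implementations.
    sealed interface OrderEvent permits Placed, Confirmed {}
    record Placed(String orderId, Instant occurredOn) implements OrderEvent {}
    record Confirmed(String orderId, Instant occurredOn) implements OrderEvent {}

    // No default branch needed: the two cases are exhaustive. Adding a third
    // event type without handling it here fails compilation.
    static String describe(OrderEvent event) {
        return switch (event) {
            case Placed e    -> "placed " + e.orderId();
            case Confirmed e -> "confirmed " + e.orderId();
        };
    }

    public static void main(String[] args) {
        System.out.println(describe(new Placed("o-1", Instant.now()))); // prints "placed o-1"
    }
}
```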

Step 2: Build the aggregate

The Order aggregate has two responsibilities: validating commands and applying events to build state.

The important distinction here is between the command methods (place(), confirm(), ship()), which contain business logic and produce events, and apply(), which is a pure state transition with no logic.

public class Order {

    private String id;
    private OrderStatus status;
    private List<OrderItem> items;
    private BigDecimal total;

    private final List<OrderEvent> uncommittedEvents = new ArrayList<>();

    // ---- Command handlers ----

    public static Order place(String orderId, List<OrderItem> items, BigDecimal total) {
        var order = new Order();
        var event = new OrderPlaced(orderId, items, total, Instant.now());
        order.apply(event);
        order.uncommittedEvents.add(event);
        return order;
    }

    public void confirm(String confirmedBy) {
        if (this.status != OrderStatus.PENDING) {
            throw new IllegalStateException("Only PENDING orders can be confirmed");
        }
        var event = new OrderConfirmed(this.id, confirmedBy, Instant.now());
        apply(event);
        uncommittedEvents.add(event);
    }

    public void ship(String trackingCode) {
        if (this.status != OrderStatus.CONFIRMED) {
            throw new IllegalStateException("Only CONFIRMED orders can be shipped");
        }
        var event = new OrderShipped(this.id, trackingCode, Instant.now());
        apply(event);
        uncommittedEvents.add(event);
    }

    // ---- Event applicators (pure state transitions) ----

    private void apply(OrderEvent event) {
        switch (event) {
            case OrderPlaced e -> {
                this.id = e.orderId();
                this.items = e.items();
                this.total = e.total();
                this.status = OrderStatus.PENDING;
            }
            case OrderConfirmed e -> this.status = OrderStatus.CONFIRMED;
            case OrderShipped e   -> this.status = OrderStatus.SHIPPED;
            case OrderDelivered e -> this.status = OrderStatus.DELIVERED;
        }
    }

    // ---- Rehydration from event history ----

    public static Order reconstitute(List<OrderEvent> history) {
        var order = new Order();
        history.forEach(order::apply);
        return order;
    }

    public List<OrderEvent> getUncommittedEvents() {
        return Collections.unmodifiableList(uncommittedEvents);
    }

    public void clearUncommittedEvents() {
        uncommittedEvents.clear();
    }
}

Notice reconstitute(): to load an order, you don't run a SELECT on an entity table. You load its event history and replay it. The aggregate is always in a consistent state, almost by definition.
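
To make the replay mechanics concrete, here is a stripped-down, self-contained sketch (simplified, hypothetical names rather than the full aggregate above) that folds a history of events into a current state:

```java
import java.util.List;

public class ReplayDemo {

    enum Status { PENDING, CONFIRMED, SHIPPED, DELIVERED }

    // Simplified stand-ins for the article's events: each one implies a status.
    sealed interface Event permits Placed, Confirmed, Shipped, Delivered {}
    record Placed()    implements Event {}
    record Confirmed() implements Event {}
    record Shipped()   implements Event {}
    record Delivered() implements Event {}

    // Rehydration is a left fold over history: start from nothing,
    // apply each event in order, end up at the current state.
    static Status reconstitute(List<Event> history) {
        Status status = null;
        for (var event : history) {
            status = switch (event) {
                case Placed p    -> Status.PENDING;
                case Confirmed c -> Status.CONFIRMED;
                case Shipped s   -> Status.SHIPPED;
                case Delivered d -> Status.DELIVERED;
            };
        }
        return status;
    }

    public static void main(String[] args) {
        var history = List.of(new Placed(), new Confirmed(), new Shipped());
        System.out.println(reconstitute(history)); // prints SHIPPED
    }
}
```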

Step 3: Persist events

The Event Store is the heart of this architecture. Here's a simple implementation backed by PostgreSQL.

First, the schema:

CREATE TABLE order_events (
    id          BIGSERIAL PRIMARY KEY,
    order_id    VARCHAR(255) NOT NULL,
    event_type  VARCHAR(255) NOT NULL,
    payload     JSONB        NOT NULL,
    occurred_on TIMESTAMPTZ  NOT NULL
);

CREATE INDEX idx_order_events_order_id ON order_events(order_id);

Then, the repository:

@Repository
public class EventStoreRepository {

    private final JdbcTemplate jdbc;
    private final ObjectMapper objectMapper;

    public EventStoreRepository(JdbcTemplate jdbc, ObjectMapper objectMapper) {
        this.jdbc = jdbc;
        this.objectMapper = objectMapper;
    }

    public void append(List<OrderEvent> events) {
        for (var event : events) {
            try {
                var payload = objectMapper.writeValueAsString(event);
                jdbc.update(
                    "INSERT INTO order_events (order_id, event_type, payload, occurred_on) VALUES (?, ?, ?::jsonb, ?)",
                    event.orderId(),
                    event.getClass().getSimpleName(),
                    payload,
                    event.occurredOn()
                );
            } catch (JsonProcessingException e) {
                throw new RuntimeException("Failed to serialize event", e);
            }
        }
    }

    public List<OrderEvent> loadFor(String orderId) {
        return jdbc.query(
            "SELECT event_type, payload FROM order_events WHERE order_id = ? ORDER BY id ASC",
            (rs, rowNum) -> deserialize(rs.getString("event_type"), rs.getString("payload")),
            orderId
        );
    }

    private OrderEvent deserialize(String eventType, String payload) {
        try {
            return switch (eventType) {
                case "OrderPlaced"    -> objectMapper.readValue(payload, OrderPlaced.class);
                case "OrderConfirmed" -> objectMapper.readValue(payload, OrderConfirmed.class);
                case "OrderShipped"   -> objectMapper.readValue(payload, OrderShipped.class);
                case "OrderDelivered" -> objectMapper.readValue(payload, OrderDelivered.class);
                default -> throw new IllegalArgumentException("Unknown event type: " + eventType);
            };
        } catch (JsonProcessingException e) {
            throw new RuntimeException("Failed to deserialize event", e);
        }
    }
}
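
One practical note: the repository serializes Instant fields with Jackson. Spring Boot's auto-configured ObjectMapper registers the jackson-datatype-jsr310 module for you, but if you build a dedicated mapper for event payloads, register it explicitly or date/time serialization will fail. A sketch, assuming a custom mapper is wanted (the bean name is hypothetical):

```java
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.SerializationFeature;
import com.fasterxml.jackson.databind.json.JsonMapper;
import com.fasterxml.jackson.datatype.jsr310.JavaTimeModule;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class EventSerializationConfig {

    // JavaTimeModule is required to handle Instant; disabling
    // WRITE_DATES_AS_TIMESTAMPS stores ISO-8601 strings, which are much
    // easier to read when inspecting the event store by hand.
    @Bean
    public ObjectMapper eventObjectMapper() {
        return JsonMapper.builder()
            .addModule(new JavaTimeModule())
            .disable(SerializationFeature.WRITE_DATES_AS_TIMESTAMPS)
            .build();
    }
}
```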

Step 4: Build a projection

A projection is a denormalized read model, built by consuming events. Here's an OrderSummaryProjection that keeps track of the current status of every order.

@Entity
@Table(name = "order_summary")
public class OrderSummaryView {

    @Id
    private String orderId;
    @Enumerated(EnumType.STRING)
    private OrderStatus status;
    private BigDecimal total;
    private Instant lastUpdated;

    // getters, setters, constructors
}

@Service
@Transactional
public class OrderSummaryProjection {

    private final OrderSummaryViewRepository repository;

    public OrderSummaryProjection(OrderSummaryViewRepository repository) {
        this.repository = repository;
    }

    public void on(OrderPlaced event) {
        var view = new OrderSummaryView(
            event.orderId(),
            OrderStatus.PENDING,
            event.total(),
            event.occurredOn()
        );
        repository.save(view);
    }

    public void on(OrderConfirmed event) {
        repository.findById(event.orderId()).ifPresent(view -> {
            view.setStatus(OrderStatus.CONFIRMED);
            view.setLastUpdated(event.occurredOn());
        });
    }

    public void on(OrderShipped event) {
        repository.findById(event.orderId()).ifPresent(view -> {
            view.setStatus(OrderStatus.SHIPPED);
            view.setLastUpdated(event.occurredOn());
        });
    }
}

This matters: the OrderSummaryView table is completely disposable. If you add a new field or fix a bug in the projection logic, you drop the table and rebuild it by replaying all events from the beginning. This is called a projection rebuild, and it's one of the most powerful features of Event Sourcing.
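
A rebuild is conceptually trivial: wipe the read model, then replay every stored event through the same handlers. A self-contained, in-memory sketch of the idea (hypothetical names, not the JPA-backed projection above):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class RebuildDemo {

    // Minimal stand-ins: an event is (orderId, newStatus); the view is a map.
    record StatusChanged(String orderId, String status) {}

    static final List<StatusChanged> eventLog = new ArrayList<>();   // source of truth
    static final Map<String, String> statusView = new HashMap<>();   // disposable read model

    static void on(StatusChanged event) {
        statusView.put(event.orderId(), event.status());
    }

    // Rebuild = drop the view, replay the full log through the same handler.
    static void rebuildProjection() {
        statusView.clear();
        eventLog.forEach(RebuildDemo::on);
    }

    public static void main(String[] args) {
        eventLog.add(new StatusChanged("order-1", "PENDING"));
        eventLog.add(new StatusChanged("order-1", "CONFIRMED"));
        statusView.put("order-1", "CORRUPTED"); // simulate a projection bug
        rebuildProjection();
        System.out.println(statusView.get("order-1")); // prints CONFIRMED
    }
}
```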

Step 5: Wire it together in the service layer

@Service
@Transactional
public class OrderService {

    private final EventStoreRepository eventStore;
    private final OrderSummaryProjection summaryProjection;

    public OrderService(EventStoreRepository eventStore, OrderSummaryProjection summaryProjection) {
        this.eventStore = eventStore;
        this.summaryProjection = summaryProjection;
    }

    public String placeOrder(List<OrderItem> items, BigDecimal total) {
        var orderId = UUID.randomUUID().toString();
        var order = Order.place(orderId, items, total);

        dispatchEvents(order);
        return orderId;
    }

    public void confirmOrder(String orderId, String confirmedBy) {
        var history = eventStore.loadFor(orderId);
        var order = Order.reconstitute(history);
        order.confirm(confirmedBy);
        dispatchEvents(order);
    }

    public void shipOrder(String orderId, String trackingCode) {
        var history = eventStore.loadFor(orderId);
        var order = Order.reconstitute(history);
        order.ship(trackingCode);
        dispatchEvents(order);
    }

    private void dispatchEvents(Order order) {
        var events = order.getUncommittedEvents();
        eventStore.append(events);
        events.forEach(this::routeToProjections);
        order.clearUncommittedEvents();
    }

    private void routeToProjections(OrderEvent event) {
        switch (event) {
            case OrderPlaced e    -> summaryProjection.on(e);
            case OrderConfirmed e -> summaryProjection.on(e);
            case OrderShipped e   -> summaryProjection.on(e);
            case OrderDelivered e -> { /* no projection for this event yet */ }
        }
    }
}

Event Sourcing trade-offs: what you gain and what you give up

Event Sourcing is not a silver bullet. Here's what this architecture actually buys you, and what it costs.

You gain:

  • Complete audit log: every state change is recorded with who did what and when. This comes for free, as a natural consequence of the model.
  • Temporal queries: you can reconstruct the state of any aggregate at any point in time. Just replay events up to a given timestamp.
  • Projection flexibility: you can create new read models at any time, by replaying existing events. Adding a new reporting view doesn't require a data migration.
  • Debugging clarity: when something goes wrong, you have the exact sequence of events that led to the current state. No guessing.
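
The temporal-query point is worth a concrete sketch: given timestamped events, the state "as of" a moment is just a replay with a cutoff. A simplified, standalone illustration (hypothetical names):

```java
import java.time.Instant;
import java.util.List;

public class TemporalQueryDemo {

    record StatusEvent(String status, Instant occurredOn) {}

    // Replay only the events that had occurred by the given point in time.
    static String statusAsOf(List<StatusEvent> history, Instant asOf) {
        String status = null;
        for (var event : history) {
            if (!event.occurredOn().isAfter(asOf)) {
                status = event.status();
            }
        }
        return status;
    }

    public static void main(String[] args) {
        var history = List.of(
            new StatusEvent("PENDING",   Instant.parse("2024-01-01T10:00:00Z")),
            new StatusEvent("CONFIRMED", Instant.parse("2024-01-02T10:00:00Z")),
            new StatusEvent("SHIPPED",   Instant.parse("2024-01-03T10:00:00Z"))
        );
        // What did this order look like midday on January 2nd?
        System.out.println(statusAsOf(history, Instant.parse("2024-01-02T12:00:00Z"))); // prints CONFIRMED
    }
}
```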

You give up:

  • Simplicity: the cognitive overhead is real. This architecture requires discipline across the whole team.
  • Immediate consistency: projections are, in general, eventually consistent. (The minimal implementation in this article updates them synchronously in the same transaction, which keeps reads consistent but couples the write path to every read model.) If your use case requires synchronous, strongly consistent reads immediately after a write, you'll need to design around this carefully.
  • Query simplicity: you can't run arbitrary queries on the event store. You need to design your projections upfront for the queries you know you'll need.

Event Sourcing is not the right default. It shines when auditability is a hard requirement, when your domain is complex enough to benefit from the separation of write and read models, or when the history of state changes carries business value in itself.

If you're building a simple CRUD API for internal tooling, this is overkill. If you're building a financial transaction system, an inventory management platform, or anything where "what happened" matters as much as "what is", this architecture pays for itself.

Going further

This implementation is intentionally minimal. In a production setup, you'd want to consider:

  • Snapshots: for aggregates with thousands of events, replaying the full history on every load becomes slow. Snapshots capture the state at a given point so you only replay events after that snapshot.
  • Outbox pattern: to reliably publish events to external consumers (Kafka, RabbitMQ) without risking inconsistency between your event store and the message broker.
  • Optimistic concurrency: add a version column to order_events and check it on append to prevent concurrent writes from creating conflicting event sequences.
  • Axon Framework: if you want a mature, opinionated Event Sourcing framework for the JVM, Axon handles aggregates, event stores, projections, and sagas out of the box.
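
Of these, optimistic concurrency is the easiest to sketch. The idea: each append states the stream version it expects; a mismatch means a concurrent writer got there first, and the command should be retried against fresh state. An in-memory illustration with hypothetical names (in PostgreSQL the check would be a WHERE-guarded insert or a unique constraint on (order_id, version)):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class VersionedAppendDemo {

    static class EventStore {
        private final Map<String, List<String>> streams = new HashMap<>();

        // The caller declares the version it last saw; if the stream has
        // moved on, the append is rejected instead of silently interleaving.
        synchronized void append(String streamId, int expectedVersion, String event) {
            var stream = streams.computeIfAbsent(streamId, k -> new ArrayList<>());
            if (stream.size() != expectedVersion) {
                throw new IllegalStateException(
                    "Expected version " + expectedVersion + " but was " + stream.size());
            }
            stream.add(event);
        }

        synchronized int version(String streamId) {
            return streams.getOrDefault(streamId, List.of()).size();
        }
    }

    public static void main(String[] args) {
        var store = new EventStore();
        store.append("order-1", 0, "OrderPlaced");
        store.append("order-1", 1, "OrderConfirmed");
        try {
            store.append("order-1", 1, "OrderShipped"); // stale expected version
        } catch (IllegalStateException e) {
            System.out.println("conflict detected"); // prints "conflict detected"
        }
    }
}
```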

The code in this article is the foundation. The interesting decisions start when you hit production.

Additional Resources

  • Event Sourcing — Martin Fowler's foundational write-up on the pattern
  • CQRS — Martin Fowler on Command Query Responsibility Segregation, the natural companion to Event Sourcing
  • Transactional Outbox Pattern — microservices.io reference for reliable event publishing
  • Axon Framework — mature JVM framework for Event Sourcing and CQRS
  • Spring Framework JDBC (JdbcTemplate) — official docs for the data access approach used in the event store

Happy Coding!