While a relatively simple change, this split solves many of the problems that existed in the stereotypical architecture. The service has been split into two separate services: a write side and a read side, or the Command side and the Query side.
This separation enforces the notion that the Command side and the Query side have very different needs; the architectural properties associated with the use cases on each side tend to be quite different. Just to name a few (a brief code sketch of the split follows this list):
Consistency
Command: It is far easier to process transactions with consistent data than to handle all of the edge cases that eventual consistency can bring into play.
Query: Most systems can be eventually consistent on the Query side.
Data Storage
Command: The Command side, being a transaction processor working against a relational structure, would want to store data in a normalized form, probably near 3rd Normal Form (3NF).
Query: The Query side would want data in a denormalized form to minimize the number of joins needed to retrieve a given set of data; in a relational structure this would likely be 1st Normal Form (1NF).
Scalability
Command: In most systems, especially web systems, the Command side generally processes a very small number of transactions as a percentage of the whole. Scalability therefore is not always important.
Query: In most systems, especially web systems, the Query side generally processes a very large number of transactions as a percentage of the whole (often two or more orders of magnitude). Scalability is most often needed for the Query side.
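To make the split concrete, here is a minimal sketch of the two sides as separate services. The PlaceOrder command, the in-memory stores, and the summary query are illustrative assumptions rather than a prescription; the point is only that each side can be shaped around its own needs.

    from dataclasses import dataclass

    @dataclass
    class PlaceOrder:
        """A Command: an intent to change state, handled by the write side."""
        order_id: str
        symbol: str
        volume: int

    class CommandService:
        """Write side: processes a small number of transactions against consistent data."""
        def __init__(self):
            self._orders = {}   # stands in for normalized, transactional storage

        def handle(self, cmd: PlaceOrder) -> None:
            if cmd.volume <= 0:
                raise ValueError("volume must be positive")
            self._orders[cmd.order_id] = {"symbol": cmd.symbol, "volume": cmd.volume}

    class QueryService:
        """Read side: serves denormalized views and may be eventually consistent."""
        def __init__(self, read_model):
            self._read_model = read_model   # stands in for a denormalized view store

        def order_summary(self, order_id: str) -> dict:
            return self._read_model.get(order_id, {})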
Partitioning
A very common performance optimization in today's systems is the use of Horizontal Partitioning, lately also renamed "Sharding". With Horizontal Partitioning the same schema exists in many places, and some key within the data determines in which of those places a given row will live.
One problem when attempting to use Horizontal Partitioning with a Relational Database is that it is necessary to define the key on which the partitioning should operate. This problem goes away when using events: the Aggregate ID associated with each event is the only partition point in the system, no matter how many aggregates exist or how their structures may change.
Horizontally Partitioning an Event Store is a very simple process.
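As a sketch of how simple it can be, assuming a fixed number of partitions and a hypothetical partition_for helper, routing needs nothing more than a hash of the Aggregate ID:

    import hashlib
    import uuid

    PARTITION_COUNT = 16   # assumed, fixed number of partitions

    def partition_for(aggregate_id: uuid.UUID) -> int:
        """Route every event for the same aggregate to the same partition."""
        digest = hashlib.sha1(aggregate_id.bytes).digest()
        return int.from_bytes(digest[:4], "big") % PARTITION_COUNT

    # All events of a given aggregate land in one partition, regardless of how
    # many aggregate types exist or how their structures change over time.
    order_id = uuid.uuid4()
    shard = partition_for(order_id)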
Saving Objects
When dealing with a stereotypical system utilizing relational data storage it can be quite complex to figure out what has changed within the Aggregate. Again, many tools have been built to help alleviate the pain of this often complex task, but is the need for a tool a sign of a bigger problem?
Most ORMs can figure out the changes that have occurred within a graph. They generally do this by maintaining two copies of a given graph: the first they hold in memory, the second they allow other code to interact with. When it becomes time to save, a complex bit of code is run that walks the graph the code has interacted with and uses the original copy to determine what changed while the graph was in use. Those changes are then saved back to the data storage system.
In a system that is Domain Event centric, the aggregates themselves track events describing what has changed within them. There is no complex process of comparing against another copy of a graph; instead, one simply asks the aggregate for its changes. The operation of asking for changes is far more efficient than having to work out what has changed.
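A minimal sketch of the contrast, using an illustrative Order aggregate and a VolumeAdjusted event (both names are assumptions): the aggregate records each event it raises and simply hands the list back when asked, rather than being diffed against a second copy of a graph.

    from dataclasses import dataclass

    @dataclass
    class VolumeAdjusted:
        """A Domain Event; the name is illustrative."""
        order_id: str
        new_volume: int

    class Order:
        """Aggregate that records its own changes as events; no graph comparison needed."""
        def __init__(self, order_id: str, volume: int):
            self._id = order_id
            self._volume = volume
            self._uncommitted = []

        def adjust_volume(self, new_volume: int) -> None:
            event = VolumeAdjusted(self._id, new_volume)
            self._apply(event)                # state changes only through events
            self._uncommitted.append(event)   # remember the change

        def _apply(self, event: VolumeAdjusted) -> None:
            self._volume = event.new_volume

        def get_uncommitted_changes(self) -> list:
            """What a repository asks for at save time, instead of diffing object graphs."""
            return list(self._uncommitted)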
Loading Objects
A similar issue exists when loading objects. Consider the work involved in loading a graph of objects in a stereotypical relational-database-backed system. Very often many queries must be issued to build the aggregate. To help minimize the latency cost of these queries, many ORMs have introduced the heuristic of Lazy Loading, also known as Delayed Loading, where a proxy is given in lieu of the real object. The data is only loaded when some code attempts to use that particular object.
Lazy Loading is useful because quite often a given behavior will only use a certain portion of the data within the aggregate; it spares the developer from having to explicitly represent which data that is, while amortizing the cost of loading the aggregate. It is this need to amortize the cost that points to a problem.
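As a rough illustration of the heuristic (a sketch only, not how any particular ORM implements it), a proxy is handed out in place of the real object, and the deferred query only fires on first access:

    class LazyReference:
        """Proxy handed out in lieu of the real object; the query fires on first use."""
        def __init__(self, object_id, loader):
            self._id = object_id
            self._loader = loader    # callable that performs the real load
            self._real = None

        def __getattr__(self, name):
            if self._real is None:
                self._real = self._loader(self._id)   # the deferred query runs here
            return getattr(self._real, name)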
Aggregates are considered as a whole represented by the Aggregate Root. Conceptually an Aggregate is loaded and saved in its entirety. (Evans, 2001).
Conceptually it is much easier to deal with an Aggregate being loaded and saved in its entirety. Lazy Loading is not a trivial concept to add, and it is especially non-trivial when optimizing use cases. The heuristic is only needed because loading full aggregates from a relational database is operationally too slow.
When dealing with events as the storage mechanism things are quite different. There is but one thing being stored: events. Simply load all of the events for an Aggregate and replay them. There only ever needs to be a single query on the system, so there is no need to implement things like Lazy Loading. This is bad for people who want to build complex and quite often impressive frameworks for managing Lazy Loading, but good for development teams who no longer need to learn those frameworks.
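Building on the Order sketch above, and assuming a toy in-memory event store keyed by Aggregate ID, a load might look like the following; a real Event Store would be a table or stream per aggregate, but the shape of the operation is the same.

    class InMemoryEventStore:
        """Toy event store: one ordered stream of events per Aggregate ID."""
        def __init__(self):
            self._streams = {}

        def append(self, aggregate_id, events):
            self._streams.setdefault(aggregate_id, []).extend(events)

        def load(self, aggregate_id):
            return list(self._streams.get(aggregate_id, []))

    def load_order(store, order_id):
        """One query against the store, then replay; no Lazy Loading required."""
        order = Order(order_id, volume=0)
        for event in store.load(order_id):   # the single query
            order._apply(event)              # replaying mutates state without recording new changes
        return order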
Many would quickly point out that although the relational system requires more queries, when storing events there may be a huge number of events for some aggregates. This happens quite often, and a relatively simple solution exists for the problem.
Rolling Snapshots
A Rolling Snapshot is a denormalization of the current state of an aggregate at a given point in time; it represents the state after all events up to that point have been replayed. Rolling Snapshots are used as a heuristic to prevent the need to load the entire event history of an aggregate. Figure 5 shows a typical Event Stream. One way of processing the event stream is to replay the events from the beginning of time until the end of the stream is reached.
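A sketch of the heuristic, again building on the earlier Order and event store sketches and assuming a snapshot is kept alongside the version it was taken at: load the latest snapshot, then replay only the events recorded after it.

    from dataclasses import dataclass

    @dataclass
    class Snapshot:
        """Denormalized state of an aggregate at a known position in its stream."""
        aggregate_id: str
        version: int    # number of events already folded into this state
        volume: int     # the state itself (illustrative field)

    def load_order_with_snapshot(store, snapshots, order_id):
        """Start from the latest snapshot, then replay only the events recorded after it."""
        order = Order(order_id, volume=0)
        version = 0
        snap = snapshots.get(order_id)
        if snap is not None:
            order._volume = snap.volume
            version = snap.version
        for event in store.load(order_id)[version:]:   # only the events since the snapshot
            order._apply(event)
        return order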
There is no impedance mismatch between events and the domain model. The events are themselves a domain concept; the idea of replaying events to reach a given state is also a domain concept. The entire system becomes defined in domain terms. Defining everything in domain terms not only lowers the amount of knowledge that developers need to have, it also limits the number of representations of the model needed, since the events are tied directly to the domain model itself.
Business Value of the Event Log
It needs to be made clear at the very start of this section that the value of the Event Log is directly correlated with places that you would want to use Domain Driven Design in the first place. Domain Driven Design should be used in places where the business derives competitive advantage. Domain Driven Design itself is very difficult and expensive to apply; a company will however receive high ROI on the effort if the domain is complex and if they derive competitive advantage from it. Using an Event Log similarly will have high ROI when dealing with an area of competitive advantage but may have negative ROI in other places.
Storing only current state allows only certain kinds of questions to be asked of the data. For example, consider orders in the stock market. They can change for a few reasons: a trader can change the volume they would like to buy or sell, the trading system can automatically adjust the volume of an order, or a trade can occur, lowering the volume available on the current order.
If posed with a question regarding current liquidity, such as the price for a given number of shares in the market, it really does not matter which of these changes occurred; it does not matter how the data got the way it is, only what it is at a given point in time. The vast majority of queries, even in the business world, are focused on the what: labels for mailing customers, how much was sold in April, how many widgets are in the warehouse.
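As a sketch of the distinction, the three reasons an order's volume can change could be captured as three distinct event types (the names are illustrative); a current-state table would keep only the resulting volume, and the "how" would be lost.

    from dataclasses import dataclass

    # The three reasons an order's available volume can change, kept as distinct events.
    # A current-state table would store only the resulting volume and lose the reason.

    @dataclass
    class TraderChangedVolume:
        order_id: str
        new_volume: int

    @dataclass
    class SystemAdjustedVolume:
        order_id: str
        new_volume: int

    @dataclass
    class TradeOccurred:
        order_id: str
        volume_filled: int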