CQRS
The Example
Mark Nijhof
* * * * *
This is a Leanpub book. Leanpub empowers authors and publishers with the Lean Publishing process.
Lean Publishing is the act of publishing an in-progress ebook using lightweight tools and many
iterations to get reader feedback, pivot until you have the right book and build traction once you do.
* * * * *
The cover
They say you judge a book by its cover, and therefore I am really happy to say that the CQRS book
got a new cover.
The cover was generously provided by Sebastian from Leanpubcovers. He also created the covers
for my other writing project Lean Startup for Developers. I highly recommend talking to him if you
are also writing your own Leanpub book.
Lean Startup for Developers
I have been building software professionally since 2000. One thing that is strikingly clear to me is
the amount of waste that is being created in our industry. We build and build without knowing whether
what we build is actually valuable. Will they use it? Will they pay for it? We just don’t know!
So, there is the Lean Startup to the rescue. We now know that we should build a Minimum Viable
Product, or MVP, instead of going all the way. But now I hear things like: “Our MVP will be ready in
a few months”. Is it really necessary to wait several months before you can validate your ideas? I
don’t think it is!
We have become so accustomed to long-running projects that we think delivering something in a month
is fast. So, this book is about creating a different mindset, and about using the right tools. It is about
accepting that building an MVP is not the same as building a product. This book is based on my own
experiences starting Inqob and Random Manager, two startups still in their early phases.
Who is it for?
The bundle is mostly written for developers; it contains both the non-technical and the technical book.
If you can relate to any of these points then the book might be right for you:
You are thinking of building your own product, but you don’t know where to start?
You have already started your own adventure, and you would like to hear someone else’s
experiences running their own startup.
You have no plans of starting something of your own, but you like the Lean Startup ideas and
would like to apply them in your day-to-day work.
You are non-technical, but would like to learn about what goes on inside the head of a technical
person.
Go to: http://ls4d.com
To Work Remote
Have you ever wondered what it would be like to work full-time remote?
I have just accepted a new job, which is 100% remote. I will be documenting my findings while I
start walking the path of freedom, the happy parts but also the sad parts. This is not the story of an
experienced remote worker. It won’t be about best practices and how-tos; I have no knowledge about
this yet. If you are looking for that, then I suggest waiting for 37signals’ Remote book.
But if you want to look through a little window into the life of someone who is about to start working
remotely on a full-time basis, maybe just to see if this could be something for you as well, then wait
no more and start experiencing it for yourself. I will be your proxy.
Go to: http://toworkremote.com
CQRS à la Greg Young
I have had the pleasure of spending a two-day course and many geek beers with Greg Young talking
about Domain-Driven Design, specifically focused on the Command and Query Responsibility
Segregation (CQRS) pattern. Greg has taken Domain-Driven Design as Eric Evans describes
it in his book and has adapted mostly the technical implementation of it. Command Query Separation
(CQS) was originally conceived by Bertrand Meyer and is applied at the object level.
Bertrand defines CQS as: every method should either be a command that performs an action, or
a query that returns data to the caller, but not both. In other words, asking a question should not
change the answer.
Greg, however, takes this same principle and applies it to the whole architecture of a system, clearly
separating the write side (Commands) from the read side (Queries) of the system. The write side is
what we already know as the domain, containing all the behavior, which is what makes the system
valuable. The read side is specialized towards the specific reporting needs; think for example of
the application screens that enable the users to execute domain behavior, but any traditional
reporting needs are also served by the read database.
So let’s take a quick look at the overall architecture before we dive into the details:
You have to excuse me for not using Visio to make this drawing, but I really didn’t feel like tackling
yet another layer of complexity tonight. At least I did the labels in typed text, so it is readable. I will
discuss this architecture in 4 phases; the order I chose is what I believe is the natural way you
would think about this. In the end you will see that the whole principle is rather simple. Really.
1. Queries
2. Commands
3. Internal Events
4. External Events (Publishing)
In order to understand this type of architecture better I decided to build an example application; I will
use this example in this chapter to demonstrate the different aspects of the architecture to you. The
example has already been released into the wild (ok, I mean the DDD group on Yahoo) and can be
found here: http://github.com/MarkNijhof/Fohjin
Queries (reporting)
The first part I would like to discuss is the reporting needs of a system. Greg defines any need for
data from the system as a reporting need; this includes the various application screens on which the
user bases his decisions. This seemed like a strange statement at first, but the more I think about
it the more sense it makes. This data is purely used to inform the user (or other systems) about the
current state of the system in the specific context of the user, so that they can make certain decisions
and execute domain behavior.
These reports will never be updated directly by their consumers; the data represents the
state of the domain, so the domain is responsible for updating it. So all we really do on this side is
report the current state of the system to whoever or whatever needs it.
When an application is requesting data for a specific screen, this should be done in one single
call to the Query layer, and in return it will get one single DTO containing all the needed data.
Because of this specific use of the data it makes sense to order and group it in a way that is
determined by the needs of the system. So we de-normalize the data, trying to make a single table
reflect a single screen or usage in the client application. The reason for this is that data is normally
queried many times more often than domain behavior is executed, so by optimizing this you will
enhance the perceived performance of the system.
Here you may choose to use an ORM like NHibernate to facilitate the reading from the database, but
considering that you would only be using a very small percentage of the capabilities of a proper ORM,
you may not need to go that way at all. Going for Linq2Sql, or even, as Greg suggested, using
reflection to assemble the SQL statements directly from the DTOs (using reflection and Convention
over Configuration makes this rather simple), is perhaps a better solution for the problem. This will
be up to you and probably depends on the specific scenario and what you feel comfortable with.
In the example that I created to demonstrate this type of architecture I chose to use reflection over
the DTOs, because I wanted to put the emphasis on the CQRS implementation and not on the ORM
implementation.
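As a rough illustration of the reflection approach, a SELECT statement could be assembled from a DTO by convention, assuming the table is named after the DTO class and the columns after its properties (the builder below is hypothetical, not taken from the example code):

```csharp
using System;
using System.Linq;

// Hypothetical convention: table name == DTO class name,
// column names == DTO property names.
public static class SqlSelectBuilder
{
    public static string For<TDto>()
    {
        var type = typeof(TDto);
        var columns = string.Join(", ",
            type.GetProperties().Select(p => p.Name).ToArray());
        return string.Format("SELECT {0} FROM {1};", columns, type.Name);
    }
}
```

A read call would then execute the generated statement and hydrate the DTO, again via reflection.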
The more traditional reporting needs will also get their own database schema, and the data there will
be optimized for that need as well. So in the end we will end up with quite a bit of duplication of the
data, but that is all right. The process that is responsible for updating the data in the different
databases will make sure that this happens in the correct way; we will go over this in a later part of
this chapter.
Commands
Updating the data directly, CRUD style, would result in losing some very valuable information: the
Why did this change happen? You completely lose the intent the user had when he changed the data,
and this is one of the things Greg’s implementation of CQRS solves.
CQRS, as the name indicates, uses Commands; these commands are created in the client application
and then sent to the Domain layer. Let’s take an example: a customer of a bank comes into the office
and tells the person behind the desk that he needs to change his address. Instead of just pulling up the
customer information and making the changes directly in the address fields, the bank employee first
asks the question: why would you want to change your address? The most likely response would be
because the customer has moved, but it could also be because there is an error in his address and
all his mail ends up with his downstairs neighbor. Now these are two completely different reasons to
update a customer address. Why could this be important? Well, granted, the example is rather silly, but
let’s assume the bank wants to know how many customers go to a competitor after moving. How loyal
are our customers; should we keep sending them specific information after they have moved x miles
away? Right, this information is completely lost in our original way of working, but when using
commands and events (more on events later) we maintain the original intent of the action. So after
asking the question, the customer answers that he has indeed moved, and the bank employee selects
“Customer has moved” in the application and gets the ability to change only the address. When
clicking save, a CustomerMovedCommand will be created with only the changed address and sent
to the domain.
We also get one other great benefit from using these commands, and that is that they are easy to
communicate with our client while building or working on the system, because our clients would
most likely use these types of behavior when explaining what they want to accomplish. Although
Greg thinks times have changed, “Our grand failure”, it should really be the case that our clients
talk their own domain language. When using these commands we can start talking the same language
even in the code.
That is what Domain-Driven Design is all about: instead of doing something technical like update
client, the code actually describes the process that the user goes through, like: the client has moved.
namespace Fohjin.DDD.Commands
{
    [Serializable]
    public class ClientIsMovingCommand : Command
    {
        public string Street { get; private set; }
        public string StreetNumber { get; private set; }
        public string PostalCode { get; private set; }
        public string City { get; private set; }

        public ClientIsMovingCommand(
            Guid id,
            string street,
            string streetNumber,
            string postalCode,
            string city) : base(id)
        {
            Street = street;
            StreetNumber = streetNumber;
            PostalCode = postalCode;
            City = city;
        }
    }
}
All these commands will be sent to the Command Bus, which will delegate each command to its
command handler or handlers. This also effectively means that there is only one entry point into
the domain, and that is via the Bus. The responsibility of these command handlers is to execute the
appropriate behavior on the domain. Close to all of these command handlers will have the repository
injected to provide the ability to load the needed Aggregate Root, on which the appropriate behavior
will then be executed. Usually only one Aggregate Root will be needed in a single command
handler. Later we will also take a closer look at the repository, as it is different from your ordinary
DDD repository.
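A minimal sketch of such a bus might look as follows (this is an illustration of the idea, not the example project’s actual bus):

```csharp
using System;
using System.Collections.Generic;

public interface ICommandHandler<TCommand>
{
    void Execute(TCommand command);
}

// The single entry point into the domain: each command type is routed
// to the handler(s) registered for it.
public class CommandBus
{
    private readonly Dictionary<Type, List<Action<object>>> _routes =
        new Dictionary<Type, List<Action<object>>>();

    public void RegisterHandler<TCommand>(ICommandHandler<TCommand> handler)
    {
        if (!_routes.ContainsKey(typeof(TCommand)))
            _routes[typeof(TCommand)] = new List<Action<object>>();
        // Wrap the typed handler so all routes fit in one dictionary
        _routes[typeof(TCommand)].Add(c => handler.Execute((TCommand)c));
    }

    public void Publish<TCommand>(TCommand command)
    {
        foreach (var route in _routes[typeof(TCommand)])
            route(command);
    }
}
```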
Command Handler
namespace Fohjin.DDD.CommandHandlers
{
    public class ClientIsMovingCommandHandler
        : ICommandHandler<ClientIsMovingCommand>
    {
        private readonly IDomainRepository _repository;

        public ClientIsMovingCommandHandler(
            IDomainRepository repository)
        {
            _repository = repository;
        }

        public void Execute(ClientIsMovingCommand command)
        {
            var client = _repository.GetById<Client>(command.Id);

            client.ClientMoved(new Address(
                command.Street,
                command.StreetNumber,
                command.PostalCode,
                command.City));
        }
    }
}
As you can see, a command handler has only one responsibility, and that is to handle the one particular
command by executing the appropriate domain behavior. The command handler should not be doing
any domain logic itself. If there is a need for this, then that logic should be moved into a service of its
own. An example of this in my example code is an incoming money transfer; more about that later.
Now we are going to separate the domain behavior from the state changes resulting from this domain
behavior, including the triggering of external behavior. This would not be much different from how
you would do this now: first verify the normal guards, do what you have to do, but don’t set any
internal state, and don’t trigger any external behavior (ok, the last part is more an optional thing to
consider; state is the key here). Instead of writing these state changes directly to the internal variables,
you create an event and fire it internally. This event, as well as the method name of the behavior,
should be descriptive in the Ubiquitous Language of the domain. Then the event will be handled
inside the domain Aggregate Root, which will set the internal state to the correct values. Remember
that the event handler should not be doing any logic other than setting the state; the logic should be in
the domain method.
Domain Behavior
public void ClientMoved(Address newAddress)
{
    IsClientCreated();

    Apply(new ClientMovedEvent(
        newAddress.Street,
        newAddress.StreetNumber,
        newAddress.PostalCode,
        newAddress.City));
}

private void IsClientCreated()
{
    if (Id == new Guid())
        throw new NonExistingClientException(
            "The Client is not created, no operations can be executed on it");
}
Domain Event
namespace Fohjin.DDD.Events.Client
{
    [Serializable]
    public class ClientMovedEvent : DomainEvent
    {
        public string Street { get; private set; }
        public string StreetNumber { get; private set; }
        public string PostalCode { get; private set; }
        public string City { get; private set; }

        public ClientMovedEvent(
            string street,
            string streetNumber,
            string postalCode,
            string city)
        {
            Street = street;
            StreetNumber = streetNumber;
            PostalCode = postalCode;
            City = city;
        }
    }
}
The reason why we want these events is that they now become part of our persistence strategy,
meaning that the only information we will persist of an Aggregate Root is the generated events.
If every state change is triggered by an event, and an internal event handler has no other logic
than setting the correct state (and that means not even deriving other information from the data in
the event), then we can load all historical events and have the Aggregate Root replay them all
internally, bringing back the original state of the Aggregate Root in exactly the same way it
got there in the first place. It really is the same as replaying a tape.
One thing to note is that these events are write-only; you will never add, alter or remove an event. So
if you suddenly end up with a bug in your system which is generating wrong events, then the only way
for you to correct this is to generate a new compensating event correcting the results of the bug. Of
course you want to fix the bug as well. This way you have also tracked when the bug was fixed and
when the effects of the bug were corrected.
By having this architecture we have now basically solved the problem of losing the original intent,
because we keep all events that have ever happened, and these events are intent revealing. Another
very interesting thing is that you now have an audit log for free; because nothing will ever change
state without an event, and the events are stored and used in building up the Aggregate Roots, they
are guaranteed to be in sync with each other.
The reporting repositories on the other hand would probably look much more like the traditional
repositories from DDD.
So what happens when you have 100,000 events that need to be replayed every time you load the
Aggregate Root? That will slow down your system immensely. To counter this effect you can use
the Memento pattern to take a snapshot of the internal state of the Aggregate Root every x number of
events. Then the repository will first request the snapshot, load that into the Aggregate Root, and then
request all the events that have occurred after the snapshot, bringing it back to the original state. This
is only an optimization technique; you would not delete events that happened before the snapshot, as
that would pretty much defeat the purpose of this architecture.
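Sketched in code, loading through a snapshot could look like this (the snapshot store and event store calls, and the Version field on the memento, are assumptions for the sketch, not the example’s exact API):

```csharp
// Load the latest snapshot, if any, then replay only the events that
// were recorded after it.
public TAggregate GetById<TAggregate>(Guid id)
    where TAggregate : IOrginator, IEventProvider, new()
{
    var aggregate = new TAggregate();
    var fromVersion = 0;

    var snapshot = _snapshotStore.GetLatestSnapshot(id);
    if (snapshot != null)
    {
        aggregate.SetMemento(snapshot);   // restore the cached state
        fromVersion = snapshot.Version;   // skip already-applied events
    }

    var events = _eventStore.GetEventsSince(id, fromVersion);
    aggregate.LoadFromHistory(events);    // replay the remainder
    return aggregate;
}
```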
Data Mining
One other really interesting fact about storing all the events is that you can replay these events at a
later date and retrieve important business information from them. And you get this information from
the start of the system, instead of having to build in some extra logging and wait a few months for
reliable data.
Greg actually mentioned a really nice way of caching these SQL statements: he would batch them into
a single batch and execute the batch if it gets older than x seconds, or (and this is the interesting part)
whenever a read request comes in. So when a read request comes in, its SQL statement is appended to
the batch and the whole thing is executed, ensuring that the read request will always have the latest
data available to that part of the system. More on this below.
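A rough sketch of that flush-on-read batching (all names here are illustrative, not part of the example code):

```csharp
using System;
using System.Collections.Generic;

public class BatchingSqlWriter
{
    private readonly List<string> _batch = new List<string>();
    private DateTime _batchStarted = DateTime.UtcNow;
    private readonly TimeSpan _maxAge = TimeSpan.FromSeconds(5);

    public void Append(string sqlStatement)
    {
        if (_batch.Count == 0)
            _batchStarted = DateTime.UtcNow;
        _batch.Add(sqlStatement);
        if (DateTime.UtcNow - _batchStarted > _maxAge)
            Flush(); // the batch got older than x seconds
    }

    // A read flushes the pending writes first, so it always sees the
    // latest data available to this part of the system.
    public IEnumerable<T> Read<T>(string query, Func<string, IEnumerable<T>> execute)
    {
        Flush();
        return execute(query);
    }

    private void Flush()
    {
        // execute all statements in _batch in a single round-trip
        _batch.Clear();
    }
}
```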
The domain repository is responsible for publishing the events; this would normally happen inside a
single transaction together with storing the events in the event store.
Events are also used to communicate between different Aggregate Roots. In my example I am using a
transaction from one account to another account. Here I generate an event that money is being
transferred to another account; this event will reduce the balance of the current account. Later an event
handler will make the same change in the reporting database. Another event handler will actually
forward the transaction to a service which checks if the target account is a local account, and
otherwise forwards a money transfer to a different bank (in my example the different bank is actually
the same, but via a different route). But let’s assume that the money transfer goes to an internal
account; in that case the service will publish a money transfer received command on the command bus
and the whole process continues in a different Aggregate Root. So in this case the command is not
triggered from a GUI but from a different part of the system.
There is another interesting fact in the scenario where money is received from another bank. Because
a money transfer only has the account number to identify the target account, and not the Aggregate
Root Id (you can’t expect foreign systems to know this; yes, I know you can make this a natural Id as
well), the money received service first needs to query the reporting database for an account DTO
where the account number matches the target account. When this is successful it will use the Id from
the account DTO and put that in the command to be published.
But it doesn’t stop there. As I mentioned before, you could also have events that don’t carry any state
change information but, for example, indicate that a message (email, SMS or whatever, depending on
the event handler) needs to be sent to a user. And because you are using Domain Events for all this,
everything will be stored in the Event Store, so you keep your history.
Eventual Consistency
Normally when beginning to implement CQRS you would start with a direct publishing mechanism,
so that storing the events and updating the reporting database happen in the same thread. When using
this approach you have no problems with eventual consistency.
But when your system starts to grow you might get some performance problems, and then you could
start by implementing a bus, disconnecting the publishing of the events from the handling of these
events. This means that it is possible, and likely, that your event store and reporting database are not
completely in sync with each other; they are eventually consistent. This means that the user may see
old data on his screens.
Depending on how critical this really is you can have different counter measures for this, which I will
be going into in a different chapter as this is also not something the example provides.
Specifications
The specifications that you can write using this architecture are something that I really like. What you
would do is talk to your client and ask him how he wants his process to work. So a possible scenario
could be:
So how would that go? Well, the client needs to have opened an account with our bank, and he needs
to have some money transferred into it in order to be able to withdraw money from it. And when this
happens the balance of the account must be lowered to the correct amount. Ok so: given an account
was opened, and cash was deposited, when making a cash withdrawal then we get a cash withdrawn
event.
namespace Test.Fohjin.DDD.Scenarios.Withdrawing_cash
{
    public class When_withdrawing_cash : CommandTestFixture<
        WithdrawlCashCommand,
        WithdrawlCashCommandHandler,
        ActiveAccount>
    {
        protected override IEnumerable<IDomainEvent> Given()
        {
            yield return PrepareDomainEvent.Set(
                new AccountOpenedEvent(
                    Guid.NewGuid(),
                    Guid.NewGuid(),
                    "AccountName",
                    "1234567890")).ToVersion(1);
            yield return PrepareDomainEvent.Set(
                new CashDepositedEvent(20, 20)).ToVersion(1);
        }

        protected override WithdrawlCashCommand When()
        {
            return new WithdrawlCashCommand(Guid.NewGuid(), 5);
        }

        [Then]
        public void Then_a_cash_withdrawn_event_will_be_published()
        {
            PublishedEvents.Last().WillBeOfType<CashWithdrawnEvent>();
        }

        [Then]
        public void Then_the_event_will_contain_the_amount_and_balance()
        {
            PublishedEvents.Last<CashWithdrawnEvent>().Balance.WillBe(15);
            PublishedEvents.Last<CashWithdrawnEvent>().Amount.WillBe(5);
        }
    }
}
Ok ok, but now we have the same story, only now there is not enough money in the account; then we
should get an exception.
namespace Test.Fohjin.DDD.Scenarios.Withdrawing_cash
{
    public class When_withdrawling_cash_with_to_little_balance
        : CommandTestFixture<
            WithdrawlCashCommand,
            WithdrawlCashCommandHandler,
            ActiveAccount>
    {
        protected override IEnumerable<IDomainEvent> Given()
        {
            yield return PrepareDomainEvent.Set(
                new AccountOpenedEvent(
                    Guid.NewGuid(),
                    Guid.NewGuid(),
                    "AccountName",
                    "1234567890")).ToVersion(1);
        }

        protected override WithdrawlCashCommand When()
        {
            return new WithdrawlCashCommand(Guid.NewGuid(), 1);
        }

        [Then]
        public void Then_acc_balance_to_low_exception_will_be_thrown()
        {
            CaughtException.WillBeOfType<AccountBalanceToLowException>();
        }

        [Then]
        public void Then_the_exception_message_will_be()
        {
            CaughtException.WithMessage(
                string.Format(
                    "The amount {0:C} is larger than your balance {1:C}",
                    1, 0));
        }
    }
}
The cool part here is that the whole domain is seen as a black box: you bring it to a certain state
exactly the same way as it is used, then you publish a command just like your application would,
and after that you verify that the domain publishes the correct events and that their values are correct.
This means that you are never testing your domain in a state that it cannot naturally get to, which
makes the tests more reliable.
Now imagine a parser that takes all the class names and, underneath each class name, prints the
events that occurred to get into the current state. Then you print the command that we are testing, and
finally you print the method names that test the actual outcome. This would be a very nice, readable
specification that the client at least can understand.
I think this is of great value for the business, something that is easily overlooked.
Finally
As you can see this is all very simple and straightforward. It is a different mindset, but once you enter
this mindset you will notice that your applications will be much more behavioral rather than
CRUD-like. And hopefully our clients can move back to thinking about their business logic instead of
thinking about the CRUD way we have forced upon them. Also I would like to thank Greg Young for
providing me with so much information and putting up with all my weird questions, and thank
Jonathan Oliver and Mike Nichols for some improvements on the technical side.
Domain Events
As you may have seen in the previous chapter, our domain aggregate root is now responsible for
publishing domain events indicating that some internal state has changed. In fact, state changes within
our aggregate root are only allowed through such domain events. Secondly, the internal event handlers
are not allowed to have any sort of business logic in them; they are only supposed to set or update the
internal state of the aggregate root directly from the data the event carries. Using these rules you
completely separate the business logic from the state changes. This separation enables us to replay
historical domain events without any business logic being triggered, bringing the aggregate root back
to the same state as the original.
Why is this important? Well, now we can use these same domain events for our persistence using an
Event Store; this pattern, described by Martin Fowler, is called “Event Sourcing”. Obviously you
don’t want to process a credit card or send an e-mail every time you load an aggregate root from the
event store. Also, between the time the original domain event was recorded and the time the aggregate
root is loaded from the event store, the business logic that decided a state change was needed could
have changed; this should not affect the actual historical state change. So this separation is to be taken
seriously.
Domain events can also be used to signal to the outside world (from the aggregate root’s point of
view) that something has happened without there being an actual state change. When persisting our
domain events we would not differentiate between these two kinds of domain events.
All domain events should be named with the ubiquitous language in mind, meaning that they should
closely represent what the user intended to do, in the same language the user would use to explain it
to you. By keeping all these domain events we gain a huge amount of knowledge about what happened
and why it happened.
This means that our aggregate root gets the added responsibility of tracking these domain events, but I
don’t see this as being any different than, for example, the proxy that NHibernate generates for you,
except perhaps that it is not a proxy and that you have more control over what happens. But it is true:
your aggregate root has these added responsibilities.
So let us take a look at how the aggregate root provides this functionality. For obvious reasons we use
a base class for this, but really all that is needed is that the aggregate root implements the following
two interfaces:
namespace Fohjin.DDD.EventStore.Storage.Memento
{
    public interface IOrginator
    {
        IMemento CreateMemento();
        void SetMemento(IMemento memento);
    }
}
The IOrginator interface is for the snapshot functionality, which is an optimization technique for
speeding up the loading of aggregate roots from the Event Store. As you can see it is using the
“Memento” pattern from the Gang of Four book. I wanted to get this interface out of the way first; it is
not needed, but it does provide a good optimization for loading aggregate roots.
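To make this concrete, an implementation on the Client aggregate might look roughly like this (the memento class and its fields are illustrative, not the example’s exact code):

```csharp
[Serializable]
public class ClientMemento : IMemento
{
    public Guid Id;
    public int Version;
    public string Name;
    public Address Address;
}

public partial class Client : IOrginator
{
    // Capture a snapshot of the internal state; no logic, just copying.
    IMemento IOrginator.CreateMemento()
    {
        return new ClientMemento
        {
            Id = Id,
            Version = Version,
            Name = _name,
            Address = _address
        };
    }

    // Restore the internal state directly from a snapshot.
    void IOrginator.SetMemento(IMemento memento)
    {
        var state = (ClientMemento)memento;
        Id = state.Id;
        Version = state.Version;
        _name = state.Name;
        _address = state.Address;
    }
}
```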
namespace Fohjin.DDD.EventStore
{
    public interface IEventProvider
    {
        void Clear();
        void LoadFromHistory(IEnumerable<IDomainEvent> domainEvents);
        void UpdateVersion(int version);
        Guid Id { get; }
        int Version { get; }
        IEnumerable<IDomainEvent> GetChanges();
    }
}
The IEventProvider interface is the more interesting interface of the two; this one defines how domain
events can be retrieved from the aggregate root and how historical domain events can be loaded back.
It also defines that each aggregate root must have an Id and a Version, both of which are used by the
event store. The Id is obvious, so I won’t go into that; the Version, on the other hand, may not be. The
Version is used to detect concurrency violations, meaning it is used to prevent conflicts that occur
because, between the time the command was sent and the aggregate root was saved, another user or
process updated the same aggregate root. In this case we would throw a Concurrency Violation
Exception, which currently results in a rollback. In a future chapter I plan to look into how you could
try to deal with these concurrency violations automatically.
Each aggregate root has to register the domain events and the internal event handlers with the base
class. I am working on making this static information for the type, since it will not change between
different instances of the same type. As you can see, I register the domain event type together with an
action to handle that specific type; the actions have the specific domain event as an input parameter.
The RegisterEvent method is defined in the BaseAggregateRoot class. There is a little bit of
interesting logic going on here: basically a new action is defined that has an IDomainEvent as an input
parameter; that is how they can all be stored in the same Dictionary. Then, inside this action, the
provided action is invoked, where the input value is cast from an IDomainEvent to the actual expected
type TEvent.
And below is the private apply method that is called from two different methods in the aggregate
root base class. The method retrieves the action that is registered for the provided domain event, and
then it invokes the action with the domain event as the input parameter. The apply method takes an
IDomainEvent instead of the specific domain event, and because of that we have the above mentioned
wrapping logic.
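Reconstructed from that description (this is a sketch, not a copy of the example source), the registration and dispatch logic looks roughly like this:

```csharp
private readonly Dictionary<Type, Action<IDomainEvent>> _registeredEvents =
    new Dictionary<Type, Action<IDomainEvent>>();

// Wrap the typed action in an Action<IDomainEvent> so every handler
// fits in the same Dictionary; inside the wrapper the event is cast
// back to the expected type TEvent.
protected void RegisterEvent<TEvent>(Action<TEvent> eventHandler)
    where TEvent : class, IDomainEvent
{
    _registeredEvents.Add(
        typeof(TEvent),
        domainEvent => eventHandler((TEvent)domainEvent));
}

// The private apply: look up the registered action for the event type
// and invoke it with the domain event as input.
private void apply(Type eventType, IDomainEvent domainEvent)
{
    Action<IDomainEvent> handler;
    if (!_registeredEvents.TryGetValue(eventType, out handler))
        throw new InvalidOperationException(
            string.Format("No handler registered for {0}", eventType));
    handler(domainEvent);
}
```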
Let us take a look at where this apply method is called from; first we will look at some actual
domain behavior in the aggregate root.
public void Withdrawl(Amount amount)
{
    Guard();

    IsBalanceHighEnough(amount);

    var newBalance = _balance.Withdrawl(amount);

    Apply(new CashWithdrawnEvent(newBalance, amount));
}
First we execute all our valuable domain logic, the business behavior. Then, when we are satisfied
that everything is ok and we know what type of state change we need to execute, we Apply a new
domain event with the new internal state. The Apply method used here is a protected method on the
BaseAggregateRoot. By the way, there is nothing stating that there can only be one outcome, i.e. only
one type of domain event being Applied.
When we Apply a domain event we will first assign the aggregate root Id to the event, so that we can
keep track of which aggregate root the event belongs to. Secondly we get a new version and assign
this to the event; this is to maintain the correct order of the events. Then we call the apply method,
which will make the state change to the aggregate root. And finally we add this domain event to
the internal list of applied events. This is very similar to the dirty check of NHibernate (the idea,
not the actual implementation).
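Those steps can be sketched as follows (property names like AggregateId are assumptions based on the description, not taken from the example source):

```csharp
protected void Apply(IDomainEvent domainEvent)
{
    domainEvent.AggregateId = Id;              // which root the event belongs to
    domainEvent.Version = ++EventVersion;      // maintain the correct order
    apply(domainEvent.GetType(), domainEvent); // make the state change
    _appliedEvents.Add(domainEvent);           // remember it for persistence
}
```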
This is all that is needed to execute domain behavior and keep track of the domain events that
have been used to update the internal state of an aggregate root.
Here we simply return all the applied domain events; by tracking these domain events we in effect track all the state changes that have happened since the aggregate root was instantiated. Here we also request all the applied domain events from all entities, and then we order the combined domain events by version.
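The listing for this method is not reproduced here, but based on the description it could look roughly like this (a sketch; the actual method in the example project may differ):

```csharp
// Sketch of a GetChanges-style method: combine the root's own applied
// events with those of all child entities, ordered by version.
IEnumerable<IDomainEvent> IEventProvider.GetChanges()
{
    return _appliedEvents
        .Concat(_entityEventProviders.SelectMany(x => x.GetChanges()))
        .OrderBy(x => x.Version)
        .ToList();
}
```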
After having retrieved and processed all the applied domain events we should Clear the aggregate root of all applied domain events so they won't be persisted again.
    void IEventProvider.Clear()
    {
        _entityEventProviders.ForEach(x => x.Clear());
        _appliedEvents.Clear();
    }
Just a quick word about the entity event providers: in some cases you want to have domain behavior inside entities that are part of the aggregate but are not the aggregate root. In that case you want those entities to generate domain events as well, and you want to receive those events too when getting all changes. The same applies when clearing: the changes inside each entity event provider should be cleared as well.
Before the saving of the changes in the aggregate root is finalized we also need to update the version of the aggregate root. This version will match the version of the last applied domain event.
As the IEventProvider interface nicely dictates, there is a LoadFromHistory method that takes an IEnumerable&lt;IDomainEvent&gt;; below is the implementation of this method.
    void IEventProvider.LoadFromHistory(IEnumerable<IDomainEvent> domainEvents)
    {
        if (domainEvents.Count() == 0)
            return;

        foreach (var domainEvent in domainEvents)
        {
            apply(domainEvent.GetType(), domainEvent);
        }

        Version = domainEvents.Last().Version;
        EventVersion = Version;
    }
When you take a look inside the foreach loop you will notice that here we are calling apply again. Please note that this is the private variant, which is responsible for updating the internal state of the aggregate root; these events are not added to the _appliedEvents collection, nor are the Id or Version updated. We don't want to save these events again. After applying all the historical domain events we update the aggregate root version to the version of the last event.
Another thing is that all the public methods in the BaseAggregateRoot are explicit interface implementations. I do this because I want to hide these details from any piece of code that is using the aggregate root as is; only when dealing with persistence do we use the interface and get access to the explicit interface methods. Nice and clean.
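A small illustration of what explicit interface implementation buys us here (ActiveAccount stands in for any aggregate root from the example):

```csharp
// Persistence-only members are invisible on the class itself.
var account = new ActiveAccount();
// account.Clear();              // would not compile: not a public member

IEventProvider provider = account; // persistence code casts to the interface
provider.Clear();                  // only now is Clear() reachable
```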
Aggregate entities
Ok, I have already mentioned the terms entities and entity event providers, and I would like to focus on them for a bit. An entity is a domain object that is part of the same aggregate as the aggregate root, but is not the aggregate root itself; it is, however, managed by the aggregate root. We manage state changes in these entities in exactly the same way as we do in the aggregate root, so each entity is also an event provider. There is a different interface for the entities than for the aggregate root so they cannot be confused with each other.
    namespace Fohjin.DDD.EventStore
    {
        public interface IEntityEventProvider
        {
            void Clear();
            void LoadFromHistory(IEnumerable<IDomainEvent> domainEvents);
            void HookUpVersionProvider(Func<int> versionProvider);
            IEnumerable<IDomainEvent> GetChanges();
            Guid Id { get; }
        }
    }
You may have also noticed the hookup version provider method, remember it :)
The problem is that we should deal with the aggregate as a whole, so all state changes need to be persisted within the same transaction. We therefore want a single point of access to the internal state of the whole aggregate, as well as a single point to load the history back into the whole aggregate. In order to achieve this the aggregate root needs to register all the entity event providers so it can track them. To enable that we have an additional interface for our aggregate root to implement.
    namespace Fohjin.DDD.EventStore
    {
        public interface IRegisterEntities
        {
            void RegisterEntityEventProvider(
                IEntityEventProvider entityEventProvider);
        }
    }

    void IRegisterEntities.RegisterEntityEventProvider(
        IEntityEventProvider entityEventProvider)
    {
        entityEventProvider.HookUpVersionProvider(GetNewEventVersion);
        _entityEventProviders.Add(entityEventProvider);
    }
Now this is only a very simple collection that holds references to IEntityEventProviders, which is used in the previously shown GetEntityEvents method. So getting the changes out is relatively simple this way. I also created a special collection that will automatically register entities when they are added to the collection.
Remember I mentioned the HookUpVersionProvider method? Well, this is used to get a reference to the method on the aggregate root that deals with assigning a new version to each event. We want a reference to the same version generator that the aggregate root uses, because then all the versions of the domain events will be in sequence, independent of whether the aggregate root or an entity created them.
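On the entity side this could be used roughly as follows (a sketch, not the actual BaseEntity code from the example project):

```csharp
// The entity stores the delegate handed to it by the aggregate root and
// draws event versions from it, so root and entities share one sequence.
private Func<int> _versionProvider;

void IEntityEventProvider.HookUpVersionProvider(Func<int> versionProvider)
{
    _versionProvider = versionProvider;
}

protected void Apply(IDomainEvent domainEvent)
{
    domainEvent.Version = _versionProvider(); // same counter as the root
    apply(domainEvent.GetType(), domainEvent);
    _appliedEvents.Add(domainEvent);
}
```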
    namespace Fohjin.DDD.Domain
    {
        public class EntityList<TEntity>
            : List<TEntity> where TEntity : IEntityEventProvider
        {
            private readonly IRegisterEntities _aggregateRoot;

            public EntityList(IRegisterEntities aggregateRoot)
            {
                _aggregateRoot = aggregateRoot;
            }

            public new void Add(TEntity entity)
            {
                _aggregateRoot.RegisterEntityEventProvider(entity);
                base.Add(entity);
            }
        }
    }
The collection takes a reference, in its constructor, to the aggregate root it is part of, so that it can register each added entity event provider with the aggregate root.
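Inside an aggregate root the usage then looks something like this (sketched from the BankCard example used later in this chapter; constructor details are illustrative):

```csharp
// The aggregate root passes itself into the list, so every BankCard
// added to _bankCards is automatically registered as an event provider.
private readonly EntityList<BankCard> _bankCards;

public ActiveAccount()
{
    _bankCards = new EntityList<BankCard>(this);
}
```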
Currently there is some duplication between the BaseAggregateRoot and the BaseEntity because they implement different interfaces; I am sure some optimization is possible there :)
There is one final thing to cover, and that is how historical domain events are passed to the correct entity. Currently this is a bit ugly, but I am working on a more elegant solution. Take a look at the code.
    private void registerEvents()
    {
        // Registration of Aggregate Root event handlers

        RegisterEvent<BankCardWasReportedStolenEvent>(
            onAnyEventForABankCard);
        RegisterEvent<BankCardWasCanceledByCLientEvent>(
            onAnyEventForABankCard);
    }

    private void onAnyEventForABankCard(IDomainEvent domainEvent)
    {
        IEntityEventProvider bankCard;
        if (!_bankCards.TryGetValueById(
            domainEvent.AggregateId,
            out bankCard))
            throw new NonExistingBankCardException(
                "The requested bank card does not exist!");

        bankCard.LoadFromHistory(new[] { domainEvent });
    }
So each entity domain event is registered here as well, and they are all passed to one specific event handler (one specific event handler for each different entity type). What happens is that we check whether the specific event provider is present in the collection by looking for its Id; when the requested event provider is found, the historical domain event is loaded using the LoadFromHistory method on the event provider.
The problem here is that the information about which events an entity can produce is already registered, namely in the entities themselves. So by making that information statically available I should be able to auto-register the entity event handlers.
Finally
I hope that this was a useful explanation of how the aggregate root works with the internal domain events, and that you agree with me that it really is not very difficult. Next time I want to discuss the event store, so we can take a look at persisting the domain events.
Domain State
This morning Aaron Jensen asked a really interesting question on Twitter: “Should Aggregate Roots and Entities always keep their state if it is not needed for business decisions? Is firing events and relying on the reporting store enough?”. He really made me think; Aaron, thanks for that!
So for example you have some behavior on your domain that gets called when a customer moves, this
behavior will publish a Customer Moved event containing the new address. If the domain does not
use the address information for any decision making, does it then need to be persisted in the aggregate
root?
I guess it would greatly depend on whether or not you are using Event Sourcing to persist your
published events (this is what I would suggest). If you do then I don’t see a problem in skipping
storing the address information in the aggregate root. Because if at a later time you need the address
information to make some decisions then it is easy to retrieve this from the events when reloading the
aggregate root from the event store. You just create the address property and an internal event handler
to process the Customer Moved event. Just make sure you delete the snapshots first.
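Adding the state later might look like this; CustomerMovedEvent, Address and the handler name are illustrative, not taken from the example project:

```csharp
// The aggregate root gains a field plus an internal event handler;
// reloading from the event store then replays the historical
// Customer Moved events through it and populates the new field.
private Address _address;

private void registerEvents()
{
    RegisterEvent<CustomerMovedEvent>(onCustomerMoved);
}

private void onCustomerMoved(CustomerMovedEvent domainEvent)
{
    _address = domainEvent.NewAddress;
}
```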
But if you do not store all the events that you publish, but instead store only a snapshot then I would
not skip storing the new address information, because in the end the domain is responsible for the
state, not the reporting side. In other words, you can rebuild the reporting side from the domain, but
you cannot necessarily do the same in reverse. So if you don’t store the address information and you
need to rebuild the reporting store then you cannot do this.
So my conclusion is that the information should be available on the domain side. When using CQRS and Event Sourcing to store the events instead of the internal state of the domain, it becomes possible to skip having the information in the domain structure; otherwise it does not.
Event Sourcing
This post was prompted by a blog post by Rob Conery about Reporting In NoSQL, where he explains very well what the problem is when using a RDBMS for persisting the state of your domain, or really anything that is written with Object Orientation in mind.
His solution to the problem is to use an object database or a document database for persisting the state of your object structure, and I do agree that this is a valuable approach to solving the problem.
But I would like to talk about a different approach, called Event Sourcing; a pattern described by Martin Fowler that “Captures all changes to an application state as a sequence of events.”
Hey, that is interesting: instead of trying to store the object tree as a whole, we just store all the individual state changes that the object tree encapsulates; meaning all the state changes that have happened during the complete lifetime of the object tree. These state changes are represented in the form of events, and such an event is nothing more than a Plain Old CLR Object, so not an actual .Net event.
And the objects are also re-constructed from those same events, by applying all the state changes that they represent, thus arriving back at the previous state in exactly the way it was reached originally.
Now the interesting part with respect to persistence is that these events are serialized using whatever technique you like (binary, JSON or custom), and this serialized event (object) is persisted in an event store; this event store will treat all serialized events equally.
This event store can be based on an object database, a document database, the file system or even a RDBMS. You basically need one collection that describes all the different objects that the event store has persisted, including the Id and the Version. Another collection then contains all the serialized events for each different object, and these should be retrievable by the object Id, ordered by their Version. To simplify: in a RDBMS this would mean two tables, in total.
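To make the two-collection idea concrete, here is a minimal in-memory sketch (illustrative only; the example project's event store is more elaborate):

```csharp
using System;
using System.Collections.Generic;

// One "table" tracks each event provider's Id and current Version; the
// other holds the serialized events per Id, retrievable in Version order.
public class InMemoryEventStore
{
    private readonly Dictionary<Guid, int> _providers =
        new Dictionary<Guid, int>();
    private readonly Dictionary<Guid, SortedList<int, byte[]>> _events =
        new Dictionary<Guid, SortedList<int, byte[]>>();

    public void Save(Guid id, int version, byte[] serializedEvent)
    {
        if (!_events.ContainsKey(id))
            _events[id] = new SortedList<int, byte[]>();
        _events[id].Add(version, serializedEvent);
        _providers[id] = version; // remember the latest version
    }

    public IEnumerable<byte[]> GetEvents(Guid id)
    {
        return _events[id].Values; // SortedList keeps version order
    }
}
```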
So there is no impedance mismatch between the domain (object structure) and the persistence layer anymore, which means it should pass Rob's criteria as well.
But hey, it doesn't stop there: you get a real audit log for free as well. And you have the ability to replay all the events to extract new information about certain state changes. For example, a web store has been in business for 6 months and now they would like to know when and where an item is removed from the shopping cart. With an event store you will have this information from the start of the system, without having had to think about it up front.
And this also enables an easier shift to an Event Driven Architecture, because you can start publishing the events outside of the domain and have different behavior or processes react to them.
In my example I use a combination of CQRS and Event Sourcing, and this makes a very powerful solution. I would recommend that when you are applying CQRS you do so in combination with Event Sourcing, to get a very flexible system without much more complexity than applying CQRS alone.
Event Versioning
When using Event Sourcing you store your events in an Event Store. This Event Store can only insert new events and read historical events, nothing more, nothing less. So when you change your domain logic, and with it the events belonging to this behavior, you cannot go back into the Event Store and do a one-time conversion of all the historical events belonging to that behavior. The Event Store needs to stay intact; that is one of its powers.
So you make a new version of the original event; this new version carries more or less information than the original one. Let's take a look at a very simple example:
    namespace Fohjin.DDD.Events.Account
    {
        [Serializable]
        public class CashWithdrawnEvent : DomainEvent
        {
            public decimal Balance { get; private set; }
            public decimal Amount { get; private set; }

            public CashWithdrawnEvent(decimal balance, decimal amount)
            {
                Balance = balance;
                Amount = amount;
            }
        }

        [Serializable]
        public class CashWithdrawnEvent_v2 : DomainEvent
        {
            public decimal Balance { get; private set; }
            public decimal Amount { get; private set; }
            public Guid AtmId { get; private set; }

            public CashWithdrawnEvent_v2(
                decimal balance,
                decimal amount,
                Guid atmId)
            {
                Balance = balance;
                Amount = amount;
                AtmId = atmId;
            }
        }
    }
This, to me, looks like a natural evolution for this type of event; so how do you deal with it? Before this extension was added the system was already in use, and there have been many cash withdrawals. So all these events are in the Event Store, they cannot be altered, and when you retrieve an Aggregate Root from the Event Store all these historical events need to be processed in order to restore its internal state.
Now what you don't want is to maintain code in the Aggregate Root that knows how to handle these old event versions; sure, one version is ok, but what about one hundred different versions? Also, we are not just talking about the Aggregate Root: the different event handlers would need to be kept and maintained as well.
The better approach is to have a mechanism that you can hook up with different event convertors. Then, when an event is retrieved from the Event Store, it first goes through this pipeline of convertors to be converted to the latest event version.
Now, I wanted to do this properly, write some actual code for it and then blog about it, but someone kept nagging me about it, so here is a very rough spike instead. First some tests:
    namespace Test.Fohjin.DDD.Spike
    {
        public class Spike_test_1 : BaseTestFixture
        {
            private object ConvertedEvent;

            protected override void When()
            {
                ConvertedEvent = new EventConvertor()
                    .Convert(new CashWithdrawnEvent(10.0M, 20.0M));
            }

            [Then]
            public void The_converted_event_is_the_latest_version()
            {
                ConvertedEvent.WillBeOfType<CashWithdrawnEvent_v4>();
            }

            [Then]
            public void The_converted_event_wil_contain_the_correct_data()
            {
                ConvertedEvent.As<CashWithdrawnEvent_v4>()
                    .Balance.WillBe(10.0M);
                ConvertedEvent.As<CashWithdrawnEvent_v4>()
                    .Amount.WillBe(20.0M);
                ConvertedEvent.As<CashWithdrawnEvent_v4>()
                    .AtmId.WillBe(string.Empty);
            }
        }

        public class Spike_test_2 : BaseTestFixture
        {
            private object ConvertedEvent;

            protected override void When()
            {
                ConvertedEvent = new EventConvertor()
                    .Convert(new CashWithdrawnEvent_v2(10.0M, 20.0M, "12345"));
            }

            [Then]
            public void The_converted_event_is_the_latest_version()
            {
                ConvertedEvent.WillBeOfType<CashWithdrawnEvent_v4>();
            }

            [Then]
            public void The_converted_event_wil_contain_the_correct_data()
            {
                ConvertedEvent.As<CashWithdrawnEvent_v4>()
                    .Balance.WillBe(10.0M);
                ConvertedEvent.As<CashWithdrawnEvent_v4>()
                    .Amount.WillBe(20.0M);
                ConvertedEvent.As<CashWithdrawnEvent_v4>()
                    .AtmId.WillBe("12345");
            }
        }

        public class Spike_test_3 : BaseTestFixture
        {
            private object ConvertedEvent;

            protected override void When()
            {
                ConvertedEvent = new EventConvertor()
                    .Convert(new CashWithdrawnEvent_v3(10.0M, 20.0M, "12345"));
            }

            [Then]
            public void The_converted_event_is_the_latest_version()
            {
                ConvertedEvent.WillBeOfType<CashWithdrawnEvent_v4>();
            }

            [Then]
            public void The_converted_event_wil_contain_the_correct_data()
            {
                ConvertedEvent.As<CashWithdrawnEvent_v4>()
                    .Balance.WillBe(10.0M);
                ConvertedEvent.As<CashWithdrawnEvent_v4>()
                    .Amount.WillBe(20.0M);
                ConvertedEvent.As<CashWithdrawnEvent_v4>()
                    .AtmId.WillBe("12345");
            }
        }

        public class Spike_test_4 : BaseTestFixture
        {
            private object ConvertedEvent;

            protected override void When()
            {
                ConvertedEvent = new EventConvertor()
                    .Convert(new CashWithdrawnEvent_v4(10.0M, 20.0M, "12345"));
            }

            [Then]
            public void The_converted_event_is_the_latest_version()
            {
                ConvertedEvent.WillBeOfType<CashWithdrawnEvent_v4>();
            }

            [Then]
            public void The_converted_event_wil_contain_the_correct_data()
            {
                ConvertedEvent.As<CashWithdrawnEvent_v4>()
                    .Balance.WillBe(10.0M);
                ConvertedEvent.As<CashWithdrawnEvent_v4>()
                    .Amount.WillBe(20.0M);
                ConvertedEvent.As<CashWithdrawnEvent_v4>()
                    .AtmId.WillBe("12345");
            }
        }
    }
So, basically some tests to confirm the correct conversion from one event to another; below is the full implementation:
    namespace Test.Fohjin.DDD.Spike
    {
        public class EventConvertor
        {
            private Dictionary<Type, Func<object, object>> _convertors;

            public EventConvertor()
            {
                _convertors = new Dictionary<Type, Func<object, object>>();
                RegisterEventConvertors();
            }

            private void RegisterEventConvertors()
            {
                _convertors.Add(typeof(CashWithdrawnEvent),
                    x => new CashWithdrawnEventConvertor()
                        .Convert((CashWithdrawnEvent)x));
                _convertors.Add(typeof(CashWithdrawnEvent_v2),
                    x => new CashWithdrawnEvent_v2Convertor()
                        .Convert((CashWithdrawnEvent_v2)x));
                _convertors.Add(typeof(CashWithdrawnEvent_v3),
                    x => new CashWithdrawnEvent_v3Convertor()
                        .Convert((CashWithdrawnEvent_v3)x));
            }

            public object Convert(object sourceEvent)
            {
                Func<object, object> convertor;
                return _convertors.TryGetValue(
                    sourceEvent.GetType(),
                    out convertor)
                    ? Convert(convertor(sourceEvent))
                    : sourceEvent;
            }
        }

        public interface IEventConvertor<TSourceEvent, TTargetEvent>
            where TSourceEvent : IDomainEvent
            where TTargetEvent : IDomainEvent
        {
            TTargetEvent Convert(TSourceEvent sourceEvent);
        }

        public class CashWithdrawnEventConvertor
            : IEventConvertor<CashWithdrawnEvent, CashWithdrawnEvent_v4>
        {
            public CashWithdrawnEvent_v4 Convert(
                CashWithdrawnEvent sourceEvent)
            {
                var theEvent = new CashWithdrawnEvent_v4(
                    sourceEvent.Balance, sourceEvent.Amount, string.Empty)
                {
                    AggregateId = sourceEvent.AggregateId
                };
                (theEvent as IDomainEvent).Version =
                    (sourceEvent as IDomainEvent).Version;
                return theEvent;
            }
        }

        public class CashWithdrawnEvent_v2Convertor
            : IEventConvertor<CashWithdrawnEvent_v2, CashWithdrawnEvent_v3>
        {
            public CashWithdrawnEvent_v3 Convert(
                CashWithdrawnEvent_v2 sourceEvent)
            {
                var theEvent = new CashWithdrawnEvent_v3(
                    sourceEvent.Balance, sourceEvent.Amount, sourceEvent.AtmId)
                {
                    AggregateId = sourceEvent.AggregateId
                };
                (theEvent as IDomainEvent).Version =
                    (sourceEvent as IDomainEvent).Version;
                return theEvent;
            }
        }

        public class CashWithdrawnEvent_v3Convertor
            : IEventConvertor<CashWithdrawnEvent_v3, CashWithdrawnEvent_v4>
        {
            public CashWithdrawnEvent_v4 Convert(
                CashWithdrawnEvent_v3 sourceEvent)
            {
                var theEvent = new CashWithdrawnEvent_v4(
                    sourceEvent.Balance, sourceEvent.Amount, sourceEvent.AtmId)
                {
                    AggregateId = sourceEvent.AggregateId
                };
                (theEvent as IDomainEvent).Version =
                    (sourceEvent as IDomainEvent).Version;
                return theEvent;
            }
        }
    }
This implementation is definitely not very elegant, but hey, it does show you how a possible solution could work. When building this yourself you might want to use conventions to auto-register the convertors and chain them together during configuration, so there is no need for the recursive functionality.
Also look at the jump from version 1 to version 4, this is an optimization to speed up the conversion.
You would do this after a few versions, not for each version.
I’ll be adding a proper solution to the example in the near future, something that you would just plug
the convertors in and the system would figure out how to handle them itself.
Scalability
Scalability is one of the several different benefits you gain from applying CQRS and Event Sourcing
to your application architecture. And that is what I wanted to take a closer look at in this post.
One of the first obvious ways to increase the performance of your system is to split the Command side from the Query side by using a service bus or just a simple queue. So instead of one machine handling both responsibilities, you now have two machines.
Then you also gain the ability to measure more precisely where the actual bottleneck is; normally your application queries for data many times more often than it executes behavior. There are already many well-known approaches for scaling your reporting database, so that is not what I would like to talk about, but I guess it is clear that this side can now be scaled out independently from the command side.
Now the command side: even though it is used many times less than the query side, the actual behavior may take much more time and/or processing power.
There are two natural ways of splitting up the domain. The first one is by Aggregate Root type: you can choose to place a single (or multiple) Aggregate Root type on a different server. The second way is to split up a single Aggregate Root type by the Identity each individual instance has; think, for example, of splitting them up depending on the last number of the Id, even versus odd.
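Such a routing decision could be as simple as this sketch (GetShard is a hypothetical helper, not part of the example project):

```csharp
using System;

// Route an aggregate instance to one of N command-handler machines based
// on its Id; with numberOfShards = 2 this is the even/odd split above.
public static class CommandRouter
{
    public static int GetShard(Guid aggregateId, int numberOfShards)
    {
        var lastByte = aggregateId.ToByteArray()[15];
        return lastByte % numberOfShards;
    }
}
```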
So who decides? The decision needs to be made by the process that accepts the commands and passes them on to the command handlers; these command handlers can then be running on different machines, and each machine may even have its own Event Store.
This is a very flexible way of splitting up your system into different components, and because of the Event Driven Architecture that is basically built into this approach, you will be able to trigger other behavior on different Aggregate Roots without added trouble.
Specifications
I received a couple of questions about the Specification Framework that I use in the CQRS example, and thought let's talk about that for a bit. The first thing that should be underlined is that this is not a framework; it is a few classes and extension methods that rely on NUnit for the actual assertions and on Moq for mocking the dependencies. I got the initial bits from Greg Young at his DDD course, and extended them a little for my specific needs.
1. BaseTestFixture
2. BaseTestFixture<TSubjectUnderTest>
3. AggregateRootTestFixture<TAggregateRoot>
4. CommandTestFixture<TCommand, TCommandHandler, TAggregateRoot>
5. EventTestFixture<TEvent, TEventHandler>
6. PresenterTestFixture<TPresenter>
These classes are each tailored towards a very specific need, which is the direct opposite of what a framework usually provides.
Black box
I try my best to make my tests treat the subject under test (SUT) as a black box, meaning that in my
tests I don’t directly interact with the actual class that I am testing. Instead I want to trigger the
behavior by executing behavior that lies further outside. The behavior that triggers the behavior on the
SUT may be an actual implementation, or it could be a fake.
The same applies to the result of the behavior that gets tested: instead of verifying some state in the SUT, I want to verify what happens outside of the SUT. So what matters is that I test the behavior, not the state, of the subject under test.
I also try to get further away from the SUT than its immediate usage; doing so makes the tests less brittle to change. This in itself is not always an easy task, but I recommend you try it anyway.
The BaseTestFixture
This is the simplest test fixture class that I have and I use this for and I actually don’t use this
anywhere in the example code, but it serves a really good basic overview of the semantics that are
shared among the other test fixture classes.
    namespace Test.Fohjin.DDD
    {
        [Specification]
        public abstract class BaseTestFixture
        {
            protected Exception CaughtException;
            protected virtual void Given() { }
            protected abstract void When();
            protected virtual void Finally() { }

            [Given]
            public void Setup()
            {
                CaughtException = new NoExceptionButOneWasExpectedException();
                Given();

                try
                {
                    When();
                }
                catch (Exception exception)
                {
                    CaughtException = exception;
                }
                finally
                {
                    Finally();
                }
            }
        }

        public class GivenAttribute : SetUpAttribute { }

        public class ThenAttribute : TestAttribute { }

        public class SpecificationAttribute : TestFixtureAttribute { }
    }
It follows the Given When Then (GWT) approach, and as you can see it is really simple. Also note that I introduced some differently named attributes by simply inheriting from the default NUnit attributes; this was purely done to stay with the GWT approach.
Below is an incredibly KISS example of how you would use this BaseTestFixture, which I believe doesn't need further explanation. (I know I am misusing the term KISS here, but I thought it was fitting anyway.)
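As an illustration (not the book's original listing), a trivial specification in this style could be:

```csharp
// Trivial use of BaseTestFixture: When throws, Setup catches the
// exception into CaughtException, and a Then asserts on it.
public class When_an_expected_exception_is_thrown : BaseTestFixture
{
    protected override void When()
    {
        throw new InvalidOperationException("boom");
    }

    [Then]
    public void Then_the_exception_is_caught_by_the_fixture()
    {
        Assert.That(CaughtException,
            Is.InstanceOf<InvalidOperationException>());
    }
}
```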
The BaseTestFixture<TSubjectUnderTest>
Now we are getting to a more interesting case, because now my subject under test is actually provided via the generic parameter of the base test fixture class. To be honest, this class is only used by 12 of the 122 specification classes; this is mostly because it is still a very generic solution, but again a nice way to ease into it.
    namespace Test.Fohjin.DDD
    {
        [Specification]
        public abstract class BaseTestFixture<TSubjectUnderTest>
        {
            private Dictionary<Type, object> mocks;

            protected Dictionary<Type, object> DoNotMock;
            protected TSubjectUnderTest SubjectUnderTest;
            protected Exception CaughtException;
            protected virtual void SetupDependencies() { }
            protected virtual void Given() { }
            protected abstract void When();
            protected virtual void Finally() { }

            [Given]
            public void Setup()
            {
                mocks = new Dictionary<Type, object>();
                DoNotMock = new Dictionary<Type, object>();
                CaughtException = new NoExceptionButOneWasExpectedException();

                BuildMocks();
                SetupDependencies();
                SubjectUnderTest = BuildSubjectUnderTest();

                Given();

                try
                {
                    When();
                }
                catch (Exception exception)
                {
                    CaughtException = exception;
                }
                finally
                {
                    Finally();
                }
            }

            public Mock<TType> OnDependency<TType>() where TType : class
            {
                return (Mock<TType>)mocks[typeof(TType)];
            }

            private TSubjectUnderTest BuildSubjectUnderTest()
            {
                var constructorInfo = typeof(TSubjectUnderTest)
                    .GetConstructors().First();

                var parameters = new List<object>();
                foreach (var mock in mocks)
                {
                    object theObject;
                    if (!DoNotMock.TryGetValue(mock.Key, out theObject))
                        theObject = ((Mock)mock.Value).Object;

                    parameters.Add(theObject);
                }

                return (TSubjectUnderTest)constructorInfo
                    .Invoke(parameters.ToArray());
            }

            private void BuildMocks()
            {
                var constructorInfo = typeof(TSubjectUnderTest)
                    .GetConstructors().First();

                foreach (var parameter in constructorInfo.GetParameters())
                {
                    mocks.Add(
                        parameter.ParameterType,
                        CreateMock(parameter.ParameterType));
                }
            }

            private static object CreateMock(Type type)
            {
                var constructorInfo = typeof(Mock<>)
                    .MakeGenericType(type).GetConstructors().First();
                return constructorInfo.Invoke(new object[] { });
            }
        }
    }
Wow, there is a lot more going on here! You are right, because here I make the base test fixture responsible for instantiating the subject under test, including providing mocks for any dependencies that it may have. So it is an auto-mocker as well; the interesting part is that it puts a reference to the injected mocks in a collection that you can access inside your tests by using the OnDependency&lt;TType&gt; method, which returns a Moq object.
    namespace Test.Fohjin.DDD.Scenarios.Receiving_money_transfer
    {
        public class When_receiving_a_money_transfer
            : BaseTestFixture<MoneyReceiveService>
        {
            protected override void SetupDependencies()
            {
                OnDependency<IReportingRepository>()
                    .Setup(x => x.GetByExample<AccountReport>(
                        It.IsAny<object>()))
                    .Returns(new List<AccountReport> {
                        new AccountReport(
                            Guid.NewGuid(),
                            Guid.NewGuid(),
                            "AccountName",
                            "target account number") });
            }

            protected override void When()
            {
                SubjectUnderTest.Receive(
                    new MoneyTransfer(
                        "source account number",
                        "target account number",
                        123.45M));
            }

            [Then]
            public void Then_the_newly_created_account_will_be_saved()
            {
                OnDependency<IBus>().Verify(
                    x => x.Publish(It.IsAny<ReceiveMoneyTransferCommand>()));
            }
        }
    }
So the first thing you see is the method SetupDependencies, which requests the mock object for the injected type IReportingRepository and uses the Moq way of setting up the mock object. This is only needed when your specification needs the mocks to be set up in a specific way. I intentionally separated SetupDependencies from Given, as they may be two different things. And in the actual test you see the usage of the OnDependency method again, where it is being used to verify that something was indeed triggered on the injected class.
Now, indeed, this is not really treating the subject under test as a black box; for example, in the When I make a direct call to a method on the subject under test. So my test knows about this, meaning that when it changes I need to change this test as well. So let's take a look at where I go a bit further into the black-box mentality.
The PresenterTestFixture<TPresenter>
Here I am not going to show you the code of the PresenterTestFixture implementation, as it is almost identical to the previous base test fixture. So let's go straight to an actual specification:
1 namespace Test.Fohjin.DDD.Scenarios.Adding_a_new_client
2 {
3 public class When_the_phone_number_of_the_new_client_is_saved
4 : PresenterTestFixture<ClientDetailsPresenter>
5 {
6 private object CreateClientCommand;
7
8 protected override void SetupDependencies()
9 {
10 OnDependency<IPopupPresenter>()
11 .Setup(x => x.CatchPossibleException(It.IsAny<Action>()))
12 .Callback<Action>(x => x());
13
14 OnDependency<IBus>()
15 .Setup(x => x.Publish(It.IsAny<object>()))
16 .Callback<object>(x => CreateClientCommand = x);
17 }
18
19 protected override void Given()
20 {
21 Presenter.SetClient(null);
22 Presenter.Display();
23 On<IClientDetailsView>().ValueFor(
24 x => x.ClientName).IsSetTo("New Client Name");
25 On<IClientDetailsView>().FireEvent(
26 x => x.OnFormElementGotChanged += null);
27 On<IClientDetailsView>().FireEvent(
28 x => x.OnSaveNewClientName += null);
29
30 On<IClientDetailsView>().ValueFor(
31 x => x.Street).IsSetTo("Street");
32 On<IClientDetailsView>().ValueFor(
33 x => x.StreetNumber).IsSetTo("123");
34 On<IClientDetailsView>().ValueFor(
35 x => x.PostalCode).IsSetTo("5000");
36 On<IClientDetailsView>().ValueFor(
37 x => x.City).IsSetTo("Bergen");
38 On<IClientDetailsView>().FireEvent(
39 x => x.OnFormElementGotChanged += null);
40 On<IClientDetailsView>().FireEvent(
41 x => x.OnSaveNewAddress += null);
42
43 On<IClientDetailsView>().ValueFor(
44 x => x.PhoneNumber).IsSetTo("1234567890");
45 On<IClientDetailsView>().FireEvent(
46 x => x.OnFormElementGotChanged += null);
47 }
48
49 protected override void When()
50 {
51 On<IClientDetailsView>().FireEvent(
52 x => x.OnSaveNewPhoneNumber += null);
53 }
54
55 [Then]
56 public void Then_the_save_button_will_be_disabled()
57 {
58 On<IClientDetailsView>().VerifyThat.Method(
59 x => x.DisableSaveButton()).WasCalled();
60 }
61
62 [Then]
63 public void Then_create_client_command_will_be_published()
64 {
65 On<IBus>().VerifyThat.Method(
66 x => x.Publish(It.IsAny<CreateClientCommand>())).WasCalled();
67
68 CreateClientCommand.As<CreateClientCommand>()
69 .ClientName.WillBe("New Client Name");
70 CreateClientCommand.As<CreateClientCommand>()
71 .Street.WillBe("Street");
72 CreateClientCommand.As<CreateClientCommand>()
73 .StreetNumber.WillBe("123");
74 CreateClientCommand.As<CreateClientCommand>()
75 .PostalCode.WillBe("5000");
76 CreateClientCommand.As<CreateClientCommand>()
77 .City.WillBe("Bergen");
78 CreateClientCommand.As<CreateClientCommand>()
79 .PhoneNumber.WillBe("1234567890");
80 }
81
82 [Then]
83 public void Then_overview_panel_will_be_shown()
84 {
85 On<IClientDetailsView>().VerifyThat.Method(
86 x => x.Close()).WasCalled();
87 }
88 }
89 }
Again there is the setting up of a dependency at the beginning, and then there is the Given method.
Presenter is in this case the subject under test, so you can see that the specification still knows about
the SUT in the Given, but you will notice that it is not used in either the When or the Then.
Hey, what is that On thing in there? Well, that is a very small DSL wrapping the Moq API. I did this to
make it slightly more readable, and in this case it is well adapted to working with a view and a
presenter. In the Given I am setting up my IClientDetailsView with the correct data, but I am also
simulating that an event was triggered. This is not logic that the test is concerned about; all we do
here is bring the view and the presenter into the correct state for this particular specification. So instead
of setting these things directly on the presenter, this is all directed from the view.
Then in the When we again fire an event from the view, but in this case it is going to trigger the
behavior on the presenter that we want to verify. And in the Then methods we verify on the view
that the presenter actually did the correct things, but we also verify on other dependencies that
the correct methods were called, in this case the IBus.
I am not completely happy with the mini DSL yet, but I think it is cleaner than the default Moq API.
Just for those that are curious, here is the mini DSL which gets returned by the On method:
1 namespace Test.Fohjin.DDD
2 {
3 public class MockDsl<TType> where TType : class
4 {
5 private readonly IDictionary<Type, object> _mocks;
6
7 public MockDsl(IDictionary<Type, object> mocks)
8 {
9 _mocks = mocks;
10 }
11
12 public ValueSetter<TType, TProperty> ValueFor<TProperty>(
13 Expression<Func<TType, TProperty>> selector)
14 {
15 return new ValueSetter<TType, TProperty>(_mocks, selector);
16 }
17
18 public void FireEvent(Action<TType> fieldSelector)
19 {
20 if (!_mocks.ContainsKey(typeof(TType)))
21 throw new Exception(
22 string.Format(
23 "The requested dependency '{0}' is not specified",
24 typeof(TType).FullName));
25
26 var mock = (Mock<TType>)_mocks[typeof(TType)];
27 mock.Raise(fieldSelector);
28 }
29
30 public Verifier<TType> VerifyThat {
31 get {
32 return new Verifier<TType>(_mocks); } }
33 }
34
35 public class Verifier<TType> where TType : class
36 {
37 private readonly IDictionary<Type, object> _mocks;
38
39 public Verifier(IDictionary<Type, object> mocks)
40 {
41 _mocks = mocks;
42 }
43
44 public void ValueIsSetFor(Action<TType> selector)
45 {
46 if (!_mocks.ContainsKey(typeof(TType)))
47 throw new Exception(
48 string.Format(
49 "The requested dependency '{0}' is not specified",
50 typeof(TType).FullName));
51
52 var mock = (Mock<TType>)_mocks[typeof(TType)];
53 mock.VerifySet(selector);
54 }
55
56 public MethodVerifier<TType> Method(
57 Expression<Action<TType>> selector)
58 {
59 return new MethodVerifier<TType>(_mocks, selector);
60 }
61 }
62
63 public class MethodVerifier<TType> where TType : class
64 {
65 private readonly IDictionary<Type, object> _mocks;
66 private readonly Expression<Action<TType>> _fieldSelector;
67
68 public MethodVerifier(
69 IDictionary<Type, object> mocks,
70 Expression<Action<TType>> fieldSelector)
71 {
72 _mocks = mocks;
73 _fieldSelector = fieldSelector;
74 }
75
76 public void WasCalled()
77 {
78 if (!_mocks.ContainsKey(typeof(TType)))
79 throw new Exception(
80 string.Format(
81 "The requested dependency '{0}' is not specified",
82 typeof(TType).FullName));
83
84 var mock = (Mock<TType>)_mocks[typeof(TType)];
85 mock.Verify(_fieldSelector);
86 }
87 }
88
89 public class ValueSetter<TType, TProperty> where TType : class
90 {
91 private readonly IDictionary<Type, object> _mocks;
92 private readonly Expression<Func<TType, TProperty>>
93 _fieldSelector;
94
95 public ValueSetter(
96 IDictionary<Type, object> mocks,
97 Expression<Func<TType, TProperty>> fieldSelector)
98 {
99 _mocks = mocks;
100 _fieldSelector = fieldSelector;
101 }
102
103 public void IsSetTo(TProperty value)
104 {
105 if (!_mocks.ContainsKey(typeof(TType)))
106 throw new Exception(
107 string.Format(
108 "The requested dependency '{0}' is not specified",
109 typeof(TType).FullName));
110
111 var mock = (Mock<TType>)_mocks[typeof(TType)];
112 mock.SetupGet(_fieldSelector).Returns(value);
113 }
114 }
115 }
CQRS and Event Sourcing
By combining CQRS and Event Sourcing we get an architecture that is very suitable for black box
testing its behavior, which was a real eye opener when Greg demonstrated it to me. He says that the
way to bring your aggregate root back into the desired state is to play back the events that are needed
to do so. Then you can execute the behavior on the aggregate root, and finally, to actually verify your
behavior, you retrieve the published events and verify that they are as expected.
Now the beauty of this is that the setup and the verification will work on any aggregate root,
because we are using the IEventProvider interface that they all implement. The only actual
knowledge about the aggregate root that remains is the specific behavior that you trigger.
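To make the three steps concrete, here is a hedged sketch of testing an aggregate root directly, the way Greg demonstrated it (the Withdrawl behavior method name is my guess here; the real signatures live in the example code):

```csharp
// Sketch: black box testing an aggregate root via events only.
var account = new ActiveAccount();

// 1. Play back historical events to bring the aggregate root into state.
account.LoadFromHistory(new IDomainEvent[]
{
    new AccountOpenedEvent(Guid.NewGuid(), Guid.NewGuid(), "AccountName", "1234567890"),
    new CashDepositedEvent(20, 20)
});

// 2. Execute the behavior under test (hypothetical method name).
account.Withdrawl(5);

// 3. Verify the behavior by inspecting the newly published events.
var changes = account.GetChanges().ToList();
if (!(changes.Last() is CashWithdrawnEvent))
    throw new Exception("Expected a CashWithdrawnEvent");
```

Notice that steps 1 and 3 never look at the aggregate root's internal state, only at events.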
But I went a little bit further than what was shown during the course: I am saying that instead of
executing the behavior on the aggregate root directly, we could also just provide the command that
should trigger this behavior. The command would be handled by a specific command handler,
which in turn would execute the domain behavior.
Below is the command test fixture that allows me to do just that; I need to provide the
actual command, command handler and aggregate root types that are to be used in the specification.
1 namespace Test.Fohjin.DDD
2 {
3 [Specification]
4 public abstract class CommandTestFixture<
5 TCommand,
6 TCommandHandler,
7 TAggregateRoot>
8 where TCommand : class, ICommand
9 where TCommandHandler : class, ICommandHandler<TCommand>
10 where TAggregateRoot : class, IOrginator,
11 IEventProvider<IDomainEvent>, new()
12 {
13 private IDictionary<Type, object> mocks;
14
15 protected TAggregateRoot AggregateRoot;
16 protected ICommandHandler<TCommand> CommandHandler;
17 protected Exception CaughtException;
18 protected IEnumerable<IDomainEvent> PublishedEvents;
19 protected virtual void SetupDependencies() { }
20 protected virtual IEnumerable<IDomainEvent> Given()
21 {
22 return new List<IDomainEvent>();
23 }
24 protected virtual void Finally() { }
25 protected abstract TCommand When();
26
27 [Given]
28 public void Setup()
29 {
30 mocks = new Dictionary<Type, object>();
31 CaughtException = new ThereWasNoExceptionButOneWasExpectedException();
32 AggregateRoot = new TAggregateRoot();
33 AggregateRoot.LoadFromHistory(Given());
34
35 CommandHandler = BuildCommandHandler();
36
37 SetupDependencies();
38 try
39 {
40 CommandHandler.Execute(When());
41 PublishedEvents = AggregateRoot.GetChanges();
42 }
43 catch (Exception exception)
44 {
45 CaughtException = exception;
46 }
47 finally
48 {
49 Finally();
50 }
51 }
52
53 public Mock<TType> OnDependency<TType>() where TType : class
54 {
55 return (Mock<TType>)mocks[typeof(TType)];
56 }
57
58 private ICommandHandler<TCommand> BuildCommandHandler()
59 {
60 var constructorInfo = typeof(TCommandHandler)
61 .GetConstructors().First();
62
63 foreach (var parameter in constructorInfo.GetParameters())
64 {
65 if (parameter.ParameterType == typeof(
66 IDomainRepository<IDomainEvent>))
67 {
68 var repositoryMock = new Mock<IDomainRepository<
69 IDomainEvent>>();
70 repositoryMock.Setup(
71 x => x.GetById<TAggregateRoot>(It.IsAny<Guid>()))
72 .Returns(AggregateRoot);
73 repositoryMock.Setup(
74 x => x.Add(It.IsAny<TAggregateRoot>()))
75 .Callback<TAggregateRoot>(
76 x => AggregateRoot = x);
77 mocks.Add(parameter.ParameterType, repositoryMock);
78 continue;
79 }
80
81 mocks.Add(
82 parameter.ParameterType,
83 CreateMock(parameter.ParameterType));
84 }
85
86 return (ICommandHandler<TCommand>)constructorInfo
87 .Invoke(mocks.Values.Select(
88 x => ((Mock) x).Object).ToArray());
89 }
90
91 private static object CreateMock(Type type)
92 {
93 var constructorInfo = typeof (Mock<>)
94 .MakeGenericType(type).GetConstructors().First();
95 return constructorInfo.Invoke(new object[]{});
96 }
97 }
98
99 public class ThereWasNoExceptionButOneWasExpectedException
100 : Exception {}
101 }
Please note that the Given method now returns an IEnumerable<IDomainEvent>; this is used to
provide the events that are needed to bring the aggregate root into the correct state for the
specification. This uses the exact same mechanism the actual code uses to make state changes in
the aggregate root, so there cannot be a case where you are testing your aggregate root in a state that
it cannot get into.
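That replay mechanism deserves a second look. Inside a base aggregate root, LoadFromHistory can be little more than dispatching each historical event to the handler registered for its type, without recording it as a new change (a sketch of the idea, not the literal base class from the example):

```csharp
// Assumed shape of the replay logic in a BaseAggregateRoot-like class.
private readonly Dictionary<Type, Action<IDomainEvent>> _registeredEvents =
    new Dictionary<Type, Action<IDomainEvent>>();

public void LoadFromHistory(IEnumerable<IDomainEvent> domainEvents)
{
    foreach (var domainEvent in domainEvents)
    {
        // Route the event to the handler registered via RegisterEvent<TEvent>,
        // mutating internal state exactly as production code would...
        _registeredEvents[domainEvent.GetType()](domainEvent);
        // ...but deliberately not adding it to the uncommitted changes,
        // so GetChanges() only returns events produced by new behavior.
    }
}
```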
The When method returns the command to execute, so all you do there is create the command with the
correct information and return it to the specification.
Then in the try catch block you may have noticed that a command handler executes the provided
command and that afterwards the events are retrieved from the aggregate root. These events are
what you verify to make sure your domain behavior is correct.
But this may all sound very abstract, so let's look at a simple specification and see how clean and
readable it really is:
1 namespace Test.Fohjin.DDD.Scenarios.Withdrawing_cash
2 {
3 public class When_withdrawing_cash : CommandTestFixture<
4 WithdrawlCashCommand,
5 WithdrawlCashCommandHandler,
6 ActiveAccount>
7 {
8 protected override IEnumerable<IDomainEvent> Given()
9 {
10 yield return new AccountOpenedEvent(
11 Guid.NewGuid(),
12 Guid.NewGuid(),
13 "AccountName",
14 "1234567890");
15 yield return new CashDepositedEvent(20, 20);
16 }
17
18 protected override WithdrawlCashCommand When()
19 {
20 return new WithdrawlCashCommand(Guid.NewGuid(), 5);
21 }
22
23 [Then]
24 public void Then_a_cash_withdrawn_event_will_be_published()
25 {
26 PublishedEvents.Last().WillBeOfType<CashWithdrawnEvent>();
27 }
28
29 [Then]
30 public void Then_the_event_will_contain_the_amount_and_balance()
31 {
32 PublishedEvents.Last<CashWithdrawnEvent>().Balance.WillBe(15);
33 PublishedEvents.Last<CashWithdrawnEvent>().Amount.WillBe(5);
34 }
35 }
36 }
So you provide historical events to bring the aggregate root into the expected state, you fire off the
command, and then you verify the published events to ensure your domain behavior is correct. If you
choose the correct naming for your events and commands, then a business person would be able to
understand the specification, especially if you parse the text and do a little bit of formatting.
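That little bit of formatting can be as simple as replacing the underscores in the class and method names (a minimal sketch, assuming the naming convention used throughout these specifications):

```csharp
// Turn a specification class or [Then] method name into readable text.
static string Humanize(string specName)
{
    return specName.Replace('_', ' ');
}

// "When_withdrawing_cash" becomes "When withdrawing cash", and
// "Then_a_cash_withdrawn_event_will_be_published" becomes
// "Then a cash withdrawn event will be published".
```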
Where is Should?
Hey, what are all those WillBe and WillBeOfType things that I see in your specifications? Should they
not be ShouldBe and ShouldBeOfType? Well, I used to think so as well, until I attended a presentation
by Kevlin Henney at NDC where he explained that Should is not specific enough. Should indicates
that it might not happen. I like to use the example: “I should really do the dishes, but I won't”. By
using WillBe and Must you are dictating what will or must happen; it's not a question anymore.
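The WillBe and WillBeOfType helpers themselves are not shown in this chapter; they could be implemented as small extension methods that throw on a mismatch (a hedged sketch, the real implementation may differ):

```csharp
public static class SpecificationExtensions
{
    // Dictates the expected value: actual.WillBe(expected).
    public static void WillBe<T>(this T actual, T expected)
    {
        if (!Equals(actual, expected))
            throw new Exception(
                string.Format("Expected '{0}' but was '{1}'", expected, actual));
    }

    // Dictates the expected type: theEvent.WillBeOfType<CashWithdrawnEvent>().
    public static void WillBeOfType<TExpected>(this object actual)
    {
        if (!(actual is TExpected))
            throw new Exception(
                string.Format(
                    "Expected type '{0}' but was '{1}'",
                    typeof(TExpected).FullName, actual.GetType().FullName));
    }
}
```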
Finally
I am completely taken by this approach, and as you can see you don't need a big BDD framework for
it. I think using something like this gives a good learning experience before moving on to an actual
BDD framework like MSpec. Also, Uncle Bob just wrote a good post about not abusing the Given
When Then approach, and also take a look at the Mocks Aren't Stubs article by Martin Fowler.
By now you must <grin> understand that I like to throw code examples around, so yes, the chapter is
very long, but I hope it provides enough value instead of just confusion.
Using conventions with Passive View
I was reading Ayende's blog post about building a UI based on conventions and thought: hey, I have
something similar in my CQRS example. And since this is the least interesting part of the whole
example I guess it would be missed by many, and I can't let that happen.
Passive View
The example contains a Win Forms application that is built according to the Passive View
pattern, so my actual forms are dumbed down to simple views without any logic in them. Well,
almost no logic. Then I have presenters that contain the actual logic, or delegate the logic to
other responsible entities (services or whatever).
The reason for doing this is that you want to be able to test the behavioral parts of your code as
simply as possible, and nothing is simpler than being able to unit test your behavior; even better,
drive your design / behavior through Test-Driven Development (TDD). Imagine how hard that would
be to do without abstracting these different responsibilities from each other (hehe, I am sure some of
you don't even need to imagine this ;) ).
I realize that Win Forms is not exactly in fashion anymore and that nobody uses it, but perhaps what
I show you here makes you think about other areas that this could be applied to.
The View
The view is responsible for displaying information to a user, capturing user requests, and … uhm no
that’s it. Let’s take a look at a simple view.
1 namespace Fohjin.DDD.BankApplication.Views
2 {
3 public partial class ClientSearchForm : Form, IClientSearchFormView
4 {
5 public ClientSearchForm()
6 {
7 InitializeComponent();
8 RegisterClientEvents();
9 }
10
11 public event EventAction OnCreateNewClient;
12 public event EventAction OnOpenSelectedClient;
13
14 private void RegisterClientEvents()
15 {
16 addANewClientToolStripMenuItem.Click +=
17 (s, e) => OnCreateNewClient();
18 _clients.Click += (s, e) => OnOpenSelectedClient();
19 }
20
21 public IEnumerable<ClientReport> Clients
22 {
23 get { return (IEnumerable<ClientReport>)_clients.DataSource; }
24 set { _clients.DataSource = value; }
25 }
26
27 public ClientReport GetSelectedClient()
28 {
29 return (ClientReport)_clients.SelectedItem;
30 }
31 }
32 }
So what is happening here? As you can see, there are two public events declared and there are two
properties that provide access to some form controls. The interesting thing here is that the two public
events are wired to events from two form controls. But hey, nothing is happening: no behavior, no
calls to services or anything. Below is the declaration of the interface that the form implements.
1 namespace Fohjin.DDD.BankApplication.Views
2 {
3 public interface IClientSearchFormView : IView
4 {
5 IEnumerable<ClientReport> Clients { get; set; }
6 ClientReport GetSelectedClient();
7 event EventAction OnCreateNewClient;
8 event EventAction OnOpenSelectedClient;
9 }
10 }
Here are both the two public events and the two properties that provide access to the two form
controls; of course the implementation could be anything. This interface is used by the presenter to
control the view.
The Presenter
So the presenter is responsible for controlling the view, which includes setting and retrieving data
and executing behavior that is triggered by the user of the view. Below is the presenter that is
responsible for managing the previously mentioned view.
1 namespace Fohjin.DDD.BankApplication.Presenters
2 {
3 public class ClientSearchFormPresenter
4 : Presenter<IClientSearchFormView>, IClientSearchFormPresenter
5 {
6 private readonly IClientSearchFormView _clientSearchFormView;
7
8 public ClientSearchFormPresenter(
9 IClientSearchFormView clientSearchFormView)
10 : base(clientSearchFormView)
11 {
12 _clientSearchFormView = clientSearchFormView;
13 }
14
15 public void CreateNewClient()
16 {
17 // Do something
18 }
19
20 public void OpenSelectedClient()
21 {
22 // Do something
23 }
24
25 public void Display()
26 {
27 LoadData();
28 try
29 {
30 _clientSearchFormView.ShowDialog();
31 }
32 finally
33 {
34 _clientSearchFormView.Dispose();
35 }
36 }
37
38 private void LoadData()
39 {
40 // Do something
41 }
42 }
43 }
For simplicity I removed the other external dependencies and replaced the behavior with a
“Do something” comment. Talking about external dependencies: the interface that is implemented by
the view is injected into the presenter, and this is interesting.
Look at the Display method; there you see that the injected view first gets its data loaded and
then actually gets activated. This means that the view will not get its own data, but that the presenter
will provide it to the view; for this the presenter uses the two properties that were declared in
the interface. It also means that the presenter is responsible for activating the view. This is different
from how Win Forms works out of the box.
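As a concrete illustration, a LoadData body in this style would push data through the view's properties rather than let the view fetch anything itself (a sketch; the _reportingRepository dependency is my assumption here, matching the IReportingRepository used in the specifications later on):

```csharp
// Hypothetical body for LoadData in ClientSearchFormPresenter:
// the presenter retrieves the data and hands it to the passive view.
private void LoadData()
{
    _clientSearchFormView.Clients =
        _reportingRepository.GetByExample<ClientReport>(null);
}
```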
For this to work I had to change the Program class, here take a look:
1 namespace Fohjin.DDD.BankApplication
2 {
3 static class Program
4 {
5 /// <summary>
6 /// The main entry point for the application.
7 /// </summary>
8 [STAThread]
9 static void Main()
10
11 {
12 ApplicationBootStrapper.BootStrap();
13
14 var clientSearchFormPresenter =
15 ObjectFactory.GetInstance<IClientSearchFormPresenter>();
16
17 Application.EnableVisualStyles();
18
19 clientSearchFormPresenter.Display();
20 }
21 }
22 }
Look at line 19 in there: the method Display is called on the presenter, which then prepares and
activates the view.
But I can hear you think: what about those two events that were also declared on that interface, you
know, OnCreateNewClient and OnOpenSelectedClient? I can see some obvious candidates,
CreateNewClient and OpenSelectedClient, but they are not wired up together. What gives?
The Magic
You must have noticed that the names are very similar and that they follow a certain pattern; this is
the convention that I have chosen to use. Basically I name my event handlers the same as the events,
minus the “On” prefix. Then I have a base presenter class, without the word “Base” because that
would be EVIL, and this class wires up the events with the event handlers for me.
1 namespace Fohjin.DDD.BankApplication.Presenters
2 {
3 public abstract class Presenter<TView> where TView : class, IView
4 {
5 protected Presenter(TView view)
6 {
7 HookUpViewEvents(view);
8 }
9
10 private void HookUpViewEvents(TView view)
11 {
12 var viewDefinedEvents = GetViewDefinedEvents();
13 var viewEvents = GetViewEvents(view, viewDefinedEvents);
14 var presenterEventHandlers =
15 GetPresenterEventHandlers(viewDefinedEvents, this);
16
17 foreach (var viewDefinedEvent in viewDefinedEvents)
18 {
19 var eventInfo = viewEvents[viewDefinedEvent];
20 var methodInfo = GetTheEventHandler(
21 viewDefinedEvent, presenterEventHandlers, eventInfo);
22
23 WireUpTheEventAndEventHandler(
24 view,
25 eventInfo,
26 methodInfo);
27 }
28 }
29
30 private MethodInfo GetTheEventHandler(
31 string viewDefinedEvent,
32 IDictionary<string, MethodInfo> presenterEventHandlers,
33 EventInfo eventInfo)
34 {
35 var substring = viewDefinedEvent.Substring(2);
36 if (!presenterEventHandlers.ContainsKey(substring))
37 throw new Exception(
38 string.Format(
39 "\n\nThere is no event handler for event '{0}' "+
40 "on presenter '{1}' expected '{2}'\n\n",
41 eventInfo.Name,
42 GetType().FullName,
43 substring));
44
45 return presenterEventHandlers[substring];
46 }
47
48 private void WireUpTheEventAndEventHandler(
49 TView view,
50 EventInfo eventInfo,
51 MethodInfo methodInfo)
52 {
53 var newDelegate = Delegate.CreateDelegate(
54 typeof(EventAction),
55 this,
56 methodInfo);
57 eventInfo.AddEventHandler(view, newDelegate);
58 }
59
60 private static IDictionary<string, MethodInfo>
61 GetPresenterEventHandlers<TPresenter>(
62 ICollection<string> actionProperties,
63 TPresenter presenter)
64 {
65 return presenter
66 .GetType()
67 .GetMethods(BindingFlags.Instance | BindingFlags.Public)
68 .Where(x => Contains(actionProperties, x))
69 .ToList()
70 .Select(x => new KeyValuePair<string, MethodInfo>(
71 x.Name,
72 x))
73 .ToDictionary(x => x.Key, x => x.Value);
74 }
75
76 private static List<string> GetViewDefinedEvents()
77 {
78 return typeof(TView).GetEvents().Select(x => x.Name).ToList();
79 }
80
81 private static IDictionary<string, EventInfo> GetViewEvents(
82 TView view,
83 ICollection<string> actionProperties)
84 {
85 return view
86 .GetType()
87 .GetEvents()
88 .Where(x => Contains(actionProperties, x))
89 .ToList()
90 .Select(x => new KeyValuePair<string, EventInfo>(
91 x.Name,
92 x))
93 .ToDictionary(x => x.Key, x => x.Value);
94 }
95
96 private static bool Contains(
97 ICollection<string> actionProperties,
98 EventInfo x)
99 {
100 return actionProperties.Contains(x.Name);
101 }
102
103 private static bool Contains(
104 ICollection<string> actionProperties,
105 MethodInfo x)
106 {
107 return actionProperties.Contains(
108 string.Format("On{0}", x.Name));
109 }
110 }
111 }
The abstract base class Presenter takes the view interface of the view that the presenter controls as
its generic parameter. A reference to the actual view is injected into the presenter class. The first
thing that happens is that I get all the events declared on the provided interface. Then I get the
actual events from the provided view, but only those that have been defined on the provided
interface. This makes it possible to define multiple interfaces on the same view, each controlled by
a different presenter. After this I get all the public methods from the presenter.
Once I have the events from the view and the methods from the presenter, I can start matching
them together. As I mentioned before, in my case I use a very simple convention where the event
name is prefixed with “On”, so in order to get the event handler I only need to search my event
handler collection for the name of the event minus “On”. Then finally the event handler gets added
to the event's collection of event handlers.
When there is no event handler for a provided event I throw an exception, because I consider this
to be a bug. There might be event handlers that do not have an event associated with them, but I
consider that less harmful since this logic would not be called anyway. This exception will be visible
in the unit tests for the presenter.
Reflection
Yes, this relies very heavily on reflection, but for this scenario I don't mind. Indeed it is slower, but
the question is: will you notice this when displaying a form? I don't think you will. You could
improve this code by, for example, turning WireUpTheEventAndEventHandler into an action and
caching those per combination of presenter and view interface, but I don't think that is worth the effort.
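If you did want that optimization, the wiring could be cached per presenter type so the reflection cost is paid only once per combination (a sketch of the idea; BuildWiring stands in for the existing reflection lookup code):

```csharp
// Cache the reflected event/handler pairs per presenter type.
private static readonly Dictionary<Type, List<Tuple<EventInfo, MethodInfo>>>
    _wiringCache = new Dictionary<Type, List<Tuple<EventInfo, MethodInfo>>>();

private void HookUpViewEvents(TView view)
{
    List<Tuple<EventInfo, MethodInfo>> wiring;
    lock (_wiringCache)
    {
        if (!_wiringCache.TryGetValue(GetType(), out wiring))
        {
            // Reflection only runs for the first instance of each presenter;
            // BuildWiring is assumed to wrap the existing matching code.
            wiring = BuildWiring();
            _wiringCache.Add(GetType(), wiring);
        }
    }

    foreach (var pair in wiring)
        WireUpTheEventAndEventHandler(view, pair.Item1, pair.Item2);
}
```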
The Specifications
I am going to leave you with the specifications that I have for the presenter used in this post. The
whole presenter is tested by setting data on the view and triggering events; I am not calling the
methods on the presenter directly. And if you want to see more, then get the code from GitHub.
1 namespace Test.Fohjin.DDD.Scenarios.Opening_the_bank_application
2 {
3 public class When_in_the_GUI_openeing_the_bank_application
4 : PresenterTestFixture<ClientSearchFormPresenter>
5 {
6 private List<ClientReport> _clientReports;
7
8 protected override void SetupDependencies()
9 {
10 _clientReports = new List<ClientReport> {
11 new ClientReport(Guid.NewGuid(), "Client Name") };
12 OnDependency<IReportingRepository>()
13 .Setup(x => x.GetByExample<ClientReport>(null))
14 .Returns(_clientReports);
15 }
16
17 protected override void When()
18 {
19 Presenter.Display();
20 }
21
22 [Then]
23 public void Then_show_dialog_will_be_called_on_the_view()
24 {
25 On<IClientSearchFormView>().VerifyThat.Method(
26 x => x.ShowDialog()).WasCalled();
27 }
28
29 [Then]
30 public void Then_data_from_the_repository_is_loaded_in_view()
31 {
32 On<IClientSearchFormView>().VerifyThat.ValueIsSetFor(
33 x => x.Clients = _clientReports);
34 }
35 }
36 }
1 namespace Test.Fohjin.DDD.Scenarios.Adding_a_new_client
2 {
3 public class When_in_the_GUI_adding_a_new_client
4 : PresenterTestFixture<ClientSearchFormPresenter>
5 {
6 protected override void When()
7 {
8 On<IClientSearchFormView>().FireEvent(
9 x => x.OnCreateNewClient += delegate { });
10 }
11
12 [Then]
13 public void Then_data_from_the_repository_is_loaded_in_view()
14 {
15 On<IClientDetailsPresenter>().VerifyThat.Method(
16 x => x.SetClient(null)).WasCalled();
17 }
18
19 [Then]
20 public void Then_display_will_be_called_on_the_view()
21 {
22 On<IClientDetailsPresenter>().VerifyThat.Method(
23 x => x.Display()).WasCalled();
24 }
25 }
26 }
1 namespace Test.Fohjin.DDD.Scenarios.Displaying_client_details
2 {
3 public class When_in_the_GUI_opening_an_existing_client
4 : PresenterTestFixture<ClientSearchFormPresenter>
5 {
6 private ClientReport _clientReport;
7
8 protected override void SetupDependencies()
9 {
10 OnDependency<IPopupPresenter>()
11 .Setup(x => x.CatchPossibleException(It.IsAny<Action>()))
12 .Callback<Action>(x => x());
13
14 _clientReport = new ClientReport(
15 Guid.NewGuid(),
16 "Client Name");
17
18 OnDependency<IClientSearchFormView>()
19 .Setup(x => x.GetSelectedClient())
20 .Returns(_clientReport);
21 }
22
23 protected override void When()
24 {
25 On<IClientSearchFormView>().FireEvent(
26 x => x.OnOpenSelectedClient += delegate { });
27 }
28
29 [Then]
30 public void Then_get_selected_client_will_be_called_on_the_view()
31 {
32 On<IClientSearchFormView>().VerifyThat.Method(
33 x => x.GetSelectedClient()).WasCalled();
34 }
35
36 [Then]
37 public void Then_data_from_the_repository_is_loaded_in_view()
38 {
39 On<IClientDetailsPresenter>().VerifyThat.Method(
40 x => x.SetClient(_clientReport)).WasCalled();
41 }
42
43 [Then]
44 public void Then_display_will_be_called_on_the_view()
45 {
46 On<IClientDetailsPresenter>().VerifyThat.Method(
47 x => x.Display()).WasCalled();
48 }
49 }
50 }
The Ubiquitous Language is not Ubiquitous
I attended an interesting Domain-Driven Design talk today, given by Janniche Haugen, about why
you would want to use Domain-Driven Design in a project, and this presentation is what triggered
this post.
Let's first look at the definition of ubiquitous: being present everywhere at once. Hmm, that sounds a
bit vague; here is a definition of ubiquitous language: a language structured around the domain model
and used by all team members to connect all activities of the team with the software. Ah, that is
better. So the ubiquitous language is a common language shared by the domain experts, the
developers and the code.
But it is not a common language throughout all of the domain and code; this is one of the reasons
why we have different bounded contexts: the same language may mean different things to different
domain experts. Think for example about a shipping company: the meaning of the word ship is
completely different when talking to accounting versus maintenance. For accounting a ship is an
asset that depreciates in value over time, but for maintenance a ship is an object that needs service
every x nautical miles.
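In code, separate bounded contexts let the same word live as two unrelated types, each shaped by its own domain experts (a purely illustrative sketch):

```csharp
namespace Shipping.Accounting
{
    // To accounting, a ship is an asset that depreciates over time.
    public class Ship
    {
        public decimal PurchaseValue { get; set; }

        public decimal BookValue(int ageInYears, decimal yearlyDepreciation)
        {
            return PurchaseValue - (ageInYears * yearlyDepreciation);
        }
    }
}

namespace Shipping.Maintenance
{
    // To maintenance, a ship is something needing service every x nautical miles.
    public class Ship
    {
        public double NauticalMilesSinceLastService { get; set; }
        public double ServiceInterval { get; set; }

        public bool IsDueForService()
        {
            return NauticalMilesSinceLastService >= ServiceInterval;
        }
    }
}
```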
The same word has a different meaning when used in a different context, that is not ubiquitous.
Let's continue the small example: domain experts from maintenance also talk about an engine and
rotor blades. But the domain experts in accounting don't use these words at all; they have no meaning
to them.
Some words even have no meaning at all when used in a different context, this as well is not
ubiquitous.
So the Ubiquitous Language is only Ubiquitous within a given Context. What do you think?
Convention over Configuration
Just a quick note on Convention over Configuration; I believe this is one of the more useful practices
that you can apply in your codebase. So what is this all about then? Well, think about everything you
do in life; think for example about “opening a door” or “turning on the water”. We all know exactly
how to do those things, and because of that they are fast actions. They are not something we have to
figure out just before doing them, again and again. These are conventions.
When we start applying these sorts of ideas to our codebase, certain operations need far less thought
because they will be the same each and every time. A good example is MVC mapping controllers
and views together and deriving the URL from the controller name. If you are a developer who
knows these conventions, then you know exactly where to look for the code behind a certain URL.
No need to figure this out each and every time.
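The controller-name convention boils down to a few lines of string manipulation (a simplified sketch of the idea, not the actual ASP.NET MVC routing code):

```csharp
// Derive the URL segment from a controller type name by convention,
// e.g. a hypothetical ClientSearchController maps to "clientsearch".
static string UrlFor(Type controllerType)
{
    const string suffix = "Controller";
    var name = controllerType.Name;
    if (name.EndsWith(suffix))
        name = name.Substring(0, name.Length - suffix.Length);
    return name.ToLowerInvariant();
}
```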
Conventions also bring an opportunity to automate things within your system: because you are
following certain rules in your code, you can write some other code that does something according
to those same rules. Configuration is an area that greatly benefits from conventions. Instead of
having to manually configure how each different part works together, you can write some code that
configures this for you. You can also write tests that verify your conventions.
But frameworks that allow you to configure your own conventions are much more powerful, because
not all conventions make sense in every scenario. FubuMVC is one such framework, where we try
not to be too opinionated (we are) so that you may configure your own conventions.
Now adding a new event to the application is like turning on the water or opening the door.
Trying to make it re-usable
If you have been following the source code changes on GitHub you may have noticed that I renamed
the folder Fohjin.DDD to Fohjin.DDD.Example; my intention is to not make any more changes there.
Instead I have created a new folder next to it, and in there I am rebuilding the same components, but
now with re-use and ease-of-use in mind.
The first thing that is very obvious in the example code is that the domain is not very persistence
ignorant, something that is valued a lot in Domain-Driven Design. So this is what I wanted to address
first. I really like the way NHibernate makes our code persistence ignorant, and I am attempting to
solve this in a similar manner, using Castle DynamicProxy.
Since I am a complete Castle DynamicProxy noob I asked the help of Krzysztof Koźmic, who is an
active committer on the Castle project. Krzysztof was gracious enough to get me through the basics
of .NET proxying. For those interested in this technology I would also recommend reading his
excellent blog series about the same topic.
What I want to achieve is that you get back your POCO, and I am going to show you one suggestion
that I have working now.
For comparison, here is what an aggregate root looks like in my initial example code (it is stripped
down to only contain one behavioral method):
namespace Fohjin.DDD.Domain.Client
{
    public class Client : BaseAggregateRoot<IDomainEvent>, IOrginator
    {
        private Address _address;

        public Client()
        {
            registerEvents();
        }

        public void ClientMoved(Address newAddress)
        {
            Apply(new ClientMovedEvent(
                newAddress.Street,
                newAddress.StreetNumber,
                newAddress.PostalCode,
                newAddress.City));
        }

        IMemento IOrginator.CreateMemento()
        {
            return new ClientMemento(
                Id,
                Version,
                _address.Street,
                _address.StreetNumber,
                _address.PostalCode,
                _address.City);
        }

        void IOrginator.SetMemento(IMemento memento)
        {
            var clientMemento = (ClientMemento) memento;
            Id = clientMemento.Id;
            Version = clientMemento.Version;
            _address = new Address(
                clientMemento.Street,
                clientMemento.StreetNumber,
                clientMemento.PostalCode,
                clientMemento.City);
        }

        private void registerEvents()
        {
            RegisterEvent<ClientMovedEvent>(onNewClientMoved);
        }

        private void onNewClientMoved(ClientMovedEvent clientMovedEvent)
        {
            _address = new Address(
                clientMovedEvent.Street,
                clientMovedEvent.StreetNumber,
                clientMovedEvent.PostalCode,
                clientMovedEvent.City);
        }
    }
}
And here is the same aggregate root, but now more persistence-ignorant:
namespace Fohjin.DDD.Domain.Client
{
    public class Client
    {
        protected virtual void Apply(object @event) { }

        protected IEnumerable<Type> RegisteredEvents()
        {
            yield return typeof(ClientMovedEvent);
        }

        protected virtual Address Address { get; set; }

        public void ClientMoved(Address newAddress)
        {
            Apply(new ClientMovedEvent(newAddress));
        }
    }
}
OK, this may seem a bit strange at first; I mean, we are sending an event into nothing and we don't set
our state anymore. Let me try to explain. :)
Mandatory methods
The protected virtual method Apply and the protected method RegisteredEvents are both mandatory if
you want to instantiate the aggregate root using this approach.
In RegisteredEvents you return all the events that the aggregate root can publish; this lets the
persistence code know what it can expect. So when you add some new behavior that will publish a
new event, you just add that event to the list here. I think you can sort of think of this as the
mapping classes in NHibernate.
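As a sketch of what the persistence side could do with this list, here is a reflective mapping from each registered event type to a conventionally named handler method. The "On" + event-type-name convention and all class names here are hypothetical, used only to illustrate the idea:

```csharp
using System;
using System.Collections.Generic;
using System.Reflection;

public class SampleMovedEvent { }

public class SampleAggregate
{
    public readonly List<string> Handled = new List<string>();

    // The aggregate declares which events it can publish, much like an
    // NHibernate mapping class declares what a class maps to.
    protected IEnumerable<Type> RegisteredEvents()
    {
        yield return typeof(SampleMovedEvent);
    }

    // Hypothetical convention: each event is handled by "On" + event type name.
    protected void OnSampleMovedEvent(SampleMovedEvent e) => Handled.Add(e.GetType().Name);
}

public static class EventMapBuilder
{
    // What the persistence code might do with RegisteredEvents: verify up front
    // that a handler exists for every declared event and build a dispatch map.
    public static IDictionary<Type, MethodInfo> Build(object aggregate)
    {
        const BindingFlags flags = BindingFlags.Instance | BindingFlags.NonPublic;
        var registered = (IEnumerable<Type>)aggregate.GetType()
            .GetMethod("RegisteredEvents", flags)
            .Invoke(aggregate, null);

        var map = new Dictionary<Type, MethodInfo>();
        foreach (var eventType in registered)
        {
            var handler = aggregate.GetType().GetMethod("On" + eventType.Name, flags);
            if (handler == null)
                throw new InvalidOperationException("No handler for " + eventType.Name);
            map[eventType] = handler;
        }
        return map;
    }
}
```

Building the map once, when the aggregate type is first seen, gives you the same early-failure behavior NHibernate mappings do: a missing handler surfaces immediately instead of at replay time.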
The Apply method is used to publish events from the domain behavior, but as you can see it has an
empty method body, so nothing will happen, right? Here we use Castle DynamicProxy to intercept
calls to this method and replace the (non-existing) behavior with our persistence logic. Where this
logic comes from I'll explain in a minute.
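To illustrate the mechanism, here is a hand-written sketch of what the DynamicProxy-generated subclass effectively does with Apply. In the real code this override is generated at runtime rather than written by hand, and the class names and string-based state below are simplifications of mine:

```csharp
using System;
using System.Collections.Generic;

public class MovedEvent
{
    public readonly string City;
    public MovedEvent(string city) { City = city; }
}

// Simplified aggregate: Apply is empty on purpose, exactly as in the example.
public class MovingClient
{
    protected virtual void Apply(object @event) { }

    public void ClientMoved(string newCity)
    {
        Apply(new MovedEvent(newCity));
    }
}

// Hand-written stand-in for the runtime-generated proxy: because Apply is
// virtual, the override receives every event the domain behavior publishes
// and can hand it to the event store and the state-mapping logic.
public class MovingClientProxy : MovingClient
{
    public readonly List<object> AppliedEvents = new List<object>();

    protected override void Apply(object @event)
    {
        AppliedEvents.Add(@event);  // collected for persistence
        // ...mapping the event onto the externally managed state would happen here...
    }
}
```

This is why the empty body is harmless: domain code always runs against the proxied subclass, so the call to Apply is dispatched to the override, never to the empty implementation.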
Internal state needs to be protected virtual properties
One other difference between this approach and the example code is that instead of private fields or
private properties, the internal state now needs to be in the form of protected virtual properties.
The reason is that state will no longer be managed by the aggregate root itself, but by something
we add using Castle DynamicProxy, and we use interception to map the properties to the internal
representation of the state. The code even throws an exception if the aggregate root tries to set the
internal state directly.
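A hand-written sketch shows why the properties must be protected virtual: only then can a subclass (in the real code, the runtime-generated proxy) override them and redirect reads to externally managed state. The class names, the dictionary-backed state, and the state key are illustrative assumptions:

```csharp
using System;
using System.Collections.Generic;

public class AccountAggregate
{
    // State lives in a protected virtual property, so it is interceptable.
    protected virtual decimal Balance { get; set; }

    // Domain code reads state through the property, never a private field.
    public decimal ReportedBalance() => Balance;
}

public class AccountAggregateProxy : AccountAggregate
{
    private readonly IDictionary<string, object> _externalState;

    public AccountAggregateProxy(IDictionary<string, object> externalState)
    {
        _externalState = externalState;
    }

    protected override decimal Balance
    {
        // Reads are redirected to the externally managed state.
        get => (decimal)_externalState["Balance"];
        // Mirrors the behavior described above: direct writes are rejected,
        // because state changes may only happen through published events.
        set => throw new InvalidOperationException("State is managed by the proxy");
    }
}
```

A private field would defeat this entirely: the proxy has no way to intercept a field read, which is why the state has to move into overridable properties.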
Mix-in magic
So after reading the previous chapter you would probably say: but this aggregate root doesn't
implement anything? This is true, but when our repository instantiates the aggregate root it also
injects two more classes into it: one class that implements the IEventProvider interface and
another class that implements the IOrginator interface. This is called mixing in classes, and by
doing so the aggregate root will implement both interfaces and redirect calls to the mix-in
classes. So as far as our event store knows, it is just a class that implements the needed
interfaces.
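Written out by hand, the mix-in arrangement amounts to something like the following. In the real code Castle DynamicProxy builds this delegation at runtime (via ProxyGenerationOptions.AddMixinInstance) instead of it being hand-coded, and the simplified IEventProvider shape here is my assumption:

```csharp
using System;
using System.Collections.Generic;

// Simplified version of the interface the event store expects.
public interface IEventProvider
{
    IEnumerable<object> GetChanges();
    void Clear();
}

// The mix-in class that actually carries the implementation.
public class EventProviderMixin : IEventProvider
{
    private readonly List<object> _changes = new List<object>();
    public void Record(object @event) => _changes.Add(@event);
    public IEnumerable<object> GetChanges() => _changes;
    public void Clear() => _changes.Clear();
}

// The plain aggregate implements nothing, as in the persistence-ignorant example.
public class PlainClient { }

// Hand-written stand-in for the generated proxy: the aggregate plus the
// mixed-in interface, with every call forwarded to the mix-in instance.
public class PlainClientWithMixin : PlainClient, IEventProvider
{
    private readonly EventProviderMixin _mixin = new EventProviderMixin();
    public void Record(object @event) => _mixin.Record(@event);
    public IEnumerable<object> GetChanges() => _mixin.GetChanges();
    public void Clear() => _mixin.Clear();
}
```

From the event store's point of view the object it receives simply is an IEventProvider; that the implementation lives in a separate mixed-in instance is invisible to it.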
I have been thinking about providing a basic interface that dictates that the Apply and RegisteredEvents
methods need to be present, but I like this approach better, as from the outside these implementation
details would not be known. That said, I could imagine supporting both approaches.
One other limitation is that if you instantiate the aggregate root yourself, nothing will be intercepted
or mixed in, so no state will change, nor will the events be published.
Current state
Currently the code only supports very basic usage, and there are some challenges ahead before it is
truly usable. Think, for example, about other entities in the same aggregate being managed by the
aggregate root and having their events collected as well. Also think about adding and removing entities
from a list, which may need some configurable conventions so you can decide when something needs to
be added or removed.
I am thinking about the ability to provide adapters to handle certain state changes differently
from the conventional way. One that I already see a need for is the ability to set the aggregate
root Id, since this is not managed by the aggregate root but by the event provider mix-in.
Finally
Currently the functionality is very limited, but I have passing tests proving that this works. :) So
what I would like is to get some feedback on this approach. And yes, I realize that this is very
opinionated and will not fit everybody's approach. Having said that, keep the suggestions coming
anyway.