Implementing Domain-Driven Design

Vaughn Vernon


“For software developers of all experience levels looking to improve their results, and design and implement domain-driven enterprise applications consistently with the best current state of professional practice, Implementing Domain-Driven Design will impart a treasure trove of knowledge hard won within the DDD and enterprise application architecture communities over the last couple decades.” –Randy Stafford, Architect At-Large, Oracle Coherence Product Development

“This book is a must-read for anybody looking to put DDD into practice.” –Udi Dahan, Founder of NServiceBus

Implementing Domain-Driven Design presents a top-down approach to understanding domain-driven design (DDD) in a way that fluently connects strategic patterns to fundamental tactical programming tools. Vaughn Vernon couples guided approaches to implementation with modern architectures, highlighting the importance and value of focusing on the business domain while balancing technical considerations.

Building on Eric Evans' seminal book, Domain-Driven Design, the author presents practical DDD techniques through examples from familiar domains. Each principle is backed up by realistic Java examples–all applicable to C# developers–and all content is tied together by a single case study: the delivery of a large-scale Scrum-based SaaS system for a multitenant environment.

The author takes you far beyond “DDD-lite” approaches that embrace DDD solely as a technical toolset, and shows you how to fully leverage DDD's “strategic design patterns” using Bounded Context, Context Maps, and the Ubiquitous Language. Using these techniques and examples, you can reduce time to market and improve quality, as you build software that is more flexible, more scalable, and more tightly aligned to business goals.

Coverage includes:

  • Getting started the right way with DDD, so you can rapidly gain value from it
  • Using DDD within diverse architectures, including Hexagonal, SOA, REST, CQRS, Event-Driven, and Fabric/Grid-Based
  • Appropriately designing and applying Entities, and learning when to use Value Objects instead
  • Mastering DDD's powerful new Domain Events technique
  • Designing Repositories for ORM, NoSQL, and other databases


Mentioned in questions and answers.

I read about DDD and Access Control, and I found some contradiction between the following two opinions:

  • "security concerns should be handled outside the domain"
  • "access control requirements are domain specific"

I am looking for a best practice regarding this. So where should I put the access control logic in domain-driven design, and how should I implement it?

(To be more specific: with DDD + CQRS + ES.)

I think it should be somewhere near the business logic; for example, a user story could be something like this:

The user can edit his profile by sending a user name, a list of hobbies, a CV, etc...

Based on the user story we implement the domain model and the services, for example:

UserService
    editProfile(EditUserProfileCommand command)
        User user = userRepository.getOneById(command.id)
        user.changeName(command.name)
        user.changeHobbies(command.hobbies)
        user.changeCV(command.cv)

UserRepository
    User getOneById(id)

User
    changeName(String name)
    changeHobbies(String[] hobbies)
    changeCV(String cv)

This is okay, but where is the HIS profile part of the story?

This is obviously attribute based access control, because we should write a rule something like this:

deny all, but if subject.id = resource.owner.id then grant access

But where should we enforce this rule, and how should we implement it?

So where should I put the access control logic?

According to this answer: http://programmers.stackexchange.com/a/71883/65755 the policy enforcement point should be right before the call of UserService.editProfile().

I came to the same conclusion: it cannot be in the UI, because with multiple UIs we would duplicate the access control code. It should happen before the domain events are raised, because they indicate that we have already done something in the system. So we can restrict access to domain objects, or to the services which use those domain objects. With CQRS we don't necessarily have domain objects on the read side, just services, so we have to restrict access to the services if we want a general solution. We could put the access decision at the beginning of every service operation, but that would be the grant all, deny x security anti-pattern.
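
To make this concrete, here is a minimal sketch of such a policy enforcement point, written as a decorator around the UserService pseudocode above. The Subject and AccessDeniedException helper types are assumptions introduced for illustration, not part of the original example:

class Subject {
    final String id;
    Subject(String id) { this.id = id; }
}

class AccessDeniedException extends RuntimeException {
    AccessDeniedException(String message) { super(message); }
}

public class SecuredUserService {

    private final UserService inner;

    public SecuredUserService(UserService inner) {
        this.inner = inner;
    }

    // "deny all, but if subject.id = resource.owner.id then grant access"
    public void editProfile(Subject subject, EditUserProfileCommand command) {
        if (!subject.id.equals(command.id)) {
            throw new AccessDeniedException("only the profile owner may edit it");
        }
        // enforcement happens right before the service call
        inner.editProfile(command);
    }
}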

How should I implement it?

This depends on which access control model fits the domain, so it depends on the user story. For an access decision we usually send an access request and wait for a permission in return. The access request usually has the following parts: subject, resource, operation, environment. So the subject requests permission to perform an operation on the resource in an environment. First we identify the subject, then we authenticate it, and after that comes authorization, where we check whether the access request fits our access policy. Every access control model works in a similar way. Of course, some of them may lack some of these steps, but that doesn't matter much...

I created a short list of access control models. I put the rules and policies into annotations, but normally we should store them in a database, probably in XACML format, if we want a well maintainable system...

  • By identity based access control (IBAC) we have an identity - permission storage (access control list, capability list, access control matrix). So for example, with an access control list, we store the list of users or groups who can have permissions.

    UserService
        @AccessControlList[inf3rno]
        editProfile(EditUserProfileCommand command)
    
  • By lattice based access control (LBAC) the subject has a clearance level, the resource has a required clearance level, and we check which level is higher...

    @posseses[level=5]
    inf3rno
    
    UserService
        @requires(level>=3)
        editProfile(EditUserProfileCommand command)
    
  • By role based access control (RBAC) we define subject roles and we grant permissions to subjects who act in the given role.

    @roles[admin]
    inf3rno
    
    UserService
        @requires(role=admin)
        editProfile(EditUserProfileCommand command)
    
  • By attribute based access control (ABAC) we define subject, resource and environment attributes and we write our policies based on them (a small evaluation sketch appears after this list).

    @attributes[roles=[admin]]
    inf3rno
    
    UserService
        @policy(subject.role=admin or resource.owner.id = subject.id)
        editProfile(EditUserProfileCommand command)
        @attribute(owner)
        Subject getOwner(EditUserProfileCommand command)
    
  • By policy based access control (PBAC) we don't assign our policies to anything else, they are standalone.

    @attributes[roles=[admin]]
    inf3rno
    
    UserService
        editProfile(EditUserProfileCommand command)
        deleteProfile(DeleteUserProfileCommand command)
        @attribute(owner)
        Subject getOwner(EditUserProfileCommand command)
    
    @permission(UserService.editProfile, UserService.deleteProfile)
    @criteria(subject.role=admin or resource.owner.id = subject.id)
    WriteUserServicePolicy
    
  • By risk-adaptive access control (RAdAC) we base our decision on the relative risk profile of the subject and the risk level of the operation. I don't think this can be described with static rules. I am unsure of the implementation; maybe this is similar to what Stack Overflow does with its reputation system.

  • By authorization based access control (ZBAC) we don't do identification and authentication; instead we assign permissions to identification factors. For example, if somebody sends a token, then she can have access to a service. Everything else is similar to the previous solutions. For example, with ABAC:

    @attributes[roles=[editor]]
    token:2683fraicfv8a2zuisbkcaac
    
    ArticleService
        @policy(subject.role=editor)
        editArticle(EditArticleCommand command)
    

    So everybody who knows the 2683fraicfv8a2zuisbkcaac token can use the service.

and so on...

There are many other models, and the best fit always depends on the needs of your customer.
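
As a rough illustration, the ABAC rule from the examples above ("subject.role = admin or resource.owner.id = subject.id") could be evaluated by something like this Java sketch; the attribute maps stand in for whatever attribute store is actually used, and all names here are assumptions:

import java.util.Map;

// Toy ABAC evaluator for the policy used in the examples above.
public class AbacPolicy {

    public boolean isGranted(Map<String, Object> subject, Map<String, Object> resource) {
        boolean isAdmin = "admin".equals(subject.get("role"));
        boolean isOwner = subject.get("id") != null
                && subject.get("id").equals(resource.get("ownerId"));
        return isAdmin || isOwner;
    }

    public static void main(String[] args) {
        AbacPolicy policy = new AbacPolicy();
        Map<String, Object> subject = Map.of("id", "inf3rno", "role", "user");
        Map<String, Object> resource = Map.of("ownerId", "inf3rno");
        System.out.println(policy.isGranted(subject, resource)); // true: the owner may edit
    }
}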

So to summarize

- "security concerns should be handled outside the domain"
- "access control requirements are domain specific"

both can be right, because security is not part of the domain model, but its implementation depends on the domain model and the application logic.

edit after 2 years 2016-09-05

Since I answered my own question as a DDD newbie, I have read Implementing Domain-Driven Design by Vaughn Vernon. It was an interesting book on the topic. Here is a quote from it:

This constitutes a new Bounded Context - the Identity and Access Context - and will be used by other Bounded Contexts through standard DDD integration techniques. To the consuming contexts the Identity and Access Context is a Generic Subdomain. The product will be named IdOvation.

So according to Vernon, the best solution is probably to move access control into a generic subdomain.

Web applications have experienced a great paradigm shift over the last years.

A decade ago (and unfortunately even nowadays), web applications lived only on heavyweight servers, which processed everything from data to presentation formats and sent it to dumb clients that merely rendered the server's output (browsers).

Then AJAX joined the game, and web applications started to turn into something that lived between the server and the browser.

During the climax of AJAX, the web application logic started to live entirely in the browser. I think this was when HTTP RESTful APIs started to emerge. Suddenly every new service had its kind-of RESTful API, and JavaScript MV* frameworks started popping up like popcorn. The use of mobile devices also greatly increased, and REST fits these kinds of scenarios just great. I say "kind-of RESTful" here because almost every API that claims to be REST isn't. But that's an entirely different story.

In fact, I became a sort of a "REST evangelist".

When I thought that web applications couldn't evolve much more, a new era seemed to be dawning: stateful, persistent-connection web applications. Meteor is an example of a brilliant framework for that kind of application. Then I saw this video, in which Matt DeBergalis talks about Meteor, and he does a fantastic job! However, he is kind of bringing down REST APIs for these purposes in favor of persistent real-time connections.

I would like very much to have real-time model updates, for example, but still have all the REST awesomeness. Streaming REST APIs seem like what I need (firehose.io and Twitter's API, for example), but there is very little info on this new kind of API.

So my question is:

Is web-based real-time communication incompatible with REST paradigm?

(Sorry for the long introductory text, but I thought that this question would only make sense with some context)

I'm very interested in this subject too. This post has a few links to papers that discuss some of the troubles with poorly-designed RPC:

http://thomasdavis.github.com/2012/04/11/the-obligatory-refutation-of-rpc.html

I am not saying Meteor is poorly designed, because I do not know much about Meteor.

In any case, I think I want the best of both "worlds". I want to benefit from REST and all that it affords with the constrained generic interface, addressability, statelessness, etc.

And, I don't want to get left behind in this "real-time" web revolution either! It's definitely very awesome.

I am wondering if there is not a hybrid approach that can work:

RESTful endpoints can allow a client to enter a space and follow links to related documents, as HATEOAS calls for. But for the "stream of updates" to a resource, perhaps the "subscription name" could itself be a URI which, when browsed to in a point-in-time single request (through the web browser's address bar or curl, for example), would return either a representation of the "current state", or a list of links with the hrefs of prior states of the resource and/or a way to query the discrete "events" that have occurred against the object.

In this way, if you start with "version 1" of the entity and then replay each of the events against it, you can mutate it up to its "current state", and these events could be streamed into a client that does not want to get complete representations just because one small part of an entity has changed. This is basically the concept of an "event store", which is covered in lots of the CQRS material out there.

As far as being REST-compatible, I believe this approach has been done (though I'm not sure about the streaming side of it); I cannot remember if it was in this book http://shop.oreilly.com/product/9780596805838.do (REST in Practice), or in a presentation I heard by Vaughn Vernon in this recorded talk at QCon 2010: http://www.infoq.com/presentations/RESTful-SOA-DDD.

He talked about a URI design something like this (I don't remember exactly):

host/entity <-- current version of a resource
host/entity/events <-- list of events that have happened to mutate the object into its current state

Example:

host/entity/events/1 <-- this would correspond to the creation of the entity
host/entity/events/2 <-- this would correspond to the second event ever against the entity

He may have also had something there for history, the complete moment-in-time state, like:

host/entity/version/2 <-- this would be the entire state of the entity after the event 2 above.
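
To sketch how such resources might be exposed, here is a hypothetical JAX-RS resource following that URI design; the EventStore type and its methods are assumptions made for the example:

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;

// Hypothetical read-side resource exposing an entity, its event stream,
// and point-in-time versions, roughly following the URI design above.
@Path("/entity/{id}")
public class EntityResource {

    private final EventStore eventStore = new EventStore(); // assumed component

    @GET
    @Produces("application/json")
    public String current(@PathParam("id") String id) {
        return eventStore.replayAll(id); // fold all events into the current state
    }

    @GET
    @Path("/events/{seq}")
    @Produces("application/json")
    public String event(@PathParam("id") String id, @PathParam("seq") int seq) {
        return eventStore.eventAt(id, seq); // one discrete event, addressable by URI
    }

    @GET
    @Path("/version/{seq}")
    @Produces("application/json")
    public String version(@PathParam("id") String id, @PathParam("seq") int seq) {
        return eventStore.replayUpTo(id, seq); // state after applying events 1..seq
    }
}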

Vaughn recently published a book, Implementing Domain-Driven Design, which from the table of contents looks like it covers REST and event-driven architecture: http://www.amazon.com/gp/product/0321834577

It is often said that DDD (Domain-Driven Design) is better suited for complex domains than for simpler ones.

What characterizes a complex domain? (please be more specific than "it has complex business rules");

Which are examples of complex domains?

How can I classify a domain as complex (i.e. suitable for DDD) or not?

There is no unique definition of complexity, but there is a useful description in Vaughn Vernon's book (Implementing Domain-Driven Design): Table 1.1, The DDD Scorecard.

He describes the project with different criteria, for example: a complex project is going to change often (new features that will be hard to anticipate), you don't fully understand the domain (or there is a lot of ambiguity that you need to discuss with a business expert), and the size, as @jlvaquero said (number of features/rules, richness of the language...).

Recently I've read multiple times that two-phase commits are bad, but always as a side note, so there was never a good explanation with it.

For example in CQRS Journey Chapter 5:

Second, we're trying to avoid two-phase commits because they always cause problems in the long run.

Or in Implementing Domain-Driven Design on page 563:

The second ReadRecords() is used by the infrastructure to replicate events, to publish them without the need for two-phase commit, ...

I thought two-phase commits are implemented to ensure consistency among multiple database servers.

What problems can occur when using two-phase commits? Why is it better to avoid them?

The biggest problem is scalability, due to the blocking nature of the two-phase commit (2PC) protocol.

2PC requires careful coordination between the participating parties: in particular, each party has to acknowledge the prepare phase and the commit. Once a party has acknowledged that it is ready to commit, it has to block until the transaction coordinator sends the commit or rollback message. If the parties communicate over a network, the network latency causes a bottleneck for the communication between the nodes.

Furthermore, once a party has acknowledged that it is ready to commit, it must actually be able to commit the transaction afterwards, even if it crashed in between. This requires checkpointing to persistent storage (even when the transaction is rolled back afterwards), which also potentially limits the throughput.
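
To make the blocking visible, here is a deliberately stripped-down sketch of the coordinator's flow; the Participant interface is an assumption for illustration, not any real transaction API, and error handling is omitted:

import java.util.List;

// Toy two-phase commit coordinator. Each participant that votes yes in
// phase 1 must block (holding its locks) until phase 2 tells it the outcome.
interface Participant {
    boolean prepare();   // phase 1: vote; must checkpoint so a later commit survives a crash
    void commit();       // phase 2a
    void rollback();     // phase 2b
}

public class TwoPhaseCommitCoordinator {

    public boolean execute(List<Participant> participants) {
        // Phase 1: collect votes; every participant that voted yes is now blocked.
        for (Participant p : participants) {
            if (!p.prepare()) {
                // Simplification: a real coordinator would only roll back prepared parties.
                participants.forEach(Participant::rollback);
                return false;
            }
        }
        // Phase 2: only now may the blocked participants release their locks.
        participants.forEach(Participant::commit);
        return true;
    }
}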

When an operation doesn't conceptually belong to any Entity or Value Object, then rather than forcing the behavior into an object, we should create a Domain Service.

The interface of a Service should be defined in terms of other elements of the domain model. In other words, parameters and return values of a Service should be domain objects.

a) Why should/must Domain services use domain objects as parameters and return values?

b) Why doesn't DDD also require methods of Entities and Value Objects to use domain objects as parameters and return values? Why instead is this constraint placed only on Services?

Thank you

EULERFX:

1)

Both of these constraints promote immutability and a functional style

a) How do the two constraints promote immutability?

b) What is functional style?

c) So we should try (since it may not always be possible) to force the Service to use domain objects as parameters and return values, even though it may be more natural for that service (i.e. behavior) to accept/return non-domain objects?

2)

Entities and value objects compose more primitive types to form complex types and some behaviors may depend on a primitive parameter.

So is it due to some sort of intrinsic characteristic of Domain Entities/Value Objects that in most cases their behaviors (i.e. their operations) operate on primitive types (i.e. use primitive types as parameters)? If yes, is this intrinsic characteristic found in the majority of domain objects, but rarely in domain services?

SECOND UPDATE:

How do the two constraints promote immutability?

The idea is that a domain service does not mutate state and all state changes are made explicit through parameters.

a) Not mutate its own state, or some domain object's state? Since a domain service should be stateless, I assume you mean it shouldn't mutate a DO's state? In other words, the service promotes immutability by making sure that any DO it intends to modify is passed to it (i.e. passed to its operation) as an argument?

b) But if the DO to be modified isn't passed to the service as an argument, then we say that the domain service mutated the state of this DO?

c) Is the reason why mutating the state of a DO is considered a bad thing that it doesn't promote clarity (i.e. it's not immediately obvious, when looking at the signature of a service operation, which DOs will get their state changed by the operation)?

d) If a domain service is going to modify the state of a DO passed to it as an argument, would it be ideal if the values it will use to change the state of this DO were also passed as arguments to the service? If yes, is that because it promotes clarity, or...?

2) I still don't understand how the return value being of the same type as the argument also promotes immutability?

EULERFX 3

a)

A domain service can avoid state mutation by returning new instances of objects instead of modifying the objects that were passed in.

Not a question per se, more of an observation, but I have some difficulty understanding why such service behavior would be common in most domain models, or even whether such behavior comes about naturally when modeling the domain, or whether we must force it into the concept a bit?!

b)

Yes, although in that case it would be better for the domain object to mutate itself.

And the major reason why a DO should mutate itself is that this way the code performing the mutation on a particular DO is concentrated in one place, so if we need to inspect this code, we know where to look for it?

a) This is not a strict constraint, but offers certain advantages. The idea behind the rule is that domain services contain functionality that supplements existing entities and value objects. Another non-strict constraint is closure of operations where both the argument and the return value of domain service methods are of the same type. Both of these constraints promote immutability and a functional style thereby reducing side-effects and making it easier to reason about the code, refactor the code, etc.

It is possible to have a domain service method that accepts a primitive type which is neither an entity nor a value object. However, extensive use of primitive types can result in primitive obsession.

b) This constraint can be applied at the entity and value object level to an extent. Entities and value objects compose more primitive types to form complex types and some behaviors may depend on a primitive parameter. This primitive parameter can itself be turned into a value object.

UPDATE

Just returned from a DDD meetup where I had a chance to talk this over with Vaughn Vernon, author of Implementing Domain-Driven Design. He agrees that the specified constraint is not strict. In other words, there are scenarios where it is perfectly acceptable for a domain service method to be parameterized by primitive types.

How do the two constraints promote immutability?

The idea is that a domain service does not mutate state and all state changes are made explicit through parameters. This is the essence of a pure function. Given that domain services complement entities, their methods should be expressed in those terms.

What is functional style?

I'm referring to functional programming. Programming in a functional style usually entails immutability and pure functions. Another trait of a functional approach is a declarative style contrasted with imperative.

So we should try (since it may not always be possible) to force the Service to use domain objects as parameters and return values

No. If a primitive type suffices for an operation there is no reason to coerce it into something else. The use of entities and value objects is only a guideline and some prefer to be more strict than others. Some, for instance, use an explicit type to represent identities for each entity. So instead of using int you'd create a value object called OrderId to represent an identity of an order.
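
For instance, such an identity value object might look like this minimal sketch (the names are illustrative):

import java.util.Objects;

// Minimal identity value object wrapping a primitive order id,
// as an alternative to passing a bare int around.
public final class OrderId {

    private final int value;

    public OrderId(int value) {
        if (value <= 0) {
            throw new IllegalArgumentException("order id must be positive");
        }
        this.value = value;
    }

    public int value() {
        return value;
    }

    @Override
    public boolean equals(Object o) {
        return o instanceof OrderId && ((OrderId) o).value == value;
    }

    @Override
    public int hashCode() {
        return Objects.hash(value);
    }
}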

So is it due to some sort of intrinsic characteristic of Domain Entities/Value objects that in most cases their behaviors ( ie their operations ) operate on primitive types ( ie use primitive types as parameters )?

I wouldn't say it is intrinsic to DDD. I was referring to the more general idea of composition - complex entities (non-DDD) are composed out of simpler ones. By this token, it makes sense that operations on complex entities would be expressed in terms of constituent parts.

UPDATE 2

a) A domain service can avoid state mutation by returning new instances of objects instead of modifying the objects that were passed in. In this way, the signature of the method fully describes what it does because there are no side-effects.

b) A domain service can mutate the state of an object passed to it, in which case the return type would likely be void. This however is less desirable - it would be better for a DO to mutate its own state.

c) Yes that is part of it. Immutability and purity allow you to refactor code much like you would factor an algebraic equation with substitution. Another reason is that it makes reasoning about the code easier since if you look at a piece of immutable data you can be certain it doesn't change for the remainder of its scope.

d) Yes, although in that case it would be better for the domain object to mutate itself. This mutation would be invoked by a surrounding application service. A lot of times I pass domain services to entity behavior methods to provide them functionality they don't have access to directly.

e) The notion of closure of operations does not in and of itself promote immutability, but it is a characteristic of immutable code. The reason is that if a domain service method accepts a value of type T and returns a value of type T, it can indicate that it returns a new value of T resulting from the encapsulated operation. This is a characteristic of immutability because the change resulting from the operation is made explicit as a new object.
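
As a hedged illustration of that last point, a closed operation takes a value of some type and returns a new value of the same type instead of mutating anything; the Money type and discount rule below are assumptions made for the example:

import java.math.BigDecimal;

// Illustrative immutable value object plus a domain service whose
// operation is closed over Money: Money in, new Money out, no mutation.
final class Money {
    final BigDecimal amount;
    Money(BigDecimal amount) { this.amount = amount; }
}

class DiscountService {
    // Closure of operations: accepts Money, returns a *new* Money.
    Money applyDiscount(Money price, BigDecimal rate) {
        return new Money(price.amount.subtract(price.amount.multiply(rate)));
    }
}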

UPDATE 3

a) This has more to do with traditional OOP than it does with DDD. OOP tries to hide the moving parts behind objects - encapsulation. FP tries to minimize the moving parts - immutability. Immutability can be seen as more "natural" in some scenarios. For example, in event-centric scenarios, events are immutable because they are a record of what has happened. You don't change what has happened, but you can create compensating actions.

b) Again, this has more to do with OOP than DDD and is based on the information expert pattern which essentially states that behaviors on data should be as close as possible to that data. In DDD, this means that an entity should encapsulate the contained data as much as possible so that it can ensure its own integrity.

In this great book about Domain-Driven Design, a chapter is dedicated to the user interface and its relationship to domain objects.

One point that confuses me is the comparison between Use case optimal queries and presenters.

The excerpt dealing with optimal queries (page 517) is:

Rather than reading multiple whole Aggregate instances of various types and then programmatically composing them into a single container (DTO or DPO), you might instead use what is called a use case optimal query.
This is where you design your Repository with finder query methods that compose a custom object as a superset of one or more Aggregate instances.
The query dynamically places the results into a Value Object (6) specifically designed to address the needs of the use case.
You design a Value Object, not a DTO, because the query is domain specific, not application specific (as are DTOs). The custom use case optimal Value Object is then consumed directly by the view renderer.

Thus, the benefit of optimal queries is to directly provide a specific-to-view value object, acting as the real view model.
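
A use case optimal query might look roughly like this hypothetical sketch, where the repository finder composes a view-specific value object directly instead of loading whole aggregates (all names are made up for illustration):

import java.util.List;

// View-specific value object produced by a use-case-optimal query.
final class DocumentSummary {
    final String title;
    final String authorName;
    DocumentSummary(String title, String authorName) {
        this.title = title;
        this.authorName = authorName;
    }
}

interface DocumentRepository {
    // Finder that returns exactly the superset the view needs, in one query.
    List<DocumentSummary> findSummariesForAuthor(String authorId);
}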

A page later, the presenter pattern is described:

The presentation model acts as an Adapter. It masks the details of the domain model by providing properties and behaviours that are designed in terms of the needs of the view.
Rather than requiring the domain model to specifically support the necessary view properties, it is the responsibility of the Presentation Model to derive the view-specific indicators and properties from the state of the domain model.

It sounds that both ways achieve the construction of a view model, specific to the use case.

Currently my call chain (using Play Framework) looks like:

For queries: Controllers (acting as Rest interface sending Json) -> Queries (returning specific value object through optimal queries)

For commands: Controllers (acting as Rest interface sending Json) -> Application services (Commands) -> domain services/repositories/Aggregates (application services returns void)

My question is: if I already practice the use case optimal query, what would be the benefit of implementing the presenter pattern? Why bother with a presenter if one could always use optimal queries to satisfy the client needs directly?

I can think of just one benefit of the presenter pattern: dealing with commands, not queries, thus providing commands with domain objects corresponding to the view models determined by the presenter. The controller would then be decoupled from the domain objects. Indeed, another excerpt of the Presenter description is:

Additionally, edits performed by the user are tracked by the Presentation Model.
This is not the case of placing overloaded responsibilities on the Presentation Model, since it's meant to adapt in both directions, model to view and view to model.

However, I prefer sending pure primitives to application services (commands), rather than dealing directly with domain objects, so this benefit would not apply to me.
Any explanation?

Just a guess :)

The presenter pattern could reuse your repository's aggregate finder methods as much as possible. For example, if we have two views, we need two adapters (an adapter per view), but only one repository find method:

class CommentBriefViewAdapter {
    private Comment comment;

    public String getTitle() {
         return partOf(comment.getTitle()); 
         //return first 10 characters of the title, hide the rest
    } 
    .....//other fields to display
}

class CommentDetailViewAdapter {
    private Comment comment;

    public String getTitle() {
         return comment.getTitle();//return full title
    } 
    .....//other fields to display
}

//In controller:
model.addAttribute(new CommentBriefViewAdapter(commentRepo.findBy(commentId)));
// same repo method
model.addAttribute(new CommentDetailViewAdapter(commentRepo.findBy(commentId)));

But optimal queries are view oriented (a query per view). I think these two solutions are designed for a non-CQRS-style DDD architecture. They're no longer needed in a CQRS-style architecture, since queries are not based on repositories but on a specific thin data layer.

I've recently been reading up on messaging systems and have specifically looked at both RabbitMQ and NServiceBus. As I have understood it, if a message fails for some reason it is tried again immediately a number of times. Both systems then offer the possibility to try again later, for example in 5 seconds. When the five seconds have passed the message is sent again a number of times.

I quote Vaughn Vernon in Implementing Domain-Driven Design (p.502):

The other way to handle this is to simply retry the send until it succeeds, perhaps using a Capped Exponential Back-off. In the case of RabbitMQ, retries could fail for quite a while. Thus, using a combination of message NAKs and retries could be the best approach. Still, if our process retries three times every five minutes, it could be all we need.

For NServiceBus, this is called second level retries, and when the retry happens, it happens multiple times.

Why does it need to happen multiple times? Why does it not retry once every five minutes? What is the chance that the first retry after five minutes fails and the second retry, probably just milliseconds later, should succeed?

And in case it does not need to due to some configuration (does it?), why do all the examples I have found have multiple retries?

My background is NServiceBus so my answer may be couched in those terms.

First level retries are great for very transient errors. Deadlocks are a perfect example of this. You try to change the database, and your transaction is chosen as the deadlock victim. In these cases, a first level retry is perfect. Most of the time, one first level retry is all you need. If there is a lot of contention in the database, maybe 2 or 3 retries will be good enough.

Second level retries are for your less transient errors. Think about things like a web service being down for 10 seconds, or a SQL Server database in a failover cluster switching over, which can take 30-60 seconds. If you retry a few milliseconds later, it's not going to do you any good, but 10, 20, 30 seconds later you might have a good shot.

However, the crux of the question is: after 5 first level retries and then a delay, why try again 5 times before an additional delay?

First, on your first second-level retry, it's still possible that you could get a deadlock or other very transient error. After all, the goal is usually not to make the system as slow as possible, so it would be preferable not to wait an additional delay before retrying if the problem is truly transient. Of course, there's no way for the infrastructure to know just how transient the problem is.

The second reason is that it's just easier to configure if they're all the same. X levels of retry and Y tries per level = X*Y total tries and only 2 numbers in the configuration file. In NServiceBus, it's these 2 values plus the back-off time span, so the config looks like this:

<SecondLevelRetriesConfig Enabled="true" TimeIncrease="00:00:10" NumberOfRetries="3" />
<TransportConfig MaxRetries="3" />

That's fairly simple. Try 3 times. Wait 10 seconds. Try 3 times. Wait 20 seconds. Try 3 times. Wait 30 seconds. Try 3 times. Then you're done and you move on to an error queue.

Configuring different values for each level would require a much more complex config story.
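
For comparison, the capped exponential back-off mentioned in the Vernon quote above could look like this rough Java sketch; the initial delay, the cap, and the operation are placeholders:

import java.util.concurrent.Callable;

// Retry with capped exponential back-off: the delay doubles after each
// failure but never exceeds the cap.
public class CappedBackoffRetry {

    public static <T> T retry(Callable<T> operation, int maxAttempts) throws Exception {
        if (maxAttempts < 1) throw new IllegalArgumentException("maxAttempts must be >= 1");
        long delayMillis = 1_000;      // initial back-off (placeholder)
        final long capMillis = 60_000; // never wait longer than this
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return operation.call();
            } catch (Exception e) {
                last = e;
                if (attempt < maxAttempts) {
                    Thread.sleep(delayMillis);
                    delayMillis = Math.min(delayMillis * 2, capMillis);
                }
            }
        }
        throw last; // give up; the caller can move the message to an error queue
    }
}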

We're trying to figure out the separated bounded context integration for a scenario.

Say one context is the Document Core Bounded Context (BC), which has a Document Entity with an Author. Using the IdentityAccessContext BC as in the Implementing DDD book, which separates Users, Groups, and Roles into their own context, makes sense.

The problem that is occurring is when considering fetching a list of say 100+ Documents.

Say the Document Core BC has it's own Entity to mark the Author of a Document.

public class Author
{
    long Id; // Same as UserId
    long Document;  
}

And then the Identity BC has a User with relevant info.

public class User
{
    long Id;
    string FullName;
}

When fetching a List of Documents, how is the information from the IdentityAccess BC supposed to be retrieved into/with the Document Author for displaying (Full Name for example)?

There seem to be a couple alternatives:

  1. Perhaps an Anti-corruption Layer which fetches data from both tables?
  2. Duplicate the user's full name across the two BC's?

Neither feels quite right, since #1 requires joining data (at some level) from 2 BCs, while #2 requires potentially updating several BCs when changing the user's name.

What can be done about this? (Using C#, MVC, NHibernate, if that matters.) Clearly fetching a list of objects and then fetching e.g. each Author's name and additional data afterwards isn't realistic.

When looking at the BC integration, however, given the 3 options mentioned in the book (RPC, Domain Events, and RESTful service integration), at least the latter 2 don't make sense in this case, where the application is MVC, directly uses the 2 BCs as class libraries, and both use the same database. Updating user information can be done directly from MVC through the Identity BC's Application Services. The database and BCs can be changed as/if needed.

As the title suggests, I am interested in general opinion on where it is best to put all security related code (like code for JWT, standard authentication, etc.).

I have been thinking about it for quite a while, and I do not have a clue what the suitable place for this should be.

Does somebody have any experience with this? What is, for you, the correct place for this according to DDD?

As mentioned by @inf3rno in Access Control in Domain Driven Design, Vaughn Vernon briefly touches upon this in his book Implementing Domain-Driven Design.

Security and permissions should be centralized in its own bounded context, which is then used by other bounded contexts. Have a look at the Identity Access bounded context for inspiration, but I recommend following Schneier's Law, which states that you should not build your own security system.

From a post I read it seems that Entity is just a subset of Aggregate. I've read about the two patterns in both Domain-Driven Design and Implementing Domain-Driven Design, and I'm trying to understand the UML difference between them.

Let's consider a simple class. It's a Letter holding a message, a receiver, and possibly the sender.

[Image: UML class diagram of the Letter entity]

I guess this Letter class would be considered an Entity?

Now let's say we want to expand our parcel business to be able to send also Packages, then it could look like the following.

[Image: UML class diagram of the Package aggregate composed of Items]

Since all the Items in the Package will be lost if the whole Package is lost, we use a UML Composition relation (a filled diamond). We also want to preserve the Package's consistency by prohibiting Items from being changed or removed from outside the Package. The description of Aggregate reads

The aggregate root guarantees the consistency of changes being made within the aggregate by forbidding external objects from holding references to its members.

We therefore make sure the Composition relation is hidden within the Aggregate, with its invariants preserved.

My question is:
Can we say that the UML difference between Entity and Aggregate is that Entity does not contain any Composition relation whereas Aggregate contains at least one Composition relation?

To answer your question: no, you can't say that. An aggregate root is an entity itself, and may or may not be composed of child entities. The child entities can also be composed of other entities as well (though this is usually not recommended).

The aggregate root is responsible for maintaining the state and enforcing the invariants of both itself and its child entities.

So to recap, an aggregate and a child entity can each have 0 or more child entities. All child entities require an aggregate root however.

I'm learning DDD and trying to implement Repository using Google Datastore.

I find recreating DDD entities from datastore quite tricky. I've read there are frameworks to map my DDD entities to datastore entities, but I would like to learn low-level API first.

I thought the repository could set the state of an entity using setters, but this is often considered an anti-pattern in DDD.

An alternative would be to use the builder pattern, where a builder instance is passed to the constructor of an entity. However, this introduces functionality (restoring entity state) that is outside the entity's responsibility.

What are good patterns to solve this problem?

The whole of Chapter 6 of Eric Evans' book is devoted to the problems you are describing.

First of all, a FACTORY in DDD doesn't have to be a standalone service -

Evans DDD, p. 139:

There are many ways to design FACTORIES. Several special-purpose creation patterns - FACTORY METHOD, ABSTRACT FACTORY, and BUILDER - were thoroughly treated in Gamma et. al 1995. <...> The point here is not to delve deeply into designing factories, but rather to show the place of factories as important components of a domain design.

Each creation method in an Evans FACTORY enforces all invariants of the created object; however, object reconstitution is a special case.

Evans DDD, p. 145:

A FACTORY reconstituting an object will handle violation of an invariant differently. During creation of a new object, a FACTORY should simply balk when invariant isn't met, but a more flexible response may be necessary in reconstitution.

This is important, because it leads us to creating separate FACTORIES for creation and reconstitution (in the diagram on page 155, TradeRepository uses a specialized SQL TradeOrderFactory, not a general purpose TradeOrderFactory).

So you need to implement separate logic for reconstitution, and there are several ways to do it. (You can find the full theory in Martin Fowler's Patterns of Enterprise Application Architecture; on page 169 there's a subheading, Mapping Data to Domain Fields, but not all of the methods described look suitable - for example, making the object fields package-private in Java seems too intrusive - so I'd prefer one of the following two options.)

  • You can create a separate FACTORY and document it so that developers only use it for persistence or testing (a small sketch follows this list).
  • You can set the private field values with reflection, as for example Hibernate does.
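
The first option might look like this rough sketch; the Book entity, its stored-state type, and the relaxed invariant handling are all assumptions made for illustration:

// Reconstitution-only factory: rebuilds a Book from persisted state and
// deliberately responds to invariant violations more flexibly than a
// creation factory would. Documented for use by repositories and tests only.
class Book {
    private final String id;
    private String title;

    Book(String id, String title) {
        this.id = id;
        this.title = title;
    }
}

final class BookState {
    final String id;
    final String title;

    BookState(String id, String title) {
        this.id = id;
        this.title = title;
    }
}

public class BookReconstitutionFactory {
    public Book reconstitute(BookState state) {
        // During creation we would balk at a missing title; during
        // reconstitution we repair it instead and carry on.
        String title = state.title == null ? "(untitled)" : state.title;
        return new Book(state.id, title);
    }
}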

Regarding the anemic domain model with setters and getters, the upcoming Vaughn Vernon book criticizes this approach a lot, so I dare say it is an anti-pattern in DDD.

Is there a place for BPEL when doing Domain Driven Design?

As far as I understand from Vaughn Vernon's Implementing Domain-Driven Design, we should use Domain Events to communicate between different bounded contexts.

Does that exclude the usage of BPEL, or do they solve different problems?

What I'm trying to achieve is to have each bounded context run as a service and use domain events to communicate between these services. Another team member (not familiar with DDD) suggested using BPEL instead.

I'd say that if BPEL were to be used at all, you would use it in a specific bounded context. In any case, you'd want to use an event-based collaboration model between your bounded contexts (assuming that they are well aligned with your sub-domains), as well as UI composition as needed to decrease the amount of data that needs to be passed between them.

The reason I'd say not to use BPEL to coordinate bounded contexts is that it will create an additional place where domain logic may be written. The whole idea of a bounded context is that everything that deals with the given ubiquitous language stays within the boundary.

I'd say that the use of BPEL (and other integration) tools can be appropriate within a bounded context for the purpose of integrating multiple 3rd-party systems and other cases where we want to provide some kind of UI that a semi-technical domain expert could use to tweak the behavior of some aspects of that bounded context.

Silly question... but why do I need a Domain Model at all if I use event sourcing?

I have an Event Bus (of course) and:

  • Application Services with business operations that each send a Command after basic validation
  • Command Handlers which receive Commands perform additional Command validation and publish Events
  • Event Handlers which handle Events, update the Read Model, and store the event in a Repository (the Event Source)
  • Read Model Services which provide Read Models
  • Front ends (UI or otherwise) that consume Read Models from the Read Model Services... and utilize Application Services for business operations.

Why do I need aggregate roots and domain entities at all? What's the function of the additional layer?

Event sourcing is simply how you choose to store the state of your application. If you are not solving a specific problem, you probably don't need a model of your domain and could just create a simple CRUD application. A domain model is a simplified abstraction of the domain in which your application is solving a specific business problem. The domain model is a tool for communicating between you, your team mates, and the domain experts. I would recommend reading this excellent book, or downloading this short introduction to domain-driven design.

Implementing DDD, page 233:

There are times when an object in a downstream Context must be eventually consistent with the partial state of one or more Aggregates in an upstream Context. In that case we'd design an Aggregate in the downstream consuming Context, because Entities are used to maintain a thread of continuity of change.

According to the author, if eventual consistency is needed, then the downstream object should always be an Aggregate Root. Is there a particular reason why it should never be designed as an internal entity?

UPDATE:

One could argue they always need to be the root to prevent having several downstream objects (i.e. objects reflecting the state of upstream objects) with the same id, but if synchronization is one way only (from the upstream to the downstream context), are there really no situations where it's OK for two downstream objects to have identical ids?

thanks

I am using Spring Boot 1.3 with Spring Data JPA. I want to use early primary key generation, with a dedicated object for the primary key (as advised in Implementing Domain-Driven Design).

Suppose this entity:

@Entity
public class Book {
  @EmbeddedId
  private BookId id;
}

and this value object:

@Embeddable
public class BookId implements Serializable {

  private UUID id;

  protected BookId(){} //for hibernate

  public BookId( UUID id ) {
    this.id = id;
  }

  public UUID getId() {
    return id;
  }
}

Then this works fine. However, I want to create a superclass for all id classes, something like:

public class EntityUuidId implements Serializable {

  private UUID id;

  protected EntityUuidId(){} //for hibernate

  public EntityUuidId( UUID id ) {
    this.id = id;
  }

  public UUID getId() {
    return id;
  }
}

Now the BookId class changes to:

@Embeddable
public class BookId extends EntityUuidId {

  protected BookId(){} //for hibernate

  public BookId( UUID id ) {
    super(id);
  }
}

The problem is that now, when I run my application, I get the following exception:

org.hibernate.AnnotationException: BookId has no persistent id property: Book.id

Why does that suddenly not work anymore?

Put @MappedSuperclass on the EntityUuidId class; that way its properties will be treated as persistent.
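
Applied to the superclass from the question, that looks like this (@MappedSuperclass is a standard JPA annotation; the rest is unchanged from the question):

import java.io.Serializable;
import java.util.UUID;
import javax.persistence.MappedSuperclass;

// A mapped superclass contributes its fields to the mapping of every
// subclass, so BookId's inherited id becomes persistent again.
@MappedSuperclass
public class EntityUuidId implements Serializable {

  private UUID id;

  protected EntityUuidId() {} // for Hibernate

  public EntityUuidId(UUID id) {
    this.id = id;
  }

  public UUID getId() {
    return id;
  }
}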

I am working on an application that has some scalability requirements and consists of a web-based front-end along with a set of services and workflows. In the architecture that I have designed, some of these services will perform necessary transformations on a given set of data, pull additional data from a database, and so on.

In terms of documenting my architectural design, I am wondering if someone can suggest a couple of books or some reading material on what the best practices are. I am not looking for a guide on UML. Let me clarify...

For example: I have a service... let's call it my Workflow service. It will take a request, read some stuff from a database to look up that request, and trigger a workflow. Sounds easy enough. In terms of the architectural design, let's say I break the database logic off into its own module or package... should this just be called the blahblahblahDAO or blahblahblahBusinessObjects?

Thanks in advance.

If you are looking for deeper insight into how to layer real software and what proper names the layers should have, you should read about Domain-Driven Design.

The first and classic book (be aware that it's very general). As for something practical, you can check out this book, or just google for some online examples.

I'm hoping someone has read Implementing DDD ( Vernon ), since all of my questions reference it

In the article we can see from Figure 6 that both BankingAccount and PayeeAccount represent the same underlying concept of a Banking Account (BA).

1. On page 64 the author gives an example of a publishing organization, where the life-cycle of a book goes through several stages (proposing a book, the editorial process, translation of the book...), and at each of those stages the book has a different definition.

Each stage of the book is defined in a different Bounded Context, but do all these different definitions still represent the same underlying concept of a Book, just like both BankingAccount and PayeeAccount represent the same underlying concept of a BA?

2.

a) I understand why User shouldn't exist in the Collaboration Context (CC), but should instead be defined within the Identity and Access Context, IAC (page 65). But still, do User (IAC), Moderator (CC), Author (CC), Owner (CC) and Participant (CC) all represent the same underlying concept of a Customer?

b) If yes, doesn't then CC contain several model elements ( Moderator, Author, Owner and Participant ) which represent the same underlying concept of a Customer, just like both BankingAccount and PayeeAccount represent the same underlying concept of a BA?

c) If Moderator, Author... don't represent the underlying concept of a Customer, then what underlying concept(s) do they represent?

3. In an e-commerce system, the term Customer has multiple meanings (page 49): when the user is browsing the Catalog, Customer has a different meaning than when the user is placing an Order.

But do these two different definitions of a Customer represent the same underlying concept, just like both BankingAccount and PayeeAccount represent the same underlying concept of a BA?

UPDATE:

1.

I'd say that they don't have the same concept of a book. Your proposal stage probably won't have the concept of a book at all, and the editorial process probably won't use the concept of a book either; they'll probably refer to a Proposal and a Draft respectively, which would be completely different things from a book.

As far as I can tell, the author is implying that the concept of a book will indeed be modeled in all stages?

2.

The concept of Customer isn't mentioned in his example, and your e-commerce definition of customer wouldn't fit the model of Moderator, Author, Owner, etc. You'd be best off modelling this around your own distinct business needs.

Perhaps to avoid the confusion, instead of naming the underlying concept a Customer I should use a different name for it, maybe a Consumer. In any case, I used the name Customer for an underlying concept which I assumed model elements such as User, Moderator and Author all represent.

3.

The two different meanings of customer in the two different contexts probably won't have a basic underlying type. I doubt that during browsing of the catalogue you'd be interested in the customer's name, address, etc., whereas when placing the order you'd be interested in these things, but less interested in what the last 10 products they visited were.

But the whole point of DDD is that you model selected aspects of reality. In other words, aren't a customer's name, address and browsing history all properties of the same underlying concept of a Customer? As such, if the team is working on the Catalog, it will model only those aspects/properties of the underlying Customer concept that are relevant to browsing (browsing history...), while the team working on placing an order will model only those aspects of the underlying Customer concept that are relevant to placing an order (address, name...)?

thanks

I'm trying to use Domain-Driven Design in one of my applications and have some questions about user authentication.

I have an aggregate root called User which has Value Objects like UserCredentials, Password, ActivationToken, etc. I also have a few domain services for managing users. For example, the UserRegistration service looks like this:

public interface IUserRegistrationService
{
    IEnumerable<string> Register(NewUserRequest request);
}

It checks the business rules that apply to the user registration process and persists the user in the database.

Now I want to authenticate the user, so I've created a UserAuthentication domain service:

public interface UserAuthenticationService
{
    IEnumerable<string> Authenticate(AuthRequest request);
}

It takes the user from the repository, checks business rules, and updates and persists user data changes like LastLoginDate.

But I have some doubts whether the authentication process belongs to the domain itself or to an application service, as for my domain it doesn't matter how the user is authenticated. On the other hand, the authentication rules that are checked inside this service belong to my domain rules, so they're an integral part of my domain.

So where do you put authentication in your DDD based applications, and what is your solution to this issue?

1. Generally, authentication and authorization are sub-domains in an application. You'd better build an abstraction in the application layer/core domain to isolate them:

public class OrderingService // application layer
{
    public void PlaceOrder(Order order) {
          //delegate to the identity subdomain to validate the user request
          UserAuthenticationService.Authenticate(ExtractFrom(order));

          //delegate to the booking core domain to handle the core business
          BookingService.PlaceOrder(order);
    }
}

2. In the Identity subdomain, the authentication algorithm could be placed in the infrastructure layer:

public class OAuthUserAuthenticationService : UserAuthenticationService // infrastructure layer
{
    public IEnumerable<string> Authenticate(AuthRequest request) {
         ......
    }
}

There are excellent discussions and examples in Implementing Domain-Driven Design. The author separates authentication into an identity subdomain.

In Domain Driven Design literature it is often said that domain services should be stateless.

I believe the reason for this is that service calls should represent single units of work. There shouldn't be any service state which multiple service methods would use.

I break this rule in my service architecture so that I can constructor-inject all the relevant repositories required by the service. Example:

public class UserService : IUserService
{
    public IUnitOfWork UnitOfWork { get; set; }

    public IUserRepository UserRepository { get; set; }

    public ICustomerRepository CustomerRepository { get; set; }

    public UserService(IUnitOfWork unitOfWork, IUserRepository userRepository, ICustomerRepository customerRepository)
    {
        UnitOfWork = unitOfWork;
        UserRepository = userRepository;
        CustomerRepository = customerRepository;
    }

    public User RegisterNewUser(...)
    {
        // Perform relevant domain logic
    }

    // ...
}

In order for me to use constructor injection on the UserService, I would need to have state (properties) so that the service methods have access to the relevant repositories and such.

Although I hope to design the individual service methods as isolated units of work, I cannot necessarily prevent that from happening.

How could I architecture domain services so that they are stateless? Is this even necessary?

EDIT:

Eric Evans in Domain-driven Design: Tackling Complexity in the Heart of Software:

When a significant process or transformation in the domain is not a natural responsibility of an ENTITY or VALUE OBJECT, add an operation to the model as standalone interface declared as a SERVICE. Define the interface in terms of the language of the model and make sure the operation name is part of the UBIQUITOUS LANGUAGE. Make the SERVICE stateless.

Vaughn Vernon also recommends stateless services in his book Implementing Domain Driven Design.

One way to get close to your goal is to inject an IoC container into your service class and then have your property getters resolve an instance of the necessary class. Your new class would look something like this:

public class UserService : IUserService
{
  private IUnitOfWork UnitOfWork 
  { 
    get { return container.Resolve<IUnitOfWork>(); }
  }
  private IUnityContainer container {get;set;}

  public UserService(IUnityContainer container )
  {
    this.container = container;
  }

  public User RegisterNewUser(User user)
  {
     //Domain logic
  }

}

Your service class now has a dependency on an IoC container, which is not a good thing, but if you are trying to get closer to a stateless service, this would do it.

I was reading Implementing Domain-Driven Design by Vaughn Vernon and in the chapter about aggregates the following structure is shown:

[Image: aggregate structure in which each entity holds a reference to the aggregate root]

This structure can be mapped easily using Hibernate/NHibernate, as each entity references the aggregate root by reference.

However, he decides to refactor the design to this:

[Image: refactored aggregate structure in which each entity references the root by the ProductId value object]

Now all entities reference the root using the ProductId value object instead.

How can one model this using Hibernate/NHibernate?

The explanation for the diagrams can be found here Effective Aggregate Design by Vaughn Vernon

This is a design question which confuses me.

As you know, objects consist of attributes and behaviours. In web programming, I have implemented several protocol objects as DTOs. These look like:

abstract class AbstractRequest {

    public abstract AbstractResponse apply();
    ...
}

class MathLessonRequest extends AbstractRequest {

    public AbstractResponse apply() {
        // ..do something based on the request
    }
    ...
}

class HistoryLessonRequest extends AbstractRequest {
    public AbstractResponse apply() {
        // ..do something based on the request
    }
}

and what I want to do, in my controller, is simply something like this:

@RestController
class SchoolRequestController{

    @RequestMapping(value="/",method = RequestMethod.POST, produces = "application/json")
    @ResponseStatus(HttpStatus.OK)
    @ResponseBody
    public AbstractResponse query(AbstractRequest request){

          return request.apply();
    }

}

So, as you can see, I want to give the Request classes the responsibility to execute whatever they are asked for.

My question is: is this a good design? Is it right to give DTO objects the responsibility to execute what they are for? Or are DTO objects only for data transfer?

PS: This design comes with a problem: the apply method needs references to some other objects like services, DAOs, etc. So what is the elegant way to inject these dependencies into these instances?

Usually DTOs have no logic (or very simple transformation logic, such as returning a person's age from a date of birth).

You can use the pattern you have there, definitely; it's just that the objects are not really DTOs but richer objects (and that's usually good). You're not adding a 'DTO' suffix to your class names, so I would say that you're doing fine, because a Request object can have some behaviour.

Edit

I see what you're trying to do. It's possible to do using Dependency Injection + AOP, but I think there are other patterns that might have a more clear distinction and a lot less black magic.

With the approach you want to use, your Request is the entry point to your application (to the core of your domain) and represents the use case you want to run.

The approach I usually use, which is based on Domain-Driven Design (DDD) and Hexagonal Architecture, is to have DTOs which might have some kind of binding to the transport technology (for example XML/JSON annotations), plus a layer of Application Services which serves as a façade into the domain logic. The Application Service is responsible only for orchestration, not for business logic.

As part of the orchestration, the Application Service needs to get a reference to an object that does have the business logic. In DDD these objects are usually Aggregates.
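
A rough sketch of that shape (in C#; every name here is hypothetical):

using System;

public class ScheduleLessonDto // transport-bound DTO, no behaviour
{
    public Guid LessonId { get; set; }
    public DateTime Date { get; set; }
}

public interface ILessonRepository
{
    Lesson Get(Guid id);
    void Save(Lesson lesson);
}

public class Lesson // the aggregate owning the business rules
{
    public Guid Id { get; private set; }
    public DateTime? ScheduledFor { get; private set; }

    public void Schedule(DateTime date)
    {
        if (date < DateTime.Today)
            throw new InvalidOperationException("Cannot schedule a lesson in the past.");
        ScheduledFor = date;
    }
}

public class LessonApplicationService // orchestration only
{
    private readonly ILessonRepository lessons;

    public LessonApplicationService(ILessonRepository lessons)
    {
        this.lessons = lessons;
    }

    public void ScheduleLesson(ScheduleLessonDto dto)
    {
        var lesson = lessons.Get(dto.LessonId); // load the aggregate
        lesson.Schedule(dto.Date);              // delegate the decision
        lessons.Save(lesson);                   // persist
    }
}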

I could write a lot more about this, but there are already quite a few really good resources explaining how to design applications this way, and the explanations there are way better than what I can do here :).

If you are interested in this and don't mind spending a bit more time (and maybe a few bucks), I strongly suggest you get a copy of Growing Object-Oriented Software and Implementing Domain-Driven Design. Both are excellent books, very easy to read, and luckily all the examples are in Java.

We are building a system for managing our internal production in electrical goods manufacturing.

Given the complexity of our model, we think that DDD would be a good fit for our project.

The system is composed of a distributed web-based system but also a thick-client desktop application written in WPF (using MVVM as the presentation pattern).

I've just read the book Implementing Domain-Driven Design (http://www.amazon.com/dp/0321834577) by Vaughn Vernon (which is very good), and I'm quite confused: DDD seems to fit pretty well in web-based environments, but is that the case for a WPF desktop application?

Being a rookie with DDD, I'm quite confused about the integration of DDD in a desktop application.

A lot of sources on DDD, including the iDDD book, state that a BC should be decoupled from other BCs. The pattern for managing integration between different BCs is often to create an Open Host Service. An OHS can be implemented using REST, messaging, or SOAP. I understand this for a distributed system with several web applications.

But what if I have different BCs inside one desktop application (WPF in this case)?

The application has to cover a large set of contexts like "production monitoring" and "production quality", which to me sound like different BCs.

Is it OK to have several BCs in the same Windows application? Or should 1 BC be 1 application (which often seems to be the case for distributed systems)?

What would be a good pattern for integrating different BCs in one single desktop application?

One solution seems to be to create an "event bus" inside my application so that BCs can communicate with each other by publishing/subscribing to events on the bus. The implementation would look like Prism's EventAggregator or MVVM Light's Messenger, but dedicated to the model.
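
A minimal in-process bus along those lines could look like this (a C# sketch; thread safety, error handling, and unsubscription are omitted):

using System;
using System.Collections.Generic;

// Publish/subscribe bus for integration events between bounded
// contexts hosted in the same process.
public class EventBus
{
    private readonly Dictionary<Type, List<Action<object>>> handlers =
        new Dictionary<Type, List<Action<object>>>();

    public void Subscribe<TEvent>(Action<TEvent> handler)
    {
        List<Action<object>> list;
        if (!handlers.TryGetValue(typeof(TEvent), out list))
        {
            list = new List<Action<object>>();
            handlers[typeof(TEvent)] = list;
        }
        list.Add(e => handler((TEvent)e));
    }

    public void Publish<TEvent>(TEvent @event)
    {
        List<Action<object>> list;
        if (handlers.TryGetValue(typeof(TEvent), out list))
        {
            foreach (var handle in list)
                handle(@event);
        }
    }
}

// Usage idea (event and module names are hypothetical):
// "production monitoring" publishes, "production quality" subscribes,
// without either BC referencing the other's model:
//   bus.Subscribe<MeasurementRecorded>(e => qualityModule.OnMeasurement(e));
//   bus.Publish(new MeasurementRecorded(...));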

Udi Dahan proposes a similar pattern in Domain Events - Salvation (http://www.udidahan.com/2009/06/14/domain-events-salvation/), but it seems to be limited to one BC.

a) Assuming we don't use IoC, where should handlers be registered? In the Application layer?

b) Perhaps a useless question, but is part of the reason for a design where the handler's Handle method takes a Domain Event as an argument that we explicitly state which Domain Event is being handled, and that the code becomes easier to understand because the arguments are expressed in terms of the domain model?

c) From

A domain event is a role, and thus should be represented explicitly

What does the author mean by a domain event being a role?

Thank you

UPDATE:

a)

In IoC terms this would be the composition root of your application.

I don't quite understand what you're trying to convey here?!

b)

Yes, although I don't fully understand your question. What would be the alternative?

I wasn't implying that the design Udi came up with could have an alternative to passing events as arguments; I was just curious whether this design also brings the benefits I mentioned under b).

c)

The concept of a role is based on the idea that a single object can play multiple roles depending on the context.

I haven't read chapters 16 and 17 (of Evans' book), since I doubt I will be involved in large-scale projects anytime soon, but to my knowledge Evans' book doesn't cover this subject (I'm not implying that this is not an important topic; I'm just curious whether I somehow managed to overlook it)?

a) Handlers should be registered in the same place that other dependencies such as repositories are registered. In IoC terms this would be the composition root of your application.

b) Yes, although I don't fully understand your question. What would be the alternative?

c) The concept of a role is based on the idea that a single object can play multiple roles depending on the context. Take a look at the author's presentation: Making Roles Explicit.

UPDATE

a) It basically means the place in your application where you configure all dependencies. In a simple console application, this would be somewhere near the start of the Main method. In an ASP.NET application it would be in the method which handles application start. Take a look at this question.
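
For illustration, a console composition root might look like this (a C# sketch; the DomainEvents class below follows the spirit of Udi's article, but is a deliberately simplified, non-thread-safe version, and all other names are hypothetical):

using System;
using System.Collections.Generic;

public static class DomainEvents // minimal static registry
{
    private static readonly List<Delegate> actions = new List<Delegate>();

    public static void Register<T>(Action<T> callback)
    {
        actions.Add(callback);
    }

    public static void Raise<T>(T args)
    {
        foreach (var action in actions)
        {
            var typed = action as Action<T>;
            if (typed != null) typed(args);
        }
    }
}

public class CustomerBecamePreferred // hypothetical domain event
{
    public string CustomerName { get; set; }
}

public static class Program
{
    public static void Main(string[] args)
    {
        // Composition root: wire all handlers near the start of Main;
        // a lambda works just as well as a dedicated handler class.
        DomainEvents.Register<CustomerBecamePreferred>(
            e => Console.WriteLine("Send welcome mail to " + e.CustomerName));

        // Later, somewhere in the domain:
        DomainEvents.Raise(new CustomerBecamePreferred { CustomerName = "Alice" });
    }
}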

b) Yes, IMO it does bring those benefits, but again note that the handler class itself isn't the interesting part; it can just as well be a lambda.

c) Those parts of the book cover some very important DDD concepts. In fact Evans himself has been somewhat regretful of not putting the strategic aspects of DDD in the beginning. Take a look at the new book in the series Implementing Domain-Driven Design.

As far as roles go, however, I don't think Evans covers them explicitly in the book. They don't have as much to do with DDD as they do with OOP.

Could anyone offer any advice on how they would organise entities, aggregate roots, etc in a hierarchical domain model when using event sourcing?

A project has assets. Assets form a hierarchy. Each asset has a set of data (a level contains a set of categories, a category contains a set of items, etc.). There could be 100,000s of assets, and they in turn could have 10,000s of sub-items.

  • project (id)
    • assets (id, parentId)
      • level
        • category
          • item
            • case
    • revenues
    • other...

From my understanding of DDD there would be one aggregate, with project being the aggregate root, because a level can't exist without an asset and an asset can't exist without a project, etc.

This would lead to an enormous number of objects in the project. It would also mean that to add a case I would have to have a method on the root such as CreateNewCase(asset, level, category, item, case), or CreateNewCase(itemId, case) if each item had a unique key. Project would end up enormous and would contain methods for handling events for almost all the entities in the application.

Any help would be appreciated.

You pretty much answered your own question -- this model is impractical, and will be slow as you need to load tens of thousands of items each time you rehydrate an aggregate root from the database (unless resorting to complex lazy loading).

The "can't exist without" rule is rarely applicable. I suggest you read the aggregate design recommendations here : http://dddcommunity.org/library/vernon_2011/ http://www.amazon.com/Implementing-Domain-Driven-Design-Vaughn-Vernon/dp/0321834577

How do you handle this situation with blameable in the DDD way? Of course we can ignore some things, but I think that when an entity needs some tracking (creator, updater, time created/updated), it should be in the class that actually performs the actions on the entity. For example, we have a post and a user; what would be the correct way?

$post = new Post();
$post->create(); // here we can set created_id and other attributes,
                 // e.g. by using mixins or traits like some frameworks do

Or is it better like this:

$user->createPost($post);
$user->update($post);

To me the second is better, even when we need to track changes that do not apply to the post directly, for example:

$post->doSomethingWithPost();
$user->updatePost($post);

It seems like blameable just throws out one important entity: the user who performs operations on other entities. Of course we should not overcomplicate things, but usually when blameable is implemented, the entity from which you get the id is the logged-in user, which is incorrect for the bounded context. Here it is some Blogging Context, where a user of this context updates a post, not some authenticated user.

What are your thoughts on this one? Are there similar questions that I may have missed?

All your examples seem like they were not designed with DDD principles in mind. The first indicator to me is the usage of a $user variable. In 99% of cases this is too generic to really capture the intent of a given Model. I think there are hidden concepts that would first have to be made explicit, along the lines of RegisteredAuthor and Administrator. At least that's what I understand from:

Here it is some Blogging Context, where a user of this context updates a post, not some authenticated user.

Another question is how can a "user of this context" not be authenticated? How do you know who he is?

In general, in an application that explicitly requires user management, we normally have something like an IdentityContext as a supporting Sub Domain. In the different contexts we then have other Models like Author or BlogAdministrator holding a reference to the User's identity (UserId) from the IdentityContext. The Red Book has some nice examples of how to implement this.
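
Sketched in code (C#; all names are assumptions), that separation might look like:

using System;

// IdentityContext owns authentication and the canonical user identity.
public class UserId
{
    public Guid Value { get; private set; }

    public UserId(Guid value)
    {
        Value = value;
    }
}

// Blogging Context model: holds only a reference to the identity.
public class Author
{
    public UserId UserId { get; private set; }
    public string PenName { get; private set; }

    public Author(UserId userId, string penName)
    {
        UserId = userId;
        PenName = penName;
    }
}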

To answer the question on how to track who changed something and when:

This concept is also referred to as Auditability, which in the most revenue-relevant parts of a system is actually a must once your organization reaches a certain size. In this scenario I actually always recommend an Event Sourcing approach, since it comes with auditability batteries included.

In your case it would actually be enough to either capture the executing UserId as metadata on commands like WritePostCommand or ChangePostContentsCommand, or to use the UserId in a RequestContext object that knows about the execution context (who sent this command, when was it sent, is this user allowed to execute this use case).
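
Sketched, the first option could look like this (C#; the command name comes from the text above, but its exact shape is an assumption):

using System;

public class ChangePostContentsCommand
{
    // Payload
    public Guid PostId { get; private set; }
    public string NewContents { get; private set; }

    // Audit metadata: who issued the command and when.
    public Guid IssuedByUserId { get; private set; }
    public DateTime IssuedAtUtc { get; private set; }

    public ChangePostContentsCommand(Guid postId, string newContents, Guid issuedByUserId)
    {
        PostId = postId;
        NewContents = newContents;
        IssuedByUserId = issuedByUserId;
        IssuedAtUtc = DateTime.UtcNow;
    }
}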

You can then, as Alexander Langer pointed out in the comments, just use this metadata inside your Repositories or Handlers to pass the information to the Aggregates that need it, or even just send it to an audit log so as not to pollute your Domain Model with these responsibilities.

NOTE: Generally I would not use DoctrineExtensions like Blameable in your Domain Model. They depend heavily on Doctrine's event system, and you do not want to tie your Model to an infrastructure concern.

Kind regards

I have the following code.

public class CountryFactory : IEntityFactory
{
    private readonly IRepository<Country> countryRepository;

    public CountryFactory(IRepository<Country> countryRepository)
    {
        this.countryRepository = countryRepository;
    }

    public Country CreateCountry(string name)
    {
        if (countryRepository.FindAll().Any(c => c.Name == name))
        {
            throw new ArgumentException("There is already a country with that name!");
        }

        return new Country(name);
    }
}

From a DDD approach, is this the correct way to create a Country? Or is it better to have a CountryService which checks whether or not a country exists and, if it does not, just calls the factory to return a new entity? That would mean the service, rather than the factory, is responsible for persisting the entity.

I'm a bit confused as to where the responsibility should lie, especially when more complex entities need to be created and creation is not as simple as it is for a country.

In DDD, factories are used to encapsulate complex object and aggregate creation. Usually, factories are not implemented as separate classes but rather as static methods on the aggregate root class that return the new aggregate.

Factory methods are better suited than constructors, since you might need technical constructors for serialization purposes, and var x = new Country(name) has very little meaning inside your Ubiquitous Language. What does it mean? Why do you need a name when you create a country? Do you really create countries? How often do new countries appear? Do you even need to model this process? All these questions arise once you start thinking about your model and ubiquitous language beyond tactical patterns.

Factories must return valid objects (i.e. aggregates), checking all invariants inside the factory, but not outside it. A factory might receive services and repositories as parameters, but this is not very common. Normally, you have an application service or command handler that does some validations and then creates a new aggregate using the factory method and adds it to the repository.
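
Put together, that might look like this (a sketch; the RegisterNew name and the invariant shown are assumptions, and the uniqueness check stays outside the factory):

using System;

public class Country
{
    public string Name { get; private set; }

    private Country() { } // technical constructor (serialization/ORM)

    // Factory method named from the Ubiquitous Language; it enforces
    // only the invariants the aggregate can check by itself.
    public static Country RegisterNew(string name)
    {
        if (string.IsNullOrWhiteSpace(name))
            throw new ArgumentException("A country must have a name.");
        return new Country { Name = name };
    }
}

// The uniqueness check stays in the application service / command handler:
//
//   if (countryRepository.FindAll().Any(c => c.Name == name))
//       throw new ArgumentException("There is already a country with that name!");
//   var country = Country.RegisterNew(name);
//   countryRepository.Add(country);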

There is also a good answer by Lev Gorodinski here: Factory Pattern where should this live in DDD?

Besides, implementation of Factories is extensively described in Chapter 11 of the Red Book.

I have a scenario I am trying to refactor to DDD. I have a Batch, which is an aggregate, and a list of BatchEntries. After a Batch is created and BatchEntries are added, an SMS is sent to the individuals in the batch, and the status of the batch changes from running to posted.

Any ideas on how to make the design better? The domain has two aggregates, Batch and BatchEntry, with Batch being the aggregate root.

The code looks like this

public class Batch : EntityBase, IValidatableObject
{
    public int BatchNumber { get; set; }
    public string Description { get; set; }
    public decimal TotalValue { get; set; }
    public bool SMSAlert { get; set; }
    public int Status { get; set; }

    private HashSet<BatchEntry> _batchEntries;
    public virtual ICollection<BatchEntry> BatchEntries
    {
        get{
            if (_batchEntries == null){
                _batchEntries = new HashSet<BatchEntry>();
            }
            return _batchEntries;
        }
        private set {
            _batchEntries = new HashSet<BatchEntry>(value);
        }
    }

    public static Batch Create(string description, decimal totalValue, bool smsAlert)
    {
        var batch = new Batch();
        batch.GenerateNewIdentity();
        batch.Description = description;
        batch.TotalValue = totalValue;
        batch.SMSAlert = smsAlert;
        return batch;
    }

    public IEnumerable<ValidationResult> Validate(ValidationContext validationContext)
    {
        // 
    }
}

public interface IBatchRepository : IRepository<Batch>
{
    int NextBatchNumber();
}

public class BatchEntry : EntityBase, IValidatableObject
{
    public Guid BatchId { get; set; }
    public virtual Batch Batch { get; private set; }
    public decimal Amount { get; set; }
    public Guid CustomerAccountId { get; set; }
    public virtual CustomerAccount CustomerAccount { get; private set; }

    public static BatchEntry Create(Guid batchId, Guid customerAccountId, decimal amount)
    {
        var batchEntry = new BatchEntry();
        batchEntry.GenerateNewIdentity();
        batchEntry.BatchId = batchId;
        batchEntry.CustomerAccountId = customerAccountId;
        batchEntry.Amount = amount;
        return batchEntry;
    }

    public IEnumerable<ValidationResult> Validate(ValidationContext validationContext)
    {
        //
    }
}

public interface IBatchEntryRepository : IRepository<BatchEntry>{}

The domain and domain services are exposed via Application Services. The Code in the application services is as follows:

//Application Services Code

public class BatchApplicationService : IBatchApplicationService
{
    private readonly IBatchRepository _batchRepository;
    private readonly IBatchEntryRepository _batchEntryRepository;

    public BatchApplicationService(IBatchRepository batchRepository, IBatchEntryRepository batchEntryRepository)
    {
        if (batchRepository == null) throw new ArgumentNullException("batchRepository");

        if (batchEntryRepository == null) throw new ArgumentNullException("batchEntryRepository");

        _batchRepository = batchRepository;
        _batchEntryRepository = batchEntryRepository;
    }

    public BatchDto AddNewBatch(BatchDto batchDto)
    {
        if (batchDto != null)
        {
            var batch = Batch.Create(batchDto.Description, batchDto.TotalValue, batchDto.SMSAlert);
            batch.BatchNumber = _batchRepository.NextBatchNumber();
            batch.Status = (int)BatchStatus.Running;
            SaveBatch(batch);
            return batch.Map<BatchDto>();
        }
        else
        {
            //
        }
    }

    public bool UpdateBatch(BatchDto batchDto)
    {
        if (batchDto == null || batchDto.Id == Guid.Empty)
        {
            //
        }

        var persisted = _batchRepository.Get(batchDto.Id);
        if (persisted != null)
        {
            var result = false;
            var current = Batch.Create(batchDto.Description, batchDto.TotalValue, batchDto.SMSAlert);
            current.ChangeCurrentIdentity(persisted.Id);
            current.BatchNumber = persisted.BatchNumber;
            current.Status = persisted.Status;

            _batchRepository.Merge(persisted, current);
            _batchRepository.UnitOfWork.Commit();

            if (persisted.BatchEntries.Count != 0){
                persisted.BatchEntries.ToList().ForEach(x => _batchEntryRepository.Remove(x));
                _batchEntryRepository.UnitOfWork.Commit();
            }

            if (batchDto.BatchEntries != null && batchDto.BatchEntries.Any())
            {
                List<BatchEntry> batchEntries = new List<BatchEntry>();
                int counter = default(int);
                batchDto.BatchEntries.ToList().ForEach(x =>
                {
                    var batchEntry = BatchEntry.Create(persisted.Id, x.CustomerAccountId, x.Amount);
                    batchEntries.Add(batchEntry);
                });
            }
            else result = true;
            return result;
        }
        else
        {
            //
        }
    }

    public bool MarkBatchAsPosted(BatchDto batchDto, int authStatus)
    {
        var result = false;
        if (batchDto == null || batchDto.Id == Guid.Empty)
        {
            //
        }

        var persisted = _batchRepository.Get(batchDto.Id);
        if (persisted != null)
        {
            var current = Batch.Create(batchDto.Description, batchDto.TotalValue, batchDto.SMSAlert);
            current.ChangeCurrentIdentity(persisted.Id);
            current.BatchNumber = persisted.BatchNumber;
            current.Status = authStatus;
            _batchRepository.Merge(persisted, current);
            _batchRepository.UnitOfWork.Commit();
            result = true;
        }
        else
        {
            //
        }
        return result;
    }

    private void SaveBatch(Batch batch)
    {
        var validator = EntityValidatorFactory.CreateValidator();
        if (validator.IsValid<Batch>(batch))
        {
            _batchRepository.Add(batch);
            _batchRepository.UnitOfWork.Commit();
        }
        else throw new ApplicationValidationErrorsException(validator.GetInvalidMessages(batch));
    }
}

Questions:

  1. Where should the BatchStatus, i.e. Running or Posted, be assigned?
  2. Should the MarkBatchAsPosted method be defined as a method on the Batch entity?
  3. How best can this be redesigned for domain-driven design?

Although it looks simple, I'm not sure that I really understand your domain.

Statements such as

"After a Batch is created and BatchEntries added, an SMS is sent to the individuals in the batch and the status of the batch changes from running to posted"

makes very little sense to me. Can a batch really be a batch without any entries? If not, why would the batch automatically start when entries are added?

Anyway, I did not risk answering your 3 questions, but there are a few guidelines you seem to be violating, and understanding them will allow you to come up with your own answers:

  • Your domain is suffering from anemia.

  • Non-root entities should not have their own repository, because they should be accessed only through the root. An aggregate root's children should only be modified through their root (Tell, Don't Ask). You should not have a BatchEntryRepository if BatchEntry is not a root.

  • An aggregate root is a transactional boundary, and only one should be modified in the same transaction. In addition, aggregate roots should be as small as possible; therefore you keep only the pieces needed to enforce invariants within the cluster. In your case, adding/removing batch entries seems to impact the Batch's status, so having a collection of BatchEntry under Batch makes sense and allows invariants to be protected transactionally.

    Note: If there was a lot of contention on a Batch, e.g. multiple people working on the same Batch instance, adding and removing BatchEntry instances, then you might have to make BatchEntry its own aggregate root and use eventual consistency to bring the system to a consistent state.

  • Domain objects should usually be designed using an always-valid approach, meaning they can never be put into an invalid state. The UI should usually take care of validating user input to avoid sending incorrect commands, but the domain can just throw at you. Therefore, validator.IsValid<Batch>(batch) makes very little sense unless it is validating something the Batch couldn't possibly enforce by itself.

  • Domain logic should not leak into application services and should usually be encapsulated in entities when possible (domain services otherwise). You are currently executing a lot of business logic in your application service, e.g. if (persisted.BatchEntries.Count != 0){ ... }

  • DDD is not CRUD. Using tactical DDD patterns in CRUD is not necessarily wrong, but it's certainly not DDD. DDD is all about the ubiquitous language and modeling the domain. When you see methods named Update... or tons of getters/setters, it usually means you are doing it wrong. DDD works best with task-based UIs, which allow you to focus on one business operation at a time. Your UpdateBatch method is doing way too much and should be segregated into more meaningful and granular business operations; see the sketch after this list.
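
To illustrate the last two points, here is a sketch of a richer Batch (the invariants shown are assumptions about your domain):

using System;
using System.Collections.Generic;

public enum BatchStatus { Running, Posted }

public class BatchEntry
{
    public Guid CustomerAccountId { get; private set; }
    public decimal Amount { get; private set; }

    public BatchEntry(Guid customerAccountId, decimal amount)
    {
        CustomerAccountId = customerAccountId;
        Amount = amount;
    }
}

public class Batch
{
    private readonly List<BatchEntry> entries = new List<BatchEntry>();

    public BatchStatus Status { get; private set; }

    public Batch()
    {
        Status = BatchStatus.Running; // a new batch starts running
    }

    public void AddEntry(Guid customerAccountId, decimal amount)
    {
        if (Status != BatchStatus.Running)
            throw new InvalidOperationException("Entries can only be added while the batch is running.");
        entries.Add(new BatchEntry(customerAccountId, amount));
    }

    // A meaningful business operation instead of a generic Update.
    public void MarkAsPosted()
    {
        if (Status != BatchStatus.Running)
            throw new InvalidOperationException("Only a running batch can be posted.");
        if (entries.Count == 0)
            throw new InvalidOperationException("An empty batch cannot be posted.");
        Status = BatchStatus.Posted;
    }
}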

Hopefully my answer will help you refine your model, but I strongly advise you to read either Evans or Vernon... or both ;)

In my company we have two "applications". One is essentially a big CMS for the product department to manage products, promos, customers, etc. The second is more or less an e-commerce solution which is a direct consumer of the CMS bounded context. The thing is, we share infrastructure, mainly databases. This is the origin of Product in the e-commerce BC: it's loaded from a table maintained by the CMS. In Implementing DDD, Vernon mentions several ways to integrate such remote BCs (REST/RPC/messaging), but I haven't encountered this scenario anywhere. From a performance perspective it's probably best (and correct me if I'm wrong) to use those CMS tables in the e-commerce BC.

Now:

  • Should I create an Inventory context in e-commerce that would serve as an integration bridge between the CMS and e-commerce BCs?
  • Should I move the persistence models from the CMS into some kind of shared kernel and use them in both BCs?

What are my options here?