Domain-driven Design

Eric Evans

Describes ways to incorporate domain modeling into software development.

Mentioned in questions and answers.

POCO = Plain Old CLR (or better: Class) Object

DTO = Data Transfer Object

In this post there is a difference, but frankly most of the blogs I read describe POCO in the way DTO is defined: DTOs are simple data containers used for moving data between the layers of an application.

Are POCO and DTO the same thing?

(ps: look at this great article about POCO as a lifestyle)

A POCO follows the rules of OOP. It should (but doesn't have to) have state and behavior. POCO comes from POJO, coined by Martin Fowler [anecdote here]. He used the term POJO as a way to make it more sexy to reject the framework-heavy EJB implementations. POCO should be used in the same context in .NET. Don't let frameworks dictate your object's design.

A DTO's only purpose is to transfer state, and should have no behavior. See Martin Fowler's explanation of a DTO for an example of the use of this pattern.

Here's the difference: POCO describes an approach to programming (good old fashioned object oriented programming), where DTO is a pattern that is used to "transfer data" using objects.

While you can treat POCOs like DTOs, you run the risk of creating an anemic domain model if you do so. Additionally, there's a mismatch in structure, since DTOs should be designed to transfer data, not to represent the true structure of the business domain. The result of this is that DTOs tend to be more flat than your actual domain.

In a domain of any reasonable complexity, you're almost always better off creating separate domain POCOs and translating them to DTOs. DDD (domain driven design) defines the anti-corruption layer (another link here, but best thing to do is buy the book), which is a good structure that makes the segregation clear.
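The contrast is easy to sketch. Below is a minimal illustration (Python used for brevity; `Account` and its fields are invented, not taken from any of the posts above): the domain object carries behavior and invariants, the DTO is a flat container, and a small translation function keeps the two segregated.

```python
from dataclasses import dataclass

class Account:
    """A domain POCO: state plus behavior, designed around the business domain."""
    def __init__(self, owner: str, balance: int = 0):
        self.owner = owner
        self.balance = balance

    def deposit(self, amount: int) -> None:
        # behavior enforcing a domain invariant
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self.balance += amount

@dataclass
class AccountDto:
    """A DTO: flat, behavior-free, shaped for transfer between layers."""
    owner: str
    balance: int

def to_dto(account: Account) -> AccountDto:
    # the translation step that keeps the domain model and the transfer shape apart
    return AccountDto(owner=account.owner, balance=account.balance)
```

Serializing `Account` itself would also work, but it is the first step toward the anemic, flattened model described above.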

I am writing a project in Django and I see that 80% of the code is in the file models.py. This code is confusing and, after a certain time, I cease to understand what is really happening.

Here is what bothers me:

  1. I find it ugly that my model level (which I thought was responsible only for working with data from a database) also sends email, calls the APIs of other services, etc.
  2. Also, I find it unacceptable to place business logic in the view, because this way it becomes difficult to control. For example, in my application there are at least three ways to create new instances of User, but technically it should create them uniformly.
  3. I do not always notice when the methods and properties of my models become non-deterministic and when they develop side effects.

Here is a simple example. At first, the User model was like this:

class User(db.Models):

    def get_present_name(self):
        return self.name or 'Anonymous'

    def activate(self):
        self.status = 'activated'

Over time, it turned into this:

class User(db.Models):

    def get_present_name(self): 
        # property became non-deterministic in terms of database
        # data is taken from another service by api
        return remote_api.request_user_name(self.uid) or 'Anonymous' 

    def activate(self):
        # method now has a side effect (send message to user)
        self.status = 'activated'
        send_mail('Your account is activated!', '…', [self.email])

What I want is to separate entities in my code:

  1. Entities of my database, the database level: what does my application contain?
  2. Entities of my application, the business logic level: what can my application do?

What are the good practices to implement such an approach that can be applied in Django?

It seems like you are asking about the difference between the data model and the domain model – the latter is where you can find the business logic and entities as perceived by your end user, the former is where you actually store your data.

Furthermore, I've interpreted the 3rd part of your question as: how to notice failure to keep these models separate.

These are two very different concepts and it's always hard to keep them separate. However, there are some common patterns and tools that can be used for this purpose.

About the Domain Model

The first thing you need to recognize is that your domain model is not really about data; it is about actions and questions such as "activate this user", "deactivate this user", "which users are currently activated?", and "what is this user's name?". In classical terms: it's about queries and commands.

Thinking in Commands

Let's start by looking at the commands in your example: "activate this user" and "deactivate this user". The nice thing about commands is that they can easily be expressed by small given-when-then scenarios:

given an inactive user
when the admin activates this user
then the user becomes active
and a confirmation e-mail is sent to the user
and an entry is added to the system log
(etc. etc.)

Such scenarios are useful to see how different parts of your infrastructure can be affected by a single command – in this case your database (some kind of 'active' flag), your mail server, your system log, etc.

Such scenarios also really help you in setting up a Test Driven Development environment.

And finally, thinking in commands really helps you create a task-oriented application. Your users will appreciate this :-)

Expressing Commands

Django provides two easy ways of expressing commands; they are both valid options and it is not unusual to mix the two approaches.

The service layer

The service module has already been described by @Hedde. Here you define a separate module and each command is represented as a function.

def activate_user(user_id):
    user = User.objects.get(pk=user_id)

    # set the active flag
    user.active = True
    user.save()

    # mail the user
    send_mail('Your account is activated!', '…', [user.email])

    # etc. etc.

Using forms

The other way is to use a Django Form for each command. I prefer this approach, because it combines multiple closely related aspects:

  • execution of the command (what does it do?)
  • validation of the command parameters (can it do this?)
  • presentation of the command (how can I do this?)

class ActivateUserForm(forms.Form):

    user_id = IntegerField(widget = UsernameSelectWidget, verbose_name="Select a user to activate")
    # the username select widget is not a standard Django widget, I just made it up

    def clean_user_id(self):
        user_id = self.cleaned_data['user_id']
        if User.objects.get(pk=user_id).active:
            raise ValidationError("This user cannot be activated")
        # you can also check authorizations etc. 
        return user_id

    def execute(self):
        """
        This is not a standard method in the forms API; it is intended to replace the
        'extract-data-from-form-in-view-and-do-stuff' pattern with a more testable pattern.
        """
        user_id = self.cleaned_data['user_id']
        user = User.objects.get(pk=user_id)

        # set the active flag
        user.active = True
        user.save()

        # mail the user
        send_mail('Your account is activated!', '…', [user.email])

        # etc. etc.
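To show why this pattern is testable, here is a framework-free sketch of the same idea: one object owns both validation ("can it do this?") and execution ("what does it do?"), and the view merely orchestrates. All names are invented and nothing here is Django API:

```python
class ActivateUserCommand:
    """Command object combining parameter validation and execution."""
    def __init__(self, data):
        self.data = data
        self.errors = {}

    def is_valid(self):
        # validation of the command parameters
        if not isinstance(self.data.get('user_id'), int):
            self.errors['user_id'] = 'user_id must be an integer'
        return not self.errors

    def execute(self, users):
        # execution of the command against an in-memory "database"
        user = users[self.data['user_id']]
        user['active'] = True
        return user

def activate_user_view(request_data, users):
    # the "view" only orchestrates; no business logic lives here
    command = ActivateUserCommand(request_data)
    if command.is_valid():
        return command.execute(users)
    return command.errors
```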

Thinking in Queries

Your example did not contain any queries, so I took the liberty of making up a few useful queries. I prefer to use the term "question", but queries is the classical terminology. Interesting queries are: "What is the name of this user?", "Can this user log in?", "Show me a list of deactivated users", and "What is the geographical distribution of deactivated users?"

Before embarking on answering these queries, you should always ask yourself two questions. The first: is this a presentational query just for my templates, and/or a business logic query tied to executing my commands, and/or a reporting query?

Presentational queries are merely made to improve the user interface. The answers to business logic queries directly affect the execution of your commands. Reporting queries are merely for analytical purposes and have looser time constraints. These categories are not mutually exclusive.

The other question is: "do I have complete control over the answers?" For example, when querying the user's name (in this context) we do not have any control over the outcome, because we rely on an external API.

Making Queries

The most basic query in Django is the use of the Manager object:

User.objects.filter(active=True)

Of course, this only works if the data is actually represented in your data model. This is not always the case. In those cases, you can consider the options below.

Custom tags and filters

The first alternative is useful for queries that are merely presentational: custom tags and template filters.


<h1>Welcome, {{ user|friendly_name }}</h1>

def friendly_name(user):
    return remote_api.get_cached_name(user.id)

Query methods

If your query is not merely presentational, you could add queries to your services.py (if you are using that), or introduce a queries.py module:

def inactive_users():
    return User.objects.filter(active=False)

def users_called_publysher():
    for user in User.objects.all():
        if remote_api.get_cached_name(user.id) == "publysher":
            yield user 

Proxy models

Proxy models are very useful in the context of business logic and reporting. You basically define an enhanced subset of your model.

class InactiveUserManager(models.Manager):
    def get_query_set(self):
        query_set = super(InactiveUserManager, self).get_query_set()
        return query_set.filter(active=False)

class InactiveUser(User):
    """
    >>> for user in InactiveUser.objects.all():
    ...     assert user.active is False
    """

    objects = InactiveUserManager()
    class Meta:
        proxy = True

Query models

For queries that are inherently complex, but are executed quite often, there is the possibility of query models. A query model is a form of denormalization where relevant data for a single query is stored in a separate model. The trick of course is to keep the denormalized model in sync with the primary model. Query models can only be used if changes are entirely under your control.

class InactiveUserDistribution(models.Model):
    country = CharField(max_length=200)
    inactive_user_count = IntegerField(default=0)

The first option is to update these models in your commands. This is very useful if these models are only changed by one or two commands.

class ActivateUserForm(forms.Form):
    # see above

    def execute(self):
        # see above
        query_model, _ = InactiveUserDistribution.objects.get_or_create(country=user.country)
        query_model.inactive_user_count -= 1
        query_model.save()

A better option would be to use custom signals. These signals are of course emitted by your commands. Signals have the advantage that you can keep multiple query models in sync with your original model. Furthermore, signal processing can be offloaded to background tasks, using Celery or similar frameworks.

user_activated = Signal(providing_args = ['user'])
user_deactivated = Signal(providing_args = ['user'])

class ActivateUserForm(forms.Form):
    # see above

    def execute(self):
        # see above
        user_activated.send_robust(sender=self, user=user)

class InactiveUserDistribution(models.Model):
    # see above

def on_user_activated(sender, **kwargs):
    user = kwargs['user']
    query_model, _ = InactiveUserDistribution.objects.get_or_create(country=user.country)
    query_model.inactive_user_count -= 1
    query_model.save()

user_activated.connect(on_user_activated)

Keeping it clean

When using this approach, it becomes ridiculously easy to determine if your code stays clean. Just follow these guidelines:

  • Does my model contain methods that do more than managing database state? You should extract a command.
  • Does my model contain properties that do not map to database fields? You should extract a query.
  • Does my model reference infrastructure that is not my database (such as mail)? You should extract a command.

The same goes for views (because views often suffer from the same problem).

  • Does my view actively manage database models? You should extract a command.

Some References

Django documentation: proxy models

Django documentation: signals

Architecture: Domain Driven Design

It is much more convenient and cleaner to use a single statement like

import java.awt.*;

than to import a bunch of individual classes

import java.awt.Panel;
import java.awt.Graphics;
import java.awt.Canvas;

What is wrong with using a wildcard in the import statement?

It's not bad to use a wild card with a Java import statement.

In Clean Code, Robert C. Martin actually recommends using them to avoid long import lists.

Here is the recommendation:

J1: Avoid Long Import Lists by Using Wildcards

If you use two or more classes from a package, then import the whole package with

import package.*;

Long lists of imports are daunting to the reader. We don’t want to clutter up the tops of our modules with 80 lines of imports. Rather we want the imports to be a concise statement about which packages we collaborate with.

Specific imports are hard dependencies, whereas wildcard imports are not. If you specifically import a class, then that class must exist. But if you import a package with a wildcard, no particular classes need to exist. The import statement simply adds the package to the search path when hunting for names. So no true dependency is created by such imports, and they therefore serve to keep our modules less coupled.

There are times when the long list of specific imports can be useful. For example, if you are dealing with legacy code and you want to find out what classes you need to build mocks and stubs for, you can walk down the list of specific imports to find out the true qualified names of all those classes and then put the appropriate stubs in place. However, this use for specific imports is very rare. Furthermore, most modern IDEs will allow you to convert the wildcarded imports to a list of specific imports with a single command. So even in the legacy case it’s better to import wildcards.

Wildcard imports can sometimes cause name conflicts and ambiguities. Two classes with the same name, but in different packages, will need to be specifically imported, or at least specifically qualified when used. This can be a nuisance but is rare enough that using wildcard imports is still generally better than specific imports.

In DDD book

In whatever development technology the implementation will be based on, look for ways of minimizing the work of refactoring MODULES . In Java, there is no escape from importing into individual classes, but you can at least import entire packages at a time, reflecting the intention that packages are highly cohesive units while simultaneously reducing the effort of changing package names.

And if it clutters the local namespace, it's not your fault - blame the size of the package.

I'm working on a large project (for me) which will have many classes and will need to be extensible, but I'm not sure how to plan out my program and how the classes need to interact.

I took an OOD course a few semesters back and learned a lot from it; like writing UML, and translating requirements documents into objects and classes. We learned sequence diagrams too but somehow I missed the lecture or something, they didn't really stick with me.

With previous projects I've tried using methods I learned from the course, but I usually end up with code such that, as soon as I can say "yeah, that looks something like what I had in mind", I have no desire to dig through the muck to add new features.

I've got a copy of Steve McConnell's Code Complete which I continually hear is amazing, here and elsewhere. I read the chapter on design and didn't seem to come out with the information I'm looking for. I know he says that it's not a cut and dried process, that it's mostly based on heuristics, but I can't seem to take all his information and apply it to my projects.

So what are the things you do during the high-level design phase (before you begin programming) to determine what classes you need (especially ones not based on any 'real-world' objects) and how they will interact with each other?

Specifically, I'm interested in what methods you use. What process do you follow that usually yields a good, clean design that will closely represent the final product?

You've asked the kind of question that lots of authors use to write a whole book. There are a number of methodologies, and you should pick the one that seems "prettiest" to you.
I can recommend the book "Domain-Driven Design" by Eric Evans.

Learn design patterns. It has been my personal revolution the past two years regarding OOP. Get a book. I would recommend you this one:

Head First Design Patterns

It is in Java, but the ideas extend to any language.

While creating an app in Laravel 4 after reading T. Otwell's book on good design patterns in Laravel I found myself creating repositories for every table on the application.

I ended up with the following table structure:

  • Students: id, name
  • Courses: id, name, teacher_id
  • Teachers: id, name
  • Assignments: id, name, course_id
  • Scores (acts as a pivot between students and assignments): student_id, assignment_id, scores

I have repository classes with find, create, update and delete methods for all of these tables. Each repository has an Eloquent model which interacts with the database. Relationships are defined in the model per Laravel's documentation:

When creating a new course, all I do is call the create method on the Course Repository. That course has assignments, so when creating one, I also want to create an entry in the scores table for each student in the course. I do this through the Assignment Repository. This implies the assignment repository communicates with two Eloquent models, the Assignment and the Student model.

My question is: as this app will probably grow in size and more relationships will be introduced, is it good practice to communicate with different Eloquent models in repositories or should this be done using other repositories instead (I mean calling other repositories from the Assignment repository) or should it be done in the Eloquent models all together?

Also, is it good practice to use the scores table as a pivot between assignments and students or should it be done somewhere else?

Keep in mind you're asking for opinions :D

Here's mine:

TL;DR: Yes, that's fine.

You're doing fine!

I do exactly what you are doing often and find it works great.

I often, however, organize repositories around business logic instead of having a repo-per-table. This is useful as it's a point of view centered around how your application should solve your "business problem".

A Course is an "entity", with attributes (title, id, etc.) and even other entities (Assignments, which have their own attributes and possibly entities).

Your "Course" repository should be able to return a Course and the Course's attributes/Assignments (including each Assignment).

You can accomplish that with Eloquent, luckily.

(I often end up with a repository per table, but some repositories are used much more than others, and so have many more methods. Your "courses" repository may be much more full-featured than your Assignments repository, for instance, if your application centers more around Courses and less about a Courses' collection of Assignments).

The tricky part

I often use repositories inside of my repositories in order to do some database actions.

Any repository which implements Eloquent in order to handle data will likely return Eloquent models. In that light, it's fine if your Course model uses built-in relationships in order to retrieve or save Assignments (or any other use case). Our "implementation" is built around Eloquent.

From a practical point of view, this makes sense. We're unlikely to change data sources to something Eloquent can't handle (to a non-sql data source).


The trickiest part of this setup, for me at least, is determining whether Eloquent is actually helping or harming us. ORMs are a tricky subject: while they help us greatly from a practical point of view, they also couple your "business logic entities" code with the code doing the data retrieval.

This sort of muddles up whether your repository's responsibility is actually for handling data or handling the retrieval / update of entities (business domain entities).

Furthermore, they act as the very objects you pass to your views. If you later have to get away from using Eloquent models in a repository, you'll need to make sure the variables passed to your views behave in the same way or have the same methods available, otherwise changing your data sources will roll into changing your views, and you've (partially) lost the purpose of abstracting your logic out to repositories in the first place - the maintainability of your project goes down as a result.

Anyway, these are somewhat incomplete thoughts. They are, as stated, merely my opinion, which happens to be the result of reading Domain Driven Design and watching videos like "uncle bob's" keynote at Ruby Midwest within the last year.

I've recently overheard people saying that Data Transfer Objects (DTO) are an anti-pattern.

Can someone please explain why? What are the alternatives?

The intention of a Data Transfer Object is to store data from different sources and then transfer it into a database (or Remote Facade) at once.

However, the DTO pattern violates the Single Responsibility Principle, since the DTO not only stores data, but also transfers it from or to the database/facade.

The need to separate data objects from business objects is not an antipattern, since it is probably required to separate the database layer anyway.

Instead of DTOs you should use the Aggregate and Repository Patterns, which separates the collection of objects (Aggregate) and the data transfer (Repository).

To transfer a group of objects you can use the Unit of Work pattern, which holds a set of Repositories and a transaction context, in order to transfer each object in the aggregate separately within the transaction.
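To make the division of labour concrete, here is an in-memory sketch (Python, all class names invented): the Aggregate holds the object graph, the Repository handles persistence of whole aggregates, and the Unit of Work groups repository operations into one transactional boundary.

```python
class Order:
    """Aggregate root: an Order and its lines are treated as one unit."""
    def __init__(self, order_id):
        self.order_id = order_id
        self.lines = []

    def add_line(self, sku, qty):
        self.lines.append((sku, qty))

class OrderRepository:
    """Handles persistence of Order aggregates; contains no business logic."""
    def __init__(self, store):
        self._store = store

    def add(self, order):
        self._store[order.order_id] = order

    def get(self, order_id):
        return self._store[order_id]

class UnitOfWork:
    """Holds repositories and a 'transaction': staged changes commit together."""
    def __init__(self):
        self._staged = {}
        self.committed = {}
        self.orders = OrderRepository(self._staged)

    def commit(self):
        self.committed.update(self._staged)
        self._staged.clear()
```

Note how no single object both stores data and transfers it: that responsibility is split exactly as the answer describes.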

I keep seeing DDD (Domain Driven Design) being used a lot in articles - I have read the Wikipedia entry about DDD but still can't figure out what it actually is and how I would go about implementing it in creating my sites?

Take StackOverflow as an example. Instead of starting to design some web forms, you concentrate first on doing object-oriented modelling of the entities within your problem domain, for example Users, Questions, Answers, Votes, Comments etc. Since the design is driven by the details of the problem domain it is called domain-driven design.

You can read more in Eric Evans' book.

I have been programming in object-oriented languages for years now but secretly I look at some of the things my colleagues do with envy. A lot of them seem to have some inner OO instinct that I don't have - no matter how hard I try. I've read all the good books on OO but still can't seem to crack it. I feel like the guy who gave 110% to be a professional footballer but just didn't have the natural talent to make it. I'm at a loss and thinking of switching careers - what should I do?

Become more agile, learn JUnit testing, and study Domain-Driven Design. I suggest the book Domain-Driven Design: Tackling Complexity in the Heart of Software, although it's a bit tough at some points.

I need some help from more experienced programmers. I want to improve my MVC skills. But I could not find a good tutorial on Google for MVC. Google always gives "MVC for beginners".

I understand what MVC is and I can make it, but I'm not experienced enough to do something practical in OOP.

If anyone knows a good object-oriented tutorial for MVC, please direct me to the right place — I'm looking for good links, books etc.

Links that contain PHP-only materials are marked with php for easier identification.

You cannot even begin to delve into MVC before you have a comprehensive understanding of OOP. That includes OOP practices (dependency injection, unit testing, refactoring), principles (SOLID, SoC, CQS, LoD) and common patterns (and no, singleton is not an object-oriented pattern).

MVC is an advanced architectural design pattern which requires a solid understanding of OOP. It is not meant for beginners or for tiny "hello world" applications. One uses MVC to add additional constraints to a codebase when simple adherence to OOP practices becomes too loose to control it.

The best I can suggest is to begin by expanding your knowledge of object-oriented code:

The two lectures above should cover the basics. And then move on to:

When you understand all that was explained in this series, you can expand on:

Also, I would strongly recommend for you to read (in this order):

P.S.: you might also take a look at this book (cautiously, because it has issues): Guide to PHP Design Patterns php

I am looking for podcasts or videos on how to do unit testing.

Ideally they should cover both the basics and more advanced topics.

Other hanselminutes episodes on testing:

Other podcasts:

Other questions like this:

Blog posts:

I know you didn't ask for books but... can I also mention that Beck's TDD book is a must-read, even though it may seem like a dated beginner book on a first flick through (and Working Effectively with Legacy Code by Michael C. Feathers of course is the bible). Also, I'd append Martin (& Martin)'s Agile Principles, Patterns, and Practices as really helping in this regard. In this space (concise/distilled info on testing) there is also the excellent Foundations of Programming ebook. Good books on testing I've read are The Art of Unit Testing and xUnit Test Patterns. The latter is an important antidote to the former: it is much more measured, whereas Roy's book is very opinionated and offers a lot of unqualified 'facts' without properly going through the various options. Definitely recommend reading both books though. AOUT is very readable and gets you thinking, though it chooses specific [debatable] technologies; xUTP is in-depth and neutral and really helps solidify your understanding. I read Pragmatic Unit Testing in C# with NUnit afterwards. It's good and balanced, though slightly dated (it mentions RhinoMocks as a sidebar and doesn't mention Moq) - even if nothing is actually incorrect. An updated version of it would be a hands-down recommendation.

More recently I've re-read the Feathers book, which is timeless to a degree and covers important ground. However it's a more 'how, for 50 different wheres' in nature. It's definitely a must read though.

Most recently, I'm reading the excellent Growing Object-Oriented Software, Guided by Tests by Steve Freeman and Nat Pryce. I can't recommend it highly enough - it really ties everything together from big to small in terms of where TDD fits, and various levels of testing within a software architecture. While I'm throwing the kitchen sink in, Evans's DDD book is important too in terms of seeing the value of building things incrementally with maniacal refactoring in order to end up in a better place.

The age old question. Where should you put your business logic, in the database as stored procedures ( or packages ), or in the application/middle tier? And more importantly, Why?

Assume database independence is not a goal.

While there is no one right answer - it depends on the project in question - I would recommend the approach advocated in "Domain Driven Design" by Eric Evans. In this approach the business logic is isolated in its own layer - the domain layer - which sits on top of the infrastructure layer(s) (which could include your database code) and below the application layer, which sends requests into the domain layer for fulfilment and listens for confirmation of their completion, effectively driving the application.

This way, the business logic is captured in a model which can be discussed with those who understand the business aside from technical issues, and it should make it easier to isolate changes in the business rules themselves, the technical implementation issues, and the flow of the application which interacts with the business (domain) model.
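A toy sketch of that layering (Python, every name invented): the domain layer is a pure business rule, and the application layer drives it while talking to infrastructure that is injected rather than hard-coded.

```python
# Domain layer: a pure business rule, ignorant of databases, mail, or UI.
def price_with_discount(subtotal_cents, customer_is_preferred):
    if customer_is_preferred:
        return subtotal_cents - subtotal_cents // 10  # preferred customers get 10% off
    return subtotal_cents

# Application layer: drives the domain and delegates to injected infrastructure.
class CheckoutService:
    def __init__(self, order_store, notify):
        self._orders = order_store  # infrastructure: any dict-like store
        self._notify = notify       # infrastructure: any callable notifier

    def checkout(self, order_id, subtotal_cents, preferred):
        total = price_with_discount(subtotal_cents, preferred)  # domain call
        self._orders[order_id] = total
        self._notify(f"order {order_id}: {total}")
        return total
```

Because the infrastructure is injected, the business rule can be discussed (and tested) entirely apart from the database and messaging code.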

I recommend reading the above book if you get the chance as it is quite good at explaining how this pure ideal can actually be approximated in the real world of real code and projects.

(Note: My question has very similar concerns as the person who asked this question three months ago, but it was never answered.)

I recently started working with MVC3 + Entity Framework and I keep reading that the best practice is to use the repository pattern to centralize access to the DAL. This is also accompanied with explanations that you want to keep the DAL separate from the domain and especially the view layer. But in the examples I've seen the repository is (or appears to be) simply returning DAL entities, i.e. in my case the repository would return EF entities.

So my question is, what good is the repository if it only returns DAL entities? Doesn't this add a layer of complexity that doesn't eliminate the problem of passing DAL entities around between layers? If the repository pattern creates a "single point of entry into the DAL", how is that different from the context object? If the repository provides a mechanism to retrieve and persist DAL objects, how is that different from the context object?

Also, I read in at least one place that the Unit of Work pattern centralizes repository access in order to manage the data context object(s), but I don't grok why this is important either.

I'm 98.8% sure I'm missing something here, but from my readings I didn't see it. Of course I may just not be reading the right sources... :\

I think the term "repository" is commonly thought of in the way the "repository pattern" is described by the book Patterns of Enterprise Application Architecture by Martin Fowler.

A Repository mediates between the domain and data mapping layers, acting like an in-memory domain object collection. Client objects construct query specifications declaratively and submit them to Repository for satisfaction. Objects can be added to and removed from the Repository, as they can from a simple collection of objects, and the mapping code encapsulated by the Repository will carry out the appropriate operations behind the scenes.

On the surface, Entity Framework accomplishes all of this, and can be used as a simple form of a repository. However, there can be more to a repository than simply a data layer abstraction.

According to the book Domain Driven Design by Eric Evans, a repository has these advantages:

  • They present clients with a simple model for obtaining persistence objects and managing their life cycle
  • They decouple application and domain design from persistence technology, multiple database strategies, or even multiple data sources
  • They communicate design decisions about object access
  • They allow easy substitution of a dummy implementation, for unit testing (typically using an in-memory collection).

The first point roughly equates to the paragraph above, and it's easy to see that Entity Framework itself easily accomplishes it.

Some would argue that EF accomplishes the second point as well. But commonly EF is used simply to turn each database table into an EF entity, and pass it through to UI. It may be abstracting the mechanism of data access, but it's hardly abstracting away the relational data structure behind the scenes.

In simpler applications that are mostly data-oriented, this might not seem an important point. But as an application's domain rules / business logic become more complex, you may want to be more object-oriented. It's not uncommon that the relational structure of the data contains idiosyncrasies that aren't important to the business domain, but are side effects of the data storage. In such cases, it's not enough to abstract the persistence mechanism; you must also abstract the nature of the data structure itself. EF alone generally won't help you do that, but a repository layer will.

As for the third advantage, EF will do nothing (from a DDD perspective) to help. Typically DDD uses the repository not just to abstract the mechanism of data persistence, but also to provide constraints around how certain data can be accessed:

We also need no query access for persistent objects that are more convenient to find by traversal. For example, the address of a person could be requested from the Person object. And most important, any object internal to an AGGREGATE is prohibited from access except by traversal from the root.

In other words, you would not have an 'AddressRepository' just because you have an Address table in your database. If your design chooses to manage how Address objects are accessed in this way, the PersonRepository is where you would define and enforce that design choice.
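That design choice can be sketched in a few lines. All class names here are illustrative (a hypothetical Person/Address aggregate), not from the book:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Address objects are reachable only by traversal from the root.
class Address {
    final String city;
    Address(String city) { this.city = city; }
}

// Person is the aggregate root and owns its Address.
class Person {
    final int id;
    private final Address address;
    Person(int id, Address address) { this.id = id; this.address = address; }
    Address address() { return address; } // traversal from the root
}

// Deliberately no AddressRepository: the access constraint lives here.
class PersonRepository {
    private final Map<Integer, Person> store = new HashMap<>();
    void add(Person p) { store.put(p.id, p); }
    Optional<Person> byId(int id) { return Optional.ofNullable(store.get(id)); }
}
```

Because no repository hands out Address objects directly, every caller is forced through the root, which is exactly the constraint the quote describes.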

Also, a DDD repository would typically be where certain business concepts relating to sets of domain data are encapsulated. An OrderRepository may have a method called OutstandingOrdersForAccount which returns a specific subset of Orders. Or a Customer repository may contain a PreferredCustomerByPostalCode method.
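A minimal sketch of such a domain-vocabulary finder, with an in-memory list standing in for the real data store (the Order fields are assumptions for the example):

```java
import java.util.ArrayList;
import java.util.List;

class Order {
    final String account;
    final boolean outstanding;
    Order(String account, boolean outstanding) {
        this.account = account;
        this.outstanding = outstanding;
    }
}

// The repository speaks the domain's language instead of exposing raw queries.
class OrderRepository {
    private final List<Order> orders = new ArrayList<>();
    void add(Order o) { orders.add(o); }

    // Mirrors the OutstandingOrdersForAccount method mentioned above.
    List<Order> outstandingOrdersForAccount(String account) {
        List<Order> result = new ArrayList<>();
        for (Order o : orders) {
            if (o.outstanding && o.account.equals(account)) {
                result.add(o);
            }
        }
        return result;
    }
}
```

The point is not the filtering itself but where it lives: callers express the business concept, and the repository owns how it is satisfied.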

Entity Framework's DataContext classes don't lend themselves well to such functionality without the added repository abstraction layer. They do work well for what DDD calls Specifications, which can be simple boolean expressions sent in to a simple method that will evaluate the data against the expression and return a match.

As for the fourth advantage, while I'm sure there are certain strategies that might let one substitute for the DataContext, wrapping it in a repository makes it dead simple.

Regarding 'Unit of Work', here's what the DDD book has to say:

Leave transaction control to the client. Although the REPOSITORY will insert into and delete from the database, it will ordinarily not commit anything. It is tempting to commit after saving, for example, but the client presumably has the context to correctly initiate and commit units of work. Transaction management will be simpler if the REPOSITORY keeps its hands off.
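That guidance can be sketched like this. The UnitOfWork class and the string-based "changes" are purely illustrative stand-ins for real transaction machinery:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal unit of work: the repository enlists changes,
// but only the client decides when to commit.
class UnitOfWork {
    final List<String> pending = new ArrayList<>();
    final List<String> committed = new ArrayList<>();
    void register(String change) { pending.add(change); }
    void commit() {
        committed.addAll(pending);
        pending.clear();
    }
}

class CustomerRepository {
    private final UnitOfWork uow;
    CustomerRepository(UnitOfWork uow) { this.uow = uow; }

    // The repository inserts, but keeps its hands off the commit.
    void add(String customer) { uow.register("INSERT " + customer); }
}
```

The client (typically the application layer) constructs the unit of work, calls the repository as many times as the use case requires, and commits once, when it knows the whole operation succeeded.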

I'm a pretty young developer, and still in the emulation phase of my career. I have read a lot about topics like concurrency and using a unit of work to let your business layer control persistence transactions. I have even implemented some painful but functional code that applies these ideas. But I have never really seen a real-world example of a truly elegant implementation. I don't have many good TDD- and pattern-focused role models around me, so I'm forced to look to the outside world for guidance.

So, I'm looking for some stellar examples of open source enterprise app domain models. Preferably written in C#, but other languages would be fine as well, as long as they are good examples of clean, elegant domain model solutions.

Some of the things I would really like to see are elegant solutions for concurrency, business rules and object validation, transactions / unit of work, and semi-transparent logging mechanisms. I'm also curious to see what some of the real-world best practices are for exception handling in domain model code.

I know I could just start tearing into some open source projects at random and try to decipher the good from the bad, but I was hoping the expert community here would have some good ideas of projects to look at to streamline the effort.

Thanks for your time.


I'm not really interested in frameworks that make design and construction easier. My choice of framework, or whether to use a framework at all, is a necessary consideration, but it is entirely separate from my question here, unless those frameworks are themselves open source and very good examples to dig through.

What I am looking for is a project that 'got it right', solving a real-world problem with code that is flexible and easily maintainable, so that I can see with my own eyes, and understand, an example of how it should be done that is not a trivial 50-line tutorial example.

I liked the architecture of Oxite CMS a lot and learned a lot from that project. I use NHibernate for data access instead of LINQ to SQL, and it works well for me. Of course it's not a large-scale project, but it's a perfect start. CSLA does not follow the DDD paradigm.

The book ".NET Domain-Driven Design with C#" by Tim McCarthy (Wrox Press) mentioned above is a really good one.

The best book for understanding DDD is Domain-Driven Design: Tackling Complexity in the Heart of Software by Eric Evans. This book is the "bible" of good DDD design.

Beyond that, many resources can be found online, including more videos and articles from DDD experts such as Eric Evans, and there is a sample application of good DDD design, but unfortunately it's in Java :(

At the moment, we have to build an application which is based on a legacy one. The code for that old application should be thrown away and rewritten, but as it usually goes, instead of rewriting it we need to base something new on it. Recently, we decided to go the Domain-Driven Design path. So an anti-corruption layer could be a solution for our problems. As far as I understand, this way it should be possible to gradually rewrite the old application.

But -- I can't find any good example. I would appreciate ANY information.

From the DDD book (Domain-Driven Design: Tackling Complexity in the Heart of Software) by Eric Evans:

The public interface of the ANTICORRUPTION LAYER usually appears as a set of SERVICES, although occasionally it can take the form of an ENTITY.

and a bit later

One way of organizing the design of the ANTICORRUPTION LAYER is as a combination of FACADES, ADAPTERS (both from Gamma et al. 1995), and translators, along with the communication and transport mechanisms usually needed to talk between systems.

So, you might find examples by looking at the suggested adapter pattern and facade pattern.

I'll try to paraphrase what Eric Evans said: your anti-corruption layer will appear as services to the outside. So outside of the anti-corruption layer, the other layers will not know they are "speaking" with an anti-corruption layer. Inside the layer, you would use adapters and facades to wrap your legacy information sources.
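Here is a hedged sketch of that shape. LegacyCrm and its pipe-delimited record format are invented stand-ins for whatever your real legacy system exposes:

```java
// Stand-in for the legacy system and its awkward record format.
class LegacyCrm {
    String fetchRecord(int id) { return id + "|ACME Corp|NY"; }
}

// Clean concept on our side of the boundary.
class Customer {
    final String name;
    Customer(String name) { this.name = name; }
}

// Translator/adapter: converts the legacy representation into our model.
class LegacyCustomerAdapter {
    Customer toDomain(String pipeDelimitedRecord) {
        return new Customer(pipeDelimitedRecord.split("\\|")[1]);
    }
}

// The anticorruption layer's public face is a plain service;
// callers never see LegacyCrm or its record format.
class CustomerLookupService {
    private final LegacyCrm crm = new LegacyCrm();
    private final LegacyCustomerAdapter adapter = new LegacyCustomerAdapter();
    Customer find(int id) { return adapter.toDomain(crm.fetchRecord(id)); }
}
```

As the legacy system is gradually rewritten, only the adapter behind the service changes; the rest of the new application keeps calling CustomerLookupService.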

More information about the anti-corruption layer:

I'm unclear as to what the roles and responsibilities of the factory class are. I know that the factory class should be responsible for the creation of domain objects (aggregate roots) along with their associated entities and value objects.

But what is not clear to me is where the factory "layer" lies within a DDD architecture. Should the factory be calling directly into the repository to get its data, or into the service library?

Where does the factory fit into the following framework:
UI > App > Domain > Service > Data

Also, because the factory is the only place allowed for object creation, wouldn't you get circular references if you wanted to create your objects in your data and service layers?

If the role of the factory class is for object creation then what benefits does the service layer have?

I've asked a lot of questions and appreciate any response. What I'm lacking is a sample application which demonstrates how all the layers in a domain-driven design project come together... Is there anything out there?

But what is not clear to me is where the factory "layer" lies within a DDD architecture. Should the factory be calling directly into the repository to get its data, or into the service library?

The factory should be the one-stop shop to construct domain objects. Any other part of the code that needs to do this should use the factory.

Typically, there are at least three sources of data used as input to a factory for domain object construction: input from the UI, the results of queries from persistence, and domain-meaningful requests. So to answer your specific question: the repository would use the factory.

Here is an example. I am using Holub's Builder pattern here. Edit: disregard the use of this pattern. I've started realizing that it doesn't mix too well with DDD factories.

// domain layer
class Order {
    private Integer ID;
    private Customer owner;
    private List<Product> ordered;

    // can't be null, needs complicated rules to initialize
    private Product featured;

    // can't be null, needs complicated rules to initialize, not part of Order aggregate
    private Itinerary schedule;

    void importFrom(Importer importer) { ... }

    void exportTo(Exporter exporter) { ... }

    // ... insert business logic methods here ...

    interface Importer {
        Integer importID();
        Customer importOwner();
        Product importOrdered();
    }

    interface Exporter {
        void exportID(Integer id);
        void exportOwner(Customer owner);
        void exportOrdered(Product ordered);
    }
}

// domain layer
interface OrderEntryScreenExport { ... }

// UI
class UIScreen {
    public UIScreen(OrderEntryDTO dto) { ... }
}

// App Layer
class OrderEntryDTO implements OrderEntryScreenExport { ... }

Here is what the OrderFactory might look like:

interface OrderFactory {
    Order createWith(Customer owner, Product ordered);
    Order createFrom(OrderEntryScreenExport to);
    Order createFrom(List<String> resultSets);
}

The logic for the featured Product and the generation of the Itinerary go in the OrderFactory.

Now here is how the factory might be used in each instance.

In OrderRepository:

public List<Order> findAllMatching(Criteria someCriteria) {
    ResultSet rcds = this.db.execFindOrdersQueryWith(someCriteria.toString());
    List<List<String>> results = convertToStringList(rcds);

    List<Order> returnList = new ArrayList<Order>();

    for (List<String> row : results)
        returnList.add(this.orderFactory.createFrom(row));

    return returnList;
}

In your application layer:

public void submitOrder(OrderEntryDTO dto) {
    Order toBeSubmitted = this.orderFactory.createFrom(dto);

    // do other stuff, raise events, etc
}

Within your domain layer, a unit test perhaps:

Customer carl = customerRepo.findByName("Carl");
List<Product> weapons = productRepo.findAllByName("Ruger P-95 9mm");
Order weaponsForCarl = orderFactory.createWith(carl, weapons);


Where does the factory fit into the following framework: UI > App > Domain > Service > Data


Also, because the factory is the only place allowed for object creation, wouldn't you get circular references if you wanted to create your objects in your data and service layers?

In my example, all dependencies flow from top to bottom. I used the Dependency Inversion Principle (PDF link) to avoid the problem you speak of.

If the role of the factory class is for object creation then what benefits does the service layer have?

When you have logic that doesn't fit into any single domain object OR you have an algorithm that involves orchestrating multiple domain objects, use a service. The service would encapsulate any logic that doesn't fit in anything else and delegate to domain objects where it does fit.

In the example I scribbled here, I imagine that coming up with an Itinerary for the Order would involve multiple domain objects. The OrderFactory could delegate to such a service.

BTW, the hierarchy you described should probably be UI > App > Domain Services > Domain > Infrastructure (Data)

I've asked a lot of questions and appreciate any response. What I'm lacking is a sample application which demonstrates how all the layers in a domain-driven design project come together... Is there anything out there?

Applying Domain Driven Design and Patterns by Jimmy Nilsson is a great complement to Eric Evans' Domain-Driven Design. It has lots of code examples, though I don't know if there is an emphasis on layering. Layering can be tricky and is almost a topic separate from DDD.

In the Evans book, there is a very small example of layering you might want to check out. Layering is an enterprise pattern, and Martin Fowler wrote Patterns of Enterprise Application Architecture, which you might find useful too.

public class Student
{
    public string Name { get; set; }
    public int ID { get; set; }
}

var st1 = new Student
{
    ID = 20,
    Name = "ligaoren",
};

var st2 = new Student
{
    ID = 20,
    Name = "ligaoren",
};

Assert.AreEqual<Student>(st1, st2); // How to compare two objects in a unit test?

How do you compare two collections in a unit test?

What you are looking for is what in xUnit Test Patterns is called Test-Specific Equality.

While you can sometimes choose to override the Equals method, this may lead to Equality Pollution, because the implementation you need for the test may not be the correct one for the type in general.

For example, Domain-Driven Design distinguishes between Entities and Value Objects, and those have vastly different equality semantics.

When this is the case, you can write a custom comparison for the type in question.

If you get tired doing this, AutoFixture's Likeness class offers general-purpose Test-Specific Equality. With your Student class, this would allow you to write a test like this:

public void VerifyThatStudentAreEqual()
{
    Student st1 = new Student();
    st1.ID = 20;
    st1.Name = "ligaoren";

    Student st2 = new Student();
    st2.ID = 20;
    st2.Name = "ligaoren";

    var expectedStudent = new Likeness<Student, Student>(st1);

    Assert.AreEqual(expectedStudent, st2);
}

This doesn't require you to override Equals on Student.

Likeness performs a semantic comparison, so it can also compare two different types as long as they are semantically similar.

I'm working with a domain model and was thinking about the various ways that we have to implement this two methods in .NET. What is your preferred strategy?

This is my current implementation:

    public override bool Equals(object obj)
    {
        var newObj = obj as MyClass;

        if (null != newObj)
            return this.GetHashCode() == newObj.GetHashCode();

        return base.Equals(obj);
    }

    // Since this is an entity I can use its Id.
    // When I don't have an Id I usually make a composite key of the properties.
    public override int GetHashCode()
    {
        return String.Format("MyClass{0}", this.Id.ToString()).GetHashCode();
    }

Domain-Driven Design makes the distinction between Entities and Value Objects. This is a good distinction to observe since it guides how you implement Equals.

Entities are equal if their IDs equal each other.

Value Objects are equal if all their (important) constituent elements are equal to each other.

In any case, the implementation of GetHashCode should base itself on the same values that are used to determine equality. In other words, for Entities, the hash code should be calculated directly from the ID, whereas for Value Objects it should be calculated from all the constituent values.
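The two equality styles, expressed in Java for brevity (Customer and Money are hypothetical examples, not from the question):

```java
import java.util.Objects;

// Entity: identity is the ID and nothing else.
class Customer {
    final int id;
    String name; // mutable state does not affect identity
    Customer(int id, String name) { this.id = id; this.name = name; }

    @Override public boolean equals(Object o) {
        return o instanceof Customer && ((Customer) o).id == id;
    }
    // Hash code derives from the same value that drives equality: the ID.
    @Override public int hashCode() { return Integer.hashCode(id); }
}

// Value Object: equality over all constituent values.
class Money {
    final long amount;
    final String currency;
    Money(long amount, String currency) { this.amount = amount; this.currency = currency; }

    @Override public boolean equals(Object o) {
        if (!(o instanceof Money)) return false;
        Money m = (Money) o;
        return m.amount == amount && m.currency.equals(currency);
    }
    // Hash code covers every constituent value, matching equals.
    @Override public int hashCode() { return Objects.hash(amount, currency); }
}
```

Note that this also avoids the pitfall in the question's code: equality is decided by comparing the identifying values themselves, not by comparing hash codes, which can collide for unequal objects.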

I'm trying to understand the concepts behind DDD, but I find it hard to understand just by reading books as they tend to discuss the topic in a rather abstract way. I would like to see some good implementations of DDD in code, preferably in C#.

Are there any good examples of projects practicing DDD in the open source world?

A good read is Jimmy Nilsson's book (and blog, for that matter), Applying Domain-Driven Design and Patterns.

It's a mixture of Evans's and Fowler's books: Domain-Driven Design (Evans) and Patterns of Enterprise Application Architecture (Fowler).

Code Camp Server, Jeffrey Palermo's sample code for the book ASP.NET MVC in Action is open source and uses DDD.

(Same as my answer in Good Domain Driven Design samples)

I just don't understand this: what does IEquatable buy you, exactly?

The only reason I can see it being useful is when creating a generic type and forcing users to implement and write a good Equals method.

What am I missing?

I use IEquatable<T> quite a lot, although from a pure technical perspective, it doesn't really give me any particular benefits. Overriding System.Object.Equals can provide you with the same functionality.

However, I like the explicitness of implementing IEquatable<T>. I use the concepts of Entities and Value Objects from Domain-Driven Design quite a lot, and use IEquatable<T> particularly for Value Objects, simply because it signals that a type has well-defined equality.

I've inherited a project where the class diagrams closely resemble a spider web on a plate of spaghetti. I've written about 300 unit tests in the past two months to give myself a safety net covering the main executable.

I have my library of agile development books within reach at any given moment:

  • Working Effectively with Legacy Code
  • Refactoring
  • Code Complete
  • Agile Principles Patterns and Practices in C#
  • etc.

The problem is everything I touch seems to break something else. The UI classes have business logic and database code mixed in. There are mutual dependencies between a number of classes. There's a couple of god classes that break every time I change any of the other classes. There's also a mutant singleton/utility class with about half instance methods and half static methods (though ironically the static methods rely on the instance and the instance methods don't).

My predecessors even thought it would be clever to use all the datasets backwards. Every database update is sent directly to the db server as parameters in a stored procedure, then the datasets are manually refreshed so the UI will display the most recent changes.

I'm sometimes tempted to think they used some form of weak obfuscation for either job security or as a last farewell before handing the code over.

Is there any good resources for detangling this mess? The books I have are helpful but only seem to cover half the scenarios I'm running into.

Ever since I started using .NET, I've just been creating Helper classes or Partial classes to keep code located and contained in their own little containers, etc.

What I'm looking to know is the best practices for making ones code as clean and polished as it possibly could be.

Obviously clean code is subjective, but I'm talking about when to use things (not how to use them) such as polymorphism, inheritance, interfaces, classes and how to design classes more appropriately (to make them more useful, not just say 'DatabaseHelper', as some considered this bad practice in the code smells wiki).

Are there any resources out there that could possibly help with this kind of decision making?

Bear in mind that I haven't even started a CS or software engineering course, and that teaching resources are fairly limited in real life.

Working Effectively with Legacy Code is one of the best books I have seen on this subject.

Don't be put off by the title of the book. Rather than treating refactoring as a formal concept (which has its place), this book has lots and lots of simple "why didn't I think of that" tips. Things like "go through a class and remove any methods not directly related to that class and put them in a different one".

e.g. You have a grid and some code to persist the layout of that grid to file. You can probably safely move the layout persisting code out to a different class.

A real eye-opener to me was Refactoring: Improving the Design of Existing Code:

With proper training a skilled system designer can take a bad design and rework it into well-designed, robust code. In this book, Martin Fowler shows you where opportunities for refactoring typically can be found, and how to go about reworking a bad design into a good one.


It helped me to efficiently and systematically refactor code. Also it helped me a lot in discussions with other developers, when their holy code has to be changed ...

I'd recommend Domain Driven Design. I think both the YAGNI and AlwaysRefactor principles are too simplistic. The age-old question on the issue is: do I refactor "if (someArgument == someValue)" into a function or leave it inline?

There is no yes or no answer. DDD advises refactoring it if the test represents a business rule. The refactoring is not (only) about reuse but about making the intentions clear.
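A small illustration of that idea. The Order class and the discount rule are hypothetical; the point is only that the extracted method names the business rule:

```java
class Order {
    final int itemCount;
    final boolean rush;
    Order(int itemCount, boolean rush) {
        this.itemCount = itemCount;
        this.rush = rush;
    }

    // Before: callers wrote `if (order.itemCount > 10 && !order.rush)` inline.
    // After: the business rule has a name, so the intent survives in the code
    // even if the rule is only used in one place.
    boolean qualifiesForBulkDiscount() {
        return itemCount > 10 && !rush;
    }
}
```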

Check out Martin Fowler's comments and book on Refactoring

I have a question regarding DDD and the repository pattern.

Say I have a Customer repository for the Customer aggregate root. The Get & Find methods return the fully populated aggregate, which includes objects like Address, etc. All good. But when the user is searching for a customer in the UI, I just require a 'summary' of the aggregate - just a flat object with summarised information.

One way I could deal with this is to call the find method on the repository as normal, and then in the application layer, map each customer aggregate to a CustomerSearchResult / CustomerInfo DTO, and send them back to the client.

But my problem with this is performance; each Customer aggregate may require multiple queries to populate all of the associations. So if my search criteria matched 50 customers, that's quite a hit on the DB for potentially retrieving data I'm not even going to need.

The other issue is that I may wish to include summarised data about the customer that is outside of the Customer's aggregate root boundary, such as the date of the last order made, for example. Order has its own aggregate, and therefore to get the customer's order information I would have to call the OrderRepository, also degrading performance.

So now I think I'm left with two options:

  1. Add an additional Find method to the CustomerRepository which returns a list of these summary objects by doing one efficient query.

  2. Create a purpose built readonly CustomerInfoRepository, that just has the find method described in 1.

But both of these feel like I'm going against the principles of DDD. My repositories inherit from a generic base: Repository<T> where T : IAggregateRoot. These summary info objects are not aggregates, and are of a different type to T, so really #1 goes against the design.

Perhaps for #2 I would create an abstract SearchRepository without the IAggregateRoot constraint?

There are many similar scenarios in my domain.

How would you implement this scenario?

Thanks, Dave


After reading Theo's answer, I think I will go with option #2 and create a specialised SearchRepository within my infrastructure geared towards these scenarios. The application layer (WCF services) can then call these repositories that just populate the summary DTOs directly rather than mapping domain entities to DTOs.
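Such a read-only search repository might be shaped roughly like this (sketched in Java; the class names follow the question's, and the in-memory row list stands in for the single efficient SQL query):

```java
import java.util.ArrayList;
import java.util.List;

// Flat read model: exactly what the search screen needs, nothing more.
class CustomerInfo {
    final String name;
    final String lastOrderDate;
    CustomerInfo(String name, String lastOrderDate) {
        this.name = name;
        this.lastOrderDate = lastOrderDate;
    }
}

// Read-only search repository: it bypasses the aggregates entirely and
// populates DTOs straight from query rows, so one query can also pull in
// cross-aggregate data such as the last order date.
class CustomerSearchRepository {
    private final List<String[]> rows; // each row: { name, lastOrderDate }
    CustomerSearchRepository(List<String[]> rows) { this.rows = rows; }

    List<CustomerInfo> findByNameContaining(String term) {
        List<CustomerInfo> out = new ArrayList<>();
        for (String[] r : rows) {
            if (r[0].contains(term)) {
                out.add(new CustomerInfo(r[0], r[1]));
            }
        }
        return out;
    }
}
```

Because it never materialises domain objects, there is no mapping step and no N+1 association loading for search screens.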

**** Update 2 ****

Although I asked this over a year ago, I thought I'd just add that I've since discovered CQRS, which is aimed at solving this exact problem. Udi Dahan and Greg Young have written a lot about it. If you are creating a distributed application with DDD, CQRS is for you!

I would:

  1. Return a different object that represents a view of my object for display, e.g. CustomerInfo.
  2. Return a DataTable. Often a generic container is the easiest and best way to go.

If the T in your generic base repository is a Customer, then I think you are mis-applying the concept of aggregate roots, though I'm not a strict Evansangelist. I would design a repository for Customer that returned any data that logically or comfortably groups with Customer, including DataTables or read-only objects that are views of Customer data.

I'm laying out a new data layer using EF 4.1 Code First, migrating from an older homebrew data layer.

I have set up two assemblies, one for my context and one for all the POCO code first classes.

I have some business logic, for instance, a query against one table (or a few tables) that is used in several different places. Where should I put it?

It can't go in a POCO class because it joins a couple tables and so needs a context. It could go in the context, but that context would become bloated with hundreds of disorganized queries. Is there a common pattern or arrangement for all the business logic?

If you use EF directly in business methods (domain layer services and application layer services), then you are not isolating the domain model layer from the infrastructure technology (EF in this case). That is one of the DDD principles. You should probably have one repository per aggregate.

For more info about DDD, see:

Eric Evans' book:


I've almost finished my Data Mapper, but now I'm at the point where it comes to relationships.

I will try to illustrate my ideas here. I wasn't able to find good articles / informations on this topic, so maybe I'm re-inventing the wheel (for sure I am, I could just use a big framework - but I want to learn by doing it).

1:1 Relationships

First, let's look at 1:1 relationships. In general, when we've got a domain class called "Company" and one called "Address", our Company class will have something like an address_id. Let's say in most cases we just display a list of companies, and the address is only needed when someone looks at the details. In that case, my Data Mapper (CompanyDataMapper) loads lazily, meaning it will just fetch that address_id from the database, but will not do a join to get the address data as well.

In general, I have a getter method for every relationship. So in this case, there's a getAddress(Company companyObject) method. It takes a company object, looks at its address property and, if it's NULL, fetches the corresponding Address object from the database using the mapper class for that Address object (AddressDataMapper), then assigns that address object to the address property of the specified company object.

Important: Is a Data Mapper allowed to use another Data Mapper?
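The lazy getAddress flow just described can be sketched like this (Java instead of the question's language; the in-memory map stands in for the address table, and yes, one mapper delegating to another is a common and accepted arrangement):

```java
import java.util.HashMap;
import java.util.Map;

class Address {
    final String city;
    Address(String city) { this.city = city; }
}

class Company {
    final int addressId; // only the foreign key is loaded eagerly
    Address address;     // stays null until someone asks for it
    Company(int addressId) { this.addressId = addressId; }
}

class AddressDataMapper {
    private final Map<Integer, Address> table; // stand-in for the address table
    AddressDataMapper(Map<Integer, Address> table) { this.table = table; }
    Address findById(int id) { return table.get(id); }
}

// CompanyDataMapper fetches the Address via AddressDataMapper,
// but only on first access, and caches it on the company object.
class CompanyDataMapper {
    private final AddressDataMapper addresses;
    CompanyDataMapper(AddressDataMapper addresses) { this.addresses = addresses; }

    Address getAddress(Company c) {
        if (c.address == null) { // not loaded yet: fetch and cache it
            c.address = addresses.findById(c.addressId);
        }
        return c.address;
    }
}
```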

Let's say in most cases you need both the company object AND the address object, because you always display them in a list together. In this case, the CompanyDataMapper not only fetches company objects, but does an SQL query with a JOIN to also get all the fields of the address object. Finally, it iterates over the record set and feeds new objects with their corresponding values, assigning the address object to the company object.

Sounds simple, so far.

1:n Relationships

How about these? The only difference from 1:1 is that a Company may have multiple Address objects. Let's have a look: when we're mostly only interested in the Company, the Data Mapper would just set the addresses property of the company object to NULL. The addresses property is an array which may reference none, one or multiple addresses. But we don't know yet, since we load lazily, so it's just NULL. But what if we needed all the addresses in most cases as well? What if we displayed a big list with all companies together with all their addresses? In this case, things start to get really ugly. First, we can't join the address table fifty times to get every address object (I strongly believe that's impossible, and even if it weren't, performance would be terrible). So, when we think this further down the road, it seems impossible NOT to load lazily in this case.

Important: Is this true? Must I send out 100 queries to get 100 address objects if I have 10 companies with 10 addresses each?
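It need not be one query per object. A common data mapper tactic is to collect the company IDs, load every address in a single batched query (e.g. `... WHERE company_id IN (1, 2, ...)`), and then group the rows in memory: one round trip instead of one hundred. A sketch, with an in-memory list standing in for the query's result set:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// One row of the single batched query's result set.
class AddressRow {
    final int companyId;
    final String street;
    AddressRow(int companyId, String street) {
        this.companyId = companyId;
        this.street = street;
    }
}

class AddressBatchLoader {
    // After one round trip (WHERE company_id IN (...)),
    // group the rows by company in memory so each Company
    // can be handed its complete address list.
    static Map<Integer, List<AddressRow>> byCompany(List<AddressRow> rows) {
        Map<Integer, List<AddressRow>> grouped = new HashMap<>();
        for (AddressRow r : rows) {
            grouped.computeIfAbsent(r.companyId, k -> new ArrayList<>()).add(r);
        }
        return grouped;
    }
}
```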

m:n Relationships

Let's say an address object only contains the country, state, city, road and house number. But one house could be a big business tower with lots of companies in it, like one of those modern office buildings where anyone can rent a small room and show off that tower on its website. So: many companies can share the same address.

I have no plans yet to deal with that kind of problem.

Important: Probably it's not a bigger problem than the 1:n Relationships?

If anyone knows a good resource that goes into detail about solving / implementing this, I would be happy about a link!

I am looking forward to any answers you'll get on this topic, but in the meantime why not just hop over to Amazon (or your local books dealer) and finally buy

These books contain the original patterns you have been pointed at in various of your questions and are considered reference works in design patterns and software architecture.

I've seen a trend to move business logic out of the data access layer (stored procedures, LINQ, etc.) and into a business logic component layer (like C# objects).

Is this considered the "right" way to do things these days? If so, does this mean that some database developer positions may be eliminated in favor of more middle-tier coding positions? (i.e. more C# code rather than long stored procedures.)

If the application is small with a short lifetime, then it's not worth putting time into abstracting the concerns into layers. In larger, long-lived applications, your logic/business rules should not be coupled to data access. It creates a maintenance nightmare as the application grows.

Moving concerns into a common layer, also known as separation of concerns, has been around for a while:


The term separation of concerns was probably coined by Edsger W. Dijkstra in his 1974 paper "On the role of scientific thought".

For application architecture, a great book to start with is Domain Driven Design. Eric Evans breaks down the different layers of the application in detail. He also discusses the database impedance mismatch and what he calls a "Bounded Context".

Bounded Context

A blog is a system that displays posts from newest to oldest so that people can comment on them. Some would view this as one system, or one "Bounded Context." If you subscribe to DDD, one would say there are two systems, or two "Bounded Contexts," in a blog: a commenting system and a publication system. DDD argues that each system is independent (of course there will be interaction between the two) and should be modeled as such. DDD gives concrete guidance on how to separate the concerns into the appropriate layers.

Other resources that might interest you:

Until I had a chance to experience the Big Ball of Mud or spaghetti code, I had a hard time understanding why application architecture was so important...

The right way to do things will always be dependent on the size, availability requirements and lifespan of your application. To use stored procs or not to use stored procs... Tools such as NHibernate and LINQ to SQL are great for small to mid-size projects. To be clear, I've never used NHibernate or LINQ to SQL on a large application, but my gut feeling is that an application will reach a size where optimizations will need to be done on the database server via views, stored procedures, etc. to keep the application performant. To do this work, developers with both development and database skills will be needed.

What are the general guidelines and best practices to keep in mind while designing Java applications (from simple console apps to J2EE apps)?


I recently completed the Java programming tutorial from Sun and practised core Java (I have previous programming experience). Now I understand the basics of inheritance, abstraction, polymorphism and encapsulation.

Now I am writing Java code without much difficulty, but I am not sure about application design. This is my main problem: "DESIGNING" the application. Say I am given a task to create an application in Java. What should I start with? How should I think about it? Are there any formal/informal guidelines I should follow while developing class hierarchies? I am really confused (abstract class or interface or subclass..?). Should I model everything before writing code?

It would be very useful for people like me to have a set of general guidelines/best practices which we can follow when starting to develop a new Java application.

Please provide some guidelines/thoughts/books/resources/tools I should read or use.

Thanks in advance, Scott

It is difficult to give really general advice, as there are so many different Java apps in different domains. However, one absolutely recommended book is Domain Driven Design by Eric Evans. See also Wikipedia for a short intro to it.

General advice:

  • don't try to design everything up front - do a reasonably good design which enables you to start coding, then refactor as your understanding of the problem domain and the implementation deepens
  • try to divide difficult problems into smaller parts/steps/modules which you can tackle one by one
  • try to think in terms of objects with well defined responsibilities, which (more or less) model the problem domain and cooperate to solve a problem / handle a task
  • becoming good at design requires practice, first and foremost; don't be afraid to make mistakes. However, when you do, analyze them and learn from them as much as you can
  • learn design patterns, but don't be overzealous - use them only when they really solve a problem and make your code cleaner

I am currently in the process of searching for a rules engine that works in .NET. Our logic is pretty simple, +, -, *, /, and, or, basic order of operations stuff. However we are going to need to store this information in the database and then generate the rules file when a new version is pulled from the database. So the common UI editors are going to be useless to us, unless one of them has a web version UI editor.

So my question is: given what I have said, which rules engine is going to be the best for us in terms of programmatic configuration and integration with ASP.NET?

My experience with WWF has been pretty bad. It's great for developing a workflow that you know is going to exist (such as "this document goes to a person's manager, then to HR; if it's invalid it goes back to the submitter") but a real pain if you want dynamic configuration. As you can tell, we tried to use it for a fully configurable system, something that BizTalk does really well, and it looks like MS isn't keen on letting developers replace BizTalk so cheaply.

We also looked at using the Acumen rules engine and tools which looked like a great fit for what we needed though we never got the time to remove WWF and replace it.

I would strongly recommend that if your rules are going to be relatively simple, you either use a rules engine you have the source code to or write it yourself. Justin Etheredge has a two-part article about performing domain validation through custom rules using the pattern identified in Domain-Driven Design (Evans).

I've implemented a similar system in my current project following the same guidelines and I serialize/deserialize rules from the database. I will have to take a look at Drools.NET.
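A hand-rolled engine for rules this simple can be little more than composable specification-style objects. Here is a minimal, hypothetical sketch (all names below are mine, not from any of the products mentioned):

```csharp
using System;

// A rule is just a predicate over a candidate value.
public interface IRule<T>
{
    bool IsSatisfiedBy(T candidate);
}

// Composite rule: both sub-rules must pass.
public class AndRule<T> : IRule<T>
{
    private readonly IRule<T> left;
    private readonly IRule<T> right;

    public AndRule(IRule<T> left, IRule<T> right)
    {
        this.left = left;
        this.right = right;
    }

    public bool IsSatisfiedBy(T candidate)
    {
        return this.left.IsSatisfiedBy(candidate) && this.right.IsSatisfiedBy(candidate);
    }
}

// Leaf rule with a single stored parameter, easy to persist as a database row.
public class MinimumAmountRule : IRule<decimal>
{
    private readonly decimal minimum;

    public MinimumAmountRule(decimal minimum)
    {
        this.minimum = minimum;
    }

    public bool IsSatisfiedBy(decimal candidate)
    {
        return candidate >= this.minimum;
    }
}
```

Because each rule is just a type name plus its constructor parameters, the whole rule tree can be stored as rows in the database and rebuilt when a new version is pulled, which is the serialize/deserialize approach described above.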

I recently read an interesting comment on an OOP related question in which one user objected to creating a "Manager" class:

Please remove the word manager from your vocabulary when talking about class names. The name of the class should be descriptive of its purpose. Manager is just another word for dumping ground. Any functionality will fit there. The word has been the cause of many extremely bad designs.

This comment embodies my struggle to become a good object-oriented developer. I have been writing procedural code for a long time at an organization with only procedural coders. It seems like the main strategy behind the relatively little OO code we produce is to break the problem down into classes that are easily identifiable as discrete units and then put the leftover/generalized bits in a "Manager" class.

How can I break my procedural habits (like the Manager class)? Most OO articles/books, etc. use examples of problems that are inherently easy to transform into object groups (e.g., Vehicle -> Car) and thus do not provide much guidance for breaking down more complex systems.

Becoming good at OO takes years of practice and study of good OO code, ideally with a mentor. Remember that OO is just one means to an end. That being said, here are some general guidelines that work for me:

  • Favor composition over inheritance. Read and re-read the first chapter of the GoF book.
  • Obey the Law of Demeter ("tell, don't ask")
  • Try to use inheritance only to achieve polymorphism. When you extend one class from another, do so with the idea that you'll be invoking the behavior of that class through a reference to the base class. ALL the public methods of the base class should make sense for the subclass.
  • Don't get hung up on modeling. Build a working prototype to inform your design.
  • Embrace refactoring. Read the first few chapters of Fowler's book.

Reading and then practicing OO principles is what works for me. Head First Object-Oriented Analysis & Design walks you through examples to build a solution that is OO, and then shows ways to make the solution better.

My eureka moment for understanding object-oriented design was when I read Eric Evans' book "Domain-Driven Design: Tackling Complexity in the Heart of Software". Or the "Domain Driven Design Quickly" mini-book (which is available online as a free PDF) if you are cheap or impatient. :)

Any time you have a "Manager" class or any static singleton instances, you are probably building a procedural design.

Despite having studied Domain-Driven Design for a long time now, there are still some basics that I simply can't figure out.

It seems that every time I try to design a rich domain layer, I still need a lot of Domain Services or a thick Application Layer, and I end up with a bunch of near-anemic domain entities with no real logic in them, apart from "GetTotalAmount" and the like. The key issue is that entities aren't aware of external stuff, and it's bad practice to inject anything into entities.

Let me give some examples:

1. A user signs up for a service. The user is persisted in the database, a file is generated and saved (needed for the user account), and a confirmation email is sent.

The example with the confirmation email has been discussed heavily in other threads, but with no real conclusion. Some suggest putting the logic in an application service that gets an EmailService and FileService injected from the infrastructure layer. But then I would have business logic outside of the domain, right? Others suggest creating a domain service that gets the infrastructure services injected - but in that case I would need to have the interfaces of the infrastructure services inside the domain layer (IEmailService and IFileService) which doesn't look too good either (because the domain layer cannot reference the infrastructure layer). And others suggest implementing Udi Dahan's Domain Events and then having the EmailService and FileService subscribe to those events. But that seems like a very loose implementation - and what happens if the services fail? Please let me know what you think is the right solution here.

2. A song is purchased from a digital music store. The shopping cart is emptied. The purchase is persisted. The payment service is called. An email confirmation is sent.

Ok, this might be related to the first example. The question here is, who is responsible for orchestrating this transaction? Of course I could put everything in the MVC controller with injected services. But if I want real DDD all business logic should be in the domain. But which entity should have the "Purchase" method? Song.Purchase()? Order.Purchase()? OrderProcessor.Purchase() (domain service)? ShoppingCartService.Purchase() (application service?)

This is a case where I think it's very hard to put real business logic inside the domain entities. If it's not good practice to inject anything into the entities, how can they ever do anything other than check their own (and their aggregate's) state?

I hope these examples are clear enough to show the issues I'm dealing with.

A big part of your question relates to object-oriented design and responsibility assignment; take a look at the GRASP patterns. You can also benefit from object-oriented design books; I recommend the following:

Applying UML and Patterns

A user signs up for a service. The user is persisted in the database, a file is generated and saved (needed for the user account), and a confirmation email is sent.

You can apply the Dependency Inversion Principle here. Define a domain interface like this:

public interface ICanSendConfirmationEmail
{
    void SendConfirmationEmail(EmailAddress address /* , ... */);
}

or, named even closer to the business intent:

public interface ICanNotifyUserOfSuccessfulRegistration
{
    void NotifyUserOfSuccessfulRegistration(EmailAddress address /* , ... */);
}

The interface can be used by other domain classes. Implement it in the infrastructure layer, using real SMTP classes, and inject that implementation at application startup. This way you have stated the business intent in domain code, and your domain logic has no direct reference to the SMTP infrastructure. The key here is the name of the interface: it should be based on the Ubiquitous Language.
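A minimal end-to-end sketch of this wiring, using a plain string for the address so it stands alone (SmtpConfirmationEmailSender and RegistrationService are hypothetical names, not from the answer):

```csharp
// Domain layer: the business intent, named in the Ubiquitous Language.
public interface ICanSendConfirmationEmail
{
    void SendConfirmationEmail(string address);
}

// Infrastructure layer: the real SMTP-backed implementation lives here,
// so the domain never references SMTP classes directly.
public class SmtpConfirmationEmailSender : ICanSendConfirmationEmail
{
    public void SendConfirmationEmail(string address)
    {
        // real SmtpClient code would go here
    }
}

// Domain/application code depends only on the interface,
// which is injected at application startup.
public class RegistrationService
{
    private readonly ICanSendConfirmationEmail emailSender;

    public RegistrationService(ICanSendConfirmationEmail emailSender)
    {
        this.emailSender = emailSender;
    }

    public void SignUp(string emailAddress)
    {
        // ...persist the user, generate and save the account file...
        this.emailSender.SendConfirmationEmail(emailAddress);
    }
}
```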

A song is purchased from a digital music store. The shopping cart is emptied. The purchase is persisted. The payment service is called. An email confirmation is sent. Ok, this might be related to the first example. The question here is, who is responsible for orchestrating this transaction?

Use OOP best practices to assign responsibilities (GRASP and SOLID). Unit testing and refactoring will give you a design feedback. Orchestration itself can be part of thin Application Layer. From DDD Layered Architecture:

Application Layer: Defines the jobs the software is supposed to do and directs the expressive domain objects to work out problems. The tasks this layer is responsible for are meaningful to the business or necessary for interaction with the application layers of other systems.

This layer is kept thin. It does not contain business rules or knowledge, but only coordinates tasks and delegates work to collaborations of domain objects in the next layer down. It does not have state reflecting the business situation, but it can have state that reflects the progress of a task for the user or the program.
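The orchestration for the music-store example above could be sketched as such a thin application service. Every name below is hypothetical; it shows one possible shape, with the business behavior (building the order, emptying the cart) kept in the domain object:

```csharp
using System.Collections.Generic;
using System.Linq;

public interface IPaymentGateway { void Charge(string customer, decimal amount); }
public interface IOrderStore { void Save(PurchaseOrder order); }
public interface IConfirmationMailer { void Send(string emailAddress); }

public class PurchaseOrder
{
    public readonly decimal Total;
    public PurchaseOrder(decimal total) { this.Total = total; }
}

public class ShoppingCart
{
    private readonly List<decimal> prices = new List<decimal>();

    public int ItemCount { get { return this.prices.Count; } }

    public void Add(decimal price) { this.prices.Add(price); }

    // Domain behavior: produces the order and empties the cart.
    public PurchaseOrder Checkout()
    {
        var order = new PurchaseOrder(this.prices.Sum());
        this.prices.Clear();
        return order;
    }
}

// The thin application layer: coordinates tasks, holds no business rules.
public class CheckoutService
{
    private readonly IPaymentGateway payments;
    private readonly IOrderStore orders;
    private readonly IConfirmationMailer mailer;

    public CheckoutService(IPaymentGateway payments, IOrderStore orders,
                           IConfirmationMailer mailer)
    {
        this.payments = payments;
        this.orders = orders;
        this.mailer = mailer;
    }

    public void Purchase(ShoppingCart cart, string customer, string email)
    {
        PurchaseOrder order = cart.Checkout();
        this.payments.Charge(customer, order.Total);
        this.orders.Save(order);
        this.mailer.Send(email);
    }
}
```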

Our current O/RM tool does not really allow for rich domain models, so we are forced to utilize anemic (DTO) entities everywhere. This has worked fine, but I continue to struggle with where to put basic object-based business logic and calculated fields.

Current layers:

  • Presentation
  • Service
  • Repository
  • Data/Entity

Our repository layer has most of the basic fetch/validate/save logic, although the service layer does a lot of the more complex validation & saving (since save operations also do logging, checking of permissions, etc). The problem is where to put code like this:

Decimal CalculateTotal(LineItemEntity li)
{
    return li.Quantity * li.Price;
}

Decimal CalculateOrderTotal(OrderEntity order)
{
    Decimal orderTotal = 0;
    foreach (LineItemEntity li in order.LineItems)
    {
        orderTotal += CalculateTotal(li);
    }
    return orderTotal;
}

Any thoughts?

I'm tempted to answer Mu, but I'd like to elaborate. In summary: Don't let your choice of ORM dictate how you define your Domain Model.

The purpose of the Domain Model is to be a rich object-oriented API that models the domain. To follow true Domain-Driven Design, the Domain Model must be defined unconstrained by technology.

In other words, the Domain Model comes first, and all technology-specific implementations are subsequently addressed by mappers that map between the Domain Model and the technology in question. This will often include both ways: to the Data Access Layer where the choice of ORM may introduce constraints, and to the UI layer where the UI technology imposes additional requirements.

If the implementation is extraordinarily far from the Domain Model, we talk about an Anti-Corruption Layer.

In your case, what you call an Anemic Domain Model is actually the Data Access Layer. Your best recourse would be to define Repositories that model access to your Entities in a technology-neutral way.

As an example, let's look at your Order Entity. Modeling an Order unconstrained by technology might lead us to something like this:

public class Order
{
    // constructors and properties

    public decimal CalculateTotal()
    {
        return (from li in this.LineItems
                select li.CalculateTotal()).Sum();
    }
}

Notice that this is a Plain Old CLR Object (POCO) and is thus unconstrained by technology. Now the question is how to get this in and out of your data store.

This should be done via an abstract IOrderRepository:

public interface IOrderRepository
{
    Order SelectSingle(int id);

    void Insert(Order order);

    void Update(Order order);

    void Delete(int id);

    // more, specialized methods can go here if need be
}

You can now implement IOrderRepository using your ORM of choice. However, some ORMs (such as Microsoft's Entity Framework) require you to derive the data classes from certain base classes, so this doesn't fit at all with Domain Objects as POCOs. Therefore, mapping is required.

The important thing to realize is that you may have strongly typed data classes that semantically resemble your Domain Entities. However, this is a pure implementation detail, so don't get confused by that. An Order class that derives from e.g. EntityObject is not a Domain Class - it's an implementation detail, so when you implement IOrderRepository, you need to map the Order data class to the Order domain class.

This may be tedious work, but you can use AutoMapper to do it for you.
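For instance, the mapping setup might look like the following (assuming a recent AutoMapper version; older versions used the static Mapper.CreateMap API instead):

```csharp
using AutoMapper;

// Configure a one-way map from the ORM's data class to the domain class.
var config = new MapperConfiguration(cfg =>
{
    cfg.CreateMap<OrderEntity, Order>();
});

// The repository implementation can take this IMapper as a dependency.
IMapper mapper = config.CreateMapper();
```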

Here's how an implementation of the SelectSingle method might look:

public Order SelectSingle(int id)
{
    var oe = (from o in this.objectContext.Orders
              where o.Id == id
              select o).First();
    return this.mapper.Map<OrderEntity, Order>(oe);
}

From what you say it may be that you’re thinking too rigidly about your Service and Repository layers. It sounds like you don’t want your Presentation layer to have a direct dependency on the Repository layer and to achieve this you are duplicating methods from your Repositories (your pass-through methods) in the Service layer.

I would question that. You could relax that constraint and allow both to be used within your Presentation layer, making your life simpler for a start. Maybe ask yourself what you're achieving by hiding the Repositories like that. You're already abstracting the persistence and querying IMPLEMENTATION with them. This is great and what they are designed for. It seems as though you're trying to create a service layer that hides the fact that your entities are persisted at all. I'd ask why?

As for calculating Order totals etc., your Service layer would be the natural home. A SalesOrderCalculator class with LineTotal(LineItem lineItem) and OrderTotal(Order order) methods would be fine. You may also wish to consider creating an appropriate Factory, e.g. OrderServices.CreateOrderCalculator(), to switch the implementation if required (tax on order discount has country-specific rules, for instance). This could also form a single entry point to Order services and make things easy to find through IntelliSense.
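A sketch of that suggestion, with assumed minimal shapes for Order and LineItem (only the members the calculator needs are shown):

```csharp
using System.Collections.Generic;

public class LineItem
{
    public decimal Quantity;
    public decimal Price;
}

public class Order
{
    public List<LineItem> LineItems = new List<LineItem>();
}

// The service: totals live here rather than on the anemic entities.
public class SalesOrderCalculator
{
    public decimal LineTotal(LineItem lineItem)
    {
        return lineItem.Quantity * lineItem.Price;
    }

    public decimal OrderTotal(Order order)
    {
        decimal total = 0;
        foreach (LineItem li in order.LineItems)
            total += LineTotal(li);
        return total;
    }
}

// Factory entry point; swapping the returned implementation is where
// country-specific tax rules could be introduced later.
public static class OrderServices
{
    public static SalesOrderCalculator CreateOrderCalculator()
    {
        return new SalesOrderCalculator();
    }
}
```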

If all this sounds unworkable, it may be that you need to think more deeply about what your abstractions are achieving, how they relate to each other, and the Single Responsibility Principle. A Repository is an infrastructure abstraction (hiding HOW entities are saved and retrieved). Services abstract away the implementation of business actions or rules and allow a better structure for versioning or variance. They are not generally layered in the way you describe. If you have complex security rules in your Services, your Repositories may be the better home.

In a typical DDD-style model, Repositories, Entities, Value Objects and Services would all be used alongside each other in the same layer and as part of the same model. Layers above (typically presentation) would therefore be insulated by these abstractions. Within the model, the implementation of one service may use the abstraction of another. A further refinement adds rules about who holds references to which entities or value objects, enforcing a more formal lifecycle context. For more information on this I would recommend studying the Eric Evans book or Domain Driven Design Quickly.

I'm interested in perceived "best practice", tempered with a little dose of reality here.

In a web application, do you allow your web tier to directly access the DAL, or should it go through a BLL first?

I'm talking specifically of scenarios where there's no "business logic" really involved -- such as a simple query: "Fetch all customers with surname of 'Atwood'". Scenarios where there's any kind of logic absolutely are gonna go through the BLL, so let's call that moot.

While you could encapsulate this method inside a BLL object, it seems somewhat pointless when the signature will often be exactly the same as that of the DAL object, and the code probably as simple as a one-liner delegating the query off to the DAL.

If you choose the former -- employing a BLL object -- what do you call these objects? (Assuming they do little more than provide a query layer into the DAL.) Helpers? QueryProviders?

Thoughts please.



I disagree with most of the posts here.

I call my data layer in the web tier. If there is nothing between the web/UI tier and the data layer, there is no point creating a layer "just in case." That's premature optimization. It's a waste. I can't recall a time the business layer "saved me." All it did was create more work, duplication, and higher maintenance. I spent years subscribing to the Business Layer --> Data Layer approach, passing entities between the layers. I always felt dirty creating pass-through methods that did nothing.

After being introduced to Domain Driven Design by Eric Evans, I do what makes sense. If there is nothing in between the UI and the Data Layer then I call the Data Layer in the UI.

To allow for future changes, I wrap all my Data Layer classes in interfaces. In the UI, I reference the interfaces and use dependency injection to manage the implementation. After making these changes, it was like a breath of fresh air. If I need to inject something between the data layer and UI, I create a service.
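A minimal sketch of that arrangement (the names ICustomerData and SqlCustomerData are illustrative, not from the answer):

```csharp
using System.Collections.Generic;

public class Customer
{
    public string Surname;
}

// The UI references only this interface; dependency injection
// supplies the concrete implementation.
public interface ICustomerData
{
    IList<Customer> FetchBySurname(string surname);
}

public class SqlCustomerData : ICustomerData
{
    public IList<Customer> FetchBySurname(string surname)
    {
        // real ADO.NET or ORM code would go here
        return new List<Customer>();
    }
}
```

If logic is ever needed between the UI and the data layer, a service class implementing the same interface can wrap SqlCustomerData without touching the UI.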

Another thing I did was to reduce the number of projects. Before, I would have a project for the Data Layer, Business Logic, Business Entities and some type of UI project -- what a pain.

I have two projects: the core project (entities, business logic and data layer) and UI projects (web, web services, etc...).

For more information I recommend looking at these guys:

I've been working a lot with PHP, but recently I was put on a project which uses Java. In PHP I used to write a lot of Singleton objects, but this pattern doesn't have the same significance in Java that it has in PHP. So I wanted to go for a utility class (a class with static methods), but my boss doesn't like this kind of class and asked me to go for service objects. So my guess was that a service object is just a class with a constructor that implements some public methods... Am I right?

Domain-Driven Design defines a Service as:

A SERVICE is an operation offered as an interface that stands alone in the model, without encapsulating state... [p. 105]

Yes, it's a class with public methods, but in addition to that, it implements an interface that exposes those methods. At its core, the Service is the interface - the class that implements it is just an implementation detail.
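In code (shown here in C#, though the shape is identical in Java), a hypothetical service object might look like this; note there are no fields holding business state, matching the "without encapsulating state" part of the definition:

```csharp
// The service is the interface; the class is just an implementation detail.
// Names are illustrative.
public interface IGreetingService
{
    string Greet(string name);
}

public class GreetingService : IGreetingService
{
    public string Greet(string name)
    {
        return "Hello, " + name;
    }
}
```

Callers depend on IGreetingService, so the implementation can be swapped (for testing, or for a different backend) without touching them.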

Can anyone please suggest a good design and architecture book for .NET?

Is there any book I can refer to which has case studies, examples, etc. so that I can update my knowledge well in this field?

If one isn't available for .NET, please suggest a Java one as well.

Thanks in advance Swapna MC

Here are a few good enterprise architecture books (based on Java, but the general concepts still apply):

A few of these patterns are a little old, but still useful to know.

If you're interested in WCF for a service-oriented architecture:

Or for framework design:

I would recommend this book: .NET: Architecting Applications for the Enterprise

Not a .net book, but the classic book here is Patterns of Enterprise Application Architecture

I enjoyed Head First Design Patterns:

More design than architecture (obviously) but it makes heavy use of examples. Examples are in Java, btw.

Architectural approaches can vary greatly depending on what you're trying to build, e.g. architecting a specific piece of software's internals versus architecting a distributed system.

For a given software program's internals, I like Patterns of Enterprise Application Architecture as a good reference.

I have also used the SEDA architectural style for some high throughput event-driven applications. The SEDA homepage has the original paper and references to other projects using this style. You might have heard of the Java Open Source projects: MULE and Apache Camel.

Also check out Enterprise Integration Patterns, which is a great companion book to PoEAA. This one pretty much helps you architect the interconnection between distributed systems. Lots of tools in this area... from XMPP to AMQP, to MULE, to JMS, etc.

And I have to suggest reviewing the REST Architectural Style since it is important in today's web software. There is a lot of material about REST, but primarily read (and reread) Roy Fielding's dissertation.

What are the best places to find out everything there is to know about Domain-Driven Design, from beginner to advanced?

  • Books
  • Websites
  • Mailing lists
  • User groups
  • Conferences
  • etc

Applying Domain-Driven Design and Patterns is a very good book on the subject. Lots of good examples as well as discussion of related subjects like test driven development and how they apply.

Also check out

Here are some informative sources:

  • An interview with Eric Evans on Software Engineering Radio
  • A book which applies the principles of DDD using an example in C#
  • A podcast on Getting Started With Domain-Driven Design by Rob Conery
  • A conversation between Scott Hanselman and Rob Conery on Learning DDD.

I recommend Domain Driven Design from Eric Evans, it's a great book on the subject.

I have an aggregate root Product which contains a list of Selection entities, which in turn contain a list of entities called Feature.

  • The aggregate root Product has an identity of just name
  • The entity Selection has an identity of name (and its corresponding Product identity)
  • The entity Feature has an identity of name (and also its corresponding Selection identity)

Where the identities for the entities are built as follows:

var productId = new ProductId("dedisvr");
var selectionId = new SelectionId("os",productId);
var featureId = new FeatureId("windowsstd",selectionId);

Note that the dependent identity takes the identity of the parent as part of a composite.

The idea is that this would form a product part number which identifies a specific feature in a selection, i.e. the ToString() of the featureId object above would return dedisvr-os-windowsstd.

Everything exists within the Product aggregate, where business logic is used to enforce invariants on the relationships between selections and features. In my domain, it doesn't make sense for a feature to exist without a selection, or for a selection to exist without an associated product.

When querying the product for associated features, the Feature object is returned but the C# internal keyword is used to hide any methods that could mutate the entity, and thus ensure the entity is immutable to the calling application service (in a different assembly from domain code).

These two assertions are provided for by the following two functions:

class Product
{
    /* snip a load of other code */

    public void AddFeature(FeatureIdentity identity, string description, string specification, Prices prices)
    {
        // snip...
    }

    public IEnumerable<Feature> GetFeaturesMemberOf(SelectionIdentity identity)
    {
        // snip...
    }
}

I have an aggregate root called Service Order, which contains a ConfigurationLine that references a Feature within the Product aggregate root by FeatureId. This may be in an entirely different bounded context.

Since the FeatureId contains the fields SelectionId and ProductId I will know how to navigate to the feature via the aggregate root.

My questions are:

Composite identities formed with identity of parent - good or bad practice?

In other sample DDD code where identities are defined as classes, I haven't yet seen any composites formed from the local entity's id and its parent's identity. I think it is a nice property, since we can always navigate to that entity (always through the aggregate root) with knowledge of the path to get there (Product -> Selection -> Feature).

While my code with the composite identity chained with the parent makes sense and allows me to navigate to the entity via the aggregate root, not seeing other code examples where identities are formed similarly from composites makes me very nervous - is there any reason for this, or is it bad practice?

References to internal entities - transient or long term?

The blue book mentions that references to entities within an aggregate are acceptable, but should only be transient (within a code block). In my case I need to store references to these entities for future use, so the storage is not transient.

However, the need to store this reference is for reporting and searching purposes only, and even if I did want to retrieve the child entity by navigating via the root, the entities returned are immutable, so I don't see how any harm can be done or invariants broken.

Is my thinking correct, and if so, why is it recommended to keep child entity references transient?

Source code is below:

public class ProductIdentity : IEquatable<ProductIdentity>
{
    readonly string name;

    public ProductIdentity(string name)
    {
        this.name = name;
    }

    public bool Equals(ProductIdentity other)
    {
        return this.name == other.name;
    }

    public string Name { get { return this.name; } }

    public override int GetHashCode()
    {
        return this.name.GetHashCode();
    }

    public SelectionIdentity NewSelectionIdentity(string name)
    {
        return new SelectionIdentity(name, this);
    }

    public override string ToString()
    {
        return this.name;
    }
}

public class SelectionIdentity : IEquatable<SelectionIdentity>
{
    readonly string name;
    readonly ProductIdentity productIdentity;

    public SelectionIdentity(string name, ProductIdentity productIdentity)
    {
        this.productIdentity = productIdentity;
        this.name = name;
    }

    public bool Equals(SelectionIdentity other)
    {
        return (this.name == other.name) && (this.productIdentity == other.productIdentity);
    }

    public override int GetHashCode()
    {
        return this.ToString().GetHashCode();
    }

    public override string ToString()
    {
        return this.productIdentity.ToString() + "-" + this.name;
    }

    public FeatureIdentity NewFeatureIdentity(string name)
    {
        return new FeatureIdentity(name, this);
    }
}

public class FeatureIdentity : IEquatable<FeatureIdentity>
{
    readonly SelectionIdentity selection;
    readonly string name;

    public FeatureIdentity(string name, SelectionIdentity selection)
    {
        this.selection = selection;
        this.name = name;
    }

    public bool BelongsTo(SelectionIdentity other)
    {
        return this.selection.Equals(other);
    }

    public bool Equals(FeatureIdentity other)
    {
        return this.selection.Equals(other.selection) && this.name == other.name;
    }

    public SelectionIdentity SelectionId { get { return this.selection; } }

    public string Name { get { return this.name; } }

    public override int GetHashCode()
    {
        return this.ToString().GetHashCode();
    }

    public override string ToString()
    {
        return this.SelectionId.ToString() + "-" + this.name;
    }
}

Composite identities formed with identity of parent - good or bad practice?

They are a good practice when they are used properly: when the domain expert identifies things locally (e.g. "the John from Marketing") they are correct; otherwise they are wrong.

In general, whenever the code follows the expert's language, it's correct.

Sometimes you face a globally identified entity (like "John Smith") that the expert identifies locally when he talks about a specific bounded context. In these cases, the BC requirements win.
Note that this means you will need a domain service to map identifiers between BCs; otherwise, all you need are shared identifiers.

References to internal entities - transient or long term?

If the aggregate root (in your case Product) requires the child entities to ensure business invariants, the references must be "long term", at least for as long as the invariants must hold.

Moreover, you correctly grasped the rationale behind internal entities: they are entities because the expert identifies them; mutability is a programming concern (and immutability is always safer). You can have immutable entities, whether local to another entity or not, but what makes them entities is the fact that the expert identifies them, not their mutability.

Value objects are immutable just because they have no identity, not the other way around!

But when you say:

However the need to store this reference is for reporting and searching purposes only

I would suggest you use direct SQL queries (or query objects with DTOs, or anything you can get cheaply) instead of domain objects. Reports and searches don't mutate entities' state, so you don't need to preserve invariants. That's the main rationale of CQRS, which simply means: "use the domain model only when you have to ensure business invariants! Use WTF you like for components that just need to read!"
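As a sketch of that read side, a plain ADO.NET query filling a flat DTO might look like this (the table and column names are hypothetical; no aggregate is loaded and no invariant needs protecting):

```csharp
using System.Collections.Generic;
using System.Data.SqlClient;

// Flat row shaped for the report, not for the domain.
public class FeatureReportRow
{
    public string ProductId;
    public string SelectionId;
    public string FeatureName;
}

public class FeatureReportQuery
{
    // Reads straight from SQL into DTOs, bypassing the domain model.
    public IList<FeatureReportRow> FeaturesForProduct(string productId, SqlConnection conn)
    {
        var rows = new List<FeatureReportRow>();
        using (var cmd = new SqlCommand(
            "SELECT ProductId, SelectionId, FeatureName FROM Features WHERE ProductId = @p",
            conn))
        {
            cmd.Parameters.AddWithValue("@p", productId);
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    rows.Add(new FeatureReportRow
                    {
                        ProductId = reader.GetString(0),
                        SelectionId = reader.GetString(1),
                        FeatureName = reader.GetString(2),
                    });
                }
            }
        }
        return rows;
    }
}
```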

Extra notes

When querying the product for associated features, the Feature object is returned but the C# internal keyword is used to hide any methods that could mutate the entity...

Using access modifiers to hide mutators in this context is a cheap approach if you don't need to unit test clients, but if you need to test the client code (or to introduce an AOP interceptor, or anything else), plain old interfaces are a better solution.

Someone will tell you that you are using "needless abstraction", but using a language keyword (interface) does not mean introducing an abstraction at all!
I'm not entirely sure that they really understand what an abstraction is, so much so that they confuse the tools (a few language keywords that are common in OO languages) with the act of abstracting.

The abstractions reside in the programmer mind (and in the expert's mind, in DDD), the code just expresses them through the constructs provided by the language you use.

Are sealed classes concrete? Are structs concrete? NO!!!
You can't throw them at incompetent programmers to hurt them!
They are just as abstract as interfaces or abstract classes.

An abstraction is needless (worse, it's dangerous!) if it makes the code's prose unreadable, hard to follow, and so on. But, believe me, it can be coded as a sealed class!

... and thus ensure the entity is immutable to the calling application service (in a different assembly from domain code).

IMHO, you should also consider that if the "apparently immutable" local entities returned by the aggregate can actually change part of their state, the clients that received them won't be able to know that such a change occurred.

For my part, I solve this issue by returning (and also using internally) local entities that are actually immutable, forcing clients to hold a reference only to the aggregate root (aka the main entity) and to subscribe to events on it.
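A sketch of that approach: the root raises an event, so clients only ever hold the root and never need a mutable reference to a child entity (all names below are illustrative):

```csharp
using System;

public class FeatureAddedEventArgs : EventArgs
{
    public readonly string FeatureName;
    public FeatureAddedEventArgs(string featureName) { this.FeatureName = featureName; }
}

public class Product
{
    // Clients subscribe here instead of holding child entity references.
    public event EventHandler<FeatureAddedEventArgs> FeatureAdded = delegate { };

    public void AddFeature(string name)
    {
        // ...enforce invariants, update internal state...
        this.FeatureAdded(this, new FeatureAddedEventArgs(name));
    }
}
```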

I have almost 6 years of experience in application development using .NET technologies. Over the years I have improved as an OO programmer, but when I see code written by other guys (especially the likes of Jeffrey Richter, Peter Golde, Ayende Rahien, Jeremy Miller, etc.), I feel there is a generation gap between my designs and theirs. I usually design my classes on the fly, with some help from tools like ReSharper for refactoring and code organization.

So, my question is: what does it take to be a better OO programmer? Is it

a) Experience

b) Books (reference please)

c) Process (tdd or uml)

d) patterns

e) anything else?

And how should one validate that a design is good, easy to understand, and maintainable? With so many buzzwords in the industry (dependency injection, IoC, MVC, MVP, etc.), where should one concentrate most in design? I feel abstraction is the key. What else?

Having your design reviewed by someone is quite important. Reviewing and maintaining legacy code helps you realize what makes software rot. Thinking is also very important; on the one hand, don't rush into implementing the first idea. On the other hand, don't think about everything at once. Do it iteratively.

Regular reading of books/articles, like Eric Evans' Model-Driven Design, or learning new languages (Smalltalk, Self, Scala) that take a different approach to OO, helps you to really understand.

Software, and OO, is all about abstractions, responsibilities, dependencies and duplication (or lack of it). Keep them on your mind on your journey, and your learning will be steady.

It takes being a better programmer to be a better OO programmer.

OO has been evolving over the years, and it has a lot to do with changing paradigms and technologies like n-tier architecture, garbage collection, web services, etc. (the kind of things you've already seen). There are fundamental principles such as maintainability, reusability, low coupling, KISS, DRY, Amdahl's law, etc. that you have to learn, read about, experience, and apply yourself.

OO is not an end in itself, but rather a means to achieve programming solutions. As in games, sports, and the arts, practices cannot be understood without principles, and principles cannot be understood without practices.

To be more specific, here are some of the skills that may make one a better programmer:

  • Listen to the domain experts.
  • Know how to write tests.
  • Know how to design GUI desktop software.
  • Know how to persist data into a database.
  • Separate the UI layer from the logic layer.
  • Know how to write a class that acts like a built-in class.
  • Know how to write a graphical component that acts like a built-in component.
  • Know how to design client/server software.
  • Know networking, security, concurrency, and reliability.

Design patterns, MVC, UML, refactoring, TDD, etc. address many of these issues, often extending OO in creative ways. For example, to decouple the UI layer from the logic layer, an interface may be introduced to wrap the UI class. From a purely object-oriented point of view it may not make much sense, but it makes sense from the point of view of separating the UI layer from the logic layer.
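For instance, such a decoupling interface might look like this (a minimal sketch with hypothetical names; the logic layer owns the interface, so it has no compile-time dependency on any concrete UI class):

```java
// The logic layer owns this interface; it knows nothing about any concrete UI.
interface ProgressListener {
    void onProgress(int percent);
}

// Logic layer: depends only on the abstraction.
class FileImporter {
    private final ProgressListener listener;

    FileImporter(ProgressListener listener) {
        this.listener = listener;
    }

    int importRecords(int total) {
        for (int i = 1; i <= total; i++) {
            listener.onProgress(i * 100 / total); // report progress without knowing who listens
        }
        return total;
    }
}

// UI layer: implements the interface; swapping in a console view or a test double is trivial.
class ConsoleProgressBar implements ProgressListener {
    int lastPercent = -1;

    public void onProgress(int percent) {
        lastPercent = percent;
    }
}

public class LayerSeparationDemo {
    public static void main(String[] args) {
        ConsoleProgressBar bar = new ConsoleProgressBar();
        int imported = new FileImporter(bar).importRecords(4);
        System.out.println("imported=" + imported + " lastPercent=" + bar.lastPercent);
    }
}
```

The dependency arrow now points from the UI to the logic layer, not the other way around.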

Finally, realizing the limitations of OO is important too. In modern application architecture, the purist data + logic view of OO doesn't always mesh well. The Data Transfer Object pattern (Java, MS, Fowler), for example, intentionally strips away the logic part of the object so that it carries only data. This way the object can turn itself into a binary stream or XML/JSON. The logic part may then be handled in some way on both the client and server side.
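The contrast can be sketched with a hypothetical Invoice example: the domain object keeps the behavior, while the DTO it produces carries data only (with the computed total baked in) and can be serialized as-is:

```java
import java.math.BigDecimal;

// Domain object: data plus behavior.
class Invoice {
    private final BigDecimal net;
    private final BigDecimal taxRate;

    Invoice(BigDecimal net, BigDecimal taxRate) {
        this.net = net;
        this.taxRate = taxRate;
    }

    // Behavior lives on the domain object.
    BigDecimal total() {
        return net.add(net.multiply(taxRate));
    }

    // Flattened, logic-free DTO for the wire; the computed total is baked in.
    InvoiceDto toDto() {
        return new InvoiceDto(net, total());
    }
}

// DTO: only data, trivially serializable to JSON/XML/binary.
class InvoiceDto {
    final BigDecimal net;
    final BigDecimal total;

    InvoiceDto(BigDecimal net, BigDecimal total) {
        this.net = net;
        this.total = total;
    }
}

public class DtoDemo {
    public static void main(String[] args) {
        InvoiceDto dto = new Invoice(new BigDecimal("100"), new BigDecimal("0.20")).toDto();
        System.out.println(dto.total); // 120.00
    }
}
```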

Something that's worked for me is reading. I just had a light-bulb moment with David West's Object Thinking, which elaborates on Alan Kay's comment that 'The object revolution has yet to happen'. OO is different things to different people; couple that with the fact that your tools influence how you go about solving a problem. So learn multiple languages.

Object Thinking David West

Personally, I think understanding the philosophy, principles, and values behind a practice, rather than mimicking the practice, helps a lot.

We are taking a long, hard look at our (Java) web application patterns. In the past, we've suffered from an overly anaemic object model and overly procedural separation between controllers, services and DAOs, with simple value objects (basically just bags of data) travelling between them. We've used declarative (XML) managed ORM (Hibernate) for persistence. All entity management has taken place in DAOs.

In trying to move to a richer domain model, we find ourselves struggling with how best to design the persistence layer. I've spent a lot of time reading and thinking about Domain Driven Design patterns. However, I'd like some advice.

First, the things I'm more confident about:

  • We'll have "thin" controllers at the front that deal only with HTTP and HTML - processing forms, validation, UI logic.

  • We'll have a layer of stateless business logic services that implements common algorithms or logic, unaware of the UI, but very much aware of (and delegating to) the domain model.

  • We'll have a richer domain model which contains state, relationships, and logic inherent to the objects in that domain model.

The question comes around persistence. Previously, our services would be injected (via Spring) with DAOs, and would use DAO methods like find() and save() to perform persistence. However, a richer domain model would seem to imply that objects should know how to save and delete themselves, and perhaps that higher level services should know how to locate (query for) domain objects.

Here, a few questions and uncertainties arise:

  • Do we want to inject DAOs into domain objects, so that they can do "" in a save() method? This is a little awkward since domain objects are not singletons, so we'll need factories or post-construction setting of DAOs. When loading entities from a database, this gets messy. I know Spring AOP can be used for this, but I couldn't get it to work (using Play! framework, another line of experimentation) and it seems quite messy and magical.

  • Do we instead keep DAOs (repositories?) completely separate, on par with stateless business logic services? This can make some sense, but it means that if "save" or "delete" are inherent operations of a domain object, the domain object can't express those.

  • Do we just dispense with DAOs entirely and use JPA to let entities manage themselves?

Herein lies the next subtlety: It's quite convenient to map entities using JPA. The Play! framework gives us a nice entity base class, too, with operations like save() and delete(). However, this means that our domain model entities are quite closely tied to the database structure, and we are passing objects around with a large amount of persistence logic, perhaps all the way up to the view layer. If nothing else, this will make the domain model less re-usable in other contexts.

If we want to avoid this, then we'd need some kind of mapping DAO - either using simple JDBC (or at least Spring's JdbcTemplate), or using a parallel hierarchy of database entities and "business" entities, with DAOs forever copying information from one hierarchy to another.

What is the appropriate design choice here?


I am not a Java expert, but I use NHibernate in my .NET code so my experience should be directly translatable to the Java world.

When using an ORM (like the Hibernate you mentioned) to build a Domain-Driven Design application, one good (I won't say best) practice is to create so-called application services between the UI and the domain. They are similar to the stateless business services you mentioned, but should contain almost no logic. They should look like this:

public void sayHello(int id, String helloString) {
    SomeDomainObject target = domainObjectRepository.findById(id); // This uses Hibernate to load the object.

    target.sayHello(helloString); // There is a single domain object method invocation per application service method.

    domainObjectRepository.save(target); // This one is optional. Hibernate should already know that this object needs saving, because it tracks changes.
}

Any changes to objects contained by DomainObject (also adding objects to collections) will be handled by Hibernate.

You will also need some kind of AOP to intercept application service method invocations: open Hibernate's session before the method executes, and save changes after the method finishes without exceptions.
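That interception can be sketched with a plain JDK dynamic proxy. This is only an illustration of the open-invoke-commit shape: the FakeSession below is a toy stand-in for Hibernate's Session, and in practice you would use Spring AOP or a declarative transaction interceptor instead.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.ArrayList;
import java.util.List;

// Toy stand-in for Hibernate's Session, just to show the interception pattern.
class FakeSession {
    static final List<String> LOG = new ArrayList<>();
    void open()     { LOG.add("open"); }
    void commit()   { LOG.add("commit"); }
    void rollback() { LOG.add("rollback"); }
}

interface GreetingService {
    void sayHello(int id, String helloString);
}

class GreetingServiceImpl implements GreetingService {
    public void sayHello(int id, String helloString) {
        FakeSession.LOG.add("sayHello(" + id + ")");
    }
}

public class TransactionProxyDemo {
    // Wrap any service so each method runs inside open()/commit(), rolling back on failure.
    static <T> T transactional(Class<T> iface, T target) {
        InvocationHandler handler = (proxy, method, args) -> {
            FakeSession session = new FakeSession();
            session.open();
            try {
                Object result = method.invoke(target, args);
                session.commit();
                return result;
            } catch (Exception e) {
                session.rollback();
                throw e;
            }
        };
        return iface.cast(Proxy.newProxyInstance(
                iface.getClassLoader(), new Class<?>[] { iface }, handler));
    }

    public static void main(String[] args) {
        GreetingService service = transactional(GreetingService.class, new GreetingServiceImpl());
        service.sayHello(42, "hi");
        System.out.println(FakeSession.LOG); // [open, sayHello(42), commit]
    }
}
```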

There is a really good sample of how to do DDD in Java here. It is based on the sample problem from Eric Evans' 'Blue Book'. The application logic class sample code is here.

As I am in my first career year in software development (C++ & C#), I now see my flaws and what I am missing in this sphere. Because of that, I came to some conclusions and made myself a plan to fill those gaps and increase my knowledge of software development. But the question I stumbled upon after drawing up the tasks I need to do has no obvious answer to me: what is the priority of those tasks? Here they are, with my priority numbering:


  1. Functional programming (Scala)
  2. Data structures & Algorithms (Cormen book to the rescue + TopCoder/ProjectEuler/etc)
  3. Design patterns (GOF or Head First)

Do you agree with this tasks and priorities? Or do I miss something here? Any suggestions are welcome!

I think you have it backwards. Start with design patterns, which will help you reduce the amount of messy code you produce, and better understand code written by other people (particularly libraries written with design patterns in mind).

In addition to the Gang of Four book, there are many other design pattern books -- Patterns of Enterprise Application Architecture, for example. It might be worth looking at them after you get a good grounding. But I also highly recommend Domain Driven Design, which I think gives you a way of thinking about how to structure your program, instead of just identifying pieces here and there.

Next you can go with algorithms. I prefer Skiena's The Algorithm Design Manual, whose emphasis is more on getting people to know how to select and use algorithms, as well as building them from well known "parts" than on getting people to know to make proofs about algorithms. It is also available for Kindle, which was useful to me.

Also, get a good data structures book -- people often neglect that. I like the Handbook of Data Structures and Applications, though I'm also looking into Advanced Data Structures.

However, I cannot recommend either TopCoder or Project Euler for this task. TopCoder is, IMHO, mostly about writing code fast. There's nothing bad about that, but it's hardly likely to make a difference in day-to-day work. If you like it, by all means do it. Also, it's excellent preparation for job interviews with the more technically minded companies.

Project Euler, on the other hand, is much more targeted at scientific computing, computer science and functional programming. It will be an excellent training ground when learning functional programming.

There's something that has a bit of design patterns, algorithms and functional programming, which is Elements of Programming. It uses C++ for its examples, which is a plus for you.

As for functional programming, I think it is less urgent than the other two. However, I'd suggest either Clojure or Haskell instead of Scala.

Learning functional programming in Scala is like learning Spanish in a latino neighborhood, while learning functional programming in Clojure is like learning Spanish in Madrid, and learning functional programming in Haskell is like learning Spanish in an isolated monastery in Spain. :-)

Mind you, I prefer Scala as a programming language, but I already knew FP when I came to it.

When you do get to functional programming, get Chris Okasaki's Purely Functional Data Structures, for a good grounding on algorithms and data structures for functional programming.

Beyond that, try to learn a new language every year. Even if not for the language itself, you are more likely to keep up to date with what people are doing nowadays.

I am trying to understand how entities operate in multiple bounded contexts.

Given an Employee of a Company. In (for example) the Human Resources context, this person has a name, surname, address, salary reference number, and bank account. But in the Accounting context all that is relevant is the salary reference number and bank account.

Do you have an Employee entity in the HR context and a Value-Type (e.g. SalariedEmployee) in the Accounting context?

class Employee
{
    public BankAccount BankAccountDetails { get; set; }
    public string FullName { get; set; }
    public Address ResidentialAddress { get; set; }
    public string SalaryRef { get; set; }
}

And the SalariedEmployee class (??), a value type built from the Employee's data:

class SalariedEmployee
{
    public SalariedEmployee(string salaryRef, BankAccount bankAccountDetails) { /* ... */ }

    public string SalaryRef { get; }
    public BankAccount BankAccountDetails { get; }
}

Does the HRService in the bounded context return this information? Or do you use the Employee class in both contexts?

If more than one context is necessary, some things can definitely be modeled as an entity in one context and a value object in another. Translating from an entity to a value object is usually straightforward, but going from a value object to an entity may not be. From Domain-Driven Design, p. 337:

The translation mechanism is not driven by the model. It is not in the bounded context. (It is part of the boundary itself, which will be discussed in context map.)

If the Human Resources context ever needs to ask the Accounting context a question about a particular employee, it would become a confusing question.
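A minimal sketch of such a boundary translation, with hypothetical names: the HR context's Employee entity (identified by id) is translated into an Accounting value object (identified by its attributes), and the translator lives at the boundary rather than inside either model:

```java
import java.util.Objects;

// HR context: Employee is an entity, identified by id.
class Employee {
    final long id;
    final String fullName;
    final String salaryRef;
    final String bankAccount;

    Employee(long id, String fullName, String salaryRef, String bankAccount) {
        this.id = id;
        this.fullName = fullName;
        this.salaryRef = salaryRef;
        this.bankAccount = bankAccount;
    }
}

// Accounting context: SalariedEmployee is a value object; equality is by attributes.
final class SalariedEmployee {
    final String salaryRef;
    final String bankAccount;

    SalariedEmployee(String salaryRef, String bankAccount) {
        this.salaryRef = salaryRef;
        this.bankAccount = bankAccount;
    }

    @Override public boolean equals(Object o) {
        if (!(o instanceof SalariedEmployee)) return false;
        SalariedEmployee other = (SalariedEmployee) o;
        return salaryRef.equals(other.salaryRef) && bankAccount.equals(other.bankAccount);
    }

    @Override public int hashCode() { return Objects.hash(salaryRef, bankAccount); }
}

// The translation lives at the boundary (an anti-corruption layer), not inside either model.
class HrToAccountingTranslator {
    static SalariedEmployee translate(Employee e) {
        return new SalariedEmployee(e.salaryRef, e.bankAccount);
    }
}

public class BoundaryDemo {
    public static void main(String[] args) {
        Employee hr = new Employee(7, "Ada Lovelace", "S-123", "GB00-0000");
        // Two translations of the same entity are equal values in Accounting.
        System.out.println(HrToAccountingTranslator.translate(hr)
                .equals(HrToAccountingTranslator.translate(hr))); // true
    }
}
```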

I'm implementing a DAL using Entity Framework. In our application, we have three layers (DAL, business layer, and presentation). This is a web app. When we began implementing the DAL, our team thought that the DAL should have classes whose methods receive an ObjectContext provided by services in the business layer and operate over it. The rationale behind this decision is that different ObjectContexts see different DB states, so some operations can be rejected due to foreign-key mismatches and other inconsistencies.

We noticed that generating an object context in the services layer and propagating it downward creates high coupling between layers. Therefore we decided to use DTOs mapped by AutoMapper (rather than unmanaged entities or self-tracking entities, which we rejected for their high coupling, for exposing entities to upper layers, and for low efficiency), together with Unit of Work. So, here are my questions:

  1. Is this the correct approach to design a web application's DAL? Why?
  2. If you answered "yes" to 1., how do you reconcile the DTO concept with the Unit of Work pattern?
  3. If you answered "no" to 1., which could be a correct approach to design a DAL for a Web application?

Please, if possible give bibliography supporting your answer.

About the current design:

The application has been planned to be developed in three layers: presentation, business, and DAL. The business layer has both facades and services.

There is an interface called ITransaction (with only two methods, to dispose and to save changes) visible only to services. To manage a transaction, there is a class Transaction that extends ObjectContext and implements ITransaction. We designed it this way because, at the business layer, we do not want other ObjectContext methods to be accessible.

On the DAL, we created an abstract repository using two generic type parameters (one for the entity and the other for its associated DTO). This repository has CRUD methods implemented generically, plus two generic methods that map between the repository's DTOs and entities using AutoMapper. The abstract repository's constructor takes an ITransaction as an argument and expects it to be an ObjectContext, in order to assign it to its protected ObjectContext property.

The concrete repositories should only receive and return .net types and DTOs.

We are now facing this problem: the generic create method does not generate a temporary or persistent id for the attached entities until we call SaveChanges(), which breaks the transactionality we want. This implies that service methods cannot use it to associate DTOs in the business layer.

You should take a look at what dependency injection, and inversion of control in general, mean. That would give you the ability to control the life cycle of the ObjectContext "from outside". You could ensure that only one instance of the object context is used per HTTP request. To avoid managing dependencies manually, I would recommend using StructureMap as a container.

Another useful (but quite tricky and hard to get right) technique is abstraction of persistence. Instead of using the ObjectContext directly, you would use a so-called repository, which is responsible for providing a collection-like API for your data store. This provides a useful seam that you can use to switch the underlying data-storage mechanism, or to mock out persistence completely for tests.
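The seam can be sketched like this (hypothetical Customer example; written in Java here for illustration, but the shape is identical in C#): callers program against the interface, and an in-memory implementation can stand in for the EF- or Hibernate-backed one in tests:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

class Customer {
    final int id;
    final String name;

    Customer(int id, String name) {
        this.id = id;
        this.name = name;
    }
}

// The seam: callers see a collection-like API, not an ObjectContext/Session.
interface CustomerRepository {
    void add(Customer customer);
    Optional<Customer> findById(int id);
}

// Test double: swaps in for the real, database-backed implementation in unit tests.
class InMemoryCustomerRepository implements CustomerRepository {
    private final Map<Integer, Customer> store = new HashMap<>();

    public void add(Customer c) { store.put(c.id, c); }

    public Optional<Customer> findById(int id) { return Optional.ofNullable(store.get(id)); }
}

public class RepositorySeamDemo {
    public static void main(String[] args) {
        CustomerRepository repo = new InMemoryCustomerRepository();
        repo.add(new Customer(1, "Acme"));
        System.out.println(repo.findById(1).map(c -> c.name).orElse("missing")); // Acme
    }
}
```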

As Jason already suggested, you should also use POCOs (plain old CLR objects). There would still be an implicit coupling with Entity Framework that you should be aware of, but it's much better than using generated classes.

Things you might not find elsewhere fast enough:

  1. Try to avoid using Unit of Work. Your model should define the transactional boundaries.
  2. Try to avoid using generic repositories (note the point about IQueryable there, too).
  3. It's not mandatory to spam your code with the repository pattern name.

Also, you might enjoy reading about domain-driven design. It helps in dealing with complex business logic, and gives great guidelines for making code less procedural and more object-oriented.

Suppose I have

public class Product : Entity
{
    public IList<Item> Items { get; set; }
}

Suppose I want to find the item with the maximum of something. I can add a method Product.GetMaxItemSmth() and do it with LINQ ((from i in Items select i.Smth).Max()), or with a manual loop, or whatever. Now, the problem is that this will load the full collection into memory.

The correct solution will be to do a specific DB query, but domain entities do not have access to repositories, right? So either I do


(which is ugly, no?), or even if entities have access to repositories, I use IProductRepository from entity

product.GetMaxItemSmth() { return Service.GetRepository<IProductRepository>().GetMaxItemSmth(); }

which is also ugly and is a duplication of code. I can even go fancy and do an extension

public static IList<Item> GetMaxItemSmth(this Product product)
{
    return Service.GetRepository<IProductRepository>().GetMaxItemSmth();
}

which is better only because it doesn't really clutter the entity with repository... but still does method duplication.

Now, this is the problem of whether to use product.GetMaxItemSmth() or productRepository.GetMaxItemSmth(product)... again. Did I miss something in DDD? What is the correct way here? Just use productRepository.GetMaxItemSmth(product)? Is this what everyone uses and are happy with?

I just don't feel it is right... if I can't access a product's Items from the product itself, why do I need this collection in Product at all??? And then, can Product do anything useful if it can't use specific queries and access its collections without performance hits?

Of course, I can use a less efficient way and never mind, and when it's slow I'll inject repository calls into entities as an optimization... but even this doesn't sound right, does it?

One thing to mention: maybe it's not quite DDD, but I need the IList in Product in order to get my DB schema generated with Fluent NHibernate. Feel free to answer in a pure DDD context, though.

UPDATE: a very interesting option is described here: not only does it deal with DB-related collection queries, it can also help with collection access control.

I think that this is a difficult question that has no hard and fast answer.

A key to one answer is to analyze Aggregates and Associations as discussed in Domain-Driven Design. The point is that either you load the children together with the parent or you load them separately.

When you load them together with the parent (Product in your example), the parent controls all access to the children, including retrieval and write operations. A corollary to this is that there must be no repository for the children; data access is managed by the parent's repository.

So to answer one of your questions ("why do I need this collection in Product at all?"): maybe you don't, but if you do, that would mean that Items would always be loaded when you load a Product. You could implement a Max method that simply finds the max by looking over all Items in the list. That may not be the most performant implementation, but it is the way to do it if Product is an Aggregate Root.
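That in-memory version might look like this (a Java sketch with the question's hypothetical Smth value as a plain int field):

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

class Item {
    final String name;
    final int smth;

    Item(String name, int smth) {
        this.name = name;
        this.smth = smth;
    }
}

// Product as an aggregate root: Items are always loaded with it,
// so the max can be computed over the in-memory collection.
class Product {
    private final List<Item> items;

    Product(List<Item> items) { this.items = items; }

    Optional<Item> getMaxItemSmth() {
        return items.stream().max(Comparator.comparingInt(i -> i.smth));
    }
}

public class AggregateMaxDemo {
    public static void main(String[] args) {
        Product p = new Product(Arrays.asList(new Item("a", 3), new Item("b", 9), new Item("c", 5)));
        System.out.println(p.getMaxItemSmth().get().name); // b
    }
}
```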

What if Product is not an Aggregate Root? Well, the first thing to do is to remove the Items property from Product. You will then need some sort of Service that can retrieve the Items associated with the Product. Such a Service could also have a GetMaxItemSmth method.

Something like this:

public class ProductService
{
    private readonly IItemRepository itemRepository;

    public ProductService(IItemRepository itemRepository)
    {
        this.itemRepository = itemRepository;
    }

    public IEnumerable<Item> GetMaxItemSmth(Product product)
    {
        var max = this.itemRepository.GetMaxItemSmth(product);
        // Do something interesting here
        return max;
    }
}
That is pretty close to your extension method, but with the notable difference that the repository should be an instance injected into the Service. Static stuff is never good for modeling purposes.

As it stands here, the ProductService is a pretty thin wrapper around the Repository itself, so it may be redundant. Often, however, it turns out to be a good place to add other interesting behavior, as I have tried to hint at with my code comment.

As part of my domain model, let's say I have a WorkItem object. The WorkItem object has several relationships to lookup values, such as:

WorkItemType:

  • UserStory
  • Bug
  • Enhancement

Priority:

  • High
  • Medium
  • Low

And there could possibly be more, such as Status, Severity, etc...

DDD states that if something exists within an aggregate root, you shouldn't attempt to access it outside of the aggregate root. So if I want to be able to add new WorkItemTypes like Task, or new Priorities like Critical, do those lookup values need to be aggregate roots with their own repositories? That seems like overkill, especially if they are only key-value pairs. How can I allow a user to modify these values and still comply with the aggregate-root encapsulation rule?

While the repository pattern as described in the blue book does emphasize its use being exclusive to aggregates, it does leave room open for exceptions. To quote the book:

Although most queries return an object or a collection of objects, it also fits within the concept to return some types of summary calculations, such as an object count, or a sum of a numerical attribute that was intended by the model to be tallied. (pg. 152)

This states that a repository can be used to return summary information, which is not an aggregate. The idea extends to using a repository to look up value objects, which is just what your use case requires.

Another thing to consider is the read-model pattern which essentially allows for a query-only type of repository which effectively decouples the behavior-rich domain model from query concerns.
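A read model can be as simple as a query-only class that returns lookup lists and summary values directly, bypassing the aggregates entirely. A minimal sketch with hypothetical names (in practice the data would come from a query against the store, not a constructor argument):

```java
import java.util.Arrays;
import java.util.List;

// A query-only "read model": it returns lookup values and summary calculations,
// not aggregates, so key-value lookup lists never need to be aggregate roots.
class WorkItemReadModel {
    private final List<String> priorities;

    WorkItemReadModel(List<String> priorities) {
        this.priorities = priorities;
    }

    // Lookup values, e.g. for populating a dropdown in the UI.
    List<String> allPriorities() { return priorities; }

    // A summary calculation, as allowed by the quote above.
    long priorityCount() { return priorities.size(); }
}

public class ReadModelDemo {
    public static void main(String[] args) {
        WorkItemReadModel rm = new WorkItemReadModel(Arrays.asList("High", "Medium", "Low", "Critical"));
        System.out.println(rm.priorityCount()); // 4
    }
}
```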

I am looking for good resources (books/web sites) for learning object-oriented design. Every resource I find teaches me more about UML and RUP than about OO design. The sheer repetition in the Head First books makes me not want to read any of them. I am looking for a book similar to "Structure and Interpretation of Computer Programs", but for object-oriented design, that gets to the point of teaching OO. I have no preference for any specific OO language.

I'm also looking for a replacement for the Gang of Four book.

I can recommend: The Design Patterns Smalltalk Companion

In general, learning Smalltalk will help you become a better OOP developer in any language.

From the Amazon reviews:

Easier to understand than the original GoF, February 4, 2000 By Nicolas Weidmann
This book gives you a better understanding of the patterns than in its original version (the GoF one). I am not a SmallTalk programmer but a 9 years C++ one. At work I had to use the GoF book and never liked reading it. In contrast to this, the SmallTalk companion is easy to read and you can understand the patterns within the first few lines of their description. Take the Bridge pattern and compare their discussions in the two books. If you really like the Gof one then buy it. But according to me, it would be a big mistake buying the GoF in favour of the SmallTalk companion. Trust a C++ programmer :-)

Object-Oriented Analysis and Design with Applications by Grady Booch is the bible for this topic. It is also very approachable though somewhat dense at points, but definitely worth reading and re-reading.

Quoting myself from another answer on the same topic:

Great resources to learn how to think in patterns and do correct OOP analysis and design are Analysis Patterns: Reusable Object Models by Martin Fowler and Applying UML and Patterns by Craig Larman. Also I need to mention here Domain-Driven Design: Tackling Complexity in the Heart of Software by Eric Evans, the most valuable book I found to think about the whole software design process.

I've googled a lot and can't find what I'm looking for.

I'm looking for some architecture practice. I mean, there are a lot of books about design patterns, but I want something like an analysis of common mistakes in the architecture of EE applications. All I've found are antipatterns like string concatenation, or other things that can be found with the help of FindBugs or Sonar.

How I figure it out:

  1. A book with these steps: task definition, wrong solution, why it is bad, right solution.
  2. Educational resources. I've heard there are such resources for testers: some applications are opened for testing, and anyone who wants to learn testing can test them, then after some period discuss their results with other people, or see the percentage of bugs they found.
  3. Maybe other ideas?

Why I think Design Pattern books are not suitable for me:

A developer may know many design patterns from such books, but still be incapable of selecting the correct one for a specific situation. IMHO, this is because these books don't give you any practice, and fail to teach the reader which design pattern(s) should be applied in a given situation. Those books just hand you a ready-made solution.


There are no more answers coming in, so I want to expand my question:

I believe (no, I'm certain) that there exist courses dedicated to improving architecture skills, showing common mistakes in the design of web applications, and so on. I also know that there are many conferences on this subject.

Please advise me where I should look for them.

Can I assume that you know how to create independent objects using dependency injection? If not, this would be an excellent area in which to cultivate reuse and create a more robust architecture. Using DI would be an excellent way to re-architect an existing solution. (Contrast that with much-evolved code, which becomes brittle because of interdependency.)

While you're not looking toward Design Pattern books, I'd ask you to glance at Refactoring to Patterns by J. Kerievsky.

Kerievsky takes you through some real-life refactorings which have titles like "Move Creation Knowledge to Factory." (It's "real-life" in that he uses actual code, not a contrived example.)

Finally, I have been encouraged in our recent use of Spring Integration as an Enterprise Integration Pattern. If you architect and implement even a modest project in Spring Integration, you'll get quite a lot of experience with both DI and EIP.

To me design patterns look like solutions to isolated problems. In real practice we have to deal with combinations of problems, which are highly dependent. I have never seen any pattern used purely. I can recommend one book which gives a wider view of software architecture: Domain-Driven Design: Tackling Complexity in the Heart of Software. It says how to approach to software design, instead of giving a set of common tools. With some examples it explains how to solve a set of problems in combination.

One more book gives an explanation of how to design classes and their collaborations, so as to write good, understandable, and maintainable code: Interface Oriented Design: With Patterns. It explains lower-level approaches than DDD does.

I was reading this paper from Apple:

where it talks about OOP in ways I had never heard of before. I graduated in computer science around 1991, before OOP became popular, so my use of OOP was merely defining some classes and then calling the methods, and that's it. Objects didn't interact with each other; everything was done in a main function that called the various objects' methods.

Until I read the paper above, which talks about interfaces, dynamic typing, dynamic binding, and the idea that an object can send another object a message even before the second object is invented: only the "interface", or the message, needs to be well defined. The second object can have a data type that is unknown as of right now, to be invented in the future, but all it needs to do is understand the "message".

So this way, each object interacts with the others, and each object may have a list of "outlets", which are its relationships with the outside world. The object interacts with its outlets by sending them messages, and those objects, on getting a message, can in turn send a message back to the sender. (Sending a message to an object = calling the object's method.)

I think this opened my eyes to OOP, much more than even the Design Patterns book by the Gang of Four. The Apple paper didn't cite any sources, but I wonder whether it follows the methodology of some book. Does any OOP book give the good, solid foundation in OOP that the Apple paper is talking about?

A nice (and quite short) introduction to OOP is "Coffee Maker".

I personally really enjoy reading "Object thinking".

Another interesting book is "Domain-Driven Design: Tackling Complexity in the Heart of Software".

Next in my to-read list is "Object Design: Roles, Responsibilities, and Collaborations".

I am from a .NET background and I am planning to read the following book to address this question.

Foundations of Object-Oriented Programming Using .NET 2.0 Patterns - Christian Gross

What I am finding interesting about this book is

  1. Use of generics
  2. Explaining patterns as a solution to a problem

I've been watching some Greg Young videos lately, and I'm trying to understand why there is a negative attitude towards setters on domain objects. I thought domain objects were supposed to be "heavy" with logic in DDD. Are there any good examples online of a bad example and the way to correct it? Any examples or explanations are welcome. Does this only apply to events stored in a CQRS manner, or does it apply to all of DDD?

I strongly recommend reading the DDD book by Eric Evans and Object-Oriented Software Construction by Bertrand Meyer. They have all the samples you would ever need.

For about 2 months I've been reading everything I can find for these 3 topics and I'm not yet sure I got it.

  1. Dependency Inversion Principle. This means you should only ever rely on interfaces, not on their implementations. If your class depends on another class, that's bad, because it depends on that second class's details. If your class depends on an interface, that's absolutely fine, since that kind of dependence only means your class needs something abstract that can do something specific, and you don't really care how it does it.

    Since P in "DIP" stands for "Principle", I should probably define it this way: Dependency Inversion Principle is a principle that requires all your code's entities to depend only on details they really need.

    By "details they really need" I mean interfaces for the simplest case. I also used the word "entities" to emphasize that DIP is also applicable to procedures and whatever else, not only to classes.

  2. Dependency Injection. It's only applicable to DI-enabled entities. A DI-enabled entity is an entity that is "open" to having its behavior configured without changing its internals. There are two basic kinds of injection (when talking about classes):

    • Constructor Injection - is when you pass all the required "abstract details" to the object just by the moment it's about to be constructed.
    • Setter Injection - is when you "clarify" the required aspects after the object has already been created.

    So, the definition is probably like following: Dependency Injection is a process of passing the "abstract details" to the entity that really needs these details.

    By "really needs these details" I mean interfaces for the simplest case. The word "entities" is, as always, used to emphasize that DI is also applicable to procedures and whatever else.

  3. Inversion of Control. It's often defined as "the difference between libraries and frameworks", as "writing programs the opposite way from how you did in procedural programming", and so forth. That is the most confusing one for me. I believe the main idea here is simply about who initiates actions: either you do something "whenever you want" (the procedural way), or you "wait" until someone asks you to (the IoC way).

    My Definition is: IoC is a property of your program's execution flow, when you don't do anything until they ask you to do it.

    It sounds exactly like the "Hollywood Principle", and I believe the "Hollywood Principle" and IoC are absolutely the same idea.
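To make sure I have the two injection kinds straight, here is how I'd sketch them (a hypothetical MessageSink dependency; Java used for illustration):

```java
interface MessageSink {
    void send(String message);
}

class RecordingSink implements MessageSink {
    String last;
    public void send(String message) { last = message; }
}

// Constructor injection: the dependency is supplied at construction time and fixed.
class Greeter {
    private final MessageSink sink;

    Greeter(MessageSink sink) { this.sink = sink; }

    void greet(String name) { sink.send("Hello, " + name); }
}

// Setter injection: the dependency can be (re)configured after the object is created.
class ReconfigurableGreeter {
    private MessageSink sink;

    void setSink(MessageSink sink) { this.sink = sink; }

    void greet(String name) { sink.send("Hello, " + name); }
}

public class InjectionDemo {
    public static void main(String[] args) {
        RecordingSink sink = new RecordingSink();
        new Greeter(sink).greet("Ada");
        System.out.println(sink.last); // Hello, Ada
    }
}
```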

Do I understand it?

My take on it is this: DIP is the principle that guides us towards DI. Basically, loose coupling is the goal, and there are at least two ways to achieve it.

  • Dependency Injection
  • Service Locator

However, Service Locator is an anti-pattern and has nothing to do with the DIP. DI, on the other hand, is the correct application of the DIP.

The relationship between DI and IoC has been explained before.

BTW, when talking about DI and loose coupling, I find the terminology laid out in Domain-Driven Design the most applicable. Basically, Services are the only kinds of objects subjected to DI.

How are user sessions handled in domain driven design (in a MVC framework)?

I've got a User domain object, a UserRepository and a UserService.

I've got this method in my UserService class that logs users in.

public function login($email, $password, $remember = false)
{
    $user = $this->userRepo->findByEmail($email);

    if ($user && $user->getPassword() === $password) {
        return $user;
    }

    return false;
}

How do I keep them logged in with sessions?

How would I automatically load the user based on a session user id?

Can somebody give me an example with code how I could sustain the user in my application in DDD?

From a DDD perspective, managing sessions is a distinct set of behaviors and therefore deserves a dedicated service. So create such a service.

You can pass that service to your UserService as a dependency, so the UserService can use the session manager for storing authentication information.

Better yet, the concept of authentication might also be seen as a distinct set of behaviors, so create a service for that too. Pass your UserService and session manager to this authentication service as dependencies. (So the session manager is no longer a dependency of UserService.)

But even authentication could be broken down into several distinct parts; it depends on how far you want to go.

I unfortunately can't show you any code, because it would depend highly on what kind of authentication you want to perform (HTTP Basic, form login, OAuth, etc.), what level of abstraction you want to achieve, and your personal preferences.

But if you want to see what a complex system can look like, have a look at the Security Component of Symfony 2, here in the documentation and here on github.

And if you would consider using this component, you can look at how Silex implements it (github) to get a feel for how you can use it.

Side note

DDD is about much more than writing your code in a certain way. If you want to learn DDD, I suggest you read Domain-Driven Design: Tackling Complexity in the Heart of Software (the blue book) or Implementing Domain-Driven Design (the red book), or you can start off with Domain Driven Design Quickly, which is available for download.

Should value object hold reference to entity in DDD methodology?



This is probably my case. Here I attach a class diagram where the Account holds references to a collection of IInvoiceable items. I treat Tenant as an entity, but it owns only one Account, and I don't think that Account needs an identity; it's part of Tenant. Or should I treat it as an entity? To me it doesn't make sense.

(class diagram)

Yes, it can. This would be a relatively obscure case, but DDD allows for it and it can be useful. From the DDD book by Eric Evans:

VALUE OBJECTS can even reference ENTITIES. For example, if I ask an online map service for a scenic driving route from San Francisco to Los Angeles, it might derive a Route object linking L.A. and San Francisco via the Pacific Coast Highway. That Route object would be a VALUE, even though the three objects it references (two cities and a highway) are all ENTITIES.

page #98
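The quoted example can be sketched roughly in Java. All names here are assumptions for illustration, not code from the book: the Route is a value compared by its parts, yet each part is an entity compared by identity.

```java
// Entities: identity matters, attributes may change over time.
class City {
    final String id;     // entity identity
    String name;
    City(String id, String name) { this.id = id; this.name = name; }
}

class Highway {
    final String id;
    Highway(String id) { this.id = id; }
}

// Value object: no identity of its own, compared by its parts,
// yet it references three entities.
final class Route {
    final City from;
    final City to;
    final Highway via;
    Route(City from, City to, Highway via) {
        this.from = from; this.to = to; this.via = via;
    }
    @Override public boolean equals(Object o) {
        if (!(o instanceof Route)) return false;
        Route r = (Route) o;
        return from == r.from && to == r.to && via == r.via;
    }
    @Override public int hashCode() {
        return java.util.Objects.hash(from, to, via);
    }
}
```

Two Route instances linking the same cities via the same highway are the same value; renaming a City does not change which Route it is.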


Moving from a .NET / SQL developer role to more of an architect one. Can anyone recommend any good books on modern software architecture in the enterprise?

Appreciate your help.



Domain Driven Design by Eric Evans.

I am trying to get my hands dirty learning DDD (by developing a sample eCommerce site with entities like Order, OrderLines, Product, Categories, etc.). From what I could perceive about the Aggregate Root concept, I thought the Order class should be an aggregate root for OrderLine.

Things went fine so far; however, I am confused when it comes to defining the create-order flow from the UI. When I want to add an order line to my order object, how should I get/create an instance of an OrderLine object:

  1. Should I hardcode the new OrderLine() statement in my UI/Service class?
  2. Should I define a method with parameters like productID, quantity, etc. in the Order class?

Also, what if I want to remove the hardcoded instantiations from the UI or the Order class using a DI. What would be the best approach for this?

You could use an OrderLine factory to get instances of OrderLines. You would "new up" an OrderLine object in the factory with parameters passed into the factory method, and then return the new instance to your Order object. Always try to isolate instantiations, and don't do it in the UI. There is a question here that uses this technique.
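A sketch of such a factory in Java (all names here are assumptions, not from any framework):

```java
class Product {
    final String id;
    final java.math.BigDecimal unitPrice;
    Product(String id, java.math.BigDecimal unitPrice) {
        this.id = id; this.unitPrice = unitPrice;
    }
}

class OrderLine {
    final Product product;
    final int quantity;
    OrderLine(Product product, int quantity) {
        this.product = product; this.quantity = quantity;
    }
    java.math.BigDecimal total() {
        return product.unitPrice.multiply(java.math.BigDecimal.valueOf(quantity));
    }
}

// Assumed lookup abstraction, injected into the factory.
interface ProductCatalog {
    Product findById(String productId);
}

// The single place where OrderLines are "newed up"; the UI and the
// Order never call the constructor directly.
class OrderLineFactory {
    private final ProductCatalog catalog;
    OrderLineFactory(ProductCatalog catalog) { this.catalog = catalog; }
    OrderLine create(String productId, int quantity) {
        if (quantity <= 0) throw new IllegalArgumentException("quantity must be positive");
        return new OrderLine(catalog.findById(productId), quantity);
    }
}
```

The factory can be handed to the UI (or the Order) through DI, which also answers the question about removing the hardcoded instantiations.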

Here is a great book you will find useful on DDD.

In applications following DDD that I have worked on, we tend to have a Service Layer that contains the Services + Repositories + the interfaces for repositories and services; they all live in the same assembly, while the domain model lives in a different assembly. It feels like everything that doesn't fit the domain model is cluttered into this one big project.

In an application that follows DDD principles and patterns, how do you package the repositories and the interfaces they implement? What are the best practices for packaging different logical parts of DDD application (or packaging in general for that matter)? Should every logical partition live in its own assembly? Does it even matter?

You can find guidelines for designing your layers in the DDD book. You've basically got :

  • Domain
  • Infrastructure
  • Application
  • UI

Services come in 3 kinds : Application layer service, Infrastructure layer service and Domain layer service, depending on what the service does. As for the Repositories, their contracts (interfaces) often reside in the Domain while their concrete implementations are in the Infrastructure layer.
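As an illustration of that split, the repository contract can sit next to the model while the implementation lives in Infrastructure. A minimal Java sketch (names invented; an in-memory map stands in for a real ORM/JDBC implementation):

```java
// --- Domain layer: the contract lives with the model ---
class Customer {
    final String id;
    Customer(String id) { this.id = id; }
}

interface CustomerRepository {          // domain-owned contract
    Customer findById(String id);
    void save(Customer customer);
}

// --- Infrastructure layer: the concrete implementation ---
class InMemoryCustomerRepository implements CustomerRepository {
    private final java.util.Map<String, Customer> store = new java.util.HashMap<>();
    public Customer findById(String id) { return store.get(id); }
    public void save(Customer customer) { store.put(customer.id, customer); }
}
```

The Application layer depends only on CustomerRepository; the dependency injection module picks the implementation, which is what makes the one-line substitution mentioned below possible.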

Regarding assemblies, I'd recommend at least one per layer. Assemblies and DLLs are all about reusability, separation of concerns and decoupling - will I be able to pick up that DLL and drop it into another application to reuse it? Will I be able to do so without dragging along a bunch of unrelated features that will bring unnecessary complexity to that other application? Will I be able to substitute my DLL for another one by just changing one line of code in my dependency injection module? And so on.

We are using the builder pattern to generate test data. These domain objects have relations between them. Our functional tests require these objects to be persisted.

Think about this model:

domain model

If I want a plain instance of C I do aNew().c().build()

If I want it to be persisted I do aNew().c().saveIn(session)

If I want an instance of C with a known B I do aNew().c().with(b).build()

Well, you get the idea. My problem is: if I want to persist a C, should it persist its B? Or should B be persisted beforehand? What if I want a reasonable default B? What if I want to persist a D? Should it persist all of A, B, and C?

Of course, the real system is much more complex (sometimes with circular references). I am looking for a best practice for persisting complex test data.

Edit: It looks like I have bumped into the language barrier; my mother tongue is not English, so I am sorry for the obscurity. Here is more information:

  • It is not legacy code that I am trying to test
  • I am trying to write a coverage test, NOT a unit test (as a result I won't be mocking anything)
  • The piece of software I am trying to test works if the database is populated to some extent (it does not use all entities).

PS. Please don't hesitate to ask for more information, because I have been struggling to find the possible best practice. The closest thing I have come up with is:

  1. Keep track of what has been set explicitly while building an entity.
  2. Assume that explicitly set entities are already persisted, do not persist them.
  3. Persist everything else (with their own persister).

This will work, but my spider sense is tingling; I think I am doing something wrong because there will be logic involved in the test code, and it will be very complex to deal with without tests.

Edit 2: I will try to make myself more clear. When I am writing/running my unit and some integration tests I have no problem, because the test data are not persisted, it lives in memory.

But when I try to persist my test data, Hibernate will not let me save an entity without its relations.

How can I overcome this problem?

I separated your answers by topic.

My problem is, if I want to persist a C, should it persist its B? What about if I want to persist a D? Should it persist all A, B, C?

This is entirely dependent upon the domain constraints you choose to enforce. For example, is C an entity and B a value object? In other words, does C have a unique identity and life of its own? Is B mainly identified by its value and its life cycle tightly coupled to that of its parent C?

Asking these types of questions should help guide your decisions on what to persist, when, and by whom.

For example, if both C and B are entities sharing only a relationship, you might decide to persist them independently, since each could conceivably have a meaningful life and identity of its own. If B is a value object, you'd probably choose to have its parent entity C control its life, including the creation/retrieval/updating/deleting of the object. This might very well include C persisting B.

Or should it be persisted beforehand?

To answer this you would have to map out your object dependencies. These dependencies are frequently represented by foreign key constraints when an object graph is persisted to an RDBMS. If C could not function without a reference to B, then you would probably want to persist them both inside a transaction, with B being done first to comply with the database's foreign key constraints. Following the line of thought above, if B were a child entity or value object of C, you might even have C responsible for persisting B.

What about if I want a reasonable default B?

The creation of B instances could be delegated to the B-Factory. Whether you implement this factory logic as a class (not instance) method, constructor, or separate it out as its own unit doesn't matter. The point is you have one place where the creation and configuration of new Bs takes place. It is in this place that you would have a default configuration of the newly instantiated object take place.
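In test-data-builder terms, that "one place" for defaults is usually the builder itself. A Java sketch with invented names, mirroring the question's B and C:

```java
class B {
    final String name;
    B(String name) { this.name = name; }
}

class C {
    final B b;
    C(B b) { this.b = b; }
}

// Builder with a reasonable default B; tests override it only when they care.
class CBuilder {
    private B b = new B("default-b");   // the single place defaults are defined
    CBuilder with(B b) { this.b = b; return this; }
    C build() { return new C(b); }
}
```

`new CBuilder().build()` yields a C with the default B; `new CBuilder().with(knownB).build()` uses a known one, matching the aNew().c().with(b).build() style above.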

An excellent resource covering these types of questions is Domain-Driven Design by Eric Evans

What is the best way to solve this problem in code?

The problem is that I have 2 dollar amounts (each known as a pot) that need to be allocated to 3 people. Each person gets a specific amount that comes from both pots, and the rates must be approximately the same. I keep running into rounding issues where my allocations add up to either too much or too little.

Here is a specific example:

Pot #1 987,654.32
Pot #2 123,456.78

Person #1 gets Allocation Amount: 345,678.89
Person #2 gets Allocation Amount: 460,599.73
Person #3 gets Allocation Amount: 304,832.48

My logic is as follows (Code is in c#):

foreach (Person person in People)
{
    decimal percentage = person.AllocationAmount / totalOfAllPots;

    decimal personAmountRunningTotal = person.AllocationAmount;

    foreach (Pot pot in pots)
    {
        decimal potAllocationAmount = Math.Round(percentage * pot.Amount, 2);
        personAmountRunningTotal -= potAllocationAmount;

        PersonPotAssignment ppa = new PersonPotAssignment();
        ppa.Amount = potAllocationAmount;
        person.PendingPotAssignments.Add(ppa);
    }

    foreach (PersonPotAssignment ppa in person.PendingPotAssignments)
    {
        if (personAmountRunningTotal > 0) //Under Allocated
        {
            ppa.Amount += .01M;
            personAmountRunningTotal += .01M;
        }
        else if (personAmountRunningTotal < 0) //Over Allocated
        {
            ppa.Amount -= .01M;
            personAmountRunningTotal -= .01M;
        }
    }
}

The results I get are as follows:

Pot #1, Person #1 = 307,270.13
Pot #1, Person #2 = 409,421.99
Pot #1, Person #3 = 270,962.21
Pot #1 Total = 987,654.33 (1 penny off)

Pot #2, Person #1 = 38,408.76
Pot #2, Person #2 = 51,177.74
Pot #2, Person #3 = 33,870.27
Pot #2 Total = 123,456.77 (1 penny off)

The Pot Totals should match the original totals.

I think I may be missing something or there may be an extra step that I need to take. I think I am on the right track.

Any help would be greatly appreciated.

I think this is exactly the problem that Eric Evans addresses in his "Domain Driven Design" Chapter 8, pp. 198-203.
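A common technique for this class of problem (independent of the book, and with names of my own choosing) is the "largest remainder" method: round every share down, then hand the leftover pennies to the shares that lost the most in rounding, so each pot's allocations sum exactly to the pot. A Java sketch:

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

class Allocator {
    // Split potAmount according to weights, guaranteeing the parts sum to potAmount.
    static BigDecimal[] allocate(BigDecimal potAmount, BigDecimal[] weights) {
        BigDecimal totalWeight = BigDecimal.ZERO;
        for (BigDecimal w : weights) totalWeight = totalWeight.add(w);

        BigDecimal[] parts = new BigDecimal[weights.length];
        BigDecimal[] remainders = new BigDecimal[weights.length];
        BigDecimal allocated = BigDecimal.ZERO;
        for (int i = 0; i < weights.length; i++) {
            BigDecimal exact = potAmount.multiply(weights[i])
                    .divide(totalWeight, 10, RoundingMode.HALF_UP);
            parts[i] = exact.setScale(2, RoundingMode.DOWN);   // round down first
            remainders[i] = exact.subtract(parts[i]);
            allocated = allocated.add(parts[i]);
        }
        // Hand out the leftover pennies to the largest remainders.
        BigDecimal penny = new BigDecimal("0.01");
        BigDecimal leftover = potAmount.subtract(allocated);
        while (leftover.compareTo(BigDecimal.ZERO) > 0) {
            int best = 0;
            for (int i = 1; i < remainders.length; i++)
                if (remainders[i].compareTo(remainders[best]) > 0) best = i;
            parts[best] = parts[best].add(penny);
            remainders[best] = BigDecimal.ZERO;  // at most one extra penny per slot per pass
            leftover = leftover.subtract(penny);
        }
        return parts;
    }
}
```

Because the extra pennies are distributed until the leftover is zero, the per-pot totals match the original pot amounts by construction, which removes the "1 penny off" symptom above.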

I am reading the book right now, and I just need 2 quick examples so I understand what 'value objects' and 'services' are in the context of DDD.

  • Value Objects: An object that describes a characteristic of a thing. Value Objects have no conceptual identity. They are typically read-only objects and may be shared using the Flyweight design pattern.

  • Services: When an operation does not conceptually belong to any object. Following the natural contours of the problem, you can implement these operations in services. The Service concept is called "Pure Fabrication" in GRASP.

Value objects: can someone give me a simple example of this, please?

Services: so if it isn't an object/entity, and doesn't belong to repositories/factories, then it's a service? I don't understand this.

The archetypical example of a Value Object is Money. It's very conceivable that if you build an international e-commerce application, you will want to encapsulate the concept of 'money' into a class. This will allow you to perform operations on monetary values - not only basic addition, subtraction and so forth, but possibly also currency conversions between USD and, say, Euro.

Such a Money object has no inherent identity - it contains the values you put into it, and when you dispose of it, it's gone. Additionally, two Money objects containing 10 USD are considered identical even if they are separate object instances.

Other examples of Value Objects are measurements such as length, which might contain a value and a unit, such as 9.87 km or 3 feet. Again, besides simply containing the data, such a type will likely offer conversion methods to other measurements and so forth.
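A stripped-down Money value object in Java might look like this. It is a sketch, not a production implementation (a real one would also handle scale and rounding policy):

```java
import java.math.BigDecimal;
import java.util.Objects;

final class Money {                        // immutable: final fields, no setters
    private final BigDecimal amount;
    private final String currency;

    Money(BigDecimal amount, String currency) {
        this.amount = amount;
        this.currency = currency;
    }

    Money add(Money other) {
        if (!currency.equals(other.currency))
            throw new IllegalArgumentException("currency mismatch");
        return new Money(amount.add(other.amount), currency);  // returns a new value
    }

    // Two Money objects holding 10 USD are the same value,
    // even if they are separate instances.
    @Override public boolean equals(Object o) {
        if (!(o instanceof Money)) return false;
        Money m = (Money) o;
        return amount.compareTo(m.amount) == 0 && currency.equals(m.currency);
    }
    @Override public int hashCode() {
        return Objects.hash(amount.stripTrailingZeros(), currency);
    }
}
```

Note the two value-object hallmarks: operations return new instances instead of mutating state, and equality is defined entirely by the contained values.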

Services, on the other hand, are types that perform an important Domain operation but don't really fit well into the other, more 'noun'-based concepts of the Domain. You should strive to have as few Services as possible, but sometimes a Service is the best way of encapsulating an important Domain Concept.

You can read more about Value Objects, Services and much more in the excellent book Domain-Driven Design, which I can only recommend.

I'm reading about the idea of Bounded Contexts in DDD, and I'm starting to realize that I don't have a clear understanding of exactly what a Model looks like in practice. (I might not even know exactly what a Domain means, either.)

Let's look at the popular e-commerce example: A customer browses products, adds to their cart, places an order. The order fulfillment people ship out the orders.

Is there one big e-commerce Domain with multiple Bounded Contexts (Product Catalog Context, Shopping Cart Context, Order Context, Fulfillment Context)? Does each Bounded Context contain a bunch of Models (So that the Product Catalog Context contains models for the products, the product images, the product reviews)?

How far off am I?

You are at least on the right track. The classic mistake is to see patterns only.

Domain means the problems you are dealing with (support for e-commerce, healthcare, accounting, etc.). A domain model is a solution to those problems, represented in code that follows our mental model as closely as possible.

Let's look at the popular e-commerce example: A customer browses products, adds to their cart, places an order. The order fulfillment people ship out the orders.

In Your example, I would start with something like this:

class Product { }

class Customer
    Cart Cart;
    void PlaceAnOrder()
        order = new Order(Cart.Products);
        Cart.Empty(); //if needed
    Orders Orders;
    Orders UnfulfilledOrders()
        Orders.Where(order => !order.IsFilled);

class Cart
    void AddProduct(product)
    void Empty()

class Order
    bool IsFilled;
    void Order(products)
        Products = products;
        IsFilled = false;
    void Fill()
        IsFilled = true;
        //TODO: obviously - more stuff needed here
    Money TotalPrice()
        return Products.Sum(x => x.Price);

class System
    void Main()
    void SimulateCustomerPlacingAnOrder()
        customer = new Customer();
    void SimulateFulfillmentPeople()
        foreach (var customer in allCustomers)
            foreach (var order in customer.UnfulfilledOrders())

At first, this seems like huge overkill. With procedural code, the same could be achieved with a few collections and a few for loops. But the idea of domain-driven design is to solve really complex problems.

Object-oriented programming fits nicely: with it you can abstract away things that don't matter as you advance. It's also important to name things accordingly, so that you (and your domain experts, the people who understand the problem) will be able to understand the code even after years, and not only understand the code but talk in one ubiquitous language too.

Note that I don't know the e-commerce domain or what kind of problems you might be trying to solve; therefore, it's highly likely that I just wrote complete nonsense according to your mental model. That is one reason why teaching domain modeling is so confusing and hard. It also demands great skill in abstract thinking, which, as far as I understand, isn't a main requirement for getting a CS degree.

You are kind of right about bounded contexts. But you should remember that they create a need for translation between them. They add complexity, and as usual, a complex solution is appropriate only for complex problems (this is true for DDD itself, too). So you should avoid multiplying them as long as the meanings of your domain entities don't overlap. A second (less "natural") reason would be a strong need for decomposition.

P.s. Read Evans' book. Twice... Until it makes sense... :)

I'm stuck on finding the proper way to refer to entities located inside an aggregate root when we only have their identities coming from URL parameters. I asked a previous question which ended up focused on value objects, so I'm starting with another example here.

Let's say we want to modify an OrderLine inside an Order:

  • The user goes to a page where he can see the Order summary along with all its Order Lines.
  • The user clicks on the edit button next to an Order Line.
  • He gets directed to edit-order-line?orderId=x&orderLineId=y

Now if I need to update the quantity in the OrderLine, I can do:

Order order = orderRepository.find(orderId);
order.updateQuantity(orderLineId, 2);

However, I don't feel very comfortable with the idea of leaving it to the Order to retrieve parts of itself by id. My view on the subject is that within the domain we should talk only with objects, never with ids. Ids are not part of the ubiquitous language, and I believe they should live outside of the domain, for example in the Controller.

I would feel more confident with something like:

Order order = orderRepository.find(orderId);
OrderLine orderLine = em.find(OrderLine.class, orderLineId);
order.updateQuantity(orderLine, 2);

Though I don't like the idea of interacting directly with an Entity Manager, either. I feel like I'm bypassing the Repository and Aggregate Root responsibilities (because I could, potentially, interact with the OrderLine directly).

How do you work around that?

In my opinion there is nothing wrong with this approach:

Order order = orderRepository.find(orderId);
order.updateQuantity(orderLineId, 2);

orderLineId is a 'local identity'. It is specific to the aggregate root and does not make sense outside of it. You don't have to call it an 'id'; it can be an 'order line number'. From Eric Evans' book:

ENTITIES inside the boundary have local identity, unique only within the AGGREGATE.

...only AGGREGATE roots can be obtained directly with database queries. All other objects must be found by traversal of associations.
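In code, that traversal can be as simple as the root resolving the local id itself. A Java sketch (names are assumptions): callers never query OrderLines from the database; they hand the local line number to the Order.

```java
import java.util.ArrayList;
import java.util.List;

class OrderLine {
    final int lineNumber;   // local identity, unique only within its Order
    int quantity;
    OrderLine(int lineNumber, int quantity) {
        this.lineNumber = lineNumber; this.quantity = quantity;
    }
}

class Order {
    private final List<OrderLine> lines = new ArrayList<>();

    void addLine(int lineNumber, int quantity) {
        lines.add(new OrderLine(lineNumber, quantity));
    }

    // The aggregate root traverses its own lines to find the target.
    void updateQuantity(int lineNumber, int quantity) {
        for (OrderLine line : lines) {
            if (line.lineNumber == lineNumber) {
                line.quantity = quantity;
                return;
            }
        }
        throw new IllegalArgumentException("no such line: " + lineNumber);
    }

    int quantityOf(int lineNumber) {
        for (OrderLine line : lines)
            if (line.lineNumber == lineNumber) return line.quantity;
        throw new IllegalArgumentException("no such line: " + lineNumber);
    }
}
```

This keeps the entity manager out of the controller entirely: the repository loads the Order, and the Order does the rest.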

After reading this post (business logic in the database or the application layer) I still don't have sufficient reasons to fight the "business logic in the database" position.

In my current job there is a lot of logic in database transactions, and all that crappy code is hard to maintain; there is a lot of duplication in the stored procedures, so if you want to change a value in a table even slightly, you will need to find all of those procedures and change them. The same happens if you need to change the table design a little bit.

All the current developers know SQL very well, but they are still not experts in any database engine (8 devs).

Currently we are planning to migrate the entire core to a new version (including the database design), and I need some examples of:

  • Why is business logic in the database sometimes EVIL?
  • How much, and when, is business logic in the database good practice?
  • Why is business logic in the application layer better for an enterprise application?

App Language: Java
Database: Oracle11g
The application will have Services, served as HTTP pages and as WebServices.

For one thing, if you want to be able to migrate to a different database brand, you should not have your business logic in stored procedures.

Also, for complex domains, modeling the domain is much more natural on the Java side with an OO model, than on the DB side. OO lends itself well to expressing abstractions and relationships between them.

The canonical book on the subject is Domain Driven Design.

A reason to stay on the DB side may be performance. If you have huge amounts of business data, it may not be efficient enough to retrieve and manipulate it in the application. This is especially true for batch processing.

As I understand it, a UnitOfWork class is meant to represent the concept of a business transaction in the domain. It's not directly supposed to represent a database transaction, which is a detail of only one possible implementation.

Q: So why does so much documentation about the Unit of Work pattern refer to "Commit" and "Rollback" methods?

These concepts mean nothing to the domain, or to domain experts. A business transaction can be "completed", and therefore the UnitOfWork should provide a "Complete" method. Likewise, instead of a "Rollback" method, shouldn't it be modeled as "Clear"?


Answer: Both answers below are correct. There are two variants of UoW: object registration and caller registration. In object registration, Rollback serves to undo changes to all in-memory objects. In caller registration, Rollback serves to clear all recorded changes so that a subsequent call to Commit will do nothing.

The Unit of Work design pattern, at least as defined by Fowler in Patterns of Enterprise Application Architecture, is an implementation detail concerning object-relational persistence mapping. It is not an entity defined in Evans' Domain-Driven Design.

As such, it should neither be part of the business discussion nor an entity that's directly exposed in a domain model - perhaps excepting the commit() method. Instead, its intent is tracking "clean" and "dirty" business entities - the objects from a domain model exposed to clients. The purpose is allowing multiple interactions (in a web context, requests) with a domain model without the need to read from and write to persistence (usually a database) each time.

Business entities register with it when their methods are called: when their state is altered, they register themselves as dirty with the Unit of Work. Then the Unit of Work's commit() handles the entire persistence transaction in terms of writing out the object graph, and rollback() means restoring the state of the entities to what it was. So it's very much the implementation leaking through to the "abstraction", but its intent is very clear.

On the other hand, "Undo" and "Complete" don't necessarily map one-to-one with this definition. An "Undo" or "Clear" may roll back an object graph only partially, for instance, depending on the business context, while "Complete" may well be altering state on some entity as well as committing the graph. As such, I would put these methods, with business meaning, on a Service Layer or Aggregate Root object.
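To make the caller-registration variant concrete, here is a deliberately toy Java sketch (entirely illustrative; a real implementation tracks entity snapshots and writes through an ORM). The point is the Commit/Rollback semantics: rollback discards what was recorded, so a later commit writes nothing.

```java
import java.util.ArrayList;
import java.util.List;

class UnitOfWork {
    private final List<Runnable> pendingWrites = new ArrayList<>();
    private final List<String> committed = new ArrayList<>();

    // Caller registration: the caller records each change explicitly.
    void registerChange(String description) {
        pendingWrites.add(() -> committed.add(description));
    }

    // Write out everything recorded, in one go.
    void commit() {
        for (Runnable write : pendingWrites) write.run();
        pendingWrites.clear();
    }

    // "Rollback" here just discards the recorded changes,
    // so a subsequent commit() does nothing.
    void rollback() {
        pendingWrites.clear();
    }

    List<String> committedChanges() { return committed; }
}
```

Nothing in this object speaks the business language, which is exactly why "Complete" belongs on a service or aggregate root that delegates to commit(), rather than on the Unit of Work itself.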

I am reading Eric Evans' book about DDD and I have a question about the following quote. How do you write your equals() method when you should not use the attributes? I am using JPA and I have an id attribute which is unique, but it is not set until you actually persist the entity. So what do you do? I have implemented the equals method based on the attributes, and I understand why you shouldn't: it failed in my project.

Section about entities:

When an object is distinguished by its identity, rather than its attributes, make this primary to its definition in the model. Keep the class definition simple and focused on life cycle continuity and identity. Define a means of distinguishing each object regardless of its form or history. Be alert to requirements that call for matching objects by attributes. Define an operation that is guaranteed to produce a unique result for each object, possibly by attaching a symbol that is guaranteed unique. This means of identification may come from the outside, or it may be an arbitrary identifier created by and for the system, but it must correspond to the identity distinctions in the model. The model must define what it means to be the same thing.

A couple of approaches are possible:

  • Use a business key. This is the most 'DDD compliant' approach. Look closely at the domain and business requirements. How does your business identify Customers, for example? Do they use a Social Security Number or a phone number? How would your business solve this problem if it were paper-based (no computers)? If there is no natural business key, create a surrogate. Choose a business key that is final and use it in equals(). There is a section in the DDD book dedicated to this specific problem.

  • For the cases when there is no natural business key you can generate UUID. This would also have an advantage in distributed system in which case you don't need to rely on centralized (and potentially unavailable) resource like database to generate a new id.

  • There is also the option to just rely on the default equals() for entity classes. It compares two memory locations, and that is enough in most cases because the Unit of Work (Hibernate Session) holds on to all the entities (this ORM pattern is called Identity Map). This is not reliable, though, because it will break if you use entities that are not limited to the scope of one Hibernate Session (think threads, detached entities, etc.).

Interestingly enough, the 'official' DDD sample uses a very lightweight framework where every entity class derives from an Entity interface with one method:

boolean sameIdentityAs(T other) 
// Entities compare by identity, not by attributes.
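Combining the UUID approach with that sameIdentityAs idea, a Java sketch might look like the following (illustrative only): the id is assigned at creation rather than at persist time, so equals() works even before JPA sets the database id.

```java
import java.util.UUID;

class Customer {
    private final UUID id = UUID.randomUUID();   // identity exists before persistence
    private String name;                         // attributes may change freely

    Customer(String name) { this.name = name; }

    void rename(String name) { this.name = name; }

    boolean sameIdentityAs(Customer other) {
        return other != null && id.equals(other.id);
    }

    // Equality by identity, never by attributes.
    @Override public boolean equals(Object o) {
        return o instanceof Customer && sameIdentityAs((Customer) o);
    }
    @Override public int hashCode() { return id.hashCode(); }
}
```

Two customers with identical attributes are still different entities, and renaming one does not change which entity it is, which is exactly the continuity the quoted passage asks for.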

I apologize for so many questions, but I felt that they make the most sense only when treated as a unit.

Note - all quotes are from DDD: Tackling Complexity in the Heart of Software ( pages 250 and 251 )


Operations can be broadly divided into two categories, commands and queries.


Operations that return results without producing side effects are called functions. A function can be called multiple times and return the same value each time.


Obviously, you can't avoid commands in most software systems, but the problem can be mitigated in two ways. First, you can keep the commands and queries strictly segregated in different operations. Ensure that the methods that cause changes do not return domain data and are kept as simple as possible. Perform all queries and calculations in methods that cause no observable side effects

a) The author implies that a query is a function since it doesn't produce side effects. He also notes that a function will always return the same value, by which I assume he means that for the same input we will always get the same output?

b) Assume we have a method QandC(int entityId) which queries for a specific domain entity, extracts certain values from it, uses those values to initialize a new Value Object, and then returns this VO to the caller. Isn't QandC, according to the above quote, a function, since it doesn't change any state?

c) But the author also argues that for the same input a function will always produce the same output, which isn't the case with QandC: if we place several calls to QandC, it will produce different results, assuming that in the time between the two calls the entity was modified or even deleted. As such, how can we claim QandC is a function?


Ensure that the methods that cause changes do not return domain data ...

The reason being that the state of a returned non-VO may be changed by some future operations, and as such the side effects of such methods are unpredictable?


Ensure that the methods that cause changes do not return domain data ...

Is a query method that returns an entity still considered a function, even if it doesn't change any state?


VALUE OBJECTS are immutable, which implies that, apart from initializers called only during creation, all their operations are functions.


An operation that mixes logic or calculations with state change should be refactored into two separate operations. But by definition, this segregation of side effects into simple command methods only applies to ENTITIES. After completing the refactoring to separate modification from querying, consider a second refactoring to move the responsibility for the complex calculations into a VALUE OBJECT. The side effect often can be completely eliminated by deriving a VALUE OBJECT instead of changing existing state, or by moving the entire responsibility into a VALUE OBJECT.


VALUE OBJECTS are immutable, which implies that, apart from initializers called only during creation, all their operations are functions ... But by definition, this segregation of side effects into simple command methods only applies to ENTITIES.

I think the author is saying all methods defined on VOs are functions, which doesn't make sense, since even though a method defined on a VO can't change the VO's own state, it can still change the state of other, non-VO objects?!

b) Assuming a method defined on an entity doesn't change any state, do we consider such a method a function, even though it is defined on an entity?


... consider a second refactoring to move the responsibility for the complex calculations into a VALUE OBJECT.

Why is the author suggesting we should refactor out of entities only those functions that perform complex calculations? Why shouldn't we also refactor simpler functions?


... consider a second refactoring to move the responsibility for the complex calculations into a VALUE OBJECT.

In any case, why is the author suggesting we should refactor functions out of entities and place them inside VOs? Just because it makes it more apparent to the client that this operation MAY be a function?


The side effect often can be completely eliminated by deriving a VALUE OBJECT instead of changing existing state, or by moving the entire responsibility into a VALUE OBJECT.

This doesn't make sense, since it appears the author is arguing that if we move a command (i.e., an operation which changes state) into a VO, then we will in essence eliminate any side effects, even though the command is changing state. So, any ideas what the author was actually trying to say?



It depends on the perspective. A database query does not change state and thus has no side effects; however, it isn't deterministic by nature since, as you point out, the data can change. In the book, the author is referring to functions associated with value objects and entities, which don't themselves make external calls. Therefore, the rules don't apply to QandC.

So the author was describing only functions that don't make external calls, and as such QandC isn't the type of function the author was describing?


QandC does not itself change state - there are no side effects. The underlying state may be changed out of band however. Due to this, it is not a pure function.

But it also isn't a side-effect-free function in the sense the author defined them?


Again, this is based on CQS.

I know I'm repeating myself, but I assume the discussion in the book is based on CQS, and CQS doesn't consider QandC a side-effect-free function because there is a chance the entity returned by QandC will have its state modified (by some other operation) sometime in the future?


It is considered a query from the CQRS perspective, but it cannot be called a function in the sense that a pure function on a VO is a function due to lack of determinism.

  • I don't quite understand what you were trying to say (the confusing part is in bold). Perhaps that while QandC is considered a query, it is not considered a function because it returns an entity whose side effects are unpredictable, which makes QandC non-deterministic by nature?

  • So the author is only making those statements (see quote in 1e) under the implicit assumption that no operation defined on a VO will ever try to change the state of non-VO objects?


Given that VOs are immutable, they are a fitting place to house pure functions. This is another step towards freeing domain knowledge from technical constraints.

  • I don't understand why moving a function from an entity to a VO would help free domain knowledge from technical constraints (I'm also not really sure what you mean by technical; technical as in technology-related, or...)?

  • I assume another reason for putting a function in a VO is that it is that much more obvious (to the client) that this is a function?


I view this as a hint towards event-sourcing. Instead of changing existing state, you add a new event which represents the change. There is still a net side effect, however existing state remains stable.

I must confess I know nothing about event sourcing, since I'd like to first wrap my head around DDD. Anyway, so the author didn't imply that just moving a command to a VO would automatically eliminate side effects, but instead some additional action would have to be taken (such as implementing event sourcing), only he "forgot" to mention that part?



One of the defining characteristics of an entity is its identity .... By placing business logic into VOs you can consider it outside of the context of an entity's identity. This makes it easier to test this logic, among other things.

I somewhat understand the point you're making (when thinking about the concept from a distance), but on the other hand I really don't. Why would a function within an entity be influenced by the identity of that entity (assuming this function is a pure function; in other words, it doesn't change state and is deterministic)?


Yes that is my understanding of it - there is still a net "side effect". However, there are different ways to attain a side effect. One way is to mutate existing state. Another way is to make the state change explicit with an object representing that change.

I - Just to be sure... From your answer I gather that the author didn't imply that side effects would be eliminated simply by moving a command into a VO?

II - OK, if I understand you correctly, we can move a command into VOs (even though VOs shouldn't change the state of anything and as such shouldn't cause any side effects), and this command inside the VO is still allowed to produce some sort of side effect, but this side effect is somehow more acceptable (OR MORE CONTROLLABLE) because the state change is made explicit (which I interpret as: the thing that changed is returned to the caller as a VO)?

3) I must say that I still don't quite understand why a state-changing method SC shouldn't return domain objects. Perhaps because a non-VO may be changed by some future operation, and as such the side effects of SC are very unpredictable?


Delegating the management of state to the entity and the implementation of behavior to VOs creates certain advantages. One is basic partitioning of responsibilities.

a) You're saying that even though a method describes the behavior of an entity (and thus the entity containing this method adheres to the SRP) and as such belongs in the entity, it may still be a good idea to move it into a VO? Thus, in essence, we would partition a responsibility of the entity into two even smaller responsibilities?

b) But won't moving behavior into VOs basically turn the entity into a mere data container (I understand that the entity will still manage its state, but still...)?

thank you

1a) Yes. The discourse on separating queries from commands is based on the Command-query separation principle.

1b) It depends on the perspective. A database query does not change state and thus has no side effects; however, it isn't deterministic by nature since, as you point out, the data can change. In the book, the author is referring to functions associated with value objects and entities, which don't themselves make external calls. Therefore, the rules don't apply to QandC. Determinism could be fabricated, however, offering degrees of "pureness". For instance, a serializable transaction could be created which ensures that data doesn't change for its duration.

1c) QandC does not itself change state - there are no side effects. The underlying state may be changed out of band however. Due to this, it is not a pure function. However, the restriction that QandC doesn't change state is still valuable. The value is fittingly demonstrated by CQRS which is the application of CQS in distributed scenarios.

1d) Again, this is based on CQS. Another take on this is the Tell-Don't-Ask principle. Given an understanding of these principles however, the rule can be bent IMO. A side-effecting method could return a VO representing the result for instance. However, in certain scenarios such as CQRS + Event Sourcing it could be desirable for commands to return void.

1e) It is considered a query from the CQRS perspective, but it cannot be called a function in the sense that a pure function on a VO is a function due to lack of determinism.

2a) No, a VO function shouldn't change state of anything, it should instead return a new object.

2b) Yes.

2c) Because functional purity tends to become more important in more complex scenarios. However, as you point out, it isn't a clear and definitive rule. It shouldn't be based on complexity as much as on the domain at hand.

2d) Given that VOs are immutable, they are a fitting place to house pure functions. This is another step towards freeing domain knowledge from technical constraints.

2e) I view this as a hint towards event-sourcing. Instead of changing existing state, you add a new event which represents the change. There is still a net side effect, however existing state remains stable.
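The event-sourcing idea in 2e can be sketched as follows (a hypothetical Java illustration, not from the book): instead of mutating existing state, each change is appended as an immutable event, and current state is derived by folding over the history.

```java
import java.util.ArrayList;
import java.util.List;

// Immutable event: represents a change without mutating anything.
final class FundsDeposited {
    final long amountCents;
    FundsDeposited(long amountCents) { this.amountCents = amountCents; }
}

class BankAccount {
    private final List<FundsDeposited> events = new ArrayList<>();

    // Command: the "change" is a NEW event, not a mutation of old state.
    void deposit(long amountCents) {
        events.add(new FundsDeposited(amountCents));
    }

    // Current state is derived by replaying the event history.
    long balanceCents() {
        long total = 0;
        for (FundsDeposited e : events) total += e.amountCents;
        return total;
    }
}
```

There is still a net side effect (the appended event), but everything already written stays stable, which matches "existing state remains stable" above.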


1b) Yes.

1c) It is a side-effect free function, however it is not a deterministic function because it cannot be thought to always return the same value given the same input. For example, the function that returns the current time is a side-effect free function, but it certainly does not return the same value in subsequent calls.

1d) QandC can be thought of as side-effect free, but not pure. Another way to look at functional purity is as referential transparency - the ability to replace a function call by its value without changing program behavior. In other words, asking the question does not change the answer. QandC can guarantee that, but only within a context such as a transaction. So QandC can be thought of as a function, but only in a specific context.

1e) I think the confusing part is that the author is talking specifically about functions on VOs and entities - not database queries - whereas we are talking about both. My statement extends the discussion to database queries and CQRS given certain restrictions, i.e. an ambient transaction.

2d) I can see how what I said was a bit vague, I was getting lazy. One of the defining characteristics of an entity is its identity. It maintains its identity throughout its life-cycle while its state may change. By placing business logic into VOs you can consider it outside of the context of an entity's identity. This makes it easier to test this logic, among other things.

2e) Yes that is my understanding of it - there is still a net "side effect". However, there are different ways to attain a side effect. One way is to mutate existing state. Another way is to make the state change explicit with an object representing that change.


2d) This particular point can be argued, or can be a matter of preference. One perspective is that the idea is based on the single-responsibility principle (SRP). The responsibility of an entity is the association of an identity with behavior and state. Behavior combines input with existing state to produce state transitions. Delegating the management of state to the entity and the implementation of behavior to VOs creates certain advantages. One is basic partitioning of responsibilities. Another is more subtle and perhaps more arguable. It is the idea that logic can be considered in a stateless manner. This makes thinking about such logic easier - more like thinking about a mathematical equation, where all changes are explicit and there is no hidden state.

2e.1) Yes, eliminating a net side effect would alter behavior, which is not the goal.

2e.2) Yes.

3) Commands returning void have several advantages. One is that they become naturally more adept in async scenarios - no need to wait for a result. Another is that it allows you to represent the operation as a single command object - again, because there is no return value. This applies in CQRS and also event sourcing. In these cases, any command output is dispatched as an event instead of a result. But again, if these requirements don't apply returning a result object can be appropriate.


a) Yes, and this is a specific type of partitioning.

b) The responsibility of the entity is to coordinate behavior by delegating to VOs and applying the resulting state changes.
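As a small illustration of a) and b) (names invented for this sketch): the entity keeps its identity and applies the state changes, while the behavior lives in a VALUE OBJECT whose logic can be tested with no identity in sight.

```java
// VALUE OBJECT: stateless, identity-free behavior - like an equation.
final class Discount {
    final int percent;
    Discount(int percent) { this.percent = percent; }
    // Pure function: same input, same output, no hidden state.
    long applyTo(long priceCents) {
        return priceCents - (priceCents * percent) / 100;
    }
}

// ENTITY: owns identity and state, coordinates by delegating to the VO
// and applying the resulting state change.
class OrderLine {
    private final String id;       // identity stays with the entity
    private long priceCents;
    OrderLine(String id, long priceCents) {
        this.id = id; this.priceCents = priceCents;
    }
    long priceCents() { return priceCents; }
    void applyDiscount(Discount d) { priceCents = d.applyTo(priceCents); }
}
```

Note that Discount can be unit-tested on its own, without constructing any entity, which is the testability advantage mentioned in 2d.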

I have a HashMap whose data I would like to display in a JTable; however, I am having trouble getting the HashMap's number of columns and rows and the data to be displayed. The HashMap takes an account ID as the key and a Student object as the value, and each student has data like name, id, age, etc. However, referring to the JTable docs, it says I need ints for the row and column counts and a multidimensional array of type Object. How can I do this? Can I change my HashMap into a multidimensional array?

--Edit: I have edited my question so it can be clearer. I am fairly new to Java and do not really understand some of what has been posted, especially since the work I am doing is quite related to OO, and grasping OO concepts is my biggest challenge.

I have a DataStorage class; the registered user is added to the HashMap with a key input of his username, which is getUser.

import java.util.*;

public class DataStorage {

    HashMap<String, Student> students = new HashMap<String, Student>();
    HashMap<String, Staff> staffMembers = new HashMap<String, Staff>();

    // Default constructor
    public DataStorage() {
    }

    public void addStaffMember(Staff aAcc) {
        staffMembers.put(aAcc.getUser(), aAcc);
    }

    public void addStudentMember(Student aAcc) {
        students.put(aAcc.getUser(), aAcc);
    }

    public Staff getStaffMember(String user) {
        return staffMembers.get(user);
    }

    public Student getStudent(String user) {
        return students.get(user);
    }

    public int getStudentRows() {
        return students.size();
    }
}


/* This is the Student class, which extends Account */

public class Student extends Account {

    private String studentNRIC;
    private String diploma;
    private String gender;
    private double level;
    private int credits;
    private int age;
    private boolean partTime;
    private boolean havePc;
    private boolean haveChild;

    public Student(String n, String nr, String id, String dep, String user, String pass) {
        super(n, dep, user, pass, id);
        studentNRIC = nr;
    }

    public void setPartTime(boolean state) {
        partTime = state;
    }

    public boolean getPartTime() {
        return partTime;
    }

    public void setHavePc(boolean state) {
        havePc = state;
    }

    public boolean getHavePc() {
        return havePc;
    }

    public void setHaveChild(boolean state) {
        haveChild = state;
    }

    public boolean getHaveChild() {
        return haveChild;
    }

    public void setDiploma(String dip) {
        diploma = dip;
    }

    public String getDiploma() {
        return diploma;
    }

    public void setCredits(String cre) {
        credits = Integer.parseInt(cre);
    }

    public int getCredits() {
        return credits;
    }

    public void setGender(String g) {
        gender = g;
    }

    public String getGender() {
        return gender;
    }

    public void setAge(String a) {
        age = Integer.parseInt(a);
    }

    public int getAge() {
        return age;
    }

    public void setLevel(String lvl) {
        level = Double.parseDouble(lvl);
    }

    public double getLevel() {
        return level;
    }

    public void setStudentNRIC(String nr) {
        studentNRIC = nr;
    }

    public String getStudentNRIC() {
        return studentNRIC;
    }
}


/* This is the Account superclass */

public class Account {

    private String name;
    private String department;
    private String username;
    private String password;
    private String accountID;

    public Account() {
    }

    public Account(String nm, String dep, String user, String pass, String accID) {
        name = nm;
        department = dep;
        username = user;
        password = pass;
        accountID = accID;
    }

    public void setName(String nm) {
        name = nm;
    }

    public String getName() {
        return name;
    }

    public void setDep(String d) {
        department = d;
    }

    public String getDep() {
        return department;
    }

    public void setUser(String u) {
        username = u;
    }

    public String getUser() {
        return username;
    }

    public void setPass(String p) {
        password = p;
    }

    public String getPass() {
        return password;
    }

    public void setAccID(String a) {
        accountID = a;
    }

    public String getAccID() {
        return accountID;
    }
}

Your DataStorage is like the StudentRegistration used in the sample code.

// TIP: It can be handy to keep the students in some order in the Map
//      (therefore use a sorted map).
private SortedMap<String, Student> students = new TreeMap<String, Student>();

// QUESTION: Why not use the argument name 'student'?
public void addStudentMember(Student aAcc)

// Updated implementation
public void addStudent(Student student) {
    students.put(student.getAccID(), student);
}

// QUESTION: Would the method name 'getNumberOfStudents' not be better?
public int getStudentRows()

For me it is a little unclear why Student extends Account. Is the account identification a unique ID throughout the whole system? Do staff (users) and students (users) all have it as a unique identification? Where, and by whom, are they created? If not by the system itself, it can never be guaranteed that they are entered correctly into your system, although checking for uniqueness within your system helps. But who says someone won't (by accident) use someone else's unique ID? (How are the student and staff accounts created?) If these IDs are indeed unique, why not use them as the keys for placing the students into a SortedMap? And if the sorting is not important, why not just use a List of students?

Is the name parameter unique (by which you place the student in the Map)?

Programming is a little more than learning a programming language. Once you understand the OO language Java, it is good to read some more general programming books. In your specific case I would say start with Domain-Driven Design, and then continue with books like Test-Driven Development, Refactoring to Patterns and Design Patterns.
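Coming back to the concrete JTable question: JTable's constructor accepts an Object[][] of row data plus an array of column names, so the Map can be flattened into those two structures. A minimal sketch, assuming a simplified Student with just a name and age (your real class has more fields; add one column per getter you want to show):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Simplified stand-in for the real Student class, for illustration only.
class Student {
    private final String name;
    private final int age;
    Student(String name, int age) { this.name = name; this.age = age; }
    String getName() { return name; }
    int getAge() { return age; }
}

class TableBuilder {
    // Flatten the map into the Object[][] that JTable expects:
    // one row per map entry, one column per field you want to show.
    static Object[][] toRows(Map<String, Student> students) {
        Object[][] rows = new Object[students.size()][];
        int i = 0;
        for (Map.Entry<String, Student> e : students.entrySet()) {
            Student s = e.getValue();
            rows[i++] = new Object[] { e.getKey(), s.getName(), s.getAge() };
        }
        return rows;
    }
    // Usage with Swing:
    // new JTable(TableBuilder.toRows(students),
    //            new Object[] { "Account ID", "Name", "Age" });
}
```

Using a LinkedHashMap (or the SortedMap suggested above) keeps the row order predictable.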

I have two years' worth of PHP experience; however, I have never worked with design patterns. I have recently been given a task at work that has made me question the wisdom of that decision.

I have been asked to create a system that:

  1. Insert orders into a database. These orders can have a multitude of attributes and control logic that needs to be applied. The system will not be directly responsible for a front end, it will receive commands via an API.

  2. Write the scripts that extract orders from the database, decide which supplier to send each order with, and create the associated files for uploading to a provider. File formats vary (XML, CSV).

  3. Process incoming files from suppliers in the various formats (XML, CSV) and update the databases. FTP will also have to be utilised to gather the files from the remote host.

I would like a central control class that can service each of the required functions, one point of access so to speak. This would then extend into the various specific functionality as required.

I am wondering if it is worth learning a design pattern to facilitate this system; if so, could someone point me in a direction, as there are many? Any advice on my issue would be very helpful.

Thanks in advance

Yes, it's a worthwhile effort, although it will not solve your problem by itself. If you have been coding without patterns, you often end up using them anyway in an ad-hoc mixture, and things are not always clear. Having names and templates may make your code clearer and easier to maintain.

In the last two years I have been brushing up on design patterns and how to better model solutions, and I believe it has paid off.

I personally used these books for my studies:

I do think studying design patterns can help you improve your design. Having clear names will aid people who come on board later.

I've been reading up on DDD a little bit, and I am confused how this would fit in when using an ORM like NHibernate.

Right now I have a .NET MVC application with fairly "fat" controllers, and I'm trying to figure out how best to fix that. Moving this business logic into the model layer would be the best way to do this, but I am unsure how one would do that.

My application is set up so that NHibernate's session is managed by an HttpModule (gets session/transaction out of my way), which is used by repositories that return the entity objects (think S#arp Arch... it turns out I really duplicated a lot of their functionality in this). These repositories are used by DataServices, which right now are just wrappers around the repositories (one-to-one mapping between them, e.g. UserDataService takes a UserRepository, or actually a Repository). These DataServices right now only ensure that the data annotations decorating the entity classes are checked when saving/updating.

In this way, my entities are really just data objects, but do not contain any real logic. While I could put some things in the entity classes (e.g. an "Approve" method), when that action needs to do something like sending an e-mail, or touching other non-related objects, or, for instance, checking to see if there are any users that have the same e-mail before approving, etc., then the entity would need access to other repositories, etc. Injecting these with an IoC wouldn't work with NHibernate, so you'd have to use a factory pattern I'm assuming to get these. I don't see how you would mock those in tests though.

So the next most logical way to do it, I would think, would be to essentially have a service per controller, and extract all of the work being done in the controller currently into methods in each service. I would think that this is breaking with the DDD idea though, as the logic is now no longer contained in the actual model objects.

The other way of looking at that I guess is that each of those services forms a single model with the data object that it works against (Separation of data storage fields and the logic that operates on it), but I just wanted to see what others are doing to solve the "fat controller" issue with DDD while using an ORM like NHibernate that works by returning populated data objects, and the repository model.

Update: I guess my problem is how I'm looking at this. NHibernate seems to put business objects (entities) at the bottom of the stack, which repositories then act on. The repositories are used by services, which may use multiple repositories and other services (email, file access) to do things. I.e.: App > Services > Repositories > Business Objects

The pure DDD approach I'm reading about seems to reflect an Active Record bias, where the CRUD functions exist in the business objects (i.e. I call User.Delete directly instead of Repository.Delete from a service), and the actual business object handles the logic of things that need to be done in this instance (like emailing the user, deleting files belonging to the user, etc.). I.e.: App > (Services) > Business Objects > Repositories

With NHibernate, it seems I would be better off using the first approach given the way NHibernate functions, and I am looking for confirmation of my logic. Or, if I'm just confused, some clarification on how this layered approach is supposed to work. My understanding is that if I have an "Approve" method that updates the User model, persists it, and, let's say, emails a few people, then this method should go on the User entity object; but to allow for proper IoC so I can inject the messagingService, I need to do this in my service layer instead of on the User object.

From a "multiple UI" point of view this makes sense, as the logic to do things is taken out of my UI layer (MVC) and put into these services... but I'm essentially just factoring the logic out to another class instead of doing it directly in the controller, and if I am never going to have any other UIs involved, then I've just traded a "fat controller" for a "fat service", since the service is essentially going to encapsulate a method per controller action to do its work.

The short answer to your question is yes; in fact, I find NHibernate enhances DDD - you can focus on developing (and altering) your domain model with a code-first approach, then easily retro-fit persistence later using NHibernate.

As you build out your domain model following DDD, I would expect that much of the business logic that's found its way into your MVC controllers should probably reside in your domain objects. In my first attempt at using ASP.NET MVC, I quickly found myself in the same position as yourself - fat controllers and an anemic domain model.

To avoid this, I'm now following the approach of keeping a rich domain model that implements the business logic and using MVC's model as essentially simple data objects used by my views. This simplifies my controllers - they interact with my domain model and provide simple data objects (from the MVC model) to the views.


The pure DDD approach I'm reading about seems to reflect an Active Record bias...

To me, the active record pattern means entities are aware of their persistence mechanism, and an entity maps directly to a database table record. This is one way of using NHibernate, e.g. see Castle Active Record; however, I find this pollutes domain entities with knowledge of their persistence mechanism. Instead, typically, I'll have a repository per aggregate root in my domain model, which implements an abstract repository. The abstract repository provides basic CRUD methods such as:

public IList<TEntity> GetAll()
public TEntity GetById(int id)
public void SaveOrUpdate(TEntity entity)
public void Delete(TEntity entity)

.. which my concrete repositories can supplement/extend.
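For illustration, here is a rough analogue of such an abstract repository, sketched in Java with an in-memory map standing in for the NHibernate session (purely hypothetical; this is not the FAQ's code, and all names are mine):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Function;

// Abstract repository: basic CRUD, one concrete subclass per aggregate root.
abstract class AbstractRepository<T> {
    private final Map<Integer, T> store = new HashMap<>();
    private final Function<T, Integer> idOf;

    protected AbstractRepository(Function<T, Integer> idOf) { this.idOf = idOf; }

    public List<T> getAll() { return new ArrayList<>(store.values()); }
    public T getById(int id) { return store.get(id); }
    public void saveOrUpdate(T entity) { store.put(idOf.apply(entity), entity); }
    public void delete(T entity) { store.remove(idOf.apply(entity)); }
}

class Customer {
    final int id;
    final String name;
    Customer(int id, String name) { this.id = id; this.name = name; }
}

// Concrete repositories supplement the basic CRUD with domain-specific queries.
class CustomerRepository extends AbstractRepository<Customer> {
    CustomerRepository() { super(c -> c.id); }
}
```

The point of the base class is that every aggregate root gets the same CRUD surface for free, while each concrete repository stays the natural home for queries specific to its aggregate.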

See this post on the NHibernate FAQ, which I've based a lot of my stuff on. Also remember, NHibernate (depending on how you set up your mappings) will allow you to de-persist a complete object graph, i.e. your aggregate root plus all the objects hanging off it, and once you've finished working with it, can cascade saves through your entire object graph; this certainly isn't active record.

...since the service is essentially going to encapsulate a method per controller action to do its work...

I still think you should consider which functionality currently in your controllers would, more logically, be implemented within your domain objects. E.g. in your approval example, I think it would be sensible for an entity to expose an Approve method which does whatever it needs to do within the entity and, as in your example, if it needs to send emails, delegates this to a service. Services should be reserved for cross-cutting concerns. Then, once you've finished working with your domain objects, pass them back to your repository to persist the changes.
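A sketch of that approval example (rendered in Java for brevity; the service interface and all names are invented for illustration): the entity owns the state change and delegates the cross-cutting email concern to a service passed in at the call site, so the entity never depends on infrastructure.

```java
// Hypothetical cross-cutting service; in the real app this would wrap
// whatever messaging infrastructure you use.
interface MessagingService {
    void send(String to, String subject);
}

class User {
    private final String email;
    private boolean approved;

    User(String email) { this.email = email; }

    boolean isApproved() { return approved; }

    // The entity implements Approve itself and delegates the email
    // side effect to the service handed in by the caller.
    void approve(MessagingService messaging) {
        approved = true;                       // domain state change
        messaging.send(email, "Approved");     // delegated cross-cutting concern
    }
}
```

Because the service arrives as a parameter, no IoC container wiring is needed inside the entity, and a fake MessagingService makes the behavior trivially testable.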

A couple of books I've found useful on these topics are:
Domain-Driven Design by Eric Evans
Applying Domain-Driven Design and Patterns by Jimmy Nilsson

I've been tasked with adding a website builder to our suite of applications. Most of our clients are non technical small business owners (brick and mortar stores, mom and pop shops). I've been told that I should be looking at Blogger Template Editor and trying to see if I can make something that fully featured and easy to use. The idea being that our customers should be able to bring their business online without knowing anything about web development.

I mostly work with C# and Silverlight on a day-to-day basis, so going with .NET is likely the best way for me. ASP.NET MVC and Spark look very attractive, but I am not too sure how I will accomplish the following things:

1- How do I build a templating system that allows a designer to create templates that use a certain format and are compatible with my app? Is there any generic framework out there for this?

2- How would I persist the changes the client makes to his/her site (for example, the client changes the background color and adds a list of ingredients on a page)?

Edit: Yes, I am aware this is a big task and I am potentially looking at writing a full-blown CMS; however, our clients need very limited, basic functionality to begin with, and I imagine this would be an iterative process, with perhaps more developers coming on later if the product proves successful. I will make these concerns known to our manager, though.

Initially I am planning on just giving them a few templated layouts and allowing them to customize what goes in the various sections, as well as the colors and images, with CSS. HAML and Sass look like they could be useful, and I could persist all user-customizable parameters in a database.

Am I thinking along the right lines or completely off the mark here?

You've effectively been given the task of writing a full-blown Content Management System. This is a mammoth task that would probably take a lone developer anything from 6 to 24 months to build, depending on experience (based on the development time of other CMSs on the market). For instance, the developers of Umbraco (an open-source ASP.NET CMS) are busy porting their CMS over to ASP.NET MVC; work started around the beginning of this year and is not expected to be finished until the middle of next year, and they're some of the most talented devs in the industry.

I'm not doubting your talents, but unless your boss has given you a very large time scale to work to, or you plan on your website builder being extremely basic with minimal features, building a full-blown website builder or CMS is perhaps biting off more than you can chew.

As other posters have recommended, you should perhaps try an existing CMS on the market, such as Umbraco, if you're a .NET developer.

If you do insist on building your own, it will need some serious planning. Look at software architectural design patterns such as DDD (Domain-Driven Design), SOLID principles such as Dependency Injection, the Repository pattern, Persistence Ignorance, Service Layers, etc. MVC is definitely the right way to go. Also check out Martin Fowler's book Patterns of Enterprise Application Architecture, Dino Esposito's Microsoft .NET: Architecting Applications for the Enterprise, or Eric Evans' Domain-Driven Design: Tackling Complexity in the Heart of Software.

Let's say we have an aggregate root entity of type Order that relates customers and order lines. When I think about an order entity it's more natural to conceptualize it as not being defined without an Id. An order without an Id seems to be better represented as an order request than an order.

To add an order to a repository, I usually see people instantiate the order without the Id and then have the repository complete the object:

class OrderRepository
{
    void Add(Order order)
    {
        // Insert order into db and populate Id of new order
    }
}

What I like about this approach is that you are adding an Order instance to an OrderRepository. That makes a lot of sense. However, the order instance does not have an Id, and at the scope of the consumer of the repository, it still makes no sense to me that an order does not have an Id. I could define an OrderRequest to be an instance of order and add that to the repository, but that feels like deriving an apple from an orange and then adding it to a list of oranges.

Alternatively, I have also seen this approach:

class OrderRepository
{
    // It might be better to call this CreateOrder
    Order AddOrder(Customer customer)
    {
        // Insert record into db and return a new instance of Order
    }
}

What I like about this approach is that an order is undefined without an Id. The repository can create the database record and gather all the required fields before creating and returning an instance of an order. What smells here is the fact that you never actually add an instance of an order to the repository.

Either way works, so my question is: Do I have to live with one of these two interpretations, or is there a best practice to model the insertion?

I found this answer, which is similar, but for value objects: how should i add an object into a collection maintained by aggregate root. When it comes to a value object there is no confusion, but my question concerns an entity whose identity is derived from an external source (an auto-generated database ID).

I would like to start by ruling out the second approach. Not only does it seem counter-intuitive, it also violates several good design principles, such as Command-Query Separation and the Principle of Least Surprise.

The remaining options depend on the Domain Logic. If the Domain Logic dictates that an Order without an ID is meaningless, the ID is a required invariant of Order, and we must model it so:

public class Order
{
    private readonly int id;

    public Order(int id)
    {
        // consider a Guard Clause here if you have constraints on the ID
        this.id = id;
    }
}

Notice that by marking the id field as readonly we have made it an invariant. There is no way we can change it for a given Order instance. This fits perfectly with Domain-Driven Design's Entity pattern.

You can further enforce Domain Logic by putting a Guard Clause into the constructor to prevent the ID from being negative or zero.
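A minimal sketch of such a Guard Clause (rendered in Java here; the rule itself is language-neutral): the invariant "a valid Order always has a positive ID" is enforced at construction, so an instance can never exist in an invalid state.

```java
class Order {
    private final int id;   // final plays the role of C#'s readonly

    Order(int id) {
        // Guard Clause: reject invalid IDs before any state is set.
        if (id <= 0) {
            throw new IllegalArgumentException("id must be positive");
        }
        this.id = id;
    }

    int id() { return id; }
}
```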

By now you are probably wondering how this will possibly work with auto-generated IDs from a database. Well, it doesn't.

There's no good way to ensure that the supplied ID isn't already in use.

That leaves you with two options:

  • Change the ID to a Guid. This allows any caller to supply a unique ID for a new Order. However, this requires you to use Guids as database keys as well.
  • Change the API so that creating a new order doesn't take an Order object, but rather an OrderRequest as you suggested - the OrderRequest could be almost identical to the Order class, minus the ID.

In many cases, creating a new order is a business operation that needs specific modeling in any case, so I see no problem making this distinction. Although Order and OrderRequest may be semantically very similar, they don't even have to be related in the type hierarchy.

I might even go so far as to say that they should not be related, because OrderRequest is a Value Object whereas Order is an Entity.

If this approach is taken, the AddOrder method must return an Order instance (or at least the ID), because otherwise we can't know the ID of the order we just created. This leads us back to the CQS violation, which is why I tend to prefer Guids for Entity IDs.
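To make the OrderRequest option concrete, here is a minimal sketch of how it might look. The IOrderRepository interface and the property names are illustrative assumptions, not taken from the question:

```csharp
using System;

// Hypothetical sketch: creation takes an ID-less OrderRequest (a Value Object)
// and the repository returns a full Order whose ID the database generated.
public sealed class OrderRequest
{
    public OrderRequest(string customerName) { CustomerName = customerName; }
    public string CustomerName { get; private set; }
}

public class Order
{
    private readonly int id; // invariant: assigned once, never changes

    public Order(int id, string customerName)
    {
        if (id <= 0)
            throw new ArgumentOutOfRangeException("id"); // Guard Clause
        this.id = id;
        CustomerName = customerName;
    }

    public int Id { get { return id; } }
    public string CustomerName { get; private set; }
}

public interface IOrderRepository
{
    // Must return the created Order (or at least its ID) so the caller
    // learns the generated identity - the CQS trade-off discussed above.
    Order AddOrder(OrderRequest request);
}
```

Note how AddOrder both mutates state and returns a value; switching to caller-supplied Guids is what removes that compromise.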

This is regarding a layered design with EF DB First model.

I have not used Entity Framework before; so far I have used only entities, placed in a different project with Domain/DTO sub-folders. I referenced that project in the DataAccessLayer, Business Layer and MVC application, wrote code with the usual ADO.Net queries, and prepared POCOs for my entities. No issues.

Now we are developing an application using the Entity Framework DB First model. We chose the DB First model because the DB design is not in our control; it is done by a DBA.

I thought of reusing the old simple design here, but I'm not sure where (in which layer) the edmx file and the generated POCO classes should fit. I didn't find any samples of a layered architecture that uses the DB First approach.

I referred to this, but they use NHibernate.

Here is the high-level overview of the old design.

Any suggestions on design/samples, please you are welcome.


From the answer below, I gather that since Entity Framework produces the POCOs, we could rename the existing Entities/Domain layer to Domain Layer and put the generated POCO classes there. We could also simply keep the .edmx in the DataAccessLayer, with the list of IRepository classes that wrap EF for TDD. Does this make sense? Any other points?


Currently I have removed the DataAccessLayer and kept only the Entities layer, which has a model.edmx file, the classes generated by EF, and all the Repository classes implementing IRepository. I reference this from the Business Layer and MVC as well. Am I doing this right? I feel like I have a bad design :( Please suggest/help.

Because you're unfortunately severely handicapped by the decision to create the database first, you will need to use an Anti-Corruption layer per Eric Evans' Domain-Driven Design.

This is a good solution for what to do when you're given a shitty interface that you absolutely must code against - make an interface wrapped around the DB that behaves the way you want it to. Do not expose any EF classes directly to anything except the anti-corruption layer itself.

Here's a reading example:

public class SomeReadService : ISomeReadService {
  public SomeViewModel Load(Guid id) {
    using (var dbContext = new DbContext()) {
      // Query the DB model, then *map* the EF classes to SomeViewModel.
      // This way you are not exposing the shitty DB model to the outside world.
    }
  }
}

Here's a writing example:

public class SomeRepository : ISomeRepository {
  public SomeDomainObject Load(Guid id) {
    using (var dbContext = new DbContext()) {
      // Map the EF classes to your domain object.  Same idea as before.
    }
  }
}

I would still try to demonstrate to the client that having a separate team design the DB will be a disaster (and my experience strongly indicates that it will). See if there is some way you can provide the feedback that will demonstrate to the client that it would be better if you designed the DB.

We are developing an extension (in a C# .NET environment) for a GIS application, which will have predefined types for modeling real-world objects, starting from GenericObject and going to more specific types like Pipe and Road, with their detailed properties and methods like BottomOfPipe, Diameter and so on.

Surely there will be an object model, interfaces, inheritance and lots of other essential parts in the type library, and by now we have fixed some of them. But as you may know, designing an object model is a very ambiguous task and (as far as I know) can be done in many different ways, with many different results and weaknesses.

Are there any distinct rules for designing an object model: the hierarchy, the way of defining interfaces, abstract classes, coclasses, enums?

Any suggestion, reference or practice?

Check out Domain-Driven Design: Tackling Complexity in the Heart of Software. I think it will answer your questions.

What is the best way to start Domain Driven Design?

What are the recommended resources?


I mean, I'd like to know how to start learning DDD (the same way as to start TDD by reading K. Beck).

There's a really big book available on domain driven design, which was brilliantly abridged and made available as a free download here:

To start "doing" domain driven design, you just need to follow the points in this book. Share a language with the business, create objects that represent something that the business would recognise and so on.

It is more difficult to get in full swing on large existing applications (but not impossible) but if you are writing something new, that's a great opportunity to go at it 100%.

The definitive book on DDD is Domain-Driven Design: Tackling Complexity in the Heart of Software

However, it's a book that takes some gestation and is best backed up with practice and by observing how experienced DDD'ers think.
The site has some excellent resources, including example projects. I also find it useful to trawl the various open source code repositories such as GitHub, Codeplex and SourceForge for projects that use DDD.

In addition there is an excellent discussion forum where a lot of very experienced DDD'ers hang out.

Good luck on your DDD journey, it's a long road without a turn!

My personal advice is to forget the "DDD Quickly" book and go straight to the "Domain-Driven Design: Tackling Complexity in the Heart of Software" book from Eric Evans. I'd also suggest not to read the book in the original order, but to read the intro and then move to the Strategic Design section, and only then go back to the first part of the book. You'll discover that there's more to DDD than a collection of patterns.

However, since the book was published there has been some evolution in the DDD community (have a look at this video as a refresher). A new pattern, Domain Event, has been published, and many alternative supporting architectures have been discussed: CQRS and Event Sourcing above all.

Although I am seeing lots of discussion about this topic, I can't seem to find a very detailed answer. I want to know what should go where.

Where should I put the IRepository interface: in the DataAccess project, or in a separate project, say "Repository"? How about other abstract repositories that extend IRepository<>, like ICustomerRepository : IRepository<Customer>? Will they reside in the same project? How about the concrete implementation CustomerRepository : BaseRepository, ICustomerRepository?

And my POCOs, where should I put them?

The UnitOfWork and Service Layers???

I guess you know what I am trying to say...

Can you please help and give me an idea or more detailed would be appreciated.

PS: Can all my services just contain the UnitOfWork so I can call any repository? Is there a drawback there? Or why would I want to use a Repository over the UnitOfWork in services?


All of your questions are answered very carefully in "Microsoft Spain - Domain Oriented N-Layered .NET 4.0 Sample App".

I recommend reading the document first and then investigating the sample app.

It would be helpful to read the "Domain-Driven Design: Tackling Complexity in the Heart of Software" by Eric Evans. This book is like a bible in Domain-Driven Design.

I am attempting to refactor my application from a repository per entity to a repository per aggregate root.

A basic example: I have an aggregate root of Car. Cars have hire contracts. As far as I can see, contracts don't exist without cars, hence Car is the aggregate root.

I am trying to implement a user view which will show every contract in the system (all the child entities of the root entities). Before refactoring I could just go to my contracts repository and get all. As the contracts repository has been removed (it's not a root), I now need to get all cars out of my repository and then get all their contracts.

My repository has the interface

public interface ICarRepository
{
    IQueryable<Car> All { get; }
    IQueryable<Car> AllIncluding(params Expression<Func<Car, object>>[] includeProperties);
    Car Find(long id);
    void InsertOrUpdate(Car car);
    void Delete(long id);
    void Save();
}

I thought of creating an ICarManagementService and having it have a GetAllContracts method (perhaps with filter parameters). Would that mean to get all contracts I need to pull all car entities out with their contracts and then retrieve each entities associated hire contracts and filter them?

I can then pass these to the controller and AutoMap the contracts as before.

Is this best practice?



As far as I can see contracts don't exist without cars hence cars is the aggregate root.

This is not necessarily true. 'Doesn't exist without' is not enough for an entity to become part of an Aggregate Root. Consider the classic order processing domain. You have an Order that is an Aggregate Root. You also have a Customer that is an Aggregate Root. An Order can not exist without a Customer, but that does not mean that Orders are part of the Customer Aggregate. In DDD, entities inside one Aggregate can hold references to other Aggregate Roots. From the DDD book:

Objects within the AGGREGATE can hold references to other AGGREGATE roots.

An Aggregate is a life cycle and data exchange unit. It is essentially a cluster of objects that enforces invariants. This is something you want to be locked if you have multiple users changing the domain at the same time.

Back to your question, my understanding is that the domain is something like renting / leasing a car / truck / limo / bulldozer. I think that HireContract may not be a part of the Car aggregate because they may have different lifecycles and HireContract makes sense on its own, without a Car. It seems to be more of an Order-Product relationship, which is also a classic example of two different Aggregates referencing each other. This theory is also confirmed by the fact that the business needs to see "All Contracts". They probably don't think of a Car containing all Contracts. If this is true then you need to keep your ContractsRepository.
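Sketching that two-aggregate design (types and members here are illustrative), HireContract becomes its own root that references Car by identity, and keeps its own repository so the "All Contracts" view stays a single-repository query:

```csharp
using System;
using System.Linq;

public class Car // Aggregate Root with its own repository
{
    public long Id { get; private set; }
}

public class HireContract // Also an Aggregate Root, not part of the Car aggregate
{
    public long Id { get; private set; }
    public long CarId { get; private set; } // reference to the Car root by identity
    public DateTime Start { get; private set; }
    public DateTime End { get; private set; }
}

public interface IContractRepository
{
    IQueryable<HireContract> All { get; } // "All Contracts" needs nothing else
    HireContract Find(long id);
}
```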

On an unrelated note, you might be interested in reading this answer about repository interface design.

I am developing a new application using object oriented approach with some REST involved, I am not using any frameworks.

The question I have is where is the best place to validate a user’s input in a setter like below:

public function setSalary($salary)
{
    if (Validator::money($salary)) {
        $this->salary = $salary;
    } else {
        return 'Error: that is an invalid number';
    }
}

Or in the controller?

public function updateSalary()
{
    $errors = array();

    if (Validator::money($_POST['salary'])) {
        $salary = $_POST['salary'];
    } else {
        $errors['salary'] = 'Error: that is an invalid number';
        return $errors;
    }

    $employee = new Employee($_POST['e_Id']);
}


If I was to put in the setter how should my controller look, and return validation errors?

I have seen most people do validation in the controller; however, I think it should be the model's responsibility, as it is going to be using the data, and we can reuse the model without repeating ourselves. However, there are times when validation rules may need to be different in special cases, like different validation for a different view, or different validation for a super admin.

Which one would you say is in accordance with best practices?

First of all, since you seem to aspire to implement an MVC-like structure, let's start with some general mistakes that are not related to validation directly.

  • The only part of your code containing PHP superglobals should be the bootstrap stage. Having superglobals sprinkled all over your code makes it really hard to test, and your code also becomes tightly coupled to your HTML via the <input> names.

  • Even if your for or if statement contains a single line, you should always use curly brackets. Well, in general your code should follow the PSR-1 and PSR-2 guidelines.

  • Controllers should not have any logic, or be dealing with saving of data. Read this post, maybe it clears some things up.

Ok .. now back to the original subject.

In general there are two schools of thought:

  1. You do the validation in the domain entity

    Your domain entity (in your case Employee) contains all the business rules that pertain to it, and it can use those rules to assess whether it is in a valid state.

    The code would go something like this:

    $employee = new Entity\Employee;
    if ($employee->isValid()) {
        $mapper = new Mapper\Employee($dbConn);
        $mapper->store($employee);
    }
  2. You never create an invalid domain entity

    This approach comes from DDD, where your domain entity is created by some other class and can only be changed from one valid state to another valid state. Essentially, if you want to explore this approach, you will have to read this book (probably several times).

Also, there is one other form of validation not covered by the previous two: data integrity checks. This is a type of validation that is actually done by the RDBMS, for example the UNIQUE constraints.

When you encounter an integrity violation, it will usually surface as an exception, which you handle in the service layer.

I am quite confident with developing DDD applications, but one area which is continuing to cause me problems is when two applications integrate with each other. I am struggling to find any useful books or resources on the subject. Books such as Patterns of EAI go into depth about messaging patterns and message construction, but don't really explain how to architect systems that make use of these patterns.

I've searched high and low and I'm quite sure there are no sample applications that demonstrate how to integrate two systems. I understand the concept of asynchronous messaging, but again can't find good examples of how to apply it.

Resources on SOA seem to keep repeating the same concepts without demonstrating how to implement them, and more often than not seem more concerned with selling me products.

Here are the sort of questions I am struggling to answer:

  1. Should each application have its own copy of the data? For example, should every application within an organisation have its own list of clients, which it updates upon receipt of a message?

  2. At what point in the DDD stack are messages passed? Are they the result of domain events?

  3. Can I combine asynchronous messaging and WCF, or do I have to choose? Do I use WCF for request/response and messaging for publish/subscribe?

  4. How does one DDD application consume the services of another? Should one DDD application query another system for its data via its application services, or should it already have its own local copy of the data, as mentioned in point 1?

  5. Apparently I can't have a transaction across two systems. How do I avoid this?

If I sound confused it's because I am. I'm not looking for answers to the above questions, just a pointer in the direction of resources that will answer these and similar questions.


I've been making a similar transition. My advice:

  • Start at
  • Listen to the Distributed Podcast.
  • Catch any of Greg Young's talks. For example, here is Eric Evans interviewing Greg. He's got some all-day sessions that are recorded as well.
  • Read/listen to anything from Udi Dahan (podcasts, lectures, articles, etc.). He's got some good stuff on InfoQ.
  • Wait for Greg's book.
  • Read whatever you can find on EDA (Event Driven Architecture).

Good luck!

In addition to what Eric Farr said, it may be worth looking closely at Part 4 of the DDD book (Strategic Design). It does not approach the problem from a 'distributed' angle, but it has a lot of information on how to integrate applications (Bounded Contexts). Patterns like Anticorruption Layer can be very helpful when designing at the boundary of the application.

I have a bunch of classes in a CUDA project that are mostly glorified structs and are dependent on each other by composition:

class A {
    public:
        typedef boost::shared_ptr<A> Ptr;
        A(uint n_elements) { ... } // allocate element_indices
        DeviceVector<int>::iterator get_element_indices();
    private:
        DeviceVector<int> element_indices;
};

class B {
    public:
        B(uint n_elements) {
            ... // initialize members
        }
        A::Ptr get_a();
        DevicePointer<int>::iterator get_other_stuff();
    private:
        A::Ptr a;
        DeviceVector<int> other_stuff;
};

DeviceVector is just a wrapper around thrust::device_vectors and the ::iterator can be cast to a raw device pointer. This is needed, as custom kernels are called and require handles to device memory.

Now, I do care about encapsulation, but

  • raw pointers to the data have to be exposed, so the classes using A and B can run custom kernels on the GPU
  • a default constructor is not desired, device memory should be allocated automatically --> shared_ptr<T>
  • only very few methods on A and B are required

So, one could make life much simpler by simply using structs

struct A {
    void initialize(uint n_elements);
    DeviceVector<int> element_indices;
};

struct B {
    void initialize(uint n_elements);
    A a;
    DeviceVector<int> other_stuff;
};

I'm wondering whether I'm correct that in the sense of encapsulation this is practically equivalent. If so, is there anything that is wrong with the whole concept and might bite at some point?

It's a trade off.

Using value structs can be a beautifully simple way to group a bunch of data together. They can be very kludgy if you start tacking on a lot of helper routines and rely on them beyond their intended use. Be strict with yourself about when and how to use them and they are fine. Having zero methods on these objects is a good way to make this obvious to yourself.

You may have some set of classes that you use to solve a problem; I'll call it a module. Value structs within the module are easy to reason about. Outside of the module you have to hope for good behavior. You don't have strict interfaces on them, so you have to hope the compiler will warn you about misuse.

Given that, I think they are more appropriate in anonymous or detail namespaces. If they end up in public interfaces, people tend to add sugar to them. Delete the sugar or refactor it into a first-class object with an interface.

I think they are more appropriate as const objects. The problem you fall into is that you are (trying to) maintain the invariants of this "object" everywhere it's used, for its entire lifetime. If a different level of abstraction wants them with slight mutations, make a copy. The named parameter idiom is good for this.

Domain-Driven Design gives thoughtful, thorough treatment to the subject. It offers a practical sense of how to understand and facilitate design.

Clean Code also discusses the topic, though from a different perspective. It is more of a morality book.

Both are awesome books and generally recommended, even outside of this topic.

I have Category and Product entities. The relationship between the two is one-to-many. Since Category is the aggregate root, I think I should make only a single repository, ICategoryRepository, which should also handle products.


I'm without my copy of Domain Driven Design by Evans at the moment, which is where I'd turn for the definitive answer, but this reference at dddstepbystep states that:

Within an Aggregate there is an Aggregate Root. The Aggregate Root is the parent Entity to all other Entities and Value Objects within the Aggregate.

A Repository operates upon an Aggregate Root

So yes, going by this definition, your Category repository should be responsible for persisting all entities within the Category aggregate.

That said, my question from my comment still stands - are you sure that Category really is a useful aggregate root? The fact that you are asking this question about persisting products indicates that you often consider them separate from their Category, or at least would like to be able to deal with some products apart from their category.

I'm looking for pointers and information here, I'll make this CW since I suspect it has no single one correct answer. This is for C#, hence I'll make some references to Linq below. I also apologize for the long post. Let me summarize the question here, and then the full question follows.

Summary: In a UI/BLL/DAL/DB 4-layered application, how can changes to the user interface, to show more columns (say in a grid), avoid leaking through the business logic layer and into the data access layer, to get hold of the data to display (assuming it's already in the database).

Let's assume a layered application with 3(4) layers:

  • User Interface (UI)
  • Business Logic Layer (BLL)
  • Data Access Layer (DAL)
  • Database (DB; the 4th layer)

In this case, the DAL is responsible for constructing SQL statements and executing them against the database, returning data.

Is the only way to "correctly" construct such a layer to just always do "select *"? To me that's a big no-no, but let me explain why I'm wondering.

Let's say that I want, for my UI, to display all employees that have an active employment record. By "active" I mean that the employment records from-to dates contains today (or perhaps even a date I can set in the user interface).

In this case, let's say I want to send out an email to all of those people, so I have some code in the BLL that ensures I haven't already sent out email to the same people already, etc.

For the BLL, it needs minimal amounts of data. Perhaps it calls up the data access layer to get that list of active employees, and then a call to get a list of the emails it has sent out. Then it joins on those and constructs a new list. Perhaps this could be done with the help of the data access layer, this is not important.

What's important is that for the business layer, there's really not much data it needs. Perhaps it just needs the unique identifier for each employee, for both lists, to match upon, and then say "these are the unique identifiers of those that are active, that you haven't already sent an email to". Do I then construct DAL code that builds SQL statements retrieving only what the business layer needs, i.e. just "SELECT id FROM employees WHERE ..."?

What do I do then for the user interface? For the user, it would perhaps be best to include a lot more information, depending on why I want to send out emails. For instance, I might want to include some rudimentary contact information, or the department they work for, or their manager's name, etc., not to mention that I at least want name and email address information to show.

How does the UI get that data? Do I change the DAL to make sure I return enough data back to the UI? Do I change the BLL to make sure that it returns enough data for the UI? If the object or data structures returned from the DAL back to the BLL can be sent to the UI as well, perhaps the BLL doesn't need much of a change, but then requirements of the UI impacts a layer beyond what it should communicate with. And if the two worlds operate on different data structures, changes would probably have to be done to both.

And what then when the UI is changed, to help the user even further, by adding more columns, how deep would/should I have to go in order to change the UI? (assuming the data is present in the database already so no change is needed there.)

One suggestion that has come up is to use Linq-To-SQL and IQueryable, so that if the DAL, which deals with what (as in what types of data) and why (as in WHERE-clauses), returned IQueryables, the BLL could potentially return those up to the UI, which could then construct a Linq query that would retrieve the data it needs. The user interface code could then pull in the columns it needs. This would work since with IQueryables, the UI would end up actually executing the query, and it could then use "select new { X, Y, Z }" to specify what it needs, and even join in other tables if necessary.

This looks messy to me: the UI executes the SQL code itself, even though it is hidden behind a Linq frontend.

But, for this to happen, the BLL or the DAL should not be allowed to close the database connections, and in an IoC type of world, the DAL-service might get disposed of a bit sooner than the UI code would like, so that Linq query might just end up with the exception "Cannot access a disposed object".

So I'm looking for pointers. How far off are we? How are you dealing with this? I consider the fact that changes to the UI will leak through the BLL and into the DAL a very bad solution, but right now it doesn't look like we can do better.

Please tell me how stupid we are and prove me wrong?

And note that this is a legacy system. Changing the database schema isn't in the scope for years yet, so a solution to use ORM objects that would essentially do the equivalent of "select *" isn't really an option. We have some large tables that we'd like to avoid pulling up through the entire list of layers.

This is not at all an easy problem to solve. I have seen many attempts (including the IQueryable approach you describe), but none that are perfect. Unfortunately we are still waiting for the perfect solution. Until then, we will have to make do with imperfection.

I completely agree that DAL concerns should not be allowed to leak through to upper layers, so an insulating BLL is necessary.

Even if you don't have the luxury of redefining the data access technology in your current project, it still helps to think about the Domain Model in terms of Persistence Ignorance. A corollary of Persistence Ignorance is that each Domain Object is a self-contained unit that has no notion of stuff like database columns. It is best to enforce data integrity as invariants in such objects, but this also means that an instantiated Domain Object will have all its constituent data loaded. It's an either-or proposition, so the key becomes to find a good Domain Model that ensures that each Domain Object holds (and must be loaded with) an 'appropriate' amount of data.

Too granular objects may lead to chatty DAL interfaces, but too coarse-grained objects may lead to too much irrelevant data being loaded.

A very important exercise is to analyze and correctly model the Domain Model's Aggregates so that they strike the right balance. The book Domain-Driven Design contains some very illuminating analyses of modeling Aggregates.

Another strategy which can be helpful in this regard is to aim to apply the Hollywood Principle as much as possible. The main problem you describe concerns Queries, but if you can shift your focus to be more Command-oriented, you may be able to define some more coarse-grained interfaces that doesn't require you to always load too much data.
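As an illustration of that Command-oriented shift (all names here are hypothetical), the UI tells the BLL what to do instead of asking it for rows, so column choices never leak downward:

```csharp
using System;

// Query style: the UI pulls data, so every new grid column ripples
// down through the BLL into the DAL.
public interface IEmployeeQueries
{
    // ...grows a new member every time the UI adds a column
}

// Command style (Hollywood Principle: "don't call us, we'll call you"):
// the UI states the intent; the BLL decides internally what data it
// needs to load in order to carry it out.
public interface IEmployeeNotificationService
{
    void SendReminderEmails(DateTime activeOn);
}
```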

I'm not aware of any simple solution to this challenge. There are techniques like the ones I described above that can help you address some of the issues, but in the end it is still an art that takes experience, skill and discipline.

I've been researching using a service layer to validate my domain models before persisting them to a database.

I found the following example of using extension methods to validate my models, but was wondering if there were any specific disadvantages to doing so? I don't see validation (aside from Data Annotations) mentioned all that much.

I was thinking of implementing the following:

public class FooService : IFooService {

    public bool Add(Foo foo) {

        if (!foo.IsValid()) {
            return false;
        }

        try { /* persist foo */ } catch { return false; }

        return true;
    }
}

public static class ValidationExtensions {

    public static bool IsValid(this Foo foo) {

        // Call my validation implementation here
        return true; // placeholder
    }
}

I'm nervous about doing this, as I don't see it recommended/implemented much. Thoughts?

Domain objects should be self-validating; this is simple OOP. They should not be allowed to get into an invalid state in the first place. A properly designed domain object enforces all internal invariants without relying on external code. Otherwise the encapsulation is broken and your objects are really just dumb data containers with getters and setters.

The word 'validation' can also be a very dangerous overgeneralization that tends to shift focus from the domain and its objects to dumb data containers tailored to a choice of UI framework. This is why the DDD book never mentions the 'validation' issue at all. I find it more useful to think about invariants than about validation. Invariants can be as simple as 'a social security number can not have letters', in which case a Value Object should be used. Or more complex, like 'an order is considered to be delinquent if it was not paid within 2 weeks', which can be encapsulated within an order.IsDelinquent() or similar method. Note that in the first case we eliminate the possibility of the object becoming invalid by implementing a SocialSecurityNumber class, and in the second case we use the word 'delinquent' from the ubiquitous language instead of the generic 'valid'. Please see similar answers: 1, 2, 3.
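The SocialSecurityNumber idea can be sketched as a Value Object that simply cannot exist in an invalid state (the digits-only rule is assumed here for illustration):

```csharp
using System;
using System.Linq;

public sealed class SocialSecurityNumber
{
    private readonly string value;

    public SocialSecurityNumber(string value)
    {
        // Invariant, not 'validation': an instance with letters can never exist.
        if (string.IsNullOrEmpty(value) || !value.All(char.IsDigit))
            throw new ArgumentException("social security number can not have letters");
        this.value = value;
    }

    public override string ToString() { return value; }
}
```

Any code receiving a SocialSecurityNumber can rely on the invariant without re-checking it.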

As a side note, you should probably take all 'DDD' advice from the ASP.NET crowd with a grain of salt. ASP.NET MVC is a great framework, but the learning material confuses the Domain model with the View model. Most of the examples consider a 'domain' object to be the same as a data container with getters and setters, without any encapsulation. DDD is technology-agnostic, so you can always do a reality check by asking yourself 'would this make sense in a console app?' or 'would this make sense in a non-UI project?'.

Maybe this question is silly, but I'm a little confused. Let's suppose we want to leverage this pattern:

  • what exactly is the event storage scope in an enterprise application?

  • Is an event store shared among multiple processes, or is it just an in-process concept?

  • What happens to events when the application close? Are they bound to an application "instance" or to an application?

  • What are the differences between an event store and a message bus with publisher/subscriber (apart from the fact that we can store the message history)?

  • Who's responsible for the message idempotence?

  • What does this sentence actually mean: "Interestingly enough, even without the presence of distributed transactions across the various resources involved, such as a message queue and persistent storage, the EventStore is able to ensure a fully transactional experience. This is achieved by breaking apart a distributed transaction into smaller pieces and performing each one individually" (from this project)? I can't understand how breaking a transaction into several small pieces, even if each is transactional per se, can replace a distributed transaction.

How many questions!

what exactly is the event storage scope in an enterprise application?

Event storage is not properly a pattern; it's a technique usually used with two different (but strongly related) patterns: Event Sourcing and Command Query Responsibility Segregation. Being a "storage", it's just a way to persist the state of the application that is relevant to the business.

Both patterns are often used in conjunction with a domain model, since they work well with the patterns introduced by Evans in Domain Driven Design.

An EventStore allows you to persist domain events (the Event Sourcing way) or application events (aka Commands, in the CQRS way). It differs from document and relational storage in that you don't save the state of the model but the events that led to it. However, you could use either an RDBMS or a document db to store events.

Then, to retrieve an entity, you simply replay each of the registered events/commands in sequence. Snapshots can be used to make this process faster.

Does an event storage share among multiple processes, or is just an in-process concept?

It depends on the store implementation, but there's no reason that prevents its use among multiple processes and/or applications.

What happens to events when the application close? Are they bound to an application "instance" or to an application?

Again, it depends on the store implementation. The simplest possible event store saves events into numbered files, so when the application shuts down the events are still there (this always reminds me of Thompson's words: "we have persistent objects, we call them files").
But nothing prevents you from having a volatile event store, just in memory, if your application really needs it. I would implement it as an append-only collection that keeps entry order.
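A volatile, in-memory event store like the one just described can be sketched in a few lines. The class and method names below are hypothetical, not taken from any particular EventStore product:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Minimal in-memory event store: one append-only list per stream,
// preserving entry order so an entity can be rebuilt by replay.
class InMemoryEventStore {
    private final Map<String, List<Object>> streams = new HashMap<>();

    // Append-only: there is deliberately no update or delete operation.
    public synchronized void append(String streamId, Object event) {
        streams.computeIfAbsent(streamId, id -> new ArrayList<>()).add(event);
    }

    // Replay returns the events in the exact order they were recorded.
    public synchronized List<Object> replay(String streamId) {
        return Collections.unmodifiableList(
                new ArrayList<>(streams.getOrDefault(streamId, Collections.emptyList())));
    }
}
```

To retrieve an entity, you would fold `replay(...)` over an initial state; a snapshot is then just a cached intermediate state that lets the fold start later in the stream.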

What are the differences between an event storage and a MessageBus with Publisher/Subscriber (apart from the fact that we can store the message history)?

A message bus is a tool to deliver messages. Events (and commands) are messages, thus you can use it to deliver them. An event store, instead, is a persistence tool.

Who's responsible for the message idempotence?

In the most common scenarios, it's whoever designs the domain model. In a non-DDD system, it's whoever designs the messages (events or commands). Indeed, idempotence must be guaranteed by the receivers of the messages, not by the technology per se.

Given that, an EventStore could merge duplicated messages when it detects them. But this doesn't, per se, make the model idempotent.

what does this sentence actually mean: "Interestingly enough, even without the presence of distributed transactions across the various resources involved, such as a message queue and persistent storage, the EventStore is able to ensure a fully transactional experience. This is achieved by breaking apart a distributed transaction into smaller pieces and performing each one individually" (from this project)? I can't understand how breaking a transaction into several small pieces, even if each is transactional per se, can replace a distributed transaction.

It depends on the meaning the author assigns to "fully transactional experience". To me, this sentence looks wrong, since it would violate Brewer's (CAP) theorem.

You may also find the CQRS Journey from Microsoft, and Greg Young's writings, useful.

See you tomorrow, at the office :-)

Not a book that talks about how to code, but more about the high-level organization of software.

Though perhaps it is not strictly architecture-related (although I believe you'll find that most prescriptions of architecture patterns are mere guidelines at best, and far from comprehensive), Steve McConnell's Code Complete is truly required reading. More importantly than teaching you design patterns, it will teach you to be a better programmer so that you can make these kinds of decisions for yourself.

General Responsibility Assignment Software Principles (GRASP) and Domain Driven Design are, in my opinion, the next must-have things you should get familiar with after learning to code. API Design is also a good read, especially when you are developing software that will be used/extended by multiple people.

I am not in favor of learning patterns first, as it is EASIER to misuse them if the intention is not understood correctly (everything looks like a nail if you have a hammer). I have nothing against patterns, but I have mostly seen them misused by junior developers, creating hard-to-maintain products.

The setup: Winform/ASP.NET MVC projects. Learning NHibernate SQL-Server driven apps

I work with clients that have no idea how to model an application. That's what I'm here for. However, we have lots of conflicts over validation, misunderstandings, etc.

For example, the client will ask for an order entry screen. The screen should require a "product". That's fine and dandy. However, the client didn't know to tell me that the user can't order a product of "Class A" unless it's Tuesday.

Or, they need a time entry screen. Two days before it rolled into production, they casually mentioned that certain activities are only valid in certain situations. Those situations amounted to a week of coding.

That's of course, some crude examples (not by much!). But the problem is getting these non-technical clients to layout their business logic. They somehow didn't realize that the "Class A" problem would come up two weeks later, etc.

I'm all for agile programming but is there an easy way to somehow make business logic like this extremely easy to implement and change on almost a daily basis?

I of course am splitting the project into hopefully intelligent pieces, using NHibernate, etc. But making this business logic so dynamic is really making it hard to project timelines, etc.

Any suggestions? I know there will never be a perfect client (or a perfect provider), but how do you guys deal with the constant misunderstandings?


I highly recommend the book "Domain-Driven Design: Tackling Complexity in the Heart of Software", by Eric Evans. This is an EXCELLENT book that teaches how to communicate with your customers so that you are better equipped to model their needs.

Central to the book is the concept of Ubiquitous Language...a language that you, as a software architect, create during conversations with your customers, via the tool called modeling.

As an architect, there is a fundamental rule that you should come to embrace, as it will greatly help you in your endeavors to deliver business value to your customers: It is not the job of the customer to give you requirements all nicely and neatly packaged in a pretty box that you can just unwrap and build. As the middle man between the customer and the developer, it is critical for you to understand that it is YOUR job to extract requirements from your customers.

Why do I say this? Your customers' role is business, not software development. They are concerned about making money so they can pay their employees, their advertisers, their other bills...and maybe make some profit in the process. They are not concerned with the details of how the software (the tool they use to get the job done) works. They simply care that the tool does what they need it to.

If you can learn that, as an architect, one of your roles is that of "requirement extractor", you'll become more successful. With that success, your customers should be happier, which should result in you being happier and more satisfied with your job and the software you and your developers create. It's not an easy thing to learn...it takes a whole different approach. It requires a greater presence of mind and foresight that gives you insight into your customers' needs...letting you know what they need before they do. If you develop and use a ubiquitous language, then as your project continues, each meeting with your customers will improve as the two of you learn how to communicate in common terms that have well-defined meanings.

Given all that, here are some examples that might have helped you get better requirements earlier on:

Cust: So, we need an order entry screen that we can enter product orders on.
Arch: Ok, thats doable. Can you give me more specifics about this order entry screen?
Cust: Hmm, well....I'm not sure...

Arch: If I may, here are some thoughts I have about business rules:
Arch: Are there any restrictions about what may be ordered?
Cust: Oh! Yeah, we don't want our customers ordering any products of Class A unless its Tuesday.

Arch: Great, thats helpful. Do you offer any combination deals, so that if a customer orders Product X, they can get Product Y at a discount?
Cust: Hmm, not exactly. We do have promotional deals, where if a customer enters a certain promotion code, they can get a deal on one or more products.
Arch: Ok, so there are class restrictions and promotional deals. Anything else that might affect the behavior of the order screen?

Cust: Hmm, now that you mention it...

This is a common scenario when doing DDD. (Note, this is also very Agile, as DDD and Agile work hand in hand.) The dialog above contains two kinds of terms that should become part of you and your customer's Ubiquitous Language. Terms such as "order entry screen", "Class A", "promotional deals", and "promotion code" are ones your customer uses to describe their business; these become part of your "software domain", which is the software model of your customer's business (at least the parts you are writing software for). Terms such as "business rules" are ones architects and developers use. If you read Evans's book, he explains in much greater detail how to develop a ubiquitous language, and how to use ad-hoc visual modeling to design your software using the terms from that ubiquitous language.

Hopefully this helps. And hopefully, the book "DDD: Tackling Complexity in the Heart of Software" will help even more. The ultimate goal is that, once you have a proper rapport with your customer, there won't be any (or very few) misunderstandings.

What functionality do you think should be built into a persistable business object at bare minimum?

For example:

  • validation
  • a way to compare to another object of the same type
  • undo capability (the ability to roll-back changes)

The functionality dictated by the domain & business.

Read Domain Driven Design.

After reading the blue book (Eric Evans's Domain Driven Design) and starting to apply the DDD concepts in a simple blog-like application, I have the following question: how do you model the local identity of an entity inside an aggregate root?

Let's say for the sake of simplicity, I have a simple blog model that has the following entities and scenarios: A registered user can publish a post, a post can have one or more tags associated and a registered or unregistered user can publish a comment on a post.

In this scenario, the entities are User, Post, and Comment; the aggregate roots are User and Post, with Comment being an entity inside the Post aggregate.

So, since the Comment entity has a local identity inside Post, how do I model that local identity? I.e., I cannot differentiate a comment just by its attributes, since I can have two different comments on the same post, published by the same user, with the same content...

I first thought of using the position of the comment inside the post's comment list to identify it, but this becomes very complex in concurrent environments, like a web application where two clients can post comments at the same time; I would get collisions until I store them in the database. Besides that, I would need to keep logic for restoring the comment list from the repository in the same order the post was saved...

Then I thought of keeping a unique identifier counter inside the Post and auto-incrementing it with every comment published, but in a concurrent environment that also becomes complex. So, how do I model local identity inside an aggregate root?

Thanks Pablo

Good question. From Eric Evans's book:

ENTITIES inside the boundary have local identity, unique only within the AGGREGATE.

...only AGGREGATE roots can be obtained directly with database queries. All other objects must be found by traversal of associations.

I think the second part is what is important. You should be treating the Comment Id as local identity. In other words, you should not retrieve Comments bypassing their Aggregate Root (Post), reference Comments from the outside, etc. Technically, the Comment Id can be the same AUTOINCREMENT field generated by the database, like you would have for User and Post (or any other id generator from Hibernate). But conceptually the Comment will have local identity. It is the job of the Aggregate Root to resolve it.
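A sketch of that idea, with hypothetical class names: the aggregate root assigns each Comment a local id and is the only way to reach a Comment. This is not production code, just an illustration of identity resolution through the root:

```java
import java.util.ArrayList;
import java.util.List;

// Comment has local identity: its id is unique only within one Post.
class Comment {
    final int localId;
    final String author;
    final String text;

    Comment(int localId, String author, String text) {
        this.localId = localId;
        this.author = author;
        this.text = text;
    }
}

class Post {
    private final List<Comment> comments = new ArrayList<>();
    private int nextCommentId = 1;

    // The root serializes access, so two concurrent clients of the
    // same Post instance can never be handed the same local id.
    public synchronized Comment addComment(String author, String text) {
        Comment c = new Comment(nextCommentId++, author, text);
        comments.add(c);
        return c;
    }

    // Traversal: comments are found through the root, never queried directly.
    public synchronized Comment commentOf(int localId) {
        return comments.stream()
                .filter(c -> c.localId == localId)
                .findFirst()
                .orElse(null);
    }
}
```

Two comments with identical author and content still get distinct identities, which is exactly the property the question asks for.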

A simple billing system (on top of ColdBox MVC) is ballooning into a semi-enterprisey inventory + provisioning + issue-tracking + profit-tracking app. The pieces seem to be doing their own thing, yet they share many things, including a common pool of Clients and Staff (logins), and other intermingled data & business logic.

How do you keep such system modular? from a maintenance, testability & re-usability stand point?

  • single monolithic app? (i.e. new package for the base app)
  • ColdBox module? not sure how to make it 'installable' and what benefits does it bring yet.
  • Java Portlet? no idea, just thinking outside the box
  • SOA architecture? through webservice API calls?

Any idea and/or experience you'd like to share?

Stop thinking about technology (e.g. Java Portals, ColdBox modules, etc...) and focus on architecture. By this I mean imagining how you can explain your system to an observer. Start by drawing a set of boxes on a whiteboard that represent each piece - inventory, clients, issue tracking, etc... - and then use lines to show interactions between those systems. This focuses you on a separation of concerns, that is grouping together like functionality. To start don't worry about the UI, instead focus on algorithms and data.

If we're talking about MVC, that step is focusing on the model. With that activity complete comes the hard part: modifying code to conform to that diagram (i.e. the model). To really understand what this model should look like, I suggest reading Domain Driven Design by Eric Evans. The goal is arriving at a model whose relationships are manageable via dependency injection. Presumably this leaves you with a set of high-level CFCs (services, if you will) with underlying business entities and persistence management. Their relationships are best managed by some sort of bean container / service locator, of which I believe ColdBox has its own; another example is ColdSpring.

The upshot of this effort is a model that's unit testable, independent of the user interface. If all of this is confusing, I'd suggest taking a look at Working Effectively with Legacy Code for some ideas on how to make this transition.

Once you have this in place it's now possible to think about a controller (e.g. ColdBox) and linking the model to views through it. However, study whatever controller carefully and choose it because of some capability it brings to the table that your application needs (caching is an example that comes to mind). Your views will likely need to be reimagined as well to interact with this new design, but what you should have is a system where the algorithms are now divorced from the UI, making the views' job easy.

Realistically, the way you tackle this problem is iteratively. Find one system that can easily be teased out in the fashion I describe, get it under unit tests, validate with people as well, and continue to the next system. While a tedious process, I can assure it's much less work than trying to rewrite everything, which invites disaster unless you have a very good set of automated validation ahead of time.


To reiterate, the tech is not going to solve your problem. Continued iteration toward more cohesive objects will.

Now as far as coupled data, with an ORM you've made a tradeoff, and monolithic systems do have their benefits. Another approach would be giving one stateful entity a reference to another's service object via DI, such that you retrieve it through that. This would enable you to mock it for the purpose of unit testing and replace it with a similar service object and corresponding entity to facilitate reuse in other contexts.

In terms of solving business problems (e.g. accounting) reuse is an emergent property where you write multiple systems that do roughly the same thing and then figure out how to generalize. Rarely if ever in my experience do you start out writing something to solve some business problem that becomes a reusable component.

For some reason, a couple of members of my team habitually starts method names with "Do"


public void DoReopenLeads()
public void DoProcessBaloney()

Now, I'm a "learn on the job" kind of a guy and haven't had any formal code training, so I don't know whether this is an industry accepted coding standard.

To my mind, it seems a bit dumb, as all methods "Do" something or other...

Coding standards for our team doesn't cover how to name methods (other than saying what the function does in fairly clear English)

Whilst this topic is somewhat opinionated, I would like to share some soft guidance I have found.

The DoFactorys C# Coding Standards and Naming Conventions only states that:

use noun or noun phrases to name a class

A rather useful guide of AvSol C# Guideline maintained by Dennis Doomen states that:

Name methods using a verb like Show or a verb-object pair such as ShowDialog. A good name should give a hint on the what of a member, and if possible, the why.

Also don't include And in the name of a method. It implies that method is doing more than one thing, which violates the single responsibility principle...

In the end, no such official guidelines exist written in stone. Your workplace and domain culture should drive the creation of your own guidelines.

For a good start, I suggest you read Domain-Driven Design: Tackling Complexity in the Heart of Software by Eric Evans. It contains a section called Ubiquitous Language which might help you learn how to create and evolve your shared language by closely consulting with your domain experts AND fellow coworkers. Your code can then follow this common language, so your codebase tells "stories" or use cases as you read it.

Another good reference on the UL is written by Andrew Whitaker, where he writes that:

Having a ubiquitous language improves the coherence of the code base and keeps everyone on the same page.

I am developing a transactional application in .NET and would like to get some input on how to properly encapsulate database access so that:

  • I don't have connection strings all over the place
  • I avoid multiple calls to the same stored procedure from different functions
  • and, WORSE, multiple stored procedures that differ only by a single column

I am interested in knowing whether using an ORM like NHibernate is useful, as it may just add another layer of complexity to a rapidly changing data model, and artifacts need to be produced on a tight schedule.

I am more interested in methods or patterns OTHER than ORM packages.

There are at least two widely accepted design patterns used to encapsulate data access:

  • repository (DDD)
  • DAO (Data Access Object)

For the sake of completeness, I suggest reading up on both patterns.
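As an illustration of the Repository flavor, here is a minimal sketch (in Java for brevity; the shape is the same in C#). All names are hypothetical. The point is that callers see a collection-like interface and never touch connection strings or stored procedures:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Optional;

// Hypothetical Order entity used by the repository below.
class Order {
    final long id;
    final String customer;
    Order(long id, String customer) { this.id = id; this.customer = customer; }
}

// The repository interface is all the rest of the code depends on.
interface OrderRepository {
    Optional<Order> findById(long id);
    List<Order> findByCustomer(String customer);
    void save(Order order);
}

// An in-memory implementation; a production one would wrap ADO.NET,
// JDBC, or an ORM session behind this same interface.
class InMemoryOrderRepository implements OrderRepository {
    private final Map<Long, Order> byId = new HashMap<>();

    public Optional<Order> findById(long id) {
        return Optional.ofNullable(byId.get(id));
    }

    public List<Order> findByCustomer(String customer) {
        List<Order> result = new ArrayList<>();
        for (Order o : byId.values())
            if (o.customer.equals(customer)) result.add(o);
        return result;
    }

    public void save(Order order) { byId.put(order.id, order); }
}
```

Swapping the in-memory implementation for a database-backed one requires no changes to the callers, which is exactly the encapsulation the question asks for.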

I'd like to ask a question about DDD. Let's say we have two aggregates, and each of them contains the value object Address. According to Eric Evans's DDD, we should isolate aggregates from each other, so the aggregate root of the first aggregate can't have a link to Address. Frankly, that doesn't seem to make sense to me, so my question is: how do I resolve such a situation? Which aggregate should contain Address?


A Value Object is an object that describes some characteristic or attribute but carries no concept of identity.

Since it does not have conceptual identity, you cannot 'reference' or 'link to' it. You can only 'contain' it. Let's say you have a User and the user has an Age. Age is a value object. If John is 25 years old and Jane is also 25, they do not 'reference' the same Age. John's Age is simply equal to Jane's Age. So if your Address is indeed a Value Object, then you are not violating any Aggregate boundaries. Your aggregate roots simply have equal addresses. Even if you technically have a java/c# reference to an Address, it does not matter, because Value Objects are immutable most of the time.
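The Age example can be made concrete with a tiny sketch: a Value Object is immutable and compared by its attributes, never by reference identity:

```java
import java.util.Objects;

// A Value Object: no id, immutable, equal whenever its attributes are equal.
final class Age {
    private final int years;

    Age(int years) { this.years = years; }

    @Override
    public boolean equals(Object other) {
        return other instanceof Age && ((Age) other).years == this.years;
    }

    @Override
    public int hashCode() { return Objects.hash(years); }
}
```

John's Age and Jane's Age are two distinct objects in memory, yet `equals` reports them as the same value, which is all that matters for a Value Object.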

It is hard to answer your question, though, without knowing what domain you are working on. But generally, Address does not necessarily have to be a Value Object. Eric Evans mentions in his book that the Postal Service and Delivery Route domains would treat Address as an Entity. An electrical company that sends out technicians needs to realize that two service calls from '123 Elm St' are actually coming from the same address, so it only needs to send one technician. Address, or 'Dwelling', is an Entity in this case.

I do not have the Eric Evans' Domain-driven design book on me, but it says essentially

External objects may not hold a reference to an entity that is internal to the aggregate. External objects must reference the aggregate root only, no internal objects.

For example, my Team aggregate root has a method called AddPlayer() that returns the Player entity added. Does this mean I am violating the rule, or is it saying I cannot pull a Player entity out of thin air, for example by pulling it from a repository outside of its aggregate boundary?

This is always a tricky issue and is most likely an indication that your design does not quite fit your domain. I have a bit to say about this on my blog (should you be interested):

You have a Team and you have a Player. That would be 2 Aggregate Roots. Making Team the Aggregate Root and Player just a contained entity is what is probably causing you pain. In real life a player need not belong to a team and it also depends on what your 'team' is. Is it just the collective name, or the current members, or the actual folks that can take to the field on a given day?

So you will probably end up with different 'teams':

  • Team
  • Squad
  • GameSquad

So the players are not necessarily part of the aggregate; rather, the aggregate can have some kind of ownership, probably with a rather weak reference to the player (such as only an ID or some value object). Something to that effect.
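That weak reference can be sketched like this (hypothetical names, Java for brevity): Team and Player are separate Aggregate Roots, and the Team records membership by id only, never by holding the Player entity itself:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Player is its own Aggregate Root with its own lifecycle.
class Player {
    final long id;
    final String name;
    Player(long id, String name) { this.id = id; this.name = name; }
}

// Team references players only by id: a weak reference that does not
// pull the Player entity inside the Team aggregate boundary.
class Team {
    private final List<Long> playerIds = new ArrayList<>();

    public void addPlayer(Player player) {
        if (!playerIds.contains(player.id)) playerIds.add(player.id);
    }

    public List<Long> playerIds() {
        return Collections.unmodifiableList(playerIds);
    }
}
```

To get the actual Player entities back, you would go through the Player repository with those ids, which keeps both aggregates independently retrievable.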

But to get back to what Eric refers to in his book: I think it relates to something like this (using your mould):

var line = Order.AddLine(SomeProduct);

Here it shouldn't make too much sense to keep a reference to the actual entity within the aggregate, as it has no lifecycle of its own. Well, in this case the order line is not even an entity.

There has also been some discussion around whether repositories return only ARs or also entities (that, in some repositories, are ARs). According to what I have found in the blue book, you can retrieve an entity from a repository.

Anyway. Just some thoughts. HTH :)

I'm developing an Android app and use Dagger 2 to inject some objects as singletons in my activities/fragments. Some of the objects are loaded from database.

So is it possible to load the database object in the background and inject it as soon as it is available? Or is it no problem if I just load it when the singleton is initialized by dagger? Alternatively I just could pass the reference to the activities/fragments and load the object there.

What is your approach to this problem?

This is an interesting question, because it touches on what I think is the first problem many people starting with Dependency Injection (DI) will face: what types of objects should I inject, what should I new up, and in your case, what should I pass around manually?

When you use dependency injection (and presumably also unit testing, but that's a different story), it's important that you understand the classification of the types of objects/classes you're designing:

  1. Services: objects that perform some action, like business logic.
    • these are the things you'd want to mock in unit tests.
  2. Value Objects (for the sake of our discussion, also includes DTOs and Entities.. think POCO or POJO): these are objects that hold information. Typically immutable model objects. Value objects don't have any dependencies on Services, i.e. you'd never want to inject anything into them.
    • You never mock these kind of objects in unit tests! You use concrete types and ideally Test Data Builders to create them.

Note: these are my quick interpretation of these terms. If you read a book on Domain Driven Design you will find much more precise definitions, but I think this should suffice for the purpose of discussing DI.

Misko Hevery (father of AngularJS ;-) mentions that these terms, like "service", are overloaded, especially in Android, where "service" has a specific meaning, so he calls Value Objects and Services "newables" and "injectables" respectively. I think this is good advice.

To apply these concepts to your case: you'd have some class that queries the database for the object in question.

Let's say the object you're talking about is Student, and it might have some immutable fields, something vaguely like:

class Student {
    public final long id;
    public final String firstName;
    public final String lastName;
    public final String email;

    public Student(long id, String firstName, String lastName, String email) {
        this.id = id;
        this.firstName = firstName;
        this.lastName = lastName;
        this.email = email;
    }
}
And then you'd have some object that queries Student entries from the database, let's say something like this StudentRepository here:

class StudentRepository {
    public List<Student> findAll() {
        // db access here...
        return new ArrayList<>(); // placeholder result
    }
}

In this example, Student is a Value Object (newable), and StudentRepository is a Service (injectable).

In your code, you'd only want to use Dagger to inject the StudentRepository -- but you'd never inject a Student...

It's difficult to give further advice without knowing more details about what you're doing, but hopefully this answers your question: you'd have to pass the database Entity read from the database to wherever it's needed, you shouldn't have to inject it anywhere.
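To make the injectable/newable split concrete, here is a framework-free sketch using plain constructor injection, which is essentially what Dagger's @Inject does for you. The presenter class and the trimmed-down Student/StudentRepository are hypothetical illustrations, not part of the Dagger API:

```java
import java.util.Arrays;
import java.util.List;

// Newable: a plain value holding data, never injected.
class Student {
    final String firstName;
    Student(String firstName) { this.firstName = firstName; }
}

// Injectable: the service dependency; in the real app it queries the db.
interface StudentRepository {
    List<Student> findAll();
}

// The consumer declares its service dependency in the constructor,
// but never asks the injector for a Student itself.
class StudentListPresenter {
    private final StudentRepository repository;

    StudentListPresenter(StudentRepository repository) {
        this.repository = repository;
    }

    public int studentCount() {
        return repository.findAll().size();
    }
}
```

Because the repository arrives through the constructor, a test can hand the presenter a fake repository, while Student values are simply newed up and passed around.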

I'm guessing most of us have to deal with this at some point so I thought I'd ask the question.

When you have a lot of collections in your BLL and you find that you're writing the same old inline (anonymous) predicates over and over, there's clearly a case for encapsulation, but what's the best way to achieve it?

The project I'm currently working on takes the age-old, answer-all static class approach (e.g. a User class and a static UserPredicates class), but that seems somewhat heavy-handed and a little bit of a cop-out.

I'm working in C# mostly, so keeping to that context would be most helpful, but I think this is a generic enough question to warrant hearing about other languages.

Also I expect there will be a difference in how this might be achieved with the advent of LINQ and Lambdas so I'd be interested in hearing how this could be done in both .Net2.0 and 3.0/3.5 styles.

Thanks in advance.

A Predicate is essentially just an implementation of the Specification design pattern. You can read about the Specification pattern in Domain-Driven Design.
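A minimal sketch of the Specification pattern (written in Java here; in C# the predicate body would be a delegate or `Func<T, bool>`). The User class and the two rules are hypothetical examples of predicates you would otherwise keep rewriting inline:

```java
// A Specification is an encapsulated, named, composable predicate.
interface Specification<T> {
    boolean isSatisfiedBy(T candidate);

    // Composition replaces ever-longer anonymous predicates.
    default Specification<T> and(Specification<T> other) {
        return candidate -> this.isSatisfiedBy(candidate)
                && other.isSatisfiedBy(candidate);
    }
}

class User {
    final int age;
    final boolean active;
    User(int age, boolean active) { this.age = age; this.active = active; }
}

// Named rules live in one place instead of being scattered through the BLL.
class UserSpecifications {
    static final Specification<User> ADULT = u -> u.age >= 18;
    static final Specification<User> ACTIVE = u -> u.active;
}
```

A query then reads as `ADULT.and(ACTIVE)` rather than a duplicated anonymous lambda, and each rule can be unit tested on its own.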

I'm not sure for which use cases one should use DI in an application. I know that injecting services like PlaceService or CalculationService etc. fits very well, but should I also create my domain objects with DI, like a User? What if the User has only one constructor which requires a first and last name? Is this solvable with DI?

Should I use DI to create the instances for Set/List interfaces or is this pure overkill?

I use guice primarily.

The answer by ig0774 is a good starting point. In addition, I would like to offer this rule of thumb:

In the terminology of Domain-Driven Design, you should use DI for services, but not for entities or value objects.

In other words, DI fits well with conceptually long-lived, stateless objects of which there are usually one or a known number in use.

In Domain Driven Design, domain objects fall into two categories: entities and value objects. It is very easy to make out which is an entity and which is a value object, but I don't know why we have to do that. Before the advent of DDD, we modeled the domain without entities and value objects. After DDD was put forward, we use entity and value object to classify domain objects, so what is the advantage of such a classification?

Before the advent of DDD, we modeled the domain without entities and value objects. After DDD was put forward, we use entity and value object to classify domain objects, so what is the advantage of such a classification?

You should review Chapter 5 ("A Model Expressed in Software") of the Blue Book.

Evans writes:

Defining objects that clearly follow one pattern or the other makes the objects less ambiguous and lays out the path toward specific choices for robust design.

Tracking the identity of ENTITIES is essential, but attaching identity to other objects can hurt system performance, add analytical work, and muddle the model by making all objects look the same.

... bidirectional associations between two VALUE OBJECTS just make no sense. Without identity, it is meaningless to say that an object points back to the same VALUE OBJECT that points to it. The most you could say is that it points to an object that is equal to the one pointing to it, but you would have to enforce that invariant somewhere.

My own summary would be this: recognizing that some domain concepts are entities is useful, because it encourages the designer to acknowledge that identity, continuity, and life cycle are important concerns for the entity in this domain. Similarly, recognizing that a concept is a value immediately frees you from those concerns, bringing your attention to their immutable nature and equivalence.

In MVC, should 1 model map to 1 table, or can 1 model span several tables?

I am building an application that contains 3 tables. I don't know whether I should create one model for all 3 tables or create 3 models, one per table.

In the case where I use 3 models for 3 tables, where should I put the code if I want to join these 3 tables? In any one of the 3 models?

Any suggestions?

In general, the 'Model' part of MVC should be interpreted as a 'Presentation Model' or 'View Model' - that is, a class that encapsulates all the data and behavior needed by the View. This may or may not be equivalent to the Domain Model.

Domain Models should be designed to be independent of UI. This means that such Models should not be polluted with UI-specific data and behavior - such as determining whether a particular button is enabled or not.

You may also want to show the same Domain Objects in several different views (e.g. Master/Detail, or Display/Edit), and if those views differ sufficiently, having a View Model for each View will be beneficial.

So, in general, you should design your Domain Layer and your Presentation Layer independently.

In the Domain Layer, you can choose to model your three tables as three classes. Books like Fowler's Patterns of Enterprise Application Architecture and Evans's Domain-Driven Design contain lots of guidance on how to model relational data as Domain Models.

When it comes to modeling the views in MVC, it makes most sense to create one Model per View. Such a View Model might simply encapsulate a single Domain Object, but it may also encapsulate and aggregate several different Domain Objects.
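A small hypothetical sketch (Java for brevity) of such a View Model: it aggregates two Domain Objects into exactly the shape one view needs and carries a UI-only flag that the domain layer should never contain:

```java
import java.util.List;

// Domain Objects: UI-agnostic.
class Customer {
    final String name;
    Customer(String name) { this.name = name; }
}

class Order {
    final double total;
    Order(double total) { this.total = total; }
}

// View Model: flattens several Domain Objects for one specific view
// and owns presentation concerns like button state.
class CustomerDetailViewModel {
    final String customerName;
    final int orderCount;
    final boolean deleteButtonEnabled; // UI concern lives here, not in the domain

    CustomerDetailViewModel(Customer customer, List<Order> orders) {
        this.customerName = customer.name;
        this.orderCount = orders.size();
        // Example presentation rule: only deletable while no orders exist.
        this.deleteButtonEnabled = orders.isEmpty();
    }
}
```

A Display view and an Edit view for the same Customer would each get their own such class, keeping the Customer domain object itself free of UI state.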

In this way, you can ensure separation of concerns, and that your classes follow the Single Responsibility Principle.

For very simple scenarios, it may make sense to collapse the Domain Model and the Presentation Model into one layer, but you should realize that this essentially means that there's no Domain Model in the solution - all models would be pure Presentation Models.

In old versions of Entity Framework, if I had more than 50 DbSets in a DbContext, I would definitely separate them using the bounded context concept, based on most of the tips and tricks I got from blog posts. Now my question: in .NET Core, if I have 100 tables, can Entity Framework Core handle 100 tables or more in a single DbContext, or should I separate it so that the processing load is not heavy? Thanks.

It's not necessarily a performance issue, as the DbContext uses IQueryable for your DbSets, so each 'table' isn't materialized until you call ToList(), First(), etc. I have worked on projects (not my design) that used over 100 tables in a single context, and the bottlenecks were never from a large DbContext. It just made the whole project unwieldy and not something I would want to do again.

For larger projects, whether it's .NET Core or .NET <= 4.5, I would definitely go the bounded context route, structured from a DDD standpoint. Julie Lerman has written many blog posts about this, and of course there is the blue book by Eric Evans, which is the DDD bible.

Another DDD disciple is Jimmy Bogard, who expands on the DDD principles by delving into Value Objects, Aggregates, and Aggregate Roots.

The short answer (too late) is that no, you don't have to worry about 100 objects in your DbContext from a performance standpoint. But by using DDD - Aggregates, Value Objects, Bounded Contexts, etc. - you may find that the number of tables required shrinks and your project becomes much cleaner, with greater separation of concerns.

I USED to be a developer and part-time db designer, but it's been many years and I'm re-learning...

I'm building a web app and have a Person (actually "Profile") table with a number of child tables which have N:N relationships with my Person table, e.g. FavoriteSports, MusicalInstruments, ArtisticSkills.

After reading through Scott Mitchell's excellent data access tutorials, I want to design my app with well-thought-out Data Access (using Table Adapters) and Business Logic (using Classes) layers.

Where I get hung up is in designing the Classes and Table Adapters. The "Unit of work" here is something more like a real-life object, that doesn't align directly with my table-based classes or data access objects. So I guess what I'm seeing is a need to build a class modeled not around the tables, but instead around the real-life object (my Profile+FavoriteSports+MusicalInstruments+ArtisticSkills).

What I'm really looking for is a good book or web site that describes how to do this. Specifically, how can I build a class that supports these child records (as collections?). I think I understand the concepts - I just need some guidance on how to put it into practice.

The best answer I could ask for would point me at a book (C#, preferably) that goes into all this in all its complexity - pretty sure I'm way past the "Beginners" books with this.

Thanks -


Unfortunately, one book might not be enough. I recommend these books to get back up to speed on the two worlds of RDBMS and OO. Tables and business entities may not map 1:1!

Patterns of Enterprise Application Architecture

Domain-Driven Design: Tackling Complexity in the Heart of Software

.NET Domain-Driven Design with C#: Problem - Design - Solution

Hope that helps.
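As a taste of what those books cover, here is a minimal sketch (my own illustration, not from any of the books) of the kind of class the question describes: a Profile modeled around the real-life object rather than the tables, with N:N child data exposed as collections and modified only through the root object:

```csharp
using System.Collections.Generic;

public class Profile
{
    private readonly List<string> favoriteSports = new List<string>();

    public string Name { get; set; }

    // Read-only view of the child collection; callers must go
    // through AddFavoriteSport to change it.
    public IReadOnlyCollection<string> FavoriteSports
    {
        get { return favoriteSports; }
    }

    public void AddFavoriteSport(string sport)
    {
        // Keep the collection consistent (no duplicates).
        if (!favoriteSports.Contains(sport))
        {
            favoriteSports.Add(sport);
        }
    }
}
```

MusicalInstruments and ArtisticSkills would follow the same pattern; the books discuss how to map such collections back to the child tables.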

I've got a question on my mind that has been stirring for months as I've read about DDD, patterns, and many other topics of application architecture. I'm going to frame this in terms of an MVC web application, but the question is, I'm sure, much broader. And it is this: does adherence to domain entities create rigidity and inefficiency in an application?

The DDD approach makes complete sense for managing the business logic of an application and as a way of working with stakeholders. But to me it falls apart in the context of a multi-tiered application. Namely, there are very few scenarios where a view needs all the data of an entity, or where even two repositories have it all. In and of itself that's not bad, but it means I make multiple queries returning a bunch of properties I don't need to get the few that I do. And once that is done, the extraneous information either gets passed to the view, or there is the overhead of discarding, merging and mapping data to a DTO or view model. I need to generate a lot of reports, and the problem seems magnified there. Each requires a unique slicing or aggregating of information that SQL can do well but repositories can't, as they're expected to return full entities. It seems wasteful, honestly, and I don't want to pound a database and generate unneeded network traffic on a matter of principle. From questions like Should the repository layer return data-transfer-objects (DTO)? it seems I'm not the only one to struggle with this question. So what's the answer to the limitations it seems to impose?

Thanks from a new and confounded DDD-er.  

as I've read about DDD, patterns and many other topics of application architecture

Domain-driven design is not about patterns and architecture but about designing your code according to the business domain. Instead of thinking about repositories and layers, think about the problem you are trying to solve. The simplest way to "start rehabilitation" would be to rename ProductRepository to just Products.

Does the adherence to domain entities create rigidity and inefficiency in an application?

Inefficiency comes from bad modeling. [citation needed]

The DDD approach makes complete sense for managing the business logic of an application and as a way of working with stakeholders. But to me it falls apart in the context of a multi-tiered application.

Tiers aren't layers

Namely there are very few scenarios when a view needs all the data of an entity or when even two repositories have it all. In and of itself that's not bad but it means I make multiple queries returning a bunch of properties I don't need to get a few that I do.

Query that data as you wish. Do not try to box your problems into some "ready-made solutions". Instead - learn from them and apply only what's necessary to solve them.

Each requires a unique slicing or aggregating of information that SQL can do well but repositories can't as they're expected to return full entities.

So what's the answer to the limitations it seems to impose?


By the way, the internet is full of things like this (I mean that sample app).
To understand what DDD is, read the blue book slowly and carefully. Twice.

My company has a fairly old fat client application written in Delphi. We are very interested in replacing it with a shiny new web application. This will make maintenance a breeze and many clients want a web application.

The application is extremely rich in domain knowledge, some of which is out of our control. Our clients use the program to manage their own clients and report them to the government. So an inaccurate program is a pretty big thing. The old program has no tests. We are not sure yet if we will implement automated testing with the new one.

We first planned to basically start from scratch. But we are short handed and wanting to basically get everyone on the web as soon as possible. So instead of starting from scratch we've decided to try to make use of the legacy fat-client database.

The database is SQL Server and can be used in SQL Server 2008 easily. It is very rich in stored procedures, functions, a few triggers, and lots of tables with over 80 columns... But it is decently normalized. We want for both the web application and fat client to be capable of using the same database. This is so that if something breaks badly in the web application, our clients can still use the fat client and connect to our servers. After the web application is considered "stable", we'd deprecate the fat client.

Has anyone else done this? What tips can you give? We want to, after getting everyone on the website, to slowly change the database structure to take care of some design deficiencies. What is the best way to keep this in a data access layer so that later changes are easy?

And what about actually making the screens? Is there any way easier than just rewriting an 80 field form in ASP.Net? Are there any tools that can make this easier?

The current plan is to use ASP.Net WebForms (.Net 3.5). I'd really like to use MVC, but no one on the team knows it including me.

I have a couple suggestions:

1) Create a service layer to abstract away the dependence on the DAL. In a situation like the one you describe, having a layer of indirection for the UI and BLL to rely on makes DB changes much safer.

2) Create automated tests (both unit and integration), especially if you plan on making fairly significant changes to the Domain or Persistence layers (BLL/DAL). To make this really easy you should always try to program to an interface. This makes your code more flexible, as well as letting you use mocking frameworks (Moq is one I like) to ensure your tests truly are unit tests and not integration tests.
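To illustrate point 2, here's a minimal sketch of programming to an interface (all names here are invented for the example); a hand-rolled stub stands in for what a framework like Moq would otherwise generate:

```csharp
// The service depends only on an abstraction of the DAL.
public interface ICustomerRepository
{
    int CountActive();
}

public class ReportService
{
    private readonly ICustomerRepository repository;

    // The dependency is injected, so tests can substitute a fake.
    public ReportService(ICustomerRepository repository)
    {
        this.repository = repository;
    }

    public string BuildHeadline()
    {
        return repository.CountActive() + " active customers";
    }
}

// Hand-rolled test double; a mocking framework would generate this.
public class StubCustomerRepository : ICustomerRepository
{
    public int CountActive() { return 42; }
}
```

A unit test can now exercise ReportService without touching the database at all.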

3) Take a look at DDD, as it seems to fit pretty well with the given scenario. At the very least there are some very useful patterns that can help make your application more flexible.

4) MVC isn't very hard to learn at all. It is, however, an easy way to get unit testing set up for the UI, as a result of the MVC architecture (testing the controller and not the view). That said, there is no reason you couldn't unit test Web Forms; it's just a bit more work. MVC really is just a UI framework/design pattern (more Model2, but we can ignore that for now). It gets you closer to the metal, so to speak, as you will be writing a lot more HTML and using a Model (the 'M') for passing data around.

For DDD, take a look at Eric Evans's book:

Hope that helps

Someone in another question told me to implement the following in my Java program, but I am very new to Java and I don't know how to start, or how to convert my simple program into that category:

Data Access Layer (read/write data)
Service Layer (isolated business logic)
Controller (Link between view and model)
Presentation (UI)
Dependency injection
Programming to an interface

Now, does that come as part of some framework? I mean, should I start learning Spring, and will all these things come with it, so that I don't need to learn them one by one?

Or can I implement the above techniques one by one, without a framework?

Any tutorials for that?

In short:

  • Data Access Layer is the module of your application that provides an interface to your data. The data may be in an SQL database, XML, a file, wherever. You write interfaces and classes that provide access to the data, usually as VOs or DTOs via DAOs
  • Service Layer contains most of the use-case logic. The service layer interacts with the Data Access Layer to perform the tasks of a given use case. I did not find a good introductory article on the service layer. You may see here and there
  • Controller is the one that interacts with Service Layer and/or Data Access Layer and/or other controllers in order to perform a specified client's tasks.

    For example, a sign-off button controller will request a sign-off action/service to invalidate the user's sessions on all services the user is logged on to, then it will choose an appropriate view or log-off web page to forward the user to.
  • Presentation is your user interface. It can be a web page made of HTML, a Java Swing window, or anything else the user interacts with. GUI is the commonly known term for it. This is what your users will interact with using mouse clicks, scrolls, swipes, and drag-and-drop. These actions are mapped to controllers, which perform actions based on what the user did in the UI.
  • Dependency Injection is a way to wire the various components together. There are a lot of resources on the web; you can look at this article by Martin Fowler. It's basically a mechanism that allows components to behave much like plug-and-play devices, if you know what plug goes where.

    Spring is a good implementation of dependency injection. You may not want to write your own framework, and at this stage, you should rather not. There is a Spring MVC framework that can do things for you.

But I suggest you start from the very basics. Instead of jumping into jargon, read from the ground up. Start with a good book on application development using Java. You can also look into

You might want to check out Domain-Driven Design. The code samples are in Java. The things you listed are design-related more than tied to any specific technology.


I'm working on a solution that has many assemblies. The main assembly references a DAL assembly with a large EF model. I am working on a DLL that contains its own smaller EF model. Both models will connect to the same database. The DLL that I am working on will return data to the main assembly, but it doesn't necessarily have to return entities from its model.


Is it better for each sub-assembly to contain its own small model or should they all share the same large model?


  • On one hand, if I shared the main assembly's model, the sub-assembly could return entities to the main assembly.
  • On the other hand, sharing one large model couples each assembly to that model. It seems like this would increase the chance that a change to that model could break a sub-assembly. I may not be able to safely make useful changes to the main model in fear of breaking one of the sub-assemblies.


  1. Ray Vernagus had some good points (I think) about setting clearly defined boundaries around your models. I really like this idea. I am kind of doing this already by having a separate model in my subassembly, since my subassembly has a clearly defined scope. Is this enough?

  2. Consider the situation where all of the domain models were in the same DAL assembly and many of the entities were based on the same tables and had the same names. Besides needing to be in separate namespaces, would this be a bad idea?

Eric Evans describes this situation aptly in his book, Domain Driven Design. His recommendation is to set boundaries around your models and to explicitly define the scope within which they apply. This is known as Bounded Context and Context Map.

It sounds like you need to be explicit about whether or not you want to have one common Domain Model or whether each DAL assembly should be bounded to its own Model. If you want one central Domain Model, you may want to consider defining such in your main assembly and then have your DAL assemblies communicate with it through that model. Otherwise, you can keep to the separate models per DAL assembly but define explicit Bounded Contexts.

Hope that helps!

Suppose I've built a system for my client that allow gamblers to maintain a portfolio of bets and track their gains/losses over time. This system supports a lot of complex domain logic - bets on different sports, rolling over wins to other bets etc.

Next my client wants to support the idea of tipsters. The tipsters do not actually gamble, instead they create "tip sheets", which are their tips on what bets to place. The tip sheets can be of different kinds - some can include tips on any bettable event, others only offer tips on horse racing, and so on. My client wants the system to track the performance of tipsters in the same way as it tracks the performance of gamblers, with the additional twist of being able to compare performance within and across different kinds of tipster (e.g. who is the best horse racing tipster? do they in general perform better than football tipsters?)

Now, the domain language is completely different between gamblers and tipsters, and there is the additional categorisation of tip sheets that doesn't exist for gamblers' portfolios. This suggests these are separate bounded contexts. However, there is a lot of shared logic as they both track performance over time.

So my questions are:

  1. Are these really separate bounded contexts? I'm wary of adding categorisation to the gamblers context (feels like a slippery slope).
  2. If they are distinct contexts, should I:
    • Share performance tracking logic between them (i.e. share DLLs, jars etc)? This creates a tight implementation coupling between the contexts which feels wrong.
    • Leave the performance tracking logic in the gambling bounded context, place the categorisation logic in the tipster bounded context, and have it ask the gambling context to track performance? In this case, it seems like the tipster context will send commands to the gambling context, which again feels wrong (I'm more comfortable with events).
    • Do something else...some kind of composition layer that communicates and correlates between both contexts?


A gambler's portfolio and a tipster's tip sheet are almost identical - the only difference is that the tip sheet can be categorised (e.g. horse racing, football etc).

Performance tracking is about measuring the profit/loss of the portfolio/tip sheet.

  1. If you see two clearly separate models, with only technical overlap, then I would agree that you have two BCs. However, be aware that having multiple BCs, especially when they need to communicate, can be a bit "expensive". It is much "cheaper" to employ modules, which is why you should not take the decision to have multiple BCs lightly.

  2. The blue book, Part 4 (Strategic Design), Chapter 15 (Distillation), introduces the notion of a generic subdomain, which fits nicely into your scenario. Performance calculations can be regarded as a generic subdomain because, while they are essential to the overall functioning of your model, they can be isolated into a library which can be used by both BCs. This is a pattern for distilling your model and keeping technical concerns abstracted away. You don't need to implement complex messaging or RPC communication between BCs; just use a shared library with intention-revealing interfaces.
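As a sketch of such a shared library (Stake and PerformanceCalculator are illustrative names, not from your domain):

```csharp
using System.Collections.Generic;
using System.Linq;

// Immutable value representing one bet or tip outcome.
public struct Stake
{
    public decimal Amount { get; }
    public decimal Return { get; }

    public Stake(decimal amount, decimal returned)
    {
        Amount = amount;
        Return = returned;
    }
}

// The generic subdomain, distilled into an intention-revealing
// interface both bounded contexts can reference.
public static class PerformanceCalculator
{
    // Profit/loss over a gambler's portfolio or a tipster's tip sheet.
    public static decimal ProfitLoss(IEnumerable<Stake> stakes)
    {
        return stakes.Sum(s => s.Return - s.Amount);
    }
}
```

Both the gambling and the tipster context map their own entities onto Stake, so neither context depends on the other's model.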

This is one thing that has been bugging me for a while about DDD. I can clearly see the benefits of the approach when dealing with non-technical business domains with complex models and lots of interaction required between technical people and non-technical domain experts.

However, what about when the 'domain' in question is technical?

For instance, situation A) take a web start-up. Imagine they are trying to accomplish something quite complicated (say a facebook clone), but almost all of the staff are technical (or at least have strong technical understanding).

What about situation B) A similar situation, but with a slightly less ambitious project, and a lone developer trying to create something with an elegant architecture.

I'd be really interested to hear what people have to say. What I'm really trying to get to the meat of is where the benefits of DDD lie, what the downsides might be, and at what point one outweighs the other...

DDD is really just an elaboration of the design pattern Fowler calls Domain Model in Patterns of Enterprise Application Architecture. In that book, he compares Domain Model to other ways of organizing code, such as Transaction Script, but it is clear that he prefers Domain Model over other alternatives for all but the simplest of applications. I do, too.

DDD simply expands greatly on the original concept of a Domain Model and provides tons of guidance on how we should analyze and model our domain in a way that will be beneficial to ourselves as developers.

If the Domain in question is complex, then a Domain model (and hence DDD) is a good choice. It doesn't really matter whether the Domain is business-oriented or more technical in nature. In his book Domain-Driven Design, Eric Evans starts by describing how the DDD techniques helped him model a Printed Circuit Board application. That is surely a technical Domain, if any!

In my current work, we are using DDD to model an application that deals with Claims-based identity - another very technical Domain.

DDD is really just about dealing with complexity in software, and this is also the subtitle of Evans's book: "Tackling Complexity in the Heart of Software."

What is the best way and why?


try
{
    var service = IoC.Resolve<IMyBLService>();
    service.Do();
}
catch (BLException ex)
{
    //Handle Exception
}


var service = IoC.Resolve<IMyBLService>();
var result = service.Do();
if (!result.Success)
{
    //Handle exception
}

Exceptions are better in my opinion. I think that DDD code is first and foremost good object-oriented code, and the debate about using exceptions vs return codes in OO languages is mostly over. In a DDD context I see the following benefits of using exceptions:

  • They force calling code to handle them. Exceptions don't let client code forget about the error, whereas calling code can simply forget to check result.Success.

  • Both throwing and handling code are more readable, natural and brief, in my opinion. No 'ifs', no multiple return statements. No need to bend your Domain Services to be exposed as 'operations'.

  • DDD, in my opinion, is all about using a plain OO language to express specific business problems, keeping infrastructure out as much as possible. Creating 'OperationResult' class(es) seems too infrastructural and generic to me, especially when the language already supports exceptions.

  • Domain objects will throw exceptions anyway, even if only for checking arguments. So it seems natural to use the same mechanism for domain services.

It may also be worth looking at the design itself, maybe there is a way to not get into error state in the first place? For example the whole class of 'validation' error conditions can be eliminated by using Value Objects instead of primitive strings and ints.
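For example, a sketch of such a Value Object (EmailAddress is just an illustration, not something from the question):

```csharp
using System;

// A Value Object that makes the invalid state unrepresentable:
// once an EmailAddress exists, no further validation is needed.
public sealed class EmailAddress
{
    public string Value { get; }

    public EmailAddress(string value)
    {
        // The invariant is enforced once, at construction.
        if (string.IsNullOrWhiteSpace(value) || !value.Contains("@"))
        {
            throw new ArgumentException("Not a valid e-mail address.");
        }
        Value = value;
    }
}
```

Code that accepts an EmailAddress instead of a raw string can never see an invalid address, so a whole class of 'validation' error paths disappears.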

DDD is an approach, a set of guidelines, so there is no 'right' way. The book never mentions this issue directly, but the code in the snippets and the sample project uses exceptions.

Alternatively, is basic entity validation considered a specification(s)?

In general, is it better to keep basic entity validation (name cannot be null or empty, date must be greater than xxx) in the actual entity, or outside of it in a specification?

If in a specification, what would that look like? Would you have a spec for each field, or wrap it all up in one EntityIsValid type spec?

It seems to me that once people have learned a little about DDD, they pick up the Specification pattern and look to apply it everywhere. That is really the Golden Hammer anti-pattern.

The way I see a place for the Specification pattern, and the way I understood Domain-Driven Design, is that it is a design pattern you can choose to apply when you need to vary a business rule independently of an Entity.

Remember that DDD is an iterative approach, so you don't have to get it 'right' in the first take. I would start out with putting basic validation inside Entities. This fits well with the basic idea about OOD because it lets the object that represents a concept know about the valid ranges of data.

In most cases, you shouldn't even need explicit validation because Entities should be designed so that constraints are represented as invariants, making it impossible to create an instance that violates a constraint.

If you have a rule that says that Name cannot be null or empty, you can actively enforce it directly in your Entity:

public class MyEntity
{
    private string name;

    public MyEntity(string name)
    {
        if (string.IsNullOrEmpty(name))
        {
            throw new ArgumentException();
        }
        this.name = name;
    }

    public string Name
    {
        get { return this.name; }
        set
        {
            if (string.IsNullOrEmpty(value))
            {
                throw new ArgumentException();
            }
            this.name = value;
        }
    }
}
The rule that the name cannot be null or empty is now an invariant of the class: it is impossible for a MyEntity instance to get into a state where that rule is broken.

If later on you discover that the rule is more complex, or shared between many different concepts, you can always extract it into a Specification.

Can anyone please give me an example of a situation in a database-driven application where I should use the Flyweight pattern?

How can I know when I should use the Flyweight pattern at some point in my application?

I have learned the Flyweight pattern, but I am not able to identify an appropriate place to use it in my database-driven business applications.

You should apply any pattern when it naturally suggests itself as a solution to a concrete problem - not go looking for places in your application where you can apply a given pattern.

Flyweight's purpose is to address memory issues, so it only makes sense to apply it after you have profiled an application and determined that you have a ton of identical instances.

Colors and Brushes from the Base Class Library come to mind as examples.

Since a very important part of Flyweight is that the shared implementation is immutable, good candidates in a data-driven application would be what Domain-Driven Design refers to as Value Objects - but it only becomes relevant if you have a lot of identical values.
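A minimal sketch of the pattern (Color is the classic illustrative example, not taken from the answer's Base Class Library types):

```csharp
using System.Collections.Generic;

// Flyweight: the factory method returns one shared immutable
// instance per distinct name instead of allocating a new object
// every time.
public sealed class Color
{
    private static readonly Dictionary<string, Color> cache =
        new Dictionary<string, Color>();

    public string Name { get; }

    private Color(string name)
    {
        Name = name;
    }

    public static Color Get(string name)
    {
        Color color;
        if (!cache.TryGetValue(name, out color))
        {
            color = new Color(name);
            cache[name] = color;
        }
        return color;
    }
}
```

Because Color is immutable, handing the same instance to many callers is safe; that immutability is what makes Value Objects natural flyweight candidates.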

DDD Newbie question:

I read in a blog somewhere that in a scenario where objects are closely associated with each other in a domain-driven design, and where one object, based on some complicated business rule, is responsible for the creation of a dependent object, the usefulness of dependency injection is very limited.

Would you agree?

No, I wouldn't agree.

The whole purpose of DDD is to arrive at an expressive model that facilitates change. It is accepted as a given that business logic often changes, so the model must be flexible enough to enable a quick change of direction in the face of changing requirements or new insight.

As Uncle Bob writes in Clean Code, the only way to enable a flexible and expressive API that can quickly address unprecedented change is to use loose coupling. Loose coupling is achieved through the Dependency Inversion Principle; from there, the connection to DI follows naturally.

As I read Domain-Driven Design, this was always the underlying motivation behind all the talk about Factories, but I personally find the book a little vague there.

From a post I read it seems that Entity is just a subset of Aggregate. I've read about the two patterns in both Domain-Driven Design and Implementing Domain-Driven Design, and I'm trying to understand the UML difference between them.

Let's consider a simple class. It's a Letter holding a message, a receiver, and possibly the sender.

[UML class diagram of the Letter class, holding a message, a receiver, and possibly the sender]

I guess this Letter class would be considered an Entity?

Now let's say we want to expand our parcel business to be able to send also Packages, then it could look like the following.

[UML class diagram of the Package class composed of Items]

Since all the Items in the Package will be lost if the whole Package is lost, we use a UML Composition relation (a filled diamond). We also want to preserve the Package's consistency by prohibiting Items from being changed or removed from outside the Package. The description of Aggregate reads

The aggregate root guarantees the consistency of changes being made within the aggregate by forbidding external objects from holding references to its members.

We therefore make sure the Composition relation is hidden, and with preserved invariants, within the Aggregate.

My question is:
Can we say that the UML difference between Entity and Aggregate is that Entity does not contain any Composition relation whereas Aggregate contains at least one Composition relation?

To answer your question: no, you can't say that. An aggregate root is an entity itself, and may or may not be composed of child entities. The child entities can also be composed of other entities (though that is usually not recommended).

The aggregate root is responsible for maintaining the state and enforcing the invariants of both itself and its child entities.

So to recap: an aggregate root and a child entity can each have 0 or more child entities. All child entities require an aggregate root, however.
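A sketch of the question's Package aggregate along those lines (the capacity rule is purely illustrative):

```csharp
using System;
using System.Collections.Generic;

public class Item
{
    public string Description { get; }

    public Item(string description)
    {
        Description = description;
    }
}

// The aggregate root: Items are reachable and modifiable only
// through it, so it can enforce the invariants.
public class Package
{
    private readonly List<Item> items = new List<Item>();

    // External objects get a read-only view, never the inner list.
    public IReadOnlyCollection<Item> Items
    {
        get { return items; }
    }

    public void Add(Item item)
    {
        // An illustrative invariant the root guarantees.
        if (items.Count >= 10)
        {
            throw new InvalidOperationException("Package is full.");
        }
        items.Add(item);
    }
}
```

Note that Item here carries no reference back out of the aggregate; external code holds only a reference to the Package root.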

From time to time, certain services which I am coding require functionality which another service has implemented. For example, when writing a service which returns the products bought by a user of a certain ID in a single transaction, I need the balance of the user's account after he has bought the product, so I invoke another service to fetch the data.

I can see some alternatives:

  1. It's good to do so as you are reusing code.

  2. Services should access their own repo to retrieve data for their operations

  3. Services should be isolated from each other and only pertain to a single domain. In my example, I should have another layer, perhaps a ViewFactory, to invoke the services to fetch the relevant data

What are the commonly accepted norms on this issue?

Is your question about Domain Services, rather than application or infrastructure services? If so, DDD has no specific guidelines about isolating Domain Services from each other. Use your judgement and watch for SOLID violations. Also keep in mind that Domain Services are often misused, and it often makes sense to put more logic into Entities:

SERVICES should be used judiciously and not allowed to strip the ENTITIES and VALUE OBJECTS of all their behavior.

Having a generic repository like

public class Repository<T>
    where T : Entity<T>
{
    /* anything else */
}

should concrete repositories per aggregate root, like

class ProductRepository : Repository<Product> { }

class CategoryRepository : Repository<Category> { }

be created?

Also, how do I use DI (Ninject) with a generic implementation of the repository?

Samples are appreciated!


I see a lot of misuse of generics in questions on SO, and while it does not necessarily apply to your question (although it will help to clarify it), I think I'll write a summary here once and for all.

Generics are a type-agnostic reuse of behaviour (almost, since you can limit the types using constraints). If a behaviour is shared by all types (or those within the constraints), you use generics.

However, if the implementation for each generic type needs to be written individually, then using generics does not help and is reduced to mere decoration - and for me that is bad design.

OK, let's have an example:

interface IConverter<T1, T2>
{
    T1 Convert(T2 t2);
}

This looks like a good use of generics (converting one type to another), but I would have to implement all such combinations of types as different converters, and an IConverter<Apple, Orange> does not make any sense. Hence generics here are misleading and BAD.

Now going back to repositories. There are 100s of articles on this particular issue (lots of controversies) but this is personal my take:

Usually it is not recommended to use generic repositories. However, in my personal experience I use a generic repository to implement the common functionality (in a base repository) and then use individual ones for anything additional:

interface IRepository<T>
{
    T GetById(int id);
    IEnumerable<T> GetAll();
    void Save(T t);
}

class BaseRepository<T> : IRepository<T>
{
    ... // implement common methods
}

interface IProductRepository : IRepository<Product>
{
    IEnumerable<Product> GetTop5BestSellingProducts();
}

class ProductRepository : BaseRepository<Product>, IProductRepository
{
    ... // implement product-specific methods
}


Some believe a repository must allow criteria to be passed to it (Eric Evans's DDD book), but again I do not feel I have to agree with that. I prefer a domain-significant declaration on the repository (e.g. GetTopFiveBestSellingProducts or GetTopBestSellingProducts) unless there are hundreds of such criteria, as in a report writer - which is only 1% of cases.

Second question:

I do not use Ninject, but for any DI container it goes roughly like this:

 container.Register<IProductRepository, ProductRepository>(); 

Depending on the DI framework, the syntax can be slightly different.

I am working on a project using ASP.NET MVC and the repository model. I have repository classes, and services which consume these repository classes. My question is: is it correct to return an IQueryable from my repository class and then use .Where and .OrderBy in a service to generate a list? If yes, is it best practice?


For starters: there's no "right" or "wrong" here. It's just a matter of what works best for you and the system you are building. For example, Rob Conery did an ASP.NET sample application called Storefront with exactly the pattern you're describing and it ignited a big flame war. A large part of the discussion evolved around the Repository pattern that is considered the "original one" as described by Eric Evans in his book Domain Driven Design and that describes the interface of a repository as one that accepts and/or returns actual instances (of lists of instances) and not some query interface.

So much for theory. What should you choose for your system? The reason I would not directly choose the IQueryable route is that it leaks a bit of my persistence strategy to the client layer (the service layer in this case), since it primarily makes sense to return an IQueryable if you're retrieving objects from the database using LINQ to [a database access method (like SQL, Entities, ...)]. Only then will .Where or .OrderBy optimize your query result. It obviously doesn't make sense if you're using database access code that gets a full list, which you then expose from the Repository using LINQ to Objects. So in short: you tie your client layer to the LINQ-based database access strategy you're using.

Being a bit of a purist myself, I would prefer not to surface this tie to LINQ from out of my repository, and I would choose to offer where- and order-by criteria through the parameters of the operations of the repository. I can do all the retrieval optimization in the repository and return a neat clean set of domain objects.
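That "criteria through parameters" approach can be sketched like this (illustrative names; an in-memory store stands in for the real persistence code):

```csharp
using System.Collections.Generic;
using System.Linq;

public class Order
{
    public int Id { get; set; }
    public string Customer { get; set; }
    public decimal Total { get; set; }
}

public interface IOrderRepository
{
    // The where/order-by criteria are parameters; the caller never sees
    // IQueryable, so the persistence strategy stays inside the repository.
    IList<Order> GetOrdersForCustomer(string customer, decimal minTotal);
}

public class InMemoryOrderRepository : IOrderRepository
{
    private readonly List<Order> _orders = new List<Order>();

    public void Add(Order o) => _orders.Add(o);

    public IList<Order> GetOrdersForCustomer(string customer, decimal minTotal) =>
        _orders.Where(o => o.Customer == customer && o.Total >= minTotal)
               .OrderBy(o => o.Id)
               .ToList();   // materialized here, not in the service layer
}
```

The service layer receives a "neat clean set of domain objects" and has no way to accidentally extend the query.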

So in the end it comes down to: whatever works best for you is fine.

I find myself rarely using object oriented principles when I design applications, and I am looking for a good reference for object oriented design. I'm using C# as my programming language and would like a reference which helps me make use of the OO constructs provided by C#. Basically, I need a good book to derive some inspiration from.

Check out Evans' DDD

I have an MVC application in PHP. My 'M' includes Domain Objects, Factories and Mappers. The Model is accessed via a Service layer.

Obviously my Mappers use the Factories to create objects upon retrieval from the database. But should the Factories also create the objects for all 'new' entities, e.g. for new Users?

I think the answer is Yes, but just want to check. I would use the Factories to supply default values as one of their tasks.

As a side point: is there any terminology to distinguish between 'new' entities, versus those that are retrieved from the database? (I don't like using 'new', since the new keyword precedes all object instances, even those based on data retrieved from the database).

The first thing that comes to mind is to say yes, as this is what factories should really do: create complex objects or hide object creation. But I want to mention two points to consider.
These points, and the terminology I am going to propose, are taken from Eric Evans' excellent book Domain Driven Design.

  1. An ENTITY FACTORY used for reconstitution does not assign a new tracking ID.

  2. A FACTORY reconstituting an object will handle violation of an invariant differently.

The last point emphasizes that if the factory is restoring an object from the storage medium, it shouldn't treat errors in the object's state (e.g. a corrupted object) lightly, but should deal with them decisively.

As for terminology, I would use Created Objects for new ones and Stored or Reconstituted Objects for saved ones.
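A minimal sketch of the creation/reconstitution distinction (the method names are mine, not Evans's; the "flexible response" here is simply repairing the bad value, which is one of several options):

```csharp
using System;

public class User
{
    public int Id { get; private set; }
    public string Email { get; private set; }

    internal User(int id, string email)
    {
        Id = id;
        Email = email;
    }
}

public static class UserFactory
{
    private static int _nextId = 1;

    // Creation: assigns a new tracking ID and balks when an invariant isn't met.
    public static User Create(string email)
    {
        if (string.IsNullOrEmpty(email))
            throw new ArgumentException("email is required");
        return new User(_nextId++, email);
    }

    // Reconstitution: keeps the stored identity, and handles an invariant
    // violation more flexibly, since the object already exists in the domain.
    public static User Reconstitute(int storedId, string email)
    {
        return new User(storedId, string.IsNullOrEmpty(email) ? "<unknown>" : email);
    }
}
```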

I'm concerned about which techniques I should use to choose the right objects in OOP.

Is there any must-read book about OOP in terms of how to choose objects?


You probably mean "the right class", rather than "the right object". :-)

There are a few techniques, such as text analysis (a.k.a. underlining the nouns) and Class Responsibility Collaborator (CRC).

With "underlining the nouns", you basically start with a written, natural language (i.e. plain English) description of the problem you want to solve and underline the nouns. That gives you a list of candidate classes. You will need to perform several passes to refine it into a list of classes to implement.

For CRC, check out the Wikipedia article.

I suggest The OPEN Toolbox of Techniques for full reference.

Hope it helps.

You should check out Domain-Driven Design, by Eric Evans. It provides very useful concepts in thinking about the objects in your model, what their function are in the domain, and how they could be organized to work together. It's not a cookbook, and probably not a beginner book - but then, I read it at different stages of my career, and every time I found something valuable in it...

I'm learning DDD and trying to implement Repository using Google Datastore.

I find recreating DDD entities from datastore quite tricky. I've read there are frameworks to map my DDD entities to datastore entities, but I would like to learn low-level API first.

I thought the repository could set the state of an entity using setters, but this is often considered an anti-pattern in DDD.

An alternative would be to use the builder pattern, where a builder instance is passed to the constructor of an entity. However, this introduces functionality into the entity (restoring entity state) that is outside its responsibility.

What are good patterns for solving this problem?

The whole of Chapter 6 of Eric Evans' book is devoted to the problems you are describing.

First of all, Factory in DDD doesn't have to be a standalone service -

Evans DDD, p. 139:

There are many ways to design FACTORIES. Several special-purpose creation patterns - FACTORY METHOD, ABSTRACT FACTORY, and BUILDER - were thoroughly treated in Gamma et al. 1995. <...> The point here is not to delve deeply into designing factories, but rather to show the place of factories as important components of a domain design.

Each creation method in an Evans FACTORY enforces all invariants of the created object; however, object reconstitution is a special case.

Evans DDD, p. 145:

A FACTORY reconstituting an object will handle violation of an invariant differently. During creation of a new object, a FACTORY should simply balk when invariant isn't met, but a more flexible response may be necessary in reconstitution.

This is important, because it leads us to creating separate FACTORIES for creation and reconstitution (in the diagram on page 155, TradeRepository uses a specialized SQL TradeOrderFactory, not a general-purpose TradeOrderFactory).

So you need to implement separate logic for reconstitution, and there are several ways to do it. (You can find the full theory in Martin Fowler's Patterns of Enterprise Application Architecture; on page 169 there's a subheading, Mapping Data to Domain Fields. Not all of the methods described look suitable (for example, making the object fields package-private in Java seems too intrusive), so I'd prefer one of the following two options.)

  • You can create a separate FACTORY and document that developers should use it only for persistence or testing.
  • You can set the private field values with reflection, as for example Hibernate does.

Regarding the anemic domain model with setters and getters: the upcoming Vaughn Vernon book criticizes this approach a lot, so I dare say it is an anti-pattern in DDD.
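The reflection option can be sketched like this (illustrative names; real ORMs like Hibernate do considerably more, and `FormatterServices.GetUninitializedObject` is the .NET way to bypass the constructor entirely):

```csharp
using System;
using System.Reflection;
using System.Runtime.Serialization;

public class Account
{
    private decimal _balance;

    // The creation path enforces the invariant.
    public Account(decimal openingBalance)
    {
        if (openingBalance < 0)
            throw new ArgumentException("opening balance cannot be negative");
        _balance = openingBalance;
    }

    public decimal Balance => _balance;
}

public static class AccountReconstitutor
{
    // The reconstitution path bypasses the constructor and sets the private
    // field directly, the way ORMs typically hydrate entities. A stored
    // negative balance (e.g. an overdraft) is accepted rather than rejected.
    public static Account FromStorage(decimal storedBalance)
    {
        var account = (Account)FormatterServices.GetUninitializedObject(typeof(Account));
        typeof(Account)
            .GetField("_balance", BindingFlags.NonPublic | BindingFlags.Instance)
            .SetValue(account, storedBalance);
        return account;
    }
}
```

This keeps the entity free of public setters while still letting the persistence layer restore any stored state.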

I'm making the switch to a more object-oriented approach to ASP.NET web applications in a new system I'm developing.

I have a common structure which most people will be familiar with. I have a school structure where there are multiple departments; courses belonging to a single department; and students belonging to multiple courses.

In a department view I list all courses belonging to the department and I want aggregated figures for each course such as number of enrolments, withdrawals, number male/female etc.

In an individual course view however, I'll need the actual list of students along with their details such as whether they are enrolled, passed course, gender etc.

And then the individual student view where all detail about a student is displayed including enrolments on other courses, address, etc.

Previously, I would have had a data access layer returning whatever data I needed in each case, as an SQLDataReader or DataSet (working in VB.NET). Now that I'm trying to model this with an object-oriented approach, I'm creating objects in the DAL and returning them to the BLL. I'm not sure how to handle this, though, when I need aggregated details in objects. For example, the department view with the list of courses will have aggregates for each course. Would I store a collection of lightweight course objects in the department, where those lightweight course objects hold the aggregated values?

I guess there are different levels of abstraction needed in different scenarios and I'm not sure the best way to handle this. Should I have an object model where there's a very basic course object which stores aggregates, and a child object which would store the full detail?

Also, if there are any useful resources that may help my understanding of how to model these kind of things that'd be great.

That's actually a pretty big issue that many people are struggling with.

As far as I've been able to identify, there are at least two schools of thought on this issue:

  • Persistence-ignorant domain objects with an OR/M that supports lazy loading
  • Domain-Driven Design and explicitly modeled (and explicitly loaded) aggregates

Jeremy Miller has an article in MSDN Magazine that surveys some persistence patterns.

The book Domain-Driven Design has a good discussion on modeling Aggregates.
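For the department view in the question, one way to make the "lightweight course object" idea explicit (all names here are illustrative) is a read-only summary type computed from the full entity; in practice the summary would usually come from a GROUP BY query rather than from loading every student:

```csharp
using System.Collections.Generic;
using System.Linq;

public class Student
{
    public string Name { get; set; }
    public bool Enrolled { get; set; }
    public bool Male { get; set; }
}

// Full course entity: used in the individual course view.
public class Course
{
    public string Title { get; set; }
    public List<Student> Students { get; } = new List<Student>();

    // The department view only needs aggregates, so it gets a summary.
    public CourseSummary ToSummary() => new CourseSummary
    {
        Title = Title,
        Enrolments = Students.Count(s => s.Enrolled),
        MaleCount = Students.Count(s => s.Male),
        FemaleCount = Students.Count(s => !s.Male)
    };
}

// Lightweight object held by the department view; carries no student list.
public class CourseSummary
{
    public string Title { get; set; }
    public int Enrolments { get; set; }
    public int MaleCount { get; set; }
    public int FemaleCount { get; set; }
}
```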

I'm refactoring some code I wrote a few months ago and now I find myself creating a lot of smallish classes (few properties, 2-4 methods, 1-2 events).

Is this how it's supposed to be? Or is this also a bit of a code smell?

I mean, if a class needs a lot of methods to carry out its responsibility, I guess that's how it's got to be, but I'm not so sure that a lot of small classes is particularly good practice either.

Lots of small classes sounds just fine :)

Particularly if you let each class implement an interface and have the different collaborators communicate through those interfaces instead of directly with each other, you should be able to achieve a so-called Supple Design (a term from Domain-Driven Design) with lots of loose coupling.

If you can boil it down so that important operations have the same type of output as input, you will achieve what Evans call Closure of Operations, which I've found to be a particularly strong design technique.
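Closure of Operations simply means an operation whose argument and return types are the same as the type it's defined on, so calls compose naturally. A minimal illustrative sketch:

```csharp
public struct Money
{
    public decimal Amount { get; }

    public Money(decimal amount) { Amount = amount; }

    // Closed under addition: Money + Money -> Money,
    // so operations chain without leaving the type.
    public Money Add(Money other) => new Money(Amount + other.Amount);
}
```

Because the output type matches the input type, expressions like `price.Add(tax).Add(shipping)` stay within the domain vocabulary.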

What tends to happen when you apply the SRP is that although all classes start out small, you constantly refactor, and from time to time a rush of insight clarifies that a few particular classes could be a lot richer than previously assumed.

Do it, but keep refactoring forever :)

I'm trying to create a simple three tier project and I'm on track on implementing the Business Logic layer, right now, this is how my BL looks like

//The Entity/BO
public class Customer
{
    public int CustomerID { get; set; }
    public string CustomerName { get; set; }
    public UserAccount UserAccount { get; set; }
    public List<Subscription> Subscriptions { get; set; }
}

//The BO Manager
public class CustomerManager
{
    private CustomerDAL _dal = new CustomerDAL();

    private Customer _customer;

    public void Load(int customerID)
    {
        _customer = GetCustomerByID(customerID);
    }

    public Customer GetCustomer()
    {
        return _customer;
    }

    public Customer GetCustomerByID(int customerID)
    {
        return _dal.GetCustomerByID(customerID);
    }

    public UserAccount GetUsersAccount()
    {
        return _dal.GetUsersAccount(_customer.CustomerID);
    }

    public List<Subscription> GetSubscriptions()
    {
        // I load the subscriptions in the contained customer object, is this ok??
        _customer.Subscriptions = _customer.Subscriptions ?? _dal.GetCustomerSubscriptions(_customer.CustomerID);

        return _customer.Subscriptions;
    }
}

As you may notice, my object manager is really just a container for my real object (Customer), and this is where I put my business logic: a way of decoupling the business entity from the business logic. This is how I typically use it:

        int customerID1 = 1;
        int customerID2 = 2;

        var customerManager = new CustomerManager();

        //get customer1 object
        customerManager.Load(customerID1);
        var customer = customerManager.GetCustomer();

        //get customer1 subscriptions
        var customerSubscriptions = customerManager.GetSubscriptions();

        //or this way
        //var customerSubscriptions = customer.Subscriptions;


        //get customer2
        customerManager.Load(customerID2);
        var newCustomer = customerManager.GetCustomer();

        //get customer2 subscriptions
        var newCustomerSubscriptions = customerManager.GetSubscriptions();

As you can see, it only holds one object at a time, and if I need to manage a list of customers, I'd probably have to create another manager like CustomerListManager.

My question is: is this the right way of implementing the BL of a three tier/layer design? Or how would you suggest implementing it? Thanks.

As others have mentioned before you should have a look at the Repository pattern. I would also recommend to check out the Unit Of Work pattern as well as Domain Driven Design for your application's architecture in general.

You might also have a look into object relational-mapping (ORM) frameworks for .NET like Entity Framework or NHibernate if that's an option for you.
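As a rough sketch of what Repository and Unit of Work interfaces often look like together (the exact shapes vary a lot between projects; the in-memory implementation is only for illustration):

```csharp
using System;
using System.Collections.Generic;

public interface IRepository<T> where T : class
{
    T GetById(int id);
    IEnumerable<T> GetAll();
    void Add(T entity);
    void Remove(T entity);
}

public interface IUnitOfWork : IDisposable
{
    // Commits all pending changes as one transaction.
    void Commit();
}

// In-memory repository for illustration; an id selector stands in for
// whatever key-mapping a real ORM would provide.
public class InMemoryRepository<T> : IRepository<T> where T : class
{
    private readonly Dictionary<int, T> _items = new Dictionary<int, T>();
    private readonly Func<T, int> _id;

    public InMemoryRepository(Func<T, int> id) { _id = id; }

    public T GetById(int id) => _items.TryGetValue(id, out var e) ? e : null;
    public IEnumerable<T> GetAll() => _items.Values;
    public void Add(T entity) => _items[_id(entity)] = entity;
    public void Remove(T entity) => _items.Remove(_id(entity));
}
```

With an ORM like Entity Framework or NHibernate, the session/context plays the Unit of Work role and the repositories wrap its entity sets.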

To give you a head-start here are some great resources to get you on the right road:


Online References

Hope that helps.

I'm using fluent syntax to configure a M:M relationship between simple Movie and Genre entities. That's working OK.

My problem is that I also wish to expose the link/join table MovieGenre as an entity as well. I realize it's not necessary since it only contains keys, but I wish to have it because it will make some operations easier.

I get this error when I try to configure the link table:

The EntitySet 'MovieGenre1' with schema 'Media' and table 'MovieGenre' was already defined.

I know it has to do w/ me already setting up the M:M relationship but haven't figured out how to make it all work together.

Below is my code:

public class Genre
{
   public int GenreID { get; set; }
   public string Name { get; set; }

   public ICollection<Movie> Movies { get; set; }
}

public class Movie
{
   public int MovieID { get; set; }
   public string Title { get; set; }

   public ICollection<Genre> Genres { get; set; }
}

public class MovieGenre
{
   public int MovieID { get; set; }
   public int GenreID { get; set; }

   public Movie Movie { get; set; }
   public Genre Genre { get; set; }
}

protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
   modelBuilder.Entity<Genre>()
               .ToTable("Genre", "Media");

   var movieCfg = modelBuilder.Entity<Movie>();
   movieCfg.ToTable("Movie", "Media");
   movieCfg.HasMany(m => m.Genres)
           .WithMany(g => g.Movies)
           .Map(m =>
           {
              m.ToTable("MovieGenre", "Media");
           });

   var movieGenresCfg = modelBuilder.Entity<MovieGenre>();
   movieGenresCfg.ToTable("MovieGenre", "Media");
   movieGenresCfg.HasKey(m => new
   {
      m.MovieID,
      m.GenreID
   });
}

Any help would be appreciated. Thank you.

What you are trying is not possible. You need to remove your MovieGenre from the mapping.

Do not worry about fetching the entity from the database just to add it to the list of Genres. If you load an entity by id multiple times, there is only one call to the DB anyway. As EF gets better and a second-level cache is added, it will just work and perform well. If you are too imperative, there is no magic left for the framework.

And if you think about the domain and the users of your model, most of the operations are reads anyway, with few updates. On top of that, adding a movie to a genre (or the other way around) could mean many things, in which case you should not expose the collections directly but add methods to implement the behaviours.

One example of behaviour would be "if a movie is added to more than 10 genres, then the movie should change its category to X", and all sorts of other things. Stop thinking about data only. Domain Driven Design is a good book to read on the subject.

In your case here you can consider the Movie entity as a root aggregate because a Movie is what your users are after and a Movie is what you are going to edit as an administrator. A movie can exist without a Genre, maybe, so genre is just a tag on the movies, a value object, and not very important. Your mappings would become:

    public class Genre
    {
       public int GenreID { get; set; }
       public string Name { get; set; }
    }

    public class Movie
    {
       public int MovieID { get; set; }
       public string Title { get; set; }

       public ICollection<Genre> Genres { get; set; }
    }

   var movieCfg = modelBuilder.Entity<Movie>();
   movieCfg.ToTable("Movie", "Media");
   movieCfg.HasMany(m => m.Genres)
           .WithMany()
           .Map(m =>
           {
              m.ToTable("MovieGenre", "Media");
           });

The queries would look like:

// get all movies for a particular genre
var movies = movieContext.Movies.Where(m => m.Genres.Any(g => g.GenreID == id))
                                .OrderBy(m => m.Title)
                                .ToList();

// get all movies 
var movies = movieContext.Movies.OrderBy(m => m.Name).Take(10).ToList();

This is pretty much all you are going to do with your domain.

I'm wondering if there's a best practice for this. I'm building a site that's going to require quite a few models and I don't want to end up with an unstructured mess. Basically I'm looking for a good rulesheet for structuring the models. Right now I can only think of two ways:

  • Every table in the db has its own model, ie. accounts.php, books.php etc.
  • A model represents a collection of methods, pertaining to a core module of the website, ie. a login model that contains methods like login, logout etc. a model that handles sessions, a model that handles cookies

Thanks for your time!

I suggest that model != table in general.

Instead, a model represents some component of your application's domain logic. This might sometimes correspond one-to-one to a database table, but it could also be that a model consists of several database tables, or no database tables at all. Some models could be composed of a collection of other models, and some tables may be used by multiple model classes.

The one-to-one relationship between models and tables is attractive for simplifying applications, but it's a type of coupling that can be misleading.

In MVC, the Controller and View are relatively easy and straightforward. They correspond to handling input and generating output. The Model is hard, because this is the rest of your application's data and logic. Welcome to OO design and architecture!

A good resource to learn how to architect models effectively is Domain-Driven Design by Eric Evans, or the free online short version, Domain-Driven Design Quickly.

In my event sourced application developed using DDD, there are saving accounts that should accumulate interest every day. At the end of every year the accumulated interest should be capitalized. My question is: Should every daily calculation really be treated as a domain event?

An alternative could be to calculate the accumulated interest at a given point in time on the read side by traversing the transactions that has happened on the account up until that day (withdrawals, deposits etc), and sum the accumulated interest for each day.

The amount of events in the event store would quickly grow to millions, given that there could be hundreds of thousands saving accounts in the system calculating their accumulated interest each day. But at the same time it seems like a drawback to have to calculate the accumulated interest "on the fly" on the read side instead of raising an event every day.

Should every daily calculation really be treated as a domain event?

What do your domain experts say?

You might also want to review chapter 11 of the blue book, which includes "Example: Earning Interest with Accounts". That may not directly answer your question about the domain events, but it should provide you with some extra context for framing your own analysis.

I'm not a domain expert, but my expectation would be that accrued interest has implications, either legal, or in the model, and that you would expect to have a consistent record of the accrual and its consequences on your model.

From your initial description, the impact on the model is annual, so I would expect to see the InterestCapitalized event only once per year per account. But I find it difficult to believe that daily accrued interest doesn't matter, especially in the face of changing balances and compounding interest, so I'm suspicious that the described requirement actually matches the needs of the business.

I wouldn't expect "millions" of events to be that big a problem; using the CQRS pattern, most of your reads are going to come out of rolled up results anyway, so that's not a big deal. The real hurt will be in trying to re-hydrate an aggregate with millions of events; but if you are facing performance problems there, you can look into loading the aggregate from snapshots.

And of course, if each account is calculating its own accrued interest, then you are only looking at 365 (ish) extra events per year, which is no sweat at all.
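To make the read-side alternative concrete, here is a rough sketch (all type names are mine, and it uses simple, non-compounding daily interest purely for illustration): rather than storing a daily InterestAccrued event, accrued interest is folded from the stored transaction events on demand:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Stored events: only the actual transactions, never daily accruals.
public class BalanceChanged
{
    public DateTime Date { get; set; }
    public decimal Amount { get; set; }   // positive deposit, negative withdrawal
}

public static class InterestReadModel
{
    // Replays the balance-changing events day by day and sums simple daily
    // interest up to (but excluding) 'asOf'. Nothing is written back.
    public static decimal AccruedInterest(
        IEnumerable<BalanceChanged> events, DateTime from, DateTime asOf, decimal dailyRate)
    {
        var ordered = events.OrderBy(e => e.Date).ToList();
        decimal balance = 0m, accrued = 0m;
        for (var day = from.Date; day < asOf.Date; day = day.AddDays(1))
        {
            balance += ordered.Where(e => e.Date.Date == day).Sum(e => e.Amount);
            accrued += balance * dailyRate;
        }
        return accrued;
    }
}
```

The trade-off discussed above is then explicit: this computation is repeated on every read, whereas a daily InterestAccrued event would trade that CPU cost for roughly 365 extra events per account per year.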

I read several threads on here about structs (one about mutable structs) and I keep reading about how a struct should have no identity.

What exactly is a lack of identity in a struct? I am guessing it would be like a number, e.g. 5, not having a context (5 what?), whereas a client would be someone expecting a service, and thus there is an identity. Am I thinking correctly?

I know the technical differences and how structs are thread-safe (as long as they can't be mutated, though I can still write methods that mutate state), that they are copied every time they are passed into a method, etc.

I guess you're more or less influenced by Evans' book, where he distinguishes Entities and Value objects.
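In Evans' terms, an Entity is equal by identity while a Value Object is equal purely by its attributes, and that attribute-only equality is exactly the "lack of identity" people mean for structs. An illustrative C# sketch:

```csharp
// Entity: identity matters. Two Customer objects with the same Id are the
// same customer, even if other attributes differ (e.g. a name change).
public class Customer
{
    public int Id { get; }
    public string Name { get; set; }

    public Customer(int id, string name) { Id = id; Name = name; }

    public override bool Equals(object obj) =>
        obj is Customer other && other.Id == Id;

    public override int GetHashCode() => Id;
}

// Value object: only the value matters. There is no "which 5 degrees" -
// the default struct equality compares fields, which is all you need.
public struct Temperature
{
    public decimal Celsius { get; }
    public Temperature(decimal celsius) { Celsius = celsius; }
}
```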

I think I am pretty good with C# syntax. What I am looking for now is some resources, books (preferably), websites, and blogs that deal with the best way to design object oriented desktop applications and web applications, especially when it comes to data and databases.


Martin Fowler's Patterns of Enterprise Application Architecture is a great book for common patterns you'll see in a lot of client-server applications.

More of a book on thinking about object-oriented problems is Eric Evans' Domain-Driven Design: Tackling Complexity in the Heart of Software.

You are asking to drink from a firehose. Let me encourage you to write some small programs before you tackle big ones. However, here are a few books about design and a paper which argues that a lot of design can't be learned from books:

  • On System Design is a good short paper that articulates what a lot of experienced programmers think about the art of design.

  • Programming Pearls by Jon Bentley presents some lovely examples of design in the small. It's a fun read and includes many classic stories.

  • The Unix Programming Environment by Kernighan and Pike presents one of the great software-design philosophies of the 20th century. Still required reading after almost 25 years.

  • Software Tools in Pascal is narrower and deeper but will tell you a lot about the specifics of building software tools and the design philosophy.

  • Abstraction and Specification in Program Development by Barbara Liskov and John Guttag will teach you how to design individual modules so they can fit with other modules to form great libraries. It is out of print but your local University library may have it.

  • C Interfaces and Implementations presents a very well designed library that gives C programmers the abstractions found in much higher-level languages.

  • Finally, Test-Driven Development will teach you how to articulate and develop a design through the stuff that matters: what your software actually does.

I learned a lot from Composite/Structured Design by Glenford Myers, but it bears a little less directly on the topics you asked about. It talks primarily about good and bad ways modules can interdepend.

For a book on how to develop software I would recommend The Pragmatic Programmer. For design you may want to look at Interface Oriented Design. Code Complete is an "A to Z" reference on developing software. You might also want to consider the O'Reilly Head First books, especially Head First Object-Oriented Analysis and Design, as something a little easier to start with.

EDIT I don't know how I forgot about Bob Martin, but you could also read any of the books that Object Mentor has on any of its lists. Here is their section on Software Design. In particular I'd recommend Agile Software Development: Principles, Patterns, and Practices (Amazon, but it's also the second book on the Object Mentor list).

I haven't been thrilled with any of the recent books, so much so that I'm seriously thinking about writing a new one. The "Head First" books have generally read to me like one step above the "For Dummies" books (to be fair, I haven't read that one).

I'm actually fond of Peter Coad's Java Design; you can get a used copy cheaply, as it's no longer in print. Obviously, it's Java heavy, but the design part is good, and pretty lightweight.

Ivar Jacobson's Object Oriented Software Engineering is also very good (it introduced the idea of "use cases", among other things) and does appear to still be in print, but there are zillions of used copies around.

I was wondering, should the entities have the capability to save changes to the context? Or have business logic that relates to that particular entity? For example:

public ActionResult ResetPassword(UserViewModel viewModel)
{
    var user = userCollection.GetUser(viewModel.Username);
    user.ResetPassword();
    // ...
}

class User : Entity
{
    public string Password { get; set; }

    public void ResetPassword()
    {
        Password = "";
    }
}
I find this a bit weird since the entity would have a reference to the context. I am not sure whether this would work either - or whether this is recommended. But I want to work on a domain where I do not have to worry about saving changes, at a higher level etc. How can I accomplish this?



I have updated my example - hope it's a bit clearer now :)

According to Domain-Driven Design, domain objects should have behavior.

You should definitely read this book:


Let's assume I have Identity Management bounded context and discussion bounded context. Each of those is a separate micro service.

Identity has Users, Discussion has Moderators.

If I update the first and last name in the Identity bounded context, my plan is to publish a message to Amazon SQS, have the Discussion bounded context listen to that queue for changes, and update the first and last name in the Discussion context via an anti-corruption layer.

My question is: what if I decide to change the first name and last name in the Discussion BC? Should my Identity BC listen for those changes too, or is bi-directional communication not considered good practice, meaning I should always update that information inside the Identity BC?

Should my Identity BC listen for those changes too, or is having bi-directional communication not considered a good practice

I don't think bi-directional communication is necessarily the problem here. What concerns me is that you seem to have two different BCs acting as the "book of record" for identity.

In the blue book, Evans writes of bounded contexts being defined by meaning; when you cross from one context to another, you may need to change your understanding of what is understood by the common terminology: it doesn't necessarily follow that the rules for an aggregate in one context (User?) will be the same as in another.

The given example, User, is potentially more twitchy - because it may be that the real world, rather than the model, is the book of record, and the "aggregate" is really just a dumb bag of data.

If I use only reference id, wouldn't that mean that Discussion BC will not have necessary data in its domain

Necessary for what? Do changes in the discussion book of record depend on data stored in the Identity book of record?

For example: in the identity context I have a user with username, first name, last name etc., but in the discussion context I might have that same user represented as a moderator or poster, with only the properties necessary for that BC. If the name changes in the identity context, it should propagate those changes to discussion.

That sounds as though you are describing it as necessary for your reads; as though you have a view of a discussion that includes a representation of the participants that includes their roles (which exist in the discussion BC) and their identities.

Reads tend not to be a very interesting use case, because reads don't change the book of record. As Udi hinted at, to build the view you basically need a reference id that you can use to pull the data you want out of some key value store. Is there any reason to prefer that the KV store is part of this BC as opposed to that one?

Consumers could be 3rd party companies and our company.

Connecting to the microservice(s) directly, or instead consuming an api that acts as a facade for the backing micro services?

My business objects are coded with the following architecture:

  • validation of any incoming data throws an exception in the setter if it doesn't fit business logic.
    • property can not be corrupt/inconsistent state unless the existing default/null is invalid
  • business objects can only be created by the business module, via a static factory-style method that accepts an implementation of an interface shared with the business object, whose data is copied into the business object.
    • Enforces that the dependency container, ui, and persistence layers can not create an invalid Model object or pass it anywhere.
  • This factory method catches all the different validation exceptions in a validation dictionary so that, when the validation attempts are complete, the dictionary the caller provided is filled with field names and messages; an exception is thrown if any of the validations did not pass.
    • easily maps back to UI fields with appropriate error messages
  • No database/persistence type methods are on the business objects
  • needed persistence behaviors are defined via repository interfaces in the business module

Sample Business object interface:

public interface IAmARegistration
{
  string Nbk { get; set; } //Primary key?
  string Name { get; set; }
  string Email { get; set; }
  string MailCode { get; set; }
  string TelephoneNumber { get; set; }
  int? OrganizationId { get; set; }
  int? OrganizationSponsorId { get; set; }
}

business object repository interface:

 /// <summary>
 /// Handles registration persistence, or an in-memory repository for testing;
 /// requires a business object instead of an interface type to enforce validation
 /// </summary>
 public interface IAmARegistrationRepository
 {
  /// <summary>
  /// Checks if a registration record exists in the persistence mechanism
  /// </summary>
  /// <param name="user">Takes a bare NBK</param>
  /// <returns></returns>
   bool IsRegistered(string user); //Cache the result if so

  /// <summary>
  /// Returns null if none exist
  /// </summary>
  /// <param name="user">Takes a bare NBK</param>
  /// <returns></returns>
   IAmARegistration GetRegistration(string user);

   void EditRegistration(string user, ModelRegistration registration);

   void CreateRegistration(ModelRegistration registration);
 }

Then an actual business object looks as follows:

public class ModelRegistration : IAmARegistration //,IDataErrorInfo
{
    internal ModelRegistration() { }

    private string _nbk;
    public string Nbk
    {
        get { return _nbk; }
        set
        {
            if (String.IsNullOrEmpty(value))
                throw new ArgumentException("Nbk is required");
            _nbk = value;
        }
    }
    //other properties omitted

    public static ModelRegistration CreateModelAssessment(IValidationDictionary validation, IAmARegistration source)
    {
        var result = CopyData(() => new ModelRegistration(), source, false, null);
        //Any other complex validation goes here
        return result;
    }

    /// <summary>
    /// This is validated in a unit test to ensure accuracy and that it is not out of sync with
    /// the number of members the interface has
    /// </summary>
    public static Dictionary<string, Action> GenerateActionDictionary<T>(T dest, IAmARegistration source, bool includeIdentifier)
        where T : IAmARegistration
    {
        var result = new Dictionary<string, Action>
        {
            // ...
        };
        return result;
    }

    /// <summary>
    /// Designed for copying the model to the db persistence object or ui display object
    /// </summary>
    public static T CopyData<T>(Func<T> creator, IAmARegistration source, bool includeIdentifier,
        ICollection<string> excludeList) where T : IAmARegistration
    {
        return CopyDictionary<T, IAmARegistration>.CopyData(
            GenerateActionDictionary, creator, source, includeIdentifier, excludeList);
    }

    /// <summary>
    /// Designed for copying the ui to the model
    /// </summary>
    public static T CopyData<T>(IValidationDictionary validation, Func<T> creator,
        IAmARegistration source, bool includeIdentifier, ICollection<string> excludeList)
         where T : IAmARegistration
    {
        return CopyDictionary<T, IAmARegistration>.CopyData(
            GenerateActionDictionary, validation, creator, source, includeIdentifier, excludeList);
    }
}

Sample repository method that I'm having trouble writing isolated tests for:

    public void CreateRegistration(ModelRegistration registration)
    {
        var dbRegistration = ModelRegistration.CopyData(() => new Registration(), registration, false, null);

        using (var dc = new LQDev202DataContext())
        {
            // insert dbRegistration and submit changes (body omitted)
        }
    }

  • When a new member is added there are a minimum of 8 places a change must be made (db, linq-to-sql designer, model interface, model property, model copy dictionary, ui, ui DTO, unit test)
  • Testability
    • Testing the db methods, which are hard-coded to depend on an exact type that has no public default constructor and must pass through another method, makes testing in isolation either impossible or requires intruding on the business object to make it more testable.
    • Using InternalsVisibleTo so that BusinessModel.Tests has access to the internal constructor works, but I would need to add it for every other persistence-layer test module, which scales very poorly.
  • To make the copy functionality generic, the business objects were required to have public setters
    • I'd prefer the model objects to be immutable
  • DTOs are required for the UI to attempt any data validation

I'm shooting for complete reusability of this business layer with other persistence mechanisms and UI mechanisms (Windows Forms, MVC 1, etc.). Also, team members should be able to develop against this business layer/architecture with a minimum of difficulty.

Is there a way to enforce immutable, validated model objects, or to enforce that neither the UI nor the persistence layer can get hold of an invalid one, without these headaches?

This looks very complicated to me.

IMO, Domain Objects should be POCOs that protect their invariants. You don't need a factory to do that: simply demand any necessary values in the constructor and provide valid defaults for the rest.

Wherever you have property setters, protect them by calling a validation method like you already do. In short, you must never allow an instance to be in an inconsistent state - factory or no factory.

Writing an immutable object in C# is easy - just make sure that all fields are declared with the readonly keyword.

However, be aware that if you follow Domain-Driven Design, Domain objects tend to fall in three buckets

  • Entities. Mutable objects with long-lived identities
  • Value Objects. Immutable objects without identity
  • Services. Stateless

According to this definition, only Value Objects should be immutable.
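To make that concrete, here is a minimal sketch of a self-validating, immutable Value Object in the spirit of the question (the Nbk field name is taken from the question; everything else is illustrative):

```csharp
public sealed class RegistrationId
{
    private readonly string _nbk;   // readonly: can only be assigned in the constructor

    public RegistrationId(string nbk)
    {
        // Protect the invariant at construction time - no invalid instance can ever exist.
        if (String.IsNullOrEmpty(nbk))
            throw new ArgumentException("Nbk is required", "nbk");
        _nbk = nbk;
    }

    public string Nbk { get { return _nbk; } }
}
```

Because there are no setters and the field is readonly, neither the UI nor the persistence layer can ever hold an invalid or half-initialized instance.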

Ok it seems my project setup could use some improvments.

I currently have:

1. ASP.NET MVC3 Web project
2. NHibernate project with Repositories/Mappings and some session code.
3. Entities (models used in nhibernate like User.cs)
4. Interfaces (like IUser, IRepository<IUser>, IUserRepository...)
5. Common (UserService, ..)

Now the issue is that my NHibernate models need to implement IUser, which I don't like, but I was forced into this since my IRepository is generic and I couldn't use IRepository<User> because User is in another project, so I had to create an interface and use IRepository<IUser>.

I will never need another implementation of User, so this is bugging me.

How can I fix this while keeping things separate so I can swap out my ORM?

I think you should approach this problem from Domain Driven Design perspective. Domain should be persistent-ignorant. Proper implementation of DDD repository is a key here. Repository interface is specific, business-focused, not generic. Repository implementation encapsulates all the data access technicalities (ORM). Please take a look a this answer and these 2 articles:

Your entities should be concrete types, not interfaces. Although you may never need to swap your ORM (as Ladislav is saying in comments), you should design it as if you will need to swap it. This mindset will really help you achieve persistence ignorance.
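As a sketch of what "specific, business-focused" could look like (the names here are illustrative, not from the question):

```csharp
// Lives in the domain project next to the concrete User entity.
// No generic IRepository<T>, no ORM types, no IUser indirection.
public interface IUserRepository
{
    User GetByEmail(string email);
    IList<User> FindRegisteredSince(DateTime cutoff);
    void Add(User user);
}
```

The NHibernate project then implements IUserRepository against the concrete User class, and swapping the ORM means swapping that one implementation.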

Is passing a specification object overkill in a Repository?

I am asking this because, if we pass a specification object into a method like FindCustomersCreatedToday,

class CustomerRepository
{
    public List<Customer> FindCustomersCreatedToday(ISpecification isCreatedToday)
    {
        List<Customer> customers = LoadAllCustomersFromDb(); // pseudo: load all customers

        List<Customer> newList = new List<Customer>();

        foreach (var customer in customers)
        {
            if (isCreatedToday.IsSatisfiedBy(customer))
                newList.Add(customer);
        }
        return newList;
    }
}

In implementations like the one above, which I saw on most sites, they fetch all entities from the database, loop over them, and pass each one to the specification. I don't like the idea of loading all entities at once and then creating a new filtered list.

Suppose I have 10,000 customers and only 10 pass the criteria.

Isn't it overkill to pass a specification?

Yes, it is definitely overkill if you expect a lot of customers. You can use information in the instance of your specification to generate appropriate SQL/HQL or an ICriteria (assuming you use NHibernate).

public IList<Customer> FindCustomers(CreationDateRangeSpecification spec) {
    ICriteria c = _nhibernateSession.CreateCriteria(typeof(Customer));
    c.Add(Restrictions.Between("_creationDate", spec.Start, spec.End));
    return c.List<Customer>();
}
This code is a bit less expressive than the one you posted, but it is still good at capturing domain information in a Specification.

One thing to keep in mind when you work with a Specification is that it is a domain concept. It belongs in the domain layer and should be free of data access technologies. The technicalities of getting the data are important; they just aren't important in the domain layer - they belong in the data access layer. Things like Expression<Func<Customer, bool>> are too 'infrastructural' for domain code, in my opinion. In addition, Linq-based specifications tend to require domain objects to expose their data as properties, which sometimes breaks their encapsulation. So the whole thing may turn into "Linq over an Anemic Model".

I highly recommend reading the DDD book. It has a chapter dedicated to the Specification pattern and all the tradeoffs you are dealing with.
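A sketch of what keeping the specification technology-free might look like (CreationDateRangeSpecification is the name used above; its members and the Customer.CreationDate property are assumptions):

```csharp
// Pure domain object: expresses the business concept "created in this date range"
// without referencing NHibernate, Linq expressions, or any other infrastructure.
public class CreationDateRangeSpecification
{
    public DateTime Start { get; private set; }
    public DateTime End { get; private set; }

    public CreationDateRangeSpecification(DateTime start, DateTime end)
    {
        if (end < start) throw new ArgumentException("End must not precede Start");
        Start = start;
        End = end;
    }

    // Still usable for in-memory checks, e.g. in unit tests.
    public bool IsSatisfiedBy(Customer customer)
    {
        return customer.CreationDate >= Start && customer.CreationDate <= End;
    }
}
```

The repository implementation is then free to translate Start/End into criteria, HQL, or SQL without the domain layer knowing about it.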

The new place I started at is just beginning to develop a completely new product from scratch. They are going with transaction scripts in application services, completely dumb entities, and a hand-rolled DAL with stored procedures (the argument is that NHibernate doesn't optimize SQL well, loads too much, doesn't scale to huge projects, etc.). The app is supposed to be a HUGE project, currently just in its infancy.

I'm coming from a position where I was doing domain model with all business logic encapsulated in that and the app services only handling infrastructure + loading up the model with nhibernate and scripting it.

I really believe going with the 2nd approach is much better. I was planning on doing a presentation on why. I have plenty of books/articles/personal opinions that I can back this up with...but being more of a "junior" there it might not mean much (I was also the single dev at my last place).

What I'm looking for is some experience/tips/examples of failed projects from more senior people on why going with transaction scripts and a hand-rolled DAL is not the right idea.

In addition to what Paul T Davies and Magnus Backeus have said. I think that at the end of the day it would be a people and cultural issue. If people are open minded and willing to learn it will be relatively easy to convince them. If they consider you a 'junior' (which is a bad sign because the only thing that matters is what you say not how old/experienced you are) you can appeal to a 'higher authority':

Stored procedures are dead and you are not the only one who thinks so:

It is startling to us that we continue to find new systems in 2011 that implement significant business logic in stored procedures. Programming languages commonly used to implement stored procedures lack expressiveness, are difficult to test, and discourage clean modular design. You should only consider stored procedures executing within the database engine in exceptional circumstances, where there is a proven performance issue.

There is no point in convincing people that are not willing to improve and learn. Even if you manage to win one argument and squeeze in NHibernate, for example, they may end up writing the same tightly coupled, untestable, data-or-linq-oriented code as they did before. DDD is hard and it will require changing a lot of assumptions, hurt egos etc. Depending on the size of the company it may be a constant battle that is not worth starting.

Driving Technical Change is the book that might help you to deal with these issues. It includes several behavior stereotypes that you may encounter:

  • The Uninformed
  • The Herd
  • The Cynic
  • The Burned
  • The Time Crunched
  • The Boss
  • The Irrational

Good luck!

We are building a slot booking system, but the slots are dynamic and get updated live from the handhelds. I have got to the stage where we have the booking screen; I know I have an available day and time slot, e.g. between 8am and 10am. When I book the slot I need to put 2 records in a table: 1 for job time and 1 for travel time.


Slot 8am-10am

Travel Time = 20 mins (Start 8:00am Finish 8:20am)

Job Time = 30 mins (Start 8:20am Finish 8:50am)

This would then show on a schedule gantt view visually we will be able to see his travel time and job time as 2 blocks.

So if there were no other appointments booked, the first appointment would start at 8am and finish at 8:50am. But how can I see if there are any appointments already booked in this slot, and if there are, find the finish time so I know what to set as the start time for the next job?

Second problem: it may be the case that I have an appointment for 8am - 8:30am and another for 9:30am - 10am (the gap in the middle is due to a customer canceling), so I need to be able to say: I have a job that totals 40 mins; can I fit it in the gap? If YES, get the finish time (8:30) and insert the records that fill the gap.

Hope that makes sense. I would like to do this with C#. Any ideas?

You may want to take a look at the Specification design pattern, and how it can be used as part of a Builder (in this case to find and fill available slots). Read more in the excellent Domain-Driven Design book.
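As a rough sketch of the gap-finding logic the question describes (the Appointment type and all names here are assumptions; a real implementation would also handle slot boundaries, travel/job splitting, and persistence):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class Appointment
{
    public DateTime Start { get; set; }
    public DateTime Finish { get; set; }
}

public static class SlotFinder
{
    // Returns the earliest start time within [slotStart, slotEnd] at which a block
    // of 'required' length fits between existing appointments, or null if none fits.
    public static DateTime? FindStart(DateTime slotStart, DateTime slotEnd,
        TimeSpan required, IEnumerable<Appointment> booked)
    {
        var cursor = slotStart;
        foreach (var appt in booked.OrderBy(a => a.Start))
        {
            if (appt.Start - cursor >= required)
                return cursor;            // the gap before this appointment is big enough
            if (appt.Finish > cursor)
                cursor = appt.Finish;     // move the cursor past this appointment
        }
        return slotEnd - cursor >= required ? cursor : (DateTime?)null;
    }
}
```

With appointments 8:00-8:30 and 9:30-10:00 in an 8-10 slot, asking for 40 minutes would yield 8:30 - exactly the canceled-customer gap from the question.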

We are trying to apply Domain-Driven Design in our project. However, the modeling efforts are huge and somehow the modeling seems to conflict with agile principles as a lot of upfront design is done. On the other hand the actual benefit are diffuse or are rather longterm whereas the "requirements analysis / modeling overhead" felt is an acute and ongoing problem.

So, the question comes up: What makes Domain-Driven Design worthwhile? What are the short-term benefits?

Aside from your experience (which I find very interesting though): Is there an undisputable, logical answer?

DDD - Continuous Refactoring

I guess I'd clarify that Domain Driven Design doesn't call for a tonne of up front modelling - what it calls for is conversations with domain experts, knowledge crunching to gain an intuitive understanding of the domain through 'sensible sounding' use of the ubiquitous language, and continuous refinement of all of the above.

The value of the tactical patterns (aggregates, etc.) is not around getting the model perfect up front, but from structuring your application such that when you inevitably realize that there is a better way of expressing the domain in a model, you can iterate and incorporate your insights into the updated model.

So - in that sense, it is highly supportive of an agile approach.

The best reference for this is the source - "Part III Refactoring Toward Deeper Insight" of the Blue Book' by Eric Evans

I'd recommend not trying to 'waterfall' your model and then 'agile' your code - 'agile' both of them, and accept that you will be refactoring your code not just when you find a more elegant way of solving technical problems, but also when you find a more elegant way of modelling business problems.

Undisputable Logical Answer?

In terms of an "undisputable logical answer" - to be honest, I'm not sure you'll find one. DDD is an approach that is applied differently by different people - it is not an algorithm that can be analysed for its Big O complexity.

My experience is that programs with anemic models and business logic scattered through a collection of loosely related services struggle to iterate and incorporate deeper insights into the business requirements because changes to the rules can have unforeseeable repercussions throughout the system. They encourage systems where new requirements are satisfied by stuffing behavior into places it was never intended to go, and you end up having conversations that involve multiple layers of remembering that code using the word 'employee' kind of sometimes relates to requirements for 'students' and 'teachers'.

Concentrating the essence of each entity into a class, and exposing its behavior behind intention-revealing interfaces, enables effective reasoning about the impact of changes, thus enabling continuous refactoring of the model - both as understanding grows and as requirements change.

Edit - How to Persuade Others

From your comment, I now understand your intent better - I misinterpreted the question that you were looking to be persuaded that DDD is worthwhile - rather you are looking for an argument to present to your team to persuade them that it is worthwhile!

Unfortunately that is more of an inter-personal question than a technical one, as people are often not persuaded by arguments once they are convinced they are on the right path.

Perhaps if you have time you could produce a proof of concept of some acceptance tests and domain models to illustrate the method using real concepts from your domain? You can then show how easily the tests and models can be evolved as understanding grows, and ideally demonstrate an insight gained by actively modeling the domain in code and exercising the model. This is key, I believe, as in my opinion, such insights can only be gained by actively doing, and will never be arrived at through meeting room navel gazing.

We have an application that, along with many things, does some changes to Active Directory (add/remove user from group, change attribute values on user, etc).

We are now in the process of redesigning it (from "spaghetti code" into a more layered solution). The Active Directory management functions are something we would like to abstract to some degree in the domain layer, but at the same time most of them are very technology-dependent.

Should we place all Active Directory access code in the data access layer along with our DB access, or is it ok to create an Active Directory library of functions and call into this library directly from the domain model? That would make the domain objects persistence-aware, and that's probably a bad idea?

Or should all Active Directory access be performed in the service layer instead and not even involve the domain layer?

Domain Models should be technology-agnostic, so don't put your AD code in the Domain Model.

In essence you could say that AD code is just another form of Data Access, so it belongs in the Data Access Layer (DAL). However, it doesn't belong together with your database module, as that would be a violation of the Single Responsibility Principle (SRP - it applies to modules as well as individual types).

Instead of bundling it together with the database access, implement it in its own library. Conceptually, it belongs in the same layer, but it does different things, so now you have two libraries in the same layer. That's absolutely fine - you can have as many libraries in each layer as you need.

In the Domain Model, treat the AD access (and the DB access) as abstractions. Abstract Repositories are the default approach. The AD library will contain implementations of the AD Repository, and the DB library will contain implementations of the DB Repositories.

This fits well with Domain-Driven Design and the concept of an Anti-Corruption Layer.

You can use Dependency Injection (DI) to wire the concrete Repositories up with your Domain Model.
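To make this concrete, here is a sketch of how that split could look across the assemblies (all type and member names are illustrative):

```csharp
// Domain Model assembly: a technology-agnostic abstraction.
public interface IGroupMembershipRepository
{
    void AddUserToGroup(string userName, string groupName);
    void RemoveUserFromGroup(string userName, string groupName);
}

// Active Directory library (same conceptual layer as the DB library, but a
// separate assembly per the SRP argument above). It wraps the AD-specific
// code behind the domain interface.
public class AdGroupMembershipRepository : IGroupMembershipRepository
{
    public void AddUserToGroup(string userName, string groupName)
    {
        // System.DirectoryServices / PrincipalContext calls go here.
    }

    public void RemoveUserFromGroup(string userName, string groupName)
    {
        // ...
    }
}
```

Domain services then receive an IGroupMembershipRepository through constructor injection and never see an AD type.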

What are DDD recommendations for inter-domain referencing design?

Should I try to connect them as "Matryoshka" (put one into another) or it is better to create upper-level "inter-domain" business service?

P.S. While crossing this uncharted water I was unable to find anything useful to read on the Internet, and have started to think that a better term than "inter-domain referencing" must exist for this kind of thing... Am I right?


  1. I have two models/business services.
  2. Semantically first domain (A) is CRM with sell/maintenance process for our goods, second domain (B) is "design" data of our goods. We have two view points on our goods: from seller perspective and from engineer perspective.
  3. Actually, each model is effectively an ORM (Object-Relational Mapping) layer over the same database.
  4. There are some inter-domain activities e.g. validations (e.g. sometimes we can sell things to smb. only if some engineering rules are valid).

From a developer's point of view I have two clear possibilities (reference B in A, or create a new cross-reference domain/service C). But from a designer's perspective I am lost in understanding what kind of Business Service I have when I compose business logic from two different domains.

As far as I know, DDD has no strict rules for 'inter-domain' referencing. At the end of the day your domain model will have to reference basic Java or .NET classes. Or it may reference specialized date/time or graph library (aka 'Generic Domain').

On the other hand DDD has a concept of Bounded Context. And it has quite a few patterns that can be applied when you work at the boundaries of the system. For example 'Anticorruption Layer' can be used to isolate you from legacy system. Other integration styles can be used depending on how much control you have over external code, team capabilities etc.

So there is probably no need to introduce an artificial glue layer if you are just dealing with two subdomains in one Bounded Context. It might also be worth reading Part 4 of the DDD book (Strategic Design).


Based on the information you provided, it looks like you only have one Bounded Context. You don't seem to have 'linguistic clashes' where the same word has two different meanings. Bounded Context integration patterns are most likely not applicable to your situation. Your Sales domain can reference the Products domain directly. If you think of the Products domain as low-level and Sales as high-level, you can use the Dependency Inversion Principle. Define an interface like ProductCompatibilityValidator in Sales and implement it in the Products domain. Then inject the actual implementation at the application layer. This way you will not have a direct reference from Sales to Products.
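A sketch of that Dependency Inversion arrangement (the ProductId type and the method signature are assumptions added for illustration):

```csharp
// Sales domain: defines the contract it needs, in its own terms.
public interface IProductCompatibilityValidator
{
    bool AreCompatible(ProductId first, ProductId second);
}

// Products domain: implements the contract using its engineering rules.
public class ProductCompatibilityValidator : IProductCompatibilityValidator
{
    public bool AreCompatible(ProductId first, ProductId second)
    {
        // engineering rules go here
        return true;
    }
}

// The application layer wires the Products implementation into Sales at
// composition time, so the Sales assembly never references Products directly.
```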

When analyzing the life cycle of domain objects, the aggregate is the basic element for grouping objects. I am having trouble implementing aggregates in C#.

One short example, with a couple of classes, would be very helpful. Or any link on this topic.

class Order {
    public int OrderNumber { get; private set; }
    public Address ShippingAddress { get; private set; }
    public Address BillingAddress { get; private set; }
    private readonly IList<OrderLine> _orderLines = new List<OrderLine>();

    public void AddItem(Item item, int quantity) {
        OrderLine orderLine = new OrderLine(item, quantity);
        _orderLines.Add(orderLine);
    }
    // constructor etc.
}

class OrderLine {
    public Item Item { get; private set; }
    public int Quantity { get; private set; }

    public OrderLine(Item item, int quantity) {
        Item = item;
        Quantity = quantity;
    }
}
At no point should logic involving OrderLines be exposed outside an instance of Order. That's the point of aggregate roots.

For a .NET-specific reference, see Applying Domain-Driven Design and Patterns: With Examples in C# and .NET. Of course, the standard reference here is Domain-Driven Design: Tackling Complexity in the Heart of Software. There's a good article on MSDN too.

There are two ways to transfer data between the BLL <-> DAL:

1- Put the data in Entity.dll, where the DAL, BLL and UI can all use Entity.Student, which only includes student information without any logic.

Increased coupling: This can lead to trouble. For example, suppose Student has a ClassID because every student attends exactly one class, and then the business changes so that a Student has a list of classes. In that case you will probably rewrite a lot of code in all layers (UI, BLL, DAL).

2- The DAL has its own Student and the BLL has its own Student; when you pass an object from the DAL to the BLL, you need to convert every DALStudent into a BLLStudent.

A lot of code: I don't even like the idea of a Translator.dll - it's still a lot of code, and a bit of a waste of processor cycles.

Is there a way out? What is your opinion/approach?


You cannot use exactly the same code for different requirements - you have to deal with that. It is, however, a best practice to use different, adapted classes for your UI layer (view models). And you don't need a BLL layer at all (well, you do, but only for cross-cutting concerns) - take a look at Domain Driven Design. Then, when requirements change, you would only need to change the domain object and its mapping to the view model (which can easily be done using tools like AutoMapper).
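For example, a minimal AutoMapper setup along those lines (this uses the newer MapperConfiguration API; the Student and StudentViewModel types are illustrative):

```csharp
using AutoMapper;

// Configure the domain-object-to-view-model mapping once, at startup.
var config = new MapperConfiguration(cfg =>
{
    cfg.CreateMap<Student, StudentViewModel>();
});
IMapper mapper = config.CreateMapper();

// When requirements change, only Student and this mapping need updating;
// the UI keeps consuming StudentViewModel.
StudentViewModel vm = mapper.Map<StudentViewModel>(student);
```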

Afternoon all

I am currently studying Domain Driven Design (DDD) and am having trouble grasping a fundamental concept.

Patterns, Principles and Practices (Millett and Tune)

During my studies I have often come across the term Domain Model (DM); however, it is commonly discussed at various levels of granularity.

  1. In some cases it's represented as a collection of artifacts (UML, sketches, photos) of various interconnected objects (Customer, Sale, Quotation, Invoice etc.) which outline all the concepts within a single sub-domain.

    Such that there is only one model for a single sub-domain

  2. In other cases it's represented as a single entity, such as Product, whereby a sub-domain will consist of many different Domain Models.

With the above ambiguity I am struggling to understand what a Domain Model actually is and how such models fit into a Bounded Context (BC).

Further to this, I have read that Domain Models can be shared between different Bounded Contexts.

For example, Employee is shared between the Payroll and HR Bounded Contexts.

With this in mind,

  1. would I create multiple Domain Models to represent a sub-domain?
  2. or just a single one?
  3. If the latter, how would one share such large models between contexts?

Please could someone shed light on this ambiguity and explain exactly what a Domain Model is and how granular it can be.

Much Appreciated


Make sure you review The Blue Book.

exactly what a Domain Model is ...

A domain model is

  • a collection of data/state/information that a business cares about
  • the rules that govern how that data can change

read Domain Models can be shared between different Bounded Context


Employee is shared between the Payroll and HR Bounded Context

An important thing to include in your design: the ubiquitous language changes as you cross the boundary between one context and another. If Payroll and HR don't understand Employee the same way, with the same rules governing the changes to the data and the same life cycle, then insisting that they share the same model exposes you to risks that you wouldn't face if those models were kept separate.

Further complicating things is understanding whether your model is the "book of record". For instance, Employee -- if you are talking about the human being -- is out here in the real world. The real world is the book of record; the information you've captured in your databases is just a copy.

For example: in the real world, people are legally entitled to change their names. What does that mean to your business? Does the timing of that impact have the same implication on HR processes that it does on Payroll processes? If they are the same today, are you sure that will always be true?

Prior to being an Employee, the human being may have been an Applicant; does HR care? Does payroll?

There are also practical concerns -- if the HR database goes down, should that block payroll processing?
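A sketch of keeping the two models separate, one per Bounded Context (the members shown are illustrative assumptions):

```csharp
namespace Hr
{
    // HR's view of an employee: the hiring life cycle and role.
    public class Employee
    {
        public string EmployeeId { get; private set; }
        public string Role { get; private set; }
        public DateTime HiredOn { get; private set; }
    }
}

namespace Payroll
{
    // Payroll's view: the same real-world person, but different data,
    // different rules, different life cycle. Only the identifier is shared.
    public class Employee
    {
        public string EmployeeId { get; private set; }
        public decimal Salary { get; private set; }
        public string BankAccount { get; private set; }
    }
}
```

Each context owns its own model and rules; integration happens through the shared identifier rather than a shared class.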

Again - I'm confused about DDD things :)

I have an architecture (I'm still working on it) that in shorthand looks like this:

 EntityDao -> implements domain-layer interfaces (NHibernate)
 EntityRepository -> a repository for each entity, with an injected Dao
 DomainObjects/Entities -> some logic

And right now I'm at the point where I feel the need to create and use some Service classes. I have some questions about that:

1. Should I create at least one service for each entity/domain object?

2.a. Should services have "query" methods like Find, FindAll, FindAll(LINQQuery)?

2.b. Should I stop using Repositories in the upper layers (UI) to get sets of entities ("Find"-like methods) and start using only services?

3. If the answer to question 2 is No - should I use Services and Repositories in parallel (when in the UI I just need all entities I use Repository.FindAll; when I need some "logical" list of entities I use a Service.FindXXX method)?

4. Somehow I feel that Repositories don't fit in the Domain layer - should I separate them somehow and leave only domain-specific objects like Entities and Services in the DOMAIN? If yes, give me a structure example of how to achieve that.

Examples of some objects:


public class NHibernateDao<T> : IDao<T>
{
    private readonly Type entityType = typeof(T);

    public NHibernateDao() { }

    public T Get(object id)
    {
        T entity = (T)NHibernateSession.Get(entityType, id);
        return entity;
    }

    public T Load(object id)
    {
        T entity = (T)NHibernateSession.Load(entityType, id);
        return entity;
    }

    public virtual T Update(T entity)
    {
        // update logic omitted
        return entity;
    }
}


public class BaseRepository<T> : IRepository<T>
{
    private DataInterfaces.IDao<T> mDao;

    public virtual T Get(object id)
    {
        return mDao.Get(id);
    }

    public virtual void Delete(T entity)
    {
        // delete logic omitted
    }

    public virtual T Update(T entity)
    {
        return mDao.Update(entity);
    }

    public virtual IQueryable<T> FindAll()
    {
        return mDao.FindAll();
    }
}

Domain objects, at the moment, are mainly get/set containers - the background of this question is removing that anemic model.

1. One service per entity?

No. You do not need to create one service for one entity. In DDD you would create services for operations that do not naturally map to a single entity (or value object). A good service (from Evans) :

  • The operation relates to a domain concept that is not a natural part of an entity or value object.
  • The interface is defined in terms of elements of the domain.
  • The operation is stateless

So a service can consume many entities, and there may be many entities that aren't consumed by any service at all.

2a. Should services have "query" methods (..)?

No. Generally speaking those are repository methods and are not placed on services. However, there can be operations on a service that return a collection of entities.

2b. Should I stop using Repositories in the upper layers (UI) to get sets of entities ("Find"-like methods) and start using only services?

That might be a good idea. Often, when an application uses many repositories in the UI layer, the UI performs domain operations on multiple entities. These operations should typically be implemented in the domain layer; either in the entities themselves, or in services.

3. Should I use Services and Repositories in parallel from the UI?

Better not, see above; although there might be situations where you can quickly create part of your UI by doing so.

4. Somehow I feel that Repositories don't fit in the Domain layer ...

You're right, you should only put repository interfaces in the domain. See Kostassoid's answer for an example.
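A minimal sketch of that split (all names here are illustrative):

```csharp
// Domain layer: only the abstraction lives here.
namespace Domain
{
    public interface IOrderRepository
    {
        Order Get(int id);
        void Add(Order order);
    }
}

// Infrastructure layer: the NHibernate-specific implementation.
namespace Infrastructure
{
    public class NHibernateOrderRepository : Domain.IOrderRepository
    {
        private readonly NHibernate.ISession _session;

        public NHibernateOrderRepository(NHibernate.ISession session)
        {
            _session = session;
        }

        public Domain.Order Get(int id) { return _session.Get<Domain.Order>(id); }
        public void Add(Domain.Order order) { _session.Save(order); }
    }
}
```

The domain assembly compiles without any reference to NHibernate; the implementation is wired in at the composition root.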

Currently the system I am working on is layered like this:

  • Web UI
  • Application
  • Domain
  • Infrastructure

In which layer would I put the specification implementations? Infrastructure?

Specifications are part of the Domain Model.

This pattern is described in Domain-Driven Design, and since this book deals explicitly with Domain Modeling, I think it's fair to say that it belongs in the Domain Layer.

First off, I'll admit that I'm a newbie to DDD and need to read the "blue book".

I'm building a system that has an AggregateRoot of type "Match". Each Match can have a collection of "Votes" and also has a readonly "VoteCount" property which gets incremented when a user up-votes or down-votes a Match.

Since many users could be voting on a Match at the same time, Votes have to be added/removed from the Match and the VoteCount has to be incremented/decremented as one atomic operation involving write locks (with locks handled by the DB). (I need VoteCount as a static value in the database to be queried on efficiently by other processes/components.)

It seems to me that if I were adhering to strict DDD, I would be coding this operation as such:

1. An application service would receive a vote request object.
2. The service would then retrieve the Match object from a Match Repository.
3. The service would then call some sort of method on the Match object to add the Vote to the collection and update VoteCount.
4. The Repository would then persist that Match instance back to the DB.

However, this approach is not feasible for my application for 2 main reasons, as I see it:

I'm using MongoDB on the backend and cannot wrap this read-write operation into a transaction to prevent dirty reads of the Match data and its associated Votes and VoteCount.

It's highly inefficient. I'm pulling back the entire object graph just to add a Vote and increment VoteCount. Although this is more efficient in a document db than in a relational one, I'm still doing an unnecessary read operation.

Issues 1 & 2 are not a problem when sending a single Vote object to the repository and performing one atomic update statement against Mongo.

Could Vote, in this case be considered an "aggregate" and be deserving of its own repository and aggregate status?

Could Vote, in this case be considered an "aggregate" and be deserving of its own repository and aggregate status?

I think this might be the right answer. An aggregate should be a transactional consistency boundary. Is there a consistency requirement between votes on a match? The presence of a Vote collection on a Match aggregate would suggest that there is. However, it seems like one vote has nothing to do with the next.

Instead, I would store each vote individually. This way you can use the aggregate functionality of MongoDB to get the count, though I'm not sure whether it is still slow. If it is, then you can aggregate using the Map/Reduce functionality.

More generally, this may not be a best fit for DDD. If the domain doesn't consist of complex behavior there is hardly a reason to try to adapt the DDD tactical patterns (entity, agreggate) to this domain.
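For reference, the single atomic update the question is after can be expressed with the MongoDB C# driver roughly like this (this assumes the newer 2.x driver API; the collection and field names are illustrative):

```csharp
using MongoDB.Bson;
using MongoDB.Driver;

// One round trip: push the vote into the embedded array and bump the
// denormalized counter atomically - no read-modify-write of the whole Match.
var matches = database.GetCollection<BsonDocument>("matches");

var filter = Builders<BsonDocument>.Filter.Eq("_id", matchId);
var update = Builders<BsonDocument>.Update
    .Push("Votes", new BsonDocument { { "UserId", userId }, { "Value", 1 } })
    .Inc("VoteCount", 1);

matches.UpdateOne(filter, update);
```

If votes are stored as their own documents instead, the same Inc-style update (or an aggregation/count query) keeps the count consistent without loading the Match aggregate.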

From the DDD book by Eric Evans:

VALUE OBJECTS can even reference ENTITIES. For example, if I ask an online map service for a scenic driving route from San Francisco to Los Angeles, it might derive a Route object linking L.A. and San Francisco via the Pacific Coast Highway. That Route object would be a VALUE, even though the three objects it references (two cities and a highway) are all ENTITIES.

page #98

In Hibernate, if I have a value object I can map it as a component. What if I want to reference an entity from that component?


  • I have a Users table.
  • Each User can have many addresses.
  • I create an addresses table, but I treat the addresses as value objects in my domain.
  • Each address has a type (work address, home address, ...etc.)
  • I create an address type table as a lookup and treat it as an entity in my domain.
  • An address should have a reference to its type.

How can I achieve that?

See the documentation.

The <component> element maps properties of a child object to columns of the table of a parent class. Components may, in turn, declare their own properties, components or collections. See "Components" below.

<component>                 <!-- NOTE: I'm omitting the attributes. See docs for details on these. -->
       <property ...../>
       <many-to-one .... />
</component>
Notice the example property and many-to-one in the code above. To create a reference to another entity, you simply use many-to-one inside the component, just as you would outside the component.

Going to develop a search engine.

I'm wondering how my DDD should look. Sorting of records should be implemented, but I don't want my views to know about my database structure (i.e. which columns to sort by). As far as I understand, sorting information should come from the infrastructure layer, from the repository implementation, so the domain has to be flexible.

How should it look?

I want this to be strongly typed.

Any best practices?

Recommendations for architecture?

If you are going to develop a search engine, you'll be forced to think about scalability very quickly. Sorting in search-related environments is a well-known problem. You should have a look at the search implementations from Google! How you sort should depend on a ranking algorithm. A domain-centric ranking algorithm design shouldn't be that different from a ranking-as-a-service approach!

Which language you use is your choice. If you choose C/C++, look at the Message Passing Interface (MPI) for distributed computing. If you use Java, have a look at JMS and GridGain (GridGain implements Google's MapReduce).

Another question is how to store your data (distributed, fast, fault-tolerant). For Java, have a look at Project Voldemort (which is one of the best systems you can get for free).

For more information about the Google architecture, read more on the high scalability website.

For issues about DDD, have a look at the homepage of Eric Evans himself ;) He has written a very good book, Domain-Driven Design. DDD is fine because it assures the integrity of a Domain.

A simple model might be:

page ( URL url, BigInt rank, List<String> keywords, 
       List<URL> links, List<URL> outLinks, Content ref) 

content ( GzippedBytes[] content )

If a new node is added to the system, it should react to things like "setLinks" etc., so it can compute its PageRank on its own.

The client is quite simple: it only performs a search(keywords), the results of which are sorted by PageRank.

Here is an example of a PageRank service implementation in Java.
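For illustration, a minimal self-contained sketch of the PageRank iteration (simplified: the graph is assumed closed, with no dangling nodes; names are illustrative):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Simplified PageRank: iteratively redistributes rank along outgoing links.
// Assumes every link target is also a key in the map.
class PageRank {
    static Map<String, Double> rank(Map<String, List<String>> links,
                                    int iterations, double damping) {
        int n = links.size();
        Map<String, Double> rank = new HashMap<>();
        for (String page : links.keySet()) rank.put(page, 1.0 / n);

        for (int i = 0; i < iterations; i++) {
            Map<String, Double> next = new HashMap<>();
            // Base rank from the damping factor...
            for (String page : links.keySet()) next.put(page, (1 - damping) / n);
            // ...plus each page's rank distributed evenly over its out-links.
            for (String page : links.keySet()) {
                List<String> out = links.get(page);
                for (String target : out) {
                    next.merge(target, damping * rank.get(page) / out.size(), Double::sum);
                }
            }
            rank = next;
        }
        return rank;
    }
}
```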

I am developing a web based application using ASP.NET MVC. I am trying have rich domain models rather than the thin/anemic models.

I have modelled my solution along the lines of the Onion architecture. The different projects are as below:

  • {}.Domain.Core - contains the domain objects and interfaces like IDbContext, which is implemented in the Infrastructure layer
  • {}.Database - is the database project
  • {}.Infrastructure - contains implementations for logging, data access, etc.
  • {}.Web - Views and Controllers

The data access is done using Dapper, and IDbContext is a wrapper around two simple command and query interfaces. I have isolated each of the queries as a separate class.

For sake of discussion I am taking a small part of the application.

I have a versioned document library which contains documents along with other metadata like tags, permissions, etc.

A simplified model of my document object is as shown below

(image: simplified Document model)

I would want the operations to be defined within the domain object, since there is business logic involved in each of these operations. Let me take "Delete" as an example. The operation needs to:

  • Validate if user has permission to delete
  • Check if there are no associations which will get impacted by this delete
  • Check if no workflow is in progress
  • Delete the actual item from database in a transaction

As shown in the above example, I would need the database context to complete this operation. The way I am currently thinking of modeling this is to have the domain object hold an IDbContext, which can execute the exposed queries.


In my controller class I call the domain objects and perform the operations.

I am not sure if passing the IDbContext into the domain object is OK. If not, what are better ways to model this?

I am not convinced about having a separate service layer, because 1) the controller acts as a first layer of services in most cases, and 2) a service layer would just duplicate the same methods from the domain in another class.

Let me know how I can better this design.

Injecting the IDbContext like that breaks the main principle of the domain model, which should be responsible for business logic ONLY, while retrieving and storing your domain entities is the responsibility of the infrastructure layer. Yes, you inject it by interface, hiding the actual implementation, but it still makes your domain model aware of some storage.

Also, the steps above required to delete a Document don't entirely belong to the Document object. Let's consider the first step, user permissions, with the following cases:

  • Users with Admin role should be allowed to delete any document
  • Document owner should be allowed to delete the document

For the first case there might not be a connection between a user and a document to remove. Admin users are simply allowed to do anything. It's like the classical example of two bank accounts and an operation to transfer money, which involves both accounts but is not their responsibility. This is where Domain services come into place. Please don't confuse them with Service layer services. Domain services are part of the domain model and responsible for business logic only.

So if I were you, I would create a new Domain service with DeleteDocument method. This should do the first three steps from above accepting User and Document as parameters. The fourth step should be done by your repository. Not sure what you mean by saying

I didn’t see too much value in adding repositories

but from the domain model perspective you already have one: it's the IDbContext. I assume you meant some pattern for implementing a repository, or a separate repository for each entity. In the end, your pseudo code in the controller could be the following:

var user = dbContext<User>.SelectById(userId);
var document = dbContext<Document>.SelectById(docId);
var docService = new DocumentService();
docService.DeleteDocument(document, user);  // Throw an exception here if deletion is not allowed

If you expect to need this logic in many places of your application, you can just wrap it up in a Service layer service.
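Sketched in Java for brevity (hypothetical names; the permission rules are illustrative only), such a domain service contains pure business rules and no persistence:

```java
import java.util.Set;

// Hypothetical model classes; only the fields needed by the rules are shown.
class User {
    final String id;
    final Set<String> roles;

    User(String id, Set<String> roles) {
        this.id = id;
        this.roles = roles;
    }
}

class Document {
    final String ownerId;
    final boolean workflowInProgress;

    Document(String ownerId, boolean workflowInProgress) {
        this.ownerId = ownerId;
        this.workflowInProgress = workflowInProgress;
    }
}

// Domain service: checks the first steps (permissions, workflow); the
// actual delete remains the repository's responsibility.
class DocumentService {
    boolean canDelete(Document doc, User user) {
        boolean permitted = user.roles.contains("Admin") || user.id.equals(doc.ownerId);
        return permitted && !doc.workflowInProgress;
    }
}
```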

I suggest reading Eric Evans's book on DDD if you want to learn more about domain modeling. It discusses the meaning of entities, value objects, domain services, etc. in detail.

ANSWER TO THE COMMENT: Not really; the domain services are part of the domain, so both the implementation and the interface are part of the domain as well. The fact that two or more objects have to interact with each other is not enough for creating a domain service. Let's consider a flight booking system as an example. You have a Flight entity with different properties such as DepartureCity and ArrivalCity. The Flight entity should also have a reference to a list of seats. Seat could be a separate entity as well, with such properties as Class (business, economy, etc.), Type (aisle, row, middle), etc. So booking a seat requires interacting with different entities, such as Flight and Seat, but we don't need a domain service here. By its nature, a Seat makes no sense if not considered as a child object of a Flight. It's even very unlikely you would ever have a case to query a Seat entity outside of the Flight context. So reserving a Seat is the responsibility of the Flight entity here, and it's OK to place the reserving logic in the Flight class. Please note it's just an example to try and explain when we need to create domain services; a real system could be modeled in a completely different way. So just try following these three basic steps to decide whether or not you need a domain service:

  1. The operation performed by the Service refers to a domain concept which does not naturally belong to an Entity or Value Object.
  2. The operation performed refers to other objects in the domain.
  3. The operation is stateless.

I'm accessing the dbContext from the controller, which is the application/service layer, not the domain/business layer. The domain model deals with business logic only; it should not be aware of any persistence logic, and from the example above you can see that DocumentService has no reference to the dbContext.

I am learning Spring Data JPA, and I came across the term "JPA repository" (to be specific, Spring Data JPA repository). I want to understand whether the term "JPA repository" is a general concept (like JPA, a standard specification) or one coined by the Spring Data project.

What exactly does "repository" mean in this context?

What exactly does "repository" mean in this context?

Repository is one of the patterns introduced in Patterns of Enterprise Application Architecture (Martin Fowler, 2002). In the book, Repository is defined as:

Mediates between the domain and data mapping layers using a collection-like interface for accessing domain objects

A system with a complex domain model often benefits from a layer, such as the one provided by Data Mapper, that isolates domain objects from details of the database access code. In such systems it can be worthwhile to build another layer of abstraction over the mapping layer where query construction code is concentrated. This becomes more important when there are a large number of domain classes or heavy querying. In these cases particularly, adding this layer helps minimize duplicate query logic.

A Repository mediates between the domain and data mapping layers, acting like an in-memory domain object collection. Client objects construct query specifications declaratively and submit them to Repository for satisfaction. Objects can be added to and removed from the Repository, as they can from a simple collection of objects, and the mapping code encapsulated by the Repository will carry out the appropriate operations behind the scenes. Conceptually, a Repository encapsulates the set of objects persisted in a data store and the operations performed over them, providing a more object-oriented view of the persistence layer. Repository also supports the objective of achieving a clean separation and one-way dependency between the domain and data mapping layers.
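Fowler's collection-like interface might be sketched as follows (an in-memory stand-in with illustrative names; a real implementation would encapsulate the data-mapping code and translate queries to the data store):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.function.Function;

// A collection-like Repository interface, per Fowler's description.
interface Repository<T, ID> {
    Optional<T> findById(ID id);
    List<T> findAll();
    void add(T entity);
    void remove(T entity);
}

// In-memory implementation for illustration only: clients add and remove
// objects as with a simple collection, while the mapping code would carry
// out the appropriate operations behind the scenes.
class InMemoryRepository<T, ID> implements Repository<T, ID> {
    private final Map<ID, T> store = new LinkedHashMap<>();
    private final Function<T, ID> idOf;

    InMemoryRepository(Function<T, ID> idOf) {
        this.idOf = idOf;
    }

    public Optional<T> findById(ID id) { return Optional.ofNullable(store.get(id)); }
    public List<T> findAll() { return new ArrayList<>(store.values()); }
    public void add(T entity) { store.put(idOf.apply(entity), entity); }
    public void remove(T entity) { store.remove(idOf.apply(entity)); }
}
```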

Further Reading

For more detailed discussion about Repository pattern, you should take a look at Patterns of Enterprise Application Architecture (Fowler, 2002) and Domain Driven Design (Evans, 2003).

I am new to DDD. I have a Partner aggregate which has a User reference. The User object itself is another aggregate.

Since not all users have to be referenced in the Partner object, the User object is an aggregate root. The Partner is an aggregate root as well.

First: Would my design be wrong with one aggregate root inside another?

Second: If the design is right, would it be bad practice to use one Repository inside another to persist the Partner? (UserRepository inside PartnerRepository)

Note: I am not using any ORM framework.

First: Would my design be wrong with one aggregate root inside another?


AGGREGATE A cluster of associated objects that are treated as a unit for the purpose of data changes. External references are restricted to one member of the aggregate, designated as the root. A set of consistency rules applies within the aggregate's boundaries.

Eric Evans, Domain Driven Design

The problem your design faces is this: the Partner aggregate cannot protect its own invariant if the User aggregate is allowed to modify the same state independently.

When, in your design, you assert that X is an aggregate, you are making two claims

  1. The validity of any change to X can be determined without consulting any external state.
  2. The validity of any external change can be determined without consulting the state of X.

That's the aggregate boundary -- changes on the outside don't need to look inside, changes inside don't need to look outside.

Therefore, nesting aggregates is a contradiction.

Another way of expressing the same idea: if one aggregate root is inside another, then you have an aggregate (the Partner aggregate) which has two roots (the Partner entity, and the User entity), which is precisely the situation that the aggregate pattern is intended to avoid.

Possible remedies:

One is that the User entity really is part of the Partner aggregate. Every change to the user is supposed to be managed by the Partner. You move the User back into the partner aggregate, and prevent your implementation from accessing it except by issuing commands to the aggregate root.

Another is that the User entity isn't part of the Partner aggregate. Then you eliminate the direct reference in the Partner; that might mean eliminating the reference completely (if the business rules of Partner don't depend upon user at all), or having a reference to the User identifier, and business rules that check the identifier without following it (in other words, your rules might check that an id reference is null/not null, or is/is not a member of a collection). If you think about it, the Partner aggregate can't even tell if the User it is referencing exists.

A third is the discovery of a new entity in your model, that includes the part of User that the Partner aggregate actually uses for validation. This might be a snapshot of User state, or it might be a refactoring of User into multiple pieces.
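The second remedy might be sketched like this (hypothetical names): the Partner keeps only the User's identifier, and its business rules check the identifier without following it:

```java
// Hypothetical sketch: Partner references the User aggregate by identity,
// not by object, so neither aggregate can modify the other's state.
class Partner {
    private final String partnerId;
    private String userId; // reference by identifier only

    Partner(String partnerId, String userId) {
        this.partnerId = partnerId;
        this.userId = userId;
    }

    // A business rule that inspects only the identifier itself; note the
    // Partner cannot (and need not) tell whether the referenced User exists.
    boolean hasAssignedUser() {
        return userId != null;
    }

    void assignUser(String userId) {
        this.userId = userId;
    }

    void unassignUser() {
        this.userId = null;
    }
}
```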

I am on a tight schedule with my project, so I don't have time to read books to understand it.

Just like anything else, it can be put in a few lines after reading the books a few times. So here I need some description of each term in the DDD practices guideline, so I can apply them piece by piece to my project.

I already know the terms in general but can't put them in terms of a C# project.

Below are the terms I have so far learned from reading brief descriptions, in relation to a C# project - i.e., what the purpose of each is in a C# project.

  • Services
  • Factories
  • Repository
  • Aggregates
  • DomainObjects
  • Infrastructure

I am really confused about Infrastructure, Repository and Services. When to use Services, and when to use Repository?

Please let me know if there is any way I can make this question clearer.

I recommend that you read through the Domain-Driven Design Quickly book from InfoQ; it is short, free in PDF form (you can download it right away), and does its best to summarize the concepts presented in Eric Evans's Blue Bible.

You didn't specify which language/framework your current project uses; if it is a .NET project, then take a look at the source code of CodeCampServer for a good example.

There is also a somewhat more complicated example named Fohjin.DDD that you can look at (it has a focus on CQRS concepts that may be more than you are looking for).

Steve Bohlen has also given a presentation on DDD to a crowd; you can find the videos from links off of his blog post.

I've just posted a blog post which lists these and some other resources as well.

Hopefully some of these resources will help you get started quickly.

I am working on an application that has some scalability requirements and consists of a web-based front-end along with a set of services and workflows. In the architecture that I have designed, some of these services will perform necessary transformations on a given set of data, pull additional data from a database, and so on.

In terms of documenting my architectural design, I am wondering if someone can suggest a couple of books or some reading material on best practices. I am not looking for a guide on UML. Let me clarify...

For example: I have a service... let's call it my Workflow service. It will take a request, read some stuff from a database to look up that request, and trigger a workflow. Sounds easy enough. In terms of the architectural design, let's say I break off the database logic into its own module or package... should this just be called the blahblahblahDAO or blahblahblahBusinessObjects?

Thanks in advance.

If you are looking for deeper insights into how to layer real software, and what proper names the layers should have, you should read about Domain-Driven Design.

The first and classic book (be aware that it's very general). For something practical you can check out this book, or just google for some online examples.

I bump into this from time to time during class design... when I have several properties of the same type in the object. Take a few examples:

User has several addresses. We can do

IDictionary<string, Address> Addresses; // Addresses["BusinessAddress"];


Address BusinessAddress; 
Address ShippingAddress;

Product has related products, in different categories. We can do

IDictionary<string, IList<Product>> Related; // Related["Available"];


IList<Product> Available;
IList<Product> Required;

User has several roles assigned. We can do

IList<Role> Roles;


bool IsAdmin;
bool IsSales;

So, what's better? IDictionary is more flexible as we can easily add new address categories into the database without even touching the code. But the drawback is that we use strings to access; this is always error-prone. We can solve this with either unit tests or constants like

public class ProductCategories { public static string Available = "Available"; }

But "product.Related[ProductCategories.Available]" is still much worse to read than "product.Available".

There are other considerations, like, what's easier/better to map with ORM (e.g. NHibernate).

But the question is, are there any known preferences? Design articles and/or books on this subject? Real world practices that people here experienced?

For example, we can combine both worlds... have an IList and a bool IsAdmin doing "return Roles.Contains(Role("Admin"));". But this looks ugly to me.

Or, in case of IDictionary we can't have more than 1 address per type; and if we do

IDictionary<string, IList<Address>>

this is going crazy for simple addresses where we don't need multiples. But if we use BillingAddress, ShippingAddress, and an IList for MultipleAddresses, we have much more fine-grained control over our addresses... with IntelliSense and so on.

Since you mention Domain-Driven Design (DDD), the book of the same title contains guidelines for Intention-Revealing Interfaces. Having a dictionary of Domain objects is in no way intention-revealing, so for that reason alone I would strongly recommend against Dictionaries.

Dictionaries should mostly be used to expose an API where callers will have the ability to both add and retrieve values, or for very general-purpose infrastructure APIs, where you need to be able to deal with a very flexible set of situations.

This is not the case for DDD. Imagine that you didn't write the class yourself, and then encountered the Dictionary-based Addresses property from your example. How would you know which types of addresses it contains? You would have to look at the implementation code or read the documentation.

If, on the other hand, you have both a BusinessAddress and a ShippingAddress property, it is immediately obvious to the consumer of the API what's available.

I think the flexibility you have in mind is a false sense of flexibility, since client code still needs to know which dictionary entries are available before it can consume them. It is just as easy to add a new property to a class as it is to add an entry to a Dictionary.

If you really need a Dictionary (or better, a list) because sometimes you need to iterate over all Addresses (or whatever), you can do this by exposing a read-only property that provides an enumerator over both - something like this:

public Address BusinessAddress { get; set; }
public Address ShippingAddress { get; set; }

public IEnumerable<Address> Addresses
{
    get
    {
        yield return this.BusinessAddress;
        yield return this.ShippingAddress;
    }
}

I first want to say that I am not trying to accomplish a domain model in my current design.

That being said, I currently am building an architecture that looks like the following:

UI DTO <=> Service DTO <=> Business/Database DTO (using AutoMapper)

I have been reading Eric Evan's DDD book, and have also watched Greg Young's 7 reasons why DDD projects fail, and am afraid of the anemic model. Even though I am not prescribing to DDD, I do not want to create too many layers that it becomes a pain to keep mapping very similar things back and forth.

However, the whole reason that I have the setup that I do is twofold: ease of change and obscurity.

  • Ease of change: If my public objects are exposed via my service, and I use the UI and business objects internally, then I am much more free to make changes without breaking existing APIs. But maybe I can use one DTO and refactor if the DTOs begin to deviate?
  • Obscurity: I can expose my public objects, but not expose my full objects and their implementations if they are internal. This is going to need to be a secure product, so I am preparing for that. But, maybe again, I can just refactor that later?

So, my question is this: does my current model make sense, or am I asking for problems later on? Is it OK because these objects are primarily DTOs? Even in Evans's book he implied that this model is OK if it is planned to be distributed over different servers. So, is my layering OK for this reason alone, as I plan on having the UI, Service, and DB layers capable of being on different servers (they aren't currently, due to no current need)?

I am just trying to be aware of over-architecture while avoiding problems down the road: is this model structure good or bad for my current implementation?

This is the pattern my team uses for development with ASP.NET MVC and WCF, where your business/database DTO maps to an Entity Framework class, your service DTO maps to a POCO class/DataContract passed into/out of the WCF service, and your UI DTO maps to an MVC model.

While this might seem redundant, rarely do the needs of each layer lend themselves to a design where all three DTOs in a stack have the same properties. One example where they tend to differ is foreign keys into lookup tables. In the database layer this will be represented by an int, while in the service layer it is better modeled as an enum, so as to impose type safety; lastly, in the UI, the field will be converted into a localized string for display to the user.
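That lookup-table example might be sketched as follows (in Java, with hypothetical names): the database stores an int key, the service layer works with a type-safe enum, and the UI renders a display string:

```java
// Sketch: the same lookup value modeled per layer.
// Database layer: an int foreign key. Service layer: this enum.
// UI layer: a display string (localization elided).
enum OrderStatus {
    PENDING("Pending shipment"),
    SHIPPED("Shipped");

    private final String display;

    OrderStatus(String display) {
        this.display = display;
    }

    // Mapping from the database's int key to the type-safe enum.
    static OrderStatus fromDbKey(int key) {
        switch (key) {
            case 1: return PENDING;
            case 2: return SHIPPED;
            default: throw new IllegalArgumentException("unknown key: " + key);
        }
    }

    // What the UI layer would render to the user.
    String toDisplayString() {
        return display;
    }
}
```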

In regards to your fear of over-architecting, there is something to be said for keeping things simple. The forces that would drive me to use this pattern are that I need to deploy my application layer independently of my UI, or simply that I work on a team with more than two developers. As your team grows, the likelihood that a team member will decide to bypass your business layer and go directly against the database goes up significantly. In doing so, they invariably defeat the purpose of any Aspect-Oriented Programming you have in place - i.e. stuff doesn't get logged or wrapped in transactions. Adding the extra layer and moving it to a separate server creates a clean separation of concerns and, more importantly, enforces it. A one or two person team may be disciplined enough to avoid this, but, as you grow, it becomes a necessity rather quickly.

Hope that helps!

I have found quite a lot of material (books and other stuff online) on how to make UML diagrams. So now I understand UML and diagramming (with a tool).

However, where I am stuck is the approach / methodology. My hunt for an approach / methodology always leads to how to use UML and which diagram fits where. Frankly, my intent is to know how to start the journey from putting down the domain understanding (and how) to drafting a blueprint of the system that is ready for the use of developers.

I really don't care if it is UML (good if it is so) or not. I should be able to communicate the target application's domain understanding, its analysis and eventually its intended design in as clear terms as possible.

I think there is no cast-in-stone way of doing this; however, I am looking for potential approaches / methodologies. Please share pointers to any books / training material available for the purpose.

Here are a few resources that may help:

  1. Domain Driven Design Quickly (Free summary of Domain Driven Design)
  2. Domain Driven Design

These resources deal with gathering the knowledge of the Domain from domain experts, coming up with terms that are ubiquitous for all parties involved, and then designing the programming model to suit.

Additionally, since you mention UML, and if you haven't come across the following book yet, I highly recommend it:

Lastly, in more general terms, I would look further into Agile Development Methodologies.

I'm going to be helping to build an app (obviously). I was thinking of going with a microservices architecture initially, but on reflection it's not necessary at this stage, though it will be in the future.

So, how do I build an app but with the intention to move sub-components into microservices?

What concepts or structure should I follow that will make such a future transition easier?

What should I be aware of?

Any gotchas or things that might make transitioning harder than necessary if I don't watch out for them?

Anything else that would be useful to know as well, thank you.

P.S. Yes, this might be a bit vague/broad, but I'm not asking for in-depth responses, just links to useful bits of information that will be of help to me. I've looked but not found anything useful on the transition from monolithic to microservices architectures.

EDIT: Since it's obviously not clear, let me state that I'm looking for resources. I get that some will be opinionated, but that's fine; opinionated resources are better than NO resources, which is what I currently have.

Some guidance > no guidance.

I don't think there are a lot of resources that directly prepare you for a microservices architecture. At least none that I know of. The closest I can think of is the Domain-Driven Design book by Eric Evans.

It's more of a software design book, but in my opinion microservices is really just an architecture that mimics software design. It's the attempt to separate the concerns of an application into different categorized components.

The most useful concepts for a microservices architecture are probably bounded contexts and service objects. The bounded contexts are the sub-level domains to which the services should be scoped, and the service objects will be the actual services down the line. These service objects should be loosely coupled to make the migration to microservices seamless.

Finally, during the migration to a microservices architecture, the service objects can be converted to client-like objects that abstract away the interservice communication protocol for a given service. Hope this helps!

I'm trying to design my web app with ddd practices in mind. This app deals with the storage of containers in storage locations. A container contains a substance. Most likely, users will search for a substance and want to know in which location to find the container. Moreover, they will want to inventorize a storage location, i.e. get all containers of that storage location.

This is why I have identified substance, container and storageLocation as aggregates. I have learned that other aggregates should not be referenced directly, but by primary key. Now I am wondering what the best way is to ensure referential integrity in my domain layer (i.e. not having references that point to a nonexistent/wrong container), e.g. when deleting containers, since substance and storageLocation hold references to containers. Let's assume all references are bidirectional. I am mostly afraid of "forgetting" to add appropriate methods to an entity that might be added later in the project. I am not sure if that is even a "valid" concern when programming.

These are my entities:

public class Substance {
    private Long id;

    private List<Long> containerIds;

    public void addContainer(Container c) { containerIds.add(c.getId()); }
    public void removeContainer(Container c) { containerIds.remove(c.getId()); }
}

public class Container {
    private Long id; //+get

    private Long substanceId; //+ get set
    private Long storageLocationId; //+ get set
}

public class StorageLocation {
    private Long id;

    private List<Long> containerIds;

    public void addContainer(Container c) { containerIds.add(c.getId()); }
    public void removeContainer(Container c) { containerIds.remove(c.getId()); }
}

Now, in my controller, I have to get the Substance and StorageLocation entities from the repository, remove the container ID references from them, and then remove the container:

public class Acontroller {

    private ContainerRepository containerRepository; // constructor injected
    private SubstanceRepository substanceRepository; // constructor injected
    private StorageLocRepository storageLocRepository; // constructor injected

    public void deleteContainer(Container c) {
        Substance sub = substanceRepository.getByID(c.getSubstanceId());
        sub.removeContainer(c);

        // The same for the storageLocation

        containerRepository.delete(c);
    }
}
And every time I add another entity reference to Container, I will have to expand the controller method.

Is this way of managing the references "by hand" acceptable? If not, how would I go about doing it while retaining the reference by ID? Or should I forget about the IDs and just work with object references?

ps: first SO question, so please be gentle with me and let me know what to change about the question.

Only Model Necessary Associations

Let's assume all references are bidirectional.

I think this is probably the first assumption you need to question. When modelling your domain entities, it's best to think about the operations that they participate in and the invariants you need to enforce during those operations. If bidirectional references aren't required for those operations and invariants, don't maintain them.

e.g. in your case - depending on your domain and invariants, you might be able to get away with uni-directional references - e.g. perhaps substance holds a containerId and container holds a storageLocationId

Chapter 5 "Model Expressed in Software" of Eric Evan's book has an excellent discussion on this topic, including explicit debunking of the usual first-case assumption that references must be bidirectional.

Is "Delete" a Business Operation?

Expanding on @VoiceOfUnreason's answer and Udi Dahan's blog: it is really important to understand what your users mean when they ask to be able to delete something. In your case, a few questions to ask:

  • Has the container gone out of service?
  • Will it return to service at some point in the future?
  • Is it now being used in a different part of the facility?
  • Is it being deleted because the container identifier has changed (e.g. a barcode has rubbed off and a new one printed and stuck on) - in which case you might be modelling the identity of the entity incorrectly, as the container is the same, but the barcode has been changed - in which case the barcode is not truly the container's identity
  • What happens to substances in a container when it is 'deleted'? Have they been used? moved to another container?

'Referential Integrity' via Eventual Consistency

Sometimes things that look like invariants are not really things that absolutely, positively must be enforced at all times. e.g. in the unlikely case that you did discover, through all of the above questioning, that you really do need to delete a container, what would happen if there was a slight delay in processing the ramifications of the delete from the perspective of the substances?

Could you publish a domain event ContainerDeleted and in the handler for that event, identify all the associated substances and do what needs doing to them? e.g. mark them as 'uncontained' or whatever makes sense in your domain.
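A minimal sketch of that event-driven flow, in Java for illustration (the event and handler names are my assumptions, not from the question):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical domain event published when a container is deleted.
record ContainerDeleted(String containerId) {}

// Hypothetical handler: an eventually-consistent reaction that marks the
// affected substances as 'uncontained', instead of enforcing the rule
// synchronously inside one big aggregate.
class ContainerDeletedHandler {
    private final List<String> uncontained = new ArrayList<>();

    void handle(ContainerDeleted event, List<String> substanceIdsInContainer) {
        // in a real system these ids would come from a repository query
        for (String substanceId : substanceIdsInContainer) {
            uncontained.add(substanceId);  // e.g. mark as 'uncontained'
        }
    }

    List<String> getUncontained() { return uncontained; }
}
```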

This allows you to keep aggregates small by focusing on the things that truly are invariants for that aggregate - Vaughn Vernon's Effective Aggregate Design is great reading for exploring this concept.

Identifying Hidden concepts in the Model

Sometimes through analysis and 'knowledge crunching' you can identify hidden concepts in the model, that when brought to light and modelled explicitly can simplify your model and business processes. e.g. in your case, a few things that might be useful:

  • Explicitly model a ContainerPlacement:
    • This could be an entity within the storageLocation aggregate - the storageLocation may hold a collection of ContainerPlacements
    • ContainerPlacement could just hold a reference to a containerId and perhaps any properties required to enforce the invariants that the storageLocation must maintain - e.g. perhaps it holds a copy of the container volume value object to permit enforcing the invariant "don't put more containers in me than will fit in me" on the storageLocation aggregate, whilst leaving most of the other properties of the container (e.g. colour, in-service date, etc.) as the responsibility of the container aggregate.
  • What is a substance really? Can multiple containers contain the same substance? i.e. if substance is 'water' can multiple containers contain water? Is there a difference between the water in one container and the water in another?
    • Perhaps there is a difference between substance as an entity - maintaining the name of the substance and other properties of it (viscosity, density, etc.) - and substance as a value object - representing the volume or quantity of the substance within a container.
    • This would simplify the model, as then the container would just have a value object - perhaps called ContainedSubstance - on it, defining a substanceId and a volume. If containers can have multiple substances in them, you could model it as a collection of such value objects.
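That split between substance-as-entity and substance-as-value could look roughly like this (names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical value object: which substance a container holds, and how
// much of it. Identity-free and compared by value.
record ContainedSubstance(String substanceId, double volumeLitres) {}

class Container {
    private final String containerId;
    private final List<ContainedSubstance> contents = new ArrayList<>();

    Container(String containerId) { this.containerId = containerId; }

    void add(ContainedSubstance cs) { contents.add(cs); }

    List<ContainedSubstance> getContents() { return List.copyOf(contents); }
}
```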

Separating Query Operations

Some of your requirements are really query requirements - the domain model does not exist to satisfy query requirements; it exists only to enforce invariants under changes.

You might find that even with the association modelling revealed by the above questions you can satisfy your queries with a relational database persistence of your domain model - but if not, you can also look into maintaining a separate read model to facilitate the queries whilst leaving your domain models purpose built to maintain their invariants.

I'm trying to design the authentication of my web application in an object oriented manner. Is this a concern of my domain in which case I would have something like this:


Where $authenticator is an interface to my authentication service.

Or would this be a cross cutting concern and I would do it the other way around.


The first way seems more "OO" to me, since I don't have to ask anything from my user object; the user passes the information the authenticator needs. But it feels like I'm "polluting" my domain in a certain respect... logging in is not a business requirement of my domain; it is a side effect of the fact that I need a method of authentication to protect my application.

Unless your Domain includes authentication as a central concept, I would say that it's a cross-cutting concern and not part of the Domain Model.

Most developers write business applications that model something entirely different than software security. Authentication is a very important part of many applications, but really has nothing to do with the Domain itself.

That doesn't mean that you can't deal with authentication in an object-oriented way.

In Domain-Driven Design terminology, the business concept you model is part of your Core Domain while you could choose to implement authentication and other security concepts in a Generic Subdomain.

I can't help with the PHP-specific things, but in .NET, security is pretty much something that's just handled by the platform if you do it right. There it's a truly cross-cutting concern by implementation, so that's how it's done elsewhere (FWIW).
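One object-oriented way to keep authentication out of the Core Domain is to wrap a domain service in a decorator that lives in the Generic Subdomain; a rough Java sketch (all names are hypothetical):

```java
// A Core Domain service: knows nothing about authentication.
interface OrderService {
    String placeOrder(String item);
}

class PlainOrderService implements OrderService {
    public String placeOrder(String item) { return "placed " + item; }
}

// The cross-cutting concern lives in a wrapper, so the domain service
// itself never mentions security.
class AuthenticatingOrderService implements OrderService {
    private final OrderService inner;
    private final boolean authenticated;

    AuthenticatingOrderService(OrderService inner, boolean authenticated) {
        this.inner = inner;
        this.authenticated = authenticated;
    }

    public String placeOrder(String item) {
        if (!authenticated) throw new SecurityException("not authenticated");
        return inner.placeOrder(item);
    }
}
```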

I'm trying to understand which objects should be injected into an object and which should be created internally.

  1. If I have some List<int> (as a data field) which holds information gathered at run time, it seems that I should initialize it in the constructor instead of injecting it.

But what about a hardware class which communicates through a COM port?

Do I let the HW class initialize the SerialPort, or do I inject it?

  2. If the above-mentioned SerialPort needs to be injected, what is the best way to do it?

Do I create it manually:

SerialPort port = new SerialPort(name, baud ...);

HWClass hwClass = container.Resolve<IHWClass>("HWClass", new InjectionConstructor(port));

or using the Unity container:

SerialPort port = container.Resolve<SerialPort>(...);

HWClass hwClass = container.Resolve<IHWClass>("HWClass", new InjectionConstructor(port));

Or should I initialize it inside the HWClass constructor?


Domain-Driven Design distinguishes between Services and other Domain objects (Entities and Value Objects). Even if you don't otherwise subscribe to DDD, this distinction is very useful.

Services are typically long-lived, stateless objects that perform operations for their consumers. They are the typical dependencies that you can benefit greatly from injecting.

In your case, both SerialPort and IHWClass sound very much like Services because they represent external resources, so I would definitely inject them via Constructor Injection.

However, you only really gain the benefit of loose coupling if you inject an abstraction. The IHWClass looks fine because it's an interface, but the SerialPort looks like a concrete class, so you don't gain much from injecting it. Extracting an interface from SerialPort (say, ISerialPort) and injecting that instead would be better.
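The same idea in Java terms - depend on an extracted abstraction so the hardware class can be tested against a fake (all names here are hypothetical):

```java
// Hypothetical abstraction extracted over a concrete serial port class.
interface SerialPort {
    void write(String data);
}

// A fake implementation makes HwClass testable without real hardware.
class FakeSerialPort implements SerialPort {
    final StringBuilder sent = new StringBuilder();
    public void write(String data) { sent.append(data); }
}

class HwClass {
    private final SerialPort port;     // constructor-injected dependency

    HwClass(SerialPort port) { this.port = port; }

    void reset() { port.write("RESET"); }
}
```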

In an ORM (no preference), how to best represent a many-to-many relationship between two models, when we need to hold information about this relationship?

I have an Order, which can have many Products.
A Product can belong to many Orders.

An Order can have several pieces of information attached to each Product: quantity, special request, ...

In database design, this is represented by a junction table, which holds the quantity and specialRequest fields.

But how to correctly represent this in an ORM, where I would just like to use order.products and get a collection of Products? As I plan to use an Identity Map, there must be only one instance of a same Product across all orders, which prevents me from having some kind of "modified" Product which would contain the extra information.

Any ideas?

According to Eric Evans's book, many-to-many associations often introduce a lot of complexity into the domain, and the recommended approach is to decrease the number of such associations in your model. In your particular case Order is an aggregate root, which contains a bunch of OrderItems that hold the quantity and specialRequest fields.
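A minimal sketch of that shape (field names are assumptions):

```java
import java.util.ArrayList;
import java.util.List;

// OrderItem lives inside the Order aggregate and carries the data that
// would otherwise sit on a junction table.
class OrderItem {
    final String productId;      // reference to Product by id only
    final int quantity;
    final String specialRequest;

    OrderItem(String productId, int quantity, String specialRequest) {
        this.productId = productId;
        this.quantity = quantity;
        this.specialRequest = specialRequest;
    }
}

class Order {                    // aggregate root
    private final List<OrderItem> items = new ArrayList<>();

    void addItem(String productId, int quantity, String specialRequest) {
        items.add(new OrderItem(productId, quantity, specialRequest));
    }

    List<OrderItem> getItems() { return List.copyOf(items); }
}
```

The Product itself stays a separate aggregate; the Order only holds its id, so the Identity Map concern from the question goes away.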

In Domain Driven Design literature it is often said that domain services should be stateless.

I believe the reason for this is because service calls should represent single units of work. There shouldn't be any service state which multiple service methods would use.

I break this rule in my service architecture so that I can constructor inject all the relevant repositories required in the service. Example:

public class UserService : IUserService
{
    public IUnitOfWork UnitOfWork { get; set; }

    public IUserRepository UserRepository { get; set; }

    public ICustomerRepository CustomerRepository { get; set; }

    public UserService(IUnitOfWork unitOfWork, IUserRepository userRepository, ICustomerRepository customerRepository)
    {
        UnitOfWork = unitOfWork;
        UserRepository = userRepository;
        CustomerRepository = customerRepository;
    }

    public User RegisterNewUser(...)
    {
        // Perform relevant domain logic
    }

    // ...
}

In order for me to use constructor injection on the UserService, I would need to have state (properties) so that the service methods have access to the relevant repositories and such.

Although I hope to design the individual service methods as isolated units of work, I cannot necessarily prevent that from happening.

How could I architecture domain services so that they are stateless? Is this even necessary?


Eric Evans in Domain-driven Design: Tackling Complexity in the Heart of Software:

When a significant process or transformation in the domain is not a natural responsibility of an ENTITY or VALUE OBJECT, add an operation to the model as a standalone interface declared as a SERVICE. Define the interface in terms of the language of the model and make sure the operation name is part of the UBIQUITOUS LANGUAGE. Make the SERVICE stateless.

Vaughn Vernon also recommends stateless services in his book Implementing Domain Driven Design.

One way to get close to your goal is to inject an IoC container into your service class and then have your property getters resolve an instance of the necessary class. Your new class would look something like this:

public class UserService : IUserService
{
    private IUnityContainer container { get; set; }

    private IUnitOfWork UnitOfWork
    {
        get { return container.Resolve<IUnitOfWork>(); }
    }

    public UserService(IUnityContainer container)
    {
        this.container = container;
    }

    public User RegisterNewUser(User user)
    {
        // Domain logic
    }
}

Your service class now has a dependency on an IoC container, which is not a good thing, but if you are trying to get closer to a stateless service, this would do it.

I think I am very close to assembling an MVC repository correctly but just falling apart at the fringe. I created an MVC project with a repository and am returning data successfully, but not accurately as it pertains to DDD. Please tell me where I am incorrect in terms of strict DDD assembly. I guess if the topics are too wide, a book suggestion would be fine. I hope that I am specific enough in my question.

This was one question but I separated them for clarity: Do you create a single namespace for all repository classes called MyStore.Models? Create a repository class for each entity like Product within the Models namespace? Do you put the Pocos in their own classes in the Models namespace but not part of the Repository class itself?

I am currently using Pocos to cut out entities from Linq statements, returning groups of them within IQueryable wrappers, like so. I guess here you would somehow remove the IQueryable and replace it with some type of lazy load? How do you lazy load without being dependent on the original Linq to Sql?

public IQueryable<Product> GetProducts(...) {
    return (from p in db.Products
            where ...
            select new myProductPoco {  // Cut out a Poco from Linq
                ID = p.ID,
                Name = p.Name,
                // ...
            });
}
Then reference these in MVC views within the inherit page directive:


However, the nested generics look wrong. I assume this requires a refactor. Where do you define view model classes that contain references to entities? Within the controller class (as a nested class)?

As book suggestions, try Eric Evans's Domain-Driven Design, and maybe Martin Fowler's Refactoring.

So suppose my sandwich shop has two service interfaces, IMeatGetter and IVeggiesGetter. Now I have a domain object, Sandwich, with a property IsVegetarian. I want a method to select all the possible toppings, like

void GetToppings()
  if IsVegetarian
    select (veggies)
  else
    select (veggies union meats)

Is it correct domain design to pass in the services to the sandwich constructor? Or would you, at some higher level, load the meats and veggies first, and pass them all to the sandwich?

What if IMeatGetter is a very slow operation? Would that change your answer?

There's a school of thought associated with Domain-Driven Design that holds that Domain Objects should be POCOs/POJOs, so according to that philosophy the Sandwich class should be independent of the IMeatGetter and IVeggiesGetter services. I personally find that this approach works well despite certain drawbacks.

If the total lists of veggies and meats make sense in the context of a Sandwich (it sounds more like a potential sandwich to me, though), I would pass them in as part of the constructor. Here's a C# example:

public class Sandwich
{
    private readonly IEnumerable<Ingredient> veggies;
    private readonly IEnumerable<Ingredient> meats;

    public Sandwich(IEnumerable<Ingredient> veggies, IEnumerable<Ingredient> meats)
    {
        if (veggies == null)
            throw new ArgumentNullException("veggies");
        if (meats == null)
            throw new ArgumentNullException("meats");

        this.veggies = veggies;
        this.meats = meats;
    }

    // implement GetToppings as described in OP
}

If IMeatGetter is used to retrieve the list of meats and that is a very slow operation, then no, that would not change my answer. In this way we have decoupled the Sandwich class from the retrieval logic of the meats.

This allows us to attempt to manage the lifetime of the meats list elsewhere. We might, for example, write an implementation of IMeatGetter that caches the list of meats in memory.

Even if that's not possible, we can still change the list itself to do a lazy evaluation. Although I don't know which platform you are using, in .NET, the IEnumerable<T> interface allows deferred execution; in other words, it doesn't retrieve the list until you actually begin the enumeration.

If you are working on a different platform, it should still be trivial to introduce a custom interface that enables lazy loading of the meats.
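For example, on the JVM such a lazy list can be as small as a wrapper around a `Supplier` that is invoked at most once (a sketch, not tied to any framework):

```java
import java.util.List;
import java.util.function.Supplier;

// Hypothetical lazy wrapper: the expensive fetch runs at most once,
// and only when the list is actually asked for.
class LazyIngredients {
    private final Supplier<List<String>> fetch;
    private List<String> cached;

    LazyIngredients(Supplier<List<String>> fetch) { this.fetch = fetch; }

    List<String> get() {
        if (cached == null) cached = fetch.get();  // deferred execution
        return cached;
    }
}
```

A Sandwich-like class could hold a `LazyIngredients` for the meats without ever knowing whether the slow retrieval has happened yet.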

In summary, you pass a reference to a list into the Sandwich class. That list may or may not be loaded into memory at that time, but that's up to you to control - independently of the Sandwich class. Thus, the Sandwich class conforms to the Single Responsibility Principle because it doesn't have to deal with managing the lifetime of meats.

So we may apply the domain driven design for multiple projects but there could be intersection of the same piece of domain model.

In this case, how to apply the domain driven design (use ORM, model first, generating database schema)? Create multiple databases with a lot of same tables? Or how to share data? Use synonyms? What is the possible strategy to resolve the sharing model (including data)?

Any suggestion is welcome. Thanks in advance!

You might want to (re-)read the strategic design patterns in the blue book.

The purpose of the PIMPL idiom is to hide implementation, including methods, structures, and even sizes of structures. One downside is it uses the heap.

However, what if I didn't want to hide the size requirements of anything. I just wanted to hide methods, the formatting of the structure and the variable names. One way would be to allocate an array of bytes of the perfect size, have the implementation constantly cast that to whatever structure and use that. But manually find the size of the bytes to allocate for the object? And do casts all the time? Obviously not practical.

Is there an idiom or general way of handling this case that is advantageous compared to PIMPL or opaque pointers?

A rather different approach could be to rethink the nature of what your objects really represent. In traditional OOP it's customary to think of all objects as self-contained entities that have their own data and methods. Some of those methods will be private to the class because they're just required for that class's own housekeeping, and so these are the kind of thing you usually move into the 'impl' of a Pimpl class.

In a recent project I've been favouring the Domain-Driven Design approach where one of the desirables is to separate the data from the logic that does things with it. The data classes then become little more than structs, and the complex logic that previously was hidden in the Pimpl now can go in a Service object that has no state of its own.

Consider a (rather contrived) example of a game loop:

class EnemySoldier : public GameObject
{
public:
    // just implement the basic GameObject interface
    void        updateState();
    void        draw(Surface&);

private:
    std::unique_ptr<EnemySoldierImpl>  m_Pimpl;
};

class EnemySoldierImpl
{
    // 100 methods of complex AI logic
    // that you don't want exposed to clients

    StateData       m_StateData;
};

void runGame()
{
    for (auto gameObject : allGameObjects) {
        gameObject->updateState();
        // ...
    }
}
This could be restructured so that instead of the GameObjects managing their data and their program logic, we separate these two things out:

class EnemySoldierData
{
    // some getters may be allowed; all other data only
    // modifiable by the Service class. No program logic in this class
    friend class EnemySoldierAIService;
    StateData       m_StateData;
};

class EnemySoldierAIService
{
public:
    EnemySoldierAIService() {}

    void updateState(Game& game) {
        for (auto& enemySoldierData : game.getAllEnemySoldierData()) {
            updateStateForSoldier(game, enemySoldierData);
        }
    }

private:
    // 100 methods of AI logic are now here

    // no state variables
};
We now don't have any need for Pimpls or any hacky tricks with memory allocation. We can also use the game programming technique of getting better cache performance and reduced memory fragmentation by storing the global state in several flat vectors rather than needing an array of pointers-to-base-classes, eg:

class Game
{
    std::vector<EnemySoldierData> m_SoldierData;
    std::vector<MissileData>      m_MissileData;
};

I find that this general approach really simplifies a lot of program code:

  • There's less need for Pimpls
  • The program logic is all in one place
  • It's much easier to retain backwards compatibility or drop in alternate implementations by choosing between the V1 and V2 version of the Service class at runtime
  • Much less heap allocation

Some of this is mentioned here and there, but not directly, so I hope this question is OK.

What are aggregates and how are they used in CQRS (Command-Query-Responsibility-Segregation) and ES (Event-Sourcing)? I'm new to this kind of architecture, and I'd be really happy if someone could please explain this to me. Thanks!

First I'd like to quote Martin Fowler's blog post on CQRS and note that Aggregates are related to Domain-Driven Design rather than to CQRS.

CQRS naturally fits with some other architectural patterns.

  • As we move away from a single representation that we interact with via CRUD, we can easily move to a task-based UI.
  • Interacting with the command-model naturally falls into commands or events, which meshes well with Event Sourcing.
  • Having separate models raises questions about how hard to keep those models consistent, which raises the likelihood of using eventual consistency.
  • For many domains, much of the logic is needed when you're updating, so it may make sense to use EagerReadDerivation to simplify your query-side models.
  • CQRS is suited to complex domains, the kind that also benefit from Domain-Driven Design.

In terms of Domain-Driven Design, an Aggregate is a logical group of Entities and Value Objects that is treated as a single unit (OOP, composition). The Aggregate Root is the single Entity that all the others are bound to.
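As a small illustration (names invented for the example), an aggregate root that guards an invariant across its inner parts:

```java
import java.util.ArrayList;
import java.util.List;

// The aggregate root is the only entry point; outside code never mutates
// the inner line items directly, so the invariant always holds.
class PurchaseOrder {                        // aggregate root
    private final List<Integer> lineAmounts = new ArrayList<>();
    private final int creditLimit;

    PurchaseOrder(int creditLimit) { this.creditLimit = creditLimit; }

    void addLine(int amount) {
        // invariant enforced at the root, across the whole aggregate
        if (total() + amount > creditLimit)
            throw new IllegalStateException("credit limit exceeded");
        lineAmounts.add(amount);
    }

    int total() {
        return lineAmounts.stream().mapToInt(Integer::intValue).sum();
    }
}
```

In CQRS/ES terms, commands are handled by loading such an aggregate (or replaying its events), invoking a method like `addLine`, and persisting the resulting events.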

What are the different architectures for developing professional and organized Java web applications? I have heard about the MVC architecture; what other architectures do sites like Stack Overflow, Google, Orkut, etc. use that are scalable, robust, and easily maintainable from the developer's point of view?

(I wonder why people name frameworks to a person asking about architecture.)

1) I would recommend starting by thinking about a layered architecture. Each layer should know about (depend on) only the single nearest layer.


where Presentation (UI) and Persistence (database) depend on the Domain.

Describe the architecture you use for Java web applications?

2) Take some ideas from DDD (Domain-Driven Design). Reading Eric Evans's book is recommended.

Within a research project that I'm currently working on, we're trying to define the notion of application model, and we're investigating methodologies/formalisms to represent application models, with a focus on Web applications.

After having done some research on the Web, I haven't found specific information on comprehensive application models. So I thought that it was worth asking the question to enthusiast programmers, who can give me a concrete and practical perspective on this topic. I'm not sure if this question fits better on programmers stackexchange: if so, feel free to migrate it. What I'm interested in is getting feedbacks/ideas on my notion of application model, and on possibly related methodologies/formalisms.

I currently have no precise definition of application model, but I think that at least three aspects are important to define this notion:

  • human-computer interaction design choices: an application that interacts with users should carefully define its interaction patterns to improve and simplify usability; this area should take into account user preferences and characteristics (user models), and (possibly) device characteristics (device models);

  • architecture design choices: any complex application should be based on an architectural model that is shared and understood by its designers and developers;

  • implementation design choices: when implementing an application it is good practice to identify known and recurring problems, and to solve them by reusing appropriate design solutions.

Am I missing any important aspect?

I think the following is a non-exhaustive list of relevant methodologies/formalisms:

  • ConcurTaskTrees: useful for design of interactive applications, and to model their human-computer interaction;

  • UML: widely known and used modelling language for software design; it can address various aspects of architecture and implementation design;

  • Design Patterns: a set of known and reusable solutions for software design; they are often used during the implementation phase.

Any other suggestion?

To summarize: I'm interested in what are the relevant aspects to define the model of an application (see the first list above), and what are the useful formalisms in this area (see the second list above).

If you are looking for best practices on how to actually model an application, I'd highly suggest looking into "Domain Driven Design" (a.k.a. "DDD").

DDD is basically a set of best practices flowing from the idea of "talking the same language" between domain experts (those who know the problem area) and the developers, and of actually modelling the problem domain itself (typically using UML), rather than thinking in terms of modelling an application. Experience shows that this will typically give you the best model for an application too, since it represents the realities of the problem domain, as complex as they may be, and this is typically what the application needs to deal with anyway.

The main source for DDD is Eric Evans's book of the same name. You should also not miss Mr Evans's two talks "Putting the model to work" and "Strategic design" on InfoQ. The Wikipedia article has some links as well (it doesn't seem to be the best introduction to the subject, though).

Possible Duplicate:
How to build multi oop functions in PHP5


I've seen this kind of code in a couple of forum systems but I can't find any examples like this:


You can see a similar example in PDO:


I don't know how this type of coding is called in PHP and thus I can't get on looking for any tutorials and examples.

You just make sure a chainable method returns an object reference, and then you can chain another method call onto the result.

You can return $this as @Tim Cooper shows, or you can return a reference to another different object:

class Hand
{
  protected $numFingers = 5;
  public function countFingers() { return $this->numFingers; }
}

class Arm
{
  protected $hand;
  public function getHand() { return $this->hand; }
}

$n = $body->getLeftArm()    // returns object of type Arm
          ->getHand()       // returns object of type Hand
          ->countFingers(); // returns integer

The PDO example you show uses two different object types. PDO::query() instantiates and returns a PDOStatement object, which in turn has a fetch() method.

This technique can also be used for a fluent interface, particularly when implementing an interface for a domain-specific language. Not all method chains are fluent interfaces, though.
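The return-`this` style translates directly to other languages; here is a small Java sketch of a fluent interface (a toy example, not a real query library):

```java
// Minimal fluent-interface sketch: each mutator returns this, so calls
// chain into a readable, sentence-like expression.
class QueryBuilder {
    private final StringBuilder sql = new StringBuilder("SELECT *");

    QueryBuilder from(String table) { sql.append(" FROM ").append(table); return this; }
    QueryBuilder where(String cond) { sql.append(" WHERE ").append(cond); return this; }

    String build() { return sql.toString(); }
}
```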

See what Martin Fowler wrote about fluent interfaces in 2005. He cites Eric Evans of Domain-Driven Design fame as having come up with the idea.

What is model ?

I am an ASP.Net WebForms developer. I have been studying MVC for couple of days.

I can understand the concept of Controller and View but What is Model ?

Is it data? Does it have to be with LINQ, or can we use traditional stored procedures?

Is it Data ?

Generally it's data and behavior. When you go about designing your model, I'd suggest forgetting about the "database" altogether. The Domain-Driven Design book has an excellent example of this.

Does it have to be with LINQ or we can use traditional stored procedures ?

Try to create your model to be persistence ignorant. Lots of answers mention a database, but ideally the model shouldn't have any knowledge of how it's persisted. And in some cases some, though not all, parts of the model will be persisted. Look at the kigg source. It shows a clear separation between the model and the persistence, implementing EF and LINQ to SQL as two different options.
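Persistence ignorance in practice often just means the model type contains no mapping or database code, and persistence hides behind an interface the domain owns; a sketch in Java (names hypothetical, not taken from kigg):

```java
// The model knows nothing about storage.
class Story {
    final String title;
    int votes;

    Story(String title) { this.title = title; }

    void upvote() { votes++; }          // behavior lives with the data
}

// The domain defines the port; an EF-, LINQ-, or SQL-backed equivalent
// would be an interchangeable implementation living outside the model.
interface StoryRepository {
    void save(Story story);
}

class InMemoryStoryRepository implements StoryRepository {
    final java.util.Map<String, Story> byTitle = new java.util.HashMap<>();
    public void save(Story story) { byTitle.put(story.title, story); }
}
```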

IMO, The ASP.NET MVC in Action books do a great job of laying out the different parts of MVC and discussing the different "models", ViewModel vs Entity Model, etc.

edit: Maybe the question should be, In (Domain driven design || Data driven design), what is the model?

Do you have good literature recommendation for Design Patterns, especially for Data Access layers.

I'm searching for patterns like Repository and Unit of Work. I need those for implementing WebServices, MVC web applications and MVVM desktop applications.

If you are searching specifically for the Repository and Unit of Work patterns, then I suggest that you don't read full books, because they discuss them in a generic way and you will get overwhelmed; instead, look for specific implementations of those patterns in the technology area you are working in.
With that being said, the two authors who stand behind the Repository and Unit of Work patterns are Martin Fowler and Eric Evans, with their books Patterns of Enterprise Application Architecture and Domain-Driven Design: Tackling Complexity in the Heart of Software respectively; the latter book is followed by a great book called Applying Domain-Driven Design and Patterns with Examples in C# and .NET.
Regarding design patterns in general, the authoritative reference is the GoF (Gang of Four) book Design Patterns: Elements of Reusable Object-Oriented Software, and a very good book that covers the most common patterns in an entertaining manner is Head First Design Patterns.

From DDD: Tackling Complexity in the Heart of Software (pg. 159/160):

When the database schema is being created specifically as a store for the objects, it is worth accepting some model limitations in order to keep the mapping very simple


This does entail some sacrifice in the richness of the object model, and sometimes compromises have to be made in the database design (such as selective denormalization), but to do otherwise is to risk losing the tight coupling of model and implementation.


But it is crucial that the mappings be transparent, easily understandable by inspecting the code or reading entries in the mapping tool.


When database is being viewed as an object store, don't let the data model and the object model diverge far, regardless of the powers of the mapping tools. Sacrifice some richness of object relationships to keep close to the relational model.

I do understand that with simpler mappings the Data Mappers will be easier to maintain, less buggy, etc., but I don't understand why we would also risk losing the tight coupling between a Domain Model and its implementation by making the mappings between the DM and the Data Model complex.

Namely, when creating a DM, we should try to be oblivious of how the non-domain layers will be implemented and what technologies they will use. And since Data Mappers reside within the DAL (thus outside the Domain layer), how then could the complexity of a mapping between the DM and the Data Model (and thus the complexity of the Data Mappers) have any impact on the coupling between the DM and the DM's implementation?

Thank you

The quote is actually:

but to do otherwise is to risk losing the tight coupling of model and implementation

It is talking about the coupling of the conceptual model and its implementation (in code). This is not a discussion about data mappers and mapping data to the implemented model, but about how you can lose fidelity to the conceptual model when implementing it - in particular when you need to consider a database or other implementation details.

From DDD: Tackling Complexity in the Heart of Software ( pg. 177 ):

The need to update Delivery History when adding a Handling Event gets the Cargo AGGREGATE involved in the transaction.

a) Further down the page the author does propose an alternative solution, but still - isn't the author in the above excerpt essentially proposing that we implement the association by having the DeliveryHistory.Events property query the database (via a repository) each time the property is accessed?

b) Since the implementation "proposed" by the author is almost identical to how lazy loading is implemented (with the exception that lazy loading only queries for data the first time we need it and then caches it), I'll also ask the following:

Many are against lazy loading in general, but regardless, I assume that we should never use lazy loading if the related entities reside within the same aggregate, since such an association is expressed with an object reference, which is what we use when we require transactional integrity?

The reason being that this integrity may be compromised if the related data is never accessed (and as such is never retrieved), since invariants can't be enforced properly when the aggregate is modified?



The DeliveryHistory.Events collection can be loaded when the DeliveryHistory entity is loaded by the repository. It can also be loaded via lazy loading in which case an ORM injects a collection proxy which when iterated calls the database.

But isn't author proposing a third option, which is to query for events each time DeliveryHistory.Events is accessed ( or perhaps each time DeliveryHistory.GetEvents() is called )?


It is similar to lazy loading however the important difference is that resorting to a repository query allows the omission of the Events property in the object model. This reduces the "footprint" of the DeliveryHistory entity.

I - I'm assuming that by "similar to lazy loading" you're referring to a design where events are retrieved from the db each time they are requested?

II - Anyway, if we omit the DeliveryHistory.Events property (and presumably don't define a DeliveryHistory.GetEvents() as an alternative), how then do we implement the design proposed by the author (as noted in my original post, I'm aware that further down the page the author did propose a better alternative)?

Thank you

a) The DeliveryHistory.Events collection can be loaded when the DeliveryHistory entity is loaded by the repository. It can also be loaded via lazy loading in which case an ORM injects a collection proxy which when iterated calls the database.

b) It is similar to lazy loading; however, the important difference is that resorting to a repository query allows the Events property to be omitted from the object model. This reduces the "footprint" of the DeliveryHistory entity.

The problem with lazy loading is not that the data may never be accessed; it is that accessing a lazy-loaded property for the first time results in a database call, and you have to make sure that the connection is still alive. In a sense, this can compromise the integrity of the aggregate, which should be considered as a whole.


a) Either way the net result is the same. I'm not sure whether creating a proxy collection was a technique in use when the book was written (2003).

b1) Yes, they are similar in that the events aren't loaded together with the DeliveryHistory entity, but only on demand.

b2) Instead of an events property on the DeliveryHistory entity, the events would be accessed by calling a repository. The repository itself would be called by the surrounding application service, which would retrieve the events and pass them to the places that need them. Or, if the use case is adding events, the application service would call the repository to persist the event.
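To make option b2) concrete, here is a minimal Java sketch (all names here, such as HandlingEventRepository and DeliveryHistoryService, are hypothetical stand-ins, not code from the book): the entity carries no Events collection, and the surrounding application service queries a repository when a use case needs the events.

```java
import java.util.ArrayList;
import java.util.List;

// Entity kept small: no Events collection, just identity.
final class DeliveryHistory {
    private final String cargoId;
    DeliveryHistory(String cargoId) { this.cargoId = cargoId; }
    String cargoId() { return cargoId; }
}

final class HandlingEvent {
    private final String cargoId;
    private final String description;
    HandlingEvent(String cargoId, String description) {
        this.cargoId = cargoId;
        this.description = description;
    }
    String cargoId() { return cargoId; }
    String description() { return description; }
}

// The repository is queried instead of navigating DeliveryHistory.Events.
interface HandlingEventRepository {
    List<HandlingEvent> findByCargoId(String cargoId);
    void save(HandlingEvent event);
}

// In-memory stand-in for a database-backed implementation.
final class InMemoryHandlingEventRepository implements HandlingEventRepository {
    private final List<HandlingEvent> events = new ArrayList<>();
    public List<HandlingEvent> findByCargoId(String cargoId) {
        List<HandlingEvent> result = new ArrayList<>();
        for (HandlingEvent e : events) {
            if (e.cargoId().equals(cargoId)) result.add(e);
        }
        return result;
    }
    public void save(HandlingEvent event) { events.add(event); }
}

// The application service coordinates: load the history, then query its events.
final class DeliveryHistoryService {
    private final HandlingEventRepository repository;
    DeliveryHistoryService(HandlingEventRepository repository) { this.repository = repository; }

    List<HandlingEvent> eventsFor(DeliveryHistory history) {
        return repository.findByCargoId(history.cargoId());
    }
}
```

Because nothing in DeliveryHistory references the events, the entity's footprint stays small, and the database is only hit when the service method is actually called.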

From How are Value Objects stored in the database? :

Assume that a Company and a Person both have the same mail Address. Which of these statements do you consider valid?

   1."If I modify Company.Address, I want Person.Address to automatically get those changes"

   2."If I modify Company.Address, it must not affect Person.Address"

If 1 is true, Address should be an Entity

If 2 is true, Address should be a Value Object.

Shouldn't the mail Address in the above model be a Value Object, since even if Company and Person have the same mail address, that address still doesn't have a conceptual identity?

In other words, if Company and Person initially share an address but one of them later gets a new one, then we can argue that the mail address itself didn't change; instead, Company and Person replaced it with a new value?

Thus, to my understanding, the mere fact that an Address is shared shouldn't be enough to give it a personality (i.e. an identity)?!

Thank you

Yes, your understanding is correct. Address should almost always be a value object, since in most domains, the address is indeed just a value.

The fact that a Company and a Person have the same Address today does not mean that if one changes, the other should change too. If such a relationship exists, it should be modeled through an explicit constraint rather than by making Address an entity.

Eric Evans talks about this in his excellent book on Domain-Driven Design and even provides a specific example where Address might be an entity: the postal service, whose domain revolves around addresses and where the identity of individual addresses is important.
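A minimal Java sketch of Address as an immutable value object (the fields are hypothetical; your domain may need different ones): equality is based purely on attributes, and a "change" produces a new instance instead of mutating shared state.

```java
import java.util.Objects;

// Immutable value object: identity is defined entirely by its attributes.
final class Address {
    private final String street;
    private final String city;
    private final String zip;

    Address(String street, String city, String zip) {
        this.street = street;
        this.city = city;
        this.zip = zip;
    }

    // "Changing" an address produces a new value; the old one is untouched.
    Address withStreet(String newStreet) {
        return new Address(newStreet, city, zip);
    }

    @Override public boolean equals(Object o) {
        if (!(o instanceof Address)) return false;
        Address a = (Address) o;
        return street.equals(a.street) && city.equals(a.city) && zip.equals(a.zip);
    }

    @Override public int hashCode() {
        return Objects.hash(street, city, zip);
    }
}
```

With this design, a Person "moving" replaces its Address value; a Company holding an equal value is unaffected, which matches statement 2 above.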

How to design a WPF app according to the MVVM pattern?

You can think of the procedure in three parts.

1) Define Views

2) Define ViewModels

3) Define Models

Interesting questions are:

1) Do you find yourself always following some order with these? For example do you first create the View and then do the others?

2) Given a View how do you create a corresponding ViewModel?

3) Also, given a View and ViewModel how do you approach the creation of the Model?

4) When using the MVVM pattern, do you start with some template project or you do it all from scratch?


  1. What you create first is just a matter of your preference and coding style.
  2. When you have a view, you can easily determine what data and behavior you need from the viewmodel, and therefore you can define and create the viewmodel without any problems.
  3. The Model is very often made up of your business entities, so it's really a question of how to build a domain model, which goes beyond the scope of MVVM (I can suggest reading some books about OOD or Domain-Driven Design if your domain is really complex). If you prefer a domain-centric approach, your model will be ready before the views and viewmodels. If not, and you are starting from the View for example, you still have to model your domain; it can't just arise from views and viewmodels.
  4. There are a lot of MVVM frameworks that can help you and free you from writing MVVM infrastructure code. You can also do everything from scratch, but I can't recommend that approach.

MVVM frameworks to consider (some of them are more than just MVVM frameworks):


MVVM Light


I am a self-taught "developer". I use the term loosely because I only know enough to make myself dangerous. I have no theory background, and I only pick things up to get this little tool to work or make that control do what I want.

That said, I am looking for some reading material that explains some of the theory behind application development, especially from a business standpoint. Really, I need to understand what all of these terms that float around actually refer to: business logic layer, UI abstraction level, and all that. Does anyone have a reading list that they feel helped them understand this stuff? I know how to code things up so that they work. It's not pretty, mostly because I don't know the elegant way of doing it, and it's not planned out very well (I also don't know how to plan an application).

Any help would be appreciated. I have read a number of books on what I thought was the subject, but they all seem to rehash basic coding and what-not.

This doesn't have to be specific to VB.NET or WPF (or Entity Framework) but anything with those items would be quite helpful.

In addition to some of the others (and after Code Complete), try Domain-Driven Design: Tackling Complexity in the Heart of Software.

I think most people would recommend Code Complete by Steve McConnell as the first book to read on putting some good software together.

I am having a hard time using the Repository pattern. Is it possible to create two repositories, one for products and another for orders?

I failed to connect these repositories to databases. I know how to work with one repository, but with two using IRepository<T> where T : Entity I am getting lost. The question is whether I can create ProductRepository and OrderRepository without violating the rules.

The Repository pattern is widely used in DDD (Domain-Driven Design).

With regards to your question:

Yes, you can use more than one repository. Look at this example, where I use an NHibernate session:

// CRUD operations
public abstract class Repository<T> : IRepository<T> where T : class
{
    protected readonly ISession _session;

    public Repository(ISession session)
    {
        _session = session;
    }

    public T Add(T entity)
    {
        _session.Save(entity); // persist via the NHibernate session
        return entity;
    }
}

public interface IRepository<T>
{
    T Add(T entity);
    T Update(T entity);
    T SaveOrUpdate(T entity);
    bool Delete(T entity);
}

Then my repository looks like this:

public class ProjectRepository : Repository<Project>, IProjectRepository
{
    // Project-specific operations
}

public interface IProjectRepository : IRepository<Project>
{
    Project Add(Project entity);
    Project Update(Project entity);
    Project find_by_id(int id);
    Project find_by_id_and_user(int id, int user_id);
}

Then using Ninject:


protected void Application_Start()
{
    ControllerBuilder.Current.SetControllerFactory(new NinjectControllerFactory());
}

Then in NinjectControllerFactory I load the modules:

public class NinjectControllerFactory : DefaultControllerFactory
{
    private IKernel kernel = new StandardKernel(new NhibernateModule(), new RepositoryModule(), new DomainServiceModule());

    protected override IController GetControllerInstance(RequestContext context, Type controllerType)
    {
        if (controllerType == null)
            return null;
        return (IController)kernel.Get(controllerType);
    }
}

public class NhibernateModule : NinjectModule
{
    public override void Load()
    {
        string connectionString = /* ... */;

        var helper = new NHibernateHelper(connectionString);

        Bind<ISession>().ToMethod(context => context.Kernel.Get<ISessionFactory>().OpenSession()).InRequestScope();
    }
}


Then in RepositoryModule I use Ninject Conventions to automatically bind all repositories with their interfaces:

using Ninject.Extensions.Conventions;

public class RepositoryModule : NinjectModule
{
    public override void Load()
    {
        IKernel ninjectKernel = this.Kernel;

        ninjectKernel.Scan(kernel => { /* ... */ });
    }
}



And in the end I basically inject the Repository into the controller:

public class projectscontroller : basecontroller
{
    private readonly IProjectRepository _projectRepository;

    public projectscontroller(IProjectRepository projectRepository)
    {
        _projectRepository = projectRepository;
    }

    public ActionResult my()
    {
        int user_id = (User as CustomPrincipal).user_id;
        var projectList = _projectRepository.find_by_user_order_by_date(user_id);
        var projectsModel = new ProjectListViewModel(projectList);
        return View("my", projectsModel);
    }
}

This way you just create a new Repository and its Interface, and it will be automatically injected into your controller.

I have a class which should function as an entity class (as in DDD). It basically looks like this:

public class Page
{
    protected String Name;
    protected String Description;
    protected Dictionary<int, IField> Fields = new Dictionary<int, IField>();

    protected Page SetName(String name)
    {
        this.Name = name;
        return this;
    }

    protected Page SetDescription(String description)
    {
        this.Description = description;
        return this;
    }

    public Page AddField(IField field)
    {
        this.Fields.Add(xxx, field); // xxx = just some id
        return this;
    }
}

Now my question is, is this a valid entity class?

I'd need to keep the method chaining, so please don't go into too much detail about that (even if you think it's wrong).

My main concern is, can an entity class contain methods, such as getters and setters? And especially a method like AddField?

The AddField method takes an object of the type IField. I store that in a dictionary inside the Page class. That's an aggregate then, right?

Doesn't that change the state of the entity, making it not a real entity class?

Or is it just fine the way it is?

My main concern is, can an entity class contain methods, such as getters and setters? And especially a method like AddField?

An entity can contain getters, setters, adders, behavior and business rules (recommended)... basically anything you want.

The AddField method takes an object of the type IField. I store that in a dictionary inside the Page class. That's an aggregate then, right?

No, it might be an aggregate root, but not necessarily. It depends on your context and how you design your aggregates.

Doesn't that change the state of the entity, making it not a real entity class?

It's the very nature of entities to change state; that's why we give them IDs, so we can track them as they change. Value objects, on the other hand, can be made immutable because their identity doesn't matter; most of the time you don't modify a value object but just create a new one.
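As a rough illustration of these points (a hypothetical Java sketch, not the question's C# code): an entity keeps a stable ID, changes state through behavior methods, and can enforce a business rule inside an adder while preserving method chaining.

```java
import java.util.LinkedHashMap;
import java.util.Map;

interface IField {
    int id();
}

// An entity: tracked by ID, mutable state, behavior carrying a business rule.
final class Page {
    private final int id;               // identity, stable over the entity's life
    private String name;
    private final Map<Integer, IField> fields = new LinkedHashMap<>();

    Page(int id) { this.id = id; }

    Page setName(String name) {
        this.name = name;
        return this;                    // method chaining preserved
    }

    // Business rule lives inside the entity: no duplicate field IDs.
    Page addField(IField field) {
        if (fields.containsKey(field.id())) {
            throw new IllegalArgumentException("Duplicate field: " + field.id());
        }
        fields.put(field.id(), field);
        return this;
    }

    int fieldCount() { return fields.size(); }
    int id() { return id; }
}
```

The state changes are exactly what makes Page an entity; the ID is what lets us say it is "the same" page before and after those changes.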

If you want to go the DDD route, I suggest you read the blue book or a summary to get a basic comprehension of the approach. DDD has its own principles and design patterns; cherry-picking a few of them doesn't always make sense when you don't adopt the whole paradigm or at least understand the intent behind it.

I have a Java class which has lots of methods, all in one file. The file is around 50K lines in total, which makes it very hard to read. Can I move some of the methods to a different file? If yes, how do I do that? And if not, is there some other technique to make the program more readable?

Define domain objects (e.g. value objects) for the various concepts in your model, and put functionality related to those classes inside them.

For instance if you have

public class MyBigClass {
    private String account;
    private boolean accountIsValid() { ... }
}

then separate the account into an Account class, like this:

public class Account {
    public Account(String accountNumber) { ... }
    public boolean isValid() { ... }
}

public class MyBigClass {
    private Account account;
}

If you keep doing that, and you make sure functionality is always located together with the value it operates on, your huge class will shrink rapidly. Try to avoid using native types (like String) for anything but the internal values of your value objects.

Also, look at libraries like commons-lang and Google Guava to make sure you are not re-implementing something complex for which a suitable solution already exists. Examples of classes that will both simplify your code and reduce the chance of implementation errors are EqualsBuilder and HashCodeBuilder.

To further improve your coding style, consider reading the following books:

I'm trying to learn effective DDD practices as I go, but had a fundamental question I wanted to get some clarity on.

I am using ASP.NET WebForms, and I am creating a situation where a user places an order. Upon order submission, the code-behind retrieves the user, builds the order from the inputs on the form, calls the User.PlaceOrder() method to add the order object to the user's order collection, and calls the repository to save the record to the database. That is fairly simple and straightforward.

Now I need to add logic to send an order confirmation email, and I'm not really sure of the proper place to put this code or where to call it. In the olden days I would simply have put that code in the code-behind and called it at the same time I was building the order, but I want to get a step closer to a solid, proper architecture, so I wanted to get some information.

Thanks for your help!

For me, I keep everything as close to the entity as possible. After a while, you will start to see that things just fit better in some places versus others. For example, business logic that can be determined based solely on a given instance of the entity should be in the entity. If it requires more knowledge of the domain, then perhaps it belongs in the domain service.

I bucket my logic into three areas, for the most part:

  • Entity Logic
  • Domain Service Logic
  • Application Service Logic

The application logic is where I would register domain events, for example. I do not think that emailing belongs in the domain, personally. It is a requirement rather than a piece of logic. If I have a listener at that point, the domain might raise an OrderSubmitted event, and the listener has the responsibility of acting on it. The event belongs in the domain, because it describes a significant occurrence in the context of the domain. How the application responds to it, however, is a different matter, in my opinion.
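A minimal Java sketch of this separation (the dispatcher and all names are hypothetical; real implementations such as Udi Dahan's domain events pattern are more robust): the domain raises OrderSubmitted and knows nothing about email, while an application-layer listener reacts to the event.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Domain event: describes something significant that happened in the domain.
final class OrderSubmitted {
    final String orderId;
    OrderSubmitted(String orderId) { this.orderId = orderId; }
}

// Minimal in-process event dispatcher (infrastructure, not domain logic).
final class DomainEvents {
    private static final List<Consumer<OrderSubmitted>> listeners = new ArrayList<>();
    static void register(Consumer<OrderSubmitted> listener) { listeners.add(listener); }
    static void raise(OrderSubmitted event) {
        for (Consumer<OrderSubmitted> l : listeners) l.accept(event);
    }
}

// The domain raises the event; it knows nothing about email.
final class User {
    private final List<String> orders = new ArrayList<>();
    void placeOrder(String orderId) {
        orders.add(orderId);
        DomainEvents.raise(new OrderSubmitted(orderId));
    }
}

// Application-layer listener: responds to the event, e.g. by sending mail.
final class OrderConfirmationMailer {
    final List<String> sent = new ArrayList<>();
    void handle(OrderSubmitted e) {
        sent.add("Confirmation for order " + e.orderId);  // stand-in for real SMTP
    }
}
```

The emailing requirement can now change (or fail, or be queued) without touching the order-placing logic in the domain.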

As mentioned by Syznmon, Udi's blog is a good resource. I strongly recommend, though, both the Evans book and the presentation he gave on lessons learned as well.

In short:

On startup of a Java/Android application, consider an array of data being initialised with hard-coded values. Then, during runtime, a class can read from that data. The exact elements of the array that it will want to access depend on user input at runtime.

In detail:

Consider a crossword application. From a main screen, you select the crossword that you want to play. A Crossword (class) creates multiple Questions (class). To create a Question, it is initialised with two strings: 'question' and 'answer'. In order for the Crossword to create all of its Questions, it needs to know which Questions belong to it. When a user selects a Crossword from a list, it gets passed an ID which will map to (perhaps) an element index of a multidimensional array, whereby each outer array represents a set of crossword questions, and each inner array represents the questions within that crossword. Similar to below:

// crossword 1
[0] => Array(
    // question 1
    [0] => Array(
        [0] => 'question'
        [1] => 'answer'
    )
    // question 2
    [1] => Array(
        [0] => 'question'
        [1] => 'answer'
    )
)
// crossword 2
[1] => Array(
    // question 1
    [0] => Array(
        [0] => 'question'
        [1] => 'answer'
    )
    // question 2
    [1] => Array(
        [0] => 'question'
        [1] => 'answer'
    )
)


OK, so the user selects Crossword 1; I now create a Crossword object and create multiple Question objects by accessing index 1 of the global array. Something like:

Q1 = new Question(globalArr[1][0][0], globalArr[1][0][1]);
Q2 = new Question(globalArr[1][1][0], globalArr[1][1][1]);

My question:

I imagine (hope) there is a more elegant OOP solution to this. Assuming that there is, I would appreciate someone sharing their knowledge. Would a Singleton design come into play here?

The main goal here is making hard-coded information about each question accessible to the entire application (or at least to a class).

The generic (and mostly dirty) way of providing hard-coded values in Java is to declare static fields or data structures in a class, which are then made available by the classloader. This is how a singleton is implemented in most cases. But since you delegate the management of creating the single instance to your classloader(s) (there can be multiple classloaders in one VM), there is no guarantee that you will always be using the exact same instance of the static data. So this is not a good recipe for passing data between multiple threads.

The right way to handle data/content in Android is probably to map your domain model (crosswords) to an implementation using the Android Content Provider API. You can also provide hard-coded values with that if you absolutely need to.

In general: model your domain with appropriate abstract types, not with arrays. Arrays are definitely not the OO way of doing it.
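To sketch what "appropriate abstract types" might look like here (hypothetical names, and only one of many possible designs): Question and Crossword become classes, and a repository-style lookup replaces indexing into a global array.

```java
import java.util.List;
import java.util.Map;

final class Question {
    private final String text;
    private final String answer;

    Question(String text, String answer) {
        this.text = text;
        this.answer = answer;
    }

    // Behavior lives next to the data instead of in array-indexing code.
    boolean isCorrect(String attempt) {
        return answer.equalsIgnoreCase(attempt);
    }

    String text() { return text; }
}

final class Crossword {
    private final int id;
    private final List<Question> questions;

    Crossword(int id, List<Question> questions) {
        this.id = id;
        this.questions = questions;
    }

    int id() { return id; }
    List<Question> questions() { return questions; }
}

// Lookup by ID replaces globalArr[1][0][0]-style indexing.
final class CrosswordRepository {
    private final Map<Integer, Crossword> byId;

    CrosswordRepository(Map<Integer, Crossword> byId) {
        this.byId = byId;
    }

    Crossword findById(int id) { return byId.get(id); }
}
```

The hard-coded data still has to live somewhere (a static map, a resource file, or a Content Provider), but the rest of the application now talks to Crossword and Question instead of raw indices.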

Read this book:

I've never asked anything this general before, and I'm not sure if it's going to sound like I'm asking for someone to do my work for me. I'm an experienced programmer but I'm somewhat new to the kinds of design patterns that get talked about on sites like this.

Synopsis of parameters:

(1) There are 3 external systems that are effectively data stores

System1 - REST
System2 - WSDL
System3 - COM Interop

(2) Data entities are meaningful in two of the systems, moving both to and from the two respective systems.

(3) The whole thing is driven by a synchronization manager app.

Synopsis of implementation:

(4) Data entities are defined by interfaces living in a separate namespace.

IFoo
IBar
IBaz

(5) Work is done by Utility classes that live in a namespace dedicated to the system.

namespace MyCompany.Integrations.System1 {
    public static class Utility {
        public static List<IFoo> GetFoos(DateTime since) {...}
        public static void SaveBazes(List<IBaz> bazes) {...}
    }
}

namespace MyCompany.Integrations.System2 {
    public static class Utility {
        public static void SaveFoos(List<IFoo> foos) {...}
        public static List<IBar> GetBars(DateTime since) {...}
    }
}

namespace MyCompany.Integrations.System3 {
    public static class Utility {
        public static void SaveFoos(List<IFoo> foos) {...}
        public static void SaveBars(List<IBar> bars) {...}
    }
}

Question: What existing patterns, if any, is this similar to, and are there any areas I might explore to help me learn how to improve my architecture? I know the Utility classes aren't OO. I haven't figured out how to lay out the classes to get it done in a simple way yet.

Addition: I thought about it more and, based on one response, I think I was not specific enough. I am hoping someone with more experience can tell me how to apply some OO patterns and get away from Utility classes.

10,000 foot answer:

You might find Domain Driven Design and Clean Code useful, as they give you a set of patterns that work well together and a set of principles for evaluating when to apply a pattern. DDD resources: the book, free quick intro, excellent walkthrough. Clean Code resources: summary, SOLID principles.

Specific answer:

You are already using the Repository pattern (your utility classes), which I'd probably use here as well. The static members can make the code difficult to test but otherwise aren't a problem. If the repositories become too complex, break the low-level API communication out into Gateways.

Since an entity is split across multiple data sources, consider modelling this explicitly. For example: Person, HumanResourcesPerson, AccountingPerson. Use names understood by the external systems and their business owners (e.g. Employee, Resource). See the Single Responsibility Principle and Ubiquitous Language for some of the reasoning. These may be full Entities or just Data Transfer Objects (DTOs), depending on how complex they are.

The synchronization might be performed by an Application Service that coordinates the Repositories and Entities.
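A rough Java sketch of that idea (names like SyncService and FooDto are hypothetical, and the original code is C#, but the shape is the same): each external system sits behind a repository-style interface, and an application service coordinates the synchronization.

```java
import java.util.List;

// DTO shared between the system-specific repositories.
final class FooDto {
    final String id;
    FooDto(String id) { this.id = id; }
}

// Each external system gets its own repository-style gateway.
interface System1Repository {
    List<FooDto> getFoosSince(long since);
}

interface System2Repository {
    void saveFoos(List<FooDto> foos);
}

// The application service coordinates the synchronization; it holds no
// transport details itself, so each gateway can be tested and swapped alone.
final class SyncService {
    private final System1Repository source;
    private final System2Repository target;

    SyncService(System1Repository source, System2Repository target) {
        this.source = source;
        this.target = target;
    }

    int sync(long since) {
        List<FooDto> foos = source.getFoosSince(since);
        target.saveFoos(foos);
        return foos.size();
    }
}
```

Because the interfaces are small, the REST, WSDL, and COM Interop details stay inside their respective implementations, and the sync manager depends only on the abstractions.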

Note - all quotes are from DDD: Tackling Complexity in the Heart of Software

First quote (page 222):

Processes as Domain Objects

Right up front let's agree that we do not want to make procedures a prominent aspect of our model. Objects are meant to encapsulate the procedures and let us think about their goals or intentions instead.

What I am talking about are processes that exist in the domain, which we have to represent in the model. When these emerge, they tend to make for awkward object designs.

The first example in this chapter described a shipping system that routed cargo. This routing process was something with business meaning. A Service is one way of expressing such a process explicitly, while still encapsulating the extremely complex algorithms.

Second quote (pages 104, 106):

Sometimes, it just isn't a thing. In some cases, the clearest and most pragmatic design includes operations that do not conceptually belong to any object. Rather than force the issue, we can follow the natural contours of the problem space and include Services explicitly in the model.

When a significant process or transformation in the domain is not a natural responsibility of an Entity or Value Object, add an operation to the model as a standalone interface declared as a Service. Define the interface in terms of the language of the model and make sure the operation name is part of the Ubiquitous language.

I can't figure out whether in the first quote the author is using the term "processes" to describe the same type of behavior (which should also be encapsulated within a Service) as in the second quote, or whether the term "processes" describes a rather different kind of behavior from the one he describes on pages 104 and 106.

Thank you

The concepts are pretty close but to me, the first quote looks more like it's about large real-world domain processes that would exist without the software (e.g. "a cargo routing process").

The second one is less clear, but I think it describes smaller operations/processes/transformations taking place in the modelled version of the domain.

While the first kind should immediately click as a "Service" right from the early analysis stages, the latter is more subtle and could take more time to be distinguished from regular entity behavior: you could have included it in an entity at first, then realize it doesn't fit that well as you refine the model.

Now I want to try to start from the objects of my model according to the dictates of DDD, but I have some difficulty understanding how to migrate my thought patterns, because I cannot map the examples I find around to my specific case.

My main concept is the activity: each activity has an identifying code, a description, a status that changes over time, and a Result for each quarter.

Users want to be able to see the history of all the states assumed by the activities, with the dates on which the changes were made. In addition, they also want to be able to create new states, change the description of existing ones, and possibly prevent the use of some of them while keeping their value for previous activities.

Each quarter, users want to be able to insert a Result that contains an Outcome and recommendations, a rating, and the date the outcome was formulated.

The ratings must be a list freely maintainable by users.

Thinking in my old way, I would create classes like this:

public class Activity
{
    public int ID;
    public string Desc;
    public IList<ActivityStatus> ActivityStatusList;
    public IList<Result> ResultList;
}

public class ActivityStatus
{
    public Activity Activity;
    public Status Status;
    public DateTime StartDate;
    public DateTime EndDate;
}

public class Status
{
    public int ID;
    public string Desc;
    public bool Valid;
}

public class Result
{
    public Activity Activity;
    public int Quarter;
    public string Outcome;
    public string Recommendations;
    public Rating Rating;
}

public class Rating
{
    public int ID;
    public string Desc;
    public bool Valid;
}

Then I would implement a data access layer mapping these classes to a new database (created from them) with NHibernate, and add repositories to give users CRUD operations on all of these objects.

According to DDD, are there better ways?

I'd recommend reading the book, or at least the Wikipedia article.

DDD is about focusing on domain logic and modelling it first, in an object-oriented way. Persistence is a technical concern; it should not be the starting point of your design and should not (usually) determine how you design your domain classes.

On page 134 of DDD: Tackling Complexity in the Heart of Software, the author improves the Purchase Order model by making the Purchase Order (PO) an Aggregate Root containing PO Line Items, while the Part entity is made the root of its own Aggregate (making Part an Aggregate Root does make sense, since parts will be shared by many POs):

An implementation consistent with this model would guarantee the invariant relating PO and its items, while changes to the price of a part would not have to immediately affect the items that reference it.


But this is not an invariant that must be enforced at all times. By making the dependency of line items on parts looser, we avoid contention and reflect the realities of the business better. At the same time, tightening the relationship of the PO and its line items guarantees that an important business rule will be followed.

The author argues that changes to part prices don't have to immediately propagate to the PO aggregates that reference them, since:

  • locking the part while a particular PO is being updated may cause contention (due to the possibility of several POs simultaneously trying to get a lock on the same part)

  • parts get modified less frequently than POs, so the chances of POs holding invalid data are relatively small

a) I understand the author's arguments, but shouldn't the consistency of the PO be a top priority in such a model, and as such shouldn't parts be locked together with the POs being updated, even at the risk of contention?

b) In contrast, on pages 176-177 the author did find it necessary to enforce invariants spanning two aggregates within a single transaction (i.e. when a Handling Event is added, the Delivery History should also be updated accordingly within the same transaction):

The Delivery History holds a collection of Handling Events relevant to its Cargo, and the new object must be added to this collection as part of the transaction. If this back-pointer were not created, the objects would be inconsistent.


The need to update Delivery History when adding a Handling Event gets the Cargo AGGREGATE involved in the transaction.

I can't figure out why maintaining consistency within a single transaction would be more important in this example than in the PO example.

Note: I'm assuming that by "back-pointer" he's referring to the Handling Event instance?

c) Is there a particular reason why the author didn't, as an alternative, propose an Optimistic Concurrency check, where the Parts table would contain a rowversion field which our code would inspect each time a PO was being updated?

d) By the way, why must the Price value be copied into the Line Item (Figure 6.11, page 134)? Can't the PO's invariant checks inspect the Part entity for the price?

Thank you

A PO, once placed, can be regarded as an immutable event. This is why the price value must be copied: future changes in price should not be reflected on existing POs; that would be a violation of business rules. This is also why the relationship between a PO and the Part can and should be loosened, and why these consistency characteristics apply to a PO but not in the other scenario.
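The price-copying rule can be sketched as follows (hypothetical Java, with prices in cents to avoid floating-point issues): the LineItem snapshots the Part's price at creation, so a later price change on the Part leaves the placed PO untouched.

```java
// Part: an aggregate of its own; its price may change over time.
final class Part {
    private final String partId;
    private int priceCents;

    Part(String partId, int priceCents) {
        this.partId = partId;
        this.priceCents = priceCents;
    }

    void changePrice(int newPriceCents) { this.priceCents = newPriceCents; }
    int priceCents() { return priceCents; }
    String partId() { return partId; }
}

// Line item: copies the price instead of reading it from Part later,
// so the placed PO is an immutable record of what was agreed.
final class LineItem {
    private final String partId;
    private final int priceAtOrderCents;  // snapshot taken at creation
    private final int quantity;

    LineItem(Part part, int quantity) {
        this.partId = part.partId();
        this.priceAtOrderCents = part.priceCents();  // the copy the book describes
        this.quantity = quantity;
    }

    int totalCents() { return priceAtOrderCents * quantity; }
}
```

Holding only the partId (rather than a live Part reference) is also what keeps the PO aggregate from needing a lock on the Part aggregate during updates.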

Optional assignment for one of my classes. 30-45 minute presentation/case study on either of these two topics:

  1. Examples of currently existing design patterns in real-life projects: what problems they solve, why they are better than other techniques, etc.
  2. New design patterns: what problems they solve that other design patterns can't, etc.

Note that "new" and "existing" are with respect to the GoF book and the design patterns listed within.

For the first one, source code is not required, but it's probably a plus, so an open source project would be best.

For the second, I'd basically need to be able to give a description like the ones in the GoF book for each pattern, with proper motivations, examples, and the like.

Does anyone have some good ideas/pointers?

Quotes are from DDD: Tackling Complexity in the Heart of Software (pg. 150).


global search access to a VALUE is often meaningless, because finding a VALUE by its properties would be equivalent to creating a new instance with those properties. There are exceptions. For example, when I am planning travel online, I sometimes save a few prospective itineraries and return later to select one to book. Those itineraries are VALUES (if there were two made up of the same flights, I would not care which was which), but they have been associated with my user name and retrieved for me intact.

a) I don't understand the author's reasoning as to why it would be more appropriate to make the Itinerary value object globally accessible instead of having clients globally search for the Customer root entity and then traverse from it to the Itinerary object.


A subset of persistent objects must be globally accessible through a search based on object attributes ... They are usually ENTITIES, sometimes VALUE OBJECTS with complex internal structure ...

b) Why is it more common for value objects with complex internal structure to be globally accessible than for simpler value objects?

c) Anyway, are there some general guidelines on how to determine whether a particular Value Object should be made globally accessible?



There is no domain reason to make an itinerary traverse-able through the customer entity. Why load the customer entity if it isn't needed for any behavior? Queries are usually best handled without complicating the behavioral domain.

I'm probably wrong about this, but isn't it common that when a user (i.e. the Customer root entity) logs in, the domain model retrieves the user's Customer aggregate?

And if users have the option to book flights, then it would also be common for them to check from time to time the itineraries they have selected or booked (though English isn't my first language, so the term "itinerary" may actually mean something a bit different from what I think it means).

And since the Customer aggregate has already been retrieved from the DB, why issue another global search for the Itinerary (which will probably hit the DB) when it was already retrieved together with the Customer aggregate?


The rule is quite simple IMO - if there is a need for it. It doesn't depend on the structure of the VO itself but on whether an instance of a particular VO is needed for a use case.

But this VO instance has to be related to some entity (i.e. an Itinerary is related to a particular Customer); otherwise, as the author pointed out, instead of searching for the VO by its properties we could simply create a new VO instance with those properties?


a) From your link:

Another method for expressing relationships is with a repository.

When the relationship is expressed via a repository, do you implement a SalesOrder.LineItems property (which I doubt, since you advise against entities calling repositories directly) which in turn calls a repository, or do you implement something like SalesOrder.MyLineItems(IOrderRepository repo)? If the latter, I assume there is no need for a SalesOrder.LineItems property?


The important thing to remember is that aggregates aren't meant to be used for displaying data.

True, the domain model doesn't care what the upper layers will do with the data, but if DTOs aren't used between the Application and UI layers, I'd assume the UI will extract the data to display from an aggregate (assuming we send the UI the whole aggregate and not just some entity residing within it)?

Thank you

a) There is no domain reason to make an itinerary traverse-able through the customer entity. Why load the customer entity if it isn't needed for any behavior? Queries are usually best handled without complicating the behavioral domain.

b) I assume that his reasoning is that complex value objects are those that you want to query since you can't easily recreate them. This issue and all query related issues can be addressed with the read-model pattern.

c) The rule is quite simple IMO - if there is a need for it. It doesn't depend on the structure of the VO itself but on whether an instance of a particular VO is needed for a use case.


a) It is unlikely that a customer aggregate would have references to the customer's itineraries. The reason is that I don't see how an itinerary would be related to behaviors that would exist in the customer aggregate. It is also unnecessary to load the customer aggregate at all if all that is needed is some data to display. However, if you do load the aggregate and it does contain reference data that you need you may as well display it. The important thing to remember is that aggregates aren't meant to be used for displaying data.

c) The relationship between customer and itinerary could be expressed by a shared ID - each itinerary would have a customerId. This would allow lookup as required. However, just because these two things are related it does not mean that you need to traverse customer to get to the related entities or value objects for viewing purposes. More generally, associations can be implemented either as direct references or via repository search. There are trade-offs either way.
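The shared-ID association described above can be sketched as follows (in Java for concreteness; ItineraryRepository, its in-memory storage, and the findByCustomerId method are hypothetical names, not from the book):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Each itinerary carries the owning customer's id; no object reference
// from Customer to Itinerary is needed.
class StoredItinerary {
    final long customerId;
    final String description;

    StoredItinerary(long customerId, String description) {
        this.customerId = customerId;
        this.description = description;
    }
}

// Hypothetical repository: the association is navigated by a search,
// not by traversing the Customer aggregate.
class ItineraryRepository {
    private final Map<Long, List<StoredItinerary>> byCustomer = new HashMap<>();

    void save(StoredItinerary itinerary) {
        byCustomer.computeIfAbsent(itinerary.customerId, k -> new ArrayList<>())
                  .add(itinerary);
    }

    List<StoredItinerary> findByCustomerId(long customerId) {
        return byCustomer.getOrDefault(customerId, List.of());
    }
}
```

The trade-off: direct references make traversal trivial but bloat the aggregate; a repository search keeps the Customer aggregate small at the cost of an extra lookup.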


a) If implemented with a repository, there is no LineItems property - no direct references. Instead, to obtain a list of line items a repository is called.

b) Or you can create a DTO-like object, a read-model, which would be returned directly from the repository. The repository can in turn execute a simple SQL query to get all required data. This allows you to get to data that isn't part of the aggregate but is related. If an aggregate does have all the data needed for a view, then use that aggregate. But as soon as you have a need for more data that doesn't concern the aggregate, switch to a read-model.
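A read-model, as described here, is just a flat, query-shaped object returned straight from the repository. A sketch (Java used for illustration; OrderSummary, OrderReadRepository, and the returned data are invented):

```java
// Flat, display-oriented read model: it combines data from the order and
// the customer, which no single aggregate would expose together.
class OrderSummary {
    final String orderNumber;
    final String customerName;
    final int lineItemCount;

    OrderSummary(String orderNumber, String customerName, int lineItemCount) {
        this.orderNumber = orderNumber;
        this.customerName = customerName;
        this.lineItemCount = lineItemCount;
    }
}

// The read side bypasses the domain model entirely; in a real application
// this method would run one SQL query joining the relevant tables.
class OrderReadRepository {
    OrderSummary findSummary(String orderNumber) {
        // stand-in for: SELECT ... FROM orders JOIN customers JOIN line_items ...
        return new OrderSummary(orderNumber, "Jane Doe", 3);
    }
}
```

The view layer consumes OrderSummary directly, and the behavioral aggregates never learn about display concerns.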

The premise of domain-driven design is the following:

  • Placing the project's primary focus on the core domain and domain logic
  • Basing complex designs on a model
  • Initiating a creative collaboration between technical and domain experts to iteratively cut ever closer to the conceptual heart of the problem.

Domain-driven design is not a technology or a methodology. DDD provides a structure of practices and terminology for making design decisions that focus and accelerate software projects dealing with complicated domains.

The term was coined by Eric Evans in his book of the same title.

We are working in a project where there will be a shared configuration which needs to be accessed by multiple parts of the solution.

The team responsible for the Config module implemented an interface that exposes only two classes: two classes responsible for getting, caching, and providing the particular values (via properties).

I feel this is a bad design; in my opinion, it would be better if the interface defined all the config values one may access, rather than exposing the actual classes that implement this behaviour.

In my opinion, for something like getting config values it would be more logical to provide an interface that shows me which values I will be able to access, rather than a class (whose implementation, e.g. the properties, is not controlled by the interface).

-edit- The Interface looks like this:

public interface IConfigurationResolver
{
    GeneralConfiguration GetGeneralConfiguration(string Id);
    SpecificConfiguration GetSpecificConfiguration(string Id);
}
It is implemented by one class. What I meant is that this interface really just exposes two concrete classes, each responsible for providing the configuration values, whereas I think the interface should not care about such details and should provide the config values itself.

These are very experienced developers, whereas I am not, so what is your stand on this?

There are quite a few things going on here…

The reference to non-abstract classes in the IConfigurationResolver interface is a code smell, violating the “program to an interface, not an implementation” principle (What does it mean to "program to an interface"?).

Your desire to explicitly reveal the configuration parameters through an interface is a good one, and is in accordance with the notion of an Intention Revealing Interface (as discussed in Eric Evans’ Domain Driven Design).

However, if you have a great many configuration values, this interface could end up with a great many methods on it. This is where knowledge of your domain comes in: the decomposition of the “universe of configurations” into a set of cohesive interfaces, each of which is used to configure a separate aspect of your application, is a skill in itself, and relates to the ‘I’ in SOLID. Lowy’s Programming .NET Components discusses the issue of contract refactoring and, as a rough guide, suggests aiming for 3-5 methods per interface.
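The decomposition into cohesive, intention-revealing interfaces might look like this (sketched in Java for testability; the interface names and configuration values are invented for illustration):

```java
// Each interface configures one aspect of the application and reveals
// exactly which values a consumer can ask for (Interface Segregation).
interface DatabaseConfiguration {
    String connectionString();
    int commandTimeoutSeconds();
}

interface CachingConfiguration {
    int cacheExpiryMinutes();
    boolean cachingEnabled();
}

// One concrete class may still implement several of them; consumers only
// depend on the narrow interface they actually need.
class AppConfiguration implements DatabaseConfiguration, CachingConfiguration {
    public String connectionString()  { return "Server=localhost;Database=app"; }
    public int commandTimeoutSeconds() { return 30; }
    public int cacheExpiryMinutes()    { return 10; }
    public boolean cachingEnabled()    { return true; }
}
```

A component that only caches takes a CachingConfiguration, never the whole resolver, so the interface itself documents which values the component reads.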

I’m guessing the desire to "refactor the configurations" is the root of the existence of the two methods on the current interface.

What is the right way to implement a RESTful controller returning JSON objects with Spring MVC and JPA?


  • Where should the conversion from JSON to entities (entity loading) occur?
  • Where should the conversion from entities to JSON occur?
  • Where should the @Transactional annotations be placed?
  • Important: the entities contain lazy-loaded collections!

A first naive design was:

@RequestMapping(value = "/{userId}", method = RequestMethod.GET)
public JsonUser doSomethingOnUser(@PathVariable("userId") Long userId) {

    // 1. load entities by ids
    User user = mUserRepository.findOne(userId);

    // 2. eventually validate

    // 3. perform changes on entities (create, update, delete)

    // 4. convert back to json
    return mUserPresenter.convertToJsonUser(user);
}

Motivation was to:

  • perform the conversion from id to entity as soon as possible
  • let the controller remain responsible for the presentation, in this case convert from entities to json

But I ran into several issues with transaction boundaries combined with lazy loading and entity relations, so it seems to be a bad design.

What are your best practices?

Try structuring it like this; it's a very common solution, the Domain-Driven Design approach described in the blue book (here is a free short version approved by the same author):


The controller is not transactional and does not have business logic, so it does not try to navigate the user object graph, which could cause LazyInitializationExceptions.

If for any reason other than business logic this is needed, then either the controller calls a service that returns an eagerly fetched object graph, or it first merges the object into a session.

This is meant to be the exception, not the rule. In general, the role of the controller is to validate the input parameters (correct types, mandatory parameters present), call the business logic, and prepare a DTO for the response if one is needed.

@RequestMapping(value = "/{userId}", method = RequestMethod.GET)
public JsonUser doSomethingOnUser(@PathVariable("userId") Long userId) {

    // all the business logic is in the service layer
    User user = mUserService.doSomething(userId);

    // conversion to DTO is handled in the controller layer,
    // the domain does not know about DTOs
    return mUserPresenter.convertToJsonUser(user);
}


The service contains the business logic written using the domain model, and defines the transaction scope.

@Service
public class MyUserService {

    @Autowired
    private MyRepository mUserRepository;

    @Transactional
    public User doSomething(Long userId) {

        // this object is attached due to @Transactional, no exceptions will be thrown
        User user = mUserRepository.findOne(userId);

        // do something with the attached object

        return user;
    }
}

The repository is usually not transactional, as it cannot know the scope of the transaction it participates in. It is responsible for retrieving data from the database and transforming it into domain objects, and for storing domain objects in the database:

@Repository
public class MyRepository {

    @PersistenceContext
    private EntityManager em;

    public void someDataRelatedMethod(/* ... */) {
        // ... use the entity manager ...
    }
}
I want to study some approaches to realizing a project, designing an application, and so on. I'm not referring to design patterns so much as to certain design styles, for example MVC. So I need some links, book names, or other suggestions to study on this topic. If you have some thoughts, please help me. Thanks.

I would start by reading up on Domain-Driven Design. Eric Evans' Tackling Complexity in the Heart of Software is a must-read on this topic. I can then recommend reading Jimmy Nilsson's Applying Domain-Driven Design and Patterns. This book has examples in .NET (C#) but you should be able to apply it to your language of choice.

Code Complete by Steve McConnell is also a good read if you want to learn how to write clean, maintainable code.

If you like the Head First books, I can also recommend reading Head First Object-Oriented Analysis & Design.

For the record, MVC is a design pattern.

I've been programming for several years now, and over that time I have learned several concepts and techniques that have made me a better programmer (i.e. OOP, MVC, regex, hashing, etc). I also feel that being able to learn several languages (BASIC, Pascal, C/C++, Lisp, Prolog, Python) has widened my horizons in a very positive way. But for some time now I feel like I'm not learning any new good "trick". Can you suggest some interesting concept/technique/trick that could get me back into the learning flow?

High-level understanding, creating good abstractions with proper dependencies, is what pays off in the long term. For example, the Law of Demeter is an important guideline. I also recommend reading Eric Evans' Domain Driven Design.

A mathematician once told me a project is possible on the condition that we have a language. Could you help me understand how we know when we do and when we don't? For example, can an automated test determine what is a "language" and what isn't? Thanks.

It is hard to understand what (s)he might have meant without any context. However, my personal (and highly speculative) association is with domain languages. Users of a specific domain have their own terminology and logic, which the analyst/programmer must understand and translate into code in order to develop a successful software product. If the users and developers speak the same ubiquitous language, the project has a good chance of succeeding. If not, then even if something gets "successfully" developed, it will not be very useful to the end users, and thus the project is in fact a failure.

The fundamental book for this is Domain Driven Design.

Brief blurb,

My skill in .net has been called "innovative" but I would prefer it be exemplary. Basically, I need a mentor. I own the domain name and I am going to live up to that name but in order to do so I need a mentor & community.

On to the questions:

  1. Entity Framework - I'd imagine this is an intense framework mapping objects from a factory while retaining the integrity and state of objects within the system. At least, that's what I can intuit from about 15 minutes of a podcast I listened to. RTFM I know but is that a correct general assessment?

  2. Enterprise Library - Killer. Used most blocks at least as example applications.

  3. Domain Driven Design - What are some tricks to going from thinking like an ERD/ORM to domain driven design? Pros vs. Cons?

  4. Agile vs. SCRUM - Is there a difference really?

  5. Unit testing - The last thing I think of. Can't get automated web ui testing setup correctly also need help with NAnt/MSBuild scripts from a VSS 2005 repository. A full example in source would be really nice, perhaps including scheduling.

  6. Bare essential TSQL - What is considered the bare minimum professional grade TSQL statements for enterprise development? Like ROWCOUNT, TRANSACTION, ROLLBACK, flow control statements, in-line sql & security concerns for CRUD methods.

  7. It is conceivable to integrate MVC2/3, Entity Framework, Enterprise Library and SilverLight web front ends? Even perhaps Sharepoint?

  8. I asked a guy I met once when should I use Linq, his reply was "always use the force".

  9. When learning a new language what games/apps do you write? What are some good exercises for those about to code? (We salute you!)

  10. What books would you recommend for general programming theory, enterprise architecture & business analysis?

Ok probably no one will respond but these are burning questions I've had in my gut and I just had to get all that out.

Geek For Life.

Ok, let's see.

1) Entity Framework - it's mostly an ORM (Object-Relational Mapper). The idea of EF is actually a little more ambitious than that; the true goal is to create the uber-be-all modelling framework for all kinds of data (that's EDM) and then provide software that implements and supports that model (EF). In practice, though, it's an ORM.

2) Umm, is there actually a question here?

3) Run, don't walk, to buy Eric Evans' Domain Driven Design. This is the book that defined the DDD vocabulary everyone uses today. Want to know what a repository really is? It's in here.

4) Hell yes. Agile is the ideas embodied in the Agile Manifesto. It's the underlying principles. SCRUM is a particular methodology (well, methodology framework) that conforms to those principles. There are lots of agile methods (Extreme Programming and Crystal Clear are two examples off the top of my head), but they all share the same underlying principles.

5) Well, it should be the first thing you think of, but you should be doing Test Driven Design, not Unit Testing. TDD is a design / development activity, Unit Testing is a test activity. Web UI testing is a pain, granted. Although your question is a little vague and looks like it's multi-part. You might want to split it out into separate, more specific questions.

6) I'm not really a SQL guy, but I've gotten pretty far with just the basics - SELECT is remarkably complicated just for starters. Although I'm of the opinion that if you need conditional logic or loops inside your TSQL sprocs, something has gone terribly, terribly wrong. You're better off really understanding the theory - the relational model, normal forms, and the various data types and how they behave.

7) Yes, for most of them. Each one plays a different part in the software stack. Assuming you're doing a RIA style client, you'd have the silverlight app running in the browser providing the UI, communicating back to a web server that's responding via an MVC site. Entlib is useful in implementing that MVC app. If you're using Silverlight, you will be using EF for data access most likely. You can also use it to hit the database inside the MVC app. Sharepoint may be a little problematic - it's also a web server thing, so you could consider it a competitor for the MVC app. But you could also use it as a data store.

8) LINQ rocks. It's a different way of thinking about certain problems around managing sequences of data. The thing I like about it is that it's very composable - you can filter, transform, and operate on data in lots of ways, and pass those things around and do more filtering / transforming along the way and it just all slots together seamlessly. Plus, the language stuff needed to implement LINQ brought in a whole TON of new power to C# / VB.NET which is really, really cool.
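The composable filter/transform pipeline style LINQ popularized has a close analogue in Java streams; purely as an illustration of the style (the method and its logic are invented):

```java
import java.util.List;
import java.util.stream.Collectors;

class PipelineDemo {
    // Filter, transform, and collect in one composable pipeline,
    // the same style LINQ's Where/Select brought to C# / VB.NET.
    static List<Integer> squaresOfEvens(List<Integer> numbers) {
        return numbers.stream()
                      .filter(n -> n % 2 == 0)   // keep the evens
                      .map(n -> n * n)           // transform each survivor
                      .collect(Collectors.toList());
    }
}
```

Each stage returns a new pipeline, so stages can be added, removed, or passed around independently without rewriting the whole query.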

9) Hello world is always useful just to make sure you've got the editor-compiler-debugger toolchain nailed down and working. After that, I tend to dive into whatever I feel like. When I first tried Silverlight I did a little game. I may do a small parser. Or just try to throw some windows on the screen. I don't have a standard new project.

10) Agreed on the recommendations for Design Patterns and POEAA. I also strongly recommend The Pragmatic Programmer by Hunt and Thomas. It's not about the theory of programming, it's about the craftsmanship of building software.
