XUnit Test Patterns

Gerard Meszaros

Improves software return on investment by teaching the reader how to refactor test code and reduce or prevent crippling test maintenance.

Mentioned in 64 questions and answers.

I am working to integrate unit testing into the development process on the team I work on, and there are some sceptics. What are some good ways to convince the sceptical developers on the team of the value of unit testing? In my specific case we would be adding unit tests as we add functionality or fix bugs. Unfortunately our code base does not lend itself to easy testing.

In short - yes. They are worth every ounce of effort... to a point. Tests are, at the end of the day, still code, and much like typical code growth, your tests will eventually need to be refactored in order to be maintainable and sustainable. There are a ton of gotchas when it comes to unit testing, but nothing, and I mean NOTHING, empowers a developer to make changes more confidently than a rich set of unit tests.

I'm working on a project right now... it's somewhat TDD, and we have the majority of our business rules encapsulated as tests... we have about 500 or so unit tests right now. This past iteration I had to revamp our datasource and how our desktop application interfaces with that datasource. Took me a couple of days; the whole time I just kept running unit tests to see what I broke and fixed it. Make a change; build and run your tests; fix what you broke. Wash, rinse, repeat as necessary. What would traditionally have taken days of QA and boatloads of stress was instead a short and enjoyable experience.

Prep up front, a little bit of extra effort, and it pays tenfold later on when you have to start reworking core features/functionality.

I bought this book - it's a Bible of xUnit testing knowledge - it's probably one of the most referenced books on my shelf, and I consult it daily.

I've heard that unit testing is "totally awesome", "really cool" and "all manner of good things", but 70% or more of my files involve database access (some read and some write) and I'm not sure how to write unit tests for them.

I'm using PHP and Python but I think it's a question that applies to most/all languages that use database access.

The book xUnit Test Patterns describes some ways to handle unit-testing code that hits a database. I agree with the other people who are saying that you don't want to do this because it's slow, but you gotta do it sometime, IMO. Mocking out the db connection to test higher-level stuff is a good idea, but check out this book for suggestions about things you can do to interact with the actual database.
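Mocking out the db connection usually just means putting a thin interface between your logic and the database. The idea is language-agnostic (the question mentions PHP and Python); here is a minimal sketch in C#, the language used elsewhere on this page, where IUserRepository, GreetingService and StubUserRepository are hypothetical names invented for illustration:

using NUnit.Framework;

// Hypothetical abstraction over the database access.
public interface IUserRepository
{
    string GetEmail(int userId);
}

// Higher-level logic that no longer knows about the real database.
public class GreetingService
{
    private readonly IUserRepository repository;

    public GreetingService(IUserRepository repository)
    {
        this.repository = repository;
    }

    public string GreetingFor(int userId)
    {
        return "Hello, " + this.repository.GetEmail(userId);
    }
}

// A hand-rolled stub stands in for the database in the unit test.
public class StubUserRepository : IUserRepository
{
    public string GetEmail(int userId) { return "user@example.com"; }
}

[TestFixture]
public class GreetingServiceTests
{
    [Test]
    public void GreetingContainsTheUsersEmail()
    {
        var sut = new GreetingService(new StubUserRepository());

        Assert.AreEqual("Hello, user@example.com", sut.GreetingFor(1));
    }
}

The real database then gets exercised by a smaller number of slower integration tests, which is where the book's Database Patterns chapter comes in.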

We have tried to introduce unit testing to our current project, but it doesn't seem to be working. The extra code seems to have become a maintenance headache: whenever our internal Framework changes, we have to go around and fix any unit tests that hang off it.

We have an abstract base class for unit testing our controllers that acts as a template calling into the child classes' abstract method implementations, i.e. the Framework calls Initialize, so our controller classes all have their own Initialize method.

I used to be an advocate of unit testing but it doesn't seem to be working on our current project.

Can anyone help identify the problem and how we can make unit tests work for us rather than against us?

Good question!

Designing good unit tests is as hard as designing the software itself. This is rarely acknowledged by developers, so the result is often hastily written unit tests that require maintenance whenever the system under test changes. So, part of the solution to your problem could be spending more time improving the design of your unit tests.

I can recommend one great book that deserves its billing as The Design Patterns of Unit Testing.

HTH

I will have the following components in my application

  • DataAccess
  • DataAccess.Test
  • Business
  • Business.Test
  • Application

I was hoping to use Castle Windsor as an IoC container to glue the layers together, but I am a bit uncertain about the design of the gluing.

My question is: who should be responsible for registering the objects into Windsor? I have a couple of ideas:

  1. Each layer can register its own objects. To test the BL, the test bench could register mock classes for the DAL.
  2. Each layer can register the object of its dependencies, e.g. the business layer registers the components of the data access layer. To test the BL, the test bench would have to unload the "real" DAL object and register the mock objects.
  3. The application (or test app) registers all objects of the dependencies.

Can someone help me with some ideas and pros/cons with the different paths? Links to example projects utilizing Castle Windsor in this way would be very helpful.

In general, all components in an application should be composed as late as possible, because that ensures maximum modularity, and that modules are as loosely coupled as possible.

In practice, this means that you should configure the container at the root of your application.

  • In a desktop app, that would be in the Main method (or very close to it)
  • In an ASP.NET (including MVC) application, that would be in Global.asax
  • In WCF, that would be in a ServiceHostFactory
  • etc.

The container is simply the engine that composes modules into a working application. In principle, you could write the code by hand (this is called Poor Man's DI), but it is just so much easier to use a DI Container like Windsor.

Such a Composition Root will ideally be the only piece of code in the application's root, making the application a so-called Humble Executable (a term from the excellent xUnit Test Patterns) that doesn't need unit testing in itself.

Your tests should not need the container at all, as your objects and modules should be composable, and you can directly supply Test Doubles to them from the unit tests. It is best if you can design all of your modules to be container-agnostic.
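For example, a test can wire up the SUT by hand - a minimal sketch with hypothetical OrderService, IOrderRepository and Order types; note that neither the production module nor the test knows anything about Windsor:

// Hypothetical types, for illustration only.
public class FakeOrderRepository : IOrderRepository
{
    public bool SaveWasCalled { get; private set; }
    public void Save(Order order) { this.SaveWasCalled = true; }
}

[Test]
public void ProcessSavesTheOrder()
{
    var repository = new FakeOrderRepository();
    var sut = new OrderService(repository); // plain constructor injection, no container

    sut.Process(new Order());

    Assert.IsTrue(repository.SaveWasCalled);
}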

Also, specifically in Windsor, you should encapsulate your component registration logic within installers (types implementing IWindsorInstaller). See the documentation for more details.
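A minimal installer might look like this (the component names are placeholders; the registration calls are Windsor's own, in the 3.x style):

using Castle.MicroKernel.Registration;
using Castle.MicroKernel.SubSystems.Configuration;
using Castle.Windsor;

public class DataAccessInstaller : IWindsorInstaller
{
    public void Install(IWindsorContainer container, IConfigurationStore store)
    {
        // Placeholder component names.
        container.Register(
            Component.For<IOrderRepository>()
                     .ImplementedBy<SqlOrderRepository>()
                     .LifestyleTransient());
    }
}

// At the Composition Root, all installers in the assembly can be applied:
// var container = new WindsorContainer();
// container.Install(FromAssembly.This());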

I am looking for podcasts or videos on how to do unit testing.

Ideally they should cover both the basics and more advanced topics.


I know you didn't ask for books, but... can I also mention that Beck's TDD book is a must-read, even though it may seem like a dated beginner book on a first flick through (and Working Effectively with Legacy Code by Michael C. Feathers of course is the bible). Also, I'd add Martin (& Martin)'s Agile Principles, Patterns, and Practices as really helping in this regard. In this space (concise/distilled info on testing) the excellent Foundations of Programming ebook also belongs. Good books on testing I've read are The Art of Unit Testing and xUnit Test Patterns. The latter is an important antidote to the first: it is much more measured, whereas Roy's book is very opinionated and offers a lot of unqualified 'facts' without properly going through the various options. I definitely recommend reading both books though. AOUT is very readable and gets you thinking, though it chooses specific [debatable] technologies; xUTP is in-depth and neutral and really helps solidify your understanding. I read Pragmatic Unit Testing in C# with NUnit afterwards. It's good and balanced, though slightly dated (it mentions RhinoMocks as a sidebar and doesn't mention Moq) - even if nothing in it is actually incorrect. An updated version would be a hands-down recommendation.

More recently I've re-read the Feathers book, which is timeless to a degree and covers important ground. However, it's more of a 'how, for 50 different wheres' book in nature. It's definitely a must-read though.

Most recently, I'm reading the excellent Growing Object-Oriented Software, Guided by Tests by Steve Freeman and Nat Pryce. I can't recommend it highly enough - it really ties everything together from big to small in terms of where TDD fits, and various levels of testing within a software architecture. While I'm throwing the kitchen sink in, Evans's DDD book is important too in terms of seeing the value of building things incrementally with maniacal refactoring in order to end up in a better place.

public class Student
{
    public string Name { get; set; }
    public int ID { get; set; }
}

...

var st1 = new Student
{
    ID = 20,
    Name = "ligaoren",
};

var st2 = new Student
{
    ID = 20,
    Name = "ligaoren",
};

Assert.AreEqual<Student>(st1, st2); // How do I compare two objects in a unit test?

How do I compare two collections in a unit test?

What you are looking for is what in xUnit Test Patterns is called Test-Specific Equality.

While you can sometimes choose to override the Equals method, this may lead to Equality Pollution, because the implementation the test needs may not be the correct one for the type in general.

For example, Domain-Driven Design distinguishes between Entities and Value Objects, and those have vastly different equality semantics.

When this is the case, you can write a custom comparison for the type in question.
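As a sketch (not from the original answer), such a test-specific comparison could live in an IEqualityComparer<Student> that only the test project references:

using System.Collections.Generic;

// Test-specific equality for Student; production code never sees this class.
public class StudentValueComparer : IEqualityComparer<Student>
{
    public bool Equals(Student x, Student y)
    {
        if (x == null || y == null) return ReferenceEquals(x, y);
        return x.ID == y.ID && x.Name == y.Name;
    }

    public int GetHashCode(Student student)
    {
        return student.ID.GetHashCode();
    }
}

// In a test: Assert.IsTrue(new StudentValueComparer().Equals(st1, st2));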

If you get tired of doing this, AutoFixture's Likeness class offers general-purpose Test-Specific Equality. With your Student class, this would allow you to write a test like this:

[TestMethod]
public void VerifyThatStudentAreEqual()
{
    Student st1 = new Student();
    st1.ID = 20;
    st1.Name = "ligaoren";

    Student st2 = new Student();
    st2.ID = 20;
    st2.Name = "ligaoren";

    var expectedStudent = new Likeness<Student, Student>(st1);

    Assert.AreEqual(expectedStudent, st2);
}

This doesn't require you to override Equals on Student.

Likeness performs a semantic comparison, so it can also compare two different types as long as they are semantically similar.

I recently finished a project using TDD and I found the process to be a bit of a nightmare. I enjoyed writing tests first and watching my code grow, but as soon as the requirements started changing and I started doing refactorings, I found that I spent more time rewriting/fixing unit tests than I did writing code - much more time, in fact.

I felt while I was going through this process that it would be much easier to write the tests after the application was finished, but if I did that I would have lost all the benefits of TDD.

So are there any hints / tips for writing maintainable TDD code? I'm currently reading Roy Osherove's The Art of Unit Testing; are there any other resources that could help me out?

Thanks

Yes, there's a whole book called xUnit Test Patterns that deals with this issue.

It's a Martin Fowler signature book, so it has all the trappings of a classic patterns book. Whether you like that or not is a matter of personal taste, but I, for one, found it immensely valuable.

Anyhow, the gist of the matter is that you should treat your test code as you would your production code. First and foremost, you should adhere to the DRY principle, because that makes it easier to refactor your API.
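One concrete way to apply DRY to tests is the book's Creation Method pattern: route all construction of the SUT through one helper, so a constructor change touches a single place. A sketch with hypothetical Invoice/Customer types:

// Hypothetical domain types; the point is the single creation point.
private static Invoice CreateAnonymousInvoice()
{
    return new Invoice(new Customer("Arbitrary Name"));
}

[Test]
public void NewInvoiceHasNoLines()
{
    var invoice = CreateAnonymousInvoice(); // tests never call the ctor directly

    Assert.AreEqual(0, invoice.Lines.Count);
}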

Practice

It takes a while to learn how to write decent unit tests. Finding one project (more like several projects) difficult is nothing strange.

The xUnit Test Patterns book recommended already is good, and I've heard good things about the book you're currently reading.

As for general advice, it depends on what was hard about your tests. If they broke often, they may not be unit tests but rather integration tests. If they were difficult to set up, the SUT (System Under Test) could be showing signs of being too complex and in need of further modularisation. The list goes on.

Some advice I live by is following the AAA rule.

Arrange, Act and Assert. Each test should follow this formula. This makes tests readable, and easy to maintain if and when they do break.
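A trivial sketch of the layout (the Basket type is made up for illustration):

[Test]
public void AddingAnItemIncreasesTheCount()
{
    // Arrange
    var basket = new Basket();

    // Act
    basket.Add("book");

    // Assert
    Assert.AreEqual(1, basket.Count);
}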

Design is Still Important

I practice TDD, but before any code is written I grab a whiteboard and scribble away. While TDD allows your code to evolve, some up-front design is always a benefit. Then you at least have a starting point, and from here your code can be driven by the tests you write.

If I'm carrying out a particularly difficult task, I make a prototype. Forget TDD, forget best practices, just bash out some code. Obviously this is not production code, but it provides a starting point. From this prototype I then think about the actual system, and what tests I require.

Check out the Google Testing Blog - this was the turning point for me when starting TDD. Misko's articles (and site - the Guide to Testable Code especially) are excellent, and should point you in the right direction.

Where can I find good literature on unit testing? Book titles and links are welcome.

Update: Here is a list of books mentioned in answers below

xUnit Test Patterns: Refactoring Test Code

Growing Object-Oriented Software Guided by Tests

The Art Of Unit Testing

The real challenge of software testing is solving the puzzle of test design.

Testing Object-Oriented Systems: Models, Patterns, and Tools provides three dozen test design patterns applicable to unit test design. It also provides many design patterns for test automation. These patterns distill many hard-won best practices and research insights.

Pragmatic Unit Testing

Test Driven Development: By Example

I was reading the Joel Test 2010 and it reminded me of an issue I have with unit testing.

How do I really unit test something? Do I unit test functions, or only full classes? What if I have 15 classes that are each under 20 lines - should I write a 35-line unit test for each class, bringing 15×20 lines to 15×(20+35) lines (that's from 300 to 825, nearly 3x more code)?

If a class is used by only two other classes in the module, should I unit test it, or would the tests against the other two classes suffice? What if they are all under 30 lines of code - should I bother?

If I write code to dump data and I never need to read it back (another app consumes it, and that app either isn't command line or offers no way to verify the data is good), do I still need to unit test it?

What if the app is a utility totalling under 500 lines of code? Or one that is used this week and will be used again in the future, but always needs reconfiguring because it is meant for a quick batch process and each project requires tweaks to get the desired output (I'm trying to say there's no way around it; for valid reasons it will always be tweaked)? Do I unit test it, and if so, how? (Maybe we don't care if we break a feature used in the past but not in the present or future.)

etc.

I think this should be a wiki. Maybe people would like to give examples of exactly what they should unit test (or should not)? Maybe links to books would be good. I tried one, but it never clarified what should be unit tested - just the problems of writing unit tests and their solutions.


Also, if classes are meant to be used only in that project (by design, spec, or whatever other reason) and a class isn't useful alone (let's say it generates the HTML for comments from comment data), do I really need to test it? Say, by checking that all public functions accept null comment objects, when my project doesn't ever use a null comment? It's those kinds of things that make me wonder if I am unit testing the wrong code. Also, tons of classes are throwaway once the project ends. It's the borderline-throwaway, or not-very-useful-alone, code that bothers me.

Some time ago, I had the same question you posted in mind. I studied a lot of articles, tutorials, books and so on... Although these resources gave me a good starting point, I was still insecure about how to apply unit testing efficiently. After coming across xUnit Test Patterns: Refactoring Test Code and leaving it on my shelf for about one year (you know, we have a lot of stuff to study), it gave me what I needed to write unit tests efficiently. With a lot of useful patterns (and advice), you will see how you can become a good unit tester. Topics include:

  • Test strategy patterns
  • Basic patterns
  • Fixture setup patterns
  • Result verification patterns
  • Test double patterns
  • Test organization patterns
  • Database patterns
  • Value patterns

And so on...

I will show you, for instance, the Derived Value pattern:

A derived input is often employed when we need to test a method that takes a complex object as an argument. For example, thorough input-validation testing requires that we exercise the method with each of the attributes of the object set to one or more possible invalid values. Because the first rejected value could cause termination of the method, we must verify each bad attribute in a separate call. We can instantiate the invalid object easily by first creating a valid object and then replacing one of its attributes with an invalid value.
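A sketch of what that looks like in practice - Order and OrderValidator are hypothetical types, and each test breaks exactly one attribute:

// Start from a known-valid object...
private static Order CreateValidOrder()
{
    return new Order { Quantity = 1, CustomerId = 42 };
}

[Test]
public void RejectsNonPositiveQuantity()
{
    var order = CreateValidOrder();
    order.Quantity = 0; // ...and derive the input by breaking one attribute

    Assert.IsFalse(new OrderValidator().IsValid(order));
}

[Test]
public void RejectsMissingCustomer()
{
    var order = CreateValidOrder();
    order.CustomerId = 0; // each bad attribute is verified in a separate call

    Assert.IsFalse(new OrderValidator().IsValid(order));
}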

A test organization pattern related to your question (Testcase Class per Feature):

As the number of test methods grows, we need to decide on which Testcase class to put each test method... Using a Testcase Class per Feature gives us a systematic way to break up a large Testcase class into several smaller ones without having to change our test methods.
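As a sketch (the Account feature names are hypothetical), the split looks like this:

// Instead of one sprawling AccountTest class...
[TestFixture]
public class AccountDepositTests
{
    [Test]
    public void DepositIncreasesBalance() { /* ... */ }
}

[TestFixture]
public class AccountWithdrawalTests
{
    [Test]
    public void WithdrawingMoreThanBalanceThrows() { /* ... */ }
}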

But before anything else, my advice: read xUnit Test Patterns carefully.

A new project we began introduced a lot of new technologies we weren't so familiar with, and an architecture we don't have much practice in. In other words, the interfaces and interactions between the service classes etc. of what we're building are fairly volatile, even more so due to internal and customer feedback. Though I've always been frustrated by the ever-moving specification, I see this to some degree as a necessary part of building something we've never built before - if we just stuck to the original design and scope, the end product would probably be a whole lot less innovative and useful than it's becoming.

I also introduced test-driven development (TDD), as the benefits are well-documented and conceptually I loved the idea. Two more new things to learn - NUnit and mocking - but seeing all those green circles made it all worthwhile.

Over time, however, those constant changes in design seemed to mean I was spending a whole lot more time changing my tests than writing the code itself. For this reason alone, I've gone back to the old way of testing - that is, not automated.

While I have no doubt that the application would be far more robust with hundreds of excellent unit tests, I've found the trade-off of time to launch the product to be mostly unacceptable. My question is, then - have any of you also found that TDD can be a hassle if you're prototyping something / building a beta version? Does TDD go much more naturally hand-in-hand with something where the specifications are more fixed, or where the developers have more experience in the language and technologies? Or have I done something fundamentally wrong?

Note that I'm not trying to criticise TDD here - just that I'm not sure it's always the best fit for all situations.

The short answer is that TDD is very valuable for beta versions, but may be less so for prototyping.

I think it is very important to distinguish between beta versions and prototyping.

A beta version is essentially a production version that is just still in development, so you should definitely use TDD in that scenario.

A prototype/proof of concept is something you build with the express intent of throwing it away once you've gotten the answers out of it that you wanted.

It's true that project managers will tend to push for the prototype to be used as a basis for production code, but it is very important to resist that. If you know that's not possible, treat the prototype code as you would your production code, because you know it is going to become your production code in the future - and that means you should use TDD with it as well.

When you are learning a new technology, most code samples etc. are not written with unit tests in mind, so it can be difficult to translate the new technology to the unit testing mindset. It most definitely feels like a lot of overhead.

In my experience, however, unit testing often really forces you to push the boundaries of the new technology that you are learning. Very often, you need to research and learn all the different hooks the new technology provides, because you need to be able to isolate the technology via DI or the like.

Instead of only following the beaten path, unit testing frequently forces you to learn the technology in much more depth, so what may feel like overhead is actually just a more in-depth prototype - one that is often more valuable, because it covers more ground.

Personally, I think unit testing a new technology is a great learning tool.

The symptoms you seem to be experiencing regarding test maintainability are a bit orthogonal, I think. Your tests may be Overspecified, which is something that can happen just as well when working with known technologies (though I think it is probably easier to fall into this trap when you are also learning a new technology at the same time).

The book xUnit Test Patterns describes the Overspecified Test antipattern and provides a lot of guidance and patterns that can help you write more maintainable tests.

In my search for a unit-testing tool for C# I found xUnit.NET. Until now I have read most of the articles on http://xunit.codeplex.com/ and even tried out the examples given at How do I use xUnit.net?.

But sadly, on the official page I could only find basic information about xUnit.NET. Is there any further information available for it?

Besides the xUnit-1.9.1.chm file mentioned by Sean U and the examples on the official xUnit.NET website, I found two other resources that helped me understand the basics of working with xUnit.NET:

Sadly, as also pointed out by Sean U, it seems there are no books at all about the xUnit.NET framework yet. So for further information it looks like one has to go with studying the *.chm file and reading general books about unit testing. Or switch to another testing framework - that's what I think I'll do...

Update

Ognyan Dimitrov added some additional resources in his comments:

If you decide to abandon xUnit and use NUnit instead, a good book to read is "The Art of Unit Testing (with examples in .NET)".

Nice clear explanations of both basic and advanced unit testing concepts, using the NUnit framework.

I am a hacker and not a full-time programmer, but I am looking to start my own full application development experiment. I apologize if I am missing something easy here. I am looking for recommendations for books, articles, sites, etc. for learning more about test-driven development, specifically compatible with or aimed at Python web application programming. I understand that Python has built-in tools to assist. What would be the best way to learn about these outside of RTFM? I have searched on StackOverflow and found Kent Beck's and David Astels' book on the subject. I have also bookmarked the Wikipedia article, as it has many of these types of resources.

Are there any particular ones you would recommend for this language/application?

A little late to the game with this one, but I have been hunting for a Python-oriented TDD book, and I just found Python Testing: Beginner's Guide by Daniel Arbuckle. I haven't had a chance to read it yet, but when I do, I'll try to remember to post a follow-up here. The reviews on the Amazon page look pretty positive, though.

I know that Kent Beck's book (which you mentioned) covers TDD in Python to some pretty good depth. If I remember correctly, the last half of the book takes you through development of a unit test framework in Python. There's nothing specific to web development, though, which is a problem in many TDD resources that I've read. It's a best practice to keep your business logic separate from your presentation in order to make your BL more testable, among other reasons.

Another good book that you might want to look into is xUnit Test Patterns. It doesn't use Python, but it does talk a lot about designing for testability, how to use mocks and stubs (which you'll need for testing web applications), and automating testing. It's more advanced than Beck's book, which makes it a good follow-up.

I've read about unit testing and heard a lot of hullabaloo by others touting its usefulness, and would like to see it in action. As such, I've selected this basic class from a simple application that I created. I have no idea how testing would help me, and am hoping one of you will be able to help me see the benefit of it by pointing out what parts of this code can be tested, and what those tests might look like. So, how would I write unit tests for the following code?

public class Hole : INotifyPropertyChanged
{
    #region Field Definitions
    private double _AbsX;
    private double _AbsY;
    private double _CanvasX { get; set; }
    private double _CanvasY { get; set; }
    private bool _Visible;
    private double _HoleDia = 20;
    private HoleTypes _HoleType;
    private int _HoleNumber;
    private double _StrokeThickness = 1;
    private Brush _StrokeColor = new SolidColorBrush(Colors.Black);
    private HolePattern _ParentPattern;
    #endregion

    public enum HoleTypes { Drilled, Tapped, CounterBored, CounterSunk };
    public Ellipse HoleEntity = new Ellipse();
    public Ellipse HoleDecorator = new Ellipse();
    public TextBlock HoleLabel = new TextBlock();

    private static DoubleCollection HiddenLinePattern = 
               new DoubleCollection(new double[] { 5, 5 });

    public int HoleNumber
    {
        get
         {
            return _HoleNumber;
         }
        set
        {
            _HoleNumber = value;
            HoleLabel.Text = value.ToString();
            NotifyPropertyChanged("HoleNumber");
        }
    }
    public double HoleLabelX { get; set; }
    public double HoleLabelY { get; set; }
    public string AbsXDisplay { get; set; }
    public string AbsYDisplay { get; set; }

    public event PropertyChangedEventHandler PropertyChanged;
    //public event MouseEventHandler MouseActivity;

    // Constructor
    public Hole()
    {
        //_HoleDia = 20.0;
        _Visible = true;
        //this.ParentPattern = WhoIsTheParent;
        HoleEntity.Tag = this;
        HoleEntity.Width = _HoleDia;
        HoleEntity.Height = _HoleDia;

        HoleDecorator.Tag = this;
        HoleDecorator.Width = 0;
        HoleDecorator.Height = 0;


        //HoleLabel.Text = x.ToString();
        HoleLabel.TextAlignment = TextAlignment.Center;
        HoleLabel.Foreground = new SolidColorBrush(Colors.White);
        HoleLabel.FontSize = 12;

        this.StrokeThickness = _StrokeThickness;
        this.StrokeColor = _StrokeColor;
        //HoleEntity.Stroke = Brushes.Black;
        //HoleDecorator.Stroke = HoleEntity.Stroke;
        //HoleDecorator.StrokeThickness = HoleEntity.StrokeThickness;
        //HiddenLinePattern=DoubleCollection(new double[]{5, 5});
    }

    public void NotifyPropertyChanged(String info)
    {
        if (PropertyChanged != null)
        {
            PropertyChanged(this, 
                       new PropertyChangedEventArgs(info));
        }
    }

    #region Properties
    public HolePattern ParentPattern
    {
        get
        {
            return _ParentPattern;
        }
        set
        {
            _ParentPattern = value;
        }
    }

    public bool Visible
    {
        get { return _Visible; }
        set
        {
            _Visible = value;
            HoleEntity.Visibility = value ? 
             Visibility.Visible : 
             Visibility.Collapsed;
            HoleDecorator.Visibility = HoleEntity.Visibility;
            SetCoordDisplayValues();
            NotifyPropertyChanged("Visible");
        }
    }

    public double AbsX
    {
        get { return _AbsX; }
        set
        {
            _AbsX = value;
            SetCoordDisplayValues();
            NotifyPropertyChanged("AbsX");
        }
    }

    public double AbsY
    {
        get { return _AbsY; }
        set
        {
            _AbsY = value;
            SetCoordDisplayValues();
            NotifyPropertyChanged("AbsY");
        }
    }

    private void SetCoordDisplayValues()
    {
        AbsXDisplay = HoleEntity.Visibility == 
        Visibility.Visible ? String.Format("{0:f4}", _AbsX) : "";
        AbsYDisplay = HoleEntity.Visibility == 
        Visibility.Visible ? String.Format("{0:f4}", _AbsY) : "";
        NotifyPropertyChanged("AbsXDisplay");
        NotifyPropertyChanged("AbsYDisplay");
    }

    public double CanvasX
    {
        get { return _CanvasX; }
        set
        {
            if (value == _CanvasX) { return; }
            _CanvasX = value;
            UpdateEntities();
            NotifyPropertyChanged("CanvasX");
        }
    }

    public double CanvasY
    {
        get { return _CanvasY; }
        set
        {
            if (value == _CanvasY) { return; }
            _CanvasY = value;
            UpdateEntities();
            NotifyPropertyChanged("CanvasY");
        }
    }

    public HoleTypes HoleType
    {
        get { return _HoleType; }
        set
        {
            if (value != _HoleType)
            {
                _HoleType = value;
                UpdateHoleType();
                NotifyPropertyChanged("HoleType");
            }
        }
    }

    public double HoleDia
    {
        get { return _HoleDia; }
        set
        {
            if (value != _HoleDia)
            {
                _HoleDia = value;
                HoleEntity.Width = value;
                HoleEntity.Height = value;
                UpdateHoleType(); 
                NotifyPropertyChanged("HoleDia");
            }
        }
    }

    public double StrokeThickness
    {
        get { return _StrokeThickness; }
        //Setting this StrokeThickness will also set Decorator
        set
        {
            _StrokeThickness = value;
            this.HoleEntity.StrokeThickness = value;
            this.HoleDecorator.StrokeThickness = value;
            NotifyPropertyChanged("StrokeThickness");
        }
    }

    public Brush StrokeColor
    {
        get { return _StrokeColor; }
        //Setting this StrokeThickness will also set Decorator
        set
        {
            _StrokeColor = value;
            this.HoleEntity.Stroke = value;
            this.HoleDecorator.Stroke = value;
            NotifyPropertyChanged("StrokeColor");
        }
    }

    #endregion

    #region Methods

    private void UpdateEntities()
    {
        //-- Update Margins for graph positioning
        HoleEntity.Margin = new Thickness
        (CanvasX - HoleDia / 2, CanvasY - HoleDia / 2, 0, 0);
        HoleDecorator.Margin = new Thickness
        (CanvasX - HoleDecorator.Width / 2, 
         CanvasY - HoleDecorator.Width / 2, 0, 0);
        HoleLabel.Margin = new Thickness
        ((CanvasX * 1.0) - HoleLabel.FontSize * .3, 
         (CanvasY * 1.0) - HoleLabel.FontSize * .6, 0, 0);
    }

    private void UpdateHoleType()
    {
        switch (this.HoleType)
        {
            case HoleTypes.Drilled: //Drilled only
                HoleDecorator.Visibility = Visibility.Collapsed;
                break;
            case HoleTypes.Tapped: // Drilled & Tapped
                HoleDecorator.Visibility = (this.Visible == true) ? 
                Visibility.Visible : Visibility.Collapsed;
                HoleDecorator.Width = HoleEntity.Width * 1.2;
                HoleDecorator.Height = HoleDecorator.Width;
                HoleDecorator.StrokeDashArray = 
                LinePatterns.HiddenLinePattern(1);
                break;
            case HoleTypes.CounterBored: // Drilled & CounterBored
                HoleDecorator.Visibility = (this.Visible == true) ? 
                Visibility.Visible : Visibility.Collapsed;
                HoleDecorator.Width = HoleEntity.Width * 1.5;
                HoleDecorator.Height = HoleDecorator.Width;
                HoleDecorator.StrokeDashArray = null;
                break;
            case HoleTypes.CounterSunk: // Drilled & CounterSunk
                HoleDecorator.Visibility = (this.Visible == true) ? 
                Visibility.Visible : Visibility.Collapsed;
                HoleDecorator.Width = HoleEntity.Width * 1.8;
                HoleDecorator.Height = HoleDecorator.Width;
                HoleDecorator.StrokeDashArray = null;
                break;
        }
        UpdateEntities();
    }

    #endregion

}

We must test it, right?

Tests are validation that the code works as you expect it to work. Writing tests for this class right now will not yield any real benefit (unless you uncover a bug while writing them). The real benefit comes when you have to go back and modify this class. You may be using this class in several different places in your application. Without tests, changes to the class may have unforeseen repercussions. With tests, you can change the class and be confident that you aren't breaking something else, as long as all of your tests pass. Of course, the tests need to be well written and cover all of the class's functionality.

So, how to test it?

At the class level, you will need to write unit tests. There are several unit testing frameworks. I prefer NUnit.

What am I testing for?

You are testing that everything behaves as you expect it to behave. If you give a method X, then you expect Y to be returned. In Gord's answer, he suggested testing that your event actually fires off. This would be a good test.
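Against the class exactly as you posted it, such a test could look like this (NUnit; note that Hole news up WPF elements like Ellipse and TextBlock in its constructor, so the test may need to run on an STA thread):

[Test]
public void SettingHoleNumberRaisesPropertyChangedAndUpdatesLabel()
{
    var hole = new Hole();
    string raisedProperty = null;
    hole.PropertyChanged += (sender, e) => raisedProperty = e.PropertyName;

    hole.HoleNumber = 7;

    Assert.AreEqual("HoleNumber", raisedProperty);
    Assert.AreEqual("7", hole.HoleLabel.Text);
}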

The book, Agile Principles, Patterns, and Practices in C# by Uncle Bob has really helped me understand what and how to test.

Tests will help, if you need to make changes.

According to Feathers (Feathers, Working Effectively with Legacy Code, p. 3) there are four reasons for changes:

  • Adding a feature
  • Fixing a bug
  • Improving design
  • Optimizing resource usage

When there is the need for change, you want to be confident that you don't break anything. To be more precise: You don't want to break any behavior (Hunt, Thomas, Pragmatic Unit Testing in C# with NUnit, p. 31).

With unit testing in place you can do changes with much more confidence, because they would (provided they are programmed properly) capture changes in behavior. That's the benefit of unit tests.

It would be difficult to write unit tests for the class you gave as an example, because unit tests also require a certain structure in the code under test. One reason I see is that the class is doing too much. Any unit tests you apply to that class will be quite brittle. A minor change may make your unit tests blow up, and you will end up wasting much time fixing problems in your test code instead of your production code.

Reaping the benefits of unit tests requires changing the production code. Just applying unit-testing principles without considering this will not give you a positive unit testing experience.

How do you get the positive unit testing experience? Be open-minded about it and learn.

I would recommend Working Effectively with Legacy Code for an existing code base (like the piece of code you gave above). For an easy kick-start into unit testing, try Pragmatic Unit Testing in C# with NUnit. The real eye-opener for me was xUnit Test Patterns: Refactoring Test Code.

Good luck on your journey!

Although there are plenty of resources, even here on SO, only two of the terms are ever compared to each other in those Q&As.

So, in short: what is each one of them? And how do they all relate to each other? Or don't they at all?

The difference between a mock and a stub is very simple - a mock can make your test fail, while a stub can't. That's all there is to it. Additionally, you can think of a stub as something that provides values. Nowadays, fake is just a generic term for both of them (more on that later).

Example

Let's consider a case where you have to build a service that sends packages via a communication protocol (the exact details are irrelevant). You simply supply the service with a package code and it does the rest. Given the snippet below, can you identify which dependency would be a stub and which a mock in a potential unit test?

public class DistributionService
{
    // Collaborators injected via constructor (interface definitions elided).
    private readonly IPackageService packageService;
    private readonly IPackageBuilder packageBuilder;
    private readonly IPackageDistributor packageDistributor;

    public DistributionService(IPackageService packageService,
        IPackageBuilder packageBuilder, IPackageDistributor packageDistributor)
    {
        this.packageService = packageService;
        this.packageBuilder = packageBuilder;
        this.packageDistributor = packageDistributor;
    }

    public void SendPackage(string packageCode)
    {
        var contents = this.packageService.GetPackageContents(packageCode);
        if (contents == null)
        {
            throw new InvalidOperationException(
                "Attempt to send non-existing package");
        }
        var package = this.packageBuilder.Build(contents);
        this.packageDistributor.Send(package);
    }
}

It's fairly easy to tell that packageBuilder simply provides a value and there's no possible way it could make any test fail. That's a stub. Even though it might seem blurrier, packageService is a stub too. It provides a value (what we do with the value is irrelevant from the stub's point of view). Of course, later we'll use that value to test whether an exception is thrown, but it's still all within our control (as in, we tell the stub exactly what to do and forget about it - it should have no further influence on the test).

It gets different with packageDistributor. Even if it provided a value, that value wouldn't be consumed. Yet the call to Send seems to be a pretty important part of our implementation, and we'll most likely want to verify that it is called.

At this point we should reach the conclusion that packageDistributor is a mock. We'll have a dedicated unit test asserting that the Send method was called, and if for some reason it wasn't - we want to know, as it's an important part of the entire process. The other dependencies are stubs, as all they do is provide values to other, perhaps more relevant, pieces of code.
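To make that concrete, here is a sketch of such a test using Moq - the three interface shapes (string-based here) and the constructor are assumptions, since the snippet above doesn't declare them:

[Test]
public void SendPackageSendsTheBuiltPackage()
{
    // Stubs: they only provide values and can never fail the test.
    var packageService = new Mock<IPackageService>();
    packageService.Setup(s => s.GetPackageContents("code")).Returns("contents");
    var packageBuilder = new Mock<IPackageBuilder>();
    packageBuilder.Setup(b => b.Build("contents")).Returns("package");

    // Mock: the one whose interaction we assert on.
    var packageDistributor = new Mock<IPackageDistributor>();

    var sut = new DistributionService(
        packageService.Object, packageBuilder.Object, packageDistributor.Object);

    sut.SendPackage("code");

    // Only this verification can fail the test - that's what makes it a mock.
    packageDistributor.Verify(d => d.Send("package"), Times.Once());
}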

Quick glance at TDD

A stub, being a stub, could just as well be replaced with a constant value in a naive implementation:

var contents = "Important package";
var package = "<package>Important package</package>";
this.packageDistributor.Send(package);

This is essentially what mocking frameworks do with stubs - instruct them to return a configurable/explicit value. Old-school, hand-rolled stubs often do just that: return a constant value.

Obviously, such code doesn't make much sense, but anyone who has ever done TDD has surely seen a bunch of such naive implementations at the early stages of class development. The iterative development that results from TDD will often help identify the roles of your class's dependencies.

Stubs, mocks and fakes nowadays

At the beginning of this post I mentioned that fake is just a generic term. Given that a mock can also serve as a stub (especially where modern mocking frameworks are concerned), to avoid confusion it's a good idea to call such an object a fake. Nowadays you can see this trend growing - the original mock-stub distinction is slowly becoming a thing of the past and more universal names are used. For example:

  • FakeItEasy uses fake
  • NSubstitute uses substitute
  • Moq uses mock (the name is old, but no visible distinction is made between stub and mock)


I've been working on an ASP.NET MVC project for about 8 months now. For the most part I've been using TDD; some aspects were covered by unit tests only after I had written the actual code. In total, the project has pretty good test coverage.

I'm quite pleased with the results so far. Refactoring really is much easier and my tests have helped me uncover quite a few bugs even before I ran my software the first time. Also, I have developed more sophisticated fakes and helpers to help me minimize the testing code.

However, what I don't really like is the fact that I frequently find myself having to update existing unit tests to account for refactorings I made to the software. Refactoring the software is now quick and painless, but refactoring my unit tests is quite boring and tedious. In fact the cost of maintaining my unit tests is higher than the cost of writing them in the first place.

I am wondering whether I might be doing something wrong, or if this ratio of test-development cost to test-maintenance cost is normal. I've already tried to write as many tests as possible so that they cover my user stories instead of systematically covering my objects' interfaces, as suggested in this blog article.

Also, do you have any further tips on how to write TDD tests so that refactoring breaks as few tests as possible?

Edit: As Henning and tvanfosson correctly remarked, it's usually the setup part that is most expensive to write and maintain. Broken tests are (in my experience) usually a result of a refactoring to the domain model that is not compatible with the setup part of those tests.

This is a well-known problem that can be addressed by writing tests according to best practices. These practices are described in the excellent xUnit Test Patterns. The book describes test smells that lead to unmaintainable tests, and provides guidance on how to write maintainable unit tests.

After having followed those patterns for a long time, I wrote AutoFixture which is an open source library that encapsulates a lot of those core patterns.

It works as a Test Data Builder, but can also be wired up to work as an Auto-Mocking container and do many other strange and wonderful things.

It helps a lot with regard to maintenance because it raises the abstraction level of writing a test considerably. Tests become a lot more declarative because you can state that you want an instance of a certain type instead of explicitly writing how it is created.

Imagine that you have a class with this constructor signature:

public MyClass(Foo foo, Bar bar, Sgryt sgryt)

As long as AutoFixture can resolve all the constructor arguments, you can simply create a new instance like this:

var sut = fixture.CreateAnonymous<MyClass>();

The major benefit is that if you decide to refactor the MyClass constructor, no tests break because AutoFixture will figure it out for you.

That's just a glimpse of what AutoFixture can do. It's a stand-alone library, so it will work with your unit testing framework of choice.

I'm looking into using parallel unit tests for our project(s) and was wondering about any best practices for actually writing such parallel unit tests.

If by parallel unit tests you mean tests that can run concurrently, the most important advice I can give you is to avoid so-called Shared Fixtures.

The book xUnit Test Patterns describe the term Fixture, which basically can be described as the entire context in which each test case executes, including persistent and transient data.

A Shared Fixture indicates that test cases share some context while running. If that context is mutable, race conditions may occur.

Keeping a Shared Fixture immutable (a so-called Immutable Shared Fixture) will allow you to run tests in parallel, but even better, so-called Fresh Fixtures (where each test case has its own Fixture) are thread-safe by definition, since only the test case itself has access to the Fixture.

Examples of Shared Fixtures include any sort of test that use a shared database, but also include tests where you have static in-memory state in either the System Under Test (SUT) or the tests themselves, so you need to avoid that.

You should also keep in mind that if your SUT accesses shared (static) data, that access itself must be thread-safe.
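A minimal sketch of the difference, with hypothetical types:

// Fresh Fixture: each test case builds its own context, so tests
// can run in parallel without interfering with each other.
[Test]
public void FreshFixtureIsSafeInParallel()
{
    var repository = new InMemoryRepository(); // created per test case
    var sut = new OrderService(repository);

    sut.Process(new Order());

    Assert.AreEqual(1, repository.Count);
}

// Anti-example - a mutable Shared Fixture. Two concurrently running
// tests mutating this static field can race:
// private static readonly InMemoryRepository SharedRepository = new InMemoryRepository();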

This answer to a question about C++ unit test frameworks suggests a possibility that had not occurred to me before: using C++/CLI and NUnit to create unit tests for native C++ code.

We use NUnit for our C# tests, so the possibility of using it for C++ as well seems enticing.

I've never used managed C++, so my concern is are there any practical limitations to this approach? Are many of you doing this? If so, what was your experience like?

The biggest concern is the learning curve of the C++/CLI language (formerly Managed C++) itself, if the tests need to be understood or maintained by non-C++ developers.

It takes a minimum of 1-2 years of C++ OOP experience to be able to contribute to a C++/CLI/NUnit test project and to solve the various issues that arise at the managed-native interface. (By contribution, I mean being able to work standalone and to make mock objects, implement and consume native interfaces in C++/CLI, etc., to meet all testing needs.)

Some people may just never grasp C++/CLI well enough to be able to contribute.

For certain types of native software libraries with very demanding test needs, C++/CLI/NUnit is the only combination that will meet all of the unit testing needs while keeping the test code agile and able to respond to changes. I recommend the book xUnit Test Patterns: Refactoring Test Code to go along this direction.

I want to learn how to build “robust” software that is designed to test itself. In other words, how do I implement automated tests in my software (using Java, Groovy or C++)?

So I want to know where to learn this (books or websites) and which tools and libraries I will need for this?

I found The Art of Unit Testing by Roy Osherove to be very helpful in understanding the basics of unit testing, integration testing, TDD and so on. It's a bit tailored to .NET languages, but it also provides very good information on the ideas behind automated testing.

Look at the xUnit testing frameworks (cppUnit for C++, JUnit for Java) and check out the wonderful book xUnit Test Patterns: Refactoring Test Code.

And if you really want to get into it, check out test-driven development. A good introduction is Uncle Bob's The Three Laws of TDD and the bowling game kata (see also bowling game episode). A great book on the subject is Test Driven Development: By Example.

Say I'm trying to test a simple Set class

public class IntSet : IEnumerable<int>
{
    public void Add(int i) {...}
    //IEnumerable implementation...
}

And suppose I'm trying to test that no duplicate values can exist in the set. My first option is to insert some sample data into the set, and test for duplicates using my knowledge of the data I used, for example:

    //OPTION 1
    void InsertDuplicateValues_OnlyOneInstancePerValueShouldBeInTheSet()
    {
        var set = new IntSet();

        //3 will be added 3 times
        var values = new List<int> {1, 2, 3, 3, 3, 4, 5};
        foreach (int i in values)
            set.Add(i);

        //I know 3 is the only candidate to appear multiple times
        int counter = 0;
        foreach (int i in set)
            if (i == 3) counter++;

        Assert.AreEqual(1, counter);
    }

My second option is to test for my condition generically:

    //OPTION 2
    void InsertDuplicateValues_OnlyOneInstancePerValueShouldBeInTheSet()
    {
        var set = new IntSet();

        //The following could even be a list of random numbers with a duplicate
        var values = new List<int> { 1, 2, 3, 3, 3, 4, 5};
        foreach (int i in values)
            set.Add(i);

        //I am not using my prior knowledge of the sample data 
        //the following line would work for any data
        CollectionAssert.AreEquivalent(new HashSet<int>(values), set);
    } 

Of course, in this example I conveniently have a set implementation to check against, as well as code to compare collections (CollectionAssert). But what if I didn't have either? This code would definitely be more complicated than that of the previous option! And this is the situation when you are testing your real-life custom business logic.

Granted, testing for expected conditions generically covers more cases - but it becomes very similar to implementing the logic again (which is both tedious and useless - you can't use the same code to check itself!). Basically, I'm asking whether my tests should look like "insert 1, 2, 3, then check something about 3" or "insert 1, 2, 3 and check for something in general".

EDIT - To help me understand, please state in your answer if you prefer OPTION 1 or OPTION 2 (or neither, or that it depends on the case, etc). Just to clarify, it's pretty clear that in this case (IntSet), option 2 is better in all aspects. However, my question pertains to the cases where you