XUnit Test Patterns

Gerard Meszaros

Improves software return on investment by teaching the reader how to refactor test code and reduce or prevent crippling test maintenance.

Mentioned in questions and answers.

I am working to integrate unit testing into the development process on the team I work on, and there are some sceptics. What are some good ways to convince the sceptical developers on the team of the value of unit testing? In my specific case we would be adding unit tests as we add functionality or fix bugs. Unfortunately our code base does not lend itself to easy testing.

In short - yes. They are worth every ounce of effort... to a point. Tests are, at the end of the day, still code, and much like typical code growth, your tests will eventually need to be refactored in order to be maintainable and sustainable. There's a tonne of GOTCHAS! when it comes to unit testing, but man oh man oh man, nothing, and I mean NOTHING empowers a developer to make changes more confidently than a rich set of unit tests.

I'm working on a project right now.... it's somewhat TDD, and we have the majority of our business rules encapsulated as tests... we have about 500 or so unit tests right now. This past iteration I had to revamp our datasource and how our desktop application interfaces with that datasource. It took me a couple of days, and the whole time I just kept running unit tests to see what I broke and fixed it. Make a change; build and run your tests; fix what you broke. Wash, rinse, repeat as necessary. What would have traditionally taken days of QA and boatloads of stress was instead a short and enjoyable experience.

Prep up front, a little bit of extra effort, and it pays 10-fold later on when you have to start dicking around with core features/functionality.

I bought this book - it's a Bible of xUnit testing knowledge. It's probably one of the most referenced books on my shelf, and I consult it daily.

I've heard that unit testing is "totally awesome", "really cool" and "all manner of good things" but 70% or more of my files involve database access (some read and some write) and I'm not sure how to write a unit test for these files.

I'm using PHP and Python but I think it's a question that applies to most/all languages that use database access.

The book xUnit Test Patterns describes some ways to handle unit-testing code that hits a database. I agree with the other people who are saying that you don't want to do this because it's slow, but you gotta do it sometime, IMO. Mocking out the db connection to test higher-level stuff is a good idea, but check out this book for suggestions about things you can do to interact with the actual database.

We have tried to introduce unit testing to our current project but it doesn't seem to be working. The extra code seems to have become a maintenance headache as when our internal Framework changes we have to go around and fix any unit tests that hang off it.

We have an abstract base class for unit testing our controllers that acts as a template calling into the child classes' abstract method implementations i.e. Framework calls Initialize so our controller classes all have their own Initialize method.

I used to be an advocate of unit testing but it doesn't seem to be working on our current project.

Can anyone help identify the problem and how we can make unit tests work for us rather than against us?

Good question!

Designing good unit tests is as hard as designing the software itself. This is rarely acknowledged by developers, so the result is often hastily written unit tests that require maintenance whenever the system under test changes. So, part of the solution to your problem could be spending more time improving the design of your unit tests.

I can recommend one great book that deserves its billing as the Design Patterns of unit testing: xUnit Test Patterns.

HTH

I will have the following components in my application

  • DataAccess
  • DataAccess.Test
  • Business
  • Business.Test
  • Application

I was hoping to use Castle Windsor as an IoC container to glue the layers together, but I am a bit uncertain about the design of the gluing.

My question is: who should be responsible for registering the objects in Windsor? I have a couple of ideas:

  1. Each layer can register its own objects. To test the BL, the test bench could register mock classes for the DAL.
  2. Each layer can register the object of its dependencies, e.g. the business layer registers the components of the data access layer. To test the BL, the test bench would have to unload the "real" DAL object and register the mock objects.
  3. The application (or test app) registers all objects of the dependencies.

Can someone help me with some ideas and pros/cons with the different paths? Links to example projects utilizing Castle Windsor in this way would be very helpful.

In general, all components in an application should be composed as late as possible, because that ensures maximum modularity, and that modules are as loosely coupled as possible.

In practice, this means that you should configure the container at the root of your application.

  • In a desktop app, that would be in the Main method (or very close to it)
  • In an ASP.NET (including MVC) application, that would be in Global.asax
  • In WCF, that would be in a ServiceHostFactory
  • etc.

The container is simply the engine that composes modules into a working application. In principle, you could write the code by hand (this is called Poor Man's DI), but it is just so much easier to use a DI Container like Windsor.

Such a Composition Root will ideally be the only piece of code in the application's root, making the application a so-called Humble Executable (a term from the excellent xUnit Test Patterns) that doesn't need unit testing in itself.

Your tests should not need the container at all, as your objects and modules should be composable, and you can directly supply Test Doubles to them from the unit tests. It is best if you can design all of your modules to be container-agnostic.

Also, specifically in Windsor, you should encapsulate your component registration logic within installers (types implementing IWindsorInstaller). See the documentation for more details.
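
A rough sketch of what that can look like (the installer and component names here are illustrative, and the exact registration API may differ slightly between Windsor versions):

using Castle.MicroKernel.Registration;
using Castle.MicroKernel.SubSystems.Configuration;
using Castle.Windsor;

// An installer keeps the registration logic for one area of the application in one place.
public class DataAccessInstaller : IWindsorInstaller
{
    public void Install(IWindsorContainer container, IConfigurationStore store)
    {
        container.Register(
            Component.For<ICustomerRepository>()          // placeholder names for your
                     .ImplementedBy<CustomerRepository>() // DataAccess types
                     .LifestyleTransient());
    }
}

// The Composition Root (e.g. the Main method of a desktop app) creates the
// container and applies the installers:
public static class Program
{
    public static void Main()
    {
        var container = new WindsorContainer();
        container.Install(new DataAccessInstaller() /*, other installers */);

        var app = container.Resolve<IApplicationShell>(); // placeholder root component
        app.Run();
    }
}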

I am looking for podcasts or videos on how to do unit testing.

Ideally they should cover both the basics and more advanced topics.

I know you didn't ask for books but... Can I also mention that Beck's TDD book is a must read, even though it may seem like a dated beginner book on first flick through (and Working Effectively with Legacy Code by Michael C. Feathers of course is the bible). Also, I'd append Martin (& Martin)'s Agile Principles, Patterns, and Practices as really helping in this regard. In this space (concise/distilled info on testing) there is also the excellent Foundations of Programming ebook. Good books on testing I've read are The Art of Unit Testing and xUnit Test Patterns. The latter is an important antidote to the first, as it is much more measured; Roy's book is very opinionated and offers a lot of unqualified 'facts' without properly going through the various options. Definitely recommend reading both books though. AOUT is very readable and gets you thinking, though it chooses specific [debatable] technologies; xUTP is in-depth and neutral and really helps solidify your understanding. I read Pragmatic Unit Testing in C# with NUnit afterwards. It's good and balanced, though slightly dated (it mentions RhinoMocks as a sidebar and doesn't mention Moq) - even if nothing in it is actually incorrect. An updated version of it would be a hands-down recommendation.

More recently I've re-read the Feathers book, which is timeless to a degree and covers important ground. However, it's more of a 'how, for 50 different wheres' in nature. It's definitely a must read though.

Most recently, I'm reading the excellent Growing Object-Oriented Software, Guided by Tests by Steve Freeman and Nat Pryce. I can't recommend it highly enough - it really ties everything together from big to small in terms of where TDD fits, and various levels of testing within a software architecture. While I'm throwing the kitchen sink in, Evans's DDD book is important too in terms of seeing the value of building things incrementally with maniacal refactoring in order to end up in a better place.

public class Student
{
    public string Name { get; set; }
    public int ID { get; set; }
}

...

var st1 = new Student
{
    ID = 20,
    Name = "ligaoren",
};

var st2 = new Student
{
    ID = 20,
    Name = "ligaoren",
};

Assert.AreEqual<Student>(st1, st2); // How do I compare two objects in a unit test?

How do I compare two collections in a unit test?

What you are looking for is what in xUnit Test Patterns is called Test-Specific Equality.

While you can sometimes choose to override the Equals method, this may lead to Equality Pollution, because the implementation you need for the test may not be the correct one for the type in general.

For example, Domain-Driven Design distinguishes between Entities and Value Objects, and those have vastly different equality semantics.

When this is the case, you can write a custom comparison for the type in question.

If you get tired of doing this, AutoFixture's Likeness class offers general-purpose Test-Specific Equality. With your Student class, this would allow you to write a test like this:

[TestMethod]
public void VerifyThatStudentAreEqual()
{
    Student st1 = new Student();
    st1.ID = 20;
    st1.Name = "ligaoren";

    Student st2 = new Student();
    st2.ID = 20;
    st2.Name = "ligaoren";

    var expectedStudent = new Likeness<Student, Student>(st1);

    Assert.AreEqual(expectedStudent, st2);
}

This doesn't require you to override Equals on Student.

Likeness performs a semantic comparison, so it can also compare two different types as long as they are semantically similar.

I recently finished a project using TDD and I found the process to be a bit of a nightmare. I enjoyed writing tests first and watching my code grow but as soon as the requirements started changing and I started doing refactorings I found that I spent more time rewriting / fixing unit tests than I did writing code, much more time in fact.

I felt while I was going through this process that it would be much easier to write the tests after the application was finished, but if I did that I would have lost all the benefits of TDD.

So are there any hints / tips for writing maintainable TDD code? I'm currently reading Roy Osherove's The Art of Unit Testing; are there any other resources that could help me out?

Thanks

Yes, there is a whole book, called xUnit Test Patterns, that deals with this issue.

It's a Martin Fowler signature book, so it has all the trappings of a classic patterns book. Whether you like that or not is a matter of personal taste, but I, for one, found it immensely valuable.

Anyhow, the gist of the matter is that you should treat your test code as you would your production code. First and foremost, you should adhere to the DRY principle, because that makes it easier to refactor your API.
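
As a small illustration of DRY applied to test code, centralizing object creation in a shared creation method means a constructor change only has to be fixed in one place (the Invoice and Customer types below are purely hypothetical):

// Used by many tests; if the Invoice constructor changes, only this helper changes.
private static Invoice CreateDefaultInvoice()
{
    return new Invoice(CreateDefaultCustomer(), DateTime.Today.AddDays(30));
}

private static Customer CreateDefaultCustomer()
{
    return new Customer("Test Customer");
}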

Practice

It takes a while to learn how to write decent unit tests. Finding one project (more likely several) difficult is nothing strange.

The xUnit Test Patterns book recommended already is good, and I've heard good things about the book you're currently reading.

As for general advice, it depends on what was hard about your tests. If they broke often, they may not be unit tests but rather integration tests. If they were difficult to set up, the SUT (System Under Test) could be showing signs of being too complex and would need further modularisation. The list goes on.

Some advice I live by is following the AAA rule.

Arrange, Act and Assert. Each test should follow this formula. This makes the tests readable and easy to maintain if and when they do break.
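
A minimal sketch of that structure (NUnit syntax; the Calculator type is just an illustration):

[Test]
public void Add_TwoNumbers_ReturnsSum()
{
    // Arrange: set up the object and inputs under test
    var calculator = new Calculator();

    // Act: perform the single action being tested
    var result = calculator.Add(2, 3);

    // Assert: verify the expected outcome
    Assert.AreEqual(5, result);
}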

Design is Still Important

I practice TDD, but before any code is written I grab a whiteboard and scribble away. While TDD allows your code to evolve, some up-front design is always a benefit. Then you at least have a starting point, and from here your code can be driven by the tests you write.

If I'm carrying out a particularly difficult task, I make a prototype. Forget TDD, forget best practices, just bash out some code. Obviously this is not production code, but it provides a starting point. From this prototype I then think about the actual system, and what tests I require.

Check out the Google Testing Blog - this was the turning point for myself when starting TDD. Misko's articles (and site - the Guide to Testable code especially) are excellent, and should point you in the right direction.

Where can I find good literature on unit testing? Book titles and links are welcome.

Update: Here is a list of books mentioned in answers below

xUnit Test Patterns: Refactoring Test Code

Growing Object-Oriented Software Guided by Tests

The Art Of Unit Testing

The real challenge of software testing is solving the puzzle of test design.

Testing Object-Oriented Systems: Models, Patterns, and Tools provides three dozen test design patterns applicable to unit test design. It also provides many design patterns for test automation. These patterns distill many hard-won best practices and research insights.

Pragmatic Unit Testing

Test Driven Development: By Example

I was reading the Joel Test 2010 and it reminded me of an issue i had with unit testing.

How do I really unit test something? Do I not unit test functions, only full classes? What if I have 15 classes that are <20 lines each? Should I write a 35-line unit test for each class, bringing 15*20 lines to 15*(20+35) lines (that's from 300 to 825, nearly 3x more code)?

If a class is used by only two other classes in the module, should I unit test it, or would the tests against the other two classes suffice? What if they are all <30 lines of code - should I bother?

If I write code to dump data and I never need to read it (another app consumes it instead), do I still need to unit test it? The other app isn't command line - or it is, but there's no way to verify whether the data is good.

What if the app is a utility and the total is <500 lines of code? Or it is used that week and will be used again in the future, but always needs to be reconfigured because it is meant for a quick batch process and each project will require tweaks (the desired output is never quite the same). I'm trying to say there's no way around it - for valid reasons it will always be tweaked. Do I unit test it, and if so, how? (Maybe we don't care if we break a feature used in the past but not in the present or future.)

etc.

I think this should be a wiki. Maybe people would like to say exactly what they should unit test (or should not)? Maybe links to books are good. I tried one, but it never clarified what should be unit tested, just the problems of writing unit tests and solutions to them.


Also, if classes are meant to only be in that project (by design, spec or whatever other reason) and the class isn't useful alone (let's say it generates the HTML using data that returns html-ready comments), do I really need to test it? Say, by checking whether all public functions allow null comment objects when my project doesn't ever use null comments. It's those kinds of things that make me wonder if I am unit testing the wrong code. Also, tons of classes are throwaway once the project ends. It's the borderline throwaway or not-very-useful-alone code which bothers me.

Some time ago, I had the same question you have posted in mind. I studied a lot of articles, tutorials, books and so on... Although these resources gave me a good starting point, I was still insecure about how to apply unit testing efficiently. After coming across xUnit Test Patterns: Refactoring Test Code (and leaving it on my shelf for about a year - you know, we have a lot of stuff to study), it gave me what I needed to write unit tests efficiently. With a lot of useful patterns (and advice), you will see how you can become a unit testing coder. Topics such as

  • Test strategy patterns
  • Basic patterns
  • Fixture setup patterns
  • Result verification patterns
  • Test double patterns
  • Test organization patterns
  • Database patterns
  • Value patterns

And so on...

I will show you, for instance, the Derived Value pattern:

A derived input is often employed when we need to test a method that takes a complex object as an argument. For example, thorough input validation testing requires that we exercise the method with each of the attributes of the object set to one or more possible invalid values. Because the first rejected value could cause termination of the method, we must verify each bad attribute in a separate call. We can instantiate the invalid object easily by first creating a valid object and then replacing one of its attributes with an invalid value.
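
A rough sketch of that idea in a test (the Order type, its Quantity attribute and the validator are hypothetical):

[Test]
public void Validate_RejectsNegativeQuantity()
{
    // Derived input: start from a known-good object...
    var order = CreateValidOrder();

    // ...and invalidate exactly one attribute for this test case.
    order.Quantity = -1;

    Assert.Throws<ArgumentException>(() => new OrderValidator().Validate(order));
}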

A test organization pattern which is related to your question (Testcase Class per Feature):

As the number of test methods grows, we need to decide on which Testcase class to put each test method... Using a Testcase Class per Feature gives us a systematic way to break up a large Testcase class into several smaller ones without having to change our test methods.

But before anything else, my advice: read xUnit Test Patterns carefully.

A new project we began introduced a lot of new technologies we weren't so familiar with, and an architecture that we don't have a lot of practice in. In other words, the interfaces and interactions between service classes etc. of what we're building are fairly volatile, even more so due to internal and customer feedback. Though I've always been frustrated by the ever-moving specification, I see this to some degree as a necessary part of building something we've never built before - if we just stuck to the original design and scope, the end product would probably be a whole lot less innovative and useful than it's becoming.

I also introduced test-driven development (TDD), as the benefits are well-documented and conceptually I loved the idea. Two more new things to learn - NUnit and mocking - but seeing all those green circles made it all worthwhile.

Over time, however, those constant changes in design seemed to mean I was spending a whole lot more time changing my tests than I was on writing the code itself. For this reason alone, I've gone back to the old ways of testing - that is, not automated.

While I have no doubt that the application would be far more robust with hundreds of excellent unit tests, I've found the trade-off of time to launch the product to be mostly unacceptable. My question is, then - have any of you also found that TDD can be a hassle if you're prototyping something / building a beta version? Does TDD go much more naturally hand-in-hand with something where the specifications are more fixed, or where the developers have more experience in the language and technologies? Or have I done something fundamentally wrong?

Note that I'm not trying to criticise TDD here - just I'm not sure it's always the best fit for all situations.

The short answer is that TDD is very valuable for beta versions, but may be less so for prototyping.

I think it is very important to distinguish between beta versions and prototyping.

A beta version is essentially a production version that is just still in development, so you should definitely use TDD in that scenario.

A prototype/proof of concept is something you build with the express intent of throwing it away once you've gotten the answers out of it that you wanted.

It's true that project managers will tend to push for the prototype to be used as a basis for production code, but it is very important to resist that. If you know that's not possible, treat the prototype code as you would your production code, because you know it is going to become your production code in the future - and that means you should use TDD with it as well.

When you are learning a new technology, most code samples etc. are not written with unit tests in mind, so it can be difficult to translate the new technology to the unit testing mindset. It most definitely feels like a lot of overhead.

In my experience, however, unit testing often really forces you to push the boundaries of the new technology that you are learning. Very often, you need to research and learn all the different hooks the new technology provides, because you need to be able to isolate the technology via DI or the like.

Instead of only following the beaten path, unit testing frequently forces you to learn the technology in much more depth, so what may feel like overhead is actually just a more in-depth prototype - one that is often more valuable, because it covers more ground.

Personally, I think unit testing a new technology is a great learning tool.

The symptoms you seem to experience regarding test maintainability are a bit orthogonal, I think. Your tests may be Overspecified, which is something that can happen just as well when working with known technologies (but I think it is probably easier to fall into this trap when you are also learning a new technology at the same time).

The book xUnit Test Patterns describes the Overspecified Test antipattern and provides a lot of guidance and patterns that can help you write more maintainable tests.

In my search for a unit testing tool for C# I found xUnit.NET. Until now, I have read most of the articles on http://xunit.codeplex.com/ and even tried out the examples given at How do I use xUnit.net?.

But sadly, on the official page I could only find basic information about xUnit.NET. Is there any further information available for it?

Besides the xUnit-1.9.1.chm-File mentioned by Sean U and the Examples on the official xUnit.NET website I found two other resources to help me understand the basics of the work with xUnit.NET:

Sadly, as also pointed out by Sean U, it seems as if there are no books at all about the xUnit.NET framework yet. So, for further information it looks like one has to go with studying the *.chm file and reading general books about unit testing. Or switch to another testing framework - that's what I think I'll do...

Update

Ognyan Dimitrov added some additional resources in his comments:

If you decide to abandon xUnit and use NUnit instead, a good book to read is "The Art of Unit Testing (with examples in .NET)".

Nice clear explanations of both basic and advanced unit testing concepts, using the NUnit framework.

I am a hacker and not a full-time programmer, but am looking to start my own full application development experiment. I apologize if I am missing something easy here. I am looking for recommendations for books, articles, sites, etc. for learning more about test-driven development, specifically compatible with or aimed at Python web application programming. I understand that Python has built-in tools to assist. What would be the best way to learn about these outside of RTFM? I have searched on StackOverflow and found Kent Beck's and David Astels' book on the subject. I have also bookmarked the Wikipedia article as it has many of these types of resources.

Are there any particular ones you would recommend for this language/application?

A little late to the game with this one, but I have been hunting for a Python oriented TDD book, and I just found Python Testing: Beginner's Guide by Daniel Arbuckle. Haven't had a chance to read it yet, but when I do, I'll try to remember to post a follow up here. The reviews on the Amazon page look pretty positive though.

I know that Kent Beck's book (which you mentioned) covers TDD in Python to some pretty good depth. If I remember correctly, the last half of the book takes you through development of a unit test framework in Python. There's nothing specific to web development, though, which is a problem in many TDD resources that I've read. It's a best practice to keep your business logic separate from your presentation in order to make your BL more testable, among other reasons.

Another good book that you might want to look into is xUnit Test Patterns. It doesn't use Python, but it does talk a lot about designing for testability, how to use mocks and stubs (which you'll need for testing web applications), and automating testing. It's more advanced than Beck's book, which makes it a good follow-up.

I've read about unit testing and heard a lot of hullabaloo by others touting its usefulness, and would like to see it in action. As such, I've selected this basic class from a simple application that I created. I have no idea how testing would help me, and am hoping one of you will be able to help me see the benefit of it by pointing out what parts of this code can be tested, and what those tests might look like. So, how would I write unit tests for the following code?

public class Hole : INotifyPropertyChanged
{
    #region Field Definitions
    private double _AbsX;
    private double _AbsY;
    private double _CanvasX { get; set; }
    private double _CanvasY { get; set; }
    private bool _Visible;
    private double _HoleDia = 20;
    private HoleTypes _HoleType;
    private int _HoleNumber;
    private double _StrokeThickness = 1;
    private Brush _StrokeColor = new SolidColorBrush(Colors.Black);
    private HolePattern _ParentPattern;
    #endregion

    public enum HoleTypes { Drilled, Tapped, CounterBored, CounterSunk };
    public Ellipse HoleEntity = new Ellipse();
    public Ellipse HoleDecorator = new Ellipse();
    public TextBlock HoleLabel = new TextBlock();

    private static DoubleCollection HiddenLinePattern = 
               new DoubleCollection(new double[] { 5, 5 });

    public int HoleNumber
    {
        get
         {
            return _HoleNumber;
         }
        set
        {
            _HoleNumber = value;
            HoleLabel.Text = value.ToString();
            NotifyPropertyChanged("HoleNumber");
        }
    }
    public double HoleLabelX { get; set; }
    public double HoleLabelY { get; set; }
    public string AbsXDisplay { get; set; }
    public string AbsYDisplay { get; set; }

    public event PropertyChangedEventHandler PropertyChanged;
    //public event MouseEventHandler MouseActivity;

    // Constructor
    public Hole()
    {
        //_HoleDia = 20.0;
        _Visible = true;
        //this.ParentPattern = WhoIsTheParent;
        HoleEntity.Tag = this;
        HoleEntity.Width = _HoleDia;
        HoleEntity.Height = _HoleDia;

        HoleDecorator.Tag = this;
        HoleDecorator.Width = 0;
        HoleDecorator.Height = 0;


        //HoleLabel.Text = x.ToString();
        HoleLabel.TextAlignment = TextAlignment.Center;
        HoleLabel.Foreground = new SolidColorBrush(Colors.White);
        HoleLabel.FontSize = 12;

        this.StrokeThickness = _StrokeThickness;
        this.StrokeColor = _StrokeColor;
        //HoleEntity.Stroke = Brushes.Black;
        //HoleDecorator.Stroke = HoleEntity.Stroke;
        //HoleDecorator.StrokeThickness = HoleEntity.StrokeThickness;
        //HiddenLinePattern=DoubleCollection(new double[]{5, 5});
    }

    public void NotifyPropertyChanged(String info)
    {
        if (PropertyChanged != null)
        {
            PropertyChanged(this, 
                       new PropertyChangedEventArgs(info));
        }
    }

    #region Properties
    public HolePattern ParentPattern
    {
        get
        {
            return _ParentPattern;
        }
        set
        {
            _ParentPattern = value;
        }
    }

    public bool Visible
    {
        get { return _Visible; }
        set
        {
            _Visible = value;
            HoleEntity.Visibility = value ? 
             Visibility.Visible : 
             Visibility.Collapsed;
            HoleDecorator.Visibility = HoleEntity.Visibility;
            SetCoordDisplayValues();
            NotifyPropertyChanged("Visible");
        }
    }

    public double AbsX
    {
        get { return _AbsX; }
        set
        {
            _AbsX = value;
            SetCoordDisplayValues();
            NotifyPropertyChanged("AbsX");
        }
    }

    public double AbsY
    {
        get { return _AbsY; }
        set
        {
            _AbsY = value;
            SetCoordDisplayValues();
            NotifyPropertyChanged("AbsY");
        }
    }

    private void SetCoordDisplayValues()
    {
        AbsXDisplay = HoleEntity.Visibility == 
        Visibility.Visible ? String.Format("{0:f4}", _AbsX) : "";
        AbsYDisplay = HoleEntity.Visibility == 
        Visibility.Visible ? String.Format("{0:f4}", _AbsY) : "";
        NotifyPropertyChanged("AbsXDisplay");
        NotifyPropertyChanged("AbsYDisplay");
    }

    public double CanvasX
    {
        get { return _CanvasX; }
        set
        {
            if (value == _CanvasX) { return; }
            _CanvasX = value;
            UpdateEntities();
            NotifyPropertyChanged("CanvasX");
        }
    }

    public double CanvasY
    {
        get { return _CanvasY; }
        set
        {
            if (value == _CanvasY) { return; }
            _CanvasY = value;
            UpdateEntities();
            NotifyPropertyChanged("CanvasY");
        }
    }

    public HoleTypes HoleType
    {
        get { return _HoleType; }
        set
        {
            if (value != _HoleType)
            {
                _HoleType = value;
                UpdateHoleType();
                NotifyPropertyChanged("HoleType");
            }
        }
    }

    public double HoleDia
    {
        get { return _HoleDia; }
        set
        {
            if (value != _HoleDia)
            {
                _HoleDia = value;
                HoleEntity.Width = value;
                HoleEntity.Height = value;
                UpdateHoleType(); 
                NotifyPropertyChanged("HoleDia");
            }
        }
    }

    public double StrokeThickness
    {
        get { return _StrokeThickness; }
        //Setting this StrokeThickness will also set Decorator
        set
        {
            _StrokeThickness = value;
            this.HoleEntity.StrokeThickness = value;
            this.HoleDecorator.StrokeThickness = value;
            NotifyPropertyChanged("StrokeThickness");
        }
    }

    public Brush StrokeColor
    {
        get { return _StrokeColor; }
        //Setting this StrokeThickness will also set Decorator
        set
        {
            _StrokeColor = value;
            this.HoleEntity.Stroke = value;
            this.HoleDecorator.Stroke = value;
            NotifyPropertyChanged("StrokeColor");
        }
    }

    #endregion

    #region Methods

    private void UpdateEntities()
    {
        //-- Update Margins for graph positioning
        HoleEntity.Margin = new Thickness
        (CanvasX - HoleDia / 2, CanvasY - HoleDia / 2, 0, 0);
        HoleDecorator.Margin = new Thickness
        (CanvasX - HoleDecorator.Width / 2, 
         CanvasY - HoleDecorator.Width / 2, 0, 0);
        HoleLabel.Margin = new Thickness
        ((CanvasX * 1.0) - HoleLabel.FontSize * .3, 
         (CanvasY * 1.0) - HoleLabel.FontSize * .6, 0, 0);
    }

    private void UpdateHoleType()
    {
        switch (this.HoleType)
        {
            case HoleTypes.Drilled: //Drilled only
                HoleDecorator.Visibility = Visibility.Collapsed;
                break;
            case HoleTypes.Tapped: // Drilled & Tapped
                HoleDecorator.Visibility = (this.Visible == true) ? 
                Visibility.Visible : Visibility.Collapsed;
                HoleDecorator.Width = HoleEntity.Width * 1.2;
                HoleDecorator.Height = HoleDecorator.Width;
                HoleDecorator.StrokeDashArray = 
                LinePatterns.HiddenLinePattern(1);
                break;
            case HoleTypes.CounterBored: // Drilled & CounterBored
                HoleDecorator.Visibility = (this.Visible == true) ? 
                Visibility.Visible : Visibility.Collapsed;
                HoleDecorator.Width = HoleEntity.Width * 1.5;
                HoleDecorator.Height = HoleDecorator.Width;
                HoleDecorator.StrokeDashArray = null;
                break;
            case HoleTypes.CounterSunk: // Drilled & CounterSunk
                HoleDecorator.Visibility = (this.Visible == true) ? 
                Visibility.Visible : Visibility.Collapsed;
                HoleDecorator.Width = HoleEntity.Width * 1.8;
                HoleDecorator.Height = HoleDecorator.Width;
                HoleDecorator.StrokeDashArray = null;
                break;
        }
        UpdateEntities();
    }

    #endregion

}

We must test it, right?

Tests are validation that the code works as you expect it to work. Writing tests for this class right now will not yield you any real benefit (unless you uncover a bug while writing the tests). The real benefit comes when you have to go back and modify this class. You may be using this class in several different places in your application. Without tests, changes to the class may have unforeseen repercussions. With tests, you can change the class and be confident that you aren't breaking something else if all of your tests pass. Of course, the tests need to be well written and cover all of the class's functionality.

So, how to test it?

At the class level, you will need to write unit tests. There are several unit testing frameworks. I prefer NUnit.

What am I testing for?

You are testing that everything behaves as you expect it to behave. If you give a method X, then you expect Y to be returned. In Gord's answer, he suggested testing that your event actually fires off. This would be a good test.
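
For example, a rough sketch of such a test (NUnit syntax; it assumes the Hole class above can be constructed in the test runner's context, which for WPF types may require an STA test thread):

[Test]
public void SettingHoleDia_RaisesPropertyChangedAndResizesEntity()
{
    var hole = new Hole();
    var raised = new List<string>();
    hole.PropertyChanged += (sender, e) => raised.Add(e.PropertyName);

    hole.HoleDia = 42.0;

    // The event fired for the property we changed...
    Assert.Contains("HoleDia", raised);
    // ...and the visual entity was resized accordingly.
    Assert.AreEqual(42.0, hole.HoleEntity.Width);
}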

The book, Agile Principles, Patterns, and Practices in C# by Uncle Bob has really helped me understand what and how to test.

Tests will help, if you need to make changes.

According to Feathers (Feathers, Working Effectively with Legacy Code, p. 3) there are four reasons for changes:

  • Adding a feature
  • Fixing a bug
  • Improving design
  • Optimizing resource usage

When there is the need for change, you want to be confident that you don't break anything. To be more precise: You don't want to break any behavior (Hunt, Thomas, Pragmatic Unit Testing in C# with NUnit, p. 31).

With unit testing in place you can do changes with much more confidence, because they would (provided they are programmed properly) capture changes in behavior. That's the benefit of unit tests.

It would be difficult to write unit tests for the class you gave as an example, because unit tests also require a certain structure in the code under test. One reason I see is that the class is doing too much. Any unit tests you apply to that class will be quite brittle. A minor change may make your unit tests blow up, and you will end up wasting much time fixing problems in your test code instead of your production code.

Reaping the benefits of unit tests requires changing the production code. Just applying unit testing principles without considering this will not give you the positive unit testing experience.

How do you get the positive unit testing experience? Be open-minded about it and learn.

I would recommend Working Effectively with Legacy Code for an existing code base (such as the piece of code you gave above). For an easy kick start into unit testing, try Pragmatic Unit Testing in C# with NUnit. The real eye opener for me was xUnit Test Patterns: Refactoring Test Code.

Good luck on your journey!

Although there are plenty of resources, even here on SO, only two of the terms are compared to each other in these Q/A.

So, in short, what is each one of them? And how do they all relate to each other? Or don't they at all?

The difference between a mock and a stub is very simple - a mock can make your test fail, while a stub can't. That's all there is to it. Additionally, you can think of a stub as something that provides values. Nowadays, fake is just a generic term for both of them (more on that later).

Example

Let's consider a case where you have to build a service that sends packages via a communication protocol (exact details are irrelevant). You simply supply the service with a package code and it does the rest. Given the snippet below, can you identify which dependency would be a stub and which a mock in a potential unit test?

public class DistributionService
{
    // packageService, packageBuilder and packageDistributor are dependencies
    // injected into the class (field declarations and constructor omitted).
    public void SendPackage(string packageCode)
    {
        var contents = this.packageService.GetPackageContents(packageCode);
        if (contents == null)
        {
            throw new InvalidOperationException(
                "Attempt to send non-existing package");
        }

        var package = this.packageBuilder.Build(contents);
        this.packageDistributor.Send(package);
    }
}

It's fairly easy to tell that packageBuilder simply provides a value and there's no possible way it could make any test fail. That's a stub. Even though it might seem more blurry, packageService is a stub too. It provides a value (what we do with the value is irrelevant from the stub's point of view). Of course, later we'll use that value to test whether an exception is thrown, but it's still all within our control (as in, we tell the stub exactly what to do and forget about it - it should have no further influence on the test).

Things are different with packageDistributor. Even if it provides a value, that value is not consumed. Yet the call to Send seems to be a pretty important part of our implementation, and we'll most likely want to verify that it is called.

At this point we should come to the conclusion that packageDistributor is a mock. We'll have a dedicated unit test asserting that the Send method was called, and if for some reason it wasn't - we want to know that, as it's an important part of the entire process. The other dependencies are stubs, as all they do is provide values to other, perhaps more relevant, pieces of code.
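
A rough sketch of how that test could look with a mocking framework such as Moq (the interface names and the DistributionService constructor are assumptions based on the snippet above):

[Test]
public void SendPackage_SendsTheBuiltPackage()
{
    // Stubs: they only provide values and cannot fail the test.
    var packageService = new Mock<IPackageService>();
    packageService.Setup(s => s.GetPackageContents("ABC")).Returns("contents");

    var packageBuilder = new Mock<IPackageBuilder>();
    packageBuilder.Setup(b => b.Build("contents")).Returns("built package");

    // Mock: we will verify the interaction with it.
    var packageDistributor = new Mock<IPackageDistributor>();

    var sut = new DistributionService(
        packageService.Object, packageBuilder.Object, packageDistributor.Object);

    sut.SendPackage("ABC");

    // This verification is what can make the test fail.
    packageDistributor.Verify(d => d.Send("built package"), Times.Once());
}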

Quick glance at TDD

A stub being a stub, it could just as well be replaced with a constant value in a naive implementation:

var contents = "Important package";
var package = "<package>Important package</package>";
this.packageDistributor.Send(package);

This is essentially what mocking frameworks do with stubs - instruct them to return a configurable/explicit value. Old-school, hand-rolled stubs often do just that - return a constant value.

Obviously, such code doesn't make much sense, but anyone who has ever done TDD has surely seen a bunch of such naive implementations at the early stage of class development. The iterative development that results from TDD will often help identify the roles of your class's dependencies.

Stubs, mocks and fakes nowadays

At the beginning of this post I mentioned that fake is just a generic term. Given that a mock can also serve as a stub (especially where modern mocking frameworks are concerned), to avoid confusion it's a good idea to call such an object a fake. Nowadays, you can see this trend growing - the original mock-stub distinction is slowly becoming a thing of the past and more universal names are used. For example:

  • FakeItEasy uses fake
  • NSubstitute uses substitute
  • Moq uses mock (the name is old, but no visible distinction is made as to whether it's a stub or a mock)

I've been working on an ASP.NET MVC project for about 8 months now. For the most part I've been using TDD, some aspects were covered by unit tests only after I had written the actual code. In total the project pretty has good test coverage.

I'm quite pleased with the results so far. Refactoring really is much easier and my tests have helped me uncover quite a few bugs even before I ran my software the first time. Also, I have developed more sophisticated fakes and helpers to help me minimize the testing code.

However, what I don't really like is the fact that I frequently find myself having to update existing unit tests to account for refactorings I made to the software. Refactoring the software is now quick and painless, but refactoring my unit tests is quite boring and tedious. In fact the cost of maintaining my unit tests is higher than the cost of writing them in the first place.

I am wondering whether I might be doing something wrong or if this relation of cost of test development vs. test maintenance is normal. I've already tried to write as many tests as possible so that these cover my user stories instead of systematically covering my object's interface as suggested in this blog article.

Also, do you have any further tips on how to write TDD tests so that refactoring breaks as few tests as possible?

Edit: As Henning and tvanfosson correctly remarked, it's usually the setup part that is most expensive to write and maintain. Broken tests are (in my experience) usually a result of a refactoring to the domain model that is not compatible with the setup part of those tests.

This is a well-known problem that can be addressed by writing tests according to best practices. These practices are described in the excellent xUnit Test Patterns. The book describes test smells that lead to unmaintainable tests, and provides guidance on how to write maintainable unit tests.

After having followed those patterns for a long time, I wrote AutoFixture, an open source library that encapsulates a lot of those core patterns.

It works as a Test Data Builder, but can also be wired up to work as an Auto-Mocking container and do many other strange and wonderful things.

It helps a lot with regard to maintenance because it raises the abstraction level of writing a test considerably. Tests become a lot more declarative because you can state that you want an instance of a certain type instead of explicitly writing how it is created.

Imagine that you have a class with this constructor signature:

public MyClass(Foo foo, Bar bar, Sgryt sgryt)

As long as AutoFixture can resolve all the constructor arguments, you can simply create a new instance like this:

var sut = fixture.CreateAnonymous<MyClass>();

The major benefit is that if you decide to refactor the MyClass constructor, no tests break because AutoFixture will figure it out for you.

That's just a glimpse of what AutoFixture can do. It's a stand-alone library, so it will work with your unit testing framework of choice.

I'm looking into using parallel unit tests for our project(s) and was wondering about any best practices for actually writing such parallel unit tests.

If by parallel unit tests you mean tests that can run concurrently, the most important advice I can give you is to avoid so-called Shared Fixtures.

The book xUnit Test Patterns describes the term Fixture, which can basically be described as the entire context in which each test case executes, including persistent and transient data.

A Shared Fixture indicates that test cases share some context while running. If that context is mutable, race conditions may occur.

Keeping a Shared Fixture immutable (a so-called Immutable Shared Fixture) will allow you to run tests in parallel, but even better, so-called Fresh Fixtures (where each test case has its own Fixture) are thread-safe by definition, since only the test case itself has access to the Fixture.

Examples of Shared Fixtures include any sort of test that uses a shared database, but also tests where you have static in-memory state in either the System Under Test (SUT) or the tests themselves, so you need to avoid that.
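
A small sketch of the difference (NUnit syntax; the customer list is purely illustrative):

// Shared Fixture: static, mutable state visible to every test case.
// Tests running in parallel race on this list.
private static readonly List<string> SharedCustomers = new List<string>();

[Test]
public void AddCustomer_UsingSharedFixture_IsNotParallelSafe()
{
    SharedCustomers.Add("Alice");
    Assert.AreEqual(1, SharedCustomers.Count); // may fail if another test also added
}

// Fresh Fixture: each test case builds its own context, so it is
// thread-safe by definition.
[Test]
public void AddCustomer_UsingFreshFixture_IsParallelSafe()
{
    var customers = new List<string>();
    customers.Add("Alice");
    Assert.AreEqual(1, customers.Count);
}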

You should also keep in mind that if your SUT accesses shared (static) data, that access itself must be thread-safe.

This answer to a question about C++ unit test frameworks suggests a possibility that had not occurred to me before: using C++/CLI and NUnit to create unit tests for native C++ code.

We use NUnit for our C# tests, so the possibility of using it for C++ as well seems enticing.

I've never used managed C++, so my concern is are there any practical limitations to this approach? Are many of you doing this? If so, what was your experience like?

The biggest concern is the learning curve of the C++/CLI language (formerly Managed C++) itself, if the tests need to be understood or maintained by non-C++ developers.

It takes a minimum of 1-2 years of C++ OOP experience to be able to make contributions to a C++/CLI/NUnit test project and to solve the various issues that arise at the managed-native code interfaces. (By contribution, I mean being able to work standalone and to make mock objects, implement and consume native interfaces in C++/CLI, etc. to meet all testing needs.)

Some people may just never grasp C++/CLI well enough to be able to contribute.

For certain types of native software libraries with very demanding test needs, C++/CLI/NUnit is the only combination that will meet all of the unit testing needs while keeping the test code agile and able to respond to changes. I recommend the book xUnit Test Patterns: Refactoring Test Code to go along this direction.

I want to learn how to build "robust" software that is designed to test itself. In other words, how do I implement automated tests in my software (using Java or Groovy or C++)?

So I want to know where to learn this (books or websites) and which tools and libraries I will need for this?

I found The Art of Unit Testing by Roy Osherove to be very helpful in understanding the basics of unit testing, integration testing, TDD and so on. It's a bit tailored to .NET languages, but it also provides very good information on the ideas behind automated testing.

Look at the xUnit testing frameworks (cppUnit for C++, JUnit for Java) and check out the wonderful book xUnit Test Patterns: Refactoring Test Code.

And if you really want to get into it, check out test-driven development. A good introduction is Uncle Bob's The Three Laws of TDD and the bowling game kata (see also bowling game episode). A great book on the subject is Test Driven Development: By Example.

Say I'm trying to test a simple Set class

public class IntSet : IEnumerable<int>
{
    public void Add(int i) {...}
    // IEnumerable<int> implementation...
}

And suppose I'm trying to test that no duplicate values can exist in the set. My first option is to insert some sample data into the set, and test for duplicates using my knowledge of the data I used, for example:

    //OPTION 1
    void InsertDuplicateValues_OnlyOneInstancePerValueShouldBeInTheSet()
    {
        var set = new IntSet();

        //3 will be added 3 times
        var values = new List<int> {1, 2, 3, 3, 3, 4, 5};
        foreach (int i in values)
            set.Add(i);

        //I know 3 is the only candidate to appear multiple times
        int counter = 0;
        foreach (int i in set)
            if (i == 3) counter++;

        Assert.AreEqual(1, counter);
    }

My second option is to test for my condition generically:

    //OPTION 2
    void InsertDuplicateValues_OnlyOneInstancePerValueShouldBeInTheSet()
    {
        var set = new IntSet();

        //The following could even be a list of random numbers with a duplicate
        var values = new List<int> { 1, 2, 3, 3, 3, 4, 5};
        foreach (int i in values)
            set.Add(i);

        //I am not using my prior knowledge of the sample data 
        //the following line would work for any data
        CollectionAssert.AreEquivalent(new HashSet<int>(values), set);
    } 

Of course, in this example, I conveniently have a set implementation to check against, as well as code to compare collections (CollectionAssert). But what if I didn't have either? This code would definitely be more complicated than that of the previous option! And this is the situation when you are testing your real-life custom business logic.

Granted, testing for expected conditions generically covers more cases - but it becomes very similar to implementing the logic again (which is both tedious and useless - you can't use the same code to check itself!). Basically I'm asking whether my tests should look like "insert 1, 2, 3 then check something about 3" or "insert 1, 2, 3 and check for something in general"

EDIT - To help me understand, please state in your answer if you prefer OPTION 1 or OPTION 2 (or neither, or that it depends on the case, etc). Just to clarify, it's pretty clear that in this case (IntSet), option 2 is better in all aspects. However, my question pertains to the cases where you don't have an alternative implementation to check against, so the code in option 2 would be definitely more complicated than option 1.

According to xUnit Test Patterns, it's usually more favorable to test the state of the system under test. If you want to test its behavior and the way in which the algorithm operates, you can use Mock Object Testing.

That being said, both of your tests are known as Data Driven Tests. What is usually acceptable is to use as much knowledge as the API provides. Remember, those tests also serve as documentation for your software. Therefore it's critical to keep them as simple as possible - whatever that means for your specific case.

We're developing a data heavy modular web application stack with java but have little expert knowledge concerning tests. What we currently do is using JUnit to run a mixture of unit tests and functional tests. I described the problem in more detail here.

Now we decided to set up standards early on, on how to test our modules and applications so we'll need to read up on the principles and best practices of testing in general and in the java spring environment in particular.

What I'd like to have covered are the definitions, use cases and reasoning behind the different kinds of tests from unit test to scenario test, or as google calls them: small, medium, large tests. Like I said, we're developing a data heavy web application.

I'd like to be able to deduce from the book how necessary and how useful each testing stage is on which corresponding level of our application (core, database access, security module, entity managers, web-model, web-controller, web-view)

It would be nice if the examples in the book were directly applicable to our application stack. We're using Spring, JPA (Hibernate), JSF, and Spring Security. For testing, so far we're using basic JUnit and some PowerMock. So JBoss, Seam or Java Enterprise books are not so useful.

If there are great articles on the web that paint a clear picture and really do help, feel free to share those as well (I can use google, SO and wiki myself, so please only articles that you actually read and deem very helpful), but a book would be nice so I can read up from the basics and don't have to piece it all together from various articles and questions.

Thanks!

Edit - books we ordered

Just started reading Growing Object-Oriented Software, Guided by Tests and I already like it a lot. Not for the total beginner, but it shows how to develop test-driven with agile techniques. Really cleans up old ways of thinking about software development.

We also ordered xUnit Test Patterns: Refactoring Test Code to get an idea of how to best unit test the different areas in our application stack. Got this recommendation twice, so I'm hopeful it will be helpful.

Take a look at the resources under this answer. I highly recommend the first three books, the first two directly address your "levels" questions. The first is more focused on specific tools, while the second is more conceptual.

Book recommendation: JUnit Recipes - Practical Methods for Programmer Testing

Tools: JUnit + Hamcrest + Mockito

And since you're using Spring, check out spring-test - it offers some great facilities. See the Spring Testing documentation.

Suppose you have a method:

public void Save(Entity data)
{
    this.repositoryIocInstance.EntitySave(data);
}

Would you write a unit test at all?

public void TestSave()
{
    // arrange
    Mock<EntityRepository> repo = new Mock<EntityRepository>();
    repo.Setup(m => m.EntitySave(It.IsAny<Entity>()));

    // act
    MyClass c = new MyClass(repo.Object);
    c.Save(new Entity());

    // assert
    repo.Verify(m => m.EntitySave(It.IsAny<Entity>()), Times.Once());
}

Because later on if you do change method's implementation to do more "complex" stuff like:

public void Save(Entity data)
{
    if (this.repositoryIocInstance.Exists(data))
    {
        this.repositoryIocInstance.Update(data);
    }
    else
    {
        this.repositoryIocInstance.Create(data);
    }
}

...your unit test would fail but it probably wouldn't break your application...

Question

Should I even bother creating unit tests for methods that don't have any return types or don't change anything outside of an internal mock?

The short answer to your question is: Yes, you should definitely test methods like that.

I assume that it is important that the Save method actually saves the data. If you don't write a unit test for this, then how do you know?

Someone else may come along and remove that line of code that invokes the EntitySave method, and none of the unit tests will fail. Later on, you are wondering why items are never persisted...

In your method, you could say that anyone deleting that line would only be doing so if they have malign intentions, but the thing is: Simple things don't necessarily stay simple, and you better write the unit tests before things get complicated.

It is not an implementation detail that the Save method invokes EntitySave on the Repository - it is part of the expected behavior, and a pretty crucial part, if I may say so. You want to make sure that data is actually being saved.

Just because a method does not return a value doesn't mean that it isn't worth testing. In general, if you observe good Command/Query Separation (CQS), any void method should be expected to change the state of something.

Sometimes that something is the class itself, but other times, it may be the state of something else. In this case, it changes the state of the Repository, and that is what you should be testing.

This is called testing Indirect Outputs, instead of the more usual Direct Outputs (return values).

The trick is to write unit tests so that they don't break too often. When using Mocks, it is easy to accidentally write Overspecified Tests, which is why most Dynamic Mocks (like Moq) defaults to Stub mode, where it doesn't really matter how many times you invoke a given method.

All this, and much more, is explained in the excellent xUnit Test Patterns.

Code evolves, and as it does, it also decays if not pruned, a bit like a garden in that respect. Pruning mean refactoring to make it fulfill its evolving purpose.

Refactoring is much safer if we have a good unit test coverage. Test-driven development forces us to write the test code first, before the production code. Hence, we can't test the implementation, because there isn't any. This makes it much easier to refactor the production code.

The TDD cycle is something like this: write a test, test fails, write production code until the test succeeds, refactor the code.

But from what I've seen, people refactor the production code, but not the test code. As test code decays, the production code will go stale and then everything goes downhill. Therefore, I think it is necessary to refactor test code.

Here's the problem: How do you ensure that you don't break the test code when you refactor it?

(I've done one approach, https://thecomsci.wordpress.com/2011/12/19/double-dabble/, but I think there might be a better way.)

Apparently there's a book, http://www.amazon.com/dp/0131495054, which I haven't read yet.

There's also a Wiki page about this, http://c2.com/cgi/wiki?RefactoringTestCode, which doesn't have a solution.

Refactoring your tests is a two step process. Simply stated: First you must use your application under test to ensure that the tests pass while refactoring. Then, after your refactored tests are green, you must ensure that they will fail. However to do this properly requires some specific steps.

In order to properly test your refactored tests, you must change the application under test to cause the test to fail. Only that test condition should fail. That way you can ensure that the test is failing properly in addition to passing. You should strive for a single test failure, but that will not be possible in some cases (i.e. not unit tests). However if you are refactoring correctly there will be a single failure in the refactored tests, and the other failures will exist in tests not related to the current refactoring. Understanding your codebase is required to properly identify cascading failures of this type and failures of this type only apply to tests other than unit tests.

I am on a team where I am trying to convince my teammates to adopt TDD (as I have seen it work in my previous team, and the setup is similar). Also, my personal belief is that, at least in the beginning, it really helps if both TDD and Pair Programming are done in conjunction. That way, two developers inexperienced in TDD can help each other, discuss what kind of tests to write and make good headway.

My manager, on the other hand, feels that if we introduce two new development practices in the team at once, there is a good chance that both might fail. So, he wants to be a little more conservative and introduce any one.

How do I convince him that both of these are complementary and not orthogonal? Or am I wrong?

I think I understand the definition of state-based / interaction-based testing (I've read the Fowler piece, etc.). I found that I started out state-based but have been doing more interaction-based testing, and I'm getting a bit confused on how to test certain things.

I have a controller in MVC and an action calls a service to deny a package:

public ActionResult Deny(int id)
{
    service.DenyPackage(id);

    return RedirectToAction("List");
}

This seems clear to me. Provide a mock service, verify it was called correctly, done.

Now, I have an action for a view that lets the user associate a certificate with a package:

public ActionResult Upload(int id)
{
    var package = packageRepository.GetPackage(id);
    var certificates = certificateRepository.GetAllCertificates();

    var view = new PackageUploadViewModel(package, certificates);

    return View(view);
}

This one I'm a bit stumped on. I'm doing Spec-style tests (possibly incorrectly), so to test this method I have a class and then two tests: verify the package repository was called, and verify the certificate repository was called. I actually want a third test to verify that the constructor was called, but have no idea how to do that! I'm getting the impression this is completely wrong.

So for state based testing I would pass in the id and then test the ActionResult's view. Okay, that makes sense. But wouldn't I have a test on the PackageUploadViewModel constructor? So if I have a test on the constructor, then part of me would just want to verify that I call the constructor and that the action return matches what the constructor returns.

Now, another option I can think of is I have a PackageUploadViewModelBuilder (or something equally dumbly named) that has dependency on the two repositories and then I just pass the id to a CreateViewModel method or something. I could then mock this object, verify everything, and be happy. But ... well ... it seems extravagant. I'm making something simple ... not simple. Plus, controller.action(id) returning builder.create(id) seems like adding a layer for no reason (the controller is responsible for building view models.. right?)

I dunno... I'm thinking more state based testing is necessary, but I'm afraid if I start testing return values then if Method A can get called in 8 different contexts I'm going to have a test explosion with a lot of repetition. I had been using interaction based testing to pass some of those contexts to Method B so that all I have to do is verify Method A called Method B and I have Method B tested so Method A can just trust that those contexts are handled. So interaction based testing is building this hierarchy of tests but state based testing is going to flatten it out some.

I have no idea if that made any sense.

Wow, this is long ...

I think Roy Osherove recently tweeted that, as a rule of thumb, your tests should be 95 percent state-based and 5 percent interaction-based. I agree.

What matters most is that your API does what you want it to, and that is what you need to test. If you test the mechanics of how it achieves what it needs to do, you are very likely to end up with Overspecified Tests, which will bite you when it comes to maintainability.

In most cases, you can design your API so that state-based testing is the natural choice, because that is just so much easier.

To examine your Upload example: Does it matter that GetPackage and GetAllCertificates were called? Is that really the expected outcome of the Upload method?

I would guess not. My guess is that the purpose of the Upload method - its very reason for existing - is to populate and serve the correct View.

So state-based testing would examine the returned ViewResult and its ViewModel and verify that it has all the correct values.

Sure, as the code stands right now, you will need to provide Test Doubles for packageRepository and certificateRepository, because otherwise exceptions will be thrown, but it doesn't look like it is important in itself that the repository methods are being called.

If you use Stubs instead of Mocks for your repositories, your tests are no longer tied to internal implementation details. If you later on decide to change the implementation of the Upload method to use cached instances of packages (or whatever), the Stub will not be called, but that's okay because it's not important anyway - what is important is that the returned View contains the expected data.

This is much preferable to having the test break even if all the returned data is as it should be.
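
To make that concrete, here is a hedged sketch of such a state-based test. It assumes constructor injection of the two repositories, and that PackageUploadViewModel exposes the package and certificates it was constructed with - the interface names and view model properties are invented for illustration:

[Test]
public void Upload_ReturnsViewModelPopulatedFromRepositories()
{
    // Stubs, not Mocks: we never verify calls against them
    var package = new Package();
    var certificates = new List<Certificate>();

    var packageRepositoryStub = new Mock<IPackageRepository>();
    packageRepositoryStub.Setup(r => r.GetPackage(42)).Returns(package);
    var certificateRepositoryStub = new Mock<ICertificateRepository>();
    certificateRepositoryStub.Setup(r => r.GetAllCertificates()).Returns(certificates);

    var sut = new PackageController(
        packageRepositoryStub.Object, certificateRepositoryStub.Object);

    var result = (ViewResult)sut.Upload(42);

    // Only the state of the returned View matters
    var model = (PackageUploadViewModel)result.ViewData.Model;
    Assert.AreSame(package, model.Package);
    Assert.AreSame(certificates, model.Certificates);
}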

Interestingly, your Deny example looks like a prime example where interaction-based testing is still warranted, because it is only by examining Indirect Outputs that you can verify that the method performed the correct action (the DenyPackage method returns void).
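
A minimal sketch of that interaction-based test (assuming, purely for illustration, a controller constructor that takes only the service, with the IPackageService name made up):

[Test]
public void Deny_AsksTheServiceToDenyThePackage()
{
    var serviceMock = new Mock<IPackageService>();
    var sut = new PackageController(serviceMock.Object);

    var result = (RedirectToRouteResult)sut.Deny(42);

    // The Indirect Output: the package was denied through the service
    serviceMock.Verify(s => s.DenyPackage(42));
    Assert.AreEqual("List", result.RouteValues["action"]);
}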

All this, and more, is explained very well in the excellent book xUnit Test Patterns.

Jimmy Bogard, wrote an article: Getting value out of your unit tests, where he gives four rules:

  • Test names should describe the what and the why, from the user’s perspective
  • Tests are code too, give them some love
  • Don’t settle on one fixture pattern/organizational style
  • One Setup, Execute and Verify per Test

In your opinion, are these guidelines complete? What are your guidelines for unit tests? Please avoid language-specific idioms and try to keep answers language-agnostic.

There's an entire, 850-page book called xUnit Test Patterns that deals with this topic, so it's not something that can easily be boiled down to a few hard rules (although the rules you mention are good).

A more digestible book that also covers this subject is The Art of Unit Testing.

If I may add the rules I find most important, they would be:

  • Use Test-Driven Development. It's by far the most effective road towards good unit tests. Trying to retrofit unit tests onto existing code tends to be difficult at best.
  • Keep it simple: Ideally, a unit test should be less than 10 lines of code. If it grows to much more than 20 lines of code, you should seriously consider refactoring either the test code, or the API you are testing.
  • Keep it fast. Unit test suites are meant to be executed very frequently, so aim at keeping the entire suite under 10 s. That can easily mean keeping each test under 10 ms.

Before we start: I know a fair few people consider tests that hit the database not to be "unit tests". Maybe "integration tests" would be a better name. Either way, these are developer tests that hit the database.

To enable this testing I have a local developer database which I clear and then populate with a known set of data at the start of each test using dbUnit. This all works well enough until a table used by the test changes in some way and I have to manually update all the XML datasets, which is a pain. I figure other people must have hit the same problem and hopefully found a nice, neat solution to it. So for tests that require populating a database, what do you use, and how do you handle table definitions changing? (While I use Java I am open to solutions utilizing different technologies.)

EDIT: To clarify a little. I have a contrived test like:

void testLoadRevision() {
    database.clear(); // Clears every table dbUnit knows about.
    database.load("load/trevision.xml", "load/tissue.xml");
    SomeDatabaseThingie subject = new SomeDatabaseThingie(databaseProvider);
    Revision actual = subject.load();
    assertEquals(expected, actual);
}

In that I have two tables - tRevision and tIssue. A loaded revision uses a small amount of data from tIssue. Later on, tIssue acquires a new field that revisions do not care about. As the new field is "not null" and has no sensible default, this test will fail because tIssue.xml is now invalid.

With small changes like this it is not too hard to edit the tIssue. But when the number of XML files starts to balloon with each flow it becomes a large amount of work.

Cheers,
    mlk

I think the answer to this question comes in two phases:

There is only one authoritative definition of the schema

There should be only one definition of what the database looks like. In normal cases, I prefer to have a SQL DDL script that specifies the schema for the database.

The unit tests should use the same authoritative definition of the database schema as the application uses, and it should create the database based on that definition before the test run and remove it completely again after the test run.

That said, tooling may come out of sync with the schema, and you will manually need to update the tool-generated stuff. For example, I use the Entity Framework for .NET that auto-generates classes based on the database schema. When I change the schema, I need to manually tell my tool to update these classes. It's a pain, but I'm not aware of any way out of that, unless the tooling supports automation.

Each test should start with empty data

Each test should start with the database without any data. Every test should populate only the data it needs to execute the test, and when it is done, it should clean out the database again.

What you are currently doing sounds like an anti-pattern called General Fixture, where you try to pre-load a set of data that represents as broad a set of scenarios as possible. However, it makes it very hard to test mutually exclusive conditions, and may also lead to Test Interdependence if you modify this pre-loaded data in some tests.

This is really well explained in the excellent book xUnit Test Patterns.

Do you guys recommend any books, videos or talks on TDD and CI for PHP?

I would recommend "Real-World Solutions for Developing High-Quality PHP Frameworks and Applications" by Sebastian Bergmann (The creator of PHPUnit).

Also "xUnit Test Patterns: Refactoring Test Code" is pretty good, not PHP specific though.

I'm working on a backend for an open source Python ORM. The library includes a set of 450 test cases for each backend, all lumped into one giant test class.

To me, that sounds like a lot for one class, but I've never worked on a project that has 450 test cases (I believe this library has ~2000 test cases not including the test cases for each backend). Am I correct in feeling this is a bit on the high end (given that there's not really any magic number above which you should break something up), or is it just not as big a deal for a test class to have so many tests?

And even if that's not too many test cases, how would one go about refactoring an overly large test class? Most of my knowledge about refactoring is around making sure that tests are in place for the code that's being refactored. I've never had to deal with a situation where it's the tests themselves that need to be refactored.

EDIT: Previously, I had said that these were unit tests, which isn't quite true. These are more appropriately termed integration tests.

450 tests in one class sounds like a lot, but how bad it is depends on how they are organized. If they all are truly independent of each other and the test class' members, it may not be a big deal - other than it must be hard to locate a specific test.

On the other hand, if the test class has members that are used by only some of the tests and ignored by others, it's a Test Smell called Obscure Test containing such Root Causes as General Fixture and Irrelevant Information (please note the jargon - I'll get back to this).

There are several ways of organizing tests into classes. The most common patterns are Testcase Class per Class, Testcase Class per Feature and Testcase Class per Fixture.

How you structure tests is important not only while you are writing the tests, but also afterwards for maintainability reasons. For that reason alone, I'd tend to say that it would be a worthwhile effort refactoring your tests. In TDD, the test code base is (almost) as important as the real code base, and should be treated with the same kind of respect.

There's a whole book about this subject, called xUnit Test Patterns: Refactoring Test Code, that I can't recommend enough. It contains a complete pattern language that deals with unit testing and TDD, and all the pattern names I've used here originate from it.

In a complex VS2008 solution I have three unit test projects. As they operate on the same test database it is important that the test projects are executed one after the other. It is not important which project first, just that one is finished before the other starts.

If I want to execute them all, there are several ways to do that, which lead to different results:

  • I have a test list .vsmdi file where the tests are ordered by project. If I open the list and execute the tests from the test list editor, everything is fine.
  • If I open the Test View window, sort the tests by project and run them, again everything is fine.
  • However if I run the tests by selecting 'Test -> Run -> All Tests in Solution' from the menu, they get executed in random order where some of them fail, as one of the other test projects already manipulated the test db.

So the question is, what determines the unit test sequence when using the third approach? Is there a way to specify a default test list in the .testrunconfig?

As there are workarounds, the issue is not critical at all. But any thoughts are welcome. Thanks.

You are experiencing a unit testing smell called Interacting Tests, described in the excellent xUnit Test Patterns. More specifically, you suffer from a Shared Fixture, which is another way of saying that several of your tests use the same shared database, and that they are dependent on the outcome of previous test runs.

This is an anti-pattern and should be avoided at all cost. The book offers extensive guidance on how to deal with this situation.

The reason I start out by describing this is that this is why MSTest doesn't guarantee the ordering of the tests. Although the order may seem (semi-)deterministic, you can't rely on it. Some other unit testing frameworks (xUnit.NET comes to mind) even go so far as to run tests in a random order each time you run the test suite. MSTest isn't that extreme, but there is no way you can order your unit tests.

That said, if you are running Visual Studio Team Suite or (I believe) Team Test, there's a type of test called an Ordered Test. You can use that to specify that all tests (including unit tests) should execute in a specific order.

I've never really written unit tests before (or tests, for that matter, really). I tend to obsessively run/compile after writing even the smallest bit of code to check for errors. I've been doing a bit of reading up on unit tests lately, and I'm curious how to best go about using/implementing them. My main language as of late has been Python, but I think this is a pretty language agnostic question. Does anyone have some tips (or good reading) on how to do this properly?

Thanks!

Unit testing is one thing; another thing to consider is test-driven development, where the act of writing the tests first affects the design and feel of the finally delivered code - hopefully for the better. I find this helps especially if the problem domain is not fully understood at the start of programming.

Clarke Ching does a good one hour talk about TDD using excel. If you spend an hour reading through this, you should get the idea.

http://www.clarkeching.com/files/tdd_for_managers_and_nonprogrammers_using_excell_and_vba_final.pdf

You know you have arrived with unit testing when xUnit Test Patterns is an enjoyable read. http://www.amazon.co.uk/xUnit-Test-Patterns-Refactoring-Signature/dp/0131495054/ref=sr_1_1?ie=UTF8&qid=1288638075&sr=8-1

That is probably a big ask initially though, and I would suggest something thinner about either refactoring or TDD would be a more gentle introduction to this fascinating subject.

I want to be able to split a big test to smaller tests so that when the smaller tests pass they imply that the big test would also pass (so there is no reason to run the original big test). I want to do this because smaller tests usually take less time, less effort and are less fragile. I would like to know if there are test design patterns or verification tools that can help me to achieve this test splitting in a robust way.

I fear that the connection between the smaller tests and the original test is lost when someone changes something in the set of smaller tests. Another fear is that the set of smaller tests doesn't really cover the big test.

An example of what I am aiming at:

//Class under test
class A {

  public void setB(B b){ this.b = b; }

  public Output process(Input i){
    return b.process(doMyProcessing(i));
  }

  private InputFromA doMyProcessing(Input i){ ..  }

  ..

}

//Another class under test
class B {

   public Output process(InputFromA i){ .. }

  ..

}

//The Big Test
@Test
public void theBigTest(){
 A systemUnderTest = createSystemUnderTest(); // <-- expect that this is expensive

 Input i = createInput();

 Output o = systemUnderTest.process(i); // <-- .. or expect that this is expensive

 assertEquals(o, expectedOutput());
}

//The splitted tests

@PartlyDefines("theBigTest") // <-- so something like this should come from the tool..
@Test
public void smallerTest1(){
  // this method is a bit too long but its just an example..
  Input i = createInput();
  InputFromA x = expectedInputFromA(); // this should be the same in both tests and it should be ensured somehow
  Output expected = expectedOutput();  // this should be the same in both tests and it should be ensured somehow

  B b = mock(B.class);
  when(b.process(x)).thenReturn(expected);

  A classUnderTest = createInstanceOfClassA();
  classUnderTest.setB(b);

  Output o = classUnderTest.process(i);

  assertEquals(o, expected);
  verify(b).process(x);
  verifyNoMoreInteractions(b);
}

@PartlyDefines("theBigTest") // <-- so something like this should come from the tool..
@Test
public void smallerTest2(){
  InputFromA x = expectedInputFromA(); // this should be the same in both tests and it should be ensured somehow
  Output expected = expectedOutput();  // this should be the same in both tests and it should be ensured somehow

  B classUnderTest = createInstanceOfClassB();

  Output o = classUnderTest.process(x);

  assertEquals(o, expected);
}

All I can suggest is the book xUnit Test Patterns. If there is a solution it should be in there.

I am working on an enterprise project where my team is responsible for creating the front end of the application, and another team developing web services has provided the WSDL for all the services that will be available as part of this project. In the development phase our local dev environment will point to one of the development boxes of the team responsible for creating the web services. It is quite possible that their dev environment will be shaky in the middle of an iteration. To mitigate that risk, we are using SOAP UI on our local machines to start a mocked service and do the development against it. Whenever we need a different flavor of response, we modify the local service response XML.

This process is working well, but I was wondering if there is a way that for each service I could create, say, 10 responses and deploy them as a war on Tomcat on one machine, so that my whole development team points to that war, which exposes the same service and, based on a parameter, sends one of the 10 responses bundled in the war. I don't want to spend any effort on this. Is there a tool which provides this kind of functionality out of the box?

It would make your life easier if you split up your internal architecture a bit. Instead of inflexibly letting the client code rely on an external SOAP service, it would be beneficial to define an interface for internal use. You could call this IServiceProxy or some such name.

Let the client code talk to that interface and use Dependency Injection (DI) to inject an instance of it into the client. This means that for a lot of development usage, you can simply replace this interface with a Test Double (e.g. a Mock).
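
As a small sketch of that shape (written in C# for brevity - the same idea applies directly in Java - with all type names invented for illustration):

// Internal abstraction in front of the external SOAP service
public interface IServiceProxy
{
    OrderStatus GetOrderStatus(string orderId);
}

public class OrderStatus
{
    public string OrderId { get; set; }
    public string State { get; set; }
}

// Client code depends only on the interface, injected through the constructor
public class OrderScreenPresenter
{
    private readonly IServiceProxy serviceProxy;

    public OrderScreenPresenter(IServiceProxy serviceProxy)
    {
        this.serviceProxy = serviceProxy;
    }

    public OrderStatus LoadStatus(string orderId)
    {
        return this.serviceProxy.GetOrderStatus(orderId);
    }
}

// A simple canned Test Double used while the real service is unavailable
public class CannedServiceProxy : IServiceProxy
{
    public OrderStatus GetOrderStatus(string orderId)
    {
        return new OrderStatus { OrderId = orderId, State = "Shipped" };
    }
}

Only the composition root needs to know whether the real SOAP-backed implementation or the canned one is wired in.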

If you must also have a SOAP service to verify that your SOAP stack works as intended, watch out for the so-called Shared Fixture test smell. A shared 'test' service on a single server would be a Shared Fixture, and it is likely to give you more trouble than it's worth because developers would be stepping over each other and it would be a bottleneck.

A better alternative is to set up a SOAP service on each developer's machine, or, if that's not possible, a dedicated service for each developer.

You can read more about Shared Fixtures and many other test patterns and anti-patterns in the excellent xUnit Test Patterns.

I don't think I understand testing as well as I should. I have written a lot of tests and gotten decent coverage, but I cannot help feeling it has not been intuitive.

Let me explain: say I have a class where I am testing a method, and it needs to be passed a big object of some sort with all kinds of state. This object in turn contains other objects and their states that I know nothing of. How do I create mock or stub objects for this method and give it data that it can work with? It seems I have to create a big object with all kinds of internal sub-object information just to exercise my method. I'm clearly confused!

The other answers here are pointing you to mocking frameworks, which you should definitely look at if you're not already using (use Mockito!). However, this is almost certainly an instance of your tests telling you that you've got design problems. If you find yourself having to provide all kinds of unrelated information and mock objects just to make a test pass, then you're

  1. trying to test too many pieces at once,
  2. writing a test that will be very difficult to read and understand when you're done with it because it's impossible to determine what the test is supposed to be focused on due to a low signal/noise ratio, and/or
  3. writing an extremely fragile test that will break on the slightest, unrelated change, incurring high maintenance costs and a "just make the test pass" mentality that doesn't care what the test is supposed to be testing.

These are all symptoms of a system not designed for testability, which almost universally equates to a system not designed for readability, meaning it's not designed well.

If you care about testing well, embrace test-first thinking and TDD. I highly recommend a couple of books on the subject: "xUnit Test Patterns", which I've read and reviewed, and "Growing Object-Oriented Software, Guided by Tests", which I'm almost finished reading.

I had to start writing some unit tests, using QualityTools.UnitTestFramework, for a web service layer we have developed, when my approach seemed to be incorrect from the beginning.

It seems that unit tests should be able to run in any order and not rely on other tests.

My initial thought was to have something similar to the following tests (a simplified example), which would run as an ordered test in the same order.

AddObject1SuccessTest
AddObject2WithSameUniqueCodeTest
(relies on first test having created object1 first then expects fail)
AddObject2SuccessTest
UpdateObject2WithSameUniqueCodeTest
(relies on first test having created object1 and third test having created object2 first then expects fail)
UpdateObject2SuccessTest
GetObjectListTest
DeleteObjectsTest
(using added IDs)

However, there is no state between tests and no apparent way of passing say added IDs to the deletetest for example.

So, is it then the case that the correct approach for unit testing complex interactions is by scenario?

For example

AddObjectSuccessTest
(which creates an object, gets it to validate the data and then deletes it)
AddObjectWithSameUniqueCodeTest
(which creates object 1 then attempts to create object 2 with a fail and then deletes object 1)
UpdateObjectWithSameUniqueCodeTest
(which creates object 1 then creates object 2 and then attempts to update object 2 to have the same unique code as object 1 with a fail and then deletes object 1 and object 2)

Am I coming at this wrong?

Thanks

It is a tenet of unit testing that each test case should be independent of any other test case. MSTest (as well as all other unit testing frameworks) enforce this by not guaranteeing the order in which tests are run - some (xUnit.NET) even go so far as to randomize the order between each test run.

It is also a recommended best practice that units are condensed into simple interactions. Although no hard and fast rule can be provided, it's not a unit test if the interaction is too complex. In any case, complex tests are brittle and have a very high maintenance overhead, which is why simple tests are preferred.

It sounds like you have a case of shared state between your tests. This leads to interdependent tests and should be avoided. Instead you can write reusable code that sets up the pre-condition state for each test, ensuring that this state is always correct.
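
As a rough MSTest sketch of what that reusable precondition code could look like (every type, method and exception name below is invented for illustration):

[TestClass]
public class ObjectServiceTests
{
    private ObjectService service;

    [TestInitialize]
    public void CreateFreshService()
    {
        this.service = new ObjectService();
    }

    // Reusable fixture setup: each test creates exactly the state it needs,
    // instead of relying on objects left behind by earlier tests.
    private void CreateObjectWithUniqueCode(string uniqueCode)
    {
        this.service.AddObject(new BusinessObject { UniqueCode = uniqueCode });
    }

    [TestMethod]
    [ExpectedException(typeof(DuplicateUniqueCodeException))]
    public void AddObject_WithDuplicateUniqueCode_Throws()
    {
        CreateObjectWithUniqueCode("ABC");   // precondition, local to this test

        this.service.AddObject(new BusinessObject { UniqueCode = "ABC" });
    }
}

Each test remains self-contained, so the order in which MSTest happens to run them no longer matters.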

Such a pre-condition state is called a Fixture. The book xUnit Test Patterns contains lots of information and guidance on how to manage Fixtures in many different scenarios.

I'm using Moq to create a mock object of HttpResponseBase. I need to be able to test that HttpResponseBase.End() was called in my library. To do this, I specify some text before the call and some text after. Then I check that only the text before the call to End() is present in HttpResponseBase.Output.

The problem is, I can't figure out how to mock HttpResponseBase.End() so that it stops processing, like it does in ASP.NET.

public static HttpResponseBase CreateHttpResponseBase() {
    var mock = new Mock<HttpResponseBase>();
    StringWriter output = new StringWriter();

    mock.SetupProperty(x => x.StatusCode);
    mock.SetupGet(x => x.Output).Returns(output);
    mock.Setup(x => x.End()) /* what do I put here? */;
    mock.Setup(x => x.Write(It.IsAny<string>()))
        .Callback<string>(s => output.Write(s));

    return mock.Object;
}

It is a bit unclear to me what it is you are trying to achieve, but from your description, it sounds like you are attempting to get your Abstraction to behave like a particular implementation. In other words, because HttpResponse.End() has a certain behavior, you want your Mock to have the same behavior?

In general, that is not particularly easy to do with Moq, since it has no concept of ordered expectations (unlike RhinoMocks). There is, however, a feature request for it.

You might be able to use a Callback together with setting up the End method to toggle a flag that determines any further behavior of the Mock, but it's not going to be particularly pretty. I'm thinking about something like this:

bool ended = false;
var mock = new Mock<HttpResponseBase>();
mock.Setup(x => x.End()).Callback(() => ended = true);
// Other setups involving 'ended' and Callbacks

Then have all other Setups provide dual implementations based on whether ended is true or false.

It would be pretty damn ugly, so I'd seriously reconsider my options at this point. There are at least two directions you can take:

  1. Make a Fake implementation of HttpResponseBase instead of using Moq. It sounds like you are expecting such specific behavior of the implementation that a Test Double with embedded logic sounds like a better option. Put shortly, a Fake is a Test Double that can contain semi-complex logic that mimics the intended production implementation (see the sketch after this list). You can read more about Fakes and other Test Doubles in the excellent xUnit Test Patterns book.
  2. Reconsider your initial assumptions. It sounds to me like you are tying your client very closely to a particular behavior of HttpResponseBase, so you may be violating the Liskov Substitution Principle. However, I may be mistaken, as a method called 'End' carries certain connotations beyond the purely semantic, but still, I'd personally consider if a better design was possible.
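
For the first option, a minimal sketch of such a Fake could look like the following - just enough embedded logic to mimic the "no more output after End()" behavior. This is an illustration, not a complete HttpResponseBase implementation (the real End() also aborts request processing, which a test rarely needs):

using System.IO;
using System.Web;

public class FakeHttpResponse : HttpResponseBase
{
    private readonly StringWriter writer = new StringWriter();
    private bool ended;

    public override int StatusCode { get; set; }

    public override TextWriter Output
    {
        get { return this.writer; }
    }

    public override void Write(string s)
    {
        // Mimic the production behavior: nothing is written once End() was called
        if (!this.ended)
        {
            this.writer.Write(s);
        }
    }

    public override void End()
    {
        this.ended = true;
    }
}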

Unit tests are often deployed with software releases to validate the install - i.e. do the install, run the tests and if they pass then the install is good.

I'm about to embark on a project that will involve delivering prototype software library releases to customers. The unit tests will be delivered as part of each release and in addition to using the tests to validate the install, I plan on using the unit tests that test the API as a "contract" for how the release should be used. If the user uses the release in a similar manner to how it is used by the unit tests then great. If they use it some other way then all bets are off.

Has anybody tried this before? Any thoughts on whether this is a good/bad idea?

Edit: To highlight a good point raised by ChrisA and Dan in replies below, the "unit tests that test the API" are better called the integration tests and their intent is to exercise the API and the software to demonstrate the functionality of the software from a customer perspective.

Meszaros calls this "Tests as documentation"

When I'm testing my simple geometric library, I usually have one or two methods in every test class. All I check is whether objects do proper coordinates calculation or not. Is it okay (meaning the number of methods)?

There is no inherent problem with that, nor does it indicate a test smell in itself.

In general, there are several different patterns for arranging test cases in classes. The book xUnit Test Patterns list several of them:

  • Testcase Class per Class
  • Testcase Class per Feature
  • Testcase Class per Fixture

Normally, the most common pattern is Testcase Class per Class, which means that you have a class containing test cases that all target the same System Under Test (SUT). That can sometimes lead to small test classes, but it's not a problem.

Wherever possible I use TDD:

  • I mock out my interfaces
  • I use IoC so my mocked objects can be injected
  • I ensure my tests run and that the coverage increases and I am happy.

then...

  • I create derived classes that actually do stuff, like going to a database, or writing to a message queue etc.

This is where code coverage decreases - and I feel sad.

But then, I liberally spread [CoverageExclude] over these concrete classes and coverage goes up again.

But then instead of feeling sad, I feel dirty. I somehow feel like I'm cheating even though it's not possible to unit-test the concrete classes.

I'm interested in hearing how your projects are organised, i.e. how do you physically arrange code that can be tested against code that can't be tested.

I'm thinking that perhaps a nice solution would be to separate out untestable concrete types into their own assembly and then ban the use of [CoverageExclude] in the assemblies that do contain testable code. This'd also make it easier to create an NDepend rule to fail the build when this attribute is incorrectly found in the testable assemblies.


Edit: the essence of this question touches on the fact that you can test the things that USE your mocked interfaces, but you can't (or shouldn't!) UNIT-test the objects that ARE the real implementations of those interfaces. Here's an example:

public void ApplyPatchAndReboot( )
{ 
    _patcher.ApplyPatch( ) ;
    _rebooter.Reboot( ) ;
}

patcher and rebooter are injected in the constructor:

public SystemUpdater(IApplyPatches patcher, IRebootTheSystem rebooter)...

The unit test looks like:

public void should_reboot_the_system( )
{
    ... new SystemUpdater(mockedPatcher, mockedRebooter);
    update.ApplyPatchAndReboot( );
}

This works fine - my UNIT-TEST coverage is 100%. I now write:

public class ReallyRebootTheSystemForReal : IRebootTheSystem
{
    ... call some API to really (REALLY!) reboot
}

My UNIT-TEST coverage goes down and there's no way to UNIT-TEST the new class. Sure, I'll add a functional test and run it when I've got 20 minutes to spare(!).

So, I suppose my question boils down to the fact that it's nice to have near 100% UNIT-TEST coverage. Said another way, it's nice to be able to unit-test near 100% of the behaviour of the system. In the above example, the BEHAVIOUR of the patcher should reboot the machine. This we can verify for sure. The ReallyRebootTheSystemForReal type isn't strictly just behaviour - it has side effects, which means it can't be unit-tested. Since it can't be unit-tested, it affects the test-coverage percentage. So,

  • Does it matter that these things reduce the unit-test coverage percentage?
  • Should they be segregated into their own assemblies where people expect 0% UNIT-TEST coverage?
  • Should concrete types like this be so small (in Cyclomatic Complexity) that a unit test (or otherwise) is superfluous?

You are on the right track. Some of the concrete implementations you probably can test, such as Data Access Components. Automated testing against a relational database is most certainly possible, but should also be factored out into its own library (with a corresponding unit test library).

Since you are already using Dependency Injection, it should be a piece of cake for you compose such a dependency back into your real application.

On the other hand, there will also be concrete dependencies that are essentially un-testable (or de-testable, as Fowler once joked). Such implementations should be kept as thin as possible. Often, it is possible to design the API that such a Dependency exposes in such a way that all the logic happens in the consumer, and the complexity of the real implementation is very low.

Implementing such concrete Dependencies is an explicit design decision, and when you make that decision, you simultaneously decide that such a library should not be unit tested, and thus code coverage should not be measured.

Such a library is called a Humble Object. It (and many other patterns) are described in the excellent xUnit Test Patterns.

As a rule of thumb I accept that code is untested if it has a Cyclomatic Complexity of 1. In that case, it's more or less purely declarative. Pragmatically, untestable components are in order as long as they have low Cyclomatic Complexity. How low 'low' is you must decide for yourself.

In any case, [CoverageExclude] seems like a smell to me (I didn't even know it existed before I read your question).

A part of my project interacts with iTunes using COM. The goal of the test in question is to verify that my object asks the iTunes API to export all of the album artwork for a collection of tracks to a file.

I have successfully written a test which can prove my code is doing this; however, to accomplish that I have had to stub out a chunk of the iTunes implementation. While this is to be expected in a unit test, I am concerned at the ratio of stub setup code vs. code doing actual testing.

My questions:

  1. Is the fact that there is more stub setup code than acting code indicative of another underlying problem in my code?
  2. There is a lot of setup code and I don't believe repeating it per test is a good idea. What is the best way to refactor this code so that the setup code is separate from, but available to, other tests that need to utilise the stubs?

This seems like the kind of question that might have been asked before, so I apologise in advance if I have created a duplicate.

For reference, here is the complete unit test that I am concerned about

[Fact]
    public void Add_AddTrackCollection_AsksiTunesToExportArtForEachTrackInCollectionToAFile()
    {
        var trackCollection = MockRepository.GenerateStub<IITTrackCollection>(null);
        var track = MockRepository.GenerateStub<IITTrack>(null);
        var artworkCollection = MockRepository.GenerateStub<IITArtworkCollection>(null);
        var artwork = MockRepository.GenerateMock<IITArtwork>(null);
        var artworkCache = new ArtworkCache();
        trackCollection.Stub<IITTrackCollection, int>(collection => {return collection.Count; }).Return(5);
        trackCollection.Stub<IITTrackCollection, IITTrack>(collection => { return trackCollection[0]; }).IgnoreArguments().Return(track);
        track.Stub<IITTrack, IITArtworkCollection>(stub => { return stub.Artwork; }).Return(artworkCollection);
        artworkCollection.Stub<IITArtworkCollection, int>(collection => { return collection.Count; }).Return(1);
        artworkCollection.Stub<IITArtworkCollection, IITArtwork>(collection => { return artworkCollection[0]; }).IgnoreArguments().Return(artwork);
        artwork.Expect<IITArtwork>(stub => { stub.SaveArtworkToFile(null); }).IgnoreArguments().Repeat.Times(trackCollection.Count-1);
        artwork.Replay();
        artworkCache.Add(trackCollection);
        artwork.VerifyAllExpectations();

        //refactor all the iTunes fake-out that isn't specific to this test into its own method and call that from ctor.
    }

To address both of your questions:

  1. In both common unit test patterns (Arrange/Act/Assert, and Four-Phase Test), there will almost always be more Arrange code than Act code, since by definition, Act should only contain a single statement. However, it is still a good idea to attempt to minimize the Arrange code as much as possible.

  2. There is more than one way to refactor Setup code into reusable code. As Alex writes in his answer, many unit testing frameworks support setup methods. This is called Implicit Fixture Setup, and in my opinion it is something that should be avoided, since it does not communicate intent very well. Instead, I prefer explicit setup methods, usually encapsulated into a Fixture Object, as sketched below.
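
As an example of such an explicit Fixture Object, the stub setup from the question could be moved more or less verbatim into a small, reusable class. This is only a sketch: the track count becomes a constructor parameter so each test can state its own Fixture explicitly, and the indexer stub now refers to the lambda parameter rather than the outer variable.

public class ITunesFixture
{
    public IITTrackCollection TrackCollection { get; private set; }
    public IITArtwork Artwork { get; private set; }

    public ITunesFixture(int trackCount)
    {
        var trackCollection = MockRepository.GenerateStub<IITTrackCollection>(null);
        var track = MockRepository.GenerateStub<IITTrack>(null);
        var artworkCollection = MockRepository.GenerateStub<IITArtworkCollection>(null);
        var artwork = MockRepository.GenerateMock<IITArtwork>(null);

        trackCollection.Stub<IITTrackCollection, int>(collection => { return collection.Count; }).Return(trackCount);
        trackCollection.Stub<IITTrackCollection, IITTrack>(collection => { return collection[0]; }).IgnoreArguments().Return(track);
        track.Stub<IITTrack, IITArtworkCollection>(stub => { return stub.Artwork; }).Return(artworkCollection);
        artworkCollection.Stub<IITArtworkCollection, int>(collection => { return collection.Count; }).Return(1);
        artworkCollection.Stub<IITArtworkCollection, IITArtwork>(collection => { return collection[0]; }).IgnoreArguments().Return(artwork);

        this.TrackCollection = trackCollection;
        this.Artwork = artwork;
    }
}

The test itself then only contains what is specific to the test:

[Fact]
public void Add_AddTrackCollection_AsksiTunesToExportArtForEachTrackInCollectionToAFile()
{
    var fixture = new ITunesFixture(5);   // explicit, intention-revealing setup
    var artworkCache = new ArtworkCache();

    fixture.Artwork.Expect<IITArtwork>(stub => { stub.SaveArtworkToFile(null); })
        .IgnoreArguments().Repeat.Times(fixture.TrackCollection.Count - 1);
    fixture.Artwork.Replay();

    artworkCache.Add(fixture.TrackCollection);

    fixture.Artwork.VerifyAllExpectations();
}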

In general, the need for complex Setup code should always cause you to consider if you can model your API differently. This is not always the case, but when it is, you often end up with a better and more concise API than you started out with. That's one of the advantages of TDD.

For the cases where setup is just complex because input is complex, I will recommend AutoFixture, which is a general-purpose Test Data Builder.

Many of the patterns I have used in this answer are described in xUnit Test Patterns, which is an excellent book.

I am completely new to unit test case writing. I am using MVVM Light with WPF. Is it necessary to use some third-party test framework, or is the .NET unit test framework enough? Also, how do I handle a static class in a unit test case? In this case, the AppMessages class.

Can someone please guide me on how to write unit test cases for the following piece of code:

public MyViewModel(Participant participant)
{    
    if (participant != null)
    {
        this.ParentModel = parentModel;
        OkCommand = new RelayCommand(() => OkCommandExecute());
        CalculateAmountCommand = new RelayCommand(() => CalculateAmount());        
    }
    else
    {
        ExceptionLogger.Instance.LogException(Constants.ErrorMessages.FinancialLineCanNotBeNull, "FinancialLineViewModel");
        AppMessages.DisplayDialogMessage.Send(Constants.ErrorMessages.FinancialLineCanNotBeNull, MessageBoxButton.OK, Constants.DefaultCaption, null);
    }
}

public static class AppMessages
{
    enum AppMessageTypes
    {
        FinancialLineViewDisplay,
        FinancialLineViewClose,
        DisplayDialogMessage
    }

    public static class DisplayDialogMessage
    {
        public static void Send(string message, MessageBoxButton button, string caption, System.Action<MessageBoxResult> action)
        {
            DialogMessage dialogMessage = new DialogMessage(message, action)
            {
                Button = button,
                Caption = caption
            };

            Messenger.Default.Send(dialogMessage, AppMessageTypes.DisplayDialogMessage);
        }

        public static void Register(object recipient, System.Action<DialogMessage> action)
        {
            Messenger.Default.Register<DialogMessage>(recipient, AppMessageTypes.DisplayDialogMessage, action);
        }
    }
}

public class ExceptionLogger
{
    private static ExceptionLogger _logger;
    private static object _syncRoot = new object();

    public static ExceptionLogger Instance
    {
        get
        {
            if (_logger == null)
            {
                lock (_syncRoot)
                {
                    if (_logger == null)
                    {
                        _logger = new ExceptionLogger();
                    }
                }
            }

            return _logger;
        }
    }

    public void LogException(Exception exception, string additionalDetails)
    {
        LogException(exception.Message, additionalDetails);
    }

    public void LogException(string exceptionMessage, string additionalDetails)
    {
        MessageBox.Show(exceptionMessage);
    }
}

About testability

Because of its use of singletons and static classes, MyViewModel isn't testable. Unit testing is about isolation. If you want to unit test some class (for example, MyViewModel) you need to be able to substitute its dependencies with test doubles (usually stubs or mocks). This ability comes only from providing seams in your code. One of the best techniques for providing seams is Dependency Injection. The best resource for learning DI is the book from Mark Seemann (Dependency Injection in .NET).

You can't easily substitute calls to static members. And if you use many static members, then your design isn't perfect.

Of course, you can use an unconstrained isolation framework such as Typemock Isolator, JustMock or Microsoft Fakes to fake static method calls, but it costs money and it doesn't push you towards a better design. These frameworks are great for creating a test harness around legacy code.

About design

  1. The constructor of MyViewModel is doing too much. Constructors should be simple.
  2. If a dependency is null, the constructor must throw an ArgumentNullException, not silently log the error. Throwing an exception is a clear indication that your object isn't usable.

About testing framework

You can use any unit testing framework you like. Even MSTest, though personally I don't recommend it. NUnit and xUnit.net are MUCH better.

Further reading

  1. Mark Seemann - Dependency Injection in .NET
  2. Roy Osherove - The Art of Unit Testing (2nd Edition)
  3. Michael Feathers - Working Effectively with Legacy Code
  4. Gerard Meszaros - xUnit Test Patterns

Sample (using MvvmLight, NUnit and NSubstitute)

public class ViewModel : ViewModelBase
{
    public ViewModel(IMessenger messenger)
    {
        if (messenger == null)
            throw new ArgumentNullException("messenger");

        MessengerInstance = messenger;
    }

    public void SendMessage()
    {
        MessengerInstance.Send(Messages.SomeMessage);
    }
}

public static class Messages
{
    public static readonly string SomeMessage = "SomeMessage";
}

public class ViewModelTests
{
    private static ViewModel CreateViewModel(IMessenger messenger = null)
    {
        return new ViewModel(messenger ?? Substitute.For<IMessenger>());
    }

    [Test]
    public void Constructor_WithNullMessenger_ExpectedThrowsArgumentNullException()
    {
        var exception = Assert.Throws<ArgumentNullException>(() => new ViewModel(null));
        Assert.AreEqual("messenger", exception.ParamName);
    }

    [Test]
    public void SendMessage_ExpectedSendSomeMessageThroughMessenger()
    {
        // Arrange
        var messengerMock = Substitute.For<IMessenger>();
        var viewModel = CreateViewModel(messengerMock);

        // Act
        viewModel.SendMessage();

        // Assert
        messengerMock.Received().Send(Messages.SomeMessage);
    }
}

I am currently working on a project that is using MS Test for unit testing. When I do a "Run All Tests" I get the following error for about 1/3 of the tests:

Test method [Test Method] threw exception System.IO.FileLoadException, but exception System.InvalidOperationException was expected. Exception message: System.IO.FileLoadException: Loading this assembly would produce a different grant set from other instances. (Exception from HRESULT: 0x80131401)

If I go to any of the failing tests and Run the test by itself it will give the same error. If I put a break point in the test and debug the test it will pass with no errors. If I again run the individual test it will pass. If I go back to running all tests I get the above error for 1/3 of the tests again.

I had this problem before and I didn't do anything to fix it and it just magically went away. But now it is back and very frustrating.

What is causing this error? Is there a fix for this error?

It sounds like you have Interacting Tests - an xUnit Test Patterns smell.

In short, some tests are dependent on previous tests to have executed, so when you run them in isolation, they change behavior because their implicit assumptions about their environment turn out to be wrong.

This could also explain why you had the problem before, and it then went away. Although MSTest seems to be fairly stable in how it orders tests, it may decide to run them in a different order the next time.

I can't tell you how to resolve the problem as it is individual. However, look for Shared Fixtures. Examples include

  • Databases
  • Files
  • Static (Shared in Visual Basic) types

In your case, the FileLoadException suggests that your tests expect some files to be around. When you run the entire test suite, those files have been left behind by previous test cases, while they are noticeably absent when the test is executed in isolation.

I am using an interface (IDataSet) in front of System.Data.DataSet for testing purposes. I want to be able to mock the Copy() method. Basically I want a copy/clone of the same mocked object.

Here's some pseudo code of what I would like to do.

Mock<IDataSet> dataMock = new Mock<IDataSet>();
Mock<IDataSet> copyMock = ???    // How do I clone the mock?

dataMock.Setup(c => c.Copy()).Returns(copyMock.Object);

Is this possible?

Basically, a Mock is not the real thing, so it does not have real behavior. It's not supposed to have real behavior - it's supposed to do whatever you tell it while keeping track of what happened. Nothing more and nothing less.

This means that you have to tell it how its Copy method works. If you do the following, that's the implementation the Copy method will have:

Mock<IDataSet> dataMock = new Mock<IDataSet>();
Mock<IDataSet> copyMock = new Mock<IDataSet>();

dataMock.Setup(c => c.Copy()).Returns(copyMock.Object);

However, you can also do this:

Mock<IDataSet> dataMock = new Mock<IDataSet>();
Mock<IDataSet> copyMock = dataMock;

dataMock.Setup(c => c.Copy()).Returns(copyMock.Object);

and that, then, becomes the implementation of the Copy method. Remember: an interface is not a contract that states what the method should do; it only defines the signature of methods.

You were probably hoping to copy data from one IDataSet to another, but remember that a Mock is pure behavior; it has no data.

A couple of alternatives you can think about are the following:

  • Replace IDataSet with an abstract DataSetBase class, and implement the Copy method like you want it to behave (that is, not as an abstract or virtual method).
  • Instead of creating a Mock of IDataSet, use a Fake. A Fake is a test-specific implementation of an interface that has behavior close to the real thing. There are no frameworks/libraries for creating Fakes, so you would need to code such a Fake by hand.
  • Consider whether the Copy method should really be part of the interface. It sounds to me like it's an implementation detail that doesn't belong on the interface in the first place.

You can read about Stubs, Mocks, Fakes and other unit testing design patterns in the excellent book xUnit Test Patterns.

I have a class (of many) that have properties. Some have logic in them and some don't. Assuming I want to test these properties, how do I go about doing that?

Recently, I've been interested in BDD style for creating unit tests.

see here and here.

So I'd do a setup of the context - basically create the SUT and load up whatever is needed. Then in each Observation (test method), I'd verify that a particular property contains what it should contain.

Here's my question. If the SUT has 20 properties, then do I create 20 Observations/Tests? Could be more if one of the properties contained more interesting logic I guess.

[Observation]
public void should_load_FirstName()
{
    Assert.Equals<string>("John", SUT.FirstName);
}

[Observation]
public void should_load_LastName()
{
    Assert.Equals<string>("Doe", SUT.LastName);
}

[Observation]
public void should_load_FullName()
{
    Assert.Equals<string>("John Doe", SUT.FullName);
}

But would it be better if aggregated the simple ones in a single observation?

[Observation]
public void should_load_properties()
{
    Assert.Equals<string>("John", SUT.FirstName);
    Assert.Equals<string>("Doe", SUT.LastName);
    Assert.Equals<string>("John Doe", SUT.FullName);
}

Or what if I used a custom attribute (that can be applied multiple times to a method). So that I can possible do, something like:

[Observation(PropertyName="FirstName", PropertyValue="John")]
[Observation(PropertyName="LastName", PropertyValue="Doe")]
[Observation(PropertyName="FullName", PropertyValue="John Doe")]
public void should_load_properties()
{
}

In general you should strive after having only one logical assertion per test. The excellent book xUnit Test Patterns contains a good discussion about that, but the salient point is that it makes it easier to understand where a violation occurs if there's only one reason a test can fail. That's probably a bit more relevant for Regression Testing than BDD, though...

All this implies that your option of writing a single test that verifies all properties is probably the least attractive, although you could argue that verifying all properties is a single logical assertion...

A more central tenet of xDD (TDD, BDD, whatever) is that tests should act as Executable Specifications. In other words, it should be immediately apparent when you look at the test not only what is being tested, but also why the expected value is as it is. In your examples, it is not very clear why SUT.FirstName is expected to be "John" and not, say, "Jane".

If at all possible, I would write these tests to use Derived Values instead of hard-coded values.

For writable properties, I often write tests that simply verify that the getter returns the value assigned to the setter.

For read-only properties, I often write tests that verify that the value matches a constructor argument, as in the sketch below.
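
A small sketch of what that looks like with a Derived Value (the Person type is made up, and the attribute and assertion mirror the style used in the question):

[Observation]
public void FullName_is_derived_from_the_constructor_arguments()
{
    var firstName = "John";
    var lastName = "Doe";
    var sut = new Person(firstName, lastName);

    // The expectation is derived from the same input, so the reader can see
    // why the value is expected instead of meeting a magic "John Doe" constant.
    Assert.Equals<string>(firstName + " " + lastName, sut.FullName);
}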

Such tests can be encapsulated into reusable test code that encapsulates common testing idioms. I'm currently working on a library that can do just that.

I have a piece of logic I want to test and it uses dependency injected interface with one (or more) void methods, example:

interface IMyService
{
    void MethodA (MyComplexObject arg1, int arg2);
}

What I would want is to create a stub for this IMyService that would just record the method invocations of MethodA and I would later be able to access it as a list, something like:

MyComplexObject actualParameter = serviceRecorder
    .GetMethodRecordings("MethodA").GetRecord(10).GetInputParameter(0);

I need this to examine the contents of such a parameter for a certain invocation and make assertions on it. I know there are other ways of doing it (like setting up expectation calls with constraints), but this seems much easier to write for cases where you have a lot of invocations and you want to make assertions on the 51st one only, for example.

So is there some sort of mechanism in Rhino.Mocks for this or am I left to my own devices (writing dummy IMyService implementation with recording capabilities)?

NOTE: (I'm aware this could lead to tests being fragile and I'm aware of the consequences).

UPDATE: here's what I found so far (thanks in part to Mark's help in naming this pattern as Test Spy):

Take a look at the Arrange-Act-Assert (AAA) syntax of Rhino Mocks.

Overall, the Record-Replay syntax is obsolete. It was fantastic when it was invented, but with the advent of lambda expressions we got something even better.

Rhino Mocks 4 is probably not going to support Record-Replay, but instead relies on lambda expressions, and so does Moq.

Finally, a Test Double that records invocations for later inspection is called a Test Spy - see xUnit Test Patterns for more information :)
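
And if you do end up rolling your own, a Test Spy for your interface really is just a few lines (the SystemUnderTest type and the way it is exercised are assumptions for illustration):

// Records every invocation of MethodA for later inspection
public class MyServiceSpy : IMyService
{
    public readonly List<Tuple<MyComplexObject, int>> MethodACalls =
        new List<Tuple<MyComplexObject, int>>();

    public void MethodA(MyComplexObject arg1, int arg2)
    {
        this.MethodACalls.Add(Tuple.Create(arg1, arg2));
    }
}

[Test]
public void FiftyFirstInvocation_ReceivesTheExpectedParameter()
{
    var spy = new MyServiceSpy();
    var sut = new SystemUnderTest(spy);

    sut.DoWork();   // however the SUT is normally exercised

    MyComplexObject actualParameter = spy.MethodACalls[50].Item1;   // the 51st call
    // ...assert on the contents of actualParameter...
}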

I've been asked to work on changing a number of classes that are core to the system we work on. The classes in question each require 5 - 10 different related objects, which themselves need a similiar amount of objects.

Data is also pulled in from several data sources, and the project uses EJB2 so when testing, I'm running without a container to pull in the dependencies I need!

I'm beginning to get overwhelmed with this task. I have tried unit testing with JUnit and Easymock, but as soon as I mock or stub one thing, I find it needs lots more. Everything seems to be quite tightly coupled such that I'm reaching about 3 or 4 levels out with my stubs in order to prevent NullPointerExceptions.

Usually with this type of task, I would simply make changes and test as I went along. But the shortest build cycle is about 10 minutes, and I like to code with very short iterations between executions (probably because I'm not very confident with my ability to write flawless code).

Anyone know a good strategy / workflow to get out of this quagmire?

As you suggest, it sounds like your main problem is that the API you are working with is too tightly coupled. If you have the ability to modify the API, it can be very helpful to hide immediate dependencies behind interfaces so that you can cut off your dependency graph at the immediate dependency.

If this is not possible, an Auto-Mocking Container may be of help. This is basically a container that automatically figures out how to return a mock with good default behavior for nested abstractions. As I work on the .NET framework, I can't recommend any for Java.

If you would like to read up on unit testing patterns and best practices, I can only recommend xUnit Test Patterns.

For strategies for decoupling tightly coupled code I recommend Working Effectively with Legacy Code.

Nowadays most programmers know about code refactorings.

What about refactorings of data structures, are there any good readings about it?

One paradigm that I can think of is the normalization process of a relational database.

Are there any other good examples?

xUnit Test Patterns: Refactoring Test Code is a good reference for refactoring your code to work well with unit tests. It's not exactly what you're asking for, but it's a good reference to keep on hand.

Refactoring Databases: Evolutionary Database Design seems to be a worthwhile read on the subject, judging from the fact that it won the 2007 Software Development Jolt Productivity Award in the Technical Books category. I haven't yet read it though, so I can't comment on it personally.

Example:

public void Save(MyObj instance)
{
    if (instance.IsNew)
    {
        this.repository.Create(instance);
    }
    else
    {
        this.repository.Update(instance);
    }
}

How do I create a test in Moq that verifies:

  1. that a property IsNew is being read
  2. that either Create() or Update() has been invoked

Off the top of my head: Verifying that the IsNew property is being read:

var mock = new Mock<MyObj>();
mock.Setup(m => m.IsNew).Returns(true).Verifiable();
//...
sut.Save(mock.Object);
//...
mock.Verify();

In the example above, the IsNew property will return true, so the Create path will be taken.

To verify that either the Create or Update method was invoked, you need to have some hook into that functionality. It looks like Repository is a static class, in which case you can't replace it with a Test Double, but I may be reading your code in the wrong way... If you can replace it with a Test Double (Mock), you can use the same principle as outlined above.

If you can examine the state of your Repository after the Save method has been called, you may be able to tell by State-Based Testing which of the two code paths were followed.

If there's no externally observable difference between the result of the two code paths, it's probably a better idea not to test this specific implementation detail. It might lead you towards an anti-pattern called Overspecified Test - you can read more about this anti-pattern and many other unit test-related things in the excellent book xUnit Test Patterns.


Edit: Testing the repositories can be done in the same way:

var myObjMock = new Mock<MyObj>();
myObjMock.Setup(m => m.IsNew).Returns(true);

var repositoryMock = new Mock<Repository>();
repositoryMock.Setup(m => m.Create(myObjMock.Object)).Verifiable();

var sut = new SomeClass(repositoryMock.Object);
sut.Save(myObjMock.Object);

repositoryMock.Verify();

The call to Verifiable is the key. Without it, the default behavior of Moq is to get out of the way, do the best it can and not throw any exceptions if at all possible.

When you call Verifiable, you instruct the mock to expect that particular behavior. If that expectation has not been met when you call Verify, it will throw an exception, and thus fail the test.

This is a test suite that is green using Rhino Mocks.

[SetUp]
  public void BeforeEachTest()
  {
     _mocksRepo = new MockRepository();
     _mockBank = _mocksRepo.StrictMock<IBank>();
     //_mockPrinter = _mocksRepo.StrictMock<IPrinter>();
     _mockPrinter = _mocksRepo.DynamicMock<IPrinter>();
     _mockLogger = _mocksRepo.StrictMock<ILog>();

     _testSubject = new CrashTestDummy(DUMMY_NAME, _mockPrinter, _mockLogger);
  }

  [TearDown]
  public void AfterEachTest()
  {
     _mocksRepo.ReplayAll(); // 2nd call to ReplayAll does nothing. Safeguard check
     _mocksRepo.VerifyAll();
  }

  [Test]
  public void Test_ConstrainingArguments()
  {
     _mockPrinter.Print(null);
     LastCall.Constraints(Text.StartsWith("The current date is : "));
     _mocksRepo.ReplayAll();

     _testSubject.PrintDate();
  }

Now to make a test green in another fixture, I had to make a slight change to the ctor - subscribe to an event in the printer interface. This resulted in all tests in the above test fixture going red.

public CrashTestDummy(string name, IPrinter namePrinter, ILog logger)
      {
         _printer = namePrinter;
         _name = name;
         _logger = logger;

         _printer.Torpedoed += KaboomLogger;   // CHANGE
      }

The NUnit errors tab shows

LearnRhinoMocks.Rhino101.Test_ConstrainingArguments:
TearDown : System.Reflection.TargetInvocationException : Exception has been thrown by the target of an invocation.
  ----> Rhino.Mocks.Exceptions.ExpectationViolationException : IPrinter.add_Torpedoed(System.EventHandler`1[LearnRhinoMocks.MyEventArgs]); Expected #1, Actual #0.

The way to fix this is to move the line where the test subject is created out of Setup() and below the ReplayAll() line in each test; otherwise Rhino Mocks treats the event subscription in the constructor as an expectation you have set up. However, this fix means (some) duplication in each test, because each test usually adds some expectations of its own before calling ReplayAll.

I know that this is a specific scenario which involves event subscription in the test subject's constructor.

  • However, this is a normal scenario, e.g. in a Model View Presenter pattern. I'm curious to know if there is a recommended way to handle it.
  • Also, I didn't like the way multiple tests in a test fixture failed due to a change driven by an external test. Am I in test-design smell country?

According to xUnit Test Patterns, you are, indeed, in test-design smell country :)

The issue is a test smell called General Fixture, which means that the run-time environment is always configured in the same way across many different tests.

It's important to realize that when it comes to xUnit Test Patterns, the term Fixture means something different than in NUnit. It's not a test class, but rather covers the concept of everything that must be in place as preconditions in a test case before you exercise the System Under Test (SUT).

It is very common to use a setup method such as your BeforeEachTest method to set up the Fixture, but there are other ways, which I'll get back to shortly.

The problem with General Fixture is that you attempt to cover too many specific test cases with slightly different preconditions with the same Fixture. This is one of the underlying reasons you now observe this interdependency between tests.

To compound the issue, NUnit is special in that it reuses the same instance of a particular test class across multiple test cases, so state may leak from one test to another. Most other xUnit frameworks create a new instance of the test class for each test case, so this type of issue is less common there.

This brings me back to an alternative to setting up the Fixture in a setup method (what is called Implicit Setup): write a method or object that encapsulates the Fixture and create it as the first line of code in each test case. This will also allow you to vary the Fixture slightly from test case to test case by parameterizing this Fixture setup code.

Here's an example of how I usually do this.
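
A minimal sketch of that idea, reusing the Rhino Mocks calls from the fixture above (the helper class and its members are invented for illustration), might look like this:

// Hypothetical fixture helper: each test creates and configures its own instance.
private class DummyFixture
{
    public readonly MockRepository Mocks = new MockRepository();
    public readonly IPrinter Printer;
    public readonly ILog Logger;
    public CrashTestDummy Subject;

    public DummyFixture()
    {
        Printer = Mocks.DynamicMock<IPrinter>();
        Logger = Mocks.StrictMock<ILog>();
    }

    // Create the SUT only after the test has recorded its expectations and
    // called ReplayAll, so the constructor's event subscription is not
    // recorded as an expectation.
    public void CreateSubject()
    {
        Subject = new CrashTestDummy(DUMMY_NAME, Printer, Logger);
    }
}

[Test]
public void Test_ConstrainingArguments()
{
    var fixture = new DummyFixture();
    fixture.Printer.Print(null);
    LastCall.Constraints(Text.StartsWith("The current date is : "));
    fixture.Mocks.ReplayAll();

    fixture.CreateSubject();
    fixture.Subject.PrintDate();

    fixture.Mocks.VerifyAll();
}

Each test now controls exactly when the SUT is constructed and how the Fixture is configured, so a constructor change like the event subscription only affects the tests that actually exercise it.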

What are the best books to learn about JUnit, jMock and testing generally? Currently I'm reading Pragmatic Unit Testing in Java; I'm on chapter 6. It's good, but it gets complicated. Is there a book that builds up from the bottom? From your experience, which book helped you get the testing concept?

Test Driven Development by Kent Beck is the original. Read it - it's great. But the best way to learn is to practice. Check out different Katas (exercises) at the dojo site.

For me, the best thing that has helped me learn unit testing is reading the many blogs out there.

After that there are books such as Test Driven Development by Example by Kent Beck, xUnit Test Patterns, The Art of Unit Testing etc.

Some books are for java, others for C#...I don't really think it matters which language you read about TDD in as it all helps in one way or another.

  • How to write a unit test framework?
  • Can anyone suggest some good reading?

I wish to work on basic building blocks that we use as programmers, so I am thinking of working on developing a unit test framework for Java. I don't intend to write a framework that will replace junit; my intention is to gain some experience by doing a worthy project.

There are several books that describe how to build a unit test framework. One of those is Test-Driven Development: By Example (TDD) by Kent Beck. Another book you might look at is xUnit Test Patterns: Refactoring Test Code by Gerard Meszaros.

  • Why do you want to build your own unit test framework?
  • Which ones have you tried and what did you find that was missing?

If (as your comments suggest) your objective is to learn about the factors that go into making a good unit test framework by doing it yourself, then chapters 18-24 (Part II: The xUnit Example) of the TDD book show how it can be done in Python. Adapting that to Java would probably teach you quite a lot about Python, unit testing frameworks and possibly Java too.

It will still be valuable to you to have some experience with some unit test framework so that you can compare what you produce with what others have produced. Who knows, you might have some fundamental insight that they've missed and you may improve things for everyone. (It isn't very likely, I'm sorry to say, but it is possible.)

Note that the TDD people are quite adamant that TDD does not work well with databases. That is a nuisance to me as my work is centred on DBMS development; it means I have to adapt the techniques usually espoused in the literature to accommodate the realities of 'testing whether the DBMS works does mean testing against a DBMS'. I believe that the primary reason for their concern is that setting up a database to a known state takes time, and therefore makes testing slower. I can understand that concern - it is a practical problem.

I am about to embark on writing a system that needs to re-balance its load distribution amongst the remaining nodes once one or more of the nodes involved fail. Anyone have any good references on what to avoid and what works?

In particular I'm curious how one should start in order to build such a system to be able to unit-test it.

This question smells like my distributed systems class. So I feel I should point out the textbook we used.

It covers many aspects of distributed systems at an abstract level, so a lot of its content would apply to what you're going to do.

It does a pretty good job of pointing out pitfalls and common mistakes, as well as giving possible solutions.

The first edition is available for free download from the authors.

The book doesn't really cover unit testing of distributed systems, though. I could see an entire book written on just that.

This sounds like a task that involves a considerable degree of out-of-process communication and other environment-dependent code.

To make your code Testable, it is important to abstract such code away from your main logic so that you can unit test the core engine without having to depend on any of these environment-specific things.

The recommended approach is to hide such components behind an interface that you can then replace with so-called Test Doubles in unit tests.
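
As a rough, hedged illustration (all type names here are invented), the rebalancing logic could depend on an abstraction of node communication rather than on real networking:

using System.Collections.Generic;

// Abstraction the core rebalancing engine depends on; the production
// implementation would make real network calls.
public interface INodeChannel
{
    bool IsAlive(string nodeId);
    void AssignShard(string nodeId, int shard);
}

// Test Double used in unit tests: deterministic and in-memory.
public class FakeNodeChannel : INodeChannel
{
    public readonly HashSet<string> DeadNodes = new HashSet<string>();
    public readonly List<KeyValuePair<string, int>> Assignments =
        new List<KeyValuePair<string, int>>();

    public bool IsAlive(string nodeId)
    {
        return !DeadNodes.Contains(nodeId);
    }

    public void AssignShard(string nodeId, int shard)
    {
        Assignments.Add(new KeyValuePair<string, int>(nodeId, shard));
    }
}

A unit test can then mark a node as dead in the fake and assert that the engine re-assigned its shards, without starting any real processes.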

The book xUnit Test Patterns covers many of these things, and much more, very well.

I am new to unit testing but am beginning to think I need it. I have an ASP.NET web forms application that is being extended in unforeseen directions meaning that certain bits of code are being adapted for multiple uses. I need to try and ensure that when changing these units I don't break their original intended use. So how do I best go about applying unit tests retrospectively to existing code? Practical advice appreciated. Thanks

Very slowly!

I'm currently doing the same thing on my project and it is a lot of effort. In order to unit test existing classes they often need to be completely redesigned...and because of high coupling changing a single class can result in changes that need to be made in several dozen other classes.

The end result is good clean code that works, can be extended, and can be verified....but it does take a lot of time and effort.

I've bought myself several books on unit testing that are helping me through the process.

You might want to consider getting yourself xUnit Test Patterns and Working Effectively with Legacy Code.

I have an interface like so:

[ContractClass(typeof(ContractStockDataProvider))]
public interface IStockDataProvider
{
    /// <summary>
    /// Collect stock data from cache/ persistence layer/ api
    /// </summary>
    /// <param name="symbol"></param>
    /// <returns></returns>
    Task<Stock> GetStockAsync(string symbol);

    /// <summary>
    /// Reset the stock history values for the specified date
    /// </summary>
    /// <param name="date"></param>
    /// <returns></returns>
    Task UpdateStockValuesAsync(DateTime date);

    /// <summary>
    /// Updates the stock prices with the latest values in the StockHistories table.
    /// </summary>
    /// <returns></returns>
    Task UpdateStockPricesAsync();

    /// <summary>
    /// Determines the last population date from the StockHistories table, and 
    /// updates the table with everything available after that.
    /// </summary>
    /// <returns></returns>
    Task BringStockHistoryCurrentAsync();

    event Action<StockEventArgs> OnFeedComplete;
    event Action<StockEventArgs> OnFeedError;
}

I have a corresponding contract class like so:

[ContractClassFor(typeof (IStockDataProvider))]
public abstract class ContractStockDataProvider : IStockDataProvider
{
    public event Action<StockEventArgs> OnFeedComplete;
    public event Action<StockEventArgs> OnFeedError;

    public Task BringStockHistoryCurrentAsync()
    {
        return default(Task);
    }

    public Task<Stock> GetStockAsync(string symbol)
    {
        Contract.Requires<ArgumentException>(!string.IsNullOrWhiteSpace(symbol), "symbol required.");
        Contract.Requires<ArgumentException>(symbol.Equals(symbol.ToUpperInvariant(), StringComparison.InvariantCulture),
            "symbol must be in uppercase.");
        return default(Task<Stock>);
    }

    public Task UpdateStockPricesAsync()
    {
        return default(Task);
    }

    public Task UpdateStockValuesAsync(DateTime date)
    {
        Contract.Requires<ArgumentOutOfRangeException>(date <= DateTime.Today, "date cannot be in the future.");
        return default(Task);
    }
}

I made a unit test like so:

[TestClass]
public class StockDataProviderTests
{
    private Mock<IStockDataProvider> _stockDataProvider;

    [TestInitialize]
    public void Initialize()
    {
        _stockDataProvider = new Mock<IStockDataProvider>();
    }

    [TestMethod]
    [ExpectedException(typeof(ArgumentException))]
    public async Task GetStockAsyncSymbolEmptyThrowsArgumentException() 
    {
        //arrange
        var provider = _stockDataProvider.Object;

        //act
        await provider.GetStockAsync(string.Empty);

        //assert
        Assert.Fail("Should have thrown ArgumentException");
    }
}

From what I've read, this should be sufficient to pass the unit test, but when the act step runs, the test fails because no exception is thrown.

I'm not trying to test the contract functionality, but I am interested in testing the validation logic to make sure my requirements are met for concrete implementations of the IStockDataProvider interface.

Am I doing this wrong? How can I verify, using my unit tests, that I have properly specified my inputs?

UPDATE

So, while mocking the interface and testing the validation logic does not seem to work, my concrete class (not inheriting from the abstract) validates the inputs properly in testing. So it may just not be supported in mocking, though I don't quite know why.

The reason why your mocks weren't throwing exceptions is quite simple. Interfaces can't have method bodies. Therefore, you can't specify contracts on an interface directly. But, you already knew this. That's why you created a contract class for your interface (which, by the way, should be an internal abstract class).

Because you're attempting to mock the interface, the mocking tool knows nothing about the contracts. All mocking tools do is look at the definition of the interface and create a proxy object. A proxy is a stand-in, a double, and it has no behavior at all! Now, with libraries like Moq, you can give those proxies behavior by using Setup(...) together with Returns() and It.IsAny<T>(). But again, that turns the proxy more into a stub at that point. Moreover, and more importantly, this wouldn't work with mocking libraries for one reason: the proxy is created on the fly, at runtime, during the test. So no "rewriting" of the proxy is being performed by ccrewrite.

So how would you test that you specified the right conditions for your contracts?

You should create a new library called MyProjectName.Tests.Stubs, for example. Then, you should create an actual stub object instance for your interface in this project. It doesn't have to be elaborate. Just enough to allow you to call the methods in a unit test to test that the Contracts work as expected. Oh, and one more important thing for this to work: Enable Perform Runtime Contract Checking on this newly created stubs project for the Debug build. Otherwise, the stubs you create which inherit from your interface will not be instrumented with contracts.

Reference this new MyProjectName.Tests.Stubs assembly in your unit test project. Use the stubs to test your interfaces. Here's some code (note, I'm using your code from your post--so if the contracts don't work as expected, don't blame me--fix your code ;) ):

// Your Main Library Project
//////////////////////////////////////////////////////////////////////

[ContractClass(typeof(ContractStockDataProvider))]
public interface IStockDataProvider
{
    /// <summary>
    /// Collect stock data from cache/ persistence layer/ api
    /// </summary>
    /// <param name="symbol"></param>
    /// <returns></returns>
    Task<Stock> GetStockAsync(string symbol);

    /// <summary>
    /// Reset the stock history values for the specified date
    /// </summary>
    /// <param name="date"></param>
    /// <returns></returns>
    Task UpdateStockValuesAsync(DateTime date);

    /// <summary>
    /// Updates the stock prices with the latest values in the StockHistories table.
    /// </summary>
    /// <returns></returns>
    Task UpdateStockPricesAsync();

    /// <summary>
    /// Determines the last population date from the StockHistories table, and 
    /// updates the table with everything available after that.
    /// </summary>
    /// <returns></returns>
    Task BringStockHistoryCurrentAsync();

    event Action<StockEventArgs> OnFeedComplete;
    event Action<StockEventArgs> OnFeedError;
}

// Contract classes should:
//    1. Be internal abstract classes
//    2. Have method implementations that always
//       'throw new NotImplementedException()' after the contracts
//
[ContractClassFor(typeof (IStockDataProvider))]
internal abstract class ContractStockDataProvider : IStockDataProvider
{
    public event Action<StockEventArgs> OnFeedComplete;
    public event Action<StockEventArgs> OnFeedError;

    public Task BringStockHistoryCurrentAsync()
    {
        // If this method doesn't mutate state in the class,
        // consider marking it with the [Pure] attribute.

        //return default(Task);
        throw new NotImplementedException();
    }

    public Task<Stock> GetStockAsync(string symbol)
    {
        Contract.Requires<ArgumentException>(
            !string.IsNullOrWhiteSpace(symbol),
            "symbol required.");
        Contract.Requires<ArgumentException>(
            symbol.Equals(symbol.ToUpperInvariant(), 
                StringComparison.InvariantCulture),
            "symbol must be in uppercase.");

        //return default(Task<Stock>);
        throw new NotImplementedException();
    }

    public Task UpdateStockPricesAsync()
    {
        // If this method doesn't mutate state within
        // the class, consider marking it [Pure].

        //return default(Task);
        throw new NotImplementedException();
    }

    public Task UpdateStockValuesAsync(DateTime date)
    {
        Contract.Requires<ArgumentOutOfRangeException>(date <= DateTime.Today, 
            "date cannot be in the future.");

        //return default(Task);
        throw new NotImplementedException();
    }
}

// YOUR NEW STUBS PROJECT
/////////////////////////////////////////////////////////////////
using YourNamespaceWithInterface;

// To make things simpler, use the same namespace as your interface,
// but put '.Stubs' on the end of it.
namespace YourNamespaceWithInterface.Stubs
{
    // Again, this is a stub--it doesn't have to do anything
    // useful. So, if you're not going to use this stub for
    // checking logic and only use it for contract condition
    // checking, it's OK to return null--as you're not actually
    // depending on the return values of methods (unless you
    // have Contract.Ensures(bool condition) on any methods--
    // in which case, it will matter).
    public class StockDataProviderStub : IStockDataProvider
    {
        public Task BringStockHistoryCurrentAsync()
        {
            return null;
        }

        public Task<Stock> GetStockAsync(string symbol)
        {
            Contract.Requires<ArgumentException>(
                !string.IsNullOrWhiteSpace(symbol),
                "symbol required.");
            Contract.Requires<ArgumentException>(
                symbol.Equals(symbol.ToUpperInvariant(), 
                    StringComparison.InvariantCulture),
                "symbol must be in uppercase.");

            return null;
        }

        public Task UpdateStockPricesAsync()
        {
            return null;
        }

        public Task UpdateStockValuesAsync(DateTime date)
        {
            Contract.Requires<ArgumentOutOfRangeException>(
                date <= DateTime.Today, 
                "date cannot be in the future.");

            return null;
        }
    }
}

// IN YOUR UNIT TEST PROJECT
//////////////////////////////////////////////////////////////////
using YourNamespaceWithInterface.Stubs;

[TestClass]
public class StockDataProviderTests
{
    private IStockDataProvider _stockDataProvider;

    [TestInitialize]
    public void Initialize()
    {
        _stockDataProvider = new StockDataProviderStub();
    }

    [TestMethod]
    [ExpectedException(typeof(ArgumentException))]
    public async Task GetStockAsyncSymbolEmptyThrowsArgumentException() 
    {
        //act
        await _stockDataProvider.GetStockAsync(string.Empty);

        //assert
        Assert.Fail("Should have thrown ArgumentException");
    }
}

By creating the project containing stub implementations of your interface and enabling Perform Runtime Contract Checking on the stub project, you can now test the contract conditions in unit tests.

I would also highly recommend that you do some reading on unit testing and the roles of various test doubles. At one time, I thought mocks, stubs and fakes were all the same thing. Well, yes and no; the answer is a bit nuanced. And unfortunately, libraries like Moq, while great, don't help here, because they tend to muddy the waters about what it is you're actually using in your tests. Again, that's not to say they're not helpful, useful, or great, but you need to understand exactly what it is you're using when you use these libraries. A recommendation I can make is xUnit Test Patterns. There's also a website: http://xunitpatterns.com/.

I'm new to Unit Tests so I've been trying to code some examples to learn the right way to use them. I have an example project which uses Entity Framework to connect to a database.

I'm using an n-tier architecture composed by a data access layer which queries the database using EF, a business layer which invokes data access layer methods to query database and perform its business purpose with the data retrieved and a service layer which is composed of WCF services that simply invoke business layer objects.

Do I have to code unit tests for every single layer (data access, business layer, services layer)?

Which would be the right way to code a unit test for a method that queries a database? The next code is an example of a method in my data access layer which performs a select on the database, how should its unit test be like?

public class DLEmployee
{

    private string _strErrorMessage = string.Empty;
    private bool _blnResult = true;

    public string strErrorMessage
    {
        get
        {
            return _strErrorMessage;
        }
    }
    public bool blnResult
    {
        get
        {
            return _blnResult;
        }
    }

    public Employee GetEmployee(int pintId)
    {
        Employee employee = null;
        _blnResult = true;
        _strErrorMessage = string.Empty;

        try
        {
            using (var context = new AdventureWorks2012Entities())
            {
                employee = context.Employees.Where(e => e.BusinessEntityID == pintId).FirstOrDefault();
            }
        }
        catch (Exception ex)
        {
            _strErrorMessage = ex.Message;
            _blnResult = false;

        }

        return employee;
    }
}

Here are my 2 cents based on Domain Driven Design principles:

  • Your business layer should not depend on the concrete data layer, it should just define some abstract interfaces that the data layer can implement (repositories).
  • You should definitely unit-test your business layer with a fake data layer that does not touch the file system (see the sketch after this list).
  • You might create integration tests including your service and business layer with a fake data layer. There is no point mocking out the business layer and check what the service layer calls on the business layer (behavior testing), rather check what state changes it makes on the business objects that are observable through the business layer.
  • You should create some end-to-end tests with a real data layer, the service and business layer and exercise some use-cases on the service layer.
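
A minimal sketch of the first two points (the type names here are made up; the real shapes come from your own domain):

using System.Collections.Generic;

// Defined in the business layer: the abstraction the business logic depends on.
public interface IEmployeeRepository
{
    Employee GetById(int id);
}

// Business logic under test - no reference to Entity Framework.
public class EmployeeService
{
    private readonly IEmployeeRepository _repository;

    public EmployeeService(IEmployeeRepository repository)
    {
        _repository = repository;
    }

    public string GetDisplayName(int id)
    {
        var employee = _repository.GetById(id);
        return employee == null ? "(unknown)" : employee.Name;
    }
}

// Fake data layer used only in unit tests: in-memory, no database.
public class InMemoryEmployeeRepository : IEmployeeRepository
{
    private readonly Dictionary<int, Employee> _data = new Dictionary<int, Employee>();

    public void Add(Employee employee)
    {
        _data[employee.Id] = employee;
    }

    public Employee GetById(int id)
    {
        Employee employee;
        return _data.TryGetValue(id, out employee) ? employee : null;
    }
}

A unit test for the business layer then just news up the fake, puts an employee in it and asserts on what EmployeeService returns.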

If you just started on unit testing, I advise you to read Kent Beck's Test Driven Development by Example and Gerard Meszaros' xUnit Test Patterns

I need to write a unit test for a method where I arrange data according to another default list.

This is the method.

internal AData[] GetDataArrayInInitialSortOrder(ABData aBData)
{
    Dictionary<string,AData > aMap = aBData.ADataArray.ToDictionary(v => v.GroupName, v => v);
    List<AData> newDataList = new List<AData>();
    foreach (AData aData in _viewModel.ADList)
        newDataList.Add(aMap[aData.GroupName]);
    return newDataList.ToArray();
}

Please help - I am new to unit testing and this is not easy for me.
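
A rough sketch of one way such a test could look, assuming (hypothetically) that the view model holding ADList can be handed to the class that owns the method, and that AData exposes a settable GroupName; the stub and owner class names below are made up:

[Test]
public void GetDataArrayInInitialSortOrder_ReturnsItemsInViewModelOrder()
{
    // Arrange: the view model defines the expected order: "B" before "A".
    var viewModel = new ViewModelStub();                   // hypothetical stub
    viewModel.ADList.Add(new AData { GroupName = "B" });
    viewModel.ADList.Add(new AData { GroupName = "A" });

    var input = new ABData
    {
        ADataArray = new[]
        {
            new AData { GroupName = "A" },
            new AData { GroupName = "B" }
        }
    };

    var sut = new DataArranger(viewModel);                 // hypothetical owner of the method

    // Act
    AData[] result = sut.GetDataArrayInInitialSortOrder(input);

    // Assert: the output follows the order of the view model's list.
    Assert.AreEqual("B", result[0].GroupName);
    Assert.AreEqual("A", result[1].GroupName);
}

Because the method is internal, the test assembly would also need to be granted access via an InternalsVisibleTo attribute on the production assembly.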

I am not sure what I should be doing here. Should I be hard-coding all the values in, or should I have them in CONST variables? Everything I've seen seems to hard-code the values in, so I am not sure.

Like this is what I was doing now.

Say in my controller I had a validation test to check if the user tries to submit a form with a blank field.

Now I would have an if statement checking for a blank or null variable. If this happened then I would add the error to the ModelState with an error message that I wrote.

so in my unit test I want to make sure that if a blank form variable is submitted that it gets caught.

Now in my unit test I just made a CONST variable and copied and pasted the validation message into it.

So in my assert I compare the actual message to the message stored in my CONST variable. I do this by looking up the ModelState field where I expect the error to be.

Like:

result.ViewData.ModelState["username"].Errors[0];

So if the message is there then it must have gone into my code otherwise it would not exist.

So it occurred to me maybe I should make a new class that will be static and hold all these CONST variables.

That way both the controller views and the unit tests can use them. That way if I got to change the error message then I only need to change it one place. Since I am not testing what the error message is I am testing if it gets set.

The same thing goes for exceptions: I have some custom messages, but I am not testing whether the message is right, more whether the exception got caught.

The way I am testing it though is to see if the message is the message that I expect since if it is not the message or the message does not exist then something went wrong.

I am new to unit testing so I wanted to make sure that what I am going to do won't somehow screw up my unit tests.

To me it makes sense but I thought better check first.

Thanks

It's important to write each test in a way that is robust to subsequent change. You will often need to change parts of your application at a later date, and each time you do that, there's a risk that you will break one of more of your tests.

If your tests are robust to change, a failing test will truly indicate a regression bug.

However, if your tests are what's called Overspecified Tests, every little change you make in your code base may cause tests to fail - not because there was a regression bug, but because the test is too brittle. When this happens, you lose faith in your tests; test maintenance takes a lot of time, and you'll eventually end up abandoning the test suite altogether.

As I read your question, you are already beginning to see the contours of this anti-pattern. I assume that is why you don't test for the specific texts returned, but merely whether they are being set at all. I think this is correct - I rarely test for specific strings, but rather whether a string is specified at all. This makes the test more robust to change, and you avoid the Overspecified Test anti-pattern.

In many cases, instead of doing an Assert.AreEqual on two strings, you can just use Assert.IsNotNull, or perhaps Assert.IsFalse(string.IsNullOrEmpty(result)) (your platform seems to be .NET).
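
As a hedged sketch (the field name mirrors the question, the message text is invented), the difference looks like this:

// Brittle, overspecified: breaks whenever the wording of the message changes.
Assert.AreEqual("Username is required.",
    result.ViewData.ModelState["username"].Errors[0].ErrorMessage);

// More robust: only asserts that some error was recorded for the field.
var errors = result.ViewData.ModelState["username"].Errors;
Assert.IsTrue(errors.Count > 0, "Expected a validation error for 'username'.");
Assert.IsFalse(string.IsNullOrEmpty(errors[0].ErrorMessage));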

In general, assertions based on Derived Values are very robust to change, so that is a pattern worth reading up on as well.

If you feel particularly adventurous, I can only recommend that you read xUnit Test Patterns, from where many of the patterns and anti-patterns I mention originate. The Art of Unit Testing is also good...

I have begun studying unit testing with NUnit. I know that this type of testing is used to test "classes", "functions" and the "interaction between those functions".

In my case I develop "asp.net web applications".

  • How can I use this kind of testing to test my pages (each page is considered a class, with its methods), and in which sequence? I have these layers:

    1. Interface layer(the .cs of each page).

    2. Data access layer(class for each entity)(DAL).

    3. Database layer (which contains connection to the database,open connection,close connection,....etc).

    4. Business layer(sometimes to make calculation or some separate logic).

  • How do I test the methods that make a connection to the database?

  • How do I make sure that my testing is not a waste of time?

There are unit tests and integration tests. Unit testing is testing single components/classes/methods/functions and the interaction between them, but with only one real object (the system under test - SUT) and test doubles. Test doubles can be divided into stubs and mocks. Stubs provide prepared test data to the SUT. That way you isolate the SUT from the environment, so you don't have to hit the database, web or WCF services and so on, and you have the same input data every time. Mocks are used to verify that the SUT works as expected: the SUT calls methods on the mock object without even knowing it is not a real object, and then you verify that the SUT works by asserting on the mock object. You can write stubs and mocks by hand or use one of many mocking frameworks, one of which is http://code.google.com/p/moq/

If you want to test interaction with a database, that's integration testing, and it is generally a lot harder. For integration testing you have to set up external resources in a well-known state.

Let's take your layers:

  1. You won't be able to unit test it. The page is too tightly coupled to the ASP.NET runtime. You should try not to have much code in the code-behind. Just call some objects from your code-behind and test those objects. You can look at MVC design patterns. If you must test your page you should look at http://watin.org/. It automates your internet browser, clicks buttons on the page and verifies that the page displays the expected results.

  2. This is integration testing. You put data in the database, then read it back and compare results. Before or after each test you have to bring the test database to a well-known state so that tests are repeatable. My advice is to set up the database before the test runs rather than after it runs. That way you will be able to check what's in the database after a test fails.

  3. I don't really know how that differs from point 2.

  4. And this is unit testing. Create the object in the test, call its methods and verify the results.

How to test methods that make connections to the database is addressed in point 2. How do you avoid wasting time? That will come with experience. I don't have general advice other than: don't test properties that don't have any logic in them.

For great info about unit testing look here:

http://artofunittesting.com/

http://www.amazon.com/Test-Driven-Development-Kent-Beck/dp/0321146530

http://www.amazon.com/Growing-Object-Oriented-Software-Guided-Tests/dp/0321503627/ref=sr_1_2?ie=UTF8&s=books&qid=1306787051&sr=1-2

http://www.amazon.com/xUnit-Test-Patterns-Refactoring-Code/dp/0131495054/ref=sr_1_1?ie=UTF8&s=books&qid=1306787051&sr=1-1

Edit:

SUT, CUT - System or Class Under Test. That's what you test. Test doubles - the name comes from stunt doubles: they do the dangerous scenes in movies so that the real actors don't have to. Same here. Test doubles replace real objects in tests so that you can isolate the SUT/CUT from the environment.

Let's look at this class


public class NotTestableParty
{
    public bool ShouldStartPreparing()
    {
        if (DateTime.Now.Date == new DateTime(2011, 12, 31))
        {
            Console.WriteLine("Prepare for party!");
            return true;
        }
        Console.WriteLine("Party is not today");
        return false;
    }
}

How will you test that this class does what it should on New Year's Eve? You would have to run it on New Year's Eve :)

Now look at the modified Party class - an example of a stub:

    public class Party
    {
        private IClock clock;

        public Party(IClock clock)
        {
            this.clock = clock;
        }

        public bool ShouldStartPreparing()
        {
            if (clock.IsNewYearsEve())
            {
                Console.WriteLine("Prepare for party!");
                return true;
            }
            Console.WriteLine("Party is not today");
            return false;
        }
    }

    public interface IClock
    {
        bool IsNewYearsEve();
    }

    public class AlwaysNewYearsEveClock : IClock
    {
        public bool IsNewYearsEve()
        {
            return true;
        }
    }

Now in the test you can pass the fake clock to the Party class:

var party = new Party(new AlwaysNewYearsEveClock());
Assert.That(party.ShouldStartPreparing(), Is.True);

And now you know whether your Party class works on New Year's Eve. AlwaysNewYearsEveClock is a stub.
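
If you would rather not hand-write the stub, a mocking framework such as Moq (linked above) can generate it; a brief sketch:

// The same stub, generated by Moq instead of written by hand.
var clock = new Mock<IClock>();
clock.Setup(c => c.IsNewYearsEve()).Returns(true);

var party = new Party(clock.Object);
Assert.That(party.ShouldStartPreparing(), Is.True);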

Now look at this class:

    public class UntestableCalculator
    {
        private Logger log = new Logger();

        public decimal Divide(decimal x, decimal y)
        {
            if (y == 0m)
            {
                log.Log("Don't divide by 0");
            }

            return x / y;
        }
    }

    public class Logger
    {
        public void Log(string message)
        {
            // .. do some logging
        }
    }

How will you test that your class logs the message? Depending on where you log it, you would have to check a file, a database or some other place. That wouldn't be a unit test but an integration test. In order to unit test it, you do this:

    public class TestableCalculator
    {
        private ILogger log;
        public TestableCalculator(ILogger logger)
        {
            log = logger;
        }
        public decimal Divide(decimal x, decimal y)
        {
            if (y == 0m)
            {
                log.Log("Don't divide by 0");
            }
            return x / y;
        }
    }

    public interface ILogger
    {
        void Log(string message);
    }
    public class FakeLogger : ILogger
    {
        public string LastLoggedMessage;
        public void Log(string message)
        {
            LastLoggedMessage = message;
        }
    }

And in the test you can:

var logger = new FakeLogger();
var calculator = new TestableCalculator(logger);
try
{
    calculator.Divide(10, 0);
}
catch (DivideByZeroException)
{
    Assert.That(logger.LastLoggedMessage, Is.EqualTo("Don't divide by 0"));
}

Here you assert on the fake logger. The fake logger is a mock object.
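
For comparison, the same verification could be done with a Moq mock instead of the hand-written FakeLogger:

// Moq records the calls; Verify asserts the expected call happened exactly once.
var loggerMock = new Mock<ILogger>();
var calculator = new TestableCalculator(loggerMock.Object);

try
{
    calculator.Divide(10, 0);
}
catch (DivideByZeroException)
{
}

loggerMock.Verify(l => l.Log("Don't divide by 0"), Times.Once());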

I'm brand new to unit testing, and I'm using the VS 2010 unit testing framework.

I've got a function that grabs an integer from the user, then executes different functions based on user input. I've read a lot on unit testing but I haven't found anything that shows me how to test each branch of a switch statement. What I've got so far:

    [TestMethod]
    public void RunBankApplication_Case1()
    {
        using (var sw = new StringWriter())
        {
            using (var sr = new StringReader("1"))
            {
                Console.SetOut(sw);
                Console.SetIn(sr);
                BankManager newB = new BankManager();
                newB.RunBankApplication();
                var result = sw.ToString();

                string expected = "Enter Account Number: ";
                Assert.IsTrue(result.Contains(expected));
            }
        }
    }

When the function under case 1 gets called, the first thing that happens is the string "Enter Account Number: " gets written to the console. However, this isn't working at all. Am I not passing input to the console correctly? Thanks for the help!

Edit: my RunBankApplication() function:

do
{
    DisplayMenu();

    option = GetMenuOption();

    switch (option)
    {
        case 1:
            if (!CreateAccount())
            {
                Console.WriteLine("WARNING: Could not create account!");
            }
            break;
        case 2:
            if (!DeleteAccount())
            {
                Console.WriteLine("WARNING: Could not delete account!");
            }
            break;
        case 3:
            if (!UpdateAccount())
            {
                Console.WriteLine("WARNING: Could not update account!");
            }
            break;
        case 4:
            DisplayAccount();
            break;
        case 5:
            status = false;
            break;
        default:
            Console.WriteLine("ERROR: Invalid choice!");
            break;
    }
} while (status);

That's not the right approach. You shouldn't communicate with the Console in unit tests. Just extract the part of your function that works with the input parameters and test that function,

like this:

in YourExtractedClass:

      public string GetMessage(string input)
        {
            var result = string.Empty;

            switch (input)
            {
                case "1":
                    result = "Enter Account Number: ";
                    break;
                case "2":
                    result = "Hello World!";
                    break;
                default:
                    break;
            }

            return result;
        }

....

In your Test class for YourExtractedClass

    [Test]
    public void GetMessage_Input1_ReturnEnterAccountNumberMessage()
    {
        var result = GetMessage("1");
        var expected = "Enter Account Number: ";

        Assert.That(result == expected);
    }

    [Test]
    public void GetMessage_Input2_ReturnHelloWorldMessage()
    {
        var result = GetMessage("1");
        var expected = "Hello World!";

        Assert.That(result == expected);
    }

And one more thing: it's better to move your strings ("Enter Account Number" etc.) to one place (for example to some Constants class). Don't repeat yourself!
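
A minimal sketch of that idea (the class and member names are made up):

// One shared place for the strings, referenced by both production code and tests.
public static class Messages
{
    public const string EnterAccountNumber = "Enter Account Number: ";
    public const string HelloWorld = "Hello World!";
}

// In the test:
// var expected = Messages.EnterAccountNumber;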

read good books about unit testing:

The Art of Unit Testing: With Examples in .Net

Pragmatic Unit Testing in C# with NUnit, 2nd Edition

xUnit Test Patterns: Refactoring Test Code

I am still confused of when I have to make a wrapper and interface to fake my tests.

Like in a book I am reading about MVC the author uses the Moq framework.

So the author first makes a IFormAuthentication interface.

Writes some methods there and then makes a WrapperClass that implements these methods and then writes the actual code for the methods(ie signout).

So then in Moq he just uses the interface. So this makes sense to me but correct me if I am wrong.

He is doing this because he wants Moq to make a fake mock-up by using the interface? Then in the MVC application he has it set up so that if the interface is null it will then make a new wrapper class.

So I am guessing this is so when it time to actually run it uses the wrappers that actually contains the real methods so that the application will work as it should.

So hopefully I got that right.

Now he goes on to do the Membership ones, and he says something like "look how many methods I would have to implement with an interface" (I am also guessing he would make a wrapper too).

Instead we will get Moq to do it and then he passes to Moq MembershipProvider and it creates all this stuff.

So my question is: how did he know? Like, how is it that on one hand you can't do the FormsAuthentication methods this way, but you can do the MembershipProvider?

I don't even think you can do it with just Membership; it has to be MembershipProvider.

So where do I get this information? Like, I want to do my SMTP (MailMessage) code, and I want to know if I have to write an interface and then a wrapper, or can I do the same thing as with MembershipProvider?

I am not sure I don't know how to tell.

Thanks

The reason why the author didn't extract an interface for MembershipProvider is because it is already an abstract class, so it serves excellently as a Test Double already.

In other words: Extracting an interface is necessary only if you want to abstract away a class that is outside of your control. In that case, you can then make an Adapter that wraps the real implementation, while still providing the ability to replace the real implementation with a Test Double.
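
For example, since you mention SMTP and MailMessage: SmtpClient is a concrete BCL class outside your control, so you would wrap it. A minimal sketch (the interface and adapter names are made up):

using System.Net.Mail;

// The abstraction your own code depends on.
public interface IMailSender
{
    void Send(MailMessage message);
}

// Adapter that wraps the real implementation; only used in production.
public class SmtpMailSenderAdapter : IMailSender
{
    public void Send(MailMessage message)
    {
        new SmtpClient().Send(message);
    }
}

In unit tests you then depend on IMailSender and pass a Test Double (hand-written, or generated with Moq) instead of sending real mail.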

When it comes to Abstractions, interfaces and base classes with virtual members are conceptually equivalent.

You can read more about Test Doubles in the excellent xUnit Test Patterns.

I am working on a large test project consisting of thousands of integration tests. It is a bit messy with lots of code duplication. Most tests are composed of several steps, and a lot of "dummy" objects are created. With dummy I mean something like this:

new Address {
    Name = "fake address",
    Email = "some@email.com",
    ... and so on
}

where it often really doesn't matter what the data is. This kind of code is spread out and duplicated in tests, test base classes, and helpers.

What I want is to have a single "test data builder", having a single responsibility, generate test data which is consumed by the tests.

One approach is to have a class with a bunch of static methods like the following:

Something CreateSomething()
{
    return new Something
    {
        // set default dummy values
    };
}

and an overload:

Something CreateSomething(/* params */)
{
    return new Something
    {
        // create the Something from the params
    };
}

Another approach is to have xml files containing the data but i am afraid then it would be too far away from the tests.

The goal is to move this kind of code out of the tests because right now the tests are big and not readable. In a typical case of 50 lines of test code, 20-30 lines are this kind of setup code.

Are there any patterns for accomplishing this? Or any example of big open source codebase with something similar that I can have a look at?

I would shy away from xml files that specify test dependencies.

My reasoning is that these XML files cannot take advantage of the refactoring tools available within the Visual Studio ecosystem.

Instead, I would create a TestAPI. This API will be responsible for serving dependency data to test clients.

Note that the dependency data that is being requested will already be initialized with general data and ready to go for the clients that are requesting the dependencies.

Any values that serve as required test inputs for a particular test would be assigned within the test itself. This is because I want the test to self-document its intent or objective and abstract away the dependencies that are not being tested.
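
One common shape for such a test API is the Test Data Builder pattern. A minimal sketch, using the Address properties from the snippet above (the builder itself is made up):

public class AddressBuilder
{
    // Sensible defaults so tests only spell out what they care about.
    private string _name = "fake address";
    private string _email = "some@email.com";

    public AddressBuilder WithName(string name)
    {
        _name = name;
        return this;
    }

    public AddressBuilder WithEmail(string email)
    {
        _email = email;
        return this;
    }

    public Address Build()
    {
        return new Address { Name = _name, Email = _email };
    }
}

A test that only cares about the e-mail then reads var address = new AddressBuilder().WithEmail("billing@example.com").Build();, and the remaining setup noise stays out of the test body.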

XUnit Test Patterns provided me a lot of insight for writing tests.

Optional assignment for one of my classes. 30-45 minute presentation/case study on either of these two topics:

  1. Examples of currently existing design patterns in real life projects: what problem they solve, why are they better than other techniques, etc
  2. New design patterns, what problems they solve that other design patterns can't, etc

Note that "new" and "existing" are with respect to the GoF book and the design patterns listed within.

For the first one, source code is not required, but it's probably a plus, so an open source project would be the best.

For the second, I'd basically need to be able to give a description like the ones in the GoF book for each pattern, with proper motivations, examples, and the like.

Anyone got some good ideas/pointers?

I truly hope this question doesn't get deleted since I really do need help from the programming pros out there...

I've been programming for quite a while (my main platform being .Net), and am about 85% self-taught - usually from the school of hard knocks. For example, I had to learn the hard way why it was better to put your configuration parameters in an .ini file (which was easier for my users to change) rather than hard-code them into my main app, and later to use the app.config rather than even a plain config file sometimes. Or what a unit test was and why it proved to be so important. Or why smaller routines / bits of code are better 90% of the time than one long worker process...

Basically, because I was thrown into the programming arena without even being shown .Net, I taught myself the things I needed to know, but believe I missed MANY of the fundamentals that any beginner would probably be taught in an intro class.

I still have A LOT more to learn and was hoping to be pointed in the right direction towards a book / web-resource / course / set of videos to really teach the best programming practices / how to set things up in the best way possible (in my case, specifically, for .Net). I know this may seem subjective and every programmer will do things differently, but if there are any good books / resources out there that have helped you learn the important fundamentals / tricks (for example, self-documenting code / design patterns) that most good programmers should know, please share!!!

My main platform is .Net.

Thanks!!!!