The Art of Unit Testing

Roy Osherove

2nd edition of the step-by-step guide that helps developers write test suites that are maintainable, readable, and trustworthy.

Mentioned in questions and answers.

I know how I use these terms, but I'm wondering if there are accepted definitions for faking, mocking, and stubbing for unit tests? How do you define these for your tests? Describe situations where you might use each.

Here is how I use them:

Fake: a class that implements an interface but contains fixed data and no logic. Simply returns "good" or "bad" data depending on the implementation.

Mock: a class that implements an interface and lets you dynamically set the values to return or exceptions to throw from particular methods, and that provides the ability to check whether particular methods have been called or not called.

Stub: Like a mock class, except that it doesn't provide the ability to verify that methods have been called/not called.

Mocks and stubs can be hand generated or generated by a mocking framework. Fake classes are generated by hand. I use mocks primarily to verify interactions between my class and dependent classes. I use stubs once I have verified the interactions and am testing alternate paths through my code. I use fake classes primarily to abstract out data dependencies or when mocks/stubs are too tedious to set up each time.

I am surprised that this question has been around for so long and nobody has as yet provided an answer based on Roy Osherove's "The Art of Unit Testing".

In "3.1 Introducing stubs" defines a stub as:

A stub is a controllable replacement for an existing dependency (or collaborator) in the system. By using a stub, you can test your code without dealing with the dependency directly.

And it defines the difference between stubs and mocks as follows:

The main thing to remember about mocks versus stubs is that mocks are just like stubs, but you assert against the mock object, whereas you do not assert against a stub.

Fake is just the name used for both stubs and mocks, for example when you don't care about the distinction between the two.

The way Osherove distinguishes between stubs and mocks means that any class used as a fake for testing can be either a stub or a mock. Which one it is for a specific test depends entirely on how you write the checks in that test.

  • When your test checks values in the class under test, or actually anywhere but the fake, the fake was used as a stub. It just provided values for the class under test to use, either directly through values returned by calls on it or indirectly through causing side effects (in some state) as a result of calls on it.
  • When your test checks values of the fake, it was used as a mock.

Example of a test where class FakeX is used as a stub:

const int pleaseReturn5 = 5;
var fake = new FakeX(pleaseReturn5);
var cut = new ClassUnderTest(fake);

cut.SquareIt();

Assert.AreEqual(25, cut.SomeProperty);

The fake instance is used as a stub because the Assert doesn't use fake at all.

Example of a test where class FakeX is used as a mock:

const int pleaseReturn5 = 5;
var fake = new FakeX(pleaseReturn5);
var cut = new ClassUnderTest(fake);

cut.SquareIt();

Assert.AreEqual(25, fake.SomeProperty);

In this case the Assert checks a value on fake, making that fake a mock.

Now, of course these examples are highly contrived, but I see great merit in this distinction. It makes you aware of how you are testing your stuff and where the dependencies of your test are.

I agree with Osherove that

from a pure maintainability perspective, in my tests using mocks creates more trouble than not using them. That has been my experience, but I’m always learning something new.

Asserting against the fake is something you really want to avoid, as it makes your tests highly dependent on the implementation of a class that isn't the one under test at all. That means the tests for class ActualClassUnderTest can start breaking because the implementation of ClassUsedAsMock changed. And that sends up a foul smell to me. Tests for ActualClassUnderTest should preferably only break when ActualClassUnderTest is changed.

I realize that writing asserts against the fake is a common practice, especially when you are a mockist type of TDD subscriber. I guess I am firmly with Martin Fowler in the classicist camp (see Martin Fowler's "Mocks Aren't Stubs") and, like Osherove, avoid interaction testing (which can only be done by asserting against the fake) as much as possible.

For fun reading on why you should avoid mocks as defined here, google for "fowler mockist classicist". You'll find a plethora of opinions.

I know that one way to do it would be:

@Test
public void foo(){
   try{
      //execute code that you expect not to throw Exceptions.
   }
   catch(Exception e){
      fail("Should not have thrown any exception");
   }
}

Is there any cleaner way of doing this? (Probably using JUnit's @Rule?)

You're approaching this the wrong way. Just test your functionality: if an exception is thrown the test will automatically fail. If no exception is thrown, your tests will all turn up green.

I have noticed this question garners interest from time to time so I'll expand a little.

Background to unit testing

When you're unit testing, it's important to define to yourself what you consider a unit of work. Basically: a slice of your codebase, which may or may not include multiple methods or classes, that represents a single piece of functionality.

Or, as defined in The Art of Unit Testing, 2nd Edition by Roy Osherove, page 11:

A unit test is an automated piece of code that invokes the unit of work being tested, and then checks some assumptions about a single end result of that unit. A unit test is almost always written using a unit testing framework. It can be written easily and runs quickly. It's trustworthy, readable, and maintainable. It's consistent in its results as long as production code hasn't changed.

What is important to realize is that one unit of work usually isn't just one method: at the most basic level it is one method, but after that it gets encapsulated by other units of work.

[Diagram: a unit of work is, at its most basic, a single method, which is then encapsulated by progressively larger units of work.]

Ideally you should have a test method for each separate unit of work, so you can always immediately see where things are going wrong. In this example there is a basic method called getUserById() which will return a user, and there are a total of three units of work.

The first unit of work should test whether or not a valid user is being returned in the case of valid and invalid input.
Any exceptions thrown by the datasource have to be handled here: if no user is present, there should be a test that demonstrates an exception is thrown when the user can't be found. A sample of this could be an IllegalArgumentException, which is caught with the @Test(expected = IllegalArgumentException.class) annotation.

Once you have handled all your use cases for this basic unit of work, you move up a level. Here you do exactly the same, but you only handle the exceptions that come from the level right below the current one. This keeps your testing code well structured and allows you to quickly run through the architecture to find where things go wrong, instead of having to hop all over the place.

Handling a test's valid and faulty input

At this point it should be clear how we're going to handle these exceptions. There are 2 types of input: valid input and faulty input (the input is valid in the strict sense, but it's not correct).

When you work with valid input you're setting the implicit expectation that whatever test you write will work.

Such a test method name can look like this: existingUserById_ShouldReturn_UserObject. If this method fails (e.g. an exception is thrown) then you know something went wrong and you can start digging.

By adding another test (nonExistingUserById_ShouldThrow_IllegalArgumentException) that uses the faulty input and expects an exception you can see whether your method does what it is supposed to do with wrong input.
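
A minimal sketch of that split, shown here in C# with NUnit since the idea is framework-agnostic (the JUnit version differs only in annotations; UserRepository, InMemoryDataSource, and GetUserById are hypothetical names, not from the question):

public class UserRepositoryTests
{
    private UserRepository repository; // hypothetical class under test

    [SetUp]
    public void SetUp()
    {
        // hypothetical in-memory datasource, so no real database is touched
        repository = new UserRepository(new InMemoryDataSource());
    }

    [Test]
    public void ExistingUserById_ShouldReturn_UserObject()
    {
        // valid input: if GetUserById throws, this test fails automatically
        var user = repository.GetUserById(42);

        Assert.IsNotNull(user);
    }

    [Test]
    public void NonExistingUserById_ShouldThrow_ArgumentException()
    {
        // faulty input: the exception itself is the expected single end result
        Assert.Throws<ArgumentException>(() => repository.GetUserById(-1));
    }
}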

TL;DR

You were trying to do two things in your test: check for valid and faulty input. By splitting this into two methods that each do one thing, you will have much clearer tests and a much better overview of where things go wrong.

By keeping the layered units of work in mind, you can also reduce the number of tests you need for a layer that is higher in the hierarchy, because you don't have to account for everything that might have gone wrong in the lower layers: the layers below the current one are a virtual guarantee that your dependencies work, and if something goes wrong, it's in your current layer (assuming the lower layers don't throw any errors themselves).

I've read some conflicting advice on the use of assert in the setUp method of a Python unit test. I can't see the harm in failing a test if a precondition that test relies on fails.

For example:

import unittest

class MyProcessor():
    """
    This is the class under test
    """

    def __init__(self):
        pass

    def ProcessData(self, content):
        return ['some','processed','data','from','content'] # Imagine this could actually pass

class Test_test2(unittest.TestCase):

    def LoadContentFromTestFile(self):
        return None # Imagine this is actually doing something that could pass.

    def setUp(self):
        self.content = self.LoadContentFromTestFile()
        self.assertIsNotNone(self.content, "Failed to load test data")
        self.processor = MyProcessor()

    def test_ProcessData(self):
        results = self.processor.ProcessData(self.content)
        self.assertGreater(len(results), 0, "No results returned")

if __name__ == '__main__':
    unittest.main()

This seems like a reasonable thing to do to me, i.e. make sure the test is able to run. When this fails because of the setup condition, we get:

F
======================================================================
FAIL: test_ProcessData (__main__.Test_test2)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "C:\Projects\Experiments\test2.py", line 21, in setUp
    self.assertIsNotNone(self.content, "Failed to load test data")
AssertionError: unexpectedly None : Failed to load test data

----------------------------------------------------------------------
Ran 1 test in 0.000s

FAILED (failures=1)

The purpose of setUp is to reduce the boilerplate code that would otherwise be repeated across the tests in the test class during the Arrange phase.

In the Arrange phase you set up everything needed for running the tested code. This includes any initialization of dependencies, mocks, and data needed for the test to run.

Based on the above paragraphs you should not assert anything in your setUp method.

So, as mentioned earlier: if you can't create the test precondition, then your test is broken. To help avoid situations like this, Roy Osherove wrote a great book called The Art of Unit Testing. (For full disclosure: Lior Friedman, who was Roy's boss, is a friend of mine and I worked closely with them for more than 2 years, so I am a little bit biased...)

Basically, there are only a few reasons to interact with external resources during the Arrange phase (or with anything else that may cause an exception), and most of them (if not all) are related to integration tests.

Back to your example: there is a pattern for structuring tests where you need to load an external resource (for all or most of them). Just a side note: before you decide to apply this pattern, make sure that you can't hold this content as a static resource in your test class; if other test classes need to use this resource, extract it into a shared module.

The following pattern decreases the possibility of failure, since you make fewer calls to the external resource:

class TestClass(unittest.TestCase):

    @classmethod
    def setUpClass(cls):
        # since external resources such as other servers can provide bad content,
        # you can verify here that the content is valid
        # and prevent the tests from running;
        # however, in most cases you shouldn't.
        cls.externalResourceContent = loadContentFromExternalResource()

    def setUp(self):
        # each test works on its own copy, so one test can't corrupt another's data
        self.content = self.copyContentForTest()

Pros:

  1. fewer chances of failure
  2. prevents inconsistent behavior (1. someone or something has edited the external resource; 2. you failed to load the external resource in some of your tests)
  3. faster execution

Cons:

  1. the code is more complex

It seems like using Verify methods to ascertain whether the method under test executed properly is counterproductive, because it leads to brittle tests. In other words, you're tying the test to the implementation, so if you later want to change the implementation you're also going to have to change the test. I'm asking this question because I was trained to always use at least one of these methods in every unit test, and I think I may have just had an epiphany that this is actually a very bad practice.

First of all, it is important to understand that Verify-family methods are there for a reason: they allow you to test the unobservable1 behavior of your system. What do I mean by that? Consider a simple example of an application generating and sending reports. Your final component will most likely look like this:

public void SendReport(DateTime reportDate, ReportType reportType)
{
    var report = generator.GenerateReport(reportDate, reportType);
    var reportAsPlainText = converter.ConvertReportToText(report);
    reportSender.SendEmailToSubscribers(body: reportAsPlainText);
}

How do you test this method? It doesn't return anything, so you cannot check values. It doesn't change the state of the system (like flipping some flag), so you cannot check that either. The only visible result of SendReport being called is the fact that the report was sent via the SendEmailToSubscribers invocation. This is the main responsibility of the SendReport method, and this is what unit tests should verify.

Of course, your unit tests should not and will not check whether some email was actually sent or delivered. You will verify against a mock of reportSender, and this is where you use Verify methods: to check that some call on some mock actually took place.
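
For illustration, here is a minimal sketch of such a test using Moq, assuming the three collaborators are injected into a hypothetical ReportService as interfaces (all type names below are assumptions, not from the original code):

[Test]
public void SendReport_ShouldSend_ConvertedReportToSubscribers()
{
    // Arrange: stub the generator and converter (Setup), mock the sender (Verify)
    var generator = new Mock<IReportGenerator>();
    var converter = new Mock<IReportConverter>();
    var reportSender = new Mock<IReportSender>();

    var report = new Report();
    generator.Setup(g => g.GenerateReport(It.IsAny<DateTime>(), It.IsAny<ReportType>())).Returns(report);
    converter.Setup(c => c.ConvertReportToText(report)).Returns("report as plain text");

    var sut = new ReportService(generator.Object, converter.Object, reportSender.Object);

    // Act
    sut.SendReport(new DateTime(2014, 1, 1), ReportType.Daily);

    // Assert: the only visible result is the interaction with the sender
    reportSender.Verify(s => s.SendEmailToSubscribers("report as plain text"), Times.Once);
}

Note that only reportSender is verified against; generator and converter merely feed values in through Setup, which makes them stubs rather than mocks in Osherove's terms.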

As a final note, Roy Osherove, in his book The Art of Unit Testing (2nd edition), separates unit tests into three categories, depending on what can be checked:

  • return value of method (simple, common)
  • change in system state (simple, rare)
  • call to external component (complex, rare)

The last category is where you use mocks and call Verify methods on them. For the other two, stubs are enough (Setup methods).

When your code is designed correctly, such tests (the last category) will be a minority in your code base, somewhere in the 5%-10% range (a number taken from Roy's book, in line with my own observations).


1: Unobservable as in the caller cannot easily verify what exactly happened after the call.

I am aware that you should depend on abstractions not concrete implementations but I am also aware of the YAGNI principle. I sometimes find myself struggling to reconcile both of these.

Consider the following classes;

public class Foo
{
    public void DoFoo()
    {
    }
    //private foo stuff
}

public class Bar
{
    private readonly Foo _foo;

    public Bar()
    {
        _foo = new Foo();
    }
}

"Bar" is the class I am interested in; obviously there is a problem, Bar is instantiating an instance of Foo, so let me refactor;

public class Bar
{
    private readonly Foo _foo;

    public Bar(Foo foo)
    {
        _foo = foo;
    }
}

Great, but Bar's constructor still depends on Foo, a concrete implementation. I haven't gained anything (have I?). To fix this I need to make Foo an abstraction, and this is where my problem begins.

Every example I ever find always (understandably) demonstrates constructor injection using abstractions. I'm all for programming defensively, but let's presume I have no need for any other implementations except Foo (test doubles don't count). To create an "IFoo" interface or a "FooBase" abstract class surely violates the YAGNI principle? I would be making something for a possible future scenario, and I can always do that later, e.g.

public abstract class Foo
{
    public abstract void DoFoo();

    //private foo stuff
}

public class Foo1:Foo
{
    public override void DoFoo()
    {
    }
}

This doesn't break Bar, and I could even do this for an interface, provided I dropped the "I" convention (which I grow ever more sceptical of), e.g.

public interface Foo
{
    void DoFoo();
}

public abstract class FooBase:Foo
{
    public abstract void DoFoo();

    //private foo stuff
}

public class Foo1:FooBase
{
    public override void DoFoo()
    {
    }
}

What is wrong with injecting a concrete implementation since I can refactor this to an abstraction at a later stage (provided I give the abstraction the same name as the concrete implementation)?

Note: I am aware of the arguments for the "I" interface naming convention and this is not the point of my question. I am also aware that making Foo an abstract class will break the code wherever I was previously instantiating it, but presume I am using DI extensively and so I would only need to change the DI container registration, something I would probably have to do anyway if I were to introduce a new implementation of Foo.

but Bar's constructor still depends on Foo, a concrete implementation. I haven't gained anything (have I?).

What you gained here is that when the dependency Foo itself gets any dependencies of its own, or requires a different lifestyle, you can make this change without having to do sweeping changes throughout all consumers of Foo.
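
For example, a hypothetical sketch: if Foo later acquires a logger dependency, Bar and every other consumer stay untouched, and only the composition root changes (ILogger and FileLogger are made up for this illustration):

public interface ILogger // hypothetical new dependency
{
    void Log(string message);
}

public class FileLogger : ILogger
{
    public void Log(string message) { /* append to a log file */ }
}

public class Foo
{
    private readonly ILogger _logger; // added later, when Foo's needs grew

    public Foo(ILogger logger)
    {
        _logger = logger;
    }

    public void DoFoo()
    {
        _logger.Log("Doing foo");
    }
}

public static class CompositionRoot
{
    public static Bar CreateBar()
    {
        // Bar itself is untouched; only this wiring changes
        return new Bar(new Foo(new FileLogger()));
    }
}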

I have no need for any other implementations except Foo (test doubles don't count)

You can't just ignore unit testing in this. As Roy Osherove explained a long time ago, your test suite is another (equally important) consumer of your application with its own requirements. If adding the abstraction simplifies testing, you shouldn't need another reason for creating it.

To create an "IFoo" interface or a "FooBase" abstract class surely violates the YAGNI principle?

You won't violate YAGNI if you create this abstraction for testing. In that case YNI (You Need It). By not creating the abstraction you are optimizing locally within your production code. This is a local optimum instead of a global optimum, since this optimization doesn't take all the other (equally important) code that needs to be maintained (i.e. your test code) into consideration.

What is wrong with injecting a concrete implementation since I can refactor this to an abstraction

There isn't anything wrong per se with injecting a concrete instance, although, as said, creating an abstraction could simplify testing. If it doesn't simplify testing, letting the consumer take a hard dependency on the implementation could be fine. But do note that depending on a concrete type can have its downsides. For instance, it becomes harder to replace it with a different instance (such as an interceptor or decorator) without having to make changes to the consumer(s). If this is not a problem, you might as well use the concrete type.
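
To make the interceptor/decorator point concrete, here is a hypothetical sketch, assuming Bar has been changed to depend on an IFoo abstraction:

public interface IFoo
{
    void DoFoo();
}

public class LoggingFoo : IFoo // decorator adding behavior around the real Foo
{
    private readonly IFoo _inner;

    public LoggingFoo(IFoo inner)
    {
        _inner = inner;
    }

    public void DoFoo()
    {
        Console.WriteLine("Before DoFoo");
        _inner.DoFoo();
        Console.WriteLine("After DoFoo");
    }
}

// Bar's own code does not change, only the wiring does:
// var bar = new Bar(new LoggingFoo(new Foo()));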

I am completely new to writing unit test cases. I am using MVVMLight with WPF. Is it necessary to use some third-party test framework, or is the .NET unit test framework enough? Also, how do I handle static classes in unit test cases? In this case, the AppMessages class.

Can someone please guide me on how to write unit test cases for the following piece of code:

public MyViewModel(Participant participant)
{    
    if (participant != null)
    {
        this.ParentModel = participant;
        OkCommand = new RelayCommand(() => OkCommandExecute());
        CalculateAmountCommand = new RelayCommand(() => CalculateAmount());        
    }
    else
    {
        ExceptionLogger.Instance.LogException(Constants.ErrorMessages.FinancialLineCanNotBeNull, "FinancialLineViewModel");
        AppMessages.DisplayDialogMessage.Send(Constants.ErrorMessages.FinancialLineCanNotBeNull, MessageBoxButton.OK, Constants.DefaultCaption, null);
    }
}

public static class AppMessages
{
    enum AppMessageTypes
    {
        FinancialLineViewDisplay,
        FinancialLineViewClose,
        DisplayDialogMessage
    }

    public static class DisplayDialogMessage
    {
        public static void Send(string message, MessageBoxButton button, string caption, System.Action<MessageBoxResult> action)
        {
            DialogMessage dialogMessage = new DialogMessage(message, action)
            {
                Button = button,
                Caption = caption
            };

            Messenger.Default.Send(dialogMessage, AppMessageTypes.DisplayDialogMessage);
        }

        public static void Register(object recipient, System.Action<DialogMessage> action)
        {
            Messenger.Default.Register<DialogMessage>(recipient, AppMessageTypes.DisplayDialogMessage, action);
        }
    }
}

public class ExceptionLogger
{
    private static ExceptionLogger _logger;
    private static object _syncRoot = new object();

    public static ExceptionLogger Instance
    {
        get
        {
            if (_logger == null)
            {
                lock (_syncRoot)
                {
                    if (_logger == null)
                    {
                        _logger = new ExceptionLogger();
                    }
                }
            }

            return _logger;
        }
    }

    public void LogException(Exception exception, string additionalDetails)
    {
        LogException(exception.Message, additionalDetails);
    }

    public void LogException(string exceptionMessage, string additionalDetails)
    {
        MessageBox.Show(exceptionMessage);
    }
}

About testability

Due to the use of singletons and static classes, MyViewModel isn't testable. Unit testing is about isolation: if you want to unit test some class (for example, MyViewModel) you need to be able to substitute its dependencies with test doubles (usually a stub or a mock). This ability comes only when you provide seams in your code. One of the best techniques for providing seams is Dependency Injection. The best resource for learning DI is Mark Seemann's book, Dependency Injection in .NET.

You can't easily substitute calls to static members. If you use many static members, your design isn't perfect.
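
For example, a hypothetical seam for the ExceptionLogger from the question: extract an interface, let the existing class implement it, and inject the interface into the view model:

public interface IExceptionLogger
{
    void LogException(string exceptionMessage, string additionalDetails);
}

// The existing class implements the interface, so current callers keep working
public class ExceptionLogger : IExceptionLogger
{
    public void LogException(string exceptionMessage, string additionalDetails)
    {
        // the existing logging logic goes here
    }
}

public class MyViewModel
{
    private readonly IExceptionLogger _logger;

    // Tests can now pass a fake IExceptionLogger instead of hitting the singleton
    public MyViewModel(IExceptionLogger logger)
    {
        _logger = logger;
    }
}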

Of course, you can use an unconstrained isolation framework such as Typemock Isolator, JustMock, or Microsoft Fakes to fake static method calls, but these cost money and don't push you toward a better design. Such frameworks are great for creating a test harness around legacy code.

About design

  1. The constructor of MyViewModel is doing too much. Constructors should be simple.
  2. If a dependency is null, the constructor must throw an ArgumentNullException rather than silently logging the error. Throwing an exception is a clear indication that your object isn't usable.

About testing framework

You can use any unit testing framework you like. Even MSTest, but personally I don't recommend it. NUnit and xUnit.net are MUCH better.

Further reading

  1. Mark Seeman - Dependency Injection in .NET
  2. Roy Osherove - The Art of Unit Testing (2nd Edition)
  3. Michael Feathers - Working Effectively with Legacy Code
  4. Gerard Meszaros - xUnit Test Patterns

Sample (using MvvmLight, NUnit and NSubstitute)

public class ViewModel : ViewModelBase
{
    public ViewModel(IMessenger messenger)
    {
        if (messenger == null)
            throw new ArgumentNullException("messenger");

        MessengerInstance = messenger;
    }

    public void SendMessage()
    {
        MessengerInstance.Send(Messages.SomeMessage);
    }
}

public static class Messages
{
    public static readonly string SomeMessage = "SomeMessage";
}

public class ViewModelTests
{
    private static ViewModel CreateViewModel(IMessenger messenger = null)
    {
        return new ViewModel(messenger ?? Substitute.For<IMessenger>());
    }

    [Test]
    public void Constructor_WithNullMessenger_ExpectedThrowsArgumentNullException()
    {
        var exception = Assert.Throws<ArgumentNullException>(() => new ViewModel(null));
        Assert.AreEqual("messenger", exception.ParamName);
    }

    [Test]
    public void SendMessage_ExpectedSendSomeMessageThroughMessenger()
    {
        // Arrange
        var messengerMock = Substitute.For<IMessenger>();
        var viewModel = CreateViewModel(messengerMock);

        // Act
        viewModel.SendMessage();

        // Assert
        messengerMock.Received().Send(Messages.SomeMessage);
    }
}

The current system we are adopting at work is to write some extremely complex queries which perform multiple calculations and have multiple joins / sub-queries. I don't think I am experienced enough to say if this is correct or not so I am agreeing and attempting to function with this system as it has clear benefits.

The problem we are having at the moment is that the person writing the queries makes a lot of mistakes and assumes everything is correct. We have now assigned a tester to analyse all of the queries but this still proves extremely time consuming and stressful.

I would like to know how we could create an automated procedure (without specifically writing it with code if possible as I can work out how to do that the long way) to verify a set of 10+ different inputs, verify the output data and say if the calculations are correct.

I know I could write a script using specific data in the database and create a script using c# (the db is SQL Server) and verify all the values coming out but I would like to know what the official "standard" is as my experience is lacking in this area and I would like to improve.

I am happy to add more information if required, add a comment if necessary. Thank you.

Edit: I am using c#

The standard approach to testing code that runs SQL queries is to unit-test it. (There are higher-level kinds of testing than unit testing, but it sounds like your problem is with a small, specific part of your application, so don't worry about higher-level testing yet.) Don't try to test the queries directly; instead, test the results of the queries. That is, write unit tests for each of the C# methods that runs a query. Each unit test should insert known data into the database, call the method, and assert that it returns the expected result.

The two most common approaches to unit testing in C# are to use the Visual Studio unit test tools or NUnit. How to write unit tests is a big topic. Roy Osherove's "Art of Unit Testing" should be a good place to get started.
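
As a rough sketch of what such a test could look like with NUnit (the Sales table, SalesRepository class, TestConnectionString, and GetTotalSalesForMonth method are all hypothetical, and a real suite would also reset the test database between tests):

[Test]
public void GetTotalSalesForMonth_WithKnownRows_ReturnsExpectedSum()
{
    // Arrange: insert known data into a dedicated test database
    using (var connection = new SqlConnection(TestConnectionString))
    {
        connection.Open();
        using (var command = new SqlCommand(
            "INSERT INTO Sales (Amount, SaleDate) VALUES (10.0, '2014-01-05'), (15.5, '2014-01-20')",
            connection))
        {
            command.ExecuteNonQuery();
        }
    }

    var repository = new SalesRepository(TestConnectionString);

    // Act: run the method that executes the complex query
    decimal total = repository.GetTotalSalesForMonth(2014, 1);

    // Assert: the expected value is computed by hand from the known rows
    Assert.AreEqual(25.5m, total);
}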

I was under the impression that mocking means faking data calls, so that nothing is real. So when trying to create my own unit tests, very similar to what other developers on my team are doing, I am thinking that it is not correct to be newing up the service.

    [Test]
    public void TemplateService_UpdateTemplate_ContentNameNotUnique_ServiceReturnsError()
    {
        // Arrange.
        Template template = new Template()
        {
            TemplateId = 123,
        };

        TemplateUpdateRequest request = new TemplateUpdateRequest()
        {
            ExistingTemplate = template
        };

        var templateRepo = new Mock<ITemplateRepository>();
        var uproduceRepo = new Mock<IUProduceRepository>();

        templateRepo.Setup(p => p.IsUniqueTemplateItemName(215, 456, "Content", It.IsAny<string>())).Returns(false);
        templateRepo.Setup(p => p.UpdateTemplate(request)).Returns(1);

        // Act.
        TemplateService svc = new TemplateService(templateRepo.Object, uproduceRepo.Object);

        TemplateResponse response = svc.UpdateTemplate(request);

        // Assert.
        Assert.IsNotNull(response);
        Assert.IsNotNull(response.data);
        Assert.IsNull(response.Error);
    }

So my issue is with this code:

TemplateService svc = new TemplateService(templateRepo.Object, uproduceRepo.Object);

Should the TemplateService really be newed up? What if the service ended up hitting a database and/or the file system? Then it becomes an integration test, and no longer a unit test, right?

TemplateResponse response = svc.UpdateTemplate(request);

Also, how do I really control whether this is truly going to pass or not? It is relying on a service call that I didn't write, so what if there is a bug, or it encounters a problem, or returns NULL, which is exactly what I do not want?

I recommend reading the book The Art of Unit Testing: with Examples in C# to learn good practices.

In your example, you are testing the TemplateService class. Your concern is what happens if TemplateService hits a database. That depends on how the class is implemented. From the example and the mock setup, I understand that the implementation behind ITemplateRepository is responsible for the database calls, which is why UpdateTemplate and IsUniqueTemplateItemName are mocked.

If you want to go beyond the null checks, you can verify that svc.UpdateTemplate(request) calls the UpdateTemplate method of ITemplateRepository with the expected parameter.

It would look similar to the following:

templateRepo.Verify(u => u.UpdateTemplate(It.Is<TemplateUpdateRequest>(r => r.ExistingTemplate.TemplateId == 123)),Times.Once);

You can verify other method calls that you have mocked.
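
For example, the uniqueness check that was set up earlier could be verified the same way (a sketch reusing the templateRepo mock from the test above):

templateRepo.Verify(p => p.IsUniqueTemplateItemName(It.IsAny<int>(), It.IsAny<int>(), "Content", It.IsAny<string>()), Times.Once);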

I have a method that deletes a row from a table. I am writing the unit tests for it. One of the test cases has to cause an error condition on delete to test the catch of the try/catch block. I cannot for the life of me think of how to cause an error to catch. ADO, EF... anything will do.

Thanks

I have a method that deletes a row from a table. I am writing the unit tests for it.

Unless that method actually contains something testable, you may be spending time just to exercise framework functionality. There's nothing wrong with that as part of an integration test, but it may not make a very good unit test.

Let's assume there is something in the method to unit test. A clean way of isolating that functionality is to inject mocks of the dependencies into the class being tested.

In this case, it sounds like the dependency would be ADO.Net and/or EF. One of those mocks could be configured to throw an exception.

public class MyClass
{
    private readonly IRepository _repository;

    public MyClass( IRepository repository )
    {
        _repository = repository;
    }

    public void DoSomethingThatMightThrow()
    {
        // some logic that you want to test

        // this might throw
        var obj = _repository.Delete( 123 );

        // some logic that you want to test
    }
}

[TestMethod]
[ExpectedException( typeof( DataException ) )]
public void ATest()
{
    // uses Moq framework, but any mocking framework should do this

    var repository = new Mock<IRepository>();
    repository.Setup( o => o.Delete( It.IsAny<int>() ) ).Throws( new DataException() );

    // pass the mocked object, not the Mock<T> wrapper itself
    var obj = new MyClass( repository.Object );

    // the configured DataException propagates and the test passes via ExpectedException
    obj.DoSomethingThatMightThrow();
}

I'm currently reading The Art of Unit Testing, which discusses how to identify good units. The author asserts that a test doesn't have to map to a single method, but that tests should isolate logical units and be easily repeatable. A dependency on a database in a unit test is rarely a good idea (again, data access can be part of a great integration test).