Posts about Unit Testing
When testing code that depends on HTTP calls, it's most common to mock the part of the code that makes the HTTP calls. In integration tests, however, you don't want to modify the application code, so you need to mock the server responding to the HTTP calls instead. WireMock.Net is a convenient tool for doing that directly from test code.
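A minimal sketch of what such a test might look like with WireMock.Net and MSTest; the /users endpoint and its response are invented for illustration:

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using WireMock.RequestBuilders;
using WireMock.ResponseBuilders;
using WireMock.Server;

[TestClass]
public class UsersApiTests
{
    [TestMethod]
    public async Task ReturnsStubbedUsers()
    {
        // start an in-process HTTP server on a random free port
        using var server = WireMockServer.Start();

        server
            .Given(Request.Create().WithPath("/users").UsingGet())
            .RespondWith(Response.Create()
                .WithStatusCode(200)
                .WithBodyAsJson(new[] { new { Name = "Alice" } }));

        // point the code under test at server.Url instead of the real API
        using var client = new HttpClient { BaseAddress = new Uri(server.Url) };
        var json = await client.GetStringAsync("/users");

        StringAssert.Contains(json, "Alice");
    }
}
```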
Old-school .NET console applications with a Program.Main method could easily be invoked from tests. You only needed to make the method and its containing class public. Then you could directly invoke it like any other method. That's not an option if you're using top-level statements because there's no Main method in your code to make publicly accessible.
Verifying from a unit test whether a call to ILogger has been made is somewhat tricky, because logging usually goes through extension methods rather than instance methods, and extension methods can't be verified directly. We need to verify the call to the underlying instance method instead.
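A minimal sketch of the verification pattern, assuming Moq and MSTest; the LogError call and its message stand in for what the code under test would do:

```csharp
using System;
using Microsoft.Extensions.Logging;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Moq;

[TestClass]
public class LoggerVerificationTests
{
    [TestMethod]
    public void ErrorIsLogged()
    {
        var logger = new Mock<ILogger<object>>();

        // the code under test would typically call an extension method like this:
        logger.Object.LogError("Payment failed for order {OrderId}", 42);

        // extension methods can't be verified directly, so verify the underlying Log call:
        logger.Verify(l => l.Log(
                LogLevel.Error,
                It.IsAny<EventId>(),
                It.Is<It.IsAnyType>((o, t) => o.ToString().Contains("Payment failed")),
                It.IsAny<Exception>(),
                (Func<It.IsAnyType, Exception, string>)It.IsAny<object>()),
            Times.Once);
    }
}
```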
Moq is my preferred mocking tool for .NET. Although it has rather good documentation, it doesn't include an example for what I needed, so it took me a while to actually get it working.
In the sample project for my previous blog post, I noticed that every time I ran my test, its code was executed twice. At first, I thought that I had a bug in my code, but quickly dismissed this option. After further investigation, I finally attributed the strange behavior to the Fine Code Coverage Visual Studio extension.
Mocking HttpClient in .NET isn't difficult. But if you haven't done it before, it's not obvious how to approach it, since the class has no interface or virtual methods. What you should mock is the protected abstract SendAsync method of the HttpMessageHandler class; the mocked handler can then be passed to HttpClient as a constructor parameter.
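A minimal sketch of this approach with Moq's Protected() API and MSTest; the URL and canned response body are invented for illustration:

```csharp
using System.Net;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Moq;
using Moq.Protected;

[TestClass]
public class HttpClientMockingTests
{
    [TestMethod]
    public async Task ReturnsCannedResponse()
    {
        var handler = new Mock<HttpMessageHandler>();
        handler.Protected()
            .Setup<Task<HttpResponseMessage>>(
                "SendAsync",
                ItExpr.IsAny<HttpRequestMessage>(),
                ItExpr.IsAny<CancellationToken>())
            .ReturnsAsync(new HttpResponseMessage(HttpStatusCode.OK)
            {
                Content = new StringContent("{\"name\":\"Alice\"}")
            });

        // the mocked handler is passed to HttpClient as a constructor parameter
        var client = new HttpClient(handler.Object);
        var body = await client.GetStringAsync("https://example.com/users/1");

        StringAssert.Contains(body, "Alice");
    }
}
```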
Mocking frameworks can be used not only to mock methods, but also to verify whether those mocked methods have actually been called. The latter is sometimes called interaction testing and is particularly useful when you want to verify that specific calls to an external API have been made. Of course, Moq also supports such setup verification.
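A minimal sketch of such a verification with Moq; INotificationClient is an invented dependency, and in a real test the call on it would be made by the code under test rather than directly:

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Moq;

public interface INotificationClient
{
    void Send(string recipient, string message);
}

[TestClass]
public class InteractionTests
{
    [TestMethod]
    public void SendsExactlyOneNotification()
    {
        var client = new Mock<INotificationClient>();

        // in a real test, the code under test would make this call
        client.Object.Send("alice@example.com", "Your order has shipped");

        // verify that the expected call was made exactly once, with matching arguments
        client.Verify(c => c.Send("alice@example.com", It.IsAny<string>()), Times.Once);
        // and that nothing else was called on the mock
        client.VerifyNoOtherCalls();
    }
}
```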
In large projects, service registration code for dependency injection can grow rather large. Forgetting to register a newly added dependency will break your application at the point of instantiating a class (transitively) requiring it. It's even more likely that this will happen to you if you need to register a dependency in multiple places (e.g., for multiple projects) and don't have tests for every one of them.
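A minimal sketch of one way to test the registration code, assuming Microsoft.Extensions.DependencyInjection and MSTest; IClock and OrderProcessor are invented stand-ins, with the IClock registration deliberately "forgotten":

```csharp
using System;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.VisualStudio.TestTools.UnitTesting;

public interface IClock { }

public class OrderProcessor
{
    public OrderProcessor(IClock clock) { }
}

[TestClass]
public class ServiceRegistrationTests
{
    [TestMethod]
    public void MissingRegistrationsAreReportedWhenBuildingTheContainer()
    {
        var services = new ServiceCollection();
        services.AddTransient<OrderProcessor>();   // IClock is never registered

        // ValidateOnBuild surfaces the missing IClock dependency here,
        // instead of at runtime when OrderProcessor is first resolved
        Assert.ThrowsException<AggregateException>(() =>
        {
            services.BuildServiceProvider(new ServiceProviderOptions
            {
                ValidateOnBuild = true,
                ValidateScopes = true
            });
        });
    }
}
```

In a real project, the test would register the application's actual services and simply expect BuildServiceProvider not to throw.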
After I updated my copy of Visual Studio 2022 to the latest version 17.6, I could no longer run tests from the Test Explorer that had worked just fine in version 17.5. I had had a similar experience in the past, so I immediately suspected that the problems were related to the versions of test-related NuGet packages in my project.
When I updated MSTest from v2 to v3 in one of my projects, some tests started failing. It was because of a breaking change in MSTest v3 that caused the TestContext.DeploymentDirectory property to return a different path.
Data-driven tests are great for repeating the same unit tests with many different inputs. However, a test from a project I worked on failed on multiple test cases because a double value was incorrectly handled as a decimal.
In a previous blog post, I looked at integration testing of ASP.NET Core Web APIs. But often unit tests make more sense. So instead of checking the already serialized responses from the REST service, you would check the value returned by the controller action method before it is serialized into an HTTP response by the ASP.NET Core runtime.
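A minimal sketch of such a test, assuming MSTest; ProductsController and its Get action are invented for illustration:

```csharp
using Microsoft.AspNetCore.Mvc;
using Microsoft.VisualStudio.TestTools.UnitTesting;

public class ProductsController : ControllerBase
{
    [HttpGet("{id}")]
    public IActionResult Get(int id)
    {
        if (id == 42)
        {
            return Ok(new { Id = id, Name = "Widget" });
        }
        return NotFound();
    }
}

[TestClass]
public class ProductsControllerTests
{
    [TestMethod]
    public void Get_ReturnsOkResult_ForKnownId()
    {
        var controller = new ProductsController();

        // the action result is inspected directly, before ASP.NET Core serializes it
        var result = controller.Get(42);

        Assert.IsInstanceOfType(result, typeof(OkObjectResult));
    }
}
```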
In most cases, you do not want to write tests for non-public methods, because that would make the tests dependent on the internal implementation of a class. But there are always exceptions.
When you write unit tests, make sure not only that they succeed if the tested code works as expected, but also that they fail if the code does not work as expected. Otherwise, these tests will give you a false sense of confidence in your code.
What is the best way to assert a JSON string value in a unit test? You can compare the expected and actual values as strings, but for that to work you need to use consistent formatting that ensures the same data is always serialized into identical JSON, for example something like the JSON Canonical Form. But there are other options as well.
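One of those options is comparing the parsed JSON structurally instead of as raw strings; a minimal sketch using Newtonsoft.Json's JToken.DeepEquals and MSTest, with invented sample documents:

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Newtonsoft.Json.Linq;

[TestClass]
public class JsonAssertionTests
{
    [TestMethod]
    public void EquivalentJsonMatchesDespiteDifferentFormatting()
    {
        const string expected = "{ \"name\": \"Alice\", \"age\": 30 }";
        const string actual = "{\"name\":\"Alice\",\"age\":30}";

        // parsing first makes the comparison independent of whitespace and indentation
        Assert.IsTrue(JToken.DeepEquals(JToken.Parse(expected), JToken.Parse(actual)));
    }
}
```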
Since .NET 5, the test project templates come with the coverlet.collector NuGet package preinstalled, which can generate code coverage reports when the tests are executed. Let us take a look at how you can use this in your code editor.
It pays in the long run to learn about the various capabilities of unit testing frameworks and use them to make unit testing code more maintainable. Let us go through the process of refactoring a set of copy-pasted tests into a single parameterized, i.e. data-driven test.
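A minimal sketch of where such a refactoring might end up with MSTest's DataRow attributes; the Add method is a trivial stand-in for the real code under test:

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class CalculatorTests
{
    private static int Add(int a, int b) => a + b;

    [DataTestMethod]
    [DataRow(1, 2, 3)]
    [DataRow(-1, 1, 0)]
    [DataRow(0, 0, 0)]
    public void Add_ReturnsExpectedSum(int a, int b, int expected)
    {
        // one test method now covers all the inputs the copy-pasted tests used to
        Assert.AreEqual(expected, Add(a, b));
    }
}
```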
When I recently needed to update some existing unit tests, I noticed that many asynchronous tests were using async void in their signature. My first instinct was to fix them by using async Task instead, because that surely meant they were broken and would not detect failures correctly. But before I did that, I experimented a bit, and as far as I could tell, the tests worked as expected, correctly detecting failed assertions and unexpected exceptions. I decided to do some more research on the subject.
In an Angular application, I wanted to test a functionality that depends on the current time (by calling Date.now()). Jasmine has great built-in support for spying on methods, so my first approach was to create a spy for Date.now() that would return different values during testing. And it worked. It was not until later that I realized there was an even more elegant way to do this.
Angular's fakeAsync zone is a great tool for unit testing asynchronous code. Not only does it make it easy to wait for promises and observables to resolve, but it also gives you control over the passage of time. This makes it a nice alternative to Jasmine's Clock when working with Angular.
I really like Angular's support for unit testing HTTP requests, even if I find the documentation a bit spotty. I struggled a bit when I first tried to test error handling.
Angular does a lot to make testing your code easier. This includes a dedicated module for testing HttpClient code. However, the way HttpTestingController matches requests makes it unnecessarily difficult to test GET requests with query parameters.
Much of the Angular code is asynchronous. The fakeAsync function is very useful when testing such code, especially when not all promises and observables are publicly accessible. You can use the flush function instead of awaiting them individually. I recently learned that this does not work in all cases.
Mocks can be a helpful tool for replacing external dependencies in unit tests. However, caution is required when you embark on that route or you could end up with tests that don't really test your code under test.
By default, services in Angular are provided at the root module level. This way, the same instance of the service will be injected into any component depending on it. If a component needs a separate instance of the service for itself and its children, it can change the scope by declaring a service provider itself. However, this change also affects dependency injection in tests.
There's no specific guidance for testing Angular lifecycle hooks in the component testing scenarios covered by the Angular documentation. Maybe because they can be tested like any other method: a test can set up the component state and then manually invoke the hook. However, some caution is needed since hooks can also be called implicitly by Angular.
I was recently tasked with troubleshooting failing Angular unit tests for a component in a large codebase I was completely unfamiliar with. The tests were failing with: "Error: No value accessor for form control with unspecified name attribute".
Recently I was involved in troubleshooting an interesting issue with running unit tests for a JavaScript project that depended on a native Node module. The key step in resolving the issue was when we determined that the tests ran fine from the command line and only failed when the Mocha Test Explorer extension for Visual Studio Code was used.
The Moq mocking library added support for matching generic type arguments when mocking generic methods in version 4.13.0. The documentation doesn't go into much detail, but thanks to additional information in IntelliSense tooltips and the originating GitHub issue, I managed to quickly resolve the "no implicit reference conversion" error that I encountered at first.
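A minimal sketch of the basic It.IsAnyType matching; ISerializer is invented for illustration, and generic methods with type constraints typically need extra care beyond this simple form:

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Moq;

public interface ISerializer
{
    string Serialize<T>(T value);
}

[TestClass]
public class GenericMatchingTests
{
    [TestMethod]
    public void SerializeIsMatchedForAnyTypeArgument()
    {
        var serializer = new Mock<ISerializer>();
        serializer
            .Setup(s => s.Serialize(It.IsAny<It.IsAnyType>()))
            .Returns("{}");

        // the setup matches calls regardless of the generic type argument
        var fromInt = serializer.Object.Serialize(42);
        var fromString = serializer.Object.Serialize("text");

        Assert.AreEqual("{}", fromInt);
        Assert.AreEqual("{}", fromString);
        serializer.Verify(s => s.Serialize(It.IsAny<It.IsAnyType>()), Times.Exactly(2));
    }
}
```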
After getting asset loading working in Ionic unit tests, I wanted to also test the error handling in case an asset fails to load. Mocking the fetch call would be the easiest way to do that.
Recently I had to embed a JSON file as an asset in my Ionic 4 project. When I tried to test the code accessing the asset, I was unpleasantly surprised. The asset failed to load with a 404 error. The body of the response was "NOT FOUND".
When creating my first Vue.js project, I configured it for TypeScript support and Jest-based unit testing. This might not be the most common setup, but it does express my tooling preferences. Unsurprisingly, the first component I wanted to unit test used fetch, which meant I also had to use jest-fetch-mock to mock its calls. There were a couple of obstacles on the way to the first working test.
I often need to quickly test some JavaScript or TypeScript code, be it for a blog post or otherwise. Recently, I started using Jest for that because it's so simple to set up. For basic JavaScript code, it's enough to install the npm package. Although Jest doesn't have built-in support for TypeScript, it's almost as easy to use with the help of ts-jest.
It's the week of NT Conference 2018. I had two sessions this year. On Tuesday, I talked about a selection of common C# gotchas which can surprise even an experienced C# developer. In my second session, I explained the benefits of continuous testing and showed how to configure it for a .NET Core project in Visual Studio 2017 and Visual Studio Code.
I've been pretty busy since early summer, spending most of my spare time recording a video course about features for debugging and testing in Visual Studio 2017. Since Monday, the first part of this video course is finally available to everyone.
Although the official Ionic templates aren't preconfigured for unit testing, there is no lack of guidance in this field. It's not all that difficult to get started with unit testing any more. However, as the number of tests in the project starts to increase, it soon becomes obvious that the tests are quite slow.
Many Angular 2 and Ionic 2 APIs return RxJS observables. To unit test your code that consumes them, you will need to create your own observable streams with test data. With this approach, I wrote tests for code reacting to the user's current geolocation.
In my previous posts I've already addressed unit testing of client-side JavaScript and TypeScript code. However, since browsers don't natively support the ECMAScript 6 module system, simply transpiling the code is not enough for the tests to run in the browser.
Having unit tests usually drastically reduces the need for interactive debugging. However, being able to debug unit tests can sometimes prove very useful. Nowadays, any browser includes a fully featured debugger as part of its developer tools, and the Karma test runner has a dedicated feature for in-browser debugging. I wrote short instructions on how exactly to use this feature for a project I am working on.
Although I have now configured my Cordova project to automatically copy distribution files from installed Bower packages and reference them from HTML pages, there's still one final part of Bower-related configuration left: unit tests also require Bower dependencies. I don't want to manually update the configuration of my test runner every time I install an additional Bower package.
I first stumbled across wallaby.js while working on my previous articles about continuous testing in the JavaScript ecosystem using Karma. After reading more about it in Scott Hanselman's blog post, I decided to try it out myself.
After taking a closer look at continuous testing of JavaScript code, I'm moving on to other languages that transpile to JavaScript. I'll mostly be focusing on TypeScript, but there aren't many differences with other languages. I'll mention some of them at the end of this post.
Unit testing is a crucial part of development in any language, but it's even more important in dynamically typed and interpreted languages such as JavaScript, because there's no compiler doing at least basic validation of the code. Just like in .NET development, quick feedback is very valuable, and nothing can beat continuous testing in this field.
Ever since I first tried out NCrunch, I've never looked back. Continuous running of tests as implemented in NCrunch significantly changed how I look at tests. Since then, continuous testing has become much more mainstream, and today both ReSharper and stock Visual Studio offer support for it. It was about time I took another look at the other tools to see how they compare to NCrunch today.
JSData is a feature-rich data modeling library for JavaScript with good support for AngularJS. Although the documentation isn't perfect, it's not too difficult to get started with the library. Testing code that uses the library is a different matter altogether. There is a mocking library available, but it literally has no documentation.
Based on the Visual Studio user experience, one would think that running unit tests for Windows Store apps is not all that different from running standard .NET framework unit tests with the MSTest testing framework. When you try running them on a build server, it turns out there are a lot of differences.
Whenever I'm developing a non-desktop Windows 8 application I prefer having as much business logic in portable class libraries as possible. The test project can then be a standard .NET class library, allowing mocking frameworks and other helper libraries to be used which are not available elsewhere. Having a dependency on a native platform specific library, such as SQLite, can still complicate things a bit.
This year I had two sessions of my own at the NT conference: What's New in C# 6 and Diagnostic analyzers in Visual Studio 2015. Slides and demos for both of them are available for download.
Effective development of diagnostic analyzers strongly depends on unit testing. Debugging a diagnostic analyzer requires starting a second instance of Visual Studio, which hosts the debugged analyzer as an extension. This takes far too long to be useful during development; therefore, using tests instead is a must.
On Tuesday, Microsoft Slovenia organized the second TechDays event leading up to the 20th NT conference. The event consisted of four tracks; I had the opening session for the Visual Studio 2015 track. As always, slides and demos from my session are available for download.
The general advice is to avoid using IoC containers in your test code altogether. Unfortunately, achieving that can be challenging when IoC usage is being retrofitted into an existing application. Failing to do so will quickly result in having to reconfigure your IoC container for some of the tests.
The more I use FluentAssertions, the more I like its flexibility and extensibility. In this post I'm going to focus on assertion rules which can be used to define custom equality comparisons for specific types.
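A minimal sketch of such a custom rule; the API has changed since the original post (this uses the current Using/WhenTypeIs syntax of recent FluentAssertions versions), and the Order type is invented for illustration:

```csharp
using System;
using FluentAssertions;
using Microsoft.VisualStudio.TestTools.UnitTesting;

public class Order
{
    public int Id { get; set; }
    public DateTime CreatedAt { get; set; }
}

[TestClass]
public class EquivalencyRuleTests
{
    [TestMethod]
    public void OrdersAreEquivalentDespiteSmallClockDrift()
    {
        var expected = new Order { Id = 1, CreatedAt = DateTime.UtcNow };
        var actual = new Order { Id = 1, CreatedAt = expected.CreatedAt.AddMilliseconds(200) };

        // treat DateTime members as equal when they are within one second of each other
        actual.Should().BeEquivalentTo(expected, options => options
            .Using<DateTime>(ctx => ctx.Subject.Should().BeCloseTo(ctx.Expectation, TimeSpan.FromSeconds(1)))
            .WhenTypeIs<DateTime>());
    }
}
```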
One of the main reasons for using an IoC container like Ninject is to be able to inject different dependencies into your classes in production code and in test code. Usually you don't even need to use an IoC container in your tests, but when you do, longer-lived dependencies will usually be scoped differently in test code than they are in production code.
Unit tests best serve their purpose when they are brief and easy to understand. There are cases when it can be difficult to achieve this; nevertheless it's still worth putting in the effort, as it may pay off.
When working with complex data structures in your unit tests, you often end up with a lot of plumbing code. FluentAssertions is a library for the .NET framework that can simplify your assertion code and at the same time make the error messages much clearer when tests fail.
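A minimal sketch of replacing member-by-member assertion plumbing with a single structural comparison; the Invoice type is invented for illustration:

```csharp
using System.Collections.Generic;
using FluentAssertions;
using Microsoft.VisualStudio.TestTools.UnitTesting;

public class Invoice
{
    public string Number { get; set; }
    public List<decimal> LineAmounts { get; set; } = new List<decimal>();
}

[TestClass]
public class InvoiceAssertionTests
{
    [TestMethod]
    public void BuildsTheExpectedInvoice()
    {
        var actual = new Invoice { Number = "INV-001", LineAmounts = { 10m, 2.5m } };
        var expected = new Invoice { Number = "INV-001", LineAmounts = { 10m, 2.5m } };

        // compares the whole object graph and reports exactly which member differs on failure
        actual.Should().BeEquivalentTo(expected);
    }
}
```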
The best resource on the subject of testing navigation in MvvmCross view models that I managed to find was Slodge's blog post from almost two years ago. While it still contains useful guidance for today, there have been changes in the framework. After I got it to work in my own project, I decided to publish a more up-to-date set of instructions here.
Most of the time there's no need to use an IoC container in unit tests. Still, testing the IoC container configuration makes sense in an application, to avoid the runtime errors that occur when not all required dependencies for a created object have been registered beforehand. This doesn't necessarily mean the right implementations are being used for all dependencies, but that's not what we want to test.
A great side effect of view models being implemented in portable class libraries when using MvvmCross is the ability to unit test them on any platform, not necessarily the one you are targeting. So even when developing a mobile application, you can test it in the full .NET framework and take advantage of all the tooling available there.
Since Visual Studio 2012 Update 2 there is a project template available for unit testing Windows Phone apps: Windows Phone Unit Test App. Unlike its predecessor Windows Phone Toolkit Test Framework, it doesn't require the tests to be manually started from the device or the emulator. They can be started directly from the unit test runner's window in Visual Studio. This feature should be a good enough reason for migrating any existing test projects from the old framework to the new template.
If you're used to depending on unit testing during your development, or even practice test-driven development, working on Windows Phone applications can be a challenge. The fact that the code must always run on the phone (or an emulator) poses some limitations on the execution of the tests.
There's usually no need to unit test the logging code. If you just want to ignore it in your tests, there's nothing you need to do when using log4net. If you want to make sure you're going to log the right information, you can use MemoryAppender in your unit tests.
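A minimal sketch of the latter approach; the logged message stands in for the code under test, and the exact BasicConfigurator overload may vary between log4net versions:

```csharp
using System.Linq;
using log4net;
using log4net.Appender;
using log4net.Config;
using log4net.Core;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class LoggingTests
{
    [TestMethod]
    public void ErrorIsLogged()
    {
        var appender = new MemoryAppender();
        BasicConfigurator.Configure(appender);   // route all loggers to the in-memory appender

        var log = LogManager.GetLogger(typeof(LoggingTests));
        log.Error("Something went wrong");       // stands in for the code under test

        // inspect the captured events instead of a file or the console
        var events = appender.GetEvents();
        Assert.IsTrue(events.Any(e => e.Level == Level.Error &&
                                      e.RenderedMessage.Contains("Something went wrong")));
    }
}
```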
Unit tests are all about testing a specific unit of code without any external dependencies. This makes the tests faster and less fragile, since there are no out-of-process calls and all dependencies are under the test's control. Of course, it's not always easy to remove all external dependencies. One such example is a WCF service using Entity Framework for database access in its operations.
Writing unit tests for code that needs to be run on the UI thread can be quite a challenge in WinRT. TestMethodAttribute supports asynchronous test methods but doesn't run them on UI thread. UITestMethodAttribute runs test methods on UI thread but doesn't support asynchronous methods. Still, I managed to make the test method asynchronous and run it on the UI thread.
Since I usually write my unit tests in NUnit, I got into the habit of using parameterized tests when testing methods for which I need to check the result for many different input values. Unfortunately, using NUnit is not an option with Windows Store apps. Only MSTest is supported, which provides data-driven unit tests for such cases.
My favorite environment for running NUnit unit tests during the development process is definitely the Unit Test Runner in CodeRush. When I recently had to get a usable development environment up and running with Visual C# 2010 Express, I had to find a different solution, since extensions are not supported in the Express SKUs of Visual Studio.
Using unit testing in a real development environment isn't completely trivial. Pragmatic Unit Testing in C# with NUnit is a great book for everyone who wants to start with unit testing but just doesn't know how to do it.