TypeScript: Testing Private Members

I wrote about how to test private members three years ago, as well as last year and the year before. One article was more specific to C# and the other two were more abstract and about TypeScript, but they are all still relevant today. In this article, I’ll explain how I test private members with Jest without using any hack.

The goal of having private and public members is to mark a clear separation between what is restricted to internal use by the object that defines the member and what can be consumed from outside the scope of the object. The idea makes sense, but the keyword “private” does not necessarily deliver on it. Using “private” gets in the way of testing because you cannot exercise the internal logic, nor mock it in your tests: access is blocked at the scope of the object.

class MyClass1 {
    private variable1: string = "init";
    public function2(p1: number, p2: number): void {
        // Code here...
        if (this.function1(p1)) {
            // More code here ...
        }
        // Code here... 
    }
    private function1(p1: number): boolean { return p1 > 0; /* simplified */ }
}

There are some workarounds. One popular approach is to avoid testing these functions directly. The problem is that you end up with code that is not tested. An alternative is to test the private functions through a public function. The problem there is that you are using one function to proxy all the logic, which makes every test less of a “unit test” and more fragile, because the tests become dependent on another piece of code. If the logic of the private function remains the same but the public function changes, the code that tests the private function will fail. It sways the simplicity into a nightmare. If the private function returns a boolean but the public function returns void, testing the return of the private function requires understanding the behavior of the public function that uses it, to extrapolate a behavior that corresponds to the return value. The proxy might be a single public function or many. When there is only a single function, the choice is limited and can push you into a corner without escape. When there are many, selecting which one to use can also be problematic. In the end, the goal is to unit test, not to clear many hurdles before even testing.

Another option is to cast the object to any and access the function. The problem is that any refactoring of the name will make the function “undefined.” It reintroduces exactly the typing issues that a typed language is supposed to catch when the ground changes.

describe("MyClass1", () => {
    describe("test the private function #1", () => {
        it("public proxy function", () => {
            const x = new MyClass1();
            expect(x.function2(1, 2)).toBe("????");
        });
        it("cast with any", () => {
            const x = new MyClass1() as any;
            expect(x.function1(1)).toBeTruthy();
        });
    });
});

So, if all these solutions have weaknesses, what is the best one? The best solution that I have been using for a while now is this: do not use private. Instead, use an interface. The idea is that the concrete object is never used directly, hence it can have all its members public. Usage across the whole application is done through an interface that exposes only the members that consumers can interact with. Here is the same code as above, but with the pattern of using an interface instead of private.

interface IMyClass1 {
    function2(p1: number, p2: number): void;
}

class MyClass1 implements IMyClass1 {
    private variable1: string = "init";
    public function2(p1: number, p2: number): void {
        // Code here...
        if (this.function1(p1)) {
            // More code here ...
        }
        // Code here...
    }
    public function1(p1: number): boolean { return p1 > 0; /* simplified */ }
}

describe("MyClass1", () => {
    let x: MyClass1;
    beforeEach(() => {
        x = new MyClass1();
    });
    describe("function1 with an integer", () => {
        it("returns true", () => {
            expect(x.function1(1)).toBeTruthy();
        });
    });
});

It works perfectly in terms of testability. The unit test code has access to all members because everything is public. It is easy to invoke all members directly, but also to mock them while still keeping TypeScript’s typing. In the application, we use the interface everywhere. The only place where we use the concrete class is during initialization. Every declaration uses the interface, no exception.
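Concretely, the consuming code declares with the interface, which is what keeps the encapsulation:

// Application code: typed by the interface, so only function2 is visible.
const myObject: IMyClass1 = new MyClass1();
myObject.function2(1, 2);
// myObject.function1(1); // Compile error: function1 is not part of IMyClass1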

Furthermore, a class is easily mockable with a framework because you can access every public function and assign each of them a mock/spy/stub, which allows controlling specific branches of the code as well as keeping the scope of the test to a single unit. The key to an efficient test suite is to have every block of code tested as a unit and then to move from bottom to top with more and more integration tests.

describe("MyClass1", () => {
    let x: MyClass1;
    beforeEach(() => {
        x = new MyClass1();
        x.function1 = jest.fn();
    });
    describe("function2", () => {
        it("calls function1", () => {
            x.function2(1, 2);
            expect(x.function1).toHaveBeenCalledTimes(1);
        });
    });
});

Last but not least, functions that sit directly in a module are very hard to unit test. It is not possible to mock or spy on their dependencies. That lack of access sways my design toward always encapsulating every function in a class.
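As a minimal sketch of the difference (the names here are mine, purely for illustration):

// Hard to test: a free function exported from a module. A consumer
// importing formatDate holds a direct reference that a test cannot
// swap for a mock or a spy.
export function formatDate(d: Date): string {
    return d.toISOString();
}

// Easier to test: the same logic encapsulated in a class. A test can
// replace format with a jest.fn() on the instance, exactly like
// function1 earlier in this article.
export class DateFormatter {
    public format(d: Date): string {
        return d.toISOString();
    }
}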

In summary, encapsulation does not require relying solely on the public, private, and protected keywords. An interface is powerful because it adds protection around what can be accessed while giving developers a sane and simple way to test correctly, without detours to reach the desired piece of code.

Unit Tests and Coverage Report in Jenkins using Jest from Create-React-App

Since I left Microsoft, where I used Visual Studio Online (VSTS) as an employee, I have been using Jenkins, the continuous integration (CI) platform Netflix uses. I configured two Jenkins jobs for the project I am leading. One handles every pull request opened against master, and the second one is executed during the merge of any pull request into master. For many months, I didn’t have the unit tests running on the platform. The reason is that I am not yet used to how Jenkins works, and even after several months I find VSTS more intuitive. Regardless, I recently took the time to set up my TypeScript code, built with Create-React-App, to run its unit tests in these two Jenkins tasks. Create-React-App comes with the best testing framework I have experienced so far: Jest. My goal was to have all the unit tests run as well as to see the coverage.

Here are the steps required to have Jenkins handle your tests. The first thing is to install “jest-junit” as a dev dependency, because we need to convert Jest’s result format into JUnit.

npm install --save-dev jest-junit

The next step is to download a Python script into your repository. I have mine in “tools”. The reason is again conversion: Jest’s coverage file is not in the right format. The Python script converts the lcov output into the Cobertura format. You only need to download the script once, from this address:

wget https://raw.github.com/eriwen/lcov-to-cobertura-xml/master/lcov_cobertura/lcov_cobertura.py

A few configurations are required in the package.json. The first one is to create a test command that Jenkins executes instead of the default test command. The command calls react-scripts. I am using TypeScript, hence I have to use the react-scripts-ts command. The next parameter is the “test” command, which we still want to execute. The change starts with the test results processor. This is where you specify jest-junit to execute once the tests are done. I set my coverage to be written into the “coverage” folder, which is the folder I have ignored in the .gitignore and where my local coverage file is normally outputted. Here are the three commands I have. The first one runs the tests, the second one runs the tests and coverage for the CI (this is the new part), and the last one is for when I want to run the coverage locally.

"test": "react-scripts-ts test --env=jsdom",
"test:ci": "react-scripts-ts test --env=jsdom --testResultsProcessor ./node_modules/jest-junit --coverage --coverageDirectory=coverage",
"coverage": "react-scripts-ts test --env=jsdom --coverage",

Finally, you need a few jest-junit configurations. These can live in your package.json. I have some folders that I want to exclude from coverage, which you can do in the Jest configuration under collectCoverageFrom. I had these before starting the Jenkins configuration. Then, the coverage reporters must be lcov and text. Finally, the new configurations are under “jest-junit”. The most important one is “output”, which points again into the coverage folder. You can change the destination and file name as you wish. However, remember the location, because you will need to use the same one in a few instants inside Jenkins.

  "jest": {
    "collectCoverageFrom": [
      "**/*.{ts,tsx}",
      "!**/node_modules/**",
      "!**/build/**",
      "!**/definitionfiles/**",
      "!**/WebWrokers/**",
      "!**/*.mock.ts",
      "!src/setupTests.ts"
    ],
    "coverageReporters": [
      "lcov",
      "text"
    ]
  },
  "jest-junit": {
    "suiteName": "jest tests",
    "output": "coverage/junit.xml",
    "classNameTemplate": "{classname} - {title}",
    "titleTemplate": "{classname} - {title}",
    "ancestorSeparator": " > ",
    "usePathForSuiteName": "true"
  },

In Jenkins, you need to add two build steps and two post-build steps. The first build step runs the unit tests with the script we just added in the package.json. The type of the build step is “Execute Shell”.

npm run test:ci

The second step is also an “Execute Shell”. This one calls the Python script that we placed in the “tools” folder. It is important to adjust the paths of your lcov.info and coverage.xml. Both are in my “/coverage/” folder. The “base-dir” is the directory of the source of your code.

python tools/lcov_cobertura.py coverage/lcov.info --base-dir src/ --output coverage/coverage.xml

The next two steps are “Post-Build” actions, this time of two different types. The first one is “Publish JUnit test result report”. It has a single parameter, which is the XML file. Mine is set to “coverage/junit.xml”. The second is “Publish Cobertura Coverage Report”. It also takes a single parameter, the coverage.xml file. Mine is set to “coverage/coverage.xml”.

At that point, if you push the modifications to the package.json along with the Python script, you will see Jenkins running the unit tests and doing the conversion. It is possible to adjust the threshold of how many tests you allow to fail without breaking the build, as well as the percentage of coverage you expect. You will get a report on the build history that allows you to sort and drill into the coverage report.

How to unit test private method in TypeScript (part 2)

I already posted about how to write unit tests for private methods in TypeScript about one year ago. A few days ago, I had the same discussion I had had in my previous team concerning private methods. The situation is similar. Developers don’t test private methods and rely on public methods to reach those private methods. At first, it may sound like we are reaching inside those private methods and therefore doing proper unit testing.

The problem with going through intermediate methods to reach the one we want to test is that any change to the intermediate methods can make multiple tests fail. When a unit test fails, the goal is to know exactly which unit of your code is failing. Imagine a class A that has methods a1, a2, and a3. You want to unit test a3, but the only entry point is a1, which is the only public method. This one calls a2, which calls a3 in some particular situations. You have multiple conditions in a3 and you evaluate that you need five unit tests. The problem is that if a1 or a2 changes in the future, all five of these tests may fail when they should not.

At that point, most people understand the situation and agree to test the private methods. However, there are good ways to do it and bad ways. The worst way is to cast an instance of class A to type any and call a3 directly. Something like:

// Bad way:
var a = new A();
(a as any).a3();

The problem with the above code is that when you refactor a3 to a better name, no tool will find this call site. Moreover, this opens the door to accessing private fields or injecting new functions and fields into the class. In the end, it becomes a nightmare to maintain. We are using TypeScript to be strongly typed; our tests should remain just as strong.

In the previous article, I talked about two patterns. The first one works around encapsulation with an interface. The second had two variations.

Let’s recall the first pattern. Class A has an interface IA that is used everywhere. IA only exposes the method a1. You use the interface everywhere; the only place you don’t is when the concrete class gets injected by the inversion of control container. We can leverage this abstraction to keep strong encapsulation for the application while using an implementation that has every method public. This way, developers still only have access to a1 in our example, but in our tests we have access to everything else. This might not sound like a proper solution at first, since we open up the encapsulation on the implementing class, but it’s the cheapest way to make the code unit testable. That said, I am with you that there are other solutions, like pattern #2 presented in the previous article.
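Translated into code, the idea looks like this (a minimal sketch reusing the a1/a2/a3 names from the example):

interface IA {
    a1(): void; // The only member consumers can see
}

class A implements IA {
    public a1(): void {
        this.a2();
    }
    public a2(): void { // Public on the class, but hidden behind IA
        this.a3();
    }
    public a3(): void {
        // The logic we want to unit test directly
    }
}

// Application code depends on the interface: only a1 is reachable.
const forApplication: IA = new A();
// Test code depends on the concrete class: a2 and a3 are reachable.
const forTest: A = new A();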

The second pattern presented was about moving code around. In our example, a2 and a3 are private and could be moved out into other classes. For example, let’s say that A was a user class, a1 a method to get the user information to display on the screen, a2 a method to get the address information, and a3 a method to format the street address. This could be refactored from:

class User {
    public getUserInformationToDisplay() {
        //...
        this.getUserAddress();
        //...
    }

    private getUserAddress() {
        //...
        this.formatStreet();
        //...
    }
    private formatStreet() {
        //...
    }
}

to:

class User {
    private address: Address;
    public getUserInformationToDisplay() {
        //...
        this.address.format();
        //...
    }
}
class Address {
    private formatter: StreetFormatter;
    public format() {
        //...
        this.formatter.toString();
        //...
    }
}
class StreetFormatter {
    public toString() {
        // ...
    }
}

Originally, we wanted to test the private method formatStreet (a3), and now it’s very easy: I do not even need to care about all the classes or functions that call it, I just unit test the StreetFormatter class (the original a3). This is the best way to unit test a private method: divide the code correctly into specific classes. It is, however, costly in terms of time.
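A test for it can then be as small as this (a sketch, assuming Jest and a toString that returns the formatted street):

describe("StreetFormatter", () => {
    it("formats the street without involving User or Address", () => {
        const formatter = new StreetFormatter();
        // With a real implementation, assert on the formatted output here
        expect(() => formatter.toString()).not.toThrow();
    });
});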

I always prefer the second approach, but time constraints and the high velocity of shipping features always end up being a higher priority, even in software shops where the message is quality first. That said, I prefer using the first approach to not having any unit tests at all. It’s a good compromise that works well whatever your framework. I have used both approaches in TypeScript code built on a proprietary framework, as well as with React and now with Angular. In the end, the important thing is to have the best coverage while being sure that everything tested is solid, so the tests help the software instead of slowing down the whole development.

nDepend 2017 New Features

I have been a fan of nDepend for many years now. nDepend is a tool that runs alongside Visual Studio as well as inside it, and it is fully compatible from early versions up to Visual Studio 2017. The role of nDepend is to improve your code quality, and the newest version steps up the game with a few new features. The three main new features are called “Smart Technical Debt Estimation”, “Quality Gates” and “Issues Management”. In this post, I will focus on the first one, “Smart Technical Debt Estimation”.

The feature’s goal is to express technical debt as a cost. It also gives a grade to your code, as well as an estimated effort in time to improve that rating. Everything is configurable to your preference.

First of all, if you are coming from nDepend version 6, when opening your old project with nDepend version 2017 you will get a notification about the debt settings not being configured. You just need to click on it and you will get a message that guides you through this new feature.

From there, you will get a first rating.

From here, you can click the rating, the effort, or any of the numbers to get an idea of how to improve. But before going too deep, you had better configure your settings. For example, I ran the analysis on a project that I am phasing out, where I’ll work about 1.5 hours per day for the next year. This can be configured!

Indeed, reducing the number of hours per day and running a new analysis plunged the rating down in my case.

That said, the rating is only meaningful if you have configured the rules to align with what you believe to be debt, which means you have to set up the rules to fit your needs. Personally, I never really modified the default values; I always browsed the results and skipped those that weren’t important to me. With this new feature, I am more encouraged to shape the rules the way I want. For example, I do not mind long method names; in fact, I like them.

I’ll post in a few weeks the results of tweaking some rules, adjusting the code, and so on. This is not an easy task, because it cannot just be changed blindly. Some private methods that don’t seem to be used might be called by Entity Framework; some attribute classes may seem like great candidates to be sealed but are inherited by other classes. This is also true for other rules, like static methods that could be converted: sometimes the conversion has side effects. So far, many features of this new version of nDepend seem promising, not only for Visual Studio but now also with the integration into VSTS (Visual Studio Online) that can run on each of your builds.

How to Write an IF Statement that Determines a Value

This is pretty much a basic case if you have done some programming. How to write an IF statement is a language-agnostic problem when it comes to assigning one or multiple variables. There are two patterns that I often see. The first one assigns the variable or property directly.

if (/* whatever */) {
    this.icon = "icon1";
}
else {
    this.icon = "icon2";
}

The second approach sets the value into a temporary, scoped variable and, at the end of the IF, assigns the value to the field/property.

let iconType = "";
if (/* whatever */) {
    iconType = "icon1";
}
else {
    iconType = "icon2";
}
this.icon = iconType;

In both examples, instead of assigning to this.icon we could instead call this.featureMethod(icon). As in the two examples above, the first approach would invoke the method twice, while the second approach would assign the value to a variable and call the method once at the end. The first approach is appealing because you do not have to create a temporary variable. However, it has code duplication that doesn’t seem to bother most people. The real problem is code maintenance. If the method that needs to be invoked changes its signature, you have two places to change instead of one. If the IF grows more conditions (else if), you will have to call the method (or assign the field/property) a few more times instead of keeping a single call. Both arguments lean in favor of the second approach, and there is more. The second approach is cleaner in terms of figuring out what is going on. The first approach takes a decision and executes at the same time; if you look at the method, you cannot get a clear view of what is happening, because from top to bottom you have multiple sections that each do a condition check plus an action. Thus, the second approach is cleaner. We could even break the code down into two distinct parts, arrange and act, by refactoring the method into two sub-methods: one that determines the values to be used and a second one that sets values or calls methods, as sketched below.
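Here is a minimal sketch of that split (the names are mine, purely for illustration):

// Arrange: one function only decides the value to use
function determineIcon(isActive: boolean): string {
    return isActive ? "icon1" : "icon2";
}

// Act: one function only applies the value (or calls the method)
function applyIcon(target: { icon: string }, iconType: string): void {
    target.icon = iconType;
}

// The caller now reads as two distinct steps:
const component = { icon: "" };
applyIcon(component, determineIcon(true));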

I bring up that point because the first approach is often chosen with the argument that it’s the same as the second one. The real justification is that the first one takes two fewer lines of code, hence it is faster to type, which makes it an easy default choice. If you are using the first approach, I suggest you try the second approach a few times. You will see the benefits gradually when working on and modifying that code again in the future.

Here is an example with three temporary variables:

function getMyLink(threshold: number) {
    // Default
    let url: string = "http://aboveHundred.com";
    let className: string = "default";
    let padding: number = 0;
    // Logic
    if (threshold <= 0) {
        url = "http://underOrZero.com";
        className = "dark-theme";
        padding = 100;
    }
    else if (threshold > 0 && threshold < 100) {
        url = "http://betweenZeroAndHundred.com";
        className = "light-theme";
        padding = 200;
    }
    // Assignments
    this.url = url;
    this.className = className;
    this.padding = padding;
}

If the next iteration of changes requires assigning one of the values to a different variable, we have a single place to change. If instead of assigning we need to return something, we also have a single place to change.

function getMyLink(threshold: number) {
    // Default
    let url: string = "http://aboveHundred.com";
    let className: string = "default";
    let padding: number = 0;
    // Logic
    if (threshold <= 0) {
        url = "http://underOrZero.com";
        className = "dark-theme";
        padding = 100;
    }
    else if (threshold > 0 && threshold < 100) {
        url = "http://betweenZeroAndHundred.com";
        className = "light-theme";
        padding = 200;
    }
    // Now we return
    return `<a href="${url}" class="${className}" style="padding:${padding}">Click Here</a>`;
}

In terms of flexibility, you may have to define these extra variables, but the code is structured to be resistant to future changes. Also, when a function requires a lot of assignments, it is often the case that the method is long, which makes it even harder to get an idea of what is going on if assignments are scattered all over the function. I strongly believe that, while assigning a long list of variables can be cumbersome, assigning them directly in several places reduces readability and introduces more errors (like forgetting one assignment in a specific case, which keeps a stale value).

There are pros and cons to both, but the one I illustrated has more pros than cons in my opinion.

To recap the advantages of determining the values first and then assigning or calling once:

  • Removes code duplication
  • Easier refactoring, since there is only one signature to change
  • Clearer readability of what happens inside a method
  • Allows faster refactoring into smaller methods

How many unit tests should you write?

I have always found automated tests very powerful. In the long run, you accumulate a lot of value by having a fast way to see whether something changed that wasn’t originally planned. Tests also serve as good living documentation (if they are simple to understand). The current trend in the industry is to rely solely on automated tests; hence, all tests need to be well written. Writing good unit tests is harder than you may think. Over the last two years, I have been on multiple teams and realized that most people do not really know how to unit test properly.

One thing that needs to be clear is that even if a line is touched by a test, it doesn’t mean that line doesn’t need to be touched by multiple tests. I often see people running code coverage to decide whether they are done testing. That is wrong.

A second thing is that each method needs a different number of tests depending on its complexity. You cannot have only one successful unit test and one failing unit test and be done with a method.

Another concept that seems misunderstood is that even if you “know” the value passed as a parameter will never be null, you have to test for null if the type allows it. You cannot discard a scenario just because you know the current usage of the code.

There is also the notion that coverage is not a good metric for knowing whether you have a well-written test plan. While I agree that 100% coverage doesn’t mean 100% bug free, I do not agree that 50% coverage is acceptable. If your software quality bar is to have only 50% of your code tested, it means you are not ready to go without humans testing your software. In fact, you should probably test your code to well above 100% coverage, in the sense that multiple lines get hit by multiple tests.

Assertions are also debatable. Most of the time, people assert a lot of variables, which makes the test hard to maintain. Every test should assert a minimal set of variables; in fact, you should aim to assert one specific scenario. If you are asserting a lot, you are probably testing more than one thing, thus testing not a unit but a batch. Some exceptions are clear, for example testing the mapping of one class to another. However, a rule of thumb is to identify these big tests and make them smaller by dividing the actual code into several methods.

Arranging your unit tests should be clean. You shouldn’t have to write 20 lines of code in each of your tests to create the scenario to test. Move that arrange code into builder classes. This way, it can be reused across several tests. A bonus is that, by doing so, you can use those builder methods to seed your development database.
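As a sketch of what such a builder could look like (shown in TypeScript for brevity; the UserProfile shape and defaults are hypothetical):

interface UserProfile {
    name: string;
    language: string;
    isActive: boolean;
}

// A small builder that hides the long arrange code behind readable,
// chainable methods, reusable across tests (and for seeding data).
class UserProfileBuilder {
    private profile: UserProfile = { name: "Default", language: "en-us", isActive: true };

    public withName(name: string): this {
        this.profile.name = name;
        return this;
    }

    public inactive(): this {
        this.profile.isActive = false;
        return this;
    }

    public build(): UserProfile {
        return { ...this.profile };
    }
}

// In a test's arrange step:
const user = new UserProfileBuilder().withName("Test").inactive().build();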

Naming your tests is important. Otherwise, other developers won’t be able to find specific scenarios and might duplicate tests or not look at them at all. If you have thousands and thousands of tests, it becomes even more important. I like the naming pattern GivenXXX_WhenXXX_ThenXXX. It forces the structure: Given a specific case, When a specific configuration is provided, Then you expect a specific result.

Here is some simple code. How many tests should be created?

public class SimpleCode
{
    public string Value { get; set; }  

    public SimpleCode Merge(SimpleCode parameter)
    {
        var mergedObject = new SimpleCode();
        if (parameter.Value != null || this.Value != null)
        {
            mergedObject.Value = parameter.Value ?? this.Value;
        }
        return mergedObject;
    }
}

Most people might see two or three tests, but the answer is six.

public class SimpleCodeTests
{
    [TestMethod]
    public void GivenSimpleCode_WhenMainInstanceValueNull_AndParameterValueNull_ThenValueNull()
    {
        // Arrange
        var mainInstance = new SimpleCode() {Value= null};
        var simpleCodeParameter = new SimpleCode() { Value = null };

        // Act
        var result = mainInstance.Merge(simpleCodeParameter);

        // Assert
        Assert.IsNull(result.Value);
    }

    [TestMethod]
    public void GivenSimpleCode_WhenMainInstanceValueNotNull_AndParameterValueNull_ThenMainInstanceValue()
    {
        // Arrange
        const string VALUE = "Test";
        var mainInstance = new SimpleCode() { Value = VALUE };
        var simpleCodeParameter = new SimpleCode() { Value = null };

        // Act
        var result = mainInstance.Merge(simpleCodeParameter);

        // Assert
        Assert.AreEqual(VALUE, result.Value);
    }
    
    [TestMethod]
    public void GivenSimpleCode_WhenMainInstanceValueNull_AndParameterValueNotNull_ThenParameterValue()
    {
        // Arrange
        const string VALUE = "Test";
        var mainInstance = new SimpleCode() { Value = null };
        var simpleCodeParameter = new SimpleCode() { Value = VALUE };

        // Act
        var result = mainInstance.Merge(simpleCodeParameter);

        // Assert
        Assert.AreEqual(VALUE, result.Value);
    }

    [TestMethod]
    public void GivenSimpleCode_WhenMainInstanceValueNotNull_AndParameterValueNotNull_ThenValueParameterValue()
    {
        // Arrange
        const string VALUE1 = "Test1";
        const string VALUE2 = "Test2";
        var mainInstance = new SimpleCode() { Value = VALUE1 };
        var simpleCodeParameter = new SimpleCode() { Value = VALUE2 };

        // Act
        var result = mainInstance.Merge(simpleCodeParameter);

        // Assert
        Assert.AreEqual(VALUE2, result.Value);
    }

    [TestMethod]
    [ExpectedException(typeof(NullReferenceException))]
    public void GivenSimpleCode_WhenParameterNull_ThenException()
    {
        // Arrange
        var mainInstance = new SimpleCode();
        SimpleCode simpleCodeParameter = null;

        // Act & Assert
        mainInstance.Merge(simpleCodeParameter);
    }


    [TestMethod]
    public void GivenSimpleCode_DefaultValue_ValueNull()
    {
        // Arrange & Act
        var mainInstance = new SimpleCode();

        // Assert
        Assert.IsNull(mainInstance.Value);
    }
}

Often, people do not test the default values of a class. This is something everyone should do. What happens if at some point a default value changes but was required to be a specific value for some scenario? Without this test, you are not notified that something breaks away from the expected value. In this example, the expected value is null. If someone changes it to an empty string, it might break other scenarios that rely on comparing against null to determine some state. Another scenario I often see missing is passing a wrong value as a parameter. In this particular example, null breaks with a null reference exception. Writing the test makes you realize that you should probably handle that case and produce better code, like the following:

public SimpleCode Merge(SimpleCode parameter)
{
    if (parameter == null)
    {
        throw new ArgumentNullException(nameof(parameter));
    }
    var mergedObject = new SimpleCode();
    if (parameter.Value != null || this.Value != null)
    {
        mergedObject.Value = parameter.Value ?? this.Value;
    }
    return mergedObject;
}

In that case, I now throw an exception and will need to assert that particular exception. Why is that better than just letting the null reference exception happen? For three reasons. First, it clearly shows that the null parameter is a known scenario. Second, today it’s an exception; tomorrow it could be that we return a value. Such a change will make this particular test, which expects a specific exception, turn red, forcing an update of the test. Third, if another exception is thrown, I’ll get an error: I expect only an ArgumentNullException, not a null reference or an overflow. Any other case will be considered a bug, not the one handled.

How many tests should you write? As many as your code requires. Is that time consuming? Of course. Automated tests take about 30-50% of the development time when done well. Your code base will have as many lines of code for tests as for production code. That is the cost of being able to run a lot of tests multiple times per day, as well as not needing human resources to test. It’s worth it because all these scenarios are hard for a human to test, and since these tests are supposed to be fast, you can run them often. The gain becomes enormous as the size of your code base increases. Your confidence in changing your code remains high, and new developers coming into the team won’t be afraid to change something that disrupts the expectations. That said, I still strongly believe that automated tests should be paired with human testing before being released to the public, but this is another story. You can download and play with the code in this GitHub repository.

How to move a part of your .Net code as open source

During the process of building your .Net application, you may realize that a part of it can be reusable. If you have the authorization from the company you are working for, or if it’s simply a private side project, you can extract a part of your software and make it open source. One advantage of making it open source is that other people use it and find potential issues or improvements. You may also receive contributions from external developers. Finally, it is a good way to build a portfolio by showing the world a piece of your work. This article describes the steps to move a piece of C# code from a private repository in VSTS (Visual Studio Team Services) to GitHub. Along the way, we will see how to extract the code, create a new project, create a NuGet package, distribute the package, and have the code compiled and tested every time a commit is pushed to the public repository.

Isolation

Isolating the reusable code within your actual code base is the first step. You must be able to extract the code you think can be reusable into a unique namespace within your current project. A good idea is to have a folder where you move all the files and set up a unique namespace. This will probably break your current build because of some missing references, but it should not be a big deal to add a using at the top of each file that was using the logic. In this article, I will extract code about a Circuit Breaker.

(Image: the isolated code in its own folder)

Creating a New Project

The next step is to take the isolated code and copy the files into a new solution. First, create a new solution with Visual Studio. Create two projects: one for your code and one for your unit tests. Copy the files that you isolated into the code project. Do the same with the existing unit tests you have.

(Image: the new solution with its two projects)

Migrating Visual Studio Test Tools to an Alternative

Some modifications may be required for the unit tests if you are using Microsoft.VisualStudio.TestTools.UnitTesting. Most open source continuous integration services don’t run Visual Studio to build and run your tests; hence, you need something that can be run from the console. This doesn’t mean you will not be able to run unit tests from Visual Studio, but it means you cannot use the Microsoft Unit Testing Framework. Multiple alternatives to Microsoft Visual Studio Unit Test exist, like NUnit and xUnit. For this example, I will convert all unit tests to xUnit.

The first step is to remove the reference to Microsoft.VisualStudio.TestTools.UnitTesting. It got added when the unit test project was created. Open the project, go under References, select the DLL, and hit Delete. After that, we need to install the xUnit NuGet packages.

xunit.1.9.2
xunit.runner.visualstudio.2.1.0
xunit.runner.console.2.1.0

One is the main xUnit library; the others allow running the tests from Visual Studio and from a console. The Visual Studio runner package allows debugging, which is very handy. The console runner is required to later run the tests automatically when you push your code to the public repository. The third step is to clean up the test files. Remove all usings that reference the Visual Studio testing framework (using Microsoft.VisualStudio.TestTools.UnitTesting). The next cleanup is to remove the [TestClass] attribute; xUnit doesn’t need an attribute on top of the class. Finally, you need to change the [TestMethod] attribute to [Fact]. You will need to add the xUnit using (using Xunit) at the top of the file.

If you have the [TestCategory] attribute, you will need to use [Trait("Category", "CI")] instead. If you use [ExpectedException] to validate exceptions, you will need to remove it and use a new assertion, which is better because it’s more specific. The following code has the old attributes commented out, next to the new attribute and the new assertion.

//[TestMethod]
[Fact]
//[ExpectedException(typeof(ArgumentOutOfRangeException))]
public void GivenCircuitBreaker_WhenConstructorWithInvalidThreshold_ThenThrowException()
{
     Xunit.Assert.Throws<ArgumentOutOfRangeException>(() => new DotNetCircuitBreaker.CircuitBreaker(0, TimeSpan.FromMinutes(5)));
}

The assertion syntax is different too. For example, asserting an equality is not Assert.AreEqual(x, y) but Assert.Equal(x, y). You can find the whole list of assertions in the official xUnit documentation.

Generate Nuget Package

This step is potentially optional; however, it is convenient to have a new NuGet package generated for every new version of your assembly. The first step is to generate a nuspec file from your project, using NuGet.exe in the project folder. This generates a file with some placeholders that will be replaced with values from your assembly. You should open this file to add more detailed information, like what is new in this specific version, dependencies on other packages, etc. Here is how a nuspec file looks.

<?xml version="1.0"?>
<package >
  <metadata>
    <id>$id$</id>
    <version>$version$</version>
    <title>$title$</title>
    <authors>$author$</authors>
    <owners>$author$</owners>
    <licenseUrl>http://LICENSE_URL_HERE_OR_DELETE_THIS_LINE</licenseUrl>
    <projectUrl>http://PROJECT_URL_HERE_OR_DELETE_THIS_LINE</projectUrl>
    <iconUrl>http://ICON_URL_HERE_OR_DELETE_THIS_LINE</iconUrl>
    <requireLicenseAcceptance>false</requireLicenseAcceptance>
    <description>$description$</description>
    <releaseNotes>Summary of changes made in this release of the package.</releaseNotes>
    <copyright>Copyright 2016</copyright>
    <tags>Tag1 Tag2</tags>
  </metadata>
</package>

From that basic generated file, it’s important to add at least the files you want to include in the NuGet package. Here is how it looks for this example.

<?xml version="1.0"?>
<package >
  <metadata>
    <id>$id$</id>
    <version>$version$</version>
    <title>$title$</title>
    <authors>$author$</authors>
    <owners>$author$</owners>
    <projectUrl>https://github.com/MrDesjardins/DotNetCircuitBreaker</projectUrl>
    <requireLicenseAcceptance>false</requireLicenseAcceptance>
    <description>$description$</description>
    <releaseNotes>Initial release</releaseNotes>
    <copyright>Copyright 2016</copyright>
    <tags>circuit breaker</tags>
  </metadata>
  <files>
		<file src="bin\Release\DotNetCircuitBreaker.dll" target="lib" />
		<file src="bin\Release\DotNetCircuitBreaker.pdb" target="lib" />
  </files>
</package>

Two important details. First, the project URL is set to GitHub, where we will push the code in the next step. It is normal at this point that you might not know the complete URL, but usually it’s your GitHub user name followed by the name of your project. Second, the files element contains the release library and the PDB. The DLL is the actual code that will be used by the consumer of your package, and the PDB is the debugging file that Visual Studio can use.

Once you have built your solution (in Release mode), you can try to generate the NuGet package manually:

nuget pack DotNetCircuitBreaker.csproj -Prop Configuration=Release

It’s important to set the configuration, otherwise NuGet may give you errors such as “authors is required” or “description is required”. In any case, we do not want to keep generating the package manually; later I will show you how to generate and publish it automatically.

For the first publish of your NuGet package, let’s upload the manually created package. Soon, the upload will happen on every publish from GitHub, so you will not have to do it manually. Create a NuGet.org account and, in your profile, click “Upload a package”.

Publish your Source Code

We are almost ready to publish our code on GitHub, a public repository. GitHub is widely known and used by several huge companies, Microsoft included. It is a good place to publish your source code because it has a huge volume of users, it’s easy to use, and a lot of developers already have an account, which lets them participate in your project easily. Before setting up GitHub, we must create two files. The first one describes the project and is used as the main page when people visit your GitHub project; it is a markdown file and must be named readme.md. The second one is an ignore file that keeps binaries and other development files from being published; you can find a template on GitHub, simply copy it and save it at the root of your project. The file structure should look like this:

(Image: the file structure with readme.md at the root)

The next step is to create your project on GitHub.

(Image: the GitHub new repository page)

This is very fast, and once the repository is created you can go to the root of your project in a console and use Git:

git init
git remote add origin https://github.com/MrDesjardins/DotNetCircuitBreaker.git
git add .
git commit -m "first commit"
git push -u origin master

Continuous Integration

The next step is to have the code built and the tests run every time you push new code to GitHub. The goal is to be sure that what you publish is in a good state. We can also add a step so that, if everything is fine and the version of your assembly increased, a NuGet package is generated. You can find all the steps in this previous post: Continuous Integration with Travis CI.

Consume

Once the code is published, the last step is to consume your package from your open source project. Go back to your original project where the code originated, then go into References and get the NuGet package you published. The code is now out of your main repository, which allows you to work on it without affecting your main project. People outside your main project can contribute, find issues, and fix this open source part. You can use the package yourself in multiple projects, even in a different company. Finally, you gain some exposure for what you can do, and you save many hours for other people who can benefit from your code.

AutoMapper.Mapper.CreateMap : Dynamically creating maps will be removed in version 5.0

As of AutoMapper version 4.2, the static method CreateMap is obsolete and will be removed in version 5. For years, people have configured their mappings with this static method. Most people have divided their mappings into multiple classes across their model (domain) classes. While this can be a big task for a huge solution, in most cases the migration is simple. This article shows how to migrate from the static CreateMap method to a custom static variable that holds the whole configuration. While the new pattern is great for injection, it doesn’t mean you should change your whole solution right now to go in that direction.

First of all, if you had a custom interface or base class for the classes that define your mappings, you should use AutoMapper.Profile instead. Having your class derive from this base class lets you override a method called Configure, from which you can call base.CreateMap. Since you access the CreateMap method non-statically and with the same signature, the migration is easy. Here is an example.

public class OverallMapping: Profile
{
	protected override void Configure()
	{
		base.CreateMap<HealthOverall, HealthOverallViewModel>();
	}
}

The last step is to have all profiles loaded into your static variable. The easiest way is to use reflection to loop through all the types and collect those that inherit from Profile. The method that uses reflection is called once in your Global.asax.cs during application start. Since it’s called once, the reflection call is not a performance problem for your web application.

public static class MappingConfiguration
{
	public static void CreateMapping()
	{
		var profiles = (from type in typeof(MappingConfiguration).Assembly.GetTypes()
						where typeof(Profile).IsAssignableFrom(type) && !type.IsAbstract && type.GetConstructor(Type.EmptyTypes) != null
						select type).Select(d => (Profile)Activator.CreateInstance(d))
									 .ToArray();

		var config = new MapperConfiguration(cfg =>
		{
			foreach (var profile in profiles)
			{
				cfg.AddProfile(profile);
			}
		});
		MapperFacade.MapperConfiguration = config.CreateMapper();
	}
}

public static class MapperFacade
{
	public static IMapper MapperConfiguration;
}

This static class and its static property hold all the mappings and are what you use in your application to map anything.

var viewModel = MapperFacade.MapperConfiguration.Map<HealthOverall, HealthOverallViewModel>(model);

That’s it! Pretty straightforward. What is time consuming is changing every mapping configuration, but even that stays limited if your application already had a good division of how mappings are defined.

How to diagnose slow code with Visual Studio

During the development of one feature, I noticed the performance was very slow in some scenarios. It was not obvious at first, because the task was simply to update a user profile. The user profile in question is stored in a single table; it’s a pretty straightforward task. Before persisting the data, some validations are done, but that is it.

This is where Visual Studio can be very useful with its integrated Diagnostic Tools. The Diagnostic Tools provide information about events, and on any of them you can go back in time and replay the call stacks, which is pretty useful. They also give timing information, CPU usage, and memory usage. To start diagnosing, simply attach Visual Studio to the process you want to diagnose. Then open Visual Studio’s diagnostic tools, located in the top menu under Debug > Profiler > Performance Explorer > Show Performance Explorer.

Here is an example of the output that I got for my performance problem.

(Image: the Diagnostic Tools output showing the Entity Framework events)

The Visual Studio Diagnostic Tools events include Entity Framework SQL statements. This is where I realized that not only was the user’s table updated, but also hundreds of updates were issued against a table linked to it. There was the performance bottleneck, the culprit! I never expected to update anything related to that table, just the main user’s table.

The Entity Framework code was like this:

public void Update(ApplicationUser applicationModel)
{
	//Update the password IF necessary
	var local = UnitOfWork.Set<ApplicationUser>().Local.FirstOrDefault(f => f.Id == applicationModel.Id);
	if (local != null)
	{
		UnitOfWork.Entry(local).State = EntityState.Detached;
	}
	UnitOfWork.Entry(applicationModel).State = EntityState.Modified;
	if (string.IsNullOrEmpty(applicationModel.PasswordHash))
	{
		UnitOfWork.Entry(applicationModel).Property(f => f.PasswordHash).IsModified = false;
	}
	UnitOfWork.Entry(applicationModel).Property(f => f.UserName).IsModified = false;
	UnitOfWork.Entry(applicationModel).Property(f => f.CreationDateTime).IsModified = false;
	UnitOfWork.Entry(applicationModel).Property(f => f.ValidationDateTime).IsModified = false;
	UnitOfWork.Entry(applicationModel).Property(f => f.LastLogin).IsModified = false;
	UnitOfWork.Entry(applicationModel).Property(f => f.SecurityStamp).IsModified = false;
	UnitOfWork.Entry(applicationModel).Property(f => f.Language).IsModified = false;
}

As you can notice, nothing is done directly on the property that holds the collection of “reputation”. The problem is that if the user has 250 objects in that collection, then for some unknown reason Entity Framework issues 250 updates. Since we just want to update the first name, the last name, and a few other basic properties, we need to make sure to remove those unwanted updates. After some modifications to the Entity Framework code, like nulling every collection before updating, only a single SQL statement was produced, hence the performance was back at full speed.

How to have MsTest localized by Attribute?

Working with multiple languages requires testing in multiple languages too. A simple use case: if you have custom Asp.Net routing, you might want to test where an English route goes and do the same for a French one. This goes beyond text; it also covers how numbers and datetimes are handled. The traditional way to unit test multiple localizations is to set the current thread’s culture at the beginning of the test, run the logic, assert, and set the thread back to its original value.

The problem is that all your tests need to set the thread culture manually. It would be better to have an attribute on top of the test method and have it handle the thread culture. Unfortunately, MSTest’s “TestMethod” attribute is sealed, which means you cannot inherit from it. The workaround is to create a custom attribute. However, this comes with the challenge of hooking into the MsTest pipeline so that MsTest reads the attribute and acts accordingly. This is what we will discuss in this article: how to use ContextAttribute, IMessageSink, IContributeObjectSink, and so on.

First, let’s create a standard attribute that we will put on top of each test that needs localization, in combination with TestMethod. The usage will look like this:

[TestMethod]
[LocalizedTest(LocalizedSection.EN_NAME)]
public void MyTest()
{
    //... Your test
}

The attribute has a parameter, which is the culture name that we want the thread to use. The culture name is “en-us” for US English or “fr-ca” for Canadian French.

public class LocalizedTestAttribute:Attribute
{
    public string CultureName {get; set;}
    public LocalizedTestAttribute(string cultureName)
    {
        this.CultureName = cultureName;
    }
}

A second attribute is required on top of the tested class. This is how we notify the Microsoft test framework that we want to hook into the pipeline of tasks the testing framework goes through while executing tests. This attribute inherits from ContextAttribute, in the System.Runtime.Remoting.Contexts namespace. The role of that class is to define a collection of possible hooks. In this case, we have only one hook, which we call LocalizedTestMessage. Those hooks are called “messages”. I am using a helper named TestProperty, which handles the generic code for every message. This generic class is inspired by the MsTestExtension source code.

public class LocalizedTestContextAttribute: ContextAttribute
{
    public LocalizedTestContextAttribute():base("LocalizedTest")
    {

    }

    public override void GetPropertiesForNewContext(IConstructionCallMessage msg)
    {
        if (msg == null)
            throw new ArgumentNullException("msg");
        msg.ContextProperties.Add(new TestProperty<LocalizedTestMessage>()); //We add 1 new message into the test pipeline
    }
}

public class TestProperty<T> : IContextProperty, IContributeObjectSink where T : IMessageSink, ITestMessage, new()
{

    private readonly string _name = typeof(T).AssemblyQualifiedName;

    [SecurityPermission(SecurityAction.LinkDemand, Flags = SecurityPermissionFlag.Infrastructure)]
    public bool IsNewContextOK(Context newCtx)
    {
        return true;
    }


    [SecurityPermission(SecurityAction.LinkDemand, Flags = SecurityPermissionFlag.Infrastructure)]
    public void Freeze(Context newContext)
    {
    }


    public string Name
    {
        [SecurityPermission(SecurityAction.LinkDemand, Flags = SecurityPermissionFlag.Infrastructure)]
        get { return _name; }
    }


    [SecurityPermission(SecurityAction.LinkDemand, Flags = SecurityPermissionFlag.Infrastructure)]
    public IMessageSink GetObjectSink(MarshalByRefObject obj, IMessageSink nextSink)
    {
        T testAspect = new T();
        testAspect.AddMessageSink(nextSink);
        return testAspect;
    }
}

public interface ITestMessage
{
    void AddMessageSink(IMessageSink messageSink);
}

Finally, we need to define our LocalizedTest hook (message). This class defines what is done before and after the execution of the test. It is able to inspect the tested method to check whether the LocalizedTest attribute is defined on it. If yes, it proceeds; otherwise, it executes the method without changing anything. When the attribute is present, it backs up the current thread culture, gets the culture name from the attribute, and sets it on the test thread. It executes the test, then restores the original culture.

public class LocalizedTestMessage : BaseTestMessage<LocalizedTestAttribute>, IMessageSink, ITestMessage
{
    private IMessageSink nextSink;

    [SecurityPermission(SecurityAction.LinkDemand, Flags = SecurityPermissionFlag.Infrastructure)]
    public IMessage SyncProcessMessage(IMessage msg)
    {
        if (msg == null)
            throw new ArgumentNullException("msg");
        CultureInfo currentCultureInfo = null;
        CultureInfo currentUICultureInfo = null;

        //Before test get value to set back after test
        LocalizedTestAttribute localizationAttribute = base.GetAttribute(msg);
        if (localizationAttribute != null)
        {
            currentCultureInfo = System.Threading.Thread.CurrentThread.CurrentCulture;
            currentUICultureInfo = System.Threading.Thread.CurrentThread.CurrentUICulture;
            System.Threading.Thread.CurrentThread.CurrentCulture = new System.Globalization.CultureInfo(localizationAttribute.CultureName);
            System.Threading.Thread.CurrentThread.CurrentUICulture = System.Threading.Thread.CurrentThread.CurrentCulture;
        }

        //Execute test
        IMessage returnMessage = nextSink.SyncProcessMessage(msg);

        //After test set back value
        if (localizationAttribute != null && currentCultureInfo!= null && currentUICultureInfo!=null)
        {
            System.Threading.Thread.CurrentThread.CurrentCulture = currentCultureInfo;
            System.Threading.Thread.CurrentThread.CurrentUICulture = currentUICultureInfo;
        }
        return returnMessage;
    }

      
    [SecurityPermission(SecurityAction.LinkDemand, Flags = SecurityPermissionFlag.Infrastructure)]
    public IMessageCtrl AsyncProcessMessage(IMessage msg, IMessageSink replySink)
    {
        throw new InvalidOperationException();
    }

    public IMessageSink NextSink
    {
        [SecurityPermission(SecurityAction.LinkDemand, Flags = SecurityPermissionFlag.Infrastructure)]
        get
        {
            return nextSink;
        }
    }

    public void AddMessageSink(IMessageSink messageSink)
    {
        nextSink = messageSink;
    }
}

public abstract class BaseTestMessage<TAttribute> where TAttribute : Attribute
{

    [SecurityPermission(SecurityAction.LinkDemand, Flags = SecurityPermissionFlag.Infrastructure)]
    protected TAttribute GetAttribute(IMessage message)
    {
        string typeName = (string)message.Properties["__TypeName"];
        string methodName = (string)message.Properties["__MethodName"];
        Type callingType = Type.GetType(typeName);
        MethodInfo methodInfo = callingType.GetMethod(methodName);
        object[] attributes = methodInfo.GetCustomAttributes(typeof(TAttribute), true);
        TAttribute attribute = null;
        if (attributes.Length > 0)
        {
            attribute = attributes[0] as TAttribute;
        }
        return attribute;
    }
}

It would be even better if we could avoid having two different attributes on each test class and method, but this solution still lets us avoid handling the thread culture in every test. It’s also important to note that this only works for MsTest. If you are using another testing framework, like NUnit or xUnit, this will not work; however, those frameworks have their own mechanisms to handle pre- and post-test steps. Microsoft’s documentation on these infrastructure classes is very slim. They come from an era when Microsoft was less open than it is now.