Burning the last Git commit into your telemetry/logs

I enjoy knowing exactly what happens in the systems that I am actively working on and that I need to maintain. One way to ease the process is to know precisely which version of the system was running when an error occurred. There are many ways to proceed, like having a sequential number that increases, or having a version number (major, minor, patch). I found that the easiest way is to leverage the Git hash. The reason is that not only does it point me to a unique place in the life of the code, but it also removes the manual incrementing that a version number requires, or the need to use or build something to increment a number for me.

The problem with the Git hash is that you cannot generate it reliably locally. The reason is that every change you make must be committed and pushed, so a locally generated hash will always be at least one commit behind. The idea is to inject the hash at build time in the continuous integration (CI) pipeline. This way, the CI is always running on the latest code (or a specific branch), knows exactly which code is being compiled, and can inject the hash without having to store anything.

At the moment, I am working with Jenkins and React using react-scripts-ts. I only had to change the build command to inject the output of a Git command into a React environment variable.

"build": "REACT_APP_VERSION=$(git rev-parse --short HEAD) react-scripts-ts build",

In the code, I can get the version by using the process environment.

const applicationVersion = process.env.REACT_APP_VERSION;

The code is minimal and leverages the Git system and an environment variable that can easily be read inside a React application. There is no mechanism to maintain, and the hash is a source of truth. When a bug occurs, it is easy to set up the development environment at the exact commit and to use the rest of the logs to find out how the user reached the exception.
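The substitution itself can be checked from any terminal. Here is a minimal sketch; the fallback to "dev" is my addition for machines where git or a repository is unavailable, not part of the original build command:

```shell
# Capture the short commit hash, falling back to "dev" when git is
# unavailable or the current directory is not a repository (the fallback
# is added for illustration only).
REACT_APP_VERSION=$(git rev-parse --short HEAD 2>/dev/null || echo dev)
export REACT_APP_VERSION
echo "Building version $REACT_APP_VERSION"
```

On the CI agent, where the checkout always exists, the variable simply carries the hash of the commit being built.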

TypeScript and React – Continuous Integration for Pull Requests from 3 minutes to 1 minute

At Netflix, software engineers own the full lifecycle of an application, from gathering the requirements, to building the code, to handling the release process, to the deployment, which includes configuring AWS for DNS and load balancing. I personally like to have, on every pull request, a build that makes sure that everything compiles (and not only on my machine), as well as a run of my unit tests to make sure that no regression is introduced. For several months, this process was taking 3 minutes, plus or minus 10 seconds. This was satisfying for me; it was accomplishing its main goal. I was expecting it to take some time because of the nature of the project: first, I am using TypeScript; second, I am using node modules; and third, I need to run the unit tests. The code is relatively small on that project. I wrote about 36k lines in the last 11 months, and there are about 900 unit tests that need to run.

Moving from 3 minutes to 1 minute 30 seconds

The first step was to add the unit tests. Yes! For the first few months, only the build was running, mainly because we are using Bitbucket and Jenkins and I never took the time to configure everything — and it is not straightforward to get coverage with Jenkins for JavaScript code. Nevertheless, I was using the create-react-app "react-scripts-ts build" command, which is way slower than running "react-scripts-ts test --env=jsdom --coverage". In my case, the change trimmed 1 minute 30 seconds.

Still, within the remaining 1 minute 30 seconds, I was observing a waste of 1 minute fetching node_modules with the command "npm install", even though my step specified "npm ci". The difference between "install" and "ci" is that the former is slower because it resolves and updates the package-lock.json, a task the "ci" command skips by relying on the existing, already generated package-lock.json file.

Moving from 1 minute 30 seconds to 1 minute 10 seconds

The install command was bothering me, and I found out that we had some internal process taking over some steps. To keep it short, I had to make some modifications. First, in Jenkins, you can preserve folders like node_modules. This is under "Build Environment". Do not forget to check "Apply pattern also on directories". However, the problem is still npm: the "ci" command removes node_modules, so we are no more advanced than before. So, the idea is to go back to the install command.


There is still some room for improvement. To be honest, I had to avoid deleting the whole workspace to make it work; Jenkins kept removing node_modules regardless of the syntax I was using. I also found it suspicious that it takes 20 seconds for npm install to figure out that nothing has changed — that is very slow. I'll have to investigate further with Yarn.
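Another way around this — a sketch of my own, not the Jenkins configuration described above — is to key the install on a hash of package-lock.json and skip it entirely when nothing changed. The snippet below simulates the idea in a temporary directory with a fake lockfile so it is self-contained; the actual npm step is left as a comment:

```shell
# Simulate lockfile-keyed caching of node_modules. A stamp file records the
# hash of package-lock.json from the last install; when the hash matches,
# the (slow) install step is skipped.
workdir=$(mktemp -d)
cd "$workdir"
echo '{"lockfileVersion": 2}' > package-lock.json   # stand-in lockfile

stamp=.lockfile.hash
current=$(sha256sum package-lock.json | cut -d' ' -f1)

if [ -f "$stamp" ] && [ "$(cat "$stamp")" = "$current" ]; then
  result="skipped install: lockfile unchanged"
else
  # npm ci   # the real step would run only on this branch
  echo "$current" > "$stamp"
  result="installed dependencies"
fi
echo "$result"
```

On a second run in the same workspace the stamp matches and the install branch is skipped, which is exactly the behavior the preserved node_modules folder is meant to enable.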

Unit Tests and Coverage Report in Jenkins using Jest from Create-React-App

Since leaving Microsoft, where I worked as an employee on Visual Studio Online (VSTS), I have been using Jenkins, the continuous integration (CI) platform Netflix uses. I configured two Jenkins jobs for the project I am leading: one handles every pull request opened against master, and the second is executed during the merge of any pull request into master. For many months, I didn't have the unit tests running on the platform. The reason is that I am not yet used to how Jenkins works, and even after several months I find VSTS more intuitive. Regardless, I recently took the time to set up my TypeScript code, which uses Create React App, to run my unit tests in these two Jenkins jobs. Create React App comes with the best testing framework I have experimented with so far, which is Jest. My goal was to have all the unit tests run as well as to see the coverage.

Here are the steps required to have Jenkins handle your tests. The first thing is to install "jest-junit" as a dev dependency. The reason is that we need to convert Jest's output format into JUnit.

npm install --save-dev jest-junit

The next step is to download a Python script into your repository; I have mine in "tools". The reason is, again, conversion: Jest's coverage file is not in the right format. The Python script converts the lcov output into the Cobertura format. You can download the script once from this address.

wget https://raw.github.com/eriwen/lcov-to-cobertura-xml/master/lcov_cobertura/lcov_cobertura.py

A few configurations are required in the package.json. The first one is to create a test command that Jenkins executes instead of the default test command. The command calls react-scripts; since I am using TypeScript, I have to use the react-scripts-ts command. The next parameter is the "test" command, which we still want to execute. The change starts with the test results processor: this is where you specify jest-junit to execute once the tests are done. I set my coverage to be written into the "coverage" folder, which is the folder I have ignored in .gitignore and where my local coverage files are normally output. Here are the three commands I have. The first one runs the tests, the second runs the tests with coverage for the CI (this is the new part), and the last one is for when I want to run the coverage locally.

"test": "react-scripts-ts test --env=jsdom",
"test:ci": "react-scripts-ts test --env=jsdom --testResultsProcessor ./node_modules/jest-junit --coverage --coverageDirectory=coverage",
"coverage": "react-scripts-ts test --env=jsdom --coverage",

Finally, you need a few jest-junit configurations. These can go in your package.json. I have some coverage folders that I want to exclude, which you can do in the Jest configuration under collectCoverageFrom; I had these before starting this Jenkins setup. Then, the coverage reporters must be lcov and text. Finally, the new configurations are under "jest-junit". The most important one is "output", which again points into the coverage folder. You can change the destination and file name as you wish; however, remember the location, because you will need to use the same one in a few instants inside Jenkins.

  "jest": {
    "collectCoverageFrom": [
      "src/**/*.{ts,tsx}",
      "!coverage/**"
    ],
    "coverageReporters": [
      "lcov",
      "text"
    ]
  },
  "jest-junit": {
    "suiteName": "jest tests",
    "output": "coverage/junit.xml",
    "classNameTemplate": "{classname} - {title}",
    "titleTemplate": "{classname} - {title}",
    "ancestorSeparator": " > ",
    "usePathForSuiteName": "true"
  }

In Jenkins, you need to add two build steps and two post-build steps. The first build step runs the unit tests with the script we just added to the package.json. The type of build step is "Execute Shell".

npm run test:ci

The second step is also an "Execute Shell". This one calls the Python script that we placed in the "tools" folder. It is important to adjust the paths of your lcov.info and coverage.xml; both of mine are in the "/coverage/" folder. The "base-dir" is the directory of your source code.

python tools/lcov_cobertura.py coverage/lcov.info --base-dir src/ --output coverage/coverage.xml

The next two steps are post-build actions, this time of two different types. The first one is "Publish JUnit test result report". It has a single parameter, the XML file; mine is set to "coverage/junit.xml". The second is "Publish Cobertura Coverage Report". It also takes a single parameter, the coverage.xml file; mine is set to "coverage/coverage.xml".

At that point, if you push the modifications to the package.json and the Python script, you will see Jenkins running the unit tests and doing the conversion. It is possible to adjust the threshold of how many tests are allowed to fail without breaking the build, as well as to set the percentage of coverage you expect. You will get a report in the build history that allows you to sort and drill into the coverage report.

How to set up your Kanban Board for your Dev team in 5 minutes

We are using VSTS on Microsoft Teams, like many products at Microsoft. A few weeks ago, I decided to change our process from using the backlog as the main view to using the Kanban board. As you may know, I worked for many years on VSTS, and it feels natural to take a quick glimpse at the Kanban board to know how everything is going. The reasons are multiple, but the main one is that it was hard to figure out the state of each member's work. It was also hard to have a distinct view of what the front-end and back-end teams were working on. The last real benefit for me is that we do not need to waste time estimating every requirement and bug; we just sort them in order of priority and blitz through them. In this post, I'll show you a quick way to build a Kanban board that works for developers.

Starting with the Kanban board is not obvious. You need to go into your project, click "Work", then "Backlog". Then, you need to select the backlog items on the left. There is no indication of "Kanban" anywhere.

When you arrive there the first time, it's even more disconcerting. You will see four columns: "New", "Approved", "Committed" and "Done". These are the default Kanban columns, which don't mean anything if you do not use that flow — which is almost always the case. So, we need to configure the columns. Configuring the Kanban board is done by clicking the gear at the top right of the board.

This dialog contains many options, and they are not ordered in the sequence you would follow to create a new board. What you want to do is skip all the Cards options, which you should define later. Click "Columns" under "Board", and let's change the columns into a flow that has worked for me in the past as well as now. The first column to change is "Approved", which becomes "Specs/Investigations". This column is the first step, used by your project manager or engineering manager to indicate which work items are being detailed. It is also good for bugs, which land there to be investigated before a fix is coded. This column also gets the special option "Split column into doing and done", which divides the column in two: the first part is where someone commits to working on an item, and the second is where the investigation or specification is written. This way, engineers know that they can pick work from the column only at that point, not before. Having this column makes it very clear when something is ready to move to the next step, which is a must for teams that work fast and in an agile way. Items in that column should always be picked up from top to bottom by whoever is available to grab new work.

The next columns are "Code", "Tests", "PullRequest" and "Verification", and we keep "Done". The Code column is where the developer is actually working on the code to build the requirement or fix the bug. The Tests column is where tests are created. Of course, unit tests can be written at the same time as the code, but this column ensures that additional integration tests or even scenario tests are created. Sometimes items do not move from there for a while, but it is still good to see whether something gets stuck for a long time. The PullRequest column means that the code is ready to be reviewed by others. Finally, when the pull request is merged into the master branch, you can move the work into the Verification column, which means that someone should verify that everything works as intended in the master branch. This verification can be done by people who are not even developers, since it is a check that the feature works as intended. Once everything is good, the item goes into Done.

Remember that both bugs and requirements use this board, so we must ensure that bugs are managed with requirements for the board to be set up this way. To change the behavior, in the board settings, go under "General" and "Working with bugs" and check "Bugs are managed with requirements".

The next and last step is to separate the front-end and back-end work. This way, you can see what can block each other. This can be done by using the concept of swim lanes: we will create one for the front end and one for the back end, which divides the board into two lanes.

Of course, you can customize your board a lot more, and VSTS is great for things like coloring rules. However, within 5 minutes you can have your team up and running without doing too much.

nDepend 2017 New Features

I have been a fan of nDepend for many years now. nDepend is a tool that runs alongside Visual Studio as well as inside Visual Studio, fully compatible from early versions up to Visual Studio 2017. The role of nDepend is to improve your code quality, and the newest version steps up the game with a few new features. The three main new features are called "Smart Technical Debt Estimation", "Quality Gates" and "Issues Management". In this post, I will focus on the first one, "Smart Technical Debt Estimation".

The goal of the feature is to express the technical debt as a cost. It also gives a grade to your code, as well as an estimated effort, in time, to improve that rating. Everything is configurable to your preference.

First of all, if you are coming from nDepend version 6, when opening your old project with nDepend version 2017 you will get a notification about the debt not being configured. You just need to click on it, and you will get a message that guides you through this new feature.

From there, you will get a first rating.

From here, you can click the rating, the effort or any of the numbers to get an idea of how to improve. But before going too deep, you had better configure your settings. For example, I ran this analysis on a project that I am phasing out, where I'll work about 1.5 hours per day for the next year. This can be configured!

Indeed, reducing the number of hours per day and running a new analysis plunged the rating down in my case.

That said, the rating is meaningful only if you have configured the rules to align with what you believe to be debt, which means that you have to set up the rules to fit your needs. I never really modified the default values; I always browsed the results and skipped those that weren't important to me. With this new feature, I am more encouraged to shape the rules the way I want. For example, I do not mind long method names; in fact, I like them.

I'll post in a few weeks the results of tweaking some rules, adjusting the code, and so on. This is not an easy task, because the code cannot just be changed blindly. Some private methods that don't seem to be used might be called by Entity Framework; some attribute classes may seem like great candidates to be sealed, but they are inherited by other classes. The same is true for other rules, like static methods that could be converted: sometimes there are side effects. So far, many features of this new version of nDepend seem promising, not only in Visual Studio but now also with the integration with VSTS (Visual Studio Online) that can be used on each of your builds.

The Back Side of the DevOps Trend

At this moment, if you do not agree that DevOps is the best thing in the world, you are out — well, not in the "gang". Indeed, Facebook does it, Google does it, Amazon does it, and Microsoft is also in transition to doing it. Everybody does it, so it must be the best thing in the world? Well, if you set aside the fact that DevOps has always existed in small companies, it is nothing newer than consolidating your workforce's expertise into one big bucket. The concept of DevOps is that an individual can contribute almost end-to-end on a product. If you want, this is the opposite of what Henry Ford used to be efficient; it is the opposite of divide and conquer. So, instead of having one analyst, one developer, one tester and one IT person, you use a single person who knows everything. This is something you have to do if you run a small company, because you do not have enough money to support all those kinds of expertise spread across different people. With DevOps, the same person sets up the server, goes to talk with customers, does the planning, codes the product and tests it. The reason huge corporations are going down that path is mostly that it reduces the time to deploy features. The justification is true to some extent, because you have less communication overhead and also less waiting for a particular set of skills. Also, you have a team that has a better overall picture of the system. So far, everything is "better" on paper. Who can argue against everyone being easily replaceable, or against anyone doing better development because he or she knows how to code from a testing perspective or a deployment perspective? One last thing: this also merges the support team with the development team, so there is no longer one team doing the new stuff and another repairing it.

However, here are some problems. If you are "DevOps-ing" only one part of your organization, then it is not really more efficient. For example, if you have three levels of managers who must agree on every decision, then you have DevOps for your coding operations, not for the overall development operations of your product. In a small company, you talk directly to the boss and things can move fast — sometimes the boss is also a developer! It works. However, if you need to include in the loop your lead developer, your managers at levels 1, 2, 3, 4..., then your product manager, who must go into meetings with other managers, you lose a lot of the benefits, like deploying new features fast, having innovative features developed, and so on. In fact, from my experience, people are waiting, and while they are waiting, they are trying to understand the fields of knowledge that they do not master. When they are not waiting, they are doing stuff, but most of the time goes to researching the knowledge they lack. In the end, the proportion of time spent developing the feature itself is not high at all. Also, since your development team is handling all the support, the development that was supposed to be more efficient is cut by the time spent understanding every bug, making the fix, testing again, and so on. In a single week, the development time shrinks rapidly.

DevOps has a bigger caveat if you have a big piece of software: the code. For example, software built by 10, 100 or 500 developers has more divergent coding standards across the product, and also a lot more code for the same development period. This means that, just for development tasks, a huge investment of time is required to understand the current code base. And that is without mentioning that so many technologies are involved now that reading the code base forces you to know more than just 2-3 languages; the count can very quickly go above 6, 7, 8... At that point, we are not even talking about front-end versus back-end code. DevOps merges front-end and back-end development, but also, as I said, analysis tasks and skills; design tasks, tools, standards and meetings; coding with different technologies, standards, best practices, debugging and software; testing with unit test frameworks that differ from technology to technology, plus unit, integration and functional tests; deploying locally, on a server or in the cloud; infrastructure with clusters, load balancing, network VPNs and DNS; and so on. So, after a while, an expert in some field becomes average in every field.

It is impossible to have a single individual who is an expert in CSS, JavaScript, TypeScript, Angular, ASP, SQL, ORMs, REST services, security, cloud storage, deployment, unit testing, and so on. Indeed, a single individual can be an expert in multiple technologies and systems, but not all of them. This is why the Henry Ford model was good for producing things that do not change: every phase was mastered by a single entity. In software, everything changes, so a purely segregated model does not apply; but at the other end of the spectrum, the "know it all" model does not either. This is even more true with the new trend of shipping new versions so fast. Today, the code base works with version 1 of a framework; in a month, version 2 will be out... multiply that by the tremendous number of frameworks, libraries and technologies required, and you are changing something almost every week. Keeping track of the right way to do things becomes harder and harder. Of course, you can learn on every task you must do, but still, you will know the basics without being an expert. The cherry on top is that everything is now shipped so fast that it contains bugs, and if you stumble into one, you are often told "it is open source, fix it". Indeed.

So, I am all for having a wide range of knowledge. I have never been someone who divides front-end development from back-end development; in my mind, you must know how it works from the request that asks for the web page to how the data is fetched from the database (or another source). I am also all for having developers write unit tests and even integration tests. In fact, I have projects that I do end-to-end. However, from my professional experience, if it goes beyond that point and you have a huge code base, the performance of the team is not better with a DevOps approach than with experts in every part of the process. In fact, it is worse, because we all become average and we lose the expertise. While your expert programmers are doing functional tests, or trying to figure out how to deploy on the IIS farm, or going into meetings with managers to figure out what to do, they are not working at full speed on what they are good at. Also, some developers have no interest in doing analysis, gathering requirements, doing task management or working with third-party partners; they want to do what they know best: develop. The same goes for testers or any other expert on the team.

This trend is strong right now and will be around for a while before we migrate to something else. Management likes DevOps because it gives them a pool of individuals who can be easily switched around, allowing them to have a full team of developers for a few days and a full team of testers tomorrow, or a team on one product today that can be moved to a different division later. I am not against the movement, but contrary to a lot of people, I simply do not think that this is the way to go in the long term. Keeping developers' expertise without exhausting them with all those different tasks and technologies to keep up with is going to be challenging.

To conclude, I am curious to see why this mentality does not reach the management zone, because DevOps could also be applied there: we should have only one layer of "ManOps", management operations. All the benefits would be there too: faster decisions, fewer levels of hierarchy to climb to reach the person who can do something tangible, no middlemen or distortion of information, and faster delivery of features and innovative ideas into the product.

Continuous Integration (CI) with C# .Net and Travis-CI

I am a big fan of Visual Studio Team Services (VSTS), as well as a developer on that platform. However, if you are doing open source projects, you cannot benefit from VSTS, because everything there is private. Microsoft also uses alternative companies like GitHub to host public projects, so do I! However, GitHub focuses only on the source repository, not on building, running unit tests, computing coverage or deploying NuGet packages. This doesn't matter, since GitHub provides what are called "webhooks", which allow other services to get notified when new code gets pushed. In this article, we will discuss Travis-ci.org. This free service can get notified, through a GitHub webhook, to start compiling your code. It also lets you run other tasks, like unit tests.

I take for granted that you have a GitHub account; if you do not, you can create one for free. The next step is to go to Travis-ci.org and sign up. This is very easy, since you can log in with your GitHub account. This is a big win, because the two services are so closely related, and GitHub has so many services around it, that the burden of handling multiple accounts is not a problem with Travis. The next step is to select which repositories you want Travis to be notified about by GitHub.


Once that is done, you need to add something to your repository that gives instructions to the continuous integration system. This is where we tell it what to build and how to run the unit tests. The file must be placed at the root of your repository with the name ".travis.yml". There are a lot of options; here is a sample from one of my C# projects.

language: csharp
solution: AspNetMvcEasyRouting.sln
install:
  - nuget restore AspNetMvcEasyRouting.sln
  - nuget install xunit.runners -Version 1.9.2 -OutputDirectory testrunner
script:
  - xbuild /p:Configuration=Release AspNetMvcEasyRouting.sln
  - mono ./testrunner/xunit.runners.1.9.2/tools/xunit.console.clr4.exe ./AspNetMvcEasyRoutingTest/bin/Release/AspNetMvcEasyRoutingTest.dll

As you can see, it specifies the language and which solution to use. It defines what to do during the installation, which is to restore all NuGet packages and to install the xUnit runner. xUnit is one of the supported unit test frameworks. Visual Studio tests cannot be run because Travis runs on Mono; that might be a showstopper for you if you have invested heavily in Visual Studio tests. The last section of the file is what to run: the first line, with xbuild, compiles the code, and the next one runs the unit tests. If something goes wrong at any of these steps, you will get an email. Otherwise, it's all fine!


Travis-ci lets you see all the logs in real time from the website. It is easy to access and easy to debug. It also gives you a dynamic image of the state of your build that you can embed in the readme file on GitHub. To do so, click the badge next to the repository name and select Markdown.


I will soon cover how to generate your NuGet package by pushing to GitHub. That would be one more step toward having everything automated.