Unit Tests and Coverage Report in Jenkins using Jest from Create-React-App

Since I left Microsoft Visual Studio Online (VSTS) as an employee, I have been using Jenkins, which is the continuous integration (CI) platform Netflix uses. I configured two Jenkins jobs for the project I am leading. One handles every pull request opened against master, and the second is executed during the merge of any pull request into master. For many months, I didn't have the unit tests running on the platform. The reason is that I am not yet used to how Jenkins works and, even after several months, still find VSTS more intuitive. Regardless, I recently took the time to set up my TypeScript code built with Create-React-App to run its unit tests in these two Jenkins jobs. Create-React-App comes with the best testing framework I have experimented with so far, which is Jest. My goal was to have all the unit tests run, as well as to see the coverage.

Here are the steps required to have Jenkins handle your tests. The first thing is to install "jest-junit" as a dev dependency. The reason is that we need to convert Jest's test results into the JUnit format that Jenkins understands.

npm install --save-dev jest-junit

The next step is to download a Python script into your repository. I have mine in "tools". The reason is, again, conversion: Jest's coverage file is not in the right format. The Python script converts the lcov output into the Cobertura format. You can download the script once at this address.

wget https://raw.github.com/eriwen/lcov-to-cobertura-xml/master/lcov_cobertura/lcov_cobertura.py

A few configurations are required in the package.json. The first one is to create a test command that Jenkins executes instead of the default test command. The command calls react-scripts; since I am using TypeScript, I have to use the react-scripts-ts command. The next parameter is the "test" command, which we still want to execute. The change starts with the test results processor: this is where you specify jest-junit to execute once the tests are done. I set my coverage output to the "coverage" folder, which is the folder I ignore in .gitignore and where my local coverage files are normally written. Here are the three commands I have. The first one runs the tests, the second runs the tests and coverage for the CI (this is the new part), and the last one is for running the coverage locally.

"test": "react-scripts-ts test --env=jsdom",
"test:ci": "react-scripts-ts test --env=jsdom --testResultsProcessor ./node_modules/jest-junit --coverage --coverageDirectory=coverage",
"coverage": "react-scripts-ts test --env=jsdom --coverage",

Finally, you need a few jest-junit configurations. These can live in your package.json. I have some folders I want to exclude from coverage, which you can do in the Jest configuration under collectCoverageFrom; I had these before starting this Jenkins configuration. Then, the coverage reporters must be lcov and text. Finally, the new configurations are under "jest-junit". The most important one is "output", which again points into the coverage folder. You can change the destination and file name as you wish. However, remember the location, because you will need to use the same one in a few instants inside Jenkins.

  "jest": {
    "collectCoverageFrom": [
      "src/**/*.{ts,tsx}",
      "!src/**/coverage/**"
    ],
    "coverageReporters": [
      "lcov",
      "text"
    ]
  },
  "jest-junit": {
    "suiteName": "jest tests",
    "output": "coverage/junit.xml",
    "classNameTemplate": "{classname} - {title}",
    "titleTemplate": "{classname} - {title}",
    "ancestorSeparator": " > ",
    "usePathForSuiteName": "true"
  }

In Jenkins, you need to add two build steps and two post-build steps. The first build step runs the unit tests with the script we just added to the package.json. The type of this build step is "Execute Shell".

npm run test:ci

The second step is also an "Execute Shell". This one calls the Python script that we placed in the "tools" folder. It is important to adjust the paths of your lcov.info and coverage.xml; both are in my "/coverage/" folder. The "base-dir" is the directory of your source code.

python tools/lcov_cobertura.py coverage/lcov.info --base-dir src/ --output coverage/coverage.xml

The next two steps are post-build steps, this time of two different types. The first one is "Publish JUnit test result report". It has a single parameter, which is the XML file; mine is set to "coverage/junit.xml". The second is "Publish Cobertura Coverage Report". It also takes a single parameter, the coverage.xml file; mine is set to "coverage/coverage.xml".

At that point, if you push the modifications to the package.json and the Python script, you will see Jenkins running the unit tests and doing the conversion. It is possible to adjust the threshold of how many tests you allow to fail without breaking the build, as well as to set the percentage of coverage you expect. You will get a report on the build history that allows you to sort and drill into the coverage report.

NPM Useful Commands

This article will be pretty slim, but it can be helpful for more than just myself, so I am sharing. Here are some NPM commands that you may not know and that can be useful.

To know packages that are installed globally:

npm list -g --depth=0

To know where packages are installed (it returns the global location when no node_modules folder exists in the parent folder tree):

npm root
npm root -g  # the global folder

To know your installed NPM version and the latest released one:

npm -v
npm view npm version

To know all available versions of a library:

npm show react-native versions

To see the current and latest versions of all your direct dependencies:

npm outdated --depth=0

Capturing Performance in a React and Redux Web Application

Recently, I started working on a project and wanted to gather information regarding performance. I wanted insight into where the code could be improved over time. This means I need entry points where I can collect metrics and send them to the server. In this article, I won't discuss the details of how to send the information, but rather where to set these markers.

Before anything, let's clarify the situation. This is a React application that uses Redux to manage the data flow. The application has container components that connect to the store through the React-Redux "connect" function. Presentation components delegate back to the container components. Once an action occurs, the "mapDispatchToProps" of the corresponding container component calls the dispatch method, which invokes the action creator. The latter can do some business logic and Ajax calls, and finally dispatches an action that is intercepted by the reducers. Between the beginning and the end of the reducer call, middlewares can act upon the state. Finally, the "mapStateToProps" of the connected component is called and changes the state of the component, which triggers shouldComponentUpdate, render, and componentDidUpdate.

I've seen many places on the Internet that put performance markers in a middleware. I can understand the appeal of being easily injectable, but that doesn't cover the whole flow I just described.

To confirm my hypothesis that the performance log must start right before the dispatch and stop at the update method, let's do an experiment with console.log in six different places: the first one right before "dispatching" to the action creator, one in the first line of the action creator, a third one in the render method of the component, and a fourth one in componentDidUpdate. Finally, two console.log calls in a basic middleware, invoked before and after the action.

The order is:

1. Dispatch
2. Action creator (first line)
3. Middleware before action
4. Middleware after action
5. Component render
6. Component componentDidUpdate
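The ordering above can be reproduced with a minimal, self-contained sketch. Note that this is not real Redux: the store, middleware, and component lifecycle are simulated with plain functions, just to show where each marker fires relative to a dispatch.

```typescript
type Action = { type: string };
const order: string[] = [];

// First marker after dispatch: the action creator runs
function actionCreator(): Action {
    order.push("actionCreator");
    return { type: "DO_SOMETHING" };
}

// The middleware wraps the reducer call: markers before and after the action
function middleware(next: (action: Action) => void, action: Action): void {
    order.push("middlewareBefore");
    next(action);
    order.push("middlewareAfter");
}

// Reducer and simulated component lifecycle
function reducer(action: Action): void {
    // a real reducer would compute the new state from the action here
}
function render(): void {
    order.push("render");
}
function componentDidUpdate(): void {
    order.push("componentDidUpdate");
}

// Simulated dispatch: action creator, middleware around the reducer,
// then the connected component re-renders
function dispatch(): void {
    order.push("dispatch");
    const action = actionCreator();
    middleware(reducer, action);
    render();
    componentDidUpdate();
}

dispatch();
console.log(order.join(" -> "));
```

Running it prints the six markers in the same order as the list above, which illustrates why a stop marker placed inside the middleware misses the render and update costs entirely.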

That being established, calling any kind of stop marker at the middleware level is premature. This is even truer if you are using Perf.start() and Perf.stop() at the middleware level.

How I Migrated an Existing AngularJs Project from JavaScript to TypeScript (Part 2 of 2)

In part 1 of how to migrate an existing JavaScript project to TypeScript, we saw that we can keep only JavaScript files and use TypeScript as a transpiler with almost no effort. This gave us the capability to transpile into a different version of EcmaScript, but also opened the door to bringing typed objects and primitives into an existing project. In this phase 2, we will change TypeScript to allow .ts files and benefit from strongly typed parameters and objects.

As a reminder, the project I was migrating to TypeScript was an AngularJS 1.5 application, with RequireJs as the AMD loader, using Grunt to process the files. In part 1, we configured TypeScript to read only .js files. Now we need to read both, but also to use AMD modules. This wasn't required in the first place, and still isn't because of the way the actual project was built: it was explicitly using the RequireJs library in all files. The tsconfig.json also specifies what we want to include: .ts files. I added the "checkJs" option to report errors in JavaScript files. This is not required, but since it's phase 2, I wanted to kick the validation up a notch. However, checkJs is limited since it relies on inference or JsDoc-style comments, which weren't used in the project I was converting.
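For reference, the relevant compiler options are only a couple of lines (note that the actual flag casing is checkJs); the rest of the tsconfig.json is omitted here:

```json
{
    "compilerOptions": {
        "allowJs": true,
        "checkJs": true
    }
}
```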

A few changes are required in terms of libraries. We need to bring in definition files for AngularJS, RequireJs, JQuery, and Angular-ui-router, and also the Angular library itself. This can be done easily with NPM; here is the package.json excerpt.

"angular": "^1.5.11",
"@types/angular": "^1.5.23",
"@types/angular-ui-router": "^1.1.37",
"@types/jquery": "^3.2.10",

Minor changes were required in the Gruntfile.js because, if we recall, we were using the tsconfig.json file to do the heavy lifting. The main change was to bring the .ts files into the final distribution folder, since we want to debug the .ts files with the map files.

From there, I chose some JavaScript files and changed their extension to .ts. I started with the entry JavaScript file and went deeper and deeper. Without modifying anything, it was working, but it wasn't leveraging the typed aspect of TypeScript. That is the reason I started to type all the RequireJs injections. When "angular" was injected, I added the type to the parameter. Almost every file needed ng.IAngularStatic.

angular: ng.IAngularStatic

The definition files are exhaustive and provide everything from compileProvider, to scsDelegateProvider, to httpProvider, and so on. With the power of TypeScript, it's a matter of typing "ng." and waiting for Intellisense to come up with suggestions from AngularJS's definition file.

Finally, I ran into a situation where this project was using another of our projects, one written in JavaScript. No definition file was available. However, I wanted the returned objects to be typed. I ended up creating a folder in the project I am converting, called "CustomDefinitionFiles", and I added the missing types there and used them. Be sure to use the extension ".d.ts" and you will be able to use it in your project. While it's better to have the definition files provided by the library, this gives us a quick way to have typed content without forcing anything.
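As an illustration, a minimal custom declaration file could look like the following; the module name and members here are hypothetical and would be adjusted to the real internal library:

```typescript
// CustomDefinitionFiles/our-internal-library.d.ts (hypothetical name)
declare module "our-internal-library" {
    export interface IUserResult {
        id: number;
        displayName: string;
    }
    export function getUser(id: number): IUserResult;
}
```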

At the end of these two phases, I was able to show that an existing JavaScript project can be converted to TypeScript without any impact on the actual code. I demonstrated that it's possible to live in a hybrid mode where JavaScript and TypeScript co-exist. Finally, I demonstrated the power of types with a system that gets typed progressively without creating any burden for developers. No one on the team was affected by this transformation, and in the long term the code base will be of better quality, and easier to read and maintain.


How I Migrated an Existing AngularJs Project from JavaScript to TypeScript (Part 1 of 2)

A few weeks ago, I had to present the benefits of TypeScript to a group of people. One argument was that many projects were built in JavaScript, so bringing a new tool to the mix would create a disparity amongst the fleet of repositories we need to maintain. This is a valid argument when we have many different languages like Java, C#, C++, and Python, but is it when TypeScript is a superset of JavaScript? Not that much, since going with TypeScript still allows you to work in JavaScript if desired. Coming back to JavaScript from TypeScript is also smooth, since you can always transpile into EcmaScript 6 and work from the output. Nevertheless, it's an interesting exercise to demonstrate how to migrate an existing AngularJs project from JavaScript to TypeScript. It may also be a good argument for switching to TypeScript, since it's easy, brings all the power of a strongly typed language, and stays very close to JavaScript.

Before going any further, let's see what kind of project we will migrate. First, it's using AngularJS 1.5; any version before 2 was built in JavaScript. There is an official definition file available, which we will need to fetch during our conversion. Second, the project is using RequireJs as the module loader. Again, this is not an issue, since RequireJs also has an official definition file. Third and last, this project is using Grunt as its task manager. This is pretty old, since the community drifted to Gulp and now Webpack. However, this won't be a problem, since TypeScript has a Grunt library.

The first step is to bring TypeScript into the project, which can be done easily using NPM. This project was using mostly Bower, but also NPM. Since I am more familiar with NPM, I decided to use it to fetch TypeScript.

npm install typescript --save-dev

We also need to get the Grunt library that bridges Grunt and TypeScript.

npm install grunt-ts --save-dev

Now that we have the tool to transpile, let's do the migration in two phases. The first phase is to keep all the JavaScript in .js files and only use TypeScript as a transpiler: we will change some JavaScript files to use the latest EcmaScript and transpile with TypeScript down to EcmaScript 3. The second phase will be to migrate files to TypeScript (.ts) files. To configure the TypeScript transpiler for reading JavaScript, we must specify that TypeScript is allowed to read JavaScript files. The actual project has its source code in a folder named public and was using Grunt tasks to move the code to distribute into a folder named dist. Since the goal is to migrate without modifying too much, we will introduce a dist_ts folder whose content is then moved to the dist folder. At the end, the dist folder remains the folder used to deploy, and the public folder the source folder to modify. We just created an intermediary folder to output our TypeScript files and do some modifications. The following configuration shows all of what we just discussed.

{
    "compilerOptions": {
        "outDir": "./dist_ts",
        "allowJs": true,
        "target": "es3",
        "rootDir": "./"
    },
    "include": [
        // hypothetical glob; adjust to the project's layout
        "./public/**/*.js"
    ],
    "exclude": [
        "node_modules"
    ]
}

For phase 2, we will have to change the include and bring in a few other configurations, but so far it does what we want. We just need to put the configuration in tsconfig.json, and we are all set to go into the Gruntfile.js and create a task to build TypeScript.

ts: {
    default: {
        tsconfig: true
    }
}

As you can see, this step was pretty straightforward; the only thing it says is to read the configuration file. However, a little more work was required to have TypeScript work properly. The first thing is that this project has a requirejs Grunt task that was using the public folder to bundle the whole code into a single JavaScript file. It couldn't point to the public folder anymore, since we output the JavaScript in dist_ts. It's not a big deal: we need to change the mainConfigFile path of requirejs to point to the intermediary folder. However, the requirejs task needed access to some assets and third-party libraries that were under the public folder. So, a pre-build task was required before calling Grunt's TypeScript task to move some files into dist_ts, and finally a post-build task to move all the generated JavaScript files and JavaScript map files into the final destination folder.

From here, any actual JavaScript file can be changed to use "const" or fat-arrow functions, and the transpiled output will use "var" and conventional functions.
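As a small sketch of what that means, here is an ES2015-style function next to a hand-written equivalent of what the es3 target emits (the actual compiler output may differ slightly in formatting):

```typescript
// ES2015 source: "const" and a fat-arrow function
const double = (values: number[]): number[] => values.map((v) => v * 2);

// Hand-written equivalent of the ES3 output: "var" and conventional functions
var doubleEs3 = function (values: number[]): number[] {
    return values.map(function (v) { return v * 2; });
};

console.log(double([1, 2, 3]));    // [ 2, 4, 6 ]
console.log(doubleEs3([1, 2, 3])); // [ 2, 4, 6 ]
```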

Migrating from JavaScript with a big framework like Angular can be done in steps. So far in this phase 1, we were able to bring in TypeScript very smoothly without disrupting any actual JavaScript code. In the next article, we will see phase 2, which brings TypeScript files alongside JavaScript files in a hybrid mode where both cohabit.


JavaScript Fibonacci: Recursive, with Memoizer, and Iterative

Fibonacci numbers are a sequence in which each number is the sum of the two previous ones. It's a common problem that can be solved with a few lines of code using recursion. The formula is F(n) = F(n-1) + F(n-2). The following code is a basic Fibonacci implemented with a recursive function.

function fib(x){
  if(x === 0){
    return 0;
  }
  if(x === 1 || x === 2){
    return 1;
  } else {
    return fib(x - 1) + fib(x - 2);
  }
}

We can use a closure to keep previously computed values in memory. This increases the speed, since you do not have to compute the same value multiple times. However, this solution grows your memory consumption.

var fibMemoize = (function(){
  var memoize = [0, 1, 1];
  var innerFib = function(x){
    var resultFromMemoize = memoize[x];
    if(resultFromMemoize === undefined){
      memoize[x] = innerFib(x - 1) + innerFib(x - 2);
      return memoize[x];
    } else {
      return resultFromMemoize;
    }
  };
  return innerFib;
})();

console.log("10:" + fibMemoize(10));

It's also possible to implement an iterative version of Fibonacci. We do not need to keep an array, since we won't compute any value more than once (we do not branch into n-1 and n-2). In that case, we always keep the n-2 and n-1 results in variables and keep swapping the values while iterating up to the desired number.

function fibIterative(x){
  if(x === 0){
    return 0;
  }
  if(x === 1 || x === 2){
    return 1;
  }
  var n2 = 1;
  var n1 = 1;
  for(var i = 2; i < x - 1; i++){
    var newValue = n2 + n1;
    n2 = n1;
    n1 = newValue;
  }
  return n2 + n1;
}

console.log("10 iterative:"+ fibIterative(10));

There are plenty of other solutions, but these three are the basic ones.

Resizing an Image with NodeJs

This is the second post about the project of creating a search tool for local pictures. As mentioned in the first post, this tool needs a web service to get information about each picture. This means we need to upload the image so the Microsoft Cognitive Vision/Face service can analyze it and return a JSON object with information about the picture. Like most services, there are constraints on the minimum and maximum size of what you can upload. Also, even for ourselves, we do not want to send a 25 MB picture when it is not necessary. This article discusses how to resize pictures before sending a request to the web service. This will not only keep us within the range of acceptable values, but also speed up the upload.

I decided to take the arbitrary value of sending pictures with the widest side at 640px. This produces on average a 30 KB file, which is tiny but still enough for the cognitive service to give very good results. This value may not be good if you are building something similar where people are far away, or if you are not using portrait pictures. In my case, the main subjects are always at close range, hence it's very easy to get details at that small resolution.
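The ratio logic used later in this article boils down to a few lines. Here is a sketch of it as a standalone helper; the name fitToMaxSize is mine, not from the project:

```typescript
// Scale the widest side down to maxSize while keeping the aspect ratio
function fitToMaxSize(width: number, height: number, maxSize: number): { width: number; height: number } {
    const ratio = Math.max(width, height) / maxSize;
    if (ratio <= 1) {
        // already small enough: keep the original dimensions
        return { width: width, height: height };
    }
    return { width: Math.round(width / ratio), height: Math.round(height / ratio) };
}

console.log(fitToMaxSize(4000, 3000, 640)); // { width: 640, height: 480 }
```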

Resizing a file requires a third-party library. This is something easy to find with JavaScript, and NPM has a library named "sharp" that does it perfectly. The TypeScript definition file is also available, so we are in business!

npm install --save sharp
npm install --save-dev @types/sharp

Before anything, even if this project is just for myself, I defined some configuration variables; some rigor is required when it's cheap to do! The first constant is the maximum size we want for the output image: I chose 640 pixels. The directory name is the constant for the folder where we will save the resized images we send, and where we will later save the JSON files with the analyzed data. We save the resized image because the website will later use this small image instead of the full-resolution one. The website will be snappy, and since we have the file, why not use this optimization for free. At 30 KB for 2 000 images, we only use about 58 MB. The last constant is the glob pattern to get all underscore-prefixed JPEG pictures. We will talk about glob very soon.

const maxSize = 640;
const directoryName = "metainfo";
const pathImagesDirectory = path.join(imagesDirectory, "**/_*.+(jpg|JPG)");

The second pre-task is to find the images to resize. Again, this requires a third-party library to simplify our life. We could recursively navigate folders, but it's nicer to have a single glob pattern that handles it.

npm install --save glob
npm install --save-dev @types/glob

From there, we need to import the modules. We will bring in the path and fs modules of NodeJs to be able to build proper paths and to save files to disk.

import * as g from "glob";
import * as path from "path";
import * as sharp from "sharp";
import * as fs from "fs";

The first function we need to create is the one that returns a list of strings representing the files to resize. These will be all our underscore-prefixed (aka best) pictures. We want to be able to re-run this function multiple times, so we need to ignore the output folder where we save the resized images. The function returns the list in a promise fashion because the glob library is asynchronous. Here is the first version, which calls the module's "Glob" function and adds everything into an array while writing each file to the console for debugging purposes.

function getImageToAnalyze(): Promise<string[]> {
    const fullPathFiles: string[] = [];
    const promise = new Promise<string[]>((resolve, reject) => {
        const glob = new g.Glob(pathImagesDirectory, { ignore: "**/" + directoryName + "/**" } as g.IOptions, (err: Error, matches: string[]) => {
            matches.forEach((file: string) => {
                console.log("File to analyze: " + file); // debugging output
                fullPathFiles.push(file);
            });
            resolve(fullPathFiles);
        });
    });
    return promise;
}
This can be simplified by just resolving the matches string array and returning the promise directly instead of using a variable. In the end, if you are not debugging, you can use:

function getImageToAnalyze(): Promise<string[]> {
    return new Promise<string[]>((resolve, reject) => {
        const glob = new g.Glob(pathImagesDirectory, { ignore: "**/" + directoryName + "/**" } as g.IOptions, (err: Error, matches: string[]) => {
            resolve(matches);
        });
    });
}
As mentioned, the quality of this code is average. In reality, some care is missing around error scenarios. Right now, if something goes wrong, the promise rejection just bubbles up.
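One way to handle it, sketched here as a generic helper that is not part of the original project, is to wrap the Node-style callback so an error rejects the promise instead of being ignored:

```typescript
// Wrap a Node-style callback API into a promise that rejects on error
function promisifyCallback<T>(fn: (cb: (err: Error | null, result?: T) => void) => void): Promise<T> {
    return new Promise<T>((resolve, reject) => {
        fn((err, result) => {
            if (err) {
                reject(err);
            } else {
                resolve(result as T);
            }
        });
    });
}

// Usage with a callback that succeeds
promisifyCallback<string[]>((cb) => cb(null, ["a.jpg", "b.jpg"]))
    .then((files) => console.log(files.length)); // 2
```

The glob call above could be wrapped the same way, rejecting with the err argument when it is set.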

At this point, we can call the method with :

console.log("Step 1 : Getting images to analyze " + pathImagesDirectory);
getImageToAnalyze()
    .then((fullPathFiles: string[]) => {
        console.log("Step 2 : Resize " + fullPathFiles.length + " files");
        return resize(fullPathFiles);
    });
The code inside the "then" is executed if the promise resolves successfully. It starts resizing the list of pictures and passes this list into the function that we will create in an instant.

The resize function is not the one that does the actual resize. It calls the function that does the resize only if the picture has not yet been resized. This is great if something fails and you need to re-run. The resize function checks in the "metainfo" folder, where we output the resized pictures, and only resizes a picture if it is not present. In both cases, this function returns a promise. The type of the promise is a list of IImage.

export interface IImage {
    thumbnailPath: string;
    originalFullPathImage: string;
}

This type gives us the full path of the thumbnail "resized" picture and of the original picture. When the image has already been resized, we just create an instance; when it hasn't, we create the thumbnail and then return a new instance. This method waits for all resizes to occur before resolving, which is the reason for the Promise.all. We do so just to have a clear cut before moving to the next step: since we launch multiple resizes in parallel, we wait for them all to be done before analyzing.

function resize(fullPathFiles: string[]): Promise<IImage[]> {
    const listPromises: Array<Promise<IImage>> = [];
    const promise = new Promise<IImage[]>((resolve, reject) => {
        for (const imagePathFile of fullPathFiles) {
            const thumb = getThumbnailPathAndFileName(imagePathFile);
            if (fs.existsSync(thumb)) {
                // Already resized in a previous run: reuse it
                listPromises.push(Promise.resolve({ thumbnailPath: thumb, originalFullPathImage: imagePathFile } as IImage));
            } else {
                listPromises.push(resizeImage(imagePathFile));
            }
        }
        Promise.all(listPromises)
            .then((value: IImage[]) => resolve(value));
    });
    return promise;
}
This function uses a helper to get the thumbnail path and look up whether it has already been created. That helper calls another one, and both of these methods have the same goal of providing a path. The first one, getThumbnailPathAndFileName, takes the original full-quality picture path and returns the full path of where the resized thumbnail is stored. The second is a function that will be reused on some occasions; it gives the metainfo directory. This is where the resized pictures are stored, but also where the JSON files with the analytic data are saved.

function getThumbnailPathAndFileName(imageFullPath: string): string {
    const dir = getMetainfoDirectoryPath(imageFullPath);
    const imageFilename = path.parse(imageFullPath);
    const thumbnail = path.join(dir, imageFilename.base);
    return thumbnail;
}

function getMetainfoDirectoryPath(imageFullPath: string): string {
    const onlyPath = path.dirname(imageFullPath);
    const thumbnail = path.join(onlyPath, "/" + directoryName + "/");
    return thumbnail;
}
The last method is the actual resize logic. The first line creates a "sharp" object for the desired picture. Then we invoke the "metadata" method, which gives us access to the image information. We need this to get the actual width and height, and do some computation to find the wider side and the resize ratio. Once we know the height and width of the thumbnail, we need to create the destination folder before saving. Finally, we call the "resize" method with the calculated height and width. The "webp" method is the one that generates the image. From there, we could generate a buffered image and use a stream to handle it in memory, or store it on disk like we do with the "toFile" method. This returns a promise that we use to build and return the IImage.

function resizeImage(imageToProceed: string): Promise<IImage> {
    const sharpFile = sharp(imageToProceed);
    return sharpFile.metadata()
        .then((metadata: sharp.Metadata) => {
            const actualWidth = metadata.width;
            const actualHeight = metadata.height;
            let ratio = 1;
            if (actualWidth > actualHeight) {
                ratio = actualWidth / maxSize;
            } else {
                ratio = actualHeight / maxSize;
            }
            const newHeight = Math.round(actualHeight / ratio);
            const newWidth = Math.round(actualWidth / ratio);
            const thumbnailPath = getThumbnailPathAndFileName(imageToProceed);
            // Create the thumbnail directory first
            const dir = getMetainfoDirectoryPath(imageToProceed);
            if (!fs.existsSync(dir)) {
                fs.mkdirSync(dir);
            }
            return sharpFile
                .resize(newWidth, newHeight)
                .webp()
                .toFile(thumbnailPath)
                .then((image: sharp.OutputInfo) => {
                    return { thumbnailPath: thumbnailPath, originalFullPathImage: imageToProceed } as IImage;
                });
        }, (reason: any) => {
            throw reason;
        });
}
This concludes the resize part of the project. It's not as straightforward as it may seem, but nothing is rocket science either. This code could be optimized to start resizing without first checking whether all the images are present. Some refactoring could be done around the ratio logic within the promise callback of sharp's metadata method. We could also optimize the write to remain in memory, and hence avoid reloading the thumbnail from disk by working on the memory buffer. That last optimization wasn't done because I wanted every step to be re-executable whatever the state in which it was stopped, and I didn't want to bring in more logic to reload into memory what was already generated. That said, it could be done. The full project is available on GitHub: https://github.com/MrDesjardins/CognitiveImagesCollection

Create a Local Search Tool for Pictures in NodeJs

I recently searched my local drive for a specific picture of my daughter, with some difficulty. First, I take a lot of pictures, and it was hard to find. Second, I have some good pictures and some average ones, but I keep them all, hence I have thousands and thousands of pictures that are not easy to search. However, since 2003 I have had a systematic way to store my pictures: a main folder that contains one folder per year, and every year has many folders, one per event. The event folders always have the format "yyyy-mm-dd-EventDescriptionIn2words". I also have the habit of prefixing the best pictures with an underscore inside these folders. Still, the picture names are always the sequential numbers of my camera, which are not consecutive in time. There is no way I can search for "Alicia happy in red dress during summer 2015", for example.

Here comes the idea I started a few weeks ago: having a training set of pictures that serves as a base for the system to figure out who is in my pictures, and a service that analyzes what is in each picture. On top of the data, a simple website lets me query the database of pictures and returns the best matches with a link to the actual full-quality picture. Before going any further, a word of caution: the idea of this project is not to develop something that will scale, or stellar code, hence the quality of the code is very average, but it is a workable solution. Everything is developed with NodeJs, TypeScript, Microsoft Cognitive API, and MongoDb, and doesn't have any unit tests. I may refactor this project someday, but for the moment, let's just get our heads around how to do it.

I'll write several posts about this project. In fact, at the moment I am writing this article, I am only halfway through the first phase, which is analyzing a small subset of my pictures. This article will serve more as a description of what will be built.

The first thing we need to do is read a sample of all the images. Instead of scanning and analyzing my whole hard drive for pictures, I will analyze only pictures within a specific range of dates. At this date, I have 34 000 pictures taken since 2009 (since I met my wife), and in this population, 2 000 have been identified with an underscore, which means I really like them. For the purpose of having a smaller search set and not having to analyze for too long, I will only use the pictures with an underscore. Second, in these pictures, I can say that roughly 75% of the people are my wife, my daughter, or me. Hence, I will only try to identify these people and mark the others as "unknown". Third, I want to know the emotions and what is going on in each picture. This requires a third-party service, and I will use the Microsoft Azure Cognitive API. I'll get into more detail in the article about the API.

Once the pictures are analyzed, the data will be stored in MongoDB, which is a JSON-based storage. This is great because the result of all the analysis will be in JSON format, and it will allow us to query the content to get results to display in the website. To simplify this project, the first milestone will be scanning the pictures and creating one JSON file per underscored file inside a “metainfo” folder. The second milestone will be to hydrate MongoDB, and the third one to create a simple web application that will query and display the results from MongoDB.

I’ll stop here for the moment. You can find the source code and follow the progress of this project in this GitHub repository: https://github.com/MrDesjardins/CognitiveImagesCollection

Using TypeScript, React and WebPack

I created an open source project to bootstrap TypeScript and React a few months ago. You can read the first article about TypeScript/React/Gulp before this one. That setup wasn’t bundling the code and was using Gulp, which in mid-2017 is no longer the preferred tool to package JavaScript code. At this moment, Webpack is the most popular tool; it does everything Gulp or Grunt was doing while avoiding the middleman of a Gulp (or Grunt) package to invoke the actual library. Webpack is also very smart in terms of exploring the code and figuring out dependencies. This article will focus on migrating a TypeScript and React project from Gulp to Webpack.

First of all, we need to change index.html. The file was using RequireJs and was not referring to any bundles. The change is twofold: we need to remove RequireJs, since Webpack will handle loading dependencies between modules, and we need to refer to the bundles.


<html>
    <head>
        <title>TS + React Boilerplate v1.01</title>
    </head>
    <body>
        <div id="main"></div>
        <script src="vendors/requirejs/require.js"></script>
        <script src="vendors/jquery/jquery.js"></script>
        <script>
            requirejs.config({
                //Every script without folder specified before name will be looked in output folder
                baseUrl: 'output/',
                paths: {
                    //Every script paths that start with "vendors/" will get loaded from the folder in string
                    vendors: 'vendors',
                    jquery: '../vendors/jquery/jquery',
                    react: "../vendors/react/dist/react",
                    "react-dom": "../vendors/react-dom/dist/react-dom"
                }
            });
            //Startup file
            requirejs(["file1"]);
        </script>
    </body>
</html>



Here is the new index.html:

<html>
    <head>
        <title>TS + React Boilerplate v1.01</title>
    </head>
    <body>
        <div id="main"></div>
        <script src="vendorbundle.js"></script>
        <script src="appbundle.js"></script>
    </body>
</html>

RequireJs’ configuration and the startup file are gone. The complexity moves into Webpack’s configuration file, which we will see soon. At this point, you can see that we won’t be using AMD anymore. This means we need to build our TypeScript to target something else: we will use CommonJS.

{
  "compilerOptions": {
    "sourceMap": true,
    "target": "es6",
    "module": "commonjs",
    "outDir": "./deploy/output",
    "jsx": "react",
    "noImplicitAny": true
  },
  "exclude": [
    "node_modules"
  ]
}

Since Webpack will read the EcmaScript syntax used in each file of each module, TypeScript can transpile to the CommonJS format. The JavaScript produced is read by Webpack, which brings all the files into a single one (a bundle). This removes the need to load modules asynchronously the way AMD was doing.

Changing to CommonJS required a change in the code. If you want to import a module relative to the current file, the path needs to start with “./” instead of just the module name. For example, you won’t be able to write:

import { Component } from "component1";

Instead, you must write:

import { Component } from "./component1";

The next change was around JQuery. The file that was using JQuery didn’t have any reference to it; now we explicitly import the library.

import * as $ from "jquery";

Before going into the Webpack configuration: with AMD, we had a lazy-loading file that was loaded and used only after 2 seconds. This simulates the “load on-demand” files that we may not want to load initially. The scenarios are multiple. It can be justified because the user rarely uses the feature, hence no need to load it up front. It can also be that the user doesn’t have the authorization to do that kind of action, thus no need to load code that won’t be used.

Here is the AMD solution we had before:

import foo = require("folder1/fileToLazyLoad");
export class ClassB {
    public method1(): void {
        setTimeout(() => {
            requirejs(["folder1/fileToLazyLoad"], (c: typeof foo) => {
                const co = new c.ClassC();
            });
        }, 2000);
    }
}

The first line tells TypeScript the type we want to lazy load. It was using “require”, which we will still use, but this time with a relative path. So far, not much has changed. However, we can see that we were using requirejs directly inside the timer. This time, we will use SystemJS’s System.import to load the module. It’s almost the same thing, just with a different library.

import foo = require("./fileToLazyLoad");
export class ClassB {
    public method1(): void {
        setTimeout(() => {
            System.import("./fileToLazyLoad").then((c: typeof foo) => {
                const co = new c.ClassC();
            });
        }, 2000);
    }
}

We are at the point where the bigger changes occur. They start with the NPM modules we need. As we saw, we can remove RequireJs from the dependencies list. We also need to bring in several libraries for Webpack: loaders and utility libraries to clean and move files. Here is the complete list of dependencies:

  "devDependencies": {
    "@types/express": "^4.0.35",
    "@types/jquery": "^2.0.46",
    "@types/react": "^15.0.26",
    "@types/react-dom": "^15.5.0",
    "@types/systemjs": "^0.20.2",
    "awesome-typescript-loader": "^3.1.3",
    "copyfiles": "^1.2.0",
    "del-cli": "^1.0.0",
    "express": "^4.15.3",
    "file-loader": "^0.11.2",
    "html-webpack-plugin": "^2.28.0",
    "source-map-loader": "^0.2.1",
    "typescript": "^2.3.4",
    "webpack": "^2.6.1",
    "tslint": "^5.4.2"
  },
  "dependencies": {
    "jquery": "^3.2.1",
    "react": "^15.5.4",
    "react-dom": "^15.5.4"
  }

The next step: we won’t use Gulp to invoke actions anymore. If we want to delete previously generated deploy files, build the TypeScript or run the server, we need to use the CLI of each tool. We could directly use TypeScript’s CLI, named tsc, use xcopy to move files, and so on. The problem is that these commands are not easy to remember. NPM allows custom scripts, which can be invoked with “npm run ABC” where “ABC” is the name of your script. Here are the scripts we need to add to replace the Gulp tasks we had.

  "scripts": {
    "clean": "del-cli deploy/**",
    "package": "./node_modules/.bin/webpack --config webpack.config.js --display-error-details",
    "copy": "copyfiles -u 1 ./app/index.html ./deploy",
    "build": "del-cli deploy/** & SET NODE_ENV=development & webpack --config webpack.config.js --display-error-details & copyfiles -u 1 ./app/index.html ./deploy/",
    "server": "node bin/www.js"
  },

The “clean” script uses “del-cli” to delete the deployment folder. This could have been a native Windows or Linux command, but using this tool keeps the script cross-platform. The “package” script runs Webpack with a specific configuration file and, for debugging purposes, displays verbose detail. The principle is the same for all the other scripts. The “build” script chains clean + package + copy. So, in practice, you should only use build and server.

The final step is to configure Webpack. This is done in the file webpack.config.js. You can name it the way you want; you just need to specify it in the webpack command after --config. Webpack can use many NPM packages to accomplish its job, but at a minimum you need the “webpack” package.

var path = require('path');
var webpack = require('webpack');

Webpack needs to have an entry point. In the example we are working on, the main file is the one that was used in index.html with “requirejs(['file1']);”. This time, we do not have any indication in the HTML file. However, Webpack needs one, or many, entry points and will navigate through all the dependencies to build the main bundle.

module.exports = {
    entry: {
        app: "./app/scripts/file1.tsx"
    },

“entry” is the entry point; “output” is where the bundles go. The filename uses square brackets, which are replaced by the key of each entry point. In our example, we have “app” in the entry, so a file named “appbundle.js” will be produced in the output. The directory is provided in the “path” property, which in this case is “deploy”.

    output: {
        path: path.resolve(__dirname, 'deploy'),
        filename: '[name]bundle.js'
    },

The next configuration tells Webpack which extensions it should care about. For us, it’s TypeScript:

    resolve: {
        extensions: ['.ts', '.tsx', '.js', '.jsx']
    },

This property enables source maps. It gives the possibility to debug the TypeScript in Chrome on the real individual files even if the code is bundled.

    devtool: "source-map",

Webpack works with rules. Every “test” evaluates a condition that triggers a loader. We use two different rules: one calls the TypeScript loader that transpiles TS and TSX files; the second generates source maps. Even though the devtool option provides source maps, we need a central place to handle third-party JS libraries too.

    module: {
        rules: [
            { test: /\.tsx?$/, loader: "awesome-typescript-loader" },
            { enforce: "pre", test: /\.js$/, loader: "source-map-loader" }
        ]
    },

The last piece is a plugin called CommonsChunkPlugin. It comes with Webpack, and its role is to create additional bundles. In this case, we create a bundle named “vendorbundle.js”. The minChunks option can be a number: the minimum number of times a module must be referenced before it joins the bundle. For example, if we have modules that are used often and we want them in a common bundle, we can say “5”, and any module referenced by more than 5 other modules would go into this bundle instead of the “app” one. For us, we want all vendor modules, meaning the ones coming from the node_modules directory. To do so, minChunks also accepts a function: when the module’s path contains node_modules, it goes into this bundle instead of the main one (app).

    plugins: [
        new webpack.optimize.CommonsChunkPlugin({
            name: "vendor",
            filename: "vendorbundle.js",
            minChunks: function(module) {
                return module.context && module.context.indexOf('node_modules') !== -1;
            }
        })
    ]
};
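Putting all the fragments together, the whole webpack.config.js looks roughly like this. Treat it as a sketch assembled from the snippets above rather than the exact file from the repository:

```javascript
var path = require('path');
var webpack = require('webpack');

module.exports = {
    // Entry point; Webpack follows its dependencies to build the "app" bundle
    entry: {
        app: "./app/scripts/file1.tsx"
    },
    // [name] is replaced by the entry key, producing deploy/appbundle.js
    output: {
        path: path.resolve(__dirname, 'deploy'),
        filename: '[name]bundle.js'
    },
    // Extensions Webpack resolves when following imports
    resolve: {
        extensions: ['.ts', '.tsx', '.js', '.jsx']
    },
    devtool: "source-map",
    module: {
        rules: [
            { test: /\.tsx?$/, loader: "awesome-typescript-loader" },
            { enforce: "pre", test: /\.js$/, loader: "source-map-loader" }
        ]
    },
    plugins: [
        new webpack.optimize.CommonsChunkPlugin({
            name: "vendor",
            filename: "vendorbundle.js",
            minChunks: function(module) {
                // Any module resolved from node_modules goes into the vendor bundle
                return module.context && module.context.indexOf('node_modules') !== -1;
            }
        })
    ]
};
```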

You can find the exact code in this commit on GitHub. In this article, we saw how to replace Gulp with Webpack. Not only does this remove the dependencies on Gulp and Gulp’s packages, it also brings a powerful tool that bundles intelligently. The next step will be to add auto-reload when the code changes, so TypeScript compiles automatically and the JS gets deployed.

JavaScript Depth First Traversal

Traversing a tree by going as deep as possible down one path can be done recursively or with an iteration. Let’s first create a node function that will hold the information of each node.

var node = function(v, c) {
  return {
    value: v,
    children: c || [] // Default to an empty array when a node has no children
  };
};
var node5 = node("node5");
var node4 = node("node4");
var node3 = node("node3");
var node2 = node("node2", [node5]);
var node1 = node("node1", [node3, node4]);
var graph = node("root", [node1, node2]);

The graph is simple and is the same one we used in the breadth-first approach. The algorithm to traverse the tree doesn’t need any extra structure, since we recursively call every child as we find it. This has the advantage of not carrying a stack with us, but the disadvantage that we may want an additional variable to avoid going too deep in one direction.

function depthFirstRecursive(node, callback) {
    callback(node);
    for (var i = 0; i < node.children.length; i++) {
        var child = node.children[i];
        depthFirstRecursive(child, callback);
    }
}

depthFirstRecursive(graph, function(node) {
    console.log(node.value);
});

This outputs: root, node1, node3, node4, node2, node5
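As mentioned above, carrying extra state lets us cap how deep the traversal goes. A small variation of the recursive version with a maximum depth might look like this; the maxDepth parameter is my own addition for illustration, and the node factory is repeated so the sketch is self-contained:

```javascript
// Same node factory as above, with children defaulting to an empty array.
var node = function(v, c) {
  return {
    value: v,
    children: c || []
  };
};

// Depth first traversal that stops descending past a maximum depth.
function depthFirstLimited(node, callback, maxDepth, depth) {
  depth = depth || 0;
  callback(node, depth);
  if (depth >= maxDepth) {
    return; // Do not visit children deeper than maxDepth
  }
  for (var i = 0; i < node.children.length; i++) {
    depthFirstLimited(node.children[i], callback, maxDepth, depth + 1);
  }
}

// Same shape of graph as in the article.
var node5 = node("node5");
var node2 = node("node2", [node5]);
var node1 = node("node1", [node("node3"), node("node4")]);
var graph = node("root", [node1, node2]);

var visited = [];
depthFirstLimited(graph, function(n) { visited.push(n.value); }, 1);
```

With a maximum depth of 1, only the root and its direct children are visited; node3, node4 and node5 are skipped.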

The iterative approach is similar. However, we loop from the end of the children array down to zero. We do this to get the exact same output as the recursive version. Without looping in reverse, we would still do a depth-first traversal, but from the right side of the tree instead of the left side.

function depthFirstIterative(root, callback) {
  var stack = [];
  stack.unshift(root);
  while (stack.length > 0) {
    var node = stack.shift();
    callback(node);
    for (var i = node.children.length - 1; i >= 0; i--) {
      var child = node.children[i];
      if (!child.discovered) {
        child.discovered = true;
        stack.unshift(child);
      }
    }
  }
}

depthFirstIterative(graph, function(node) {
    console.log(node.value);
});

We are using unshift to push any new child at the beginning of the array, and shift to take the next node from the beginning of the array as well (it’s a stack!).
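Using the front of the array works, but JavaScript arrays are usually treated as stacks through push and pop at the end, which avoids reindexing the whole array on every unshift. An equivalent version (my own variation, not from the article above) looks like this:

```javascript
// Same node factory as above, repeated so the sketch is self-contained.
var node = function(v, c) {
  return {
    value: v,
    children: c || []
  };
};

// Depth first traversal using push/pop at the end of the array.
// Children are pushed in reverse order so the leftmost child sits on top
// of the stack and is popped first, matching the recursive visit order.
function depthFirstPushPop(root, callback) {
  var stack = [root];
  while (stack.length > 0) {
    var current = stack.pop();
    callback(current);
    for (var i = current.children.length - 1; i >= 0; i--) {
      stack.push(current.children[i]);
    }
  }
}

// Same shape of graph as in the article.
var node5 = node("node5");
var node2 = node("node2", [node5]);
var node1 = node("node1", [node("node3"), node("node4")]);
var graph = node("root", [node1, node2]);

var order = [];
depthFirstPushPop(graph, function(n) { order.push(n.value); });
```

This visits the nodes in the same order as the recursive version: root, node1, node3, node4, node2, node5.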