TypeScript: Testing Private Members

I wrote about testing private members about three years ago, as well as last year and the year before. One article was more specific to C# and the other two were more abstract and about TypeScript, but they are all still relevant today. In this article, I’ll explain how I test private members with Jest without using any hack.

The goal of having private and public members is to draw a clear line between what is restricted to the internal use of the object that defines the member and what can be consumed from outside the object’s scope. The idea makes sense, but the usage of the keyword “private” does not necessarily follow from it. Using “private” does not work because you will not be able to test the internal logic, nor mock it in your tests: it is blocked at the scope of the object.

class MyClass1 {
    private variable1: string = "init";
    public function2(p1: number, p2: number): void {
        // Code here...
        if (this.function1(p1)) {
            // More code here ...
        }
        // Code here... 
    }
    private function1(p1: number): boolean {
        return p1 > 0; // placeholder logic so that the example compiles
    }
}

There are some workarounds. One popular approach is to avoid testing these functions directly, but then you have code that is not tested. An alternative is to test the private functions through a public function. The problem is that you are using one function as a proxy for all the logic, which makes the tests less of a “unit test” and more fragile, because they become dependent on another piece of code. If the logic of the private function remains the same but the public function changes, the code testing the private function will fail. It sways the simplicity into a nightmare. If the private function returns a boolean but the public function returns void, testing the private function’s return value requires understanding the behavior of the public function that uses it and extrapolating a behavior that corresponds to the return value. There might be a single public proxy function or many. When there is only one, the choice is limited and can push you into a corner without escape; when there are many, selecting which one to use is also problematic. In the end, the goal is to unit test, not to clear many hurdles before even testing.

Another option is to cast the object to “any” and access the function. The problem is that any refactoring of the name will leave the function “undefined.” It brings back the typing issues that a typed language is there to prevent in the first place.

describe("MyClass1", () => {
    describe("test the private function #1", () => {
        it("public proxy function", () => {
            const x = new MyClass1();
            expect(x.function2(1, 2)).toBe("????");
        });
        it("cast with any", () => {
            const x = new MyClass1() as any;
            expect(x.function1(1)).toBeTruthy();
        });
    });
});

So, if all these solutions have weaknesses, what is the best one? The best solution, the one I have been using for a while now, is this: do not use private. Instead, use an interface. The idea is that the concrete object is never used directly, hence all its members can be public. Across the whole application, the object is consumed through an interface that exposes only the members consumers can interact with. Here is the same code as above, but with the pattern of using an interface instead of private.

interface IMyClass1 {
    function2(p1: number, p2: number): void;
}

class MyClass1 implements IMyClass1 {
    private variable1: string = "init";
    public function2(p1: number, p2: number): void {
        // Code here...
        if (this.function1(p1)) {
            // More code here ...
        }
        // Code here... 
    }
    public function1(p1: number): boolean {
        return p1 > 0; // placeholder logic so that the example compiles
    }
}

describe("MyClass1", () => {
    let x: MyClass1;
    beforeEach(() => {
        x = new MyClass1();
    });
    describe("function1 with an integer", () => {
        it("returns true", () => {
            expect(x.function1(1)).toBeTruthy();
        });
    });
});

It works perfectly in terms of testability. The unit test code has access to all members because everything is public. It is easy to invoke every member directly, but also to mock them while still keeping TypeScript’s typing. In the application, we use the interface everywhere. The only place where we use the concrete class is during initialization. Every declaration uses the interface, without exception.
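As a minimal sketch of this convention, the concrete class appears only at the initialization point, and everything else is typed with the interface, which is what keeps function1 out of reach of consumers (the consume function below is hypothetical):

const instance: IMyClass1 = new MyClass1(); // the only place MyClass1 appears

// A hypothetical consumer: it sees only what the interface exposes
function consume(service: IMyClass1): void {
    service.function2(1, 2);   // compiles: part of IMyClass1
    // service.function1(1);   // does not compile: not on the interface
}

consume(instance);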

Furthermore, the class is easily mockable with a framework because you can access every public function and assign it a mock/spy/stub, which lets you control specific branches of the code as well as narrow the scope of the test down to a single unit. The key to efficient testing is to have every block of code tested as a unit and then to move from bottom to top with more and more integration tests.

describe("MyClass1", () => {
    let x: MyClass1;
    beforeEach(() => {
        x = new MyClass1();
        x.function1 = jest.fn().mockReturnValue(true); // control the branch taken inside function2
    });
    describe("function2", () => {
        it("calls function1", () => {
            x.function2(1, 2);
            expect(x.function1).toHaveBeenCalledTimes(1);
        });
    });
});

Last but not least, functions that sit directly in a module are very hard to unit test: it is not possible to mock or spy on them when they are consumed as dependencies. This lack of access sways my design toward always encapsulating every function in a class.
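To illustrate with a hypothetical module (the add and compute names are mine), the free function cannot be swapped out by a test of its consumer, while the class member can be reassigned:

// Free module function: a consumer binds to it directly at import time,
// so a test of that consumer cannot replace it with a spy
export function add(a: number, b: number): number {
    return a + b;
}

// Encapsulated version: the member is reachable and replaceable in a test
export class Calculator {
    public add(a: number, b: number): number {
        return a + b;
    }
    public compute(a: number, b: number): number {
        return this.add(a, b) * 2;
    }
}

// In a test file:
// const calc = new Calculator();
// calc.add = jest.fn().mockReturnValue(5);
// expect(calc.compute(1, 2)).toBe(10);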

In summary, encapsulation does not require relying solely on the public, private, and protected keywords. An interface is powerful: it protects what can be accessed while giving developers a sane and simple way to test correctly, without taking a detour to reach the desired piece of code.

TypeScript and React – Continuous Integration for Pull Requests from 3 minutes to 1 minute

At Netflix, software engineers own the full lifecycle of an application, from gathering the requirements to building the code, through the whole development process, to the deployment, which includes configuring AWS for DNS and load balancing. I personally like having, on every pull request, a build that makes sure everything compiles (and not only on my machine) as well as a run of the unit tests to make sure no regression is introduced. For several months, this process was taking 3 minutes, plus or minus 10 seconds. This was satisfying for me; it was accomplishing its main goal. I was expecting it to take some time because of the nature of the project: first, I am using TypeScript; second, I am using node modules; and third, I need to run the unit tests. The code is relatively small on that project: I wrote about 36k lines in the last 11 months, and there are about 900 unit tests that need to run.

Moving from 3 minutes to 1 minute 30 seconds

The first step was to add the unit tests. Yes! For the first few months, only the build was running, mainly because we are using Bitbucket and Jenkins and I had never taken the time to configure everything; getting coverage into Jenkins for JavaScript code is not straightforward. Nevertheless, I was using the create-react-app “react-scripts-ts build” command, which is way slower than running the command “react-scripts-ts test --env=jsdom --coverage”. In my case, the switch trimmed 1 minute 30 seconds.

Still, within the remaining 1 minute 30 seconds, I was observing a waste of 1 minute spent getting node_modules with the command “npm install”, regardless of my step specifying “npm ci”. The difference between “install” and “ci” is that the former is slower because it performs the work of resolving the package-lock.json, which the “ci” command skips by relying on the existing generated package-lock.json file.

Moving from 1 minute 30 seconds to 1 minute 10 seconds

The install command was bothering me, and I found out that we had some internal process taking over some steps. To keep it short, I had to make a few modifications. First, in Jenkins, you can preserve folders like node_modules; this is under “Build Environment”. Do not forget to check “Apply pattern also on directories”. However, the problem is still npm: the “ci” command removes node_modules, so we are no more advanced than before. The idea, then, is to go back to the install command.

Conclusion

There is still some room for improvement. To be honest, I had to avoid deleting the whole workspace to make it work; Jenkins kept removing node_modules regardless of the syntax I was using. I also find it suspicious that npm install takes 20 seconds to figure out that nothing has changed, which is very slow. I’ll have to investigate further with yarn.

Data Access Gateway Project

I have had an open source project that handles Ajax calls and caching for about 3 months now. I am building that project in my free time, and I have been dogfooding the library in one of my projects at Netflix. The goal of the library is to avoid having similar Ajax calls running in parallel and to leverage a cache mechanism that avoids fetching data from the backend.

Why not leverage the service worker? A service worker is a great way to have background queries executed and cached. However, it has some drawbacks. The first one is that all the logic lives outside the scope of your project, which means that a caching policy that depends on the data is harder to handle. For example, the cache lifetime of the objects behind a chart might vary with the metrics displayed: a graph covering several years could be cached longer than a chart of the last hour. I am not saying it is impossible to handle in a service worker, but code would still be required to customize it, and that code would be spread between the caller (who knows the logic) and the service worker (which should be as generic as possible). The second drawback is that you cannot easily have several strategies. Some parts of your application might be fine with obsolete data, other parts can use the cache as long as the data falls within a specific time window, and some other parts might require always having the latest from the server. In the end, a system with different caching logics makes the service worker harder to write because the logic is divided. Finally, regardless of the separation and the rules, you still have to code the service worker to handle the cache, so you end up writing the code that goes into IndexedDB anyway, etc.

The library I built handles HTTP requests to fetch data and allows caching the data in IndexedDB and/or in memory. Having two levels of cache is handy for fast access but also for returning users. The library also has three built-in functions to query data: fast, fresh, and web. The fast approach gets the data from the cache as fast as possible, obsolete or not. If the data is obsolete, it performs an HTTP request in the background so the next request finds fresh data ready.

The fresh approach checks the expiration date and, if the data is still fresh, returns it immediately. Otherwise, it fetches and waits for the answer. Once the answer is back, it returns the data and stores it. Indeed, this is a little bit slower, since in many scenarios you have to wait.

The last function fetches the data directly from the web without looking at the cache, then stores the data in the cache, letting the two former functions leverage it. Like the other functions, it also handles concurrent calls for the same request, which is interesting when the user clicks around the interface and launches the same request several times. Instead of having all these identical requests run in parallel and return the same response, subsequent callers are subscribed to the first request and receive its response. It means the data is fresh, is returned faster for the subsequent calls, and pollutes the backend with fewer useless calls.
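Here is a sketch of how the three strategies are called, using the DataAccess instance initialized further below. Only fetchFast appears later in this article; fetchFresh and fetchWeb are my assumed names for the two other functions.

// Assumed function names for the fresh and web strategies
interface Item { id: string; value: number; }
const itemsRequest = { request: { method: "GET", url: "http://yoururl.com/items" } };

// Fast: resolves from the cache even when obsolete, and triggers a
// background HTTP request so the next call gets fresh data
DataAccess.fetchFast<Item[]>(itemsRequest).then((response) => response.result);

// Fresh: resolves from the cache only while the data is within its
// lifetime; otherwise waits for the HTTP response, stores it, and resolves
DataAccess.fetchFresh<Item[]>(itemsRequest).then((response) => response.result);

// Web: always performs the HTTP request and stores the response so the
// two other strategies can leverage it afterward
DataAccess.fetchWeb<Item[]>(itemsRequest).then((response) => response.result);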

To have insight into what is happening, I created a second side project, which is a Chrome extension. It is not released yet, but the code is available. It gives information about the quantity of calls done at each level (HTTP request, IndexedDB, memory) and which one is actually used. It has insight into the size of the data fetched from each source. Finally, you can see all the logs and a summary of the percentiles of all the fetches.

So, how does it look? While it might sound complicated, it is actually easy to use. There is the initialization portion, done once in your single-page application, and after that the requests.

// Initialization
export const DATABASE_BASE_WITH_VERSION = "DB_V1";
export const DataAccess = DataAccessGateway(DATABASE_BASE_WITH_VERSION);

// Configuration can change at any time
DataAccess.setConfiguration({
    isCacheEnabled: true,
    logError: (error: LogError) => { Log.trackError(error.error, "DataAccessError", error); },
});

// Query example
const requestInfo: AxiosRequestConfig = {
    method: "GET",
    url: "http://yoururl.com",
};
const promise1 = DataAccess.fetchFast<YourEntityModel>({ request: requestInfo })
    .then((response) => {
        // Do whatever you want with the response here
        return response.result;
    })
    .catch((error: AxiosError) => {
        // You can track errors here
        throw error;
    });

The query uses AxiosRequestConfig from Axios, a popular Ajax library that I am leveraging at the moment. Every request uses the URL as its unique identifier. In case you have a POST with data, you can manually provide an ID in the object passed to the fetch request. It is also possible to pass a configuration for the lifetime of the cache (in memory and in IndexedDB), either for a single request or globally for the whole Data Access Gateway library.
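For instance, a POST whose body changes the result could carry its own identifier. This is a hypothetical sketch; the id field name and the “search” endpoint are my assumptions about the request object’s shape.

// Hypothetical POST: the URL alone is not unique enough, so an explicit
// id (assumed field name) identifies the request in the cache
const searchRequest: AxiosRequestConfig = {
    method: "POST",
    url: "http://yoururl.com/search",
    data: { keyword: "typescript" },
};

DataAccess.fetchFast<YourEntityModel>({
    id: "search-typescript", // manual cache identifier for this POST
    request: searchRequest,
}).then((response) => response.result);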

To summarize, Data Access Gateway is a library that helps with caching and improves the performance of your application. It can be used directly in your code or even in your service worker. The goal is to avoid redundant requests and to provide fine-grained control over the caching policy while remaining as simple as possible to use.

Unit Tests and Coverage Report in Jenkins using Jest from Create-React-App

Since I left Microsoft Visual Studio Online (VSTS) as an employee, I have been using Jenkins, which is the continuous integration (CI) platform Netflix uses. I configured two Jenkins jobs for the project I am leading: one handles every pull request done against master, and the second executes during the merge of any pull request into master. For many months, I didn’t have the unit tests running on the platform. The reason is that I am not yet used to how Jenkins works, and even after several months I find VSTS more intuitive. Regardless, I recently took the time to set up my TypeScript Create-React-App project to run its unit tests in these two Jenkins tasks. Create-React-App comes with Jest, the best testing framework I have experimented with so far. My goal was to have all the unit tests run as well as to see the coverage.

Here are the steps required to have Jenkins handle your tests. The first thing is to install “jest-junit” as a dev dependency. The reason is that we need to convert Jest’s output format into JUnit’s.

npm install --save-dev jest-junit

The next step is to download a Python script into your repository; I have mine in “tools”. The reason is, again, conversion: Jest’s coverage file is not in the right format, and the Python script converts the lcov output into the Cobertura format. You can download the script once from this address.

wget https://raw.github.com/eriwen/lcov-to-cobertura-xml/master/lcov_cobertura/lcov_cobertura.py

A few configurations are required in the package.json. The first one is to create a test command that Jenkins executes instead of the default test command. The command calls react-scripts; since I am using TypeScript, I have to use the react-scripts-ts command. The next parameter is the “test” command, which we still want to execute. The change starts with the test results processor: this is where you specify jest-junit to execute once the tests are done. I set my coverage output to the “coverage” folder, which is the folder I have ignored in the .gitignore and where my local coverage files are normally written. Here are the three commands I have: the first one runs the tests, the second runs the tests and coverage for CI (this is the new part), and the last one is for when I want to run the coverage locally.

"test": "react-scripts-ts test --env=jsdom",
"test:ci": "react-scripts-ts test --env=jsdom --testResultsProcessor ./node_modules/jest-junit --coverage --coverageDirectory=coverage",
"coverage": "react-scripts-ts test --env=jsdom --coverage",

Finally, you need a few jest-junit configurations, which can live in your package.json. I have some folders that I want to exclude from coverage, which you can do in the Jest configuration under collectCoverageFrom; I had these before starting this Jenkins work. Then, the coverage reporters must be lcov and text. Finally, the new configurations go under “jest-junit”. The most important one is “output”, which again points into the coverage folder. You can change the destination and file as you wish; however, remember the location, because you will need to use the same one in a few instants inside Jenkins.

  "jest": {
    "collectCoverageFrom": [
      "**/*.{ts,tsx}",
      "!**/node_modules/**",
      "!**/build/**",
      "!**/definitionfiles/**",
      "!**/WebWrokers/**",
      "!**/*.mock.ts",
      "!src/setupTests.ts"
    ],
    "coverageReporters": [
      "lcov",
      "text"
    ]
  },
  "jest-junit": {
    "suiteName": "jest tests",
    "output": "coverage/junit.xml",
    "classNameTemplate": "{classname} - {title}",
    "titleTemplate": "{classname} - {title}",
    "ancestorSeparator": " > ",
    "usePathForSuiteName": "true"
  },

In Jenkins, you need to add 2 build steps and 2 post-build steps. The first build step runs the unit tests with the script we just added to the package.json. The type of the build step is “Execute Shell”.

npm run test:ci

The second step is also an “Execute Shell”. This one calls the Python script that we placed in the “tools” folder. It is important to adjust the paths of your lcov.info and coverage.xml; both are in my “/coverage/” folder. The “base-dir” is the directory of the source of your code.

python tools/lcov_cobertura.py coverage/lcov.info --base-dir src/ --output coverage/coverage.xml

The next two steps are “Post-Build”, this time of two different types. The first one is “Publish JUnit test result report”. It has a single parameter, which is the XML file; mine is set to “coverage/junit.xml”. The second task is “Publish Cobertura Coverage Report”. It also takes a single parameter, the coverage.xml file; mine is set to “coverage/coverage.xml”.

At that point, if you push the modifications to the package.json and the Python script, you will see Jenkins running the unit tests and doing the conversion. It is possible to adjust the threshold of how many tests are allowed to fail without breaking the build, as well as the percentage of coverage you expect. You will get a report on the build history that allows you to sort and drill into the coverage report.

How to create a toast notification in a few lines of code

I was recently about to ship a new system when I remembered that, many months ago, I had decided to create an error message at the top of the application under a few constraints. My initial thought was that this error banner would be there only for unhandled exceptions and would not be the central error area; that still stands. For example, forms get their validation directly at the input level. However, the reality is that an Ajax error can happen, and one of my forms could be out of range of the header. The result is that a user might not see the error at all and be bewildered about why the data is not being processed.

With very little time, I had to refactor the user interface. I first had the idea of using an off-the-shelf solution, but after a few minutes I thought that, since it is an exceptional case, I could probably tinker up a simple CSS solution that would fit perfectly in my React application. Within one hour, I had the whole solution written and tested on many screen resolutions. This article presents a simplified equivalent of what I created, without being strongly tied to React or TypeScript.

The first piece is the HTML structure to create and insert when an error occurs; it holds the message.

<div class="toast">
  <p>Your message</p>
</div>

The idea is to have a container. The inside can be a simple message or an icon with a message. In my case, I had an icon, a header, a text, and a dismiss button that calls a Redux action to remove the message from the Redux store. As I said, this is all optional and is not required for the idea to work; a hypothetical sketch of that richer variant follows.
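// Hypothetical sketch of the richer variant described above (all names
// are mine): a toast with a header, a text, and a dismiss button wired to
// a callback that would dispatch the Redux action removing the message
import * as React from "react";

interface ToastProps {
    header: string;
    text: string;
    onDismiss: () => void;
}

const Toast: React.FC<ToastProps> = (props) => (
    <div className="toast">
        <h4>{props.header}</h4>
        <p>{props.text}</p>
        <button onClick={props.onDismiss}>Dismiss</button>
    </div>
);

The next step is to shape the end result of the error panel, which remains at the bottom of the browser window regardless of whether the user is scrolling or has scrolled.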

.toast{
  position:fixed;
  bottom:10px;
  left: 10px;
  width:250px;
  border-radius: 4px;
  box-shadow: #310808 1px 1px 5px;
  background-color: rgba(177, 7, 15, 0.78);
  padding:10px;
  color: #f5bfbf;
}

The CSS is pretty basic and leverages the “fixed” position. In the actual code, I had a media query that makes the toast span the whole width of the browser at low resolutions, by setting the right value and letting the width be auto.

The second piece is to have the toast appear gracefully, which can be done purely with CSS. The idea is to define an animation that executes once: it starts with the container hidden and off the screen, and ends at the final position with full opacity. The CSS needs 3 more lines.

  opacity:1;
  animation: toast 500ms cubic-bezier(.23,.82,.16,1.46);
  animation-iteration-count: 1;

The animation is called “toast”, iterates once, and has a duration of 500ms. The cubic-bezier is set to create a bouncy effect.

@keyframes toast{
  0%{
    opacity:0;
    transform: translateY(200px);
  }

  100%{
    opacity:1;
    transform: translateY(0px);
  }
}

The end result is as planned: the toast pops in once when the error is added into the DOM.

Indeed, this solution is not elaborate and will look mundane to anyone using something richer in features. However, the size of this solution is tiny, and for an edge case it fits the situation perfectly. If you want to see the code in action, I created a CodePen.