Comment your Code

I placed that article under the category TypeScript, but it's really about every language. The trend of not commenting code is still around, and I am more and more curious about it, because the more I work and preach about commenting code, the more I see people commenting. That makes me think that most people who believe it's fine not to comment haven't worked on a large system or on a project with many people.

Some arguments are valid: comments can drift out of sync with the code, and comments that add no value do exist. But most of the time, that is just because the comment is not well written. Let's be lucid: if people can write bad code, they can also write bad comments.

First of all, a comment should explain what the variable describes, and the variable name should describe what it holds. Let's look at a real example:

private originalarrivaltime: string;

No comment. Having a comment saying "Original arrival time" wouldn't be any better. Right now, I can put any kind of data in this variable since it's a string. What format is expected? When should this time be set? Why do we need it as a string? No idea.

This example is simple, yet obvious questions like why and when should be answered.

/**
* What: Time when a message arrives into the mailbox
* When: This is set when the message is built and cannot be changed any further
* Why: It's a string because we get it in an ISO string format
*/
private originalarrivaltime: string;

That's it. Of course, it's more to type. However, if I give both versions to two different people, I can assure you that the second format, with the comment, makes it way easier for the developer to move forward. Indeed, we can argue that by reading the code we can find the answers to these questions. But on a big system, it can take many minutes, more than one or two... sometimes it's so complex that it can take half an hour just to get the whole idea.

I understand that we could divide the code into smaller classes, with more cohesion and encapsulation, and that instead of a string we could use a class that strongly types the value, accepting only an ISO string format and rejecting everything else. I am also with you that with proper, well-written unit tests, we could have figured all of that out. But we live in the real world where time is always ticking, and from experience I know that even in the best company, on the best product, in the best team, we have to ship features and cannot always follow the perfect path. That is why commenting can save a lot of time. In less than 15 seconds, I was able to write this comment, which will save minutes for the people coming after me to work on that code. Comments are not the enemy.
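
To illustrate the alternative mentioned above, here is a minimal sketch of what such a strongly typed wrapper could look like; the class name and the validation logic are hypothetical, not something from the original codebase.

class IsoDateString {
    private readonly value: string;

    constructor(value: string) {
        // Loose check, for illustration only: Date.parse returns NaN for unparsable dates
        if (isNaN(Date.parse(value))) {
            throw new Error(`"${value}" is not a valid ISO date string`);
        }
        this.value = value;
    }

    public toString(): string {
        return this.value;
    }
}

// The field could then carry its intent in the type itself:
// private originalarrivaltime: IsoDateString;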

How to setup TypeScript and React

This article explains how to get an existing TypeScript project that uses Gulp to also use ReactJs.

React NPM Packages

To use ReactJs you need four different NPM packages, for React and React-Dom.

npm install react@latest --save
npm install react-dom@latest --save
npm install @types/react --save-dev
npm install @types/react-dom --save-dev

Two of them are the actual Facebook ReactJs libraries, and the two others are type definitions, which allow you to use these two JavaScript libraries with TypeScript. This means that you will get all the autocomplete features while typing.

Modifying RequireJs

Inside your RequireJs configuration, you need to add two entries, for react and react-dom, to indicate where the modules to load are located. The tricky part is that you must point to the "dist" folder. If you look in node_modules, you will see that react also exists in the main folder and in the lib folder; those won't work. The other tricky part is that react-dom uses a hyphen, which requires you to quote the key. Both need a lowercase path key, and both will be used in lowercase in the import statements.

  <script>
        requirejs.config({
            baseUrl: 'output/',
            paths: {
                vendors: 'vendors',
                jquery: '../vendors/jquery/jquery',
                react: "../vendors/react/dist/react",
                "react-dom": "../vendors/react-dom/dist/react-dom"
            }
        });
        //Startup file
        requirejs(['file1']);
    </script>

If file1.ts was your entry point, it can remain the entry point. You just need to rename it to file1.tsx since it will use React's JSX syntax.

TypeScript TsConfig.json

TypeScript's configuration file needs to be altered to indicate the type of JSX we want to compile.

  "jsx": "react"

If you want, you can add both types under the property "types", but this is optional since TypeScript will look into node_modules to get the definitions.
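
For reference, here is a partial tsconfig.json showing only the options discussed here; the optional "types" entries correspond to the two React type packages installed earlier.

{
  "compilerOptions": {
    "jsx": "react",
    "types": [
      "react",
      "react-dom"
    ]
  }
}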

Gulp and React

If you are using Gulp or any other toolkit for automation, you need to make one change: adjust the path of the files to compile so that .tsx files are picked up in addition to .ts files.

paths.allTypeScript = paths.typescript_in + "**/*.{ts,tsx}";
// ...
gulp.task("build", () => {
    var compilationResults = gulp.src(paths.allTypeScript)
        .pipe(sourcemaps.init())
        .pipe(tsProject())
    compilationResults.dts.pipe(gulp.dest(paths.typescript_out));
    return compilationResults.js
        .pipe(sourcemaps.write('.'))
        .pipe(gulp.dest(paths.typescript_out));
});

This change makes both .ts and .tsx files get compiled by the TypeScript compiler (tsc).

Using React

Once React is set up, you can modify your entry point to render a simple component. Here is the component code:

import * as React from "react";
interface ComponentProps {
    name: string;
}

export class Component extends React.Component<ComponentProps, {}> {
    public render(): JSX.Element {
        return <div>React: Hello, {this.props.name}</div>;
    }
}

The original TypeScript entry point, renamed with the .tsx extension:

import * as React from "react";
import * as ReactDOM from "react-dom";
import { Component } from "component1";

const elementToRender = <Component name="Test"/>;
ReactDOM.render(elementToRender, document.getElementById("main"));

The difference is that we import the two React libraries as well as the component, and then render it.

You can find the whole source in GitHub at https://github.com/MrDesjardins/TypeScriptReactBoilerplate.

Gulp Watch to build only changed TypeScript

Build performance for TypeScript can be crucial if you are working on a big project. If you use a watcher to compile whenever any TypeScript file changes, and that watcher rebuilds every TypeScript file, you will take a huge performance hit: changing 1 file may mean building thousands of them. The following code is the lazy approach that builds every TypeScript file when one changes:

gulp.watch(paths.typescript_in + '**/*.ts', ['build', 'tsreload']);

This script watches for TypeScript file changes and, when one occurs, runs the build task and reloads the browser. The problem is that the build task builds all the TypeScript. To remedy that situation, we want TypeScript to build only the changed file. For that, you will need a new Gulp package called "gulp-cached", which you install as a dev dependency.

npm install gulp-cached --save-dev

Inside your gulpfile.js, you need to access the module:

const changed = require('gulp-cached');

And finally, you need to use the "change" event on the watcher and remove the task dependencies.

    gulp.watch("app/scripts/**/*.ts").on("change", function() {
        var compilationResults = gulp.src("app/scripts/**/*.ts")
            .pipe(changed("./deploy/output"))
            .pipe(sourcemaps.init())
            .pipe(tsProject())
        compilationResults.dts.pipe(gulp.dest("./deploy/output"));
        compilationResults.js
            .pipe(sourcemaps.write('.'))
            .pipe(gulp.dest("./deploy/output"))
            .pipe(connect.reload());
    });

The main change is that we pipe the source through the changed call before the rest of the pipeline. This pipe, once it has run once, keeps track of whether each file changed. If a file changed, it goes down the pipeline; otherwise, it is filtered out. It means that the first time a TypeScript file changes, the watch will build everything; after that, it filters the source down to only the changed files. The reload is done by calling connect.reload() directly at the end. This is a huge performance boost for you as a developer because, whatever the size of the project you are working on, you will be able to build every change in under a second. Having a short window between the moment you save your file and the moment you see the change in your browser is critical to shipping code fast. With this library acting as a cache, you filter out the noise that didn't change and let your computer build only what is required.

Minimum Viable Product is a Bad Pattern

A few years ago, for me, MVP meant "Most Valuable Professional", but since I moved to the United States, every time I hear this acronym it stands for "Minimum Viable Product", and I hear it a lot. It's almost a habit that people have. When a discussion arises around what needs to be done, this free card lands on the table, mostly when someone is pitching an idea that is time-consuming or that doesn't go in the desired direction. Lots of people get caught in the trap of accepting this excuse without giving it more thought, while it is often very costly for the product.

The rationale behind the MVP answer is that we should build the minimum possible and iterate afterward based on how the consumer reacts to the product. This way, you do not build features that users do not really want. Up to that point, I think we can all agree, but the problem is that if you are building a car, you still need to have seats. The original meaning of MVP was that your car shouldn't need a rocket to fly. Unfortunately, like anything that gets used too much in the wrong situation, it creates the reverse effect: a product that is not very viable.

The first bad result of building with a minimum-spirit attitude is that the product tends to stay at this minimum level. While many people really had the intention to iterate to improve the feature, the reality becomes something else. It will probably stay minimal for many reasons. The most popular one is that "it already does the job". For example, users may need to click 5 times more than they should, or the user interface is built so fast that it almost forces the user to consult the documentation to figure it out, but "it's good enough". It also stays minimal because everything is money driven and budgets get shuffled all the time. Future plans to do better get pushed behind an infinite list of other "supposedly" more important tasks. The major issue of this minimal spirit is that customers get a piece of software (or any product, really) that is flawed from its design. Another side effect is a big product with a lot of features that are all average instead of being good. By aiming at the minimum, the whole product stays at the minimum bar.

The second result is that it's hard for people who get used to working in a minimum environment to see beyond what is produced. After a few months or years of doing the minimum, your expectation bar gets low. While having a car with massage seats is probably not a good idea, because it's too much, having a car whose seats are just pieces of wood shouldn't be your minimum either. You may think I am stretching the idea of MVP with this example, but in reality, for any of us, a wooden seat is just ridiculous. The same thing happens with software; we are just more tolerant since it's less tangible.

The third output of a wrong MVP is that it is very oriented toward short-term goals. When working on a product, you should not plan its next 10 years, but you should be able to see a few months ahead and have an objective. MVP becomes the excuse for people not to think more than a few feet in front of them. A short-term vision breaks the master plan of having something cohesive, built in a way that can grow in a direction. While software allows drastic changes of direction more easily than other fields, such changes are still very costly and create a lot of technical debt in the code. The excuse of doing the minimum and getting customer feedback, instead of having a plan, is even more obvious when, once the feature is delivered, there is no telemetry based on user behavior and no time allowed to improve upon any feedback at all. I have seen a lot of telemetry gathered and never queried. The reason is very simple: while MVP is initially invoked, some decision makers know that they cannot fail, hence not acknowledging that something is wrong is a way to avoid being wrong. Pushing a boat into the ocean and hoping it lands in a good place (which we do not even know where it is) is a sad reality. Not looking to see whether that boat could have landed in a better place is also a symptom of a misapplied MVP pattern.

The fourth fallacy is that being in a constant MVP environment brings your expectations low, which means you will get low results. Forget about creating a good user experience, or creating a "wow factor", or trying to be innovative in MVP mode. After all, the first letter of MVP is M for "minimum". It's like asking your kid to aim for the minimum grade and expecting them to be the best of their class at the end of the year. Again, we shouldn't aim to have a flying car, but your car should beat your competitors in terms of performance, consumption, comfort, etc.

The fifth result is the belief that with MVP we will iterate toward something less minimal that gets better over time. I have witnessed this pattern several times and the outcome is always the same: many iterations with drastic changes that could have been avoided with more thought, or even just a little more time, at the beginning. I saw many people raising valid points and issues that could have been solved right from the start, but because the minimum was required, many iterations went down the road and added 6 more months instead of going in a straight line. The side effect is that the code gets messy, incomprehensible to those who worked on the previous iteration, and it discourages those who keep working on the same thing over and over again; these same people often knew from the beginning that it would have been better not to take this detour. A common and wrong behavior of those who are behind this MVP iteration spiral is to be satisfied by the improvement between iterations, which is just wrong: it could have been right from the start and never bad at all. Finally, you can confirm that you are in this pattern if you are iterating without even getting user feedback. This happens when the MVP route was taken to reach some "fictional" deadline or because one well-placed individual in the hierarchy wanted something else. By fictional deadline, I mean a date dropped on the table for a reason that is not very clear; it becomes an emergency that no one understands, since it wasn't an emergency for months and months. Mostly, this converges with the short-term plan: it was pretty obvious that this would be needed, but no one was looking ahead.

The sixth fallacy is present only if you have a product with real customers who are paying to use your software. Once you are done with the release of your minimum viable product, lots of well-placed people in the hierarchy become frightened of changing or iterating. The reason is that the product is working, it has paying customers, so why try to change something that could disappoint existing customers or even lose some of them? That happens more often than you think. Instead of having a strategy to test on a small subset and move ahead, we go into a spiral of overthinking, trying to absolutely guarantee that the change will satisfy everybody. And, as we know, satisfying everyone is impossible. So people become frightened and stay with their minimum product, or change really subtle elements, and stop thinking that they can grow more, make changes to get more customers, etc. While this could be mitigated with a proper transition plan, that is costly to do if you always aim for the minimum, because the system and infrastructure you are working on are probably of minimum caliber too.

MVP remains for me a keyword that people use in discussions when they want to kill someone's ideas. Many other keywords, like agile, iterative and slim, are used the same way to avoid thinking beyond the minimum. Like I said, it's not about trying to figure out everything from the start; it's about having a vision of what we want and doing what needs to be done in order of priority. You shouldn't have to do 3 iterations of the same thing even before releasing your product, nor should you work on something without having the master plan clearly exposed to you. When Tesla built its car, they didn't say that they wanted to do the minimum possible in an electric car to be just a little bit better than the competition. They did the best they could with what they had, without going crazy by trying to build a flying car. Every time you hear someone bringing up MVP, you should make sure that the product still does what it should do, without the fluff of too many side features, but also without stripping everything out of it, which would only result in more work later or a bad user experience.

The end result is a growing number of bad products getting to market rapidly with low quality, with features that reach their limitations within a few minutes of use, built in ways that even a neophyte can tell you don't make sense. We see products that don't evolve over time and get killed by the competition, even if they were popular once upon a time. We see big marketing events that should inspire us with something new and different, but we end up seeing copycats of existing features without any innovation added.

How to unit test private method in TypeScript (part 2)

I already posted about how to write unit tests for private methods with TypeScript about one year ago. A few days ago, I had the same discussion I had had in my previous team concerning private methods. The situation is similar: developers don't test private methods and rely on public methods to reach them. At first, it may sound like we are going through those private methods, and therefore doing proper unit testing.

The problem with going through intermediate methods to access the one we want to test is that any change to the intermediate methods can make multiple tests fail. When a unit test fails, the goal is to know which unit of your code is failing. Imagine a class A that has methods a1, a2 and a3. You want to unit test a3, but the only entry point is a1, which is the only public method. a1 calls a2, which in some particular situation calls a3. There are multiple conditions in a3 and you evaluate that you need 5 unit tests. The problem is that if a1 or a2 changes in the future, all 5 of these tests may fail when they should not.
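
To make the example concrete, here is a sketch of what such a class could look like; the names follow the a1/a2/a3 example above and are purely illustrative.

class A {
    public a1(input: number): void {
        // ...some work, then delegate
        this.a2(input);
    }

    private a2(input: number): void {
        // a3 is only reached in one particular situation
        if (input > 0) {
            this.a3(input);
        }
    }

    private a3(input: number): void {
        // the logic with multiple conditions that we actually want to unit test
    }
}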

At that point, most people understand the situation and agree to test the private methods. However, there are good ways to do it and bad ways. The worst way is to cast the instance of class A to the type any and call a3 directly. Something like:

// Bad way:
var a = new A();
(a as any).a3();

The problem with the above code is that when you refactor a3 to a better name, no tool will find this usage. Worse, this opens the door to accessing private fields or injecting new functions and fields into the class. In the end, it becomes a nightmare to maintain. We are using TypeScript to be strongly typed; our tests should stay strongly typed too.

In the previous article I wrote, I talked about 2 patterns. The first one is about working around encapsulation with an interface. The second had two variations.

Let's recall the first pattern. Class A has an interface IA that is used everywhere, and IA only exposes the method a1. You use the interface everywhere; the only place you don't is where the concrete class gets injected by the inversion of control container. We can leverage this abstraction to keep strong encapsulation for the application while making every method public on the implementation. This way, developers still only have access to a1 in our example, but in our tests we have access to everything else. This might not sound like a proper solution at first, since we open up the encapsulation of the implementing class, but it's the cheapest way to unit test private methods. That said, I agree that there are other solutions, like pattern #2 presented in the previous article.
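
Here is a rough sketch of that first pattern, reusing the hypothetical A/a1/a2/a3 example from above: the interface keeps the application surface small, while the tests work against the concrete class and can reach every method.

interface IA {
    a1(input: number): void;
}

class A implements IA {
    public a1(input: number): void {
        this.a2(input);
    }

    // Public on the class, but invisible through the IA interface
    public a2(input: number): void {
        if (input > 0) {
            this.a3(input);
        }
    }

    public a3(input: number): void {
        // logic under test
    }
}

// Application code only sees IA (for example, injected by the inversion of control container)
const forApplication: IA = new A();
forApplication.a1(1);

// Test code uses the concrete class and can call a3 directly, with full typing
const forTest: A = new A();
forTest.a3(1);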

The second pattern presented was about moving code around. In our example, a2 and a3 are private and could be moved into other classes. For example, let's say that A was a user class, a1 a method to get the user information to display on the screen, a2 a method to get the address information, and a3 a method to format the street address. This could be refactored from:

class User {
    public getUserInformationToDisplay() {
        //...
        this.getUserAddress();
        //...
    }

    private getUserAddress() {
        //...
        this.formatStreet();
        //...
    }

    private formatStreet() {
        //...
    }
}

to:

class User {
    private address: Address;
    public getUserInformationToDisplay() {
        //...
        this.address.format();
        //...
    }
}

class Address {
    private streetFormatter: StreetFormatter;
    public format() {
        //...
        this.streetFormatter.toString();
        //...
    }
}

class StreetFormatter {
    public toString() {
        // ...
    }
}

Originally, we wanted to test the private method formatStreet (a3), and now it's very easy: I do not even need to care about all the classes or functions that call it, I just unit test the StreetFormatter class (which holds the original a3). This is the best way to unit test a private method: divide the code correctly into specific classes. It is also more costly in terms of time.
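
As an illustration of how small such a test becomes, here is a sketch; the import path, the expected value and the use of Node's assert module are placeholders, so use whatever test framework your project already has.

import * as assert from "assert";
import { StreetFormatter } from "./streetFormatter"; // hypothetical path

// The formatter is tested in complete isolation: no User and no Address involved
const formatter = new StreetFormatter();
const result = formatter.toString();

// Assert against whatever the real formatting rules are, for example:
assert.strictEqual(result, "123 Main Street");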

I always prefer the second approach, but time constraints and the high velocity of shipping features always end up with a higher priority, even in software shops where the message is quality first. That said, I prefer using the first approach over not having any unit tests at all. It's a good compromise that works well whatever your framework. I used both approaches in TypeScript code built on a proprietary framework, as well as with React and now with Angular. In the end, the important thing is to have the best coverage while being sure that everything tested is solid, to help the software rather than slow down the whole development.

Compiling TypeScript for a specific folder to increase build performance

If you are using Gulp to build your TypeScript, you may use the default configuration found on a lot of websites, which is to let tsconfig.json decide what to include and have Gulp use gulp-typescript to read the tsconfig.json file. However, if you want to build just a portion of the TypeScript, let's say a single folder, you will be out of luck.

So the idea is not to use this kind of configuration in tsconfig.json:

{
  "compilerOptions": {
    "sourceMap": true,
    "target": "es6",
    "module": "amd",
    "outDir": "./deploy/output",
    "types": [
      "jquery",
      "requirejs",
      "lodash",
      "reflect-metadata"
    ],
    "experimentalDecorators": true,
    "emitDecoratorMetadata": true
  },
  "include": [
    "app/scripts/**/*"
  ],
  "exclude": [
    "node_modules",
    "**/*.spec.ts"
  ]
}

nor this Gulp task:

gulp.task("build", () => {
    const r = "./app/output";
    var compilationResults = tsProject.src()
        .pipe(sourcemaps.init())
        .pipe(tsProject())
    compilationResults.dts.pipe(gulp.dest(r));
    return compilationResults.js
        .pipe(sourcemaps.write('.'))
        .pipe(gulp.dest(r));
});

But to use this tsconfig.json:

{
  "compilerOptions": {
    "sourceMap": true,
    "target": "es6",
    "module": "amd",
    "outDir": "./deploy/output",
    "types": [
      "jquery",
      "requirejs",
      "lodash",
      "reflect-metadata"
    ],
    "experimentalDecorators": true,
    "emitDecoratorMetadata": true
  },
  "exclude": [
    "node_modules",
    "**/*.spec.ts"
  ]
}

and this Gulp task code:

gulp.task("build", () => {
    const outFolder  = "./app/output";
    var compilationResults = gulp.src("app/scripts/**/*.ts")
        .pipe(sourcemaps.init())
        .pipe(tsProject())
    compilationResults.dts.pipe(gulp.dest(outFolder));
    return compilationResults.js
        .pipe(sourcemaps.write('.'))
        .pipe(gulp.dest(outFolder));
});

The whole idea is that you move the list of included files outside the TypeScript configuration file and inject the files from Gulp instead. So far, everything is built from the root of the app/scripts folder, but you could define a new task that takes a subfolder, like the following code:

gulp.task("buildgeneral", () => {
    const outFolder = "deploy/output/general";
    var compilationResults = gulp.src("app/scripts/general/*.ts")
        .pipe(sourcemaps.init())
        .pipe(tsProject())
    compilationResults.dts.pipe(gulp.dest(outFolder));
    return compilationResults.js
        .pipe(sourcemaps.write('.'))
        .pipe(gulp.dest(outFolder));
});

This is very interesting if you have a project with thousands of files. Instead of building the whole project every time, you can build just the file, or the folder in which the file resides.
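
For example, a watcher could be wired to that folder-specific task so that only this part of the project rebuilds on change; this sketch assumes the "buildgeneral" task defined above.

// Rebuild only the "general" folder when one of its files changes
gulp.watch("app/scripts/general/*.ts", ["buildgeneral"]);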

2017 Accomplishment Goals

This year, I decided to "bring it on" by adding, every month, a new awesome accomplishment to my life. The goal is to push myself by doing something out of the ordinary, getting an achievement or recognition, starting a new habit, or anything concrete that can improve my professional or personal life. It can also be something simple for someone else but totally out of my comfort zone. All this to try to push myself to the next level and do something outstanding. Some objectives can start one month and continue until the end of the year, and others can be a one-shot deal.

I decided to start this project last month (April), but I am serious and want a full year of amazing months. Therefore, my first objective counts for January and so on, so I had some catching up to do, which was easier than expected since my year started pretty solid.

Here are my monthly accomplishments/goals:

  • January: Publish a book on Asp.Net MVC/Entity Framework/Azure
  • February: Changing Job from Microsoft VSTS to Microsoft Teams
  • March: Start working on two online TypeScript courses (to be finished by June)
  • April: Secret project that I invested in now, but that I'll reveal in December and will continue in 2018.
    Bonus #1: Start reading 1 book a month that is not related to programming, for a total of 12 this year.
    Bonus #2: I received an award from Microsoft because I filed a second patent as a sole inventor.
    Bonus #3: I received an award from Microsoft because I filed a third patent as a sole inventor.
  • May: Start writing a private journal where I must write for 15 minutes (~500 words) every two days. The goal is to reflect on what went well and what didn't. It is also a way to meditate in times of struggle. I am using the platform Penzu (https://penzu.com/). I wrote over 15,000 words, writing every day in May.
    Bonus #1: I filed a fourth patent as a sole inventor with Microsoft.
  • June: Interviewed with Netflix, got an offer and accepted it.
    Bonus #1: I filed a fifth patent as a sole inventor with Microsoft.
    Bonus #2: I filed a sixth patent as a sole inventor with Microsoft.
  • July: Move to California.
    Bonus #1: Switch from Microsoft’s ecosystem to Apple’s ecosystem.
  • August: Start studying for GRE.
  • September: Start doing cardio exercise (jump rope) and learn 60 new English words per month for the next 4 months.
  • October: Got accepted to Georgia Tech Master Program (goal that will be stretched for the next 3+ years).
  • November: Got promoted at Netflix within 4 months.

Books read so far (April objective):

I'll update this post every month! Looking forward to an awesome year!

Visual Studio Code with NPM and TypeScript (Part 8 : SASS & CSS)

Transforming SCSS into CSS is a matter of using less than 5 lines of code. NPM has a package that you can use with Gulp. In less than 5 minutes, you can have SCSS configured to compile all your files into CSS.

The first step is to get the gulp-sass package. You need to install it as a development dependency since you won't need it at the browser level.

npm install gulp-sass --save-dev

The next step is to create a task that takes all your .scss files, pipes them into the Sass compiler, and pipes the output into your output folder, from which you will reference them in your HTML file to serve the styles. Here is the Gulp task.

gulp.task('scss', function() {
    // Compile every .scss file and log Sass errors without stopping the watcher
    return gulp.src('sass/**/*.scss')
        .pipe(sass().on('error', sass.logError))
        .pipe(gulp.dest('./css/'));
});

You can find all the changes for SCSS inside this commit[1] for the project we are building up. You may see some differences between the commit and this blog post. For example, instead of relying on strings directly inside the task for the paths, I opted to use a constant. It's always better to have paths centralized in case they change or can be reused in another task.

Another change that you might consider is to add this newly created task as a dependency of the build task or the watcher.
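
For example, assuming your main task is named "build", the new task could be wired in as a dependency and the .scss files watched like the TypeScript files; the task names here are just placeholders matching the earlier posts of this series.

// Run the SCSS compilation as part of the main build
gulp.task("build", ["scss"], () => {
    // ... existing TypeScript build
});

// Recompile the CSS whenever a .scss file changes
gulp.watch("sass/**/*.scss", ["scss"]);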

Here is the output where the SCSS generates the CSS applied to the HTML.

SCSS:

body{
    font-size:36px;
    span{
        font-size:6px;
    }
}

CSS:

body {
  font-size: 36px; 
}
body span {
    font-size: 6px; 
}

HTML Output: screenshot of the rendered page (omitted).

[1]:https://github.com/MrDesjardins/TypescriptNpmGulp/commit/02410d535a12cb48b167ff45b18970d312776270