TypeScript 2.0 cast required for simple boolean?

TypeScript is a great help when you develop a web application with a significant amount of client-side script. It adds strong types on top of JavaScript and, once compiled, brings everything back to JavaScript. Many huge projects, like Angular 2, use TypeScript. At Microsoft, Visual Studio Team Services also uses TypeScript. A few days ago, the TypeScript team released version 2.0, which causes some existing code to fail to compile. Here is a snippet that illustrates the problem. The following code doesn't compile; it fails right at the if line.

var bool: boolean = true;

if (bool === false) {
    console.log("not true");
}
else { 
    console.log("not false");
}

However, the following code works:

var bool = true;

if (bool === false) {
    console.log("not true");
}
else { 
    console.log("not false");
}

And so does this one:

var bool: boolean = true;

if ((bool as boolean) === false) {
    console.log("not true");
}
else { 
    console.log("not false");
}

Before I start explaining, let's be clear: I think the way it worked before was good, and I am not convinced that this new behavior will help reduce the number of errors.

So, how come `var bool: boolean = true;` doesn't work when `var bool = true;` does? The second one infers the type from the value, and if you play with it in the official TypeScript Playground, you can see that the inferred type is boolean.
[Image: the TypeScript Playground showing the inferred boolean type]

In fact, the version without the type annotation is affected by a bug, and in version 2.1 it will be fixed to fail as well. But why? The culprit hides under the boolean type, which doesn't exist in JavaScript: TypeScript treats it as the union `true | false`. However, in the example, we only ever assign true, so the compiler figures out that the real type of this variable is true, not true or false. TypeScript 2.0 supports the literal types `true` and `false`, and it narrows union types. What does narrowing mean? Narrowing is limiting the set of values a variable can hold based on checks such as typeof, instanceof, equality, etc. A narrowed type can be a boolean, as we just saw, but also an enum.
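To make narrowing concrete, here is a small sketch of my own (not from the official examples) that narrows a union with a typeof check:

```typescript
// Inside each branch, TypeScript narrows the union
// string | number down to a single member.
function format(value: string | number): string {
    if (typeof value === "string") {
        // value is narrowed to string here
        return value.toUpperCase();
    }
    // only number remains in this branch
    return value.toFixed(2);
}

console.log(format("abc")); // "ABC"
console.log(format(3));     // "3.00"
```

The literal-type behavior described above is the same mechanism: the compiler tracks which members of the union are still possible at each point of the control flow.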

You can still trick the TypeScript compiler by using a function that alters the value, but it is still flaky. For example, the following code declares the type as boolean, sets the value to true, and alters it to false within a function. Written this way, TypeScript figures out that the value can be true or false, and thus keeps accepting the legacy boolean comparison.

var bool: boolean = true;
let f = (() => {
    bool = false;
})();

if (bool === false) {
    console.log("bool is false");
} else {
    console.log("bool is true");
}

On the other hand, if you write the same code slightly differently, the compiler gets lost and won't let you compile:

var bool: boolean = true;
let f = (() => {
    bool = false;
});
f();

if (bool === false) {
    console.log("bool is false");
} else {
    console.log("bool is true");
}

The last piece of code is harder for TypeScript: it cannot be sure that the function `f` will ever be called, whereas the first version is guaranteed to execute. Does that mean that comparing with true would fail as well? No: the type of bool is narrowed to the literal type true, so the comparison with true compiles (it is simply always true), while the comparison with false is the one the compiler rejects.
[Image: the Playground showing the type of bool narrowed to true]

Overall, this change is weird. Why would you take the time to annotate a type as boolean (true | false) only to let TypeScript overwrite your decision and decide that it should be true only or false only? It also brings some issues if you are using TypeScript's enums. For example:

enum MyEnum {
    Choice1,
    Choice2,
    Choice3
}

This translates into Choice1 being 0, Choice2 being 1, and Choice3 being 2. The problem is that 0 is falsy in JavaScript. The first check in the following code is meant to be sure we do not pass null or undefined. Since MyEnum.Choice1 is 0, which is falsy, the code never enters the if.

function isChoice1(yourChoice: MyEnum) {
    if (yourChoice) {
        return yourChoice === MyEnum.Choice1; 
    }
    return false;
}

This code won't compile because the triple equal is no longer checking that yourChoice is a MyEnum: after the truthy check, yourChoice is narrowed to the type Choice2 | Choice3!
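One possible workaround (a sketch of mine, not an official recommendation) is to make the null/undefined check explicit, so the compiler does not narrow the enum away through truthiness:

```typescript
enum MyEnum {
    Choice1,
    Choice2,
    Choice3
}

function isChoice1(yourChoice: MyEnum): boolean {
    // Checking explicitly against null and undefined, instead of
    // relying on truthiness, keeps Choice1 (value 0) inside the
    // narrowed type, so the comparison below stays valid.
    if (yourChoice !== null && yourChoice !== undefined) {
        return yourChoice === MyEnum.Choice1;
    }
    return false;
}
```

Written this way, MyEnum.Choice1 passes the guard and the function behaves as originally intended.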

Overall, this change will probably give you some headaches at first. You can always quickly fix an error by casting to the type you desire. In the long run, you'll get used to TypeScript's control flow analysis and develop new habits.

React Autobinding and how to force component to re-attach its events

This is not a usual scenario, but imagine that you have a component that must re-attach its events. How would you do it? The use case is that when a user clicks a link in the menu, we create the top-level component, which attaches events to a store, and so on. We want to attach to a new store every time, so that no listener remains on the old one. The reason is that it's a cheap way to reset the JavaScript store and be sure no old events are listening to the new view. However, if you just use React.createElement(), any subsequent creation will not trigger componentDidMount, which leaves the component not listening to the store.

Under the hood, React knows how to handle binding events, even if it doesn't hit the mounting method again. This is called event delegation and autobinding. Event delegation means that every event is attached at the top-level component instead of on sub-components; React dispatches the event to the proper component when an action occurs. The autobinding part means that if you create a new object of the same type at the same hierarchy level, it doesn't need to re-attach events: React already knows about it and handles the delegation properly to the new component. That is why, if you create a component and then create it again, the second one will not call componentDidMount but will still have the listener to the store.

The problem is: if you want to reset those listeners, how do you do it? How can you force the autobinding to reset all listeners? The solution is the key property. The binding is associated with a component by this identifier. If you do not provide one, React is smart enough to figure out that you are creating the same component again and applies optimizations like autobinding. If you want to skip this optimization and create the component from scratch, you need to set a unique key. If you are clicking between different views, you can use the view id as the component key; if you always want a fresh creation, you can use a GUID.

React.createElement(MyComponent,
    {
        props1: "value1",
        key: newGuid() // a unique key forces React to mount a brand new component
    }
);

Boosting Asp.Net MVC performance by not using Razor templates

Here is a real-case scenario: a website running on Azure, deployed in release mode with everything pre-compiled, still taking 400ms to render the whole view. With Glimpse on, we can see that many views are used: partial views, editor templates, and display templates, each taking some extra milliseconds here and there.

[Image: Glimpse timings for the view rendered with editor templates]

Here is the same view rendered with almost no templates. Everything is directly in the view, and for specific components, the editor and display templates were migrated into HTML helpers.

[Image: Glimpse timings for the view rendered without editor templates]

So, in the end, Asp.Net MVC templates are just time-consuming. Rendering a simple view shouldn't take 400ms; it should take about 10% of that, and this is what we get by trimming the templates out.

Improving your Azure Redis performance and size with Lz4net

The title is a little misleading; I could rewrite it as improving Redis performance by compressing the objects you serialize. It's not related to Azure Redis in particular, nor to Lz4net specifically, which is just one way to compress. However, I have recently learned that compression improves Redis on Azure. It helps in two different ways. First, the website/webjobs need to send the information to the Redis server, and a smaller number of bytes is always faster to send. Second, you have a size limit depending on the Redis package you chose on Azure, and compressing can save you some space. The savings vary depending on what you cache. From my personal experience, any serialized object that takes more than 2 KB gains from compression. I did some logging and measured a reduction of between 6% and 64%, which is significant if the objects you cache are around 100-200 KB. Of course, this has a CPU cost, but depending on the algorithm you use, you may not feel the penalty. I chose Lz4net, which is a lossless, very fast compression library. It's open source and also available on NuGet.

Doing it is also simple, but the documentation around Lz4net is practically non-existent, and StackExchange.Redis doesn't provide details about how to handle compressed data. The problem with the StackExchange library is that it doesn't let you use byte[] directly. Underneath, it converts the byte[] into a RedisValue. This works well for storing; however, when getting, the conversion from RedisValue to byte[] returns null. Since the compressed data is an array of bytes, this causes a problem. The trick is to encapsulate the data in a temporary object. You can read more from Marc Gravell on StackOverflow.

private class CompressedData
{
    public CompressedData()
    {
    }

    public CompressedData(byte[] data)
    {
        this.Data = data;
    }

    public byte[] Data { get; private set; }
}

This object can be serialized and used with StackExchange. It can also be restored from Redis, uncompressed, deserialized, and used as an object. Inside my Set method, the code looks like this:

var compressed = LZ4Codec.Wrap(Encoding.UTF8.GetBytes(serializedObjectToCache));
var compressedObject = new CompressedData(compressed);
string serializedCompressedObject = Serialization.Serialize(compressedObject);
//Set serializedCompressedObject with StackExchange Redis library

The Get method does the reverse:

string stringObject = //From StackExchange Redis library
var compressedObject = Serialization.Deserialize<CompressedData>(stringObject);
var uncompressedData = LZ4Codec.Unwrap(compressedObject.Data);
string unCompressed = Encoding.UTF8.GetString(uncompressedData);
T obj = Serialization.Deserialize<T>(unCompressed);
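For readers outside .NET, the same wrap/compress/serialize round trip can be sketched in TypeScript on Node, with the built-in zlib (gzip) standing in for LZ4; the wrapper shape and function names here are purely illustrative, not part of any Redis client API:

```typescript
import { gzipSync, gunzipSync } from "zlib";

// Illustrative stand-in for the C# CompressedData wrapper:
// the compressed bytes are base64-encoded so the whole wrapper
// can be stored as a plain string value.
interface CompressedData {
    data: string;
}

// "Set" side: serialize, compress, wrap, then serialize the wrapper.
function toStoredValue(obj: unknown): string {
    const serialized = JSON.stringify(obj);
    const compressed = gzipSync(Buffer.from(serialized, "utf8"));
    const wrapper: CompressedData = { data: compressed.toString("base64") };
    return JSON.stringify(wrapper);
}

// "Get" side: deserialize the wrapper, uncompress, deserialize.
function fromStoredValue<T>(stored: string): T {
    const wrapper: CompressedData = JSON.parse(stored);
    const uncompressed = gunzipSync(Buffer.from(wrapper.data, "base64"));
    return JSON.parse(uncompressed.toString("utf8")) as T;
}
```

The string returned by toStoredValue is what would be handed to the cache client, and fromStoredValue reverses every step in the opposite order, exactly like the Set/Get pair above.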

The result is really stunning. If you look at my personal numbers from a project where I applied this compression, you can see that even for 5 KB objects we have a gain.

[Image: Redis key sizes before and after compression]

For example, the 50th percentile has a 23 KB size for one key; this goes down by more than half when compressed. If we look at the 95th percentile, we realize that the gain is even bigger, touching a 90% reduction, from 478 KB to 44 KB. Compression is often criticized as being bad for smaller objects. However, I found that even an object as small as 6 KB gained by being reduced to 3 KB, 35 KB to 8 KB, and so on. Since the compression algorithm used is very fast, the experience was far more positive than any negative impact on performance.

Using Application Insights with Visual Studio for Azure Websites

Working with production code is not always easy when it comes time to fix an issue. Application Insights is a free service on Microsoft Azure that allows you to do a lot, and one of its features is integration with Visual Studio. In this article, we will see how Application Insights can improve the speed at which you fix your issues.

First of all, if you log into your Azure account in the Cloud Explorer panel and open the solution you deployed, you will see Application Insights in CodeLens.
[Image: Application Insights in CodeLens]

That means that while coding, you may see that some exceptions got raised on your production server. From here, you can click Application Insights in CodeLens and see the number of exceptions as well as two links. The first one is Search. Search allows you to search the exceptions through time and get more information. It's possible to filter and search by exception type, country, IP, operation name (Asp.Net actions), etc. For example, here is a NullReferenceException thrown when users were accessing the Asp.Net MVC controller "UserContest" from the action "Detail". We can look at the stack trace and see who called the faulty method.

[Image: Application Insights exception detail]

The second link is called Trend. This one lets you see when the exception was raised, as well as the number of times the exception got thrown and the problem id. You can navigate through time and across exceptions and see what could cause the issue. It might be a webjob that runs at specific times or a period of high traffic.

[Image: Application Insights Trend view]

This is a short article, but it should give you the desire to go explore this free feature. It's clearly a powerful tool for developers who need to react fast to problems in production, and it removes a lot of fences between finding the right log and fixing the issue. With an easy tool and natural integration, investigations are faster, which leads to faster resolution of problems.