If you are using create-react-app, or its TypeScript equivalent react-scripts-ts, you will see that the default testing framework is Jest. Like React and Redux, Jest is developed by Facebook. Jest brings many advantages, speed being the main one: it does not need to load a headless browser, or run in a browser at all, and it can run only the unit tests affected by a code change instead of running every test. In this article, I'll guide you through setting up Visual Studio Code to debug directly in TypeScript. I am taking the time to write this because information on the Internet is very slim and the configuration is fragile.
As mentioned, configuring Visual Studio Code with Jest requires subtle details that can break the whole experience. For example, using Node 8.1.4 won't work, while Node 8.4 or 8.6 works. Another sensitive area is the Visual Studio Code configuration itself, which requires some specific settings that vary. The following code is two different launchers that work with Visual Studio Code.
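As an illustration (this is a common shape for such a configuration, not necessarily the author's original listing; paths and names may differ per project), a launch.json entry to debug Jest tests typically looks like:

```json
{
    "version": "0.2.0",
    "configurations": [
        {
            "type": "node",
            "request": "launch",
            "name": "Debug Jest Tests",
            "program": "${workspaceRoot}/node_modules/jest/bin/jest.js",
            "args": ["--runInBand"],
            "cwd": "${workspaceRoot}",
            "sourceMaps": true
        }
    ]
}
```

The `--runInBand` flag makes Jest run tests in a single process, which is what allows the debugger to attach to them.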
The first step is to make a new Gulp task that will look similar to this one, which, for the moment, hard-codes a single file to compile.
To make it work, we need to pass the parameter name prefixed with a double dash, followed by a VSCode macro variable. Then, the Gulp task needs to consume the argument. The arguments passed are indexed and contain more than what is passed down by the VSCode task. Index 0 is the node.exe path, index 1 is the gulp.js path, and index 2 is where our own arguments start, which in our case begins with "--workspaceroot" and so on. Index 7 is the first argument we use: it contains the file on which the task is executed, and we need it to give the file to TypeScript to compile. Next, we use arguments 5 and 3 to create the output path. This needs to be customized to your file structure. In the illustrated case, we have a structure where ./src/same-structure-after is compiled to ./output/same-structure-after.
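The argument handling above can be sketched as follows. This is a hypothetical illustration, not the author's exact task: the index layout follows the article (argv[3] holds the workspace root, argv[7] the file), but the flag names and the src/output folder mapping are assumptions.

```javascript
// Sketch of how the Gulp task can read the arguments passed by VSCode.
// argv[0] = node.exe path, argv[1] = gulp.js path, argv[2] = "--workspaceroot",
// argv[3] = workspace root value, ..., argv[7] = the file currently open in VSCode.
function resolvePaths(argv) {
    const workspaceRoot = argv[3]; // value following the "--workspaceroot" flag
    const file = argv[7];          // the file the task was invoked on
    // Mirror ./src/<rest> into ./output/<rest> for the compiled output.
    const outputPath = file.replace(workspaceRoot + "/src", workspaceRoot + "/output");
    return { file: file, outputPath: outputPath };
}

// Simulate the argv array that VSCode's task would produce.
const demo = resolvePaths([
    "node.exe", "gulp.js", "--workspaceroot", "/proj",
    "--dirname", "app", "--filename", "/proj/src/app/main.ts"
]);
console.log(demo.outputPath); // /proj/output/app/main.ts
```

The returned file path is what would be handed to the TypeScript compiler, and the output path is where the compiled JavaScript lands.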
To use it, open the file we want to compile, press "F1", type "task", select "Run Task", and select "buildsinglefile".
Having a quick way to build the file you are working on is crucial to moving forward fast, and this is one step in that direction. Two possible improvements would be a keyboard shortcut to execute this task, and a file watcher that compiles the modified file automatically.
To do so, we need to add a launch configuration for VSCode. This is done by adding a folder called ".vscode" at the root of your project and adding a file named "launch.json" inside it.
This file is used by Visual Studio Code when you hit "F5" or when you go in the left panel, under "Debug", and click "Play".
The configuration contains a few items that you must have, and some that you need to configure.
The "type" and "request" properties are required to be "node" and "launch", which tells VSCode that we will debug a Node application. The "name" property is the name that shows in the debugger; in the screenshot it reads "Debug Gulp", which is the name specified here. The very important part is "program", which must point to Gulp. So far, we have told VSCode to execute Node with Gulp as the program.
"stopOnEntry" is not required, but it stops execution right when the program loads. I find it handy because it lets me go set my breakpoints in the Gulp task I want to debug. The task we debug is defined under "args"; in my example above, I am debugging the task named "copy". The "cwd" is where the gulpfile.js is located, which is where the task to debug lives. It is wise to use the workspace root keyword to start from a stable, unchangeable root.
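Putting the properties described above together, a launch.json matching this setup would look roughly like the following (the gulp.js path inside node_modules is an assumption about where Gulp is installed):

```json
{
    "version": "0.2.0",
    "configurations": [
        {
            "type": "node",
            "request": "launch",
            "name": "Debug Gulp",
            "program": "${workspaceRoot}/node_modules/gulp/bin/gulp.js",
            "stopOnEntry": true,
            "args": ["copy"],
            "cwd": "${workspaceRoot}"
        }
    ]
}
```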
And that's it! You will be able to step through all the Gulp task code.
Creating a new project that uses TypeScript with Visual Studio Code is not as straightforward as expected. A quick search on the web shows dozens of different ways, and none of them are the same. Most current examples are out of date or use so many npm packages that they are hard to follow for people coming from a full-blown IDE and framework. The goal of this article is to present the simplest possible way to get a TypeScript project up and running. Even aiming for simplicity, several technologies will be required; we will try to use the minimum.
Using npm requires PowerShell or a command line. But first, let's create a new directory to host the project and initialize npm.
The npm command creates a package.json file that records which packages are needed. Think of it as a shopping cart of libraries that we will later download and install. For our project, we want TypeScript, jQuery, and Gulp. The first is the TypeScript compiler, the second is a popular library that is not written in TypeScript, and the third is a task runner. We will go into more detail soon; for now, keep in mind that these are three libraries that need to be handled differently. One difference is that TypeScript and Gulp are used at development time, while jQuery is needed in both development and production. We install them differently because of this, since we do not want libraries in production that are not needed there.
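In command form, the setup described so far would look something like this (the project directory name is only an example):

```shell
mkdir my-project && cd my-project
npm init -y                             # creates package.json
npm install --save-dev typescript gulp  # development-time tools only
npm install --save jquery               # needed in production as well
```

The `--save-dev` versus `--save` flags are what record the development-time versus production distinction in package.json.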
At this point, you can look in your directory and see a newly generated folder named "node_modules". This folder doesn't need to be in your source control, since you can simply invoke "npm install" in the directory to get it back with all the libraries. When running the install command, npm looks at the package.json configuration file and gets what is needed. I suggest you delete "node_modules" and try the command. Before doing so, look at how many folders "node_modules" contains: at the time I am writing this article, the number is 162. The count is far more than just 3 (TypeScript, Gulp, and jQuery) because each of these libraries has dependencies on other libraries, which have their own dependencies, and so on. Since this is an introductory article, I won't go into too much detail, but it is possible to install packages globally on your computer (%AppData%\npm\node_modules). If you are developing several projects, you may want to install common utility tools like TypeScript or Gulp globally. The advantage is that you avoid having the same files in every project, saving disk space.
Before going to the next article, which will set up the HTML file and create the first TypeScript file, let's add one more library: requirejs. This library will be used to load the TypeScript modules that we will see soon.
The first step is to download the latest version of TypeScript. This installs TypeScript under Program Files (C:\Program Files (x86)\Microsoft SDKs\TypeScript). Be aware that you may already have older versions of TypeScript (like 1.6 and 1.8), but you want 2.1.
At that point, you may think it sounds cumbersome to manually add files if the project is big. This is why you can change tsconfig.json to compile all TypeScript files in specific folders.
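A minimal sketch of such a tsconfig.json (the folder names are assumptions, not the article's actual project layout): the "include" glob makes the compiler pick up every .ts file under src without listing them one by one.

```json
{
    "compilerOptions": {
        "target": "es5",
        "outDir": "./output"
    },
    "include": ["src/**/*"]
}
```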
You may run into the problem that, while writing your TypeScript, Visual Studio 2015 tells you that you are using a version different from the version specified in tsconfig.json.
This comes with the problem of the .csproj having the TypeScript options disabled, saying that two tsconfig.json files exist.
The problem is that Visual Studio keeps TypeScript configuration directly in the .csproj file. You can open the .csproj with a text editor and search for "TypeScript".
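The entries you will find look along these lines (illustrative only; the exact properties and values vary by project and TypeScript version):

```xml
<PropertyGroup>
    <TypeScriptToolsVersion>1.8</TypeScriptToolsVersion>
    <TypeScriptCompileOnSaveEnabled>true</TypeScriptCompileOnSaveEnabled>
</PropertyGroup>
```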
There are two options here. The first is to remove the tsconfig.json file and configure TypeScript from the Visual Studio project properties; however, you will be limited in terms of options. The second is to remove all TypeScript entries inside the .csproj and keep the tsconfig.json. You may have to restart Visual Studio for IntelliSense to work again.
Working with production code is not always easy when the time comes to fix issues. Application Insights is a free service on Microsoft Azure that allows you to do a lot, and one of its features is integration with Visual Studio. In this article, we will see how Application Insights can improve how quickly you fix your issues.
First of all, if you log into your Azure account in the Cloud Explorer panel and open the solution you deployed, you will see Application Insights in CodeLens.
That means that while coding, you may see that exceptions were raised on your production server. From there, you can click Application Insights in CodeLens and see the number of exceptions as well as two links. The first one is Search, which lets you search the exceptions over time and get more information. It is possible to filter and search by exception type, country, IP, operation name (ASP.NET actions), etc. For example, here is a NullReferenceException thrown when users were accessing the ASP.NET MVC controller "UserContest" from the action "Detail". We can look at the stack trace and see who called the faulty method.
The second link is called Trend. This one lets you see when the exception was raised, as well as the number of times it was thrown and the problem id. You can navigate in time and across exceptions and see what may be causing the issue. It might be a webjob that runs at specific times, or a period of high traffic.
This is a short article, but it should give you the desire to go explore this free feature. It is clearly a powerful tool for developers who need to react quickly to problems in production, and it removes a lot of fences between finding the right log and fixing the issue. With an easy tool and natural integration, investigations are faster, which leads to faster resolution of problems.
I am a big fan of Visual Studio Team Services (VSTS), as well as a developer on that platform. However, if you are doing open source projects, you cannot benefit from VSTS, because everything there is private. Microsoft also uses companies like GitHub to host public projects, and so do I! However, GitHub focuses only on the source repository, not on building, running unit tests, measuring coverage, or deploying NuGet packages. This doesn't matter, since GitHub provides what are called "webhooks", which allow other services to get notified when new code gets pushed. In this article, we will discuss Travis-ci.org, a free service that can be notified through a GitHub webhook to start compiling your code. It also lets you run other tasks, like unit tests.
I take for granted that you have a GitHub account; if you do not, you can create one for free. The next step is to go to Travis-ci.org and sign up. This is very easy since you can log in with your GitHub account, and because Travis is so tied to GitHub, the burden of handling multiple accounts is not a problem. The next step is to select which repositories you want Travis to be notified about by GitHub.
Once that is done, you need to add something to your repository to give instructions to the continuous integration system. This is where we tell it what to build and which unit tests to run. The file must be placed at the root of your repository with the name ".travis.yml". There are a lot of options; here is a sample from my C# project.
As you can see, it specifies the language and which solution to use. It defines what to do during installation, which is to restore all NuGet packages and install the XUnit runner. XUnit is one supported unit test framework; Visual Studio Test cannot be run because Travis runs on Mono, which might be a showstopper if you have invested heavily in Visual Studio Test. The last section of the file is what to do: the first line, with xbuild, compiles the code and the next one runs the unit tests. If something goes wrong at either of these steps, you get an email. Otherwise, it's all fine!
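A .travis.yml matching that description would look roughly like this (the solution and test assembly names are assumptions, not the author's actual project):

```yaml
language: csharp
solution: MySolution.sln
install:
  - nuget restore MySolution.sln
  - nuget install xunit.runner.console -OutputDirectory testrunner
script:
  - xbuild /p:Configuration=Release MySolution.sln
  - mono ./testrunner/xunit.runner.console.*/tools/xunit.console.exe ./Tests/bin/Release/Tests.dll
```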
Travis-ci lets you see all logs in real time from the website. It is easy to access and easy to debug. It also lets you have a dynamic badge showing the state of your build that you can embed in the readme file on GitHub. To do so, click the badge next to the repository and select Markdown.
I will soon cover how to generate your NuGet package by pushing to GitHub. That will be one more step toward automating everything.
The .vscode folder is located at the root of your project. It lets you add configuration files in JSON format. One file that can be used is named "tasks.json" and is used for running tasks. This is where we will create the task runner for TypeScript.
To do so, hit the "F1" key and type "Configure Task Runner". This will create the tasks.json file under the .vscode folder for you.
You can put the following code, which runs the TypeScript compiler on the whole project (this is the template VSCode generates for TypeScript):

{
    // See http://go.microsoft.com/fwlink/?LinkId=733558
    // for the documentation about the tasks.json format
    "version": "0.1.0",
    "command": "tsc",
    "isShellCommand": true,
    "args": ["-p", "."],
    "showOutput": "silent",
    "problemMatcher": "$tsc"
}
The next thing is to add the TypeScript configuration into a tsconfig.json file. This file is located at the root of your project, at the same level as your package.json (npm). My TypeScript configuration contains a "jsx" entry only because I am using React with TypeScript.
You can have problems compiling your TypeScript if you already have it installed in Program Files. VSCode ships with its own TypeScript installation, so you do not need any other installation on your machine.
I am working on a side project that is a single solution with 51 projects. That amount is considered "big" in 2016, while it was still considered "medium" a few years ago. For some reason, Visual Studio doesn't handle solutions with more than 50 projects very well. I could refactor the solution by consolidating some projects and having a single project for unit testing instead of 12. Nevertheless, that takes time, and before optimizing the design of the solution, let's start by understanding what is happening.
First, we need some basic metrics. One useful extension to add to Visual Studio is the Build Monitor extension by Daniel Vinntreus. It adds an additional Output pane with the compile time of each project. The second tool is also free: Process Monitor, which can be downloaded from the Microsoft TechNet website. It lets you see what a process writes to the hard drive (and more). Here is the data from both of these tools.
To get these statistics, I first cleaned the solution so the build would rebuild everything. The total time is 2 minutes 51 seconds. A lot of that time goes to projects starting with "Script", which are webjobs that run in the background. These sit under a folder in Visual Studio and could be unloaded in the future; I was not doing that, and thus wasting a lot of build time when working on the main project: the website. Process Monitor is also educational, showing how many bytes are written when building the solution. To use it, open Process Monitor, click Filter (Ctrl+L), and add the Visual Studio process (devenv.exe) and MSBuild (msbuild.exe).
Once the filter is set, be sure to clear everything (Ctrl+X) and start building. Once the build is done, go back to Process Monitor, go into Tools > File Summary, and sort everything by folder. You can dive in and see what happened.
This gives us 893 megs written. I am on an SSD, so it is still not bad, but honestly quite a lot of writing. From here, I noticed a few things. First, I have a lot of bin folders with the same files. Second, we rebuild the same files because projects reference them. To improve this, I decided to edit all projects to output into the same bin folder. Third, the jobs folder that contains all the scripts is heavy on writing.
I decided to have every project output into the bin folder of the website. The reason is that IIS takes the bin folder as its source, so every time I build, I can just refresh the browser to get the latest version of the website without deploying. After that, I went into every project's references, clicked each reference to another project, and changed Copy Local to false.
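In .csproj terms, those two changes correspond roughly to the following fragments (the folder and project names are assumptions for illustration):

```xml
<!-- In each project, point the output to the website's bin folder: -->
<PropertyGroup>
    <OutputPath>..\MyWebsite\bin\</OutputPath>
</PropertyGroup>
<!-- And on each project reference, disable Copy Local: -->
<ItemGroup>
    <ProjectReference Include="..\ApplicationTier\ApplicationTier.csproj">
        <Private>False</Private>
    </ProjectReference>
</ItemGroup>
```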
From there, I cleaned everything up (all bin folders emptied) and rebuilt everything to see how the performance improved. First, the Build Monitor extension shows some improvement:
The build time is cut in half, which is already better. Process Monitor shows the reason: we write only 51 megs.
Finally, if I unload all the job (script) projects, I get a build time of 1m13s. Not a huge improvement, but still 20 seconds less! Going from the initial 2 minutes 51 seconds to 1 minute 13 seconds is quite appreciable. With all these changes, some problems arose. First, when pushing the code to the continuous integration (CI) environment, the build server was not able to build the whole solution. This is because the build server builds the startup project, which no longer copies any references locally. The second problem is deployment: the Visual Studio Publish mechanism also builds only the main project, with the same consequences. So we need to add additional steps to build everything, which gives back some of the performance gains.
Another direction is to remove as many projects as possible. This approach is fine but limited by what you can group together. For example, I have 1 web project and about 14 webjobs, which means a minimum of 15 projects. If we want to separate unit tests from the code, we add 1 more project; if we want to share webjob and website logic, one more again. Still, that is half the number of projects, and while working on the shared tier and website, it is always possible to unload every webjob project from the main solution. The best move was to create a shared project, which I called "ApplicationTier". The website project remains the same but references this new project. Inside Visual Studio, we need to go into each project one by one and drag-and-drop all its files into a folder named after the project. The final result is easy to read and consolidates a lot of projects into one with a familiar structure. In the end, the result was very impressive: instead of 2 minutes 51 seconds, the build time was 54 seconds.
By reducing the number of projects, we have far fewer references to copy around. The number of megs written to disk is about 550 now. The main bottleneck is all the scripts for webjobs. Since all jobs are just entry points into the ApplicationTier, having them share the same bin folder reduces build time a lot: the first script project to build puts the binary files in the bin folder, and subsequent scripts just build their executable without rebuilding the references. The result is 31 seconds, mainly because only 196 megs get written to disk.
I quickly tried the RamDisk tool to see if hosting the scripts' bin folder there would help, and I did not see any improvement. In the end, I am pretty happy with the result. I can always unload all scripts, and this is easy since they are inside a folder: inside Visual Studio, right-click the folder that contains these projects and click "Unload project", which unloads all of them in one operation. For further optimization, we could also unload the migration project and the unit tests, bringing the total build time under 20 seconds.
During the development of a feature, I noticed very slow performance in some scenarios. It was not obvious at first, because the task was simply to update a user profile. The user profile in question is stored in a single table; it's a pretty straightforward task. Before persisting the data, some validations are done, but that is it.
This is where Visual Studio can be very useful with the integrated Diagnostic Tools. The Diagnostic Tools provide information about events, and on any of them you can go back in time and replay the call stack, which is pretty useful. They also give timing information, CPU usage, and memory usage. To start diagnosing, simply attach Visual Studio to the process you want to diagnose, then open the Diagnostic Tools from the top menu under Debug > Profiler > Performance Explorer > Show Performance Explorer.
Here is an example of the output that I got from my performance problem.
The Visual Studio Diagnostic Tools events include Entity Framework SQL statements. This is where I realized that the user's table was updated, but so were hundreds of rows in a table linked to it. There was the performance bottleneck, the culprit! I never expected anything related to that table to be updated, just the main user's table.
The Entity Framework code was like this:
public void Update(ApplicationUser applicationModel)
{
    // Detach any copy of the entity already tracked locally, so that attaching
    // the incoming model does not conflict with it.
    var local = UnitOfWork.Set<ApplicationUser>().Local.FirstOrDefault(f => f.Id == applicationModel.Id);
    if (local != null)
    {
        UnitOfWork.Entry(local).State = EntityState.Detached;
    }
    UnitOfWork.Entry(applicationModel).State = EntityState.Modified;
    // Fields that a profile update must never overwrite:
    UnitOfWork.Entry(applicationModel).Property(f => f.PasswordHash).IsModified = false;
    UnitOfWork.Entry(applicationModel).Property(f => f.UserName).IsModified = false;
    UnitOfWork.Entry(applicationModel).Property(f => f.CreationDateTime).IsModified = false;
    UnitOfWork.Entry(applicationModel).Property(f => f.ValidationDateTime).IsModified = false;
    UnitOfWork.Entry(applicationModel).Property(f => f.LastLogin).IsModified = false;
    UnitOfWork.Entry(applicationModel).Property(f => f.SecurityStamp).IsModified = false;
    UnitOfWork.Entry(applicationModel).Property(f => f.Language).IsModified = false;
}
As you can notice, nothing is done directly on the property that holds the collection of "reputation". The problem is that if the user has 250 objects in that collection, Entity Framework, for a reason unknown at first, issues 250 updates. Since we only want to update the first name, last name, and a few other basic properties, we need to make sure those unwanted updates are removed. After some modifications to the Entity Framework code, like nulling every collection before updating, the generated SQL was a single statement, hence the performance at full speed.