The bad habit of hiding features behind context menus and double clicks

Once in a while, I feel that certain subjects come back on the table wherever I go. One of those subjects is where we should put a button that launches a specific action. While this is a perfectly valid question, the recurring problem is that at some point in the conversation people focus on the easiest way to do it instead of the best way to do it. Of course, the best way is always subjective, but the wrong way is usually the one more people agree on.

Let’s start with some premises. No one can argue that a hidden feature is a good thing. First of all, the name says it: it’s hidden. Users will not find it easily, and therefore will not use a feature that cost the company money to build. It can be even more drastic than that: people may simply leave your product because they cannot figure out how to do a specific action, since the software slows them down and frustrates them. Worse, when evaluating a product, this can be a turn-off, because the user will not even notice that your product has the feature while your competitor puts the same feature right in front of them. Second, a hidden feature makes occasional users forget about it. Even if it is written in the documentation, the user will forget it and stop using it. On the other hand, if the feature is clearly visible in your user interface, there is a much better chance of re-learning it, because it sits in a natural, visual place.

This leads me to two patterns in web design that are wrong. The first one is the right click that opens a context menu, and the second is the double-click event. Right clicking is something developers hijacked in the late 90s to prevent people from looking at the source code (HTML, JavaScript) of a website. Some sites displayed an alert window saying that the source was not available. That trend did not last very long, since browsers shipped with more and more built-in developer tools and workarounds were always possible. It was also very annoying because the default right-click menu was gone; users could not right-click to save an image, for example.

It has long been a well-known rule not to interfere with the browser context menu. Users expect it to be consistent across all browsers and all pages. Right now, while reading this text, you can save the HTML, save images, copy text, reload the page, and so on. These actions are also available on any other website. This is what users expect. Would you be surprised if I told you that the way to start commenting on this blog was to right-click this article and select “Comments”? Well, I would be, and so would most people. This is why, below this article, there is a form with a submit button to send comments. It is clear, obvious and not confusing. However, some people would argue that it takes up space for a feature that is not used much and should therefore live in the context menu. That kind of argument comes up everywhere in the industry, and in most cases it is wrong. The major exception is something like an online text editor where you want specific actions on selected text, but even there a toolbar should let you perform the action. The problem with right clicking is not only that it removes the default right-click actions; it is that when you open a web page you cannot tell where a right click would do anything. Can I right-click the article? A paragraph? Specific words? Just the menu? It is a game of trial and error with more losers than winners.

The double click on websites also comes from Windows application paradigms, where you double-click a folder to open it, and from very popular software like Outlook. However, double-clicking on the web was not really supported until the last few years. While a few limited use cases may be acceptable for double-clicking, most scenarios are not. Double-clicking shares a problem with the context menu: it is hidden. On this website, can you tell me which HTML element you can double-click? Of course, you can double-click any word to select it, as expected in any software for reading or writing text, but other than that? It is impossible to know. Can you double-click the “Build Status” badge to get the full report? Can you double-click a user name on Facebook to add that person to your friend list? No and no. In fact, double-clicking is even worse than the context menu because, during trial-and-error exploration, the action is executed the moment you try it; at least with a context menu, you can see the hidden feature before triggering it. It is also worse because double-clicking depends on how fast the user clicks. It is not for nothing that you can configure the double-click rate in every PC’s settings. Yet this is tricky for users: even a young software engineer in good health sometimes misses the right rate and ends up clicking twice. Double-clicking is sneaky: if you click twice quickly, the double-click event fires (along with the single-click events); if you click twice slowly, the single-click handler runs twice.
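To make the overlap concrete, here is a minimal sketch using plain DOM APIs (the element id is hypothetical): a fast double click fires the click handler twice and then the dblclick handler, which is why hanging an important action on the dblclick event alone is fragile.

// Minimal sketch: both handlers fire on a fast double click.
// "report-link" is a hypothetical element id used only for illustration.
const element = document.getElementById("report-link");

if (element) {
    element.addEventListener("click", () => {
        // Runs on every click, including both clicks of a double click.
        console.log("single click handler fired");
    });

    element.addEventListener("dblclick", () => {
        // Runs only when the two clicks fall within the OS double-click delay.
        console.log("double click handler fired");
    });
}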

The solution to both of these hidden features is to think about a good design. Most of the time, you can create a button to launch the action. If you have a group of actions, you can create a toolbar, or a button with a dropdown of actions. If you need something big in a very tight space, let a click expand that space so the user can do more, and then contract it again.

Telemetry with Application Insights for Website and Webjobs

If you have a website and also some WebJobs, you may want both of them to use the same library for your telemetry. One idea is to create a shared project that both projects reference. This shared project can have a class that serves as your telemetry abstraction, and the real implementation can use Microsoft Azure Application Insights to send the telemetry to Azure. As you may have read in the official documentation, your website needs the Microsoft.ApplicationInsights.Web package and Microsoft.ApplicationInsights.WindowsServer. What you need to know is that the shared project also needs the Web and WindowsServer packages, and that the WebJobs project also needs the WindowsServer package. If you don’t reference them, you will get some exceptions on Telemetry.Active…

Finally, you should always give the telemetry some time to be sent after it is flushed. Here is a snippet of the method that sends the constructed telemetry from my Telemetry class in the shared project.

private void Send(string eventName, Dictionary<string, string> properties, Dictionary<string, double> metrics)
{
	// Send the custom event with its properties and metrics.
	this.telemetry.TrackEvent(eventName, properties, metrics);

	// Force the buffered telemetry to be transmitted right away.
	this.telemetry.Flush();

	// Give the transmission time to complete before the WebJob continues (or exits).
	System.Threading.Thread.Sleep(5000);
}

The 5 second sleep is more than enough; you can use less. The important thing is to give the telemetry enough time to reach Azure.

Azure WebJobs using CronJobs

Windows Azure lets you have background jobs hosted alongside your website. If you want those jobs to run at a specific time, you need to schedule them, and this is where it can become confusing. Do you need the Azure Scheduler service or not? I first thought that I needed the Scheduler service, only to realize that it was not required. I also scheduled everything from Visual Studio, which produces a webjob-publish-settings.json file, but ran into issues with recurring schedules and more complex scenarios, like running a job every 10 minutes only during the week. That said, I used Cron jobs on Linux for years and was very happy with them, and Azure lets you configure your WebJobs with Cron expressions too.

First of all, you still need the webjob-publish-settings.json file, which Visual Studio can generate for you. But before anything, be sure that your WebJob project references Microsoft.Web.WebJobs.Publish. Once you have that reference, make sure the .csproj contains the import target for it. Here is what you should see.

  <Import Project="..\..\..\packages\Microsoft.Web.WebJobs.Publish.1.0.10\tools\webjobs.targets" Condition="Exists('..\..\..\packages\Microsoft.Web.WebJobs.Publish.1.0.10\tools\webjobs.targets')" />

Right-click your console application and choose Publish as Azure WebJob. This opens a dialog that lets you create a schedule.
[Screenshot: Publish as Azure WebJob dialog]
Select a start date and an end date (the dialog does not let you leave them empty) and set the mode to on demand. Then open the generated webjob-publish-settings.json file under the Properties folder. You need to have the following properties filled in.

{
  "$schema": "http://schemastore.org/schemas/json/webjob-publish-settings.json",
  "webJobName": "YourJobNameHere",
  "startTime": null,
  "endTime": null,
  "jobRecurrenceFrequency": null,
  "interval": null,
  "runMode": "OnDemand"
}

The mode is on demand because we will run the job with a Cron schedule. To do that, you need to manually create a new file named “settings.job” at the root of your console application. You also need to go into the file’s properties and set it to be copied if newer. This is required because when you publish, the system compiles the project and publishes the bin folder, and the settings.job file needs to end up in that bin folder to be published too. That file will be read by Azure later.

The settings.job file is in JSON format with a single property named “schedule”, whose value uses a Cron syntax. It has 6 fields: seconds, minutes, hours, day of the month, month, and day of the week. The example below runs at second 0 of every 10th minute, Monday through Friday.

{
    "schedule": "0 */10 * * * 1-5"
}

Once published, you will see your job listed under the website.
[Screenshot: WebJobs listed on Azure]

From there, you can see the job running in portal.azure.com, look at any diagnostic output, and so on. One last detail: your website must have the Always On setting enabled.
[Screenshot: Application setting Always On]

This is required for your jobs to run. Azure Scheduler is there for more advanced scenarios, like calling endpoints outside the scope of your web application. For most scenarios, using a Cron job keeps all the scheduling setup in your solution, which can live in your code repository.

Unit Testing with TypeScript – 3 Tricks to help you with private methods

I hear more and more that unit testing TypeScript is hard, for two reasons: private methods are hard to test, and unit tests break easily when the code is modified, which makes them expensive. Here are three different ways to achieve a higher number of unit tests in a more atomic fashion, which helps reduce the cost of modifying code. Another issue I see is that having a single public method call several private methods results in unit tests that are harder to maintain. The difficulty is that private methods need to be put in a particular state to exercise specific code paths. Since everything happens in private methods, setting up the conditions to reach the desired private method requires the developer to read more code, understand many private methods instead of just the one under test, arrange several parameters, and finally execute the code being tested. Not only does this increase the time needed to create tests, since it requires a broader comprehension of the code, it also makes them fragile to change. A single change in a private method can make other tests fail when it should not.

Pattern 1 (fast, less code)
The main problem is that since the methods are private, they are hard to unit test, so make them public. This kills encapsulation, which is present in TypeScript but not in JavaScript. Still, it is preferable to have tested code and be less of a purist than to have a single unit test called “Happy path” that covers one scenario, or to have every test go through that one public method just to reach the private methods underneath. However, the big drawback is that everything becomes usable outside your class, which might not be your real intention.
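Here is a minimal sketch of this first pattern, with hypothetical names: the formatting logic could have stayed private, but exposing it publicly lets a test call it directly.

// Pattern 1 sketch: formatCurrency is public only to make it directly testable.
export class PriceLabel {
    public getLabel(amount: number): string {
        return this.formatCurrency(amount);
    }

    // Could be private; made public so a unit test can call it without going through getLabel.
    public formatCurrency(amount: number): string {
        return `$${amount.toFixed(2)}`;
    }
}

// Framework-agnostic check, for illustration only:
console.assert(new PriceLabel().formatCurrency(3.5) === "$3.50");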

Pattern 2 – Variation 1 (separating the logic into classes and interfaces)
This second pattern is better than the previous one, but it comes at the cost of more code to write, so common sense is needed about when to use it. The way to keep methods public and still be able to unit test the code is to add a level of abstraction by dividing the logic into public classes. For example, say you are creating a class that has a private method to sort a collection. Instead of keeping that logic in a private method, you extract the code into a SortByXYZ class that implements an ISortableCollectionWhatEver interface, and your class receives the concrete implementation by injection at instantiation. The class then uses that abstraction inside its methods. With that pattern, we can test the widget but also test the sorting logic, because it lives in a public class outside the widget.
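Here is a minimal sketch of this variation, reusing the names from the example above (the widget and its sorting criteria are hypothetical):

// Pattern 2, variation 1 sketch: the sorting logic is public and injected.
export interface ISortableCollectionWhatEver<T> {
    sort(items: T[]): T[];
}

export class SortByXYZ implements ISortableCollectionWhatEver<number> {
    public sort(items: number[]): number[] {
        // Publicly testable sorting logic, extracted from the widget.
        return [...items].sort((a, b) => a - b);
    }
}

export class Widget {
    // The concrete sorter is injected, so the widget can be tested with a fake.
    constructor(private readonly sorter: ISortableCollectionWhatEver<number>) { }

    public render(values: number[]): string {
        return this.sorter.sort(values).join(", ");
    }
}

// Both classes can now be unit tested in isolation:
console.assert(new Widget(new SortByXYZ()).render([3, 1, 2]) === "1, 2, 3");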

Pattern 2 – Variation 2 (no interface abstraction)
Imagine a class named ClassA that has a private method to render a title. Instead of keeping all that logic in a private method, we also create a Title class that has a public render method. This way, we can test the Title class without having to care about ClassA’s implementation. This is another example of how to split logic into classes. This variation has no interface, so it is less pluggable, but it is a little faster to write since we do not have to create interfaces and inject them. It also mixes with the first pattern, since each class can expose more public methods too. For example, the Title class could have a public generateTitleFormat() method, which would allow unit testing the formatting even if it is only used by render(). Note that this solution hard-codes the instantiation of the class: it is as cohesive as variation 1, but more tightly coupled.
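A minimal sketch of this second variation, with ClassA and Title as in the text (the formatting rule itself is hypothetical):

// Pattern 2, variation 2 sketch: no interface, the Title class is instantiated directly.
export class Title {
    public render(text: string): string {
        return `<h1>${this.generateTitleFormat(text)}</h1>`;
    }

    // Public so the formatting can be tested on its own, even though only render() uses it.
    public generateTitleFormat(text: string): string {
        return text.trim().toUpperCase();
    }
}

export class ClassA {
    // Hard-coded instantiation: simpler than injection, but more coupled than variation 1.
    private readonly title = new Title();

    public renderHeader(text: string): string {
        return this.title.render(text);
    }
}

console.assert(new Title().generateTitleFormat("  hello ") === "HELLO");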

The main goal is not only to be able to unit test easily, but also to end up with cohesive classes that are easy to understand and can be reused. There are other patterns available too. Just keep in mind to keep it simple, easy to understand and easy to test.

Telemetry with Application Insights JavaScript

Application Insights is a service that runs on Azure and lets you send information about your application. I will not go into detail in this post about all its capabilities; I suggest you read the overview page of the official documentation. The goal of this article is to use Application Insights to collect information about two buttons that go to the same page, in order to figure out which one is the most popular. Since both links are normal HTML links, we need to use JavaScript to bind the click event on both buttons and send the telemetry before moving to the actual page.

The first step, adding the telemetry to the web project, is well documented and the details are explained in the documentation. In short, it consists of getting the right NuGet package, creating the Azure Application Insights account (needed to get a key) and then setting up Global.asax.cs plus the JavaScript that collects client-side telemetry. What is not really clear is that you need to set maxBatchSizeInBytes to zero if you want the telemetry to be sent very fast. This is required when tracking a click event that leaves the page; it is not required for a single-page application that stays on the same page. The setup code looks like the following:

<script type="text/javascript">
        var appInsights=window.appInsights||function(config){
            function r(config){t[config]=function(){var i=arguments;t.queue.push(function(){t[config].apply(t,i)})}}var t={config:config},u=document,e=window,o="script",s=u.createElement(o),i,f;for(s.src=config.url||"//az416426.vo.msecnd.net/scripts/a/ai.0.js",u.getElementsByTagName(o)[0].parentNode.appendChild(s),t.cookie=u.cookie,t.queue=[],i=["Event","Exception","Metric","PageView","Trace"];i.length;)r("track"+i.pop());return r("setAuthenticatedUserContext"),r("clearAuthenticatedUserContext"),config.disableExceptionTracking||(i="onerror",r("_"+i),f=e[i],e[i]=function(config,r,u,e,o){var s=f&&f(config,r,u,e,o);return s!==!0&&t["_"+i](config,r,u,e,o),s}),t
        }({
            instrumentationKey: "@Microsoft.ApplicationInsights.Extensibility.TelemetryConfiguration.Active.InstrumentationKey",
            maxBatchSizeInBytes: 0
        });

        window.appInsights=appInsights;
        appInsights.trackPageView();
    </script>

The second step is to add an event handler on each link whose clicks you want to collect.

function attachTelemetries() {
    // Track which of the two links leading to the same page gets clicked.
    $('#all-contests').click(function () {
        appInsights.trackEvent("BestLinkLocation",
            { LinkDestination: "Contest/List", LinkPageLocation: "DirectTopMenu" }
        );
        // Flush right away: the browser is about to navigate to the next page.
        appInsights.flush();
    });
    $('#user-contests').click(function () {
        appInsights.trackEvent("BestLinkLocation",
            { LinkDestination: "Contest/List", LinkPageLocation: "SubMenu" }
        );
        appInsights.flush();
    });
}

You can use the appInsights variable, defined when setting up Application Insights, with the trackEvent method. The method takes a name for the captured telemetry, followed by properties and, if needed, some metrics. In our case, we can simply sum each property value (each link location) and determine which one is the most popular. Something not clear in the documentation is that calling the flush method is required: once the click event is done, the browser moves to the next page, and flush ensures that the telemetry is sent before that happens.
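Since trackEvent can also take metrics, here is a hedged sketch of what passing a numeric measurement alongside the properties could look like (the declaration exists only to make the example stand alone, and the metric name and third link location are hypothetical):

// "appInsights" comes from the setup snippet above; declared here only so the
// example stands on its own. The metric name is hypothetical.
declare const appInsights: {
    trackEvent(name: string, properties?: { [key: string]: string }, metrics?: { [key: string]: number }): void;
    flush(): void;
};

appInsights.trackEvent(
    "BestLinkLocation",
    { LinkDestination: "Contest/List", LinkPageLocation: "Footer" }, // string properties
    { MillisecondsBeforeClick: 1250 }                                // numeric metrics
);
appInsights.flush();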
[Screenshot: Metric Explorer]
The third step is to analyze the results. Simply go to portal.azure.com, under your Application Insights resource.
[Screenshot: Events]

You can add a chart in the Metrics blade, near the top. Select Sum, display it as a grid, and group by the property you want to break the data down by. In our example, it is the property that changes, namely the location of the link. What can be confusing is that, to be able to select the property, you need to go to the bottom of that blade and select the event metric.

[Screenshot: Grid of collected telemetry]

The result is a table that contains only the data that has been collected, so you need people to click the buttons before you see anything. Also, it can take some time (about 30 minutes) for the information to show up in the Azure portal.