The Best Development Browser Resolution

An interesting challenge with web development is the unlimited combinations of width and height at which people can consume an application. I develop on a wide screen most of the time, and once in a while directly on my MacBook Pro with its Retina resolution. In all cases, I have Chrome's developer tools open, which allow changing the viewport of the web application to the exact number of pixels desired. The question remains: which resolution is the best?

The answer depends on your application. For websites that have millions of users, it will be harder to converge on a single combination of width and height, or even a couple of them. However, most people develop web applications that are used internally or by a similar group of individuals, which usually leads to a handful of resolutions.

At Netflix, I am working on the Open Connect Partner Portal, which thousands of partners consult, as well as many Netflix employees. The application is built with React and Redux, and I capture the width and height of the browser with every action. I am also collecting a lot more information, one piece of which is whether the user is a Netflix employee or not. Whether the data comes from Netflix may or may not be relevant, but I wanted to confirm it. The reason is that I get more direct feedback from internal employees than from partners around the world, and I wanted to confirm that they are viewing the same web application.
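Capturing the viewport with every action can be done with a small Redux middleware. The following is a hypothetical sketch, not the actual Partner Portal code: the viewport reader and the log sink are injected so the piece stands alone (in the browser, the reader would return window.innerWidth and window.innerHeight).

```typescript
// Minimal Redux-like types so the sketch stands alone (the real Redux
// middleware signature also receives a store API, omitted here for brevity).
interface Action { type: string; [key: string]: unknown; }
type Next = (action: Action) => Action;

// Injected so the sketch is testable outside a browser; in the browser this
// would be () => ({ width: window.innerWidth, height: window.innerHeight }).
type ViewportReader = () => { width: number; height: number };

// Hypothetical telemetry middleware: records the viewport alongside every action.
const createViewportTelemetry =
    (readViewport: ViewportReader, log: (entry: object) => void) =>
    (next: Next) =>
    (action: Action): Action => {
        log({ type: action.type, ...readViewport(), at: Date.now() });
        return next(action);
    };
```

Because every action flows through the middleware, the backend receives a width/height sample for each interaction, which is exactly what feeds a heat map like the ones below.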

I created a heat map with Kibana, a simple user interface for visualizing Elasticsearch data. I created different buckets of resolutions and normalized every employee's data to get a proportional view. The first heat map covers Netflix employees: the application was mostly used around 1600×750.

Heat map of resolutions used by Netflix employees on the Open Connect Partner Portal

The data shows a different picture for partners: the height is smaller by about 250 pixels, and the width falls into two distinct categories, 1300px and 1900px.

Heat map of resolutions used by non-Netflix employees on the Open Connect Partner Portal

Another view is the average width and height per day for all employees and non-employees. The following graph shows the average screen width in purple: the darker line is non-employees, the lighter one employees. Below it is the height: the blue line is the employees' average and the light green is the non-employees' average.

Average width and height of employees vs. non-employees

I was curious how biased I was while developing the application. Because I gather a lot of telemetry, I was able to plot my own resolution over time during development, in parallel with how the users were using the application. The following graph shows my average and the users' average for the last 10 weeks.

Me vs Users

The Y-axis is the number of pixels. The two green lines represent the width: the darker line is me, the lighter green is the users. The purple lines represent the height: again, the darker color is me and the lighter is the users. In the last three weeks I was aware of my users' resolutions, hence I was consciously developing with the most popular categories discovered.

The best resolution for the Netflix Open Connect Partner Portal is different from the best resolution for which you should build your application. While it is crucial to develop for all resolutions, it is realistic to have a few presets defined to be efficient. The exercise showed me that I have a tendency to use and build the application with more height. I realized that a few buttons were below the fold, making them harder to find for several of our external users.

Adding Telemetry to your Web Application

I wrote several months ago about having telemetry as a centerpiece of your application. In this article, I'll describe more technically how it works.

First and foremost, the application needs a thin layer to send the telemetry to the backend. I wrote a small internal library that handles basic telemetry needs. For example, if we want to collect a user behavior (clicking, hovering, scrolling, etc.), we use the trackEvent function, which enters a telemetry entry into the system. All telemetry events come with a set of information for free. I am collecting information about the user's browser, the user's identity, which organization the user belongs to, the navigator width/height, whether the user is in a specific state (e.g. the size of the organization the user belongs to), but also where the user is in the single-page application, and more. For each event, it is possible to add a custom payload. For example, it is possible to add how long, in milliseconds, a popup remained open.

Log.trackEvent(this.props.name, {
    timeConsumedInMs: diffMs,
    HelpAnchorId: this.props.id,
    ...this.props.telemetryPayload
});

This layer has many functions, from trackError to trackPage or trackScenario, which injects itself into Chrome's performance tool for a neat integration showing when code starts and ends (with marks in between).

Chrome Performance Tool Integration with Telemetry Track Scenario
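The Chrome integration can be approximated with the standard User Timing API (performance.mark and performance.measure). The internal library's actual implementation is not shown in this post; the sketch below only illustrates the mechanism, and the names are mine.

```typescript
// Sketch of a trackScenario-style helper built on the User Timing API.
// The resulting measure appears as a named span in Chrome's Performance tool.
function trackScenario<T>(name: string, work: () => T): T {
    performance.mark(`${name}-start`);
    try {
        return work();
    } finally {
        performance.mark(`${name}-end`);
        // Creates the span between the two marks, visible in the timeline.
        performance.measure(name, `${name}-start`, `${name}-end`);
    }
}
```

A real library would also handle asynchronous scenarios (returning the measure only once a promise resolves), but the mark/measure pair is the part Chrome picks up.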

Second, we receive a lot of telemetry events. Some are useful only at development time; some are worth sending to the backend. The telemetry library lets you flag, with a single option when tracking the information, which events are meant for development only. Chrome's console then shows the information while the application runs. I opted for a mix of colors and indentation to easily distinguish the different telemetry types.

Console with different telemetry collected
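The color and indentation trick relies on the %c directive that Chrome's console understands for inline CSS. A hypothetical formatter (the styles and type names here are illustrative, not the library's):

```typescript
// Hypothetical dev-only console formatter using Chrome's %c styling directive.
type TelemetryType = "event" | "error" | "scenario";

const styles: Record<TelemetryType, string> = {
    event: "color: #4caf50",
    error: "color: #f44336; font-weight: bold",
    scenario: "color: #9c27b0",
};

// Returns the console.log arguments so the formatting is observable in tests.
function formatTelemetry(type: TelemetryType, message: string, depth: number = 0): [string, string] {
    const indent = "  ".repeat(depth);
    return [`%c${indent}[${type}] ${message}`, styles[type]];
}

function logTelemetry(type: TelemetryType, message: string, depth: number = 0): void {
    console.log(...formatTelemetry(type, message, depth));
}
```

The depth parameter produces the indentation, which is handy for nesting a scenario's inner events under the scenario line.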

The library has many other features, like batching several telemetry events together before sending the information to the backend. Similarly, the library hooks into the browser's beacon feature to send whatever is in its queue when the user leaves the web application.

The information is valuable while developing, but it is very valuable once in production. The collection of events and data gives a real picture of the user. In one case, I realized that people at Netflix use the system quite differently from people outside the company. We collect the "role" of the Netflix employee and whether our partners are from a small or big organization. A mundane example: I realized that the average viewport of our users changed, making some grids barely visible without scrolling for people from a particular group. In another case, a feature assumed to be important ended up being used by only a very small subset of specific users.

There are many ways to analyze all the collected information, and reaching the right conclusions requires a thorough exercise. However, having the information ends up improving a web application less subjectively. It allows evaluating how far off our assumptions are. I personally enjoy writing down my hypotheses in the specification document. Not only does it let me put the right telemetry in place, it also lets me look back after a few weeks, see whether I was right, and adjust my point of view for any bias or assumption I had. Telemetry improves the user experience, and it improves the developer's sense for creating future user interfaces that not only look good but are useful.

Re-Reselect your whole Redux State

Re-Reselect is a library that builds on Reselect. Both can memoize a function's return value by its inputs. It means that if the inputs of a given function change, the function is executed and the return value is saved in memory. The saved copy of the information is used until one of the specified parameters changes. The goal is to avoid computations that are expensive. One use case with Redux is denormalization. Denormalizing is the act of stitching together information from several places. Redux encourages separating the model in a normalized way to avoid duplicates, thus reducing the issue of having more than one entity with different values. Re-Reselect has the advantage of keeping one cache entry per key, so a single selector can serve several different sets of inputs. The following image, from the official Re-Reselect repository, illustrates the difference.

Reselect versus Re-Reselect

I am using Re-Reselect at Netflix in the Partner Portal application for denormalizing many entities. For example, I have a selector for each organization we serve, and each organization has sites around the world. Each site has appliances as well. When I receive information from the backend, depending on the endpoint, I need to invalidate more than one cache. So far, Re-Reselect is working well. However, I have custom denormalization logic to handle cases that are beyond this blog post. These cases require accessing specific parts of the Redux store directly to compute information with different functions. It means that during invalidation of the cache, and while a new value is being memoized, I need access to Redux's state so I can pass it to those functions.

public denormalizeSitePerformanceExpensive(
    appReduxState: AppReduxState,
    site: SiteNormalized | undefined,
    org: OrgNormalized | undefined,
    contactSelector: GenericMap<ContactNormalized>,
    deepDenormalize: boolean = true
): SiteDenormalized | undefined {

The function signature above shows that to denormalize a site we need to pass the application's "head" Redux state. The problem: the memoization gets invalidated on every change. The reason is not obvious if you have never used Reselect (or Re-Reselect). Because the state is an input, and because the reference of the head of the Redux state changes on any change, it invalidates the site cache. Here is the cache creation, which shows the inputs used to invalidate the cache.

private denormalizeSiteFunction = createCachedSelector(
    (state: AppReduxState) => state,
    (state: AppReduxState, siteNormalized: SiteNormalized | undefined) => siteNormalized,
    (state: AppReduxState, siteNormalized: SiteNormalized | undefined, orgNormalized: OrgNormalized | undefined) => orgNormalized,
    this.contactSelector,
    this.applianceSelector,
    (
        state: AppReduxState,
        siteNormalized: SiteNormalized | undefined,
        orgNormalized: OrgNormalized | undefined,
        deepDenormalizer: boolean
    ) => deepDenormalizer,
    (
        state: AppReduxState,
        siteNormalized: SiteNormalized | undefined,
        orgNormalized: OrgNormalized | undefined,
        contactSelector: GenericMap<ContactNormalized>,
        applianceSelector: GenericMap<ApplianceNormalized>,
        deepNormalizer: boolean
    ) => this.denormalizeSitePerformanceExpensive(state, siteNormalized, orgNormalized, contactSelector, deepNormalizer)
)();

The quandary is to find a way to pass the state without it invalidating the selector on every change, while still having the function invalidated when any other selector in the parameters changes. In the example above, we want the denormalized site to change if the normalized site changes, the organization the site belongs to changes, a contact changes, or an appliance changes, but not for any other selector in the system, nor for any other data in the store.

The idea is to build a custom input comparison instead of relying on the shallow comparer that comes by default. It is possible to pass createCachedSelector an optional object with a selectorCreator:

{
    selectorCreator: this.createShallowEqualSelector
}

In my situation, it was a good opportunity to also add a feature to turn the memoization off completely. I always have an off switch for all my caching mechanisms; it helps with debugging and precludes any issue with caching. To avoid having the Redux store impact the memoization, I look for specific child reducers, and if they are present, I know the input is the head of the state and return true, which means the parameter is considered equal and will not invalidate the cache.

private createShallowEqualSelector = createSelectorCreator(
    defaultMemoize,
    (previous: any, next: any, index: number): boolean => {
        if (this.isCacheEnabled) {
            // 1) Check if the input is the head of the Redux state
            if (
                previous !== undefined &&
                previous.router !== undefined &&
                previous.orgs !== undefined
                // ... Simplified for this example
            ) {
                return true; // AppReduxState: never invalidates the cache
            }
            // Logic removed that figures out if the input is the same or not
            return isTheSame;
        } else {
            return false;
        }
    }
);

The custom equalizer opens the door to interesting patterns. For example, if your entities do not keep the same reference even when their values are identical, you can provide global logic that handles that case. For my scenario, I use a property that each entity has: the last-updated date-time from the server. You may wonder why not rely on the object reference. In a perfect world, that would make sense, because it is the most efficient comparison and gives the best performance. However, Partner Portal uses many caching mechanisms. For example, we use IndexedDB, which means that depending on the source of the object, it may not have changed in value but may have changed in reference. Also, at the moment, one flaw of the system is that the cached value is set into the Redux store even if the store already holds the same value (there is no check before setting the value received from the data access layer). To avoid invalidating the cache just because the data was fetched again (via Ajax) or came from the local cache, a simple check of the last-updated time suffices.
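A comparator along those lines might look like this. The entity shape and function name are hypothetical; the point is comparing the server's last-updated stamp instead of the reference.

```typescript
// Hypothetical entity shape: every server entity carries its last-updated time.
interface VersionedEntity { id: string; lastUpdated: string; }

// Equality by version rather than by reference: two copies of the same entity
// (one fresh from Ajax, one rehydrated from IndexedDB) compare equal as long
// as the server has not produced a newer version.
function sameVersion(previous: VersionedEntity | undefined, next: VersionedEntity | undefined): boolean {
    if (previous === next) return true; // same reference (or both undefined)
    if (previous === undefined || next === undefined) return false;
    return previous.id === next.id && previous.lastUpdated === next.lastUpdated;
}
```

Plugged into the custom equalizer above, this keeps the cache warm across re-fetches while still invalidating it the moment the server reports a change.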

Testing Redux next and api.dispatch with TypeScript and Jest

A middleware in Redux can contain quite a lot of logic. In fact, it is my favorite place to put logic. The rationale is that it can be triggered by an action, still have time to request data from the backend server with an Ajax call, and can dispatch other actions that are handled by another middleware or by a reducer. It becomes crucial to unit test any area where logic can go wrong, and thus to test whether specific logic dispatches an action or invokes next.

Testing api.dispatch is a matter of leveraging Jest. If you are using React and Redux, Jest is the de facto testing framework; in fact, it is the one that comes with the famous create-react-app. To test whether an action got dispatched via api.dispatch, you need to mock the dispatch and verify that it has been called. The function toHaveBeenCalledWith can take another expect, which can peek at the content of a passed object. Because every action contains a type and a payload, it is possible to verify that an action has been invoked.

middleware.myFunctionInMiddleware(next, api, payload);
expect(api.dispatch).toHaveBeenCalledWith(
    expect.objectContaining({
        type: MY_ACTION_CONSTANT_HERE
    })
);
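The api object handed to the middleware can itself be a small test double. In Jest, api.dispatch is simply jest.fn(); outside Jest, the same idea can be hand-rolled in a few lines (a sketch, not the project's actual helper):

```typescript
// Minimal hand-rolled dispatch spy: what jest.fn() records automatically,
// written out explicitly so the mechanism is visible.
interface Action { type: string; payload?: unknown; }

function createDispatchSpy() {
    const calls: Action[] = [];
    const dispatch = (action: Action): Action => {
        calls.push(action); // record every dispatched action for assertions
        return action;
    };
    return {
        dispatch,
        calls,
        wasDispatched: (type: string) => calls.some(a => a.type === type),
    };
}
```

An api double built this way ({ dispatch: spy.dispatch, getState: () => state }) behaves like the real one from the middleware's point of view while letting the test inspect every dispatched action.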

The case of next is similar. I found that while it is possible to use logic similar to api.dispatch's, it is sometimes not enough; for example, when a function calls next several times. In that case, it is possible to pass a custom next that is smarter than a simple mock.

let nextMock = nextPayloadTypeSpy(MY_ACTION_CONSTANT_HERE);
middleware.myFunctionInMiddleware(nextMock.mock, api, payload);
expect(nextMock.getPayload()[0]).toBe(payload);
expect(nextMock.hasBeenCalled()).toBeTruthy();

The code above this paragraph is a glimpse of how to use the smart next. The code accumulates every invocation in an array and allows asserting its content throughout the test. In this case, the test checked the first execution of the next associated with a specific action (defined at the declaration of the object). The logic relies on a custom Jest function that, when invoked by the tested function, adds every action of a specific type to an array.

export interface ActionsWithPayload<TypeAction, TypePayload> {
    type: TypeAction;
    payload: TypePayload;
}
export interface SpyActionsWithPayload {
    mock: jest.Mock<{}>;
    hasBeenCalled: () => boolean;
    getPayload: () => any[];
}

export function nextPayloadTypeSpy(type?: string): SpyActionsWithPayload {
    let typeFromCaller: string[] = [];
    let payloadFromCaller: any[] = [];
    let nextMock = jest.fn().mockImplementation((func: ActionsWithPayload<any, string>) => {
        typeFromCaller.push(func.type);
        if (func.type === type) {
            payloadFromCaller.push(func.payload);
        }
    });
    return {
        mock: nextMock,
        hasBeenCalled: () => {
            return type === undefined ? false : typeFromCaller.includes(type);
        },
        getPayload: () => payloadFromCaller
    };
}

The code is tightly coupled with how you handle your actions. I strongly suggest using the free Redux-TypeScript boilerplate. It is really a relief to use: you can create an action within 30 seconds and be type safe for the payload.

The code uses a mock property, which is a hook to a mock implementation that does nothing except record calls into an array; the actual action does nothing. The two functions alongside the mock property are there for test assertions. Future improvements are obvious. For example, hasBeenCalled could also take an index to ensure that a particular next call has occurred.
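That indexed improvement could look like the following. To keep the sketch self-contained it uses a plain closure instead of jest.fn, and the names (indexedNextSpy, hasBeenCalledAt) are mine, not part of the code above.

```typescript
interface ActionWithPayload { type: string; payload?: unknown; }

// Sketch of the suggested improvement: the spy records every call so that
// hasBeenCalledAt(n) can assert the nth invocation matched the watched type.
function indexedNextSpy(watchedType: string) {
    const calls: ActionWithPayload[] = [];
    const next = (action: ActionWithPayload): void => {
        calls.push(action);
    };
    return {
        next,
        hasBeenCalled: () => calls.some(a => a.type === watchedType),
        hasBeenCalledAt: (index: number) =>
            calls[index] !== undefined && calls[index].type === watchedType,
        getPayloads: () => calls.filter(a => a.type === watchedType).map(a => a.payload),
    };
}
```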

To summarize, even if the middleware design looks frightful, with types and some useful utility functions and patterns, creating code and testing it afterward is a breeze. I always enjoy having tests that are quick to build, and the discussed approach aligns with that mindset.