TypeScript Exhaustive Check your Reducer

A few weeks ago, I wrote about how to use React's useReducer Hook with TypeScript. A natural follow-up is to ensure that every action the reducer accepts is actually handled. Not only does it tidy up the accepted actions when building the reducer, it also ensures that the list of actions remains up-to-date during the lifetime of the reducer.

If we recall, the reducer takes the state and the action. The action was typed to be one of the functions that must be part of AppActions. A utility type allowed unioning many sets of actions, though it was not strictly needed since we were using a single type. Nonetheless, everything was in place to ensure a flexible configuration of actions.
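For context, here is a hedged sketch of what the action-creator map and the ActionsUnion utility can look like; the exact helper from the earlier article may differ slightly:

```typescript
// Sketch only: the ActionsUnion helper below follows the earlier article's
// idea, but the exact implementation there may differ slightly.
function createAction<T extends string, P>(type: T, payload: P) {
    return { type, payload };
}

const AppActions = {
    increaseCount: () => createAction("ACTION_INCREASE_COUNT", null),
    setName: (name: string) => createAction("ACTION_SET_NAME", name)
};

// Union of the return types of every action creator in the map:
type ActionsMap = { [key: string]: (...args: any[]) => any };
type ActionsUnion<A extends ActionsMap> = ReturnType<A[keyof A]>;

// ActionsUnion<typeof AppActions> is:
//   { type: "ACTION_INCREASE_COUNT"; payload: null }
// | { type: "ACTION_SET_NAME"; payload: string }
```

With this in place, the reducer below can discriminate on `action.type` and TypeScript narrows the payload type in each branch.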

export type AcceptedActions = ActionsUnion<typeof AppActions>;

export function appReducer(
  state: AppReducerState,
  action: AcceptedActions
): AppReducerState {
  switch (action.type) {
    // Case labels reconstructed; the original constant names may differ.
    case "ACTION_INCREASE_COUNT":
      return {
        ...state,
        clickCount: state.clickCount + 1
      };
    case "ACTION_SET_NAME":
      return {
        ...state,
        activeEntity: { ...state.activeEntity, ...{ name: action.payload } }
      };
  }
  return state;
}

While we cannot add a case for an action that is not defined in the AcceptedActions type, the weakness of this code is that we can remove one of the two cases without being warned. Ideally, we want to ensure that all actions are handled, and, in the case that an action is no longer required, that it is removed from the list of actions.

The solution requires only a few lines. First, you may already have the core of the needed logic: an exhaustive check function. I covered the idea of an exhaustive check many months ago in this article. In short, it is a function that should never be reached; when TypeScript finds a logical path that can reach the code, the code will not compile.

export function exhaustiveCheck(check: never, throwError: boolean = false): never {
    if (throwError) {
        throw new Error(`ERROR! The value ${JSON.stringify(check)} should be of type never.`);
    }
    return check;
}

Using TypeScript's exhaustive check pattern with a reducer is similar to what we would do to verify that all values of an enum are covered. The code needs a default case that we never expect to fall through to.

The two new lines:
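The snippet that originally followed here is missing; below is a self-contained sketch of what those two new lines look like: a `default` case that feeds the action into the exhaustive check (action names and state shape are simplified for illustration):

```typescript
// Sketch: the "two new lines" are the default case routing the action into
// exhaustiveCheck. If a case is removed above, `action` is no longer `never`
// in the default branch and the code stops compiling.
// exhaustiveCheck is repeated here so the sketch is self-contained.
function exhaustiveCheck(check: never, throwError: boolean = false): never {
    if (throwError) {
        throw new Error(`ERROR! The value ${JSON.stringify(check)} should be of type never.`);
    }
    return check;
}

type AcceptedActions =
    | { type: "ACTION_INCREASE_COUNT"; payload: null }
    | { type: "ACTION_SET_NAME"; payload: string };

interface AppReducerState { clickCount: number; name: string; }

function appReducer(state: AppReducerState, action: AcceptedActions): AppReducerState {
    switch (action.type) {
        case "ACTION_INCREASE_COUNT":
            return { ...state, clickCount: state.clickCount + 1 };
        case "ACTION_SET_NAME":
            return { ...state, name: action.payload };
        default:                     // <-- new line 1
            exhaustiveCheck(action); // <-- new line 2
    }
    return state;
}
```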


Removing a required action causes TypeScript to route the removed action's type into the exhaustive check, and since that function only accepts a never argument, the code does not compile.

TypeScript catching the missing action

I have updated the original code sandbox. Click on reducer.ts and try to remove one of the actions.

In conclusion, this solution might not be ideal if all your actions live in one huge function, and if you do not group your actions at all, it might not even be possible. However, grouping actions tidies up your code by giving a better idea of which actions are expected in each business domain your application handles. It is not much more work, and it self-documents the code. The exhaustive check is an additional step to maintain order.

Re-Reselect your whole Redux State

Re-Reselect is a library built on top of Reselect. Both can memoize a function's return value by its inputs: if the inputs of a given function change, the function is executed and the result is saved in memory. The saved copy is used until one of the specified inputs changes. The goal is to avoid expensive computation. One use case with Redux is denormalization. Denormalizing is the act of stitching together information stored in several places. Redux encourages keeping the model normalized to avoid duplicates, thus reducing the issue of having more than one copy of an entity with different values. Re-Reselect has the advantage of taking a set of inputs that can span several other selectors. The following image, from the official Re-Reselect repository, illustrates the difference.

Reselect versus Re-Reselect

I am using Re-Reselect at Netflix in the Partner Portal application for denormalizing many entities. For example, I have a selector for each organization we serve, and each organization has sites around the world. Each site has appliances as well. When I receive information from the backend, depending on the endpoint, I need to invalidate more than one cache. So far, Re-Reselect has been working well. However, I have custom denormalization logic for cases that are beyond the scope of this blog post. These require accessing specific parts of the Redux store directly to compute information with different functions. It means that during cache invalidation, while a new value is being memoized, I need access to the Redux state so I can pass it to those functions.

public denormalizeSitePerformanceExpensive(
    appReduxState: AppReduxState,
    site: SiteNormalized | undefined,
    org: OrgNormalized | undefined,
    contactSelector: GenericMap<ContactNormalized>,
    deepDenormalize: boolean = true
): SiteDenormalized | undefined {
    // ...
}

The function signature above shows that to denormalize a site we need to pass the application's "head" Redux state. The problem is that the memoization gets invalidated on every change. The reason is not obvious if you have never used Reselect (or Re-Reselect): because the state is an input, and because the reference of the head of the Redux state changes on any change, it invalidates the site cache. Here is the cache creation that shows the inputs used to invalidate the cache.

private denormalizeSiteFunction = createCachedSelector(
    (state: AppReduxState) => state,
    (state: AppReduxState, siteNormalized: SiteNormalized | undefined) => siteNormalized,
    (state: AppReduxState, siteNormalized: SiteNormalized | undefined, orgNormalized: OrgNormalized | undefined) => orgNormalized,
    (state: AppReduxState, siteNormalized: SiteNormalized | undefined, orgNormalized: OrgNormalized | undefined, deepDenormalizer: boolean) => deepDenormalizer,
    // ... input selectors for contactSelector and applianceSelector elided in this excerpt
    (
        state: AppReduxState,
        siteNormalized: SiteNormalized | undefined,
        orgNormalized: OrgNormalized | undefined,
        contactSelector: GenericMap<ContactNormalized>,
        applianceSelector: GenericMap<ApplianceNormalized>,
        deepNormalizer: boolean
    ) => this.denormalizeSitePerformanceExpensive(state, siteNormalized, orgNormalized, contactSelector, deepNormalizer)
)(/* cache key selector elided */);

The quandary is to find a way to pass the state without having it invalidate the selector on every change, while still invalidating the function if any other selector in the parameters changes. In the example above, we want the denormalized site to change if the normalized site changes, if the organization the site belongs to changes, if a contact changes, or if an appliance changes, but not for any other selector in the system, nor for any other data in the store.

The idea is to build a custom input comparer instead of relying on the shallow comparer that comes by default. It is possible to pass createCachedSelector an optional object with a selectorCreator:

        selectorCreator: this.createShallowEqualSelector

In my situation, it was also a good opportunity to add a feature to turn the memoization off completely. I always have an off switch for all my caching mechanisms; it helps debugging and precludes issues caused by caching. To avoid having the Redux store impact the memoization, I look for specific child reducers; if they are present, I know the input is the head of the state and I return true, which means the parameter is considered equal and will not invalidate the cache.

private createShallowEqualSelector = createSelectorCreator(
    (previous: any, next: any, index: number): boolean => {
        if (this.isCacheEnabled) {
            // 1) Check if head of Redux state
            if (
                previous !== undefined &&
                previous.router !== undefined &&
                previous.orgs !== undefined
                // ... Simplified for this example
            ) {
                return true; // AppReduxState: no check!
            }
            // Logic removed that figures out if the input is the same or not
            return isTheSame;
        } else {
            return false;
        }
    }
);

The custom equality function opens the door to interesting patterns. For example, if your entities do not keep the same reference even when their values are the same, you can provide global logic to handle that case. In my scenario, I use a property that each entity has: the last updated date-time from the server. You may wonder why not rely on the object reference. In a perfect world, that would make sense because it is the most efficient way to perform a comparison and gives the best performance. However, Partner Portal uses many caching mechanisms. For example, we use IndexedDB, which means that depending on the source of the object, it may not have changed in terms of value but may have changed in terms of reference. Also, at the moment, one flaw of the system is that the cached value is set into the Redux store even if the store already has the same value (there is no check before setting the value received from the data access layer). To avoid invalidating because the data was fetched again (Ajax) or came from the cache, a simple check on the last updated date avoids invalidating the cache.
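As a hedged sketch of that idea (the `lastUpdated` property name is an assumption; the real entities may name it differently), an equality check based on the server's timestamp could look like this:

```typescript
// Sketch: compare entities by a server-provided timestamp instead of by
// reference, so a re-fetched but unchanged object does not bust the cache.
// The `lastUpdated` property name is illustrative.
interface VersionedEntity {
    id: string;
    lastUpdated: string; // ISO date-time from the server
}

function isSameEntity(
    previous: VersionedEntity | undefined,
    next: VersionedEntity | undefined
): boolean {
    if (previous === next) {
        return true; // same reference (or both undefined)
    }
    if (previous === undefined || next === undefined) {
        return false;
    }
    // Same entity and same server version: treat as equal, keep the cache.
    return previous.id === next.id && previous.lastUpdated === next.lastUpdated;
}
```

A comparer like this would be plugged into the custom selector creator shown above, alongside the head-of-state check.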

Testing Redux next and api.dispatch with TypeScript and Jest

A middleware in Redux can contain quite a lot of logic. In fact, it is my favorite place to put logic. The rationale is that it can be triggered by an action, still have time to request data from the backend server with an Ajax call, and then dispatch other actions to be processed by another middleware or by a reducer. It becomes crucial to unit test any area where logic can go wrong, thus testing whether specific logic dispatches or invokes next.

Testing api.dispatch is a matter of leveraging Jest. If you are using React and Redux, Jest is the de facto testing framework; in fact, it is the one that comes with the famous create-react-app. To test whether an action got dispatched through api.dispatch, you need to mock the dispatch and verify that it has been called. The function toHaveBeenCalledWith can take another expect, which can peek at the content of the object passed. Because every action contains a type and a payload, it is possible to verify whether an action has been invoked.

middleware.myFunctionInMiddleware(next, api, payload);
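The idea can be shown without Jest itself: a hand-rolled dispatch mock records every call so the test can peek at the dispatched action's type and payload, which is what `expect(api.dispatch).toHaveBeenCalledWith(expect.objectContaining({ ... }))` does for you in Jest. The middleware function and action names below are hypothetical:

```typescript
// Dependency-free sketch of mocking api.dispatch and asserting on it.
interface Action { type: string; payload?: unknown; }

function makeDispatchSpy() {
    const calls: Action[] = [];
    // Record each dispatched action so the test can inspect it afterward.
    const dispatch = (action: Action): Action => { calls.push(action); return action; };
    return { dispatch, calls };
}

// Hypothetical middleware function under test: dispatches a follow-up action.
function myFunctionInMiddleware(api: { dispatch: (a: Action) => Action }, payload: string): void {
    api.dispatch({ type: "DATA_FETCHED", payload });
}

const spy = makeDispatchSpy();
myFunctionInMiddleware({ dispatch: spy.dispatch }, "site-123");
// spy.calls now contains the dispatched { type: "DATA_FETCHED", payload: "site-123" }
```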

The case of next is similar. I found that while it is possible to use logic similar to api.dispatch, it is sometimes not enough, for example when a function calls next several times. In that case, it is possible to pass a custom next that is smarter than a simple mock.

let nextMock = nextPayloadTypeSpy(MY_ACTION_CONSTANT_HERE);
middleware.myFunctionInMiddleware(nextMock.mock, api, payload);

The code above this paragraph is a glimpse of how to use the smart next. The spy accumulates every invocation in an array and allows asserting its content throughout the test. In this case, the test verified the first execution of next associated with a specific action (defined at the declaration of the object). The logic relies on a custom Jest mock that records in an array all actions of a specific type when invoked by the tested function.

export interface ActionsWithPayload<TypeAction, TypePayload> {
    type: TypeAction;
    payload: TypePayload;
}

export interface SpyActionsWithPayload {
    mock: jest.Mock<{}>;
    hasBeenCalled: () => boolean;
    getPayload: () => any[];
}

export function nextPayloadTypeSpy(type?: string): SpyActionsWithPayload {
    let typeFromCaller: string[] = [];
    let payloadFromCaller: any[] = [];
    let nextMock = jest.fn().mockImplementation((func: ActionsWithPayload<any, string>) => {
        if (func.type === type) {
            typeFromCaller.push(func.type);
            payloadFromCaller.push(func.payload);
        }
    });
    return {
        mock: nextMock,
        hasBeenCalled: () => {
            return type === undefined ? false : typeFromCaller.includes(type);
        },
        getPayload: () => payloadFromCaller
    };
}

The code is tightly coupled with how you handle your actions. I strongly suggest using the free Redux-TypeScript boilerplate. It is a real relief to use: you can create an action within 30 seconds and remain type-safe for the payload.

The code uses a mock property, which is a hook to a mock implementation that does nothing other than record calls in an array; the actual action does nothing. The two functions beside the mock property are there for test assertions. Future improvements are obvious; for example, hasBeenCalled could also take an index to ensure that a particular next call occurred.

To summarize, even if the middleware design is frightful, with types and some useful utility functions and patterns, creating code and testing it afterward is a breeze. I always enjoy having tests that are quick to build, and the discussed approach aligns with that mindset.

Redux Structure Best Practices

I have been coding on one of our web applications at Netflix for more than 15 months, and I realize that my Redux state has gotten better over time. In hindsight, it is easy to recognize that the initial draft of the state performed its job, but not efficiently. However, after a few Internet searches, I realized that there was not a lot of guidance a year ago, nor today, on how to divide a Redux state efficiently. In this article, I'll describe crucial points that increased performance over 8x once the store was populated with complex objects.

A little disclaimer: I will not elaborate on how to optimize the whole React-Redux flow. I already discussed how to improve an overall React-Redux application in this top 5 improvements post. I also discussed 4 tips to improve Redux performance, but this time the focus is on the structure of the Redux state.

I assume your Redux state contains only normalized data. Normalized data means that every entity describes its relationships with unique identifiers instead of deep objects containing children and other objects. In short, every entity is unique, not duplicated in the Redux state. I also assume that during the life-cycle of your application, you denormalize the objects when they need to be consumed, producing fully fledged objects with deep and rich entities. A common place to denormalize is inside the mapping between the reducer and React. Depending on the complexity, this can be expensive to construct. A natural consequence of an active, well-normalized web application is that the mapping becomes a bottleneck: if you have many actions, the mapping can happen quite often. While the 4 tips to improve Redux performance do a beautiful job of mitigating some of the calls, it still requires working out a structure that is optimized for speed.
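A minimal sketch of what normalized state and a denormalization step look like (shapes are illustrative, not the Partner Portal's real entities):

```typescript
// Sketch: entities stored once, by id; relationships expressed as id arrays.
interface OrgNormalized { id: string; name: string; siteIds: string[]; }
interface SiteNormalized { id: string; name: string; }

interface AppReduxState {
    orgs: { [id: string]: OrgNormalized };
    sites: { [id: string]: SiteNormalized };
}

interface OrgDenormalized { id: string; name: string; sites: SiteNormalized[]; }

// Denormalize: stitch the flat entities back into a rich object for the view.
function denormalizeOrg(state: AppReduxState, orgId: string): OrgDenormalized | undefined {
    const org = state.orgs[orgId];
    if (org === undefined) {
        return undefined;
    }
    return {
        id: org.id,
        name: org.name,
        sites: org.siteIds.map((id) => state.sites[id]).filter((s) => s !== undefined)
    };
}
```

It is this kind of stitching, repeated on every mapping pass, that becomes expensive when the object graph is deep, which is why the memoization discussed earlier matters.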

In the Netflix Partner Portal, the first version of the Redux state divided entities per domain. For example, one reducer for Organization, one for Site, one for Cache, etc. Sub-reducers were available for each area. For instance, under Cache, you could find the reducers Interfaces, BgpSessions, etc. The separation makes sense conceptually. However, it falls short when, within these reducers, you start mixing data coming from the API with data from the user.

Initial Reducer Structure: By domain and sub-domain

The problem with mixing data from the API and from the user is that the user may type, change a configuration, or set temporary information while the data from the API does not change. The lack of separation causes performance issues: a user change should never cause an already formed object to be recomputed and built again, because it has not changed. The reason it happens is that a tree of objects changes if one of its nodes changes. Let's see the consequence with a small example. An organization contains a list of sites, which contain a list of appliances, which contain many children, and so on. If we set the property "activeAppliance" in one of the sub-reducers of an appliance, it has a chain of effects. First, the object that holds "activeAppliance" gets a new reference (because of immutability), then the sub-reducer gets a new reference, and then its parents. But it does not stop there. All memoized data of Reselect or Re-reselect is flushed. The reason is that Reselect selectors are logically linked to specific areas of the reducer tree and check whether something has changed. If nothing has changed, they return a pre-computed denormalized instance. However, if the memoized inputs have changed (which is the case with this example in many ways), they invalidate the cache and perform a denormalization.

Separating the data from the source of truth, the API, from the data the user is editing causes all the memoized data to remain intact. If a user changes the active appliance, there is no need to invalidate a whole organization with all its sites and appliances. Thus, the separation avoids recomputing data that barely changes. In this example, the "activeAppliance" moves from the "Appliance" sub-reducer to the "AppliancesNotPersisted" reducer. The change alters the reference of the "AppliancesNotPersisted" state and invalidates a limited number of selectors compared to the ones on the business entities.

Not persisted data are separated from the main domain reducer
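A sketch of the split (names follow the example above; reducer shapes are illustrative): user-interface state like the active appliance id lives in its own slice, so changing it leaves the API-sourced slice untouched.

```typescript
// Sketch: separating API data from non-persisted user state. Updating the
// active appliance replaces only the notPersisted slice; the appliances
// slice keeps its reference, so selectors memoized on it stay valid.
interface ApplianceNormalized { id: string; name: string; }

interface AppState {
    appliances: { [id: string]: ApplianceNormalized };                  // from the API
    appliancesNotPersisted: { activeApplianceId: string | undefined };  // user state
}

function setActiveAppliance(state: AppState, applianceId: string): AppState {
    return {
        ...state,
        appliancesNotPersisted: { activeApplianceId: applianceId }
    };
}

const before: AppState = {
    appliances: { a1: { id: "a1", name: "Cache-01" } },
    appliancesNotPersisted: { activeApplianceId: undefined }
};
const after = setActiveAppliance(before, "a1");
// before.appliances === after.appliances, so API-sourced selectors are not invalidated
```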

While this might sound simplistic, and it is, this little change had a big impact. Refactoring existing code is not an easy task. However, the Netflix Partner Portal is made with TypeScript and has thousands of unit tests. Most changes were done within a day without disrupting the quality of the application.

Another pattern is to ensure that you correctly compare the entities you are memoizing with Reselect. I am using Re-reselect, which allows deeper memoization. For some reason, the Partner Portal requires passing the whole state down to each selector. Usually, that would cause any change to invalidate the cache, because the reducer is immutable and hence any change creates a new reference. However, Re-reselect can use a custom shallow-equal function. The way I coded it, it checks whether the object is the root of the reducer and, if it is, avoids the invalidation. Other optimizations, beyond the scope of this article, are done as well, like comparing the last-updated date on the entities coming from the backend.

A third and last best practice is to avoid having entities inherit or intersect data that is not coming from the API. The reason is the same as with mixing user data and API data: it invalidates the selectors and hence requires computing hierarchies of objects for no reason. For example, if you have an Organization entity coming from a REST API and you enhance the entity with some other data (e.g., adding the preferred timezone that can be selected by the user), then you fall into a trap when the user changes this non-persisted state. Once again, denormalization occurs, wasting CPU time.

To conclude, it is paramount to separate your Redux store into a structure that is conceptually sound for your business logic, but also clearly separated in terms of behavior. A rule I draw from my experience with the Netflix Partner Portal is to always have one reducer for data coming from the source of truth and one for user interaction. It keeps the store clean, with a clear idea of where each piece of data belongs, while increasing your performance at the same time.

Handling Unexpected Error with React and Redux

Regardless of how professional a developer you are, there will always be an exception that slips between your fingers, or should I say, between your lines of code. Errors that bubble up the stack are called unexpected errors. While it is always preferable to catch an error as close as possible to where the exception is thrown, it is still better to catch it at a high level than not at all. In the web world, it gives the opportunity to show a graceful, generic message to the user. It also opens the door to collecting the unexpected errors and acting before any of your customers reach out to you.

There are three places where you need to handle unexpected errors in a React and Redux stack. The first is at the React level; an unexpected error can occur in a render method, for example. The second is during the mapping between Redux and React; the error occurs when we move data from the Redux store to the React properties of the connected component. The third is an error in the chain of middlewares; that one bubbles up through the stack of middlewares and explodes where the action was dispatched. Let's see how we can handle these three cases in your application.

React Unhandled Exception

Since version 16, React has simplified the capture of errors by introducing the lifecycle function componentDidCatch. It is a function like render or shouldComponentUpdate that comes with the framework. componentDidCatch is triggered when an exception is thrown in any child of the component. This detail about what it covers is crucial: you must have a component that wraps most of your application. If you are using React-Router and would like to keep the web application on the same URL with the menus staying in place, this can be tricky. The idea is to create a new component with the sole purpose of wrapping each top route component. Having a single component to handle unexpected errors is interesting: it is simple and easy to test, with a cohesive, single task.

export interface ErrorBoundaryStates {
  hasError: boolean;
}

export class ErrorBoundary extends React.Component<ErrorBoundaryProps, ErrorBoundaryStates> {
  constructor(props: ErrorBoundaryProps) {
    super(props);
    this.state = { hasError: false };
  }

  public componentDidCatch(error: Error, errorInfo: ErrorInfo): void {
    this.setState({ hasError: true });
    YourLoggingSystem.track(error, "ErrorBoundary", { errorInfo: errorInfo });
  }

  public render() {
    if (this.state.hasError) {
      return <div className="ErrorBoundary">The application crashed. Sorry!</div>;
    }
    return this.props.children;
  }
}

However, with React-Router, every route is assigned as a property. The property is an excellent opportunity to create a function that returns the wrapped React class.


// Constructor:
this.page1Component = withErrorBoundary()(Page1Component);

// Render:
<Route path={Routes.PAGE_1} component={this.page1Component} />

export const withErrorBoundary = () => <P extends object>(Component: React.ComponentType<P>) =>
  class WithErrorBoundary extends React.Component<P> {
    render() {
      return <ErrorBoundary><Component {...this.props} /></ErrorBoundary>;
    }
  };

Redux Mapping Unhandled Exception

This section will be short because it is covered by the React case. However, I want to clarify that this can be tricky if you do not follow exactly the pattern I described. For instance, if you wrap with withErrorBoundary not at the initialization of the route but directly when you connect, it will not work. The code below does not work as you might expect. The reason is that the error boundary is bound to the component but not to the mapping code executed by React-Redux's connect.

export default connect<ModelPage1, DispatchPage1, {}, {}, AppReduxState>(
    (s) => orgMappers.mapStateToProps(s),
    (s) => orgMappers.mapDispatchToProps(s)
)(withErrorBoundary()(Page1Component));

Outside of the initial solution proposed, it is also valid to wrap the connect call to achieve the desired effect of receiving the error in the componentDidCatch of the ErrorBoundary. I prefer the former solution because it does not couple the ErrorBoundary to the component forever.

export default withErrorBoundary()(connect<ModelPage1, DispatchPage1, {}, {}, AppReduxState>(
    (s) => orgMappers.mapStateToProps(s),
    (s) => orgMappers.mapDispatchToProps(s)
)(Page1Component));

Redux Middleware Unhandled Exception

The last portion of the code that needs a catch-all is the middleware. The solution builds on Redux's middleware concept of functions calling each other: the idea is to have one of the first middlewares be a big try-catch.

const appliedMiddleware = applyMiddleware(/* the catch-all middleware first, then the other middlewares */);

// Excerpt of the middleware:
return (api: MiddlewareAPI<Dispatch, AppReduxState>) =>
       (next: Dispatch) =>
       (action: Actions): any => {
            try {
                return next(action);
            } catch (error) {
                YourLoggingSystem.track(error, "Unhandled Exception in Middleware");
                return next(/* Insert here an action that renders something in the UI to indicate to the user that an error occurred */);
            }
        };


Handling errors in a React and Redux world requires code to be positioned in particular places. To this day, the documentation is not very clear, mostly because there is a clear separation between React, Redux, React-Redux's connect, and React-Router. While it is very powerful to have each piece of the puzzle separated, it comes with the price that the integration sits in a gray area. Hopefully, this article uncovers some of the mystery around collecting unhandled errors and removes confusion about why a mapping error can surface through the React mechanism when the boundary is not positioned in the right place.

Google Analytics with React and Redux

I had to integrate Google Analytics into one of our websites at Netflix. It had been a while since I last used Google Analytics, and that time I simply copy-pasted the code snippet provided by Google when creating the Analytics account. That was a few years ago, and the website was not a single-page application (SPA). Furthermore, this application uses Create React App (TypeScript version) with Redux. I took a quick look and found a few examples on the web, but I wasn't satisfied. The reason is that all the examples I found hooked Google Analytics in at the component level, and I despise having anything in the user interface (React) that is not related to the UI.

The first step is to use a library instead of dropping the JavaScript directly into the code.

npm install --save react-ga

The next step is to configure the library with the unique identifier provided by Google. I am using the create-react-app scaffolding, and I found the best place to initialize Google Analytics to be the constructor of the App.tsx file. It is a single call that needs to be executed once for the life of the SPA.

class App extends React.Component {

  public constructor(props: {}) {
    super(props);
    ReactGA.initialize(process.env.NODE_ENV === Environment.Production ? "UA-1111111-3" : "UA-1111111-4");
  }

  public render(): JSX.Element {
    return <Provider store={store}>
      <ConnectedRouter history={history}>
        <AppRouted />
      </ConnectedRouter>
    </Provider>;
  }
}

export default App;

The last step is to have react-router signal the page change when the route changes. React-router is mainly configured in React, but I didn't want any more ReactGA code in React. The application I am working on uses Redux, and I have a middleware that handles routes. At the moment, it checks whether the route changed and analyzes the URL to start fetching data from the backend.

return (api: MiddlewareAPI<AppReduxState>) =>
          (next: Dispatch<AppReduxState>) =>
              <A extends Action>(action: A): A => {
                  // Logic here that checks for action.type === LOCATION_CHANGE to fetch the proper data
                  // ...
                  // If action.type === LOCATION_CHANGE, we also call (pathname comes
                  // from react-router-redux's LOCATION_CHANGE payload):
                  // ReactGA.pageview(action.payload.pathname);
                  return next(action);
              };

The previous code is clean. Indeed, I would rather not have anything inside React, but App.tsx is the entry point, and the initialize function injects Google's code into the DOM. The Redux solution works well because the react-router-redux action carries the pathname, which is the URL. By calling the pageview function, we manually notify Google Analytics of a page change.

Top 5 Improvements that Boost Netflix Partner Portal Website Performance

Netflix is all about speed. Netflix strives to give the best experience to all its customers, and no one likes to wait. I work in the Open Connect division, which ensures that movies are streamed efficiently to everyone around the world. Many pieces of the puzzle are essential for a smooth streaming experience, but at its core, Netflix's caches act like a smart and tailored CDN (content delivery network). At Netflix, my first role was to create a new Partner Portal for all ISPs (Internet service providers) to monitor the caches and perform other administrative tasks. There is public documentation about Partner Portal available here if you are interested in knowing more about it. In this blog post, I'll talk about how I was able to take a specific user scenario that required many clicks and an average of 2 minutes 49 seconds down to under 50 seconds (cold start), and under 19 seconds once the user had visited the website more than once. An 88% reduction in waiting time is far more than an engineering feat; it is a delight for our users.

#1: Tech Stack

The framework you are using has an initial impact. The former Partner Portal was made in AngularJS. That is right, the first version of Angular. No migration had been done for years. There were the typical problems in many areas, with the digest of events as well as code that was getting harder to maintain. The maintenance aspect is out of scope for this article, but AngularJS has always been hard to follow without types, and with the possibility of adding values in a variety of places, with many functions and values in scope, it slowly becomes a nightmare. Overall, Netflix is moving toward React and TypeScript (while this is not a rule). I saw the same trend in my years at Microsoft, and I was glad to take this direction as well.

React allows fine-grained control over optimization, which I'll discuss in further points. Besides React, I selected Redux. It is not only a very popular library but also very flexible in how you can configure it and tweak its performance. Finally, I created the Data Access Gateway library to handle client-side request optimization with two levels of cache.

The summary of the tech stack point is that you can have a performant application with Angular or any other framework, but you need to keep watering your code and libraries. By that, I mean you must upgrade and make sure to use best practices. We could have gone with Angular 6 and achieved a very similar result, in my opinion. I will not go into detail about why I prefer React's proximity to JavaScript over AngularJS's templating engine; let's just say that being as close to the browser as possible and avoiding layers of indirection is appealing to me.

#2: More clicks, less content per page

The greatest fallacy of web UI is optimizing for the fewest number of clicks. This is driven by research on shopping websites, where the easier and quicker a user can press "buy", the more sales result. Great, but not every website's goal is to drive one particular action in the fewest clicks. Most websites' goal is to have users enjoy the experience and fulfill their goals in a fast and pleasant way. For example, you may have a user interface that requires 3 clicks where each click takes 5 seconds, or one that requires 4 clicks at 2 seconds each. In the end, the experience is 15 seconds versus 8 seconds. Indeed, the user clicked one more time but got the result much faster. Not only that, the user had the impression of a much faster experience because he or she was interacting instead of waiting.

Let's be clear: the goal is not to make the user click a lot more, but to be smart about the user interface. Instead of showing a very long page with 20 different pieces of information, I broke the interface into separate tabs or different pages. It reduced some pages that required a dozen HTTP calls to 1-2 calls. Furthermore, clicks in a sequence of actions could reuse previously fetched data, giving fast steps. The gain came automatically from the Data Access Gateway library, which caches HTTP responses. Not only was it better in terms of performance; in terms of telemetry it is heaven. It is now possible to know very accurately what the user is looking at. Before, we had a lot of information and it was hard to know which part was really consulted. Now we have a way, since we can collect information about which page, tab, and section is opened or drilled into.

#3: Collect Telemetry

I created a small private library that we now share across websites in our division at Netflix. It collects telemetry. I wrote a short article about the principle in the past, where you can find what is collected. In short, you have to know how users are interacting with your website and gather performance data about their reality. Then you can optimize. Not only do I know which features are used, I can establish patterns that allow preloading or positioning elements on the interface in a smart way. For example, in the past we were fetching graphs on a page for every specific entity. It was heavy in terms of HTTP calls, rendering, and "spinners". By moving them into a "metrics" page with one type of graph per tab, we were able to establish not only which graphs are really viewed but also which options are used. We removed auto-loading of graphs and let the user load the graph he or she wants to see. Not having to wait for something you do not want seems to be a winning (and obvious) strategy.

To summarize, not only is data a keystone of knowing what to optimize, it is also crucial that the developer always has the information in his or her face. The telemetry library I wrote outputs a lot of information to the console, with different colors and font sizes to clearly convey the situation. It also injects itself into Chrome's Performance tooling (like React does), which lets you see the different "scenarios" and "markers". There is no excuse, at development time or in production, for not knowing what is going on.
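The injection into Chrome's Performance tooling relies on the standard User Timing API (`performance.mark` and `performance.measure`). Here is a minimal sketch of the idea; the function name `measureScenario` is hypothetical, not the library's actual API.

```typescript
// Minimal sketch of surfacing a custom "scenario" in Chrome's Performance panel
// via the standard User Timing API. "measureScenario" is a hypothetical name,
// not the real library's API. Assumes an environment (browser or Node 16+)
// where `performance` is a global.
function measureScenario<T>(name: string, work: () => T): T {
    performance.mark(`${name}:start`);
    const result = work(); // the code being measured
    performance.mark(`${name}:end`);
    // The measure shows up as a named bar in the Performance panel's Timings track.
    performance.measure(name, `${name}:start`, `${name}:end`);
    return result;
}

// Usage: the returned value is passed through untouched.
const sum = measureScenario("computeSum", () => 2 + 2);
```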

#4: Rendering Smartly

In a single-page application that optimizes for speed rather than clicks, rendering smartly is crucial. React is built around a virtual DOM, but it still requires some patterns to be efficient. Several months ago I wrote about 4 patterns to boost your React and Redux performance, and these patterns are still very relevant. Avoiding unnecessary rendering helps the whole experience. In short, you can batch your Redux actions to avoid several notifications that each trigger a potential view update. You can cancel the mapping of your normalized objects into denormalized objects by passing a function to React-Redux's connect. You can also avoid denormalizing by memoizing the "selection" of data when the normalized data in your reducers has not changed. Finally, you should leverage immutable data so React only renders when data actually changes, without having to compute intense logic.
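The "avoid denormalizing when the reducer's data has not changed" idea can be hand-rolled with a memoized selector that compares references. The sketch below is illustrative, not the actual codebase; libraries such as Reselect package the same technique.

```typescript
// Illustrative sketch (not the actual codebase): a memoized selector that
// skips the costly denormalization when the normalized slice has not changed.
// Because reducers return a new reference only when data changes, a simple
// reference comparison is enough.
interface Entity { id: number; name: string; }
interface NormalizedState { entities: { [id: string]: Entity }; }

function createDenormalizeSelector(): (state: NormalizedState) => Entity[] {
    let lastEntities: NormalizedState["entities"] | undefined;
    let lastResult: Entity[] = [];
    return (state: NormalizedState) => {
        if (state.entities === lastEntities) {
            return lastResult; // same reference: reuse the previous computation
        }
        lastEntities = state.entities;
        lastResult = Object.keys(state.entities).map((k) => state.entities[k]);
        return lastResult;
    };
}

// Usage: two calls with the same state return the exact same array instance,
// so a connected component relying on shallow comparison does not re-render.
const selectEntities = createDenormalizeSelector();
const stateA: NormalizedState = { entities: { "1": { id: 1, name: "a" } } };
const firstCall = selectEntities(stateA);
const secondCall = selectEntities(stateA);
```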

#5: Get only what you need

We had two issues in terms of communication with the backend. First, we were making a lot of calls. Second, we were performing the same call over and over again in a short period of time. I open-sourced a library, the Data Access Gateway library, that we now use intensively for all our data needs. It fixes the second issue right away by never performing two identical calls at the same time: when a request is in flight and a second one wants the same information, the latter subscribes to the first request. All subsequent requesters get the information from that first request, and they receive it pretty fast. The problem of making many calls could, in theory, be handled better with less generic REST endpoints, but I had little control over the APIs. The Data Access Gateway library also offers a memory cache and a persisted cache with IndexedDB for free. Calls are cached, and depending on the strategy selected you can get the data almost instantly. For example, the library offers a "fetchFast" function that always returns the data as fast as possible, even if it has expired, while still performing the HTTP call to get fresh data ready for the next request. The default lifetime is 5 minutes, and our data does not change that fast. For the scenarios where data must be very fresh, the caching can be tailored per request. It is also possible to cache for longer: for example, a chart that displays information over a one-year period can be cached for almost a full day. Here is a screenshot of the Data Access Gateway's Chrome extension showing that, for a particular session, most of the data came from the cache.
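The "subscribe to the in-flight request" behavior can be sketched in a few lines. This is not the Data Access Gateway's actual code or API, just a hypothetical illustration of the deduplication idea.

```typescript
// Hypothetical illustration (not the Data Access Gateway's actual code) of
// deduplicating identical concurrent requests: the second caller subscribes
// to the promise of the first instead of firing another HTTP call.
const inFlight = new Map<string, Promise<unknown>>();

function fetchOnce<T>(key: string, fetcher: () => Promise<T>): Promise<T> {
    const pending = inFlight.get(key);
    if (pending !== undefined) {
        return pending as Promise<T>; // reuse the request already in flight
    }
    const request = fetcher().finally(() => inFlight.delete(key));
    inFlight.set(key, request);
    return request;
}

// Usage: both calls resolve from a single invocation of the fetcher.
let fetcherCalls = 0;
const fakeFetcher = (): Promise<number> => {
    fetcherCalls++;
    return Promise.resolve(42);
};
const both = Promise.all([fetchOnce("user/1", fakeFetcher), fetchOnce("user/1", fakeFetcher)]);
```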

The persisted cache is also great for returning users: they get an experience that is instant as well. The data might be stale, but the next action performed updates everything.

The experience and the numbers vary a lot depending on how the user interacts with the system. However, it is not rare to see more than 92% of information requests delivered by the cache. By "delivered" I mean returned to the user, regardless of whether it comes from the memory cache, the persisted cache, or an HTTP request. Put another way, when a user clicks around the interface, only 8% of the data is requested via (slow) HTTP. If the user stays within the same set of features, the number can easily climb to 98%. Not only is data served at a speed that feels fast to the user, it is also very efficient in terms of data moved across the network. Again, the numbers vary greatly depending on how the user interacts with the Netflix Partner Portal, but it is not rare to see that only 10% of the bytes used by the application actually come from HTTP requests, while 90% are already cached by the library. In a session with many actions, instead of downloading about 150 MB, a user may download less than 15 MB of data. That is a great gain in user experience, a great relief for our backend, and a bandwidth saving for our users. Here is a screenshot of a session recorded by the Data Access Gateway's Chrome extension.

What next?

Like many of you, my main task is delivering new features and maintaining the existing code. I do not have time specifically allocated to improving performance, but I do it anyway. I believe it is our duty as web developers to ensure that users get the requested features with quality, and the non-functional requirement of performance is a must. I often take the liberty of adding a bullet point with a performance goal before starting to develop a feature. Every little optimization along the journey accumulates. I have been working on the system for 13 months and keep adding, once in a while, a new piece of code that boosts performance. Like unit testing, polishing the user interface, or adding telemetry code to get more insight, performance is something that must be worked on daily; when we step back and look at the big picture, we can see that it was worth it.

Telemetry as a Centerpiece of Your Software

Since my arrival at Netflix, I have spent all my time working on the new Partner Portal of Netflix Open Connect. The website is private, so do not worry if you cannot find a way to access its content. I built the new portal on a few key architectural concepts, and one of them is telemetry. In this article, I will explain what it consists of and why it plays a crucial role in the maintainability of the system, as well as in how to iterate smartly.

Telemetry is about gathering insight into your system. The most basic telemetry is a simple log that adds an entry when an error occurs. However, a good telemetry strategy reaches way beyond capturing faulty operations. Telemetry is about collecting the behaviors of users, the behaviors of the system, misbehaviors along correctly programmed paths, and performance, by defining scenarios. The goal of investing time in a telemetry system is to raise awareness of what is going on on the client machine, as if you were looking over the user's shoulder. Once the telemetry system is in place, you must be able to know what the user did. You can see telemetry as having someone drop breadcrumbs everywhere.

A majority of systems collect errors and unhandled exceptions. Logging errors is crucial to clarify which ones occur so they can be fixed. However, without a good telemetry system, it can be challenging to know how to reproduce them. Recording which pages the user visited with an accurate timestamp, with which query string, on which browser, and from which link is important. If you are using a framework like React and Redux, you also need to know which action was called, which middleware executed code and fetched data, and the timing of each of these steps. Once the data is in your system, you can extract different views. You can extract all errors by time and divide them by category, and you can see error trends going up and down when releasing a new piece of code.

Handling errors is one perspective, but knowing how long a user waited to fetch data is just as important. Knowing the key percentiles (5th, 25th, 50th, 75th, 95th, 99th) of your scenarios indicates how users perceive your software. Decisions about which parts need improvement can be made with certitude because they are backed by real data from the users who consume your system. It is easier to justify engineering time to improve code that hinders your customers' experience when you have hard data. Collecting scenarios is a source of feature-popularity data as well: aggregating the count of users for a specific scenario can indicate whether a feature is worth keeping in the system or should be promoted to be easier to discover. Interpreting telemetry values is subjective most of the time, but it is less opinionated than a raw gut feeling. Always keep in mind that a value may hide an undiscovered reality. For example, a feature may be popular even though users hate using it; they just do not have any alternative.
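Computing those percentiles from collected scenario durations is straightforward. Here is a nearest-rank sketch; it is my own illustrative helper, not the telemetry library's code.

```typescript
// Illustrative nearest-rank percentile over collected scenario durations
// (a hypothetical helper, not the telemetry library's code).
function percentile(durationsMs: number[], p: number): number {
    const sorted = durationsMs.slice().sort((a, b) => a - b); // slice() avoids mutating the input
    const rank = Math.ceil((p / 100) * sorted.length);        // nearest-rank method
    const index = Math.min(sorted.length - 1, Math.max(0, rank - 1));
    return sorted[index];
}

// Usage: with durations of 1..100 ms, the 50th percentile is 50 ms and the 95th is 95 ms.
const samples = Array.from({ length: 100 }, (_, i) => i + 1);
const median = percentile(samples, 50);
const p95 = percentile(samples, 95);
```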

There are many kinds of telemetry, and when I unfolded my plan to collect them, I created a thin TypeScript (client-side) library with 4 access points. The first is named "trackError". Its specialty is tracking errors and exceptions. It takes an error name that allows grouping errors easily (this is possible for handled errors caught in try-catch blocks) and contains the stack trace. The second is "trackScenario", which starts collecting time from a start point to an end point. The function returns a "Scenario" object that can be ended but also has the capability of adding markers. Each marker lives within the scenario and allows fine-grained sub-steps. The goal is to easily identify what inside a scenario causes slowness. The third access point is "trackEvent", which takes an event name and a second parameter containing an unstructured object. It collects information about a user's behavior. For example, when a user sorts a list, there is a "sortGrid" event with a data object indicating which grid, the direction of the sort, which field is being sorted, etc. With the event data, we can generate many reports about how users use each grid or, more generically, each field. Finally, it is possible to "trackTrace", which allows specifying information about the system at several trace levels (error, warning, info, verbose). The library is thin and simple to use, and it has basic features like always sending the Git hash of the code, always sending navigation information (browser info), attaching the user's unique identifier, etc. It does not do much more. In fact, one more thing: it batches the telemetry messages and sends them periodically to avoid hammering the backend. The backend is a simple REST API that takes a collection of telemetry messages and stores them in Elasticsearch.
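The four access points could look something like the following sketch. This is a hypothetical reconstruction of the shape described above, not the private library's source.

```typescript
// Hypothetical reconstruction of the four access points described above;
// this is NOT the private library's source code.
type TraceLevel = "error" | "warning" | "info" | "verbose";

interface Scenario {
    addMarker(name: string): void; // fine-grained sub-step inside the scenario
    end(): void;
}

class Telemetry {
    private queue: object[] = []; // messages are batched, then sent periodically

    public trackError(errorName: string, error: Error): void {
        this.queue.push({ kind: "error", errorName, stack: error.stack });
    }

    public trackScenario(name: string): Scenario {
        const start = Date.now();
        const markers: { name: string; elapsedMs: number }[] = [];
        return {
            addMarker: (markerName) =>
                markers.push({ name: markerName, elapsedMs: Date.now() - start }),
            end: () =>
                this.queue.push({ kind: "scenario", name, markers, elapsedMs: Date.now() - start })
        };
    }

    public trackEvent(eventName: string, data: object): void {
        this.queue.push({ kind: "event", eventName, data });
    }

    public trackTrace(level: TraceLevel, message: string): void {
        this.queue.push({ kind: "trace", level, message });
    }

    // Drains the batch; a real implementation would POST it to the REST API.
    public flush(): object[] {
        const batch = this.queue;
        this.queue = [];
        return batch;
    }
}

// Usage:
const telemetry = new Telemetry();
telemetry.trackEvent("sortGrid", { grid: "entityList", direction: "asc", field: "name" });
const scenario = telemetry.trackScenario("loadEntityPage");
scenario.addMarker("dataFetched");
scenario.end();
const batchToSend = telemetry.flush();
```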

A key aspect, as with many software architecture and process decisions, is to start right from the beginning. There are hundreds of telemetry calls in the system at the moment, and adding them was not a burden because they were added continually during the creation of the website. Similar to writing unit tests, it is not a chore if you do not have to write them all at once. While coding the features, I had some reluctance about a few decisions, and I also had some ideas that were not unanimous.

The aftereffect of having all this data about the users soothes many hot topics by providing a reality check on how the users really use the system. Even when running a thorough user session to ask how they use the system, there is nothing like real data. For example, I was able to conclude that some users try to sort an empty grid of data. While this might not be the discovery of the century, I believe it is a great example of a behavior no user would have raised. Another benefit is monitoring errors and exceptions and fixing them before users report them. In the last month, I have fixed many (minor) errors, and fewer than 15% were raised or identified by a user. When an error occurs, there is no need to consult the user, which is often hard since they are spread around the world. My daily routine is to sort all errors by count per day and see which ones are rising. I take the top ones, search for a user who had the issue, and look at the user's breadcrumbs to see how to reproduce it locally on my developer machine. I fix the bug and push it to production. The next day, I look back at the telemetry to see if the count is going down. Fixing bugs proactively is a great sensation: you feel far less like you are putting out fires, which lets you fix issues properly. Finally, with the telemetry system in place, when my day-to-day job gets tiresome or I hit a slump in productivity, I take the opportunity to query the telemetry data and break it into several dimensions, with the goal of shedding light on how the system is really used and how it can be improved to provide the best user experience possible.

This article was not technical. I will follow up in a few days with more detail about how to implement telemetry with TypeScript in a React and Redux system.

TypeScript and Redux Immutability Functions

The system I work on in my daily job uses TypeScript, React, and Redux. I am not relying on any framework for immutability. I am using pure JavaScript cloning mechanisms, and I have unit tests for each of my reducers that make sure the instance returned is not the same. It works great, it is easy to understand, and it does not require any dependencies. However, a few utility functions ease the whole process, mostly when using dictionaries, which are present everywhere since I normalize my data in the Redux store to avoid any duplicated values.
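The kind of unit test described, asserting that a reducer returns a new instance, can be as small as this. It is a generic sketch, not the portal's actual reducers.

```typescript
// Generic sketch (not the portal's actual reducers) of a unit test asserting
// that a reducer returns a new instance instead of mutating its input.
interface CounterState { clickCount: number; }

function counterReducer(state: CounterState, action: { type: "CLICK" }): CounterState {
    if (action.type === "CLICK") {
        return { ...state, clickCount: state.clickCount + 1 }; // spread clones the state
    }
    return state;
}

const before: CounterState = { clickCount: 0 };
const after = counterReducer(before, { type: "CLICK" });
// The immutability check: a new reference is returned, and the original is untouched.
const isNewInstance = after !== before && before.clickCount === 0 && after.clickCount === 1;
```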

First, I use an alias for the index signature, just to avoid repeating the square-bracket syntax everywhere. It does not provide much, but it is worth mentioning because all the functions I will share use this interface.

export interface GenericMap<T> {
    [id: string]: T;
}

The first useful function adds an object into an array without mutating the original. It relies on the array's "slice" function to return a copy of the array.

export function addInArrayIfNotPresent<T>(array: T[], item: T): T[] {
    let returnArray: T[] = [];
    if (array !== undefined) {
        returnArray = array.slice(); // shallow copy: the original array is untouched
        if (array.indexOf(item) === -1) {
            returnArray.push(item);
        }
    }
    return returnArray;
}

// Usage:
const newReferenceArrayWithItemsAdded = addInArrayIfNotPresent(existingArray, itemsToAdd);

The second function adds a new element into a map without mutating the existing dictionary. It is useful because it handles the cloning and swaps the value into the cloned dictionary.

export function alterImmutablyMap<T>(stateMap: GenericMap<T>, key: number | undefined, modelMember: T): GenericMap<T> {
    if (key !== undefined) {
        const cloneStateMap = Object.assign({}, stateMap); // shallow clone of the dictionary
        cloneStateMap[key] = modelMember;
        return cloneStateMap;
    }
    return stateMap;
}

// Usage:
const newDictionary = alterImmutablyMap(existingDictionary, key, value);

The third function changes a single property of an existing object in a dictionary. It is useful when the user changes one property and you do not want to manually extract the current object, clone it, and set the new value into a new clone of the object yourself.

export function alterImmutablyMapMember<T, K extends keyof T>(stateMap: GenericMap<T>, key: number | undefined, modelMember: K, value: T[K]): GenericMap<T> {
    if (key !== undefined) {
        if (stateMap[key] !== undefined) {
            const cloneStateMap = Object.assign({}, stateMap);                  // clone the dictionary
            const modelFromState = Object.assign({}, cloneStateMap[key]) as T;  // clone the entry
            modelFromState[modelMember] = value;
            cloneStateMap[key] = modelFromState;
            return cloneStateMap;
        }
    }
    return stateMap;
}
// Usage:
const newDictionary = alterImmutablyMapMember(existingDictionary, key, member, value);
const newDictionary2 = alterImmutablyMapMember(existingDictionary, 1, "name", "Patrick"); // Change the item 1's property name to Patrick

How to Organize Model Type with TypeScript and React/Redux

I will not pretend there is a universal way to organize types in a Redux/React application. Instead, I will present what I found, in the project I work on in my day-to-day job, to be an easy, clean, and clear way to organize types.

First, let's establish that beyond all the business-logic types you need, React requires at a minimum a type for your component's properties. I will skip the React state's type, mostly because I rarely rely on state but also because it is not a big deal: you can handle it with a type or interface directly in the React component's file, since it is only used internally by that component.

Second, let's say we are working with a normalized model. A normalized model means that Redux stores only a single instance of each entity; there is no duplication of data. It implies that the data will be denormalized during the mapping from the Redux store to your React components. The normalized model holds an id (string or number) instead of the object itself. For example, if EntityA has a one-to-many relationship to EntityB, then EntityA in the normalized model has an array of EntityB ids, not EntityB instances. In the denormalized EntityA, however, you have an array of EntityB. The normalized form has no duplicates, while the denormalized form makes it possible to write EntityA.ArrayOfB[0].Name, because EntityB is rich and complete where the normalized version is just a key.

Third, React uses properties to hydrate the component and properties to provide actions. Separating the behaviors from the data model is a natural choice if you are using the React-Redux library, as we will see soon.

With the prerequisite that we have a model divided in two (normalized and denormalized) and that we are using React properties for our business logic, it becomes clear that a specific entity will have several interfaces, with some values crossing over. In fact, all properties that are not relationships are used in both the normalized and denormalized definitions.

The construction for each entity that is normalized and denormalized is to have one interface that contains no relations, one that contains the relationship keys, and one that contains the rich objects filled in during the mapping to React. For example, if you have "EntityA", the pattern is to have "EntityA", plus "EntityANormalized" and "EntityADenormalized" that both inherit from "EntityA". During the mapping (and the creation of EntityADenormalized), you take all the common properties from "EntityA", which reside in the Redux store as instances of EntityANormalized, and you replace every key and array of keys with the corresponding objects from the store. For example, if EntityA has a relationship to B, EntityANormalized has "EntityB: number", which is not used in EntityADenormalized because that one has "EntityB: EntityBDenormalized". Once you have these three interfaces, you can create an EntityA model interface with a 1-1 relationship to the denormalized entity, which can also carry other data needed by the React component: routing data, other denormalized entities, global user-preference data, etc. A fifth interface contains the list of all actions the user can execute in the component. Finally, a simple interface that extends the model and dispatch interfaces is created and used by the React component as its property type.

The final result, with all the interfaces created, looks like this UML diagram:

The advantage of this modeling is the reusability of the base interface (EntityA) by both the normalized and denormalized versions. It is also clear to every developer working in the system that these fields come from the backend and are "values", while the normalized interface holds the relationship keys. The mapping contains the logic to denormalize the objects, providing React with a rich model that has good navigability across the properties of all objects and can also contain fields computed dynamically during the mapping. Finally, the division into model and dispatch works flawlessly with React-Redux's connect function, which requires each of these types to be passed. It is also convenient because, in a hierarchy of components, you can pass only the actions or only a subset of the model depending on what each child React component needs.
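To make the pattern concrete, here is a hypothetical sketch of the interfaces for "EntityA". All field names are invented for illustration.

```typescript
// Hypothetical sketch of the interface pattern for "EntityA";
// every field name here is invented for illustration.
interface EntityA {                             // plain values coming from the backend
    id: number;
    name: string;
}
interface EntityANormalized extends EntityA {   // stored in Redux: relationship keys only
    entityBIds: number[];
}
interface EntityBDenormalized {
    id: number;
    label: string;
}
interface EntityADenormalized extends EntityA { // built during mapping: rich objects
    entityBs: EntityBDenormalized[];
}
interface EntityAModel {                        // everything the component displays
    entityA: EntityADenormalized;
    // routing data, other denormalized entities, user preferences, etc.
}
interface EntityADispatch {                     // every action the user can execute
    onRename: (id: number, newName: string) => void;
}
type ComponentAProps = EntityAModel & EntityADispatch; // the React component's property type

// Usage: a value conforming to the combined property type.
const props: ComponentAProps = {
    entityA: { id: 1, name: "a", entityBs: [{ id: 2, label: "b" }] },
    onRename: () => undefined
};
```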

Here is an example of how React-Redux's connect function takes the model and dispatch types, as well as how the React component for EntityA uses the properties.

export default connect<EntityAModel, EntityADispatch, {}, {}, ReduxState>(
    (s) => entityAMapper.mapStateToProps(s),
    (s) => entityAMapper.mapDispatchToProps(s),
    (stateProps, dispatchProps, ownProps) => Object.assign({}, ownProps, stateProps, dispatchProps),
    {
        pure: true,
        areStatesEqual: (n, p) => entityAMapper.shouldMappingBeSkipped(n, p)
    }
)(ComponentA);
class ComponentA extends React.Component<ComponentAProps> {