Redux Structure Best Practices

I have been coding on one of our web applications at Netflix for more than 15 months, and I realized that my Redux state got better over time. In hindsight, it is easy to recognize that the initial draft of the state was performing its job, but not efficiently. However, after a few Internet searches, I realized that there was not a lot of guidance a year ago, nor is there today, on how to divide a Redux state efficiently. In this article, I'll describe crucial points that increased performance by over 8x when the store was getting populated with complex objects.

A little disclaimer: I will not elaborate on how to optimize the whole React-Redux flow. I already discussed how to improve the overall React-Redux application in this top 5 improvements post, and I also discussed 4 tips to improve Redux performance. This time, the focus is mainly on the structure of the Redux state.

I assume your Redux state contains only normalized data. Normalized data means that every slice of the state holds entities that describe their relationships with unique identifiers, instead of deep objects that contain children and other objects. In short, every entity is unique and not duplicated in the Redux state. I also assume that, during the life cycle of your application, you denormalize the objects when they need to be consumed, to get fully fledged objects with deep and rich entities. A common place to denormalize is inside the mapping between the reducer and React. Depending on the complexity, this can be expensive to construct. A natural consequence in an active, well-normalized web application is that this mapping becomes a bottleneck: if you have many actions, the mapping can happen quite often. While the 4 tips to improve Redux performance do a beautiful job of mitigating some of the calls, you still need to work out a structure that is optimized for speed.
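
To make the assumption concrete, here is a minimal sketch of what a normalized state and its denormalization could look like. The entity names are illustrative and not the Partner Portal's actual shape.

interface NormalizedState {
    // Entities are stored flat, keyed by id; relationships are lists of ids instead of nested objects.
    organizations: { [id: string]: { id: string; name: string; siteIds: string[] } };
    sites: { [id: string]: { id: string; name: string; applianceIds: string[] } };
    appliances: { [id: string]: { id: string; name: string } };
}

// Denormalization (typically in the mapping between the reducer and React) rebuilds
// the rich object graph on demand, which is the expensive part mentioned above.
function denormalizeOrganization(state: NormalizedState, organizationId: string) {
    const organization = state.organizations[organizationId];
    return {
        ...organization,
        sites: organization.siteIds.map(siteId => ({
            ...state.sites[siteId],
            appliances: state.sites[siteId].applianceIds.map(id => state.appliances[id])
        }))
    };
}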

In the Netflix Partner Portal, the first version of the Redux state divided the entities per domain. For example, one reducer for Organization, one for Site, one for Cache, etc. Sub-reducers were available for each area. For instance, under Cache, you can find the reducers Interfaces, BgpSessions, etc. The separation makes sense conceptually. However, it falls short when, within these reducers, you start mixing data coming from the API with data coming from the user.

Initial Reducer Structure: By domain and sub-domain

The problem with mixing data from the API and data from the user is that the user may type, change a configuration, or set temporary information, while the data from the API does not change. The lack of separation causes issues in terms of performance. A user change should never cause an already formed object to be recomputed and rebuilt: the underlying data has not changed. The reason is that the tree of objects changes if one of its nodes changes. Let's see the consequence with a small example. An organization contains a list of sites, each of which contains a list of appliances, which in turn contain many children, and so on. If we set the property “activeAppliance” in one of the sub-reducers of an appliance, it triggers a chain of effects. First of all, the object that holds the “activeAppliance” gets a new reference (because of immutability), then the sub-reducer gets a new reference, and so do its parents. But it does not stop there. All memoized data of Reselect or Re-reselect is flushed. The reason is that Reselect selectors are logically linked to specific areas of the reducer tree and check whether something has changed. If nothing has changed, the selector returns a pre-computed denormalized instance. However, if no cached value is present or if the memoized input has changed (which is the case in this example in many ways), the selector invalidates the cache and performs a denormalization.
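
Here is a minimal sketch of that chain reaction, with hypothetical slice names: the immutable update produces new references all the way up the tree, and a Reselect selector whose input is that slice recomputes even though the appliance entities themselves did not change.

import { createSelector } from "reselect";

// Hypothetical slice shape, for illustration only.
interface AppliancesSlice { activeApplianceId: string | null; byId: { [id: string]: { id: string; name: string } }; }
interface AppState { cache: { appliances: AppliancesSlice }; }

// Setting the active appliance immutably creates a new appliances object,
// a new cache slice and a new root state reference.
function setActiveAppliance(state: AppState, applianceId: string): AppState {
    return {
        ...state,
        cache: {
            ...state.cache,
            appliances: { ...state.cache.appliances, activeApplianceId: applianceId }
        }
    };
}

// This selector takes state.cache.appliances as input: the new reference above
// forces a recomputation even though no appliance entity actually changed.
const selectAppliances = createSelector(
    (state: AppState) => state.cache.appliances,
    appliances => Object.keys(appliances.byId).map(id => appliances.byId[id]) // stand-in for an expensive denormalization
);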

Separating the data coming from the source of truth, the API, from the data that the user is manipulating while editing keeps all the memoized data intact. If a user changes the active appliance, there is no need to invalidate a whole organization with all its sites and appliances. Thus, the separation avoids recomputing data that barely changes. In that example, the “activeAppliance” moves from the “Appliance” sub-reducer to the “AppliancesNotPersisted” reducer. The change updates the reference on the “AppliancesNotPersisted” state and invalidates a limited number of selectors, compared to the ones on the business logic entities.

Not persisted data are separated from the main domain reducer
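
A minimal sketch of that separation, with illustrative names (the real Partner Portal reducers are more elaborate): the user-owned selection lives in its own slice, so changing it never touches the slice holding the backend entities.

import { AnyAction, combineReducers } from "redux";

interface AppliancesState { byId: { [id: string]: { id: string; name: string } }; }
interface AppliancesNotPersistedState { activeApplianceId: string | null; }

// Source-of-truth data: only backend responses change this slice.
function appliancesReducer(state: AppliancesState = { byId: {} }, action: AnyAction): AppliancesState {
    switch (action.type) {
        case "APPLIANCES_FETCHED":
            return { byId: action.payload };
        default:
            return state;
    }
}

// User-interaction data: selecting the active appliance only touches this slice,
// so selectors built on the appliances slice keep their memoized results.
function appliancesNotPersistedReducer(
    state: AppliancesNotPersistedState = { activeApplianceId: null },
    action: AnyAction
): AppliancesNotPersistedState {
    switch (action.type) {
        case "ACTIVE_APPLIANCE_SET":
            return { ...state, activeApplianceId: action.payload };
        default:
            return state;
    }
}

export const rootReducer = combineReducers({
    appliances: appliancesReducer,
    appliancesNotPersisted: appliancesNotPersistedReducer
});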

While this might sound simplistic – and it is – this little change had a big impact. Refactoring code that already exists is not an easy task. However, the Netflix Partner Portal is made with TypeScript and has thousands of unit tests, so most changes were done within a day without disrupting the quality of the application.

Another pattern is to ensure that you are correctly comparing the entity you are memoizing with Reselect. I am using Re-reselect, which allows a deeper level of memoization through cached selectors. For some reason, the Partner Portal requires passing the whole state down to each selector. Usually, that would cause any change to invalidate the cache, because the reducer is immutable and any change creates a new reference. However, Re-reselect can use a custom shallow-equal function. The way I coded this one is that it checks whether the object is the root of the reducer; if it is, it avoids the invalidation. Other optimizations are done that are beyond the scope of this article, like comparing the last-updated date on the entities coming from the backend.
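
The sketch below shows the general idea using Reselect's createSelectorCreator with a custom equality function; the actual Partner Portal code goes through Re-reselect's cached selectors and compares more than a single timestamp, so treat the names and the check as illustrative.

import { createSelectorCreator, defaultMemoize } from "reselect";

// Illustrative state shape with a last-updated marker maintained from backend responses.
interface RootState {
    organizations: { lastUpdated: number; byId: { [id: string]: { id: string; name: string } } };
}

// Custom equality: consider the whole state "unchanged" as long as the backend
// entities' last-updated marker has not moved, so passing the root state down
// does not invalidate the memoized result on every unrelated change.
const lastUpdatedEqual = (previous: RootState, next: RootState): boolean =>
    previous === next || previous.organizations.lastUpdated === next.organizations.lastUpdated;

const createLastUpdatedSelector = createSelectorCreator(defaultMemoize, lastUpdatedEqual);

const selectOrganizations = createLastUpdatedSelector(
    (state: RootState) => state, // the whole state is passed down, as in the Partner Portal
    state => Object.keys(state.organizations.byId).map(id => state.organizations.byId[id]) // stand-in for an expensive denormalization
);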

A third and last best practice is to avoid having entities inherit or intersect data that is not coming from the API. The reason is the same as for mixing user data with API data: it invalidates the selectors and hence requires computing hierarchies of objects for no reason. For example, if you have an Organization entity coming from a REST API and you enhance the entity with some other data (e.g. add the preferred timezone that can be selected by the user), then you fall into a trap when the user changes this non-persisted state. Once again, denormalization will occur, causing a waste of CPU time.
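
A small sketch of the trap and of an alternative shape, with hypothetical names: keep the user-owned value in its own not-persisted slice, keyed by the entity's id, instead of grafting it onto the API entity.

// Anti-pattern: the user preference is grafted onto the API entity, so changing
// the timezone creates a new Organization reference and invalidates its selectors.
interface OrganizationFromApi { id: string; name: string; }
interface OrganizationInStore extends OrganizationFromApi {
    preferredTimezone: string; // user-selected, never persisted by the API
}

// Preferred shape: the user-owned value lives in a not-persisted slice, keyed by entity id.
interface OrganizationsNotPersistedState {
    preferredTimezoneByOrganizationId: { [organizationId: string]: string };
}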

To conclude, it is essential to divide your Redux store into a structure that is conceptually sound for your business logic, but also clearly separated in terms of behavior. A rule that I draw from my experience with the Netflix Partner Portal is to always have one reducer for data coming from the source of truth and one for the user interaction. It keeps the store clean, with a clear idea of where each piece of data belongs, while increasing your performance at the same time.

React Ref with children caveat

Recently, I had the requirement to disable some parts of the user interface when an entity is in a specific state. I decided to create a reusable component that would blur its children and show a message to the user explaining why a section of the screen is inaccessible. You can see the final rendering in the screen capture under this paragraph. The implementation of the solution worked — except when I was refreshing the page.

Refreshing the page causes the React component to take a different path of execution. It has to fetch the information and mount the React component for the first time. While other navigation paths displayed the image above, when refreshing the page the graphic never loaded. In fact, even when the appliance was in a good status, the component was totally blank.

After a while, I realized something: the wrapper I created was the cause of the issue. The following image shows, on the left, the problematic arrangement of components and, on the right, the solution that I will explain shortly.

Left side is the wrong implementation; Right side is the good implementation

The problem was that the parent container was loading the chart component and, in its render function, was using the wrapper. The chart component was using a ref to access the DOM because the chart is built with Highcharts (a JavaScript library, not a React one). However, the introduction of the wrapper (to blur or not blur the chart) got in the way. The wrapper was rendering its children (the div that was hooked to Highcharts) differently: when the status was good, it rendered the children directly; when the status was not good, it created a DIV and inserted the children into it. Here was the problem. The difference in depth of the DOM elements was breaking Highcharts. In fact, it is not the fault of Highcharts; rather, the code in the component that was hooking Highcharts to the DIV was suddenly rendered under another DIV that had no knowledge of the modifications Highcharts had done. A quick fix to the hierarchy of HTML elements worked around the issue, but it was not right.

Here is a snippet of the wrapper:

export class Wrapper extends React.Component<WrapperProps> {
    public render(): JSX.Element {
        // Enabled: render the children directly, without any extra DOM element.
        if (this.props.isEnabled) {
            return <>{this.props.children}</>;
        }
        // Disabled: the children are nested one DIV deeper (the DIV that carries
        // the blur and the explanation message), which changes their DOM hierarchy.
        return (
            <div>
                {this.props.children}
            </div>
        );
    }
}

The core of the problem is that the component accesses a ref of an element outside its own render: it accesses a reference to a child that is rendered by the wrapper. Swapping the components, as illustrated in the right column, fixed the issue by rendering the children without having to care about the hierarchy of the elements.

The explanation, in my case, was that Highcharts was configured in componentDidMount and then had its data pushed in componentDidUpdate. However, this was done by accessing the reference of a DOM element which changed position once rendered by the wrapper. Moving the wrapper a level up and letting the component hold a reference to a DOM element that it owns (inside its own render) fixes the issue, because the configuration is executed on a reference that does not “move”.
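
Here is a minimal sketch of that arrangement; the names and chart options are illustrative, not the Partner Portal's actual chart component. The component renders the DIV itself, so the ref it configures Highcharts on never moves, and the blur wrapper can sit a level above it and change its own structure freely.

import * as React from "react";
import * as Highcharts from "highcharts";

export class ApplianceChart extends React.Component<{ data: number[] }> {
    private containerRef = React.createRef<HTMLDivElement>();
    private chart: Highcharts.Chart | undefined;

    public componentDidMount(): void {
        // The configuration is done on a DOM element owned by this component's render.
        if (this.containerRef.current !== null) {
            this.chart = Highcharts.chart(this.containerRef.current, {
                series: [{ type: "line", data: this.props.data }]
            });
        }
    }

    public componentDidUpdate(): void {
        // New data is pushed into the existing chart instead of re-creating it.
        if (this.chart !== undefined) {
            this.chart.series[0].setData(this.props.data);
        }
    }

    public componentWillUnmount(): void {
        if (this.chart !== undefined) {
            this.chart.destroy();
        }
    }

    public render(): JSX.Element {
        return <div ref={this.containerRef} />;
    }
}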

I discussed this with one of my teammates at Netflix and we built a demo inside a live sandbox that you can find at this address: https://codesandbox.io/s/74w7nz3m2j

The sandbox contains a simplification of the behavior described. There is a parent container named “App” with a render function that renders two sibling wrappers named “SameStructure” and “DifferentStructure”. The “App” has a state that changes when you click anywhere in the application. The click event changes the state to the value “false” and uses the reference to change the DOM. It simulates the changes that occur with Highcharts when the parent container is updated. The “App” sets a unique string on mount to simulate the initial configuration of Highcharts on a DOM ref of a child DOM element. There are two wrappers to illustrate that the first one keeps the same structure regardless of whether the flag is true or false, while the second wrapper changes its structure. The code is succinct; here are the wrappers.

export default class SameStructure extends React.Component<Props> {
  render() {
    if (this.props.flag) {
      return <div className="first">{this.props.children}</div>;
    } else {
      return <div className="second">{this.props.children}</div>;
    }
  }
}

export default class DifferentStructure extends React.Component<Props> {
  render() {
    if (this.props.flag) {
      return (
        <div className="first">
          <div>{this.props.children}</div>
        </div>
      );
    } else {
      return (
        <div className="second" style={{ color: "blue" }}>
          {this.props.children}
        </div>
      );
    }
  }
}

The result is unexpected if you are not aware of the weakness of using a reference to a child at a higher level. The output does not contain the text set in componentDidMount for the wrapper that has a structural change. Here is the output, where you can see that “Different Structure” is missing the “MountText”.

Same Structure
SameStructureMountText[appendTextOnClick][appendTextOnClick][appendTextOnClick]

Different Structure
InitialDifferentStructureText[appendTextOnClick][appendTextOnClick][appendTextOnClick]

To conclude, a React component that holds a reference should only refer to DOM elements that it owns. React has ways to pass down a parent reference to a child if needed. However, in this particular case, we were accessing a child from a parent, which caused unexpected behavior. The workaround is to ensure that the hierarchy of components is respected and to avoid holding a “ref” to a DOM element that lives inside the children of another component.
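
For completeness, here is a minimal sketch of how a parent can legitimately get a reference to a child's DOM node, using React.forwardRef; the names are illustrative. The difference is that the child decides which element the ref points to, so it can change its own structure without breaking the contract.

import * as React from "react";

// The child exposes a specific DOM node to its parent explicitly.
const ChartContainer = React.forwardRef<HTMLDivElement, { title: string }>(
    (props, ref) => (
        <div>
            <h3>{props.title}</h3>
            <div ref={ref} />
        </div>
    )
);

class Parent extends React.Component {
    private chartDiv = React.createRef<HTMLDivElement>();

    public render(): JSX.Element {
        // The parent owns the ref, but the child controls which DOM node it targets.
        return <ChartContainer title="Appliance traffic" ref={this.chartDiv} />;
    }
}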

Mocking Every Function of an Object with Jest and TypeScript

Most of the time, it is enough to mock only a handful of members of an object while performing unit tests. However, if you are using the dependency injection pattern, it is faster to not hand in an actual object, nor to manually cherry-pick which functions to mock. Since you are injecting an external object, you definitely do not want to test that object, and you do not want to manually create a stub object, which can be time consuming.

class MyClass {
    constructor(public myOtherInjectedClass: IMyInjectedClassTypeHere) { /* ... */ }
}

The idea is to use a TypeScript mapped type to create a mirror of the type you inject, but instead of keeping the raw function type for each member, to use Jest's mock type. Changing the type gives a strongly typed object that has the same members, but with every function being a mock. It gives the ability to use Jest's mocking features safely.

The first step is to create a custom mapped type. The type has a generic parameter, which is the type of the object you are injecting. In the example, “IMyInjectedClassTypeHere” is the one containing functions and variables. We want every function to become Jest's mock type.

type Mockify<T> = { [P in keyof T]: T[P] extends Function ? jest.Mock<{}> : T[P] };

The “Mockify” type loops over all the members and, when it finds one that is a function, maps it to the Jest mock type. Otherwise, it keeps the member in its initial type.
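
For instance, applied to a hypothetical injected type (the members below are made up for illustration), the mapped type resolves like this:

interface IMyInjectedClassTypeHere {
    load(id: string): Promise<string>;
    retryCount: number;
}

// Mockify<IMyInjectedClassTypeHere> is equivalent to:
// {
//     load: jest.Mock<{}>;   // functions become Jest mocks
//     retryCount: number;    // plain values keep their original type
// }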

The next step is to create a function that transforms an object into its Mockify counterpart. So far, we only have a type translation; now we need the logic to transform the object.

function mapToMockify<T extends Object>(obj: T): Mockify<T> {
    const newObject: Mockify<T> = {} as Mockify<T>;
    // Functions declared on a class live on its prototype, hence the getPrototypeOf.
    const properties = Object.getOwnPropertyNames(Object.getPrototypeOf(obj));
    for (let i = 0; i < properties.length; i++) {
        (newObject as any)[properties[i]] = jest.fn();
    }
    return newObject;
}

From this point, you can invoke the function with your real instance and get back a modified one, ready to be injected and interrogated with all of Jest's mocking features.

const toTest = new MyClass(mapToMockify(new MyInjectedClassTypeHere()));
// ...exercise toTest, then assert on the mocked dependency...
expect(toTest.myOtherInjectedClass.function1).toHaveBeenCalled();

To recap, in this article we saw that, with the combination of TypeScript's mapped types and basic JavaScript, we created an easy way to build a replica of a class with all of its functions transformed into mocks.

Dragging a DOM Element Visually Properly

A few weeks ago, I started using a web application (Trello) to handle the priority of my side projects. The main action is moving cards between columns, and I realized that I had done a much better job on this interaction when I worked at Microsoft on Visual Studio Team Services than what Trello offers.

My Visual Studio Team Services Animation

You can compare with the following example of Trello and probably see the difference right away.

Trello board

Trello has the idea of showing that something is in motion, which is great. However, the way I created the animation feels more natural. The concept is as if the mouse cursor were a pin on a piece of paper: moving a piece of paper with a pin naturally tilts the paper differently when moving left to right or right to left. This is what I developed. Trello tilts the card, but in a constant way, always to the right side.

I am using a shadow to create a depth of field, showing that we are above other elements that remain still. Trello is also using that technique.

However, I also added a CSS scale effect of about 5%, which simulates taking the card off the board and moving it somewhere else. Like in real life, when you pick something up and move it, the perspective changes. Trello does not change the scaling factor, hence the card remains the same size. In my view, the lack of scaling removes the realistic aspect of the movement.

Finally, I changed the cursor icon to be the move pointer. The move pointer shows the user the potential directions the item can be dragged and moved. In VSTS, it was every direction, hence the 4-arrow cursor. Trello does not change the cursor. Once again, a small detail.
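
Put together, the effect boils down to a few styles on the dragged element. The values below are illustrative, not the exact ones used in VSTS; the tilt sign simply follows the horizontal direction of the drag.

import * as React from "react";

const draggedCardStyle = (movingRight: boolean): React.CSSProperties => ({
    transform: `rotate(${movingRight ? 3 : -3}deg) scale(1.05)`, // tilt with the motion, plus the ~5% scale
    boxShadow: "0 8px 16px rgba(0, 0, 0, 0.3)",                  // depth of field above the still elements
    cursor: "move"                                               // 4-arrow move cursor while dragging
});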

In the end, small details matter. The combination of dynamic tilting, scaling, shadow and cursor modifications creates a smooth and snazzy user interface. You could push it further by slightly blurring the background; however, this last detail was removed for performance reasons but would make total sense without that speed penalty.

Side Panel instead of Dialog

During my time at Microsoft on Visual Studio Online (renamed to Visual Studio Team Services), my team had to build the new dashboard. One feature is the configuration, which consists of selecting widgets that can be part of the dashboard, each with a specific configuration. For example, you could choose a chart with the number of bugs, or a widget that displays the list of open pull requests, etc. Each widget can be updated or removed once added. The initial idea was to use a modal dialog, and the MVP (minimal viable product) was built using this user interface pattern. I was against it, I am still against it, and I modified it.

My issue with dialogs (and modal dialogs) is that, from experience, I know they never end well, and it is even worse on the web. First, a dialog often needs to open other popups, resulting in many layers of dialogs. For example, a configuration dialog may open another dialog to select a user from a list of existing users, or a color picker, etc. Second, the goal of the dialog is mainly to display information within the context of the actual page: for example, the dashboard to which you want to add a widget. However, these dialogs are oversized and hide the underlying main page; the modal defeats its own purpose. Third, most dialogs do not handle responsiveness well: changing the browser size, or simply being at a small resolution, breaks them. Fourth, many web pages that use dialogs do not handle scrolling well.

A better pattern is to have a side panel that can open and close. This is what I ended up building for Visual Studio Team Services, and it worked very well. The configuration or the addition of a widget was simple and allowed the user to drag and drop the widget at the proper location. On the right side, you can select the widget you desire, configure it and position it, all while keeping the actual dashboard visible, allowing the user to always keep in focus what is already in place.
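
As a rough sketch of the pattern (purely illustrative, not the VSTS or Partner Portal implementation), the panel is just an element pinned to the side that slides in and out, scrolls on its own and never hides the main page:

import * as React from "react";

const SidePanel = (props: { isOpen: boolean; onClose: () => void; children?: React.ReactNode }) => (
    <div
        style={{
            position: "fixed",
            top: 0,
            right: 0,
            bottom: 0,
            width: "400px",
            transform: props.isOpen ? "translateX(0)" : "translateX(100%)",
            transition: "transform 0.2s ease-out",
            overflowY: "auto" // the panel scrolls by itself, unlike most dialogs
        }}
    >
        <button onClick={props.onClose}>Close</button>
        {props.children}
    </div>
);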

Recently, in my work at Netflix, I had to migrate the creation of users from an older system to the new one. Originally, the design used a dialog. The problem is that you could not copy information from the existing list, nor see whether a user was already created, and it was not mobile friendly (small resolutions). I opted for a side panel. Here are a few of the possible interactions.

Partner Portal User Management Side Panel

Overall, the biggest gain is the reduction of layer-ordering bugs. From Jira to Twitter and other systems that use dialogs, there are always issues. It can be an error message that should be displayed on the main page but ends up half on top of the dialog, or a dialog that opens a dialog that reopens the first dialog, creating a potentially never-ending chain of dialogs. Dialogs increase the complexity of the user interface, but also of the state, which can grow exponentially. The simple pattern of the side panel reduces the complexity and increases the user's visibility of the main task by not covering information that is valuable to the task.