How to Organize Model Types with TypeScript and React/Redux

I will not pretend that there is a universal way to organize types in a Redux/React application. Instead, I’ll present what I found to be an easy, clean, and clear way to organize types in the project I work on in my day-to-day job.

First, let’s establish that, beyond all the business-logic types you need, React requires at a minimum a type for your component’s props. I’ll skip the React state type, mostly because I rarely rely on state, but also because it is not a big deal: you can handle the state’s type with a type or interface directly in the component file, since it is only used internally by that component.

Second, let’s pretend we are working with a normalized model. A normalized model means that Redux stores only a single instance of each entity; there is no duplication of data. A normalized model implies that the data will be denormalized during the mapping from the Redux store to your React components. The normalized model holds an id (string or number) instead of the object itself. For example, if EntityA has a one-to-many relationship to EntityB, then EntityA in the normalized model has an array of EntityB ids, not EntityB instances. The denormalized EntityA, however, has an array of EntityB objects. The normalized model has no duplicates; the denormalized one lets you write EntityA.ArrayOfB[0].Name because its EntityB is rich and complete, while the normalized one holds just a key.
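As a sketch of the difference (all names here are illustrative, not from the project), the normalized EntityA stores only the ids of its EntityB children while the denormalized version carries the full objects:

```typescript
// Hypothetical one-to-many relationship, normalized vs denormalized.
interface EntityB {
    id: number;
    name: string;
}

// Normalized: only the keys of the related entities are stored.
interface EntityANormalized {
    id: number;
    entityBIds: number[]; // references, no duplication in the store
}

// Denormalized: the rich, complete objects are attached during mapping.
interface EntityADenormalized {
    id: number;
    entityB: EntityB[]; // full instances, navigable as entityA.entityB[0].name
}

const normalized: EntityANormalized = { id: 1, entityBIds: [10, 11] };
const denormalized: EntityADenormalized = {
    id: 1,
    entityB: [{ id: 10, name: "B10" }, { id: 11, name: "B11" }]
};
```

Only the denormalized shape allows navigating like denormalized.entityB[0].name; the normalized one forces a lookup in the store by id.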

Third, React uses props both to hydrate the component and to provide actions. Separating the behaviors from the data model is a natural choice if you are using the React-Redux library, as we will see soon.

With the prerequisite that the model is divided in two (normalized and denormalized) and that we are using React props that consume your business logic, it becomes clear that for a specific entity we will have several interfaces, and some values will cross over. In fact, all properties that are not relationships are used in both the normalized and denormalized definitions.

The construction for each normalized/denormalized entity is to have one interface that contains no relations, one that contains the relationship keys, and one that contains the rich objects filled in during the mapping to React. For example, for “EntityA”, the pattern is to have “EntityA”, plus “EntityANormalized” and “EntityADenormalized”, which both inherit “EntityA”. During the mapping (and the creation of EntityADenormalized), you take all the common properties from “EntityA”, which reside in the Redux store as instances of EntityANormalized, and you replace every key and array of keys with the corresponding objects from the store. For example, if EntityA has a relationship to B, EntityANormalized has “entityB: number”, which is not used in EntityADenormalized because the latter has “entityB: EntityBDenormalized”. Once these three interfaces exist, you can create an EntityA model interface that has a one-to-one relationship to the denormalized entity but can also carry other data needed by the React component, such as routing data, other denormalized entities, or global user-preference data. A fifth interface contains the list of all actions the user can execute in the component. Finally, a simple interface that extends the model and dispatch interfaces is created and used by the React component as its props.
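A minimal sketch of the interfaces described above might look like this (every name and field is illustrative):

```typescript
// 1) Common fields shared by both representations: no relationships here.
interface EntityA {
    id: number;
    name: string;
}

// 2) Normalized: relationships are keys only, as stored in Redux.
interface EntityANormalized extends EntityA {
    entityBId: number;
}

// 3) Denormalized: relationships are rich objects, built during mapping.
interface EntityBDenormalized {
    id: number;
    label: string;
}
interface EntityADenormalized extends EntityA {
    entityB: EntityBDenormalized;
}

// 4) Model: the denormalized entity plus any extra data the component needs.
interface EntityAModel {
    entityA: EntityADenormalized;
    userPreference: string; // e.g. global user-preference data
}

// 5) Dispatch: every action the user can trigger from the component.
interface EntityADispatch {
    onSave: (id: number) => void;
}

// 6) Props: the interface the React component actually receives.
interface EntityAProps extends EntityAModel, EntityADispatch {}

const props: EntityAProps = {
    entityA: { id: 1, name: "A", entityB: { id: 2, label: "B" } },
    userPreference: "compact",
    onSave: () => { /* would dispatch an action here */ }
};
```

The props object is what the connect function assembles: the model part comes from mapStateToProps, the dispatch part from mapDispatchToProps.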

The final result of all the interfaces created looks like this UML diagram:

The advantage of this modeling is the reusability of the base interface (EntityA) by the normalized and denormalized versions. It is also clear to every developer working in the system that these fields come from the backend and are “values”, while the normalized interface contains the relationship keys. The mapping contains the logic to denormalize the object, providing React with a rich model that offers good navigability through the properties of all objects and that can also contain fields computed dynamically during the mapping. Finally, the division between model and dispatch works flawlessly with React-Redux’s connect function, which requires passing each type. It is also convenient because, in a hierarchy of components, you can pass only the actions or a subset of the model depending on what each child React component needs.

Here is an example of how React-Redux’s connect function takes the model and dispatch types, and how the React component for EntityA uses them as props.

export default connect<EntityAModel, EntityADispatch, {}, {}, ReduxState>(
    (s) => entityAMapper.mapStateToProps(s),
    (s) => entityAMapper.mapDispatchToProps(s),
    (stateProps, dispatchProps, ownProps) => Object.assign({}, ownProps, stateProps, dispatchProps),
    {
        pure: true,
        areStatesEqual: (n, p) => entityAMapper.shouldMappingBeSkipped(n, p)
    }
)(EntityA);

class EntityA extends React.Component<EntityAProps> {
}

Index Signature in TypeScript with a Twist

What if I told you that the type you specify as the key in your index signature in TypeScript is useless? This is pretty much the case today, with version 2.8.3. A historical reason is behind this design, and this article covers a quirk you might never have noticed even if you have been using objects with index signatures for a while.

Everything is clearer with an example, so here is code that does not compile in TypeScript:

let x: string = "x";
x = 1; // As expected, this line doesn't compile

The variable is defined to be of type string; assigning a number to it doesn’t compile. At this point, anyone who uses TypeScript agrees that this makes total sense. The user explicitly defined the variable as a primitive, and not just any primitive, but a string. If I want to store a number in this variable, I have to convert the number into a string, which can be done in different ways: using x.toString(), using the String constructor (String(x)), or concatenating the number with an empty string ("" + x). Regardless, this has been known and accepted since the inception of TypeScript.

An index signature allows an object’s values to be accessed by an index that can be a string or a number. This is often used in JavaScript to access properties of an object; the JavaScript pattern for creating a dictionary relies on it. The following code is legit in JavaScript.

var x = {};
x[123] = "Value in property 123";
x["456"] = "Value in property 456";
x[true] = "Value in property true";

In the end, the type doesn’t matter: it becomes a property of the object. For example, you can access the value at 123 with x[123] or x["123"], the same for 456 with x[456] or x["456"], and similarly for the boolean with x[true] or x["true"]. Again, so far, so good. Where things become less clear is that TypeScript lets you strongly type the index. First detail: you cannot use boolean; only string and number are allowed. This causes no harm, since a boolean index is not pragmatic. The major twist is that if you define the index to be a number, you can still set a value with a string key, and if you define the index to be a string, you can still set a value with a number key. The cherry on top is that you cannot define the index to be a union of number and string.

interface Obj {
    [id: string]: boolean;
}
let y: Obj = {};
y["okay"] = true; // string key: legit and compiles
y[123] = false; // number key: legit and compiles

interface Obj2 {
    [id: number]: boolean;
}
let y2: Obj2 = {};
y2["okay"] = true; // string key: legit and compiles
y2[123] = false; // number key: legit and compiles

I was bemused by the fact that TypeScript allowed me to specify the type but wasn’t respecting it. Enforcing it would be a great way to ensure that no one uses the wrong type: even if, in the end, it doesn’t matter at runtime, it matters for consistency in your code. As mentioned, the logical signature would seem to be a union, since both types are valid.

interface Obj { 
    [id: string | number]: boolean; // Won't compile!
}

The documentation on index signatures mentions this detail:

There are two types of supported index signatures: string and number. It is possible to support both types of indexers, but the type returned from a numeric indexer must be a subtype of the type returned from the string indexer. This is because when indexing with a number, JavaScript will actually convert that to a string before indexing into an object.

This is a quirk in TypeScript because union types did not exist at the time index signatures were born. This reason is not documented anywhere except in this thread I started in the official TypeScript GitHub repository. Ryan Cavanaugh was patient enough to give more context about the past decisions that led to the current behavior. Changing the behavior to allow a union is not yet being considered, for fear of confusing developers. Being stricter about verifying that the index really respects the index type could break existing code, which means a transition phase with a deprecation warning would be required. The TypeScript team doesn’t believe the change is worth it at the moment, and kudos to them for keeping backward compatibility as the language evolves. However, this inconsistency of syntax, where the user clearly specifies something and it behaves differently, is a long-term issue that should be addressed. Hopefully you now understand that the type of the index is useless at the moment, and you are free to use whichever you want… but not a union of the two.

Reducing Boilerplate of Redux with TypeScript

For quite a while, I found working with TypeScript and Redux slow in terms of the boilerplate required when adding a new Redux action: the creation of a unique constant, a unique type to narrow the action down, and then an action-creator function to have a single place that builds the action. The last step was to create a union of all action types allowed for the Redux reducer, which was passed into the reducer (and similarly when the action was used in a middleware). With the arrival of TypeScript 2.8, it’s easy to reduce the boilerplate.

First, some generic code must be in place.

import { ActionCreatorsMapObject } from "redux";

export function createActionPayload<TypeAction, TypePayload>(actionType: TypeAction): (payload: TypePayload) => ActionsWithPayload<TypeAction, TypePayload> {
    return (p: TypePayload): ActionsWithPayload<TypeAction, TypePayload> => {
        return {
            payload: p,
            type: actionType
        };
    };
}

export function createAction<TypeAction>(actionType: TypeAction): () => ActionsWithoutPayload<TypeAction> {
    return (): ActionsWithoutPayload<TypeAction> => {
        return {
            payload: {},
            type: actionType
        };
    };
}

export interface ActionsWithPayload<TypeAction, TypePayload> {
    type: TypeAction;
    payload: TypePayload;
}

export interface ActionsWithoutPayload<TypeAction> {
    type: TypeAction;
    payload: {};
}

// The union of everything the action creators in a map can return.
export type ActionsUnion<A extends ActionCreatorsMapObject> = ReturnType<A[keyof A]>;

It might look like a lot of code but, in reality, you won’t touch this code at all; you’ll just call it. A special mention goes to the last type, which uses the TypeScript 2.8 feature ReturnType to collect the return types of all the action-creator functions in a map, removing some manual entries.

At that point, creating a new action consists of a few steps, but not too many keystrokes.

First, you still need a constant. The constant is of a string-literal type (not string) and must be unique per action. It will be used later to narrow the type down to the payload of the action itself.

export const ACTION_1 = "ACTION_1";

The second step is to create the action with one of the two functions above. One creates an action with a payload, while the other creates an action without one. It is convenient to gather all your actions into a single constant; this way, we can later leverage ReturnType to get every action type from the action-creator functions. In the example we are building, the action creator returns a unique type that uses the string literal of the action’s constant.

export const SharedActions = {
    action1: createActionPayload<typeof ACTION_1, string>(ACTION_1),
};

The third step is to consume the action. In a reducer, we can type the incoming action to allow only the actions built from one or many constants. You can restrict which actions a reducer receives by passing, to ActionsUnion, the variables that hold the action creators. In the example below, any action from the SharedActions or OtherGroupOfActions objects is tolerated in the reducer. The greatest feature of this approach is that once you use a conditional statement like an if or a switch on the action’s type, the type of the payload is narrowed down to that action. This gives excellent IntelliSense.

type ReducerAcceptedActions = ActionsUnion<typeof SharedActions & typeof OtherGroupOfActions>;

export function oneReducerHere(
    state: State = initialState(),
    action: ReducerAcceptedActions): State {

    switch (action.type) {
        case ACTION_1: {
            // ... payload is narrowed to ACTION_1's payload here
            return state;
        }
        default:
            return state;
    }
}
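To see the narrowing at work, here is a compact, self-contained variant of the same pattern (Redux and ActionCreatorsMapObject are left out, and the action names are made up):

```typescript
// Self-contained version of the action-creator pattern with type narrowing.
interface ActionsWithPayload<TypeAction, TypePayload> {
    type: TypeAction;
    payload: TypePayload;
}

function createActionPayload<TypeAction, TypePayload>(actionType: TypeAction) {
    return (p: TypePayload): ActionsWithPayload<TypeAction, TypePayload> => ({
        type: actionType,
        payload: p
    });
}

const ACTION_A = "ACTION_A";
const ACTION_B = "ACTION_B";

const AppActions = {
    actionA: createActionPayload<typeof ACTION_A, string>(ACTION_A),
    actionB: createActionPayload<typeof ACTION_B, number>(ACTION_B)
};

// ReturnType over the map of creators yields the union of all actions.
type AppAction = ReturnType<typeof AppActions[keyof typeof AppActions]>;

function describe(action: AppAction): string {
    switch (action.type) {
        case ACTION_A:
            return action.payload.toUpperCase(); // payload narrowed to string
        case ACTION_B:
            return (action.payload * 2).toString(); // payload narrowed to number
    }
}
```

Inside each case, the compiler knows exactly which payload type is in play, which is the IntelliSense benefit described above.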

This way of working with Redux and TypeScript removes much of the boilerplate code that was initially needed. In the end, the code is easy to read even with hundreds or thousands of actions, because you can separate them into bundles. Also, the automatic type narrowing is a blessing, giving a huge edge while developing by bringing a natural boundary to what is available in each payload.

TypeScript Exhaustive Check

There are times when you have a range of values and a function that must act on every entry of the set. In TypeScript, an enum or a union of values can define the set. The problem is that these sets change over time. Ideally, TypeScript should notify the developer when a value is missing. The removal of a choice is handled by default: the value no longer exists, hence TypeScript won’t compile. An exhaustive check needs to be in place to handle any new value.

The exhaustive check leverages the never type. In TypeScript, we can create a default case that calls a function taking a parameter of type never. Since never is a subtype of every type but no other type is assignable to it, you cannot pass an argument to this function. TypeScript won’t compile if a potential path, a missing value, slips into the function; however, it compiles when all values of the set are handled.

type MetricsChartType = "Line" | "AreaDiff";

function convertToHighChartType(type: MetricsChartType): string {
    switch (type) {
        case "Line":
            return "line";
        case "AreaDiff":
            return "area";
        default:
            return exhaustiveCheck(type);
    }
}

export function exhaustiveCheck(type: never): never {
    throw new Error("Missing type");
}

The example takes a union type as a parameter that the system uses, but it must be mapped to another string before being used elsewhere. Adding a new value to the union type falls into the default case, which calls a function that doesn’t accept the type passed, so it won’t compile.

This is important for any enum or union that is handled this way. The exhaustive-check function can be written once and reused across your application. It’s short to write and will save you from runtime mismatches when mapping values.

4 Features to Boost React-Redux Performance

Recently, I’ve been optimizing one of our websites and have seen tremendous improvements from adding or modifying some behaviors in how Redux and React communicate. In this article, we will see four approaches that can work together to avoid propagating a render to React. The four areas we will cover are:

  • Batch your actions
  • Short-circuit Redux mapping
  • Avoid denormalizing untouched normalized data
  • Immutable values with a good shouldComponentUpdate

Batch your actions

I’m starting with this one because it is among the first to trigger a chain reaction. The way Redux works is that an action triggers, then middlewares can generate more actions (with Thunk), and in the end many mappings can occur, to finally have many components involved. It makes sense to optimize at the source.

The idea is limited to a set of actions and cannot be applied to every action. This pattern works well, e.g., when you are fetching data and need to dispatch several actions, or when you have an object graph that needs to be normalized across several reducers. In both cases, you want all reducers to have all the data saved in the Redux store before continuing the lifecycle (mapping, rendering). Batching enters when you receive the payload or the result of the normalization: instead of dispatching several actions, you batch them.

In the following code, without batching, each of the five actions, calling five different reducers, would trigger the mapping code. Since the normalized data changes the store, it would trigger the denormalization code and the components would render because the data changed. In reality, however, we want the UI to change once, after all entities are well positioned in the Redux store.

batchNotifications(next,
    Actions.normalizedEntityA(normalizedResponse.entities.A),
    Actions.normalizedEntityB(normalizedResponse.entities.B),
    Actions.normalizedEntityC(normalizedResponse.entities.C),
    Actions.normalizedEntityD(normalizedResponse.entities.D),
    Actions.normalizedEntityE(normalizedResponse.entities.E)
);

I wrote a simple implementation for the batch notification that sets a flag to “on” before executing the first action of the batch and turns it off after the last one.

export function batchNotifications<S>(next: Dispatch<S>,
    ...batchOfActions: Action[]): void {
    next(SharedActions.actionUiBatchOn());
    try {
        batchOfActions.forEach((action: Action) => next(action));
    } catch (e) {
        Log.trackError(e);
    } finally {
        next(SharedActions.actionUiBatchOff());
    }
}

Because the batch is executed at the middleware level, I created a Redux store enhancer to accumulate all the actions between the “on” and “off” markers, which means I have custom code in composeEnhancers. The link in this paragraph covers the code of the batch notification. Overall, the gain is tremendous, because no other pattern can reduce these five calls. In my case, the biggest entity was divided into ten reducers, which would otherwise waste many milliseconds (about 400 ms).

Short-circuit Redux mapping

If you are using the React-Redux library, you are probably using the connect function to create the link between Redux store notifications and your component.

export default connect<YourModel, YourDispatch, {}, {},
     YourStoreModel>(
    (s) => model.mapStateToProps(s),
    (s) => model.mapDispatchToProps(s),
    (stateProps, dispatchProps, ownProps) => Object.assign({},
     ownProps, stateProps, dispatchProps),
    {
        pure: true,
        areStatesEqual: (n, p) => model.shouldMappingBeSkipped(n, p)
    }
)(YourComponent);

The pattern is to use the options parameter: set “pure” to true and provide a function for areStatesEqual. If the previous and current states are equal, the change event does not propagate further, eliminating any mapping code from running. The mapping code, the first two parameters of the connect function, can be expensive to run if it denormalizes your model for no reason. The check can be done simply by looking at the Redux store and comparing with the previous value: you can target the exact portion of the state that must change before proceeding. If you are using an immutable model, the check is quick.

This pattern combined with the previous one can avoid most of the renderings. I have witnessed more than a 50% reduction in the time spent on a user’s action by batching and short-circuiting the mapping. In my worst scenario, the number of mappings went from 24 to 12 by short-circuiting when a value was not yet present (waiting for the middleware to dispatch the result of an Ajax call) or not yet fully formed (middleware logic on the data).
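A shouldMappingBeSkipped implementation can be as simple as a reference comparison on the store slices the component cares about. Here is a sketch, assuming an immutable store and a hypothetical state shape: untouched slices keep the same reference, so strict equality is enough.

```typescript
// Hypothetical store shape for the sketch.
interface ReduxState {
    entitiesA: { [id: number]: { id: number; name: string } };
    ui: { theme: string };
}

// With an immutable store, an untouched slice keeps its reference across
// dispatches. Returning true means "states are equal": skip the mapping.
function shouldMappingBeSkipped(next: ReduxState, previous: ReduxState): boolean {
    return next.entitiesA === previous.entitiesA;
}
```

Plugged into areStatesEqual, this avoids the whole mapStateToProps/denormalization pass whenever the slice this component reads from has not changed.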

Avoid denormalizing untouched normalized data

This optimization can be more time-consuming than the first two, mostly because of some difficulties with the library you might use. I’ve been using the re-reselect library, and before that something more custom. The idea is to cache the denormalized value as long as the normalized value has not changed. The cache can save time if you have a rich, deep model with many lists and maps.

In my particular case, I’ve seen a performance gain on data that doesn’t change often, with lists of thousands of entities that need to be denormalized as well. The gain varies, a few milliseconds per entity; in my case, I was able to trim about 50 ms.

The Reselect library didn’t work for my use case, which required passing dynamic values as parameters, but re-reselect did. Be aware, however, that invalidation only happens on a change in the dynamic parameters or in the state, and your store must be immutable; otherwise, it rapidly becomes complex and the cache might not work as expected.
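To illustrate the caching idea without tying the example to re-reselect’s API, here is a hand-rolled sketch (all types are hypothetical) that reuses the denormalized result as long as the normalized inputs keep the same reference:

```typescript
interface EntityBNorm { id: number; name: string; }
interface EntityANorm { id: number; bIds: number[]; }
interface EntityADenorm { id: number; b: EntityBNorm[]; }

// Cache the denormalized result per entity id; recompute only when the
// normalized inputs (compared by reference) have changed.
function makeDenormalizer() {
    const cache = new Map<number, { aRef: EntityANorm; bMapRef: Map<number, EntityBNorm>; result: EntityADenorm }>();
    let computations = 0;

    function denormalize(a: EntityANorm, bMap: Map<number, EntityBNorm>): EntityADenorm {
        const hit = cache.get(a.id);
        if (hit && hit.aRef === a && hit.bMapRef === bMap) {
            return hit.result; // untouched inputs: reuse the cached object
        }
        computations++;
        const result: EntityADenorm = {
            id: a.id,
            b: a.bIds.map((id) => bMap.get(id)!)
        };
        cache.set(a.id, { aRef: a, bMapRef: bMap, result });
        return result;
    }

    return { denormalize, getComputations: () => computations };
}
```

Returning the same cached object also plays well with the next pattern: a shallow reference check in shouldComponentUpdate will see an identical prop and skip the render. This per-key caching is what re-reselect packages up for you.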

Immutable values with a good shouldComponentUpdate

The last pattern is to break your components down into small pieces with a good shouldComponentUpdate function that leverages the immutability of the model. Breaking components down is crucial: small components allow updating only the piece where the information changed. Immutability reduces complex checks and avoids long-term issues when fields are added to the model (which would otherwise also need to be added to the checks of every component consuming it). I have the habit of running Chrome’s performance tools several times a day, and I noticed a bad habit: breaking the render down into private functions. The problem is that these functions always execute, even when the part they render has not changed. Breaking into smaller components avoids this pitfall of wasteful rendering. In my case, not much gain was realized because everything was already correctly implemented. However, I noticed that the main menu was rendering an expensive user profile on every render. The cause: a private function rendering something that should be rendered only once, and only after the user activates the menu! I gained 120 ms just by having a dedicated component, a good shouldComponentUpdate, and deferring further rendering until the user clicks the profile.
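The heart of such a shouldComponentUpdate is a shallow reference check over immutable props. Here is a sketch of that check extracted as a pure function (React offers the same idea built in through React.PureComponent):

```typescript
// With an immutable model, a shallow reference comparison of the props is
// enough: any change produces a new object, while an untouched value keeps
// its reference. Assumes next and previous have the same set of keys.
function shallowPropsEqual<T extends object>(next: T, previous: T): boolean {
    const keys = Object.keys(next) as Array<keyof T>;
    return keys.every((k) => next[k] === previous[k]);
}

// Inside a component it would be used as:
// shouldComponentUpdate(nextProps: Props): boolean {
//     return !shallowPropsEqual(nextProps, this.props);
// }
```

Because the check never descends into the objects, it stays cheap no matter how rich the model is, and new fields added to the model are covered automatically.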

To conclude, these patterns can help you rapidly gain a few milliseconds which, compounded, can add up to the half-second that your users perceive. Reducing the number of renders is crucial if you want a system that runs smoothly.