4 Features to Boost React-Redux Performance
Posted on: 2018-05-01
Recently, I've been optimizing one of our websites, and I've seen tremendous improvement by adding to or modifying some of the ways Redux and React communicate. In this article, we will look at four approaches that can work together to avoid propagating unnecessary renders to React. The four areas we will cover are:
- Batch your actions
- Short-circuit Redux mapping
- Avoid denormalizing untouched normalized data
- Immutable values with a good shouldComponentUpdate
Batch your actions
I'm starting with this one because it sits at the start of the chain reaction. The way Redux works, a dispatched action can lead middleware (with Thunk, for example) to generate more actions; in the end, many mappings occur, and many components get involved. It makes sense to try to optimize at the source.
The idea applies only to a specific set of actions and cannot be used everywhere. The pattern works well, for example, when you fetch data and need to dispatch several actions, or when you have an object graph that needs to be normalized across several reducers. In both cases, you want all reducers to have the data saved in the Redux store before continuing the lifecycle (mapping, rendering). Batching enters the picture when you receive the payload or the result of the normalization: instead of dispatching several actions, you batch them.
In the following code, without batching, each of the five actions, which call five different reducers, would run the mapping code. Since the normalized data changes the store, each action would also trigger the denormalization code and render the component. In reality, we want the UI to change once, after all entities are properly positioned in the Redux store.
batchNotifications(next,
Actions.normalizedEntityA(normalizedResponse.entities.A),
Actions.normalizedEntityB(normalizedResponse.entities.B),
Actions.normalizedEntityC(normalizedResponse.entities.C),
Actions.normalizedEntityD(normalizedResponse.entities.D),
Actions.normalizedEntityE(normalizedResponse.entities.E)
);
I wrote a simple implementation of the batch notification that sets a flag on before executing the first action of the batch and turns it off after the last one.
import { Action, Dispatch } from "redux";

// SharedActions holds the batch on/off action creators and Log the error
// tracker; both are application-specific modules.
export function batchNotifications<S>(
    next: Dispatch<S>,
    ...batchOfActions: Action[]
): void {
    next(SharedActions.actionUiBatchOn()); // Raise the batch flag
    try {
        batchOfActions.forEach((action: Action) => next(action));
    } catch (e) {
        Log.trackError(e);
    } finally {
        next(SharedActions.actionUiBatchOff()); // Lower the flag, closing the batch
    }
}
Because the batch flag is raised at the middleware level, I can create a Redux store enhancer that withholds subscriber notifications while the batch is on and sends a single notification once it turns off.
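Here is a minimal sketch of such an enhancer, with types loosened for brevity. The "UI_BATCH_ON" and "UI_BATCH_OFF" action type strings are assumptions standing in for whatever actionUiBatchOn and actionUiBatchOff produce.

export const batchEnhancer = (createStore: any) =>
    (reducer: any, preloadedState?: any) => {
        const store = createStore(reducer, preloadedState);
        const listeners: Array<() => void> = [];
        let batching = false;

        const notify = () => listeners.forEach((listener) => listener());

        const dispatch = (action: { type: string }) => {
            const result = store.dispatch(action);
            if (action.type === "UI_BATCH_ON") {
                batching = true; // Hold notifications until the batch ends
            } else if (action.type === "UI_BATCH_OFF") {
                batching = false;
                notify(); // A single notification for the whole batch
            } else if (!batching) {
                notify();
            }
            return result;
        };

        // Components subscribe here instead of on the inner store, so they
        // only wake up through the batched notify above
        const subscribe = (listener: () => void) => {
            listeners.push(listener);
            return () => {
                const index = listeners.indexOf(listener);
                if (index >= 0) {
                    listeners.splice(index, 1);
                }
            };
        };

        return { ...store, dispatch, subscribe };
    };

Passing batchEnhancer as the enhancer argument of createStore makes React-Redux subscribe through the wrapped subscribe, so connected components are notified once per batch instead of once per action.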
Short-circuit Redux mapping
If you are using the React-Redux library, you are probably using the connect function to link Redux store notifications to your component.
export default connect<YourModel, YourDispatch, {}, {}, YourStoreModel>(
    (state) => model.mapStateToProps(state),
    (dispatch) => model.mapDispatchToProps(dispatch),
    (stateProps, dispatchProps, ownProps) =>
        Object.assign({}, ownProps, stateProps, dispatchProps),
    {
        pure: true,
        // Returning true means "states are equal": the mapping is skipped
        areStatesEqual: (next, prev) => model.shouldMappingBeSkipped(next, prev)
    }
)(YourComponent);
The pattern is to use the fourth parameter (the options object), set pure to true, and provide a function for areStatesEqual. If the previous state and the current one are equal, the change event stops propagating, and none of the mapping code runs. The mapping code is in the first three parameters of the connect function, and it can be expensive to run if it denormalizes your model for no reason. The check itself can be simple: look at the Redux store and compare against the previous value, targeting the exact portion of the state that should change before proceeding. If you are using an immutable model, the check can be very fast.
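For illustration, here is a minimal sketch of what shouldMappingBeSkipped could look like, assuming an immutable store; the entities slice and the YourStoreModel shape are hypothetical stand-ins for whatever portion of the state the component actually reads.

interface YourStoreModel {
    entities: object; // Hypothetical slice of normalized data
}

export function shouldMappingBeSkipped(
    next: YourStoreModel,
    prev: YourStoreModel
): boolean {
    // With an immutable store, a reference comparison on the relevant slice
    // is enough: the same reference means nothing this component needs changed
    return next.entities === prev.entities;
}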
This pattern combined with the previous one can save most of the rendering. I have witnessed the work performed on a user's action cut by more than 50% by batching and short-circuiting the mapping. In my worst scenario, the number of mappings went from 24 to 12 by short-circuiting on values that were not yet present (still waiting on the middleware to dispatch the result of an Ajax call) or not yet fully formed (middleware logic still running on the data).
Avoid denormalizing untouched normalized data
This optimization can be more time-consuming than the first two, mostly because of some difficulty with the library you might use. I've been using the re-reselect library (and, before that, something more custom). The idea is to cache the denormalized value as long as the normalized value has not changed. The cache can save significant time if you have a rich, deep model with many lists and maps.
In my particular case, I saw a performance gain on data that doesn't change often: a list of thousands of entities that all require denormalization. The gain varies, often a few milliseconds per entity; in my case, I was able to trim about 50ms.
The Reselect library didn't work for my use case, which required passing dynamic values as a parameter, but re-reselect did. However, be aware that invalidation only happens on a change in the dynamic parameters or in the state, and your store must be immutable. Otherwise, it rapidly becomes complex, and the cache might not work as expected.
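As a sketch of the caching idea, the example below uses re-reselect with the entity id as the dynamic parameter; getEntities, buildDenormalizedEntity, and the store shape are hypothetical.

import createCachedSelector from "re-reselect";

interface StoreModel {
    entities: { [id: string]: object }; // Hypothetical normalized entities
}

const getEntities = (state: StoreModel) => state.entities;
const getEntityId = (state: StoreModel, id: string) => id;

// Hypothetical expensive denormalization step
const buildDenormalizedEntity = (
    entities: { [id: string]: object },
    id: string
) => ({ ...entities[id] });

// Recomputed only when the entities slice or the id changes;
// one cache entry per entity id
export const denormalizeEntity = createCachedSelector(
    getEntities,
    getEntityId,
    (entities, id) => buildDenormalizedEntity(entities, id)
)(
    (state, id) => id // Cache key: the dynamic parameter
);

Calling denormalizeEntity(state, "42") twice returns the cached result the second time as long as state.entities keeps the same reference.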
Immutable values with a good shouldComponentUpdate
The last pattern is to break your components down into small pieces with a good shouldComponentUpdate function that leverages the immutability of the model. Breaking components down is crucial: small components allow updating only the piece where the information actually changed. Immutability keeps the checks simple and avoids the long-term problem of fields added to the model having to be added to the check of every component that consumes it.
I have the habit of running Chrome's performance tools several times a day, and I noticed a common bad habit: breaking the render down into private functions. The problem is that these functions always execute, even if the part they render has not changed. Breaking the render down into smaller components avoids this pitfall of wasteful rendering, as the sketch below shows. In my case, there wasn't much to gain because everything was already implemented correctly. However, I noticed that the main menu was rendering an expensive user profile on every render. The cause: a private function rendering something that should be rendered only once, and only after the user activates the menu! I gained 120ms just by extracting a dedicated component with a good shouldComponentUpdate and deferring the rendering until the user clicks the profile.
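Here is a minimal sketch of such a small component, assuming the profile object is immutable (replaced by a new reference whenever it changes); UserProfilePanel and its props are hypothetical.

import * as React from "react";

interface UserProfileProps {
    profile: { name: string; avatarUrl: string }; // Hypothetical immutable model
}

export class UserProfilePanel extends React.Component<UserProfileProps> {
    public shouldComponentUpdate(nextProps: UserProfileProps): boolean {
        // With an immutable model, a reference check is all we need: a new
        // reference means the profile changed; the same one means it did not
        return nextProps.profile !== this.props.profile;
    }

    public render(): JSX.Element {
        const { name, avatarUrl } = this.props.profile;
        return (
            <div>
                <img src={avatarUrl} alt={name} />
                <span>{name}</span>
            </div>
        );
    }
}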
To conclude, these patterns can each help you rapidly gain a few milliseconds, which, compounded, can add up to a half-second that your user would perceive. Reducing the number of renders is crucial if you want a system that runs smoothly.