How to Pass Value to useCallback in React Hooks

useCallback returns a memoized reference to a callback function, with the benefit that the reference only changes when one of its dependencies changes. This is essential when callbacks are passed down the React component tree: it avoids components being re-rendered without an actual reason, which causes performance issues. The following snippet of code shows a simple button that, when clicked, invokes an action that sets the name to "test". You can imagine that in a real scenario the string would come from a real source instead of being hardcoded.

<button
  onClick={() => {
    setName("test");
  }}
>
  Change name
</button>

The action can often be handled without passing data, or by reading a React prop, which the handler can access directly. However, in some cases the value is not accessible from the handler's outer scope, which means we need to pass the value as a parameter. I am reusing a Code Sandbox, slightly modified to have the useCallback receive a value. Whether you use useCallback or simply refactor the snippet above into a function that is not directly bound to the onClick, the situation is similar: we are moving the accessible scope. When the function is inline, it can access anything defined around the button: the React props, the "map" index if it is inside a loop, and so on. However, extracting the function requires some minor changes to keep access to the value.

const setName = useCallback(() => {
  // Set the name here; the value comes from the enclosing scope
}, []);

A quick change with React Hooks to produce the desired scenario is to define the useCallback at the top of the component and use it directly in the onClick callback.

<button onClick={setName}>Change name</button>

At the moment, it works. However, we are not passing any information. Let's imagine that we cannot access the data directly from inside the useCallback; how can we still invoke it with the value?

const setName = useCallback((event: React.MouseEvent<HTMLButtonElement, MouseEvent>) => {
  // Only the event is available here, not our custom data
}, []);

The idea is to have a callback that returns a function which, in its turn, takes the event as input.

<button onClick={setName("Testing Name")}>Change name</button>

The invocation code changes by passing the data. In this example it is a string, but you can imagine passing the index of the map function or data coming from a source inaccessible to the callback function.

  const setName = useCallback(
    (name: string) => (
      event: React.MouseEvent<HTMLButtonElement, MouseEvent>
    ) => {
      // Both the name and the event are available here
    },
    []
  );

My rule of thumb is that I do not need this convoluted definition if I am directly accessing the component's properties. Otherwise, I pass the data needed. I always define the types, which gives me a good quick view of what is passed (name is a string and the event is a mouse event) without having to rummage through the code. Here is the Code Sandbox to play with the code of this article.
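The currying at play here is plain TypeScript and can be illustrated outside React. In this sketch (my own illustration; `makeHandler` and `FakeEvent` are made-up names), the outer function captures the data at render time while the inner function receives the event when it fires:

```typescript
// Made-up event type standing in for React.MouseEvent.
type FakeEvent = { type: string };

// The outer call captures the data; the inner function is the actual handler.
const makeHandler = (name: string) => (event: FakeEvent): string => {
  return `${name}:${event.type}`;
};

// `makeHandler("Testing Name")` runs at render time and returns the handler;
// the returned function runs later, when the event occurs.
const handler = makeHandler("Testing Name");
const result = handler({ type: "click" });
```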

How to consume GraphQL in TypeScript and React

In this article, I'll cover two different approaches to rendering information coming from a GraphQL server in React while staying strongly typed.

Approach 1: The normal fetch

The first approach is the one I prefer because it is not intrusive. It is very similar to doing a normal REST Ajax call. It can be swapped in for an existing fetch command that is not using GraphQL without having to change much. The approach is frontend-framework agnostic, meaning that you can use it outside React. Furthermore, this approach works well in a middleware if you are using Redux, but it can also be used directly in your custom "service" layer or when a component is mounted; you choose.

The first step is to get some NPM packages. In this approach, I am using two packages related to GraphQL: apollo-boost and graphql-tag.

import ApolloClient, { ApolloQueryResult } from "apollo-boost";
import { Proto } from "autogenerated/octoolstypes";
import gql from "graphql-tag";
import React from "react";

You then need to create an ApolloClient, which is the object that will perform the query. Think of the ApolloClient as an HTTP request library, similar to Axios. It lets you specify the URL of the POST HTTP request, but also custom headers. For example, I am specifying a bearer token to have the request authenticated.

this.client = new ApolloClient({
    uri: "/api/graphql",
    headers: {
        authorization: `Bearer ${this.props.token}`
    }
});

The next step is to have the query executed. This is where you can swap your existing fetching code with the ApolloClient.

// Note: the org fields queried below are illustrative.
(async () => {
    try {
        const result: ApolloQueryResult<Proto.Query> = await this.client.query<Proto.Query>({
            query: gql`
                query Proto {
                    org(orgId: ${orgIdParam}) {
                        name
                    }
                }
            `
        });
        if (result.data && result.data.org) {
            // Do something with result.data.org
        }
    } catch (error) {
        // Log or surface the error
    }
})();

That's it. Very simple and easy to understand because it is similar to most fetching patterns. ApolloQueryResult takes a generic type, which is the shape of the gql query. It means that when building the query for the first time, you may not be able to explicitly specify the type because it does not exist yet. As discussed in a previous article, there is a tool that will scan the gql query and generate the types for you.

Approach 2: Strongly bound to React

The second approach is my least favorite. I prefer the first one because I like separating my view from my logic. It is easier to unit test, and the separation of concerns makes the code easier to understand and refactor. Nonetheless, the option is available and worth exploring.

The second approach uses an Apollo component that wraps the application. It is similar to the React-Redux provider. It takes a single property, the client, which is the same client as the one created in the first approach.

public render(): JSX.Element {
    return (
        <ApolloProvider client={this.client}>
            <ConnectedRouter history={history}>
                <AppRouted />
            </ConnectedRouter>
        </ApolloProvider>
    );
}

The difference is that, because the client is connected in the ApolloProvider, any React component underneath has access to querying by creating a Query component.

// Note: the org fields queried below are illustrative.
<Query
    query={gql`
        query Proto2 {
            org(orgId: ${orgIdParam}) {
                name
            }
        }
    `}
>
    {queryResult => {
        if (queryResult.loading) {
            return "Loading";
        }
        if (queryResult.error) {
            return "Error";
        }
        if (queryResult.data === undefined || queryResult.data === null) {
            return <p>None</p>;
        }
        return (
            <p>{queryResult.data.org.name}</p>
        );
    }}
</Query>
Once again, the Query is strongly typed by using the type generated from the gql defined in the query parameter of the Query component. Until the type is generated (which means the gql query has been picked up by the script that builds the TypeScript file), the query cannot be explicitly typed. However, once it is done, any future change will be caught. This approach executes the query every time the container of the Query component is rendered, which means a future optimization, like a caching policy specified in the ApolloClient, might be a good thing.


There are more than two ways to bind GraphQL data into your React application while having the benefit of being strongly typed. The first approach is more agnostic; the second embraces React with a component that handles a query. As long as you are comfortable, either does the job. I recommend the former approach because it is flexible and does not make your fetching code dependent on React, but otherwise, the end result of strongly typed code is the same.


Handling Unexpected Error with React and Redux

Regardless of how professional a developer you are, there will always be an exception that slips between your fingers, or should I say, between your lines of code. These errors that bubble up the stack are called unexpected errors. While it is always preferable to catch an error as close as possible to where the exception is thrown, it is still better to catch it at a high level than not at all. In the web world, this gives an opportunity to show a generic message to the user gracefully. It also opens the door to collecting the unexpected errors and acting before any of your customers reach out to you.

There are three places where you need to handle unexpected errors in a stack using React and Redux. The first one is at the React level: an unexpected error can occur in a render method, for example. The second level is during the mapping between Redux and React; the error occurs when we move data from Redux's store to the React properties of the connected component. The third level is an error in the chain of middlewares; it bubbles up through the stack of middlewares and explodes where the action was dispatched. Let's see how we can handle these three cases in your application.

React Unhandled Exception

Since version 16, React has simplified the capture of errors by introducing the lifecycle function "componentDidCatch." It is a function like "render" or "shouldComponentUpdate" that comes with the framework. The "componentDidCatch" is triggered when an exception is thrown in any child of the component. The detail of what it covers is crucial: you must have a component that wraps most of your application. If you are using React-Router and would like to keep the web application on the same URL and have the menus stay in place, this can be tricky. The idea is to create a new component with the sole purpose of wrapping all top route components. Having a single component to handle the unexpected error is interesting: it is simple, easy to test, with a cohesive and single task.

export interface ErrorBoundaryStates {
  hasError: boolean;
}

export class ErrorBoundary extends React.Component<ErrorBoundaryProps, ErrorBoundaryStates> {
  constructor(props: ErrorBoundaryProps) {
    super(props);
    this.state = { hasError: false };
  }

  public componentDidCatch(error: Error, errorInfo: ErrorInfo): void {
    this.setState({ hasError: true });
    YourLoggingSystem.track(error, "ErrorBoundary", { errorInfo: errorInfo });
  }

  public render() {
    if (this.state.hasError) {
      return <div className="ErrorBoundary">The application crashed. Sorry!</div>;
    }
    return this.props.children;
  }
}

However, with React-Router, every route receives its component as a property. The property is an excellent opportunity to create a function that returns the React class.


// Constructor:
this.page1Component = withErrorBoundary()(Page1Component);

// Render:
<Route path={Routes.PAGE_1} component={this.page1Component} />

export const withErrorBoundary = () => <P extends object>(Component: React.ComponentType<P>) =>
  class WithErrorBoundary extends React.Component<P> {
    render() {
      return <ErrorBoundary><Component {...this.props} /></ErrorBoundary>;
    }
  };

Redux Mapping Unhandled Exception

This section will be short because it is covered by the React section above. However, I want to clarify that this can be tricky if you are not doing it exactly like the pattern I described. For instance, if you wrap with "withErrorBoundary" not at the initialization of the route but directly where you connect, it will not work. The code below does not work as you might expect. The reason is that the error boundary is bound to the component but not to the mapping code executed by React-Connect.

export default connect<ModelPage1, DispatchPage1, {}, {}, AppReduxState>(
    (s) => orgMappers.mapStateToProps(s),
    (s) => orgMappers.mapDispatchToProps(s)
)(withErrorBoundary()(Page1Component)); // Mapping errors are NOT caught

Outside of the initial solution proposed, it is also valid to wrap the "connect" itself to get the desired effect of receiving the error in the "componentDidCatch" of the "ErrorBoundary". I prefer the former solution because it does not tie the ErrorBoundary to the component forever.

export default withErrorBoundary()(connect<ModelPage1, DispatchPage1, {}, {}, AppReduxState>(
    (s) => orgMappers.mapStateToProps(s),
    (s) => orgMappers.mapDispatchToProps(s)
)(Page1Component));

Redux Middleware Unhandled Exception

The last portion of the code that needs a catch-all is the middleware. The solution goes with Redux's middleware concept, which is to leverage functions that call each other. The idea is to have one of the first middlewares be a big try-catch.

const appliedMiddleware = applyMiddleware(
    /* The try-catch middleware below goes first, followed by the others */
);

// Excerpt of the middleware:
return (api: MiddlewareAPI<Dispatch, AppReduxState>) =>
       (next: Dispatch) =>
       (action: Actions): any => {
            try {
                return next(action);
            } catch (error) {
                YourLoggingSystem.track(error, "Unhandled Exception in Middleware");
                return next(/*Insert here an action that will render something to the UI to indicate to the user that an error occurred*/);
            }
        };


Handling errors in a React and Redux world requires the code to be organized in a particular way. To this day, the documentation is not very clear, mostly because there is a clear separation between React, Redux, Redux-Connect, and React-Router. While it is very powerful to have each element of the puzzle separated, this comes with the price that the integration is a gray area. Hopefully, this article uncovers some of the mystery around how to collect unhandled errors, and removes confusion about the particular case of why mapping errors can throw through the React mechanism when the boundary is not positioned at the right place.

Burning the last GIT commit into your telemetry/log

I enjoy knowing exactly what happens in the systems I am actively working on and need to maintain. One way to ease the process is to know precisely the version of the system when an error occurs. There are many ways to proceed, like having a sequential number that increases, or a version number (major, minor, patch). I found that the easiest way is to leverage the Git hash. The reason is that not only does it point me to a unique place in the life of the code, it also removes all the manual incrementation that a version number requires, or the need to use or build something to increment a number for me.

The problem with the Git hash is that you cannot generate it meaningfully locally: every change you make must be committed and pushed first, hence the local hash will always be at least one commit behind. The idea is to inject the hash at build time in the continuous integration (CI) pipeline. This way, the CI is always running on the latest code (or a specific branch), knows exactly which code is being compiled, and can inject the hash without having to save anything.

At the moment, I am working with Jenkins and React using react-scripts-ts. I only had to change the build command to inject the output of a Git command into a React environment variable.

"build": "REACT_APP_VERSION=$(git rev-parse --short HEAD) react-scripts-ts build",

In the code, I can get the version by using the process environment.

const applicationVersion = process.env.REACT_APP_VERSION;

The code is minimal, and it leverages Git and an environment variable that can easily be read inside a React application. There is no mechanism to maintain, and the hash is a source of truth. When a bug occurs, it is easy to set up the development environment at the exact commit and use the rest of the logs to find out how the user reached the exception.
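Locally, outside the CI pipeline, REACT_APP_VERSION is usually undefined since only the build command injects it. A small guard can fall back to a placeholder; this is a hypothetical helper of my own naming, not part of the article's codebase:

```typescript
// Hypothetical helper: use the CI-injected hash, or fall back to "dev" locally.
function getApplicationVersion(env: { [key: string]: string | undefined }): string {
  const version = env.REACT_APP_VERSION;
  return version !== undefined && version.length > 0 ? version : "dev";
}

// In the application: const applicationVersion = getApplicationVersion(process.env);
```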

Google Analytics with React and Redux

I had to integrate Google Analytics into one of our websites at Netflix. It had been a while since I last used Google Analytics, and that last time was simply copy-pasting the code snippet provided by Google when creating the Analytics "provider" account. That was a few years ago, and the website was not a single-page application (SPA). Furthermore, this application uses Create React App (TypeScript version) with Redux. I took a quick look and found a few examples on the web, but I wasn't satisfied. The reason is that all the examples I found were hooking Google Analytics in at the component level. I despise having anything in the user interface (UI), in React, that is not related to the UI.

The first step is to use a library instead of dropping the JavaScript directly into the code.

npm install --save react-ga

The next step is to configure the library with the unique identifier provided by Google. I am using the create-react-app scaffolding, and I found the best place to initialize Google Analytics to be the constructor in the App.tsx file. It is a single call that needs to be executed once for the life of the SPA.

class App extends React.Component {

  public constructor(props: {}) {
    super(props);
    ReactGA.initialize(process.env.NODE_ENV === Environment.Production ? "UA-1111111-3" : "UA-1111111-4");
  }

  public render(): JSX.Element {
    return <Provider store={store}>
      <ConnectedRouter history={history}>
        <AppRouted />
      </ConnectedRouter>
    </Provider>;
  }
}

export default App;

The last step is to have react-router report the page change when the route changes. React-router is mainly configured in React, but I didn't want any more ReactGA code in React. The application I am working on uses Redux, and I have a middleware that handles routes. At the moment, it checks whether the route changed and analyzes the URL to start fetching data from the backend.

  return (api: MiddlewareAPI<AppReduxState>) =>
            (next: Dispatch<AppReduxState>) =>
                <A extends Action>(action: A): A => {
                    // Logic here that checks for action.type === LOCATION_CHANGE to fetch the proper data
                    // ...
                    // If action.type === LOCATION_CHANGE we also call the following line:
                    ReactGA.pageview(action.payload.pathname);
                    return next(action);
                };

The previous code is clean. Indeed, I would rather not have anything inside React, but App.tsx is the entry point, and the initialize function injects Google's code into the DOM. The Redux solution works well because the react-router-redux in use provides the pathname, which is the URL. By using the "pageview" function, we manually send a page change to Google Analytics.

Increasing the Performance of Thousands of LI with Expandable Rows using React-Virtualized

Like most applications, everything is smooth with testing data. After a while of usage, the application gathers more and more bytes, which changes the whole picture. In some situations, the data does not expire, hence the collection of data grows out of proportion, way beyond the normal testing scenarios that the developer might anticipate at the inception of the code. The problem is the browser and its lack of understanding of the purpose of so many elements. Using HTML with overflow set to scroll (or auto) and relying on the scrolling always being smooth is a fallacy: the browser needs to render and move all these elements.

There are a few options. The easiest one is to display only a subset of the whole collection of data. However, that solution penalizes users who may want to reach the information again in the future. A second popular option is to render only what is visible. It means removing from the screen the elements that are hidden, out of the user's range: everything above and under the viewport is removed. The challenge is to keep the scrollbar's position and size right. Using a virtualization library that calculates the number of elements out of the viewport and simulates the space, while avoiding generating too many elements, is a valid solution. It works well with fixed rows. However, in my use case, clicking a row expands it with many inputs, graphs, and text. I also didn't use a table, so I wasn't sure how it would work. To my big surprise, not only is variable row height supported by the most popular virtualization library, it also supports collections of elements that are not a table (for example UL>LI). I am talking about the React-Virtualized library.
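The core idea of virtualization can be sketched outside React: given a scroll offset and a viewport height, compute which rows are visible and how much space to simulate above them. This is my own simplified illustration with fixed row heights, not React-Virtualized's actual algorithm:

```typescript
// Simplified sketch: which rows of a fixed-height list fall in the viewport,
// and how much empty space to simulate above them so the scrollbar stays correct.
interface VisibleWindow {
  firstIndex: number; // first row to actually render
  lastIndex: number;  // last row to actually render (inclusive)
  offsetTop: number;  // pixels of simulated space above the rendered rows
}

function computeVisibleWindow(
  rowCount: number,
  rowHeight: number,
  scrollTop: number,
  viewportHeight: number
): VisibleWindow {
  const firstIndex = Math.max(0, Math.floor(scrollTop / rowHeight));
  const lastIndex = Math.min(
    rowCount - 1,
    Math.ceil((scrollTop + viewportHeight) / rowHeight) - 1
  );
  return { firstIndex, lastIndex, offsetTop: firstIndex * rowHeight };
}
```

With variable row heights (the expandable 15px/300px rows of this article), the same idea applies but the library must sum individual heights instead of multiplying, which is why it needs a `rowHeight` callback and a recomputation when a height changes.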

In this article, I'll show how to transform a list built with a single UL and many LI. Clicking an LI adds another React component inside it, which increases the height of the row. The full source code comes from the Data Access Gateway Chrome extension. In the following image, you can see many rows of 15px height followed by one that got expanded to 300px. The idea is that the code is smart enough to take the different heights into account in its calculation.

Before explaining, note that the official library has many examples, which are great. However, they are not all written in JSX, which can be cumbersome to read. I hope this article helps some people understand how to do it with JSX as well as with your own React components. To explain my scenario, here is how it was before and how we will modify the structure in terms of the React component hierarchy. The following diagram shows that I am simulating a grid header, outside the range of what can be scrolled, with the first UL. The second UL contains a collection of "ConsoleMessagesLine" components, each of which is an LI with many DIVs: one DIV per column, positioned with a FLEX display. When a row is clicked, it toggles between rendering an additional "ConsoleMessagesLineDetails" component and rendering undefined, which does not display anything.

The division of the components satisfied my needs, and I wanted to alter the composition as little as possible. In the end, I only had to add two different components from the React-Virtualized library without having to change the UL>LI>DIV composition.

The first step is to get the library. This is straightforward if you are using NPM. Indeed, I am using TypeScript, and the library has a @types package.

npm install react-virtualized --save
npm install @types/react-virtualized --save-dev

However, with TypeScript 3.0-rc, I had an issue with the definition file. It has some conflicting types, which required me to tell TypeScript to forgo analyzing the definition file. This doesn't remove the strength of having types, but TypeScript won't check the definition file when compiling. In your tsconfig.json file, you should add and enable "skipLibCheck".

"skipLibCheck": true

The next step is to add the AutoSizer and the List components from the library to the existing code. A few important details. First, you must have a "ref" to the list. The reference gives us a pointer to force a redraw when we know the user changed the height of an existing component; we will see the code later. Second, we must provide a function that returns the row height. The "rowHeight" property is a must in a situation where the row height can change depending on some characteristic of the row. In my case, the row will be either 15px or 300px, depending on whether it is collapsed or expanded.

<ul className="ConsoleMessage-items">
    <AutoSizer>
        {({ width, height }) => (
            <List
                ref={r => (this.list = r)}
                width={width}
                height={height}
                rowCount={this.state.filteredData.length}
                rowHeight={p => this.calculateRowHeight(p)}
                rowRenderer={p => this.renderRow(p)}
            />
        )}
    </AutoSizer>
</ul>

The AutoSizer component has an unusual way of using its children: it calls them with a width and a height. You can see an extract of the definition file below this paragraph. The Size type has a width and a height determined by the AutoSizer.

children: (props: Size) => React.ReactNode;

The List component wraps our rows and generates one row for every index up to the rowCount provided in the properties. It means I had to modify my code: I was mapping my collection to render a ConsoleMessagesLine, and now it becomes a for-loop where I indicate by index which element to render.

  <ul className="ConsoleMessage-items">
      {this.props.listMessages
          .filter((m) => this.logics.filterConsoleMessages(m, this.state.consoleMessageOptions.performance, this.state.consoleMessageOptions.size))
          .map((m: MessageClient) => <ConsoleMessagesLine
              key={m.uuid}
              message={m}
          />)}
  </ul>

The first modification from the code above: I was filtering in the render. Instead, I have to keep this list somewhere to be able to refer to it by index. I leveraged the static React function getDerivedStateFromProps to derive the filtered collection from the props into the component's state.

public static getDerivedStateFromProps(
    props: ConsoleMessagesProps,
    state: ConsoleMessagesState
): ConsoleMessagesState {
    const allData = props.listMessages.filter(m =>
        // ...the same filtering conditions as before
        true
    );
    return {
        ...state,
        filteredData: allData
    };
}

The next step was to create two functions: one for the dynamic height determination of each row and one for the actual render of a single line, which uses the data we just stored in the component's state. The dynamic height is a single-line function that, in its turn, uses the list as well.

private calculateRowHeight(params: Index): number {
    return this.openMessages[this.state.filteredData[params.index].uuid] === undefined ? 15 : 300;
}

As you can see, it goes into the filtered list, uses the parameter that has an index, and determines whether the unique identifier of the object is in a map that we populate when the user clicks the row. I have a uuid field, but it could be anything proper to your logic and code. For the curious, the "openMessages" map is a private member of the component that adds and removes elements depending on the toggle of each row. A small but important detail: because we are modifying the height of an element of the virtualization, we must explicitly and manually invoke a recalculation of the rows. This is possible by calling "recomputeRowHeights" on the list. The reference to the list is handy because we can invoke it in the function that toggles the height.

private lineOnClick(msg: MessageClient, isOpen: boolean): void {
    const unique = msg.uuid;
    if (isOpen) {
        this.openMessages[unique] = unique;
    } else {
        delete this.openMessages[unique];
    }
    if (this.list !== null) {
        this.list.recomputeRowHeights();
    }
}

The last crucial piece is the render of the element. Similarly to the function that determines the height of the row, the parameter gives the index of the element to render.

private renderRow(props: ListRowProps): React.ReactNode {
    const index = props.index;
    const m = this.state.filteredData[index];
    return (
        <ConsoleMessagesLine
            key={props.key}
            style={props.style}
            message={m}
            onClick={(msg, o) => this.lineOnClick(msg, o)}
            isOpen={this.openMessages[m.uuid] !== undefined}
        />
    );
}

The code is very similar to my code before the virtualization modification. However, it has to fetch the data from the filtered list by index.

Finally, the code is all set up, without major refactoring or constraints in terms of how the components must be created. Two wrappers in place, a little bit of code moved around to comply with the contract of having rows rendered by function, and we are all set. The performance is now day and night when the number of rows increases.

Top 5 Improvements that Boost Netflix Partner Portal Website Performance

Netflix is all about speed. Netflix strives to give the best experience to all its customers, and no one likes to wait. I work in the Open Connect division, which ensures that movies are streamed efficiently to everyone around the world. Many pieces of the puzzle are essential for a smooth streaming experience, but at its core, Netflix's caches act like a smart and tailored CDN (content delivery network). At Netflix, my first role was to create a new Partner Portal for all ISPs (Internet service providers) to monitor the caches and perform other administrative tasks. There is public documentation about the Partner Portal available here if you are interested in knowing more about it. In this blog post, I'll talk about how I took a specific user scenario that required many clicks and an average of 2 minutes 49 seconds down to under 50 seconds (cold start), and under 19 seconds once the user has visited the website more than once. An 88% reduction in waiting time is far more than just an engineering feat; it is a delight for our users.

#1: Tech Stack

The framework you are using has an initial impact. The former Partner Portal was made in AngularJS. That is right, the first version of Angular. No migration had been made for years. There were the typical problems in many areas with the digest of events, as well as the code getting harder to maintain. The maintenance aspect is out of scope of this article, but AngularJS has always been hard to follow without types, and with the possibility of adding values in a variety of places, with many functions and values in scope, it slowly becomes a nightmare. Overall, Netflix is moving toward React and TypeScript (while this is not a rule). I saw the same trend in my years at Microsoft, and I was glad to take this direction as well.

React allows fine-grained control over optimization, which I'll discuss in further points. Other than React, I selected Redux. It is not only a very popular framework but also very flexible in how you can configure it and tweak the performance. Finally, I created the Data Access Gateway library to handle client-side request optimization with two levels of cache.

The summary of the tech stack point is that you can have a performant application with Angular or any other framework. However, you need to keep watering your code and libraries: you must upgrade and make sure to use best practices. We could have gone with Angular 6 and achieved a very similar result, in my opinion. I will not go into detail about why I prefer React's proximity to JavaScript over AngularJS's templating engine. Let's just say that being as close to the browser as possible and avoiding layers of indirection are appealing to me.

#2: More clicks, less content per page

The greatest fallacy of web UI is optimizing for the fewest number of clicks. This is driven by research on shopping websites, where the easier and quicker the user can press "buy," the more likely a sale results. Great, but not every website's goal is to bring someone to one particular action in the fewest clicks. Most websites' goal is to have the user enjoy the experience and fulfill his or her goal in a fast and pleasant way. For example, you may have a user interface that requires 3 clicks where each click takes 5 seconds, or one that requires 4 clicks at 2 seconds each. In the end, the experience is 15 seconds versus 8 seconds. Indeed, the user clicked one more time but got the result much faster. Not only that, the user had the impression of a much faster experience because he or she was interacting instead of waiting.

Let's be clear: the goal is not to have the user click a lot more, but to be smart about the user interface. Instead of showing a very long page with 20 different pieces of information, I broke the interface into separate tabs or different pages. This reduced some pages that required a dozen HTTP calls down to 1-2 calls. Furthermore, clicks in a sequence of actions can reuse previously fetched data, giving fast steps. The gain came automatically with the Data Access Gateway library, which caches HTTP responses. Not only was it better in terms of performance, in terms of telemetry it is heaven. It is now possible to know very accurately what the user is looking at. Before, we had a lot of information, and it was hard to know which pieces were really consulted. Now we have a way, since we can collect information about which page, tab, and section is opened or drilled into.

#3: Collect Telemetry

I created a small private library that we now share across the websites in our division at Netflix. It collects telemetry. I wrote a small article about the principle in the past, where you can find what is collected. In short, you have to know how users interact with your website, as well as gather performance data about their reality. Then, you can optimize. Not only do I know which features are used, I can establish patterns that allow preloading or positioning elements on the interface in a smart way. For example, in the past, we were fetching graphs on a page for every specific entity. It was heavy in terms of HTTP calls, rendering, and "spinners." By separating it into "metrics" pages with one type of graph per tab, we were not only able to establish which graph is really visualized but also which options, etc. We removed auto-loading of graphs by letting users load the graph they want to see. Not having to wait for something you do not want seems to be a winning (and obvious) strategy.

To summarize, data is not only the keystone of knowing what to optimize; it is crucial for the developer to have that information constantly in view. The telemetry library I wrote outputs a lot of information to the console, with different colors and font sizes, to give clear insight into the situation. It also injects itself into Chrome’s Performance tooling (like React does), which allows seeing the different scenarios and markers. There is no excuse, at development time or in production, for not knowing what is going on.

#4: Rendering Smartly

In a single-page application that optimizes for speed, not clicks, rendering smartly is crucial. React is built around a virtual DOM, but it still requires some patterns to be efficient. Several months ago I wrote about 4 patterns to boost your React and Redux performance, and these patterns are still very relevant. Avoiding unnecessary renders helps the whole experience. In short, you can batch your Redux actions to avoid several notifications each triggering a potential view update. You can optimize the mapping of your normalized objects into denormalized objects by using an option of Redux’s connect to cancel the mapping. You can also avoid denormalizing by memoizing the selection of data when the normalized data in your reducers has not changed. Finally, you can leverage immutable data in React to render only when data changes, without having to compute intense logic.
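As a rough illustration of the “avoid denormalizing when nothing changed” idea (not the article’s actual code; all names here are invented), a memoized selector can skip recomputation when its input slice keeps the same reference:

```typescript
// Minimal memoized selector: with immutable state, reference equality on the
// input slice tells us whether the denormalization needs to run again.
interface User { id: string; name: string; }
interface AppState { usersById: { [id: string]: User }; }

function createSelector<I, O>(input: (s: AppState) => I, compute: (i: I) => O) {
    let lastInput: I | undefined;
    let lastOutput: O | undefined;
    return (state: AppState): O => {
        const current = input(state);
        if (current !== lastInput) {
            lastInput = current;
            lastOutput = compute(current); // denormalize only when the slice changed
        }
        return lastOutput as O;
    };
}

const selectUsers = createSelector(
    (s) => s.usersById,
    (byId) => Object.keys(byId).map((id) => byId[id])
);
```

Because the selector returns the same array reference while the state slice is unchanged, a connected component comparing props shallowly can bail out of rendering entirely.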

#5: Get only what you need

We had two issues in terms of communication with the backend. First, we were making a lot of calls. Second, we were performing the same call over and over again within a short period of time. I open-sourced the library we use intensively for all our data needs, the Data Access Gateway library. It fixes the second issue right away by never performing two identical calls at the same time: when a request is in flight and a second one wants the same information, the latter subscribes to the first request. All subsequent requesters get the information from the original request, and they receive it quickly. The problem of making many calls could, in theory, be handled better with less generic REST endpoints; however, I had little control over the APIs. The Data Access Gateway library offers a memory cache and a persisted cache with IndexedDB for free. Calls are cached and, depending on the strategy selected in the library, you can get the data almost instantly. For example, the library offers a “fetchFast” function that always returns the data as fast as possible, even if it is expired, and then performs the HTTP call to get fresh data ready for the next request. The default expiration is 5 minutes, and our data does not change that fast. For the scenarios where data must be very fresh, the caching can be tailored per request. It is also possible to cache for longer; for example, a chart that displays a year of information can be cached for almost a full day. Here is a screenshot of the Data Access Gateway’s Chrome extension, which shows that for a particular session, most of the data came from the cache.
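The deduplication idea can be sketched in a few lines: keep a map of in-flight promises keyed by URL, and hand the existing promise to any later requester. This is an illustration of the behavior described above, not the library’s actual API.

```typescript
// In-flight request deduplication: two identical calls share one network request.
const inFlight = new Map<string, Promise<string>>();

function fetchOnce(url: string, doFetch: (u: string) => Promise<string>): Promise<string> {
    const existing = inFlight.get(url);
    if (existing !== undefined) {
        return existing; // later requesters subscribe to the request already in flight
    }
    const request = doFetch(url).then(
        (value) => { inFlight.delete(url); return value; }, // clean up on success
        (error) => { inFlight.delete(url); throw error; }   // clean up on failure
    );
    inFlight.set(url, request);
    return request;
}
```

Two components asking for the same endpoint at the same moment get the very same promise back, so only one HTTP call ever leaves the browser.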

The persisted cache is also great for returning users, who get a near-instant experience on their return. The data might be old, but the next click updates everything.

The experience and the numbers vary a lot depending on how the user interacts with the system. However, it is not rare to see more than 92% of information requests delivered by the cache. By delivered, I mean returned to the user, regardless of whether it comes from the memory cache, the persisted cache, or an HTTP request. Put the other way: when a user clicks around the interface, only 8% of the data is requested via (slow) HTTP. If the user stays within the same set of features, the number can easily climb to 98%. Not only is the data served fast, it is also very efficient in terms of data moved across the network. Again, the numbers vary greatly depending on how the user interacts with the Netflix Partner Portal, but it is not rare to see only 10% of the bytes used by the application actually coming from HTTP requests, with 90% already cached by the library. In a session with many actions, that means downloading less than 15 megs of data instead of about 150 megs: a great gain for the user experience, a relief for our backend, and saved bandwidth for our users. Here is a screenshot of a session recorded by the Data Access Gateway’s Chrome extension.

What next?

Like many of you, my main task is delivering new features and maintaining the existing code. I do not have time specifically allotted to improving performance, but I do it anyway. I believe it is our duty as web developers to ensure that the user gets the requested features with quality, and the non-functional requirement of performance is a must. I often take the liberty of adding a bullet point with a performance goal before starting to develop a feature. Every little optimization along the journey accumulates. I have been working on the system for 13 months and keep adding, once in a while, a new piece of code that boosts performance. Like unit testing, polishing the user interface, or adding telemetry code for more insight, performance is something that must be worked on daily; when we step back and look at the big picture, we can see that it was worth it.

Telemetry as the Centerpiece of Your Software

Since my arrival at Netflix, I have spent all my time working on the new Partner Portal of Netflix Open Connect. The website is private, so do not worry if you cannot find a way to access its content. I built the new portal on a few key architectural concepts, and one of them is telemetry. In this article, I will explain what it consists of, why it plays a crucial role in the maintainability of the system, and how it helps us iterate smartly.

Telemetry is about gathering insight into your system. The most basic telemetry is a simple log that adds an entry when an error occurs. However, a good telemetry strategy reaches well beyond capturing faulty operations. Telemetry is about collecting the behaviors of the users, the behaviors of the system, the misbehaviors of correctly programmed paths, and performance, by defining scenarios. The goal of investing time into a telemetry system is to raise awareness of what is going on on the client machine, as if you were looking over the user’s shoulder. Once the telemetry system is in place, you must be able to know what the user did. You can think of telemetry as someone dropping breadcrumbs everywhere.

The majority of systems collect errors and unhandled exceptions. Logging errors is crucial to clarify which ones occur so they can be fixed. However, without a good telemetry system, it can be challenging to know how to reproduce them. Recording which pages the user visited, with an accurate timestamp, with which query string, on which browser, and from which link, is important. If you are using a framework like React and Redux, you also need to know which actions were called, which middleware executed code and fetched data, and the timing of each of these steps. Once the data is in your system, you can extract different views. You can extract all errors by time and divide them by category, and you can see error trends going up and down when releasing a new piece of code.

Handling errors is one perspective, but knowing how long a user waited to fetch data is just as important. Knowing the key percentiles (5th, 25th, 50th, 75th, 95th, 99th) of your scenarios indicates how the user perceives your software. Decisions about which parts need improvement can be made with certainty because they are backed by real data from the users who consume your system. It is easier to justify engineering time to improve code that hinders the customer experience when you have hard data. Collecting scenarios is also a source of feature popularity. Aggregating the count of users for a specific scenario can indicate whether a feature is worth staying in the system or should be promoted to be easier to discover. Interpreting telemetry values is subjective most of the time, but it is less opinionated than a raw gut feeling. Always keep in mind that a value may hide an undiscovered reality; for example, a feature may look popular even though users hate it, simply because they have no alternative.
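For instance, a percentile over collected scenario durations can be computed with the nearest-rank method. In our case the aggregation happens server-side, so this is only an illustration of the math:

```typescript
// Nearest-rank percentile over scenario durations in milliseconds.
function percentile(durationsMs: number[], p: number): number {
    const sorted = [...durationsMs].sort((a, b) => a - b);
    // rank of the p-th percentile in the sorted list (1-based)
    const rank = Math.ceil((p / 100) * sorted.length);
    return sorted[Math.max(0, rank - 1)];
}
```

With durations of 100, 200, 300, 400, and 500 ms, the 50th percentile is 300 ms while the 95th is 500 ms: the higher percentiles reveal the slow experiences that an average would hide.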

There are many kinds of telemetry, and when I unfolded my plan to collect them, I created a very thin TypeScript (client-side) library with 4 access points. The first is named “trackError”. Its specialty is tracking errors and exceptions. It simply takes an error name, which allows grouping errors easily (possible for handled errors caught in a try-catch block), and contains the stack trace. The second is “trackScenario”, which starts collecting time from a start point to an end point. The function returns a “Scenario” object that can be ended but also has the capability of adding markers. Each marker lives within the scenario and allows fine-grained sub-steps. The goal is to easily identify what, inside a scenario, causes slowness. The third access point is “trackEvent”, which takes an event name and a second parameter containing an unstructured object. It collects information about a user’s behavior. For example, when a user sorts a list, there is a “sortGrid” event with a data object indicating which grid, the direction of the sort, which field is being sorted, etc. From the event data, we can generate many reports about how users use each grid or, more generically, each field. Finally, “trackTrace” allows specifying information about the system at several trace levels (error, warning, info, verbose). The library is thin, simple to use, and has basic features like always sending the Git hash of the code, always sending navigation (browser) information, attaching the user’s unique identifier, etc. It does not do much more than that; in fact, one more thing, which is batching the telemetry and sending it periodically to avoid hammering the backend. The backend is a simple REST API that takes a collection of telemetry messages and stores them in Elasticsearch.
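The four access points could look roughly like this. The shapes are reconstructed from the description above; the real library’s signatures may differ.

```typescript
// Reconstructed sketch of the four access points; not the actual library code.
type TraceLevel = "error" | "warning" | "info" | "verbose";

interface Scenario {
    addMarker(name: string): void; // fine-grained sub-step inside the scenario
    end(): void;                   // stops the timer and queues the telemetry
}

interface Telemetry {
    trackError(errorName: string, error: Error): void;
    trackScenario(name: string): Scenario;
    trackEvent(eventName: string, data?: object): void;
    trackTrace(level: TraceLevel, message: string): void;
}

// Minimal in-memory implementation; the real library batches the queue and
// posts it periodically to a REST API.
class InMemoryTelemetry implements Telemetry {
    public queue: object[] = [];

    public trackError(errorName: string, error: Error): void {
        this.queue.push({ kind: "error", errorName, stack: error.stack });
    }

    public trackScenario(name: string): Scenario {
        const start = Date.now();
        const markers: string[] = [];
        return {
            addMarker: (marker: string) => { markers.push(marker); },
            end: () => {
                this.queue.push({ kind: "scenario", name, markers, durationMs: Date.now() - start });
            }
        };
    }

    public trackEvent(eventName: string, data?: object): void {
        this.queue.push({ kind: "event", eventName, data });
    }

    public trackTrace(level: TraceLevel, message: string): void {
        this.queue.push({ kind: "trace", level, message });
    }
}
```

A sort of a grid would then be a single line at the call site, for example `telemetry.trackEvent("sortGrid", { grid: "users", direction: "asc" })`.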

A key aspect, as with many software architecture and process decisions, is to start right from the beginning. There are hundreds of telemetry calls in the system at the moment, and adding them was not a burden because they were added continually during the creation of the website. Similar to writing unit tests, it is not a chore when you do not have to write them all at once. While coding the features, I had some reluctance about a few decisions, and I also had some ideas that were not unanimous.

The aftereffect of having all this data about the user is that it soothes many hot topics by providing a reality check about how users really use the system. Even a thorough user session where you ask people how they use the system is nothing like real data. For example, I was able to conclude that some users try to sort an empty grid of data. While this might not be the discovery of the century, it is a great example of a behavior that no user would have raised. Another benefit is monitoring errors and exceptions and fixing them before users report them. In the last month, I have fixed many (minor) errors, and fewer than 15% were raised or identified by a user. When an error occurs, there is no need to consult the user, which is often hard since they can be remote, around the world. My daily routine is to sort all errors by count per day and see which ones are rising. I take the top ones, search for a user who had the issue, and look at the user’s breadcrumbs to see how to reproduce it locally on my developer machine. I fix the bug and push it to production. The next day, I look back at the telemetry to see if the count is decreasing. Proactive bug fixing is a great sensation: you feel far less like you are pouring water on a fire, which lets you fix the issue properly. Finally, with the telemetry system in place, when my day-to-day job gets tiresome or I have a slump in productivity, I take the opportunity to query the telemetry data along several dimensions, with the goal of shedding light on how the system is really used and how it can be improved to provide the best user experience possible.

This article was not technical. I will follow up in a few days with more detail about how to implement telemetry with TypeScript in a React and Redux system.

Using TypeScript and React to create a Chrome Extension Developer Tool

I recently have a side project that I am dogfooding at Netflix. It is a library that handles HTTP requests by acting as a proxy that caches them. The goal is to avoid redundant parallel calls and to avoid requesting data that is still valid, while being as simple as possible. It has few configurations, just enough to customize the level of caching per request if required. The goal of this article is not to sell the library but to explain how I created the Chrome extension that listens to every action and collects insight to help developers understand what is going on underneath the UI. This article explains how I used TypeScript and React to build the UI. For information about how the communication works behind the scenes, from the library to Chrome, refer to this previous article.

Here is an image of the extension at the time I wrote this article.

A Chrome developer-tool extension requires a panel, which is an HTML file. Create React App generates a static HTML file that bootstraps React. There is a flavor of create-react-app for TypeScript that works similarly. In both cases, it generates a build in a folder that can be published as a Chrome extension.

The build folder’s content can be copied into your distribution folder along with the manifest.json, contentScript.js, and background.js files that were discussed in the article about communication between your code and the Chrome extension.
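As a rough illustration (the file names are assumptions matching the ones above), the manifest of a developer-tool extension ties these pieces together:

```json
{
  "manifest_version": 2,
  "name": "Data Access Gateway Dev Tool",
  "version": "1.0",
  "devtools_page": "devtools.html",
  "background": { "scripts": ["background.js"], "persistent": false },
  "content_scripts": [
    { "matches": ["<all_urls>"], "js": ["contentScript.js"] }
  ]
}
```

The devtools_page is a small HTML file whose script calls chrome.devtools.panels.create, pointing the panel at the index.html generated by the create-react-app build.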

What is very interesting is that you can develop your Chrome extension without being inside the developer tools. Staying outside increases your development velocity because you do not need to build with Webpack, which is slow, nor close and reopen the extension for every little change. Instead, you can mock the data and leverage the hot-reload mechanism of create-react-app by starting the development server (npm run start) and running the Chrome extension as an independent website, until you are ready to test the full-fledged extension with communication coming from outside.

Running the website with create-react-app is a matter of running a single command (start); however, you need to indicate to the panel’s code that you do not expect to receive messages from Chrome’s runtime. I handle the two modes by passing an environment variable on the command line. In the package.json file, I added the following script:

"start": "REACT_APP_RUNENV=web react-scripts-ts start",

Inside the React, app.tsx file, I added a check that decides whether to subscribe to Chrome’s runtime message listener or to be injected with fake data for the purpose of web development.

if (process.env.REACT_APP_RUNENV === "web") {
    // Inject fake data in the state
} else {
    // Subscribe to postMessage event
}
Finally, using TypeScript and React is a great combination. It clarifies the messages expected at every point of the communication and simplifies the code by removing any potential confusion about what is required. React is also great for simplifying the UI and the state. While the Data Access Gateway Chrome extension is small and does not use Redux or another state-management library, it leverages React’s state in app.tsx. Saving and loading the user’s data is a matter of simply dumping the state into Chrome’s storage and restoring it; that is it. Nothing more.

public persistState(): void {
  const state = this.state;
  chrome.storage.local.set({ [SAVE_KEY]: state });
}

public loadState(): void {
  chrome.storage.local.get([SAVE_KEY], (result) => {
    if (result !== undefined && result[SAVE_KEY] !== undefined) {
      const state = result[SAVE_KEY] as AppState;
      this.setState(state);
    }
  });
}

To summarize, a Chrome extension can be developed with any front-end framework. The key is to ship the build output along with the required files and to connect the generated index in the manifest.json. React works well, not least because it generates the entry point as a simple HTML file, which is the format required by a Chrome extension. TypeScript is not a hurdle because the build output is JavaScript, hence no difference. React and TypeScript are a great combination. With the ability to develop the extension outside of Chrome’s extension environment, you gain velocity and can rapidly get the product into a shape your users can use.

TypeScript and React – Continuous Integration for Pull Request from 3 minutes to 1 minute

At Netflix, software engineers own the full lifecycle of an application, from gathering requirements to building the code, all the way to deployment, which includes configuring AWS for DNS and load balancing. I personally like to have, on every pull request, a build that makes sure everything compiles (not only on my machine) as well as a unit-test run to make sure no regression is introduced. For several months, this process took 3 minutes, plus or minus 10 seconds. That satisfied me; it accomplished its main goal. I expected it to take some time because of the nature of the project: first, I am using TypeScript; second, I am using node modules; and third, I need to run the unit tests. The codebase of that project is relatively small. I wrote about 36k lines in the last 11 months, and there are about 900 unit tests to run.

Moving from 3 minutes to 1 minute 30 seconds

The first step was to add the unit tests. Yes! For the first few months, only the build was running, mainly because we are using Bitbucket and Jenkins and I never took the time to configure everything; getting coverage with Jenkins for JavaScript code is not straightforward. I was also using the create-react-app “react-scripts-ts build” command, which is way slower than running “react-scripts-ts test --env=jsdom --coverage”. In my case, the switch trimmed 1 minute 30 seconds.

Still, within the remaining 1 minute 30 seconds, I was observing a waste of 1 minute fetching node_modules through “npm install”, even though my step specified “npm ci”. The difference between “install” and “ci” is that the former is slower because it resolves dependencies and may update package-lock.json, a step that “ci” skips by relying on the existing package-lock.json file.
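In other words, the two commands behave quite differently in a CI step (exact timings will vary by project):

```shell
# "install" resolves the dependency tree and may rewrite package-lock.json.
npm install

# "ci" requires an existing package-lock.json, deletes node_modules first,
# and installs exactly the pinned versions, with no resolution step.
npm ci
```

For a pull-request build where the lock file is committed, “ci” gives reproducible installs, at the cost of always starting from an empty node_modules folder.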

Moving from 1 minute 30 seconds to 1 minute 10 seconds

The install command was bothering me, and I found out that some internal process was taking over some of the steps. To keep it short, I had to make some modifications. First, in Jenkins you can preserve folders like node_modules; this is under “Build Environment”. Do not forget to check “Apply pattern also on directories”. However, the problem is still npm: “ci” removes node_modules, so we are no further ahead than before. The idea, then, is to go back to the install command.


There is still room for improvement. To be honest, I had to avoid deleting the whole workspace to make it work; Jenkins kept removing node_modules regardless of the syntax I used. I also found it suspicious that “npm install” takes 20 seconds to figure out that nothing has changed; that is very slow. I will have to investigate further with Yarn.