Dragging a DOM Element Visually Properly

A few weeks ago, I started using a web application (Trello) to prioritize my side projects. The main action is moving cards between columns, and I realized that the version I built when I worked at Microsoft on Visual Studio Team Services did a better job than Trello.

My Visual Studio Team Services Animation

You can compare with the following example of Trello and probably see the difference right away.

Trello board

Trello has the idea of showing that something is in motion, which is great. However, the animation I created feels more natural. The concept is that the mouse cursor acts like a pin on a piece of paper. Moving a piece of paper with a pin naturally tilts the paper differently when moving left to right versus right to left. This is what I developed. Trello tilts the card, but in a constant way, always to the right side.

I am using a shadow to create a depth of field, showing that we are above the other elements that remain still. Trello also uses that technique.

However, I also added a CSS scale effect of about 5%, which simulates taking the card off the board and moving it somewhere else. As in real life, when you pick something up and move it, the perspective changes. Trello does not change the scaling factor, hence the card remains the same size. In my view, the lack of scaling removes the realistic aspect of the movement.

Finally, I changed the cursor icon to be the move pointer. The move pointer shows the user the potential directions the item can be dragged and moved. In VSTS, it was every direction, hence the four-arrow cursor. Trello does not change the cursor. Once again, a small detail.

In the end, small details matter. The combination of dynamic tilting, scaling, shadow, and cursor modifications creates a smooth and snazzy user interface. You can push the limit by slightly blurring the background. However, this last detail was removed for performance reasons, but it would make total sense without that speed penalty.
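To make the idea concrete, here is a minimal sketch of the styling logic; all names and constants are hypothetical, not the actual VSTS implementation.

```typescript
// Sketch of the drag styling described above (all names are hypothetical).
// The tilt follows the horizontal drag direction, like paper pinned under
// the cursor; the ~5% scale simulates lifting the card off the board.

// Map the horizontal mouse delta to a tilt angle, clamped to +/- maxDegrees.
export function tiltForDelta(deltaX: number, maxDegrees: number = 6): number {
  const raw = deltaX * 0.5; // half a degree per pixel of horizontal movement
  return Math.max(-maxDegrees, Math.min(maxDegrees, raw));
}

// Build the CSS transform applied to the dragged element on every mouse move.
export function dragTransform(deltaX: number): string {
  return `rotate(${tiltForDelta(deltaX)}deg) scale(1.05)`;
}

// The remaining touches are static CSS on the dragged element:
// cursor: move; box-shadow: 0 8px 16px rgba(0, 0, 0, 0.3);
```

Dragging to the right tilts clockwise, to the left counter-clockwise, and the clamp keeps the rotation subtle.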

Side Panel instead of Dialog

During my time at Microsoft on Visual Studio Online (renamed Visual Studio Team Services), my team had to build the new dashboard. One feature is the configuration, which consists of selecting widgets that can be part of the dashboard, each with a specific configuration. For example, you could choose a chart with the number of bugs, or a widget that displays the list of open pull requests, etc. Each widget can be updated or removed once added. The initial idea was to use a modal dialog, and the MVP (minimal viable product) was built using this user interface pattern. I was against it, I am still against it, and I modified it.

My issue with dialogs (and modal dialogs) is that I know from experience that they never end well. It is even worse on the web. First, a dialog often requires opening other popups, resulting in many layers of dialogs. For example, a configuration dialog may open another dialog to select a user from a list of existing users, or a color picker, etc. Second, the goal of a dialog is mainly to display information within the context of the actual page. For example, on the dashboard, you want to add a dashboard widget. However, these dialogs are oversized and remove visibility of the underlying main page; the modal defeats its own purpose. Third, most dialogs do not handle responsiveness well. Changing the browser size, or simply being at a small resolution, fails. Fourth, many web pages that use dialogs do not handle scrolling well.

A better pattern is a side panel that can open and close. This is what I ended up building for Visual Studio Team Services, and it worked very well. Configuring or adding a widget was simple and allowed a user to drag and drop the widget to the proper location. On the right side, you can select the widget you desire, configure it, and position it. All that with visibility of the actual dashboard, allowing the user to always keep in focus what is already in place.

Recently, in my work at Netflix, I had to migrate the creation of users from an older system to the new one. Originally, the design used a dialog. The problem is that you cannot copy information from the existing list, nor see whether a user was already created, and it was not mobile friendly (small resolutions). I opted to use a side panel. Here are a few possible interactions.

Partner Portal User Management Side Panel

Overall, the biggest gain is the reduction of layer-ordering bugs. From Jira to Twitter to other systems that have dialogs, there are always issues. It can be an error message that should be displayed on the main page but ends up half on top of the dialog, or a dialog that opens a dialog that reopens the first dialog, creating a potentially never-ending loop. It increases the complexity of the user interface, but also of the state, which can grow exponentially. The simple pattern of the side panel reduces the complexity and increases the user's visibility of the main task by not covering information that is valuable to it.
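The state argument can be illustrated with a small sketch (all names hypothetical): stacked dialogs imply an ordered stack with layer bookkeeping, while a side panel collapses to a single open/closed flag plus its content.

```typescript
// Hypothetical sketch contrasting the two UI states described above.

// Dialog stack: every entry implies z-index ordering and a close path,
// and the number of reachable states grows with the stack depth.
export interface DialogStackState {
  openDialogs: string[];
}

// Side panel: one boolean and the content it shows; no layer ordering.
export interface SidePanelState {
  isOpen: boolean;
  content: string | null;
}

export function openPanel(state: SidePanelState, content: string): SidePanelState {
  return { isOpen: true, content };
}

export function closePanel(state: SidePanelState): SidePanelState {
  return { isOpen: false, content: null };
}
```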

How to organize all the model interfaces and types in your TypeScript project

So many ways, so many good solutions. I do not think you can go very wrong regardless of the approach you employ with your types. However, there are benefits in a few patterns that I prefer. As a rule of thumb, I do not want to search far for my types. While your tooling might provide a shortcut to search by name easily, and I do leverage it most of the time, it is crucial to find types easily when you do not know the name or when you are not sure whether a type already exists. The reason is mostly to avoid duplication and to ease the onboarding of new developers in your project, but also to find out if one of your teammates already coded a definition that you could reuse.

First of all, the approach I am using is subjective. It means that you have to make many choices that some people on your team might or might not agree with. I have worked in systems where all the components (views) were together, all the tests in one folder, and all the models in one folder. This is straightforward but comes with the cost that, as your application grows, these folders become never-ending. It becomes hard to navigate. I have also seen folders divided by team, which is okay until the organization changes and the structure breaks down into a mess. I have also seen a separation that works by model but mixes all the model files in as siblings of your views (in React your .tsx files, in Angular the .html), your tests, etc. The approach is not bad, but when you start having more than one component or page using the same model, you are stuck.

The approach I recommend is to have your source folder divided by business domain. For example, under your “source” folder, you can have a “UserManagement”, a “Products”, an “Inventory”, and a “Shipping” folder, etc. The idea is that, at that point, you do not rely on the technology that you will use, nor on how your application will separate the UI. Your user management might have 1 page or 10 pages; it doesn’t matter. The choice of these domains is subjective. You could avoid having “Inventory” and put everything under “Products”. It is up to you and your team to figure out how fine-grained you want these domain folders.

Under each of these domain folders, you can start bringing in some specifics of your technologies. For example, with React, each of these folders has an “Actions”, “ContainerComponents”, “Middlewares”, “Models”, “PresentationComponents”, and “Reducers” folder. Once again, you have some freedom. You can keep all your container and presentation components in a single “components” folder. I like to divide them because I know that my containers are connected while my presentation components are more reusable and not connected to Redux directly. Within each of these folders, I have my TypeScript files (.ts or .tsx) as well as my tests. For a long time, I positioned my test files as siblings of the source folder, but having them close to the files they are testing keeps them in sight. Because they are always in view, I tend to test more. I also clearly see when a file does not have a test file.

Some types are cross-domain. Shared types are also valid for very generic visual components. In that case, I have at the root, inside the source folder, a domain called “shared.” The cross-domain folder is the one that every domain folder can access. Having a shared folder requires diligence. It is easy to insert random code there instead of thinking about where to locate the file accurately.

If we come back to types, each domain folder has a model folder, which has many files. The separation is not one type, one file. The reason is that it would become uncontrollable. The number of types grows very fast, and it is not rare to have thousands of types, even for a small product. The idea is to divide the files per main entity. For example, a product (interface Product) can have a product category (interface ProductCategory) and a product size (enum ProductSize). These three types can be in the “Product.ts” file. Again, there is room for subjectivity. Someone could argue that more or fewer types can be in or out of a file. The important thing here is to have something that makes sense. I often like to have the main entity, in that case “Product”, hold every child class, interface, enum, and type in the same file. I also like having a mapping file, “ProductMapping.ts”, that manipulates the entity from a normalized structure to a denormalized one, for example.
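As a sketch, a hypothetical “Product.ts” grouping the main entity with its child types could look like this; all fields are illustrative.

```typescript
// Product.ts: one file per main entity, holding its related types.
export enum ProductSize {
  Small = "S",
  Medium = "M",
  Large = "L"
}

export interface ProductCategory {
  id: number;
  name: string;
}

export interface Product {
  id: number;
  name: string;
  size: ProductSize;
  category: ProductCategory;
}

// A helper of the kind "ProductMapping.ts" would host: it turns a
// denormalized product into normalized references (entity id + category id).
export function normalizeProduct(product: Product): { productId: number; categoryId: number } {
  return { productId: product.id, categoryId: product.category.id };
}
```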

At the end of the day, the most important detail is to stay coherent. The overall goal is to find where a type is without searching for a long time.

TypeScript and React to Animate a Custom HTML Bar Graph

In my work at Netflix, I had to create a page to visualize when a cache (also known as an appliance) can receive new movies. The former Partner Portal had a table with one row per cache for a specific organization. Every row had one column for every hour of the day (24 columns). The table can be very verbose depending on how structured an organization is in terms of sites and appliances. I opted for an improved visualization that removed the redundancy of the unused cells and leveraged React and animation to let the user follow the transformation when changing options like timezone and visualization preferences. The result is a less clunky user interface that displays four times more information in the same viewport.

Cache Fill Window

I decided to design a simplistic user interface. The top row of the section is what the user can configure for the chart. The first selector allows choosing from the limited set of timezones available from the organization's sites. The second selector is how caches are sorted. Underneath is the result. You can see the actual time with the “now” pin, and the hours in use are highlighted. A small detail is that most of the hours of the day fade away instead of having a dedicated column as in the previous design.

I opted to remove the redundancy of repeating the hours on every row. The original design had, on top of each appliance, a row with 24 columns, one for each hour of the day. It was taxing the screen with redundant information and taking space where pertinent information could be positioned. The change lightens the reading but also gives more room for useful data. To help the user have a clear idea of the time, I drop a line from the used hour down to the bottom of the page. Also, when the user hovers over a cache, a popover mentions the time frame.

In terms of navigability, it is possible to click on any cache to move to its detail page. The green handle shows the manifest cluster of the caches. It has multiple goals. The first is to show the name of the manifest cluster. The second is to indicate the cache's type (manifest cluster, global, flash). The third is for the future, when we add the feature allowing the user to alter the time for a manifest cluster: the green handle will become an actual drag-and-drop handle.

So, how was it built? All with TypeScript, React, and HTML/CSS. The first row of options has its own component, and the result also has its own component. Both live in the “FillWindows” component.

React Components for Fill Windows Feature

The menu is simple and has two functions in its props, which allow “Fillwindows.tsx” to know how to render the information. The “FillwindowsNowIndicator” is the pin that points to the current time. It receives the timezone as well as some details about the width of the diagram. Finally, depending on the organization, a different number of “FillwindowsCluster” components are generated. Information always travels from “Fillwindows.tsx”, which has the main DIV with an absolute position. The absolute position type is required to position every element with a left and top style, which will be animated later by setting the value and animating everything with a translation.

The “Fillwindows.tsx” renders the vertical bars as well, but this could be extracted into another component. The separation into an additional component is a future improvement I plan to execute. However, at the moment, it works flawlessly with “Fillwindows.tsx” knowing about the business model objects that hold the information about the caches. With that information, it is possible to figure out which hours to highlight and which ones to draw.

The graph is horizontally centered as well as absolutely positioned. Centering allows a variety of resolutions to have a similar look and feel. The configuration is written in the CSS file of the main component. The reason is that it will not change dynamically, hence it can be set outside React.

.FillWindows .timeline-container {
    position: absolute;
    left: 0;
    right: 0;
    margin-left: auto;
    margin-right: auto;
}
Most of the feat of this feature is getting the right position for every element. The calculation is performed at the “Fillwindows.tsx” render level, where the width and height of the whole graph can be calculated with the business models. From the main component, the right final position of every element can be passed down to all child components by properties. For example, “FillwindowsCluster.tsx” receives a top and a left position, and also receives the height of each appliance it must render.

const individualClusters: JSX.Element[] = [];
let top = tableHeader;
clusters.clusters.forEach((cluster: FillWindowsDataCluster) => {
    // Each cluster gets its computed top position and the per-appliance height.
    individualClusters.push(
        <FillWindowsCluster
            key={"fwc-" + cluster.clusterId}
            top={top}
            applianceHeight={applianceHeight}
            cluster={cluster}
        />);
    top += cluster.numberOfAppliances * applianceHeight + spaceBetweenCluster;
});

return <div style={{ width: totalClustersWidth }}>
    {individualClusters}
</div>;

The animation uses the React animation technique previously discussed. For each element, it sets the top and left of the final position, then rewinds to the previous position with a CSS transform that is released with a transition. It means “FillwindowsCluster.tsx” has in its class the two values as well as a reference to the DOM element, to modify the style dynamically.

private domCluster: HTMLDivElement | null = null;
private lastLeftPosition: number = 0;
private lastTopPosition: number = 0;

React's “componentWillReceiveProps” lifecycle function captures the current position.

public componentWillReceiveProps(nextProps: FillWindowsClusterProps): void {
    if (this.domCluster !== null) {
        const coord = this.domCluster.getBoundingClientRect();
        this.lastLeftPosition = coord.left;
        this.lastTopPosition = coord.top;
    }
}

And React's “componentDidUpdate” lifecycle function calculates the animation values, immediately moving the element back to its last position relative to the final position, then using requestAnimationFrame to tell the browser to remove the transformation (the delta between the last and current positions).

public componentDidUpdate(previousProps: FillWindowsClusterProps): void {
    if (this.domCluster !== null) {
        const coord = this.domCluster.getBoundingClientRect();
        const deltaPositionLeft = this.lastLeftPosition - coord.left;
        const deltaPositionTop = this.lastTopPosition - coord.top;
        // Jump back to the previous position instantly (no transition)
        this.domCluster.style.transform = `translate(${deltaPositionLeft}px, ${deltaPositionTop}px)`;
        this.domCluster.style.transition = "transform 0s";
        requestAnimationFrame(() => {
            if (this.domCluster !== null) {
                // Release the transform so the element animates to its final position
                this.domCluster.style.opacity = "1";
                this.domCluster.style.transform = "";
                this.domCluster.style.transition = "transform 850ms, opacity 800ms";
            }
        });
    }
}

If this is not clear, please refer to the link previously mentioned in this article.

The redesign of the feature could have been the same old static grid with a huge repetition of “hours” and several empty cells. However, with a little bit of design, something more refined was born. The touch of animation improves the user experience while adding a finishing touch.

My Second TypeScript Book is Available

You read that right! My second TypeScript book! I published my first book, entitled Holistic TypeScript, in April this year and wrote about my motivation in a blog post. This time, the publisher Packt reached out for a quick start guide for TypeScript 3.0.

I jumped at the opportunity to cover all the features in a simpler way than my original book. While the former book is up to date with all of TypeScript's features in great depth, the latter aims to introduce all the powerful features of the language.

The book is available on Packt website and on Amazon.

In this book you will learn how to:

  • Set up the environment quickly to get started with TypeScript
  • Configure TypeScript with essential configurations that run along your code
  • Structure the code using Types and Interfaces to create objects
  • Demonstrate how to create object-oriented code with TypeScript
  • Abstract code with generics to make the code more reusable
  • Transform the actual JavaScript code to be compatible with TypeScript

I encourage you to read TypeScript 3.0 Quick Start Guide if you are new to TypeScript or if you want to share your love of TypeScript with a friend who is not yet leveraging the power of types for web development. For those at an intermediate level and higher, Holistic TypeScript remains the best choice even if it covers only up to 2.8. I'll update Holistic TypeScript every year with new TypeScript features.

Detecting a New Version of your React/Redux App in 15 minutes

While I am well aware of the feature of notifying the user that a new version is ready to be served, I had never implemented a solution myself. Gmail and many websites do it to let the user refresh the page, which brings in the new JavaScript, CSS, and other assets. It is a great way to encourage users to refresh the page once in a while. With web applications, it happens that some users do not refresh very often. Myself, I rarely refresh my Gmail webpage until I see the notification.

This week, someone raised the point that it would be interesting to have this feature in the application I am working on at Netflix. I wished I had time to do it, and instead of procrastinating, I decided to give it a try. The idea was to implement it as cheaply as possible, since I have a backlog that is gargantuan.

If you recall, a few days ago, I discussed how to burn the Git hash into your React application. The article described how to inject the Git hash of the actual build into an environment variable. It is done at compilation time, and React has a way to read the information from JavaScript. In my case, the version is stored in an environment variable named “REACT_APP_VERSION.” It is accessible at runtime. That being said, the version remains on the client browser until the user fetches the new JavaScript file that has a new value, which happens only on a new build. The idea is to leverage the fact that the client keeps the version until a refresh: checking once in a while for a version change is a matter of comparing the Git hash on the client machine to the actual head of the repository.

The idea can be broken down into a few steps:

  1. On build, burn the Git Hash into the JavaScript
  2. On build, generate a file with the Git Hash
  3. Fetch, once in a while, a file from the server that has the latest Git Hash
  4. If the Git Hash from the JavaScript environment variable is different from the file on the server, show a message to the user to refresh
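The comparison in step 4 boils down to a tiny pure function; here is a sketch with hypothetical names:

```typescript
// Compare the version burned into the bundle with the one served by the server.
// The trim() handles the trailing newline that "echo" adds to version.json.
export function isNewVersionAvailable(builtVersion: string | undefined, serverVersion: string): boolean {
  const current = builtVersion === undefined ? "dev" : builtVersion;
  return serverVersion.trim() !== current;
}
```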

The first step has already been discussed, but in short, it consists of changing the “npm run build” script to take the head Git hash and pass it down to React.

 "build": "REACT_APP_VERSION=$(git rev-parse --short HEAD) react-scripts-ts build"

The second step requires a change to the build system. At Netflix, we are using Jenkins. I added a Bash script step that looks like the following:

CommitHash=$(git rev-parse --short HEAD)
echo "HASH"
echo $CommitHash
echo $CommitHash > "public/version.json"
cat public/version.json

Step number three is open to how you want to handle it. I personally want to avoid setTimeout. I'd rather have a Redux middleware that checks once in a while. The middleware holds the time of the last check and, on every action, verifies whether the elapsed time exceeds the threshold you set. I have mine at 5 minutes. Every 5 minutes, or more depending on when the user performs an action, it fetches the static file. As you can see in the previous step, the Git hash is stored in a file under the “public” folder, which is accessible with a simple Ajax call. If the string is different from the JavaScript environment variable, an action is dispatched that shows a message to the user to refresh the page.

const diff = this.currentTimeSinceEpoch() - this.lastCheck;
if (diff > CHECK_THRESHOLD_MS) {
    this.lastCheck = this.currentTimeSinceEpoch();
    (async () => {
        const currentVersion = process.env.REACT_APP_VERSION === undefined ? "dev" : process.env.REACT_APP_VERSION;
        try {
            const file = await AjaxLibraryThatYouUse<string>({
                request: {
                    url: "/version.json"
                }
            });
            const newVersion = file.result;
            if (newVersion !== currentVersion) {
                next(SharedActions.actionInfoMessage(localize(globalResources.new_version_available)));
            }
        } catch (error) {
            // Track error here
        }
    })();
}

The solution took less time than writing this article! It might not be a push notification, nor a complete REST API. It is not the most elegant because we fetch every 5 minutes, but at the end of the day, a new feature was born in 15 minutes of work. Now, I'm back to my backlog!

Handling Unexpected Errors with React and Redux

Regardless of how professional a developer you are, there will always be an exception that slips between your fingers, or should I say between your lines of code. These errors that bubble up the stack are called unexpected errors. While it is always preferable to catch an error as close as possible to where the exception is thrown, it is still better to catch it at a high level than not at all. In the web world, it gives the opportunity to show a graceful, generic message to the user. It also provides an open door to collect the unexpected errors and act before any of your customers reach you.

There are three places where you need to handle unexpected errors in a stack using React and Redux. The first one is at the React level. An unexpected error can occur in a render method, for example. The second level is during the mapping between Redux and React. The error occurs when we move data from Redux's store to the React properties of the connected component. The third level is an error in the chain of middlewares. The last one will bubble up through the stack of middlewares and explode where the action was dispatched. Let's see how we can handle these three cases in your application.

React Unhandled Exception

Since version 16, React has simplified the capture of errors by introducing the lifecycle function “componentDidCatch.” It is a function like “render” or “shouldComponentUpdate” that comes with the framework. The “componentDidCatch” is triggered when an exception is thrown in any child of the component. The detail of what it covers is crucial. You must have a component that englobes most of your application. If you are using React-Router and would like to keep the web application at the same URL and have the menus stay in place, this can be tricky. The idea is to create a new component with the sole purpose of wrapping all top route components. Having a single component to handle the unexpected errors is interesting. It is simple and easy to test, with a cohesive and single task.

export interface ErrorBoundaryStates {
  hasError: boolean;
}

export class ErrorBoundary extends React.Component<ErrorBoundaryProps, ErrorBoundaryStates> {
  constructor(props: ErrorBoundaryProps) {
    super(props);
    this.state = { hasError: false };
  }

  public componentDidCatch(error: Error, errorInfo: ErrorInfo): void {
    this.setState({ hasError: true });
    YourLoggingSystem.track(error, "ErrorBoundary", { errorInfo: errorInfo });
  }

  public render() {
    if (this.state.hasError) {
      return <div className="ErrorBoundary">The application crashed. Sorry!</div>;
    }
    return this.props.children;
  }
}

However, with React-Router, every route is assigned as a property. The property is an excellent opportunity to create a function that returns the React class.


// Constructor:
this.page1Component = withErrorBoundary()(Page1Component);
// Render:
<Route path={Routes.PAGE_1} component={this.page1Component} />

export const withErrorBoundary = () => <P extends object>(Component: React.ComponentType<P>) =>
  class WithErrorBoundary extends React.Component<P> {
    render() {
      return <ErrorBoundary><Component {...this.props} /></ErrorBoundary>;
    }
  };

Redux Mapping Unhandled Exception

This section will be short because it is covered by the React section. However, I want to clarify that this can be tricky if you are not doing it exactly like the pattern I described. For instance, if you wrap with “withErrorBoundary” not at the initialization of the route but directly when you connect, it will not work. For example, the code below does not work as you might expect. The reason is that the error boundary is bound to the component but not to the code executed by React-Connect.

export default connect<ModelPage1, DispatchPage1, {}, {}, AppReduxState>(
    (s) => orgMappers.mapStateToProps(s),
    (s) => orgMappers.mapDispatchToProps(s)
)(withErrorBoundary()(Page1Component));

Outside of the initial solution proposed, it is also valid to wrap the “connect” to get the desired effect of receiving the error in the “componentDidCatch” of the “ErrorBoundary”. I prefer the former solution because it does not coerce the ErrorBoundary with the component forever.

export default withErrorBoundary()(connect<ModelPage1, DispatchPage1, {}, {}, AppReduxState>(
    (s) => orgMappers.mapStateToProps(s),
    (s) => orgMappers.mapDispatchToProps(s)
)(Page1Component));

Redux Middleware Unhandled Exception

The last portion of the code that needs a catch-all is the middleware. The solution goes with Redux's middleware concept, which is to leverage functions that call each other. The idea is to have one of the first middlewares be a big try-catch.

const appliedMiddleware = applyMiddleware(/* the try-catch middleware first, then the rest */);

// Excerpt of the middleware:
return (api: MiddlewareAPI<Dispatch, AppReduxState>) =>
       (next: Dispatch) =>
       (action: Actions): any => {
            try {
                return next(action);
            } catch (error) {
                YourLoggingSystem.track(error, "Unhandled Exception in Middleware");
                return next(/* Insert here an action that will render something in the UI to indicate to the user that an error occurred */);
            }
        };


Handling errors in a React and Redux world requires code to be written in a particular way. To this day, the documentation is not very clear, mostly because there is a clear separation between React, Redux, Redux-Connect, and React-Router. While it is very powerful to have each element of the puzzle separated, this comes with the price that the integration is a gray area. Hopefully, this article uncovers some of the mystery around how to collect unhandled errors and removes confusion around the particular case of why a mapping error can throw through the React mechanism when the boundary is not positioned at the right place.

Burning the last Git commit into your telemetry/log

I enjoy knowing exactly what happens in the systems that I actively work on and need to maintain. One way to ease the process is to know precisely the version of the system when an error occurs. There are many ways to proceed, like having a sequentially increasing number, or having a version number (major, minor, patch). I found that the easiest way is to leverage the Git hash. The reason is that not only does it point me to a unique place in the life of the code, but it also removes all the manual incrementation that a version number requires, or the need to use or build something to increment a number for me.

The problem with the Git hash is that you cannot generate it locally for the running code. The reason is that every change you make must be committed and pushed; hence the local hash would always be at least one commit behind. The idea is to inject the hash at build time in the continuous integration (CI) pipeline. This way, the CI is always running on the latest code (or a specific branch), knows what code is being compiled, and thus can inject the hash without having to save anything.

At the moment, I am working with Jenkins and React using react-scripts-ts. I only had to change the build command to inject a Git command's output into a React environment variable.

"build": "REACT_APP_VERSION=$(git rev-parse --short HEAD) react-scripts-ts build",

In the code, I can get the version by using the process environment.

const applicationVersion = process.env.REACT_APP_VERSION;

The code is minimal and leverages the Git system and an environment variable that can be read inside a React application easily. There is no mechanism to maintain, and the hash is a source of truth. When a bug occurs, it is easy to set up the development environment at the exact commit and use the rest of the logs to find out how the user reached the exception.

Google Analytics with React and Redux

I had to integrate Google Analytics into one of our websites at Netflix. It had been a while since I last used Google Analytics, and that last time consisted of simply copy-pasting the code snippet provided by Google when creating the Analytics “provider” account. That was a few years ago, and the website was not a single-page application (SPA). Furthermore, the application is using Create React App (TypeScript version) with Redux. I took a quick look and found a few examples on the web, but I wasn't satisfied. The reason is that all the examples I found were hooking Google Analytics in at the component level. I despise having anything in the user interface (UI), React, that is not related to the UI.

The first step is to use a library instead of dropping the JavaScript directly into the code.

npm install --save react-ga

The next step is to configure the library to set the unique identifier provided by Google. I am using the create-react-app scaffolding, and I found the best place to initialize Google Analytics to be the constructor of the App.tsx file. It is a single call that needs to be executed once for the life of the SPA.

class App extends React.Component {

  public constructor(props: {}) {
    super(props);
    ReactGA.initialize(process.env.NODE_ENV === Environment.Production ? "UA-1111111-3" : "UA-1111111-4");
  }

  public render(): JSX.Element {
    return <Provider store={store}>
      <ConnectedRouter history={history}>
        <AppRouted />
      </ConnectedRouter>
    </Provider>;
  }
}

export default App;

The last step is to have the page change reported when the route changes. React-Router is mainly configured in React, but I didn't want any more ReactGA code in React. The application I am working on uses Redux, and I have a middleware that handles routes. At the moment, it checks whether the route changed and analyzes the URL to start fetching data from the backend.

  return (api: MiddlewareAPI<AppReduxState>) =>
            (next: Dispatch<AppReduxState>) =>
                <A extends Action>(action: A): A => {
                    // Logic here that checks for action.type === LOCATION_CHANGE
                    // to fetch the proper data
                    // ...
                    // If action.type === LOCATION_CHANGE, we also call
                    // ReactGA.pageview(...) with the pathname provided by
                    // react-router-redux
                    return next(action);
                };

The previous code is clean. Indeed, I would rather not have anything inside React, but App.tsx is the entry point, and the initialize function injects Google's code into the DOM. The Redux solution works well because react-router-redux provides the pathname, which is the URL. By using the function "pageview", we manually send a page change to Google Analytics.

Improving the User Experience of a Complex Form with Animation

I am working at Netflix on one of our websites dedicated to our partners around the world, where they get information as well as perform actions on their caches (Netflix's CDN). One form allows configuring the BGP configuration. It was present in the legacy portal, and I found it complex to understand; while many users are well aware of how BGP works, some other partners have less knowledge. I am a strong believer that a user interface must guide the user, to avoid them entering bad information and then being alarmed by a red error message. My philosophy is to guide the user during data input and to make the experience enjoyable, without fear of a bad action or wrong input.

While I was learning how Netflix's caches work and how BGP is supposed to be configured, most people explaining it to me were drawing a diagram. Since the natural "human to human" explanation was to use simple geometric shapes to convey the concept, I decided not to fight that natural behavior and to embrace it.

The first step was to produce a simplified version of the different kinds of sketches I received and to generalize the idea across the several states a BGP configuration can be in. The configuration in the old system was two forms, one for IPv4 and one for IPv6, which required the user to keep a mental picture of the configuration that was not displayed. I decided to combine the two to avoid the user having to open two browser windows (or tabs). I also wanted to avoid a completely new form per use case. For example, a BGP configuration requires hops between the gateway and the peer when the gateway is not on the same subnet as the peer. The peer IP is configurable, hence can change, which may or may not make the hop count required at any given moment.

BGP Configuration with IPv4 and IPv6

The screenshot above shows a configuration. On the left is an OCA, for "Open Connect Appliance", which is the cache that holds all the movies. On the right is the peer. The gateway in both the IPv4 and IPv6 sections is on the same subnet, hence no additional inputs need to be filled. The diagram has all the IPv4 inputs at the top and the IPv6 inputs at the bottom. After one or two usages, it becomes easy to see which inputs are for which internet protocol, as well as for which machine (cache or peer).

Another detail is the highlight of which input or piece of information belongs to which part of the graphic. When the mouse hovers over any portion of the user interface, you can see the relevant internet protocol being highlighted. It guides the user into knowing which element is getting changed.

Hover highlight the impact of a change

Another detail shown in the previous image is that when you activate a section, inline help appears: a circle with a question mark. Help that does not clutter the interface and appears at the right moment allows the user to get additional information about the values. After collecting telemetry on these inline helps, I can confirm that the idea is a success. People are reading them!

Inline help that appears only when relevant

You may notice, in the first screenshot, that the reset and submit buttons are disabled. The inline help next to the buttons explains the reason to the user. When the user interacts with the form, the buttons change state, and so does the inline help. The help is dynamic everywhere. It means a lot more work in terms of development, because the messages need to be smarter, but it also means that the user does not fall into the trap of a generic message that does not help: every message is right for the user's scenario.
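To illustrate the idea of dynamic help, here is a minimal sketch; the FormState shape and the submitHelp function are hypothetical, not the actual code of the form:

```typescript
// Hypothetical sketch: the inline help next to the submit button is computed
// from the current form state, so the message always matches the scenario.
interface FormState {
  isDirty: boolean;   // the user changed at least one field
  errors: string[];   // validation errors still present
}

function submitHelp(state: FormState): string {
  if (!state.isDirty) {
    return "The form has not been modified; there is nothing to submit.";
  }
  if (state.errors.length > 0) {
    return `Fix these issues before submitting: ${state.errors.join(", ")}`;
  }
  return "Ready to submit.";
}
```

The extra development cost is in functions like this one: each message is derived from state instead of being a hard-coded generic string.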

The gateway appears when the IPs are not on the same subnet; the gateways merge when the IPv4 and IPv6 are in the same situation

In the last animation, we can see a more advanced scenario where the user interface guides the user by showing an additional field that must be entered: the hop count. The field has some business logic that requires it to be between 1 and 16, and the interface adapts to show the input as well as where the information belongs: the hops are between the gateway and the peer. Also, you can see the gateway IP moving to the gateway, which is no longer the same as the peer IP. Suddenly, anyone using the form sees that the gateway is an entity separate from the peer, and that it has an IP that cannot be changed, while the peer IP can.
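The subnet rule described above can be sketched as a small predicate. The names and shapes below are hypothetical and only meant to illustrate the business logic:

```typescript
// Hypothetical sketch of the rule: the hop count input only exists (and must
// be between 1 and 16) when the gateway is not on the same subnet as the peer.
interface GatewayConfig {
  sameSubnetAsPeer: boolean;
  hops?: number; // only meaningful when sameSubnetAsPeer is false
}

function isHopCountValid(config: GatewayConfig): boolean {
  if (config.sameSubnetAsPeer) {
    // The hops input is not shown in this case, so no value is expected
    return config.hops === undefined;
  }
  return config.hops !== undefined && config.hops >= 1 && config.hops <= 16;
}
```

Because the peer IP can change at any time, this predicate is re-evaluated on every input, which is what drives the hop field appearing and disappearing.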

You may wonder how it was built. It was built using React, TypeScript, and plain HTML. No SVG, no canvas. Working with SVG is trendy, but it is overly complex for even basic styling, such as adding a drop shadow. It is also more complex to mix with input fields. Using DIVs and inputs did the job perfectly.

I already wrote about how to handle animation with React, and the exact same technique is used here. CSS3 animations drive the movement. Many scenarios require parts to move, and every dance is orchestrated by the main form component and children components that add styles and CSS classes depending on each property that describes the business-logic rules.
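As a hedged illustration of that orchestration, a child component can derive its CSS classes from business-logic properties and let CSS transitions animate the change. The class names and props shape below are made up for the example:

```typescript
// Hypothetical sketch: the parent form passes business-logic properties down,
// the child maps them to CSS classes, and CSS3 transitions animate the change.
interface GatewayViewProps {
  sameSubnetAsPeer: boolean; // drives the "merged" vs "detached" appearance
  highlighted: boolean;      // the mouse is hovering the related section
}

function gatewayCssClasses(props: GatewayViewProps): string {
  const classes = ["gateway"];
  classes.push(props.sameSubnetAsPeer ? "gateway-merged" : "gateway-detached");
  if (props.highlighted) {
    classes.push("gateway-highlight");
  }
  return classes.join(" ");
}
```

The component only swaps class names; the stylesheet owns the transition, so the React code stays free of animation timing details.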

The graphic is wide and could be a problem at small resolutions. I decided to fall back to a simple, basic form with labels on top of the inputs. Nothing flashy, but enough to let someone on a small tablet or a phone configure the BGP.

To conclude, the animation is a step toward a more complex and fancy vision I had. As I wrote in the performance article, I never had a dedicated task (or time) reserved to build this feature with animation; I just had to do it. The more I work in software engineering, the more I realize that it is very rare to have time for extras. This unfortunate rule applies to user interface extras, to performance tweaks, and even to automated tests. You must find the time, squeeze these little extras between bigger initiatives, and always keep some buffer for issues, which can be spent on polishing if everything goes smoothly. In the next bug fix, I plan to improve the colors and how smoothly the animations run on slower computers, but that is a story for another time.