What you might learn about your app by delaying or throttling API responses.

Mark Heyermann 22 March 2020

So, you created a frontend or web application. You spent a lot of time choosing a framework, stuck to the documentation, and followed the best practices and do’s and don'ts out there. Now you want to put it through some real-world test scenarios before letting it go into production, and that is how you arrived at the idea, or the need, to test something related to the network connection.

Or you are already running an app in production and from time to time you are getting user errors or reports that seem to be related to poor network connections with low bandwidth or high latency.

Tl;dr: Delay or throttle your API requests to simulate production environments and gain new insight about possible issues in your app.

As a software development company focused on web and mobile, we are doing a lot of JavaScript development for the front- and backend. Therefore this post is focused on SPA development.

We want to share one kind of test we usually use throughout the development process when building web or mobile apps. Of course, this is not only a test you can run on existing apps or codebases; it is a scenario you can and should think of every time you implement an API interaction.

Photo by Icons8 Team on Unsplash

The illusion of perfect network connections

When a Single Page Application is built, this usually happens on the local machine of a developer. No matter if Angular, Vue or React, the developer will run a local build job and something will serve the application locally. In addition, when the SPA connects to an API, it usually connects to an API on a development server with a very stable and fast network connection, for example a development API on the same machine.

This stable and predictable network setup is great because it does not get in a developer's way while coding, but it can also mislead in various ways. It might make you forget that in the real world of end-user connectivity:

  • The API is not always available for the web or mobile app
  • Not every response arrives at the frontend app within 200 ms
  • The responses do not always arrive in the order of the requests
  • Not every request is successful
  • The network connection is not stable or predictable in general

So during development, we all are usually coding in our little cozy bubble of perfect network connections, meaning we have low latency, high bandwidth and perfect reliability. That is good, because it allows us to focus on what we are building without distractions. But for a web or mobile app in real life, things look a little different. Bad or no network connectivity at all is only one of the factors that will hit an app really hard when it has not been prepared to properly run out in the wild. Of course, those differences are usually bigger for mobile apps or web apps on mobile devices than they are for desktop use cases.

So let’s look into how we can get at least one step closer to simulating a real-world scenario for our apps during development.

Simulating a real world scenario

Let’s visualize what difference we actually want to reproduce while developing and testing an application. The scenario during development for multiple requests in parallel looks like this.

Timeline visualised

The only deviation might be a tiny difference in response times, amounting to a few milliseconds in total. But some things will nearly always hold true when running this in a development environment.

  • All requests will finish successfully (at least on the network layer)
  • The orders request will finish before the customers request
  • The orders and customers requests will finish before the user starts interacting with the app UI
  • Autocomplete requests will finish quickly and not overlap

But in production, this is what might actually happen:

Timeline visualised

Here we see an entirely different timeline. The main differences are:

  • The orders request finishes after the customers request, even though it was started earlier
  • The orders and customers requests are still pending when user interaction starts
  • The orders request overlaps with an autocomplete request
  • Autocomplete requests overlap each other
  • Everything takes longer
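
The inverted arrival order above is easy to reproduce locally. Here is a minimal sketch: two fake requests are started in the order shown in the diagram, but the simulated network delays make the responses arrive in the opposite order. The request names and delays are made up for illustration.

```javascript
// Simulate a response arriving after a given network delay.
function fakeRequest(name, delayMs) {
  return new Promise((resolve) => setTimeout(() => resolve(name), delayMs));
}

async function runTimeline() {
  const finished = [];
  // orders is started first, but gets a longer simulated delay
  await Promise.all([
    fakeRequest('orders', 300).then((name) => finished.push(name)),
    fakeRequest('customers', 100).then((name) => finished.push(name)),
  ]);
  return finished; // arrival order, not start order
}
```

Running `runTimeline()` resolves to `['customers', 'orders']`, which is exactly the inversion the production diagram shows.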

Of course, even this scenario is by far not the most complex one. Requests might completely fail or time out when using a web or mobile application in the real world.

What you might learn about your application

When switching from a perfect environment to a more unstable one in your web or mobile app development process, you will most likely notice that you did not code as defensively as necessary. Of course, defensive coding in general and sticking to the best practices and standard patterns of asynchronous operations can prevent a lot of those issues. But in a complex application, with many business cases using the same API endpoints in different combinations depending on how the user interacts with the UI, it is very likely that some scenarios go unnoticed.

So what you might learn by throttling or delaying the API is:

  • Not all of your loading indicators work as perfectly as you thought
  • You have unexpected side effects when requests overlap or take a very long time
  • Your state management (especially reducers) needs some improvements to be ready for every possible scenario
  • You might need to improve your UI and UX for those scenarios

Loading Indicators

When requests always finish in the same order and within the same timeframe, it is easy to manage loading indicators. But if there are no rules anymore and anything can happen, they usually require a little more effort to work really well. One design decision is whether you want a single loading indicator showing the combined loading state of all pending requests, or multiple indicators showing each entity’s loading status explicitly.
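
The combined variant can be sketched with a simple pending-request counter. This is an illustration, not tied to any specific framework; the function names are made up.

```javascript
// Track how many requests are in flight; show the spinner while > 0.
function createLoadingTracker() {
  let pending = 0;
  return {
    start() { pending += 1; },
    finish() { pending = Math.max(0, pending - 1); },
    get isLoading() { return pending > 0; },
  };
}

// Wrap fetch so every request updates the shared tracker,
// even when the request fails.
async function trackedFetch(tracker, url, options) {
  tracker.start();
  try {
    return await fetch(url, options);
  } finally {
    tracker.finish();
  }
}
```

The `finally` block is the important part: a delayed or failed response must still decrement the counter, otherwise the spinner never disappears.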

Side Effects

For some requests you want to ensure that you always cancel them when they are not relevant anymore. For example when the user leaves the page that triggered the request in the first place. Others you want to actively keep running in the background no matter what happens with the UI.

When delaying or throttling the API you are testing if all of those possible overlapping request scenarios are well implemented and either cancel requests or ensure the app works properly with multiple requests happening in parallel.
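
For the cancellation side, AbortController is the standard mechanism in browsers and Node. A minimal sketch, with made-up helper names; `delay` stands in for any signal-aware async step such as a fetch call:

```javascript
// A cancellable wait: rejects with AbortError when the signal fires.
function delay(ms, signal) {
  return new Promise((resolve, reject) => {
    const timer = setTimeout(resolve, ms);
    signal?.addEventListener('abort', () => {
      clearTimeout(timer);
      reject(new DOMException('Aborted', 'AbortError'));
    });
  });
}

// Run a task with a signal and expose a cancel handle. A cancelled
// task resolves to null instead of surfacing an error to the UI.
function startCancellable(task) {
  const controller = new AbortController();
  const promise = task(controller.signal).catch((err) => {
    if (err.name === 'AbortError') return null; // cancelled, not a failure
    throw err;
  });
  return { promise, cancel: () => controller.abort() };
}
```

In a real app you would call `cancel()` from, for example, a route-change or component-unmount hook, and pass the same signal to `fetch(url, { signal })`.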

State Management

If you use a state management library like NGRX, Redux or Vuex in your Single Page Application, you will usually have to decide between two options in your reducers: overwriting state data with response data or merging response data into state data.

Responses arriving in an unusual or unexpected order might uncover where you have to reconsider or improve how you put or merge data from responses into the application state.
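
One common fix is to tag each request with an increasing id and let the reducer drop responses that belong to an outdated request. The following Redux-style reducer is a sketch; the action shapes are made up for illustration:

```javascript
// Only the response matching the most recent request may write the state.
function ordersReducer(state = { requestId: 0, data: [] }, action) {
  switch (action.type) {
    case 'orders/requestStarted':
      return { ...state, requestId: action.requestId };
    case 'orders/responseArrived':
      // A response from an older request would overwrite newer data,
      // so we ignore it instead of merging it in.
      if (action.requestId !== state.requestId) return state;
      return { ...state, data: action.payload };
    default:
      return state;
  }
}
```

With this guard, the "orders finishes after customers" timeline from above can no longer put stale data into the store.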

UX

When experiencing an app like a user will (with delays and a little touch of unreliability in the network) you might learn where you want to put more effort into the UI or UX of your app. This could mean improving loading state handling in components, switching to a more or less optimistic UI or other design decisions like disabling UI elements while in loading state.

If you want more chaos

If delaying or throttling is not enough for you and you want the real user experience, including the edge cases some users might hit, you need to go one step further into chaos mode. And you are right to test that, at least from our perspective.

Imagine your request/response timeline looks like the one shown below: will your frontend application still do something that makes sense to the user?

Timeline visualised

Apply delays or throttling randomly

There is still a lot of consistency in a delay if you apply the same delay to all API requests. If every response is delayed by 2 seconds, the overall order of events still stays the same and everything just becomes a bit slower. If you want more chaos you should go with a random delay or random throttling so that some responses arrive within the usual time but others need an unusual amount of time. With a random factor like that you will detect even more side effects and possible bugs in your frontend.
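
A random delay is easy to add in a development server. Here is a sketch as an Express-style middleware; the route path in the usage comment is an assumption about your setup:

```javascript
// Delay every response by a random amount between minMs and maxMs.
// Intended for development servers only.
function randomDelay(minMs = 0, maxMs = 2000) {
  return (req, res, next) => {
    const delayMs = minMs + Math.random() * (maxMs - minMs);
    setTimeout(next, delayMs);
  };
}

// Example: app.use('/api', randomDelay(100, 3000));
```

Because each request draws its own delay, responses that used to arrive in a fixed order now race each other, which is exactly the chaos we want.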

Cancel or timeout requests randomly

One more option is to add logic that randomly cancels requests as if the connection failed or was interrupted, for example by randomly sending a 503 Service Unavailable response as if the server were down, or by randomly letting requests time out.
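
The 503 variant fits the same middleware pattern. Again a sketch with an assumed Express-style signature, not a drop-in for any particular server:

```javascript
// Randomly answer with 503 Service Unavailable instead of passing the
// request on, simulating an unreliable backend. failureRate is the
// probability (0..1) that a given request fails.
function chaos(failureRate = 0.1) {
  return (req, res, next) => {
    if (Math.random() < failureRate) {
      res.statusCode = 503;
      res.end('Service Unavailable');
      return;
    }
    next();
  };
}
```

Start with a low failure rate; even a few percent of failing requests will quickly show whether your error handling and retry logic hold up.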

How to throttle an API

If you are convinced by now that you should try this out and test your web or mobile app with delayed API responses, but you need a starting point on how to do that, here are two options that should get you going. Of course there are a lot of others, based on whatever tools or languages you prefer.

Throttling and delaying via browser (Chrome)

The Chrome Developer Tools come with an option to simulate slower network connections. This is an easy and well-integrated way to quickly throttle the network speed for your web app. You can also achieve a delay by creating a custom throttling profile with very high latency. If you want all requests to be delayed by 2 seconds, you could ‘Add custom profile’, leave download and upload speed empty, and set latency to 2000.

Now, every request will be delayed by 2 seconds because of the simulated latency of 2000 ms.

If you want more than that, like canceling requests, delaying specific endpoints only, applying different delays, or adding randomness, you will have to choose another method.

Throttling and delaying via proxy

There are several proxy implementations where you can apply pretty much whatever logic you need, because the proxy software offers hooks for that. node-http-proxy is one of them. Its documentation offers an example of how to set up a proxy with latency.
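
A sketch along the lines of the node-http-proxy latency example: every request is held back before being forwarded to the real API. The ports, target URL and delay are placeholders for your own setup, and `http-proxy` must be installed from npm.

```javascript
const http = require('http');
const httpProxy = require('http-proxy');

const proxy = httpProxy.createProxyServer({});

// Point your app at localhost:8008 instead of the real API on 9008.
http.createServer((req, res) => {
  const delayMs = Math.random() * 2000; // random delay per request
  setTimeout(() => {
    proxy.web(req, res, { target: 'http://localhost:9008' });
  }, delayMs);
}).listen(8008);
```

Because the delay is applied per request in plain JavaScript, this is also the natural place to add the random 503s or dropped requests described above.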


Written by

Mark Heyermann

Founder and managing director of RKNN GmbH. Consultant and developer in client projects. Head of Product for the RKNN products.