What you might learn about your app by delaying or throttling API responses.
So, you created a frontend or web application. You spent a lot of time choosing a framework, learned the dos and don'ts, and picked up every best practice you could find. Now you want to put it through some real-world test scenarios before letting it go into production, and that is how you came up with the idea, or the need, to test something related to the network connection.
Or you are already running an app in production and from time to time you are getting user errors or reports that seem to be related to poor network connections with low bandwidth or high latency.
Delay or throttle your API requests to simulate production environments and gain new insight about possible issues in your app.
We want to share one kind of test we usually use throughout the development process when building web or mobile apps. It is not only a test you can run on existing apps or codebases; it is a scenario you can and should think of every time you implement an API interaction.
Did you fall for the illusion of a perfect network connection?
When a Single Page Application is built, this usually happens on a developer's local machine. No matter whether it is Angular, Vue or React, the developer runs a local build job and something serves the application locally. In addition, when the SPA connects to an API, it usually connects to one on a development server with a very stable and fast network connection, for example a development API on the same machine.
This stable and predictable network setup is great because it keeps the network out of a developer's way while coding, but it can also mislead in various ways. Before we look at how to get at least one step closer to a real-world scenario during development, it helps to spell out what a perfect local setup makes you forget about end-user connectivity:
- The API is not always available for the web or mobile app
- Not every response arrives at the frontend app within 200 ms
- The responses do not always arrive in the order of the requests
- Not every request is successful
- The network connection is not stable or predictable in general
So during development, we usually code in our cozy little bubble of perfect network connections: low latency, high bandwidth and perfect reliability. That is good, because it allows us to focus on what we are building without distractions. But for a web or mobile app in real life, things look a little different, and there is no advantage in hiding from the complexities of the outside world. Bad network connectivity, or no connectivity at all, is only one of the factors that will hit an app hard when it has not been prepared to run out in the wild. Of course, these differences usually matter more for mobile apps, or web apps on mobile devices, than for desktop use cases.
Simulating a real-world scenario
Let’s visualize what difference we actually want to reproduce while developing and testing an application. The scenario during development for multiple requests in parallel looks like this.
The only deviation might be a tiny difference in response times, amounting to a few milliseconds in total. But some things will nearly always stay the same when running this app in a development environment.
- All requests finish successfully (at least on the network layer)
- The orders request finishes before the customers request
- The orders and customers requests both finish before the user starts interacting with the app UI
- Autocomplete requests finish quickly and do not overlap
But in production, this is what might actually happen.
Here we see an entirely different timeline.
The main differences are:
- The orders request finishes after the customers request even though it was started earlier
- The orders and customers requests are still pending when user interaction has already started
- The orders request overlaps with an autocomplete request
- Autocomplete requests overlap with each other
- Everything takes longer
Of course, even this scenario is by far not the most complex one. Requests might completely fail or timeout when using a web or mobile application in the real world.
What you might learn about your application
When switching from a perfect environment to a more unstable one in your web or mobile app development process, you will most likely notice that you did not code as defensively as necessary. Of course, defensive coding in general, and sticking with best practices and standard patterns for asynchronous operations, can prevent many of these issues. But in a complex application, with many business cases using the same API endpoints in different combinations depending on how the user interacts with the UI, it is very likely that some scenarios go unnoticed.
So what you might learn by throttling or delaying the API is:
- Not all of your loading indicators work as well as you thought
- You have unexpected side effects when requests overlap or take a very long time
- Your state management (especially reducers) may need improvements to be ready for API-delay-related scenarios
- You might need to improve your UI and UX for those scenarios as well
When requests always finish in the same order and within the same timeframe, it is easy to manage loading indicators. But once there are no rules anymore and anything can happen, they usually require a little more effort to work really well. One design decision is whether you want a single loading indicator showing the combined loading state of all pending requests, or multiple indicators showing every entity’s loading status explicitly.
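The combined variant can be sketched with a simple pending-request counter. This is only an illustrative sketch, assuming promise-based API calls; the class name and its API are our own, not from any library:

```javascript
// Minimal sketch of a combined loading indicator based on a
// pending-request counter. Names are illustrative assumptions.
class LoadingTracker {
  constructor() {
    this.pending = 0;
  }

  // Wrap any promise-returning API call; the indicator stays on
  // as long as at least one wrapped request is still in flight.
  track(promise) {
    this.pending += 1;
    return promise.finally(() => {
      this.pending -= 1;
    });
  }

  get isLoading() {
    return this.pending > 0;
  }
}

// Usage: show a spinner in the UI while tracker.isLoading is true.
// tracker.track(fetch("/api/orders"));
// tracker.track(fetch("/api/customers"));
```

With per-entity indicators you would instead keep one such counter (or boolean) per entity, which is more code but lets the UI stay partially usable while only some data is still loading.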
For some requests, you want to ensure that they are being canceled when they are no longer relevant. For example when the user leaves the page that triggered the request in the first place. Others you want to actively keep running in the background no matter what happens with the UI as losing them may mean data loss or bad user experience.
When delaying or throttling the API you are testing whether all of those possible overlapping-request scenarios are well implemented and either cancel requests or ensure the app works properly with multiple requests happening in parallel.
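In a browser environment, cancellation is typically wired up with `AbortController`. Here is a minimal sketch; the function name is an illustrative assumption, and the `fetchFn` parameter only exists so the wrapper is not hard-wired to the global `fetch`:

```javascript
// Sketch: cancel an in-flight request when it is no longer relevant,
// e.g. when the user leaves the page that triggered it.
// Names are illustrative assumptions, not from a library.
function startCancellableRequest(url, fetchFn = fetch) {
  const controller = new AbortController();
  const request = fetchFn(url, { signal: controller.signal }).catch((err) => {
    if (err.name === "AbortError") return null; // expected on cancel
    throw err; // real network errors still surface
  });
  // Call cancel() from your route-leave / component-unmount hook.
  const cancel = () => controller.abort();
  return { request, cancel };
}
```

For requests that must survive navigation (e.g. saving user input), you would deliberately *not* wire `cancel` to the unmount hook.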
If you use a state management library like NGRX, Redux or Vuex in a Single Page Application you will usually have to decide between two options in your reducers: overwriting state data with response data or merging response data into state data.
Responses arriving in an unusual or unexpected order might uncover where you could reconsider a design decision or improve the way response data is handled or merged into the application state.
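The two reducer options can be sketched as plain Redux-style functions. The action type and state shape below are illustrative assumptions:

```javascript
// Sketch of the two reducer strategies for a "customers loaded" action.
// Action type and state shape are illustrative assumptions.

// Option 1: overwrite — the latest response wins, even if it is stale.
function customersOverwriteReducer(state = { byId: {} }, action) {
  if (action.type === "customers/loaded") {
    return { ...state, byId: action.payload };
  }
  return state;
}

// Option 2: merge — late responses add to, rather than replace, the state.
function customersMergeReducer(state = { byId: {} }, action) {
  if (action.type === "customers/loaded") {
    return { ...state, byId: { ...state.byId, ...action.payload } };
  }
  return state;
}
```

With delayed, out-of-order responses, the overwrite variant can silently drop data from an earlier (but later-arriving) response, which is exactly the kind of behavior this test surfaces.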
When experiencing an app like a user will (with delays and a little touch of unreliability in the network) you might learn where you want to put more effort into the UI or UX of your app. This could mean improving loading state handling in components, switching to a more or less optimistic UI or making other design decisions like disabling UI elements while in loading state.
For those who'd like a bit more chaos
If delaying or throttling is not enough for you and you want the real user experience, including the edge cases some users might hit, you need to go one step further into chaos mode. And you are right to test that, at least from our perspective.
Imagine your request/response timeline looks like the one shown below. Will your frontend application still do something that makes sense to the user?
Apply delays or throttling randomly
There is still a lot of consistency if you apply the same delay to all API requests: if every response is delayed by 2 seconds, the overall order of events stays the same and everything just becomes slower. If you want more chaos, go with a random delay or random throttling, so that some responses arrive within the usual time while others take an unusually long time. With a random factor like that, you will detect even more side effects and possible bugs in your frontend.
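A random delay can be sketched as a thin wrapper around `fetch`. The function names and the default range are illustrative assumptions:

```javascript
// Sketch: delay every response by a random amount of time.
// Names and the default range are illustrative assumptions.
function randomDelayMs(minMs, maxMs) {
  return minMs + Math.floor(Math.random() * (maxMs - minMs + 1));
}

function fetchWithRandomDelay(url, options, minMs = 0, maxMs = 3000) {
  const delay = randomDelayMs(minMs, maxMs);
  // Let the real request run, then hold the response back for `delay` ms.
  return fetch(url, options).then(
    (response) =>
      new Promise((resolve) => setTimeout(() => resolve(response), delay))
  );
}
```

Swapping this in for your app's API client during development is usually enough to reproduce the out-of-order timelines described above.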
Cancel or timeout requests randomly
One more option is to use some logic that randomly cancels requests as if the connection had failed or been interrupted, for example by randomly sending a 503 (Service Unavailable) response as if the server were down, or by randomly letting requests time out.
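One way to sketch this in the frontend itself, rather than in a proxy, is a fetch wrapper that sometimes short-circuits with a simulated 503. All names and the default failure rate are illustrative assumptions:

```javascript
// Sketch: randomly simulate failing requests.
// Names and the default failure rate are illustrative assumptions.
function shouldFail(failureRate) {
  return Math.random() < failureRate;
}

function chaoticFetch(url, options = {}, failureRate = 0.1) {
  if (shouldFail(failureRate)) {
    // Pretend the server was unavailable without hitting it at all.
    return Promise.resolve(
      new Response(null, { status: 503, statusText: "Service Unavailable" })
    );
  }
  return fetch(url, options);
}
```

Because the failure is injected on the client side, you can test error handling without touching the backend at all.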
How to throttle an API
If you are convinced by now that you should try this out and test your web or mobile app with throttled API responses, but you need a starting point on how to do that, here are two options that should get you going.
Throttling and delaying via browser (Chrome)
The Chrome Developer Tools come with an option to simulate slower network connections. This is an easy and well-integrated way to quickly throttle the network speed for your web app. You can also achieve a delay by creating a custom throttling profile with very high latency. If you want all requests to be delayed by 2 seconds, you can choose ‘Add custom profile’, leave download and upload speed empty, and set latency to 2000.
Now every request will be delayed by 2 seconds because of the simulated latency of 2000 ms.
If you want more than that, like canceling requests, delaying only specific endpoints, applying different delays, or adding randomness, you will have to choose another method.
Throttling and delaying via proxy
There are several proxy implementations that let you apply pretty much whatever logic you need, because the proxy software offers hooks for it. node-http-proxy is one of them; its documentation includes an example of how to set up a proxy with latency.
Of course there are a lot of other options, based on whatever tools or languages you prefer.