Lag Kills! How App Latency Wrecks Customer Experience

Executive Summary

  • STL Partners’ analysis shows that while latency and app errors are only weakly correlated across the whole of Europe, once outlying operators (SFR, Wind and those in Germany) are removed, there is a strong positive correlation between the two: as latency increases so do app errors.
  • Intuitively, this makes sense: apps ‘time out’ waiting for responses causing errors and crashes.
  • Latency and app errors both negatively affect customer experience – customers are more likely to abandon apps as responsiveness and error rates increase:
    • 48% of users would uninstall or stop using an app if it regularly ran slowly.
    • 53% of users would uninstall or stop using an app if it regularly crashed, stopped responding or had errors.
  • Historically, customers have tended to hold the app developer responsible for errors: 55% of users blame the app for problems and only 22% the mobile operator. However, mobile operators have a significant impact on how quickly an app runs and how likely it is to experience an error. As understanding of the operators’ role grows, users may well use this as a criterion when selecting their mobile service provider.
  • Performance among Europe’s operators for app latency and errors varies widely:
    • The worst-performing operator for latency (3 Italy) experiences over three times as many requests with poor latency as the best performer (Bouygues Telecom).
    • The worst-performing operator for errors (O2 Germany) generates over twice as many app errors as the best performer (Bouygues Telecom again).
  • Improving customer experience is rapidly becoming a mantra for operators globally, and for several players (in Europe at least) improving latency performance and reducing app errors caused by latency and other factors should be a key priority. Without improvement, poorly performing operators will find themselves at a disadvantage and may struggle to retain existing customers and recruit new ones.


Key objectives

Network latency is a key driver of user experience. In applications as diverse as e-commerce, VoIP, gaming, video or audio content delivery, search, online advertising, financial services, and the Internet of Things, increased latency has a direct and negative impact on customers. With higher latency, customers fail to complete tasks, abandon applications, or experience application errors. This, in turn, results in poorer core business KPIs for the application provider – lower ratings, fewer subscribers, or reduced advertising fees.

As we showed in a recent report titled Mobile app latency in Europe: French operators lead; Italian & Spanish lag, with the modern Internet dominated by flows of small packets on fast networks, latency accounts for the biggest share of total load times and tends to determine the actual data transfer rates users see. And, as web and mobile applications increasingly consist of large numbers of requests to independent ‘microservices’, jitter – the variation in latency – becomes a more significant threat to the consumer experience. In that report, we also benchmarked major European mobile network operators (MNOs) on average latency and the rate of unacceptably high-latency events (over 500ms).

In this second report on latency, which again uses data provided by app analytics specialist Apteligent (formerly Crittercism), we look at the rate of app errors – something that affects user experience as directly as anything can – and its correlation with both average latency and the rate of unacceptably high-latency events. We explore how often apps fail across the same set of MNOs, test whether latency is a driver of app errors, and conclude whether our theory that latency is a real driver of consumer experience holds.

Source data and methodology

Our partner, Apteligent, collects a wide variety of analytics data from thousands of mobile apps used by hundreds of millions of people around the world in their everyday lives and work. To date, the primary purpose of the data has been to help app developers make better apps. We are now working with Apteligent to produce further insights from the data to serve the global community of mobile operators.

This data-set includes the average network latency experienced at the application layer, the percentage of network requests above 500ms round-trip time, the 5th and 95th percentiles, and the rate of application errors. All of these data points are useful in trying to understand the overall experience of customers using their mobile apps, and in particular the delays and problems they’ve experienced such as long screen wait times and applications failing to work.

We showed in the previous report how the longest round-trip delays or ‘app-lags’ (i.e. those over 500ms) are the most important KPI to look at when trying to understand customer experience. This is firstly because people really notice individual delays of this length. For people used to high-speed broadband, it’s like going back to narrowband internet – it seems incredibly slow!

Importantly though, in modern apps, the distribution of delays is even more significant, as each app or web page typically makes multiple requests over the internet before it can load fully – and each of these requests will suffer some form of delay or latency.
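The compounding effect described above can be illustrated with a simple probability sketch (our own model, not drawn from the report’s data): if each of n independent requests has probability p of exceeding 500ms, the chance that a page load contains at least one slow request is 1 − (1 − p)^n, which grows quickly with n.

```python
def p_any_slow(p: float, n: int) -> float:
    """Probability that at least one of n independent requests
    exceeds the latency threshold, given per-request probability p."""
    return 1 - (1 - p) ** n

# A 5% slow-request rate is barely visible for a single request,
# but dominates once a page issues 30 of them:
print(round(p_any_slow(0.05, 1), 3))   # 0.05
print(round(p_any_slow(0.05, 30), 3))  # 0.785
```

Independence is a simplifying assumption – in practice slow requests cluster during congestion – but the qualitative point stands: request count amplifies tail latency.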

A detailed explanation of this and of the collection methodology is available in the first report.

The Impact of latency on app errors

First glance: a positive correlation overall, but a weak one

The following chart shows the error rate per 10,000 app requests, plotted against the percentage of requests over 500ms round-trip time, by carrier. Each dot represents a week’s performance; we looked at 12 weeks of data from 20 operators, from the week beginning 3 August 2015 to the week beginning 19 October 2015. Our hypothesis is that the more requests with unacceptable latency there are, the more app errors occur, because apps ‘time out’ or key requests are not fulfilled in time, causing an error or, worse, a crash.

Figure 1: Latency and errors for the top 20 European MNOs over the last 12 weeks appear correlated, but there are some important outliers

Source: STL Partners, Apteligent

At first glance, there appears to be only a weak positive relationship between latency and error rates. However, there does seem to be a natural grouping between the two hand-drawn dotted lines on the chart, with the weeks above the upper boundary (potentially) being outliers in which at least one other factor is driving application errors up.

The lower boundary seems to represent the underlying rate of app errors that occur when there are no latency issues (between 20 and 50 errors per 10,000 requests), plus an increasing error rate as higher latency kicks in. For example, when 10% of requests experience latency above 500ms, the minimum error rate is around 30 per 10,000 requests, rising to 50 at the 35% mark.
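As a rough sketch, the lower boundary can be read as a straight line through the two quoted points – (10% slow requests, 30 errors per 10,000) and (35%, 50). The linear form and the function below are our own illustration; only those two points come from the chart.

```python
def min_error_rate(pct_slow: float) -> float:
    """Approximate minimum errors per 10,000 requests, given the
    percentage of requests with round-trip latency above 500ms.
    Linear fit through (10, 30) and (35, 50), the two quoted points."""
    slope = (50 - 30) / (35 - 10)   # 0.8 errors per percentage point
    intercept = 30 - slope * 10     # ~22 errors even at low latency
    return intercept + slope * pct_slow

print(round(min_error_rate(10), 1))  # 30.0
print(round(min_error_rate(35), 1))  # 50.0
```

Note that the implied intercept of roughly 22 errors per 10,000 sits within the 20–50 baseline band described above.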

Table of contents

  • Executive Summary
  • Introduction
  • Key objectives
  • Source data and methodology
  • The Impact of Latency on App Errors
  • First glance: a positive correlation overall, but a weak one
  • Outliers are specific countries and operators
  • Strong positive correlation between latency and app errors once outliers are excluded
  • App Errors: The Impact on Customer Experience
  • Latency and errors – both bad for the customer
  • Appendix: Country Analysis
  • France: A Clear Relationship
  • The UK: Strong Latency-Error Correlation
  • Spain: A mixed picture, but latency is still predictive of app errors
  • Italy: Wind is a super-outlier
  • Germany: Nothing but Outliers?
  • STL Partners and Telco 2.0: Change the Game
  • About Apteligent (formerly Crittercism)


Table of figures

  • Figure 1: Latency and errors for the top 20 European MNOs over the last 12 weeks appear correlated, but there are some important outliers
  • Figure 2: 12-week average latency and app error performance by operator
  • Figure 3: After excluding the key outliers, high-latency events explain 75% of the app error rate across Europe’s top 20 operators
  • Figure 4: Expected number of errors when loading 20 web pages of Amazon
  • Figure 5: France shows both the best performers, and a very clear relationship between latency and app errors
  • Figure 6: The latency-error correlation is strongest in the UK
  • Figure 7: High variation in latency complicates the picture, but a third of app error variation is still driven by latency
  • Figure 8: Wind complicates the picture, but the trend is still there
  • Figure 9: Germany – is there any trend at all?
  • Figure 10: The source of the outliers – Germany in August

Mobile app latency in Europe: French operators lead; Italian & Spanish lag

Latency as a proxy for customer app experience

Latency is a measure of the time taken for a packet of data to travel from one designated point to another. The complication comes in defining the start and end point. For an operator seeking to measure its network latency, it might measure only the transmission time across its network.

However, to objectively measure customer app experience, it is better to measure the time it takes from the moment the user takes an action, such as pressing a button on a mobile device, to receiving a response – in effect, a packet arriving back and being processed by the application at the device.

This ‘total roundtrip latency’ is what is measured by our partner, Crittercism, via code embedded within applications themselves, on an aggregated and anonymised basis. Put simply, total roundtrip latency is the best measure of customer experience because it encompasses the total ‘wait time’ for a customer, not just a portion of the multi-stage journey.

Latency is becoming increasingly important

Broadband speeds tend to attract most attention in the press and in operator advertising, and speed does of course affect downloads and streaming experiences. But total roundtrip latency has a bigger impact than speed on many digital user experiences. This is because of the way that applications are built.

In modern Web applications, the business logic is parcelled out into independent ‘microservices’ and their responses are re-assembled by the client to produce the overall digital user experience. Each HTTP request is often quite small, although an overall onscreen action can be composed of a number of requests of varying sizes, so broadband speed is often less of a factor than latency – the time to send and receive each request. See Appendix 2: Why latency is important for a more detailed explanation of why latency is such an important driver of customer app experience.
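A back-of-the-envelope model (our own, with illustrative numbers) makes the point concrete: batches of parallel requests each cost at least one round trip, plus payload transfer time, so for request-heavy pages cutting RTT helps far more than adding bandwidth.

```python
import math

def load_time_ms(n_requests: int, avg_kb: float, rtt_ms: float,
                 mbps: float, parallel: int = 6) -> float:
    """Crude page-load model: each batch of `parallel` requests costs
    one round trip; payload transfer time is added on top."""
    round_trips = math.ceil(n_requests / parallel)
    transfer_ms = (n_requests * avg_kb * 8) / mbps  # kilobits / Mbps = ms
    return round_trips * rtt_ms + transfer_ms

# 60 small requests of 20KB each:
print(load_time_ms(60, 20, rtt_ms=300, mbps=10))  # 3960.0 ms baseline
print(load_time_ms(60, 20, rtt_ms=300, mbps=20))  # 3480.0 – doubling bandwidth
print(load_time_ms(60, 20, rtt_ms=150, mbps=10))  # 2460.0 – halving latency
```

The six parallel connections mirror common browser defaults; all figures here are illustrative, not measurements.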

The value of using actual application latency data

As we have already explained, STL Partners prefers to use total roundtrip latency as an indicator of customer app experience as it measures the time that a customer waits for a response following an action. STL Partners believes that Crittercism data reflects actual usage in each market because it operates within apps – in hundreds of thousands of apps that people use from the Apple App Store and Google Play. This is a quite different approach from that of other players, which require users to download a specific measurement app that then ‘pings’ a server and awaits a response. The latter approach has a couple of limitations:

1. Although there have been several million downloads of the OpenSignal and Actual Experience apps, this doesn’t come anywhere near the number of people that have downloaded apps containing the Crittercism measurement code.

2. Because the Crittercism code is embedded within apps, it directly measures the latency experienced by users when using those apps. A dedicated measurement app fails to do this. It could be argued that a dedicated app gives the ‘cleanest’ reading – it isn’t affected by variations in app design, for example. This is true, but STL Partners believes that by aggregating the data across apps such variation is removed and a representative picture of total roundtrip latency revealed. Crittercism can also provide more granular data. For example, although we haven’t shown it in this report, the data can show latency performance by application type – e.g. Entertainment, Shopping, and so forth – based on the categorisation of apps used by Google and Apple in their app stores.

A key premise of this analysis is that, because operators’ customer bases are similar within and across markets, the profile of app usage (and therefore latency) is similar from one operator to the next. The latency differences between operators are, therefore, down to the performance of the operator.

Why it isn’t enough to measure average latency

It is often said that averages hide disparities in data, and this is particularly true for latency and for customer experience. This is best illustrated with an example. In Figure 2 we show the distribution of latencies for two operators. Operator A has lots of very fast requests and a long tail of requests with high latencies.

Operator B has far fewer fast requests but a much shorter tail of poor-performing latencies. The chart clearly shows that operator B has a much higher percentage of requests with a satisfactory latency even though its average latency is worse than operator A’s (318ms vs 314ms). Essentially, operator A is let down by its slowest requests – those that prevent an application from completing a task for a customer.
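The effect in Figure 2 can be reproduced with two hypothetical request samples (invented for illustration – these are not the report’s data): the operator with the slightly better average has the much worse tail.

```python
def stats(latencies_ms):
    """Return (mean latency, % of requests over 500ms) for a sample."""
    mean = sum(latencies_ms) / len(latencies_ms)
    pct_slow = 100 * sum(1 for x in latencies_ms if x > 500) / len(latencies_ms)
    return round(mean), round(pct_slow)

# Operator A: mostly very fast, but a long tail of very slow requests.
op_a = [100] * 80 + [1200] * 20
# Operator B: uniformly moderate, no tail at all.
op_b = [330] * 100

print(stats(op_a))  # (320, 20) – better average, 20% unsatisfactory
print(stats(op_b))  # (330, 0)  – worse average, 0% unsatisfactory
```

The averages differ by only 10ms, yet the two operators would deliver very different app experiences.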

This is why in this report we focus on average latency AND, critically, on the percentage of requests that are deemed ‘unsatisfactory’ from a customer experience perspective.

Using latency as a measure of performance for customers

500ms as a key performance cut-off

‘Good’ roundtrip latency is somewhat subjective, and there is evidence that experience declines in a linear fashion as latency increases – users incrementally abandon the site. However, we have picked 500ms (half a second) as the threshold for unsatisfactory performance, as we believe a delay longer than this is likely to impact mobile users negatively (expectations on the ‘fixed’ internet are higher still). User interface research from as far back as 1968 suggests that anything below 100ms is perceived as “instant”, although more recent work on gamers suggests that even lower is usually better, and delay starts to become intrusive after 200-300ms. Google experiments from 2009 suggest that a lasting effect – users continued to see the site as “slow” for several weeks – kicked in above 400ms.
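Applying the research thresholds quoted above, a small helper can bucket round-trip times into perception bands (the band names and exact cut-offs are our own simplification of those findings):

```python
def perception(rtt_ms: float) -> str:
    """Classify a round-trip time against the thresholds above."""
    if rtt_ms < 100:
        return "instant"         # below ~100ms feels instantaneous
    if rtt_ms < 300:
        return "noticeable"      # delay becomes intrusive around 200-300ms
    if rtt_ms < 500:
        return "slow"            # above ~400ms users start labelling a site slow
    return "unsatisfactory"      # the report's 500ms cut-off

print([perception(x) for x in (80, 250, 450, 900)])
# ['instant', 'noticeable', 'slow', 'unsatisfactory']
```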

Percentage of app requests with total roundtrip latency above 500ms – markets

Five key markets in Europe: France, Germany, Italy, Spain, and the UK.

This first report looks at five key markets in Europe: France, Germany, Italy, Spain, and the UK. We explore performance overall for Europe by comparing the relative performance of each country and then dive into the performance of operators within each country.

We intend to publish other reports in this series, looking at performance in other regions – North America, the Middle East and Asia, for example. This first report is intended to provide a ‘taster’ to readers, and STL Partners would welcome feedback on the additional insights readers would value, such as latency performance by:

  • Operating system – Android vs iOS
  • Specific device – e.g. Samsung S6 vs iPhone 6
  • App category – e.g. shopping, games, etc.
  • Specific countries
  • Historical trends

Based on this feedback, STL Partners and Crittercism will explore whether it is valuable to provide specific total roundtrip latency measurement products.


Table of contents

  • Latency as a proxy for customer app experience
  • ‘Total roundtrip latency’ is the best measure for customer ‘app experience’
  • Latency is becoming increasingly important
  • STL Partners’ approach
  • Europe: UK, Germany, France, Italy, Spain
  • Quantitative Analysis
  • Key findings
  • UK: EE, O2, Vodafone, 3
  • Quantitative Analysis
  • Key findings
  • Germany: T-Mobile, Vodafone, e-Plus, O2
  • Quantitative Analysis
  • Key findings
  • France: Orange, SFR, Bouygues Télécom, Free
  • Quantitative Analysis
  • Key findings
  • Italy: TIM, Vodafone, Wind, 3
  • Quantitative Analysis
  • Key findings
  • Spain: Movistar, Vodafone, Orange, Yoigo
  • Quantitative Analysis
  • Key findings
  • About STL Partners and Telco 2.0
  • About Crittercism
  • Appendix 1: Defining latency
  • Appendix 2: Why latency is important


Table of figures

  • Figure 1: Total roundtrip latency – reflecting a user’s ‘wait time’
  • Figure 2: Why a worse average latency can result in higher customer satisfaction
  • Figure 3: Major European markets – average total roundtrip latency (ms)
  • Figure 4: Major European markets – percentage of requests above 500ms
  • Figure 5: The location of Google and Amazon’s European data centres favours operators in France, UK and Germany
  • Figure 6: European operators – average total roundtrip latency (ms)
  • Figure 7: European operators – percentage of requests with latency over 500ms
  • Figure 8: Customer app experience is likely to be particularly poor at 3 Italy, Movistar (Spain) and Telecom Italia
  • Figure 9: UK Operators – average latency (ms)
  • Figure 10: UK operators – percentage of requests with latency over 500ms
  • Figure 11: German Operators – average latency (ms)
  • Figure 12: German operators – percentage of requests with latency over 500ms
  • Figure 13: French Operators – average latency (ms)
  • Figure 14: French operators – percentage of requests with latency over 500ms
  • Figure 15: Italian Operators – average latency (ms)
  • Figure 16: Italian operators – percentage of requests with latency over 500ms
  • Figure 17: Spanish Operators – average latency (ms)
  • Figure 18: Spanish operators – percentage of requests with latency over 500ms
  • Figure 19: Breakdown of HTTP requests by type and size