Mobile app latency in Europe: French operators lead; Italian & Spanish lag


Our latest analysis shows staggering differences in 'app-lag' (the time it takes for an app to get a response over the Internet) across France, Germany, Italy, Spain and the UK, and twenty mobile operators. This has significant consequences for customer data experiences, and potentially operator market performance too. Operators in France, particularly Bouygues and Free, are delivering a superior customer app experience while 3 in Italy and Movistar in Spain are European laggards. (October 2015, Foundation 2.0, Executive Briefing Service.)

Latency as a proxy for customer app experience

Latency is a measure of the time taken for a packet of data to travel from one designated point to another. The complication comes in defining the start and end points. An operator seeking to measure its network latency might, for example, measure only the transmission time across its own network.

However, to objectively measure customer app experience, it is better to measure the time it takes from the moment the user takes an action, such as pressing a button on a mobile device, to receiving a response – in effect, a packet arriving back and being processed by the application at the device.

This ‘total roundtrip latency’ is what our partner, Crittercism, measures – on an aggregated and anonymised basis – via code embedded within applications themselves. Put simply, total roundtrip latency is the best measure of customer experience because it encompasses the total ‘wait time’ for a customer, not just a portion of the multi-stage journey.
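As an illustration only (this is not Crittercism’s SDK, and the URL is a placeholder), the sketch below shows what measuring total roundtrip latency from inside an application amounts to: the clock starts when the user’s action triggers a request and stops once the full response has arrived back at the device.

import time
import urllib.request

def timed_request(url: str) -> float:
    """Return total roundtrip latency in milliseconds for a single request."""
    start = time.monotonic()                       # clock starts with the user's action
    with urllib.request.urlopen(url, timeout=10) as response:
        response.read()                            # wait for the full response body to arrive
    return (time.monotonic() - start) * 1000.0

if __name__ == "__main__":
    # Placeholder endpoint – a real app would time its own API calls and report
    # the readings for aggregation rather than printing them.
    print(f"Total roundtrip latency: {timed_request('https://example.com'):.0f}ms")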

Latency is becoming increasingly important

Broadband speeds tend to attract most attention in the press and in operator advertising, and speed does of course impact downloads and streaming experiences. But total roundtrip latency has a bigger impact on many user digital experiences than speed. This is because of the way that applications are built.

In modern web applications, the business logic is parcelled out into independent ‘microservices’ whose responses are re-assembled by the client to produce the overall digital user experience. Each HTTP request is often quite small, although a single onscreen action can be composed of a number of requests of varying sizes. As a result, broadband speed is often less of a factor than latency – the time to send and receive each request – as the simple calculation below illustrates. See Appendix 2: Why latency is important, for a more detailed explanation of why latency is such an important driver of customer app experience.
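A back-of-the-envelope sketch with assumed figures (30 requests of roughly 20KB each, six parallel connections – not measured data): for a screen built from many small microservice calls, quintupling bandwidth shaves relatively little off the wait, while cutting roundtrip latency by two thirds roughly halves it.

def screen_load_ms(num_requests, avg_request_kb, rtt_ms, bandwidth_mbps,
                   parallel_connections=6):
    """Rough wait time for one onscreen action composed of many small requests."""
    # Transfer time for all payloads at the given bandwidth (KB -> kilobits).
    transfer_ms = (num_requests * avg_request_kb * 8) / (bandwidth_mbps * 1000) * 1000
    # Each batch of parallel requests still pays at least one full roundtrip.
    batches = -(-num_requests // parallel_connections)   # ceiling division
    return batches * rtt_ms + transfer_ms

print(screen_load_ms(30, 20, rtt_ms=300, bandwidth_mbps=10))   # ~1980ms baseline
print(screen_load_ms(30, 20, rtt_ms=300, bandwidth_mbps=50))   # ~1596ms: 5x bandwidth helps little
print(screen_load_ms(30, 20, rtt_ms=100, bandwidth_mbps=10))   # ~980ms: lower latency roughly halves the wait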

The value of using actual application latency data

As we have already explained, STL Partners prefers to use total roundtrip latency as an indicator of customer app experience as it measures the time that a customer waits for a response following an action. STL Partners believes that Crittercism data reflects actual usage in each market because it operates within apps – in hundreds of thousands of apps that people use from the Apple App Store and Google Play. This is quite a different approach from that of other players, which require users to download a specific measurement app that then ‘pings’ a server and awaits a response. The latter approach has a couple of limitations:

1. Although the OpenSignal and Actual Experience apps have been downloaded several million times, this doesn’t get anywhere near the number of people that have downloaded apps containing the Crittercism measurement code.

2. Because the Crittercism code is embedded within apps, it directly measures the latency experienced by users when using those apps; a dedicated measurement app fails to do this. It could be argued that a dedicated app gives the ‘cleanest’ reading – it isn’t affected by variations in app design, for example. This is true, but STL Partners believes that by aggregating the data across apps such variation is removed and a representative picture of total roundtrip latency revealed. Crittercism can also provide more granular data. For example, although we haven’t shown it in this report, Crittercism data can show latency performance by application type – e.g. Entertainment, Shopping, and so forth – based on the categorisation of apps used by Google and Apple in their app stores.

A key premise of this analysis is that, because operators’ customer bases are similar within and across markets, the profile of app usage (and therefore latency) is similar from one operator to the next. The latency differences between operators are, therefore, down to the performance of the operator.

Why it isn’t enough to measure average latency

It is often said that averages hide disparities in data, and this is particularly true for latency and for customer experience. This is best illustrated with an example. In Figure 2 we show the distribution of latencies for two operators. Operator A has lots of very fast requests and a long tail of requests with high latencies.

Operator B has far fewer very fast requests but a much shorter tail of poor-performing latencies. The chart clearly shows that operator B has a much higher percentage of requests with a satisfactory latency even though its average latency is higher than operator A’s (318ms vs 314ms). Essentially, operator A is let down by its slowest requests – those that prevent an application from completing a task for a customer.

This is why in this report we focus on average latency AND, critically, on the percentage of requests that are deemed ‘unsatisfactory’ from a customer experience perspective.
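A minimal sketch of the point, using hypothetical distributions chosen to reproduce the averages quoted above (the underlying Figure 2 data is not reproduced here): operator B’s average latency is worse, yet it has half as many ‘unsatisfactory’ requests.

THRESHOLD_MS = 500

# Operator A: lots of very fast requests plus a long tail of slow ones (hypothetical).
operator_a = [150] * 80 + [970] * 20   # mean = 314ms
# Operator B: fewer very fast requests but a much shorter slow tail (hypothetical).
operator_b = [280] * 90 + [660] * 10   # mean = 318ms

def summarise(name, samples):
    mean = sum(samples) / len(samples)
    pct_bad = 100 * sum(s > THRESHOLD_MS for s in samples) / len(samples)
    print(f"{name}: average = {mean:.0f}ms, over {THRESHOLD_MS}ms = {pct_bad:.0f}%")

summarise("Operator A", operator_a)    # average 314ms, 20% unsatisfactory
summarise("Operator B", operator_b)    # average 318ms, 10% unsatisfactory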

Using latency as a measure of performance for customers

500ms as a key performance cut-off

What counts as ‘good’ roundtrip latency is somewhat subjective, and there is evidence that experience declines in a linear fashion as latency increases – people incrementally drop off a site rather than abandoning it at a single threshold. However, we have picked 500ms (half a second) as the cut-off for unsatisfactory performance, as we believe a delay longer than this is likely to impact mobile users negatively (expectations on the ‘fixed’ internet are higher still). User interface research from as far back as 1968 suggests that anything below 100ms is perceived as “instant”, although more recent work on gamers suggests that even lower is usually better, and delay starts to become intrusive after 200-300ms. Google experiments from 2009 suggest that a lasting effect – users continued to see the site as “slow” for several weeks – kicked in above 400ms.
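As an illustrative aid only (the band names and boundaries below are our shorthand for the research cited above, not an industry standard), latency readings can be bucketed against these thresholds:

THRESHOLDS_MS = [             # (upper bound, label) – assumed bands based on the
    (100, "instant"),         # research cited above, not a formal standard
    (300, "noticeable"),
    (500, "intrusive"),
]

def experience_band(latency_ms: float) -> str:
    """Map a total roundtrip latency reading onto a perceived-experience band."""
    for upper_bound, label in THRESHOLDS_MS:
        if latency_ms <= upper_bound:
            return label
    return "unsatisfactory"   # over 500ms – the cut-off used in this report

for sample in (80, 250, 450, 620):
    print(f"{sample}ms -> {experience_band(sample)}")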

Percentage of app requests with total roundtrip latency above 500ms – markets

Five key markets in Europe: France, Germany, Italy, Spain and the UK

This first report looks at five key markets in Europe: France, Germany, Italy, Spain and the UK. We explore performance overall for Europe by comparing the relative performance of each country and then dive into the performance of operators within each country.

We intend to publish other reports in this series, looking at performance in other regions – North America, the Middle East and Asia, for example. This first report is intended to provide a ‘taster’ to readers, and STL Partners would welcome feedback on additional insights that readers would find useful, such as latency performance by:

  • Operating system – e.g. Android vs iOS
  • Specific device – e.g. Samsung S6 vs iPhone 6
  • App category – e.g. shopping, games, etc.
  • Specific countries
  • Historical trends

Based on this feedback, STL Partners and Crittercism will explore whether it is valuable to provide specific total roundtrip latency measurement products.

Contents

  • Latency as a proxy for customer app experience
  • ‘Total roundtrip latency’ is the best measure for customer ‘app experience’
  • Latency is becoming increasingly important
  • STL Partners’ approach
  • Europe: UK, Germany, France, Italy, Spain
  • Quantitative Analysis
  • Key findings
  • UK: EE, O2, Vodafone, 3
  • Quantitative Analysis
  • Key findings
  • Germany: T-Mobile, Vodafone, e-Plus, O2
  • Quantitative Analysis
  • Key findings
  • France: Orange, SFR, Bouygues Télécom, Free
  • Quantitative Analysis
  • Key findings
  • Italy: TIM, Vodafone, Wind, 3
  • Quantitative Analysis
  • Key findings
  • Spain: Movistar, Vodafone, Orange, Yoigo
  • Quantitative Analysis
  • Key findings
  • About STL Partners and Telco 2.0
  • About Crittercism
  • Appendix 1: Defining latency
  • Appendix 2: Why latency is important


  • Figure 1: Total roundtrip latency – reflecting a user’s ‘wait time’
  • Figure 2: Why a worse average latency can result in higher customer satisfaction
  • Figure 3: Major European markets – average total roundtrip latency (ms)
  • Figure 4: Major European markets – percentage of requests above 500ms
  • Figure 5: The location of Google and Amazon’s European data centres favours operators in France, UK and Germany
  • Figure 6: European operators – average total roundtrip latency (ms)
  • Figure 7: European operators – percentage of requests with latency over 500ms
  • Figure 8: Customer app experience is likely to be particularly poor at 3 Italy, Movistar (Spain) and Telecom Italia
  • Figure 9: UK Operators – average latency (ms)
  • Figure 10: UK operators – percentage of requests with latency over 500ms
  • Figure 11: German Operators – average latency (ms)
  • Figure 12: German operators – percentage of requests with latency over 500ms
  • Figure 13: French Operators – average latency (ms)
  • Figure 14: French operators – percentage of requests with latency over 500ms
  • Figure 15: Italian Operators – average latency (ms)
  • Figure 16: Italian operators – percentage of requests with latency over 500ms
  • Figure 17: Spanish Operators – average latency (ms)
  • Figure 18: Spanish operators – percentage of requests with latency over 500ms
  • Figure 19: Breakdown of HTTP requests in facebook.com, by type and size