Network edge capacity forecast: The role of hyperscalers

Developers need to see sufficient edge capacity

Edge computing comprises a spectrum of potential locations and technologies designed to bring processing power closer to end-devices and data sources, outside of a central data centre or cloud. This report focuses on forecasting capacity at the network edge – i.e. edge computing at edge data centres owned (and usually operated) by telecoms operators.

This forecast models capacity at these sites for non-RAN workloads – in other words, processing for enterprise or consumer applications and the distributed core network functions required to support them. We cover forecasts on RAN as part of our Telco Cloud research services portfolio.

Forecast scope in terms of edge locations and workload types

Source: STL Partners


The output of the forecast focuses on capacity: number of edge data centres and servers

STL Partners has always argued that for the network edge to take off, developers and enterprises need to see sufficient edge capacity to transform their applications to leverage its benefits at scale. The forecast provides an indication of how this capacity will grow over the next five years, by predicting the number of edge data centres owned by telecoms operators and the number of servers operators plan to fill them with.

Hardware vendors have been evolving their server portfolios for a number of years to fit the needs of the telecoms industry. This started with core network virtualisation, as the industry moved away from an appliance-based model to using commercial off-the-shelf (COTS) hardware to support the virtualised LTE core.

As infrastructure moves “deeper” into the edge, the requirements for servers will change. Servers at RAN base stations will not sit in full data centre facilities, but will need to be self-contained and ruggedised.

However, at this stage of the market’s maturity, most servers at the network edge will be in data centre-like facilities. 

There are three key factors determining a telco’s approach and timing for its edge computing data centres

Telecoms operators want to build their network edge capacity where there is demand. In general, the approach has been to create a deployment strategy for network edge data centres that guarantees a level of (low) latency for a certain level of population coverage. In interviews with operators, the targets have often ranged from 90% to 99% of the population experiencing sub-10 to sub-20 millisecond round-trip latency for applications hosted at their network edge.

The resultant distribution of edge capacity will therefore be shaped by the spread of the population, the size of the country and the telecoms operator’s network topology. For example, in well-connected, small countries, such as the Netherlands, low latencies are already achievable with current networks and centralised data centre locations.
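The trade-off between latency budget and site count described above can be sketched with back-of-the-envelope arithmetic. The sketch below is purely illustrative: the fibre propagation speed is a standard rule of thumb, but the fixed overhead, latency budget and service area are hypothetical assumptions, not STL Partners forecast inputs.

```python
import math

# Illustrative sketch: estimate how many network edge sites are needed to
# meet a round-trip latency target over a given service area. All figures
# (fixed overhead, latency budget, area) are assumptions for illustration.

FIBRE_KM_PER_MS = 200.0  # ~2/3 of c: one-way propagation speed in optical fibre

def max_service_radius_km(rtt_budget_ms: float, fixed_overhead_ms: float) -> float:
    """Radius an edge site can serve within the round-trip latency budget.

    Subtract the fixed overhead (radio access, switching, processing), then
    convert the remaining budget into round-trip fibre distance.
    """
    propagation_ms = max(rtt_budget_ms - fixed_overhead_ms, 0.0)
    # A round trip covers the distance twice, so divide by 2.
    return propagation_ms * FIBRE_KM_PER_MS / 2.0

def sites_needed(area_km2: float, radius_km: float) -> int:
    """Lower-bound site count: circles of coverage tiling the area."""
    coverage_km2 = math.pi * radius_km ** 2
    return math.ceil(area_km2 / coverage_km2)

# Example: 10 ms round-trip target, 6 ms assumed fixed overhead, a country
# roughly the size of Germany (~357,000 km^2).
radius = max_service_radius_km(rtt_budget_ms=10.0, fixed_overhead_ms=6.0)
print(radius)                         # 400.0 (km)
print(sites_needed(357_000, radius))  # 1
```

Even a modest latency budget translates into a large service radius, which is consistent with the observation above that in small, well-connected countries a handful of centralised sites can already deliver low latency.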

Key factors determining network edge build​

Source: STL Partners

The number of sites and the speed at which a telecoms operator deploys them are driven by three main factors:

Factor 1: the operator’s edge computing strategy;

Factor 2: the speed at which it has deployed, or will deploy, 5G (if it is a mobile operator);

Factor 3: the country’s geographic profile.

Details on the evidence for the individual factors can be found in the inaugural report, Forecasting capacity of network edge computing.

Table of contents

  • Executive summary
  • Introduction to the forecast
  • Key findings this year
  • Regional deep-dives
  • Role of hyperscalers
  • Conclusions
  • Appendix: Methodology


How telcos can flex their physical strength

Telcos can learn from Amazon

This executive briefing explores how and why Amazon is expanding its physical footprint in its home market of North America. Although Amazon was born on the Internet, physical assets from lockers and retail stores to fulfilment centres and airplanes are absolutely fundamental to its mission to provide the most convenient means of accessing as many physical and digital products and services as possible.

Flagging the many parallels with telecoms, this report analyses the way in which Amazon is melding its physical and digital propositions to generate as many economies of scale and economies of scope as possible. For each example, the report considers the potential lessons for telcos.

Like Amazon, telcos have an extensive range of physical and digital assets and capabilities. But unlike Amazon, telcos tend to focus these assets on a single purpose, rather than serving multiple purposes and multiple groups of customers.

This report gives a high level overview of how telcos could do much more with their remaining data centres, their core and access networks, their retail stores, vehicle fleets, devices and apps. Indeed, each of these assets could help telcos to secure a major role in the Coordination Age – a new era in which telcos and their partners could move beyond providing connectivity to help the public and private sector better coordinate the use of key resources and assets, such as road space, fresh water, energy and farm land.

Finally, the report also flags the potential for telcos outside of North America to partner with Amazon. Beyond its home market, Amazon still has little in the way of physical assets, whereas telcos in Europe and Asia have large physical footprints that could be better utilised.

Note, this high-level report will be supplemented by future reports that will analyse in-depth how telcos can make better use of each category of asset, as STL Partners did in this report exploring best practice in the rollout of apps: Telcos’ apps: What works?


Amazon: Coordinating convenience

As telcos explore new opportunities emerging in the Coordination Age, they could learn a lot from Amazon, a company that has mastered the coordination of complex digital and physical supply chains. Born on the Internet, Amazon is associated in most people’s minds with the rise of online shopping – buying goods with a click of a button from the comfort of your armchair. Although one might assume that Amazon keeps costs down by minimising its capital spending and its physical footprint, its approach is far more nuanced than that. Indeed, Amazon is building out a broad physical presence across North America that belies the notion that success in digital commerce is all about data, algorithms and slick software. Despite its relentless pursuit of automation, Amazon employs approximately 647,500 full-time and part-time staff, most of them working in fulfilment centres and other logistical facilities.

Rather than minimising costs, Amazon is looking to maximise convenience. Indeed, Amazon is gradually increasing its spending on fulfilment, which has climbed from 13% of sales in 2016 to 14% in 2017 and 15% in 2018.

Figure 1: Amazon’s fulfilment costs are rising


Source: Amazon

Amazon’s physical assets

In the Coordination Age, digital technologies are being used to coordinate the efficient use of physical assets and resources. While Google is focused on using its world-class software expertise to coordinate the use of physical assets owned by others, Amazon is betting that its expertise in managing physical assets (as well as developing software) will give it a competitive edge over its rival. As telecoms operators also own a broad mix of digital and physical assets, Amazon’s strategy provides a potential playbook for telcos. By straddling the physical and digital worlds, Amazon believes it can bring greater value to both consumers and companies.

One of the ways in which Amazon is increasing convenience for customers is by reducing the latency in its distribution network – it is building out an increasingly dense network of physical assets to reduce delivery times, so that consumers regard Amazon as first port of call for an even wider range of products and services. Amazon wants to sell people what they want, exactly when they want it. For Amazon, as with telcos, high quality coverage of major population centres is vital.

To that end, Amazon is building up a major physical presence in North America – its home market. Amazon’s balance sheet now shows US$45 billion of property and equipment in the U.S. with a further US$16.7 billion in the rest of the world. That compares with US$27.5 billion of property and equipment on the balance sheet of Target Corp., a retailer with more than 1,800 stores across the U.S. Figure 2 shows the value of the property and equipment assets in Amazon’s North America division is growing far faster than those in its International division. That suggests Amazon can’t afford to pursue an asset-heavy strategy across the world and could be open to partnerships with telcos with data centres, retail stores and phone boxes in markets beyond North America.

Figure 2: Amazon’s physical asset base in North America is growing fast


Source: Amazon annual reports

As Figure 2 shows, the physical footprint of Amazon Web Services is also growing rapidly, underlining Amazon’s relentless expansion in online entertainment and cloud computing, as well as retail and logistics (see Figure 3). Indeed, Amazon is highly active in the six distinct market segments shown in Figure 3, underlining how Amazon is comfortable handling a vast range of physical goods and digital bits and bytes. It also operates its own network infrastructure, including undersea cables, as well as wind farms.

Figure 3 might suggest Amazon is a conglomerate, but it has integrated its propositions across multiple markets in a way that traditional conglomerates wouldn’t contemplate, building extraordinary synergies across six distinct markets over the past 20 years.

Amazon has built these synergies by moving fluidly between B2B2C propositions, B2B propositions and B2C propositions. Amazon is both a retailer and a marketplace for physical goods, and a supplier of cloud services and apps to both consumers and businesses. As a result, it sometimes competes directly with its customers, but in a way that its customers seem to accept. Most famously, Amazon competes with Netflix in the video-on-demand market, while hosting Netflix on Amazon Web Services (telcos do something similar by serving third party MVNOs).

Figure 3: The milestones of Amazon’s expansion across six segments


Source: STL Partners

The rest of this report compares Amazon’s physical assets against those of telecoms operators, to draw some parallels on how telcos could make better use of their vast physical assets.

Table of Contents

  • Executive Summary
  • Introduction
  • Amazon: Coordinating convenience
    • Amazon’s physical footprint
    • Comparing Amazon with telcos
    • Whatever you want, however you want it
    • Exploiting end-user devices
    • Better consumer apps
  • How telcos can better harness their assets
    • Partnerships with Amazon


Cloud 2020: Telcos’ Role, Scenarios and Forecast

Introduction: The Cloud in 2016

STL Partners developed our comprehensive ‘forward-view scenarios’ on the evolving cloud services market, and the role of telcos within this market, back in 2012[1].  Times have certainly moved on.  In 2016, the cloud has become an established part of the IT industry. The key cloud providers – Amazon.com, Microsoft, Google, Facebook – are seeing dramatic revenue growth and (at least in Amazon Web Services’ case) unexpectedly strong margins in the 25-30% range.

Estimates of server shipments and revenue suggest that, so far, the growth of the cloud is a blue-ocean phenomenon. In other words, rather than cloud services supplanting on-premises data centres, the market for computing power is growing fast enough that the cloud is mostly additional to them. Enterprises’ consumption of computing has risen dramatically as its price has fallen – and the cloud is the preferred method for delivering these additional services.

Since our last major cloud report in 2012, there have been some major shifts in the market.

  • Public cloud – think Amazon Elastic Compute Cloud (EC2) – has grown enormously, and to some extent subsumed part of the private cloud segment, as the public clouds have added more and more features. For example, Amazon EC2 offers “Reserved Instances”, rather like a dedicated server – these “allow you to reserve Amazon EC2 computing capacity for 1 or 3 years, in exchange for a significant discount (up to 75%) compared to On-Demand instance pricing”[2]. EC2 also offers extensive “virtual private cloud” support, as does Microsoft Azure. This support has essentially put an end to the virtual private cloud as an industry segment.
  • Platform-as-a-service (PaaS) has, as we predicted, become less important compared with infrastructure-as-a-service (IaaS), as the latter has added more and more PaaS-like convenience.
  • Traditional managed-hosting providers, for their part, have begun to deliver managed hosting services in a “cloud-like”, programmatic, on-demand fashion, via the so-called “bare metal cloud”. Iliad’s Scaleway product is a notable example here.
  • Meanwhile, enterprise IT departments who choose to retain their own infrastructure are increasingly likely to do it by creating their own private clouds. Open-source software, like OpenStack, and open hardware like the Open Compute Project and OpenFlow, make this an increasingly attractive option.
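The Reserved Instance economics quoted in the first bullet reduce to simple arithmetic. In the sketch below the hourly rate is hypothetical; only the “up to 75%” discount comes from the AWS description quoted above.

```python
# Sketch of the Reserved Instance trade-off described above. The $0.10/hour
# on-demand rate is a hypothetical figure for illustration; the 75% figure is
# the best-case discount quoted in AWS's Reserved Instance description.

HOURS_PER_YEAR = 24 * 365  # 8,760 hours

def annual_cost(on_demand_hourly: float, discount: float = 0.0) -> float:
    """Annual cost of one always-on instance at a given discount
    off the on-demand hourly rate."""
    return on_demand_hourly * (1 - discount) * HOURS_PER_YEAR

on_demand = annual_cost(0.10)                 # pay-as-you-go, no commitment
reserved = annual_cost(0.10, discount=0.75)   # best-case 1- or 3-year reservation

print(round(on_demand))  # 876
print(round(reserved))   # 219
```

For a steady, always-on workload the reservation looks like a dedicated server at a fraction of the on-demand price, which is why such features have eroded the standalone virtual private cloud segment.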

The upshot for telcos has in general been pretty bleak. In the volume-dominated public cloud market, they’ve failed to achieve significant scale; while the various niche cloud services markets have largely either been subsumed by the public cloud, or been served better by the open-source ecosystem. Telcos’ focus on enterprise cloud and (in most cases) on reselling VMware’s technology as their core PaaS offering has rendered them vulnerable to severe competition. Enterprises could serve themselves better thanks to open source, while the public clouds’ engineering excellence and use of open source projects has allowed them to progress faster and address developers’ (the key buyers’) needs better.

However, as we discuss below, the big four cloud companies still only account for about half the total spending. The niche opportunities in cloud remain very real, and there are still potential opportunities for telcos who offer compelling technical and product differentiation.

STL’s cloud scenarios from 2012, revisited

In 2012, STL Partners identified three scenarios for the future of cloud, in our market overview report.

“Menacing Stormcloud”: this scenario essentially envisioned a world in which hyperscale data centre infrastructure just kept getting better. As a result, the cloud majors would eventually take over, probably also cannibalising the on-premises and private cloud markets. This would require cloud customers to bite the bullet and trust the cloud, whatever security and privacy issues might arise. Prices, but also margins, would be hammered into the ground by sheer scale economics.  In “Menacing Stormcloud”, AWS and its rivals would dominate the cloud market, and little would be left in terms of telco opportunities.

“Cloudburst”: our second scenario postulated that the cloud was a technology bubble and the bubble would do what all bubbles do – burst. Some triggering event – perhaps a security crisis, or a major cloud customer deciding to scale out – would bring home the downside risks to the investing public and the customer base. Investors would dump the sector, bankruptcies would ensue, and interest would move on, whether to a new generation of on-premises solutions or to a revived interest in P2P systems. In “Cloudburst”, both the cloud and the data centre in its current form would end up being much less relevant, and cloud opportunities for telcos (as well as other players) would accordingly be very limited.

“Cloud Layers”: this scenario foresaw a division between a hard core of hyperscale public cloud providers – dominated by AWS and its closest competitors – and a periphery of special-purpose, regional, private, and otherwise differentiated cloud providers.  This latter group would include telcos, CDNs, software-as-a-service providers, and enterprise in-house IT departments. We noted that this was the option that had the best chance of offering telcos a significant opportunity to address the cloud market.

Looking at the market in 2016, “Cloud Layers” has turned out to be closest to the current reality. The cloud has certainly not burst, as we postulated in our second scenario. As for the first “Menacing Stormcloud” scenario, the public cloud majors have indeed become very dominant, but the price collapse this scenario envisioned has not ensued. Even the price leader, AWS, has only returned about half the cost-savings derived from technical advances (what we would call the annual ‘Moore’s law increment’) to its customers through its pricing, capturing the rest into margin.

Further, although there have been exits from the market, the exiting providers have not been niche cloud providers or traditional managed hosting providers.  Rather, we have seen exits by players who have made unsuccessful attempts to compete in hyperscale. HP’s closure of its Helion Public Cloud product, Facebook’s closure of its Parse mobile developer PaaS, and the resounding lack of results for Verizon’s $1.4bn spent on Terremark, are cases in point.

Looking at the operators who managed to find a niche in the “Cloud Layers” scenario – such as AT&T[3], Telstra[4], or Iliad[5] – an important common factor has been their commitment to owning their technology and building in-house expertise, and using this to differentiate themselves from “big cloud”. AT&T’s network-integrated cloud strategy is driven by both using open-source software as far as possible, and investing in the key open-source projects by contributing code back to them. Iliad introduced the first full bare-metal cloud, using a highly innovative ARM-based microserver it developed in-house. Telstra is bringing much more engineering back in-house, in support of its distinctive role as the preferred partner for all the major clouds in Australia.


 

Table of contents

  • Executive Summary
  • Introduction: The Cloud in 2016
  • STL’s cloud scenarios from 2012, revisited
  • How much are we talking here?
  • Competitive Developments in Cloud Services, 2012-2016
  • Understanding the strategies of the non-telco cloud players
  • Most Telcos’ Cloud Initiatives Haven’t Worked
  • The Dash-for-Scale failed (because it wasn’t ‘hyperscale’)
  • Only the disruptors made any money
  • Too little investment in cloud innovation resources, and too much belief in marketing reach as a differentiator
  • Cloud innovation is demanding: the case of AT&T
  • Cloud 2.0 Scenarios 2016-2020
  • Scenario 1: Cumulonimbus – tech and Internet players’ global cloud oligopoly
  • Scenario 2: Cirro-cumulus – a core of big cloud players, plus specialists and DIY enterprises
  • Scenario 3: Disruptive 5G lightning storm fuses the Cloud with the Network
  • Conclusion

 

Table of figures

Figure 1: 2016 Forecasts of cloud market size through 2020
Figure 2: Forecasting the adoption of cloud
Figure 3: Our revised cloud services spending forecast: still a near-trillion dollar opportunity, even though IT spending slows
Figure 4: Our forecast in context
Figure 5: Public IaaS leads the way, with AWS and Microsoft
Figure 6: IaaS is forecast to grow as a share of the total Cloud opportunity
Figure 7: All the profit at Amazon is in AWS
Figure 8: Moore’s law runs ahead of AWS pricing, and Amazon grows margins
Figure 9: Cloud is the new driver of growth at Microsoft
Figure 10: Google is still the fourth company in the cloud
Figure 11: AT&T’s cloud line-item is pulling further and further ahead of Verizon’s
Figure 12: STL world cloud spending forecast (recap)
Figure 13: Driver/indicator/barrier matrix for Cloud 2.0 scenarios