In this October 2022 update to STL Partners’ Telco Cloud Deployment Tracker, we present data and analysis on progress with deployments of vRAN and open RAN. It is fair to say that open RAN (virtualised AND disaggregated RAN) deployments have not happened at the pace that STL Partners and many others had forecast. In parallel, some very significant deployments and developments are occurring with vRAN (virtualised NOT disaggregated RAN). Is open RAN a networking ideal that is not yet, or never will be, deployed in its purest form?
In our Telco Cloud Deployment Tracker, we track deployments of three types of virtualised RAN:
Open RAN / O-RAN: Open, disaggregated, virtualised / cloud-native, with baseband (BU) functions distributed between a Central Unit (CU: control plane functions) and Distributed Unit (DU: data plane functions)
vRAN: Virtualised and distributed CU/DU, with open interfaces but implemented as an integrated, single-vendor platform
Cloud RAN (C-RAN): Single-vendor, virtualised / centralised BU, or CU only, with proprietary / closed interfaces
Cloud RAN is the most limited form of virtualised RAN: it is based on porting part or all of the functionality of the legacy, appliance-based BU into a Virtual Machine (VM). vRAN and open RAN are much more significant, in both technology and business-model terms, breaking open all parts of the RAN to more competition and opportunities for innovation. They are also cloud-native functions (CNFs) rather than VM-based.
2022 was meant to be the breakthrough year for open RAN: what happened?
Of the eight deployments of open RAN we were expecting to go live in 2022 (shown in the chart below), only three had done so by the time of writing.
Two of these were on the same network: Altiostar and Mavenir RAN platforms at DISH. The other was a converged Parallel Wireless 2G / 3G RAN deployment for Orange Central African Republic.
This is hardly the wave of 5G open RAN, macro-network roll-outs that the likes of Deutsche Telekom, Orange, Telefónica and Vodafone originally committed to for 2022. What has gone wrong?
Open RAN has come up against a number of thorny technological and operational challenges, which are well known to open RAN watchers:
integration challenges and costs
hardware performance and optimisation
immature ecosystem and unclear lines of accountability when things go wrong
unproven at scale, and absence of economies of scale
energy efficiency shortcomings
need to transform the operating model and processes
pressured 5G deployment and Huawei replacement timelines
absence of mature, open, horizontal telco cloud platforms supporting CNFs.
Over and above these factors, open RAN is arguably not essential for most of the 5G use cases it was expected to support.
This can be gauged by looking at some of the many open RAN trials that have not yet resulted in commercial deployments.
Global deployments of C-RAN, vRAN and open RAN, 2016 to 2023
Source: STL Partners
Previous telco cloud tracker releases and related research
In this July 2022 update to STL Partners’ Telco Cloud Deployment Tracker, we present granular information on 5G core launches. They fall into three categories:
5G Non-standalone core (5G NSA core) deployments: The 5G NSA core (agreed as part of 3GPP Release 15 in December 2017) involves using a virtualised and upgraded version of the existing 4G core (or EPC) to support 5G New Radio (NR) wireless transmission in tandem with existing LTE services. This was the first form of 5G to be launched and still accounts for 75% of all 5G core network deployments in our Tracker.
5G Standalone core (5G SA core) deployments: The SA core is a completely new and 5G-only core. It has a simplified, cloud-native and distributed architecture, and is designed to support services and functions such as network slicing, Ultra-Reliable Low-Latency Communications (URLLC) and enhanced Machine-Type Communications (eMTC, i.e. massive IoT). Our Tracker indicates that the upcoming wave of 5G core deployments in 2022 and 2023 will be mostly 5G SA core.
Converged 5G NSA/SA core deployments: This is when a dual-mode NSA and SA platform is deployed; in most cases, the NSA core results from the upgrade of an existing LTE core (EPC) to support 5G signalling and radio. The principle behind a converged NSA/SA core is the ability to orchestrate different combinations of containerised network functions, and to flip over automatically and dynamically from an NSA to an SA configuration, in tandem – for example – with other features and services such as Dynamic Spectrum Sharing and the needs of different network slices. For this reason, launching a converged NSA/SA platform is a marker of a more cloud-native approach than a simple 5G NSA launch. Ericsson is the most common vendor for this type of platform, with a handful of deployments coming from Huawei, Samsung and WorkingGroupTwo. Although interesting, converged 5G NSA/SA core deployments remain a minority (7% of all 5G core deployments over the 2018-2023 period), so most of our commentary focuses on 5G NSA and 5G SA core launches.
75% of 5G cores are still Non-standalone (NSA)
Global 5G core deployments by type, 2018–23
There is renewed activity in 5G core launches this year: the total number of 5G core deployments so far in 2022 (live and in progress) already stands at 49, above the 47 logged in the whole of 2021. Total 5G core deployments in 2022 will therefore settle somewhere between the 2021 level and the 2020 peak of 97.
5G, in one form or another, now exists in most places where it is both in demand and affordable, but there remain large economies where it has yet to be launched: Turkey, Russia and, most notably, India. It also remains to be launched in most of Africa.
In countries with 5G, the next phase of launches, which will see the migration of NSA to SA cores, has yet to take place on a significant scale.
To date, 75% of all 5G cores are NSA. However, 5G SA will outstrip NSA in terms of deployments in 2022 and represent 24 of the 49 launches this year, or 34 if one includes converged NSA/SA cores as part of the total.
All but one of the 5G launches announced for 2023 are standalone; they all involve Tier-1 MNOs including Orange (in its European footprint involving Ericsson and Nokia), NTT Docomo in Japan and Verizon in the US.
The upcoming wave of SA core (and open / vRAN) represents an evolution towards cloud-native
Cloud-native functions or CNFs are software designed from the ground up for deployment and operation in the cloud with:
Portability across any hardware infrastructure or virtualisation platform
Modularity and openness, with components from multiple vendors able to be flexibly swapped in and out based on a shared set of compute and OS resources, and open APIs (in particular, via software ‘containers’)
Automated orchestration and lifecycle management, with individual micro-services (software sub-components) able to be independently modified / upgraded, and automatically re-orchestrated and service-chained based on a persistent, API-based, ‘declarative’ framework (one which states the desired outcome, with the service chain organising itself to deliver the outcome in the most efficient way)
Compute, resource, and software efficiency: as a concomitant of the automated, lean and logically optimal characteristics described above, CNFs are more efficient (both functionally and in terms of operating costs) and consume fewer compute and energy resources.
Scalability and flexibility, as individual functions (for example, distributed user plane functions in 5G networks) can be scaled up or down instantly and dynamically in response to overall traffic flows or the needs of individual services (a minimal sketch of this declarative, scale-on-demand pattern follows this list)
Programmability, as network functions are now entirely based on software components that can be programmed and combined in a highly flexible manner in accordance with the needs of individual services and use contexts, via open APIs.
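To make the 'declarative' and scale-on-demand characteristics above more concrete, the short Python sketch below shows a desired-state specification and a reconciliation step that sizes a hypothetical user plane function (UPF) to match observed traffic. It is purely illustrative: the names (DesiredState, reconcile, target_load_per_replica) are invented for this example and do not correspond to any particular orchestrator's API.

from dataclasses import dataclass

@dataclass
class DesiredState:
    # The outcome the operator declares, not the steps to achieve it (hypothetical fields)
    function: str                   # e.g. a distributed 5G user plane function (UPF)
    min_replicas: int
    max_replicas: int
    target_load_per_replica: float  # e.g. Gbps of user-plane traffic per instance

def reconcile(desired: DesiredState, observed_load_gbps: float) -> int:
    # Return the replica count that satisfies the declared state for the observed load
    needed = max(1, round(observed_load_gbps / desired.target_load_per_replica))
    # Clamp to the declared bounds; placement and service chaining are left to the
    # platform, which is the essence of the declarative model described above
    return min(desired.max_replicas, max(desired.min_replicas, needed))

upf = DesiredState("upf", min_replicas=2, max_replicas=20, target_load_per_replica=5.0)
for load in (4.0, 23.0, 120.0):     # aggregate user-plane traffic in Gbps
    print(f"{load} Gbps -> {reconcile(upf, load)} replicas")

The operator declares only the outcome (replica bounds and a per-instance load target); how instances are placed, chained and upgraded is left to the platform, which is what distinguishes this model from imperative, box-by-box configuration.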
Previous telco cloud tracker releases and related research
Each new release of the tracker is global, but is accompanied by an analytical report which focusses on trends in given regions from time to time:
SK Telecom is the largest mobile operator in South Korea with a 42% share of the mobile market and is also a major fixed broadband operator. Its growth strategy is focused on 5G, AI and a small number of related business areas where it sees the potential for revenue to replace that lost from its core mobile business.
By developing applications based on 5G and AI it hopes to create additional revenue streams both for its mobile business and for new areas, as it has done in smart home and is starting to do for a variety of smart business applications. In 5G it is placing an emphasis on indoor coverage and edge computing as a basis for vertical industry applications. Its AI business is centred around NUGU, a smart speaker and a platform for business applications.
Its other main areas of business focus are media, security, ecommerce and mobility, but it is also active in other fields including healthcare and gaming.
The company takes an active role internationally in standards organisations and commercially, both in its own right and through many partnerships with other industry players.
It is a subsidiary of SK Group, one of the largest chaebols in Korea, which has interests in energy and oil. Chaebols are large family-controlled conglomerates which display a high level and concentration of management power and control. The ownership structures of chaebols are often complex owing to the many crossholdings between companies owned by chaebols and by family members. SK Telecom uses its connections within SK Group to set up ‘friendly user’ trials of new services, such as edge and AI.
While the largest part of the business remains in mobile telecoms, SK Telecom also owns a number of subsidiaries, mostly active in its main business areas, for example:
SK Broadband which provides fixed broadband (ADSL and wireless), IPTV and mobile OTT services
ADT Caps, a security business
IDQ, which specialises in quantum cryptography (security)
11st, an open market platform for ecommerce
SK Hynix, which manufactures memory semiconductors
Few of the subsidiaries are owned outright by SKT; it believes the presence of other shareholders can provide a useful source of further investment and, in some cases, expertise.
SKT was originally the mobile arm of KT, the national operator. It was privatised soon after establishing a cellular mobile network and subsequently acquired by SK Group, a major chaebol with interests in energy and oil, which now has a 27% shareholding. The government pension service owns an 11% share in SKT, Citibank 10%, and 9% is held by SKT itself. The chairman of SK Group has a personal holding in SK Telecom.
Following this introduction, the report comprises three main sections:
SK Telecom’s business strategy: range of activities, services, promotions, alliances, joint ventures, investments, which covers:
Mobile 5G, Edge and vertical industry applications, 6G
AI and applications, including NUGU and Smart Homes
New strategic business areas, comprising Media, Security, eCommerce, and other areas such as mobility
Business performance
Industrial and national context.
Overview of SKT’s activities
Network coverage
SK Telecom has been one of the earliest and most active telcos to deploy a 5G network. It initially created 70 5G clusters in key commercial districts and densely populated areas to ensure a level of coverage suitable for augmented reality (AR) and virtual reality (VR) and plans to increase the number to 240 in 2020. It has paid particular attention to mobile (or multi-access) edge computing (MEC) applications for different vertical industry sectors and plans to build 5G MEC centres in 12 different locations across Korea. For its nationwide 5G Edge cloud service it is working with AWS and Microsoft.
In recognition of the constraints imposed by the spectrum used by 5G, it is also working on ensuring good indoor 5G coverage in some 2,000 buildings, including airports, department stores and large shopping malls as well as small-to-medium-sized buildings using distributed antenna systems (DAS) or its in-house developed indoor 5G repeaters. It also is working with Deutsche Telekom on trials of the repeaters in Germany. In addition, it has already initiated activities in 6G, an indication of the seriousness with which it is addressing the mobile market.
NUGU, the AI platform
SKT launched its own AI-driven smart speaker, NUGU, in 2016/17, and is using it to support consumer applications such as Smart Home and IPTV. There are now eight versions of NUGU for consumers and it also serves as a platform for other applications. More recently SKT has developed several NUGU/AI applications for businesses and civil authorities in conjunction with 5G deployments. It also has an AI-based network management system named Tango.
Although NUGU initially performed well in the market, it seems likely that the subsequent launch of smart speakers by major global players such as Amazon and Google has had a strong negative impact on the product’s recent growth. The absence of published data supports this view: the company tends to report only good news unless required by law to do otherwise. SK Telecom has responded by developing variants of NUGU for children and other specialist markets and by making use of the NUGU AI platform for a variety of smart applications. In the absence of published information, it is not possible to form a view on the success of the NUGU variants, although the intent appears to be to attract young users and build on their brand loyalty.
It has offered smart home products and services since 2015/16. Its smart home portfolio has continually developed in conjunction with an increasing range of partners and is widely recognised as one of the two most comprehensive offerings globally, the other being Deutsche Telekom’s Qivicon. The service appears to be most successful in penetrating the new-build market through property developers.
NUGU is also an AI platform, which is used to support business applications. SK Telecom has also supported the SK Group by providing new AI/5G solutions and opening APIs to other subsidiaries including SK Hynix. Within the SK Group, SK Planet, a subsidiary of SK Telecom, is active in internet platform development and offers development of applications based on NUGU as a service.
Smart solutions for enterprises
SKT continues to experiment with and trial new applications which build on its 5G and AI applications for individuals (B2C), businesses and the public sector. During 2019 it established B2B applications, making use of 5G, on-prem edge computing, and AI, including:
Smart factory (real-time process control and quality control)
Smart hospital (NUGU for voice commands for patients, AR-based indoor navigation, facial recognition technology for medical workers to improve security, and investigation of the possible use of quantum cryptography in the hospital network)
Smart cities: e.g. an intelligent transportation system in Seoul, with links to vehicles via 5G, or SK Telecom’s T-Map navigation service for non-5G users.
It is too early to judge whether these B2B smart applications are a success, and we will continue to monitor progress.
Acquisition strategy
SK Telecom has been growing these new business areas over the past few years, both organically and by acquisition. Its entry into the security business has been entirely by acquisition, where it has bought new revenue to compensate for that lost in the core mobile business. It is too early to assess what the ongoing impact and success of these businesses will be as part of SK Telecom.
Acquisitions in general have a mixed record of success. SK Telecom’s usual approach of acquiring a controlling interest and investing in its acquisitions, but keeping them as separate businesses, is one which often, together with the right management approach from the parent, causes the least disruption to the acquired business and therefore increases the likelihood of longer-term success. It also allows for investment from other sources, reducing the cost and risk to SK Telecom as the acquiring company. Yet as a counterpoint to this, M&A in this style doesn’t help change practices in the rest of the business.
However, it has also shown willingness to change its position as and when appropriate, either by sale, or by a change in investment strategy. For example, through its subsidiary SK Planet, it acquired Shopkick, a shopping loyalty rewards business in 2014, but sold it in 2019, for the price it paid for it. It took a different approach to its activity in quantum technologies, originally set up in-house in 2011, which it rolled into IDQ following its acquisition in 2018.
SKT has also recently entered into partnerships and agreements concerning the following areas of business:
Fixed Wireless Access (FWA) networks use a wireless “last mile” link for the final connection of a broadband service to homes and businesses, rather than a copper, fibre or coaxial cable into the building. Provided mostly by WISPs (Wireless Internet Service Providers) or mobile network operators (MNOs), these services come in a wide range of speeds, prices and technology architectures.
Some FWA services are just a short “drop” from a nearby pole or fibre-fed hub, while others can work over distances of several kilometres or more in rural and remote areas, sometimes with base station sites backhauled by additional wireless links. WISPs can either be independent specialists, or traditional fixed/cable operators extending reach into areas they cannot economically cover with wired broadband.
There is a fair amount of definitional vagueness about FWA. The most expansive definitions include cheap mobile hotspots (“Mi-Fi” devices) used in homes, or various types of enterprise IoT gateway, both of which could easily be classified in other market segments. Most service providers don’t give separate breakouts of deployments, while regulators and other industry bodies report patchy and largely inconsistent data.
Our view is that FWA is firstly about providing permanent broadband access to a specific location or premises. Primarily, this is for residential wireless access to the Internet and sometimes typical telco-provided services such as IPTV and voice telephony. In a business context, there may be a mix of wireless Internet access and connectivity to corporate networks such as VPNs, again provided to a specific location or building.
A subset of FWA relates to M2M usage, for instance private networks run by utility companies for controlling grid assets in the field. These are typically not Internet-connected at all, and so don’t fit most observers’ general definition of “broadband access”.
Usually, FWA is marketed as a specific service and package by a network provider, typically including the terminal equipment (“CPE” – customer premise equipment), rather than allowing the user to “bring their own” device. That said, lower-end (especially 4G) offers may be SIM-only deals intended to be used with generic (and unmanaged) portable hotspots.
There are some examples of private network FWA, such as a large caravan or trailer park with wireless access provided from a central point, and perhaps in future municipal or enterprise cellular networks giving fixed access to particular tenant structures on-site – for instance to hangars at an airport.
FWA today
Today, fixed-wireless access (FWA) is used for perhaps 8-9% of broadband connections globally, although this varies significantly by definition, country and region. There are various use cases (see below), but generally FWA is deployed in areas without good fixed broadband options, or by mobile-only operators trying to add an additional fixed revenue stream, where they have spare capacity.
Fixed wireless internet access fits specific sectors and uses, rather than the overall market
Source: STL Partners
FWA has traditionally been used in sparsely populated rural areas, where the economics of fixed broadband are untenable, especially in developing markets without existing fibre transport to towns and villages, or even copper in residential areas. Such networks have typically used unlicensed frequency bands, as there is limited interference – and little financial justification for expensive spectrum purchases. In most cases, such deployments use proprietary variants of Wi-Fi, or its ill-fated 2010-era sibling WiMAX.
Increasingly, however, FWA is being used in more urban settings, and in more developed market scenarios – for example during the phase-out of older xDSL broadband, or in places with limited or no competition between fixed-network providers. Some cellular networks primarily intended for mobile broadband (MBB) have been used for fixed usage as well, especially if spare capacity has been available. 4G has already catalysed rapid growth of FWA in numerous markets, such as South Africa, Japan, Sri Lanka, Italy and the Philippines – and 5G is likely to make a further big difference in coming years. These deployments mostly rely on licensed spectrum, typically the national bands owned by major MNOs. In some cases, specific bands are dedicated to FWA rather than shared with normal mobile broadband. This allows appropriate “dimensioning” of network elements, and clearer cost-accounting for management.
Historically, most FWA has required an external antenna and professional installation on each individual house, although it also gets deployed for multi-dwelling units (MDUs, i.e. apartment blocks) as well as some non-residential premises like shops and schools. More recently, self-installed indoor CPE with varying levels of price and sophistication has helped broaden the market, enabling customers to get terminals at retail stores or delivered direct to their home for immediate use.
Looking forward, the arrival of 5G mass-market equipment and larger swathes of mmWave and new mid-band spectrum – both licensed and unlicensed – is changing the landscape again, with the potential for fibre-rivalling speeds, sometimes at gigabit-grade.
Table of contents
Executive Summary
Introduction
FWA today
Universal broadband as a goal
What’s changed in recent years?
What’s changed because of the pandemic?
The FWA market and use cases
Niche or mainstream? National or local?
Targeting key applications / user groups
FWA technology evolution
A broad array of options
Wi-Fi, WiMAX and close relatives
Using a mobile-primary network for FWA
4G and 5G for WISPs
Other FWA options
Customer premise equipment: indoor or outdoor?
Spectrum implications and options
The new FWA value chain
Can MNOs use FWA to enter the fixed broadband market?
People in China and South Korea are buying 5G phones by the million, far more than initially expected, yet many western telcos are moving cautiously. Will your company also find demand? What’s the smart strategy while uncertainty remains? What actions are needed to lead in the 5G era? What questions must be answered?
The report is informed by talks with executives of over three dozen companies and email contacts with many more, including 21 of the first 24 telcos that have deployed. This report covers considerations for the next three years (2020–2023) based on what we know today.
“Seize the 5G opportunity” says Ke Ruiwen, Chairman, China Telecom, and Chinese reports claimed 14 million sales by the end of 2019. Korea announced two million subscribers in July 2019 and by December 2019 approached five million. By early 2020, the Korean carriers were confident that 30% of the market would be using 5G by the end of 2020. In the US, Verizon is selling 5G phones even in areas without 5G service. With nine phone makers looking for market share, the price in China is US$285–$500 and falling, so the handset price barrier seems to be coming down fast.
Yet in many other markets, operators’ progress is significantly more tentative. So what is going on, and what should you do about it?
5G technology works OK
22 of the first 24 operators to deploy are using mid-band radio frequencies.
Vodafone UK claims “5G will work at average speeds of 150–200 Mbps.” Speeds are typically 100 to 500 Mbps, rarely a gigabit. Latency is about 30 milliseconds, only about a third better than decent 4G. Mid-band reach is excellent. Sprint has demonstrated that simply upgrading existing base stations can provide substantial coverage.
5G has a draft business case now: people want to buy 5G phones. New use cases are mostly years away but the prospect of better mobile broadband is winning customers. The costs of radios, backhaul, and core are falling as five system vendors – Ericsson, Huawei, Nokia, Samsung, and ZTE – fight for market share. They’ve shipped over 600,000 radios. Many newcomers are gaining traction, for example Altiostar won a large contract from Rakuten and Mavenir is in trials with DT.
The high cost of 5G networks is an outdated myth. DT, Orange, Verizon, and AT&T are building 5G while cutting or keeping capex flat. Sprint’s results suggest a smart build can quickly reach half the country without a large increase in capital spending. Instead, the issue for operators is that it requires new spending with uncertain returns.
The technology works, mostly. Mid-band is performing as expected, with typical speeds of 100–500 Mbps outdoors, though indoor performance is less clear. mmWave performance indoors is badly degraded. Some SDN, NFV, and other tools for automation have reached the field. However, 5G upstream is in limited use: many carriers are combining 5G downstream with 4G upstream for now. Meanwhile, each base station currently requires much more power than a 4G base station, which leads to high opex. Dynamic spectrum sharing, which allows 5G to share otherwise unneeded 4G spectrum, is still in test. Many features of SDN and NFV are not yet ready.
So what should companies do? The next sections review go-to-market lessons, status on forward-looking applications, and technical considerations.
Early go-to-market lessons
Don’t oversell 5G
The continuing publicity for 5G is proving powerful, but variable. Because some customers are already convinced they want 5G, marketing and advertising do not always need to emphasise the value of 5G. For those customers, make clear why your company’s offering is the best compared to rivals’. However, the draw of 5G is not universal. Many remain sceptical, especially if their past experience with 4G has been lacklustre. They – and also a minority swayed by alarmist anti-5G rhetoric – will need far more nuanced and persuasive marketing.
Operators should be wary of overclaiming. 5G speed, although impressive, currently has few practical applications that don’t already work well over decent 4G. Fixed home broadband is a possible exception here. As the objective advantages of 5G in the near future are likely to be limited, operators should not hype features that are unrealistic today, no matter how glamorous. If you don’t have concrete selling propositions, do image advertising or use happy customer testimonials.
Table of Contents
Executive Summary
Introduction
5G technology works OK
Early go-to-market lessons
Don’t oversell 5G
Price to match the experience
Deliver a valuable product
Concerns about new competition
Prepare for possible demand increases
The interdependencies of edge and 5G
Potential new applications
Large now and likely to grow in the 5G era
Near-term applications with possible major impact for 5G
Mid- and long-term 5G demand drivers
Technology choices, in summary
Backhaul and transport networks
When will 5G SA cores be needed (or available)?
5G security? Nothing is perfect
Telco cloud: NFV, SDN, cloud native cores, and beyond
AI and automation in 5G
Power and heat
When we published the report 5G: The First Three Years in December 2018, we identified that most of the hype – from autonomous cars to surgeons operating from the beach – is at best several years from significant volume. There are no “killer apps” in sight. Telco growth from 5G deployments will be based on greater capacity, lower cost and customer willingness to buy.
If carrier revenue doesn’t rise, the pressure to cut costs will grow
For the last five years, carrier revenue has been almost flat in most countries and we believe this trend is likely to continue.
STL Partners forecasts less than 1% CAGR in telecoms revenues
Source: STL Partners
In our 5G Strategies report series, STL Partners set out to establish what 5G actually offers that will enable carriers to make more money in the next few years.
It builds on STL Partners’ previous insights into 5G, including:
The report explores the most recent activities in 5G by operators, vendors, phone makers and chipmakers.
High-level takeaways from initial 5G deployments
This section provides a high-level overview of the current efforts and activities of select telcos around the world. Broadly, it shows that almost all are pushing ahead on 5G, some much faster than others.
Korea is the world’s most advanced 5G market, with two million Koreans having bought 5G phones by July.
Korea’s 3.5 GHz networks typically deliver download speeds of 100–500 Mbps. SK Telecom and KT are using Samsung equipment; LG Uplus is mostly using Huawei. There is little evidence that either vendor has demonstrated superior performance. Korea’s government, supported by the operators, decided that speeding ahead on 5G would bring valuable prestige and boost the Korean economy. Korea expects to have 200,000 radios in place by the end of 2019, compared with BT, which anticipates fewer than 2,500.
China Mobile has confirmed Huawei’s estimate that the price of 5G phones will fall to under US$300 in 2020, which will stimulate a sharp increase in demand.
The Chinese and the Koreans are investing heavily in augmented and virtual reality and games for 5G. This will take time to mature.
Verizon has taken a radical approach to simplifying its core and transport network, partly in preparation for 5G but more generally to improve its cost of delivery. This simplification has allowed it to maintain and even cut some CAPEX investments while delivering performance improvements.
5G mmWave at 28 GHz works and often delivers a gigabit. The equipment is of modest size and cost. However, the apparent range of around 200 metres is disappointing (Verizon has not confirmed the range, but there is evidence it is short). Verizon expects better range.
Sprint’s 160 MHz of spectrum at 2.5 GHz gives it remarkably wide coverage at 100–500 Mbps download speeds. Massive MIMO (multiple-input, multiple-output with 64 or more antennas) at 2.5 GHz works so well that Sprint is achieving great coverage without adding many small cells.
Etisalat (UAE) shows that any country that can afford it can deliver 5G today. Around the Gulf, Ooredoo (Kuwait, Qatar), Vodafone (Qatar), du Telecom (UAE) and STC (Saudi Arabia) are speeding construction to avoid falling behind.
BT claims it will “move quickly” and turn on 100 cells per month (relatively few in comparison to Korea). BT’s website also claims that 5G has a latency of <1 ms, but the first measured latency is 31 ms. At Verizon, latency tests are often a little better than the announced 30 ms. Edge networks, if deployed, can cut the latency by about half. A faster air interface, Ultra-Reliable Low-Latency Communication (URLLC), expected around 2023, could shave off another 5-7 ms. The business case for URLLC is unproven and it remains to be seen how widely it will be deployed. In the rest of this section we look at these and other operators in a little more detail.
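As a rough reckoner, the latency figures quoted above stack up as follows. This is illustrative arithmetic only, using the numbers in this section (31 ms measured, edge computing cutting latency by about half, URLLC shaving another 5-7 ms):

measured_latency_ms = 31.0                    # first measured 5G latency cited above
with_edge_ms = measured_latency_ms / 2        # edge networks cut latency by about half
with_edge_and_urllc_ms = (with_edge_ms - 7, with_edge_ms - 5)   # URLLC shaves another 5-7 ms
print(f"Edge only: ~{with_edge_ms:.1f} ms; edge plus URLLC: ~{with_edge_and_urllc_ms[0]:.1f}-{with_edge_and_urllc_ms[1]:.1f} ms")

Even on these optimistic assumptions, getting below 10 ms depends on both edge deployment and URLLC, which helps explain why the URLLC business case remains unproven.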
Live commercial 5G deployments globally, August 2019
This is the best available information on 5G deployments globally as of August 2019, gathered from both public and private sources. We have excluded operators that have announced 5G launches, but where services are not yet available for consumers to buy, such as AT&T in the US and Deutsche Telekom in Germany.
Table of contents
Executive Summary
Introduction
If carrier revenue doesn’t rise, the pressure to cut costs will grow
Operators
High-level takeaways
European operators
Asia Pacific and Middle Eastern operators
North America
Phone makers
5G system vendors
Datang
Samsung
Ericsson
Huawei
Nokia
Chip makers
Qualcomm
Samsung
Intel
MediaTek
Huawei-HiSilicon
Conclusions: (Almost) all systems go
Who, among telecoms operators, are 5G leaders? Verizon Wireless is certainly among the most enthusiastic proponents.
On October 1, 2018, Verizon turned on the world’s first major 5G network. It is spending US$20 billion to offer 30 million homes millimetre wave 5G, often at speeds around a gigabit. One of the first homes in Houston “clocked speeds of 1.3 gigabits per second at 2,000 feet.” CEO Vestberg expects to cover the whole country by 2028, some with 3.5 GHz.
5G: The first three years cuts through the hype and confusion to provide the industry a clear picture of the likely future. A companion report, 5G smart strategies, explores how 5G helps carriers make more money and defeat the competition.
This report was written by Dave Burstein with substantial help from Andrew Collinson and Dean Bubley.
What is 5G?
In one sense, 5G is just a name for all the new technologies now being widely deployed. It’s just better mobile broadband. It will not change the world anytime soon.
There are two very different flavours of 5G:
Millimetre wave: offers about 3X the capacity of mid-band or the best 4G. Spectrum used is from 20 GHz to over 60 GHz. Verizon’s mmWave system is designed to deliver 1 gigabit downloads to most customers and 5 gigabits shared. 26 GHz in Europe & 28 GHz in the U.S. are by far the most common.
Low and mid-band: uses 4G hardware and “New Radio” software. It is 60-80% less capable on average than millimetre wave and very similar in performance to 4G TD-LTE. 3.3 GHz – 4.2 GHz is by far the most important band.
To begin, a few examples.
5G leaders are deploying millimetre wave
Verizon’s is arguably the most advanced 5G network in the world today. Perhaps most surprisingly, the “smart build” is keeping costs so low that capital spending is coming down. Verizon’s trials found millimetre wave performance much better than expected. In some cases, 5G capacity allowed reducing the number of cells.
Verizon will sell fixed wireless outside its incumbent territory. It has ~80 million customers out of district. Goldman Sachs estimates it will add 8 million fixed wireless subscribers by 2023, more than paying for the buildout.
Verizon CEO Hans Vestberg says he believes mmWave capacity will allow very attractive offerings that will win customers away from the competition.
What are the other 5G leaders doing?
Telefónica Deutschland has similar plans, hoping to blow open the German market with mmWave to a quarter of the country. Deutsche Telekom and Vodafone are sticking with the much slower mid-band 5G and could be clobbered.
Most 5G will be slower low- and mid-band, formerly called 4G
80% or more of 5G worldwide over the next three years will not be high-speed mmWave. Industry group 3GPP decided early in 2018 to call anything running New Radio software “5G.” In practice, almost any currently shipping 4G radio can add the software and be called “5G.” The software was initially said to raise capacity by between 10% and 52%; that is still 60% to 80% slower than mmWave, although improved 4G technology has probably cut the difference by more than half. It has been called “faux 5G” and “5G minus,” but few make the distinction. T-Mobile USA promises 5G to the entire country by 2020 without a large investment. Neville Ray is blanketing the country with 4G in 20 MHz of the new 600 MHz band. That doesn’t require many more towers due to the long reach of low frequencies. T-Mobile will add NR software for a marketing push.
In an FCC presentation, Ray said standalone T-Mobile will have very wide 5G coverage but at relatively low speeds. Over 85% of users will connect at less than 100 megabits. The median “5G” connection will be 40-70 megabits. Some users will only get 10-20 megabits, compared to a T-Mobile average today of over 30 megabits. Aggregating 600 MHz NR with other T-Mobile bands now running LTE would be much faster, but has not been demonstrated.
While attesting to the benefits of the T-Mobile-Sprint deal, Ray claimed that using Sprint spectrum at 2500 MHz and 11,000 Sprint towers will make for a far more robust offering by 2024. 10% of this would be mmWave.
In the final section of this report, I discuss 5G smart strategy: “5G” is a magic marketing term. It will probably sell well even if 4G speeds are similar. The improved sales can justify a higher budget.
T-Mobile Germany promises nationwide 5G by 2025. That will be 3.5 GHz mid-band, probably using 100 MHz of spectrum. Germany has just set aside 400 MHz of spectrum at 3.5 GHz. DT, using 100 MHz of 3.5 GHz, will deliver 100–400 megabit downloads to most.
100–400 megabits is faster than much of T-Mobile’s DSL. It will soon add fixed wireless in some rural areas. In addition, T-Mobile is selling a combined wireless and DSL router. The router uses the DSL line by preference but can also draw on the wireless connection when the user requires more speed.
China has effectively defined itself as a 5G leader through its government’s clear direction to the operators. China Mobile plans two million base stations running at 2.5 GHz, which has much better reach than radio in the 3.5 GHz spectrum. In addition, the Chinese telcos have been told to build a remarkable edge network. Minister Miao Wei wants “90% of China within 25 ms of a server.” That’s extremely ambitious, but the Chinese have delivered miracles before. 344 million Chinese have fibre to the home, most of it built in four years.
Telus, Canada’s second incumbent, carefully studied the coming 5G choices in 2016. The decision was to focus capital spending on more fibre in the interim: 2016 was too early to make 5G plans, but a strong fibre network would be crucial. Verizon also invested heavily in fibre in 2016 and 2017, which is now speeding 5G to market. Like Verizon, Telus sees the fibre paying off in many ways. It is doing fibre to the home, wireless backhaul, and service to major corporations. CEO Darren Entwistle spoke at length in November 2018 about the company’s future 5G, including the importance of its large fibre build, although he has not yet announced anything specific.
There is a general principle that if it’s too early to invest in 5G, it’s a good idea to build as much fibre as you can in the interim.
Benefits of 5G technology
More broadband capacity and speed. Most of the improvement in capacity comes from accessing more bandwidth through carrier aggregation and from many-antenna (massive) MIMO. Massive MIMO has shipped as part of 4G since 2016 and carrier aggregation goes back to 2013. All 5G phones work on 4G as well, connecting as 4G where there is no 5G signal.
Millimetre wave roughly triples capacity. Low and mid-band 5G runs on the same hardware as 4G. The only difference to 4G is NR software, which adds only modestly to capacity.
Drastically lower cost per bit. Verizon CEO Lowell McAdam said, “5G will deliver a megabit of service for about 1/10th of what 4G does.”
Reduced latency. 1 ms systems will mostly remain in the labs for several more years, but Verizon’s and other systems deliver a latency from the receiver to the cell of about 10 milliseconds. For practical purposes, latency should be considered 15 ms to 50 ms or more, unless and until large “edge servers” are installed. Only China is likely to do that in the first three years.
The following will have a modest effect, at most, in the next three years: autonomous cars, remote surgery, AR/VR, drones, IoT, and just about all the great things promised beyond faster and cheaper broadband. Some are bogus; others are not likely to develop in our period. 5G leaders will need to capitalise on near-term benefits.
Contents:
Executive Summary
Some basic timelines
What will 5G deliver?
What will 5G be used for?
Current plans reviewed in the report
Introduction
What is 5G?
The leaders are deploying millimetre wave
Key dates
What 5G and advanced 4G deliver
Six things to know
Six myths
5G “Smart Build” brings cost down to little more than 4G
5G, Edge, Cable and IoT
Edge networks in 5G
“Cable is going to be humongous” – at least in the U.S.
IoT and 5G
IoT and 5G: Does anyone need millions of connections?
Current plans of selected carriers (5G leaders)
Who’s who
Phone makers
The system vendors
Chip makers
Spectrum bands in the 5G era
Millimetre wave
A preview of 5G smart strategies
How can carriers use 5G to make more money?
The cold equations of growth
Figures:
Figure 1: 20 years of NTT DOCOMO capex
Figure 2: Verizon 5G network plans
Figure 3: Qualcomm’s baseband chip and radio frequency module
Figure 4: Intel 5G chip – Very limited 5G production capability until late 2019
Figure 5: Overview of 5G spectrum bands
Figure 6: 5G experience overview
Figure 7: Cisco VNI forecast of wireless traffic growth between 2021–2022
The ‘Internet of Things’ first appeared as a marketing term in 1999 when it was applied to improved supply-chain strategies, leveraging the then hot-topics of RFID and the Internet.
Industrial engineers planned to use miniaturised RFID tags to track many different types of asset, especially relatively low-cost ones. However, their dependency on accessible RFID readers constrained their zonal range. This also confined many such applications to the enterprise sector and to well-defined geographic footprints.
Modern versions of RFID labelling have expanded the addressable market through barcode and digital watermarking approaches, for example, while mobile has largely removed the zonal constraint. In fact, mobile’s economies of scale have ushered in a relatively low-cost technology building block in the form of radio modules with local processing capability. These modules allow machines and sensors to be monitored and remotely managed over mobile networks. This is essentially the M2M market today.
M2M remained a specialist, enterprise-sector application for a long time. It relied on niche systems integration and hardware development companies, often delivering one-off or small-scale deployments. For many years, growth in the M2M market did not meet expectations for faster adoption, and this is visible in analyst forecasts which repeatedly time-shifted the adoption curve. Figure 1 below, for example, illustrates successive M2M forecasts for the 2005-08 period (before M2M began to take off) as analysts tried to forecast when M2M module shipment volumes would breach the 100m units/year hurdle:
Although the potential of remote connectivity was recognised, it did not become a high-volume market until the GSMA brought about an alignment of interests, across mobile operators, chip- and module-vendors, and enterprise users by targeting mobile applications in adjacent markets.
The GSMA’s original Embedded Mobile market development campaign made the case that connecting devices and sensors to (Internet) applications would drive significant new use cases and sources of value. However, in order to supply economically viable connected devices, the cost of embedding connectivity had to drop. This meant:
Educating the market about new opportunities in order to stimulate latent demand
Streamlining design practices to eliminate many layers of implementation costs
Promoting adoption in high-volume markets such as automotive, consumer health and smart utilities to drive economies of scale, in the same manner that led to the mass adoption of mobile phones
The late 2000s proved to be a turning point for M2M, with the market now achieving scale (c. 189m connections globally as of January 2014) and growing at an impressive rate (c. 40% per annum).
From M2M to the Internet of Things?
Over the past 5 years, companies such as Cisco, Ericsson and Huawei have begun promoting radically different market visions to those of ‘traditional M2M’. These include the ‘Internet of Everything’ (that’s Cisco), a ‘Networked Society’ with 50 billion cellular devices (that’s Ericsson), and a ‘Cellular IoT’ with 100 billion devices (that’s Huawei).
Figure 2: Ericsson’s Promise: 50 billion connected ‘things’ by 2020
Source: Ericsson
Ericsson’s calculation builds on the idea that there will be 3 billion “middle class consumers”, each with 10 M2M devices, plus personal smartphones, industrial, and enterprise devices. In promoting such visions, the different market evangelists have shifted market terminology away from M2M and towards the Internet of Things (‘IoT’).
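As a back-of-envelope check on that calculation (illustrative only – Ericsson does not itemise the remainder, so it is shown here simply as a residual):

middle_class_consumers = 3e9
m2m_devices_per_consumer = 10
consumer_m2m = middle_class_consumers * m2m_devices_per_consumer   # 30 billion consumer M2M devices
headline_total = 50e9
residual = headline_total - consumer_m2m   # ~20 billion for smartphones, industrial and enterprise devices
print(f"Consumer M2M: {consumer_m2m / 1e9:.0f}bn; smartphones plus industrial/enterprise: ~{residual / 1e9:.0f}bn")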
The transition towards IoT has also had consequences beyond terminology. Whereas M2M applications were previously associated with internal-to-business, operational improvements, IoT offers far more external market prospects. In other words, connected devices allow a company to interact with its customers beyond its strict operational boundaries. In addition, standalone products can now deliver one or more connected services: for example, a connected bus can report on its mechanical status, for maintenance purposes, as well as its location to deliver a higher quality, transit service.
Another consequence of the rise of IoT relates to the way that projects are evaluated. In the past, M2M applications tended to be justified on RoI criteria. Nowadays, there is a broader, commercial recognition that IoT opens up new avenues of innovation, efficiency gains and alternative sources of revenue: it was this recognition, for example, that drove Google’s $3.2 billion valuation of Nest (see the Connected Home EB).
In contrast to RFID, the M2M market required companies in different parts of the value chain to share a common vision of a lower cost, higher volume future across many different industry verticals. The mobile industry’s success in scaling the M2M market now needs to adjust for an IoT world. Before examining what these changes imply, let us first review the M2M market today, how M2M service providers have adapted their business models and where this positions them for future IoT opportunities.
M2M Today: Geographies, Verticals and New Business Models
Headline: M2M is now an important growth area for MNOs
The M2M market has now evolved into a high volume and highly competitive business, with leading telecoms operators and other service providers (so-called ‘M2M MVNOs’ e.g. KORE, Wyless) providing millions of cellular (and fixed) M2M connections across numerous verticals and applications.
Specifically, 428 MNOs were offering M2M services across 187 countries by January 2014 – 40% of mobile network operators – and providing 189 million cellular connections. The GSMA estimates the number of global connections to be growing by about 40% per annum. Figure 3 below shows that as of Q4 2013 China Mobile was the largest player by connections (32 million), with AT&T second largest but only half the size.
Figure 3: Selected leading service providers by cellular M2M connections, Q4 2013
Source: Various, including GSMA and company accounts, STL Partners, More With Mobile
Unsurprisingly, these millions of connections have also translated into material revenues for service providers. Although MNOs typically do not report M2M revenues (and many do not even report connections), Verizon reported $586m in ‘M2M and telematics’ revenues for 2014, growing 47% year-on-year, during its most recent earnings call. Moreover, analysis from the Telco 2.0 Transformation Index also estimates that Vodafone Group generated $420m in revenues from M2M during its 2013/14 March-March financial year.
However, these numbers need to be put in context: whilst $500m growing 40% YoY is encouraging, this still represents only a small percentage of these telcos’ revenues – c. 0.5% in the case of Vodafone, for example.
Figure 4: Vodafone Group enterprise revenues, implied forecast, FY 2012-18
Figure 4 uses data provided by Vodafone during 2013 on the breakdown of its enterprise line of business and grows these at the rates which Vodafone forecasts the market (within its footprint) to grow over the next five years – 20% YoY revenue growth for M2M, for example. Whilst only indicative, Figure 4 demonstrates that telcos need to sustain high levels of growth over the medium- to long-term and offer complementary, value added services if M2M is to have a significant impact on their headline revenues.
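To illustrate the point with indicative arithmetic only (not Vodafone's own projection): take the c. $420m of M2M revenue estimated above for FY 2013/14, grow it at the forecast 20% per year, and hold the implied group revenue base (derived from the 'c. 0.5%' share) flat as a simplifying assumption.

m2m_revenue_m = 420.0                             # US$ millions, FY 2013/14 estimate cited above
implied_group_revenue_m = m2m_revenue_m / 0.005   # ~US$84bn if M2M is c. 0.5% of group revenue
for year in range(1, 6):                          # five years of 20% compound growth
    m2m_revenue_m *= 1.20
    print(f"Year +{year}: M2M ~${m2m_revenue_m:,.0f}m, ~{m2m_revenue_m / implied_group_revenue_m:.1%} of group revenue")

Even after five years of 20% compound growth, M2M connectivity alone would still account for little more than 1% of group revenue on these assumptions, which is why the complementary, value-added services mentioned above matter.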
To do this, telcos essentially have three ways to refine or change their business model:
Improve their existing M2M operations: e.g. new organisational structures and processes
Move into new areas of M2M: e.g. expansion along the value chain; new verticals/geographies
Explore the Internet of Things: e.g. new service innovation across verticals and including consumer-intensive segments (e.g. the connected home)
To provide further context, the following section examines where M2M has focused to date (geographically and by vertical). This is followed by an analysis of specific telco activities in 1, 2 and 3.
Executive Summary
Introduction
From RFID in the supply chain to M2M today
From M2M to the Internet of Things?
M2M Today: Geographies, Verticals and New Business Models
Headline: M2M is now an important growth area for MNOs
In-depth: M2M is being driven by specific geographies and verticals
New Business Models: Value network innovation and new service offerings
The Emerging IoT: Outsiders are raising the opportunity stakes
The business models and profitability potentials of M2M and IoT are radically different
IoT shifts the focus from devices and connectivity to data and its use in applications
New service opportunities drive IoT value chain innovation
New entrants recognise the IoT-M2M distinction
IoT is not the end-game
‘Digital’ and IoT convergence will drive further innovation and new business models
In 2013, it looked like Samsung Electronics could challenge Apple’s hegemony at the high-end of the handset market. The Korean giant’s flagship Galaxy smartphones were selling well and were equipped with features, such as large high definition displays and NFC, which Apple’s iPhones lacked.
But in 2014, Samsung’s Galaxy range lost some of its lustre – the latest flagship model, the S5, amounts to a fairly modest evolution of its predecessor, the S4. The Galaxy S5 underwhelmed some reviewers, who criticised its look and feel, hefty price tag and erratic fingerprint sensor. Meanwhile, Apple launched two new high-spec handsets – the iPhone 6 and iPhone 6 Plus. These phones markedly close the hardware gap and fill a significant hole in Apple’s portfolio by venturing into the so-called phablet market, which sits between smartphones and tablets. Now that Apple has grasped consumers’ desire for larger form factors and bigger displays, Samsung may struggle to hold on to high-end buyers.
After out-innovating Apple in some respects in recent years, Samsung is now on the back foot again. While Apple is broadly back to parity in terms of hardware, Samsung continues to trail the Californian company in terms of software and services. Most reviewers still regard the iPhone as the gold standard when it comes to user experience.
It is now well understood that the iPhone’s lead is largely down to Apple’s absolute control over hardware and software. Samsung and other vendors selling handsets running Google’s Android operating system have struggled to achieve the slick integration between hardware and software exemplified by Apple’s iPhones. Samsung has often exacerbated this issue by presenting customers with a confusing mix of overlapping Google and Samsung apps on its Galaxy handsets.
Samsung’s Annus Miserabilis
In the second quarter of 2014, research firm IDC estimates that Samsung shipped more than 18 million Galaxy S5s, along with nine million S3 and S4 units. That implies Samsung shipped 27 million models in its flagship Galaxy S range, compared with 35 million iPhones distributed by Apple. For the third quarter, IDC didn’t break out Galaxy sales, but the research firm flagged “cooling demand for [Samsung’s] high-end devices,” adding: “Although Samsung has long relied on its high-end devices, its mid-range and low-end models drove volume for the quarter and subsequently drove down average selling prices.”
But Samsung can’t afford to cede more of the high end of the market to Apple. The Korean giant is facing increasingly intense competition from low cost Chinese manufacturers in the low end and the mid-range segments of the handset market. The net result has been a marked decline in Samsung’s market share and falling revenues. As the global smartphone market has expanded to serve people in lower income groups, both Samsung and Apple have lost market share to the likes of Xiaomi, Lenovo and Huawei. But Samsung is suffering far more than Apple, whose devices are squarely aimed at the affluent (see Figure 3).
Figure 3: Samsung’s share of the global smartphone market is sliding
Source: IDC research
Worse still for Samsung, the decline in average selling prices is hitting its top line, damaging profitability and its ability to realise economies of scale. In terms of revenues, Apple is now almost as large as Samsung Electronics’ three divisions combined and is much bigger than Samsung’s information technology and mobile (IM) division, which competes directly with Apple (see Figure 4).
Figure 4: Apple is now generating almost as much revenue as Samsung Electronics
Source: Financial results, Apple guidance and analyst estimates captured by www.4-traders.com
The declining performance of Samsung’s IM division has had a major impact on Samsung Electronics’ profitability. The Korean group’s operating margin is slipping back towards 10%, whereas Apple’s operating margin has stabilised at about 28%, after sliding in 2013, when it faced particularly intense competition from Samsung and the broader Android ecosystem (see Figure 5).
Figure 5: Samsung’s margins are low and going lower
Source: Financial results, Apple guidance and analyst estimates captured by www.4-traders.com
Although Samsung Electronics still generates slightly more revenue than Apple, the U.S. company is likely to make more than double the operating profit of its Korean rival in 2014 (see Figure 6).
Figure 6: Apple’s operating profits are set to be more than double those of Samsung
Source: Financial results, Apple guidance and analyst estimates captured by www.4-traders.com
Naturally, declining operating profits mean lower net profits and a less attractive proposition for investors. Samsung clearly needs to avoid slipping into a downward spiral where low profitability prevents it from investing in the research and development and the manufacturing capacity it will need to compete effectively with Apple at the high end. Apple is now generating about $20 billion more in net income than Samsung each year, meaning it has far more financial firepower than its main rival, together with a virtual blank cheque from investors (see Figure 7).
Figure 7: The gap between Apple and Samsung’s financial firepower is widening
Source: Financial results, Apple guidance and analyst estimates captured by www.4-traders.com
Samsung should also be concerned about competition from Microsoft at the high end of the market. Another company with a surplus of cash, Microsoft has a strong strategic interest in creating compelling smartphones and tablets to shore up its position in the business software market. Now that it is developing both software and hardware in-house, Microsoft may yet be able to create smartphones that provide a better user experience than many Android handsets.
In summary, Samsung’s flagging performance in the smartphone market is having a major impact on the financial performance of the group. There could be worse to come. If Samsung concedes more of the premium end of the smartphone market to Apple and possibly Microsoft, it risks competing solely on price in the low and mid segments, where its expertise in display technology and semiconductors won’t enable it to add significant value. Samsung’s margins would erode further and it would be in danger of going into the terminal decline experienced by the likes of Nokia and Motorola, which have also both led the mobile phone market in the past.
An implosion by Samsung would have grave consequences for telcos and their primary suppliers. Aside from Microsoft, the Korean conglomerate is the only company in the smartphone and tablet markets that has the resources to provide credible global competition for Apple. Although the leading Chinese smartphone makers are strong in emerging markets, they lack the brand cachet and the marketing skills to mount a serious challenge to Apple in North America and Western Europe.
Internet-Driven Disruption
Introduction
Executive Summary
Samsung: slipping and sliding
How will Samsung respond?
The opportunities for Samsung in the smartphone market
The threats to Samsung in the smartphone market
Samsung’s next steps
Apple isn’t impregnable
Conclusions and implications for telcos
About STL Partners
Figure 1 – Apple financial firepower far outstrips that of Samsung Electronics
Figure 2 – How Samsung could shore up its position in the smartphone market
Figure 3 – Samsung’s share of the global smartphone market is sliding
Figure 4 – Apple is now generating almost as much revenue as Samsung Electronics
Figure 5 – Samsung’s margins are low and going lower
Figure 6 – Apple’s operating profits are set to be more than double those of Samsung
Figure 7 – The gap between Apple and Samsung’s financial firepower is widening
Figure 8 – SWOT analysis of Samsung at the high end of the smartphone market
Figure 9 – Samsung Electronics is the largest investor in tech R&D worldwide
Figure 10 – Apple’s expanding portfolio is making life tougher for Samsung
Figure 11 – Potential strategic actions for Samsung in the smartphone market
Figure 12 – SWOT analysis of Apple in the smartphone market
Figure 13 – Potential strategic actions for Apple in the smartphone market
On January 13th 2014, Google announced its acquisition of Nest Labs for $3.2bn in cash consideration. Nest Labs, or ‘Nest’ for short, is a home automation company founded in 2010 and based in California which manufactures ‘smart’ thermostats and smoke/carbon monoxide detectors. Prior to this announcement, Google already had an approximately 12% equity stake in Nest following its Series B funding round in 2011.
Google is known as a prolific investor and acquirer of companies: during 2012 and 2013 it spent $17bn on acquisitions alone, which was more than Apple, Microsoft, Facebook and Yahoo combined (at $13bn). Google has even been known to average one acquisition per week for extended periods of time. Nest, however, was not just any acquisition. For one, whilst the details of the acquisition were being ironed out, Nest was separately in the process of raising a new round of investment which implicitly valued it at c. $2bn. Google, therefore, appears to have paid a premium of over 50%.
This analysis can be extended by examining the transaction from three different, but complementary, angles.
Google + Nest: why it’s an interesting and important deal
Firstly, looking at Nest’s market capitalisation relative to its established competitors suggests that its long-run growth prospects are seen to be very strong
At the time of the acquisition, estimates placed Nest as selling 100k of its flagship product (the ‘Nest Thermostat’) per month. With each thermostat retailing at c. $250, this put its revenue at approximately $300m per annum. Now, looking at the ratio of Nest’s market capitalisation to revenue compared to two of its established competitors (Lennox and Honeywell) tells an interesting story:
Figure 1: Nest vs. competitors’ market capitalisation to revenue
Source: Company accounts, Morgan Stanley
Such a disparity suggests that Nest’s long-run growth prospects, in terms of both revenue and free cash flow, are believed to be substantially higher than the industry average.
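To make the arithmetic behind these figures explicit, the short sketch below reconstructs Nest’s implied revenue and valuation multiples purely from the numbers quoted above (unit sales, retail price, the c. $2bn funding-round valuation and the $3.2bn deal price); the competitors’ multiples themselves come from Figure 1 and are not reproduced here.

```python
# Back-of-envelope check of Nest's implied revenue and valuation multiples,
# using only the figures quoted in the text (all values are approximations).

UNITS_PER_MONTH = 100_000        # estimated Nest Thermostat unit sales per month
RETAIL_PRICE_USD = 250           # approximate retail price per thermostat
FUNDING_ROUND_VALUATION = 2.0e9  # valuation implied by the pre-deal funding round
ACQUISITION_PRICE = 3.2e9        # price Google actually paid

annual_revenue = UNITS_PER_MONTH * 12 * RETAIL_PRICE_USD  # ~USD 300m per annum

print(f"Estimated annual revenue: ${annual_revenue / 1e6:.0f}m")
print(f"Valuation / revenue (funding round): {FUNDING_ROUND_VALUATION / annual_revenue:.1f}x")
print(f"Valuation / revenue (deal price):    {ACQUISITION_PRICE / annual_revenue:.1f}x")
```

Whether one uses the funding-round valuation (roughly 7x revenue) or the price Google actually paid (over 10x), the multiple is several times what mature building-controls businesses typically command, which is the kind of disparity Figure 1 illustrates.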
Secondly, looking at Google’s own market capitalisation suggests that the capital markets see considerable value in (and synergies from) its acquisition of Nest
Prior to the deal’s announcement, Google’s share price was oscillating around the $560 mark. Following the acquisition, Google’s share price began averaging closer to $580. On the day of the announcement itself, Google’s share price increased from $561 to $574 which, crucially, reflected a $9bn increase in market capitalisation. In other words, the value placed on Google by the capital markets increased by an amount equal to nearly 300% of the deal’s value. This is shown in Figure 2 below:
Figure 2: Google’s share price pre- and post-Nest acquisition
Source: Google Finance
This implies that the capital markets either see Google as being well positioned to add unique value to Nest, Nest as being able to strongly complement Google’s existing activities, or both.
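A quick cross-check of that claim is sketched below, using only the share prices and deal value quoted above; the implied share count is backed out from those numbers rather than taken from Google’s filings, so treat it as indicative.

```python
# Cross-check of the market-capitalisation arithmetic around the announcement,
# using only the share prices and deal value quoted in the text.

PRICE_BEFORE = 561.0   # Google share price before the announcement (USD)
PRICE_AFTER = 574.0    # Google share price after the announcement (USD)
MCAP_INCREASE = 9.0e9  # reported rise in Google's market capitalisation (USD)
DEAL_VALUE = 3.2e9     # price paid for Nest (USD)

implied_shares = MCAP_INCREASE / (PRICE_AFTER - PRICE_BEFORE)

print(f"Share price move: {100 * (PRICE_AFTER / PRICE_BEFORE - 1):.1f}%")
print(f"Implied shares outstanding: {implied_shares / 1e6:.0f}m")
print(f"Market-cap gain vs. deal value: {100 * MCAP_INCREASE / DEAL_VALUE:.0f}%")
```

An implied share count of roughly 690 million is broadly consistent with Google’s share base at the time, and the resulting ratio of around 281% is the “nearly 300%” figure cited above.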
Thirdly, viewing the Nest acquisition in the context of Google’s historic and recent M&A activity shows both its own specific financial significance and the changing face of Google’s acquisitions more generally
At $3.2bn, the acquisition of Nest represents Google’s second largest acquisition of all time. The largest was its purchase of Motorola Mobility in 2011 for $12.5bn, but Google has since reached a deal to sell the majority of its assets (excluding its patent portfolio) to Lenovo for $2.9bn. In other words, Nest is soon to become Google’s largest active, inorganic investment. Google’s ten largest acquisitions, as well as some smaller but important ones, are shown in Figure 3 below:
Figure 3: Selected acquisitions by Google, 2003-14
Source: Various
Beyond its size, the Nest acquisition also continues Google’s recent trend of acquiring companies seemingly less directly related to its core business. For example, it has been investing in artificial intelligence (DeepMind Technologies), robotics (Boston Dynamics, Industrial Perception, Redwood Robotics) and satellite imagery (Skybox Imaging).
Three questions raised by Google’s acquisition of Nest
George Geis, a professor at UCLA, claims that Google develops a series of metrics at an early stage which it later uses to judge whether or not the acquisition has been successful. He further claims that, according to these metrics, Google on average rates two-thirds of its acquisitions as successful. This positive track record, combined with the sheer size of the Nest deal, suggests that the obvious question here is also an important one:
What is Nest’s business model? Why did Google spend $3.2bn on Nest?
Nest’s products, the Nest Thermostat and the Nest Protect (smoke/carbon monoxide detector), sit within the relatively young space referred to as the ‘connected home’, which is defined and discussed in more detail here. One natural question following the Nest deal is whether Google’s high-profile involvement and backing of a (leading) company in the connected home space will accelerate its adoption. This suggests the following, more general, question:
What does the Nest acquisition mean for the broader connected home market?
Finally, there is a question to be asked around the implications of this deal for Telcos and their partners. Many Telcos are now active in this space, but they are not alone: internet players (e.g. Google and Apple), big technology companies (e.g. Samsung), utilities (e.g. British Gas) and security companies (e.g. ADT) are all increasing their involvement too. With different strategies being adopted by different players, the following question follows naturally:
What does the Nest acquisition mean for telcos?
Executive Summary
Introduction
Google + Nest: why it’s an interesting and important deal
Three questions raised by Google’s acquisition of Nest
Understanding Nest and Connected Homes
Nest: reinventing everyday objects to make them ‘smart’
Nest’s future: more products, more markets
A general framework for connected home services
Nest’s business model, and how Google plans to get a return on its $3.2bn investment
Domain #1: Revenue from selling Nest devices is of only limited importance to Google
Domain #2: Energy demand response is a potentially lucrative opportunity in the connected home
Domain #3: Data for advertising is important, but primarily within Google’s broader IoT ambitions
Domain #4: Google also sees Nest as partial insurance against IoT-driven disruption
Domain #5: Google is pushing into the IoT to enhance its advertising business and explore new monetisation models
Implications for Telcos and the Connected Home
The connected home is happening now, but customer experience must not be overlooked
Telcos can employ a variety of monetisation strategies in the connected home
Conclusions
Figure 1: Nest vs. competitors’ market capitalisation relative to revenue
Figure 2: Google’s share price, pre- and post-Nest acquisition
Figure 3: Selected acquisitions by Google, 2003-14
Figure 4: The Nest Thermostat and Protect
Figure 5: Consumer Electronics vs. Electricity Spending by Market
Figure 6: A connected home services framework
Figure 7: Nest and Google Summary Motivation Matrix
In this note, we compare the recent performance of three US fixed operators who have adopted contrasting strategies and technology choices: AT&T, Verizon, and Comcast. We specifically focus on their NGA (Next-Generation Access) triple-play products, for the excellent reason that they themselves focus on these to the extent of increasingly abandoning the subscriber base outside their footprints. We characterise these strategies, attempt to estimate typical subscriber bundles, discuss their future options, and review the situation in the light of a “Deep Value” framework.
A Case Study in Deep Value: The Lessons from Apple and Samsung
Deep value strategies concentrate on developing assets that will be difficult for any plausible competitor to replicate, in as many layers of the value chain as possible. A current example is the way Apple and Samsung – rather than Nokia, HTC, or even Google – came to dominate the smartphone market.
It is now well known that Apple, despite its image as a design-focused company whose products are put together by outsourcers, has invested heavily in manufacturing throughout the iOS era. Although the first-generation iPhone was largely assembled from parts designed by its suppliers, in many ways it should be considered a large-scale pilot project. Starting with the iPhone 3GS, the proportion of Apple’s own content in the devices rose sharply, thanks to the acquisition of PA Semiconductor, but also to heavy investment in the supply chain.
Not only did Apple design and pilot-produce many of the components it wanted, it also bought them from suppliers in advance, and bought the machine tools those suppliers would need, often long in advance, to lock up the supply. But this wasn’t just a tactical effort to deny componentry to its competitors. It was also a strategic effort to create manufacturing capacity.
In pre-paying for large quantities of components, Apple provides its suppliers with the capital they need to build new facilities. In pre-paying for the machine tools that will go into those facilities, it finances the machine-tool manufacturers and gains a say in their development plans, thus ensuring the availability of the right machinery. Apple even invents tools itself and then has them manufactured for the future use of its suppliers.
Samsung is of course both Apple’s biggest competitor and its biggest supplier. It combines these roles precisely because it is a huge manufacturer of electronic components. Concentrating on its manufacturing supply chain both enables it to produce excellent hardware, and also to hedge the success or failure of the devices by selling componentry to the competition. As with Apple, doing this is very expensive and demands skills that are both in short supply, and sometimes also hard to define. Much of the deep value embedded in Apple and Samsung’s supply chains will be the tacit knowledge gained from learning by doing that is now concentrated in their people.
The key insight for both companies is that industrial and user-experience design is highly replicable, and patent protection is relatively weak. The same is true of software. Apple had a deeply traumatic experience with the famous Look and Feel lawsuit against Microsoft, and some people have suggested that the supply-chain strategy was deliberately intended to prevent something similar happening again.
Certainly, the shift to this strategy coincides with the launch of Android, which Steve Jobs at least perceived as a “stolen product”. Arguably, Jobs repeated Apple’s response to Microsoft Windows, suing everyone in sight, with about as much success, whereas Tim Cook in his role as the hardware engineering and then supply-chain chief adopted a new strategy, developing an industrial capability that would be very hard to replicate, by design.
Three Operators, Three Strategies
AT&T
The biggest issue any fixed operator has faced since the great challenges of privatisation, divestment, and deregulation in the 1980s is that of managing the transition from a business that basically provides voice on a copper access network to one that basically provides Internet service on a co-ax, fibre, or possibly wireless access network. This, at least, has been clear for many years.
AT&T is the original telco – at least, AT&T likes to be seen that way, as shown by their decision to reclaim the iconic NYSE ticker symbol “T”. That obscures, however, how much has changed since the divestment and the extremely expensive process of mergers and acquisitions that patched the current version of the company together. The bit examined here is the AT&T Home Solutions division, which owns the fixed-line ex-incumbent business, also known as the merged BellSouth and SBC businesses.
AT&T, like all the world’s incumbents, deployed ADSL at the turn of the 2000s, thus getting into the ISP business. Unlike most world incumbents, in 2005 it got a huge regulatory boost when the Martin FCC, in the wake of the Supreme Court’s Brand X ruling, declared that broadband Internet service was not a telecommunications service for regulatory purposes. This permitted US fixed operators to take back the Internet business they had been losing to independent ISPs. As such, they were able to cope with the transition while concentrating on the big-glamour areas of M&A and wireless.
As the 2000s advanced, it became obvious that AT&T needed to look at the next move beyond DSL service. The option taken was what became U-Verse, a triple-play product which consists of:
Either ADSL, ADSL2+, or VDSL, depending on copper run length and line quality
Plus IPTV
And traditional telephony carried over IP.
This represents a minimal approach to the transition – the network upgrade requires new equipment in the local exchanges, or Central Offices in US terms, and in street cabinets, but it does not require the replacement of the access link, nor any trenching.
This minimisation of capital investment is especially important, as it was also decided that U-Verse would not deploy into areas where the copper might need investment to carry it. These networks would eventually, it was hoped, be either sold or closed and replaced by wireless service. U-Verse was therefore, for AT&T, in part a means of disposing of regulatory requirements.
It was also important that the system closely coupled the regulated domain of voice with the unregulated, or at least only potentially regulated, domain of Internet service and the either unregulated or differently regulated domain of content. In many ways, U-Verse can be seen as a content first strategy. It’s TV that is expected to be the primary replacement for the dwindling fixed voice revenues. Figure 1 shows the importance of content to AT&T vividly.
Figure 1: U-Verse TV sales account for the largest chunk of Telco 2.0 revenue at AT&T, although M2M is growing fast
This sounds like one of the telecoms-as-media strategies of the late 1990s. However, it should be clearly distinguished from, say, BT’s drive to acquire exclusive sports content and to build up a brand identity as a “channel”. U-Verse does not market itself as a “TV channel” and does not buy exclusive content – rather, it is a channel in the literal sense, a distributor through which TV is sold. We will see why in the next section.
The US TV Market
It is well worth remembering that TV is a deeply national industry. Steve Jobs famously described it as “balkanised” and as a result didn’t want to take part. Most metrics vary dramatically across national borders, as do qualitative observations of structure. (Some countries have a big public sector broadcaster, like the BBC or indeed Al-Jazeera, to give a basic example.) Countries with low pay-TV penetration can be seen as ones that offer greater opportunities, it being usually easier to expand the customer base than to win share from the competition (a “blue ocean” versus a “red sea” strategy).
However, it is also true that pay-TV in general is an easier sell in a market where most TV viewers already pay for TV. It is very hard to convince people to pay for a product they can obtain free.
In the US, there is a long-standing culture of pay-TV, originally with cable operators and more recently with satellite (DISH and DirecTV), IPTV or telco-delivered TV (AT&T U-Verse and Verizon FiOS), and subscription OTT (Netflix and Hulu). It is also a market characterised by heavy TV usage (an average household has 2.8 TVs). Out of the 114.2 million homes (96.7% of all homes) receiving TV, according to Nielsen, there are some 97 million receiving pay-TV via cable, satellite, or IPTV, a penetration rate of 85%. This is the largest and richest pay-TV market in the world.
In this sense, it ought to be a good prospect for TV in general, with the caveat that a “Sky Sports” or “BT Sport” strategy based on content exclusive to a distributor is unlikely to work. This is because typically, US TV content is sold relatively openly in the wholesale market, and in many cases, there are regulatory requirements that it must be provided to any distributor (TV affiliate, cable operator, or telco) that asks for it, and even that distributors must carry certain channels.
Rightsholders have backed a strategy based on distribution over one based on exclusivity, on the principle that the customer should be given as many opportunities as possible to buy the content. This also serves the interests of advertisers, who by definition want access to as many consumers as possible. Hollywood has always aimed to open new releases on as many cinema screens as possible, and it is the movie industry’s skills, traditions, and prejudices that shaped this market.
As a result, it is relatively easy for distributors to acquire content, but difficult for them to generate differentiation by monopolising exclusive content. In this model, differentiation tends to accrue to rightsholders, not distributors. For example, although HBO maintains the status of being a premium provider of content, consumers can buy it from any of AT&T, Verizon, Comcast, any other cable operator, satellite, or direct from HBO via an OTT option.
However, pay-TV penetration is high enough that any new entrant (such as the two telcos) is committed to winning share from other providers, the hard way. It is worth pointing out that the US satellite operators DISH and DirecTV concentrated on rural customers who aren’t served by the cable MSOs. At the time, their TV needs weren’t served by the telcos either. As such, they were essentially greenfield deployments, the first pay-TV propositions in their markets.
The biggest change in US TV in recent times has been the emergence of major new distributors, the two RBOCs and a range of Web-based over-the-top independents. Figure 2 summarises the situation going into 2013.
Figure 2: OTT video providers beat telcos, cablecos, and satellite for subscriber growth, at scale
The two biggest classes of distributors saw either a marginal loss of subscribers (the cablecos) or a marginal gain (satellite). The two groups of (relatively) new entrants, as you’d expect, saw much more growth. However, the OTT players are both bigger and much faster growing than the two telco players. It is worth pointing out that this mostly represents additional TV consumption, typically, people who already buy pay-TV adding a Netflix subscription. “Cord cutting” – replacing a primary TV subscription entirely – remains rare. In some ways, U-Verse can be seen as an effort to do something similar, upselling content to existing subscribers.
Competing for the Whole Bundle – Comcast and the Cable Industry
So how is this option doing? The following chart, Figure 3, shows that in terms of overall service ARPU, AT&T’s fixed strategy is delivering inferior results to those of its main competitors.
Figure 3: Cable operators lead the way on ARPU. Verizon, with FiOS, is keeping up
The interesting point here is that Time Warner Cable is doing less well than some of its cable industry peers. Comcast, the biggest, claims a $159 monthly ARPU for triple-play customers, and it probably has a higher density of triple-players than the telcos. More representatively, it also quotes a figure of $134 monthly average revenue per customer relationship, including single- and double-play customers. We have used this figure throughout this note. TWC, in general, is more content-focused and less broadband-focused than Comcast, having taken much longer to roll out DOCSIS 3.0. But is that important? After all, aren’t cable operators all about TV? Figure 4 shows clearly that broadband and voice are now just as important to cable operators as they are to telcos. The distinction is increasingly just a historical quirk.
Figure 4: Non-video revenues – i.e. Internet service and voice – are the driver of growth for US cable operators
Source: NCTA data, STL Partners
As we have seen, TV in the USA is not a differentiator because everyone’s got it. Further, it’s a product that doesn’t bring differentiation but does bring costs, as the rightsholders exact their share of the selling price. Broadband and voice are different – they are, in a sense, products the operator makes in-house. Most have to buy the tools (except Free.fr which has developed its own), but in any case the operator has to do that to carry the TV.
The differential growth rates in Figure 4 represent a substantial change in the ISP industry. Traditionally, the Internet engineering community tended to look down on cable operators as glorified TV distribution systems. This is no longer the case.
In the late 2000s, cable operators concentrated on improving their speeds and increasing their capacity. They also pressed their vendors and standardisation forums to practice continuous improvement, creating a regular upgrade cycle for DOCSIS firmware and silicon that lets them stay one (or more) jumps ahead of the DSL industry. Some of them also invested in their core IP networking and in providing a deeper and richer variety of connectivity products for SMB, enterprise, and wholesale customers.
Comcast is the classic example of this. It is a major supplier of mobile backhaul, high-speed Internet service (and also VoIP) for small businesses, and a major actor in the Internet peering ecosystem. An important metric of this change is that since 2009, it has transitioned from being a downlink-heavy eyeball network to being a balanced peer that serves about as much traffic outbound as it receives inbound.
The key insight here is that, especially in an environment like the US where xDSL unbundling isn’t available, if you win a customer for broadband, you generally also get the whole bundle. TV is a valuable bonus, but it’s not differentiating enough to win the whole of the subscriber’s fixed telecoms spend – or to retain it, in the presence of competitors with their own infrastructure. It’s also of relatively little interest to business customers, who tend to be high-value customers.
Executive Summary
Introduction
A Case Study in Deep Value: The Lessons from Apple and Samsung
Three Operators, Three Strategies
AT&T
The US TV Market
Competing for the Whole Bundle – Comcast and the Cable Industry
Competing for the Whole Bundle II: Verizon
Scoring the three strategies – who’s winning the whole bundles?
SMBs and the role of voice
Looking ahead
Planning for a Future: What’s Up Cable’s Sleeve?
Conclusions
Figure 1: U-Verse TV sales account for the largest chunk of Telco 2.0 revenue at AT&T, although M2M is growing fast
Figure 2: OTT video providers beat telcos, cablecos, and satellite for subscriber growth, at scale
Figure 3: Cable operators lead the way on ARPU. Verizon, with FiOS, is keeping up
Figure 4: Non-video revenues – i.e. Internet service and voice – are the driver of growth for US cable operators
Figure 5: Comcast has the best pricing per megabit at typical service levels
Figure 6: Verizon is ahead, but only marginally, on uplink pricing per megabit
Figure 7: FCC data shows that it’s the cablecos, and FiOS, who under-promise and over-deliver when it comes to broadband
Figure 7: Speed sells at Verizon
Figure 8: Comcast and Verizon at parity on price per megabit
Figure 9: Typical bundles for three operators. Verizon FiOS leads the way
Figure 12: The impact of learning by doing on FTTH deployment costs during the peak roll-out phase
Our knowledge, employment opportunities, work itself, healthcare, potential partners, purchases from properties to groceries, and much else can now be delivered or managed via software and mobile apps.
So are we all becoming increasingly ‘Software Defined’? It’s a question stimulated in part by our research on ‘Software Defined Networks (SDN): A Potential Game Changer’ and Enterprise Mobility, by a video discussion between McKinsey and Eric Schmidt, Google’s Executive Chairman, and by a number of observations over the past year, particularly at this and last year’s Mobile World Congress (MWC).
But is software really the key?
The rapid adoption of smartphones and tablets, enabled by ever faster networks, is perhaps the most visible and tangible phenomenon in the market. Less visible but equally significant is the huge growth in ‘big data’ – the use of massive computing power to process types and volume of data that were previously inaccessible, as well as ‘small data’ – the increasing use of more personalised datasets.
However, what is now fuelling these trends is that many core life and business tools are now software of some form or another: programmes and ‘apps’ that create economic value, utility, fun or efficiency. Software is now the driving force; the evolving data and hardware are, respectively, by-products and enablers of the applications.
Software: your virtual extra hand
In effect, mobile software is the latest great tool in humanity’s evolutionary path. With nearly a quarter of the world’s population using a smartphone, the human race has never had so much computing power by its side in every moment of everyday life. Many feature phones also possess significant processing power, and the extraordinary reach of mobile can now deliver highly innovative solutions like mobile money transfer even in markets with relatively underdeveloped financial service infrastructure.
How we are educated, employed and cared for are all starting to change with the growing power of mobile technologies, and will all change further and with increasing pace in the next phase of the mobile revolution. Knowing how to get the best from this world is now a key life skill.
The way that software is used is changing and will change further. While mobile apps have become a mainstream consumer phenomenon in many markets in the last few years, the application of mobile, personalised technologies is also changing education, health, employment, and the very fabric of our social lives. For example:
Back at MWC 2013 we saw the following fascinating video from Ericsson as part of its ‘Networked Society’ vision of why education has evolved as it has (to mass-produce workers to work in factories), and what the possibilities are with advanced technology, which is well worth a few minutes of your time whether you have kids or not.
There is now a growing number of eHealth applications (heart rate, blood pressure, stroke and outpatient care), while productivity apps and the extension of CRM applications like Salesforce into the mobile working context are having an increasingly significant impact.
While originally a ‘fixed’ phenomenon, the way we meet and find partners has seen a massive change in recent years. For example, in the US, 17% of recent marriages and 20% of ‘committed relationships’ started in the $1Bn online dating world – another world which is now increasingly going mobile.
The growing sophistication in human-software interactivity
Horace Dediu pointed out at a previous Brainstorm that the disruptive jumps in mobile handset technology have come from changes in the user interface – most recently in the touch-screen revolution accompanying smartphones and tablets.
And the way in which we interact with the software will continue to evolve, from the touch screens of smartphones, through voice activation, gesture recognition, retina tracking, on-body devices like watches, in-body sensors in the blood and digestive system, and even potentially by monitoring brainwaves, as illustrated in the demonstration from Samsung labs shown in Figure 1.
Clearly, some of these techniques are still at an early stage of development. It is a hard call as to which will be the one to trigger the next major wave of innovation (e.g. see Facebook’s acquisition of Oculus Rift), as there are so many factors that influence the likely take-up of new technologies, from price through user experience to social acceptance.
Exploring and enhancing the senses
Interactive goggles / glasses such as Google Glass have now been around for over a year, and AR applications that overlay information from the virtual world onto images of the real world continue to evolve.
Search is also becoming a visual science – innovations such as Cortexica recognise everyday objects (cereal packets, cars, signs, advertisements, stills from a film, etc.) and return information on how and where you can buy the related items. While it works from a smartphone today, it makes it possible to imagine a world where you open the kitchen cupboard and tell your glasses what items you want to re-order.
Screens will be in increasing abundance, able to interact with passers-by on the street or with you in your home or car. What will be on these screens could be anything that is on any of your existing screens or more – communication, information, entertainment, advertising – whatever the world can imagine.
Segmented by OS?
But is it really possible to define a person by the software they use? There is certainly an ‘a priori’ segmentation originating from device makers’ segmentation and positioning:
Apple’s brand and design ethos have held consistently strong appeal for upmarket, creative users. In contrast, Blackberry for a long time held a strong appeal in the enterprise segment, albeit significantly weakened in the last few years.
It is perhaps slightly harder to label Android users, now the largest group of smartphone users. However, the openness of the software leads to freedom, bringing with it a plurality of applications and widgets, some security issues, and perhaps a greater emphasis on ‘work it out for yourself’.
Microsoft, once ubiquitous through its domination of the PC universe, now finds itself a challenger in the world of mobiles and tablets and, despite gradually improving sales and reported improvements in OS experience and design, has yet to find a clear identity, other than perhaps now being the domain of those willing to try something different. While Microsoft still has a strong hand in the software world through its evolving Office applications, these are not yet hugely mobile-friendly, and this is creating a niche for new players, such as Evernote and others, that have a more focused ‘mobile first’ approach.
Other segments
From a research perspective, there are many other approaches to thinking about what defines different types of user. For example:
In adoption, diffusion models such as Rogers’ adoption curve and the Bass Diffusion Model, e.g. Innovators, Early Adopters, Mass Market, Laggards;
Segments based on attitudes to usage, e.g. Lovers, Haters, Functional Users, Social Users, Cost Conscious, etc.;
Approaches to privacy and the use of personal data, e.g. Pragmatic, Passive, Paranoid.
It is tempting to hypothesise that there could be meta-segments combining these and other behavioural distinctions (e.g. you might theorise that there would be more ‘haters’ among the ‘laggards’ and the ‘paranoids’ than the ‘innovators’ and ‘pragmatics’), and there may indeed be underlying psychological drivers such as extraversion that drive people to use certain applications (e.g. personal communications) more.
However, other than anecdotal observations, we don’t currently have the data to explore or prove this. This knowledge may of course exist within the research and insight departments of major players and we’d welcome any insight that our partners and readers can contribute (please email contact@telco2.net if so).
Hypothesis: a ‘software fingerprint’?
The collection of apps and software each person uses, and how they use them, could be seen as a software fingerprint – a unique combination of tools showing interests, activities and preferences.
Human beings are complex creatures, and it may be a stretch to say a person could truly be defined by the software they use. However, there is a degree of cause and effect with software. Once you have the ability to use it, it changes what you can achieve. So while the software you use may not totally define you, it will play an increasing role in shaping you, and may ultimately form a distinctive part of your identity.
For example, Minecraft is a phenomenally successful and addictive game. If you haven’t seen it, imagine interactive digital Lego (or watch the intro video here). Children and adults all over the world play on it, make YouTube films about their creations, and share knowledge and stories from it as with any game.
To be really good at it, and to add enhanced features, players install ‘mods’ – essentially software upgrades, requiring the use of quite sophisticated codes and procedures, and the understanding of numerous file types and locations. So through this one game, ten year old kids are developing creative, social and IT skills, as well as exploring and creating new identities for themselves.
Figure 2: Minecraft – building, killing ‘creepers’ and coding by a kid near you
There are also two broad schools of thought in advanced IT design. One is that IT should augment human abilities and its application should always be controlled by its users. The other is the idea that IT can assist people by providing recommendations and suggestions that are outside the control of the user. An example of this second approach is Google showing you targeted ads based on your search history.
Being properly aware of this will become increasingly important to individuals’ freedom from unrecognised manipulation. Just as people have learned that embarrassing photos on Facebook may be seen by prospective employers, knowing who is pulling your data strings will be increasingly important to controlling one’s own destiny in the future.
Back to the law of the Jungle?
Many of the opportunities and abilities conferred by software seem perhaps trivial or entertaining. But some will ultimately confer advantages on their users over those who do not possess the extra information, gain those extra moments, or learn that extra winning idea. The questions are: which will you use well; and which will you enable others to use? The answer to the first may reflect your personal success, and the second that of your business.
So while it used to be that your genetics, parents, and education most strongly steered your path, now how you take advantage of the increasingly mobile cyber-world will be a key additional competitive asset. It’s increasingly what you use and how you use it (as well as who you know, of course) that will count.
And for businesses, competing in an ever more resource constrained world, the effective use of software to track and manage activities and assets, and give insight to underlying trends and ways to improve performance, is an increasingly critical competence. Importantly for telcos and other ICT providers, it’s one that is enabled and enhanced by cloud, big data, and mobile.
The Software as a Service (SaaS) application Salesforce is an excellent case in point. It can bring instantaneous data on customers and business operations to managers’ and employees’ fingertips on any device. This can confer huge advantages over businesses without such capabilities.
Figure 3: Salesforce delivers big data and cloud to mobile
$375Bn per annum Growth or Brutal Retrenchment? Which route will Telcos take?
Over the last three years, the Telco 2.0 Initiative has identified new business model growth opportunities for telcos of $375Bn p.a. in mature markets alone (see the ‘$125Bn Telco 2.0 ‘Two-Sided’ Market Opportunity’ and ‘New Mobile, Fixed and Wholesale Broadband Business Models’ Strategy Reports). In that time, most of the major operators have started to integrate elements of Telco 2.0 thinking into their strategic plans and some have begun to communicate these to investors.
But, as they struggle with the harsh realities of the seismic shift from being predominantly voice-centric to data-centric businesses, telcos now find themselves:
Facing rapidly changing consumer behaviours and powerful new types of competitors;
Investing heavily in infrastructure, without a clear payback;
Operating under less benign regulatory environments, which constrain their actions;
Being milked for dividends by shareholders, unable to invest in innovation.
As a result, far from yet realising the innovative growth potential we identified, many telcos around the world seem challenged to make the bold moves needed to make their business models sustainable, leaving them facing retrenchment and potentially ultimately utility status, while other players in the digital economy prosper.
In our new 284-page strategy report – ‘The Roadmap to Telco 2.0 Business Models’ – we describe the transformational path the telecoms industry needs to take to carve out a more valuable role in the evolving ‘digital economy’. Based on the output from 5 intensive senior executive ‘brainstorms’ attended by over 1000 industry leaders, detailed analysis of the needs of ‘upstream’ industries and ‘downstream’ end-user markets, and input from members and partners of the Telco 2.0 Initiative from across the world, the report specifically describes:
A new ‘Telco 2.0 Opportunity Framework’ for planning revenue growth;
The critical changes needed to telco innovation processes;
The strategic priorities and options for different types of telcos in different markets;
Best practice case studies of business model innovation.
The ‘Roadmap’ Report Builds on Telco 2.0’s Original ‘Two-Sided’ Telecoms Business Model
Source: The Roadmap to New Telco 2.0 Business Models
Who should read this report
The report is for strategy decision makers and influencers across the TMT (Telecoms, Media and Technology) sector. In particular, CxOs, Strategists, Technologists, Marketers, Product Managers, and Legal and Regulatory leaders in telecoms operators, vendors, consultants, and analyst companies. It will also be valuable to those managing or considering medium to long-term investment in the telecoms and adjacent industries, and to regulators and legislators.
It provides fresh, creative ideas to:
Grow revenues beyond current projections by:
Protecting revenues from existing customers;
Extending services to new customers;
Generating new service offerings and revenues.
Stay relevant with customers through:
A broader range of services and offers;
More personalised services;
Greater interaction with customers.
Evolve business models by:
Moving from a one-sided to a two-sided business model;
Generating cross-platform network effects – between service providers and customers;
Exploiting existing latent assets, skills and relationships.
The Six Telco 2.0 Opportunity Areas
Source: The Roadmap to New Telco 2.0 Business Models
What are the Key Questions the Report Answers?
For Telcos:
Where should your company be investing for growth?
What is ‘best practice’ in telecoms Telco 2.0 business model innovation and how does your company compare to it?
Which additional strategies should you consider, and which should you avoid?
What are the key emerging trends to monitor?
What actions are required in the areas of value proposition, technology, value / partner network, and finances?
For Vendors and Partners:
How to segment telecoms operators?
How well does your offering support Telco 2.0 strategies and transformation needs in your key customers?
What are the most attractive new areas in which you could support telcos in business model innovation?
For Investors and Regulators:
What are and will be the main new categories of telcos/CSPs?
What are the principal opportunity areas for operators?
What are and will be operators’ main strategic considerations with respect to new business models?
What are the major regulatory considerations of new business models?
What are the main advantages and disadvantages that telcos have in each opportunity area?
Summary: ‘Hyper-competition’ in the mobile handset market, particularly in ‘smartphones’, will drive growth in 2010, but also emaciate profits for the majority of manufacturers. Predicted winners, losers and other market consequences.
This is a Guest Note from Arete Research, a Telco 2.0™ partner specialising in investment analysis. Members can download a PDF of this Note here.
The views in this article are not intended to constitute investment advice from Telco 2.0™ or STL Partners. We are reprinting Arete’s Analysis to give our customers some additional insight into how some Investors see the Telecoms Market.
Handsets: Demolition Derby
Arete’s last annual look at global handset markets (Handsets: Wipe-Out!, Oct. ’08) predicted every vendor would see margins fall by ~500bps. This happened: overall industry profitability dropped, as did industry sales. Now everyone is revving their engines with vastly improved product portfolios for 2010. Even with 15% unit and sales growth in ’10, we see the industry entering a phase of desperate “hyper-competition.” Smartphone vendors (Apple, RIMM, Palm, HTC) should grab $15bn of the $23bn increase in industry sales.
Longer term, the handset space is evolving into a split between partly commoditised hardware and high-margin software and services. Managements face a classic moral hazard problem, incentivised to gain share rather than preserve capital. Each vendor sees 2010 as “their year.” Individually rational strategies are collectively insane: the question is who has deep enough pockets to keep their vehicles in one piece.
Revving the Engines. Every vendor is making huge technology leaps in 2010: high-end devices will have 64/128GB of NAND, 8-12Mpx cameras, OLED nHD capacitive touch displays, and more features than consumers can use. Smartphones should rise 50% to 304m units (while feature phones drop 21% in units). As chipmakers support sub-$200 complete device solutions, we see a race to the bottom in smartphone pricing.
Software Smash-Up. The rush of OEMs into Android will bring differentiation issues (as Symbian faced). Beyond Apple, every software platform faces serious issues, while operators will use “open” platforms to develop their own UIs (360, OPhone, myFaves, etc.). Rising software costs will force some OEMs to adopt a PC-ODM business model, while higher-margin models of RIM and Nokia are most at risk.
Finally, the Asian Invasion. Samsung, HTC and LGE now have 30% ’09E share, with ZTE, Huawei, MTEK customers and PC ODMs all joining the fray. All seek 20%+ growth. Motorola and SonyEricsson are being forced to shrink footprint, and shift risk to ODM partners. Nokia already has an Asian cost base, but lacks new high-end devices outside its emerging markets franchise. Apple looks set to claim 40% of industry profits in ’10, as other OEMs fight a brutal war of attrition, egged on by buoyant demand for fresh products at record low prices.
Forget Defensive Driving
Our thesis for 2010 is as follows: unit volumes will rebound with 15% growth, with highly competitive pricing to keep volumes flowing. This will be driven by highly attractive devices at previously unimaginably low prices. Industry sales will also rise 15%, by $23bn, but half of the extra sales ($11bn) will be taken by Apple. Industry margins will remain under pressure from pricing and rising BoM costs. Every traditional OEM, smartphone pure-play, and new entrant is following individually rational strategies: improve portfolios, promise the moon to operators, and price to gain share. Those that fail to secure range planning slots at leading operators will develop other channels to market. Collectively, the industry is entering a period of desperation and dangerous self-belief. There are few incentives to exercise restraint for the likes of Dell (led by ex-Motorola management), Acer (the consistent PC winner at the low-end), Huawei and ZTE (which view devices as complementary to infrastructure offerings) or Samsung (where rising device units help improve utilisation of its memory and display fabs). Motorola and SonyEricsson must promote themselves actively, just to find sustainable business models on 4% share each.
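Backing the industry totals out of those growth figures makes the scale of the forecast clearer; the sketch below uses only the percentages and dollar increments quoted in this paragraph and in the summary above, so the outputs are indicative rather than precise.

```python
# Implied industry sales base behind the "15% growth, +$23bn" forecast,
# plus the shares of that increase attributed to Apple and the smartphone vendors.

SALES_GROWTH = 0.15                 # forecast growth in industry sales in '10
SALES_INCREASE = 23e9               # forecast increase in industry sales (USD)
APPLE_SHARE_OF_INCREASE = 11e9      # portion of the increase taken by Apple
SMARTPHONE_VENDORS_INCREASE = 15e9  # Apple, RIM, Palm and HTC combined

base_sales = SALES_INCREASE / SALES_GROWTH

print(f"Implied '09 industry sales:  ${base_sales / 1e9:.0f}bn")
print(f"Implied '10E industry sales: ${(base_sales + SALES_INCREASE) / 1e9:.0f}bn")
print(f"Apple's share of the increase: {100 * APPLE_SHARE_OF_INCREASE / SALES_INCREASE:.0f}%")
print(f"Smartphone vendors' share:     {100 * SMARTPHONE_VENDORS_INCREASE / SALES_INCREASE:.0f}%")
```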
Table 2 shows industry value; adjusted for the impact of Apple, it shows a continuous 4-5% decline in ASPs (though currencies also play a role). The challenge for mainstream OEMs (Nokia, Samsung, LGE, etc.) is to win back customers now exhibiting high loyalty after switching to iPhone or Blackberry. Excluding gains by Apple and RIM, industry sales are on track to fall 13% in ’09. Apple, RIM, Palm and HTC will collectively account for $15bn of our forecast incremental $23bn in industry sales in ’10E.
Within this base, we see smartphones rising from 162m units in ’08 (13% of the total) to 304m units, or 23% of total ’10E shipments. At the same time, featurephone/mid-range units will drop by 21% in ’09 and 21% again in ’10.
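For reference, the total market implied by those smartphone unit and share figures can be backed out directly; this is a rough cross-check based solely on the numbers in the paragraph above, and the growth shown spans the two years from '08 to '10E.

```python
# Total handset shipments implied by the smartphone unit and share-of-market figures.

SMARTPHONES_08 = 162e6   # smartphone units shipped in '08
SHARE_08 = 0.13          # smartphone share of total shipments in '08
SMARTPHONES_10E = 304e6  # forecast smartphone units in '10
SHARE_10E = 0.23         # forecast smartphone share of total shipments in '10

total_08 = SMARTPHONES_08 / SHARE_08
total_10e = SMARTPHONES_10E / SHARE_10E

print(f"Implied total shipments, '08:  {total_08 / 1e6:.0f}m units")
print(f"Implied total shipments, '10E: {total_10e / 1e6:.0f}m units")
print(f"Smartphone unit growth, '08 to '10E: {100 * (SMARTPHONES_10E / SMARTPHONES_08 - 1):.0f}%")
```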
Key Products for 2010
Both SonyEricsson and LGE have innovative Android models coming in 1H10, LG with distinctive designs and gesture input, and a new SonyEricsson UI and messaging method.
Nokia’s roadmap features slimmer form factors, but a range of capacitive touch models will not come until 2H10. It will update the popular 6300/6700 series with an S40 touch device in 1H10.
Samsung has its usual vast array of products, and its plans for 100m touch models in ’10 underline the extent of its form-factor transition.
Motorola’s line-up will focus on operator variants, with a lead device shipping in 2Q10, but a number of operators think Motorola lacks distinctive designs and see little need for Blur.
RIM will not change its current form factor approach until 2H10, when it moves to a new software platform to enhance its traditional QWERTY base. It faces commercial challenges around activation and services fees with carrier partners.
We expect Apple to reach lower price points and also launch CDMA-based iPhones in ’10.
HTC must also reduce its costs to address mid-range prices.
Every vendor plans to widen its portfolio with several “hero” models in 2010; if anything the window to hype any single launch is narrowing.
Main Trends
Discussions with a wide range of operators, vendors and chipmakers about 2010 device roadmaps point to an explosion of attractive products – a few trends stand out:
Operators are now deeply engaging Chinese vendors. Huawei and ZTE have Android devices coming, while TCL and Taiwanese ODMs offer low-end devices. Chipmakers confirm Android devices will drop under $100 BoM levels by YE10. This will pressure both prices and margins. The value chain is shifting rapidly to more compute-intensive devices, with Qualcomm and others enabling Asian ODMs to be active in new PC segments with smartphone-like features (touch, Adobe Flash, 3G connectivity, etc.) in large-screen form factors, to leverage their LCD base.
All devices will become “smartphones.” Samsung and Nokia are opening up APIs for mass market phones. The smartphone tag (vs. dumb ones) will be applied to devices of all sorts, the way we formerly spoke of handsets. By the end of 2010, all devices (except basic pre-paid models) will be customisable with popular applications (e.g., search, social networking, IM, etc.) even if they lack hardware for video content (i.e., memory and codecs) or mapping (GPS chipsets). Open OS devices should rise 50% to 304m units, 23% of the total market.
Pure play smartphone vendors (RIMM, HTC, Palm) must transition business models to emulate Apple (i.e., linking devices with services and content). Launching lower-cost versions of popular models (RIMM’s 8520, HTC’s Tattoo, Palm’s Pixi) implicitly recognises how crowded the high-end ($400+) is becoming. This will get worse as Motorola and SonyEricsson seek to re-invent themselves with aspirational models, and Android devices hit mid-range prices in ’10.
Fearless Drivers
We had said before that key purchase criteria (design, features, brand) were reaching parity across OEMs, splitting the market into basic “phones” (voice/camera/radio) and Internet devices. The former has room for two to three scale players: Nokia, Samsung, and a third based on a PC-OEM model using standard offerings (e.g., Qualcomm or MTEK chipsets). LG and ZTE are both seeking this position, from which SonyEricsson and Motorola retreated to focus on Internet devices. This does not mean mobile devices are now commodities, like wheat or steel. The complexity of melding software and hardware in tiny, highly functional packages is not the stuff of commodity markets. But we see a split where a narrow range of standard hardware platforms will accommodate an equally narrow set of software choices. Mediatek is blazing a trail here. Some operators (Vodafone, China Mobile, etc.) aim to follow this model for pre-paid and mid-range featurephones. Preserving software and services value-add for consumers in a market where hardware pricing is fairly transparent is a challenge for all OEMs.
This model is not confined to the low-end: In Wipe Out! we said Motorola (among others) would adopt an HTC/Dell model (integrating standard chipsets/software and cutting R&D). This is happening, with Motorola no longer trying to control its software roadmap, having fully adopted Android. SonyEricsson is following suit, with initial Android devices coming in 1Q10.
Recent management changes make it even more likely SonyEricsson gets absorbed into Sony to integrate with content (as its new marketing campaign presages). The Internet devices segment will become even more fragmented as would-be new entrants arrive in ’10. In addition to Nokia, Apple, RIMM, HTC and Palm, LG and Samsung intend to build a presence in smartphones, as do Huawei, ZTE and PC ODMs. We had expected LGE or Samsung to consider M&A (i.e., buying HTC or Palm) to cement their scale or get a native OS platform. We forecast the shift to Internet devices would bring 27m incremental units from RIM, HTC, and Apple in ’09E. This now looks like it will be 21m units (partly due to weaker HTC sales), a growth of 58% vs. an overall market decline of 6%.
Growth: Steaming Again
After a long string of rises in both units and industry value, the global handset market retreated in ’09. We see risk of a weaker 1H10 mitigated in part by trends in China (3G) and India (competition among new operators). The industry had already scaled up for 10-20%+ growth during the ’05-’08 boom; most vendors have highly outsourced business models and/or partly idle capacity, meaning they could produce additional units relatively quickly. Paradoxically, 15% unit and sales growth will further encourage aggressive efforts to gain share.
Our regional forecasts are in Table 3. Emerging markets are two-thirds of volumes in ’09E and ’10E, and will lead growth – at ever lower price points – as they adopt 3G. Market dynamics vary sharply between highly-subsidised, contract-led markets (i.e., the US, Japan/Korea, and W. Europe) and pre-paid-led emerging markets (China, India, E. Europe, MEA and LatAm). In the former, operators are driving smartphone adoption; while price erosion helps limit subsidy budgets, we see growth in handset market value. As Table 4 shows, mobile data handsets hit 10%+ of EU operator sales, but are not yet driving operators’ sales growth.
In emerging markets, the growth in value is led by further volume increases for LCHs. In ’05, we saw an inflection point around Low-Cost Handsets: Every Penny Counts (July ’05) and A Billion Handsets in ’07? (Aug. ’05). Since ’05, there were 1.2bn handsets shipped in China and India alone. LCH chipsets now sell for <$5, with only Infineon and Mediatek actively supplying meaningful volumes. The ongoing mix shift to emerging markets and weak sales of mid-range devices in developed markets were behind the 13% decline in industry value in ’09E, excluding Apple’s sales. Of the extra 170m units we see shipping in ’10E, 105m come from emerging markets, with ~50m sold in China and India.
Costs: Relentless Slamming
In Wipe Out!, Arete laid out four areas where costs might rise in ’09 and beyond, as the source of structural pressure on industry margins. None of these costs are easing or receding. First, the chipset market is increasingly concentrating. TI is exiting, ST-Ericsson continues to lose money, Infineon recovered but still lacks scale in 3G, and Mediatek dominates outside the top five OEMs. This leaves Qualcomm in a de facto leadership position in 3G. This structure does not support meaningful cost reduction for OEMs. Intel may seek an entry to disrupt the market (see Qualcomm v Intel, Fight of the Century, Sept. ’09) but this is unlikely to happen until ’11. Memory may be in short supply in ’10, while high-end OLED displays still face shortages. Capacity cuts and losses at smaller component suppliers in ’09 limit how much OEMs can save. Outsourced manufacturers like Foxconn, Compal, Jabil, BYD, and Flextronics have low margins and poor cash flow. OEMs want to transfer more risk to suppliers that have little room to cut further.
Second, feature creep also thwarts cost reduction efforts: packing more into every phone is needed to stimulate demand, but adds cost. There are rising requirements in the mid-range, going from 2Mpx to 3.2/5Mpx camera modules, and adding touch, more memory, and multi-radio chipsets (3G, WiFi, BT, FM, etc.). Samsung already offers a 2Mpx touchscreen 2G phone for <$100 on pre-paid tariffs.
Third, software remains the fastest-rising element of handset costs. In Mobile Software Home Truths (Sept. ’09), we discussed how software was adding costs, and how many OEMs were struggling to realise value from their software investments. Adopting “licence-free” or open-source software does not necessarily reduce these costs: it must still be managed within industrial processes. Yet saving licence costs will be the argument used by OEMs forced to limit the number of platforms they support, as Samsung did by recently indicating it would abandon Symbian. We understand WinMo efforts have been largely mothballed at Motorola and SonyEricsson, even as LG is increasing its spend around Microsoft. Costs are also rising for the integration of services. Software costs overall are not falling; vendors are just shifting them from the handset bill-of-materials (BoM) to other companies’ R&D budgets.
Finally, marketing costs are also rising. Vendors must provide $10m-50m per market of above-the-line marketing support and in-store promotions to get operators to feature “hero” products. Services add integration costs and (often-overlooked) indirect product costs (testing, warranty, logistics, price protection in the channel). SG&A must rise to educate users about new services. OEMs cannot retain or win customers in a mature market without more marketing.
The case for services remains simple and compelling: Nokia’s 33% gross margin on €65 ASPs yields €22 of gross profit per device, or roughly €1/month over a two-year lifetime – a figure that recurring services revenue could plausibly match or exceed. This is the only way to offset further pressure on device profits. The drive to launch services is another cost OEMs must bear, with a payback period longer than the 12-18 month design cycle of devices.
Margins: Beyond Fender Benders
When Motorola has lost $4bn since ’07 and SonyEricsson may lose as much as €1bn in ’09, we are no longer talking about minor dents. Gross margins for both are already low (sub-20%). The most notable feature of the past few years was how exposed some vendors were when extensions of hit products (or product families) fell flat. SonyEricsson went from 13% 4Q07 margins to breakeven by 2Q08, and RIM saw group gross margins drop 1000bps. Only Nokia (at 33%), RIM, Apple and HTC have gross margins above 30%. Few OEMs managed to raise gross margins after seeing them decline, though we see SonyEricsson and Motorola seeking to do so by vastly reducing their scope of activities.
Having an Asian low-cost base is a necessary but not sufficient condition of survival. Nokia is already the largest Asian producer, with the industry’s two largest plants (in China and India) giving it the lowest cost structure: it has the industry’s lowest ASPs yet consistently some of its highest margins. Few OEMs other than Nokia make money selling LCHs (i.e., sub-€30 devices). Nokia made ~60% of industry profits in ’08, but will be surpassed in profits in ’09 by Apple, which should take 40% of industry profits in ’10 to Nokia’s 25%. It is also worth noting that we forecast margins to fall at nearly every vendor in ’10, though Motorola and SonyEricsson must end large losses, and Nokia will benefit from IPR income within its Devices margin.
Software: Mutual Destruction?
The mobile industry is rapidly adopting the IT industry’s software as a service (SaaS) model. The handset is becoming a distribution platform for services and content; vendors aim to monetise a “community” of their device users. Yet for all the attention it gets, software is a means to an end, not the product in itself. Beyond RIM and Apple, only Nokia can afford its own smartphone platform R&D (i.e., Symbian), yet we see Nokia itself moving closer to Microsoft. Money alone cannot solve software or services issues; if it could, Nokia’s industry-leading €3bn R&D budget would have yielded more success, and Apple would not have grabbed as much profit share on a $1.3bn group-wide R&D budget.
No vendor yet excels at ease-of-use across multiple applications (voice, SMS, music, video, browsing, navigation, etc.). RIM offers best-in-class messaging, but falls short in other use cases. The iPhone’s Web experience allowed it to overcome shortcomings in multi-threading and voice/text. Samsung has few services to accompany its sleek designs or high-spec displays and cameras. Simply shipping 70-100m touch-screen devices in ’10 will not resolve ease-of-use issues.
A number of vendors risk getting addicted to “free” software platforms where others reap the benefits (e.g., Android). Few OEMs have embraced regular updates of components (media players, browser plug-ins, etc.) to meet changing requirements. This is Apple’s edge (and in theory Microsoft’s, but it has not managed handset software efficiently). The current slowdown will only hasten moves to abstraction of hardware and software, long the case in PCs. What is the point of OEMs having their own “developer programmes” (e.g., MOTODEV, Samsung Mobile Innovation, SonyEricsson Developer World, etc.) if they adopt Android? To escape high software costs, some vendors are adopting a PC-OEM model: sub-20% gross margins, 1-5% R&D/sales, with little control over how services are implemented on devices.
When the Dust Settles…
After turmoil and consolidation in ’06, industry margins were robust in ’07, then plunged in ’08. Yet a hoped-for recovery in ’09 has given heart to a range of weaker players, sealing the industry’s fate.
Even with a resumption of growth, rising costs and hyper-competition look set to put pressure on margins. The full impact of this may not be felt until 2011; for now, managements are not inclined to call it quits, or to admit they lack a services or software play. The handset market has hardly gone ex-growth, but its rules and value chain are shifting, as seen in Apple and Google staking their claims.
The market looks to be falling by less than the $11bn we forecast for ’09 (“only” $9bn), but it is Apple’s incremental sales that are changing the dynamics most. We are no fans of M&A, but would welcome moves to remove industry capacity. There are few obvious options beyond HTC and Palm. We also think Samsung and LGE would benefit from deals that might open up their insular corporate cultures. Nokia has shown how difficult it is for an OEM to assemble a portfolio of services offerings: none are yet best-in-class. Our verdicts on the key questions for vendors are listed in the following table. We see room for two to three scale players in LCHs/feature-phones (Nokia, Samsung and one other following a PC-OEM model). Smartphones will grow even more fragmented and hotly contested. We are not certain whether the others – SonyEricsson, LGE, Motorola, ZTE, HTC, and the Japanese vendors – will emerge from 2010 in one piece.
Richard Kramer, Analyst Arete Research Services LLP richard.kramer@arete.net / +44 (0)20 7959 1303
Networks guru Andrew Odlyzko recently estimated that a typical mobile user consumes around 20MB of data a month for voice service, while T-Mobile Netherlands reports its iPhone users consuming 640MB of data a month; so upgrading everyone to the Jesus Phone would increase the demand for IP bandwidth on cellular networks by a factor of roughly 30.
It had in the past been estimated that major European cellular operators might be able to provide 500MB/user/month without another wave of network upgrades; if that calculation is at all typical, there is a substantial risk of an “iPlayer event” hitting cellular networks in the near future. Recap: when the BBC placed vast amounts of its content on the Internet through its iPlayer service, DSL traffic in the UK spiked – or rather, it didn’t spike, the trend shifted permanently upwards.
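As a rough sensitivity check – using only the 20MB, 640MB and 500MB figures above, and treating smartphone penetration as a purely hypothetical variable – the short sketch below estimates how far iPhone-class usage could spread before the average breaches that 500MB/user/month of headroom:

# Illustrative only: at what penetration of iPhone-class users does average
# usage exceed the assumed 500MB/user/month of headroom?
BASELINE_MB = 20       # typical user (Odlyzko estimate)
SMARTPHONE_MB = 640    # T-Mobile Netherlands iPhone figure
HEADROOM_MB = 500      # assumed capacity without another upgrade wave

for penetration in (0.10, 0.25, 0.50, 0.75, 0.80):
    avg = penetration * SMARTPHONE_MB + (1 - penetration) * BASELINE_MB
    status = "over" if avg > HEADROOM_MB else "within"
    print(f"{penetration:.0%} penetration -> {avg:.0f}MB/user/month ({status} headroom)")

On these averages the headroom only disappears at very high penetration – but averages are the wrong lens, because heavy users cluster in particular cells and particular hours, which is where the peak problem below comes in.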
That, of course, is much more worrying: because marginal costs are set by the capacity needed to handle the peaks, a rise in average traffic means a boost to costs multiplied by the peak/mean ratio. An aggravating factor is the pricing structure of the BT Wholesale backhaul service – the commits come in 155Mbit/s increments, so if new peak demand just exceeds your existing commit, you need to buy a whole additional 155Mbit/s pipe. The impact on the UK unbundling/bitstream ISPs has been serious, and the sector remains in a critical condition.
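A minimal sketch of that step function – with a purely hypothetical price per commit, not a BT Wholesale tariff – shows how a small rise in mean traffic can force the purchase of an entire extra pipe once the peak/mean multiplier is applied:

# Illustrative sketch of a 155Mbit/s commit step function.
import math

COMMIT_MBPS = 155
PRICE_PER_COMMIT = 100_000   # hypothetical annual cost per 155Mbit/s commit

def backhaul_cost(mean_mbps, peak_to_mean):
    """Number of commits needed to cover the peak, and the implied annual cost."""
    peak_mbps = mean_mbps * peak_to_mean
    commits = math.ceil(peak_mbps / COMMIT_MBPS)
    return commits, commits * PRICE_PER_COMMIT

for mean_mbps in (35, 40):   # a ~14% rise in average traffic...
    commits, cost = backhaul_cost(mean_mbps, peak_to_mean=4)
    print(f"mean {mean_mbps}Mbit/s, 4:1 peak/mean -> {commits} commit(s), {cost:,} a year")

At a 4:1 peak/mean ratio, nudging the mean from 35 to 40Mbit/s pushes the peak from 140 to 160Mbit/s – over the 155Mbit/s boundary – and doubles the commit cost.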
Traditionally, a mobile base station was provisioned with 2 E-1 leased lines, i.e. 2×2 Mbit/s of capacity. Run that 4 Mbit/s flat out for a month and you get roughly 9,676,800 Mbit; divide by 8 to convert to MB and that is about 1,181GB, or 1.15TB, a month. At 640MB per user, a typical cell site could therefore support at most around 1,832 users’ activity – or quite a lot fewer once you consider the peak/mean issue: typical ratios are 4:1 for GSM voice (458 users), but as high as 50:1 for IP (36!). Clearly, those operators who have had the foresight to pull fibre to their base stations and, especially, to acquire their own infrastructure will be at a major advantage.
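The same sums can be reproduced in a few lines (order-of-magnitude only: the exact headline figures above depend on rounding conventions, e.g. a 28-day month and binary megabytes):

# Back-of-envelope check of the cell-site backhaul arithmetic above.
BACKHAUL_MBPS = 4                      # 2 x E-1 leased lines ~= 2 x 2 Mbit/s
SECONDS_PER_MONTH = 28 * 24 * 3600     # 28-day month, as in the figures above
USER_MB_PER_MONTH = 640                # T-Mobile Netherlands iPhone figure

monthly_mbit = BACKHAUL_MBPS * SECONDS_PER_MONTH     # 9,676,800 Mbit
monthly_mb = monthly_mbit / 8                        # ~1.2m MB, i.e. ~1.15TB
users_flat_out = monthly_mb / USER_MB_PER_MONTH      # ~1,800-1,900 users

for label, peak_to_mean in (("GSM voice, 4:1", 4), ("IP data, 50:1", 50)):
    print(f"{label}: ~{users_flat_out / peak_to_mean:.0f} users per cell site")

The point survives any quibble over rounding: with IP-style peakiness, a legacy 2×E-1 backhaul supports only a few dozen iPhone-class users per site.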
The elements of traffic generation
The iPlayer event was an example of content push – what changed was the availability of a huge quantity of compelling content, which was also free. If Samsung’s recently announced video store takes off, that would be another example of content push. But content push is far from the only driver of traffic generation. It is important to realise that the Internet video market is a tightly-coupled system. The total user experience is made up of content, the user interface, feedback and discovery mechanisms, delivery over the network, and the business model. All of these are very closely related – if the product is heavily DRM-restricted, prettying up the front end doesn’t help.
It is characteristic of a coupled system that the slowest-changing factor is the main constraint, while the fastest-changing factor is the driver of change. In this case, the slowest-changing factor is the infrastructure, and within that, the ducts and poles of layer zero. Even the copper changes faster than that. The fastest-changing factor is the user interface, which can be changed at will. Sociability, discovery and the like, which require serious software development, sit in the middle, with issues like BT Wholesale pricing some way below.
There was not much special about the iPhone technically; the first ones were 2G devices in a 3G world, and good luck trying to pull 640MB a month over GPRS alone – is that even possible? (See the rough check after this paragraph.) Its integration with iTunes gave it access to content, but the cost issue meant that the bulk of the music on iPhones was probably downloaded over WLANs or sideloaded from a PC. But one thing it did do very well was the user interface: Apple exploited its historic speciality in industrial design and GUI design to the limit. Typically, a lot of geeks and engineers scoffed at the gadget as an overdesigned bauble for big-kid hipsters; fools that we were.
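Is 640MB a month even feasible over 2G? A crude check, using assumed (not measured) typical 2G throughputs, shows what it would take in hours of continuous transfer:

# Rough feasibility check for 640MB/month over a 2G radio.
# Throughput figures are illustrative assumptions, not measurements.
MB_PER_MONTH = 640
MEGABITS = MB_PER_MONTH * 8          # 5,120 Mbit

for label, kbit_per_s in (("GPRS (~40kbit/s)", 40), ("EDGE (~150kbit/s)", 150)):
    hours = MEGABITS * 1000 / kbit_per_s / 3600
    print(f"{label}: ~{hours:.0f} hours of continuous transfer per month")

In other words, technically possible – dozens of hours of flat-out radio use on GPRS, roughly ten even on EDGE – but hardly how the traffic actually flowed; as noted above, the bulk of it probably came over WLAN or from a PC.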
But the core insight of the iPhone’s designers was to design for the Web and for rich media, probably helped by their not having a telephony background. They therefore chose to cover as much of the form factor as possible with a high-quality screen, and worked from there. They also made some advances in the GUI (zooming, gesture recognition), but the much-talked-about browser was less sensational. (Like all versions of Safari, it is based on the open-source WebKit engine that also powers the Nokia browser and Konqueror.)
So we’re now beginning to see that changing the user interface can radically impact the engineering and economics of the network; and because it is a fast-changing element, it can do so faster than the network layer can react.
From receiving to sending
The Internet is a copying machine, they say; more to the point, it is usually a one-to-many medium that is experienced as a many-to-one medium. I draw content from many different sources according to the stuff I like; but each source is broadcasting itself to many readers. As a rule, people read more than they write, even if P2P distribution blurs this. One criticism of the iPhone is that it’s optimised for passive consumption of content; some users report their uplink/downlink ratio changing dramatically on changing to the iPhone.
Looking at another online-video sensation which hammers the ISP economy, YouTube, it’s quite clear that another driver of traffic is improved content ingestion. As whatever you place on the Web will be written relatively few times and read many times, there is a multiplier effect to anything that makes it easier to create or at least to distribute content.
YouTube’s innovation was three-fold: it made it dramatically simpler to upload video to the Internet; it made it dramatically simpler to distribute that video, through embedding; and its social functions made the video much easier to popularise once it was there. The latter two meant there was much more of an incentive to upload stuff in the first place, because it was more likely to get viewed.
Better user interfaces and social mechanisms for content creation, then, are potentially major drivers of change in your cost model. They can change very quickly, and their impact is multiplied. Already, I can upload photos to Flickr faster from my Nokia E71 than over my DSL link; granted, this says more about the UK’s lamentable infrastructure, but it gives some idea of the possibilities. Perhaps that Samsung device with the mini-decks might be less silly than we thought?
Faster adaptation: considered helpful
As we were wondering what would happen to the cellular networks’ backhaul bills, and contemplating the wreck of the DSL unbundler/bitstream business model, we looked enviously across the Channel to Telco 2.0’s favourite ISP, Iliad. They have just announced another set of fantastic figures; their margins are 70%-80% where they have deployed fibre, and their agility in launching new services doesn’t need to be rehearsed again. They even built their own content-creation service, after all; no fear of the future there.
What makes the difference? Iliad has always been committed to investing in engineering and infrastructure, giving it the agility to match the speed of change the application layer can achieve. It has been determined to realise the OPEX and unbundling/wholesale savings from fibre deployment, and its results demonstrate that those savings are real – and large enough to fund the deployment itself.
There is a crucial element, however, in their success; in France, access to duct and pole infrastructure is a regulated product, and major cities are more than keen on selling access to their own physical infrastructures – the sewers of Paris are the classic example. If you want to fix the ISP business model, fixing layer zero is the place to start, before the next fast-changing application knocks us back into the ditch.
Conclusions
The ISP/telco market is a closely coupled system: an analysis in terms of differential rates of change shows that rapidly changing applications and user interfaces can have a seismic impact on slowly changing network operator business models
The benefits of fibre are real: Iliad is showing that fibre deployment isn’t just nice to have, it’s saving the ISP business model
Open access to infrastructure is vital: There is no contradiction between applications/VAS and layer zero – instead they go together. If you want fantastic new apps, pick up a shovel.