Edge computing has been hailed as key to delivering the promises of 5G, enabling transformative use cases and experiences. For mobile service providers in particular, deriving value from their networks and presence at the edge remains an aspiration: a new source of revenue and a more favourable position in the value chain. There is a strong belief that this needs to exceed what was achieved with 3G and 4G, where OTT players built entire businesses on centralised platforms leveraging fast, ubiquitous internet access. Mobile operators remain hopeful that they can evolve beyond ‘dumb pipes’ and derive more value from dynamic connectivity services, value-added platforms, and partnerships.
The edge means different things to different people, so it is useful to define terminology and clarify the scope of this report. We understand the edge to refer to compute, storage and networking infrastructure, facilities, software, and services which exist physically or architecturally between typically non-telco cloud data centres and end devices. This report will focus on the ‘telco edge’ for both mobile and fixed line telecoms operators. The term MEC (initially ETSI’s Mobile Edge Computing, which evolved to Multi-access Edge Computing) has historically been used for the telco edge, predominantly with a focus on deployment in the access network. However, as we will see, its use has broadened somewhat as telcos initially deploy edge computing more centrally.
The edge continuum spans between end devices and hyperscale cloud
It is common practice to define an edge continuum in a diagram such as the one below, which shows the different edge locations between an end device and the hyperscaler cloud. Typically, the physical distance, the number of network hops, and network latency will increase the further the edge location shifts to the right.
The edge continuum
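As an illustration of the continuum, the latency trade-off can be sketched as a simple lookup. All figures below are indicative assumptions for discussion, not measurements of any specific network:

```python
# Illustrative round-trip latency ranges (ms) along the edge continuum,
# from end device to hyperscale cloud. These figures are assumptions for
# illustration only, not measured values.
EDGE_TIERS = [
    ("on-device",        0,   1),
    ("on-premise edge",  1,   5),
    ("network edge",     5,  20),
    ("regional edge",   20,  40),
    ("hyperscale cloud", 40, 100),
]

def tiers_within_budget(latency_budget_ms):
    """Return the tiers whose assumed worst-case latency fits the budget."""
    return [name for name, _best, worst in EDGE_TIERS
            if worst <= latency_budget_ms]

print(tiers_within_budget(20))
```

With a 20 ms budget, only the left-hand tiers qualify; relaxing the budget progressively brings the regional edge and cloud back into play.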
In considering the telco edge, we will primarily be focussed on the network edge, consisting of data centres logically situated in telcos’ access, transport, and core network facilities. The on-premise edge (sometimes referred to as the enterprise or private edge) may be offered by telcos and others to enterprises, but it is closely related to private 4G/5G networks and single-tenant propositions, which are out of scope for this report. STL has written about this in reports such as Private networks: Lessons so far and what next and Combining private 5G and edge computing: The revenue opportunity.
The network edge affords a wide range of choices for delivering edge services from within the network. It also includes neutral host providers, which offer facilities to multiple infrastructure providers in support of enterprise applications as well as radio access networks. These facilities may be offered by traditional telcos, tower infrastructure providers and others.
The regional edge sits outside telco networks at internet exchanges, carrier exchanges, interconnect points, co-location, and data centre facilities. Multiple parties can deploy infrastructure at such locations which are designed as neutral, well-connected locations for third party equipment. For some use cases, these locations are considered as ‘close enough’ or ‘near enough’ edge sites.
Edge computing drivers and benefits vary depending on the use case
While low latency is often cited as the justification for moving application workloads from the cloud to the edge, there are other drivers such as reduced data transit, data sovereignty and improving redundancy. These factors may be just as relevant as low latency, or more so, depending on the specific use case.
Edge computing benefits
Migrating workloads from end-devices to the edge can also bring benefits such as reduced power consumption, allowing smaller form factors at lower costs, and enabling experiences that are simply not possible on existing devices due to heavy computational requirements. Processing in the cloud may previously have been dismissed for such workloads due to its limitations or constraints. One consumer example is Instagram or Snapchat real-time video filters, which have heavy machine learning processing requirements. The processing for these may move to the edge to improve and standardise performance across devices, by not relying on the end-device’s processing power.
However, the public cloud is well established and here to stay, so it is prudent to view the edge as complementary to and an extension of the public cloud, offering characteristics which may be important for specific components of certain use cases.
Table of Contents
Most telcos do not yet see demand for a fully distributed edge
The platform is an important piece of the edge, but the jury is still out on which approach to take
Telcos need to guarantee multi-cloud and multi-edge orchestration for their customers
Defining the edge
The state of the edge
Cloud vs edge
Contrasting public cloud and public edge
Latency in fixed vs mobile networks
The rationale for telco edge
Telco edge propositions and use cases
Internal applications for telcos
External applications for telcos
Telco edge propositions based on telco’s capabilities
Potential use case opportunities for telco edge
Where is the telco edge?
Edge really means core for now
Challengers to the telco edge
Building the telco edge platform
Edge developers want a consistent and seamless experience
The potential providers of network edge platforms
Cloud-centric capabilities and business models are key to the success of telco edge platforms
Telco industry challenges
Conclusion: What should telcos do?
SK Telecom (SKT), Verizon and Telstra were among the first in the world to commence the commercialisation of 5G networks. SK Telecom and Verizon launched broadband-based propositions in 2018, but it was only in 2019, when 5G smartphones became available, that consumer, business and enterprise customers were really able to experience the networks.
Part 2 of our 3-part series looks at Verizon’s 5G experience and how its propositions have developed from when 5G was launched to the current time. It includes an analysis of both consumer and business offerings promoted on Verizon’s website to identify the revenue streams that 5G is supporting now – as opposed to revenues that new 5G use cases might deliver in future. (We have covered this extensively for a number of verticals, including healthcare, manufacturing, energy, and transport and logistics.)
Initially, Verizon had hoped to charge a $10 monthly fee for existing 4G subscribers to access its millimetre wave (mmWave) 5G Ultra Wideband (UWB) service; however, it waived this fee due to a lack of devices and limited coverage at launch in April 2019. 5G access was offered as a benefit for customers on its top two Unlimited data plans, which promised unlimited 5G UWB data, unlimited 4K HD streaming and unlimited 5G UWB hotspot usage. Verizon emphasised the capacity advantages of 5G as well as its ability to offer an improved video experience.
Verizon’s choice of 5G spectrum (mmWave) was guided by its intention to offer customers a differentiating 5G experience, but it has added complexity to its 5G propositions and initial limited coverage has impacted its competitive position in the market.
This report examines the market factors that have enabled and constrained Verizon’s 5G monetisation efforts, as it moves to extend 5G access beyond early device adopters to a wider audience. It identifies lessons in the commercialisation of 5G for those operators that are on their own 5G journeys and those that have yet to start.
5G performance to date
This analysis is based on the latest data available as we went to press in May 2021.
An early benefit of moving to 5G for Verizon was to reduce costs and increase efficiency. On this score, Verizon reports that it is on track to achieving its $10 billion cost reduction target, ahead of the scheduled end-2021 timeframe (as set out in September 2017).
On the fourth quarter 2020 earnings call, CEO Hans Vestberg claimed there had been a “great migration of our customers [consumer] to Unlimited and to the premium Unlimited” plans, resulting in over 60% (57 million subscribers) of Verizon’s base being on an Unlimited plan and over 20% (19 million) being on premium plans (5G Ultra Wideband – UWB – inclusive). The appeal of 5G is not thought to be the primary driver of this trend (only 9% of the postpay base had a 5G device in December 2020):
Verizon is bundling generous content services (non-5G specific) with these high-end plans.
The arrival of the iPhone 12 in the fourth quarter was at least partly responsible for 90% of new net additions in the quarter taking Unlimited plans (with 55% being premium Unlimited plans) – probably more so than the introduction of a low-band 5G nationwide (NW) service in the same time period.
Quarter one 2021 results indicate that the migration to premium plans has continued. Matt Ellis, Verizon EVP and CFO, announced that “At quarter end, over 65% of our base was on an Unlimited plan with more than 23% of our base taking a premium plan”. The “5G adoption rate” was reported to be 14% of the consumer postpay subscriber base (12.6 million subscribers). 5G access (5G UWB in particular) may become a more important driver of upgrades to higher-value plans in future, particularly as it becomes more widely available (mid-band inclusive) and there are more obvious service benefits for consumers.
Upgrades have succeeded in driving average revenue per account (ARPA), which reportedly grew by 1.7% for the first quarter year-on-year (despite COVID impacts) – Verizon attributed this to its tiered approach to Unlimited plans.
There were negative net postpay additions in the March quarter, partly ascribed to seasonality. However, this followed fourth quarter criticism of Verizon by investors for failing to attract the expected numbers of new customers, particularly in comparison to competitors.
In his fourth quarter address, Mr Vestberg also expressed satisfaction that Verizon was adding new enterprise customers, which it continued to do in quarter one 2021. With regard to 5G mobile edge compute, Mr Vestberg reported that Verizon was building a “funnel of customers” but held that significant revenues were only expected in 2022.
First mover advantage?
The bet on being an early mover (a strategy that had worked for Verizon with 4G LTE) may not be paying off in the 5G era. While Verizon has retained its 4G market leadership status, its 5G market position is being challenged following the merger of Sprint and T-Mobile, which was approved in April 2020.
Prior to the merger, T-Mobile had been rolling out a low-band 5G network – which performed well on coverage, but less so on performance, where Verizon had a competitive advantage. The Sprint merger has since provided T-Mobile with a mid-band 5G network that delivers faster speeds than its previous low-band 5G network and provides better coverage than Verizon’s premium mmWave offering. T-Mobile’s 5G proposition has become more compelling from a performance perspective, and it has better 5G coverage than Verizon’s 5G UWB and NW services. T-Mobile reported that it had more than 10 million 5G subscribers in March 2021, compared to Verizon’s 12.6 million, based on the 5G adoption rate announced for the same quarter. On this basis, and given that T-Mobile’s net additions have exceeded Verizon’s, Verizon currently retains 5G leadership but its lead could be narrowing.
T-Mobile versus Verizon subscriber base comparison, March 2021
Source: STL Partners, based on T-Mobile and Verizon reported data
T-Mobile, Verizon and AT&T continue to compete aggressively on network coverage. In March 2021, it was reported that:
T-Mobile led Verizon on low-band 5G “Extended Range” network coverage, reaching 287 million versus Verizon’s 230 million;
T-Mobile’s 5G “Ultra Capacity” network (mainly mid-band, with some mmWave) covered 125 million people, while Verizon’s UWB (mmWave) coverage is limited to 67 cities (the 5G service based on Verizon’s recently acquired C-band/mid-band spectrum will also fall under UWB, but it is not yet available).
Performance indicators to date
First mover advantage?
Details of launch
Consumer monetisation summary
Business and enterprise propositions
Business monetisation summary
Analysis of 5G market development
This report builds on earlier STL Partners research, including:
Edge computing can help telcos to move up the value chain
The edge computing market and the technologies enabling it are rapidly developing and attracting new players, providing new opportunities to enterprises and service providers. Telco operators are eyeing the market and looking to leverage the technology to move up the value chain and generate more revenue from their networks and services. Edge computing also represents an opportunity for telcos to extend their role beyond offering connectivity services and move into the platform and the application space.
However, operators face tough competition from other market players such as cloud providers, who are moving rapidly to define and own the biggest share of the edge market, while industrial solution providers such as Bosch and Siemens are similarly investing in their own edge services. Telcos are also dealing with technical and business challenges as they venture into the new market, trying to position themselves and identify their strategies accordingly.
Telcos that fail to develop a strategic approach to the edge could risk losing their share of the growing market as non-telco first movers continue to develop the technology and dictate the market dynamics. This report looks into what telcos should consider regarding their edge strategies and what roles they can play in the market.
Following this introduction, we focus on:
Edge terminology and structure, explaining common terms used within the edge computing context, where the edge resides, and the role of edge computing in 5G.
An overview of the edge computing market, describing different types of stakeholders, current telecoms operators’ deployments and plans, competition from hyperscale cloud providers and the current investment and consolidation trends.
Telcos’ challenges in addressing the edge opportunity: the technical, organisational and commercial challenges given the market context.
Potential use cases and business models for operators, also exploring possible scenarios of how the market is going to develop and operators’ likely positioning.
A set of recommendations for operators that are building their strategy for the edge.
What is edge computing and where exactly is the edge?
Edge computing brings cloud services and capabilities including computing, storage and networking physically closer to the end-user by locating them on more widely distributed compute infrastructure, typically at smaller sites.
One could argue that edge computing has existed for some time – local infrastructure has been used for compute and storage, be it end-devices, gateways or on-premises data centres. However, edge computing, or edge cloud, refers to bringing the flexibility and openness of cloud-native infrastructure to that local infrastructure.
In contrast to hyperscale cloud computing, where all data is sent to central locations to be processed and stored, edge computing processes data locally, aiming to reduce the time and save the bandwidth needed to send and receive data between applications and the cloud, which improves the performance of both the network and the applications. This does not mean that edge computing is an alternative to cloud computing. It is rather an evolutionary step that complements the current cloud computing infrastructure and offers more flexibility in executing and delivering applications.
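The bandwidth-saving argument can be made concrete with a back-of-envelope sketch. All inputs here are hypothetical – a camera fleet streaming raw video to a central cloud versus pre-processing at an edge site and forwarding only a small fraction (detected events) upstream:

```python
# Back-of-envelope sketch of the data-transit saving from edge processing.
# The fleet size, bitrate and 2% event-forwarding ratio are assumptions
# for illustration, not figures from any deployment.

def monthly_backhaul_gb(cameras, mbps_per_camera, hours_per_day, days=30):
    """Total monthly volume if every camera streams raw video upstream."""
    seconds = hours_per_day * 3600 * days
    return cameras * mbps_per_camera * seconds / 8 / 1000  # megabits -> GB

cloud_gb = monthly_backhaul_gb(cameras=100, mbps_per_camera=4, hours_per_day=24)
edge_gb = cloud_gb * 0.02  # assume edge filtering forwards 2% of raw volume
print(f"to cloud: {cloud_gb:,.0f} GB/month, via edge: {edge_gb:,.0f} GB/month")
```

Under these assumptions, edge filtering cuts monthly backhaul from roughly 130 TB to under 3 TB – the kind of transit reduction the report cites as a driver alongside latency.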
Edge computing offers mobile operators several opportunities such as:
Differentiating service offerings using edge capabilities
Providing new applications and solutions using edge capabilities
Enabling customers and partners to leverage the distributed computing network in application development
Improving network performance and achieving efficiencies / cost savings
As edge computing technologies and definitions are still evolving, different terms are sometimes used interchangeably or have been associated with a certain type of stakeholder. For example, mobile edge computing is often used within the mobile network context and has evolved into multi-access edge computing (MEC) – adopted by the European Telecommunications Standards Institute (ETSI) – to include fixed and converged network edge computing scenarios. Fog computing is also often compared to edge computing; the former includes running intelligence on the end-device and is more IoT focused.
These are some of the key terms that need to be identified when discussing edge computing:
Network edge refers to edge compute locations that are at sites or points of presence (PoPs) owned by a telecoms operator, for example at a central office in the mobile network or at an ISP’s node.
Telco edge cloud is mainly defined as distributed compute managed by a telco. This includes running workloads on customer premises equipment (CPE) at customers’ sites as well as at locations within the operator’s network, such as base stations, central offices and other aggregation points in the access and/or core network. One reason for caching and processing data closer to the customer is that it allows both operators and their customers to benefit from reduced backhaul traffic and costs.
On-premise edge computing refers to computing resources residing on the customer’s side, e.g. in an on-site gateway or an on-premises data centre. As a result, customers retain their sensitive data on-premises and enjoy other flexibility and elasticity benefits brought by edge computing.
Edge cloud is used to describe the virtualised infrastructure available at the edge. It creates a distributed version of the cloud with some flexibility and scalability at the edge. This flexibility allows it to have the capacity to handle sudden surges in workloads from unplanned activities, unlike static on-premise servers. Figure 1 shows the differences between these terms.
Figure 1: Edge computing types
Source: STL Partners
Network infrastructure and how the edge relates to 5G
Discussions on edge computing strategies and the market are often linked to 5G. Both technologies have overlapping goals of improving performance and throughput and reducing latency for applications such as AR/VR, autonomous vehicles and IoT. 5G improves speed by increasing spectral efficiency, offering the potential of much higher speeds than 4G. Edge computing, on the other hand, reduces latency by shortening the time required for data processing, allocating resources closer to the application. Combined, edge and 5G can help to achieve round-trip latency below 10 milliseconds.
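The sub-10 millisecond claim can be illustrated with a simple latency budget. The component delays below are indicative assumptions, not measured values; they show how a 5G radio leg plus a nearby edge site can land under 10 ms where a distant cloud, with many more transport hops, cannot:

```python
# Hedged latency-budget sketch: a round trip decomposed into radio access,
# per-hop transport delay, and server-side processing. All numbers are
# illustrative assumptions.

def round_trip_ms(radio_ms, per_hop_ms, hops, processing_ms):
    return radio_ms + per_hop_ms * hops + processing_ms

# Nearby edge site: few transport hops between the radio and the compute.
edge_rtt = round_trip_ms(radio_ms=2, per_hop_ms=1, hops=2, processing_ms=3)
# Distant cloud region: same radio and processing, many more hops.
cloud_rtt = round_trip_ms(radio_ms=2, per_hop_ms=1, hops=12, processing_ms=3)

print(edge_rtt, cloud_rtt)
```

The point of the decomposition is that once the 5G air interface contributes only a couple of milliseconds, transport distance dominates the budget – which is precisely what moving compute to the edge removes.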
While 5G deployment is yet to accelerate and reach ubiquitous coverage, the edge can be utilised in some places to reduce latency where needed. There are two reasons why the edge will be part of 5G:
First, it has been included in the 5G standards (3GPP Release 15) to enable ultra-low latency, which cannot be achieved through improvements in the radio interface alone.
Second, operators are generally taking a slow and gradual approach to 5G deployment, which means that 5G coverage alone will not provide a big incentive for developers to drive the application market. The edge can be used to fill network gaps and stimulate growth in the application market.
The network edge can be used for applications that need coverage (i.e. accessible anywhere) and can be moved across different edge locations to scale capacity up or down as required. Where an operator decides to establish an edge node depends on:
Application latency needs. Some applications, such as streaming virtual reality or mission-critical applications, will require locations close enough to their users to enable sub-50 millisecond latency.
Current network topology. Based on the operators’ network topology, there will be selected locations that can meet the edge latency requirements for the specific application under consideration in terms of the number of hops and the part of the network it resides in.
Virtualisation roadmap. The operator needs to consider its virtualisation roadmap and where data centre facilities are planned to be built to support future network functions.
Site and maintenance costs. Cloud computing economies of scale may diminish as sites proliferate at the edge; for example, there is a significant difference between maintaining one or two large data centres and maintaining hundreds across a country.
Site availability. Some operators’ edge compute deployment plans assume that edge nodes reside in the same facilities that host their NFV infrastructure. However, many telcos are still in the process of renovating these locations to turn them into (mini) data centres, so they are not yet ready.
Site ownership. Sometimes the preferred edge location is within sites that operators have limited control over, whether in the customer premises or within the network. For example, in the US, cell towers are owned by tower operators such as Crown Castle, American Tower and SBA Communications.
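The trade-off across these criteria can be sketched as a simple selection rule: among sites that are actually available and meet the application's latency need, prefer the cheapest. The candidate sites, latencies and costs below are hypothetical:

```python
# Sketch of edge-site selection against the criteria above. All site
# attributes are hypothetical examples, not real operator data.
SITES = [
    {"name": "base station",   "latency_ms": 5,  "annual_cost": 90_000, "ready": False},
    {"name": "central office", "latency_ms": 15, "annual_cost": 40_000, "ready": True},
    {"name": "core DC",        "latency_ms": 35, "annual_cost": 15_000, "ready": True},
]

def pick_site(max_latency_ms):
    """Cheapest available site meeting the application's latency need."""
    candidates = [s for s in SITES
                  if s["ready"] and s["latency_ms"] <= max_latency_ms]
    return min(candidates, key=lambda s: s["annual_cost"]) if candidates else None

print(pick_site(50)["name"])  # loose latency budget -> cheapest, most central site
print(pick_site(20)["name"])  # tighter budget -> forced closer to the user
```

Note that with these assumptions the lowest-latency site (the base station) is never chosen because it is not yet ready – echoing the site-availability constraint above – and a sufficiently tight budget can leave no viable site at all.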
The potential locations for edge nodes can be mapped across the mobile network in four levels as shown in Figure 2.
Figure 2: Possible locations for edge computing
Source: STL Partners
Table of Contents
Recommendations for telco operators at the edge
Four key use cases for operators
Edge computing players are tackling market fragmentation with strategic partnerships
Table of Figures
Definitions of edge computing terms and key components
What is edge computing and where exactly is the edge?
Network infrastructure and how the edge relates to 5G
Market overview and opportunities
The value chain and the types of stakeholders
Hyperscale cloud provider activities at the edge
Telco initiatives, pilots and plans
Investment and merger and acquisition trends in edge computing
Use cases and business models for telcos
Telco edge computing use cases
Roles and business models for telcos
Telcos’ challenges at the edge
Scenarios for network edge infrastructure development
In order to drive these conversations forward, telcos need to listen to and learn from the developers who will eventually make use of their edge computing capabilities. Some developers are deeply engaged with edge computing, seeing it as a game-changing capability for their own solutions. But they also have strong messages they want the telecommunications industry to hear. They have their own requirements and expectations for how edge computing should work, and they want clarity on what capabilities it will have, how their applications will run on the edge and how they will be charged for usage. This paper gives several application developers at the forefront of edge computing a platform to address the telecoms industry.
For our interview programme we have focused on four key industries:
AR/VR for industry
UAV/drones
Video and application optimisation
Location-based services
The focus for this paper is on application developers who primarily serve enterprise markets. However, there is real opportunity and applicability for applications running at the edge in the consumer market as well. In particular, some of the AR/VR applications discussed are currently industry focused but could and will eventually be used by consumers as well.
Our hope with this paper is that it will stimulate discussions within the edge computing community as a whole, including all key stakeholders. We also pull out the key practical implications for telcos in terms of business models, the technology they should look to be developing and the partnerships they may wish to establish.
The promise of industry 4.0 is being discussed broadly, and has been for several years. Much of the promise of increased productivity and reduced waste comes from the automation of processes that have typically required routine, often physical, human intervention. STL Partners has evaluated some of these use cases at length, as well as forecasting the value they can bring to the industry, in an upcoming report focused on the manufacturing industry.
However, there is also much promise in applications that, rather than replacing humans, look to increase their safety, efficiency and productivity. And this kind of use case can span outside of manufacturing, into industries such as mining, utilities, construction, architecture and beyond. One of these use cases is using AR/VR/MR (mixed reality) technology to overlay information for workers. This can span from simpler applications such as improving people management through applications that provide information on the order of tasks that should be performed to more complex applications like using augmented reality to visualise 3D CAD models. Benefits of these kinds of solutions include:
Increased productivity of workers. For example, instead of needing to refer to manuals or instructions before returning to the task at hand, instructions can be overlaid on smart glasses so they can be referred to as the task is being completed.
Increased productivity of experts. VR/AR applications can essentially upskill cheaper labour, either through the additional information workers receive through the application or through the ability to collaborate more closely with experts who are not physically in the same place.
Tasks performed with more accuracy. If workers can be upskilled through the use of overlaid information, then they are less likely to need to redo tasks because mistakes have been made.
Better health, safety and compliance. Overlays on the smart glasses can warn workers of hazards and enable them to more safely handle challenging situations. Where video is stored, compliance to health and safety standards can be proven.
UAV/drones: Struggling to scale
Forecasts for the drone market have been optimistic in predicting take-up of the technology across different industries. There are proven cases of how drones can deliver benefits across different sectors, for example:
Delivering packages, such as Amazon’s Prime Air
Monitoring critical infrastructure, such as bridges and utility lines
Surveying land and the condition of crops in agricultural settings
Outside of delivery, most drone use cases centre on the ability to capture data that has historically been costly, time-consuming or dangerous to collect, and to make sense of it by creating meaningful maps or interpreting the data to identify anomalies. For example, France-based start-up Donecle is enabling automated aircraft inspections through drones to improve efficiencies and reduce the time planes spend in the hangar. Software companies such as Pix4D, DroneDeploy and Bentley are the market leaders in providing photogrammetry tools that translate imagery from drones into practical models.
However, adoption has been slower than expected. This is partly due to the nascency of the technology; most drones are limited to 30 minutes of flight time, which restricts the amount of data that can be collected in a single session. Regulation of commercial use also inhibits adoption by constraining how large a drone can be and when and how high it can fly, as well as mandating pilot qualifications for flying drones.
Ultimately, the challenge is that, until there is a way to continuously collect data and monitor assets and infrastructure, industries and governments will not be able to access the true benefit of using drones. To make a real economic difference, drones must unlock a significant volume of data that is not currently accessible. The current model relies on an individual manually programming the drone to fly and collect the data, then connecting it to a PC to transfer the data, and finally uploading it to the photogrammetry software to extract insights. Atrius, a start-up we interviewed that is developing data centre units to enable autonomous drones, likened this to using a bucket to collect oil from an oil field and driving it back to the refinery to process it into fuel, rather than using a pipeline. Instead of relying on manual processes, data collection and transformation from drones needs to be autonomous – from the drone knowing when to set off and where to go, to interpreting the data and distributing it to the relevant recipients and systems.
Video and application optimisation
The way in which content, video and applications are optimised to improve performance, scalability and security has evolved. This is due to a number of reasons:
Application and web page content is increasingly personalised and dynamic – caching static content at the edge is not sufficient.
Real-time video streaming is growing in entertainment as well as enterprise/government applications (e.g. police body cameras) – performance here cannot be improved by moving content closer to the end-viewer; video has to be optimised as it is captured.
Content is being enriched with augmented reality – for example overlaying live statistics on players when streaming a basketball game.
This is driving a need for edge computing and the ability to run workloads closer to the end user, rather than simply caching content or applications in a CDN. Two of our case studies come from this domain, although they have very different propositions: the start-up Section provides a platform for developers to deploy workloads at the edge, while Smart Mobile Labs’ solutions optimise real-time video streaming.
Location-based services leverage information about a user’s location in order to provide targeted information, advertising or offers. Radius Networks provides these types of solutions for the retail and fast food industry. Specifically, they enable solutions such as:
Table service. Often used in fast food restaurants: when a customer has ordered, they are given a beacon and can go and sit at a table. Staff are able to track the customer and bring their food to them when it has been prepared.
Curbside pickup of groceries. When a customer orders groceries in advance and drives to the store to pick them up, their location can be tracked so that staff are ready to hand over the order as soon as they arrive in the car park. This ensures minimum wait time while also minimising the time food spends out of optimal storage conditions such as a fridge or a freezer.
Asset tracking. Assets such as products or machinery can be tracked throughout a store. This can ensure expensive stock or items are not lost and can help with logistical difficulties such as locating a specific package or item in a large warehouse.
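The proximity logic behind propositions like curbside pickup can be sketched simply. The coordinates and geofence radius below are hypothetical; real deployments use beacons and richer positioning than a bare distance check:

```python
import math

# Haversine great-circle distance between two (lat, lon) points, in metres.
def distance_m(a, b):
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371000 * math.asin(math.sqrt(h))

STORE = (51.5014, -0.1419)  # hypothetical store coordinates

def customer_arriving(position, radius_m=150):
    """True once the customer's reported position enters the geofence."""
    return distance_m(STORE, position) <= radius_m

print(customer_arriving((51.5015, -0.1418)))  # a few metres from the store
```

Running this check at an edge node rather than in a distant cloud keeps the position-update loop tight, which matters when the trigger window (the customer pulling into the car park) is only a few seconds long.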
There are current technical limitations that come with location-based services, but Radius Networks believes that edge computing can help solve them.
This report looks at the four use case categories in depth, including the types of services application developers are offering, why they need edge computing, and the opportunity for telecoms operators.
Table of contents
AR/VR for industry
Application introduction (AR/VR for industry)
1000 Realities: Edge computing for remote AR assistance
Light: edge for heavy duty computing with CAD models
Arvizio: edge for dynamic collaboration between remote parties
Challenges and implications for telcos
Commercial drones are struggling to achieve wide scale adoption
Enter edge computing: enabling autonomous drones
Atrius’ experience: edge is necessary, and the network is key
Challenges and implications for operators
Video and application optimisation
The changing nature of video and application optimisation
Benefits of the telecom edge
Edge use cases in video / application optimisation
Challenges and implications for operators
There are current technical limitations that come with location-based services – and edge can help solve them
Edge computing and location-based services: how it works
In early 2016, Facebook launched the Telecom Infra Project (TIP). It was set up as an open industry initiative to reduce the cost of creating telecoms network equipment and the associated processes and operations, primarily by applying open-source concepts to network hardware, interfaces and related software.
One of the key objectives was to split existing proprietary vendor “black boxes” (such as cellular base stations, or optical multiplexers) into sub-components with standard interfaces. This should enable competition for each constituent part, and allow the creation of lower-cost “white box” designs from a wider range of suppliers than today’s typical oligopoly. Critically, this is expected to enable much broader adoption of networks in developing markets, where costs – especially for radio networks – remain too high for full deployments. Other outcomes may be cheaper 5G infrastructure, or specialised networks for indoor use or vertical niches.
TIP’s emergence parallels a variety of open-source initiatives elsewhere in telecoms, notably ONAP – the merger of two NFV projects being developed by AT&T (ECOMP) and the Linux Foundation (Open-O). It also parallels many other approaches to improving network affordability for developing markets.
TIP got early support from a number of operators (including SK Telecom, Deutsche Telekom, BT/EE and Globe), hosting/cloud players like Equinix and Bandwidth, semiconductor suppliers including Intel, and various (mostly radio-oriented) network vendors like Radisys, Vanu, IP Access, Quortus and – conspicuously – Nokia. It has subsequently expanded its project scope, governance structure and member base, with projects on optical transmission and core-network functions as well as cellular radios.
More recently, it has signalled that not all its output will be open-source, but that it will also support RAND (reasonable and non-discriminatory) intellectual property rights (IPR) licensing as well. This reflected push-back from some vendors on completely relinquishing revenues from their (R&D-heavy) IPR. While services, integration and maintenance offered around open-source projects have potential, it is less clear that they will attract early-stage investment necessary for continued deep innovation in cutting-edge network technology.
At first sight, it is not obvious why Facebook should be the leading light here. But contrary to popular belief, Facebook – like Google, Amazon and Alibaba – is not really just a “web” company. They all design or build physical hardware as well – servers, network gear, storage, chips, data-centres and so on. They all optimise the entire computing / network chain to serve their needs, with as much efficiency as possible in terms of power consumption, physical space requirements and so on. They all have huge hardware teams and commit substantial R&D resources to the messy, expensive business of inventing new kit. Facebook in particular set up Internet.org to help get millions online in the developing world, and is still working on its Aquila communications drones. It also set up OCP (the Open Compute Project) as a very successful open-source initiative for data-centre design; in many ways TIP is OCP’s newer and more telco-oriented cousin.
The telecoms industry often overlooks the fact that its Internet peers now make more true “technology” investment – and especially networking innovation – than most operators do. Some operators – notably DT and SKT – are pushing back against the vendor “establishment”, which they see as stifling network innovation by continuing to push monolithic, proprietary black boxes.
What does Open-Source mean, applied to hardware?
Focus areas for TIP
Strategic considerations and implications
Operator involvement with TIP
A different IPR model to other open-source domains
Fit with other Facebook initiatives
Who are the winners?
Who are the losers?
Conclusions and Recommendations
Figure 1: A core TIP philosophy is “unbundling” components of vendor “black boxes”
Figure 2: OpenCellular functional architecture and external design
Figure 3: SKT sees open-source, including TIP, as fundamental to 5G
A formal definition of MEC is that it enables IT, NFV and cloud-computing capabilities within the access network, in close proximity to subscribers. Those edge-based capabilities can be provided to internal network functions, in-house applications run by the operator, or potentially third-party partners / developers.
There has long been a vision in the telecoms industry to put computing functions at local sites. In fixed networks, operators have often worked with CDN and other partners on distributed network capabilities, for example. In mobile, various attempts have been made to put computing or storage functions alongside base stations – both big “macro” cells and in-building small/pico-cells. Part of the hope has been the creation of services tailored to a particular geography or building.
But besides content-caching, none of these historic concepts and initiatives gained much traction. It turns out that “location-specific” services can be delivered perfectly well from central facilities, as long as the endpoint knows its own location (e.g. using GPS) and communicates it to the server.
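This point can be made concrete with a minimal sketch, assuming an invented offer catalogue and payload format: no edge infrastructure is needed, because the endpoint supplies its own coordinates to a central server.

```python
# Sketch of a "location-specific" service served from a central facility:
# the device reports its own GPS fix, and the server keys content off it.
# The offer catalogue and payload format are invented for illustration.
import json

LOCAL_OFFERS = {
    # (rounded lat, rounded lon) -> offer shown to devices in that area
    (51.5, -0.1): "London: 2-for-1 cinema tickets",
    (48.9, 2.4): "Paris: free museum entry this weekend",
}

def handle_request(payload: str) -> str:
    """Central server: pick content based on the location the device reports."""
    req = json.loads(payload)
    key = (round(req["lat"], 1), round(req["lon"], 1))
    return LOCAL_OFFERS.get(key, "No local offer")

# Client side: the endpoint knows its own position (e.g. from GPS) and sends it.
request = json.dumps({"lat": 51.5074, "lon": -0.1278})
print(handle_request(request))  # → London: 2-for-1 cinema tickets
```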
This is now starting to change. In the last three years, various market and technical trends have re-established the desire for localised computing. Standards have started to evolve, and early examples have emerged. Multiple groups of stakeholders – telcos and their network vendors, application developers, cloud providers, IoT specialists and various others have (broadly) aligned to drive the emergence of edge/fog computing. While there are numerous competing architectures and philosophies, there is clearly some scope for telco-oriented approaches.
While the origins of MEC (and the original “M”) come from the mobile industry, driven by visions of IoT, NFV and network-slicing, the pitch has become more nuanced, and now embraces fixed/cable networks as well – hence the renaming to “multi-access”.
Before discussing specific technologies and use-cases for MEC, it is important to contextualise some other trends in telecoms that are helping build a foundation for it:
Telcos need to reduce costs & increase revenues: This is a bit “obvious” but bears repeating. Most initiatives around telco cloud and virtualisation are driven by these two fundamental economic drivers. Here, they relate to a desire to (a) reduce network capex/opex by shifting from proprietary boxes to standardised servers, and (b) increase “programmability” of the network to host new functions and services, and allow them to be deployed/updated/scaled rapidly. These underpin broader trends in NFV and SDN, and then indirectly to MEC and edge-computing.
New telco services may be inherently “edge-oriented”: IoT, 5G, vertical enterprise applications, plus new consumer services like IPTV also fit into both the virtualisation story and the need for distributed capabilities. For example, industrial IoT connectivity may need realtime control functions for machinery, housed extremely close by, for millisecond (or less) latency. Connected vehicles may need roadside infrastructure. Enterprises might demand on-premise secure data storage, even for cloud-delivered services, for compliance reasons. Various forms of AI (such as machine vision and deep learning) involve particular needs and new ways of handling data.
The “edge” has its own context data: Some applications are not just latency-sensitive in terms of response between user and server, but also need other local, fast-changing data such as cell congestion or radio-interference metrics. Going all the way to a platform in the core of the network, to query that status, may take longer than it takes the status to change. The length of the “control loop” may mean that old/wrong contextual data is given, and the wrong action taken by the application. Locally-delivered information, via “edge APIs” could be more timely.
Not all virtual functions can be hosted centrally: While a lot of the discussion around NFV involves consolidated data-centres and the “telco cloud”, this does not apply to all network functions. Certain things can indeed be centralised (e.g. billing systems, border/gateway functions between core network and public Internet), but other things make more sense to distribute. For example, Virtual CPE (customer premises equipment) and CDN caches need to be nearer to the edge of the network, as do some 5G functions such as mobility management. No telco wants to transport millions of separate video streams to homes, all the way from one central facility, for instance.
There will therefore be localised telco compute sites anyway: Since some telco network functions have to be located in a distributed fashion, there will need to be some data-centres either at aggregation points / central offices or final delivery nodes (base stations, street cabinets etc.). Given this requirement, it is understandable that vendors and operators are looking at ways to extend such sites from the “necessary” to the “possible” – such as creating more generalised APIs for a broader base of developers.
Radio virtualisation is slightly different to NFV/SDN: While most virtualisation focus in telecoms goes into developments in the core network, or routers/switches, various other relevant changes are taking place. In particular, the concept of C-RAN (cloud-RAN) has taken hold in recent years, where traditional mobile base stations (usually called eNodeBs) are sometimes being split into the electronics “baseband” units (BBUs) and the actual radio transmit/receive components, called remote “radio heads” (RRHs). The BBUs from a number of eNodeBs can be clustered together at one site (sometimes called a “BBU hotel”), with fibre “front-haul” connecting them to the RRHs. This improves the efficiency of both power and space utilisation, and also means the BBUs can be combined and virtualised – and perhaps have extra compute functions added.
Property business interests: Telcos have often sold or rented physical space in their facilities – colocation of equipment racks for competitive carriers, or servers in hosting sites and data-centres. In turn, they also rely on renting space for their own infrastructure, especially for siting mobile cell-towers on roofs or walls. This two-way trade continues today – and the idea of mobile edge computing as a way to sell “virtual” space in distributed compute facilities maps well to this philosophy.
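The “edge context” driver above lends itself to a short sketch. It assumes a hypothetical local endpoint and field names – ETSI MEC does define a real Radio Network Information API, but its details differ – and shows an application adapting video bitrate to fast-changing cell load:

```python
# Sketch: an application consuming fast-changing radio context from a local
# "edge API". Endpoint, field name and thresholds are hypothetical.

def choose_bitrate(cell_load_percent: float) -> str:
    """Pick a video bitrate from the locally reported cell load."""
    if cell_load_percent > 80:
        return "480p"   # congested cell: back off before buffering starts
    if cell_load_percent > 50:
        return "720p"
    return "1080p"

# In practice the load figure would come from an HTTP GET against an edge
# host, e.g. something like requests.get("http://mec-host/rni/v1/cell_load").
# Queried locally the value is fresh; fetched across the network core it may
# already be stale by the time it arrives - the "control loop" problem above.
print(choose_bitrate(90.0))  # → 480p
```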
Background market drivers for MEC
Why Edge Computing matters
The ever-wider definition of “Edge”
Wider market trends in edge-computing
Use-cases & deployment scenarios for MEC
Addressing vertical markets – the hard realities
MEC involves extra costs as well as revenues
Current status & direction of MEC
Standards path and operator involvement
Conclusions & Recommendations
Figure 1: A taxonomy of mobile edge computing
Figure 2: Even within “low latency” there are many different sets of requirements
Figure 3: The “network edge” is only a slice of the overall cloud/computing space
Figure 4: Telcos can implement MEC at various points in their infrastructure
Figure 5: Networks, Cloud and IoT all have different starting-points for the edge
Figure 6: Network-centric use-cases for MEC suggested by ETSI
Figure 7: MEC needs to integrate well with many adjacent technologies and trends
Application programming interfaces (APIs) are a central part of the mobile and cloud-based app economy. On the web, APIs serve to connect back-end and front-end applications (and their data) to one another. While often treated as a technical topic, APIs also have tremendous economic value. This was illustrated very recently when Oracle sued Google for copyright infringement over the use of Oracle-owned Java APIs during the development of Google’s Android operating system. Even though Google won the case, Oracle’s quest for around $9 billion showed the huge potential value associated with widely-adopted APIs.
The API challenge facing telcos…
For telcos, APIs represent an opportunity to monetise their unique network and IT assets by making them available to third parties. This is particularly important in the context of declining ‘core’ revenues caused by cloud and content providers bypassing telco services. This so-called “over the top” (OTT) threat forces telcos both to partner with third parties and to create their own competing offerings in order to dampen the decline in revenues and profits. With mobile app ecosystems maturing and increasingly extending beyond smartphones into wearables, cars, TVs, virtual reality, productivity devices and so forth, telcos need to embrace these developments to avoid becoming ‘plain vanilla’ connectivity providers – a low-margin, low-growth business.
However, thriving in this co-opetitive environment is challenging for telcos, because major digital players such as Google, Amazon, Netflix and Baidu – and a raft of smaller developers – have an operating model and culture of agility and fast innovation. Telcos need to become easier to collaborate with, and a systematic approach to API management and API exposure should be central to any telco partnership strategy and wider ‘transformation programme’.
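As a concrete illustration of “exposing network assets”, the sketch below builds the kind of REST call a third-party developer might make to send an SMS through a telco API. The host, path and field names are invented for this example; real programmes (for instance the GSMA’s OneAPI) followed a broadly similar pattern.

```python
# Hypothetical telco API: a third-party developer sends an SMS with one
# authenticated REST call. Host, path and field names are invented.
import json
import urllib.request

def build_sms_request(base_url: str, api_key: str, to: str, text: str):
    """Construct (without sending) the POST a developer would make."""
    body = json.dumps({"to": to, "text": text}).encode()
    return urllib.request.Request(
        url=f"{base_url}/sms/v1/outbound",
        data=body,
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
        method="POST",
    )

req = build_sms_request("https://api.example-telco.com", "demo-key",
                        "+440000000000", "Your order is ready")
print(req.get_full_url())  # urllib.request.urlopen(req) would send it
```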
…and Dialog’s best-practice approach
In this report, we will analyse how Dialog, Sri Lanka’s largest operator, has adopted a two-pronged API implementation strategy. Dialog has systematically exposed APIs:
Externally in order to monetise in partnership with third-parties;
Internally in order to foster agile service creation and reduce operational costs.
STL Partners believes that this two-pronged strategy has been instrumental in Dialog’s API success and that other operators should explore a similar strategy when seeking to launch or expand their API activities.
Dialog Axiata has steadily increased the number of API calls (indexed)
Source: Dialog Axiata
In this report, we will first cover the core lessons that can be drawn from Dialog’s approach and success and then we will outline in detail how Dialog’s Group CIO and Axiata Digital’s CTO, Anthony Rodrigo, and his team implemented APIs within the company and, subsequently, the wider Axiata Group.
The value of APIs
The API challenge facing telcos…
…and Dialog’s best-practice approach
5 key ‘telco API programme’ lessons
Background: What are APIs and why are they relevant to telcos?
The telecoms industry’s API track record is underwhelming
The Dialog API Programme (DAP)
Ideamart: A flexible approach to long-tail developer engagement
Axiata MIFE – building a multipurpose API platform
Drinking your own champagne: Dialog’s use of APIs internally
Expanding MIFE across Axiata opcos and beyond
Conclusion and outlook
Figure 1: APIs link backend infrastructure with applications
Figure 2: The explosive growth of open APIs
Figure 3: How a REST API works its magic
Figure 4: DAP service layers
Figure 5: Five APIs are available for Idea Pro apps
Figure 6: Idea Apps – pre-configured API templates
Figure 7: Ideadroid/Apptizer allows restaurants to specify food items they want to offer through the app
Figure 8: Ideamart’s developer engagement stats compare favourably to AT&T, Orange, and Vodafone
Figure 9: Steady increase in the number of API calls (indexed)
Figure 10: Dialog Allapps on Android
Figure 11: Ideabiz API platform for enterprise third-parties
Figure 12: Dialog Selfcare app user interface
Figure 13: Dialog Selfcare app functions – share in total number of hits
Figure 14: Apple App Store – Dialog Selfcare app ratings
Figure 15: Google Play Store – Dialog Selfcare app ratings
Figure 16: MIFE enables the creation of a variety of digital services – both internally and externally
At the beginning of 2013, we issued an Executive Briefing on the proposed take-over of Sprint-Nextel by Softbank, which we believed to be the starting gun for disruption in the US mobile market.
At the time, not only was 68% of revenue in the US market controlled by the top two operators, AT&T and Verizon, it was also an unusually lucrative market in general, being both rich and high-spending (see Figure 1, taken from The Future Value of Voice & Messaging strategy report). Further, the great majority of net-adds were concentrated among the top two operators, with T-Mobile USA flat-lining and Sprint beginning to lose subscribers. We expected Sprint to initiate a price war, following a plan similar to Softbank’s in Japan: separating the cost of devices from that of service, making sure to offer the hero smartphone of the day, and offering good value on data bundles.
Figure 1: The US, a rich country that spends heavily on telecoms
Source: STL Partners
In the event, the fight for control of Sprint turned out to be more drawn out and complex than anyone expected. Add to this the complexity of Sprint’s major network upgrade, Network Vision, as shown in Figure 2, and the fact that the plans changed in order to take advantage of Softbank’s procurement of devices for the 2.5GHz band, and it is perhaps less surprising that we have yet to see a major strategic initiative from Sprint.
Figure 2: The Softbank deal brought with it major changes to Network Vision
Source: Sprint Q3 earnings report
Instead, T-Mobile USA implemented a very similar strategy, having completed the grieving process for the AT&T deal and secured investment from DTAG for their LTE roll-out and spectrum enhancements. So far, their “uncarrier” strategy has delivered impressive subscriber growth at the expense of slashing prices. The tale of 2013 in terms of subscribers can be seen in the following chart, updated from the original Sprint/Softbank note. (Note that AT&T, VZW, and T-Mobile have released data for calendar Q3, but Sprint hasn’t yet – the big question, going by the chart, will be whether T-Mobile has overtaken Sprint for cumulative net-adds.)
Figure 3: The duopoly marches on, T-Mobile recovers, Sprint in trouble
Source: STL Partners
However, Sprint did have a major strategic initiative in the last two years – and one that went badly wrong. We refer, of course, to the shutdown of the Nextel half of Sprint-Nextel.
Closing Nextel: The Optimistic Case
There is much that is good inside Sprint, which explains both why so much effort went into its “turnaround” and why Masayoshi Son was interested. For example, its performance in terms of ARPU is strong, to say the least. The following chart, Figure 4, illustrates the point. Total ARPU in post-paid, which is most of the business, is both high at just under $65/mo and rising steadily. ARPU in pre-paid is essentially flat around $25/mo. The problem was Nextel and specifically, Nextel post-paid – while pre-paid hovered around $35/mo, post-paid trended steadily down from $45/mo to parity with pre-paid by the end.
Figure 4: Sprint-Nextel ARPU
Source: STL Partners
The difference between the two halves of Sprint that were doing the work here is fairly obvious. Nextel’s unique iDEN network was basically an orphan, without a development path beyond the equivalent of 2005-era WCDMA speeds, and without smartphones. Sprint CDMA, and later LTE, could offer wireless broadband and could offer the iPhone. Clearly, something had to be done. You can see the importance of smartphone adoption from the following graphic, Figure 5, showing that smartphones drove ARPU on Sprint’s CDMA network.
Figure 5: Sprint CDMA has reached 80% smartphone adoption
Source: STL Partners
It is true that smartphones create opportunities to substitute OTT voice and messaging, but this is less of a problem in the US. As the following chart from the Future Value of Voice and Messaging strategy report shows, voice and messaging are both cheap in the US, and people spend heavily on mobile data.
Figure 6: US mobile key indicators
Source: STL Partners
So far, the pull effect of better devices on data usage has helped Sprint grow revenues, while it also drew subscribers away from Nextel. Sprint’s strategy in response to this was to transition Nextel subscribers over to the mainline platform, and then shut down the network, while recycling savings and spectrum from the closure of Nextel into their LTE deployment.
Closing Nextel: The Scoreboard
The Double Dippers
The Competition: AT&T Targets the Double Dippers
Developers, Developers, Devices
Figure 1: The US, a rich country that spends heavily on telecoms
Figure 2: The Softbank deal brought with it major changes to Network Vision
Figure 3: The duopoly marches on, T-Mobile recovers, Sprint in trouble
Figure 4: Sprint-Nextel ARPU
Figure 5: Sprint mainline has reached 80% smartphone adoption
Figure 6: US mobile key indicators
Figure 7: Tale of the tape – something goes wrong in early 2012
Figure 8: Sprint’s “recapture” rate was falling during 3 out of the 4 biggest quarters for Nextel subscriber losses, when it needed to be at its best
Figure 9: Nextel post-paid was 72% business customers in 3Q 2011
Figure 10: The loss of high-value SMB customers dragged Sprint’s revenues into negative territory
Figure 11: The way mobile applications development used to be
Summary: Vodafone 360 was meant to be a new, social-network centred approach to managing the customer interface. Unfortunately, it was also bug-ridden and dogged by a lack of clarity of purpose. Now, its availability on Android Market and iTunes may create a strategic opportunity for Vodafone to access more customers.
NB You can download a PDF copy of this 15-page note here.
Vodafone 360, one of the currently trendy “all your social networks in one app” products, launched in 2009 to considerable publicity and enthusiasm – not least from us. However, thanks to a variety of problems at the tactical and technical levels, it has failed to achieve the scale required for a successful platform. Vodafone is now trying to resolve this, notably by integrating 360 into the Android Market and iTunes as an app in its own right.
In this note, we discuss this move, and the possibilities opened up by repackaging operator and partner products into a pure software user experience that can be distributed to the user bases of very large app stores. This, we argue, creates a horizontal service layer that reaches across devices and connectivity providers, essentially implementing the vision laid out here by Giles Corbett of Orange’s innovation group.
This may also be an innovative way of generating relevant customer data, an alternative to the extremely complex processes of data federation and database systems integration typically seen in customer-data projects. Turning Vodafone into an app may be the answer.
At the 7th Telco 2.0 Executive Brainstorm, held in London in November 2009, Vodafone’s director of new media, Bobby Rao, presented their new social network product – Vodafone 360. We were enthusiastic. Why?
It was good to see an operator innovating
Rather than trying to bar users from going to their favourite Web services, or extract a tax from Google, Vodafone was trying to improve its users’ mobile Web experience and facilitate their interactions with Facebook, YouTube and friends. The technology approach was sensible, using Web 2.0 rather than RCS clients and such things. The Linux-based handsets had a truly impressive user interface.
It was an open development platform
Vodafone was embracing developers, making use of open-source technology, and doing things like integrating carrier billing into the content and app-store elements of the service so that their upstream customers could get paid. Using the OMTP’s BONDI standard for access to device capabilities was sensible.
It was good to see an operator focusing on communications, rather than dollops of “content”
The applications for Vodafone 360 were all about communication of one kind or another – instant-messaging, status-updating, sharing location, photos, and other media. It was even suggested that it might grow into voice at some point.
It was a positive proposal
Rather than just barricading themselves in the telco bunker, or reaching for the symptomatic relief of more handset subsidy, Vodafone was actually trying something new and interesting. And that’s always worth watching.
A False Start…
8 months on, it would be unfair to call Vodafone 360 a flop, or call out the Telco 2.0 Crash Investigator, but it is hard to say that it’s been a success. Perhaps, at this point, it’s more of a case for Telco 2.0 Safety Event Reporting rather than Crash Investigation. But user adoption has been slow, there has been much negative comment, and the developer community has hardly caught fire.
Vodafone’s own actions speak volumes; they rapidly downsized the space in their retail outlets that was devoted to 360 in order to make more room for iPhones. Actually, there were signs very early on that the company’s senior management might not have been fully committed – despite the huge Vodafone UK ad budget, the initial push for 360 was hardly impressive.
Negative comment piled up; there were reports of very high returns, a buggy user experience, and a number of odd decisions. For example, the photo-sharing element didn’t support Flickr, the world’s most popular photo-sharing site, because “nobody used it”. Contacts ingestion, a key feature for any social application, was heavily criticised.
“This is what racked me off the most. After getting all my contacts merged and sorted I found that at random times I would log back on to the 360 website and find either duplicated contacts appear or the list gets shorted and a load have been deleted. Now this is grass roots basics for a contacts management program. It stores them safely and accurately and 360 does not. I therefore cannot trust it with my information. It’s a good job I keep my contacts backed up in Outlook…”
There was no support for Twitter at launch – this is telling, as the process of posting a status update to Twitter can be implemented in one line of code on a Linux/Unix system like the Samsung H1s. It’s not rocket science:
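A reconstruction of roughly what that one-liner looked like, assuming the basic-auth status-update endpoint Twitter offered at the time (long since retired; the credentials and status text here are placeholders):

```python
# The 2009-era Twitter API accepted a status update as a single basic-auth
# POST. That v1 endpoint is long retired; this is a historical sketch only.
# The shell equivalent of the day was roughly:
#   curl -u user:pass -d status="Hello" http://twitter.com/statuses/update.xml
import urllib.request

req = urllib.request.Request(
    "https://api.twitter.com/1/statuses/update.json",
    data=b"status=Hello+from+my+H1",
    headers={"Authorization": "Basic dXNlcjpwYXNz"},  # base64("user:pass")
)
print(req.get_method(), req.get_full_url())  # built but not actually sent
```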
The most tangible sign of this is the decision to integrate the appstore element of 360 with Android Market. With 90,000 application SKUs in the Market as opposed to 8,000 in 360, this is no surprise. It could also be read as an admission of defeat. Wasn’t part of the point of 360 that it would be the commercial spearhead for JIL, BONDI, LiMo, and the related telco-sponsored TLAs like WAC?
Obviously, the prospect of adding some 90,000 new apps with a stroke of the pen is attractive. Werner Vogels, CTO of Amazon.com, considers “selection” – that is to say, choice or variety – to be one of the critical “flywheels” that drives the growth of platform businesses. In other words, it is a source of increasing returns to scale, as the network effect between many buyers and sellers comes into play.
The market leader, Apple’s App Store, counted some 225,000 SKUs as of 7th June 2010. Android Market had reached 90,000 by the 11th of July; the white-label app store provider GetJar offers 60,000. As Amazon’s experience would suggest, there appear to be increasing returns to scale – Apple has so far counted 5 billion transactions through the App Store in around two years, while GetJar reports 900 million downloads over a significantly longer period of time. In its Q2 2010 results, Nokia reported that the Ovi Store showed 13,000 SKUs and a run rate of 1.7m downloads/day one year after launch; assuming a linear trend, we estimate that this equals 310,250,000 downloads since launch.
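The Ovi figure is a simple linear-ramp estimate: if the daily run rate grew linearly from zero to 1.7 million over the store’s first year, cumulative downloads equal the average daily rate multiplied by the number of days.

```python
# Linear-ramp estimate: run rate grows from 0 to 1.7m downloads/day over one
# year, so cumulative downloads = average daily rate x number of days.
peak_rate = 1_700_000                     # downloads/day after one year
days = 365
cumulative = (0 + peak_rate) / 2 * days   # average of start and end rates
print(f"{cumulative:,.0f}")               # → 310,250,000
```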
Tactical Execution: There Is No Such Word as “Unlaunch”
But the Android integration is just as important from the point of view of hardware. To the end user, mobile is all about hardware – one of the numerous lessons from Apple is the enduring centrality of shiny gadgets in any mobile marketing effort. Arguably, there are two models for success here.
Apple- or RIM-like – the superstar device
If your service (including the core telco services) is going to be tied to one specific device, it is obviously vital that the device be outstanding. Close coupling between the device and the service means that you can control more of the value chain, and also that you can control the user experience more closely. It also means that the rest of the value chain – specifically the hardware and device software elements – controls you.
If the devices are subpar, or simply drowned out in the iHubbub, the service will be too. This has consequences for tactics as well as strategy. The marketing, advertising, and retail effort has to push the device as much as it does the service. The supply chain, activation, and support infrastructure must be ready. And most of all, the device has to be ready on launch day – you can’t afford a slow start.
Android-like – the teeming bazaar
Alternatively, you can concentrate your effort on service, software, and tariffs, and go for the largest possible range of devices. This is the Android (and also Nokia) strategy.
It permits you to hedge your bets, creates more scope for adjustment to changing circumstances, and avoids getting into a creepy clinch with any particular vendor. It also precludes the sort of close control of the user experience that the BlackBerry-like strategy provides, unless this can be done entirely in software.
These two approaches intersect with two models of go-to-market tactics:
1. The big bang!
We’ve all seen it – Steve Jobs strides onto the podium at MacWorld as the cameras click and produces the new shiny from his pocket. Big-budget videos. Publicity stunts. Basically, it’s a huge pre-planned event, backed up by an integrated media operation cued to peak at that moment and linked behind the scenes with a carefully prepared supply chain. The advantage is, of course, concentration of effort in space and time.
The disadvantage is that once the retailers fill their stocks, and the production servers are fired up, you’re committed to going through with the launch. The flop will be all the bigger for all the concentrated effort if anything goes wrong.
2. Permanent beta
This is the anti-launch; rather than trying to seize everyone’s attention, the idea is to recruit a select band of early adopters, gradually build scale, and carry out kaizen over the medium term. Google is famous for it, as are games companies in general and the open-source world. It allows a maximum of flexibility, and permits adaptation as you go along. There is a risk that the product will never catch on, but that risk is always there.
There is, to a rough approximation, a mapping between these pairs – the superstar device option tends to require a big bang, big day go-to-market plan. It’s possible to integrate the two in that you start with the beta, and move on to a full launch when it passes some project milestone; we could call that scenario the “rolling start”. However, it’s impossible to do the opposite and move from a launch to a beta.
Unfortunately, Vodafone 360 didn’t really succeed in going for either a big bang or permanent beta approach; rather than launching 360 with one superstar device (perhaps one of the top-end Androids, or even the iPhone), or else pushing it out across the board, they chose two rather specialised devices. If Vodafone made a major publicity push, it didn’t succeed in getting the public’s attention; it did, however, succeed in generating enough publicity that everyone noticed the bugs.
Integrating into Android Market has the effect of definitively plumping for a teeming bazaar strategy, going for device diversity. It also means that Vodafone 360 will have to rely on implementing its features and user experience as a software client on the device.
But this could be a major strategic opportunity.
What if we were to turn 360 through 180 degrees?
If you can distribute 360 applications through the Android Market, you can also productise 360 itself as a software application, and then distribute it through the Market.
This would give Vodafone an access route to the global Android user base. A detail of 360 we liked originally was that it isn’t restricted to Vodafone customers – distributing it as an application on the Android Market would take this and go further. So far, the separation of access, enablers, and services – the horizontalisation of telecoms – has mostly benefited vendors, content providers, and software developers. But this doesn’t have to be true. Converting its customer-facing product into a software application would let Vodafone play at that game, too.
This is very similar, in some ways, to our view of Google’s strategy. Google is trying to extend into the middle of interactions across a wide range of markets, taking a thin layer of value between buyer and seller; Vodafone 360 could capture a similar thin layer of value from other operators, by providing a better interface for a wide range of online services.
As well as creating a Vodafone access route onto devices that don’t live on its network, 360 might also have important consequences in terms of customer data. It is well placed to capture information on how users interact with the services it talks to; it will be only more so on Android with the range of interfaces it provides for collecting social-graph and location data. In fact, it’s fairly trivial to have an Android app receive notifications when the network signal strength changes – it could even be a way of capturing real-world network quality data.
Operators are still struggling to get a grip on the piles of data they collect – the stereotype example is the operator with dozens of billing systems, some of which are 20 years old. Federating data across these hugely complex legacy systems-of-systems amounts to a major systems integration project as well as a significant software development and data-management challenge. There is a strong argument that it might be easier to solve this at the mobile application level, creating a new edge interface at which customer data is generated, and possibly also gaining data created by customers you don’t yet have for the core services.
After all, that’s precisely what Google did with Google Ads – rather than trying to, say, extract information from tens of thousands of websites’ server logs, they simply got their users to declare their interests as search strings and matched ads to them. So there’s a possible play for data-enriched advertising, especially as in-application ads become more common.
With this “Vodafone bridgehead” onto Android devices, there are many other opportunities. Back at the inception of 360, we noted that Vodafone was suggesting that it might eventually include a voice element. In our recent Ribbit note, we quoted one Ribbit Mobile user as saying that he wanted it to “take over the entire dialler function” of his iPhone. It is entirely possible to do this in Android. As well as providing call management, better voicemail, and integration with other social networks and contacts lists, this could use something along the lines of carrier preselection, rather as Google Voice does, to offer competing call rates.
Android devices are highly effective WLAN finders; another option would be to make use of the GAN standard and route SIP calls via Vodafone while a WLAN hotspot is available. This would make it possible to create an application that grabs the user, creates a new source of customer data, captures minutes of use, or at the very least, denies them to the enemy. We referred to Ribbit Mobile; in fact, the service could actually be implemented with Ribbit’s technology under licence.
But Vodafone could do better than that; they already have a hosted unified-comms product, Vodafone One. Just as Ribbit, being a cloud service, fits into BT’s existing sales and provisioning processes for SMBs and enterprises, so an Android-based Vodafone app would neatly fit the mobile features of Vodafone One into an effective package for distribution to individual customers. We’ve already seen that a wide variety of businesses and functions can be effectively distributed to the individual user base through app stores. Wrapping Vodafone in an app would allow them to leverage this.
There are other options in this line – self-care features could be embedded in the app, for example. Vodafone has already dabbled in this with its MyVodafone app; YouFon’s “manage your account through Facebook” is another pointer (see Figure 6, below). Or the carrier could use the app and its service back-end as the underlying technology for a range of niche MVNO propositions.
Another key capability that Vodafone could make use of is its existing pre-pay infrastructure, both the OSS and other IT resources behind the scenes and its networks of reseller agents. At the moment, prepay users need a credit card to take part in the app/content ecosystem – or at least, they need to go to the trouble of entering details on a non-keyboard device or risk having them stored on an easy-to-lose, easy-to-steal device.
But Vodafone 360 already ingests credit through the existing VF pre-pay system, so it could also pay out rewards, revenue shares, or peer-to-peer transfers through the same mechanism. And, of course, they have the M-PESA system available.
Conclusions and Recommendations
The Vodafone 360 experience demonstrates the opportunities and pitfalls of moving from a traditional telco model towards one oriented towards the Internet and based on software. Initial failures, and the recent fiasco when Vodafone decided to impose a variety of 360-branded apps on its HTC Desire users as an unannounced software update, show how difficult it can be for our organisations to adapt to these challenges.
However, we can also see how this presents an opportunity to compete on the Web majors’ and Voice 2.0 players’ own terms. If operators can develop compelling new applications and services, the vendor app store/smartphone model is a valid way of distributing them and gaining access to a wider user base. Operators have specific assets, notably their PAYG and/or Mobile Money Transfer (MMT) infrastructure, that such a move can leverage – for example by opening up the app/content store to PAYG subscribers. At the same time, MMT operators can use this to deploy their product more widely by packaging it as an app.
Further, it seems to be a good general forecasting principle that major customer-data projects are harder, more expensive, and more complicated than expected. There is no reason to expect this to change, as the reasons are structural and rooted in the existing infrastructure and the politics and economics of privacy. As a result, it may be a good idea to seek new and additional ways of acquiring this data – Google, after all, didn’t start off by integrating a variety of legacy databases, but rather by creating a new user base. The Web 2.0 experience demonstrates that it is possible to derive useful data profiles from very low-touch customers.
The greatest opportunities appear to be in integrating such an approach with existing MMT, content, channel marketing, and Voice 2.0 ideas – using the app store paradigm to repackage the rest of your Telco 2.0 activities in the consumer and SMB sectors.
However, it is critical that operators master the tactical problems of execution in a space which is fundamentally different to traditional mobile. Customers’ ability to churn is significantly higher, and the fall-out from missteps will arrive much faster than most of us are used to.
NB A full PDF copy of this briefing can be downloaded here.
This special Executive Briefing report summarises the brainstorming output from the Open APIs 2.0 section of the 6th Telco 2.0 Executive Brainstorm, held on 6-7 May in Nice, France, with over 200 senior participants from across the Telecoms, Media and Technology sectors. See: www.telco2.net/event/may2009.
It forms part of our effort to stimulate a structured, ongoing debate within the context of our ‘Telco 2.0’ business model framework (see www.telco2research.com).
Each section of the Executive Brainstorm involved short stimulus presentations from leading figures in the industry, group brainstorming using our ‘Mindshare’ interactive technology and method, a panel discussion, and a vote on the best industry strategy for moving forward.
There are six other reports in this post-event series, covering the other sections of the event: Retail Services 2.0, Content Distribution 2.0, Enterprise Services 2.0, Piloting 2.0, Technical Architecture 2.0, and Devices 2.0. In addition there is an overall ‘Executive Summary’ report highlighting the overall messages from the event.
Each report contains:
Our independent summary of some of the key points from the stimulus presentations
An analysis of the brainstorming output, including a large selection of verbatim comments
The ‘next steps’ vote by the participants
Our conclusions of the key lessons learnt and our suggestions for industry next steps.
The brainstorm method generated many questions in real-time. Some were covered at the event itself and others we have responded to in each report. In addition we have asked the presenters and other experts to respond to some more specific points.
Chris Barraclough, MD, Telco 2.0 asked what the relevance of APIs was: “This is terribly technical – why should I care?” But history shows it’s a great way of generating volume on platforms. Amazon realised that easy third-party access creates more selection, which creates footfall. So merchants can squirt their catalogue into Amazon. Similarly, Betfair successfully increased liquidity on its betting platform, resulting in better prices and more transactions, by allowing independent bookies to post their whole book into the market via an API.
There’s a lot of activity just now in the industry, mostly technical so far. What are the commercial implications?
Andrew Bud, Chairman, MEF: ”Too many projects are led by the engineering function; these things must provide a business kicker for the content community.”
There are several strategic options for operators for APIs:
1. Expose a resource to third parties as a Web service. This is fairly low value – also, it has scary security/privacy implications, and there is the possibility that someone might scrape the entire database and use it for their own unspeakable ends.
2. Expose real-time data in an aggregated form. The aggregation process provides some protection from the privacy problems; real-time data rapidly becomes stale and individual data is hidden. For example, like Vodafone, you could provide TomTom with the locations of concentrations of users who are not moving, or traffic jams as we call them.
3. You can also use APIs to drive traffic through the core services, by requiring the use of your voice and messaging in conjunction with them, as BT does with Ribbit. There’s a problem, though – the price of these services is declining rapidly, and it is becoming much easier to provide them outside the telcos.
4. And then, there are new CEBPs (communications-enabled business processes). Instead of pulling data out of the telco, the upstream customers push queries or rules up into the telco, and use the result rather than processing data returned by a Web service. This permits value-based pricing and revenue-sharing business models.
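The contrast between option 1 (pull) and option 4 (push) can be sketched in code. Everything below is hypothetical – there is no standard CEBP interface, and the functions and data are invented for illustration – but it shows why pushing a rule into the operator supports value-based pricing: the telco charges per useful event, rather than exporting raw data for others to process.

```python
# Hypothetical contrast between a pull-style web service (option 1)
# and a CEBP-style pushed rule (option 4). No real telco API is described.

# Option 1: the third party pulls raw data and processes it itself.
def pull_location(subscribers, db):
    return {s: db[s] for s in subscribers}   # the telco just exports data

# Option 4: the third party pushes a rule; the telco evaluates it
# and returns only the outcome, which can be priced per event.
def push_rule(rule, db):
    return [s for s, loc in db.items() if rule(loc)]

db = {"alice": "airport", "bob": "downtown", "carol": "airport"}

# CEBP-style query: "tell me who is at the airport right now"
at_airport = push_rule(lambda loc: loc == "airport", db)
print(at_airport)  # ['alice', 'carol']
```

The commercial point is in the shape of the interface: `pull_location` hands over the whole asset, while `push_rule` keeps the data inside the operator and sells the answer.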
Questions of pricing follow from this, as part of a bigger platform pricing strategy. There are two possible levers to pull – the cost to join and the cost per transaction. Both can go to zero. There is also a third dimension – the share of pricing between upstream and downstream.
Keith Willetts, Chairman, TM Forum: ”B2B activity is fundamental to the 2-sided business model. It involves lots of transactions, and lots of money.”
MS Windows – has a high entry cost for end users (buy a computer), zero incremental cost, and zero entry cost for application developers;
Ebay – charges a listing fee, plus a 10% transaction commission;
Amazon – flat rate joining fee, then a commission on transactions;
Google – access free, variable transaction fee on upstream customers;
Premium SMS – access and management fees are usually rather higher than Amazon’s or Ebay’s, plus a per-transaction fee of at least 40% to the company issuing the SMS.
Our chart here identifies the Zone of Death – fragmentation means joining costs so high that there is no scale, hence transaction fees have to be really high to make money, and therefore scale remains low and joining costs high!
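The dynamics of those two levers can be sketched with a toy model. The numbers below are purely illustrative assumptions, not figures from the session, but they show why the Zone of Death combination of high joining costs and high transaction fees is self-defeating for developers:

```python
def developer_profit(join_cost, fee_share, transactions, value_per_txn):
    """Net profit for a developer on a platform (illustrative model)."""
    return transactions * value_per_txn * (1 - fee_share) - join_cost

# A low-friction platform: free to join, 10% commission (Ebay-like)
open_platform = developer_profit(join_cost=0, fee_share=0.10,
                                 transactions=1_000, value_per_txn=1.0)

# A "Zone of Death" platform: a high joining cost (e.g. per-operator
# integration work) plus a 40% cut (premium-SMS-like)
fragmented = developer_profit(join_cost=5_000, fee_share=0.40,
                              transactions=1_000, value_per_txn=1.0)

print(open_platform)  # 900.0  -> profitable at modest scale
print(fragmented)     # -4400.0 -> loss-making; scale never builds
```

At the fragmented platform's fee levels, the developer would need several times the transaction volume just to break even – which is exactly the scale that the high joining cost prevents from emerging.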
Nicolas de Cordes, SVP Corporate Strategy, Orange said that the customer is core to Orange’s business. The business grows through customer interactions. But what happens after Telco 1.0? Without knowing it we’ve built major assets: network, operations, information resources, and trust. Trust is vital for commerce. It’s an unconscious element: which bank would I put my information in? Operators need to consider how to be trusted partners for customers.
As a result, Orange has established three lines of business:
1. Content Services; content distribution and management
2. Vertical Services – notably healthcare. Also developing applications for specific business processes using, for example, M2M.
3. Service Management – wholesale, enterprise, and perhaps home networks.
We recognised that we couldn’t innovate at the speed of the Internet. So we created Orange Partner. There were some embryonic initiatives in the UK and France – these were rolled together into the new organisation. Its early aims were to help suppliers interact with Orange mobile, then to aid suppliers to interact with Orange across all platforms. Now, we’re aiming for developers in general – so far we’ve got 65,000 registrations.
Orange Partner has three focuses:
1. Getting the API out of the door;
2. Open Innovation (which links new businesses to stakeholders in Orange and its internal VC group);
3. App Shop, the app store.
So far, we’ve opened 30 APIs in four categories; as an example, you can meet members of AlloSortir near you using SMS and location, or call a taxi using click-to-call, SMS, and location. We want to see major brands like DHL and FedEx using these APIs this year.
We now have 50 million subscribers with access to certified, filtered apps through the Application Shop.
Karl Bream, Head, Corporate Strategic Marketing, Alcatel-Lucent introduced some problems from online games. He showed a screenshot of a tool that identifies how well the network is performing. Each player joining a game is assessed on connection quality, so that it is more attractive to play opponents with higher-quality connections. More than 200ms latency, and you’ve got trouble – you’ll get shot before you can duck.
There are 90 million online gamers; 14% of our sample was willing to pay for a better service: one which would guarantee quality of service to support their passion. If this could be exposed as a product, we could place a value on it of about €5-10 per month. We think there’s an alternative model here, as well; an advertiser could pay to boost the QoS, sponsoring a game or a group of gamers, or putting up a logo saying ”quality boosted by…”
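As a sanity check, the implied market size follows directly from the figures quoted in the session. The annualisation over 12 months is our own assumption:

```python
gamers = 90_000_000            # online gamers cited in the session
willing_share = 0.14           # share of the sample willing to pay for QoS
price_low, price_high = 5, 10  # EUR per month, the quoted range

payers = gamers * willing_share                  # ~12.6 million potential subscribers
annual_low = payers * price_low * 12             # ~EUR 0.76bn per year
annual_high = payers * price_high * 12           # ~EUR 1.51bn per year

print(f"{payers:,.0f} payers, EUR {annual_low/1e9:.2f}-{annual_high/1e9:.2f}bn/year")
```

Even at the bottom of the price range, that is a material revenue pool – though it assumes the stated willingness to pay survives contact with an actual bill.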
Other possible APIs: ”Share the Moment” (video-sharing), which requires QoS and could earn about €0.15-0.20 per use, ”My Media Vault” (storage in the cloud), which requires content and context and might attract €7-9 a month per user, ”Put Me In Control” (remote-desktop) which requires context, security/privacy. There’s lots of value out there to capture, and somebody will do it.
”Video will break the bank”; the application and content providers do understand that there is a problem. But they need simple developer platforms – and new business models. Revenue sharing is acceptable, but they are concerned about the terms. There is a serious gap in their awareness of operators’ capabilities.
Keith Willetts, Chairman, TM Forum: ”1,000 operators and 30 APIs each – that is no standard. Look at USB – there’s only one of it.”
The final session at the event was essentially a “battle of the APIs” – and of the commercial models that might be used to exploit them. Over the last two years, various companies and groups from the Telco industry have started moving towards “exposable” open platforms, sometimes individually, and sometimes through concerted action by groups of operators. We have seen network platform APIs, device APIs, billing APIs and many others – and these are just the ones controlled by the Telcos themselves, excluding the manufacturers’ and software vendors’ own initiatives. The future will see even more emerge – for example, even femtocells might be “programmable” for new services.
This creates a number of problems. First is the overall noise and confusion from a set of uncoordinated Telco initiatives, perhaps even from multiple parts of the same organisations. Another huge issue comes when one considers that many new applications or services could be developed in a number of different ways. Take the provision of location of a mobile user to an application, as an example. This could be performed in the network (in a variety of ways, such as triangulation or Cell ID), on the device through access to a GPS chip or by mapping against a database of cell-tower locations.
So this session brought together a number of Telco-based “API-mongers” from across the value chain, with the hope of getting either alignment or clear water between them. They included the GSMA, OMTP, TMF and MEF, plus a notional upstream customer, the BBC and a “solo” operator, Orange.
There remains a healthy measure of indignation and jealousy at Internet players (often vilified as “over the top players”), and how they are pushing their own APIs to developers, often with more success.
· Do I trust my ISP / mobile provider or an OTT such as Google, Yahoo! and Amazon. [#9]
· Hmmm…so if I want to reach the whole world via Telco APIs I’m going to need to code in interfaces in my OTT app to support each operator’s flavour. Don’t you think that this will inhibit the creation of truly mass market services? There’s a clear need for standardization here and better collaboration to build interoperability. [#21]
· I can’t believe that people in this room are still referring to their future customers as ‘OTT Players’ which is as derogatory as calling Telco’s ‘pipe salesmen’, or ‘under the floor players’. Unless you show some respect to these companies, do you really think they will prefer to do business with you, rather than destroy you? [#22]
Orange’s comparison of its platform with a bank’s was perceived more as walled-garden than open.
· Orange: I can take my money out of a bank’s safe deposit box and put it somewhere else if I want. Will you do the same with your customers’ stored data and content? [#12]
o 12, do you know any customer that has asked their Telco for the CDRs when they port their number? Aren’t you making an issue when there isn’t one? [#14]
· The bank also borrows my money and loans it out to make money/interest. that relationship is based on trust. [#13]
o Re 13 unlike a bank, a Telco quite often doesn’t control the application which manages the asset, but rather outsources it, so the trust may be diminished if the customer has transparency into who has access to their customer data. [#15]
o Re 13: And banks are regulated on repaying my money [#24]
There is also scepticism about whether many operators (or industry groups) really understand developers.
· The Telco can get value from the third party for the location API via a % of the transactions that result. [#7]
o 7, only if the market will pay for this. not always easy to measure the value of a transaction in which location API plays a part – it is not like a product sale! [#10] [Telco 2.0 view – this is very true. If someone uses a location API to find their closest ATM machine, would the operator’s payment really be dependent on how much cash was withdrawn?]
· Which developer do you target for simple APIs? Enterprise, Telco,’2 AM stoner’? [#23]
· Developing an application against a REST API is truly trivial. The assumption that developing against a telecom API is super difficult sounds like it’s coming from somebody who hasn’t lived in the Web world in the past three years. [#25]
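To illustrate comment #25’s point, this is roughly all the client code a REST-style location lookup needs. The endpoint, URL and JSON shape here are invented for illustration – no real operator API is being described:

```python
import json
from urllib.request import urlopen

# Hypothetical operator location API: the endpoint and response format
# are illustrative assumptions, not a real telco interface.
def locate(msisdn, base_url="https://api.operator.example/v1"):
    with urlopen(f"{base_url}/location/{msisdn}") as resp:
        return json.load(resp)

# Consuming the kind of JSON such an API might return takes one line:
sample = '{"msisdn": "447700900123", "lat": 51.50, "lon": -0.12, "accuracy_m": 800}'
fix = json.loads(sample)
print(fix["lat"], fix["accuracy_m"])  # 51.5 800
```

If the barrier to adoption is this low on the client side, the hard problems are commercial (pricing, sign-up, per-operator fragmentation) rather than technical – which was exactly the thrust of the brainstorm comments.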
· I want to know if any of the people on the stage have actually written a program using a telecom API in the past year. [#26]
There still remains considerable doubt that the Telcos can achieve all the requirements of developers on their own – and if not, how they might integrate and align their API offering with those elsewhere in the ecosystem.
· Should the Telco provide vertical apps or provide APIs on which these apps can be built. [#11]
· There are so many other business applications that work on APIs, not sure why those three from Orange were selected. For instance, why not talk about credit card fraud protection using SMS for validation? [#17]
· How can one expect to discuss a new ecosystem with many different animals and have only one kind of them (Telco’s) on stage? Where are the Sony, Disney, Time Warner, Philips, CA vendors etc? [#18]
o RE: 18 you are exactly right. [#19]
o Re 18, this is true but it is a chicken and egg. Why would they bother turning up if Telco’s can’t show them any value? Advertisers have pitched up at Telco events and found Telco’s don’t offer them anything of value. We have to get our thoughts/ideas aligned to some degree before we go to these potential customers. [#20]
The actual business models behind the Telcos’ API exposure are still primitive – and sometimes contradictory, as the lively debate between OMTP (handset browser/widgets) and MEF (sender-pays data) representatives revealed.
· Needing to understand the business process of commercialising APIs merely replaces the problem at another level: why do people think that operators who have shown a distinct lack of imagination in how to innovate on top of their assets will now innovate in business or commercial models? The problem seems to be the replication of the same lack of imagination and innovation is now being displaced into the need for understanding. [#27]
· Surely APIs need to be developed as open standards so that anyone can innovate using them, like app development on the Web? This sounds like giving with one hand while taking away with another… [#28]
o Re 18 and 26, you are on track. We need more outside the ecosystem players like Apple coming in to cross pollinate with our gene pool. My guess is that apple doesn’t attend Telco events because they are worried about damaging their own gene pool with our status quo. Give some kids full artistic license at a reference acct/operator to build their playground. Lock them in a room with caffeine and pizza and a big pipe. Carriers have great toys they would like to build with. Output = Telco 2.0 [#29]
Participants were asked: Which of the following statements best reflects your views on the API efforts of the Telco industry?
1. Individual operators and cross-industry bodies are getting things about right and current API programmes will yield significant value to the Telco industry in the next 3 – 5 years.
2. Individual operators and cross-industry bodies have made a good start with their developer and API programmes, but more needs to be done to standardise approaches and to bring commercial thinking to the fore if APIs are going to generate significant value to the Telco industry in the next 3 – 5 years.
3. The current developer and API activity by individual operators and cross-industry bodies is totally inadequate and is unlikely to create value in the next 3 – 5 years.
APIs are a hot topic in the industry at present and this lively session highlighted three things very clearly:
1. There is a great deal of work being done on APIs by the operator and vendor community. There is a real sense of urgency in the Industry to make a set of cross-operator/platform/bearer/device APIs available to developers quickly.
2. There is a real risk of this API activity being derailed by the emergence of numerous independent “islands” of APIs and developer programmes. It is not uncommon for operators to have 3 or more separate initiatives around “openness” – in the network, on handsets or on home fixed-broadband devices, in the billing system and so on. Various industry bodies have taken prominent roles, usually at the level of setting requirements, rather than developing detailed standards.
Thomas Howe, CEO, Jaduka: ”Standards aren’t something we have to wait for! In the web sphere standards were something we did which worked so well that everyone said ‘that’s the standard’ and started using it. This is what happened with AJAX.”
3. It is still extremely early days for the commercial model for APIs. This is an area that the Telco 2.0 Initiative is concentrating hard on at present. It is already becoming apparent that a one-size-fits-all solution will be difficult. In line with the previous discussion about piloting Telco 2.0 services, it is important for operators to ensure that API platforms (and the associated revenue mechanisms) can service two distinct classes of user/customer:
Broad adoption by thousands/millions of developers via automated web interfaces (similar to signing up for Google Adwords or Amazon’s cloud storage & computing services);
Large-scale one-off projects and collaborations, which may require custom or bespoke capabilities (e.g. linked to subscriber data management systems or “semi-closed” / “private” APIs), for example with governments or major media companies.
It seems that certain sets of APIs are quite standalone and perhaps have simpler monetisation models – e.g. location lookups or well-defined authentication tasks. Others, such as granting 3rd-party access to specific “cuts” of subscriber data, may be more difficult to automate.
The fireworks between various panellists also illustrated an important point – there remains considerable tension between those advocating business models which are ‘content’-driven, involving the delivery of packaged entertainment and information to consumers and enterprise customers, versus those which are aimed at facilitating large numbers of new and (mostly) unknown developers who may use the platform to create ‘the next big thing’. Both business models have merit – while there is certainly value in using packaged approaches like “sender-pays data” for well-defined content types, there is also huge potential in becoming the platform of choice for unexpected ‘viral’ web applications that exploit unique Telco assets.
Dean Bubley, Telco 2.0: “I can’t believe that people in this room are still referring to their future customers as ‘OTT Players’ which is as derogatory as calling Telcos ‘pipe salesmen’, or ‘under the floor players’. Unless you show some respect to these companies, do you really think they will prefer to do business with you, rather than destroy you?”
In the short term, work needs to continue on developing the API platform, but also on evolving the attitudes and processes within the operator to support successful future business models:
Avoid pre-conceptions about the commercial model for APIs. In particular, revenue-shares and flat-rate % commissions are extremely difficult to justify, except for the most commoditised capabilities like payment, or large-scale individually-negotiated contracts;
Develop thinking around the commercial model for APIs as getting this right will drive the success of existing industry-wide API initiatives – these technical programmes will fail without input from strategists and marketers on the required frameworks;
Most operators are undergoing major programmes of transformation – e.g. around outsourcing or IP network deployment. It is critical that these actions are constantly reviewed for fit against API-type initiatives to ensure they ease their creation, and don’t create new bottlenecks or structural silos;
Recognise that individual propositions about openness often make sense when viewed in isolation – but need to be seen in a wider strategic context, including all interface points between the operator domain and the Internet/apps world;
Non-handset specialists should make an effort to understand the implications of OMTP’s BONDI, as it can support a broad set of innovative applications and business models – and may well also appeal to third party developers;
Be aware that many developers will not want to have dozens of separate relationships with individual operators – do not force them to duplicate effort. Instead, work with industry-wide groups to address their core needs;
Develop a checklist of open API “hygiene factors” that are critical for developers, such as easy app testing mechanisms, transparency in application approval/signing, clear API pricing and so forth;
Consider “eating your own dogfood” and use elements of third-party web services and APIs as part of your own offering, at least in the early stages. In particular, this could reduce time-to-market and enhance flexibility.
Longer term actions should include:
Adopt a clear strategy for API “supermarkets” or clearing-houses. Developers will ultimately want to “shop around” for APIs and capabilities – or buy bulk “packages” across multiple operators. Individual telcos will need to decide how their relationships with such API wholesale providers will evolve – or if they want to take that role for themselves;
At present, it is highly unclear how APIs will be marketed and sold. Most potential customers are not even aware that operators have something to offer them – would a software developer for utility meter-reading even consider how Telcos could add value, for example? Today’s developer programmes are insufficient, as they only tend to reach existing telco-minded companies. There will need to be much broader outreach and evangelism, perhaps piggy-backing on the developer programmes of larger IT firms like Microsoft or Oracle;
Consider the issue of openness when applied to other (possibly competing) Telcos. What happens when another operator becomes one of your developers? Or when you choose to exploit your peers’ capabilities?
Adopting a “semi-open” policy such as Apple’s, with its uncertainties over application acceptance, is high risk. It potentially mitigates the risk of “damaging” or “cannibal” apps, but also risks alienating well-intentioned developers. Think very carefully about whether you have the same “pull” as Apple (especially its monopoly on its own platform) before employing a similar strategy – being seen as a “benevolent dictator” is not common amongst Telcos;
To service the “mass-market” API segment it will be absolutely crucial to provide an easy interface and simplified payment options. A newcomer can sign up for Google Adwords, or some of Amazon’s Web Services APIs, in minutes – Telcos need to offer the same capability.
Mobile Linux foundation LiMo’s presence at the Mobile World Congress was impressive. DoCoMo demonstrated a series of handsets built on the OS, and LG & Samsung showed a series of reference implementations. But more impressive than the actual and reference handsets were the toolkits launched by Access & Azingo.
We believe that LiMo has an important role to play in the mobile ecosystem, and the platform is so compelling that over time more and more handsets based upon the OS will find their way into consumers’ hands. So why is LiMo different and important?
In a nutshell, it is not owned by anyone and is not being driven forward by any one member. Symbian and Android may also be open-source, but no-one has any serious doubt about who is paying for the majority of the resources and therefore, consciously or sub-consciously, whose business model they could favour. The LiMo founder members were split evenly between operators (DoCoMo, Vodafone and Orange) and consumer electronics companies (NEC, Panasonic & Samsung). Since then several other operators, handset makers, chip makers and software vendors have joined, and the current board contains a representative sample of organisations from across the mobile value chain.
LiMo as the Unifying Entity
The current handset OS market reminds us very much of the days when the computing industry shifted from proprietary operating systems to various mutations of Unix. Over time, more and more companies moved away from proprietary extensions and moved them into full open source. Unix was broken down into a core kernel, various drivers, a large body of middleware and a smattering of user interfaces. Value shifted to the applications and services. Today, as open source has matured, each company can decide which bits of Unix it wants to push resources into developing further and which bits it wants to include in its own distribution.
Figure 2: LiMo Architecture
The reason that Unix developed this way is pure economics – it is simply too expensive for most companies to build and maintain their own flavours of operating system. In fact, there are currently only two mainstream companies who can afford to build their own – Microsoft and Apple – and the house of Apple is built upon Unix foundations anyway. Today, we are seeing the same dynamics in the mobile space, and it is only a question of time before more and more companies shift resources away from internal projects and onto open-source ones. LiMo is the perfect home for coordinating this open-source effort – especially if the LiMo foundation allows the suppliers of code the freedom to develop their own roadmaps according to areas of perceived value and weakness.
LiMo should be really promiscuous to succeed
In June 2008, LiMo merged with the LiPS foundation – great news. It is pointless and wasteful to have two foundations doing more or less the same thing, one from a silicon viewpoint and the other from an operator viewpoint. Just before Barcelona, LiMo endorsed the OMTP BONDI specification and announced that it expects future LiMo handsets using a web runtime to support the BONDI specification. Again, great news. It is pointless to redo specification work, perhaps with a slightly different angle. These types of actions are critical to the success of LiMo – embracing the work done by others and implementing it in an open-source manner, available to all.
Compelling base for Application Innovation
The real problem with developing mobile applications today is the porting cost of supporting the wide array of operating systems. LiMo offers the opportunity to radically reduce this cost. This is going to become critical for the next generation of wirelessly connected devices, whether machine-to-machine, general consumer devices or niche applications serving vertical industries. For the general consumer market, the key is to get handsets to the consumers. DoCoMo has done a great job of driving LiMo-based handsets into the Japanese market. 2009 needs to be the year that European (e.g. Vodafone) or US (e.g. Verizon) operators deploy handsets in other markets.
It is also vital that operators make available some of their internal capabilities for use directly by LiMo handsets and allow coupling to externally developed applications. These assets are not just the standard network services, but also internal service delivery platform capabilities. This adds to the cost advantage that LiMo will ultimately have over the other handset operating systems. As in the computing world before, over time value will move away from hardware and operating systems towards applications and services. It is no accident that both Nokia and Google are moving into mobile services as a future growth area. The operators need an independent operating system to hold back their advance onto traditional operator turf.
We feel that as complexity increases in the mobile world, the economics of LiMo will become more favourable. LiMo’s market share will start to increase – the only question is the timeframe. Crucially, LiMo is well placed to get the buy-in of the most important stakeholders – operators. Operators are to mobile devices as content creators were to VHS; how well would the iPhone have done without AT&T?
- following the same path as the evolution of the computing industry;
- broad and growing industry support;
- not yet reached critical mass;
- economic incentives for application developers are still vague;
- commoditisation of the hardware and operating system layer – value moving towards applications and services;
- a way for operators to counter the growing strength of Apple, Nokia & Google;
- how can operators add their assets to make the operating system more compelling?
- how can the barriers of intellectual property ownership be overcome?