Telco digital twins: Cool tech or real value?

Definition of a digital twin

Digital twin is a familiar term with a well-known definition in industrial settings. However, in a telco setting it is useful to define what it is and how it differs from a standard piece of modelling. This research discusses the definition of a digital twin and concludes with a detailed taxonomy.

An archetypical digital twin:

  • models a single entity/system (for example, a cell site).
  • creates a digital representation of this entity/system, which can be either a physical object, process, organisation, person or abstraction (details of the cell-site topology or the part numbers of components that make up the site).
  • has exactly one twin per thing (each cell site can be modelled separately).
  • updates (either continuously, intermittently or as needed) to mirror the current state of this thing. For example, the cell site’s current performance given customer behaviour.

In addition:

  • multiple digital twins can be aggregated to form a composite view (the impact of network changes on cell sites in an area).
  • the data coming into the digital twin can drive various types of analytics (typically digital simulations and models) within the twin itself – or could transit from one or multiple digital twins to a third-party application (for example, capacity management analytics).
  • the resulting analysis has a range of immediate uses, such as feeding into downstream actuators, or it can be stored for future use, for instance mimicking scenarios for testing without affecting any live applications.
  • a digital twin is directly linked to the original, which means it can enable a two-way interaction. Not only can a twin allow others to read its own data, but it can transmit questions or commands back to the original asset.
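
To make the archetype concrete, the short Python sketch below illustrates these characteristics for a cell-site twin: a mirrored state updated from telemetry, a read interface that applications query instead of the live asset, a two-way command channel back to the original, and aggregation of several twins into a composite view. It is a minimal, hypothetical illustration (the names CellSiteTwin, apply_measurement, send_command and composite_view are our own), not a description of any particular digital twin platform.

```python
# Minimal, hypothetical sketch of an "archetypical twin" of a cell site.
# A production twin would typically sit on an IoT/streaming platform rather
# than an in-memory class; this only illustrates the design pattern.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Callable, Optional


@dataclass
class CellSiteTwin:
    """One twin per physical cell site, mirroring its current state."""
    site_id: str
    topology: dict                                   # static description, e.g. part numbers
    state: dict = field(default_factory=dict)        # latest mirrored telemetry
    last_updated: Optional[datetime] = None
    command_channel: Optional[Callable[[dict], None]] = None  # link back to the asset

    def apply_measurement(self, telemetry: dict) -> None:
        """Update the twin (continuously, intermittently or as needed)."""
        self.state.update(telemetry)
        self.last_updated = datetime.now(timezone.utc)

    def read(self) -> dict:
        """Applications query the twin instead of connecting to the live asset."""
        return {"site_id": self.site_id, **self.state}

    def send_command(self, command: dict) -> None:
        """Two-way link: push a command or question back to the original asset."""
        if self.command_channel is not None:
            self.command_channel(command)


def composite_view(twins: list, metric: str) -> float:
    """Aggregate several twins, e.g. average utilisation across cell sites in an area."""
    values = [t.state.get(metric) for t in twins if metric in t.state]
    return sum(values) / len(values) if values else 0.0


if __name__ == "__main__":
    twin = CellSiteTwin("site-001", topology={"antenna_part": "example-123"},
                        command_channel=lambda cmd: print("to site:", cmd))
    twin.apply_measurement({"utilisation": 0.62, "active_users": 140})
    print(twin.read())
    twin.send_command({"action": "rebalance_carriers"})
    print(composite_view([twin], "utilisation"))
```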

What is the purpose of a digital twin?

This research uses the phrase “archetypical twin” to describe the most mature twin category, found in manufacturing, construction, maintenance and other operating environments. These twins have existed at different levels of sophistication for the last 10 years or so and are expected to be widely available and mature in the next five years. Their main purpose is to act as a proxy for an asset, so that applications wanting data about the asset can connect to the digital twin rather than to the asset itself. In these environments, digital twins tend to be deployed for expensive and complex equipment, such as jet engines, which needs to operate efficiently and without significant downtime. For telcos, the most immediate use case for an archetypical twin is to model the cell tower and its associated Radio Access Network (RAN) electronics and supporting equipment.

The adoption of digital twins should be seen as an evolution from today’s AI models

*See report for detailed graphic.

Source: STL Partners

 

At the other end of the maturity curve from the archetypical twin is the “digital twin of the organisation” (DTO). This is a virtual model of a department, business unit, organisation or whole enterprise that management can use to support specific financial or other decision-making processes. It uses the same design pattern and thinking as a twin of a physical object but brings in a variety of operational or contextual data to model a “non-physical” thing. In interviews for this research, the consensus was that these were not an initial priority for telcos and, indeed, it was not conceptually clear whether the benefits make them a must-have for telcos in the mid-term either.

As the telecoms industry is still in the exploratory and trial phase with digital twins, a series of initial deployments raise a somewhat semantic question: is a digital representation of an asset (for example, a network function) or a system (for example, a core network) really a digital twin, or just an organic development of the AI models that telcos have used for some time? Referring to this as the “digital twin/model” continuum, the graphic above shows the characteristics of an archetypical twin compared with those of a typical model.

The most important takeaway from this graphic is the set of factors on the right-hand side that make a digital twin potentially much more complex and resource-hungry than a model. How important it is to distinguish an archetypical twin from a hybrid digital twin/model may come down to “marketing creep”, where deployments tend to get described as digital twins whether or not they exhibit many of the features of the archetypical twin. This creep will be exacerbated by telcos’ needs, which are not primarily focused on emulating physical assets such as engines or robots, but on monitoring complex processes (for example, networks) whose individual assets (for example, network functions, physical equipment) may not need as much detailed monitoring as individual components in an aircraft engine. As a result, the telecoms industry could deploy digital twin/models far more extensively than full digital twins.

Table of contents

  • Executive Summary
    • Choosing where to start
    • Complexity: The biggest short-term barrier
    • Building an early-days digital twin portfolio
  • Introduction
    • Definition of a digital twin
    • What is the purpose of a digital twin?
    • A digital twin taxonomy
  • Planning a digital twin deployment
    • Network testing
    • Radio and network planning
    • Cell site management
    • KPIs for network management
    • Fraud prediction
    • Product catalogue
    • Digital twins within partner ecosystems
    • Digital twins of services
    • Data for customer digital twins
    • Customer experience messaging
    • Vertical-specific digital twins
  • Drivers and barriers to uptake of digital twins
    • Drivers
    • Barriers
  • Conclusion: Creating a digital twin strategy
    • Immediate strategy for day 1 deployment
    • Long-term strategy

Scaling private cellular and edge: How to avoid POC and pilot purgatory

Evaluating the opportunities with private cellular and edge

The majority of enterprises today are still at the early stages of understanding the potential benefits of private cellular networking and edge computing in delivering enhanced business outcomes, but the interest is evident. Within private cellular, for example, we have seen significant traction and uptake globally during 2020 and 2021, partially driven by increased availability of, and routes to, spectrum thanks to localised spectrum licensing models across different markets (see this report). This has resulted in several trials and engagements with large companies such as Bosch, Ford, Rio Tinto, Heathrow Airport and more.

However, despite the rising interest, enterprises often encounter challenges with a lack of internal stakeholder alignment or the inability to find the right stakeholder to be accountable for and own the deployment. Furthermore, many enterprises feel they lack the expertise to deploy and manage private networking and/or edge solutions. In some cases, enterprises have also cited a lack of maturity in the device and solution ecosystem, for example a lack of supported (or industry-grade) devices with embedded 5G/LTE/CBRS capability, or significant inertia in the installed base around other connectivity solutions (e.g. Wi-Fi). Therefore, despite the value and business outcomes that private cellular and edge compute can unlock for enterprises, the opportunity is rarely clear-cut.

Our research is based on findings and analysis from a global interview programme with 20 enterprises in sectors that are ahead in exploring private cellular and edge computing, primarily in the industrial verticals, as well as telecoms operators and solutions providers within the private cellular and edge computing ecosystem.

Telcos see private cellular and edge as two peas in a pod…

Telecoms operators see private cellular and edge computing as part of a larger revenue opportunity beyond fixed and public cellular. It is an opportunity for telcos to move from being seen as horizontal players providing increasingly commoditised connectivity services, to more vertical players that address value-adding industry-specific use cases. Private cellular and edge compute can be seen as components of a wider innovative and holistic end-to-end solution for enterprises, and part of the telcos’ ambition to become strategic partners or trusted advisors to customers.

We define a private cellular network as a dedicated local on-premises network, designed to cover a geographically-constrained area or site such as a production plant, a warehouse or a mine. It uses dedicated spectrum, which can be owned by the enterprise or leased from a telco operator or third party, and has dedicated operating functions that can run on the enterprise’s own dedicated or shared edge compute infrastructure. Private cellular networking is expected to play a key role in future wireless technology for enterprise on-premises connectivity. Private cellular networks can be configured specifically to an individual enterprise’s requirements to meet certain needs around reliability, throughput, latency etc. to enable vertical-specific use cases in a combined way that other alternatives have struggled to before. Although there are early instances of private networks going back to 2G GSM-R in the railway sector, for the purpose of this report, we focus on private cellular networks that leverage 4G LTE (Long Term Evolution) or 5G mobile technology.

Figure 1: Private cellular combines the benefits of fixed and wireless in a tailored way

Source: STL Partners

Edge compute is about bringing the compute, storage and processing capabilities and power of cloud closer to the end-user or end-device (i.e. the source of data) by locating workloads on distributed physical infrastructure. It combines the key benefits of local compute, such as low latency, data localisation and reduced backhaul costs, with the benefits of cloud compute, namely scalability, flexibility, and cloud native operating models.

Figure 2: Edge computing combines local and cloud compute benefits to end-users

Source: STL Partners

Within the telecoms industry, private cellular and edge computing are often considered two closely interlinked technologies that come hand-in-hand. Our previous report, Navigating the private cellular maze: when, where and how, explored the different private cellular capabilities that enterprises are looking to leverage, and our findings showed that security, reliability and control were cited as the most important benefits of private cellular. In many ways, edge compute also addresses these needs. Both are means of delivering ultra-low latency, security, reliability and high-throughput real time analytics, but in different ways.

…but this is not necessarily the case with enterprises

Although the telecoms industry often views edge computing and private cellular in the same vein, this is not always the case from the enterprise perspective. Not only do the majority of enterprises approach edge computing and private cellular as separate technologies addressing separate needs, but many are also still at the early stages of understanding what they are.

There are also often differing interpretations of, and confusion over, terminology when it comes to private cellular and edge compute. For example, in our interviews, a few enterprises described traditional on-premises compute with local dedicated compute facilities within an operating site (e.g. a server room) as a flavour of edge compute. We argue that the key difference between traditional on-premises compute and on-premises edge compute is that with the latter, the applications and underlying infrastructure are both more cloud-like. Applications that leverage edge compute use cloud-like technologies and processes (such as continuous integration and continuous delivery, or CI/CD for short), and the edge infrastructure uses containers or virtual machines and can be remotely managed (rather than being monolithic).

The same applies when it comes to private cellular networking, where the term ‘private network’ is used differently by certain individuals to refer to virtual private networks (VPNs) as opposed to the dedicated local on-premises network we have defined above. In addition, when it comes to private 5G, there is also confusion as to the difference between better in-building coverage of public 5G (i.e. the macro network) versus a private 5G network, for a manufacturing plant for example. This will only be further complicated by the upswing of network slicing, which can sometimes (incorrectly) be marketed as a private network.

Furthermore, for enterprises that are more familiar with the concepts, many are still looking to better understand the business value and outcomes that private LTE/5G and edge compute can bring, and what they can enable for their businesses.

 

Table of Contents

  • Executive Summary
  • Introduction
    • Evaluating the opportunities with private cellular and edge
    • Telcos see private cellular and edge as two peas in a pod…
    • …but this is not necessarily the case with enterprises
    • Most private cellular or edge trials or PoCs have yet to scale
  • Edge and private cellular as different tracks
    • Enterprises that understand private cellular don’t always understand edge (and vice versa)
    • Edge and private cellular are pursued as distinct initiatives
  • Breaking free from PoC purgatory
    • Lack of stakeholder alignment
    • Ecosystem inertia
    • Unable to build the business case
  • Addressing different deployment pathways
    • Tactical solutions versus strategic transformations
    • Find trigger points as key opportunities for scaling
    • Readiness of solutions: Speed and ease of deployment
  • Recommendations for enterprises
  • Recommendations for telco operators
  • Recommendations for others
    • Application providers, device manufacturers and OEMs
    • Regulators

Stakeholder model: Turn growth killers into growth makers

Introduction: The stakeholder model

Telecoms operators’ attempts to build new sources of revenue have been a core focus of STL Partners’ research activities over the years. We’ve looked at many telecoms case studies, adjacent market examples, new business models and technologies and other routes to explore how operators might succeed. We believe the STL stakeholder model usefully and holistically describes telcos’ main stakeholder groups and the ideal relationships that telcos need to establish with each group to achieve valuable growth. It should be used in conjunction with other elements of STL’s portfolio which examine strategies needed within specific markets and industries (e.g., healthcare) and telcos’ operational areas (e.g., telco cloud, edge, leadership and culture).

This report outlines the stakeholder model at a high level, identifying seven groups and three factors within each group that summarise the ideal relationship. These stakeholder and influencer groups include:

  1. Management
  2. People
  3. Customer propositions
  4. Partner and technology ecosystems
  5. Investors
  6. Government and regulators
  7. Society

1. Management

Growth may not always start at the top of an organisation, but to be successful, top management must champion growth, have the capabilities to lead it, and align and protect the resources needed to foster it. This is true in any organisation but especially so in those where there is a strong established business already in place, such as telecoms. The critical balance to be maintained is that the existing business must continue to succeed while the new growth businesses are given the space, time, skills and support they need to grow. It sounds straightforward, but there are many challenges and pitfalls to making it work in practice.

For example, a minor wobble in the performance of a multi-billion-dollar business can easily eclipse the total value of a new business, so it is often tempting to switch resources back to the existing business and starve the fledgling growth. Equally, perceptions of how current businesses need to be run can wrongly influence what should happen in the new ones. Unsuitable choices of existing channels to market, familiar but ill-fitting technologies, or other business model prejudices are classic bias-led errors (see Telco innovation: Why it’s broken and how to fix it).

To be successful, we believe that management needs to exhibit three broad behaviours and capabilities.

  1. Stable and committed long-term vision for growth, aligned with the Coordination Age.
  2. Suitable knowledge, experience and openness.
  3. Effective two-way engagement with stakeholders. (N.B. We cover the board and most senior management in this group. Other management is covered in the People stakeholder group.)

Management: Key management enablers of growth

Source: STL Partners

Stable and committed long-term vision for growth

The companies that STL has seen making more successful growth plays typically exhibit a long-term commitment to growth and, importantly, to learning too.

Two examples we have studied closely are TELUS and Elisa. In both cases, the CEO has had a long tenure, and the company has demonstrated a clear and well-managed commitment to growth.

In TELUS’s case, the primary area of growth targeted has been healthcare, and the company now generates somewhere close to 10% of its revenue from these new areas (it does not publish a figure). It has been working in healthcare for over 10 years, and Darren Entwistle, its CEO, has championed this cause with all stakeholders throughout.

In Elisa’s case, innovation has been developed in a number of areas: for example, coupling all-you-can-use data plans with a flat sales/capex ratio; a new network automation business selling to other telcos; and an industrial IoT automation business.

Again, CEO Veli-Matti Mattila has had a long tenure, and has championed the principle that Elisa’s competitive advantage lies in its ability to learn and to leverage its existing IP.

…aligned with the Coordination Age

STL argues that future growth for telcos will come from addressing the needs of the Coordination Age, a shift that is being accelerated by both the COVID-19 pandemic and the growing recognition of climate change.

Why COVID-19 and Climate change are accelerating the Coordination Age

Source: STL Partners

The Coordination Age is based on the insight that most stakeholder needs are driven by a global need to make better use of resources, whether in distribution (delivery of resources when and where needed), efficiency (return on resources, e.g. productivity), or sustainability (conservation and protection of resources, e.g. climate change).

This need will be served through multi-party business models, which use new technologies (e.g. better connectivity, AI, and automation) to deliver outcomes to their customers and business ecosystems.

We argue that both TELUS and Elisa are early innovators and pathfinders within these trends.

Suitable knowledge, experience and openness

Having the right experience, character and composition in the leadership team is an area of constant development by companies and experts of many types.

The dynamics of the leadership team matter too. There needs to be leadership and direction setting, but the team must be able to properly challenge itself and particularly its leader’s strongest opinions in a healthy way. There will of course be times when a CEO of any business unit needs to take the helm, but if the CEO or one of the C-team is overly attached to an idea or course of action and will not hear or truly consider alternatives this can be extremely risky.

AT&T / Time Warner – a salutary tale?

AT&T’s much discussed venture into entertainment with its acquisitions of DirecTV and Time Warner is an interesting case in point here. One of the conclusions of our recent analysis of this multi-billion-dollar acquisition plan was that AT&T’s management appeared to take a very telco-centric view throughout. It saw the media businesses primarily as a way to add value to its telecoms business, rather than as valuable business assets that needed to be nurtured in their own right.

Despite media executives leaving and expert commentary suggesting, for example, that it should not neglect the development of a wider distribution strategy for its content powerhouse, AT&T ploughed on with an approach that limited the value of its new assets. Given the high stakes, and the personalised accounts from the CEOs of the companies at the time of how the deal arose, it is hard to escape the conclusion that there was significant bias in the management team. We were struck by the observation that it seemed like “AT&T knew best”.

To be clear, there can be little doubt that AT&T is a formidable telecoms operator. Many of its strategies and approaches are world leading, for example in change management and Telco Cloud, as we also highlight in this report.

However, at the time those deals were done AT&T’s board did not hold significant entertainment expertise, and whoever else they spoke with from that industry did not manage to carry them to a more balanced position. So it appears to us that a key contributing factor to the significant loss of momentum and market value that the media deals ultimately inflicted on AT&T was that they did not engineer the dynamics or character in their board to properly challenge and validate their strategy.

It is to the board’s credit that they have now recognised this and made plans for a change. Yet it is also notable that AT&T has not given any visible signal that it made a systemic error of judgement. Perhaps the huge amounts involved and highly litigious nature of the US market are behind this, and behind closed doors there is major change afoot. Yet the conveyed image is still that “AT&T knows best”. Hopefully, this external confidence is now balanced with more internal questioning and openness to external thoughts.

What capabilities should a management team possess?

In terms of telcos wishing to drive and nurture growth, STL believes there are criteria that are likely to signal that a company has a better chance of success. For example:

  • Insight into the realistic and differentiating capabilities of new and relevant markets, fields, applications and technologies is a valuable asset. The useful insight may exist in the form of experience (e.g. tenure in a relevant adjacent industry such as healthcare, delivery of automation initiatives, or working in relevant geographies), qualification (e.g. education in a relevant specialism such as AI), or longer-term insight (which may be indicated by engagement with research and development or academic activities).

[The full range of management capabilities can be viewed in the report…]

 

2. People…

 

Table of Contents

  • Executive Summary
  • Introduction
  • Management
    • Stable and committed long-term vision for growth
    • …aligned with the Coordination Age
    • Suitable knowledge, experience and openness
    • Two-way engagement with stakeholders
  • People
    • Does the company have a suitable culture to enable growth?
    • Does the company have enough of the new skills and abilities needed?
    • Is the company’s general management collaborative, close to customers, and diverse?
  • Customer propositions
    • Nature of the current customer relationship
    • How far beyond telecoms the company has ventured
    • Investment in new sectors and needs
  • Partner and technology ecosystems
    • Successful adoption of disruptive technologies and business models
    • More resilient economics of scale in the core business
    • Technology and partners as an enabler of change
  • Investors
    • The stability of the investor base
    • Has the investor base been happy?
    • Current and forecast returns
  • Government and regulators
    • The tone of the government and regulatory environment
    • Current status of the regulatory situation
    • The company’s approach to government and regulatory relationships
  • Society
    • Brand presence, engagement and image
    • Company alignment with societal priorities
    • Media portrayal

Revisiting convergence: How to address the growth imperative

Introduction

Significant opportunity, high risk of complacency

The opportunity for communications service providers (CSPs) to provide greater value and innovative services to customers through new technology advancements is well documented. For example, the network capabilities (and programmability) that 5G and cloud native bring are touted to change the way that CSPs address revenue opportunities with customers and partners in a more ecosystem-centric environment. The emergence of FTTx (fibre to the x) technology allows operators to optimise the use of their assets in a way that delivers seamless connectivity to customers. These advancements allow CSPs to serve customer needs in a more flexible, scalable, sustainable and agile way than ever before.

Part of the imperative to address this opportunity and vision stems from significant market disruption with new entrants and new types of ‘co-opetitors’, such as the hyperscale cloud providers and greenfield operators, that challenge operators’ existing business and operating models. As a result, CSPs face growing pressure to respond much faster to market and customer demands and enhance their capabilities in a way that does not inflate their cost base or undermine their net-zero goals.

Although CSPs have identified these green pastures for growth, there is still a considerable disconnect between the vision (and what is required to fulfil the ambition) and what capabilities CSPs have today to meet it. Today, CSPs are grappling with too much complexity, fragmentation and duplication within their networks, capabilities and systems. This not only means costs are too high, but it also poses a significant barrier to how they can accelerate the beat rate of innovation and serve new revenue-generating opportunities. This is a gap that CSPs need to close urgently or be at risk of their market shares and value eroding as a result of competition.

The imperative that CSPs can no longer ignore

There is therefore a renewed urgency around building a stronger cost base, scalability, agility and innovation, which could soon become a matter of survival. CSPs are evaluating different strategies and means of making better (and smarter) use of their assets and capabilities in a more agile way, and of providing the services that customers and partners are increasingly demanding. One such strategy that CSPs have long pursued is network convergence. Although the concept is not new, and has been consistently explored and sporadically pursued by operators over the years, interest has now been reignited to address this imperative. The balance of forces between convergence and divergence has also shifted in favour of convergence in recent years, driven by the adoption of cloud native technologies, which enable operators to deliver new innovative services on top of a common platform (versus siloed islands) and to pursue greater sustainability and efficiency in the network. This has brought convergence back up to the top of operators’ agendas.

Our report therefore looks to address the following questions:

  • Why and how are CSPs converging their networks to fulfil their growth ambitions?
  • What are the key challenges they face and how can they overcome them?

Evaluating the key drivers for convergence

Cost savings are a priority, but CSPs also want top line growth

The key drivers that CSPs are focused on as part of this renewed pursuit of network convergence are both internal and external. Although most operators see capital investment savings and reduction of total cost of ownership (TCO) as an essential priority, the majority of interviewees we spoke to also emphasised the need to support greater innovation with customers and ecosystem development. We describe the main drivers we found through our research with operators below:

Four key drivers that CSPs are focused on

Source: STL Partners

Reducing TCO through network simplification and consolidation

Many operators we spoke to cited network simplification and convergence as addressing the need to ‘do more with less’ and as a means to drive economies of scale and serve market requirements. Convergence can address disparate sub-systems and siloes that don’t interact with one another (e.g. performance management and inventory management, IP and optical). This fragmentation creates unnecessary complexity for network operations teams in running, managing and assuring their networks, and introduces potential human errors and associated costs. CSPs have an opportunity to move towards a common infrastructure and management toolset to serve multiple needs, reduce overall TCO and achieve better control and ubiquitous visibility across their networks. This is particularly important for larger and/or multi-service, multi-country operators. The decommissioning of legacy services (in some cases with government support, for example with PSTN services) is a key opportunity for this.

One European operator described the importance of being able to serve fixed (residential), mobile (consumer), enterprise and wholesale customers with a single backbone and transport network. Inherent in this is greater efficiency, ease of management and less capital spend required to serve multiple types of customers. For example, our interviewee cited the economies of scale they have achieved by putting all of their traffic onto a single IP network that supports all types of customers. This brings greater efficiency and simplicity in not having to run separate IP networks for each customer group, lower spend on IP routers and lower overall TCO as part of the consolidation.

Creating a sustainable platform for scale and massive data growth

New use cases are projected to increase network traffic and demands. Operators need to prepare for this volume expansion, support more types of fibre connections, provide more flexible capacity and address high performance demands (throughput, latency, error rates). Another European group operator described scale as the main driver for convergence, in being able to seamlessly support thousands of points within the network and offer their portfolio of services across their operations as one package to customers in a simpler way.

Operators need to consider how they can maximise the use of their infrastructure to serve increasingly demanding needs. For example, there is a significant need for CSPs to extract greater synergies from their access fibre: two operators we spoke to – one in North America, the other in Asia – are using fibre originally deployed for residential broadband (Gigabit Passive Optical Network, or GPON) to connect 5G cells. Operators are also joining national governments and high-profile corporations in making ‘net-zero’ commitments, which is leading them to actively identify and implement strategies that will dramatically reduce their own environmental footprint and play a more active role in reducing their customers’ carbon emissions.

Enabling greater control, resilience and automation

Implicit in these developments is a greater need for automation within the network, to ensure not only the most cost-efficient optimisation of network speeds and processing power, but also the ability to navigate greater network intricacy. One European operator we spoke to described the need to enable greater automation across the entire lifecycle, introduce CI/CD pipelines for more agile service development and provide much more granular information and visibility across the entire network. By simplifying and converging the network, operators can address some of the inherent complexity and disparate siloes in their networks and create a unified view of their network. This provides better visibility across the entire network for network operations teams and makes the task of assuring their networks easier. A more unified or common management layer also enables a more granular view and creates scope for AI/ML to deliver further gains in operational simplification and automation. In addition to the benefits for service assurance and lifecycle management, CSPs are also looking to better identify priority areas for improvement and develop more granular cost-benefit analysis for future investment planning.

Supporting greater innovation and ecosystem development

As the industry moves to more ecosystem-centric, B2B2X models, operators need to be more versatile in supporting diverse types of services with different types of customers. As more and more devices become connected throughout the Coordination Age, the network will need to become more responsive to different use case needs. The underlying network infrastructure needs to facilitate the faster development of richer network functionality and the plethora of emerging use cases, in order to support greater innovation. This means the network (and network teams) needs to handle fast-changing functions, more agile service development and frequent software updates.

With a resurging interest in more network-enabled applications, from telematics and connected car to different types of location-based services or immersive experiences (AR/VR) that can respond to network performance data, the network needs to become more visible, distributed, programmable and instructible. Operators can leverage and expose these network capabilities to both internal and external parties, including customers and partners such as application developers, to serve new types of revenue opportunities and ecosystem partners. The expansion of 5G risks adding complexity to the network, not least through the increase in access infrastructure, including thousands of locations supporting distributed virtualised workloads (both cloud native network functions and other applications). This makes convergence and the simplification of the management layer even more imperative. The ability to dynamically manipulate network functions is just one of many programmable capabilities the network will require, but doing this while keeping the network and associated services secure is no simple task.

Table of contents

  • Executive Summary
  • Preface
  • Introduction
    • Significant opportunity, high risk of complacency
    • The imperative that CSPs can no longer ignore
  • Evaluating the key drivers for convergence
    • Cost savings are a priority, but CSPs also want top line growth
  • Revisiting the concept of convergence
    • Convergence is a multifaceted problem and solution
    • CSPs take different approaches to tackle similar problems
    • Logical convergence
    • Horizontal convergence
    • Vertical convergence
    • The whole is greater than the sum of its parts
  • A matter of how? not why?
    • History and market variance play a role
    • Understanding the key challenges
  • Taking the plunge
    • Convergence is not just a technology decision
    • Incremental steps, not radical change

Commerce and connectivity: A match made in heaven?

Rakuten and Reliance: The exceptions or the rule?

Over the past decade, STL Partners has analysed how connectivity, commerce and content have become increasingly interdependent – as both shopping and entertainment go digital, telecoms networks have become key distribution channels for all kinds of consumer businesses. Equally, the growing availability of digital commerce and content are driving demand for connectivity both inside and outside the home.

To date, the top tier of consumer Internet players – Google, Apple, Amazon, Alibaba, Tencent and Facebook – have tended to focus on trying to dominate commerce and content, largely leaving the provision of connectivity to the conventional telecoms sector. But now some major players in the commerce market, such as Rakuten in Japan and Reliance in India, are pushing into connectivity, as well as content.

This report considers whether Rakuten’s and Reliance’s efforts to combine content, commerce and connectivity into a single package are a harbinger of things to come or the exceptions that prove the longstanding rule that telecoms is a distinct activity with few synergies with adjacent sectors. The provision of connectivity has generally been regarded as a horizontal enabler for other forms of economic activity, rather than part of a vertically-integrated service stack.

This report also explores the extent to which new technologies, such as cloud-native networks and open radio access networks, and an increase in licence-exempt spectrum, are making it easier for companies in adjacent sectors to provide connectivity. Two chapters cover Google and Amazon’s connectivity strategies respectively, analysing the moves they have made to date and what they may do in future. The final section of this report draws some conclusions and then considers the implications for telcos.

This report builds on earlier STL Partners research, including:

Mixing commerce and connectivity

Over the past decade, the smartphone has become an everyday shopping tool for billions of people, particularly in Asia. As a result, the smartphone display has become an important piece of real estate for the global players competing for supremacy in the digital commerce market. That real estate can be accessed via a number of avenues – through the handset’s operating system, a web browser, mobile app stores or through the connectivity layer itself.

As Google and Apple exercise a high degree of control over smartphone operating systems, popular web browsers and mobile app stores, other big digital commerce players, such as Amazon, Facebook and Walmart, risk being marginalised. One way to avoid that fate may be to play a bigger role in the provision of wireless connectivity as Reliance Industries is doing in India and Rakuten is doing in Japan.

For telcos, this is potentially a worrisome prospect. By rolling out its own greenfield mobile network, the e-commerce and financial services platform Rakuten has brought disruption and low prices to Japan’s mobile connectivity market, putting pressure on the incumbent operators. There is a clear danger that digital commerce platforms use the provision of mobile connectivity as a loss leader to drive traffic to their other services.

Table of Contents

  • Executive Summary
  • Introduction
  • Mixing connectivity and commerce
    • Why Rakuten became a mobile network operator
    • Will Rakuten succeed in connectivity?
    • Why hasn’t Rakuten Mobile broken through?
    • Borrowing from the Amazon playbook
    • How will the hyperscalers react?
  • New technologies, new opportunities
    • Capacity expansion
    • Unlicensed and shared spectrum
    • Cloud-native networks and Open RAN attract new suppliers
    • Reprogrammable SIM cards
  • Google: Knee deep in connectivity waters
    • Google Fiber and Fi maintain a holding pattern
    • Google ramps up and ramps down public Wi-Fi
    • Google moves closer to (some) telcos
    • Google Cloud targets telcos
    • Big commitment to submarine/long distance infrastructure
    • Key takeaways: Vertical optimisation not integration
  • Amazon: A toe in the water
    • Amazon Sidewalk
    • Amazon and CBRS
    • Amazon’s long distance infrastructure
    • Takeaways: Control over connectivity has its attractions
  • Conclusions and implications for telcos in digital commerce/content
  • Index

ngena SD-WAN: scaling innovation through partnership

Introducing ngena

This report focusses on ngena, a multi-operator alliance founded in 2016, which offers multi-national networking services aimed at enterprise customers. ngena is interesting to STL Partners for several reasons:

First, it represents a real, commercialised example of operators working together, across borders and boundaries, to a common goal – a key part of our Coordination Age vision.

Second, ngena’s SDN product is an example of a new service which was designed around a strong, customer-centric proposition, with a strong emphasis on partnership and shared vision – an alternative articulation, if you like, of Elisa’s cultural strategy.

Third, it was born out of Deutsche Telekom, the world’s sixth-largest telecoms group by revenue, which operates in more than fifty countries. This makes it a great case study of an established operator innovating new enterprise services.

And lastly, it is a unique example of a telco and technology company (in this case Cisco) coming together in a mutually beneficial creative partnership, rather than settling into traditional buyer-supplier roles.

Over the coming pages, we will explore ngena’s proposition to customers, how it has achieved what it has to date, and to what extent it has made a measurable impact on the companies that make up the alliance. The report explains STL Partners’ independent view, informed by conversations with Marcus Hacke, Founder and Managing Director, as well as others across the industry.

Shifting enterprise needs

Enterprises throughout the world are rapidly digitising their operations, and in large part, that involves the move to a ‘multicloud’ environment, where applications and data are hosted in a complex ecosystem of private data centres, campus sites, public clouds, and so on.

Digital enterprises need to ensure that data and applications are accessible from any location, at any time, from any device, and any network, reliably and without headaches. A large enterprise such as a retail bank might have physical branches located all over the place – and the same data needs to be accessible from any branch.

Traditionally, this sort of connectivity was achieved over the wide area network (WAN), with enterprises investing in private networks (often virtual private networks) to ensure that data remained secure and reliably accessible. Traditional WAN architectures work well – but they are not known for flexibility of the sort required to support a multicloud set-up. The network topology is often static, requiring manual intervention to deploy and change, and in our fast-changing world, this becomes a bottleneck. Enterprises are still faced with several challenges:

Key enterprise networking challenges

Source: STL Partners, SD-WAN mini series

The rise of SD-WAN: 2014 to present

This is where, somewhere around 2014, software-defined WAN (SD-WAN) came on the scene. SD-WAN improves on traditional WAN by applying the principles of software-defined networking (SDN). Networking hardware is managed with a software-based controller that can be hosted in the cloud, which opens up a realm of possibilities for automation, smart traffic routing, optimisation, and so on – which makes managing a multicloud set-up a whole lot easier.
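
To illustrate the SDN principle behind SD-WAN – a central, software-based controller holding the policy intent and pushing configuration out to branch devices, rather than per-site manual configuration – the sketch below uses hypothetical Python classes (SdWanController, BranchEdge, Policy). It is a simplified illustration of the design pattern only, not the API of any real SD-WAN product.

```python
# Hypothetical sketch of the SDN principle behind SD-WAN: a cloud-hosted
# controller holds policy intent and pushes it to branch edge devices,
# replacing per-site manual configuration. Not a real product API.
from dataclasses import dataclass


@dataclass
class Policy:
    app: str          # e.g. "voip"
    path: str         # e.g. "mpls" or "broadband"
    priority: int


class BranchEdge:
    """Stand-in for an SD-WAN edge appliance at a branch site."""
    def __init__(self, site: str):
        self.site = site
        self.policies = []

    def apply(self, policy: Policy) -> None:
        self.policies.append(policy)
        print(f"[{self.site}] steer {policy.app} traffic over {policy.path}")


class SdWanController:
    """Software-based controller: one place to define and distribute intent."""
    def __init__(self):
        self.edges = []

    def register(self, edge: BranchEdge) -> None:
        self.edges.append(edge)

    def push_policy(self, policy: Policy) -> None:
        for edge in self.edges:   # automation: every site updated in one step
            edge.apply(policy)


if __name__ == "__main__":
    controller = SdWanController()
    for site in ("london-branch", "paris-branch", "madrid-branch"):
        controller.register(BranchEdge(site))
    controller.push_policy(Policy(app="voip", path="mpls", priority=1))
```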

As a result, enterprises have adopted SD-WAN at a phenomenal pace, and over the past five years telecoms operators and other service providers worldwide have rushed to add it to their managed services portfolio, to the extent that it has become a mainstream enterprise service:

Live deployments of SD-WAN platforms by telcos, 2014-20 (global)

Source: STL Partners NFV Deployment Tracker
Includes only production deployments; excludes proof of concepts and pilots
Includes four planned/pending deployments expected to complete in 2020

The explosion of deployments between 2016 and 2019 had many contributing factors. It was around this time that vendor offerings in the space became mature enough for the long tail of service providers to adopt them more or less off the shelf. But the technology had also begun to be seen as a “no-brainer” upgrade on existing enterprise connectivity solutions, and was therefore in heavy demand. Many telcos used it as a natural upsell to their broader suite of enterprise connectivity solutions.

The challenge of building a connectivity platform

While SD-WAN has gained significant traction, it is not a straightforward addition to an operator’s enterprise service portfolio – nor is it a golden ticket in and of itself.

First, it is no longer enough to offer SD-WAN alone. The trend – based on demand – is for it to be offered alongside a portfolio of other SDN-based cloud connectivity services, over an automated platform that enables customers to pick and choose predefined services, and quickly deploy and adapt networks without the effort and time needed for bespoke customer deployments. The need this addresses is obvious, but the barrier to entry in building such a platform is a big challenge for many operators – particularly mid-size and smaller telcos.

Second, there is the economic challenge of scaling a platform while remaining profitable. Platform-based services require continuous updating and innovation, and it is questionable whether many telecoms operators have the financial strength to sustain this – a situation that applies to nearly all IT cloud platforms.

Last – and by no means least – is the challenge of scaling across geographies. In a single-country scenario, where most operators (at least in developed markets) will already have the fixed network infrastructure in place to cover all of a potential customer’s branch locations, SD-WAN works well. It is difficult, from a service provider’s perspective, to manage network domains and services across the whole enterprise (#6 above) if that enterprise has locations outside the geographic bounds of the service provider’s own network infrastructure. There are ways around this – including routing traffic over the public Internet and other operators’ networks – but from a customer point of view this is less than ideal, as it adds complexity and limits flexibility in the solution they are paying for.

There is a need, then, for a connectivity platform “with a passport”: that can cross borders between operators, networks and markets without issue. ngena, or the Next Generation Enterprise Network Alliance, aims to address this need.

Table of Contents

  • Executive summary
    • What is ngena?
    • Why does ngena matter?
    • Has ngena been successful?
    • What does ngena teach us about successful telco innovation?
    • What does this mean for other telcos?
    • What next?
  • Introduction
  • Context: Enterprise needs and SD-WAN
    • Shifting enterprise needs
    • The rise of SD-WAN: 2014 to present
    • The challenge of building a connectivity platform
  • ngena: Enterprise connectivity with a passport
    • A man with a vision
    • The ngena proposition
  • How successful has ngena been?
    • Growth in alliance membership
    • Growth in ngena itself
    • Making money for the partners
  • What does ngena teach us about successful innovation culture in telecoms?
    • Context: the need to disrupt and adapt in telecoms
    • Lessons from ngena
  • What does this mean for other telcos?
      • Consider how you support innovation
      • Consider how you partner for mutual benefit
      • What next?

Reliance Unlimit: How to build a successful IoT ecosystem

Reliance Unlimit’s success so far

Unlimit, Reliance Jio’s standalone IoT business in India, established in 2016, understood from the start that the problem with the IoT wasn’t the availability of technology, but how to quickly pull it all together into clear, affordable solutions for the end customer. The result is that, less than four years later, it has deployed more than 35,000 end-to-end IoT projects for a prestigious portfolio of customers, including Nissan Motor, MG Motor, Bata, DHL, GSK and Unilever. To meet their varying and evolving needs, Unlimit had built an IoT ecosystem of almost 600 partner companies by the end of 2019. Of these, nearly 100 are fully certified partners, with which Unlimit co-innovates solutions tailored to the Indian market.

The state of the IoT: Balancing cost and complexity

In 1968, Theodore Paraskevakos, a Greek-American inventor and businessman, explored the idea of making two machines communicate with each other. He first developed a system for transmitting the caller’s number to the receiver’s device. Building on this experiment, in 1977 he founded Metretek Inc., a company that conducted commercial automatic meter reading – essentially today’s commercial smart metering. From then on, the world of machine-to-machine (M2M) communications developed rapidly. The objective was mainly to remotely monitor devices in order to understand their condition and performance. The M2M world was strongly telecommunications-oriented and focused on solving specific business problems. Given this narrow focus, there was little diversity in devices, data sets were specific to one or two measurements, and the communications protocols were well known. In this context, it is fair to describe first-generation M2M solutions as siloed, with little – if any – interaction with other data and solutions.

The benefits and challenges of the IoT

The purpose of the Internet of Things (IoT) is to open those silos and incorporate solution designers and developers into the operating environment. In this evolved environment, there might be several applications and solutions, each delivering a unique operational benefit. Each of those solutions requires different devices, which produce different data. Those devices require lifecycle management, the data needs to be analysed to inform better decisions, and automation needs to be integrated to improve efficiency in the operational environment. The communication methods between those devices can also vary significantly, depending on the environment, where the data resides, and the type of applications and intelligence required. Finally, all of this needs to run securely.

Therefore, the IoT has opened the silos, but it has brought complexity. The question is then whether this complexity is worth it for the operational benefits.

There are several studies highlighting the advantages of IoT solutions. The recent Microsoft IoT Signals publication, which surveys over 3000 decision makers in companies operating across different sectors, clearly demonstrates the value that IoT is bringing to organisations. The top three benefits are:

  • 91% of respondents claim that the IoT has increased efficiency
  • 91% of respondents claim that the IoT has increased yield
  • 85% of respondents claim that the IoT has increased quality.

The sectors leading IoT adoption

The same study highlights how these benefits are materialising in different business sectors. According to this study – and many others – manufacturing is seen as a top adopter of IoT solutions, as also highlighted in STL Partners research on the Industrial IoT.

Automotive, supply chain and logistics are other sectors that have widely adopted the IoT. Their leadership comes from a long M2M heritage, since telematics was a core application of M2M, and is an important part of the supply chain and logistics process.

The automotive sector’s early adoption of IoT was also driven by regulatory initiatives in different parts of the world, for instance to support remotely monitored emergency services in case of accidents (e.g. EU eCall). To enable this, M2M SIMs were embedded in cars, and only activated in the case of an accident, sending a message to an emergency centre. From there, the automotive industry and mobile network operators gradually developed a broader range of applications, culminating in the concept of connected cars. The connected car is much more sophisticated than a single emergency SIM – it is an IoT environment in which an array of sensors is gathering different data, sharing that data externally in various forms of V2X settings, supporting in-vehicle infotainment, and also enabling semiautonomous mobility. Sometime in the future, this will mature into fully autonomous mobility.

The complexity of an IoT solution

The connected car clearly represents the evolution from siloed M2M solutions to the IoT with multiple interdependent data sources and solutions. Achieving this has required the integration of various technologies into an IoT architecture, as well as the move towards automation and prediction of events, which requires embedding advanced analytics and AI technology frameworks into the IoT stack.

High level view of an IoT architecture

Source: Saverio Romeo, STL Partners

There are five levels in an IoT architecture:

  1. The hardware level includes devices, sensors, gateways and hardware development components such as microcontrollers.
  2. The communication level includes the different types of IoT connectivity (cellular, LP-WAN, fixed, satellite, short-range wireless and others) and the communication protocols used in those forms of connectivity.
  3. The middleware software backend level is a set of software layers traditionally called an IoT platform. A high-level breakdown of the IoT platform includes a connectivity management layer, a device management layer, and data management and orchestration, data analytics and visualisation layers.
  4. The application level includes application development enablement tools and the applications themselves. Those tools enable the development of applications using machine-generated data and various other sources of data – all integrated by the IoT platform. It also includes applications that use the results of these analytics to enable remote and automated actions on IoT devices.
  5. Vertically across these levels, there is a security layer. Although this is simplified into a single vertical layer, in practice there are separate security features integrated into IoT solutions at each layer of the architecture. Those features work together to offer layer-to-layer and end-to-end security. This is a complex process that requires detailed use of a security-by-design methodology.
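
As a rough illustration of how the five levels fit together, the hypothetical Python sketch below passes a single sensor reading through stand-ins for the hardware, communication, middleware and application levels, with security noted but omitted for brevity. The names (TemperatureSensor, MqttLink, IotPlatform, threshold_alert) are invented for this example and do not correspond to any specific IoT platform.

```python
# Hypothetical sketch of the five IoT architecture levels as a simple pipeline.
# Names are invented for illustration; a real stack would use dedicated
# hardware, protocol and platform components at each level.
import json
import random


class TemperatureSensor:                 # 1. Hardware level (device/sensor)
    def read(self) -> dict:
        return {"device_id": "sensor-01", "temp_c": round(random.uniform(20, 90), 1)}


class MqttLink:                          # 2. Communication level (stand-in for a protocol)
    def publish(self, payload: dict) -> str:
        return json.dumps(payload)       # serialise the reading as it would be transmitted


class IotPlatform:                       # 3. Middleware / IoT platform level
    def __init__(self):
        self.devices = {}                # device management

    def ingest(self, message: str) -> dict:      # connectivity and data management
        data = json.loads(message)
        self.devices[data["device_id"]] = data
        return data


def threshold_alert(data: dict):         # 4. Application level (analytics leading to action)
    if data["temp_c"] > 80:
        return f"ALERT: {data['device_id']} at {data['temp_c']}C"
    return None


# 5. Security spans every level (authentication, encryption, secure boot, etc.)
#    and is omitted here for brevity.
if __name__ == "__main__":
    reading = TemperatureSensor().read()
    stored = IotPlatform().ingest(MqttLink().publish(reading))
    print(threshold_alert(stored) or "OK")
```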

The IoT architecture is therefore composed of different technological parts that need to be integrated in order to work correctly in the different circumstances of potential deployment. The architecture also needs to enable scalability, supporting the expansion of a solution in terms of the number of devices and the volume and types of data. Each architectural layer is essential for the IoT solution to work, and the layers must interact with each other harmoniously, yet each requires different technological expertise and skills.

An organisation that wants to offer end-to-end IoT solutions must therefore make a strategic choice: develop the IoT architecture in-house, or form strategic partnerships with existing IoT technology platform providers and integrate their solutions into a coherent architecture that supports an IoT ecosystem.

In the following sections of this report, we discuss Unlimit’s decision to take an ecosystem approach to building its IoT business, and the steps it took to get where it is today.

Table of contents

  • Executive Summary
    • Four lessons from Unlimit on building IoT ecosystems
    • How Unlimit built a successful IoT ecosystem
    • What next?
  • The state of the IoT: Balancing cost and complexity
    • The benefits and challenges of the IoT
    • The sectors leading IoT adoption
    • The complexity of an IoT solution
    • The nature of business ecosystems
  • How Unlimit built a successful IoT business
    • So far, Unlimit looks like a success
    • How will Unlimit sustain leadership and growth?
  • Lessons from Unlimit’s experience


Telco ecosystems: How to make them work

The ecosystem business framework

The success of large businesses such as Microsoft, Amazon and Google, as well as digital disrupters like Airbnb and Uber, is widely attributed to their adoption of platform-enabled ecosystem business frameworks. These companies know how to make ecosystems work: their ecosystem approach helped them to scale quickly, innovate and unlock value in opportunity areas where vertically integrated businesses, or those with a linear value chain, would have struggled. Internet-enabled digital opportunity areas tend to be unsuited to traditional business frameworks, which depend on having the time and the ability to anticipate needs, then plan and execute accordingly.

As businesses in the telecommunications sector and beyond try to emulate the success of these companies and their ecosystem approach, it is necessary to clarify what is meant by the term “ecosystem” and how it can provide a framework for organising business.

The word “ecosystem” is borrowed from biology. It refers to a community of organisms – of any number of species – living within a defined physical environment.

A biological ecosystem

The components of a biological ecosystem

Source: STL Partners

A business ecosystem can therefore be thought of as a community of stakeholders (of different types) that exist within a defined business environment. The environment of a business ecosystem can be small or large.  This is also true in biology, where both a tree and a rainforest can equally be considered ecosystem environments.

The number of organisms within a biological community is dynamic. They coexist with others and are interdependent within the community and the environment. Environmental resources (i.e. energy and matter) flow through the system efficiently. This is how the ecosystem works.

Companies that adopt an ecosystem business framework identify a community of stakeholders to help them address an opportunity area, or drive business in that space. They then create a business environment (e.g. platforms, rules) to organise economic activity among those communities.  The environment integrates community activities in a complementary way. This model is consistent with STL Partners’ vision for a Coordination Age, where desired outcomes are delivered to customers by multiple parties acting together.


Characteristics of business ecosystems that work

Google, for example, adopted an ecosystem approach to tackle the search opportunity. Its search engine platform provides the environment for an external stakeholder community of businesses to reach consumers as they navigate the internet, based on what those consumers are looking for.

  • Google does not directly participate in the business-consumer transaction, but its platform reduces friction for participants (providing a good customer experience) and captures information on the exchange.

While Google leverages a technical platform, this is not a requirement for an ecosystem framework. Nespresso built an ecosystem around its patented coffee pod. It needed to establish a user-base for the pods, so it developed a business environment that included licensing arrangements for coffee machine manufacturers.  In addition, it provided support for high-end homeware retailers to supply these machines to end-users. It also created the online Nespresso Club for coffee aficionados to maintain demand for its product (a previous vertically integrated strategy to address this premium coffee-drinking niche had failed).

Ecosystem relevance for telcos

Telcos are exploring new opportunities for revenue. In many of these opportunities, the needs of the customer are evolving or changeable, budgets are tight, and time-to-market is critical. Planning and executing traditional business frameworks can be difficult under these circumstances, so ecosystem business frameworks are understandably of interest.

Traditional business frameworks require companies to match their internal strengths and capabilities to those required to address an opportunity. An ecosystem framework requires companies to consider where those strengths and capabilities are (i.e. external stakeholder communities). An ecosystem orchestrator then creates an environment in which the stakeholders contribute their respective value to meet that end. Additional end-user value may also be derived by supporting stakeholder communities whose products and services use, or are used with, the end-product or service of the ecosystem (e.g. the availability of third-party App Store apps adds value for end customers and drives demand for high-end Apple iPhones). It requires “outside-in” strategic thinking that goes beyond the bounds of the company – or even the industry (i.e. who has the assets and capabilities, who/what will support demand from end-users).

Many companies have rushed to implement ecosystem business frameworks, but have not attained the success of Microsoft, Amazon or Google, or in the telco arena, M-Pesa. Telcos require an understanding of the rationale behind ecosystem business frameworks, what makes them work and how this has played out in other telco ecosystem implementations. As a result, they should be better able to determine whether to leverage this approach more widely.

Table of Contents

  • Executive Summary
  • The ecosystem business framework
  • Why ecosystem business frameworks?
    • Benefits of ecosystem business frameworks
  • Identifying ecosystem business frameworks
  • Telco experience with ecosystem frameworks
    • AT&T Community
    • Deutsche Telekom Qivicon
    • Telecom Infra Project (TIP)
    • GSMA Mobile Connect
    • Android
    • Lessons from telco experience
  • Criteria for successful ecosystem businesses
    • “Destination” status
    • Strong assets and capabilities to share
    • Dynamic strategy
    • Deep end-user knowledge
    • Participant stakeholder experience excellence
    • Continuous innovation
    • Conclusions
  • Next steps
    • Index


Telco edge computing: What is the operator strategy?


Edge computing can help telcos to move up the value chain

The edge computing market and the technologies enabling it are rapidly developing and attracting new players, providing new opportunities to enterprises and service providers. Telco operators are eyeing the market and looking to leverage the technology to move up the value chain and generate more revenue from their networks and services. Edge computing also represents an opportunity for telcos to extend their role beyond offering connectivity services and move into the platform and the application space.

However, operators will be faced with tough competition from other market players such as cloud providers, who are moving rapidly to define and own the biggest share of the edge market. Plus, industrial solution providers, such as Bosch and Siemens, are similarly investing in their own edge services. Telcos are also dealing with technical and business challenges as they venture into this new market, trying to position themselves and define their strategies accordingly.

Telcos that fail to develop a strategic approach to the edge could risk losing their share of the growing market as non-telco first movers continue to develop the technology and dictate the market dynamics. This report looks into what telcos should consider regarding their edge strategies and what roles they can play in the market.

Following this introduction, we focus on:

  1. Edge terminology and structure, explaining common terms used within the edge computing context, where the edge resides, and the role of edge computing in 5G.
  2. An overview of the edge computing market, describing different types of stakeholders, current telecoms operators’ deployments and plans, competition from hyperscale cloud providers and the current investment and consolidation trends.
  3. Telcos’ challenges in addressing the edge opportunity: the technical, organisational and commercial challenges they face in this market.
  4. Potential use cases and business models for operators, also exploring possible scenarios of how the market is going to develop and operators’ likely positioning.
  5. A set of recommendations for operators that are building their strategy for the edge.


What is edge computing and where exactly is the edge?

Edge computing brings cloud services and capabilities including computing, storage and networking physically closer to the end-user by locating them on more widely distributed compute infrastructure, typically at smaller sites.

One could argue that edge computing has existed for some time – local infrastructure has been used for compute and storage, be it end-devices, gateways or on-premises data centres. However, edge computing, or edge cloud, refers to bringing the flexibility and openness of cloud-native infrastructure to that local infrastructure.

In contrast to hyperscale cloud computing, where all the data is sent to central locations to be processed and stored, edge computing processes data locally, aiming to reduce the time and bandwidth needed to send and receive data between applications and the cloud, which improves the performance of both the network and the applications. This does not mean that edge computing is an alternative to cloud computing. Rather, it is an evolutionary step that complements the current cloud computing infrastructure and offers more flexibility in executing and delivering applications.

Edge computing offers mobile operators several opportunities such as:

  • Differentiating service offerings using edge capabilities
  • Providing new applications and solutions using edge capabilities
  • Enabling customers and partners to leverage the distributed computing network in application development
  • Improving network performance and achieving efficiencies / cost savings

As edge computing technologies and definitions are still evolving, different terms are sometimes used interchangeably or have been associated with a certain type of stakeholder. For example, mobile edge computing is often used within the mobile network context and has evolved into multi-access edge computing (MEC) – adopted by the European Telecommunications Standards Institute (ETSI) – to include fixed and converged network edge computing scenarios. Fog computing is also often compared to edge computing; the former includes running intelligence on the end-device and is more IoT focused.

These are some of the key terms that need to be defined when discussing edge computing:

  • Network edge refers to edge compute locations that are at sites or points of presence (PoPs) owned by a telecoms operator, for example at a central office in the mobile network or at an ISP’s node.
  • Telco edge cloud is mainly defined as distributed compute managed by a telco. This includes running workloads on customer premises equipment (CPE) at customers’ sites as well as locations within the operator network such as base stations, central offices and other aggregation points on the access and/or core network. Caching and processing data closer to the customer allows both operators and their customers to benefit from reduced backhaul traffic and costs.
  • On-premise edge computing refers to the computing resources residing on the customer side, e.g. in an on-site gateway or an on-premises data centre. As a result, customers retain their sensitive data on-premise and enjoy the other flexibility and elasticity benefits brought by edge computing.
  • Edge cloud is used to describe the virtualised infrastructure available at the edge. It creates a distributed version of the cloud with some flexibility and scalability at the edge, giving it the capacity to handle sudden surges in workloads from unplanned activities, unlike static on-premise servers. Figure 1 shows the differences between these terms.

Figure 1: Edge computing types

definition of edge computing

Source: STL Partners

Network infrastructure and how the edge relates to 5G

Discussions of edge computing strategies and the edge market are often linked to 5G. Both technologies have overlapping goals of improving performance and throughput and reducing latency for applications such as AR/VR, autonomous vehicles and IoT. 5G improves speed by increasing spectral efficiency, offering the potential of much higher speeds than 4G. Edge computing, on the other hand, reduces latency by shortening the time required for data processing, allocating resources closer to the application. When combined, edge and 5G can help to achieve round-trip latency below 10 milliseconds.
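
To illustrate why proximity matters for that sub-10 millisecond target, the simple sketch below adds up a hypothetical round-trip latency budget for a nearby edge node versus a distant cloud region. The radio, distance and processing figures are illustrative assumptions, not measured values.

```python
# Illustrative round-trip latency budget: edge node vs. distant cloud region.
# All figures are assumptions used only to show how the budget adds up.

FIBRE_SPEED_KM_PER_MS = 200  # light travels roughly 200 km per millisecond in optical fibre

def round_trip_latency_ms(radio_ms: float, distance_km: float, processing_ms: float) -> float:
    """Radio access latency + fibre propagation (both directions) + server processing."""
    propagation_ms = 2 * distance_km / FIBRE_SPEED_KM_PER_MS
    return radio_ms + propagation_ms + processing_ms

# Assume ~4 ms of 5G radio latency and 2 ms of server-side processing.
edge_node = round_trip_latency_ms(radio_ms=4, distance_km=50, processing_ms=2)       # ~6.5 ms
cloud_region = round_trip_latency_ms(radio_ms=4, distance_km=1500, processing_ms=2)  # ~21.0 ms

print(f"Edge node (~50 km away):   {edge_node:.1f} ms")
print(f"Cloud region (~1,500 km):  {cloud_region:.1f} ms")
```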

While 5G deployment is yet to accelerate and reach ubiquitous coverage, the edge can be utilised in some places to reduce latency where needed. There are two reasons why the edge will be part of 5G:

  • First, edge computing has been included in the 5G standards (3GPP Release 15) to enable the ultra-low latency that improvements in the radio interface alone will not achieve.
  • Second, operators are in general taking a slow and gradual approach to 5G deployment, which means that 5G coverage alone will not provide a big incentive for developers to drive the application market. Edge can be used to fill the network gaps and stimulate application market growth.

The network edge can be used for applications that need coverage (i.e. accessible anywhere) and can be moved across different edge locations to scale capacity up or down as required. Where an operator decides to establish an edge node depends on:

  • Application latency needs. Some applications, such as streaming virtual reality or mission-critical applications, will require locations close enough to their users to enable sub-50 millisecond latency.
  • Current network topology. Based on the operators’ network topology, there will be selected locations that can meet the edge latency requirements for the specific application under consideration in terms of the number of hops and the part of the network it resides in.
  • Virtualisation roadmap. The operator needs to consider its virtualisation roadmap and where data centre facilities are planned to be built to support the future network.
  • Site and maintenance costs. The cloud computing economies of scale may diminish as the number of sites proliferates at the edge; for example, there is a significant difference between maintaining one or two large data centres and maintaining hundreds across a country.
  • Site availability. Some operators’ edge compute deployment plans assume the nodes reside in the same facilities as those which host their NFV infrastructure. However, many telcos are still in the process of renovating these locations to turn them into (mini) data centres, so the sites are not yet ready.
  • Site ownership. Sometimes the preferred edge location is within sites that the operators have limited control over, whether that is in the customer premise or within the network. For example, in the US, the cell towers are owned by tower operators such as Crown Castle, American Tower and SBA Communications.

The potential locations for edge nodes can be mapped across the mobile network in four levels as shown in Figure 2.

Figure 2: Possible locations for edge computing

edge computing locations

Source: STL Partners

Table of Contents

  • Executive Summary
    • Recommendations for telco operators at the edge
    • Four key use cases for operators
    • Edge computing players are tackling market fragmentation with strategic partnerships
    • What next?
  • Table of Figures
  • Introduction
  • Definitions of edge computing terms and key components
    • What is edge computing and where exactly is the edge?
    • Network infrastructure and how the edge relates to 5G
  • Market overview and opportunities
    • The value chain and the types of stakeholders
    • Hyperscale cloud provider activities at the edge
    • Telco initiatives, pilots and plans
    • Investment and merger and acquisition trends in edge computing
  • Use cases and business models for telcos
    • Telco edge computing use cases
    • Vertical opportunities
    • Roles and business models for telcos
  • Telcos’ challenges at the edge
  • Scenarios for network edge infrastructure development
  • Recommendation
  • Index


A new role for telcos in smart cities

This report considers how telecommunications operators could play a deeper role in smart city projects, arguing that the multi-stakeholder and multidisciplinary nature of smart city strategies requires a high level of coordination. Some operators may be able to play that coordination role, which would bring them closer to citizens, who are, in turn, also their customers. This new position could enable new business models for operators.

With the aim of identifying how telecoms operators can evolve and deepen their reach into the smart cities vertical, this report explores the various forms of smart city governance used or that could be used in the development of smart city strategies, and the potential value for telcos in participating in each of them.


The smart city lifecycle

The evolution of smart city strategies

The concept of the smart city and smart community goes back to 1997, when the California Institute for Smart Communities developed a “Smart Communities Guidebook” in which a smart community was defined as follows:

“A smart community is simply that: a community in which government, business, and residents understand the potential of information technology, and make a conscious decision to use that technology to transform life and work in their region in significant and positive ways.”

Since then, the definition of the smart city has moved between an approach focused mainly on the use of technology and a more collaborative approach spanning different disciplines, which tries to make the concept less technology-centric. The latter has drawn renewed attention to the smart city concept. On the technology side, the advent of the Internet of Things (IoT) has provided the technological tools to implement the California Institute for Smart Communities’ definition. On the socio-economic side, continuous demographic pressure on cities and their increasing economic importance have pushed city administrations to rethink the purpose of the city and the services provided to citizens, businesses and other city stakeholders. The combination of the possibilities offered by technology and the growing socio-economic importance of cities has brought the smart city to the top of the political agenda and challenged the business community to explore how to turn smart cities into a business opportunity.

Putting aside the socio-economic and political aspects of smart cities, the IoT has become an important technological framework for smart city development. The IoT transforms spaces into connected and intelligent ones: data is gathered, exchanged and analysed, and actions are taken based on that analysis. However, the data gathered within smart cities is spread across many different systems. The key role of the IoT is therefore to provide the technological fabric for the smooth functioning of a smart city’s “system of systems”, to the benefit of both citizens and businesses.

In practice, many smart city projects evolve organically, from the bottom up, rather than from a top-down, technology-driven model. Several cities have started experimenting with the application of IoT in their services, initially focusing on a specific application: hence the many smart parking projects, intelligent lighting projects, smart public safety solutions and so on. But that is only the first step. As with any IoT solution, once the user appreciates the value of the project outcome – the data gathered and the value of its analysis – they want to explore more. In this way, smart parking projects have expanded into environmental monitoring and/or public safety solutions, gradually morphing into more complex projects.

Introducing the smart city strategy lifecycle

The evolution of smart city projects requires an overall smart city strategy, and that strategy needs to be managed. A smart city strategy does not have an end point; rather, it evolves continuously based on achievements, issues and new city needs. It is therefore important to view smart city strategies through a lifecycle lens, broken into five key phases.

Figure 1: Smart city strategy lifecycle

Smart city lifecycle: assessment > design > launch > implementation > monitoring

Source: STL Partners

  • Smart city assessment: This phase looks at the needs of the city, as well as its level of digital maturity. The digital maturity can be addressed in a variety of ways through the monitoring framework (discussed in more detail later in the report). This phase needs to be very inclusive of all the city stakeholders: businesses, academia, public organisations and citizens’ groups. The output of the smart city assessment is then used in the strategy design phase.
  • Strategy design: A smart city strategy document should contain overall objectives, projects to implement, and resources to use. The strategy document should also include a monitoring framework.
  • Strategy launch: Following agreement on a smart city strategy, some cities run an external consultation with city stakeholders as a form of wider evaluation. The launch phase’s main goal is to make the city aware of the strategy and the roadmap for implementation. The inclusiveness of the city as a whole in the process is a key success factor.
  • Strategy implementation: The length of this phase really depends on the decisions in the roadmap. The roadmap could include both short-term and long-term projects.
  • Smart city monitoring: In this phase the monitoring framework established in the strategy design phase is put into operation. That framework should assess the evolution of the smart city strategy implementation. The output of the smart city monitoring can enable another cycle, starting with a fresh assessment. The repetition of the cycle can also be established in the smart city strategy.

Those participating in smart city monitoring, assessment and strategy design phases tend to be long-term, ongoing partners of municipalities, while the implementation phase includes many more partners on a project basis. For telcos seeking to play a broader role in smart cities, the goal is therefore to be more involved in the monitoring, assessment and strategy phases.

Table of contents

  • Executive Summary
  • Introduction
    • Research methodology
  • The smart city lifecycle
    • The evolution of smart city strategies
    • Introducing the smart city strategy lifecycle
    • Smart city monitoring framework: What smart cities are trying to achieve
  • Smart city governance models: How cities are working towards their goals
    • Defining smart city governance
    • Mapping smart city governance models
    • Smart governance case studies
  • The smart city coordination opportunity for telcos
    • Telcos’ current participation in smart city governance
    • How telcos can develop a coordination role in smart cities
  • Conclusions and recommendations


New age, new control points?

Why control points matter

This executive briefing explores the evolution of control points – products, services or roles that give a company disproportionate power within a particular digital value chain. Historically, such control points have included Microsoft’s Windows operating system and Intel’s processor architecture for personal computers (PCs), Google’s search engine and Apple’s iPhone. In each case, these control points have been a reliable source of revenues and a springboard into other lucrative new markets, such as productivity software (Microsoft), server chips (Intel), display advertising (Google) and app retailing (Apple).

Although technical and regulatory constraints mean that most telcos are unlikely to be able to build out their own control points, there are exceptions, such as the central role of Safaricom’s M-Pesa service in Kenya’s digital economy. In any case, a thorough understanding of where new control points are emerging will help telcos identify what their customers most value in the digital ecosystem. Moreover, if they move early enough to encourage competition and/or appropriate regulatory intervention, telcos could prevent themselves, their partners and their customers from becoming too dependent on particular companies.

The emergence of Microsoft’s operating system as the dominant platform in the PC market left many of its “partners” struggling to eke out a profit from the sale of computer hardware. Looking forward, there is a similar risk that a company that creates a dominant artificial intelligence platform could leave other players in various digital value chains, including telcos, at its beck and call.

This report explores how control points are evolving beyond simple components, such as a piece of software or a microprocessor, to become elaborate vertically-integrated stacks of hardware, software and services that work towards a specific goal, such as developing the best self-driving car on the planet or the most accurate image recognition system in the cloud. It then outlines what telcos and their partners can do to help maintain a balance of power in the Coordination Age, where, crucially, no one really wants to be at the mercy of a “master coordinator”.

The report focuses primarily on the consumer market, but the arguments it makes are also applicable in the enterprise space, where machine learning is being applied to optimise specialist solutions, such as production lines, industrial processes and drug development. In each case, there is a danger that a single company will build an unassailable position in a specific niche, ultimately eliminating the competition on which effective capitalism depends.


Control points evolve and shift

A control point can be defined as a product, service or solution on which every other player in a value chain is heavily dependent. Their reliance on this component means the other players in the value chain generally have to accept the terms and conditions imposed by the entity that owns the control point. A good contemporary example is Apple’s App Store – owners of Apple’s devices depend on the App Store to get access to software they need/want, while app developers depend on the App Store to distribute their software to the 1.4 billion Apple devices in active use. This pivotal position allows Apple to levy a controversial commission of 30% on software and digital content sold through the App Store.

But few control points last forever: the App Store will only continue to be a control point if consumers continue to download a wide range of apps, rather than interacting with online services through a web browser or another software platform, such as a messaging app. Recent history shows that as technology evolves, control points can be sidestepped or marginalised. For example, Microsoft’s Windows operating system and Internet Explorer browser were once regarded as key control points in the personal computing ecosystem, but neither piece of software is still at the heart of most consumers’ online experience.

Similarly, the gateway role of Apple’s App Store looks set to be eroded over time. Towards the end of 2018, Netflix, the App Store’s top-grossing app, stopped allowing new customers to sign up and subscribe to the streaming service within the Netflix app for iOS across all global markets, according to a report by TechCrunch. The move was designed to cut out an expensive intermediary: Apple. Citing data compiled by Sensor Tower, the report said Netflix would have paid Apple US$256 million of the US$853 million grossed by the Netflix iOS app in 2018, assuming a 30% commission for Apple (although, after the first year, Apple’s cut on subscription renewals falls to 15%).

TechCrunch noted that Netflix is following in the footsteps of Amazon, which has historically restricted movie and TV rentals and purchases to its own website or other “compatible” apps, instead of allowing them to take place through its Prime Video app for iOS or Android. In so doing, Amazon is preventing Apple or Google from taking a slice of its content revenues. Amazon takes the same approach with Kindle e-books, which also aren’t offered in the Kindle mobile app. Spotify has also discontinued the option to pay for its Premium service using Apple’s in-app payment system.

Skating ahead of the puck

As control points evolve and shift, some of today’s Internet giants, notably Alphabet, Amazon and Facebook, are skating where the puck is heading, acquiring the new players that might disrupt their existing control points. In fact, the willingness of today’s Internet platforms to spend big money on small companies suggests they are much more alert to this dynamic than their predecessors were. Facebook’s US$19 billion acquisition of messaging app WhatsApp, which has generated very little in the way of revenues, is perhaps the best example of the perceived value of strategic control points – consumers’ time and attention appear to be gradually shifting from traditional social networks to messaging apps, such as WhatsApp, and hybrid services, such as Instagram, which Facebook also acquired.

In fact, the financial and regulatory leeway Alphabet, Amazon, Facebook and Apple enjoy (granted by long-sighted investors) almost constitutes another control point. Whereas deals by telcos and media companies tend to come under much tougher scrutiny and be restricted by rigorous financial modelling, the Internet giants are generally trusted to buy whoever they like.

The decision by Alphabet, the owner of Google, to establish its “Other Bets” division is another example of how today’s tech giants have learnt from the complacency of their predecessors. Whereas Microsoft failed to anticipate the rise of tablets and smart TVs, weakening its grip on the consumer computing market, Google has zealously explored the potential of new computing platforms, such as connected glasses, self-driving cars and smart speakers.

In essence, the current generation of tech leaders have taken Intel founder Andy Grove’s famous “only the paranoid survive” mantra to heart. Having swept away the old order, they realise their companies could also easily be side-lined by new players with new ways of doing things. Underlining this point, Larry Page, co-founder of Google, wrote in 2014: “Many companies get comfortable doing what they have always done, making only incremental changes. This incrementalism leads to irrelevance over time, especially in technology, where change tends to be revolutionary, not evolutionary. People thought we were crazy when we acquired YouTube and Android and when we launched Chrome, but those efforts have matured into major platforms for digital video and mobile devices and a safer, popular browser.”

Table of contents

  • Executive Summary
  • Introduction
  • What constitutes a control point?
    • Control points evolve and shift
    • New kinds of control points
  • The big data dividend
    • Can incumbents’ big data advantage be overcome?
    • Data has drawbacks – dangers of distraction
    • How does machine learning change the data game?
  • The power of network effects
    • The importance of the ecosystem
    • Cloud computing capacity and capabilities
    • Digital identity and digital payments
  • The value of vertical integration
    • The machine learning super cycle
    • The machine learning cycle in action – image recognition
  • Tesla’s journey towards self-driving vehicles
    • Custom-made computing architecture
    • Training the self-driving software
    • But does Tesla have a sustainable advantage?
  • Regulatory checks and balances
  • Conclusions and recommendations


Telco 2.0: Choose your future – while you still can

Introduction

Time to update Telco 2.0

Telcos are facing difficult choices about whether and how to invest in new technologies, how to cut costs, and how to create new services, either to pair with their core network services or to broaden their customer bases beyond connectivity users.

Through the Telco 2.0 vision (our shorthand for ‘what a future telco should look like’), STL Partners has long argued that telcos need to make fundamental changes to their business models in response to the commoditisation of connectivity and the ‘softwarisation’ of all industries, including telecoms. At the very least this means digitalising operations to become more data-centric and efficient in the way they deliver connectivity. But to generate significant new revenue growth, we still believe telcos need to look beyond connectivity and develop (or acquire) new product and service offerings.

The original Telco 2.0 two-sided business model

original telco 2.0

Source: STL Partners

Since 2011, a handful of telcos have made significant investments into areas beyond connectivity that fall into these categories. For example:

  • NTT Docomo has continued to expand its ‘dmarket’ consumer loyalty scheme, media and sports content and payment services, which accounted for nearly 20% of total revenues for FY2017.
  • Singtel acquired digital advertising provider Amobee in 2012, followed by several more acquisitions in the same area to build an end-to-end digital marketing platform. Its digital services accounted for more than 10% of quarterly revenues by December 2017, making this its fourth-largest revenue segment, ahead of voice revenues.
  • TELUS first acquired a health IT company in 2008, and has since expanded its reach and range of services to become Canada’s largest provider of health IT solutions, such as a nation-wide e-prescription system. Based on a case study we did on TELUS, we estimate its health solutions accounted for at least 7% of total revenues by 2017.



However, these telcos are the exception rather than the rule. Over the last decade, most telcos have failed to build a significant revenue stream beyond their core services.

While many telcos remain cautious or even sceptical about their ability to generate significant revenue from non-connectivity based products and services, “digitalising” operations has become a widespread approach to sustain margins as revenue growth has slowed.

In Figure 3 we illustrate these as the two ‘digital dimensions’ along which telcos can drive change: most telcos are prioritising an infrastructure play, few are putting significant resources into product innovation, and only a small number have the ability to do both.

  • Digitalising telecoms operations: Reduction of capex and opex by reducing complexity and automating processes, and improving customer experience
  • Developing new services: This falls into two categories on the right-hand side of Figure 3
    • Product innovation: New services that are independent from the network, in which case digitalising telecoms operations is only moderately important
    • Platform (& product): New services that are strongly integrated with the network and therefore require the network to be opened up and digitalised

Few telcos are putting real resources into product & platform innovation

2 digital dimensions

Source: STL Partners

Four developments driving our Telco 2.0 update

  • AI and automation technology is ready to deploy at scale. AI is no longer an over-hyped ideal – machine and deep learning techniques are proven to deliver faster and more accurate decision-making for repetitive and data-intensive tasks, regardless of the type of data (numerical, audio, images, etc.). This has the potential to transform all areas of operators’ businesses.
  • We live and work in a world of ecosystems. Few services are completely self-sufficient and independent from everything else, but rather enable, complement and/or augment other services. Telcos must accept that they are not immune to this trend, just because connectivity is one of the key enablers of content, cloud and IoT ecosystems (see Figure 4).
  • Software-defined networks and 5G are coming. This is happening at a different pace in different markets, but over the next five to ten years these technologies will drastically change the ‘thing’ that telcos operate: the ‘network’ will become another cloud service, with many operational functions instantiated in near real-time in hardware at the network edge, never even reaching a centralised cloud. Telcos therefore need to become more proficient in software and computing, and should think of themselves as cloud service providers that operate in partnership with many other players to deliver a complete service to end-users.
  • As other industries go through their own digital transformations, the connectivity and IT needs of enterprises have become much more complex and industry specific. This means the one-size-fits-all approach does not apply for operators or for their enterprise customers in any sector.

Telcos and connectivity are not a central pillar, but an enabler in a much richer ecosystem

telco myth vs reality

Source: STL Partners

We are updating the Telco 2.0 Vision in light of these realities. Previously, we proposed six opportunity areas for new revenue growth, and expected large, proactive telcos to be able to address many of them. But telcos have been slow to change, margins are tighter now, implementing NFV/SDN is hard, and software skills are necessary for success in any vertical. Telcos can no longer hope to do it all and must make choices about where to place their bets. As NTT Docomo, Singtel and TELUS show, it also takes time to succeed, so telcos need to choose and commit to a strategy now for long-term success.

Contents:

  • Executive Summary
  • Introduction
  • Time to update Telco 2.0
  • Four developments driving our Telco 2.0 update
  • Analysing the current market state
  • Options for the future
  • If connectivity won’t drive growth, do telcos’ network strategies matter?
  • Imagining the future telecoms stack
  • Conclusions

Figures:

  • Figure 1: The telco stack
  • Figure 2: The original Telco 2.0 two-sided business model
  • Figure 3: Few telcos are putting real resources into product & platform innovation
  • Figure 4: Telcos and connectivity are not a central pillar, but an enabler in a much richer ecosystem
  • Figure 5: The network cloud platform within the telco stack
  • Figure 6: Steps to becoming a cloud platform
  • Figure 7: Horizontal specialisation within the telco stack
  • Figure 8: Vertical specialisation within the telco stack
  • Figure 9: Enterprise verticals
  • Figure 10: Consumer services and applications
  • Figure 11: Network technology company versus lean network operator
  • Figure 12: Example of a fixed telco stack
  • Figure 13: Example of a telco IoT stack
  • Figure 14: Example of a lean network operator stack



Blockchain for telcos: Where is the money?


Introduction

Looking at existing players in the industry, there are two business approaches to blockchain:

  • Blockchain to make money
  • Blockchain to save money or do something new

In this report, we look at how these business models apply to telcos seeking to participate in blockchain ecosystems for digital identity and IoT.


Contents:

  • Overview of existing blockchain business models
  • Telco monetisation models in:
    • Digital identity
    • IoT
  • Conclusion & recommendations

 

BBVA: Traditional retail bank embraces digital disruption

Introduction

Why are we doing non-telco case studies?

Digital transformation is a phenomenon that is affecting every sector. Many industries have been through a transformation process far more severe than we have seen in telecoms, while others began the process much earlier. We believe that there are valuable lessons telcos can learn from these sectors, so we have decided to find and examine the most interesting and useful case studies.

Traditional banking is being disrupted by fintech. This disruption has not happened overnight, but its speed has accelerated in recent years as consumers and enterprises have become more confident using digital tools to manage their finances. Although the fintech market is currently highly fragmented, with fintech companies typically focussing on one or two specific financial products, this can still have an enormous impact on the traditional banking value chain, which relies on a diversified portfolio to create profit. In addition, there is the threat that a digital native company, such as Amazon or Google, will enter the mainstream banking market through a series of acquisitions.

BBVA’s chairman, Francisco Gonzalez, foresaw this threat early on, and has worked tirelessly to restructure the bank to be competitive in the era of digital banking. This transformation has involved significant changes in leadership, technology, business processes, and the bank’s portfolio. Like telcos, traditional banks are large organisations with legacy technology and processes, and turning the ship around is challenging. Therefore, there are many ways that BBVA’s experience can inform telcos’ own digital transformation strategies.



General outline of STL Partners’ case study transformation index

We intend to complete more case studies in the future from other industry verticals, with the goal of creating a ‘case study transformation index’, illustrating how selected companies have overcome the challenge of digital disruption. In these case studies we are examining five key areas of transformation, identifying which have been the most challenging, which have generated the most innovative solutions, and which can be considered successes or failures. These five areas are:

  • Market
  • Proposition
  • Value Network
  • Technology
  • Finances

We anticipate that some of these five sections will overlap, and some will be more pertinent to certain case studies than others. But central to the case studies will be analysis of how the transformation process is relevant to the telco industry and the lessons that can be learned to help operators on the path to change.

How digital disruption is threatening banking

Retail banks rely on a two-sided business model

Retail banks make money by using deposits in current or savings accounts made by one group of customers (depositors) to finance loans to other customers (borrowers). The borrower not only pays the bank back its loan, but also interest on top – in effect, paying the bank for the service of providing the loan. The bank pays the depositor a lower rate of interest on savings, and makes money on the spread between the two rates.

Retaining depositors is a vital part of retail banks’ business model

Source: STL Partners

While this is highly simplified, it is the fundamental business model of all traditional retail banks, whose main source of income comes from managing a diversified portfolio of financial products across savings and loans. Banks also make money by applying charges when customers use credit or debit cards, and by charging customers fees such as ATM fees, overdraft fees, late payment fees and penalty fees.
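
As a purely illustrative sketch of this spread model (the figures below are assumptions, not data from BBVA or any other bank), the following calculation shows how net interest income arises from the difference between lending and deposit rates:

```python
# Hypothetical net interest income from the deposit/lending spread.
# All figures are illustrative assumptions.

deposits = 1_000_000_000   # funds held for depositors (EUR)
deposit_rate = 0.01        # 1% interest paid to depositors
loans = 800_000_000        # portion of deposits lent out (EUR)
lending_rate = 0.05        # 5% interest charged to borrowers

interest_earned = loans * lending_rate     # EUR 40,000,000
interest_paid = deposits * deposit_rate    # EUR 10,000,000
net_interest_income = interest_earned - interest_paid

print(f"Net interest income: EUR {net_interest_income:,.0f}")  # EUR 30,000,000
```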

Societal changes have driven digital banking adoption

Digital disruption in banking has taken much longer than in other industries, for example, publishing and media, despite attempts from banks themselves to persuade more customers to use online services. For traditional banks, moving customers to digital channels for most of their banking needs could significantly cut the cost of maintaining and staffing a large network of physical branches. However, when online banking services were first launched in the 1980s and 90s, consumer concerns about security and a lack of confidence in managing accounts themselves online meant that adoption was slow.

Since then the market has changed: For example, in 2000, 80% of banks in the U.S. were offering internet banking services. The launch of the iPhone seven years later caused a paradigm shift, triggering a wave of enormous development and widespread adoption of digital services accessible online and via smartphone apps. Ten years on, consumers are much more confident using digital financial services, and, although younger consumers are leading adoption, older generations are also increasingly using these services.


Contents:

  • Executive Summary
  • Six lessons telcos can learn from BBVA
  • BBVA in STL Partners’ transformation index
  • Introduction
  • Why are we doing non-telco case studies?
  • General outline of STL Partners’ case study transformation index
  • How digital disruption is threatening banking 
  • Retail banks rely on a two-sided business model
  • Societal changes have driven digital banking adoption
  • Challenger banks and fintechs are changing the game
  • BBVA’s story
  • Phase one: Investing in technology to catalyse change
  • Phase two: Organisational change
  • Conclusions
  • BBVA in STL Partners’ transformation index
  • Appendix

Figures:

  • Figure 1: BBVA is rated as “Green” (good) in the STL Partners’ Transformation Index
  • Figure 2: Retaining depositors is a vital part of retail banks’ business model
  • Figure 3: The digital banking generation gap is closing
  • Figure 4: The sharing economy has taken off
  • Figure 5: BBVA’s global presence
  • Figure 6: Telcos need to virtualise their core to deliver cloud business models
  • Figure 7: Digital experience needs to be distributed across the organisation for transformation to succeed
  • Figure 8: BBVA’s leadership team is structured to accelerate digital transformation
  • Figure 9: Traditional banks need to adopt agile processes to compete with digital-native competitors
  • Figure 10: Ecosystem markets need new business models
  • Figure 11: BBVA’s co-opetition strategy involves acquisitions, investments and open APIs
  • Figure 12: BBVA’s shares are performing well
  • Figure 13: More smart and mobile device owners in Turkey use their devices for digital banking services than any other country surveyed
  • Figure 14: Turkey leads the way in four out of seven digital banking services
  • Figure 15: Turkish respondents are the most open to automated digital banking services
  • Figure 16: Less than 60% of Turkish adults had a bank account in 2014
  • Figure 17: Turkey is an attractive emerging market for investment
  • Figure 18: BBVA is rated as “Green” (good) in the STL Partners’ Transformation Index



IoT and blockchain: There’s substance behind the hype

Introduction

There is currently a lot of market speculation about blockchain and its possible use-cases, including how it can be used in the IoT ecosystem.

This short report identifies three different reasons why blockchain is an attractive technology to use in IoT solutions, and how blockchain can help operators move up the IoT value chain by enabling new business models.

This report leverages research from the following recent STL publications:



The IoT ecosystem is evolving rapidly, and we are moving towards a hyper-connected and automated future…

Blockchain IoT

Source: STL Partners

This future vision won’t be possible unless IoT devices from different networks can share data securely. There are three things that make blockchain an attractive technology to help overcome this challenge and enable IoT ecosystems:

  1. It creates tamper-proof audit trails (a minimal hash-chain sketch follows this list)
  2. It enables a distributed operating model
  3. It is open-source
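
To illustrate the first of these properties, the sketch below shows, in a highly simplified and purely illustrative form, how chaining hashes makes a record of IoT readings tamper-evident. It is an assumption-laden toy example, not how any particular blockchain platform is implemented: there is no consensus, networking or distribution here, only the hash-chain idea.

```python
import hashlib
import json

# Minimal, illustrative hash chain: each record commits to the previous one,
# so altering any earlier reading breaks every later hash check.

def add_record(chain: list, reading: dict) -> None:
    previous_hash = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps({"reading": reading, "previous_hash": previous_hash}, sort_keys=True)
    chain.append({"reading": reading, "previous_hash": previous_hash,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})

def is_valid(chain: list) -> bool:
    for i, record in enumerate(chain):
        expected_previous = chain[i - 1]["hash"] if i else "0" * 64
        body = json.dumps({"reading": record["reading"],
                           "previous_hash": record["previous_hash"]}, sort_keys=True)
        if record["previous_hash"] != expected_previous:
            return False
        if hashlib.sha256(body.encode()).hexdigest() != record["hash"]:
            return False
    return True

ledger = []
add_record(ledger, {"device_id": "sensor-001", "temperature": 21.5})
add_record(ledger, {"device_id": "sensor-001", "temperature": 22.0})
print(is_valid(ledger))                        # True
ledger[0]["reading"]["temperature"] = 30.0     # tamper with an earlier record
print(is_valid(ledger))                        # False
```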

Contents:

  • Introduction
  • IoT is not a quick win for operators
  • Can blockchain help?
  • The IoT ecosystem is evolving rapidly…
  • The future vision won’t be possible unless IoT devices from different networks can share data securely
  • Application 1: Enhancing IoT device security
  • Use-case 1: Protecting IoT devices with blockchain and biometric data
  • Use-case 2: Preventing losses in the global freight and logistics industry
  • Application 2: Enabling self-managing device-to-device networks
  • Use-case 1: Enabling device-to-device payments
  • Use-case 2: Granting location-access through smart locks
  • Use-case 3: Enabling the ‘sharing economy’
  • Blockchain is not a silver bullet
  • Blockchain in operator IoT strategies


Monetising IoT: Four steps for success

Introduction

The internet of things (IoT) will revolutionise all industries, not just TMT. In addition to the benefits of connecting previously unconnected objects to monitor and control them, the data that IoT will make available could play a pivotal role in other major technological developments, such as big data analytics and autonomous vehicles.

It seems logical that, because IoT relies on connectivity, this will be a new growth opportunity for telcos. And indeed, as anyone who has attended MWC in the last few years can testify, most if not all major telcos are providing some kind of IoT service.

But IoT is not a quick win for telcos. The value of IoT connectivity is only a small portion of the total estimated value of the IoT ecosystem, and therefore telcos seeking to grow greater value in this area are actively moving into other layers, such as platforms and vertical end solutions.


Figure 1: Telcos are moving beyond IoT connectivity

Telcos are moving beyond IoT connectivity

Source: STL Partners

Although telco IoT strategies have evolved significantly over the past five years, this is a complicated and competitive area that people are still figuring out how to monetise. To help our clients overcome this challenge we are publishing a series of reports and best practice case studies over the next 12 months designed to help individual operators define their approach to IoT according to their size, market position, geographic footprint and other key characteristics such as appetite for innovation.

This report is the first in this series. The findings it presents are based upon primary and secondary research conducted between May and September 2017 which included:

  • A series of anonymous interviews with operators, vendors and other key players in the IoT ecosystem
  • A brainstorming session held with senior members from telco strategy teams at our European event in June 2017
  • An online survey about telcos’ role in IoT, which ran from May to June 2017

Contents:

  • Executive Summary
  • Introduction
  • A four-step process to monetise IoT
  • Step 1: Look beyond connected device forecasts
  • Step 2: Map out your IoT strategy
  • Step 3: Be brave and commit
  • Step 4: Develop horizontal capabilities to serve your non-core verticals
  • Result: The T-shaped IoT business model
  • IoT data is a secondary opportunity
  • Conclusion

Figures:

  • Figure 1: Telcos are moving beyond IoT connectivity
  • Figure 2: IoT verticals and use-cases
  • Figure 3: Four possible roles within the IoT ecosystem
  • Figure 4: Telcos can play different roles in different verticals
  • Figure 5: IoT connectivity can be simplified into four broad categories
  • Figure 6: As the IoT field matures, use-cases become more complex
  • Figure 7: The technical components of an IoT platform
  • Figure 8: The T-shaped IoT business model
