MWC 2023: You are now in a new industry

The birth of a new sector: “Connected Technologies”

Mobile World Congress (MWC) is the world’s biggest showcase for the mobile telecoms industry. MWC 2023 marked the second year back at full scale after the COVID disruptions. With 88,000 visitors, 2,400 exhibitors and 1,000 speakers, it did not quite reach pre-COVID heights, but it remained an event of enormous scale. Notably, 56% of visitors came from industries adjacent to the core mobile ecosystem, reflecting STL’s view that we are now in a new industry with a diverse range of players delivering connected technologies.

At such scale, it can be difficult to pick out the significant messages from the noise. STL’s research team attended the event in full force, and each of us focused on a specific topic. In this report we distil what we saw at MWC 2023 and what we think it means for telecoms operators, technology companies and new players entering the industry.

STL Partners research team at MWC 2023

The diversity of companies attending and of applications demonstrated at MWC23 illustrated that the business being conducted is no longer the delivery of mobile communications. It is addressing a broader goal that we’ve described as the Coordination Age. This is the use of connected technologies to help a wide range of customers make better use of their resources.

The centrality of the GSMA Open Gateway announcement in discussions was one harbinger of the new model. The point of the APIs is to enable other players to access and use telecoms resources more automatically and rapidly, rather than through lengthy and complex bespoke processes. It starts to open many new business model opportunities across the economy. To steal the words of John Antanaitis, VP Global Portfolio Marketing at Vonage, APIs are “a small key to a big door”.
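To make that concrete, the sketch below shows roughly what consuming such an API could look like for a developer. It is our own illustration in the spirit of the CAMARA-style APIs behind Open Gateway: the base URL, resource path and field names are assumptions for demonstration, not a published contract.

```python
import requests  # third-party HTTP client (pip install requests)

# Hypothetical CAMARA-style "Quality on Demand" request. The gateway URL,
# path and payload fields below are illustrative assumptions; consult the
# published Open Gateway / CAMARA specifications for the real contract.
BASE_URL = "https://api.example-telco.com/qod/v0"
TOKEN = "..."  # OAuth2 access token obtained out of band

def request_qos_session(phone_number: str, profile: str) -> dict:
    """Ask the network to apply a named QoS profile to one device."""
    payload = {
        "device": {"phoneNumber": phone_number},
        "qosProfile": profile,  # e.g. a low-latency profile
        "duration": 600,        # seconds
    }
    resp = requests.post(
        f"{BASE_URL}/sessions",
        json=payload,
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()  # session id, status, expiry
```

The point is the shape of the transaction: one authenticated HTTP call rather than a bespoke integration project, which is what makes the “small key to a big door” image apt.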

Other examples from MWC 2023 underlining the transition of “telecommunications” to a sector with new boundaries and new functions include:

  • The centrality of ecosystems and partnerships, which fundamentally serve to connect different parts of the technology value chain.
  • The importance of sustainability to the industry’s agenda. This is about careful and efficient use of resources within the industry and enabling customers to connect their own technologies to optimise energy consumption and their uses of other scarce resources such as land, water and carbon.
  • Increasing interest in, and experimentation with, the metaverse, which uses connected technologies (AR/VR, high-speed data, sometimes edge resources) to deliver a newly visceral experience to its users, in turn delivering other benefits, such as more engaging entertainment (better use of leisure time and attention) and more compelling training (e.g. more realistic and lifelike emergency training scenarios).
  • Telco cloud, a primary purpose of which is to break out the functions and technologies within operators’ network domains. It makes individual processes, assets and functions programmable – again, linking them with signals from other parts of the ecosystem, whether external customers and partners or internal users.
  • The growing dialogue around edge computing and private networks – evolving ways for enterprise customers to take control of all or part of their connected technologies.
  • The importance of AI and automation, both within operators and across the market. The nature of automation is to connect one technology or data source to another: an action in one place is triggered by a signal from another, as the sketch below illustrates.
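As a simple illustration of that signal-to-action pattern, here is a toy event bus in Python. The signal names and the congestion scenario are invented for the example; real systems would use message brokers and standardised network APIs.

```python
from collections import defaultdict
from typing import Callable

# Toy publish/subscribe bus: handlers register for named signals, and an
# event raised in one domain triggers an action in another.
_handlers: defaultdict[str, list[Callable[[dict], None]]] = defaultdict(list)

def on(signal: str):
    """Decorator: register a handler for a named signal."""
    def register(fn: Callable[[dict], None]):
        _handlers[signal].append(fn)
        return fn
    return register

def emit(signal: str, payload: dict) -> None:
    """Publish a signal to every subscribed handler."""
    for fn in _handlers[signal]:
        fn(payload)

@on("cell.congestion")
def steer_traffic(event: dict) -> None:
    # In a real deployment this would call an orchestration or RAN API.
    print(f"Steering traffic away from cell {event['cell_id']}")

# A monitoring probe elsewhere in the network raises the signal:
emit("cell.congestion", {"cell_id": "C-1042", "load": 0.93})
```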

Many of these connecting technologies are still relatively nascent and incomplete at this stage. They do not yet deliver the experiences or economics that will ultimately make them successful. However, what they collectively reveal is that the underlying drive to connect technologies to make better use of resources is like a form of economic gravity. In the same way that water will always run downhill, so will the market evolve towards optimising the use of resources through connecting technologies.

Table of contents

  • Executive Summary
    • The birth of a new sector: ‘Connected technologies’
    • Old gripes remain
    • So what if you are in a new industry?
    • You might like it
    • How to go from telco to connected techco
    • Next steps
  • Introduction
  • Strategy: Does the industry know where it’s going?
    • Where will the money come from?
    • Telcos still demanding their “fair share”, but what’s fair, or constructive?
    • Hope for the future
  • Transformation leadership: Ecosystem practices
    • Current drivers for ecosystem thinking
    • Barriers to wider and less linear ecosystem practices
    • Conclusion
  • Energy crisis sparks efficiency drive
    • Innovation is happening around energy
    • Orange looks to change consumer behaviour
    • Moves on measuring enablement effects
    • Key takeaways
  • Telco Cloud: Open RAN is important
    • Brownfield open RAN deployments at scale in 2024-25
    • Acceleration is key for vRAN workloads on COTS hardware
    • Energy efficiency is a key use case of open RAN and vRAN
    • Other business
    • Conclusion
  • Consumer: Where are telcos currently focused?
    • Staying relevant: Metaverse returns
    • Consumer revenue opportunities: Commerce and finance
    • Customer engagement: Utilising AI
  • Enterprise: Are telcos really ready for new business models?
    • Metaverse for enterprise: Pure hype?
    • Network APIs: The tech is progressing
    • …But commercial value is still unclear
    • Final takeaways
  • Private networks: Coming over the hype curve
    • A fragmented but dynamic ecosystem
    • A push for mid-market adoption
    • Finding the right sector and the right business case
  • Edge computing: Entering the next phase
    • Telcos are looking for ways to monetise edge
    • Edge computing and private networks – a winning combination?
    • Network APIs take centre stage
    • Final thoughts
  • AI and automation: Opening up access to operational data
    • Gathering up of end-to-end data across multiple domains
    • Support for network automations
    • Data for external use
    • Key takeaways

6G: Hype versus reality

What is 6G and why does it matter?

Who’s driving the 6G discussion?

There are already numerous 6G visions, suggested use cases and proposed technical elements. Many reflect vendors’ or universities’ existing specialist research domains or IPR in wireless, or look to entrench and extend existing commercial models and “locked-in” legacy technology stacks.

Others start from broad visions of UN development goals and policymakers’ desires for connected societies, and try to use these to frame and underpin 6G targets, even if the reality is that they will often be delivered by 5G, fibre or other technologies.

The stakeholder groups involved in creating 6G are wider than for 5G – governments, cloud hyperscalers and techcos, industrial specialists, NGOs and many other groups seem more prominent than in the past, when the main drivers were MNOs, large vendors and key academic clusters.

Over time, a process of iteration and “triangulation” will occur for 6G, initially starting with a wide funnel of ideas, which are now starting to coalesce into common requirements – and then to specific standards and underlying technical innovations. By around 2024-25 there should be more clarity, but at present there are still many directions that 6G could take.

What are they saying?

Discussions with, and available material from, parties interested in 6G cover a wide range of new technologies (e.g. ultra-massive MIMO) and design goals (e.g. speeds of 1Tbps). These can be organised into six categories, providing a high-level set of futuristic statements that underpin the concept of 6G as articulated by the various 6G consortia and governing bodies (a back-of-envelope check on the headline data-rate goal follows the list):

  1. Ultra-high data rates and ultra-low latency: Speeds of up to 1Tbps and latency as low as 1 microsecond – both outdoors and, implicitly at least, indoors.
  2. Use of new frequencies and interconnection of new network types: Efficient use of high, medium, and low-frequency bands, potentially including visible light and >100GHz and even THz spectrum. This will include possible coordination between non-terrestrial networks and other existing networks, and new types of radio and antenna to provide ubiquitous coverage in a dispersed “fabric” concept, rather than traditional discrete “cells”.
  3. Ultra-massive MIMO and ultra-flexible physical and control layers: The combination of ultra-large antenna arrays, intelligent surfaces, AI and new sensing technologies working in a range of frequency bands. This will depend on the deployment of a range of new technologies in the physical and control layers to increase coverage and speed, while reducing cost and power consumption.
  4. High resolution location: The ability to improve locational accuracy, potentially to centimetre-level resolutions, as well as the ability to find and describe objects in 3D orientation.
  5. Improved sensing capabilities: The ability to use 6G radio signals for direct sensing applications such as radar, as well as for communications.
  6. General network concepts: A variety of topics including the concept of a distributed network architecture and a “network of networks” to improve network performance and coverage. This also includes more conceptual topics such as micro-networks and computing-aware networks. Finally, there is discussion of tailoring 6G for use and deployment by industries beyond traditional telcos (“verticals”) – such as rail, broadcast, agriculture and utilities – which may require specific features for coverage, sector-specific protocols or legacy interoperability.
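A quick back-of-envelope check (ours, not the consortia’s) shows why the data-rate goal in category 1 pulls in the new spectrum of category 2: capacity is bandwidth multiplied by spectral efficiency, and even at an optimistic efficiency a 1Tbps link needs on the order of 100GHz of bandwidth – far more than exists in all the traditional cellular bands combined.

```python
# Back-of-envelope: capacity = bandwidth * spectral_efficiency.
# The spectral-efficiency figure is an optimistic illustrative assumption.
TARGET_RATE_BPS = 1e12   # 1 Tbps headline 6G goal
SPECTRAL_EFF = 10        # bit/s/Hz, generous massive-MIMO assumption

required_bandwidth_hz = TARGET_RATE_BPS / SPECTRAL_EFF
print(f"Bandwidth needed: {required_bandwidth_hz / 1e9:.0f} GHz")
# -> 100 GHz of spectrum, which simply does not exist below 6 GHz;
#    hence the interest in mmWave, sub-THz and THz bands.
```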

How is 6G different to 5G?

In reality, the boundaries between later versions of 5G and 6G are likely to be blurred, both in terms of the technology standards development and in the ways marketers present network products and services. As with 5G, the development of 6G will take time to reach many of the goals above. From 3GPP Release 18 onwards, 5G is officially being renamed as “5G Advanced”, mirroring a similar move in the later stages of 4G/LTE development. Rel18 standards are expected to be completed around the end of 2023, with preliminary Rel19 studies also currently underway. Rel20 and Rel21 will continue the evolution.

Figure 1: Roadmap for 6G

Source: Slides presented by Bharat B Bhatia, President, ITU-APT Foundation of India, at WWRF Huddle 2022

However, from 2024 onwards, the work done at 3GPP meetings and in its various groups will gradually shift from enhancing 5G to starting the groundwork for 6G – initially defining requirements in 2024-25, then creating “study items” in 2025-26. During that time, new additions to 5G in Rel20/21/22 will get progressively thinner as resources are devoted to 6G preparations.

The heavy lifting on “work items” for 6G will probably start around 2026-27, with 5G Advanced output then dwindling to small enhancements or maintenance releases. It is still unclear what will be included in 5G Advanced versus held over until 6G, but the main emphasis for 5G Advanced is likely to be on:

  • Greater performance and efficiency for mobile broadband, with attention paid to MIMO techniques, better uplink mechanisms and improved cell-to-cell handover
  • Additional features for specific verticals, as well as V2X deployments and IoT
  • Support of new spectrum bands
  • Improvements in mapping and positioning
  • Enhanced coverage and backhaul, for instance by establishing “daisy-chains” of cell sites, extensions and repeaters, including using 5G for both backhaul and access
  • More intelligence and automation in the 5G network core, including improvements to slicing and orchestration
  • Better integration of non-terrestrial networks, typically using satellites or high-altitude platforms
  • Capabilities specifically aimed at AR/VR/XR
  • Direct device-to-device connections (also called “sidelink”) that allow communication without the need to go via a cell tower.

We can expect these 5G Advanced areas to progress from requirements, to study items, and then to work items over the period 2022-27.

However, these features will mostly be an evolution of 5G rather than a revolution. While there may be a few early moves in areas such as wireless sensing, Releases 18-21 are unlikely to include any radical breakthroughs. The topics we discuss elsewhere in this report – such as the potential use of terahertz bands, the blending-in of O-RAN disaggregation principles, and new technology domains such as smart surfaces – will be solidly in the 6G era.

An important point here is that the official ITU standard for next-generation wireless, likely to be called IMT2030, is not the same thing as 3GPP’s branding of the cellular “generation”, or as individual MNOs’ service names. There may well be early versions of 6G cellular, driven by market demand, that don’t quite match up to the ITU requirements. Ultimately, 3GPP is an industry-led organisation, so it may follow the path of expediency if there are urgent commercial opportunities or challenges.

In addition, based on the experience of 4G and 5G launches, it is probable that at least one MNO will try to call a 5G Advanced launch “6G” in its marketing. AT&T caused huge controversy – and even lawsuits – by calling a late version of LTE “5G E” (5G Evolution), even putting the icon on some phones’ screens, while Verizon’s early 5G FWA systems were actually a proprietary pre-standard version of the technology.

If you’re a purist about these things – as we are – prepare to be howling in frustration around 2027-28 and describing new services as “fake 6G”.

Table of Contents

  • Executive Summary
    • What is 6G?
    • Key considerations for telcos and vendors around 6G
    • What should telcos and vendors do now?
    • 6G capabilities: Short-term focus areas
    • Other influencing factors
  • What is 6G and why does it matter?
    • Who’s driving the 6G discussion?
    • What are they saying?
    • The reality of moving from 5G Advanced to 6G
    • Likely roll out of 6G capabilities
  • Regulation and geopolitics
    • The expected impact of regulation and geopolitics
    • Summary of 6G consortiums and other interested parties
  • 6G products and services
  • Requirements for 6G
    • AI/ML in 6G
    • 6G security
    • 6G privacy
    • 6G sustainability
  • Drivers and barriers to 6G deployment
    • Short-term drivers
    • Short-term barriers
    • Long-term drivers
    • Long-term barriers
  • Conclusion: Realistic expectations for 6G
    • The reality: What we know for certain about 6G / IMT2030
    • Possibilities: Focus areas for 6G development
    • The hype: Highly unlikely or impossible by 2030


Driving the agility flywheel: the stepwise journey to agile

Agility is front of mind, now more than ever

Telecoms operators today face an increasingly challenging market, with pressure coming from new non-telco competitors, from the demands of unfamiliar B2B2X business models emerging from new enterprise opportunities across industries, and from the need to make significant investments in 5G. As the telecoms industry undergoes these changes, operators are considering how best to realise commercial opportunities, particularly in enterprise markets, through the new types of value-added services and capabilities that 5G can bring.

However, operators need to be able to react not just to near-term, known opportunities as they arise, but to ready themselves for opportunities that are still being imagined. Amid such uncertainty, agility – with the quick responsiveness and unified focus it implies – is integral to an operator’s continued success and its ability to capitalise on these opportunities.

Traditional linear supply models are now being complemented by more interconnected ecosystems of customers and partners. Innovation of products and services is a primary function of these decentralised supply models. Ecosystems allow the disparate needs of participants to be met through highly configurable assets rather than waiting for a centralised player to understand the complete picture. This emphasises the importance of programmability in maximising the value returned on your assets, both in end-to-end solutions you deliver, and in those where you are providing a component of another party’s system. The need for agility has never been stronger, and this has accelerated transformation initiatives within operators in recent years.

Concepts of agility have crystallised in meaning

In 2015, STL Partners published a report, ‘The Agile Operator: 5 key ways to meet the agility challenge’, exploring the concept and characteristics of operator agility, including what it means to operators, the key areas of agility and the challenges of agile transformation. Today, the definition of agility remains as broad as it was in 2015, but many concepts of agility have crystallised as their importance has gained wider acceptance across different parts of the organisation.

Agility today is a pervasive philosophy of incremental innovation, learned from software development, that emphasises both speed of innovation at scale and carrier-grade resilience. This is achieved through cloud-native modular architectures and practices such as sprints, DevOps and continuous integration/continuous delivery (CI/CD) – occurring in a virtuous cycle we call the agility flywheel.

The Agility Flywheel

Source: STL Partners

Six years ago, operators were largely looking to borrow only certain elements of cloud native for adoption in specific pockets within the organisation, such as IT. Now, the cloud model is more widely embraced across the business and telcos profess ambitions to become software-centric companies.

Same problem, different constraints

Cloud native is the most fundamental version of the componentised cloud software vision, and progress towards this ideal of agility has been heavily constrained by operators’ underlying capabilities. In 2015, operators were just starting to embark on their network virtualisation journeys, with barriers such as siloed legacy IT stacks, inelastic infrastructure and architecture-constrained software lifecycles. Though these barriers remain a challenge for many, the operators at the forefront – now unhindered by these basic constraints – have been driving a resurgence and general acceleration towards organisation-wide agility, and now face new challenges around the unknowns underpinning the requirements of future capabilities.

With 5G, the network itself is designed as cloud native from the ground up, as are the leading edge of enterprise applications recently deployed by operators, alleviating by design some of the constraints on operators’ ability to become more agile. Uncertainty around what future opportunities will look like, and how to support them, requires agility to run deep into all of an operator’s processes and capabilities. Though there is a vast raft of opportunities that do not need cloud native, the market is ultimately evolving in this direction, and operators should benchmark their ambitions against the leading edge, with a plan to get there incrementally. This report looks to address the following key question:

Given the flexibility and driving force that 5G provides, how can operators take advantage of recent enablers to drive greater agility and thrive in the current pace of change?

Table of Contents

    • Executive Summary
    • Agility is front of mind, now more than ever
      • Concepts of agility have crystallised in meaning
      • Same problem, different constraints
    • Ambitions to be a software-centric business
      • Cloudification is supporting the need for agility
      • A balance between seemingly opposing concepts
    • You are only as agile as your slowest limb
      • Agility is achieved stepwise across three fronts
      • Agile IT and networks in the decoupled model
      • Renewed need for orchestration that is dynamic
      • Enabling and monetising telco capabilities
      • Creating momentum for the agility flywheel
    • Recommendations and conclusions

NFV and OSS: Virtualization meets reality

Introduction: New virtual network, same old OSS

The relationship between NFV and OSS

This report discusses the relationship between NFV (Network Functions Virtualization) and OSS (Operations Support Systems), and the difficulties that operators and the developer community are facing in migrating from legacy OSS to NFV-based methods for delivering and managing services.

OSS are essentially the software systems and applications that are used to deliver services and manage network resources and elements in legacy telecom networks – including, to name but a few:

  • Service provisioning: designing and planning a new service, and deploying it to the network elements required to deliver it
  • Service fulfillment: in its broader definition, this corresponds to the ‘order-to-activation’ (O2A) process, i.e. the sequence of actions enabling a service order to be logged, resourced on the network, configured on the relevant network elements, and activated (a toy sketch of this sequence follows the list)
  • Service assurance: the group of processes involved in monitoring network performance and service quality, and in proactively preventing or retrospectively repairing degraded performance or network faults
  • Inventory and network resource management: managing the physical and logical network assets and service resources, and keeping track of their utilization, condition and availability to be allocated to new services or customers; this is therefore closely related to service fulfillment and assurance.
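To make the O2A sequence concrete, here is a deliberately minimal state machine; the class and state names are invented for illustration, and real fulfillment systems are of course far richer.

```python
# Toy order-to-activation (O2A) flow; the states come from the definition
# above, everything else is invented for illustration.
O2A_STEPS = ["logged", "resourced", "configured", "activated"]

class ServiceOrder:
    def __init__(self, service: str):
        self.service = service
        self.state = O2A_STEPS[0]  # a new order starts life as "logged"

    def advance(self) -> str:
        """Move the order to the next stage of the O2A sequence."""
        nxt = O2A_STEPS.index(self.state) + 1
        if nxt >= len(O2A_STEPS):
            raise RuntimeError("order is already active")
        self.state = O2A_STEPS[nxt]
        return self.state

order = ServiceOrder("business VPN")
while order.state != "activated":
    print(order.advance())  # resourced, configured, activated
```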

As these examples illustrate, OSS perform highly specific management functions tied to physical network equipment and components, or Physical Network Functions (PNFs). As part of the migration to NFV, many of these PNFs are now being replaced by Virtualized Network Functions (VNFs) and microservices. NFV is developing its own methods for managing VNFs, and for configuring, sequencing and resourcing them to create, deliver and manage services: so-called Management and Orchestration (MANO) frameworks.

The MANO plays a critical role in delivering the expected benefits of NFV, in that it is designed to enable network functions, resources and services to be much more easily programmed, combined, modified and scaled than is possible with PNFs and with OSS that perform isolated functions or are assigned only to individual pieces of kit.

The problem that operators are now confronting is that many existing OSS will need to be retained while networks are transitioning to NFV and MANO systems. This is for a number of reasons.
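To ground the MANO concept, here is a deliberately simplified closed-loop controller of the kind discussed later in this report under closed-loop fulfillment and assurance: it watches a load metric and scales VNF instances to match. The class names and thresholds are invented; a real NFVO would act through standardised interfaces rather than mutating objects in memory.

```python
from dataclasses import dataclass, field

@dataclass
class VNF:
    name: str
    instances: int = 1
    cpu_load: float = 0.0  # 0.0-1.0, fed in by a monitoring probe

@dataclass
class Orchestrator:
    """Toy MANO-style controller: observe, decide, act."""
    scale_out_at: float = 0.80
    scale_in_at: float = 0.30
    vnfs: list[VNF] = field(default_factory=list)

    def reconcile(self) -> None:
        """One pass of the closed loop over every managed VNF."""
        for vnf in self.vnfs:
            if vnf.cpu_load > self.scale_out_at:
                vnf.instances += 1  # would call the VIM/NFVO in reality
                print(f"{vnf.name}: scaled out to {vnf.instances}")
            elif vnf.cpu_load < self.scale_in_at and vnf.instances > 1:
                vnf.instances -= 1
                print(f"{vnf.name}: scaled in to {vnf.instances}")

mano = Orchestrator(vnfs=[VNF("vFirewall", cpu_load=0.91)])
mano.reconcile()  # -> vFirewall: scaled out to 2
```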

  • Executive Summary
  • Next Steps
  • Introduction: New virtual network, same old OSS
  • The relationship between NFV and OSS
  • Potential solutions and key ongoing problem areas
  • Conclusion: OSS may ultimately be going away – but not anytime soon
  • OSS-NFV interoperability: three approaches
  • OSS-NFV integration method 1: Use the existing BSS / OSS to manage both legacy and virtualized services
  • OSS-NFV integration method 2: Use a flexible combination of existing OSS for legacy infrastructure and services, and MANO systems for NFV
  • OSS-NFV integration method 3: Replace the existing OSS altogether using a new MANO system
  • Three critical problem areas: service assurance, information models, and skills
  • 1. Closed-loop service fulfillment and assurance
  • 2. A Common Information Model (CIM)
  • 3. Skills, organization and processes

 

  • Figure 1: Classic TMN BSS / OSS framework
  • Figure 2: Telcos’ BSS / OSS strategy for NFV
  • Figure 3: Transition from BSS / OSS-driven to NFV-driven service management as proposed by Amdocs
  • Figure 4: NFV / SDN functions as modules within the Comarch OSS architecture
  • Figure 5: Closed-loop network capacity augmentation using Netscout virtual IP probes and a common data model
  • Figure 6: Service-driven OSS-MANO integration according to Amdocs
  • Figure 7: HPE’s model for OSS-MANO integration
  • Figure 8: BSS and OSS still out of scope in OSM 1.0
  • Figure 9: Subordination of OSS to the MANO system in Open-O
  • Figure 10: Vodafone Ocean platform architecture
  • Figure 11: Vodafone’s VPN+ PoC
  • Figure 12: Operators’ main concerns regarding NFV
  • Figure 13: Closed-loop service fulfillment and assurance
  • Figure 14: Relationship between Information Model and Data Models

The Devil’s Advocate: SDN / NFV can never work, and here’s why!

Introduction

The Advocatus Diaboli (Latin for Devil’s Advocate) was formerly an official position within the Catholic Church: one who “argued against the canonization (sainthood) of a candidate in order to uncover any character flaws or misrepresentation of the evidence favouring canonization”.

In common parlance, a “devil’s advocate” is someone who, given a certain point of view, takes a position they do not necessarily agree with (or simply one that departs from the accepted norm), for the sake of debate or to explore the thought further.

SDN / NFV runs into problems: a ‘devil’s advocate’ assessment

The telco industry’s drive toward Network Functions Virtualization (NFV) got going in a major way in 2014, with high expectations that the technology – along with its sister technology, SDN (Software-Defined Networking) – would revolutionize operators’ ability to deliver innovative communications and digital services, and transform the ways in which those services can be purchased and consumed.

Unsurprisingly, as with so many of these ‘revolutions’, early optimism has now given way to the realization that full-scope NFV deployment will be complex, time-consuming and expensive. Meanwhile, it has become apparent that the technology may not transform telcos’ operations and financial fortunes as much as originally expected.

The following is a presentation of the case against SDN / NFV from the perspective of the ‘devil’s advocate’. It is a combination of the types of criticism that have been voiced in recent times, but taken to the extreme so as to represent a ‘damning’ indictment of the industry effort around these technologies. This is not the official view of STL Partners but rather an attempt to explore the limits of the skeptical position.

We will respond to each of the devil’s advocate’s arguments in turn in the second half of this report; and, in keeping with good analytical practice, we will endeavor to present a balanced synthesis at the end.

‘It’ll never work’: the devil’s advocate speaks

And here’s why:

1. Questionable financial and operational benefits:

Will NFV ever deliver any real cost savings or capacity gains? Operators that have launched NFV-based services have not yet provided any hard evidence that they have achieved notable reductions in their opex and capex on the basis of the technology, or any evidence that the data-carrying capacity, performance or flexibility of their networks have significantly improved.

Operators talk a good talk, but where is the actual financial and operating data that supports the NFV business case? Are they refusing to disclose the figures because they are in fact negative or inconclusive? And if this is so, how can we have any confidence that NFV and SDN will deliver anything like the long-term cost and performance benefits that have been touted for them?

 

  • Executive Summary
  • Introduction
  • SDN / NFV runs into problems: a ‘devil’s advocate’ assessment
  • ‘It’ll never work’: the devil’s advocate speaks
  • 1. Questionable financial and operational benefits
  • 2. Wasted investments and built-in obsolescence
  • 3. Depreciation losses
  • 4. Difficulties in testing and deploying
  • 5. Telco cloud or pie in the sky?
  • 6. Losing focus on competitors because of focusing on networks
  • 7. Change the culture and get agile?
  • 8. It’s too complicated
  • The case for the defense
  • 1. Clear financial and operational benefits
  • 2. Strong short-term investment and business case
  • 3. Different depreciation and valuation models apply to virtualized assets
  • 4. Short-term pain for long-term gains
  • 5. Don’t cloud your vision of the technological future
  • 6. Telcos can compete in the present while building the future
  • 7. Operators both can and must transform their culture and skills base to become more agile
  • 8. It may be complicated, but is that a reason not to attempt it?
  • A balanced view of NFV: ‘making a virtual out of necessity’ without making NFV a virtue in itself

Full Article: M-Banking: can Zain’s new business model for ZAP rival M-PESA?

One of the major successes of the mobile industry in recent years has been the growth of m-banking in the developing world. Although a considerable number of well-funded, vendor- and operator-backed efforts to deploy m-payments systems in Europe have failed, m-banking succeeded in Africa and Asia – largely because it catered to needs that the rest of the financial system simply didn’t supply. Now, a major emerging market operator, Zain, has entered the game with a radically different business model.

Another driver of success was that the developers of M-PESA and other systems observed that the airtime credit transfer features built into their prepaid OSS solutions were being used by their subscribers as a crude money transfer system; rather than prescribing a solution, they built on user creativity. Telco 2.0 is interested in this not only because this form of development is profoundly Telco 2.0, but also because m-banking is the ultimate example of the opportunities that appear where there is a large and positive difference between the quantity of data transferred, and its social value.

By far the best-known systems are M-PESA, developed in-house by Safaricom in Kenya and now deployed in several other countries, and Smart’s service in the Philippines. However, as you’d expect, their success has attracted imitators and competitors. If you’d asked most people in the industry which operator was likely to reach the market first with such a product, they would probably have said Celtel, the hugely respected emerging-market GSM specialists founded by Mo Ibrahim. After all, by 2006 they had already integrated their East African HLRs, ending roaming charges in the area and permitting cross-border credit transfer – a single currency of sorts.

Well, Celtel was sold to Kuwait’s MTC not long after that, changing its name to Zain. Mo Ibrahim took his money and began offering African presidents a bonus for retiring peacefully. Now, however, Zain has moved into the mobile money business, and its sheer scale – the initial deployment covers some 100 million subscribers – makes certain that this will be an important moment in the market’s development. It also means that some markets now have competing mobile payments services: Tanzania, for example, has Zain’s ZAP and two competing M-PESA deployments. This is probably going to teach us a lot about this business in the next few months.

Cash: the crucial application in cashless payments systems

The killer application for mobile payments is cash. This is one of the reasons projects like Simpay failed: rather than extending the existing financial system, they tried to leap directly to a cashless one. Network effects are vital to understanding this. If the money in the system can’t be converted into cash, the whole system suffers from a first-fax problem, as no one is likely to accept payment from it. It is therefore crucial that the system deals in cash.

Cash is also the form of payment that mobile banking systems compete with. This is another reason why the successes came in cash or pre-cash economies rather than in Western Europe – most people where Simpay was trialled have access to modern banking, and ATMs readily dispense cash to all and sundry. Handling cash is always expensive and risky, wherever in the world you are: it is frequently stolen or embezzled, and it needs guarding. These problems are much aggravated where there is no effective policing. Hence, in large parts of the world, people are excluded from the ability to save (or to borrow), and are reliant on expensive and frequently risky informal transfer networks.

Mobile operators were able to step into the breach because the development of PAYG (pay-as-you-go) service had created an alternative, lightweight financial infrastructure, consisting of real-time OSS in the network and an extended user interface made up of various tokens (vouchers, SMS transfers) and a network of micro-entrepreneurs who sell them. The business process here essentially provides a way of authenticating to the OSS that the user presenting a voucher code has indeed paid cash for a given number of minutes of use, and then of recovering that cash into the operator through a wholesale business relationship with the vendors. There is really very little difference between this and the corresponding process of ingesting cash into a mobile payments system – which subscribers were quick to understand, repurposing the airtime-selling network accordingly.
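A toy sketch of that voucher process shows how little machinery the authentication step actually requires. The function names and the hashed-code ledger are invented for illustration; real prepaid platforms are, of course, far more involved.

```python
import hashlib
import secrets

# Toy voucher ledger: the OSS records the hash of each issued code and
# marks it used on redemption, preventing double-spends.
_issued: dict[str, int] = {}   # sha256(code) -> face value in minutes
_redeemed: set[str] = set()

def issue_voucher(minutes: int) -> str:
    """A vendor buys a voucher wholesale; the OSS records its hashed code."""
    code = secrets.token_hex(8)  # the code printed under the scratch panel
    _issued[hashlib.sha256(code.encode()).hexdigest()] = minutes
    return code

def redeem(code: str) -> int:
    """A subscriber texts in the code; the OSS verifies and credits airtime."""
    digest = hashlib.sha256(code.encode()).hexdigest()
    if digest not in _issued or digest in _redeemed:
        raise ValueError("invalid or already-used voucher")
    _redeemed.add(digest)
    return _issued[digest]  # minutes credited to the subscriber's account

code = issue_voucher(minutes=30)
print(redeem(code))  # 30
```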

But as the invaluable Valuable Bits blog points out, there is one big difference between informal airtime credit transfer and formal m-banking: transaction cost. You can be confident of getting the minutes of use you pay for, but what happens when it comes to converting them back into cash? Well, you don’t know. Valuable Bits estimates that the transaction cost ranges between 5 and 40 – 40! – per cent of the transaction, a figure that makes even Western Union’s margins look modest. And worse, it’s not a risk but an uncertainty. This form of money varies in value between people, between markets and over time. The canonical purposes of money are as a means of exchange, a store of value and a unit of account – and stability is crucial for all three.

Trusted agent networks are decisive

So, it’s crucial to build a network of agents who are trusted by both the network and the public, so that the system can both accept cash and pay it back out. The golden rule of cellular has always been that superior coverage wins, and if you’re already selling airtime this way, you have an advantage. In fact, there is an older alternative system that works on the same principle: in some places, bus companies use the fact that they collect cash in strange and remote places to run a similar money-transfer business. You don’t necessarily need a transport system at all – the hawala system has worked rather well for many, many years purely on trust and the assumption that transfers roughly balance out.

In a realistic deployment, though, it’s likely that there will be clearly defined source and sink areas – for example, people in the city (a source) send money to the countryside (a sink), and migrants to the Gulf (a source) send money back to East Africa (a sink). So it’s more complicated than we often think: the wholesale element may need to advance cash to agents in some places in order to keep the system liquid, rather like a central bank. But whatever else you do, first of all you need the agents, which means that the business model must make room for them to earn a living.

M-PESA originally used the simplest possible option – a fixed transaction fee. The problem is that this is regressive: the poor pay more as a percentage of their transactions. In an environment where the competition is cash or the informal sector, this worked against M-PESA’s own interests, and Safaricom later introduced a scale of pricing that tapered the transaction fee off as the transaction size fell. Either way, the pricing was pre-determined with regard to the end user.

ZAP works completely differently. Instead of a rate card, ZAP has a revenue share between the agents and the network, and the agents can set whatever price they believe the market will bear. Further, Zain plans to monetise this by collecting an explicit transaction fee from its agents in cash; most other operators have instead charged an implicit fee, in the form of the SMS or USSD traffic the service consumes.
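The regressiveness point is easy to see with invented numbers. The sketch below compares a flat fee with a banded scale that tapers off for smaller transactions, of the kind M-PESA later adopted; the tariffs are purely illustrative.

```python
# Compare two pricing models with invented tariffs (arbitrary currency):
# a flat fee takes a far larger share of a small transfer.

def fixed_fee(amount: float, fee: float = 30.0) -> float:
    """Flat fee per transfer: regressive for small amounts."""
    return fee

def tapered_fee(amount: float) -> float:
    """Banded fee that falls with transaction size."""
    if amount <= 500:
        return 10.0
    if amount <= 5000:
        return 30.0
    return 60.0

for amount in (100.0, 1000.0, 10000.0):
    flat, taper = fixed_fee(amount), tapered_fee(amount)
    print(f"{amount:>8.0f}: flat {flat / amount:6.1%}   tapered {taper / amount:6.1%}")

# ->      100: flat  30.0%   tapered  10.0%
# ->     1000: flat   3.0%   tapered   3.0%
# ->    10000: flat   0.3%   tapered   0.6%
```

ZAP’s revenue-share model goes a step further still: there is no schedule at all, and each agent prices independently – which is exactly where the uncertainty discussed below comes from.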

Reducing uncertainty – Zain and the Kerala example

In an oversimplified model, this ought to have the effect of rapidly discovering the market-clearing price. However, the market for this service is likely to be geographically fragmented, locally monopolistic and skewed by asymmetric information. In pure economic theory this may be a problem, but not for long – the markets will eventually converge. But businesses don’t live in theory; they live in practice, and a bad start can easily wreck your chances for good. Remember WAP.

It’s a brave decision by Zain, but we’re concerned it may defeat the purpose of m-banking. After all, one of the main sources of value to the end user is getting rid of the uncertainty, risk and transaction costs associated with informal solutions. The famous Kerala study showed that the deployment of GSM radically cut the volatility of the price of fish, as well as the spreads between different markets, with the result that the volume of fish that failed to find a buyer before going off was drastically curtailed. The chart below shows the price of fish over time at three markets which successively received GSM coverage; the drop in volatility is clearly visible.

Figure: fish prices over time at three markets as GSM coverage successively arrived

Uncertainty and transaction costs are exactly the friction that Telco 2.0 keeps saying telcos should specialise in removing; they are also very often the reason why people decide to form a two-sided trading hub. Hernando de Soto, the Peruvian economist who argues that secure title to property and land is the crucial factor in economic development, has paid the price of success by having his views turned into an oversimplified cliché, but few would disagree with his basic contention that uncertainty and insecurity are a major brake on bottom-up economic development. We are therefore concerned that a degree of both seems to be inherent in this model.

Conclusions: more two-sidedness needed

Perhaps this pricing is intended to encourage the recruitment of agents. However, field reports suggest that Zain’s agents are harder to find than those of its competitors. Zain is also charging for both deposits and withdrawals; two-sided market theory would suggest that it would be wiser to subsidise one side in order to build transaction volumes.

Experience in West Africa with Orange’s m-banking operations shows that a significant share of revenue (15%) can come from bank interest on customer balances, and that the greater the volume of money in the system, the more likely it is that transactions will be carried out by credit transfer rather than cash.

Our preliminary analysis is therefore that deposits should probably be free; that pricing should be as stable and transparent as possible, and probably collected implicitly (as SMS or USSD service charges); and that agents should perhaps be rewarded for recruiting users with an allocation of free minutes of use for resale, rather than with cash, minimising the complexity of the system’s internal economy and its need for internal cash transfers.

Do’s and Don’ts of M-banking

On this score, we suspect that Zain may need to change its m-banking business model to compete with M-PESA and Z-PESA effectively.

  • M-banking’s value proposition is reduced cost and uncertainty
  • Agent recruitment is vital
  • Minimise internal cash transfers as far as possible