Facebook: Telcos’ New Best Friend?

How Facebook is changing

A history of adaptation

One of the things that sets Facebook apart from its largely defunct predecessors, such as MySpace, GeoCities and Friends Reunited, is its ability to adapt to the evolution of the Internet and consumer behaviour. In its decade-long history, Facebook has evolved from a text-heavy, PC-based experience used by American students into a world-leading digital communications and commerce platform used by people of all ages. The basic student matchmaking service that Zuckerberg and his fellow Harvard students created in 2004 now matches buyers and sellers in competition with Google, Amazon and eBay (see Figure 1).

Figure 1: From student matchmaking service to a leading digital commerce platform

Source: Zuckerberg’s Facebook page and Facebook investor relations

Launched in early 2004, Facebook initially served as a relatively basic directory with photos and limited communications functionality for Harvard students only. In the spring of 2004, it began to expand to other universities, supported by seed funding from Peter Thiel (co-founder of PayPal). In September 2005, Facebook was opened up to the employees of some technology companies, including Apple and Microsoft. By the end of 2005, it had reached five million users.

Accel Partners invested US$12.7 million in the company in May 2005 and Greylock Partners and others followed this up with another US$27.5 million in March 2006. The additional investment enabled Facebook to expand rapidly. During 2006, it added the hugely popular newsfeed and the share functions and opened up the registration process to anyone. By December 2006, Facebook had 12 million users.

The Facebook Platform was launched in 2007, enabling affiliate sites and developers to interact and create applications for the social network. In a far-sighted move, Microsoft invested US$240 million in October 2007, taking a 1.6% stake and valuing Facebook at US$15 billion. By August 2008, Facebook had 100 million users.

Achieving the 100 million user milestone appears to have given Facebook ‘critical mass’ because at that point growth accelerated dramatically. The company doubled its user base to 200 million in nine months (May 2009) and has continued to grow at a similar rate since then.

As usage continued to grow rapidly, it became increasingly clear that Facebook could erode Google's dominant position in the Internet advertising market. In June 2011, Google launched the Google+ social network – the latest in a series of efforts by the search giant to weaken Facebook's dominance of the social networking market. But, like its predecessors, Google+ has had little impact on Facebook.

2012-2013 – the paranoid years

Although Facebook shrugged off the challenge from Google+, the rapid rise of the mobile Internet did cause the social network to wobble in 2012. The service, which had been designed for use on desktop PCs, didn’t work so well on mobile devices, both in terms of providing a compelling user experience and achieving monetisation. Realising Facebook could be disrupted by the rise of the mobile Internet, Zuckerberg belatedly called a mass staff meeting and announced a “mobile first” strategy in early 2012.

In its IPO filing in February 2012, Facebook acknowledged it wasn't sure it could effectively monetise mobile usage without alienating users. "Growth in use of Facebook through our mobile products, where we do not currently display ads, as a substitute for use on personal computers may negatively affect our revenue and financial results," it noted in the filing.

Although usage of Facebook continued to rise on both the desktop and mobile, there was increasing speculation that it could be superseded by a more mobile-friendly service, such as the fast-growing photo-sharing service Instagram. Zuckerberg's reaction was to buy Instagram for US$1 billion in April 2012 (a bargain compared with the US$21 billion-plus Facebook paid for WhatsApp less than two years later).

Meanwhile, Facebook did figure out how to monetise its mobile usage. Cautiously at first, it began embedding adverts in consumers' newsfeeds, where they were difficult to ignore. Although Facebook and some commentators worried that consumers would find these adverts annoying, newsfeed ads have proven highly effective and Facebook has continued to grow. In October 2012, now a public company, Facebook triumphantly announced it had one billion active users, 604 million of them on mobile.

Even so, Facebook spent much of 2013 tinkering and experimenting with changes to the user experience. For example, it altered the design of the newsfeed, making images bigger and adding new features. But some commentators complained that the changes made the site more complicated and confusing, rather than simplifying it for mobile users with relatively small screens. In April 2013, Facebook tried a different tack, launching Facebook Home, a user interface layer for Android phones that provides a replacement home screen.

And Zuckerberg continued to worry about upstart mobile-orientated competitors. In November 2013, a number of news outlets reported that Facebook offered to buy Snapchat, which enables users to send messages that disappear after a set period, for US$3 billion. But the offer was turned down.

A few months later, Facebook announced it was acquiring the popular mobile messaging app WhatsApp for what amounted to more than US$21 billion at the time of completion.

2014 – going on the offensive

By acquiring WhatsApp, albeit at great expense, Facebook put to rest investors' immediate fears that the social network could be dislodged by a more fashionable, dedicated mobile service, pushing up the share price (see the section on Facebook's valuation) and freeing Zuckerberg to turn his attention to new technologies and markets. In May 2014, Facebook wrong-footed many industry watchers and some of its rivals by announcing it had agreed to acquire Oculus VR, Inc., a leading virtual reality company, for US$2 billion in cash and stock.

Zuckerberg has since described the WhatsApp and Oculus acquisitions as "big bets on the next generation of communication and computing platforms." Facebook is also investing heavily in organic expansion, increasing its headcount by 45% in 2014 and opening another data centre in Altoona, Iowa.

Zuckerberg also continues to devote time and attention to Internet.org, a multi-company initiative to bring free basic Internet services to people who aren't connected. Announced in August 2013, Internet.org has since launched free basic internet services in six developing countries. For example, in February 2015, Facebook and Reliance Communications launched Internet.org in India. As a result, Reliance customers in six Indian states (Tamil Nadu, Maharashtra, Andhra Pradesh, Gujarat, Kerala and Telangana) now have access to about 40 services spanning news, maternal health, travel, local jobs, sports, communication and local government information.

Zuckerberg said that more than 150 million people now have the option to connect to the internet using Internet.org, and that the initiative had so far connected seven million people who previously had no access. "2015 is going to be an important year for our long term plans," he noted.

The Facebook exception – no fear, more freedom

Although it is now listed, Facebook is clearly not a typical public company. Its massive lead in the social networking market has given it an unusual degree of freedom. Zuckerberg has a controlling stake in the social network (he is able to exercise voting rights with respect to a majority of the voting power of the outstanding capital stock) and the self-confidence to ignore any grumblings on Wall Street. Facebook is able to make acquisitions most other companies couldn't contemplate and can continue to put Zuckerberg's long-term objectives ahead of those of short-term shareholders. Like Amazon, Facebook frequently reminds investors that it isn't trying to maximise short-term profitability. And unlike Amazon, Facebook may not even be trying to maximise long-term profitability.

On Facebook’s quarterly earning calls, Zuckerberg likes to talk about Facebook’s broad, long-term aims, without explaining clearly how fulfilling these objectives will make the company money. “In the next decade, Facebook is focused on our mission to connect the entire world, welcoming billions of people to our community and connecting many more people to the internet through Internet.org (see Figure 2),” he said in the January 2015 earnings call. “Similar to our transition to mobile over the last couple of years, now we want to really focus on serving everyone in the world.”

Figure 2: Zuckerberg is pushing hard for the provision of basic Internet services

Source: Facebook.com

Not all of the company’s investors are entirely comfortable with this mission. On that earnings call, one analyst asked Zuckerberg: “Mark, I think during your remarks in every earnings call, you talk to your investors for a considerable amount of time about Facebook’s efforts to connect the world, and specifically about Internet.org which suggest you think this is important to investors. Can you clarify why you think this matters to investors?”

Zuckerberg’s response: “It matters to the kind of investors that we want to have, because we are really a mission-focused company. We wake up every day and make decisions because we want to help connect the world. That’s what we’re doing here.

“Part of the subtext of your question is that, yes, if we were only focused on making money, we might put all of our energy on just increasing ads to people in the US and the other most developed countries. But that’s not the only thing that we care about here.

“I do think that over the long term, that focusing on helping connect everyone will be a good business opportunity for us, as well. We may not be able to tell you exactly how many years that’s going to happen in. But as these countries get more connected, the economies grow, the ad markets grow, and if Facebook and the other services in our community are the number one, and number two, three, four, five services that people are using, then over time we will be compensated for some of the value that we’ve provided. This is why we’re here. We’re here because our mission is to connect the world. I just think it’s really important that investors know that.”

Takeaways

Facebook may be a public company, but it doesn't worry much about shareholders' short-term aspirations. It often behaves like a private company that is focused first and foremost on fulfilling the goals of its founder. It is clear Zuckerberg is playing the long game. But it isn't clear what yardsticks he is using to measure success. Although Zuckerberg knows Facebook needs to be profitable enough to ensure investors' continued support, his primary goal may be to bring hundreds of millions more people online and secure his legacy. There is a danger that Zuckerberg's focus on connecting people in Africa and developing Asia means that there won't be sufficient top management attention on the multi-faceted digital commerce struggle with Google in North America and Western Europe.

Financials and business model

Network effects still strong

Within that wider mission to connect the world, Facebook continues to do a great job of connecting people to Facebook. Fuelled by network effects, its user base keeps expanding: the company says that 1.39 billion people now use Facebook each month (see Figure 3) and 890 million use the service daily, increases of 165 million and 133 million respectively during 2014. In developed markets, many consumers use Facebook as a primary medium for communications, relying on it to send messages, organise events and relay their news. As a result, in parts of Europe and North America, adults without a Facebook account are increasingly considered eccentric.

Figure 3: Facebook’s user base continues to grow rapidly

Source: Facebook and STL Partners analysis

Having said that, some active users are clearly more active and valuable than others. In a regulatory filing, Facebook admits that some active users may, in fact, be bots: “Some of our metrics have also been affected by applications on certain mobile devices that automatically contact our servers for regular updates with no user action involved, and this activity can cause our system to count the user associated with such a device as an active user on the day such contact occurs. The impact of this automatic activity on our metrics varied by geography because mobile usage varies in different regions of the world.”

This automatic polling of Facebook’s servers by mobile devices makes it difficult to judge the true value of the social network’s user base. Anecdotal evidence suggests many people with Facebook profiles are kept active on Facebook primarily by their smartphone apps, rather than because they are actively choosing to use the service. Still, Facebook would argue that these people are seeing the notifications on their mobile devices and are, therefore, at least partially engaged.
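The counting problem Facebook describes can be illustrated with a toy example (the field names here are invented for illustration, not Facebook's actual schema): if an 'active user' is anyone whose device contacted the servers that day, background polling is indistinguishable from deliberate usage.

```python
# Toy sketch of the metric issue above: background app polling inflates a
# naive daily-active-user count. Field names are purely illustrative.

events = [
    {"user": "alice", "initiated_by_user": True},   # deliberately opened the app
    {"user": "bob",   "initiated_by_user": False},  # app polled in the background
    {"user": "bob",   "initiated_by_user": False},
]

# Naive count: anyone whose device contacted the servers is "active"
naive_dau = {e["user"] for e in events}

# Stricter count: only user-initiated contacts qualify
engaged_dau = {e["user"] for e in events if e["initiated_by_user"]}

print(len(naive_dau), len(engaged_dau))  # 2 1
```

The gap between the two counts is exactly the ambiguity Facebook's filing acknowledges.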

 

  • Executive Summary
  • How Facebook is changing
  • A history of adaptation
  • The Facebook exception – no fear, more freedom
  • Financials and business model
  • Growth prospects for the core business
  • User growth
  • Monetisation – better targeting, higher prices
  • Mobile advertising spend lags behind usage
  • The Facebook Platform – Beyond the Walled Garden
  • Multimedia – taking on YouTube
  • Search – challenging Google’s core business
  • Enabling transactions – moving beyond advertising
  • Virtual reality – a long-term game
  • Takeaways
  • Threats and risks
  • Facebook fatigue
  • Google – Facebook enemy number one
  • Privacy concerns
  • Wearables and the Internet of Things
  • Local commerce – in need of a map
  • Facebook and communication services
  • Conclusions
  • Facebook is spread too thin
  • Partnering with Facebook – why and how
  • Competing with Facebook – why and how

 

  • Figure 1: From student matchmaking service to a leading digital commerce platform
  • Figure 2: Zuckerberg is pushing hard for the provision of basic Internet services
  • Figure 3: Facebook’s user base continues to grow rapidly
  • Figure 4: Facebook’s revenue growth has accelerated in the past two years
  • Figure 5: Facebook’s ARPU has risen sharply in the past two years
  • Figure 6: After wobbling in 2012, investors’ belief in Facebook has strengthened
  • Figure 7: Despite a rebound, Facebook’s valuation per user is still below its peak
  • Figure 8: Facebook could be serving 2.3 billion people by 2020
  • Figure 9: Share of digital advertising – Facebook is starting to close the gap on Google but remains a long way behind
  • Figure 10: The gap between click through rates for search and social remains substantial
  • Figure 11: Social networks’ revenue per click is rising but remains 40% of search
  • Figure 12: Facebook’s advertising has moved from the right column to centre stage
  • Figure 13: Facebook’s startling mobile advertising growth
  • Figure 14: Zynga’s share price reflects decline of Facebook.com as an app platform
  • Figure 15: Facebook Connect – an integral part of the Facebook Platform
  • Figure 16: Leading Internet players’ share of social log-ins over time
  • Figure 17: Facebook’s personalised search proposition
  • Figure 18: Facebook’s new buy button – embedded in a newsfeed post
  • Figure 19: The rise and rise of Android – not good for Facebook
  • Figure 21: Facebook and Google are both heavily associated with privacy issues
  • Figure 22: Facebook wants to conquer the Wheel of Digital Commerce
  • Figure 23: Facebook’s cash flow is far behind that of Google and Apple
  • Figure 24: Facebook’s capital expenditure is relatively modest compared with peers
  • Figure 25: Facebook’s capex/revenue ratio has been high but is falling

 

NFV: Great Promises, but How to Deliver?

Introduction

What’s the fuss about NFV?

Today, it seems that suddenly everything has become virtual: there are virtual machines, virtual LANs, virtual networks, virtual network interfaces, virtual switches, virtual routers and virtual functions. The two most recent and highly visible developments in network virtualisation are Software Defined Networking (SDN) and Network Functions Virtualisation (NFV). The two are often mentioned in the same breath and are related, but they are different.

Software Defined Networking has been around as a concept since 2008 and has seen initial deployments in data centres as a local area networking technology. According to early adopters such as Google, SDN has helped to achieve better utilisation of data centre operations and of data centre wide area networks: Urs Hoelzle of Google, discussing the company's deployment and findings at the Open Networking Summit in early 2012, claimed 60% to 70% better utilisation of Google's data centre WAN. Given the cost of deploying and maintaining service provider networks, this could represent significant cost savings if service providers can replicate these results.

NFV – Network Functions Virtualisation – is just over two years old, yet it is already being deployed in service provider networks and has had a major impact on the networking vendor landscape. Globally, the telecoms and datacomms equipment market is worth over $180bn and has been dominated by five vendors, which together hold around 50% of the market.

Innovation and competition in the networking market have been lacking, with few major innovations in the last 12 years: the industry has focused on capacity and speed rather than anything radically new, and start-ups that do come up with something interesting are quickly swallowed up by the established vendors. NFV has started to rock the steady ship by bringing to the networking market the same technologies that revolutionised IT computing, namely cloud computing, low-cost off-the-shelf hardware, open source and virtualisation.

Software Defined Networking (SDN)

Conventionally, networks have been built using devices that make autonomous decisions about how the network operates and how traffic flows. SDN offers new, more flexible and efficient ways to design, test, build and operate IP networks by separating the intelligence from the networking device and placing it in a single controller with a view of the entire network. Taking the 'intelligence' out of many individual components also means those components can be built and bought for less, reducing some network costs. Building on open standards should make it possible to select best-in-class vendors for different components in the network, introducing innovation and competitiveness.
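The split between centralised intelligence and simple forwarding devices can be illustrated with a toy sketch (all class and method names here are hypothetical, not a real SDN controller API): the controller holds the network-wide topology and computes paths, while switches do nothing but look up the forwarding rules installed in them.

```python
# Toy illustration of the SDN split: one controller with a view of the whole
# topology computes paths; switches just apply the rules pushed to them.
from collections import deque

class Switch:
    """A 'dumb' forwarding device: it only looks up installed rules."""
    def __init__(self, name):
        self.name = name
        self.flow_table = {}              # destination -> next hop

    def forward(self, dst):
        return self.flow_table.get(dst)   # no local intelligence, just a lookup

class Controller:
    """Holds the global topology and installs rules in every switch."""
    def __init__(self, links):
        self.adj = {}
        for a, b in links:
            self.adj.setdefault(a, []).append(b)
            self.adj.setdefault(b, []).append(a)
        self.switches = {n: Switch(n) for n in self.adj}

    def install_path(self, src, dst):
        # Breadth-first search over the controller's network-wide view
        prev, seen, q = {}, {src}, deque([src])
        while q:
            node = q.popleft()
            if node == dst:
                break
            for nbr in self.adj[node]:
                if nbr not in seen:
                    seen.add(nbr)
                    prev[nbr] = node
                    q.append(nbr)
        # Walk back from dst, then install a rule on each switch on the path
        path = [dst]
        while path[-1] != src:
            path.append(prev[path[-1]])
        path.reverse()
        for here, nxt in zip(path, path[1:]):
            self.switches[here].flow_table[dst] = nxt
        return path

ctrl = Controller([("s1", "s2"), ("s2", "s3"), ("s1", "s4"), ("s4", "s3")])
print(ctrl.install_path("s1", "s3"))      # ['s1', 's2', 's3']
print(ctrl.switches["s1"].forward("s3"))  # s2 -- rule came from the controller
```

The point of the sketch is that the path computation lives entirely in the controller; each `Switch` could be cheap commodity hardware.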

SDN started out as a data centre technology aimed at making life easier for operators and designers to build and operate large scale data centre operations. However, it has moved into the Wide Area Network and as we shall see, it is already being deployed by telcos and service providers.

Network Functions Virtualisation (NFV)

Like SDN, NFV splits the control functions from the data forwarding functions. However, while SDN does this for an entire network, NFV focusses specifically on network functions such as routing, firewalls, load balancing and CPE, and looks to leverage developments in Commercial Off The Shelf (COTS) hardware, such as generic server platforms using multi-core CPUs.

The performance of a device like a router is critical to the overall performance of a network. Historically the only way to get this performance was to develop custom Integrated Circuits (ICs) such as Application Specific Integrated Circuits (ASICs) and build these into a device along with some intelligence to handle things like route acquisition, human interfaces and management. While off the shelf processors were good enough to handle the control plane of a device (route acquisition, human interface etc.), they typically did not have the ability to process data packets fast enough to build a viable device.

But things have moved on rapidly. Vendors like Intel have put specific focus on improving the data plane performance of COTS-based devices, and performance has risen dramatically. Figure 1 shows that in just three years (2010-2013) a tenfold increase in packet processing (data plane) performance was achieved. CPU performance has generally tracked Moore's law, which originally stated that the number of components in an integrated circuit would double every two years; to the extent that component count translates into performance, the same can be said of CPU performance. For example, Intel will ship its latest processor family in the second half of 2015 with up to 72 individual CPU cores, compared with the four or six used in 2010-2013.
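As a rough sanity check on those figures: component counts doubling every two years implies growth of 2^(t/2) over t years, so Moore's-law scaling alone would predict only around a 2.8x gain over the three years cited. The tenfold data-plane improvement therefore owes most of its gain to software and architectural work, such as Intel's Data Plane Development Kit (DPDK), rather than transistor count alone.

```python
# Moore's law: component count doubles every two years, i.e. a factor of
# 2 ** (t / 2) after t years.
years = 3
moore_gain = 2 ** (years / 2)       # gain predicted by transistor scaling alone
observed_gain = 10                  # tenfold packet-processing gain cited above

print(round(moore_gain, 2))                   # 2.83
print(round(observed_gain / moore_gain, 2))   # 3.54 -- the gap closed by software
```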

Figure 1 – Intel Hardware performance

Source: ETSI & Telefonica

NFV was started by the telco industry to leverage the capability of COTS-based devices to reduce the cost of networking equipment and, more importantly, to introduce innovation and more competition to the networking market.

Since its inception in 2012 as an Industry Specification Group within ETSI (the European Telecommunications Standards Institute), NFV has proven to be a valuable initiative, not just from a cost perspective but, more importantly, in enabling telcos and service providers to develop, test and launch new services quickly and efficiently.

ETSI set up a number of work streams to tackle issues such as performance, management and orchestration, proof of concept and reference architecture, while externally, organisations such as OPNFV (Open Platform for NFV) have brought together a number of vendors and interested parties.

Why do we need NFV? What we already have works!

NFV came into being to solve a number of problems. Dedicated appliances from the big networking vendors typically do one thing and do it very well: switching or routing packets, acting as a network firewall and so on. But because each is dedicated to a particular task and has its own user interface, things can get complicated when there are hundreds of different devices to manage and staff to keep trained and updated. Devices also tend to be used for one specific application, and reuse is sometimes difficult, resulting in expensive obsolescence. Running network functions on a COTS-based platform makes most of these issues go away, resulting in:

  • Lower operating costs (some claim up to 80% less)
  • Faster time to market
  • Better integration between network functions
  • The ability to rapidly develop, test, deploy and iterate a new product
  • Lower risk associated with new product development
  • The ability to rapidly respond to market changes leading to greater agility
  • Less complex operations and better customer relations

And the real benefits are not just cost savings: they are about time to market, being able to respond quickly to market demands and, in essence, becoming more agile.

The real benefits

If the real benefits of NFV are not just cost savings but agility, how is that agility delivered? It comes from a number of aspects, for example the ability to orchestrate a number of VNFs and the underlying network to deliver a suite, or chain, of network functions for an individual user or application. This has been the focus of the ETSI Management and Orchestration (MANO) workstream.

MANO will be crucial to the long-term success of NFV. It provides automation and provisioning, and will interface with existing provisioning and billing platforms such as today's OSS/BSS. MANO will allow the use and reuse of VNFs, networking objects and chains of services, and, via external APIs, will allow applications to request and control the creation of specific services.

Figure 2 – Orchestration of Virtual Network Functions

Source: STL Partners

Figure 2 shows a hypothetical service chain created for a residential user accessing a network server. The service chain is made up of a number of VNFs that are used as required and then discarded when no longer needed. For example, the Broadband Remote Access Server becomes a VNF running on a common platform rather than a dedicated hardware appliance. As the user's set-top box (STB) connects to the network, the authentication component checks that the user is valid and has a current account, but drops out of the chain once this function has been performed. The firewall is used for the duration of the connection, and other components, such as deep packet inspection and load balancing, are used as required. Equally, as the user accesses other services such as media, Internet and voice, different VNFs, such as a session border controller (SBC) and network storage, can be brought into play.
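The chaining behaviour described above can be sketched in a few lines of Python (the VNF names and in-process chaining are purely illustrative; in a real deployment a MANO platform would instantiate and remove VNFs): transient functions such as authentication drop out of the chain once their job is done, while others persist for the session.

```python
# Toy model of an NFV service chain: each VNF processes the session, and
# one-shot VNFs are discarded after use. Names are illustrative, not a real API.

class VNF:
    persistent = True                 # stays in the chain for the whole session
    def process(self, session):
        session.append(type(self).__name__)

class Authenticator(VNF):
    persistent = False                # drops out once the user is validated
    def process(self, session):
        super().process(session)
        # ... check that the subscriber has a current account here ...

class Firewall(VNF):
    pass                              # used for the duration of the connection

class DeepPacketInspection(VNF):
    pass                              # brought in as required

def run_chain(chain, packets=2):
    """Pass several packets through the chain; transient VNFs are dropped
    after the first packet, mimicking the authentication step above."""
    session = []
    for _ in range(packets):
        for vnf in chain:
            vnf.process(session)
        chain = [v for v in chain if v.persistent]   # discard one-shot functions
    return session

trace = run_chain([Authenticator(), Firewall(), DeepPacketInspection()])
print(trace)
# ['Authenticator', 'Firewall', 'DeepPacketInspection', 'Firewall', 'DeepPacketInspection']
```

The trace shows the authenticator appearing only once, while the persistent functions handle every packet, which is the essence of the chain in Figure 2.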

Sounds great, but is it real, is anyone doing anything useful?

The short answer is yes: there are live deployments of NFV in many service provider networks, and NFV is having a real impact on costs and time to market, as detailed in this report. For example:

  • Vodafone Spain’s Lowi MVNO
  • Telefonica’s vCPE trial
  • AT&T Domain 2.0 (see pages 22 – 23 for more on these examples)

 

  • Executive Summary
  • Introduction
  • WTF – what’s the fuss about NFV?
  • Software Defined Networking (SDN)
  • Network Functions Virtualisation (NFV)
  • Why do we need NFV? What we already have works!
  • The real benefits
  • Sounds great, but is it real, is anyone doing anything useful?
  • The Industry Landscape of NFV
  • Where did NFV come from?
  • Any drawbacks?
  • Open Platform for NFV – OPNFV
  • Proprietary NFV platforms
  • NFV market size
  • SDN and NFV – what’s the difference?
  • Management and Orchestration (MANO)
  • What are the leading players doing?
  • NFV – Telco examples
  • NFV Vendors Overview
  • Analysis: the key challenges
  • Does it really work well enough?
  • Open Platforms vs. Walled Gardens
  • How to transition?
  • It’s not if, but when
  • Conclusions and recommendations
  • Appendices – NFV Reference architecture

 

  • Figure 1 – Intel Hardware performance
  • Figure 2 – Orchestration of Virtual Network Functions
  • Figure 3 – ETSI’s vision for Network Functions Virtualisation
  • Figure 4 – Typical Network device showing control and data planes
  • Figure 5 – Metaswitch SBC performance running on 8 x CPU Cores
  • Figure 6 – OPNFV Membership
  • Figure 7 – Intel OPNFV reference stack and platform
  • Figure 8 – Telecom equipment vendor market shares
  • Figure 9 – Autonomy Routing
  • Figure 10 – SDN Control of network topology
  • Figure 11 – ETSI reference architecture shown overlaid with functional layers
  • Figure 12 – Virtual switch conceptualised

 

Connected Car: Key Trends, Players and Battlegrounds

Introduction: Putting the Car in Context

A growing mythology around M2M and the Internet of Things

The ‘Internet of Things’, which is sometimes used interchangeably with ‘machine-to-machine’ communication (M2M), is not a new idea: as a term, it was coined by Kevin Ashton as early as 1999. Although initially focused on industrial applications, such as the use of RFID for tagging items in the supply chain, usage of the term has now evolved to more broadly describe the embedding of sensors, connectivity and (to varying degrees) intelligence into traditionally ‘dumb’ environments. Figure 1 below outlines some of the service areas potentially disrupted, enabled or enhanced by the Internet of Things (IoT):

Figure 1: Selected Internet of Things service areas

Source: STL Partners

To put the IoT in context, one can think of the Internet as having passed through three key generations to date. The first generation, dating back to the 1970s, involved ARPANET and the interconnection of various military, government and educational institutions around the United States. The second, beginning in the 1990s, can be thought of as the 'AOL phase', with email and web browsing becoming mainstream. Today's generation is dominated by 'mobile' and 'social', with the two inextricably linked. The fourth generation will be signified by the arrival of the Internet of Things, in which the majority of internet traffic is generated by 'things' rather than humans.

The enormous growth of networks, cheaper connectivity, the proliferation of smart devices, more efficient wireless protocols (e.g. ZigBee) and various government incentives and regulations have led many to confidently predict that the fourth generation of the Internet – the Internet of Things – will soon be upon us. Visions include the "Internet of Everything" (Cisco) and a "connected future" with 50 billion connected devices by 2020 (Ericsson). Similarly rapid growth is forecast by the MIT Technology Review, as detailed below:

Figure 2: Representative connected devices forecast, 2010-20

Source: MIT Technology Review

This optimism is reflected in broader market excitement, which has been intensified by such headline-grabbing announcements as Google’s $3.2bn acquisition of Nest Labs (discussed in depth in the Connected Home EB) and Apple’s recently announced Watch. Data extracted from Google Trends (Figure 3) shows that the popularity of ‘Internet of Things’ as a search term has increased fivefold since 2012:

Figure 3: The popularity of ‘Internet of Things’ as a search term on Google since 2004

Source: Google Trends

However, the IoT to date has predominantly been a case study in hype vs. reality. Technologists have argued for more than a decade about when the army of connected devices will arrive, as well as what we should be calling this phenomenon, and with this a mythology has grown around the Internet of Things: widespread disruption was promised, but it has not yet materialised. To many consumers the IoT can sound all too far-fetched: do I really need a refrigerator with a web browser?

Yet for every 'killer app' that wasn't, we are now seeing inroads made elsewhere. Smart meters are being deployed in large numbers around the world, wearable technology is rapidly increasing in popularity, and many are hailing the connected car as the 'next big thing'. Looking at the connected car, for example, 2013 saw a dramatic increase in the amount of VC funding it received:

Figure 4: Connected car VC activity, 2010-13

Source: CB Insights Venture Capital Database

The Internet of Things is potentially an important phenomenon for all, but it is of particular relevance to mobile network operators (MNOs) and network equipment providers. Beyond providing cellular connectivity to many of these devices, the theory is that MNOs can expand across the value chain and generate material and sustainable new revenues as their core business continues to decline (for more, see the ‘M2M 2.0: New Approaches Needed’ Executive Briefing).

Nevertheless, the temptation is always to focus on the grandiose but less well-defined opportunities of the future (e.g. smart grids, smart cities) rather than the less expansive but more easily monetised ones of today. It is easy to forget that MNOs have been active to varying degrees in this space for some time: for example, O2 UK had a surprisingly large business serving fleet operators with the 9.6Kbps Mobitex data network for much of the 2000s. To ground this context, we will address three initial questions:

  1. Is there a difference between M2M and the Internet of Things?
  2. Which geographies are currently seeing the most traction?
  3. Which verticals are currently seeing the most traction?

These are now addressed in turn…

 

  • Executive Summary
  • Introduction: Putting the Car in Context
  • A growing mythology around M2M and the Internet of Things
  • The Internet of Things: a vision of what M2M can become
  • M2M today: driven by specific geographies and verticals
  • Background: History and Growth Drivers
  • History: from luxury models to mass market deployment
  • Growth drivers: macroeconomics, regulation, technology and the ‘connected consumer’
  • Ecosystem: Services and Value Chain
  • Service areas: data flows vs. consumer value proposition
  • Value chain: increasingly complex with two key battlegrounds
  • Markets: Key Geographies Today
  • Conclusions

 

  • Figure 1: Selected Internet of Things service areas
  • Figure 2: Representative connected devices forecast, 2010-20
  • Figure 3: The popularity of ‘Internet of Things’ as a search term on Google since 2004
  • Figure 4: Connected car VC activity, 2010-13
  • Figure 5: Candidate differences between M2M and the Internet of Things
  • Figure 6: Selected leading MNOs by M2M connections globally
  • Figure 7: M2M market maturity vs. growth by geographic region
  • Figure 8: Global M2M connections by vertical, 2013-20
  • Figure 9: Global passenger car profit by geography, 2007-12
  • Figure 10: A connected car services framework
  • Figure 11: Ericsson’s vision of the connected car’s integration with the IoT
  • Figure 12: The emerging connected car value chain
  • Figure 13: Different sources of in-car connectivity
  • Figure 14: New passenger car sales vs. consumer electronics spending by market
  • Figure 15: Index of digital content spending (aggregate and per capita), 2013
  • Figure 16: OEM embedded modem shipments by region, 2014-20
  • Figure 17: Telco 2.0™ ‘two-sided’ telecoms business model

Connected Home: Telcos vs Google (Nest, Apple, Samsung, +…)

Introduction 

On January 13th 2014, Google announced its acquisition of Nest Labs for $3.2bn in cash consideration. Nest Labs, or ‘Nest’ for short, is a home automation company founded in 2010 and based in California which manufactures ‘smart’ thermostats and smoke/carbon monoxide detectors. Prior to this announcement, Google already had an approximately 12% equity stake in Nest following its Series B funding round in 2011.

Google is known as a prolific investor and acquirer of companies: during 2012 and 2013 it spent $17bn on acquisitions alone, more than Apple, Microsoft, Facebook and Yahoo combined (at $13bn). Google has even been known to average one acquisition per week for extended periods. Nest, however, was not just any acquisition. For one, whilst the details of the acquisition were being ironed out, Nest was separately in the process of raising a new round of investment which implicitly valued it at c. $2bn. Google, therefore, appears to have paid a premium of over 50%.

This analysis can be extended by examining the transaction under three different, but complementary, lights.

Google + Nest: why it’s an interesting and important deal

  • Firstly, looking at Nest’s market capitalisation relative to its established competitors suggests that its long-run growth prospects are seen to be very strong

At the time of the acquisition, estimates placed Nest as selling 100k units of its flagship product (the ‘Nest Thermostat’) per month. With each thermostat retailing at c. $250, this put its revenue at approximately $300m per annum. Looking at the ratio of Nest’s market capitalisation to revenue compared to two of its established competitors (Lennox and Honeywell) tells an interesting story:

Figure 1: Nest vs. competitors’ market capitalisation to revenue

 

Source: Company accounts, Morgan Stanley

Such a disparity suggests that Nest’s long-run growth prospects, in terms of both revenue and free cash flow, are believed to be substantially higher than the industry average. 
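The revenue estimate above is simple arithmetic, and the same figures yield a rough price-to-revenue multiple for the deal itself. The sketch below uses the estimated unit volume and retail price quoted in the text, not audited figures:

```python
units_per_month = 100_000   # estimated Nest Thermostat sales at the time of the deal
price_usd = 250             # approximate retail price per thermostat
deal_value_usd = 3.2e9      # Google's cash consideration for Nest

# Annualise the estimated unit sales to get a revenue run-rate.
annual_revenue = units_per_month * price_usd * 12
print(f"Estimated annual revenue: ${annual_revenue / 1e6:.0f}m")                   # $300m
print(f"Implied price/revenue multiple: {deal_value_usd / annual_revenue:.1f}x")   # ~10.7x
```

A multiple of roughly 10.7x revenue is far above what mature HVAC incumbents command, which is precisely the disparity Figure 1 illustrates.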
  • Secondly, looking at Google’s own market capitalisation suggests that the capital markets see considerable value in (and synergies from) its acquisition of Nest

Prior to the deal’s announcement, Google’s share price was oscillating around the $560 mark. Following the acquisition, it began averaging closer to $580. On the day of the announcement itself, Google’s share price increased from $561 to $574 which, crucially, reflected a $9bn increase in market capitalisation. In other words, the value the capital markets added to Google was nearly three times (c. 300% of) the deal’s value. This is shown in Figure 2 below:

Figure 2: Google’s share price pre- and post-Nest acquisition

 

Source: Google Finance

This implies that the capital markets either see Google as being well positioned to add unique value to Nest, Nest as being able to strongly complement Google’s existing activities, or both.
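The ‘nearly 300%’ claim is back-of-envelope arithmetic on the reported $9bn market-capitalisation increase:

```python
deal_value = 3.2e9          # price Google paid for Nest
market_cap_increase = 9e9   # reported rise in Google's market cap on announcement day

# Express the market-cap gain as a share of the deal's value.
ratio = market_cap_increase / deal_value
print(f"Market-cap gain as a share of deal value: {ratio:.0%}")  # 281%, i.e. nearly 300%
```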

  • Thirdly, viewing the Nest acquisition in the context of Google’s historic and recent M&A activity shows both its own specific financial significance and the changing face of Google’s acquisitions more generally

At $3.2bn, the acquisition of Nest represents Google’s second largest acquisition of all time. The largest was its purchase of Motorola Mobility in 2011 for $12.5bn, but Google has since reached a deal to sell the majority of its assets (excluding its patent portfolio) to Lenovo for $2.9bn. In other words, Nest is soon to become Google’s largest active, inorganic investment. Google’s ten largest acquisitions, as well as some smaller but important ones, are shown in Figure 3 below:

Figure 3: Selected acquisitions by Google, 2003-14

Source: Various

Beyond its size, the Nest acquisition also continues Google’s recent trend of acquiring companies seemingly less directly related to its core business. For example, it has been investing in artificial intelligence (DeepMind Technologies), robotics (Boston Dynamics, Industrial Perception, Redwood Robotics) and satellite imagery (Skybox Imaging).

Three questions raised by Google’s acquisition of Nest

George Geis, a professor at UCLA, claims that Google develops a series of metrics at an early stage which it later uses to judge whether or not the acquisition has been successful. He further claims that, according to these metrics, Google on average rates two-thirds of its acquisitions as successful. This positive track record, combined with the sheer size of the Nest deal, suggests that the obvious question here is also an important one:

  • What is Nest’s business model? Why did Google spend $3.2bn on Nest?

Nest’s products, the Nest Thermostat and the Nest Protect (smoke/carbon monoxide detector), sit within the relatively young space referred to as the ‘connected home’, which is defined and discussed in more detail here. One natural question following the deal is whether Google’s high-profile involvement in, and backing of, a leading company in the space will accelerate the connected home’s adoption. This suggests the following, more general, question:

  • What does the Nest acquisition mean for the broader connected home market?

Finally, there is a question to be asked around the implications of this deal for Telcos and their partners. Many Telcos are now active in this space, but they are not alone: internet players (e.g. Google and Apple), big technology companies (e.g. Samsung), utilities (e.g. British Gas) and security companies (e.g. ADT) are all increasing their involvement too. With different strategies being adopted by different players, the following question follows naturally:

  • What does the Nest acquisition mean for telcos?

 

  • Executive Summary
  • Introduction
  • Google + Nest: why it’s an interesting and important deal
  • Three questions raised by Google’s acquisition of Nest
  • Understanding Nest and Connected Homes
  • Nest: reinventing everyday objects to make them ‘smart’
  • Nest’s future: more products, more markets
  • A general framework for connected home services
  • Nest’s business model, and how Google plans to get a return on its $3.2bn investment 
  • Domain #1: Revenue from selling Nest devices is of only limited importance to Google
  • Domain #2: Energy demand response is a potentially lucrative opportunity in the connected home
  • Domain #3: Data for advertising is important, but primarily within Google’s broader IoT ambitions
  • Domain #4: Google also sees Nest as partial insurance against IoT-driven disruption
  • Domain #5: Google is pushing into the IoT to enhance its advertising business and explore new monetisation models
  • Implications for Telcos and the Connected Home
  • The connected home is happening now, but customer experience must not be overlooked
  • Telcos can employ a variety of monetisation strategies in the connected home
  • Conclusions

 

  • Figure 1: Nest vs. competitors’ market capitalisation relative to revenue
  • Figure 2: Google’s share price, pre- and post-Nest acquisition
  • Figure 3: Selected acquisitions by Google, 2003-14
  • Figure 4: The Nest Thermostat and Protect
  • Figure 5: Consumer Electronics vs. Electricity Spending by Market
  • Figure 6: A connected home services framework
  • Figure 7: Nest and Google Summary Motivation Matrix
  • Figure 8: Nest hardware revenue and free cash flow forecasts, 2014-23
  • Figure 9: PJM West Wholesale Electricity Prices, 2013
  • Figure 10: Cooling profile during a Rush Hour Rewards episode
  • Figure 11: Nest is attempting to position itself at the centre of the connected home
  • Figure 12: US smartphone market share by operating system (OS), 2005-13
  • Figure 13: Google revenue breakdown, 2013
  • Figure 14: Google – Generic IoT Strategy Map
  • Figure 15: Connected device forecasts, 2010-20
  • Figure 16: Connected home timeline, 1999-Present
  • Figure 17: OnFuture EMEA 2014: The recent surge in interest in the connected home is due to?
  • Figure 18: A spectrum of connected home strategies between B2C and B2B2C (examples)
  • Figure 19: Building, buying or partnering in the connected home (examples)
  • Figure 20: Telco 2.0™ ‘two-sided’ telecoms business model

Disruptive Strategy: ‘Uncarrier’ T-Mobile vs. AT&T, VZW, and Free.fr

Introduction

Ever since the original Softbank bid for Sprint-Nextel, the industry has been awaiting a wave of price disruption in the United States, the world’s biggest and richest mobile market, and one which is still very much dominated by the dynamic duo, Verizon Wireless and AT&T Mobility.

Figure 1: The US, a rich and high-spending market


Source: Onavo, Ofcom, CMT, BNETZA, TIA, KCC, Telco accounts, STL Partners

However, the Sprint-Softbank deal saga delayed any aggressive move by Sprint for some time, and in the meantime T-Mobile USA stole a march, implemented its own very similar ‘uncarrier’ strategy, and achieved a dramatic turnaround in its customer numbers.

As Figure 2 shows, the duopoly marches on, with Verizon in the lead, although the gap with AT&T has closed a little lately. Sprint, meanwhile, looks moribund, while T-Mobile has closed half the gap with the duopolists in an astonishingly short period of time.

Figure 2: The duopolists hold a lead, but a new challenger arises…

Source: STL Partners

Now, a Sprint-T-Mobile merger is seriously on the cards. Again, Softbank CEO Masayoshi Son is on record as promising to launch a price war. But to what extent is a Free Mobile-like disruption event already happening? And what strategies are carriers adopting?

For more STL analysis of the US cellular market, read the original Sprint-Softbank EB, the Telco 2.0 Transformation Index sections on Verizon and AT&T, and our Self-Disruption: How Sprint Blew It EB. Additional coverage of the fixed domain can be found in the Triple-Play in the USA: Infrastructure Pays Off EB and the Telco 2.0 Index sections mentioned above.

The US Market is Changing

In our previous analysis Self-Disruption: How Sprint Blew It, we used the following chart, Figure 3, under the title “…And ARPU is Holding Up”. Updating it with the latest data, it becomes clear that ARPU – and in this case pricing – is no longer holding up so well. Rather than across-the-board deflation, though, we are instead seeing increasingly diverse strategies.

Figure 3: US carriers are pursuing diverse pricing strategies, faced with change


Source: STL Partners

AT&T’s ARPU is being very gradually eroded (it’s come down by $5 since Q1 2011), while Sprint’s plunged sharply with the shutdown of Nextel (see report referenced above for more detail). Since then, AT&T and Sprint have been close to parity, a situation AT&T management surely can’t be satisfied with. T-Mobile USA has slashed prices so much that the “uncarrier” has given up $10 of monthly ARPU since the beginning of 2012. And Verizon Wireless has added almost as much monthly ARPU in the same timeframe.

Each carrier has adopted a different approach in this period:

  • T-Mobile has gone hell-for-leather after net adds at any price.
  • AT&T has tried to compete with T-Mobile’s price slashing by offering more hardware and bigger bundles and matching T-Mobile’s eye-catching initiatives, while trying to hold the line on headline pricing, perhaps hoping to limit the damage and wait for Deutsche Telekom to tire of the spending. For example, AT&T recently increased its device activation fee by $4, citing the increased number of smartphone activations under its early-upgrade plan. This does not appear in service-ARPU or in headline pricing, but it most certainly does contribute to revenue, and even more so, to margin.
  • Verizon Wireless has declined to get involved in the price war, and has concentrated on maintaining its status as a premium brand, selling on coverage, speed, and capacity. As the above chart shows, this effort to achieve network differentiation has met with a considerable degree of success.
  • Sprint, meanwhile, is responding tactically with initiatives like its “Framily” tariff, while sorting out the network, but is mostly just suffering. The sharp drop in mid-2012 is a signature of high-value SMB customers fleeing the shutdown of Nextel, as discussed in Self-Disruption: How Sprint Blew It.

Figure 4: Something went wrong at Sprint in mid-2012


Source: STL Partners, Sprint filings

 

  • Executive Summary
  • Contents
  • Introduction
  • The US Market is Changing
  • Where are the Customers Coming From?
  • Free Mobile: A Warning from History?
  • T-Mobile, the Expensive Disruptor
  • Handset subsidy: it’s not going anywhere
  • Summarising change in the US and French cellular markets
  • Conclusions

 

  • Figure 1: The US, a rich and high-spending market
  • Figure 2: The duopolists hold a lead, but a new challenger arises…
  • Figure 3: US carriers are pursuing diverse pricing strategies, faced with change
  • Figure 4: Something went wrong at Sprint in mid-2012
  • Figure 5: US subscriber net-adds by source
  • Figure 6: The impact of disruption – prices fall across the board
  • Figure 7: Free’s spectacular growth in subscribers – but who was losing out?
  • Figure 8: The main force of Free Mobile’s disruption didn’t fall on the carriers
  • Figure 9: Disruption in France primarily manifested itself in subscriber growth, falling ARPU, and the death of the MVNOs
  • Figure 10: T-Mobile has so far extended $3bn of credit to its smartphone customers
  • Figure 11: T-Mobile’s losses on device sales are large and increasing, driven by smartphone volumes
  • Figure 12: Size and profitability still go together in US mobile – although this conceals a lot of change below the surface
  • Figure 13: Fully-developed disruption, in France
  • Figure 14: Quality beats quantity. Sprint repeatedly outspent VZW on its network

Facing Up to the Software-Defined Operator

Introduction

At this year’s Mobile World Congress, the GSMA’s eccentric decision to split the event between the Fira Gran Via (the “new Fira”, as everyone refers to it) and the Fira Montjuic (the “old Fira”, as everyone refers to it) was a better one than it looked. If you took the special MWC shuttle bus from the main event over to the developer track at the old Fira, you crossed a culture gap that is widening, not closing. The very fact that the developers were accommodated separately hints at this, but it was the content of the sessions that brought it home. At the main site, it was impressive and forward-thinking to say you had an app, and a big deal to launch a new Web site; at the developer track, presenters would start up a Web service during their own talk to demonstrate their point.

There has always been a cultural rift between the “netheads” and the “bellheads”, of which this is just the latest manifestation. But the content of the main event tended to suggest that this is an increasingly serious problem. Everywhere, we saw evidence that core telecoms infrastructure is becoming software. Major operators are moving towards this now. For example, AT&T used the event to announce that it had signed up Software Defined Networks (SDN) specialists Tail-F and Metaswitch Networks for its next round of upgrades, while Deutsche Telekom’s Terastream architecture is built on it.

This is not just about overused three-letter acronyms like SDN and NFV (Network Function Virtualisation – see our whitepaper on the subject here), nor about the duelling standards groups like OpenFlow, OpenDaylight etc., with their tendency to use the word “open” all the more the less open they actually are. It is a deeper transformation that will affect the device, the core network, the radio access network (RAN), the Operations Support Systems (OSS), the data centres, and the ownership structure of the industry. It will change the products we sell, the processes by which we deliver them, and the skills we require.

In the future, operators will be divided into providers of the platform for software-defined network services and consumers of the platform. Platform consumers, which will include MVNOs, operators, enterprises, SMBs, and perhaps even individual power users, will expect a degree of fine-grained control over network resources that amounts to specifying your own mobile network. Rather than trying to make a unitary public network provide all the potential options as network services, we should look at how we can provide the impression of one network per customer, just as virtualisation gives the impression of one computer per user.

To summarise, it is no longer enough to boast that your network can give the customer an API. Future operators should be able to provision a virtual network through the API. AT&T, for example, aims to provide a “user-defined network cloud”.
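To make the idea concrete, here is a purely hypothetical sketch of what a ‘provision a virtual network’ API request might contain. Every field name and value below is invented for illustration and does not describe AT&T’s or any other operator’s real API:

```python
import json

# Hypothetical request body an MVNO or enterprise might submit to an
# operator's (invented) virtual-network provisioning endpoint.
request = {
    "name": "acme-fleet-net",        # customer-chosen network identifier
    "radio_access": ["4G"],          # which RANs the virtual network spans
    "max_downlink_mbps": 10,         # per-subscriber throughput cap
    "latency_class": "best-effort",  # hypothetical QoS tier
    "sim_count": 5000,               # number of subscriptions to activate
}

def validate(req):
    """Minimal sanity check a provisioning front-end might apply."""
    required = {"name", "radio_access", "sim_count"}
    missing = required - req.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    return json.dumps(req)  # serialised payload ready to send

payload = validate(request)
```

The point is not the specific fields but the granularity: the customer specifies a network, and the operator’s software-defined infrastructure instantiates it.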

Elements of the Software-Defined Future

We see five major trends leading towards the overall picture of the ‘software defined operator’ – an operator whose boundaries and structure can be set and controlled through software.

1: Core network functions get deployed further and further forwards

Because core network functions like the Mobile Switching Centre (MSC) and Home Subscriber Server (HSS) can now be implemented in software on commodity hardware, they no longer have to be tied to major vendors’ equipment deployed in centralised facilities. This frees them to migrate towards the edge of the network, providing for more efficient use of transmission links, lower latency, and putting more features under the control of the customer.

Network architecture diagrams often show a boundary between “the Internet” and an “other network”. This is the ‘Gi interface’ in 3G networks (‘SGi’ in 4G). Today, the “other network” is usually itself an IP-based network, making this distinction simply that between a carrier’s private network and the Internet core. Moving network functions towards the edge also moves this boundary forwards, making it possible for Internet services like content-delivery networking or applications acceleration to advance closer to the user.

Increasingly, the network edge is a node supporting multiple software applications, some of which will be operated by the carrier, some by third-party services like – say – Akamai, and some by the carrier’s customers.

2: Access network functions get deployed further and further back

A parallel development to the emergence of integrated small cells/servers is the virtualisation and centralisation of functions traditionally found at the edge of the network. One example is so-called Cloud RAN or C-RAN technology in the mobile context, where the radio basebands are implemented as software and deployed as virtual machines running on a server somewhere convenient. This requires high capacity, low latency connectivity from this site to the antennas – typically fibre – and this is now being termed “fronthaul” by analogy to backhaul.

Another example is the virtualised Optical Line Terminal (OLT) some vendors offer in the context of fixed Fibre to the home (FTTH) deployments. In these, the network element that terminates the line from the user’s premises has been converted into software and centralised as a group of virtual machines. Still another would be the increasingly common “virtual Set Top Box (STB)” in cable networks, where the TV functions (electronic programming guide, stop/rewind/restart, time-shifting) associated with the STB are actually provided remotely by the network.

In this case, the degree of virtualisation, centralisation, and multiplexing can be very high, as latency and synchronisation are less of a problem. The functions could actually move all the way out of the operator network, off to a public cloud like Amazon EC2 – this is in fact how Netflix does it.

3: Some business support and applications functions are moving right out of the network entirely

If Netflix can deliver the world’s premier TV/video STB experience out of Amazon EC2, there is surely a strong case to look again at which applications should be delivered on-premises, in the private cloud, or moved into a public cloud. As explained later in this note, the distinctions between on-premises, forward-deployed, private cloud, and public cloud are themselves being eroded. At the strategic level, we anticipate pressure for more outsourcing and more hosted services.

4: Routers and switches are software, too

In the core of the network, the routers that link all this stuff together are also turning into software. This is the domain of true SDN – essentially, the effort to replace relatively smart routers with much cheaper switches whose forwarding rules are generated in software by a much smarter controller node. This is well reported elsewhere, but it is necessary to take note of it. In the mobile context, we also see it in the increasing prevalence of virtualised solutions for the LTE Evolved Packet Core (EPC), Mobility Management Entity (MME), etc.
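A minimal sketch of this division of labour, with invented class names purely for illustration: the ‘dumb’ switch only matches packets against pre-installed rules, while the ‘smart’ controller computes the policy and pushes it down:

```python
import ipaddress

class Switch:
    """A 'dumb' switch: it only matches packets against installed rules."""
    def __init__(self):
        self.flow_table = []  # (network, out_port) pairs, longest prefix first

    def install(self, prefix, out_port):
        self.flow_table.append((ipaddress.ip_network(prefix), out_port))
        # Longest-prefix match: more specific rules take priority.
        self.flow_table.sort(key=lambda rule: rule[0].prefixlen, reverse=True)

    def forward(self, dst_ip):
        addr = ipaddress.ip_address(dst_ip)
        for net, port in self.flow_table:
            if addr in net:
                return port
        return None  # no rule: a real switch would punt the packet to the controller

class Controller:
    """The smarter node: it decides the rules and pushes them to every switch."""
    def __init__(self, switches):
        self.switches = switches

    def push_policy(self, policy):
        for prefix, port in policy.items():
            for sw in self.switches:
                sw.install(prefix, port)

sw = Switch()
Controller([sw]).push_policy({"10.0.0.0/8": 1, "10.1.0.0/16": 2, "0.0.0.0/0": 3})
print(sw.forward("10.1.2.3"))  # longest match wins: port 2
```

Real SDN protocols such as OpenFlow express this rule-installation step on the wire; the economics follow from the switch needing no routing intelligence of its own.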

5: Wherever it is, software increasingly looks like the cloud

Virtualisation – decoupling software from hardware so that one physical machine can present itself as many independent ‘virtual machines’ – is a key trend. Even when, as with the network devices, software is running on a dedicated machine, it will increasingly be found running in its own virtual machine. This helps with management and security, and most of all, with resource sharing and scalability. For example, the virtual baseband might have VMs for each of 2G, 3G, and 4G. If the capacity requirements are small, many different sites might share a physical machine. If large, one site might run on several machines.

This has important implications, because it also makes sharing among users easier. Those users could be different functions, or different cell sites, but they could also be customers or other operators. It is no accident that NEC’s first virtualised product, announced at MWC, is a complete MVNO solution. It has never been easier to provide more of your carrier needs yourself, and it will only get easier.

The following Huawei slide (from their Carrier Business Group CTO, Sanqi Li) gives a good visual overview of a software-defined network.

Figure 1: An architecture overview for a software-defined operator

Source: Huawei

 

  • The Challenges of the Software-Defined Operator
  • Three Vendors and the Software-Defined Operator
  • Ericsson
  • Huawei
  • Cisco Systems
  • The Changing Role of the Vendors
  • Who Benefits?
  • Who Loses?
  • Conclusions
  • Platform provider or platform consumer
  • Define your network sharing strategy
  • Challenge the coding cultural cringe

 

  • Figure 1: An architecture overview for a software-defined operator
  • Figure 2: A catalogue for everything
  • Figure 3: Ericsson shares (part of) the vision
  • Figure 4: Huawei: “DevOps for carriers”
  • Figure 5: Cisco aims to dominate the software-defined “Internet of Everything”

Are Telefonica, AT&T, Ooredoo, SingTel, and Verizon aiming for the right goals?

The importance of setting Telco 2.0 goals…

Communications Service Providers (CSPs) in all markets are now embracing new Telco 2.0 business models in earnest.  However, this remains a period of exploration and experimentation and a clear Telco 2.0 goal has not yet emerged for most players. At the most basic level, senior managers and strategists face a fundamental question:

What is an appropriate Telco 2.0 goal given my organisation’s current performance and market conditions?

This note introduces a framework based on analysis undertaken for the Telco 2.0 Transformation Index and offers some initial thoughts on how to start addressing this question [1] by exploring 5 CSPs in the context of the markets in which they operate and their current business model transformation performances.

Establishing the right Telco 2.0 goal for the organisation is an important first-step for senior management in the telecoms industry because:

  • Setting a Telco 2.0 goal that is unrealistically bold will quickly result in a sense of failure and a loss of morale among employees;
  • Conversely, a lack of ambition will see the organisation squeezed slowly and remorselessly into a smaller and smaller addressable market as a utility pipe provider.

Striking the right balance is critical to avoid these two unattractive outcomes.

…and the shortcomings of traditional frameworks

Senior management teams and strategists within the telecoms industry already have tools and approaches for managing investments and setting corporate goals.  So why is a fresh approach needed?  Put simply, the telecoms market is in the process of being irreversibly disrupted.  As we show in the first part of this note, traditional thinking and frameworks offer a view of the ‘as-is’ world but one which is changing fast because CSPs’ core communications services are being substituted by alternate offerings from new competitors.  The game is changing before our eyes and managers must think (and act) differently.  The framework outlined in summary here and covered in detail in the Telco 2.0 Transformation Index is designed to facilitate this fresh thinking.

Traditional strategic frameworks are useful to assess the ‘Telco 1.0’ situation

Understanding CSP groups’ ‘Telco 1.0’ strategic positioning: Ooredoo in a position of strength

Although they lack insiders’ detailed information and deep knowledge of the telecoms industry, investors have the benefit of an impartial view of different CSPs.  Unlike CSP management teams, they generally carry little personal ‘baggage’ and instead take a cold, arm’s-length approach to evaluating companies.  Their investment decisions obviously take into account future profit prospects and the current share price for each company to determine whether a stock is good value or not.  Leaving aside share prices, how might an investor sensibly appraise the ‘traditional’ Telco 1.0 telecoms market?

One classic framework plots competitive position against market attractiveness.  STL Partners has conducted this for 5 CSP groups in different markets as part of the analysis undertaken for the Telco 2.0 Transformation Index (see Figure 1).  According to the data collected, Ooredoo appears to be in the strongest position and, therefore, the most attractive potential investment vehicle.  Telefonica and SingTel appear to be moderately attractive and, surprisingly to many, Verizon and AT&T least attractive.

Figure 1: Strategic positioning framework for 5 CSP groups

Source: STL Partners’ Telco 2.0 Transformation Index, February 2014

Determining a CSP’s Telco 1.0 competitive position: Ooredoo enjoying life in the least competitive markets

As with all analytical tools, the value of the framework in Figure 1 is dependent upon the nature of the data collected and the methodology for converting it into comparable scores.  The full data set, methodology, and scoring tables for this and other analyses are available in the Telco 2.0 Transformation Index Benchmarking Report.  In this report, we will explore a small part of the data which drives part of the vertical axis scores in Figure 1 – Competitive Position (we exclude Customer Engagement in this report for simplicity).  In the Index methodology, there are 7 factors that determine ‘Competitive Position’ which are split into 2 categories:

  • Market competition, a consolidated score driven by:
      • Herfindahl score.  A standard economic indicator of competitiveness, reflecting the state of development of the underlying market structure, with more consolidated markets being less competitive and so scoring more highly.
      • Mobile revenue growth.  The compound annual growth of mobile revenues over a 2-year period.  Growing markets generally display less competition as individual players need to fight less hard to achieve growth.
      • Facebook penetration.  A proxy for the strength of internet and other ‘OTT’ players in the market.
  • CSP market positioning, driven by:
      • CSP total subscribers.  The overall size of the CSP across all its markets.
      • CSP monthly ARPU as % of GDP per capita.  The ability of the CSP to provide value to consumers relative to their income – essentially the CSP’s share of consumer wallet.
      • CSP market share.  Self-explanatory – the relative share of subscribers.
      • CSP market share gain/loss.  The degree to which the CSP is winning or losing subscribers relative to its peers.
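For reference, the Herfindahl score behind the first factor is simply the sum of squared market shares, so more consolidated markets score higher. The sketch below uses illustrative share figures, not the Index’s underlying data:

```python
def herfindahl(shares):
    """Herfindahl-Hirschman Index: sum of squared market shares.

    Shares may be given as fractions or absolute subscriber counts (they are
    normalised). The result ranges from near 0 (fragmented) to 1.0 (monopoly).
    """
    total = sum(shares)
    return sum((s / total) ** 2 for s in shares)

# Illustrative only: a four-player market vs a highly consolidated two-player one.
four_player = herfindahl([0.35, 0.33, 0.16, 0.16])  # ≈ 0.28
two_player = herfindahl([0.70, 0.30])               # 0.58
```

On this measure the two-player market scores roughly twice the four-player one, which is why consolidated markets like Ooredoo’s rank as least competitive in Figure 2.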

If we look at the first 3 factors – those that drive fundamental market competition – it is clear why Ooredoo scores highly:

  • Its markets are substantially more consolidated than those of the other players (Figure 2).  Surprisingly, given the regular accusations that the US market is a duopoly, Verizon and AT&T face the most fragmented and competitive market of the group.  For the fixed market, this latter point may be overstated since the US, for consumer and SME segments at least, is effectively carved up into regional areas where major fixed operators like Verizon and AT&T often do not compete head-to-head.
  • Its markets enjoy the strongest mobile revenue growth at 8.1% per annum between 2010 and 2012, versus 4.6% in Telefonica’s markets (fast in Latin America and negative in Europe), 5% in the US, and an annual decline (-1.7%) for SingTel (Figure 3).
  • Facebook and the other internet players are much weaker in Ooredoo’s Middle Eastern markets than in Asia Pacific and Australia (SingTel), Europe and Latin America (Telefonica) and particularly the US (Verizon and AT&T) – see Figure 4.
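The growth rates quoted above are compound annual figures. As a reminder of the calculation (the numbers below are illustrative, not the Index’s underlying revenue data):

```python
def cagr(start, end, years):
    """Compound annual growth rate implied by two end-point values."""
    return (end / start) ** (1 / years) - 1

# 8.1% p.a. over two years implies total growth of about 16.9%,
# since 1.081 ** 2 = 1.168561 (illustrative round numbers).
growth = cagr(100.0, 116.8561, 2)  # 0.081, i.e. 8.1% per annum
```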

Figure 2: Herfindahl Score – Ooredoo enjoys the least competitive markets


Note: Verizon and AT&T have slightly different scores owing to the different business mixes between fixed and mobile within the US market

Source: STL Partners’ Telco 2.0 Transformation Index, February 2014

Figure 3: Ooredoo enjoying the strongest mobile market growth
Mobile Market Revenue Growth 2010-2012 March 2014

Source: STL Partners’ Telco 2.0 Transformation Index, February 2014

Ooredoo also operates in markets that have less competition from new players. For example, social network penetration is 56% in North America where AT&T and Verizon operate, 44% in Europe and South America where Telefonica operates, 58% in Singapore but only 34% in Qatar (Ooredoo’s main market) and 24% in the Middle East on average.

 

  • Identifying an individual CSP’s Telco 1.0 strategy: Telefonica Group in ‘harvest’ mode in most markets – holding prices, sacrificing share, generating cash
  • Frameworks used in the Telco 2.0 Transformation Index help identify evolving goals and strategies for CSPs
  • Traditional frameworks fail to account for new competitors, new services, new business models…
  • …but understanding how well each CSP is transforming to a new business model uncovers the optimum Telco 2.0 goal
  • STL Partners and the Telco 2.0™ Initiative

 

  • Figure 1: Strategic positioning framework for 5 CSP groups
  • Figure 2: Herfindahl Score – Ooredoo enjoys the least competitive markets
  • Figure 3: Ooredoo enjoying the strongest mobile market growth
  • Figure 4: Telefonica in harvest mode – milking companies for cash
  • Figure 5: Telco 2.0 Transformation Index strategic goals framework

The Future Value of Voice and Messaging

Background – ‘Voice and Messaging 2.0’

This is the latest report in our analysis of developments and strategies in the field of voice and messaging services over the past seven years. In 2007/8 we predicted the current decline in telco-provided services in Voice & Messaging 2.0: “What to learn from – and how to compete with – Internet Communications Services”, further articulated strategic options in Dealing with the ‘Disruptors’: Google, Apple, Facebook, Microsoft/Skype and Amazon in 2011, and more recently published initial forecasts in European Mobile: The Future’s not Bright, it’s Brutal. We have also looked in depth at enterprise communications opportunities, for example in Enterprise Voice 2.0: Ecosystem, Species and Strategies, and at trends in consumer behaviour, for example in The Digital Generation: Introducing the Participation Imperative Framework. For more on these reports and all of our other research on this subject please see here.

The New Report


This report provides an independent and holistic view of the voice and messaging market, looking in detail at trends, drivers and detailed forecasts, the latest developments, and the opportunities for all players involved. The analysis will save valuable time, effort and money by providing more realistic forecasts of future potential, and a fast track to developing and/or benchmarking a leading-edge strategy and approach in digital communications. It contains:

  • Our independent, external market-level forecasts of voice and messaging in 9 selected markets (US, Canada, France, Germany, Spain, UK, Italy, Singapore, Taiwan).
  • Best practice and leading-edge strategies in the design and delivery of new voice and messaging services (leading to higher customer satisfaction and lower churn).
  • The factors that will drive best and worst case performance.
  • The intentions, strategies, strengths and weaknesses of formerly adjacent players now taking an active role in the V&M market (e.g. Microsoft).
  • Case studies of Enterprise Voice applications, including Twilio, and Unified Communications solutions such as Microsoft Office 365.
  • Case studies of Telco OTT consumer voice and messaging services such as Telefonica’s TuGo.
  • Lessons from case studies of leading-edge new voice and messaging applications globally, such as WhatsApp, KakaoTalk and other so-called ‘Over The Top’ (OTT) players.


It comprises an 18-page executive summary, 260 pages and 163 figures – full details below. Prices on application – please email contact@telco2.net or call +44 (0) 207 247 5003.

Benefits of the Report to Telcos, Technology Companies and Partners, and Investors


For a telco, this strategy report:

  • Describes and analyses the strategies that can make the difference between best and worst case performance, worth $80bn (or +/-20% revenues) in the 9 markets we analysed.
  • Externally benchmarks internal revenue forecasts for voice and messaging, leading to more realistic assumptions, targets, decisions, and better alignment of internal (e.g. board) and external (e.g. shareholder) expectations, and thereby potentially saving money and improving contributions.
  • Can help improve decisions on voice and messaging services investments, and provides valuable insight into the design of effective and attractive new services.
  • Enables more informed decisions on the partner vs competitor status of non-traditional players in the V&M space with new business models, and thereby produces better / more sustainable future strategies.
  • Evaluates the attractiveness of developing and/or providing partner Unified Communication services in the Enterprise market, and ‘Telco OTT’ services for consumers.
  • Shows how to create a valuable and realistic new role for Voice and Messaging services in its portfolio, and thereby optimise its returns on assets and capabilities.


For other players, including technology and Internet companies and telco technology vendors:

  • The report provides independent market insight on how telcos and other players will be seeking to optimise $ multi-billion revenues from voice and messaging, including new revenue streams in some areas.
  • As a potential partner, the report will provide a fast-track to guide product and business development decisions to meet the needs of telcos (and others).
  • As a potential competitor, the report will save time and improve the quality of competitor insight by giving strategic insights into the objectives and strategies that telcos will be pursuing.


For investors, it will:

  • Improve investment decisions and strategies for returning shareholder value by improving the quality of insight on forecasts and the outlook for telcos and other technology players active in voice and messaging.
  • Save vital time and effort by accelerating decision making and investment decisions.
  • Help them better understand and evaluate the needs, goals and key strategies of key telcos and their partners / competitors.


The Future Value of Voice: Report Content Summary

  • Executive Summary. (18 pages outlining the opportunity and key strategic options)
  • Introduction. Disruption and transformation, voice vs. telephony, and scope.
  • The Transition in User Behaviour. Global psychological, social, pricing and segment drivers, and the changing needs of consumer and enterprise markets.
  • What now makes a winning Value Proposition? The fall of telephony, the value of time vs telephony, presence, Online Service Provider (OSP) competition, operators’ responses, free telco offerings, re-imaging customer service, voice developers, the changing telephony business model.
  • Market Trends and other Forecast Drivers. Model and forecast methodology and assumptions, general observations and drivers, ‘Peak Telephony/SMS’, fragmentation, macro-economic issues, competitive and regulatory pressures, handset subsidies.
  • Country-by-Country Analysis. Overview of national markets. Forecast and analysis of: UK, Germany, France, Italy, Spain, Taiwan, Singapore, Canada, US, other markets, summary and conclusions.
  • Technology: Products and Vendors’ Approaches. Unified Communications, Microsoft Office 365, Skype, Cisco, Google, WebRTC, Rich Communications Service (RCS), Broadsoft, Twilio, Tropo, Voxeo, Hypervoice, Calltrunk, Operator voice and messaging services, summary and conclusions.
  • Telco Case Studies. Vodafone 360, One Net and RED, Telefonica Digital, Tu Me, Tu Go, Bluvia and AT&T.
  • Summary and Conclusions. Consumer, enterprise, technology and Telco OTT.

Telco 2.0 Transformation Index: Technology Survey

Summary: 150 senior execs from Vodafone, Telefonica, Etisalat, Ooredoo (formerly Qtel), Axiata and Singtel supported our technology survey for the Telco 2.0 Transformation Index. This analysis of the results includes findings on prioritisation, alignment, accountability, speed of change, skills, partners, projects and approaches to transformation. It shows that there are common issues around urgency, accountability and skills, and interesting differences in priorities and overall approach to technology as an enabler of transformation. (November 2013, Executive Briefing Service, Transformation Stream.)

Below are a brief extract and detailed contents from a 29 page Telco 2.0 Briefing Report that can be downloaded in full in Powerpoint slideshow format by members of the Premium Telco 2.0 Executive Briefing service and the Telco 2.0 Transformation stream here.

This report is an extract from the overall analysis for the Telco 2.0 Transformation Index, a new service from Telco 2.0 Research. Non-members can find out more about subscribing to the Briefing Service here and the Transformation Index here. There will be a world first preview of the Telco 2.0 Transformation Index at our Digital Arabia Executive Brainstorm in Dubai on 11-13th November 2013. To find out more about any of these services please email contact@telco2.net or call +44 (0) 207 247 5003.


Introduction


Details of the objectives and key benefits of the overall Telco 2.0 Transformation Index can be found here, and the methodology and approach here. There’s also an example of Telefonica’s market position here.

One component of our analysis has been a survey of 150 senior execs on the reality of developing and implementing technology strategy in their organisations, and the results are now available to download to members of the Telco 2.0 Executive Briefing Service.

Key Benefits

  • The report’s highly graphical and interactive Powerpoint show format makes it extremely easy to digest and reach valuable insights quickly
  • The structure of the analysis allows the reader to rapidly and concisely assimilate the complex similarities and differences between players
  • It is underpinned with detailed and sourced numerical and qualitative data

 

Example charts from the report

The report analyses similarities and differences in technology prioritisation across the six players: SingTel, Axiata, Vodafone, Telefonica, Etisalat and Ooredoo.

 

It also assesses the skills profiles of the players against different strategic areas.

Telco 2.0 Transformation Index - Technology Skills analysis, Telefonica, Vodafone, Etisalat, Ooredoo, Axiata, Singtel
Contents

To access the contents of the report, including…

  • Introduction and Methodology
  • Background – the Telco 2.0 Transformation Index
  • Executive Summary
  • Survey respondents
  • Drivers of network and IT projects
  • Degree of challenge of ‘Transformation’ by operator
  • Priority areas for Transformation by operator
  • What are the preferred project approaches for transformation?
  • Alignment of technology and commercial priorities
  • Accountability for leveraging and generating value from technology projects
  • IT Skills – ‘Telco 1.0’ Vs ‘Telco 2.0’
  • Nature of strategic partnerships by operator
  • Technology project life-cycles by operator
  • Groupings by attitude to technology as a driver of success
  • Priority areas for technological improvement or transformation

Members of the Telco 2.0 Executive Briefing Subscription Service and the Telco 2.0 Transformation stream can download the full 29 page report in interactive Powerpoint slideshow format here. Non-members, please subscribe here. For other enquiries, please email contact@telco2.net or call +44 (0) 207 247 5003.

Cloud 2.0: Securing Trust to Survive the ‘One-In-Five’ CSP Shake-Out

Summary: The Cloud market is on the verge of the next wave of market penetration, yet it’s likely that only one in five Cloud Service Providers (CSPs) in today’s marketplace will still be around by 2018, as providers fail or are swallowed up by aggressive competitors. So what do CSPs need to do to survive and prosper? (October 2013, Foundation 2.0, Executive Briefing Service, Cloud & Enterprise ICT Stream.)


Introduction: one in five Cloud providers will survive 

The Cloud market is on the verge of the next wave of market penetration, yet it’s likely that only one in five Cloud Service Providers (CSPs) in today’s marketplace will still be around by 2018, as providers fail or are swallowed up by aggressive competitors. So what do CSPs need to do to survive and prosper?

This research was sponsored by Trend Micro but the analysis and recommendations represent STL Partners’ independent view. STL Partners carried out an independent study based on in-depth interviews with 27 senior decision-makers representing Cloud Service Providers and enterprises across Europe. These discussions explored, from both perspectives, cloud maturity, the barriers to adoption and how these might be overcome. The findings and observations are detailed in this three-part report, together with practical recommendations on how CSPs can address enterprise security concerns and ensure the sustainability of the cloud model itself.

Part 1: Cloud – coming of age or troubled adolescent?

While the concept of organising computing as a utility dates back to the 1960s, the cloud computing model as we know it today is built on the sub-classifications of Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS).

We’ve covered telcos’ role in Cloud Services in depth in our Cloud research stream, and found that hype, hope and uncertainty have been notable features of the early stages of development of the market, with many optimistic forecasts of adoption being somewhat premature.

In terms of the adoption cycle today, our analysis is that Cloud Services are on the brink of ‘the chasm’: well established among early adopters but less well known, trusted and used by the mass-market segment of the enterprise market.

Building trust among new customer segments is the key to bridging this gap. For the industry it is a make or break point in terms of achieving scale. For CSPs, trust will be a key to survival and prosperity in the next phase of the market, enabling them to open up new opportunities and expand the amenable market, as well as to compete to retain and grow their individual market shares.

Many of the obstacles to and inhibitors of cloud adoption stem from customers’ perceptions of product immaturity – “will it be safe and work how we want without too much hassle and commitment?” In this report we examine findings on the general inhibitors and drivers of adoption, and then those related to the main inhibitor, data security, and how they might be addressed.

Overcoming the obstacles

Enterprise decision-makers in the study admitted to being deterred from the cloud by the prospect of migration, with the “enterprise/cloud barrier” perceived as a significant technical hurdle. While CSPs with enterprise-grade propositions have in place the business model, margins and consultative resources to offer customers an assisted journey to the cloud, standard public offerings are provided on a Do-It-Yourself basis.

However, data privacy and security remain the biggest inhibitors to cloud adoption among enterprises, due in no small part to a perceived loss of visibility and control. Recent headline-grabbing events relating to mass surveillance programmes such as PRISM have only served to feed these fears. As will be seen in this report, a lack of consistent industry standards, governance and even terminology heightens the confusion. Internal compliance procedures, often rooted in an out-dated “physical” mind-set, fail to reflect today’s technological reality and the nature of potential threats.

According to the UK Department for Business Innovation & Skills, the direct cost of a security breach (any unauthorised access of data, applications, services, networks or devices) is around £65,000 for SMEs and £850,000 for larger enterprises. However, add to this financial penalties for failure to protect customer data, reputational damage, diminished goodwill and lost business, and the consequential losses can be enough to put a company out of business.  It’s little wonder some enterprises still regard cloud as a risk too far.

In reality, CSPs with a heritage in managed services and favourable economies of scale can typically match or better the security provisions of on-premise data centres.  However, as “super enterprises” they present a larger and therefore more attractive target for malicious activity than a single business.  There is simply no room for complacency.

CSPs must shift their view of security from a business inhibitor to a business enabler: crucial to maintaining and expanding the overall cloud market and confidence in the model by winning customer trust.  This requires a fundamental rethink of compliance – both on the part of CSPs and enterprises – from a tick-box exercise to achieve lowest-cost perimeter protection to cost effectively meeting the rigorous demands of today’s information-reliant enterprises.

Cloud services cannot be considered mature until enterprises en masse are prepared to entrust anything more than low-sensitivity data to third party CSPs.  The more customer security breaches that occur, the more trust will be undermined, and the greater the risk of the cloud model imploding altogether.

State of the nation

The journey to the cloud is often presented in the media as a matter of “when” rather than “if”.  However, while several CSPs in our study believed that the cloud model was starting to approach maturity, enterprise participants were more likely to contend that cloud was still at an experimental or “early adopter” stage.

The requirements of certain vertical markets were perceived by some respondents to make cloud a non-starter, for example, broadcasters that need to upload and download multi-terabyte sized media files, or low-latency trading environments in the financial sector.  Similarly, the value of intellectual property was cited by pharmaceutical companies as justifying the retention of data in a private cloud or internal data centre at any cost.

CSPs universally acknowledged that their toughest competitor continues to be enterprises’ own in-house data centres. IT departments are accustomed to having control over their applications, services, servers, storage, network and security. While they notionally accept that they will have to be less “hands on” in the cloud, a lack of trust persists among many. This reticence was typically seen by CSPs as unwarranted fear and parochialism, yet many are still finding it a challenge to educate prospective customers and correct misconceptions. CSPs suggested that IT professionals may be as likely to voice support for the cloud as turkeys voting for Christmas. However, more enlightened IT functions have embraced the opportunity to evolve their remit: working with their CSP to monitor services against SLAs, enforce compliance requirements and investigate new technologies rather than maintaining the old.

For tentative enterprises, security is still seen as a barrier to, rather than an accelerant of, cloud adoption, and one of the most technically challenging issues for both IT and compliance owners. Enterprises that had advanced their cloud strategy testified that successful adoption relies on effective risk management when evaluating and engaging a cloud partner. Proponents of cloud solutions will need compelling proof points to win over their CISO, security team or compliance officer.  However, due diligence is a lengthy and often convoluted process that should be taken into account by those drawn to the cloud model for the agility it promises.

The majority of CSPs interviewed were relatively dismissive of customer security concerns, making the valid argument that their security provisions were at least equal to, if not better than, those of most enterprise data centres. However, as multiple companies concentrate their data into the hands of a few CSPs, those providers become larger and more attractive targets for hackers. Nonetheless, CSPs rarely offer any indemnification against hacking (aside from financial compensation for a breach of SLA), and SaaS providers tend to be more obscure than IaaS/PaaS providers about the security of their operations. Further commercial concerns explored in this report relate to migration and punitive contractual lock-in. Enterprises need to feel that they can easily relocate services and data across the cloud boundary, whether back in house or to another provider. This creates the added challenge of providing end-to-end audit continuity at rest as well as in transit.

There are currently around 800 cloud service providers (CSPs) in Europe.  Something of a land grab is taking place as organisations whose heritage lies in software, telecoms and managed hosting are launching cloud-enabled services, primarily IaaS and SaaS.

However, “cloudwashing” – a combination of vendor obfuscation and hyperbole – is already slowing down the sales cycles at a time when greater transparency would be likely to lead to more proofs of concept, accelerated uptake and expansion of the overall market.

Turbulence in the macro economy is exacerbating the problem: business creation and destruction are among the most telling indicators of economic vitality.  A landmark report from RSM shows that the net rate of business creation (business births minus deaths) for the G7 countries was just 0.8% on a compound annual basis over the five-year period of the study. The BRICs, by contrast, show a net rate of business creation of 6.2% per annum – approximately eight times the G7 rate.

In parallel, the pace of technology adoption is accelerating. Technologies are considered to have become “mainstream” once they have achieved 25% penetration. As cloud follows this same trajectory, with a rash of telcos, cable operators, data centre specialists and colocation providers entering the market, significant consolidation will be inevitable, since cloud economics are inextricably linked to scale.

Figure 1 – Technology adoption rates
Technology Adoption Rates Sept 2013

Source: STL Partners

Lastly, customers are adapting and evolving faster than ever, due in no small part to the advent of social media and digital marketing practices, creating a hyper-competitive environment. As a by-product, the rate of business failure is rising. In the 1950s, two-thirds of the Fortune 500 companies failed. Throughout the 1980s, almost nine out of ten of the so-called “Excellent” companies went to the wall, and 98% of firms born out of the “Dot Com” revolution in the late 1990s are not expected to survive.

As a result, STL Partners anticipates that by 2018, a combination of consolidation and natural wastage will leave only 160 CSPs in the marketplace – a survival rate of one in five.

Drivers of cloud adoption

The business benefits of the cloud are well documented, so the main value drivers cited by participants in the study can be briefly summarised as follows:

Figure 2 – Business and IT Drivers of cloud adoption
Business and IT Drivers of cloud adoption Sept 2013

Report Contents

  • Introduction: one in five Cloud providers will survive
  • Part 1: Cloud – coming of age or troubled adolescent?
  •    Overcoming the obstacles
  •    State of the nation
  •    Drivers of cloud adoption
  •    Inhibitors to cloud adoption
  •       Cloud migration and integration with internal systems
  •       Vendor lock-in and exit strategies
  •       Governance and compliance issues
  •       Supplier credibility and longevity
  •       Testing and assurance
  • Part 2: Cloud security and data privacy challenges
  •    Physical security
  •    Data residency and jurisdiction
  •    Compliance and audit
  •    Encryption
  •    Identity and Access Management
  •    Shared resources and data segregation
  •    Security incident management
  •    Continuity services
  •    Data disposal
  •    Cloud provider assessment
  •    Industry standards and codes of practice
  •    Migration strategy
  •    Customer visibility
  • Part 3: Improving your ‘security posture’
  •    The ethos, tools and know-how needed to win customers’ trust
  •    The Four Levels of Cloud Security
  • Key take-aways for Cloud Services Providers
  • About STL Partners
  • About Trend Micro

Table of Figures

  • Figure 1 – Technology adoption rates
  • Figure 2 – Business and IT Drivers of cloud adoption
  • Figure 3 – Information security breaches 2013
  • Figure 4 – The four levels of Cloud security
  • Figure 5 – A 360 Degree Framework for Cloud Security

Telco Opportunities in the ‘New Mobile Web’?

Summary: The transformed mobile web experience, brought about by the adoption of a range of new technologies, is creating a new arena for operators seeking to (re)build their role in the digital marketplace. Operators are potentially well-placed to succeed in this space; they have the requisite assets and capabilities and the desire to grow their digital businesses. This report examines the findings of interviews and a survey conducted amongst key industry players, supplemented by STL Partners’ research and analysis, with the objective of determining the opportunities for operators in the New Mobile Web and the strategies they can implement in order to succeed. (September 2013, Foundation 2.0, Executive Briefing Service.)

This report explores new opportunities for telecom operators (telcos) in Digital, facilitated by the emergence of the “New Mobile Web”. The New Mobile Web is a term we use to describe the transformed mobile Web experience achieved through advances in technology: HTML5; faster, cheaper (4G) connectivity; and better mobile devices. This paper argues that the New Mobile Web will lead to a shift away from native (Apple and Android) app ecosystems towards browser-based consumption of media and services. This shift will create new opportunities for operators seeking to (re)build their digital presence.

STL Partners has undertaken research in this domain through interviews and surveys with operators and other key players in the market. In this report, we present our findings and analysis, as well as providing recommendations for operators.

The New Mobile Web

The emergence of the New Mobile Web is creating a new arena for operators seeking to (re)build their role in the digital marketplace. Many telecoms operators (telcos) are looking to build big “digital” businesses to offset the forecasted decline in their core voice and messaging businesses over the next 5-7 years. Growth in data services and revenues will only partly offset these declines.

In general, despite a lot of effort and noise, telcos have been marginalised from the explosion in mobile Apps and Content, except insofar as it has helped them upgrade customers to smartphones and data-plans. Most notably, there has been a shift in market influence to Google & Apple, and spiralling traffic and signalling loads from easy-to-use interactive apps on smartphones.

Technical developments, including the adoption of HTML5, better mobile devices and faster networks, are transforming the user experience on mobile devices thereby creating a “New Mobile Web”. This New Mobile Web extends beyond “pages”, to content that looks and behaves more like “apps”. By having such “Web-apps” that work across different operating systems and devices – not just phones, but also PCs, TVs and more – the Web may be able to wrest back its role and influence in mobile Apps and Content.

The Key Opportunities for Operators

This new digital arena is in turn creating new opportunities to support others; STL’s research found that respondents felt the key opportunities for operators in the New Mobile Web were around: Monetisation, Discovery, Distribution and Loyalty.

Figure 1 – Operators see the New Mobile Web creating most value around Payments, Monetisation and Loyalty

Telcos can leverage their assets

Telcos have the requisite assets and capabilities to succeed in this area; they are strong candidates for assisting in monetisation, discovery, distribution and loyalty, especially if they can link in their other capabilities such as billing and customer-knowledge.

This report sets out some of the existing activities and assets that operators should seek to exploit and expand in pursuing their ambitions in the New Mobile Web.

Strategic Options for telcos to succeed

Operators that are aiming to become ‘digital players’ need to adopt coherent strategies that exploit and build on their assets and capabilities. This report identifies five broad strategic options that operators should look to pursue, and sets out the rationale for each. These strategies are not necessarily mutually exclusive and can be combined to develop clear direction and focus across the organisation.

Seizing the opportunity

Although many operators believe that they urgently need to build strong digital businesses, most are struggling to do so. Telcos are not going to get many chances to re-engage with customers and carve out a bigger role for themselves in the digital economy. If it fulfils its promise, the New Mobile Web will disrupt the incumbent mobile Apps and Content value networks. This disruption will provide new opportunities for operators.

The operator community needs to participate in shaping the New Mobile Web and its key enabling technologies. Telcos also need to understand the implications of these technologies at a strategic level – not just something that the Web techies get excited about.

If telcos are not deeply involved – from board level downwards – they risk being overtaken by events, once again. Continued marginalisation from the digital economy will leave operators with the prospect of facing a grim future of endless cost-cutting, commoditisation and consolidation. This should not be inevitable.

Report Contents

  • Preface
  • Executive Summary
  • Introduction to the New Mobile Web
  • Meeting Operators’ strategic goals
  • Key opportunities in the New Mobile Web
  • Operators have plenty of existing assets and could add more
  • Case Studies
  • Telco Strategies in the New Mobile Web
  • Appendix 1: The New Mobile Web – “Rebalancing” from “Native”

Table of Figures

  • Figure 1: On-line survey respondents
  • Figure 2: Key opportunities in the New Mobile Web.  Enabling…
  • Figure 3: Areas of Value for Operators
  • Figure 4: Telco assets that should be used to address the opportunity
  • Figure 5:  Operator Strategies
  • Figure 6: Drivers of the New Mobile Web
  • Figure 7: Data growth alone will not fill the gap in declining Voice and Messaging Revenue
  • Figure 8: Survey results on operator ambitions
  • Figure 9: Asian and MEA operators are the most ambitious
  • Figure 10: Telcos in native app dominated geographies are more likely to believe that their ambitions could not be met in the current world. However, as stated above, there are notable exceptions…
  • Figure 11: Key opportunities in the New Mobile Web.  Enabling…
  • Figure 12: Operators see the New Mobile Web creating most value around Payments, Monetisation and Loyalty
  • Figure 13: A vast display ecosystem enables Web content providers to indirectly monetise their content
  • Figure 14: Within Digital, operators see most value in Self-care, Mobile Payments and Banking, Video and Music
  • Figure 15: Existing operator assets to build a role in the New Mobile Web
  • Figure 16: iRadio Overview
  • Figure 17: Tapjoy Overview
  • Figure 18: Mozilla Firefox OS Overview
  • Figure 19: Globe Telecom promotion
  • Figure 20: Financial Times Overview
  • Figure 21: AppsFuel Overview
  • Figure 22: Summary of the 5 Broad Strategies
  • Figure 23: Percentage of (US) smartphone and tablet users’ time by application area
  • Figure 24: The industry is beginning to see a “re-birth of the Web”
  • Figure 25: HTML5 seeks to bring the best of both Web and app worlds:
  • Figure 26: Telcos see most HTML5 value in reducing the cost of service & maintenance and improving the time to market.
  • Figure 27: The Industry sees the dominance of existing ecosystems as the biggest barrier to HTML5’s success

Finding the Next Golden Egg: Sourcing Great Telecoms Innovations

The telco innovation problem…

The challenge facing the telecoms industry has been well documented (not least by STL Partners). The solution – the need for telcos to develop a new telecoms ’business model’ – is also now generally accepted. For some, the new business model may entail eschewing service development and instead focusing on cost efficiency and network performance – the Telco 2.0 Happy Piper.

For many, however, the desire to compete in the ‘services layer’ remains strong. These would-be Telco 2.0 Service Providers must seek to replace the contracting voice and messaging revenue streams with new revenues from new products and services and customers.

How to develop these new products and services and customer relationships is the $1 trillion question for telcos and their partners.

STL Partners has spent much time exploring both the nature of new opportunities and the processes for realising them. The problem for telcos is that they are not natural innovators. Their raison d’être historically has been to build infrastructure and generate returns from services that were only available because they owned and controlled the infrastructure – voice, messaging, and connectivity. The result was very low levels of innovation in telecoms but stable high-margin returns from ‘protected services’.

The Internet has changed the game. Now, voice, messaging and other communications services are available from alternative service providers – the Internet giants and start-ups in particular. These new players have innovation in their DNA: they are product- and service-oriented; they have sexy brands; they understand the value of customer data and how to exploit it; and, with lower capital expenditures, they can generate returns on investment at much lower margins.

…and one part of the solution addressed in this report

For telcos to develop competitive enabling or end-user services, whether consumer or enterprise, they need to develop the same skills and relationships enjoyed by the new competitors. As we discuss at length in A Practical Guide to Implementing Telco 2.0, and as we measure in the forthcoming Telco 2.0 Transformation Index, this requires a fundamental business model transformation that encompasses the whole telco industry: services, organisation structure and processes, partnerships, technology, and the cost and revenue model.

Rather than cover all the elements of the transformation, this report focuses narrowly on the process of developing compelling new propositions and services that deliver what customers want better than existing available solutions. It is based on a simple premise: that innovation and creativity is based on ‘associative thinking’ – the ability to link together ideas and concepts. For example, it was associative thinking in 2006 that led Apple’s iPhone designers to spot how an accelerometer – a widely used device in the transport, construction and medical industries – could be integrated into an iPhone to manage automatic screen rotation and countless applications we now take for granted on mobile.
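The accelerometer example can be made concrete. The sketch below is a deliberate simplification – the axis convention and the threshold-free logic are made up for illustration, and no real handset OS ships anything this naive – but it shows how a gravity vector read from an accelerometer maps to a display orientation:

```python
def orientation(ax, ay):
    """Choose a screen orientation from the accelerometer's gravity
    components along the screen's x and y axes (units don't matter,
    only the relative magnitudes). Purely illustrative logic."""
    if abs(ax) > abs(ay):
        # Gravity pulls mostly sideways: the device is held landscape
        return "landscape-left" if ax > 0 else "landscape-right"
    # Gravity pulls mostly along y: portrait, one way up or the other
    # (assumed convention: -y points towards the bottom of the screen)
    return "portrait" if ay < 0 else "portrait-upside-down"

print(orientation(0.3, -9.7))   # portrait
print(orientation(9.6, -0.5))   # landscape-left
```

A real implementation would add hysteresis and thresholds so the screen does not flip when the device lies nearly flat.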

Two ‘associative thinking’ approaches to identifying Telco 2.0 innovations

1. Existing tried and tested solutions

Rather than start with a blank sheet of paper, one way to innovate is to copy solutions that others have brought to market successfully. This does not necessarily imply a ‘me too’ approach, as there is scope, of course, to improve the solutions that others have created. In fact, most innovations are actually an extension of an existing product or service. For example:

  • Apple’s iPhone, with its capacitive screen and integrated content ecosystem was a massive improvement on previous smartphones but clearly drew on early work done by, for example, Nokia with its 9210 Communicator and Ericsson with the R380.
  • Google’s powerful search algorithm and clean user interface contrasted with the clutter of earlier search sites such as AltaVista, but also built on the same idea of helping people find things on the web. Interestingly, AltaVista has now made a comeback with a slick, clean interface that looks remarkably similar to Google!

If there is value in taking another firm’s idea and improving it, what are the sources of such concepts for CSPs?

STL Partners sees three main ones:

1. Your local telecoms market.

Scan the offerings of your competitors and, if you spot something that looks attractive or seems to be getting traction in the marketplace, find ways to improve it and launch a better competitive offering yourself. You may remember from our review of Telefonica and Vodafone that Freebees was a copy of O2’s earlier Top-up Surprises. There are two important things here that Vodafone failed to do:

  • Follow fast. The Freebees programme was launched around three years after Top-up Surprises and so Vodafone missed out on being seen as an innovator. Vodafone also missed out on the financial benefits that O2 enjoyed in those intervening years.
  • Improve the original concept. Freebees is fine but fails to materially improve on what was offered by O2 – rewards for customers that top-up their prepay account.

2. The global telecoms market.

Look outside your market to other geographies to see what has worked in other parts of the world and then explore how these solutions might work in your own market. Clearly, you need to make allowance for different local customs and behaviours, industry structures, regulations and so on, but the global nature of (tele)communications means that things that have worked in one market can often be easily adapted to others. STL Partners carries out this global scouting service for clients, looking at what is available from other CSPs, vendors and start-ups, and believes it is a sensible low-risk strategy for many CSPs – see page 17 of this document for more details.

Contents

To access the contents of the report, including…

  • The telco innovation problem…
  • …and one part of the solution addressed in this report
  • Two ‘associative thinking’ approaches to identifying Telco 2.0 innovations
  • 1. Existing tried and tested solutions
  • 2. Customer Goal-led Innovation (CGLI)
  • Case study on identifying Telco 2.0 innovations: The STL Partners scouting service
  • About STL Partners

…and the following table of exhibits…

  • Figure 1: Sources for tried and tested Telco 2.0 solutions
  • Figure 2: The limitations of asking customers what they need when innovating, some examples
  • Figure 3: How Customer Goal-led Innovation focuses on real needs and uncovers innovation opportunities
  • Figure 4: The STL Partners’ Customer Goal-led Innovation process
  • Figure 5: Producing a customer activity map to support a goal statement
  • Figure 6: Customer goal-led innovation – activity analysis table, example
  • Figure 7: Identifying opportunity areas for innovation, example
  • Figure 8: The STL Partners scouting service in a nutshell

Telco 2.0: Making Money from Location Insights

Preface

The provision of Location Insight Services (LIS) represents a significant opportunity for Telcos to monetise subscriber data assets. This report examines the findings of a survey conducted amongst representatives of key stakeholders within the emerging ecosystem, supplemented by STL Partners’ research and analysis, with the objective of determining how operators can release the value from their unique position in the location value chain.

The report concentrates on Location Insight Services (LIS), which leverage the aggregated and anonymised data asset derived from connected consumers’ mobile location data, as distinct from Location Based Services (LBS), which are dependent on the availability of individual real time data.

The report draws the distinction between Location Insight Services that are Person-centric and those that are Place-centric and assesses the different uses for each data set.

In order to service the demand from specific use cases as diverse as Benchmarking, Transport & Infrastructure Planning, Site Selection and Advertising Evaluation, operators face a choice between fulfilling the role of Data Supplier, providing the market with Raw Big Data or offering Professional Services, adding value through a combination of location insight reports and interpretation consultancy.

The report concludes with a comparative evaluation of options for operators in the provision of LIS services and a series of recommendations for operators to enable them to release the value in Location Insight Services.

Location data – untapped oil

The ubiquity of mobile devices has led to an explosion in the amount of location-specific data available and the market has been quick to capitalise on the opportunity by developing a range of Location-Based Services offering consumers content (in the form of information, promotional offers and advertising). Industry analysts predict that this market sector is already worth nearly $10 billion.

The vast majority of these Location Based Services (LBS) are dependent on the availability of real time data, on the reasonable assumption that knowing an individual’s location enables a company to make an offer that is more relevant, there and then. But within the mobile operator community, there is a growing conviction that a wider opportunity exists in deriving Location Insight Services (LIS) from connected consumers’ mobile location data. This opportunity does not necessarily require real time data (see Figure 9). The underlying premise is that identification of repetitive patterns in location activity over time not only enables a much deeper understanding of the consumer in terms of behaviour and motivation, but also builds a clearer picture of the visitor profile of the location itself.
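To illustrate the premise, the sketch below uses entirely made-up, anonymised sample events (hashed user, cell site, hour of day) to show how the same non-real-time location data yields both a person-centric behavioural view and a place-centric visitor profile:

```python
from collections import Counter, defaultdict

# Hypothetical anonymised location events: (hashed user id, cell id, hour)
events = [
    ("u1", "cell_A", 8), ("u1", "cell_B", 13), ("u1", "cell_A", 18),
    ("u2", "cell_A", 8), ("u2", "cell_A", 19), ("u3", "cell_B", 13),
]

user_counts = defaultdict(Counter)   # person-centric view
place_hours = defaultdict(Counter)   # place-centric view
for user, cell, hour in events:
    user_counts[user][cell] += 1
    place_hours[cell][hour] += 1

# Person-centric insight: the cell each (anonymised) user visits most often
top_cell = {u: c.most_common(1)[0][0] for u, c in user_counts.items()}

# Place-centric insight: each cell's peak visiting hour - a visitor profile
peak_hour = {c: h.most_common(1)[0][0] for c, h in place_hours.items()}

print(top_cell)   # {'u1': 'cell_A', 'u2': 'cell_A', 'u3': 'cell_B'}
print(peak_hour)  # {'cell_A': 8, 'cell_B': 13}
```

At operator scale the same aggregation, run over billions of events, is what turns raw network data into the repetitive patterns the report describes.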

Figure 1:  Focus of this study is on Location Insight Services

  • As part of our Telco 2.0 Initiative, we have surveyed a number of companies from within the evolving location ecosystem to assess the potential value of operator subscriber data assets in the provision of Location Insight Services. This report examines the findings and illustrates how operators can release the value from their unique position in the location value chain.

Location Insight Services is a fast growing, high value opportunity

The demand is “Where?”

For operators to invest in the technology and resources required to enter this market, a compelling business case is required. Firstly, various analysts have confirmed that there is a massive latent demand for location-centric information within the business community to enable the delivery of location-specific products and services that are context-relevant to the consumer. According to the Economist Intelligence Unit, there is a consensus amongst marketers that location information is an important element in developing marketing strategy, even for those companies where data on customer and prospect location is not currently collected.

Figure 2: Location is seen as the most valuable information for developing marketing strategy

Source: Mind the marketing gap – a report from the Economist Intelligence Unit

Scoping the LIS opportunity by industry and function

In order to understand the market potential for Location Insight Services, we have considered both industry sectors and job functions where insights derived from location data at scale improve business efficiencies. Our research has suggested that Location Insight Services have an application to many organisations that are seeking to address the broader issue of how to extract the benefits concealed within Big Data.

A recent report from Cisco concentrating on how to unlock the value of digital analytics suggested that Big Data has an almost universal application:

“Big Data could help almost any organization run better and more efficiently. A service provider could improve the day-to-day operations of its network. A retailer could create more efficient and lucrative point-of-sale interactions. And virtually any supply chain would run more smoothly. Overall, a common information fabric would improve process efficiency and provide a complete asset view.” 

Our research suggests that the following framework facilitates understanding of the different elements that together comprise the market for non-real time Location Insight Services.

The matrix considers the addressable market by reference to vertical industry sectors and horizontal function or disciplines.

We have rated the opportunities High, Medium and Low based on a high level assessment of the potential for uptake within each defined segment. In order to produce an estimate of the potential market size for non-real time Location Insight Services, STL Partners have taken into account the current revenue estimates for both industry sectors and functions.
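The rating-and-weighting approach can be sketched as follows. Everything in this example is invented for illustration – the segments, the High/Medium/Low ratings, the uptake weights and the revenue figures are not STL Partners’ actual estimates:

```python
# Hypothetical uptake weights for the High/Medium/Low ratings
RATING_WEIGHT = {"High": 0.50, "Medium": 0.25, "Low": 0.10}

# (industry sector, function) -> rating; the matrix content is invented
matrix = {
    ("Retail", "Site Selection"): "High",
    ("Transport", "Infrastructure Planning"): "High",
    ("Media", "Advertising Evaluation"): "Medium",
    ("Utilities", "Benchmarking"): "Low",
}

# Hypothetical addressable revenue per segment, in US$ millions
segment_revenue = {
    ("Retail", "Site Selection"): 400,
    ("Transport", "Infrastructure Planning"): 300,
    ("Media", "Advertising Evaluation"): 200,
    ("Utilities", "Benchmarking"): 100,
}

# Size the opportunity by weighting each segment's revenue by its rating
estimate = sum(segment_revenue[seg] * RATING_WEIGHT[rating]
               for seg, rating in matrix.items())
print(f"Estimated LIS opportunity: US${estimate:.0f}m")  # US$410m
```

The real sizing exercise behind Figure 4 works segment by segment in the same way, with researched rather than illustrative inputs.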

Figure 3:  Location Insight Market Overview (telecoms excluded)
Location Insight Services Market Taxonomy

Report Contents

  • Preface
  • Executive Summary
  • Location data – untapped oil
  • Location Insight Services is a fast growing, high value opportunity
  • Scoping the LIS opportunity by industry and function
  • Location Insight Services could be worth $11bn globally by 2016
  • Which use cases will drive uptake of LIS?
  • Use cases – industry-specific illustrations
  • How should Telcos “productise” location insights services?
  • Operators are uniquely placed to deliver location insights and secure a significant share of this opportunity
  • What is the operator LIS value proposition?
  • Location insight represents a Big Data challenge for Telcos.
  • There is a demand for more granular location data
  • Increasing precision commands a premium
  • Meeting LIS requirements – options for operators
  • What steps should operators take?
  • Methodology and reference sources
  • References
  • Appendix 1 – Opportunity Sizing
  • Definition
  • Methodology

 

  • Figure 1: Focus of this study is on Location Insight Services
  • Figure 2: Location is seen as the most valuable information for developing marketing strategy
  • Figure 3: Location Insight Market Overview (telecoms excluded)
  • Figure 4: The value of Global Location Insight Services by industry and sector (by 2016)
  • Figure 5: How UK retail businesses use location based insights
  • Figure 6: Illustrative use cases within the Location Insights taxonomy
  • Figure 7: How can Telcos create value from customer data?
  • Figure 8: Key considerations for Telco LIS service strategy formulation
  • Figure 9: Real time service vs. Insight
  • Figure 10: The local link in global digital markets
  • Figure 11: Customer Data generated by Telcos
  • Figure 12: Power of insight from combining three key domains
  • Figure 13: Meeting LIS Requirements – Options for Operators

Telco 2.0 Transformation Index: Understanding Telefonica’s Markets and Market Position

Summary: This extract from the Telco 2.0 Transformation Index shows our analysis of Telefonica’s markets and market position, including economic and digital market maturity, regulation, customers, competition and pricing. It is one part of our overall analysis of Telefonica’s progress towards transformation to the Telco 2.0 business model. The other parts of the Telefonica analysis are: Service Proposition, Finances, Technology, Value Network, and an overall summary. Telefonica is one of the companies analysed and compared in the first tranche of analysis that also addresses Vodafone, AT&T, Verizon, Axiata, SingTel, Etisalat and Ooredoo (formerly Qtel). (August 2013, Executive Briefing Service, Transformation Stream.) Telefonica Telco 2.0 Transformation Index Small

Introduction


Details of the objectives and key benefits of the overall Telco 2.0 Transformation Index can be found here, and the methodology and approach here.

Telefonica is one of the first companies featured in our Transformation Index, and one that is viewed with great interest by others. With operating companies facing very different conditions in Europe and South America, Telefonica faces some interesting strategic challenges, and has attempted to stimulate growth through innovation with the development of Telefonica Digital.

The ‘Markets and Position’ section of the analysis puts Telefonica’s current global position, risks and opportunities in context, and is now available to download to members of the Telco 2.0 Executive Briefing Service. The rest of the analysis (covering Service Proposition, Value Network, Technology and Finances), and the analyses of the other seven companies initially covered (Vodafone, AT&T, Verizon, Etisalat, Ooredoo [formerly Qtel], Singtel and Axiata) will be published from September 2013.

Key Benefits

  • The report’s highly graphical format makes it extremely easy to digest and reach valuable insights quickly into both Telefonica’s current position and future strategic needs
  • The structure of the analysis allows the reader to rapidly and concisely assimilate the complex picture of Telefonica’s international businesses, risks and opportunities
  • It is underpinned with detailed and sourced numerical and qualitative data

 

Example charts from the report

The report analyses Telefonica’s market share position across markets against their regulatory strength.
Telco 2.0 Transformation Index - Market Positioning Detail

 

It also assesses the economic and demographic make-up of Telefonica’s markets.

Telco 2.0 Transformation Index - Market Analysis Detail Example, Telefonica

The market analyses are consolidated into an overall summary of market positioning by Operating Company, which is further refined into an assessment of strategic approach and operational performance.

Telco 2.0 Transformation Index - Market Share and Profitability Detail

 

Contents

To access the contents of the report, including…

  • Introduction and Methodology
  • Market Position Summary: Economic, Regulatory, Competitive and Customers
  • Summary analysis of growth, GDP, prices and economics of key markets
  • Comparison and contrasts between European and Latin American markets
  • Regulation vs EBITDA margins
  • Mobile revenue growth by market
  • Subscribers and revenues by region
  • Marimekko of Subscribers and Shares in key markets
  • Market Share Vs. Regulation
  • Market Vs. Telefonica Growth by national market
  • Telefonica’s commercial strategy
  • Strength of OTT entrants in Telefonica’s markets
  • Pre-Pay, Post-Pay and Churn by Market
  • Telefonica’s relative brand strength

 

Cloud 2.0: Network Functions Virtualisation (NFV) vs. Software Defined Networking (SDN)

Network Functions Virtualisation

What is Network Functions Virtualisation?

Network Functions Virtualisation (NFV) is an ominous-sounding term but, on examination, it is relatively easy to understand what it is and why it is needed.

If you run a network, whether as an enterprise customer or as a service provider, you will end up with a stack of dedicated hardware appliances performing a variety of functions needed to make the network work or to optimise its performance: routers, application load balancers, Session Border Controllers (SBCs), Network Address Translation (NAT) devices, Deep Packet Inspection (DPI) appliances and firewalls, to pick just a few. Each one of these hardware appliances needs space, power, cooling, configuration, backup, capital investment, replacement as it becomes obsolete, and people who can deploy and manage it, leading to on-going capex and opex. And, with a few exceptions, each performs a single purpose: a firewall is always a firewall, an SBC is always an SBC, and neither can perform the function of the other.

Contrast this model with the virtualised server or cloud computing world where Virtual Machines run on standard PC/Server hardware, where you can add more compute power/storage on an elastic basis should you need it and where network cards are only required when you connect one physical device to another.

What problems does NFV solve?

NFV seeks to solve the problems of dedicated hardware by deploying the network functions in a virtualised PC/server environment. NFV started as a special interest group set up under the auspices of the European Telecommunications Standards Institute (ETSI) by seven of the world’s largest telecoms operators, and has since been joined by additional telecoms companies, equipment vendors and a variety of technology providers.

While NFV can replace many dedicated hardware devices with a virtualised software platform, it remains to be seen whether this approach can deliver the sustained performance and low latency currently provided by specialised hardware appliances for functions such as load balancing, real time encryption or deep packet inspection.

Figure 8 shows ETSI’s vision of NFV.

Figure 8 – ETSI’s vision for Network Functions Virtualisation
Network Virtualisation Approach June 2013

Source: ETSI

Report Contents

  • Network Functions Virtualisation
  • What is Network Functions Virtualisation?
  • What problems does NFV solve?
  • How does NFV relate to Software Defined Networking (SDN)?
  • Relative benefits of NFV and SDN
  • STL Partners and the Telco 2.0™ Initiative

Report Figures

  • Figure 8 – ETSI’s vision for Network Functions Virtualisation
  • Figure 9 – Network Functions Virtualised and managed by SDN
  • Figure 10 – Network Functions Virtualisation relationship with SDN

Software Defined Networking (SDN): A Potential ‘Game Changer’

Summary: Software Defined Networking is a technological approach to designing and managing networks that has the potential to increase operator agility, lower costs, and disrupt the vendor landscape. Its initial impact has been within leading-edge data centres, but it also has the potential to spread into many other network areas, including core public telecoms networks. This briefing analyses its potential benefits and use cases, outlines strategic scenarios and key action plans for telcos, summarises key vendor positions, and why it is so important for both the telco and vendor communities to adopt and exploit SDN capabilities now. (May 2013, Executive Briefing Service, Cloud & Enterprise ICT Stream, Future of the Network Stream).

Figure 1 – Potential Telco SDN/NFV Deployment Phases
Potential Telco SDN/NFV Deployment Phases May 2013

Source: STL Partners

Introduction

Software Defined Networking or SDN is a technological approach to designing and managing networks that has the potential to increase operator agility, lower costs, and disrupt the vendor landscape. Its initial impact has been within leading-edge data centres, but it also has the potential to spread into many other network areas, including core public telecoms networks.

With SDN, networks no longer need to be point-to-point connections between operational centres; rather, the network becomes a programmable fabric that can be manipulated in real time to meet the needs of the applications and systems that sit on top of it. SDN allows networks to operate more efficiently in the data centre as a LAN, and potentially also in Wide Area Networks (WANs).

SDN is new and, like any new technology, this means that there is a degree of hype and a lot of market activity:

  • Venture capitalists are on the lookout for new opportunities;
  • There are plenty of start-ups all with “the next big thing”;
  • Incumbents are looking to quickly acquire new skills through acquisition;
  • And not surprisingly there is a degree of SDN “Washing” where existing products get a makeover or a software upgrade and are suddenly SDN compliant.

However, outside of vendor papers and marketing materials there still isn’t widespread clarity about what SDN is and how it might be used, and there are plenty of important questions to be answered. For example:

  • SDN is open to interpretation and is not an industry standard, so what is it?
  • Is it better than what we have today?
  • What are the implications for your business, whether telcos, or vendors?
  • Could it simply be just a passing fad that will fade into the networking archives like IP Switching or X.25 and can you afford to ignore it?
  • What will be the impact on LAN and WAN design and for that matter data centres, telcos and enterprise customers? Could it be a threat to service providers?
  • Could we see a future where networking equipment becomes commoditised just like server hardware?
  • Will standards prevail?

Vendors are to a degree adding to the confusion. For example, Cisco argues that it already has an SDN-capable product portfolio with Cisco One. It says that its solution is more capable than solutions dominated by open-source based products, because these have limited functionality.

This executive briefing will explain what SDN is, why it is different to traditional networking, look at the emerging market with some likely use cases and then look at the implications and benefits for service providers and vendors.

How and why has SDN evolved?

SDN has been developed in response to the fact that basic networking hasn’t really evolved much over the last 30 plus years, and that new capabilities are required to further the development of virtualised computing to bring innovation and new business opportunities. From a business perspective the networking market is a prime candidate for disruption:

  • It is a mature market that has evolved steadily for many years
  • There are relatively few leading players who have a dominant market position
  • Technology developments have generally focused on speed rather than cost reduction or innovation
  • Low cost silicon is available to compete with custom chips developed by the market leaders
  • There is a wealth of open source software plus plenty of low cost general purpose computing hardware on which to run it
  • Until SDN, no one really took a clean slate view on what might be possible

New features and capabilities have been added to traditional equipment, but they have tended to bloat the software content, increasing the cost of both purchasing and operating the devices. Nevertheless, IP networking as we know it has performed the task of connecting two end points very well; it has been able to support the explosion of growth required by the Internet and by mobile and mass computing in general.

Traditionally each element in the network (typically a switch or a router) builds up a network map and makes routing decisions based on communication with its immediate neighbours. Once a connection through the network has been established, packets follow the same route for the duration of the connection. Voice, data and video have differing delivery requirements with respect to delay, jitter and latency, but in traditional networks there is no overall picture of the network – no single entity responsible for route planning, or ensuring that traffic is optimised, managed or even flows over the most appropriate path to suit its needs.

One of the significant things about SDN is that it takes away the independence, or autonomy, of every networking element, removing its ability to make network routing decisions. The responsibility for establishing paths through the network, their control and their routing is placed in the hands of one or more central network controllers. The controller is able to see the network as a complete entity and manage its traffic flows, routing, policies and quality of service, in essence treating the network as a fabric and then attempting to get maximum utilisation from that fabric. SDN controllers generally offer external interfaces through which external applications can control and set up network paths.
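Because the controller holds a complete map, it can compute end-to-end paths globally rather than hop by hop. The sketch below uses Dijkstra’s shortest-path algorithm over a toy topology as a stand-in for the path computation a controller might perform; the topology and link costs are illustrative, and real controllers weigh many more factors than a single cost metric:

```python
import heapq

def shortest_path(graph, src, dst):
    """Dijkstra over the controller's complete network map.
    graph: {node: {neighbour: link_cost}}. Returns (cost, path)."""
    queue, seen = [(0, src, [src])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph[node].items():
            if nxt not in seen:
                heapq.heappush(queue, (cost + w, nxt, path + [nxt]))
    return None  # no path exists

# Toy topology the controller might hold (link costs are illustrative)
net = {"A": {"B": 1, "C": 4}, "B": {"A": 1, "C": 1, "D": 5},
       "C": {"A": 4, "B": 1, "D": 1}, "D": {"B": 5, "C": 1}}
print(shortest_path(net, "A", "D"))  # (3, ['A', 'B', 'C', 'D'])
```

The key contrast with traditional routing is that this computation sees every link at once, so it can avoid the expensive A–C and B–D links that a purely local decision might take.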

There has been a growing demand to make networks programmable by external applications – data centres and virtual computing are clear examples of where it would be desirable to deploy not just the virtual computing environment, but all the associated networking functions and network infrastructure, from a single console. With no common control point, the only way of providing interfaces to external systems and applications is to place agents in the networking devices and ask external systems to manage each networking device individually. This kind of architecture has difficulty scaling, creates lots of control traffic that reduces overall efficiency, and may end up with multiple applications trying to control the same entity; it is therefore fraught with problems.

Network Functions Virtualisation (NFV)

It is worth noting that an initiative complementary to SDN, called Network Functions Virtualisation (NFV), was started in 2012. This complicated-sounding initiative was launched by the European Telecommunications Standards Institute (ETSI) in order to take functions that sit on dedicated hardware – load balancers, firewalls, routers and other network devices – and run them on virtualised hardware platforms, lowering capex, extending their useful life and reducing operating expenditure. You can read more about NFV later in the report on page 20.

In contrast, SDN makes it possible to program or change the network to meet a specific time-dependent need and establish end-to-end connections that meet specific criteria. The SDN controller holds a map of the current network state and of the requests that external applications are making on the network; this makes it easier to get the best use from the network at any given moment, carry out meaningful traffic engineering and work more effectively with virtual computing environments.

What is driving the move to SDN?

The Internet and the world of IP communications have seen continuous development over the last 40 years. There has been huge innovation and strict control of standards through the Internet Engineering Task Force (IETF). Because of the ad-hoc nature of its development, there are many different functions catering for all sorts of use cases. Some overlap, some are obsolete, but all still have to be supported and more are being added all the time. This means that the devices that control IP networks and connect to the networks must understand a minimum subset of functions in order to communicate with each other successfully. This adds complexity and cost because every element in the network has to be able to process or understand these rules.

But the system works, and it works well. For example, when we open a web browser and a session to stlpartners.com, our browser and our PC initially have no knowledge of how to get to STL’s web server, yet usually within half a second or so the STL Partners website appears. What actually happens can be seen in the figure below. Our PC uses a variety of protocols to connect first to a gateway (1) on our network and then to a public name server (2 & 3) in order to query the stlpartners.com IP address. The PC then sends a connection to that address (4) and assumes that the network will route packets of information to and from the destination server. The process is much the same whether using public WANs or private Local Area Networks.

Figure 2 – Process of connecting to an Internet web address

Source: STL Partners
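The lookup-then-connect sequence described above can be sketched in a few lines of Python; the function name and structure are illustrative, not taken from any particular implementation:

```python
import socket

def resolve_then_connect(host, port=80, connect=False):
    """Mirror the two-stage process in Figure 2: first ask a name
    server for the host's IP address, then open a TCP connection."""
    # Steps 2 & 3: DNS query via the system's configured resolver
    ip = socket.gethostbyname(host)
    if connect:
        # Step 4: hand a packet stream to the network and trust it
        # to route packets to the destination server and back
        with socket.create_connection((ip, port), timeout=5) as s:
            return ip, s.getpeername()
    return ip, None

# e.g. resolve_then_connect("stlpartners.com", connect=True)
```

Note that the PC never learns the route itself; it only learns the destination address and trusts the routers in between to deliver the packets.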

The Internet is also highly resilient; it was developed to survive a variety of network outages including the complete loss of sub networks. Popular myth has it that the US Department of Defence wanted it to be able to survive a nuclear attack, but while it probably could, nuclear survivability wasn’t a design goal. The Internet has the ability to route around failed networking elements and it does this by giving network devices the autonomy to make their own decisions about the state of the network and how to get data from one point to any other.

While this autonomy is of great value in unreliable networks, which is what the Internet looked like during its evolution in the late 1970s and early 1980s, today's networks comprise far more robust elements and more reliable links. The upshot is that networks typically operate at a sub-optimal level: unless there is an outage, routes and traffic paths are mostly static and last for the duration of the connection. If an outage occurs, the routers in the network decide amongst themselves how best to re-route the traffic, each making its own decisions about traffic flow and prioritisation based on its individual view of the network. In fact, most routers and switches are not aware of the network in its entirety, just the adjacent devices they are connected to and the information those neighbours pass on about the networks and devices they in turn are connected to. It can therefore take some time for a network to converge and stabilise, as we saw in the Internet outages that affected Amazon, Facebook, Google and Dropbox in October 2012.

The diagram in Figure 3 shows a simple router network. Router A knows about the networks on routers B and C because it is connected directly to them and they have informed A about their networks. B and C have also informed A that they can reach the networks and devices on router D. You can see from this model that there is no overall picture of the network and no single device is able to make network-wide decisions. To connect a device on a network attached to A to a device on a network attached to D, A must make a decision based on what B or C tell it.

Figure 3 – Simple router network

Source: STL Partners
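The neighbour-by-neighbour knowledge described above can be illustrated with a small distance-vector-style Python sketch; the topology and link costs are hypothetical, loosely following the figure:

```python
# Hypothetical topology from Figure 3: A connects to B and C, which
# both connect to D. A only sees what its neighbours advertise.
adverts = {
    "B": {"net_B": 1, "net_D": 2},   # B advertises a route to D's network
    "C": {"net_C": 1, "net_D": 3},   # C advertises the same network at a higher cost
}

def choose_next_hop(adverts, destination):
    """A picks whichever neighbour advertises the lowest cost; it
    never sees the topology beyond what B and C report."""
    return min(
        ((nbr, costs[destination]) for nbr, costs in adverts.items()
         if destination in costs),
        key=lambda item: item[1],
        default=None,
    )

print(choose_next_hop(adverts, "net_D"))  # ('B', 2)
```

If B's advertisement disappears, A simply recomputes from whatever C still advertises; nothing in the network holds the complete picture, which is exactly why convergence takes time.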

This model makes it difficult to build large data centres with thousands of Virtual Machines (VMs) and to offer customers dynamic service creation, because the network only understands physical devices and does not easily allow each VM to have its own range of IP addresses and other IP services. Ideally, you would configure a complete virtual system consisting of virtual machines, load balancing, security, network control elements and network configuration from a single management console, and these abstract functions would then be mapped onto physical computing and networking resources. VMware has coined the term 'Software Defined Data Centre' (SDDC) to describe a system that allows all of these elements and more to be controlled by a single suite of management software.

Moreover, returning to the fact that every networking device needs to understand a raft of Internet Requests For Comments (RFCs): all the code supporting these RFCs in switches and routers costs money. Traditional routers and switches require high-performance processors and memory to inspect and process traffic, even in MPLS networks. Cisco IOS supports over 600 RFCs and other standards. This adds cost and complexity, and creates compatibility, obsolescence and power/cooling burdens.

SDN takes a fresh approach to building networks based on the technologies available today: it places the intelligence centrally on scalable compute platforms and leaves the switches and routers as relatively dumb packet-forwarding engines. The control platforms still have to support all the standards, but the platforms the controllers run on are far more powerful than the processors in traditional networking devices and, more importantly, the controllers can manage the network as a single fabric rather than each element making its own potentially sub-optimal decisions.
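As a rough illustration of this split, the following Python sketch models a dumb switch holding a match/action flow table and a controller that programs it. The class and method names are illustrative and do not correspond to any real controller's API:

```python
# Sketch of the SDN split: the controller holds the network map and
# pushes simple match -> action rules; the switch just looks packets
# up in its flow table and makes no decisions of its own.
class Switch:
    def __init__(self):
        self.flow_table = {}               # destination -> action

    def install(self, match, action):
        self.flow_table[match] = action

    def forward(self, packet):
        # Dumb data plane: on a table miss, punt to the controller
        return self.flow_table.get(packet["dst"], "send_to_controller")

class Controller:
    def __init__(self, topology):
        self.topology = topology           # central map of the network

    def program_path(self, switch, dst, out_port):
        # Central decision, pushed down as a simple forwarding rule
        switch.install(dst, f"output:{out_port}")

sw = Switch()
ctl = Controller(topology={"sw1": ["p1", "p2"]})
ctl.program_path(sw, "10.0.0.5", "p2")
print(sw.forward({"dst": "10.0.0.5"}))   # output:p2
print(sw.forward({"dst": "10.0.0.9"}))   # send_to_controller
```

The "send_to_controller" case is the key design choice: unknown traffic is escalated to the central controller, which can decide with full knowledge of the fabric rather than a single device's partial view.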

As one proof point that SDN works, in early 2012 Google announced that it had migrated its live data centres to a Software Defined Network, using switches it designed and built from off-the-shelf silicon, with OpenFlow providing the control path to a Google-designed controller. Google claims many benefits from this system, including better utilisation of its compute power. At the time, Google stated that it would have liked to purchase OpenFlow-compliant switches, but none were available that suited its needs. Since then, new vendors such as Big Switch and Pica8 have entered the market, delivering relatively low-cost OpenFlow-compliant switches.

To read the Software Defined Networking report in full, including the following sections detailing additional analysis…

  • Executive Summary including detailed recommendations for telcos and vendors
  • Introduction (reproduced above)
  • How and why has SDN evolved? (reproduced above)
  • What is driving the move to SDN? (reproduced above)
  • SDN: Definitions and Advantages
  • What is OpenFlow?
  • SDN Control Platforms
  • SDN advantages
  • Market Forecast
  • STL Partners’ Definition of SDN
  • SDN use cases
  • Network Functions Virtualisation
  • What are the implications for telcos?
  • Telcos’ strategic options
  • Telco Action Plans
  • What should telcos be doing now?
  • Vendor Support for OpenFlow
  • Big Switch Networks
  • Cisco
  • Citrix
  • Ericsson
  • FlowForwarding
  • HP
  • IBM
  • Nicira
  • OpenDaylight Project
  • Open Networking Foundation
  • Open vSwitch (OVS)
  • Pertino
  • Pica8
  • Plexxi
  • Tellabs
  • Conclusions & Recommendations

…and the following figures…

  • Figure 1 – Potential Telco SDN/NFV Deployment Phases
  • Figure 2 – Process of connecting to an Internet web address
  • Figure 3 – Simple router network
  • Figure 4 – Traditional Switches with combined Control/Data Planes
  • Figure 5 – SDN approach with separate control and data planes
  • Figure 6 – ETSI’s vision for Network Functions Virtualisation
  • Figure 7 – Network Functions Virtualised and managed by SDN
  • Figure 8 – Network Functions Virtualisation relationship with SDN
  • Table 1 – Telco SDN Strategies
  • Figure 9 – Potential Telco SDN/NFV Deployment Phases
  • Figure 10 – SDN used to apply policy to Internet traffic
  • Figure 11 – SDN Congestion Control Application

 

Digital Commerce: Time to redefine the Mobile Wallet

Summary: The ‘Mobile/Digital Wallet’ needs to evolve to support authentication, search and discovery, as well as payments, vouchers, tickets and loyalty programmes. Moreover, consumers will want to be able to tailor the functionality of this “commerce assistant” or “commerce agent” to fit with their own interests and preferences. Key findings and next steps from the Digital Commerce stream of our Silicon Valley 2013 brainstorm. (April 2013, Executive Briefing Service, Dealing with Disruption Stream.)



Below are the high-level analysis and detailed contents from a 35 page Telco 2.0 Briefing Report that can be downloaded in full in PDF format by members of the Telco 2.0 Executive Briefing service and the Dealing with Disruption Stream  here. Digital Commerce strategies and the findings of this report will also be explored in depth at the EMEA Executive Brainstorm in London, 5-6 June, 2013. Non-members can find out more about subscribing here, or to find out more about this and/or the brainstorm by emailing contact@telco2.net or calling +44 (0) 207 247 5003.


 



Introduction

Part of the New Digital Economics Executive Brainstorm 2013 series, the Digital Commerce 2.0 event took place at the InterContinental Hotel, San Francisco on the 20th March and looked at how to get the mobile commerce flywheel moving, how to digitise local commerce, how to improve digital advertising and how to effectively leverage customer data and personal data. The Brainstorm considered how to harness telco assets and capabilities, as well as those of banks and payment networks, to deliver Digital Commerce 2.0.

Analysis: Time to redefine the wallet?

The Executive Brainstorm uncovered widespread confusion and dissatisfaction with the concept of a digital or mobile wallet. Some executives feel that a wallet, with its connotations of a highly personal item that is controlled entirely by the consumer and used primarily for transactions, may be the wrong term. There is a view that the concept of a digital wallet may have to evolve into a more multi-faceted application that supports authentication, search and discovery, as well as payments, vouchers, tickets and loyalty programmes.

Moreover, consumers will likely want to be able to tailor the functionality of this “commerce assistant” or “commerce agent” to fit with their own interests and preferences, rather than having to use an inflexible off-the-shelf application. This gateway application may also act as a personal cloud/locker service, providing access to the individual’s media and content, as well as enabling them to control their privacy settings. In other words, ultimately, consumers may want an assistant or agent that amalgamates the personalised discovery services offered by apps, such as Google Now, online media services, such as iCloud, and the traditional functions of a wallet, such as payments, receipts, coupons and loyalty programmes.

Business model battles

The Brainstorm confirmed that the digital commerce market continues to be held back by the slow and familiar dance between the established interests of banks/payment networks, telcos and retailers. Designing business models that sufficiently incentivise each partner is tough: big retailers, for example, are likely to resist digital commerce solutions that don't address their dissatisfaction with transaction fees – indeed, there was some excitement about digital commerce solutions that work around the major payment networks' interchange systems.

Some of the participants in the Brainstorm held strongly entrenched views about which players can contribute to growth in digital commerce and should therefore benefit most from that growth. The arguments boiled down to:

  • The banking ecosystem believes it is well placed because transactions must be processed by entities that hold banking licences and comply with know-your-customer (KYC) regulations.
  • Telcos believe that, as digital commerce-related data travels over their networks, they will understand the market better than other players.
  • Retailers believe that they have the customer relationships and that digital commerce offers opportunities to strengthen those relationships and reduce the costs of transactions.

The length and complexity of the digital commerce value chain raise significant questions about whether one entity could and should own the customer relationship and manage customer care across the whole experience. Moreover, there may be a disconnect between elements of the value chain and the overall value proposition. For example, individual retailers may wish to offer fully-customised digital commerce experiences delivered through their own branded apps, but consumers may not want the complexity of the existing marketplace, in which they are asked to register for and carry multiple loyalty cards, to continue in an increasingly digitised world.

While the traditional players jostle for the best positions in the value chain, the door is wide open for new entrants with radically disruptive business models. Although telcos have the customer data to play a pivotal role in digital commerce, other players will work around them unless telcos are prepared to move quickly and partner on equitable terms. In many cases, telcos (and other would-be digital commerce brokers) may have to compromise on margins to seed the market and ultimately gain scale – small merchants (the long tail), whose marketing today is highly inefficient, have a greater incentive than large retailers to adopt such solutions. Participants in the Silicon Valley Brainstorm thought that either established Internet players or a start-up would ultimately win out over the banks and telcos in local commerce.

Figure 7 – Participants’ views on likely winners in ‘local’ digital commerce

Consumers are most likely to adopt digital commerce services that offer convenience and breadth. Therefore, such services need to act as open and flexible brokers, which enable a wide range of merchants to use application programming interfaces (APIs) to plug in vouchers and loyalty schemes quickly and easily.
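As a purely hypothetical illustration of how lightweight such a plug-in API could be for a merchant, the sketch below builds the body of a voucher-registration request; the fields, names and values are invented for illustration and do not describe any real wallet platform:

```python
import json

def register_voucher(merchant_id, offer):
    """Build the request body a small merchant might POST to a
    broker's (hypothetical) voucher-registration endpoint."""
    return json.dumps({
        "merchant": merchant_id,
        "type": "voucher",
        "discount_pct": offer["discount_pct"],
        "expires": offer["expires"],
    })

body = register_voucher("coffee-shop-42",
                        {"discount_pct": 10, "expires": "2013-12-31"})
print(json.loads(body)["merchant"])  # coffee-shop-42
```

The point of the sketch is the low barrier to entry: if onboarding an offer is a single structured request rather than a bespoke integration, the long tail of small merchants can plug in quickly.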

Mobile advertising – still very immature

Immature and messy, the mobile advertising market is still a long way from being as structured as, for instance, television advertising, in terms of standardised metrics for buyers and an efficient procurement process. The Brainstorm highlighted the profusion of different technologies and platforms that is making the mobile advertising market highly fragmented and very resource-intensive for media buyers. In many cases, the advertising industry may be struggling to differentiate between mobile networks, mobile users and mobile devices. For example, a consumer using a tablet on a sofa may be seeing the same adverts as a smartphone user travelling to work on a train.

In essence, the creatives working in advertising agencies are not certain what messages and formats work on a mobile screen, buyers don't have reliable ROI data, and the advertising networks continue to struggle to deliver precise targeting, stymied by multiple barriers such as privacy fears, walled gardens and bandwidth constraints. As a result, there is widespread dissatisfaction with mobile advertising among both media buyers and consumers. The mobile advertising market needs robust tools and processes – standardised, proven formats and reliable, trusted metrics – that will enable brands to purchase advertising at scale and with confidence.

Some media buyers are looking for solutions that make the delivery of digital advertising more transparent to consumers, so they have a clearer understanding of why they are seeing a particular advert.

To address these issues, telcos, looking to broker advertising, need to create better platforms that are easy for media buyers to access, offer precise targeting and provide transparent metrics that are straightforward to monitor. Despite the formation of telco marketing and advertising joint ventures in some markets, such as the U.K., some advertising executives believe telcos don’t see a big enough revenue opportunity to build these platforms.

Instead of brand building and customer acquisition, which is the traditional use of mass advertising, it seems likely that the mobile channel will be used primarily for customer loyalty and retention. So-called active advertising (advertising that is designed to enable the individual to complete a specific task) may be well suited to mobile devices, which people typically use to get something done. As attention spans are short and screen space is limited in the mobile medium, the advertising value chain will need to change its mindset to put the needs of the consumer, rather than the brand, front and centre.

Big data – how to monetize?

The Brainstorm reinforced the sense that big data/personal data has the potential to create exceptional insights and disruptive new business models. But most people working in this space only have a high-level, theoretical view of how this might happen, rather than a collection of compelling case studies and use cases. Finding big data projects offering a respectable return on investment is going to be a hit and miss affair, requiring an open mind and the patience to experiment.

Although self-authenticated data could potentially make advertising and marketing more efficient, it may also increase transparency for consumers: The Internet has given consumers more control and is driving deflation in many sectors. The rise of personal data could have negative implications for companies’ profit margins as consumers use vendor relationship management systems to systematically secure the best price.

Many start-ups still seem to be pursuing advertising-funded business models, but big data and personal data business models may depend on a different approach. The questions to ask are: how do you fund a search engine that is not ad-funded, and can social networks be anything other than ad-funded? Computational contracts, which machines can execute and people can actually understand, could be part of the answer. Rather than trying to infer interests and movements, a social network might explicitly ask: "If you give me your location and the brands you like, I'll give you two coupons a day." This is essentially the Placecast model, which seems to be gaining traction in some markets. In any case, telcos and banks could and should use transparent and user-friendly privacy policies as a competitive weapon against Facebook and Google, which currently dominate the online advertising market.
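A computational contract of this kind could be as simple as explicit terms held as data plus a few lines of enforcement logic that a person can actually read. The sketch below is purely illustrative:

```python
# Toy rendering of the coupon bargain quoted above: the terms are
# explicit data, and the enforcement code is short enough to audit.
contract = {
    "user_shares": ["location", "liked_brands"],
    "broker_gives": {"coupons_per_day": 2},
}

def coupons_due(contract, shared_fields, already_sent_today):
    """Grant coupons only if the user shared everything the
    contract asks for, and never exceed the daily quota."""
    if not all(f in shared_fields for f in contract["user_shares"]):
        return 0
    quota = contract["broker_gives"]["coupons_per_day"]
    return max(0, quota - already_sent_today)

print(coupons_due(contract, {"location", "liked_brands"}, 0))  # 2
print(coupons_due(contract, {"location"}, 0))                  # 0
```

Because both sides can inspect the same terms, the exchange of data for value is explicit rather than inferred, which is the transparency advantage the text describes.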

The concept of companies interacting with individuals through the web presence of their objects, such as their car, their bike or their pet, seems sound. Both individuals and companies could benefit from a two-way flow of information around these objects. For example, a consumer with a specific make of printer or camera could benefit from personalised and timely discounts on accessories, such as cartridges and lenses.

Next steps for STL Partners

We will:

  • Continue to research and explore ‘Digital Commerce’ at our Executive Brainstorms, with particular emphasis on practical steps to create the Digital Wallet, enable ‘SoMoLo’, and the key role of personal data and trust frameworks;
  • Look further into the needs and applications of ‘Big Data’ in this field, as well as continuing our involvement in the World Economic Forum’s (WEF) work on Trust Networks for personal data;
  • Publish further research on the business case for personal data, and a full Strategy Report on the Digital Commerce area.


To read the note in full, including the following sections detailing additional analysis…

  • Closing the loop between advertising and payments
  • First stimulus presentation
  • Second stimulus presentation
  • Innovation showcase
  • Brainstorm
  • Key takeaways
  • Advertising & Marketing: Radical Game Change Ahead
  • First and Second stimulus presentations
  • Final stimulus presentation
  • Brainstorm
  • Key takeaways
  • Session 3: Big Data – Exploiting the New Oil for the New Economy
  • Stimulus Speakers and Panellists
  • Stimulus presentations
  • Voting, feedback, discussions
  • Key takeaways

…and the following figures…

  • Figure 1 – Customer Data is at the centre of Digital Commerce
  • Figure 2 – What will North American consumers value most from digital commerce?
  • Figure 3 – Leading players’ strengths and weaknesses upstream and downstream
  • Figure 4 – The key elements of the digital commerce flywheel
  • Figure 5 – Vast majority of commerce is still offline
  • Figure 6 – Linking location-based offers to payment cards
  • Figure 7 – Participants’ views on likely winners in ‘local’ digital commerce
  • Figure 8 – Mobile ad spend doesn’t reflect the time people spend in this medium
  • Figure 9 – What does the advertising industry need to do to stay relevant?
  • Figure 10 – Why personal data isn’t like oil
  • Figure 11 – A strawman process for personal data
  • Figure 12 – A decentralised architecture for the Internet of My Things
  • Figure 13 – Kynetx: companies can connect through ‘things’

Members of the Telco 2.0 Executive Briefing Subscription Service and the Dealing with Disruption Stream can download the full 35 page report in PDF format here. Non-Members, please subscribe here. Digital Commerce strategies and the findings of this report will also be explored in depth at the EMEA Executive Brainstorm in London, 5-6 June, 2013. For this or any other enquiries, please email contact@telco2.net / call +44 (0) 207 247 5003.

Background & Further Information

Produced and facilitated by business innovation firm STL Partners, the Silicon Valley 2013 event brought together 150 specially-invited senior executives from across the communications, media, retail, banking and technology sectors, including:

  • Apigee, Arete Research, AT&T, ATG, Bain & Co, Beecham Research, Blend Digital Group, Bloomberg, Blumberg Capital, BMW, Brandforce, Buongiorno, Cablelabs, CenturyLink, Cisco, CITI Group, Concours Ventures, Cordys, Cox Communications, Cox Mobile, CSG International, Cycle Gear, Discovery, DoSomething.Org, Electronic Transactions Association, EMC Corporation, Epic, Ericsson, Experian, Fraunhofer USA, GE, GI Partners, Group M, GSMA, Hawaiian Telecom, Huge Inc, IBM, ILS Technology, IMI Mobile Europe, Insight Enterprises, Intel, Ketchum Digital, Kore Telematics, Kynetx, MADE Holdings, MAGNA Global, Merchant Advisory Group, Message Systems, Microsoft, Milestone Group, Mimecast, MIT Media Lab, Motorola, MTV, Nagra, Nokia, Oracle, Orange, Panasonic, Placecast, Qualcomm, Rainmaker Capital, ReinCloud, Reputation.com, SalesForce, Samsung, SAP, Sasktel, Searls Group, Sesame Communications, SK Telecom Americas, Sprint, Steadfast Financial, STL Partners/Telco 2.0, SystemicLogic Ltd., Telephone & Data Systems, Telus, The Weather Channel, TheFind Inc, T-Mobile USA, Trujillo Group LLC, UnboundID, University of California Davis, US Cellular Corp, USC Entertainment Technology Center, Verizon, Virtustream, Visa, Vodafone, Wavefront, WindRiver, Xtreme Labs.

Around 40 of these executives participated in the ‘Digital Commerce’ session.

The Brainstorm used STL’s unique ‘Mindshare’ interactive format, including cutting-edge new research, case studies, use cases and a showcase of innovators, structured small group discussion on round-tables, panel debates and instant voting using on-site collaborative technology.

We’d like to thank the sponsors of the Brainstorm:
Silicon Valley 2013 Sponsors