Enterprise Wi-Fi 6/7 is here to stay: 5G is not enough

Overview of Wi-Fi 6/7 for enterprises

This report is not a traditional analyst report on Wi-Fi covering market segments, shares and forecasts. Numerous peer organisations have a long tradition of quantitative market modelling and forecasting; we are not intending to compete with them. For illustration purposes, we have used a couple of charts presented by Chris DePuy of 650 Group at a recent Wi-Fi Now conference, during a joint panel session with the author of this report, and reproduced here with his kind permission.

Instead, this report looks more at the strategic issues around Wi-Fi and the enterprise – and the implications and recommendations for CIOs and network architects in corporate user organisations, opportunities for different types of CSPs, important points for policymakers and regulators, plus a preview of the most important technical innovations likely to emerge in the next few years. There may be some differences in stance or opinion compared to certain other STL reports.

The key themes covered in this report are:

    • Background to enterprise Wi-Fi: key uses, channels and market trends
    • Understanding “Wi-Fi for verticals”
    • Decoding the changes and new capabilities that come with Wi-Fi 6, 6E and 7
    • How and where public and private 5G overlaps or competes with Wi-Fi
    • CSP opportunities in enterprise Wi-Fi
    • Wi-Fi and regulation – and the importance of network diversity.


Wi-Fi’s background and history

Today, most readers will first think of Wi-Fi as prevalent in the home and across consumer devices such as smartphones, laptops, TVs, game consoles and smart speakers. In total, there are over 18 billion Wi-Fi devices in use, with perhaps 3-4bn new products shipping annually.

Yet the history of Wi-Fi – and its underlying IEEE 802.11 technology standards – is anchored in the enterprise.

The earliest incarnations of “wireless ethernet” in the 1990s were in sectors like warehousing and retail, connecting devices such as barcode scanners and point-of-sale terminals. Early leaders around 2000-2005 were companies such as Symbol, Proxim, 3Com and Lucent, supplying both industrial applications and (via chunky plug-in “PCMCIA” cards) laptops, mostly used by corporate employees.

During the 2003-2010 period, Wi-Fi exploded for both enterprises and (with the help of Apple and Intel) consumer laptops, and eventually early smartphones based on Windows and Symbian operating systems, then later iOS and Android.

The corporate world in “carpeted offices” started deploying more dedicated, heavyweight switched systems designed for dense networks of workers at desks, in meeting rooms and in cubicles. Venue Wi-Fi grew quickly as well, with full coverage becoming critical in locations such as airports and hotels, both for visitors and for staff and some connected IT systems. A certain amount of outdoor Wi-Fi was deployed, especially for city centres, but gained little traction as it coincided with broader coverage (and falling costs) of cellular data.

A new breed of enterprise Wi-Fi vendors emerged – and then quickly became consolidated by major networking and IT providers. This has occurred in several waves over the last 20 years. Cisco bought Airespace (and later Meraki and others), Juniper bought Trapeze and Mist Systems, and HP (later HPE) acquired Aruba. There have also been some telecom-sector acquisitions of Wi-Fi vendors, with Commscope acquiring Ruckus and Ericsson buying BelAir.

While telcos have had some important roles in public or guest Wi-Fi deployments, including working with enterprises in sectors such as cafes, retail, and transport, they have had far less involvement with Wi-Fi deployed privately in enterprise offices, warehouses, factories, and similar sites. For the most part that has been integrated with the wired LAN infrastructure and broader IT domain, overseen by corporate IT/network teams and acquired via a broad array of channels and systems integrators. For industrial applications, many solution providers integrate Wi-Fi (and other wireless mechanisms) directly into machinery and automation equipment.

Looking to the future, enterprise Wi-Fi will coexist with both public and private 5G (including systems or perhaps slices provided by telcos), as well as various other wireless and fibre/fixed connectivity modes. Some elements will converge while others will stay separate. CSPs should “go with the grain” of enterprise networks and select/integrate/operate the right tools for the job, rather than trying to force-fit their preferred technical solution.

Roles and channels for enterprise Wi-Fi

Today, there are multiple roles for Wi-Fi in a business or corporate context. The most important include:

  • Traditional use in offices, both for normal working areas and shared spaces such as meeting and conference rooms. There is often a guest access option.
  • Small businesses use Wi-Fi extensively, as many workers rely on laptops and similar devices, plus vertical-specific endpoints such as payment terminals. Often, they will obtain Wi-Fi capabilities along with their normal retail business broadband connection from a service provider. This may include various types of guest-access option. Common use of shared buildings such as multi-tenant office blocks or retail malls means there may be a reliance on the landlord or site operator for network connectivity.
  • Working from home brings a wide range of new roles for Wi-Fi, especially where there is an intersection of corporate applications and security, with normal home and consumer demand. A growing range of solutions targets this type of converged situation.
  • Large visitor-led venues such as sports stadia, hotels and resorts are hugely important for the Wi-Fi industry. They often have guests with very high expectations of Wi-Fi reliability, coverage, and performance – and also often use the infrastructure themselves for staff, displays and various IoT and connected systems.
  • Municipal and city authorities have gone through two or more rounds of Wi-Fi deployments. Initial 2010-era visions for connectivity often stalled because of a mismatch between usage at the time (mostly on laptops, indoors) and coverage (mostly outdoors). Since then, the rise of smartphone ubiquity, plus a greater array of IoT and smart city devices has made city-centre Wi-Fi more useful again. Increasingly, it is being linked to 5G small cell deployments, metro fibre networks – and made more usable with easier roaming / logon procedures. Some local authorities’ scope also covers Wi-Fi use within education and healthcare settings.
  • Public Wi-Fi hotspots overlap with various enterprise sectors, most notably transport, cafes/restaurants and hospitality. Where organisations have large venues or multiple sites, such as a chain of cafes or retail outlets, there is likely to be some wider enterprise proposition involved.
  • The transport industry is a hugely important sector for enterprise Wi-Fi solutions. Vehicles themselves (buses, planes, trains, taxis) require connectivity for passengers, while transport hubs (airports, stations, etc.) have huge requirements for ease-of-access and performance for Wi-Fi.
  • Wi-Fi technology is also widely used as the basis for fixed-wireless access over medium-to-wide areas. Sometimes using vendor-specific enhancements, it is common to use unlicensed spectrum and 802.11-based networks for connectivity to rural businesses or specific fixed assets. A new version of Wi-Fi technology (802.11ah HaLow) also allows low-power wide area applications for sensors and other IoT devices, which can potentially compete against LoRa and 4G NB-IoT, although it is very late to the market.
  • Niche applications for Wi-Fi technology also exist, for example backhauling other wireless technologies such as Bluetooth, for in-building sensing and automation. There are also emerging propositions such as using high-capacity 60GHz Wi-Fi to replace fibres and cabling inside buildings, especially for rapid installation or in environments where drilling holes in walls requires permits.

Enterprise Wi-Fi solutions cover a broad range of contexts and uses

Given the range of Wi-Fi enterprise market sectors and use cases, it is unsurprising that there are also multiple ways for companies and organisations to obtain the infrastructure, as well as operate the connectivity functions or services.

Some of the options include:

  • Self-provision: Many large organisations will source, install, and operate their own Wi-Fi networks via their IT and networking teams, as they do for fixed LAN and sometimes WAN equipment. They may rely on vendor or outsourced support for specific tasks such as wiring installation.
  • Broadband CSP: Especially for smaller sites, Wi-Fi is often obtained alongside business broadband connectivity, perhaps from an integrated router managed by the ISP.
  • Enterprise MSP: Larger businesses may use dedicated enterprise-grade service providers for their Internet connections, UCaaS services, SD-WAN / SASE networks and so on. These organisations may also provide on-site Wi-Fi installation and management services, or work with sub-contractors to deliver them.
  • Solution providers: Various IT and OT systems, such as building management systems or industrial automation solutions, may come with Wi-Fi embedded into the fabric of the proposition.
  • Managed Wi-Fi specialists: Especially for visitor-centric locations like transport hubs, Wi-Fi coverage and operation may be outsourced to a third party managed service operator. They will typically handle the infrastructure (and any upgrades), authentication, security and backhaul on a contractual basis. They will also likely provide staff/IoT connections as well as guest access.
  • Network integrators: Enterprises may obtain Wi-Fi installations as a one-off project from a network specialist (perhaps with separate maintenance / upgrade agreements). This may well be combined with fixed LAN infrastructure and other relevant elements. This may also be a channel for hybrid Wi-Fi / private cellular in future.
  • Vertical specialists: Various industries such as hotels, aviation, hospitals, mining and so on will often have dedicated companies catering to sector-specific needs, standards, regulations, or business practices. They may tie together various other technology elements, such as IoT connections, asset tracking, voice communications and so forth, using Wi-Fi where appropriate.
  • In-building wireless specialists: Various companies specialise in both indoor cellular coverage systems and Wi-Fi. Often linked to tower companies or neutral-host business models.

Table of Contents

  • Executive Summary
  • Introduction
    • Structure and objectives of this report
    • Background and history
    • Roles and channels for enterprise Wi-Fi
    • Recent enterprise Wi-Fi market trends
    • Note on terminology
  • The evolution of “Wi-Fi for verticals”
    • Understanding Wi-Fi “verticals”
    • Existing vertical-specific Wi-Fi solutions
    • Wi-Fi in industry verticals – building ecosystems
  • Wi-Fi 6, 6E & 7: Rapid cadence or confusion?
    • Continual evolution of Wi-Fi capabilities: 6, 6E, 7
    • Wi-Fi 7 may be a game-changer for enterprise
    • The long-term future: Wi-Fi 8 and beyond
    • Other Wi-Fi variants: 60GHz and HaLow
  • Where do Wi-Fi and 5G overlap competitively?
    • Does private 5G change the game?
    • Convergence / divergence
  • The political and regulatory dimensions of enterprise wireless
    • 6GHz spectrum
  • CSPs and enterprise Wi-Fi
    • CSPs and large enterprise / industrial Wi-Fi
    • Wi-Fi service value-adds
    • Wi-Fi and edge compute
  • Conclusions


Vendors vs. telcos? New plays in enterprise managed services

Digital transformation is reshaping vendors’ and telcos’ offer to enterprises

What does ‘digital transformation’ mean?

The enterprise market for telecoms vendors and operators is being radically reshaped by digital transformation. This transformation is taking place across all industry verticals, not just the telecoms sector, whose digital transformation – desirable or actual – STL Partners has forensically mapped out for several years now.

The term ‘digital transformation’ is so familiar that it breeds contempt in some quarters. Consequently, it is worth taking a while to refresh our thinking on what ‘digital transformation’ actually means. This will in turn help explain how the digital needs and practices of enterprises are impacting on vendors and telcos alike.

The digitisation of enterprises across all sectors can be described as part of a more general social, economic and technological evolution toward ever more far-reaching use of software-, computing- and IP-based modes of: interacting with customers and suppliers; communicating; networking; collaborating; distributing and accessing media content; producing, marketing and selling goods and services; consuming and purchasing those goods and services; and managing money flows across the economy. Indeed, one definition of the term ‘digital’ in this more general sense could simply be ‘software-, computing- and IP-driven or -enabled’.

For the telecoms industry, the digitisation of society and technology in this sense has meant, among other things, the decline of voice (fixed and mobile) as the primary communications service, although it is still the single largest contributor to turnover for many telcos. Voice mediates an ‘analogue’ economy and way of working, in the sense that voice is a form of ‘physical’ communication between two or more persons. In addition, the activity and means of communication (i.e. the actual telephone conversation to discuss project issues) is a process and work task separate from the other work tasks, in different physical locations, that it helps to co-ordinate. By contrast, in an online collaboration session, the communications activity and the work activity are combined in a shared virtual space: the digital service allows for greater integration and synchronisation of tasks previously carried out by physical means, in separate locations, and in a less inherently co-ordinated manner.

Similarly, data in the ATM and Frame Relay era was mainly a means to transport a certain volume of information or files from one workplace to another, without joining those workplaces together as one: the workplaces remained separate, both physically and in terms of the processes and work activities associated with them. The traditional telecoms network itself reflected the physical economy and processes that it enabled: comprising massive hardware and equipment stacks responsible for shifting huge volumes of voice signals and data packets (so called by analogy with postal packets) from one physical location to another.

By contrast, with the advent of the digital (software-, computing- and IP-enabled) society and economy, the value carried by communications infrastructure has increasingly shifted from voice and data (as ‘physical’ signals and packets) to new modes of always-on, virtual interconnectedness and interactivity that tend towards the goal of eliminating or transcending the physical separation and discontinuity of people, work processes and things.

Examples of this digital transformation of communications, and associated experiences of work and life, could include:

  • As stated above, simple voice communications, in both business and personal life, have been increasingly superseded by ‘real-time’ or near-real-time, one-to-one or one-to-many exchange and sharing of text and audio-visual content across modes of communication such as instant messaging, unified communications (UC), social media (including increasingly in the work place) or collaborative applications enabling simultaneous, multi-party reviewing and editing of documents and files
  • Similarly, location-to-location file transfers in support of discrete, geographically separated business processes are being replaced by centralised storage and processing of, and access to, enterprise data and applications in the cloud
  • These trends mean that, in theory, people can collaborate and ‘meet’ with each other from any location in the world, and the digital service constitutes the virtual activity and medium through which that collaboration takes place
  • Similarly, with the Internet of Things (IoT), physical objects, devices, processes and phenomena generate data that can be transmitted and analysed in ‘real time’, triggering rapid responses and actions directed towards those physical objects and processes based on application logic and machine learning – resulting in more efficient, integrated processes and physical events meeting the needs of businesses and people. In other words, the IoT effectively involves digitising the physical world: disparate physical processes, and the action of diverse physical things and devices, are brought together by software logic and computing around human goals and needs.

‘Virtualisation’ effectively means ‘digital optimisation’

In addition to the cloud and IoT, one of the main effects of enterprise digital transformation on the communications infrastructure has of course been Network Functions Virtualisation (NFV) and Software-Defined Networking (SDN). NFV – the replacement of network functionality previously associated with dedicated hardware appliances by software running on standard compute devices – could also simply be described as the digitisation of telecoms infrastructure: the transformation of networks into software-, computing- and IP-driven (digital) systems that are capable of supporting the functionality underpinning the virtual / digital economy.

This functionality includes things like ultrafast, reliable, scalable and secure routing, processing, analysis and storage of massive but also highly variable data flows across network domains and on a global scale – supporting business processes ranging from ‘mere’ communications and collaboration to co-ordination and management of large-scale critical services, multi-national enterprises, government functions, and complex industrial processes. Meanwhile, the physical, Layer-1 elements of the network also have to become lightning-fast to deliver the massive, ‘real-time’ data flows on which the digital systems and services depend.

Virtualisation creates opportunities for vendors to act like Internet players, OTT service providers and telcos

Virtualisation frees vendors from ‘operator lock-in’

Virtualisation has generally been touted as a necessary means for telcos to adapt their networks to support the digital service demands of their customers and, in the enterprise market, to support those customers’ own digital transformations. It has also been advocated as a means for telcos to free themselves from so-called ‘vendor lock-in’: dependency on their network hardware suppliers for maintenance and upgrades to equipment capacity or functionality to support service growth or new product development.

On the other side of the coin, virtualisation could also be seen as a means for vendors to free themselves from ‘operator lock-in’: a dependency on telcos as the primary market for their networking equipment and technology. That is to say, the same dynamic of social and enterprise digitisation, discussed above, has driven vendors to virtualise their own product and service offerings, and to move away from the old business model, which could be described as follows:

  • telcos and their implementation partners purchase hardware from the vendor
  • deploy it at the enterprise customer
  • and then own the business relationship with the enterprise and hold the responsibility for managing the services

By contrast, once the service-enabling technology is based on software and standard compute hardware, this creates opportunities for vendors to market their technology direct to enterprise customers, with which they can in theory take over the supplier-customer relationship.

Of course, many enterprises have continued to own and operate their own private networks and networking equipment, generally supplied to them by vendors. Therefore, vendors marketing their products and services direct to enterprises is not a radical innovation in itself. However, the digitisation / virtualisation of networking technology and of enterprise networks is creating a new competitive dynamic placing vendors in a position to ‘win back’ direct relationships to enterprise customers that they have been serving through the mediation of telcos.

Figure 1: Virtualisation changes the competitive dynamic

Contents:

  • Executive Summary: Digital transformation is changing the rules of the game
  • Digital transformation is reshaping vendors’ and telcos’ offer to enterprises
  • What does ‘digital transformation’ mean?
  • ‘Virtualisation’ effectively means ‘digital optimisation’
  • Virtualisation creates opportunities for vendors to act like Internet players, OTT service providers and telcos
  • Vendors and telcos: the business models are changing
  • New vendor plays in enterprise networking: four vendor business models
  • Vendor plays: Nokia, Ericsson, Cisco and IBM
  • Ericsson: changing the bet from telcos to enterprises – and back again?
  • Cisco: Betting on enterprises – while operators need to speed up
  • IBM: Transformation involves not just doing different things but doing things differently
  • Conclusion: Vendors as ‘co-Operators’, ‘co-opetors’ or ‘co-opters’ – but can telcos still set the agenda?
  • How should telcos play it? Four recommendations

Figures:

  • Figure 1: Virtualisation changes the competitive dynamic
  • Figure 2: The telco as primary channel for vendors
  • Figure 3: New direct-to-enterprise opportunities for vendors
  • Figure 4: Vendors as both technology supplier and OTT / operator-type managed services provider
  • Figure 5: Vendors as digital service creators, with telcos as connectivity providers and digital service enablers
  • Figure 6: Vendors as digital service enablers, with telcos as digital service creators / providers
  • Figure 7: Vendor manages communications / networking as part of overall digital transformation focus
  • Figure 8: Nokia as technology supplier and ‘operator-type’ managed services provider
  • Figure 9: Nokia’s cloud-native core network blueprint
  • Figure 10: Nokia WING value chain
  • Figure 11: Ericsson’s model for telcos’ roles in the IoT ecosystem
  • Figure 12: Ericsson generates the value whether operators provide connectivity only or also market the service
  • Figure 13: IBM’s model for telcos as digital service enablers or providers – or both

MWC 2017: The big themes from behind the scenes

Introduction

It was notable that the main halls at the GSMA’s Mobile World Congress 2017 in Barcelona last week were still buzzing on Thursday morning, the last of four days. Previously, the crowds had always noticeably thinned by then, but there was no let-up this year – certainly not until around 2pm, and the event closes at 4pm on the Thursday.

If you’ve never been, your first experience of the Congress can be quite overwhelming. There is so much going on, so many people, and an almost bewildering number of companies and halls. Even for seasoned MWC-ers, the activity on Tuesday in particular reached a new level of intensity. Just walking between stands was a battle in places. The extra energy at this year’s show was surprising because mobile is not really a growth industry any more, although it is still a huge and profitable sector.

However, despite the frenetic activity, many commentators have struggled to identify an over-arching theme or message for this year’s MWC. Nokia’s retro-phone announcement was one surprising success.  In the light of its backward-facing nature, the popularity of this story is rather confusing, but perhaps it is a sign of people looking hard for something interesting to say.

Of course, the diversity and scale of the Congress can make it hard to discern the big picture. Usually there is an announcement or keynote (such as those by Google’s Eric Schmidt in 2010 or Microsoft’s Steve Ballmer in 2012) that seems to frame the moment. Not so this year though.

This absence of one unifying theme reflects the results of our client feedback survey that we conducted in August 2016: telco strategy teams need to understand and evaluate the potential of an increasingly diverse range of new technologies, business models, and other opportunities (or threats) in order to succeed.

Behind the scenes at MWC, we found several major themes which we summarise in this report:

  • Telco change
  • 5G
  • IoT

Beyond these three areas there was a multitude of information and demonstrations about new technologies and services such as Rich Communications Services (RCS), AI and blockchain. This report summarises what we learnt about these topics at MWC, and we will continue to research these areas in the future, to assess how they will impact telcos and what strategy they need to adopt to make the most of these opportunities.

 

  • Executive Summary
  • Introduction
  • Telco change
  • 5G
  • 5G – the next generation?
  • The business case for telcos is not yet that convincing
  • The path to 5G and the “first mover” risk
  • Super low latency – what is it good for?
  • The spectrum case remains unclear
  • EHF and mmWave
  • 5G – Telco recommendations in summary
  • IoT
  • What role will telcos play?
  • The IoT challenge: Data privacy and security
  • Connectivity consolidation
  • Topics to watch
  • Rich Communications Services (RCS)
  • Enterprise digital transformation – Companies must be proactive, not reactive
  • AI – The human element

B2B growth: How can telcos win in ICT?

Introduction

The telecom industry’s growth profile over the last few years is a sobering sight. As we have shown in our recent report Which operator growth strategies will remain viable in 2017 and beyond?, yearly revenue growth rates have been clearly slowing down globally since 2009 (see Figure 1). In three major regions (North America, Europe, Middle East) compound annual growth rates have even been behind GDP growth.

 

Figure 1: Telcos’ growth performance is flattening out (Sample of sixty-eight operators)

Source: Company accounts; STL Partners analysis

To break out of this decline, telcos are constantly searching for new sources of revenue, for example by expanding into adjacent digital service areas, which largely sit within mass consumer markets (e.g. content, advertising, commerce).

However, in our ongoing conversations with telecoms operators, we increasingly come across the notion that a large part of future growth potential might actually lie in B2B (business-to-business) markets and that this customer segment will have an increasing impact on overall revenue growth.

This report investigates the rationale behind this thinking in detail and tries to answer the following key questions:

  1. What is the current state of telcos’ B2B business?
  2. Where are the telco growth opportunities in the wider enterprise ICT arena?
  3. What makes an enterprise ICT growth strategy difficult for telcos to execute?
  4. What are the pillars of a successful strategy for future B2B growth?

 

  • Executive Summary
  • Introduction
  • Telcos may have different B2B strategies, but suffer similar problems
  • Finding growth opportunities within the wider enterprise ICT arena could help
  • Three complications for revenue growth in enterprise ICT
  • Complication 1: Despite their potential, telcos struggle to marshal their capabilities effectively
  • Complication 2: Telcos are not alone in targeting enterprise ICT for growth
  • Complication 3: Telcos’ core services are being disrupted by OTT players – this time in B2B
  • STL Partners’ recommendations: strategic pillars for future B2B growth
  • Conclusion

 

  • Figure 1: Telcos’ growth performance is flattening out (Sample of sixty-eight operators)
  • Figure 2: Telcos’ B2B businesses vary significantly by scale and performance (selected operators)
  • Figure 3: High-level structure of the telecom industry’s revenue pool (2015) – the consumer segment dominates
  • Figure 4: Orange aims to expand the share of “IT & integration services” in OBS’s revenue mix
  • Figure 5: Global enterprise ICT expenditures are projected to grow 7% p.a.
  • Figure 6: Telcos and Microsoft are moving in opposite directions
  • Figure 7: SD-WAN value chain
  • Figure 8: Within AT&T Business Solutions’ revenue mix, growth in fixed strategic services cannot yet offset the decline in legacy services

SD-WAN: New Enterprise Opportunity for Telcos, or a Threat to MPLS, SDN & NFV?

Rapid growth in SD-WAN networks

Software-defined Wide Area Networks (SD-WAN) have catapulted to prominence in the enterprise networking world in the last 12 months. They allow businesses to manage their connections between sites, data-centres, the Internet and external cloud services much more cost-effectively and flexibly than in the past.

Driven by the growth of enterprise demand for access to cloud applications, and businesses’ desire to control WAN costs, various start-ups and existing network-optimisation vendors have catalysed SD-WAN’s emergence. Its rapid growth as a new “intermediary” layer in the network has the potential to disrupt telcos’ enterprise aspirations, especially around NFV/SDN.

In essence, SD-WAN allows the creation of an “OTT intelligent network infrastructure”, as an overlay on top of one or more providers’ physical connections. SD-WANs allow combinations of multiple types of access network – and multiple network providers. This can improve the QoS, reliability and security of corporate networks in certain areas, while simultaneously reducing costs. SD-WANs also enable greater flexibility and agility in allocating enterprise network resources.
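
To make the overlay idea concrete, here is a minimal sketch (purely illustrative: the link names, measured metrics and policy thresholds are assumptions, not any vendor's product logic) of how an SD-WAN edge might steer traffic per application class across an MPLS and a broadband underlay:

```python
# Illustrative sketch of SD-WAN per-application path selection.
# Link names, metrics and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class Underlay:
    name: str
    latency_ms: float     # measured latency on this link
    loss_pct: float       # measured packet loss
    cost_per_gb: float    # relative transport cost

def pick_path(app_class: str, links: list) -> Underlay:
    """Real-time traffic gets the best link meeting an SLA threshold;
    bulk traffic gets the cheapest link (least-cost routing)."""
    if app_class == "realtime":
        ok = [l for l in links if l.latency_ms < 50 and l.loss_pct < 1.0]
        return min(ok or links, key=lambda l: (l.latency_ms, l.loss_pct))
    return min(links, key=lambda l: l.cost_per_gb)

links = [
    Underlay("mpls", latency_ms=20, loss_pct=0.1, cost_per_gb=8.0),
    Underlay("broadband", latency_ms=35, loss_pct=0.4, cost_per_gb=1.0),
]
print(pick_path("realtime", links).name)  # -> mpls
print(pick_path("bulk", links).name)      # -> broadband
```

In practice the measurements come from continuous probing of each underlay and the policy is set centrally, but the core idea is the same: an overlay choosing among providers per flow.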

Why SD-WAN is at least in part a threat

However, SD-WAN potentially poses major risks to traditional telcos’ enterprise offerings. It allows enterprise customers to deploy least-cost routing, or highest-quality routing, more easily by arbitraging differences in price or performance between multiple providers. It enables high-margin MPLS connections to be (at least partly) replaced with commodity Internet connectivity. And it reduces loyalty / lock-in by establishing an “abstraction” layer above the network, controlled by in-house IT teams or competing managed service providers.

SD-WAN has another, medium-term, set of implications for telcos, when considered through the lens of the emerging world of NFV/SDN and “telco cloud” – a topic on which STL Partners has written widely. By decoupling the physical provision of corporate networks from a business’s data/application assets or clouds, SD-WAN may make it harder for telcos to move up the value chain in serving enterprise customers. Capabilities such as security systems, or unified communications services, may become associated with the SD-WAN, rather than the underlying connection(s); and would thus be provisioned by the SD-WAN provider, rather than by the telco that is providing basic connectivity.

In other words, SD-WAN represents three distinct threats for telcos:

  • Potential reduction in MPLS & other WAN services revenues
  • Potential reduction in today’s enterprise solution value-adds such as UCaaS & managed security services
  • Potential restriction of future telco enterprise SDN/NFV services opportunities to basic Network as a Service (NaaS) offers, with lower scope for upsell.

The current global market for WAN services is $60-100bn annually, depending on how it is defined; therefore, any risk of significant change is central to many operators’ strategic concerns.

Table of Contents

  • Executive Summary
  • Introduction
  • Background: Enterprise WANs
  • Shifting trends in WAN usage
  • The rise of SD-WAN
  • Overview – the holy grail of ‘good/fast/cheap’ in the WAN
  • SD-WAN technology and use-cases
  • SD-WAN vendors include start-ups and established enterprise market players
  • The role of service providers in SD-WAN
  • Bundling hosted voice/UCaaS and SD-WAN
  • Telcos Should take a Proactive Approach to SD-WAN
  • SD-WAN vs. SDN & NFV: Timing and Positioning
  • Future of SD-WAN and Recommendations
  • Recommendations

 

  • Figure 1: SD-WAN architecture example
  • Figure 2: SD-WAN & NaaS may help telcos maintain revenues in enterprise WAN
  • Figure 3: SD-WAN may reduce telco opportunities for SDN/NFV/cloud services
  • Figure 4: Different paths for SD-WAN service offer provision & procurement

NFV: Great Promises, but How to Deliver?

Introduction

What’s the fuss about NFV?

Today, it seems that suddenly everything has become virtual: there are virtual machines, virtual LANs, virtual networks, virtual network interfaces, virtual switches, virtual routers and virtual functions. The two most recent and highly visible developments in Network Virtualisation are Software Defined Networking (SDN) and Network Functions Virtualisation (NFV). They are often used in the same breath, and are related but different.

Software Defined Networking has been around as a concept since 2008 and has seen initial deployments in data centres as a local area networking technology. According to early adopters such as Google, SDN has helped to achieve better utilisation of data centre operations and of data centre wide area networks. Urs Hoelzle of Google discussed Google’s deployment and findings at the Open Networking Summit in early 2012, and Google claims to be able to get 60% to 70% better utilisation out of its data centre WAN. Given the cost of deploying and maintaining service provider networks, this could represent significant cost savings if service providers can replicate these results.

NFV – Network Functions Virtualisation – is just over two years old and yet it is already being deployed in service provider networks and has had a major impact on the networking vendor landscape. Globally the telecoms and datacomms equipment market is worth over $180bn and has been dominated by 5 vendors with around 50% of the market split between them.

Innovation and competition in the networking market have been lacking, with very few major innovations in the last 12 years: the industry has focused on capacity and speed rather than anything radically new, and start-ups that do come up with something interesting are quickly swallowed up by the established vendors. NFV has started to rock the steady ship by bringing to the networking market the same technologies that revolutionised the IT computing markets, namely cloud computing, low-cost off-the-shelf hardware, open source and virtualisation.

Software Defined Networking (SDN)

Conventionally, networks have been built using devices that make autonomous decisions about how the network operates and how traffic flows. SDN offers new, more flexible and efficient ways to design, test, build and operate IP networks by separating the intelligence from the networking device and placing it in a single controller with a perspective of the entire network. Taking the ‘intelligence’ out of many individual components also means that it is possible to build and buy those components for less, thus reducing some costs in the network. Building on ‘open’ standards should make it possible to select best-in-class vendors for different components in the network, introducing innovation and competitiveness.

SDN started out as a data centre technology aimed at making life easier for operators and designers to build and operate large scale data centre operations. However, it has moved into the Wide Area Network and as we shall see, it is already being deployed by telcos and service providers.

Network Functions Virtualisation (NFV)

Like SDN, NFV splits the control functions from the data forwarding functions; however, while SDN does this for an entire network of things, NFV focuses specifically on network functions like routing, firewalls, load balancing, CPE etc. and looks to leverage developments in Commercial Off-The-Shelf (COTS) hardware, such as generic server platforms utilising multi-core CPUs.

The performance of a device like a router is critical to the overall performance of a network. Historically the only way to get this performance was to develop custom Integrated Circuits (ICs) such as Application Specific Integrated Circuits (ASICs) and build these into a device along with some intelligence to handle things like route acquisition, human interfaces and management. While off the shelf processors were good enough to handle the control plane of a device (route acquisition, human interface etc.), they typically did not have the ability to process data packets fast enough to build a viable device.

But things have moved on rapidly. Vendors like Intel have put specific focus on improving the data plane performance of COTS-based devices, and the performance of these devices has risen exponentially. Figure 1 clearly demonstrates that in just three years (2010 – 2013) a tenfold increase in packet processing or data plane performance was achieved. Generally, CPU performance has been tracking Moore’s law, which originally stated that the number of components in an integrated circuit would double every two years. If the number of components is related to performance, the same can be said about CPU performance. For example, Intel will ship its latest processor family in the second half of 2015, which could have up to 72 individual CPU cores compared to the four or six used in 2010/2013.

Figure 1 – Intel Hardware performance

Source: ETSI & Telefonica

NFV was started by the telco industry to leverage the capability of COTS-based devices to reduce the cost of networking equipment and, more importantly, to introduce innovation and more competition to the networking market.

Since its inception in 2012, running as an Industry Specification Group within ETSI (the European Telecommunications Standards Institute), NFV has proven to be a valuable initiative, not just from a cost perspective, but more importantly for what it means to telcos and service providers in being able to develop, test and launch new services quickly and efficiently.

ETSI set up a number of work streams to tackle the issues of performance, management & orchestration, proof of concept, reference architecture and so on, while externally, organisations like OPNFV (Open Platform for NFV) have brought together a number of vendors and interested parties.

Why do we need NFV? What we already have works!

NFV came into being to solve a number of problems. Dedicated appliances from the big networking vendors typically do one thing and do that thing very well: switching or routing packets, acting as a network firewall, and so on. But as each is dedicated to a particular task and has its own user interface, things can get a little complicated when there are hundreds of different devices to manage and staff to keep trained and updated. Devices also tend to be used for one specific application, and reuse is sometimes difficult, resulting in expensive obsolescence. By running network functions on a COTS-based platform, most of these issues go away, resulting in:

  • Lower operating costs (some claim up to 80% less)
  • Faster time to market
  • Better integration between network functions
  • The ability to rapidly develop, test, deploy and iterate a new product
  • Lower risk associated with new product development
  • The ability to rapidly respond to market changes leading to greater agility
  • Less complex operations and better customer relations

And the real benefits are not just in the area of cost savings; they are all about time to market, being able to respond quickly to market demands and, in essence, becoming more agile.

The real benefits

If the real benefits of NFV are not just cost savings but agility, how is this delivered? Agility comes from a number of different aspects, for example the ability to orchestrate a number of VNFs and the network to deliver a suite or chain of network functions for an individual user or application. This has been the focus of the ETSI Management and Orchestration (MANO) workstream.

MANO will be crucial to the long-term success of NFV. It provides automation and provisioning, and will interface with existing provisioning and billing platforms such as OSS/BSS. MANO will allow the use and reuse of VNFs, networking objects and chains of services, and, via external APIs, will allow applications to request and control the creation of specific services.

Figure 2 – Orchestration of Virtual Network Functions

Source: STL Partners

Figure 2 shows a hypothetical service chain created for a residential user accessing a network server. The service chain is made up of a number of VNFs that are used as required and then discarded when not needed as part of the service. For example, the Broadband Remote Access Server becomes a VNF running on a common platform rather than a dedicated hardware appliance. As the user’s STB connects to the network, the authentication component checks that the user is valid and has a current account, but drops out of the chain once this function has been performed. The firewall is used for the duration of the connection and other components are used as required, for example Deep Packet Inspection and load balancing. Equally, as the user accesses other services such as media, Internet and voice, different VNFs can be brought into play, such as an SBC and network storage.
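
As a rough illustration of this chaining idea (a sketch only: the class names and the lifecycle rule below are simplified assumptions, not an ETSI MANO interface), a per-subscriber chain could be modelled as an ordered list of VNFs, some of which drop out after doing their job:

```python
# Simplified sketch of a per-session VNF service chain, loosely
# following the residential example above. Not an ETSI MANO API.
class VNF:
    def __init__(self, name, transient=False):
        self.name = name
        self.transient = transient  # drops out after first use

    def process(self, packet):
        print(f"{self.name}: handled packet for {packet['user']}")
        return packet

def build_chain():
    # Authentication is only needed at session set-up; the firewall,
    # DPI and load balancer stay in the path for the whole session.
    return [
        VNF("vBRAS"),
        VNF("Authentication", transient=True),
        VNF("Firewall"),
        VNF("DPI"),
        VNF("LoadBalancer"),
    ]

def run_session(chain, packets):
    for packet in packets:
        for vnf in list(chain):
            packet = vnf.process(packet)
            if vnf.transient:
                chain.remove(vnf)  # discard once its job is done

run_session(build_chain(), [{"user": "stb-001", "seq": n} for n in range(3)])
```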

Sounds great, but is it real, is anyone doing anything useful?

The short answer is yes: there are live deployments of NFV in many service provider networks, and NFV is having a real impact on costs and time to market, as detailed in this report. For example:

  • Vodafone Spain’s Lowi MVNO
  • Telefonica’s vCPE trial
  • AT&T Domain 2.0 (see pages 22 – 23 for more on these examples)

 

  • Executive Summary
  • Introduction
  • WTF – what’s the fuss about NFV?
  • Software Defined Networking (SDN)
  • Network Functions Virtualisation (NFV)
  • Why do we need NFV? What we already have works!
  • The real benefits
  • Sounds great, but is it real, is anyone doing anything useful?
  • The Industry Landscape of NFV
  • Where did NFV come from?
  • Any drawbacks?
  • Open Platform for NFV – OPNFV
  • Proprietary NFV platforms
  • NFV market size
  • SDN and NFV – what’s the difference?
  • Management and Orchestration (MANO)
  • What are the leading players doing?
  • NFV – Telco examples
  • NFV Vendors Overview
  • Analysis: the key challenges
  • Does it really work well enough?
  • Open Platforms vs. Walled Gardens
  • How to transition?
  • It’s not if, but when
  • Conclusions and recommendations
  • Appendices – NFV Reference architecture

 

  • Figure 1 – Intel Hardware performance
  • Figure 2 – Orchestration of Virtual Network Functions
  • Figure 3 – ETSI’s vision for Network Functions Virtualisation
  • Figure 4 – Typical Network device showing control and data planes
  • Figure 5 – Metaswitch SBC performance running on 8 x CPU Cores
  • Figure 6 – OPNFV Membership
  • Figure 7 – Intel OPNFV reference stack and platform
  • Figure 8 – Telecom equipment vendor market shares
  • Figure 9 – Autonomy Routing
  • Figure 10 – SDN Control of network topology
  • Figure 11 – ETSI reference architecture shown overlaid with functional layers
  • Figure 12 – Virtual switch conceptualised

 

Software Defined Networking (SDN): A Potential ‘Game Changer’

Summary: Software Defined Networking is a technological approach to designing and managing networks that has the potential to increase operator agility, lower costs, and disrupt the vendor landscape. Its initial impact has been within leading-edge data centres, but it also has the potential to spread into many other network areas, including core public telecoms networks. This briefing analyses its potential benefits and use cases, outlines strategic scenarios and key action plans for telcos, summarises key vendor positions, and explains why it is so important for both the telco and vendor communities to adopt and exploit SDN capabilities now. (May 2013, Executive Briefing Service, Cloud & Enterprise ICT Stream, Future of the Network Stream)

Figure 1 – Potential Telco SDN/NFV Deployment Phases

Source: STL Partners

Introduction

Software Defined Networking or SDN is a technological approach to designing and managing networks that has the potential to increase operator agility, lower costs, and disrupt the vendor landscape. Its initial impact has been within leading-edge data centres, but it also has the potential to spread into many other network areas, including core public telecoms networks.

With SDN, networks no longer need to be point to point connections between operational centres; rather the network becomes a programmable fabric that can be manipulated in real time to meet the needs of the applications and systems that sit on top of it. SDN allows networks to operate more efficiently in the data centre as a LAN and potentially also in Wide Area Networks (WANs).

SDN is new and, like any new technology, this means that there is a degree of hype and a lot of market activity:

  • Venture capitalists are on the lookout for new opportunities;
  • There are plenty of start-ups all with “the next big thing”;
  • Incumbents are looking to quickly acquire new skills through acquisition;
  • And not surprisingly there is a degree of SDN “Washing” where existing products get a makeover or a software upgrade and are suddenly SDN compliant.

However, there still isn’t widespread clarity about what SDN is and how it might be used outside of vendor papers and marketing materials, and there are plenty of important questions to be answered. For example:

  • SDN is open to interpretation and is not an industry standard, so what is it?
  • Is it better than what we have today?
  • What are the implications for your business, whether telcos, or vendors?
  • Could it simply be just a passing fad that will fade into the networking archives like IP Switching or X.25 and can you afford to ignore it?
  • What will be the impact on LAN and WAN design and for that matter data centres, telcos and enterprise customers? Could it be a threat to service providers?
  • Could we see a future where networking equipment becomes commoditised just like server hardware?
  • Will standards prevail?

Vendors are to a degree adding to the confusion. For example, Cisco argues that it already has an SDN-capable product portfolio with Cisco One. It says that its solution is more capable than solutions dominated by open-source based products, because these have limited functionality.

This executive briefing will explain what SDN is, why it is different to traditional networking, look at the emerging market with some likely use cases and then look at the implications and benefits for service providers and vendors.

How and why has SDN evolved?

SDN has been developed in response to the fact that basic networking hasn’t really evolved much over the last 30 plus years, and that new capabilities are required to further the development of virtualised computing to bring innovation and new business opportunities. From a business perspective the networking market is a prime candidate for disruption:

  • It is a mature market that has evolved steadily for many years
  • There are relatively few leading players who have a dominant market position
  • Technology developments have generally focused on speed rather than cost reduction or innovation
  • Low cost silicon is available to compete with custom chips developed by the market leaders
  • There is a wealth of open source software plus plenty of low cost general purpose computing hardware on which to run it
  • Until SDN, no one really took a clean slate view on what might be possible

New features and capabilities have been added to traditional equipment, but they have tended to bloat the software content, increasing the cost of both purchasing and operating the devices. Nevertheless, IP networking as we know it has performed the task of connecting two end points very well; it has been able to support the explosion of growth required by the Internet and by mobile and mass computing in general.

Traditionally each element in the network (typically a switch or a router) builds up a network map and makes routing decisions based on communication with its immediate neighbours. Once a connection through the network has been established, packets follow the same route for the duration of the connection. Voice, data and video have differing delivery requirements with respect to delay, jitter and latency, but in traditional networks there is no overall picture of the network – no single entity responsible for route planning, or ensuring that traffic is optimised, managed or even flows over the most appropriate path to suit its needs.

One of the significant things about SDN is that it takes away the independence or autonomy of every networking element, removing its ability to make network routing decisions. The responsibility for establishing paths through the network, their control and their routing is placed in the hands of one or more central network controllers. The controller is able to see the network as a complete entity and manage its traffic flows, routing, policies and quality of service, in essence treating the network as a fabric and then attempting to get maximum utilisation from that fabric. SDN controllers generally offer external interfaces through which external applications can control and set up network paths.
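
A toy sketch of this centralised model is shown below (the topology, link costs and flow-programming call are invented for illustration; this is not OpenFlow or any real controller's API). The controller holds the whole graph and computes an end-to-end path before programming the switches:

```python
# Toy SDN controller with a global view of the topology.
# Graph, weights and install_flow() are hypothetical.
import heapq

TOPOLOGY = {            # node -> {neighbour: link cost}
    "A": {"B": 1, "C": 4},
    "B": {"A": 1, "C": 2, "D": 5},
    "C": {"A": 4, "B": 2, "D": 1},
    "D": {"B": 5, "C": 1},
}

def shortest_path(graph, src, dst):
    """Dijkstra over the controller's complete map of the network."""
    queue, seen = [(0, src, [src])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, weight in graph[node].items():
            if nbr not in seen:
                heapq.heappush(queue, (cost + weight, nbr, path + [nbr]))
    return float("inf"), []

def install_flow(path):
    # Stand-in for pushing forwarding rules to each switch on the path.
    for hop in path:
        print(f"programming switch {hop}")

cost, path = shortest_path(TOPOLOGY, "A", "D")
print(cost, path)       # 4 ['A', 'B', 'C', 'D']
install_flow(path)
```

Because no individual switch needs this global view, the forwarding elements can stay simple while the controller optimises the fabric as a whole.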

There has been a growing demand to make networks programmable by external applications – data centres and virtual computing are clear examples of where it would be desirable to deploy not just the virtual computing environment, but all the associated networking functions and network infrastructure, from a single console. With no common control point, the only way of providing interfaces to external systems and applications is to place agents in the networking devices and ask external systems to manage each networking device. This kind of architecture has difficulty scaling, creates lots of control traffic that reduces overall efficiency, and may end up with multiple applications trying to control the same entity; it is therefore fraught with problems.

Network Functions Virtualisation (NFV)

It is worth noting that an initiative complementary to SDN, called Network Functions Virtualisation (NFV), was started in 2012. This complicated-sounding initiative was begun by the European Telecommunications Standards Institute (ETSI) in order to take functions that sit on dedicated hardware, like load balancers, firewalls, routers and other network devices, and run them on virtualised hardware platforms, lowering capex, extending their useful life and reducing operating expenditure. You can read more about NFV later in the report on page 20.

In contrast, SDN makes it possible to program or change the network to meet a specific, time-dependent need and establish end-to-end connections that meet specific criteria. The SDN controller holds a map of the current network state and of the requests that external applications are making on the network; this makes it easier to get the best use from the network at any given moment, carry out meaningful traffic engineering and work more effectively with virtual computing environments.

What is driving the move to SDN?

The Internet and the world of IP communications have seen continuous development over the last 40 years. There has been huge innovation and strict control of standards through the Internet Engineering Task Force (IETF). Because of the ad-hoc nature of its development, there are many different functions catering for all sorts of use cases. Some overlap, some are obsolete, but all still have to be supported and more are being added all the time. This means that the devices that control IP networks and connect to the networks must understand a minimum subset of functions in order to communicate with each other successfully. This adds complexity and cost because every element in the network has to be able to process or understand these rules.

But the system works, and it works well. For example, when we open a web browser and a session to stlpartners.com, initially our browser and our PC have no knowledge of how to get to STL’s web server. But usually within half a second or so the STL Partners web site appears. What actually happens can be seen in Figure 2. Our PC uses a variety of protocols to connect first to a gateway (1) on our network and then to a public name server (2 & 3) in order to query the stlpartners.com IP address. The PC then sends a connection to that address (4) and assumes that the network will route packets of information to and from the destination server. The process is much the same whether using public WANs or private local area networks.

Figure 2 – Process of connecting to an Internet web address

Source: STL Partners
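
The same sequence can be reproduced with a few lines of standard-library Python (a sketch only: it collapses the gateway and recursive name-server steps into the operating system's resolver call):

```python
# Rough equivalent of the steps behind Figure 2: resolve the name,
# then open a TCP connection and let the network handle the routing.
import socket

host, port = "stlpartners.com", 80

# Steps 1-3: the OS resolver queries a name server for the IP address
addr_info = socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP)
ip_address = addr_info[0][4][0]
print(f"{host} resolves to {ip_address}")

# Step 4: connect to that address; routing is left to the network
with socket.create_connection((ip_address, port), timeout=5) as sock:
    sock.sendall(b"HEAD / HTTP/1.1\r\nHost: stlpartners.com\r\n\r\n")
    print(sock.recv(200).decode(errors="replace"))
```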

The Internet is also highly resilient; it was developed to survive a variety of network outages, including the complete loss of sub-networks. Popular myth has it that the US Department of Defense wanted it to be able to survive a nuclear attack, but while it probably could, nuclear survivability wasn’t a design goal. The Internet has the ability to route around failed networking elements, and it does this by giving network devices the autonomy to make their own decisions about the state of the network and how to get data from one point to any other.

While this is of great value in unreliable networks, which is what the Internet looked like during its evolution in the late 1970s and early 1980s, networks today comprise far more robust elements and more reliable network links. The upshot is that networks typically operate at a sub-optimal level: unless there is a network outage, routes and traffic paths are mostly static and last for the duration of the connection. If an outage occurs, the routers in the network decide amongst themselves how best to re-route the traffic, with each of them making its own decisions about traffic flow and prioritisation given its individual view of the network. In actual fact, most routers and switches are not aware of the network in its entirety, just the adjacent devices they are connected to and the information they get from them about the networks and devices they in turn are connected to. Therefore, it can take some time for a converged network to stabilise, as we saw in the Internet outages that affected Amazon, Facebook, Google and Dropbox last October.

The diagram in Figure 3 shows a simple router network. Router A knows about the networks on routers B and C because it is connected directly to them and they have informed A about their networks. B and C have also informed A that they can get to the networks or devices on router D. You can see from this model that there is no overall picture of the network and no one device is able to make network-wide decisions. In order to connect a device on a network attached to A to a device on a network attached to D, A must make a decision based on what B or C tell it.

Figure 3 – Simple router network

Source: STL Partners
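
A toy sketch of what router A "knows" in this model (hypothetical prefixes and hop costs, in the spirit of a distance-vector exchange rather than any specific routing protocol) makes the limitation visible: A only ever sees what its neighbours advertise.

```python
# Toy distance-vector-style exchange: A learns about D's networks only
# indirectly, via what B and C advertise. Prefixes and costs invented.
def merge_advertisement(table, neighbour, advertised, link_cost=1):
    """Update a router's table from one neighbour's advertisement."""
    for prefix, cost in advertised.items():
        new_cost = cost + link_cost
        if prefix not in table or new_cost < table[prefix][0]:
            table[prefix] = (new_cost, neighbour)  # (cost, next hop)

# What B and C tell A, including routes they learned from D
adv_from_b = {"10.2.0.0/16": 0, "10.4.0.0/16": 1}
adv_from_c = {"10.3.0.0/16": 0, "10.4.0.0/16": 1}

routing_table_a = {"10.1.0.0/16": (0, "local")}
merge_advertisement(routing_table_a, "B", adv_from_b)
merge_advertisement(routing_table_a, "C", adv_from_c)

for prefix, (cost, next_hop) in sorted(routing_table_a.items()):
    print(f"{prefix:<14} cost={cost} via {next_hop}")
# A reaches D's prefix only via whichever neighbour advertised it,
# and never holds a picture of the network as a whole.
```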

This model makes it difficult to build large data centres with thousands of Virtual Machines (VMs) and offer customers dynamic service creation when the network only understands physical devices and does not easily allow each VM to have its own range of IP addresses and other IP services. Ideally, you would configure a complete virtual system consisting of virtual machines, load balancing, security, network control elements and network configuration from a single management console, with these abstract functions then mapped to physical hardware for computing and networking resources. VMware has coined the term ‘Software Defined Data Centre’, or SDDC, to describe a system that allows all of these elements and more to be controlled by a single suite of management software.
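
A sketch of that single-console idea follows (entirely hypothetical: the desired-state schema and the pretend orchestrator are illustrations of the concept, not VMware's or anyone else's API). The operator describes the virtual system declaratively, and an orchestrator maps the abstract elements onto physical compute and network resources:

```python
# Hypothetical declarative description of a virtual system, in the
# spirit of a software defined data centre.
desired_state = {
    "tenant": "customer-42",
    "virtual_machines": [
        {"name": "web-1", "vcpus": 2, "ram_gb": 4, "network": "front"},
        {"name": "app-1", "vcpus": 4, "ram_gb": 8, "network": "back"},
    ],
    "networks": {
        "front": {"subnet": "192.168.10.0/24", "firewall": "allow-http"},
        "back":  {"subnet": "192.168.20.0/24", "firewall": "deny-all"},
    },
    "load_balancer": {"listen": 443, "pool": ["web-1"]},
}

def reconcile(state):
    """Pretend orchestrator: walk the desired state and 'place' each
    abstract element onto physical compute and network resources."""
    for net, cfg in state["networks"].items():
        print(f"create virtual network {net} ({cfg['subnet']}), "
              f"policy {cfg['firewall']}")
    for vm in state["virtual_machines"]:
        print(f"schedule VM {vm['name']} ({vm['vcpus']} vCPU) on a host "
              f"attached to {vm['network']}")
    lb = state["load_balancer"]
    print(f"provision load balancer on :{lb['listen']} -> {lb['pool']}")

reconcile(desired_state)
```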

Moreover, returning to the fact that every networking device needs to understand a raft of Internet Requests For Comments (RFCs), all the clever code supporting these RFCs in switches and routers costs money. High-performance processing systems and memory are required in traditional routers and switches in order to inspect and process traffic, even in MPLS networks. Cisco IOS supports over 600 RFCs and other standards. This adds to cost, complexity, compatibility issues, future obsolescence and power/cooling needs.

SDN takes a fresh approach to building networks based on the technologies that are available today: it places the intelligence centrally, using scalable compute platforms, and leaves the switches and routers as relatively dumb packet-forwarding engines. The control platforms still have to support all the standards, but the platforms the controllers run on are far more powerful than the processors in traditional networking devices and, more importantly, the controllers can manage the network as a fabric rather than each element making its own potentially sub-optimal decisions.

As one proof point that SDN works, in early 2012 Google announced that it had migrated its live data centres to a Software Defined Network using switches it designed and developed using off-the-shelf silicon and OpenFlow for the control path to a Google-designed Controller. Google claims many benefits including better utilisation of its compute power after implementing this system. At the time Google stated it would have liked to have been able to purchase OpenFlow-compliant switches but none were available that suited its needs. Since then, new vendors have entered the market such as BigSwitch and Pica8, delivering relatively low cost OpenFlow-compliant switches.

To read the Software Defined Networking report in full, including the following sections detailing additional analysis…

  • Executive Summary including detailed recommendations for telcos and vendors
  • Introduction (reproduced above)
  • How and why has SDN evolved? (reproduced above)
  • What is driving the move to SDN? (reproduced above)
  • SDN: Definitions and Advantages
  • What is OpenFlow?
  • SDN Control Platforms
  • SDN advantages
  • Market Forecast
  • STL Partners’ Definition of SDN
  • SDN use cases
  • Network Functions Virtualisation
  • What are the implications for telcos?
  • Telcos’ strategic options
  • Telco Action Plans
  • What should telcos be doing now?
  • Vendor Support for OpenFlow
  • Big Switch Networks
  • Cisco
  • Citrix
  • Ericsson
  • FlowForwarding
  • HP
  • IBM
  • Nicira
  • OpenDaylight Project
  • Open Networking Foundation
  • Open vSwitch (OVS)
  • Pertino
  • Pica8
  • Plexxi
  • Tellabs
  • Conclusions & Recommendations

…and the following figures…

  • Figure 1 – Potential Telco SDN/NFV Deployment Phases
  • Figure 2 – Process of connecting to an Internet web address
  • Figure 3 – Simple router network
  • Figure 4 – Traditional Switches with combined Control/Data Planes
  • Figure 5 – SDN approach with separate control and data planes
  • Figure 6 – ETSI’s vision for Network Functions Virtualisation
  • Figure 7 – Network Functions Virtualised and managed by SDN
  • Figure 8 – Network Functions Virtualisation relationship with SDN
  • Table 1 – Telco SDN Strategies
  • Figure 9 – Potential Telco SDN/NFV Deployment Phases
  • Figure 10 – SDN used to apply policy to Internet traffic
  • Figure 11 – SDN Congestion Control Application