Telco Cloud Deployment Tracker: 5G core deep dive

Deep dive: 5G core deployments 

In this July 2022 update to STL Partners’ Telco Cloud Deployment Tracker, we present granular information on 5G core launches. They fall into three categories:

  • 5G Non-standalone core (5G NSA core) deployments: The 5G NSA core (agreed as part of 3GPP Release 15 in December 2017) involves using a virtualised and upgraded version of the existing 4G core (or EPC) to support 5G New Radio (NR) wireless transmission in tandem with existing LTE services. This was the first form of 5G to be launched and still accounts for 75% of all 5G core network deployments in our Tracker.
  • 5G Standalone core (5G SA core) deployments: The SA core is a completely new and 5G-only core. It has a simplified, cloud-native and distributed architecture, and is designed to support services and functions such as network slicing, Ultra-Reliable Low-Latency Communications (URLLC) and enhanced Machine-Type Communications (eMTC, i.e. massive IoT). Our Tracker indicates that the upcoming wave of 5G core deployments in 2022 and 2023 will be mostly 5G SA core.
  • Converged 5G NSA/SA core deployments: this is when a dual-mode NSA and SA platform is deployed; in most cases, the NSA core results from the upgrade of an existing LTE core (EPC) to support 5G signalling and radio. The principle behind a converged NSA/SA core is the ability to orchestrate different combinations of containerised network functions, and to switch automatically and dynamically from an NSA to an SA configuration, in tandem with other features and services such as Dynamic Spectrum Sharing and the needs of different network slices (a simplified sketch of this idea follows the list below). For this reason, launching a converged NSA/SA platform is a marker of a more cloud-native approach than a simple 5G NSA launch. Ericsson is the most commonly found vendor for this type of platform, with a handful of deployments coming from Huawei, Samsung and WorkingGroupTwo. Although interesting, converged 5G NSA/SA core deployments remain a minority (7% of all 5G core deployments over the 2018-2023 period) and most of our commentary will therefore focus on 5G NSA and 5G SA core launches.
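The converged NSA/SA principle can be illustrated with a deliberately simplified Python sketch. The function names and the mode switch below are illustrative only and do not represent any vendor's implementation: the point is that the same cloud platform orchestrates a different combination of containerised functions depending on whether a given slice is served in NSA or SA mode.

    # Greatly simplified illustration of a dual-mode (NSA/SA) core: the same
    # containerised platform is re-orchestrated with a different set of network
    # functions per slice. Names are illustrative, not a vendor implementation.

    NSA_FUNCTIONS = ["MME", "SGW", "PGW"]    # upgraded EPC functions used in NSA mode
    SA_FUNCTIONS = ["AMF", "SMF", "UPF"]     # 5G SA service-based core functions

    def configure_slice(mode: str, slice_id: str) -> list:
        """Return the containerised functions to orchestrate for one network slice."""
        functions = NSA_FUNCTIONS if mode == "NSA" else SA_FUNCTIONS
        return [f"{fn}@{slice_id}" for fn in functions]

    # Flipping a slice from NSA to SA is a re-orchestration, not a new platform.
    print(configure_slice("NSA", "slice-embb"))
    print(configure_slice("SA", "slice-embb"))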


75% of 5G cores are still Non-standalone (NSA)

Global 5G core deployments by type, 2018–23

  • There is renewed activity in 5G core launches this year: the total number of 5G core deployments so far in 2022 (effective and in progress) stands at 49, above the 47 logged in the whole of 2021. At a minimum, total 5G core deployments in 2022 will settle between the 2021 level and the 2020 peak of 97.
  • 5G in one form or another now exists in most places where it is both in demand and affordable, but there remain large economies where it has yet to be launched: Turkey, Russia and, most notably, India. It also has yet to be launched in most of Africa.
  • In countries with 5G, the next phase of launches, which will see the migration of NSA to SA cores, has yet to take place on a significant scale.
  • To date, 75% of all 5G cores are NSA. However, 5G SA will outstrip NSA in terms of deployments in 2022 and represent 24 of the 49 launches this year, or 34 if one includes converged NSA/SA cores as part of the total.
  • All but one of the 5G launches announced for 2023 are standalone; they all involve Tier-1 MNOs including Orange (in its European footprint involving Ericsson and Nokia), NTT Docomo in Japan and Verizon in the US.

The upcoming wave of SA core (and open / vRAN) represents an evolution towards cloud-native

Cloud-native functions (CNFs) are software designed from the ground up for deployment and operation in the cloud, with:

  • Portability across any hardware infrastructure or virtualisation platform
  • Modularity and openness, with components from multiple vendors able to be flexibly swapped in and out based on a shared set of compute and OS resources, and open APIs (in particular, via software ‘containers’)
  • Automated orchestration and lifecycle management, with individual micro-services (software sub-components) able to be independently modified / upgraded, and automatically re-orchestrated and service-chained based on a persistent, API-based, ‘declarative’ framework, i.e. one that states the desired outcome and leaves the service chain to organise itself to deliver that outcome in the most efficient way (a minimal sketch of this pattern follows the list below)
  • Compute, resource and software efficiency: as a result of the automated, lean and logically optimal characteristics described above, CNFs are more efficient (both functionally and in terms of operating costs) and consume fewer compute and energy resources
  • Scalability and flexibility, as individual functions (for example, distributed user plane functions in 5G networks) can be scaled up or down instantly and dynamically in response to overall traffic flows or the needs of individual services
  • Programmability, as network functions are now entirely based on software components that can be programmed and combined in a highly flexible manner, via open APIs, in accordance with the needs of individual services and use contexts.
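As a concrete illustration of the declarative pattern flagged above, the following minimal Python sketch (all names hypothetical; it is not any orchestrator's real API) shows a reconciliation loop that takes a stated desired outcome and derives the actions needed to reach it.

    # Minimal sketch of the declarative pattern: state the desired outcome and let
    # a reconciliation loop derive the actions needed. All names are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class DesiredState:
        function: str      # e.g. a distributed user plane function (UPF)
        replicas: int      # how many instances the operator wants running

    def reconcile(desired: DesiredState, running: int) -> list:
        """Return the actions needed to converge the live network on the desired state."""
        if running < desired.replicas:
            return ["scale-out " + desired.function] * (desired.replicas - running)
        if running > desired.replicas:
            return ["scale-in " + desired.function] * (running - desired.replicas)
        return []  # already converged: nothing to do

    # A traffic spike raises the desired UPF count to 5 while only 3 are running.
    print(reconcile(DesiredState(function="upf", replicas=5), running=3))
    # ['scale-out upf', 'scale-out upf']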

Previous telco cloud tracker releases and related research

Each new release of the Tracker is global in scope, but is accompanied by an analytical report that focusses on trends in particular regions from time to time:


NFV Deployment Tracker: North American data and trends

Introduction

NFV in North America – how is virtualisation moving forward in telcos against global benchmarks?

Welcome to the sixth edition of the ‘NFV Deployment Tracker’

This report is the sixth analytical report in the NFV Deployment Tracker series and is intended as an accompaniment to the updated Tracker Excel spreadsheet.

This extended update covers seven months of deployments worldwide, from October 2018 to April 2019. The update also includes an improved spreadsheet format: a more user-friendly, clearer layout and a regional toggle in the ‘Aggregate data by region’ worksheet, which provides much quicker access to the data on each region separately.

The present analytical report provides an update on deployments and trends in the North American market (US, Canada and the Caribbean) since the last report focusing on that region (December 2017).

Scope, definitions and importance of the data

We include in the Tracker only verified, live deployments of NFV or SDN technology powering commercial services. The information is taken mainly from public-domain sources, such as press releases by operators or vendors, or reports in reputable trade media. However, a small portion of the data also derives from confidential conversations we have had with telcos. In these instances, the deployments are included in the aggregate, anonymised worksheets in the spreadsheet, but not in the detailed dataset listing deployments by operator and geography, and by vendor where known.

Our definition of a ‘deployment’, including how we break deployments down into their component parts, is provided in the ‘Explanatory notes’ worksheet, in the accompanying Excel document.

NFV in North America in global context

We have gathered data on 120 live, commercial deployments of NFV and SDN in North America between 2011 and April 2019. These were completed by 33 mainly Tier-One telcos and telco group subsidiaries: 24 based in the US, four in Canada, one Caribbean, three European (Colt, T-Mobile and Vodafone), and one Latin American (América Móvil). The data includes information on 217 known Virtual Network Functions (VNFs), functional sub-components and supporting infrastructure elements that have formed part of these deployments.

This makes North America the third-largest NFV/SDN market worldwide, as is illustrated by the comparison with other regions in the chart below.

Total NFV/SDN deployments by region, 2011 to April 2019


Source: STL Partners

Deployments of NFV in North America account for around 24% of the global total of 486 live deployments (or 492 if deployments spanning multiple regions are counted once for each region). Europe is very marginally ahead of Asia-Pacific, with 163 deployments versus 161, each equating to around 33% of the total.

The NFV North America Deployment Tracker contains the following data, to May 2019:

  • Global aggregate data
  • Deployments by primary purpose
  • Leading VNFs and functional components
  • Leading operators
  • Leading vendors
  • Leading vendors by primary purpose
  • Above data points broken down by region:
    • North America
    • Asia-Pacific
    • Europe
    • Latin America
    • Middle East
    • Africa
  • Detailed dataset on individual deployments

 

Contents of the accompanying analytical report:

  • Executive Summary
  • Introduction
  • Welcome to the sixth edition of the ‘NFV Deployment Tracker’
  • Scope, definitions and importance of the data
  • Analysis of NFV in North America
  • The North American market in global context
  • SD-WAN and core network functions are the leading categories
  • 5G is driving core network virtualisation
  • Vendor trends: Open source and operator self-builds outpace vendors
  • Operator trends: Verizon and AT&T are the clear leaders
  • Conclusion: Slow-down in enterprise platform deployments while 5G provides new impetus

MWC 2016: The Cloud/NFV Transformation Needle Moves

Enter the open-source software leaders: IT takes telco cloud seriously

One of the most important trends from MWC 2016 was the increased presence, engagement, and enthusiasm of the key open-source software vendors. Companies like Red Hat, IBM, Canonical, HP Enterprise, and Intel are the biggest contributors of code, next to independent developers, to the key open-source projects like OpenStack, OPNFV, and Linux itself. Their growing engagement in telecoms software is a major positive for the prospects of NFV/SDN and telco re-engagement in cloud.

OpenStack, the open-source cloud operating system, is emerging as the key platform for telco cloud and also for NFV implementations. Figure 1, taken from the official OpenStack statistics tracker at Stackalytics.com, shows contributions to the current release of OpenStack by organisational affiliation and by module; this highlights both which companies are contributing heavily to OpenStack development, and which modules are attracting the most development effort.

AT&T’s specialist partner Mirantis shows up as a leading contributor of code for OpenStack, some of which we believe is developed inside AT&T Shannon Labs. Tellingly, among OpenStack modules, the single biggest focus area is Neutron, the module that handles OpenStack’s networking functions. Anything NFV-related tends to end up here.

Figure 1: The contributor ecosystem for OpenStack (% of commits, bug fixes, and reviews by company and module)

Source: Stackalytics

 

  • Executive Summary
  • Enter the open-source software leaders: IT takes telco cloud seriously
  • And (some) telcos get serious about software
  • Open-source development is influencing the standards process
  • The cloud is the network is the cloud
  • Nokia and Intel: ever closer union?

 

  • Figure 1: The contributor ecosystem for OpenStack (% of commits, bug fixes, and reviews by company and module)
  • Figure 2: Mirantis contributes more to OpenStack networking than Red Hat or Cisco (% of commits, bug fixes, and reviews by company, for networking module)
  • Figure 3: Mirantis (and therefore AT&T) drive the key Fuel project forwards

The Open Source Telco: Taking Control of Destiny

Preface

This report examines the approaches to open source software – broadly, software for which the source code is freely available for use, subject to certain licensing conditions – of telecoms operators globally. Several factors have come together in recent years to make the role of open source software an important and dynamic area of debate for operators, including:

  • Technological Progress: Advances in core networking technologies, especially network functions virtualisation (NFV) and software-defined networking (SDN), are closely associated with open source software and initiatives, such as OPNFV and OpenDaylight. Many operators are actively participating in these initiatives, as well as trialling their software and, in some cases, moving them into production. This represents a fundamental shift away from the industry’s traditional, proprietary, vendor-procured model.
    • Why are we now seeing more open source activities around core communications technologies?
  • Financial Pressure: Over-the-top (OTT) disintermediation, regulation and adverse macroeconomic conditions have led to reduced core communications revenues for operators in developed and emerging markets alike. As a result, operators are exploring opportunities to move beyond their core infrastructure business and compete in the more software-centric services layer.
    • How do the Internet players use open source software, and what are the lessons for operators?
  • The Need for Agility: In general, there is recognition within the telecoms industry that operators need to become more ‘agile’ if they are to succeed in the new, rapidly-changing ICT world, and greater use of open source software is seen by many as a key enabler of this transformation.
    • How can the use of open source software increase operator agility?

The answers to these questions, and more, are the topic of this report, which is sponsored by Dialogic and independently produced by STL Partners. The report draws on a series of 21 interviews conducted by STL Partners with senior technologists, strategists and product managers from telecoms operators globally.

Figure 1: Split of Interviewees by Business Area

Source: STL Partners

Introduction

Open source is less optional than it once was – even for Apple and Microsoft

From the audience’s point of view, the most important announcement at Apple’s Worldwide Developer Conference (WWDC) this year was not the new versions of iOS and OS X, or even its Spotify-challenging Apple Music service. Instead, it was the announcement that Apple’s highly popular programming language ‘Swift’ was to be made open source, where open source software is broadly defined as software for which the source code is freely available for use – subject to certain licensing conditions.

On one level, therefore, this represents a clever engagement strategy with developers. Open source software uptake has increased rapidly during the last 15 years, most famously embodied by the Linux operating system (OS), and with this developers have demonstrated a growing preference for open source tools and platforms. Since Apple has generally pushed developers towards proprietary development tools, and away from third-party ones (such as Adobe Flash), this is significant in itself.

An indication of open source’s growth can be found in OS market shares in consumer electronics devices. As Figure 2 shows below, Android (open source) had a 49% share of shipments in 2014; if we include the various other open source OSs in ‘other’, this increases to more than 50%.

Figure 2: Share of consumer electronics shipments* by OS, 2014

Source: Gartner
* Includes smartphones, tablets, laptops and desktop PCs

However, one of the components being open sourced is Swift’s (proprietary) compiler – a program that translates written code into an executable program that a computer system understands. The implication of this is that, in theory, we could even see Swift applications running on non-Apple devices in the future. In other words, Apple believes the risk of Swift being used on Android is outweighed by the reward of engaging with the developer community through open source.

Whilst some technology companies, especially the likes of Facebook, Google and Netflix, are well known for their activities in open source, Apple is a company famous for its proprietary approach to both hardware and software. This, combined with similar activities by Microsoft (which open sourced its .NET framework in 2014), suggests that open source is now less optional than it once was.

Open source is both an old and a new concept for operators

At first glance, open source also appears to now be less optional for telecoms operators, who traditionally procure proprietary software (and hardware) from third-party vendors. Whilst many (but not all) operators have been using open source software for some time, such as Linux and various open source databases in the IT domain (e.g. MySQL), we have in the last 2-3 years seen a step-change in operator interest in open source across multiple domains. The following quote, taken directly from the interviews, summarises the situation nicely:

“Open source is both an old and a new project for many operators: old in the sense that we have been using Linux, FreeBSD, and others for a number of years; new in the sense that open source is moving out of the IT domain and towards new areas of the industry.” 

AT&T, for example, has been speaking widely about its ‘Domain 2.0’ programme. Domain 2.0 aims to transform AT&T’s technical infrastructure to incorporate network functions virtualisation (NFV) and software-defined networking (SDN), to mandate a higher degree of interoperability, and to broaden the range of alternative suppliers available across its core business. By 2020, AT&T hopes to virtualise 75% of its network functions, and it sees open source as accounting for up to 50% of this. AT&T, like many other operators, is also a member of various recently formed initiatives and foundations around NFV and SDN, such as OPNFV – Figure 3 lists some of these below.

Figure 3: OPNFV Platinum Members

Source: OPNFV website

However, based on publicly-available information, other operators might appear to have lesser ambitions in this space. As ever, the situation is more complex than it first appears: other operators do have significant ambitions in open source and, despite the headlines NFV and SDN draw, there are many other business areas in which open source is playing (or will play) an important role. Figure 4 below includes three quotes from the interviews which highlight this broad spectrum of opinion:

Figure 4: Different attitudes of operators to open source – selected interview quotes

Source: STL Partners interviews

Key Questions to be Addressed

We therefore have many questions which need to be addressed concerning operator attitudes to open source software, adoption (by area of business), and more:

  1. What is open source software, what are its major initiatives, and who uses it most widely today?
  2. What are the most important advantages and disadvantages of open source software? 
  3. To what extent are telecoms operators using open source software today? Why, and where?
  4. What are the key barriers to operator adoption of open source software?
  5. Prospects: How will this situation change?

These are now addressed in turn.

  • Preface
  • Executive Summary
  • Introduction
  • Open source is less optional than it once was – even for Apple and Microsoft
  • Open source is both an old and a new concept for operators
  • Key Questions to be Addressed
  • Understanding Open Source Software
  • The Theory: Freely available, licensed source code
  • The Industry: Dominated by key initiatives and contributors
  • Research Findings: Evaluating Open Source
  • Open source has both advantages and disadvantages
  • Debunking Myths: Open source’s performance and security
  • Where are telcos using open source today?
  • Transformation of telcos’ service portfolios is making open source more relevant than ever…
  • … and three key factors determine where operators are using open source software today
  • Open Source Adoption: Business Critical vs. Service Area
  • Barriers to Telco Adoption of Open Source
  • Two ‘external’ barriers by the industry’s nature
  • Three ‘internal’ barriers which can (and must) change
  • Prospects and Recommendations
  • Prospects: An open source evolution, not revolution
  • Open Source, Transformation, and Six Key Recommendations
  • About STL Partners and Telco 2.0
  • About Dialogic

 

  • Figure 1: Split of Interviewees by Business Area
  • Figure 2: Share of consumer electronics shipments* by OS, 2014
  • Figure 3: OPNFV Platinum Members
  • Figure 4: Different attitudes of operators to open source – selected interview quotes
  • Figure 5: The Open IT Ecosystem (incl. key industry bodies)
  • Figure 6: Three Forms of Governance in Open Source Software Projects
  • Figure 7: Three Classes of Open Source Software License
  • Figure 8: Web Server Share of Active Sites by Developer, 2000-2015
  • Figure 9: Leading software companies vs. Red Hat, market capitalisation, Oct. 2015
  • Figure 10: The Key Advantages and Disadvantages of Open Source Software
  • Figure 11: How Google Works – Failing Well
  • Figure 12: Performance gains from an open source activation (OSS) platform
  • Figure 13: Intel Hardware Performance, 2010-13
  • Figure 14: Open source is more likely to be found today in areas which are…
  • Figure 15: Framework mapping current telco uptake of open source software
  • Figure 16: Five key barriers to telco adoption of open source software
  • Figure 17: % of employees with ‘software’ in their LinkedIn job title, Oct. 2015
  • Figure 18: ‘Waterfall’ and ‘Agile’ Software Development Methodologies Compared
  • Figure 19: Four key cultural attributes for successful telco transformation

NFV: Great Promises, but How to Deliver?

Introduction

What’s the fuss about NFV?

Today, it seems that suddenly everything has become virtual: there are virtual machines, virtual LANs, virtual networks, virtual network interfaces, virtual switches, virtual routers and virtual functions. The two most recent and highly visible developments in Network Virtualisation are Software Defined Networking (SDN) and Network Functions Virtualisation (NFV). They are often used in the same breath, and are related but different.

Software Defined Networking has been around as a concept since 2008 and saw its initial deployments in data centres as a local area networking technology. According to early adopters such as Google, SDN has helped to achieve better utilisation of data centre operations and of data centre wide area networks: speaking at the OpenNet summit in early 2012, Google’s Urs Hoelzle claimed 60% to 70% better utilisation of Google’s data centre WAN. Given the cost of deploying and maintaining service provider networks, this could represent significant cost savings if service providers can replicate these results.

NFV – Network Functions Virtualisation – is just over two years old, and yet it is already being deployed in service provider networks and has had a major impact on the networking vendor landscape. Globally, the telecoms and datacomms equipment market is worth over $180bn and has been dominated by five vendors, which split around 50% of the market between them.

Innovation and competition in the networking market have been lacking: there have been very few major innovations in the last 12 years, the industry has focussed on capacity and speed rather than anything radically new, and start-ups that do come up with something interesting are quickly swallowed up by the established vendors. NFV has started to rock the steady ship by bringing to the networking market the same technologies that revolutionised IT computing: cloud computing, low-cost off-the-shelf hardware, open source and virtualisation.

Software Defined Networking (SDN)

Conventionally, networks have been built using devices that make autonomous decisions about how the network operates and how traffic flows. SDN offers new, more flexible and efficient ways to design, test, build and operate IP networks by separating the intelligence from the networking device and placing it in a single controller with a view of the entire network. Taking the ‘intelligence’ out of many individual components also means that it is possible to build and buy those components for less, thus reducing some costs in the network. Building on ‘open’ standards should make it possible to select best-in-class vendors for different components in the network, introducing innovation and competitiveness.

SDN started out as a data centre technology aimed at making life easier for operators and designers to build and operate large scale data centre operations. However, it has moved into the Wide Area Network and as we shall see, it is already being deployed by telcos and service providers.
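The separation of intelligence from the forwarding devices can be sketched in a few lines of Python. This is purely illustrative, with hypothetical class and method names rather than any real SDN controller's API: a central controller with a network-wide view computes the forwarding behaviour and pushes simple match/action rules down to otherwise ‘dumb’ switches.

    # Illustrative sketch only: a central controller holds the network-wide view
    # and installs match/action flow rules into switches, which simply forward
    # packets. All names are hypothetical, not a real controller API.

    class Switch:
        def __init__(self, name):
            self.name = name
            self.flow_table = {}           # destination prefix -> output port

        def install_rule(self, dst_prefix, out_port):
            self.flow_table[dst_prefix] = out_port

    class Controller:
        """Centralised 'intelligence': computes rules from a whole-network view."""
        def __init__(self, routes):
            self.routes = routes           # (switch name, dst prefix) -> output port

        def push_flows(self, switches):
            for (switch_name, dst_prefix), out_port in self.routes.items():
                switches[switch_name].install_rule(dst_prefix, out_port)

    switches = {"s1": Switch("s1"), "s2": Switch("s2")}
    controller = Controller({("s1", "10.0.2.0/24"): 2, ("s2", "10.0.1.0/24"): 1})
    controller.push_flows(switches)
    print(switches["s1"].flow_table)       # {'10.0.2.0/24': 2}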

Network Functions Virtualisation (NFV)

Like SDN, NFV splits the control functions from the data forwarding functions. However, while SDN does this for an entire network, NFV focusses specifically on network functions such as routing, firewalls, load balancing and CPE, and looks to leverage developments in commercial off-the-shelf (COTS) hardware, such as generic server platforms using multi-core CPUs.

The performance of a device like a router is critical to the overall performance of a network. Historically, the only way to get this performance was to develop custom integrated circuits (ICs), such as Application Specific Integrated Circuits (ASICs), and build these into a device along with some intelligence to handle things like route acquisition, human interfaces and management. While off-the-shelf processors were good enough to handle the control plane of a device (route acquisition, human interface etc.), they typically could not process data packets fast enough to build a viable device.

But things have moved on rapidly. Vendors like Intel have put specific focus on improving the data plane performance of COTS-based devices, and the performance of these devices has risen exponentially. Figure 1 clearly demonstrates that in just three years (2010 – 2013) a tenfold increase in packet processing (data plane) performance was achieved. More generally, CPU performance has been tracking Moore’s law, which originally stated that the number of components in an integrated circuit would double every two years; if the number of components is related to performance, the same can be said about CPU performance. For example, Intel will ship its latest processor family in the second half of 2015, which could have up to 72 individual CPU cores compared to the four or six used in 2010-2013.

Figure 1 – Intel Hardware performance

Source: ETSI & Telefonica
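A quick back-of-the-envelope calculation (ours, for illustration only) puts these figures in context: doubling every two years accounts for roughly a 2.8x gain over the three-year period, so the tenfold data-plane improvement also reflects the focused software and architectural work on packet handling, alongside the jump in core counts mentioned above.

    # Back-of-the-envelope arithmetic on the figures quoted above (illustrative only).

    years = 3                                   # 2010 to 2013
    moores_law_factor = 2 ** (years / 2)        # components double every two years
    print(round(moores_law_factor, 1))          # ~2.8x from transistor doubling alone

    observed_gain = 10                          # tenfold packet-processing improvement
    print(round(observed_gain / moores_law_factor, 1))   # ~3.5x beyond raw doubling

    # Core counts cited in the text: up to 72 cores versus the 4-6 used in 2010-2013.
    print(72 / 6, 72 / 4)                       # 12.0 18.0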

NFV was started by the telco industry to leverage the capability of COTS-based devices to reduce the cost of networking equipment and, more importantly, to introduce innovation and more competition to the networking market.

Since its inception in 2012, running as an Industry Specification Group within ETSI (the European Telecommunications Standards Institute), NFV has proven to be a valuable initiative, not just from a cost perspective but, more importantly, for what it enables telcos and service providers to do: develop, test and launch new services quickly and efficiently.

ETSI set up a number of work streams to tackle issues such as performance, management and orchestration, proof of concept and reference architecture, while externally, organisations like OPNFV (Open Platform for NFV) have brought together a number of vendors and interested parties.

Why do we need NFV? What we already have works!

NFV came into being to solve a number of problems. Dedicated appliances from the big networking vendors typically do one thing and do that thing very well: switching or routing packets, acting as a network firewall, and so on. But because each is dedicated to a particular task and has its own user interface, things can get complicated when there are hundreds of different devices to manage and staff to keep trained and updated. Devices also tend to be used for one specific application, and reuse is sometimes difficult, resulting in expensive obsolescence. Running network functions on a COTS-based platform makes most of these issues go away, resulting in:

  • Lower operating costs (some claim up to 80% less)
  • Faster time to market
  • Better integration between network functions
  • The ability to rapidly develop, test, deploy and iterate a new product
  • Lower risk associated with new product development
  • The ability to rapidly respond to market changes leading to greater agility
  • Less complex operations and better customer relations

And the real benefits are not just in the area of cost savings: they are about time to market, being able to respond quickly to market demands and, in essence, becoming more agile.

The real benefits

If the real benefits of NFV are not just about cost savings but about agility, how is this agility delivered? It comes from a number of different capabilities, for example the ability to orchestrate a number of VNFs and the network to deliver a suite or chain of network functions for an individual user or application. This has been the focus of the ETSI Management and Orchestration (MANO) workstream.

MANO will be crucial to the long-term success of NFV. It provides automation and provisioning, and will interface with existing provisioning and billing platforms such as OSS/BSS. MANO will allow the use and reuse of VNFs, networking objects and chains of services, and, via external APIs, will allow applications to request and control the creation of specific services.

Figure 2 – Orchestration of Virtual Network Functions

Source: STL Partners

Figure 2 shows a hypothetical service chain created for a residential user accessing a network server. The service chain is made up of a number of VNFs that are used as required and then discarded when no longer needed as part of the service. For example, the broadband remote access server (BRAS) becomes a VNF running on a common platform rather than a dedicated hardware appliance. As the user’s set-top box (STB) connects to the network, the authentication component checks that the user is valid and has a current account, but drops out of the chain once this function has been performed. The firewall is used for the duration of the connection, and other components are used as required, for example deep packet inspection (DPI) and load balancing. Equally, as the user accesses other services, such as media, internet and voice, different VNFs can be brought into play, such as a session border controller (SBC) and network storage.
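The behaviour described above can be sketched in a few lines of Python (hypothetical VNF names only, not any orchestrator's API): VNFs join the chain when they are needed and are released once their job is done.

    # Toy illustration of the service chain in Figure 2. VNF names are hypothetical.

    class ServiceChain:
        def __init__(self):
            self.active = []                 # VNFs currently in the data path

        def add(self, vnf):
            self.active.append(vnf)

        def remove(self, vnf):
            self.active.remove(vnf)

    chain = ServiceChain()

    # Set-top box connects: BRAS and authentication VNFs are instantiated.
    chain.add("vBRAS")
    chain.add("vAuth")

    # Once the subscriber is authenticated, the auth function drops out of the
    # chain, while the firewall stays for the duration of the connection.
    chain.remove("vAuth")
    chain.add("vFirewall")

    # Traffic-dependent functions join only when required.
    chain.add("vDPI")
    chain.add("vLoadBalancer")

    print(" -> ".join(chain.active))         # vBRAS -> vFirewall -> vDPI -> vLoadBalancer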

Sounds great, but is it real, is anyone doing anything useful?

The short answer is yes: there are live deployments of NFV in many service provider networks, and NFV is having a real impact on costs and time to market, as detailed in this report. For example:

  • Vodafone Spain’s Lowi MVNO
  • Telefonica’s vCPE trial
  • AT&T Domain 2.0 (see pages 22 – 23 for more on these examples)

 

  • Executive Summary
  • Introduction
  • WTF – what’s the fuss about NFV?
  • Software Defined Networking (SDN)
  • Network Functions Virtualisation (NFV)
  • Why do we need NFV? What we already have works!
  • The real benefits
  • Sounds great, but is it real, is anyone doing anything useful?
  • The Industry Landscape of NFV
  • Where did NFV come from?
  • Any drawbacks?
  • Open Platform for NFV – OPNFV
  • Proprietary NFV platforms
  • NFV market size
  • SDN and NFV – what’s the difference?
  • Management and Orchestration (MANO)
  • What are the leading players doing?
  • NFV – Telco examples
  • NFV Vendors Overview
  • Analysis: the key challenges
  • Does it really work well enough?
  • Open Platforms vs. Walled Gardens
  • How to transition?
  • It’s not if, but when
  • Conclusions and recommendations
  • Appendices – NFV Reference architecture

 

  • Figure 1 – Intel Hardware performance
  • Figure 2 – Orchestration of Virtual Network Functions
  • Figure 3 – ETSI’s vision for Network Functions Virtualisation
  • Figure 4 – Typical Network device showing control and data planes
  • Figure 5 – Metaswitch SBC performance running on 8 x CPU Cores
  • Figure 6 – OPNFV Membership
  • Figure 7 – Intel OPNFV reference stack and platform
  • Figure 8 – Telecom equipment vendor market shares
  • Figure 9 – Autonomy Routing
  • Figure 10 – SDN Control of network topology
  • Figure 11 – ETSI reference architecture shown overlaid with functional layers
  • Figure 12 – Virtual switch conceptualised