NFV: Great Promises, but How to Deliver?

Introduction

What’s the fuss about NFV?

Today, it seems that suddenly everything has become virtual: there are virtual machines, virtual LANs, virtual networks, virtual network interfaces, virtual switches, virtual routers and virtual functions. The two most recent and highly visible developments in Network Virtualisation are Software Defined Networking (SDN) and Network Functions Virtualisation (NFV). They are often mentioned in the same breath, and are related but different.

Software Defined Networking has been around as a concept since 2008 and has seen initial deployments in data centres as a Local Area Networking technology. According to early adopters such as Google, SDN has helped to achieve better utilisation of data centre resources and of Data Centre Wide Area Networks. Urs Hoelzle of Google can be seen discussing Google’s deployment and findings here at the Open Networking Summit in early 2012, and Google claims to get 60% to 70% better utilisation out of its Data Centre WAN. Given the cost of deploying and maintaining service provider networks, this could represent significant cost savings if service providers can replicate these results.

NFV – Network Functions Virtualisation – is just over two years old, yet it is already being deployed in service provider networks and has had a major impact on the networking vendor landscape. Globally, the telecoms and datacomms equipment market is worth over $180bn and has been dominated by five vendors, which between them hold around 50% of the market.

Innovation and competition in the networking market have been lacking, with very few major innovations in the last 12 years: the industry has focussed on capacity and speed rather than anything radically new, and start-ups that do come up with something interesting are quickly swallowed up by the established vendors. NFV has started to rock this steady ship by bringing to the networking market the same technologies that revolutionised IT computing: cloud computing, low-cost off-the-shelf hardware, open source and virtualisation.

Software Defined Networking (SDN)

Conventionally, networks have been built using devices that make autonomous decisions about how the network operates and how traffic flows. SDN offers new, more flexible and efficient ways to design, test, build and operate IP networks by separating the intelligence from the networking device and placing it in a single controller with a view of the entire network. Taking the ‘intelligence’ out of many individual components also means that it is possible to build and buy those components for less, reducing some network costs. Building on ‘open’ standards should make it possible to select best-in-class vendors for different components in the network, introducing innovation and competitiveness.

SDN started out as a data centre technology aimed at making it easier for operators and designers to build and run large-scale data centres. However, it has moved into the Wide Area Network and, as we shall see, it is already being deployed by telcos and service providers.

Network Functions Virtualisation (NFV)

Like SDN, NFV splits the control functions from the data forwarding functions. However, while SDN does this for an entire network, NFV focusses specifically on individual network functions such as routing, firewalls, load balancing and CPE, and looks to leverage developments in Commercial Off-The-Shelf (COTS) hardware such as generic server platforms using multi-core CPUs.

The performance of a device like a router is critical to the overall performance of a network. Historically the only way to get this performance was to develop custom Integrated Circuits (ICs) such as Application Specific Integrated Circuits (ASICs) and build these into a device along with some intelligence to handle things like route acquisition, human interfaces and management. While off the shelf processors were good enough to handle the control plane of a device (route acquisition, human interface etc.), they typically did not have the ability to process data packets fast enough to build a viable device.

But things have moved on rapidly. Vendors like Intel have put specific focus on improving the data plane performance of COTS-based devices, and performance has risen dramatically. Figure 1 shows that in just three years (2010 – 2013) a tenfold increase in packet processing (data plane) performance was achieved. More generally, CPU performance has been tracking Moore’s law, which states that the number of components in an integrated circuit doubles approximately every two years. If the number of components is related to performance, the same can be expected of CPU performance. For example, Intel’s latest processor family, due to ship in the second half of 2015, could have up to 72 individual CPU cores, compared with the four or six cores typical in 2010 – 2013.

Figure 1 – Intel Hardware performance

Source: ETSI & Telefonica

NFV was started by the telco industry to leverage the capability of COTS-based devices to reduce the cost of networking equipment and, more importantly, to introduce innovation and more competition to the networking market.

Since its inception in 2012 as an Industry Specification Group within ETSI (the European Telecommunications Standards Institute), NFV has proven to be a valuable initiative – not just from a cost perspective, but more importantly because of what it enables telcos and service providers to do: develop, test and launch new services quickly and efficiently.

ETSI set up a number of work streams to tackle performance, management and orchestration, proofs of concept, the reference architecture and so on, while externally, organisations such as OPNFV (Open Platform for NFV) have brought together vendors and other interested parties.

Why do we need NFV? What we already have works!

NFV came into being to solve a number of problems. Dedicated appliances from the big networking vendors typically do one thing and do it very well – switching or routing packets, acting as a network firewall and so on. But as each is dedicated to a particular task and has its own user interface, things can get complicated when there are hundreds of different devices to manage and staff to keep trained and updated. Devices also tend to be used for one specific application, and reuse is sometimes difficult, resulting in expensive obsolescence. Running network functions on a COTS-based platform makes most of these issues go away, resulting in:

  • Lower operating costs (some claim up to 80% less)
  • Faster time to market
  • Better integration between network functions
  • The ability to rapidly develop, test, deploy and iterate a new product
  • Lower risk associated with new product development
  • The ability to rapidly respond to market changes leading to greater agility
  • Less complex operations and better customer relations

And the real benefits are not just in the area of cost savings: they are about time to market, being able to respond quickly to market demands and, in essence, becoming more agile.

The real benefits

If the real benefits of NFV are not just about cost savings but about agility, how is this delivered? Agility comes from a number of different aspects, for example the ability to orchestrate a number of Virtual Network Functions (VNFs) and the network itself to deliver a suite or chain of network functions for an individual user or application. This has been the focus of the ETSI Management and Orchestration (MANO) workstream.

MANO will be crucial to the long-term success of NFV. It provides automation and provisioning, and will interface with existing platforms such as the OSS/BSS. MANO will allow the use and reuse of VNFs, networking objects and chains of services, and, via external APIs, will allow applications to request and control the creation of specific services.

Figure 2 – Orchestration of Virtual Network Functions

Source: STL Partners

Figure 2 shows a hypothetical service chain created for a residential user accessing a network server. The service chain is made up of a number of VNFs that are used as required and then discarded when no longer needed as part of the service. For example, the Broadband Remote Access Server becomes a VNF running on a common platform rather than a dedicated hardware appliance. As the user’s set-top box (STB) connects to the network, the authentication component checks that the user is valid and has a current account, but drops out of the chain once this function has been performed. The firewall is used for the duration of the connection, and other components, such as Deep Packet Inspection (DPI) and load balancing, are used as required. Equally, as the user accesses other services such as media, Internet and voice, different VNFs such as a Session Border Controller (SBC) and network storage can be brought into play.
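
To make the service-chaining idea more concrete, here is a minimal sketch in Python that models the chain in Figure 2 as an ordered list of functions applied to a subscriber’s traffic. It is an illustration only: the VNF names, the handle() interface and the ‘set-up only’ flag are our own assumptions, not ETSI MANO’s information model or any vendor’s API.

# Minimal sketch of a service chain, loosely modelled on Figure 2.
# The VNF classes and their methods are illustrative only - they do not
# reflect ETSI MANO's information model or any vendor's API.

class VNF:
    """A virtual network function in the chain."""
    def __init__(self, name, setup_only=False):
        self.name = name
        self.setup_only = setup_only    # e.g. authentication drops out after set-up

    def handle(self, packet):
        print(f"{self.name}: processing {packet}")
        return packet                   # pass traffic on to the next function

def build_residential_chain():
    """Chain assembled for the residential scenario in Figure 2."""
    return [
        VNF("vBRAS"),                           # virtualised Broadband Remote Access Server
        VNF("Authentication", setup_only=True), # validates the subscriber, then drops out
        VNF("Firewall"),                        # stays in the path for the whole session
        VNF("DPI"),                             # invoked as required
        VNF("LoadBalancer"),
    ]

def run_session(chain, packets):
    session_established = False
    for packet in packets:
        for vnf in chain:
            if vnf.setup_only and session_established:
                continue                        # set-up-only VNFs are skipped after the first packet
            packet = vnf.handle(packet)
        session_established = True

if __name__ == "__main__":
    run_session(build_residential_chain(), ["session-setup", "data-1", "data-2"])

In a real NFV deployment this sequencing would be handled by the MANO orchestrator described above rather than by application code.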

Sounds great, but is it real, is anyone doing anything useful?

The short answer is yes: there are live deployments of NFV in many service provider networks, and NFV is having a real impact on costs and time to market, as detailed in this report. For example:

  • Vodafone Spain’s Lowi MVNO
  • Telefonica’s vCPE trial
  • AT&T Domain 2.0 (see pages 22 – 23 for more on these examples)

 

  • Executive Summary
  • Introduction
  • WTF – what’s the fuss about NFV?
  • Software Defined Networking (SDN)
  • Network Functions Virtualisation (NFV)
  • Why do we need NFV? What we already have works!
  • The real benefits
  • Sounds great, but is it real, is anyone doing anything useful?
  • The Industry Landscape of NFV
  • Where did NFV come from?
  • Any drawbacks?
  • Open Platform for NFV – OPNFV
  • Proprietary NFV platforms
  • NFV market size
  • SDN and NFV – what’s the difference?
  • Management and Orchestration (MANO)
  • What are the leading players doing?
  • NFV – Telco examples
  • NFV Vendors Overview
  • Analysis: the key challenges
  • Does it really work well enough?
  • Open Platforms vs. Walled Gardens
  • How to transition?
  • It’s not if, but when
  • Conclusions and recommendations
  • Appendices – NFV Reference architecture

 

  • Figure 1 – Intel Hardware performance
  • Figure 2 – Orchestration of Virtual Network Functions
  • Figure 3 – ETSI’s vision for Network Functions Virtualisation
  • Figure 4 – Typical Network device showing control and data planes
  • Figure 5 – Metaswitch SBC performance running on 8 x CPU Cores
  • Figure 6 – OPNFV Membership
  • Figure 7 – Intel OPNFV reference stack and platform
  • Figure 8 – Telecom equipment vendor market shares
  • Figure 9 – Autonomy Routing
  • Figure 10 – SDN Control of network topology
  • Figure 11 – ETSI reference architecture shown overlaid with functional layers
  • Figure 12 – Virtual switch conceptualised

 

Software Defined People: How it Shapes Strategy (and us)

Introduction: software’s defining influence

Our knowledge, employment opportunities, work itself, healthcare, potential partners, purchases from properties to groceries, and much else can now be delivered or managed via software and mobile apps.

So are we all becoming increasingly ‘Software Defined’? It’s a question stimulated in part by producing research on ‘Software Defined Networks (SDN): A Potential Game Changer’ and Enterprise Mobility, by this video from McKinsey featuring Eric Schmidt, Google’s Executive Chairman, and by a number of observations throughout the past year, particularly at this and last year’s Mobile World Congress (MWC).

But is software really the key?

The rapid adoption of smartphones and tablets, enabled by ever faster networks, is perhaps the most visible and tangible phenomenon in the market. Less visible but equally significant is the huge growth in ‘big data’ – the use of massive computing power to process types and volumes of data that were previously inaccessible – as well as in ‘small data’ – the increasing use of more personalised datasets.

However, what is fuelling these trends is that many core life and business tools are now software of one form or another – in other words, programmes and ‘apps’ that create economic value, utility, fun or efficiency. Software is now the driving force, and the evolving data and hardware are, respectively, by-products and enablers of the applications.

Software: your virtual extra hand

In effect, mobile software is the latest great tool in humanity’s evolutionary path. With nearly a quarter of the world’s population using a smartphone, the human race has never had so much computing power by its side in every moment of everyday life. Many feature phones also possess significant processing power, and the extraordinary reach of mobile can now deliver highly innovative solutions like mobile money transfer even in markets with relatively underdeveloped financial service infrastructure.

How we are educated, employed and cared for are all starting to change with the growing power of mobile technologies, and will all change further and with increasing pace in the next phase of the mobile revolution. Knowing how to get the best from this world is now a key life skill.

The way that software is used is changing and will change further. While mobile apps have become a mainstream consumer phenomenon in many markets in the last few years, the application of mobile, personalised technologies is also changing education, health, employment, and the very fabric of our social lives. For example:

  • Back at MWC 2013 we saw the following fascinating video from Ericsson, part of its ‘Networked Society’ vision, on why education has evolved as it has (to mass-produce workers for factories) and what the possibilities are with advanced technology – well worth a few minutes of your time whether you have kids or not.
  • We also saw this education demo video from a Singapore school from Qualcomm, based on the creative use of phones in all aspects of schooling in the WE Learn project.
  • There is now a growing number of eHealth applications (heart rate, blood pressure, stroke and outpatient care), while productivity apps and the extension of CRM applications like Salesforce into the mobile working context are having an increasingly significant impact.
  • While originally a ‘fixed’ phenomenon, the way we meet and find partners has seen a massive change in recent years. For example, in the US, 17% of recent marriages and 20% of ‘committed relationships’ started in the $1Bn online dating world – another world which is now increasingly going mobile.

The growing sophistication in human-software interactivity

Horace Dediu pointed out at a previous Brainstorm that the disruptive jumps in mobile handset technology have come from changes in the user interface – most recently in the touch-screen revolution accompanying smartphones and tablets.

And the way in which we interact with the software will continue to evolve, from the touch screens of smartphones, through voice activation, gesture recognition, retina tracking, on-body devices like watches, in-body sensors in the blood and digestive system, and even potentially by monitoring brainwaves, as illustrated in the demonstration from Samsung labs shown in Figure 1.

Figure 1: Software that reads your mind?

Source: Samsung Labs

Clearly, some of these techniques are still at an early stage of development. It is a hard call as to which will be the one to trigger the next major wave of innovation (e.g. see Facebook’s acquisition of Oculus Rift), as there are so many factors that influence the likely take-up of new technologies, from price through user experience to social acceptance.

Exploring and enhancing the senses

Interactive goggles / glasses such as Google Glass have now been around for over a year, and augmented reality (AR) applications that overlay information from the virtual world onto images of the real world continue to evolve.

Search is also becoming a visual science – innovations such as Cortexica recognise everyday objects (cereal packets, cars, signs, advertisements, stills from a film, etc.) and return information on how and where you can buy the related items. While it works from a smartphone today, it makes it possible to imagine a world where you open the kitchen cupboard and tell your glasses which items you want to re-order.

Screens will be in increasing abundance, able to interact with passers-by on the street or with you in your home or car. What will be on these screens could be anything that is on any of your existing screens or more – communication, information, entertainment, advertising – whatever the world can imagine.

Segmented by OS?

But is it really possible to define a person by the software they use? There is certainly an ‘a priori’ segmentation arising from device makers’ positioning:

  • Apple’s brand and design ethos have held consistently strong appeal for upmarket, creative users. In contrast, Blackberry for a long time held a strong appeal in the enterprise segment, albeit significantly weakened in the last few years.
  • It is perhaps slightly harder to label Android users, now the largest group of smartphone users. However, the openness of the software leads to freedom, bringing with it a plurality of applications and widgets, some security issues, and perhaps a greater emphasis on ‘work it out for yourself’.
  • Microsoft, once ubiquitous through its domination of the PC universe, now finds itself a challenger in the world of mobiles and tablets and, despite gradually improving sales and reported improvements in OS experience and design, has yet to find a clear identity, other than perhaps being the domain of those willing to try something different. While Microsoft still has a strong hand in the software world through its evolving Office applications, these are not yet hugely mobile-friendly, and this is creating a niche for new players, such as Evernote and others, that have a more focused ‘mobile first’ approach.

Other segments

From a research perspective, there are many other approaches to thinking about what defines different types of user. For example:

  • In adoption, diffusion models such as the Bass Diffusion Model suggest segments like Innovators, Early Adopters, Mass Market and Laggards;
  • Segments based on attitudes to usage, e.g. Lovers, Haters, Functional Users, Social Users, Cost Conscious, etc.;
  • Approaches to privacy and the use of personal data, e.g. Pragmatic, Passive, Paranoid.

It is tempting to hypothesise that there could be meta-segments combining these and other behavioural distinctions (e.g. you might theorise that there would be more ‘haters’ among the ‘laggards’ and the ‘paranoids’ than the ‘innovators’ and ‘pragmatics’), and there may indeed be underlying psychological drivers such as extraversion that drive people to use certain applications (e.g. personal communications) more.

However, other than anecdotal observations, we don’t currently have the data to explore or prove this. This knowledge may of course exist within the research and insight departments of major players and we’d welcome any insight that our partners and readers can contribute (please email contact@telco2.net if so).

Hypothesis: a ‘software fingerprint’?

The collection of apps and software each person uses, and how they use them, could be seen as a software fingerprint – a unique combination of tools showing interests, activities and preferences.

Human beings are complex creatures, and it may be a stretch to say a person could truly be defined by the software they use. However, there is a degree of cause and effect with software. Once you have the ability to use it, it changes what you can achieve. So while the software you use may not totally define you, it will play an increasing role in shaping you, and may ultimately form a distinctive part of your identity.

For example, Minecraft is a phenomenally successful and addictive game. If you haven’t seen it, imagine interactive digital Lego (or watch the intro video here). Children and adults all over the world play on it, make YouTube films about their creations, and share knowledge and stories from it as with any game.

To be really good at it, and to add enhanced features, players install ‘mods’ – essentially software upgrades requiring quite sophisticated code and procedures and an understanding of numerous file types and locations. So through this one game, ten-year-old kids are developing creative, social and IT skills, as well as exploring and creating new identities for themselves.

Figure 2: Minecraft – building, killing ‘creepers’ and coding by a kid near you


Source: Planetminecraft.com

But who is in charge – you or the software?

There are also two broad schools of thought in advanced IT design. One is that IT should augment human abilities and its application should always be controlled by its users. The other is the idea that IT can assist people by providing recommendations and suggestions that are outside the control of the user. An example of this second approach is Google showing you targeted ads based on your search history.

Being properly aware of this will become increasingly important to individuals’ freedom from unrecognised manipulation. Just as it pays to know that embarrassing photos on Facebook may be seen by prospective employers, knowing who is pulling your data strings will be increasingly important to controlling one’s own destiny in the future.

Back to the law of the Jungle?

Many of the opportunities and abilities conferred by software seem perhaps trivial or entertaining. But some will ultimately confer advantages on their users over those who do not possess the extra information, gain those extra moments, or learn that extra winning idea. The questions are: which will you use well; and which will you enable others to use? The answer to the first may reflect your personal success, and the second that of your business.

So while it used to be that your genetics, parents, and education most strongly steered your path, now how you take advantage of the increasingly mobile cyber-world will be a key additional competitive asset. It’s increasingly what you use and how you use it (as well as who you know, of course) that will count.

And for businesses, competing in an ever more resource constrained world, the effective use of software to track and manage activities and assets, and give insight to underlying trends and ways to improve performance, is an increasingly critical competence. Importantly for telcos and other ICT providers, it’s one that is enabled and enhanced by cloud, big data, and mobile.

The Software as a Service (SaaS) application Salesforce is an excellent case in point. It brings instantaneous data on customers and business operations to managers’ and employees’ fingertips on any device. This can confer huge advantages over businesses without such capabilities.

Figure 3: Salesforce delivers big data and cloud to mobile


Source: Powerbrokersoftware.com

 

  • Executive Summary: the key role of mobile
  • Why aren’t telcos more involved?
  • Revenue Declines + Skills Shortage = Digital Hunger Gap
  • What should businesses do about it?
  • All Businesses
  • Technology Businesses and Enablers
  • Telcos
  • Next steps for STL Partners and Telco 2.0

 

  • Figure 1: Software that reads your mind?
  • Figure 2: Minecraft – building, killing ‘creepers’ and coding by a kid near you
  • Figure 3: Salesforce delivers big data and cloud to mobile
  • Figure 4: The Digital Hunger Gap for Telcos
  • Figure 5: Telcos need Software Skills to deliver a ‘Telco 2.0 Service Provider’ Strategy
  • Figure 6: The GSMA’s Vision 2020

Software Defined Networking (SDN): A Potential ‘Game Changer’

Summary: Software Defined Networking is a technological approach to designing and managing networks that has the potential to increase operator agility, lower costs, and disrupt the vendor landscape. Its initial impact has been within leading-edge data centres, but it also has the potential to spread into many other network areas, including core public telecoms networks. This briefing analyses its potential benefits and use cases, outlines strategic scenarios and key action plans for telcos, summarises key vendor positions, and explains why it is so important for both the telco and vendor communities to adopt and exploit SDN capabilities now. (May 2013, Executive Briefing Service, Cloud & Enterprise ICT Stream, Future of the Network Stream)

Figure 1 – Potential Telco SDN/NFV Deployment Phases

Source: STL Partners

Introduction

Software Defined Networking or SDN is a technological approach to designing and managing networks that has the potential to increase operator agility, lower costs, and disrupt the vendor landscape. Its initial impact has been within leading-edge data centres, but it also has the potential to spread into many other network areas, including core public telecoms networks.

With SDN, networks no longer need to be point-to-point connections between operational centres; rather, the network becomes a programmable fabric that can be manipulated in real time to meet the needs of the applications and systems that sit on top of it. SDN allows networks to operate more efficiently in the data centre as a LAN, and potentially also in Wide Area Networks (WANs).

SDN is new and, like any new technology, this means that there is a degree of hype and a lot of market activity:

  • Venture capitalists are on the lookout for new opportunities;
  • There are plenty of start-ups all with “the next big thing”;
  • Incumbents are looking to quickly acquire new skills through acquisition;
  • And not surprisingly there is a degree of SDN “Washing” where existing products get a makeover or a software upgrade and are suddenly SDN compliant.

However, there still isn’t widespread clarity about what SDN is and how it might be used outside of vendor papers and marketing materials, and there are plenty of important questions to be answered. For example:

  • SDN is open to interpretation and is not an industry standard, so what is it?
  • Is it better than what we have today?
  • What are the implications for your business, whether telcos, or vendors?
  • Could it simply be just a passing fad that will fade into the networking archives like IP Switching or X.25 and can you afford to ignore it?
  • What will be the impact on LAN and WAN design and for that matter data centres, telcos and enterprise customers? Could it be a threat to service providers?
  • Could we see a future where networking equipment becomes commoditised just like server hardware?
  • Will standards prevail?

Vendors are to a degree adding to the confusion. For example, Cisco argues that it already has an SDN-capable product portfolio with Cisco ONE. It says that its solution is more capable than solutions dominated by open-source based products, because these have limited functionality.

This executive briefing will explain what SDN is and why it is different from traditional networking, look at the emerging market and some likely use cases, and then consider the implications and benefits for service providers and vendors.

How and why has SDN evolved?

SDN has been developed in response to the fact that basic networking hasn’t really evolved much over the last 30 plus years, and that new capabilities are required to further the development of virtualised computing to bring innovation and new business opportunities. From a business perspective the networking market is a prime candidate for disruption:

  • It is a mature market that has evolved steadily for many years
  • There are relatively few leading players who have a dominant market position
  • Technology developments have generally focussed on speed rather than cost reduction or innovation
  • Low cost silicon is available to compete with custom chips developed by the market leaders
  • There is a wealth of open source software plus plenty of low cost general purpose computing hardware on which to run it
  • Until SDN, no one really took a clean slate view on what might be possible

New features and capabilities have been added to traditional equipment, but they have tended to bloat the software, increasing the cost of both purchasing and operating the devices. Nevertheless, IP networking as we know it has performed the task of connecting two end points very well; it has supported the explosive growth of the Internet and of mobile and mass computing in general.

Traditionally each element in the network (typically a switch or a router) builds up a network map and makes routing decisions based on communication with its immediate neighbours. Once a connection through the network has been established, packets follow the same route for the duration of the connection. Voice, data and video have differing delivery requirements with respect to delay, jitter and latency, but in traditional networks there is no overall picture of the network – no single entity responsible for route planning, or ensuring that traffic is optimised, managed or even flows over the most appropriate path to suit its needs.

One of the significant things about SDN is that it removes the independence, or autonomy, of every networking element: individual devices no longer make network routing decisions. The responsibility for establishing paths through the network, their control and their routing is placed in the hands of one or more central network controllers. The controller is able to see the network as a complete entity and manage its traffic flows, routing, policies and quality of service, in essence treating the network as a fabric and then attempting to get maximum utilisation from that fabric. SDN controllers generally offer external interfaces through which external applications can control and set up network paths.

There has been growing demand to make networks programmable by external applications – data centres and virtual computing are clear examples of where it would be desirable to deploy not just the virtual computing environment but all the associated networking functions and network infrastructure from a single console. With no common control point, the only way of providing interfaces to external systems and applications is to place agents in the networking devices and ask external systems to manage each device individually. This kind of architecture has difficulty scaling, creates a lot of control traffic that reduces overall efficiency, and may end up with multiple applications trying to control the same entity; it is therefore fraught with problems.
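
To illustrate the architectural difference in code, here is a minimal, hypothetical sketch in Python of a controller that holds the whole topology, computes end-to-end paths centrally and pushes forwarding entries to the switches. It is not OpenFlow and not any real controller’s API – just the concept of a single entity with a global view.

# Illustrative sketch of the SDN idea: a central controller holds the full
# topology and computes paths, while switches just store forwarding entries.
# This is not OpenFlow and not any vendor's API - only the concept.

from collections import deque

class Controller:
    def __init__(self, links):
        # links: iterable of (node_a, node_b) pairs describing the whole network
        self.topology = {}
        for a, b in links:
            self.topology.setdefault(a, set()).add(b)
            self.topology.setdefault(b, set()).add(a)
        self.flow_tables = {node: {} for node in self.topology}  # per-switch forwarding entries

    def compute_path(self, src, dst):
        """Breadth-first search over the global view - something no single
        traditional router can do, because none of them sees the whole network."""
        queue, parents = deque([src]), {src: None}
        while queue:
            node = queue.popleft()
            if node == dst:
                path = []
                while node is not None:
                    path.append(node)
                    node = parents[node]
                return list(reversed(path))
            for neighbour in self.topology[node]:
                if neighbour not in parents:
                    parents[neighbour] = node
                    queue.append(neighbour)
        return None

    def install_flow(self, src, dst):
        """Push a forwarding entry for this flow to every switch on the path."""
        path = self.compute_path(src, dst)
        for hop, next_hop in zip(path, path[1:]):
            self.flow_tables[hop][(src, dst)] = next_hop
        return path

if __name__ == "__main__":
    ctrl = Controller([("A", "B"), ("A", "C"), ("B", "D"), ("C", "D")])
    print(ctrl.install_flow("A", "D"))   # e.g. ['A', 'B', 'D']
    print(ctrl.flow_tables["A"])         # {('A', 'D'): 'B'}

In practice a controller would use a southbound protocol such as OpenFlow to program the switches and would expose northbound APIs so that external applications can request paths, as described above.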

Network Functions Virtualisation (NFV)

It is worth noting that an initiative complementary to SDN, called Network Functions Virtualisation (NFV), was started in 2012. The initiative was begun by the European Telecommunications Standards Institute (ETSI) in order to take functions that sit on dedicated hardware – load balancers, firewalls, routers and other network devices – and run them on virtualised hardware platforms, lowering capex, extending useful life and reducing operating expenditure. You can read more about NFV later in the report on page 20.

In contrast, SDN makes it possible to program or change the network to meet a specific, time-dependent need and to establish end-to-end connections that meet specific criteria. The SDN controller holds a map of the current network state and of the requests that external applications are making on the network; this makes it easier to get the best use from the network at any given moment, carry out meaningful traffic engineering and work more effectively with virtual computing environments.

What is driving the move to SDN?

The Internet and the world of IP communications have seen continuous development over the last 40 years. There has been huge innovation and strict control of standards through the Internet Engineering Task Force (IETF). Because of the ad-hoc nature of its development, there are many different functions catering for all sorts of use cases. Some overlap, some are obsolete, but all still have to be supported and more are being added all the time. This means that the devices that control IP networks and connect to the networks must understand a minimum subset of functions in order to communicate with each other successfully. This adds complexity and cost because every element in the network has to be able to process or understand these rules.

But the system works, and it works well. For example, when we open a web browser and a session to stlpartners.com, our browser and our PC initially have no knowledge of how to get to STL’s web server, yet usually within half a second or so the STL Partners web site appears. What actually happens can be seen in Figure 2. Our PC uses a variety of protocols to connect first to a gateway (1) on our network and then to a public name server (2 & 3) in order to query the stlpartners.com IP address. The PC then opens a connection to that address (4) and assumes that the network will route packets of information to and from the destination server. The process is much the same whether using public WANs or private Local Area Networks.

Figure 2 – Process of connecting to an Internet web address

Source: STL Partners
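
The same sequence can be sketched with a few Python standard-library calls: the operating system’s resolver performs the name-server lookups (steps 2 and 3 in Figure 2), and the subsequent TCP connection (step 4) simply trusts the network to route the packets. The host name and port are taken from the example above; timings will vary with resolver caching and network conditions.

# Sketch of the sequence in Figure 2 using Python's standard library.
# getaddrinfo() asks the operating system's resolver (which in turn queries
# DNS servers) for the IP address; the TCP connect then relies on the network
# to route packets to and from that address.

import socket
import time

host = "stlpartners.com"

start = time.time()
addr_info = socket.getaddrinfo(host, 80, proto=socket.IPPROTO_TCP)
ip_address = addr_info[0][4][0]
print(f"Resolved {host} to {ip_address} in {time.time() - start:.3f}s")

start = time.time()
with socket.create_connection((ip_address, 80), timeout=5) as sock:
    print(f"TCP connection established in {time.time() - start:.3f}s")
    # From here an HTTP request would be sent and the page retrieved.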

The Internet is also highly resilient; it was developed to survive a variety of network outages including the complete loss of sub networks. Popular myth has it that the US Department of Defence wanted it to be able to survive a nuclear attack, but while it probably could, nuclear survivability wasn’t a design goal. The Internet has the ability to route around failed networking elements and it does this by giving network devices the autonomy to make their own decisions about the state of the network and how to get data from one point to any other.

While this is of great value in unreliable networks – which is what the Internet looked like during its evolution in the late 70s and early 80s – today’s networks comprise far more robust elements and more reliable links. The upshot is that networks typically operate at a sub-optimal level: unless there is a network outage, routes and traffic paths are mostly static and last for the duration of the connection. If an outage occurs, the routers in the network decide amongst themselves how best to re-route the traffic, with each making its own decisions about traffic flow and prioritisation based on its individual view of the network. In fact, most routers and switches are not aware of the network in its entirety, just of the adjacent devices they are connected to and of the information those devices pass on about the networks and devices they in turn are connected to. It can therefore take some time for the network to converge and stabilise, as we saw in the Internet outages that affected Amazon, Facebook, Google and Dropbox last October.

The diagram in Figure 3 shows a simple router network. Router A knows about the networks on routers B and C because it is connected directly to them and they have informed A about their networks. B and C have also informed A that they can reach the networks and devices on router D. You can see from this model that there is no overall picture of the network and that no one device is able to make network-wide decisions. In order to connect a device on a network attached to A to a device on a network attached to D, A must make a decision based on what B or C tell it.

Figure 3 – Simple router network

Source: STL Partners
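
A toy sketch in Python (not a real routing protocol implementation) of the neighbour-by-neighbour learning just described: each router starts knowing only its directly connected networks and builds the rest of its table from what its neighbours advertise, so no device ever holds the complete picture. The network names and hop-count metric are assumptions for illustration.

# Toy illustration of neighbour-by-neighbour route learning, in the spirit of
# Figure 3. Each router starts knowing only its directly connected networks
# and learns everything else from its neighbours' advertisements. This is a
# deliberately simplified sketch, not a real routing protocol.

class Router:
    def __init__(self, name, connected_networks):
        self.name = name
        self.neighbours = []
        # routing table: network -> (next hop, hop count)
        self.table = {net: (None, 0) for net in connected_networks}

    def advertise(self):
        """Tell neighbours which networks we can reach and at what cost."""
        return {net: cost for net, (_, cost) in self.table.items()}

    def learn(self, neighbour):
        """Merge a neighbour's advertisement; keep only shorter routes."""
        changed = False
        for net, cost in neighbour.advertise().items():
            if net not in self.table or cost + 1 < self.table[net][1]:
                self.table[net] = (neighbour.name, cost + 1)
                changed = True
        return changed

def converge(routers):
    """Keep exchanging advertisements until no router's table changes."""
    while any(r.learn(n) for r in routers for n in r.neighbours):
        pass

if __name__ == "__main__":
    a, b, c, d = (Router(x, [f"net-{x}"]) for x in "ABCD")
    # Topology from Figure 3: A-B, A-C, B-D, C-D
    for x, y in [(a, b), (a, c), (b, d), (c, d)]:
        x.neighbours.append(y)
        y.neighbours.append(x)
    converge([a, b, c, d])
    print(a.table)   # A reaches net-D via B or C, two hops away

Even after this toy network converges, router A only knows a next hop and a hop count for each destination; it has no view of the topology beyond its neighbours, which is precisely the limitation a central SDN controller removes.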

This model makes it difficult to build large data centres with thousands of Virtual Machines (VMs) and offer customers dynamic service creation, because the network only understands physical devices and does not easily allow each VM to have its own range of IP addresses and other IP services. Ideally you would configure a complete virtual system – virtual machines, load balancing, security, network control elements and network configuration – from a single management console, and have these abstract functions mapped to physical computing and networking resources. VMware has coined the term ‘Software Defined Data Centre’ (SDDC) to describe a system that allows all of these elements and more to be controlled by a single suite of management software.

Moreover, returning to the fact that every networking device needs to understand a raft of Internet Requests For Comments (RFCs), all the clever code supporting these RFCs in switches and routers costs money. High-performance processing systems and memory are required in traditional routers and switches in order to inspect and process traffic, even in MPLS networks. Cisco IOS supports over 600 RFCs and other standards. This adds to cost, complexity, compatibility issues, future obsolescence and power/cooling needs.

SDN takes a fresh approach to building networks based on the technologies available today: it places the intelligence centrally, using scalable compute platforms, and leaves the switches and routers as relatively dumb packet-forwarding engines. The control platforms still have to support all the standards, but the platforms the controllers run on are far more powerful than the processors in traditional networking devices and, more importantly, the controllers can manage the network as a fabric rather than each element making its own potentially sub-optimal decisions.

As one proof point that SDN works, in early 2012 Google announced that it had migrated its live data centres to a Software Defined Network, using switches it designed and built itself from off-the-shelf silicon, with OpenFlow as the control path to a Google-designed controller. Google claims many benefits from this system, including better utilisation of its compute power. At the time Google stated it would have liked to purchase OpenFlow-compliant switches, but none were available that suited its needs. Since then, new vendors such as Big Switch and Pica8 have entered the market, delivering relatively low-cost OpenFlow-compliant switches.

To read the Software Defined Networking report in full, including the following sections detailing additional analysis…

  • Executive Summary including detailed recommendations for telcos and vendors
  • Introduction (reproduced above)
  • How and why has SDN evolved? (reproduced above)
  • What is driving the move to SDN? (reproduced above)
  • SDN: Definitions and Advantages
  • What is OpenFlow?
  • SDN Control Platforms
  • SDN advantages
  • Market Forecast
  • STL Partners’ Definition of SDN
  • SDN use cases
  • Network Functions Virtualisation
  • What are the implications for telcos?
  • Telcos’ strategic options
  • Telco Action Plans
  • What should telcos be doing now?
  • Vendor Support for OpenFlow
  • Big Switch Networks
  • Cisco
  • Citrix
  • Ericsson
  • FlowForwarding
  • HP
  • IBM
  • Nicira
  • OpenDaylight Project
  • Open Networking Foundation
  • Open vSwitch (OVS)
  • Pertino
  • Pica8
  • Plexxi
  • Tellabs
  • Conclusions & Recommendations

…and the following figures…

  • Figure 1 – Potential Telco SDN/NFV Deployment Phases
  • Figure 2 – Process of connecting to an Internet web address
  • Figure 3 – Simple router network
  • Figure 4 – Traditional Switches with combined Control/Data Planes
  • Figure 5 – SDN approach with separate control and data planes
  • Figure 6 – ETSI’s vision for Network Functions Virtualisation
  • Figure 7 – Network Functions Virtualised and managed by SDN
  • Figure 8 – Network Functions Virtualisation relationship with SDN
  • Table 1 – Telco SDN Strategies
  • Figure 9 – Potential Telco SDN/NFV Deployment Phases
  • Figure 10 – SDN used to apply policy to Internet traffic
  • Figure 11 – SDN Congestion Control Application