Elisa Smart Factory: How to win over industry leaders in two years

Elisa’s Smart Factory solution

As STL Partners has described in The Coordination Age: A third age of telecoms, moves are afoot in the global digital economy to improve the efficiency of resource utilisation by combining the digital and physical worlds in new and innovative ways. Elisa’s Smart Factory solution is a prime example of how telcos can address this need.

Coordinating manufacturing

In the case of manufacturing industries, understanding and managing the flow and progress of materials and goods through production processes has long been a critical component of business success.

Managing and continually improving complex processes is central to operational success on the supply-side of the manufacturing industry. This includes everything from a floor manager overseeing production, to time-and-motion studies, total quality management, just-in-time production, robotics and automation, and many other managerial and operational approaches.

A number of new concepts and practices are now emerging, driven by the same imperatives but arising somewhat independently in different disciplines, for example:

  • Industry 4.0, ‘the fourth industrial revolution’ – the trend of automation and data exchange in manufacturing industries
  • Digital twins – a virtualised version of a real thing, a bit like an avatar but for a thing rather than a person. It can simulate the real item, interact with it, and exchange information and commands with other digital twins based on pre-defined rules (see the sketch after this list)
  • The Industrial Internet of Things (IIoT) – connecting industrial devices, sensors, equipment, etc., to gather and exchange information, and sometimes perform remote control
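
To make the digital-twin idea concrete, here is a minimal sketch (ours, not Elisa's; the machine, the rule and all names are hypothetical) of a twin that mirrors telemetry from a physical machine and issues a command when a pre-defined rule fires:

```python
# Minimal digital-twin sketch: the twin mirrors the latest telemetry of a
# physical machine and applies pre-defined rules. Illustrative only; the
# machine and rule names are hypothetical, not from Elisa's solution.
from dataclasses import dataclass, field


@dataclass
class DigitalTwin:
    """Virtual counterpart of one physical machine."""
    machine_id: str
    state: dict = field(default_factory=dict)

    def ingest(self, telemetry: dict) -> list[str]:
        """Update the mirrored state, then return any commands the rules fire."""
        self.state.update(telemetry)
        return self._apply_rules()

    def _apply_rules(self) -> list[str]:
        # Pre-defined rule: slow the spindle when it runs too hot.
        commands = []
        if self.state.get("spindle_temp_c", 0) > 80:
            commands.append("REDUCE_SPINDLE_SPEED")
        return commands


twin = DigitalTwin(machine_id="press-07")
print(twin.ingest({"spindle_temp_c": 85, "rpm": 1200}))  # -> ['REDUCE_SPINDLE_SPEED']
```

In a real deployment the telemetry would arrive over an IIoT transport such as MQTT or OPC UA, and the commands would flow back to the machine or to other twins.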

Numerous companies have embarked on the journey to incorporate and use such connected technologies. However, the degree of progress made varies greatly.

A growing industry

Connecting machinery is far from a new idea. Many industrial machines and processes are already highly connected and automated: SCADA (Supervisory Control and Data Acquisition) systems, for example, have been used to control electricity power stations for some sixty years.

What is new is the ability and desire to link these systems together and allow data exchange and a degree of autonomy within managed bounds. This can optimise performance, improve productivity, and ultimately lead to new operational business models.

There are many different possible paths to achieving these ends. Powerful industrial players and consortia are all trying to establish leadership in different ways: heavyweight contenders on the industry side include GE, Bosch, Siemens and PTC, with consortia including the somewhat mystically titled AllSeen Alliance.

STL Partners will explore the wider opportunity and main players competing in this field in an upcoming report titled ‘Why we need an Internet for Things’.

Enter Elisa, the innovative Finlander

Elisa is the leading Finnish mobile and fixed operator and the No.2 player in Estonia, with 6.2 million customers.

Yet despite its relatively small footprint compared to some of the industry giants, STL Partners regards Elisa as one of the most innovative operators in the world, and certainly in Europe. Indeed, 18% of Finnish business customers say that it is the most innovative IT actor in its market, compared to 6% for CGI and 5% for Fujitsu.

One of its notable recent innovations is a fully automated Network Operations Centre (NOC). To create this, Elisa had to go through its own journey of process engineering and automation.

Elisa now resells its Elisa Automate NOC solutions to other operators. Similarly, it has leveraged the resulting IP and learning to create Elisa Smart Factory, a solution to help global enterprise customers achieve the levels of success Elisa has achieved itself.

Our thanks to Henri Korpi, EVP New Business Development, and Kari Terho, General Manager, Smart Factory at Elisa, who talked to us openly about the proposition, the business, and how it came into existence.

Contents:

  • Executive Summary 
  • Introduction
  • Understanding manufacturing customers’ problems
  • Unplanned downtime
  • Unstable production quality
  • Lack of visibility
  • Practical obstacles to smart manufacturing
  • How Elisa approached the solution
  • Creating a service operation centre
  • Smart Factory’s claims
  • How did Elisa get here?
  • “There’s loads of discussion of which platform is best. What you actually need is a solution”
  • Conclusions
  • Success factors and lessons for others
  • Challenges
  • Next steps

Figures:

  1. Downtime, data usage and visibility – the three dogs of manufacturing
  2. Elisa Smart Factory Schematic
  3. Elisa Smart Factory screenshot
  4. Typical business objectives of Smart Factory solutions
  5. What an Elisa 3D Digital Twin looks like
  6. A high level view from Elisa’s “End-to-End Cockpit”
  7. Results from Elisa’s automated NOC

Facebook’s Telecom Infra Project: What is it good for?

Introduction

In early 2016, Facebook launched the Telecom Infra Project (TIP). It was set up as an open industry initiative to reduce the costs of creating telecoms network equipment and the associated processes and operations, primarily through open-source concepts applied to network hardware, interfaces and related software.

One of the key objectives was to split existing proprietary vendor “black boxes” (such as cellular base stations, or optical multiplexers) into sub-components with standard interfaces. This should enable competition for each constituent part, and allow the creation of lower-cost “white box” designs from a wider range of suppliers than today’s typical oligopoly. Critically, this is expected to enable much broader adoption of networks in developing markets, where costs – especially for radio networks – remain too high for full deployments. Other outcomes may be cheaper 5G infrastructure, or specialised networks for indoor use or vertical niches.
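
To illustrate the unbundling idea (this is our sketch, not TIP's actual interface definitions; all class and method names are hypothetical), fixing a standard interface between sub-components is what lets any vendor's implementation slot in:

```python
# Sketch of "unbundling" a vendor black box: once sub-components sit behind
# standard interfaces, implementations from different vendors become
# interchangeable. Hypothetical interfaces, not TIP's real specifications.
from abc import ABC, abstractmethod


class BasebandUnit(ABC):
    """Standard interface any baseband vendor must implement."""

    @abstractmethod
    def encode(self, payload: bytes) -> bytes: ...


class RadioHead(ABC):
    """Standard interface any radio-head vendor must implement."""

    @abstractmethod
    def transmit(self, samples: bytes) -> None: ...


class WhiteBoxBaseband(BasebandUnit):
    def encode(self, payload: bytes) -> bytes:
        return payload  # placeholder for real channel coding


class WhiteBoxRadio(RadioHead):
    def transmit(self, samples: bytes) -> None:
        print(f"transmitting {len(samples)} bytes")


# Any mix of vendors works, because only the interface is fixed:
bbu, rrh = WhiteBoxBaseband(), WhiteBoxRadio()
rrh.transmit(bbu.encode(b"user data"))
```

The competitive point is that each interface boundary becomes a market of its own, rather than one vendor supplying the whole box.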

TIP’s emergence parallels a variety of open-source initiatives elsewhere in telecoms, notably ONAP – the merger of two NFV orchestration projects, AT&T’s ECOMP and the Linux Foundation’s OPEN-O. It also parallels many other approaches to improving network affordability for developing markets.

TIP got early support from a number of operators (including SK Telecom, Deutsche Telekom, BT/EE and Globe), hosting/cloud players like Equinix and Bandwidth, semiconductor suppliers including Intel, and various (mostly radio-oriented) network vendors like Radisys, Vanu, IP Access, Quortus and – conspicuously – Nokia. It has subsequently expanded its project scope, governance structure and member base, with projects on optical transmission and core-network functions as well as cellular radios.

More recently, it has signalled that not all its output will be open-source: it will also support RAND (reasonable and non-discriminatory) licensing of intellectual property rights (IPR). This reflects push-back from some vendors against completely relinquishing revenues from their (R&D-heavy) IPR. While services, integration and maintenance offered around open-source projects have potential, it is less clear that they will attract the early-stage investment necessary for continued deep innovation in cutting-edge network technology.

At first sight, it is not obvious why Facebook should be the leading light here. But contrary to popular belief, Facebook – like Google, Amazon and Alibaba – is not really just a “web” company. They all design or build physical hardware as well – servers, network gear, storage, chips, data-centres and so on. They all optimise the entire computing / network chain to serve their needs, with as much efficiency as possible in terms of power consumption, physical space requirements and so on. They all have huge hardware teams and commit substantial R&D resources to the messy, expensive business of inventing new kit. Facebook in particular has set up Internet.org to help get millions online in the developing world, and is still working on its Aquila communications drones. It also set up OCP (the Open Compute Project) as a very successful open-source project for data-centre design; in many ways TIP is OCP’s newer, more telco-oriented cousin.

Many in the telecom industry overlook the fact that their Internet peers now make more true “technology” investment – and especially networking innovation – than most operators. Some operators – notably DT and SKT – are pushing back against the vendor “establishment”, which they see as stifling network innovation by continuing to push monolithic, proprietary black boxes.

Contents:

  • Executive Summary
  • Introduction
  • What does Open-Source mean, applied to hardware?
  • Focus areas for TIP
  • Overview
  • Voyager
  • OpenCellular
  • Strategic considerations and implications
  • Operator involvement with TIP
  • A different IPR model to other open-source domains
  • Fit with other Facebook initiatives
  • Who are the winners?
  • Who are the losers?
  • Conclusions and Recommendations

Figures:

  • Figure 1: A core TIP philosophy is “unbundling” components of vendor “black boxes”
  • Figure 2: OpenCellular functional architecture and external design
  • Figure 3: SKT sees open-source, including TIP, as fundamental to 5G

Mobile/Multi-Access Edge Computing: How can telcos monetise this cloud?

Introduction

A formal definition of MEC is that it enables IT, NFV and cloud-computing capabilities within the access network, in close proximity to subscribers. Those edge-based capabilities can be provided to internal network functions, in-house applications run by the operator, or potentially third-party partners / developers.

There has long been a vision in the telecoms industry to put computing functions at local sites. In fixed networks, operators have often worked with CDN and other partners on distributed network capabilities, for example. In mobile, various attempts have been made to put computing or storage functions alongside base stations – both big “macro” cells and in-building small/pico-cells. Part of the hope has been the creation of services tailored to a particular geography or building.

But besides content caching, none of these historic concepts and initiatives gained much traction. It turns out that “location-specific” services can be delivered perfectly well from central facilities, as long as the endpoint knows its own location (e.g. using GPS) and communicates this to the server (see the sketch below).
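
A minimal sketch of that centralised pattern (our illustration; the data and all names are made up): the client reports its GPS fix, and a central server does the geo-filtering, with no edge infrastructure involved:

```python
# Centralised location-specific service: the endpoint sends its own GPS fix
# to a central server, which filters content by distance. Self-contained
# sketch; the points of interest and all names are invented.
from math import asin, cos, radians, sin, sqrt

POINTS_OF_INTEREST = [
    {"name": "Cafe", "lat": 60.1699, "lon": 24.9384},
    {"name": "Museum", "lat": 60.1756, "lon": 24.9316},
]


def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS fixes, in kilometres."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))


def nearby(client_lat, client_lon, radius_km=1.0):
    """What the central server returns once the client reports its position."""
    return [p["name"] for p in POINTS_OF_INTEREST
            if haversine_km(client_lat, client_lon, p["lat"], p["lon"]) <= radius_km]


print(nearby(60.17, 24.94))  # -> ['Cafe', 'Museum']
```

The weakness of this pattern, and the opening for MEC, appears only when the response must be faster, or more network-aware, than a round trip to the central server allows.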

This is now starting to change. In the last three years, various market and technical trends have re-established the desire for localised computing. Standards have started to evolve, and early examples have emerged. Multiple groups of stakeholders – telcos and their network vendors, application developers, cloud providers, IoT specialists and various others – have (broadly) aligned to drive the emergence of edge/fog computing. While there are numerous competing architectures and philosophies, there is clearly some scope for telco-oriented approaches.

While the origins of MEC (and the original “M”) come from the mobile industry, driven by visions of IoT, NFV and network-slicing, the pitch has become more nuanced, and now embraces fixed/cable networks as well – hence the renaming to “multi-access”.

Figure 1: A taxonomy of mobile edge computing

Source: IEEE Conference Paper, Ahmed & Ahmed, https://www.researchgate.net/publication/285765997

Background market drivers for MEC

Before discussing specific technologies and use-cases for MEC, it is important to contextualise some other trends in telecoms that are helping build a foundation for it:

  • Telcos need to reduce costs & increase revenues: This is a bit “obvious” but bears repeating. Most initiatives around telco cloud and virtualisation are driven by these two fundamental economic drivers. Here, they relate to a desire to (a) reduce network capex/opex by shifting from proprietary boxes to standardised servers, and (b) increase “programmability” of the network to host new functions and services, and allow them to be deployed/updated/scaled rapidly. These underpin broader trends in NFV and SDN, and then indirectly to MEC and edge-computing.
  • New telco services may be inherently “edge-oriented”: IoT, 5G, vertical enterprise applications and new consumer services like IPTV all fit into both the virtualisation story and the need for distributed capabilities. For example, industrial IoT connectivity may need realtime control functions for machinery, housed extremely close by, for millisecond (or less) latency. Connected vehicles may need roadside infrastructure. Enterprises might demand on-premise secure data storage, even for cloud-delivered services, for compliance reasons. Various forms of AI (such as machine vision and deep learning) involve particular needs and new ways of handling data.
  • The “edge” has its own context data: Some applications are not just latency-sensitive in terms of response between user and server, but also need other local, fast-changing data such as cell congestion or radio-interference metrics. Going all the way to a platform in the core of the network to query that status may take longer than the status takes to change. The length of the “control loop” may mean that stale or wrong contextual data is returned, and the wrong action taken by the application. Locally-delivered information, via “edge APIs”, could be more timely (see the sketch after this list).
  • Not all virtual functions can be hosted centrally: While a lot of the discussion around NFV involves consolidated data-centres and the “telco cloud”, this does not apply to all network functions. Certain things can indeed be centralised (e.g. billing systems, border/gateway functions between core network and public Internet), but other things make more sense to distribute. For example, Virtual CPE (customer premises equipment) and CDN caches need to be nearer to the edge of the network, as do some 5G functions such as mobility management. No telco wants to transport millions of separate video streams to homes, all the way from one central facility, for instance.
  • There will therefore be localised telco compute sites anyway: Since some telco network functions have to be located in a distributed fashion, there will need to be some data-centres either at aggregation points / central offices or final delivery nodes (base stations, street cabinets etc.). Given this requirement, it is understandable that vendors and operators are looking at ways to extend such sites from the “necessary” to the “possible” – such as creating more generalised APIs for a broader base of developers.
  • Radio virtualisation is slightly different to NFV/SDN: While most virtualisation focus in telecoms goes into developments in the core network, or routers/switches, various other relevant changes are taking place. In particular, the concept of C-RAN (cloud-RAN) has taken hold in recent years, where traditional mobile base stations (usually called eNodeBs) are sometimes split into the electronics “baseband” unit (BBU) and the actual radio transmit/receive component, called the remote “radio head” (RRH). A number of BBUs can be clustered together at one site (sometimes called a “BBU hotel”), with fibre “front-haul” connecting the RRHs. This improves the efficiency of both power and space utilisation, and also means the BBUs can be combined and virtualised – and perhaps have extra compute functions added.
  • Property business interests: Telcos have often sold or rented physical space in their facilities – colocation of equipment racks for competitive carriers, or servers in hosting sites and data-centres. In turn, they also rely on renting space for their own infrastructure, especially for siting mobile cell-towers on roofs or walls. This two-way trade continues today – and the idea of mobile edge computing as a way to sell “virtual” space in distributed compute facilities maps well to this philosophy.
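
The control-loop point above can be made concrete with a little arithmetic (our sketch; the 20 ms figure is an illustrative assumption, not a measured value): context data is only usable if the round trip to fetch it is shorter than the interval over which the metric changes.

```python
# Control-loop staleness: a context metric is only actionable if the round
# trip to fetch it beats the rate at which the metric changes. The numbers
# below are illustrative assumptions, not measurements.

def context_is_usable(fetch_rtt_ms: float, change_interval_ms: float) -> bool:
    """True if the metric is unlikely to have changed before the reply lands."""
    return fetch_rtt_ms < change_interval_ms


# Assume radio-interference metrics at a cell change roughly every 20 ms.
CHANGE_INTERVAL_MS = 20.0

# Central platform, ~60 ms round trip: the answer is stale on arrival.
print(context_is_usable(60.0, CHANGE_INTERVAL_MS))  # -> False

# Edge API at the cell site, ~5 ms round trip: still fresh.
print(context_is_usable(5.0, CHANGE_INTERVAL_MS))   # -> True
```

This is why “edge APIs” exposing radio and congestion metrics are hard to replicate from a central cloud, however fast the central server itself is.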

Contents:

  • Executive Summary
  • Introduction
  • Background market drivers for MEC
  • Why Edge Computing matters
  • The ever-wider definition of “Edge”
  • Wider market trends in edge-computing
  • Use-cases & deployment scenarios for MEC
  • Horizontal use-cases
  • Addressing vertical markets – the hard realities
  • MEC involves extra costs as well as revenues
  • Current status & direction of MEC
  • Standards path and operator involvement
  • Integration challenges
  • Conclusions & Recommendations

Figures:

  • Figure 1: A taxonomy of mobile edge computing
  • Figure 2: Even within “low latency” there are many different sets of requirements
  • Figure 3: The “network edge” is only a slice of the overall cloud/computing space
  • Figure 4: Telcos can implement MEC at various points in their infrastructure
  • Figure 5: Networks, Cloud and IoT all have different starting-points for the edge
  • Figure 6: Network-centric use-cases for MEC suggested by ETSI
  • Figure 7: MEC needs to integrate well with many adjacent technologies and trends