Full Article: Aligning M2M with Telco 2.0 Strategies

Summary: A review of Telenor, Jasper Wireless and KPN’s approaches to M2M,
examining how M2M strategy needs to fit with an operator's future
broadband business model strategy. (October 2010)


M2M: escaping the cottage industry

The M2M (Machine-to-Machine) market, also known as “Embedded
Mobile”, has frequently been touted as a major source of future growth for the
industry. Verizon Wireless, for example, has set a target of 400% mobile
penetration, implying three embedded devices for each individual subscriber.
However, it is widely considered that this market is cursed by potential –
success always seems to be five years away. At this Spring’s Telco 2.0
Executive Brainstorm, delegates described it as being “sub-scale” and a
“cottage industry”.

Specific technical, operational, and economic issues have
driven this initial fragmentation. M2M is characterised by diversity – this is
inevitable, as there are thousands of business processes in each of tens of
thousands of sub-sectors across the industrial economy. As well as the highly
tailored nature of the applications, there is considerable diversity in
hardware and software products, and new products will have to coexist with many
legacy systems. These many diverse but necessary combinations have provided
fertile ground for the separate ‘cottage industries’.

As a result, despite the large total market size, it is difficult to build
scale. The high degree of specialisation in each sub-market also acts as a
barrier to entry. Volume is critical, as typical ARPUs for embedded devices are
only a fraction of those we have come to expect from human subscribers. This in
turn implies that efficiency and project execution are extremely important –
there is little margin for error. Finally, with so much specialisation at both
the application and device ends of the equation, it is hard to see if and where
there is much room for generic functionality in the middle.

Special Technical and Operational Challenges

The technical problems are challenging. M2M applications are
frequently safety-critical, operations-critical, or both. This sets a high bar
in terms of availability and reliability. They often have to operate in
difficult environments. Information security issues will be problematic and new
technologies such as the “Internet of Things”/ubiquitous computing will make
new demands in terms of disclosure that contradict efforts to secure the
system. An increasingly common requirement is for embedded devices to
communicate directly and to self-organise – in the past, M2M systems have
typically used a client-server architecture and guaranteed security by
isolating their communications networks from the wider world. The security
requirements of a peer-to-peer, internetworked M2M system are qualitatively
different to those of traditional Supervisory, Control, and Data Acquisition
(SCADA) systems.

One of the reasons for customer interest in self-organising
systems is that M2M projects often involve large numbers of endpoints, which
may be difficult to access once deployed, and the costs of managing the system
can be very high. How are the devices deployed, activated, maintained, and
withdrawn? How does the system authenticate them? Can a new device appearing on
the scene be automatically detected, authenticated, and connected? A related
problem is that devices are commonly integrated in industrial assets that have
much longer design lives than typical cellular electronics; computers are
typically depreciated over 3 years, but machine tools, vehicles, and other
plant may have a design life of 30 years or more.

This implies that the M2M element must be repeatedly upgraded during its
lifetime, and if possible, this should happen without a truck roll. (The asset,
after all, may be an offshore wind turbine, in which case no-one will be able
to visit it without using a helicopter.) This also requires that upgrades can
be rolled back in the event they go wrong.
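The upgrade-with-rollback requirement can be sketched as a simple A/B slot scheme, in which a new image only becomes the active firmware after it passes an integrity check and a post-boot self-test. This is a minimal illustrative sketch in Python, not any particular operator's implementation; all the names (`DualSlotUpdater`, `health_check`) are invented for the example.

```python
import hashlib


class DualSlotUpdater:
    """Illustrative A/B firmware update with automatic rollback.

    The device keeps two firmware slots; an update is written to the
    inactive slot and only becomes permanent after a successful health
    check, so a bad image never strands an unreachable device.
    """

    def __init__(self, current_image: bytes):
        self.slots = {"A": current_image, "B": b""}
        self.active = "A"

    def apply_update(self, image: bytes, expected_sha256: str) -> bool:
        # Refuse images that fail integrity verification.
        if hashlib.sha256(image).hexdigest() != expected_sha256:
            return False
        inactive = "B" if self.active == "A" else "A"
        self.slots[inactive] = image
        previous = self.active
        self.active = inactive          # tentative switch-over
        if not self.health_check():
            self.active = previous      # automatic rollback
            return False
        return True

    def health_check(self) -> bool:
        # Stand-in for a real post-boot self-test on the device.
        return len(self.slots[self.active]) > 0
```

The point of the two-slot layout is that the old image is never overwritten until the new one has proven itself, which is what makes remote recovery possible without a site visit.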

The Permanent Legacy Environment

We’ve already noted that there are a great variety of
possible device classes and vendors, and that new deployments will have to
co-exist with legacy systems. In fact, given the disparity between their
upgrade cycles and the design lives of the assets they monitor, it’s more
accurate to say that these devices will exist in a permanent legacy
environment.

Solution: The Importance of System Assurance

Given the complex needs of M2M applications, just providing
GPRS connectivity and modules will not be enough. Neither is there any reason
to think operators will be better than anyone else at developing industrial
process control or management-information systems. However, look again at the
issues we’ve just discussed – they cluster around what might be termed “system
assurance”. Whatever the application or the vendor, customers will need to be
able to activate, deactivate, identify, authenticate, read-out, locate,
command, update, and rollback their fleet of embedded devices. It is almost
certainly best that device designers decide what interfaces their product will
have as extensions to a standard management protocol. This also implies that
the common standard will need to include a function to read out what extensions
are available on a given device. The similarities with the well-known SNMP
(Simple Network Management Protocol) and with USSD are extensive.
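To make the idea concrete, here is a hypothetical sketch in Python: every device answers a small standard command set, and one standard command ("capabilities", by loose analogy with walking an SNMP MIB) enumerates whatever vendor-specific extensions the device carries. All command and class names are invented for illustration.

```python
# A standard base command set that every conforming device must answer.
STANDARD_COMMANDS = {"status", "location", "capabilities"}


class ManagedDevice:
    """Toy model of a device exposing standard commands plus extensions."""

    def __init__(self, device_id: str, extensions: dict):
        self.device_id = device_id
        # extensions maps an extension command name to its handler function
        self.extensions = extensions

    def call(self, command: str, *args):
        if command == "capabilities":
            # The key standard function: enumerate everything this device
            # supports, so a generic management platform can discover its
            # vendor-specific extensions at runtime.
            return sorted(STANDARD_COMMANDS | set(self.extensions))
        if command == "status":
            return "ok"
        if command == "location":
            return (0.0, 0.0)  # placeholder fixed location
        if command in self.extensions:
            return self.extensions[command](*args)
        raise ValueError(f"unsupported command: {command}")


# Example: a metering device adding one vendor extension.
meter = ManagedDevice("meter-001", {"read-register": lambda n: 42})
```

Calling `meter.call("capabilities")` returns both the standard commands and `read-register`, which is exactly the "read out what extensions are available" function the common standard would need.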

These are the problems we need to solve. Are there
technology strategies and business models that operators can use to profit by
solving them?

We have encountered a number of examples of how operators
and others have answered this question.

Three Operators’ Approaches

1. Telenor: Comprehensive Platform

Telenor Objects is a platform for handling the management,
systems administration, information assurance, and applications development of
large, distributed M2M systems. The core of the product is an open-source
software application developed in-house at Telenor. Commercially, Objects is
offered as a managed service hosted in Telenor’s data centres, either with or
without Telenor data network capacity. This represents concentration on the
system assurance problems we discussed above, with a further concern for rapid
applications development and direct device-to-device communication.

2. Jasper: Connectivity Broker

Several companies – notably Jasper Wireless, Wyless plc.,
and Telenor’s own Connexion division – specialise in providing connectivity for
M2M applications as a managed service. Various implementations exist, but a
typical one is a data-only MVNO with either wholesale or roaming relationships
to multiple physical operators. As well as acting as a broker in wholesale data
service, they may also provide some specialised BSS-OSS features for M2M work,
thus tackling part of the problems given above.

3. KPN: M2M Happy Pipe

KPN (an investor in Jasper Wireless) has recently announced
that it intends to deploy a CDMA450 network in the Netherlands exclusively for
M2M applications. Although this is a significant CAPEX commitment to the low
margin connectivity element of the M2M market, it may be a valid option.
Operating at 450MHz, as opposed to 900/1800/1900MHz GSM or 2100MHz UMTS,
provides much better building penetration and coverage at the cost of reduced
capacity. The majority of M2M applications are low bandwidth, many of them will
be placed inside buildings or other radio-absorbing structures, and the low
ARPUs imply that cost minimisation will be significantly more important than
capacity. Where suitable spectrum is available, and a major launch customer –
for example, a smart grid project – exists to justify initial expenditure, such
a multi-tenant data network may be an attractive opportunity. However, this
assumes that the service-enablement element of the product is provided by
someone else – which may be the profitable element.

Finally, Verizon Wireless’s Open Development Initiative,
rather than being a product, is a standardisation initiative intended to
increase the variety of devices available for M2M implementers by speeding up
the process of homologating (the official term) new modules. The intention is
to create a catalogue of devices whose characteristics can be trusted and whose
control interfaces are published. This is not a lucrative business, but
something like it is necessary to facilitate the development of M2M hardware
and software.

Horizontal Enablers

These propositions have in common that they each represent a
different horizontal element of the total M2M system-of-systems –
whether it’s the device-management and applications layer, as in Telenor
Objects, a data-only MVNO such as Connexion or Wyless, or a radio network like
KPN’s, it’s a feature or capability that is shared between different vertical
silos and between multiple customers.

In developing horizontal enabler capabilities, operators need both to drive the
development and growth of what is effectively a new market, and to ensure that
they are adding value and getting paid for it. There is a natural tension
between these objectives.

The tension is between providing a compelling opportunity to
potential ecosystem partners (and in particular, offering them low cost access
to a large potential market) and securing a clear role for providers to extract
value (in particular, through differentiation).


Tensions between operators and users

Linux: a case study

To explore operator options, we have looked to the
experience of Linux. This is an example of how the demands of a highly diverse
user base can be tackled through horizontalisation, modular design, and open
source development. Since the 1990s, the operating system has come to include a
huge diversity of specialised variants, known as distributions. These consist
of elements that are common to all Linux systems – such as one of the various
kernels which provide the core operating system functions, the common API,
drivers for different hardware devices, and a subset of a wide range of
software libraries that provide key utility programs – and modules specific to
the distribution, that implement its special features.

 For example, Red Hat
Enterprise Linux and OpenSUSE are enterprise-optimised distributions, CentOS is
frequently used for Asterisk and other VoIP applications, Ubuntu is a consumer
distribution (which itself has several specialised variants such as Edubuntu
for educational applications), Android is a mobile-optimised distribution,
Slackware and Debian exist for hardcore developers, Quagga and Zebra are
optimised for use as software-based IP routers, and Wind River produces
ultra-low power systems for embedded use.

In fact, it’s probably easier to illustrate this than it is
to describe it. The following diagram illustrates the growing diversity of the
Linux family.

The evolution of Linux distributions over time

The reason why this has been both possible and tolerable is the
horizontalised, open-source, and modular nature of Linux. It could easily have
been far too difficult to produce a version for minimal industrial devices,
another for desktop PCs, and yet another for supercomputers. Or the effort to
do so could have created a morass of mutually incompatible subprojects.

In creating a specialised distribution (or ‘distro’), it’s
possible to rely on the existing features that span the various distributions
and deal with the requirements they have in common. Similarly, a major
improvement in one of those core features has a natural source of scale, and
will tend to attract community involvement in its maintenance, as all the other
distros will rely on it. This structure both supports specialisation and
innovation, and helps to scale up support for the features everyone uses.

The Linux kernel – horizontal specialisation in action

To recap, we think that M2M devices may be a little like
this – very different, but relying on at least a subset of the features in a
common specification. The Linux analogy is especially important given that a
lot of them are likely to use some sort of embedded Linux platform. Minimal
common features are likely to cluster around:

  • Activation/Deactivation – initial switch on of a
    device, provisioning it with connectivity, and eventually switching it off

  • Authentication – checking if this is the device
    it should be

  • Update/Rollback – updating the software and
    firmware on a device, and reversing this if it goes wrong

  • Device Discovery – detecting the presence of new
    devices

  • State Readout – get the current values for
    whichever parameters the device is monitoring

  • Location – where is the device?

  • Device Status – is it working?

  • Generic Event notification parameters – provide
    for notifications to and from devices that are specified by the user

This list is likely to be extended by device implementers
and software developers with device- and application-specific commands and data
formats, so there will also need to be a function to get a device’s interfaces
and capabilities. Technically, this has considerable commonality with
protocols such as USB, SNMP (Simple Network Management Protocol), and SyncML –
it’s possible that these functions might be implemented as extensions to one
of those protocols.
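As a rough illustration of how the common feature list above might look to an application developer, here is a toy in-memory fleet manager in Python covering device discovery, authentication, activation/deactivation, state readout, and device status. Everything here (`FleetManager`, the shared-secret check) is a hypothetical sketch for the sake of the example, not a real protocol or product.

```python
class FleetManager:
    """Toy in-memory sketch of the common M2M management functions."""

    def __init__(self):
        self.devices = {}  # device_id -> state dict

    def discover(self, device_id: str, credential: str) -> None:
        # Device Discovery + Authentication: a new endpoint announces
        # itself and is admitted only with a valid credential. A real
        # system would use SIM/USIM-style credentials, not a string.
        if credential != "shared-secret":
            raise PermissionError("authentication failed")
        self.devices[device_id] = {"active": False, "readings": {}}

    def activate(self, device_id: str) -> None:
        # Activation: provision the device with connectivity.
        self.devices[device_id]["active"] = True

    def deactivate(self, device_id: str) -> None:
        # Deactivation: eventual switch-off at end of life.
        self.devices[device_id]["active"] = False

    def read_state(self, device_id: str) -> dict:
        # State Readout: current values of monitored parameters.
        return dict(self.devices[device_id]["readings"])

    def status(self, device_id: str) -> str:
        # Device Status: is it working?
        return "up" if self.devices[device_id]["active"] else "down"
```

Update/rollback, location, and generic event notification would slot into the same interface; the point is that every vertical application, however specialised, ends up needing some version of these calls.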

For our purposes, it’s more interesting to note that these
functions have a lot in common with telcos’ own competences with regard to
device management, activation, OSS/BSS, and the SIM/USIM. Operators in general,
and specifically mobile operators, already have to detect, authenticate,
provision-on, update, and eventually end-of-life a diverse variety of mobile
devices. As much of this as possible must happen over-the-air and
automatically.

It is worth noting that Telstra recently announced their
move into the M2M market. Although they are doing so with Jasper Wireless as a
partner, the product (Telstra M2M Wireless Control Centre) is a Web-based
self-service application for customers to activate and administer their own
devices.

The commercial strategies of Linux vendors

Returning to the IT world, it’s worth asking “how do the
Linux vendors make money?” After all, their product is at least theoretically
free. We see three options.

  • Option 1 – Red Hat, Novell

Both of these major IT companies maintain their own Linux
distribution (RHEL and OpenSUSE respectively, two of the most common enterprise
distros) and are very significant contributors of code to the core development
process. They also develop much application-layer software.

As well as releasing the source code, though, they also
offer paid-for, shrink-wrapped versions of the distributions, often including
added extras, and custom installations for major enterprise projects.

Typically, a large part of the commercial offering in such a
deal consists of the vendor providing technical support, from first line up to
systems integration and custom software development, and consulting services
over the lifetime of the product. It has been remarked that Linux is only free
if you value your own time at zero – this business model externalises the
maintenance costs and makes them into a commercial product that supports the
“free” element.

  •  Option 2 – IBM

Although IBM has long had its own proprietary Unix-like
operating system, through the 2000s it has become an ever more significant
Linux company – the only enterprise that could claim to be bigger would be
Google. Essentially, they use it as just another software option for their IT
consulting and managed services operation to sell, with the considerable
advantages of no upstream licence costs, very broad compatibility, and maximum
scope for custom development. In return, IBM contributes significant resources
to Linux, and to other open-source projects, notably OpenOffice.

  • Option 3 – Rackspace

And, of course, one way to make money from Linux is good
old-fashioned hosting – they call it the cloud these days. Basically, this
option captures any sort of managed-service offering that uses it as a core
enabler, or even as the product itself.

The big divide between the options, in the end, is the cost
of entry and the form it takes. If you aim to tackle Option 1, there is no
substitute for very significant investment in technical R&D, at least to
the level of Objects. Building up the team, the infrastructure, and significant
original technology is the entry ticket. Operators aren’t – with some
honourable exceptions – the greatest at internal innovation, so beware.

Telenor: flexibility through integration of multiple strategies

With Objects, Telenor has chosen this daring course.
However, they have also hedged their bets between the Red Hat/Novell model and
the managed-service model, by integrating elements of options 1 and 3. Objects
is technically an open-source software play, and commercially/operationally a
hosted service based in their existing data centre infrastructure. Its business
model is solidly based on usage-based subscription.

This doesn’t mean, however, that they couldn’t flex to a
different model in markets where they don’t have telco infrastructure –
offering technical support and consulting to third party implementers of the
software would be an option, and so would rolling it into a broader
systems-integration/consulting offering. In this way, horizontalisation offers
flexibility.

Option 2, of course, demands a significant specialisation in IT, systems
integration, and related trades. This is probably achievable for those operators,
like BT and DTAG, who maintain a significant IT services line of business.
Otherwise, this would require a major investment and a risky change of focus.

Connectivity: needs a launch customer…

Option 3 – pure-play connectivity – is a commodity business
in a sector where ARPU is typically low. However, oil is also a commodity, and
nobody thinks that’s not a good business to be in. Two crucial elements for
success will be operations excellence – customers will demand high
availability, while low ARPU will constrain operators to an obsessive focus on
cost – and volume. It will be vital to acquire a major launch customer to get
the business off the ground. A smart grid project, for example, would be ideal.
Once there, you can sell the remaining capacity to as many other customers as
you can drum up.

Existing operators, like KPN, will have the enormous
advantage of being able to re-use their existing physical footprint of cell
sites, power, and backhaul, by adding a radio network more suited to the
demands of cheap coverage, building penetration, and relatively low bandwidth
demands, such as CDMA450 or WiMAX at relatively low frequencies.

Conclusion: M2M must fit into a total strategy

In conclusion, the future M2M market tends to map onto other
ideas about the future of operators. We identified three key strategies in the
Future Broadband Business Models strategy report, and they have significant
relevance here.

“Telco 2.0”, with its aim to be a highly agile development
platform, is likely to look to the software-led Option 1, and perhaps consider partnering
with a suitable player. They might license the Objects brand and make use of the
source code, or else come to an arrangement with Telenor to bring the product
to their customers as a managed service.

The wholesale-led and cost-focused “Happy Pipe”, and its close cousin,
“Government Department”, are likely to take their specialisation in cheap,
reliable connectivity into a string of new vertical markets, picking
appropriate technology and looking for opportunities in the upcoming spectrum
auctions.

“Device Specialists”, with their deep concern for finding the right content,
brands, and channels to market, are likely to pick Option 2 –
if they have a business like T-Systems or BT Global Services, they’ll integrate
it, otherwise they’ll partner with an IT player.

Telco 2.0 Further Reading

If you found this article interesting, you may also be interested in Enterprise 2.0 – Machine-to-Machine Opening for Business, a report of the M2M session at the last Telco 2.0 Executive Brainstorm, and M2M / Embedded Market Overview, Healthcare Focus, and Strategic Options, our Executive Briefing on the M2M market and the healthcare industry vertical.