The battle for the edge
This document examines the role of “edge” devices that sit at the periphery of a telco’s network – products like mobile phones or broadband gateways that live in the user’s hand or home. Formerly called “terminals”, such devices are now getting “smarter” thanks to ever-better chips and software. In particular, they are capable of absorbing many new functions and applications – and permit the user or operator to install additional software at a later point in time.
In fact, there is fairly incontrovertible evidence that “intelligence” always moves towards the edge of telecom networks, particularly when it can exploit the Internet and IP data connections. This has already been seen in PCs connected to fixed broadband, or in the shift from mainframes to client/server architectures in the enterprise. The trend is now becoming clearer in mobile, with the advent of the iPhone and other smartphones, as well as 3G-connected notebooks. Home networking boxes like set-tops, gaming consoles and gateways are further examples, which also get progressively more powerful.
This is all a consequence of Moore’s Law: as processors get faster and cheaper, there is a tendency for simple massmarket devices to gain more computing capability and take on new roles. Unsurprisingly, we therefore see a continued focus on the “edge” as a key battleground – who controls and harnesses that intelligence? Is it device vendors, operators, end users themselves, or 3rd-party application providers (“over-the-top players”, to use the derogatory slang term)? Is the control at a software, application or hardware level? Can operators deploy a device strategy that complements their network capabilities, to strengthen their position within the digital value chain and foster two-sided business models? Do developments like Android and femtocells help? Should the focus be on dedicated single-application devices, or continued attempts to control the design, OS or browser of multi-purpose products like PCs and smartphones?
Where’s the horsepower?
First, an illustration of the power of the edge.
If we go back five years, the average mobile phone had a single processor, probably an ARM7, clocking perhaps 30MHz. Much of this was used for the underlying radio and telephony functions, with a little “left over” for some basic applications and UI tools, like Java games.
Today, many of the higher-end devices have separate applications processors, and often graphics and other accelerators too. An iPhone has a 600MHz+ chip, and Toshiba recently announced one of the first devices with a 1GHz Qualcomm Snapdragon. Even midrange featurephones can have 200MHz+ to play with, most of which is actually usable for “cool stuff” rather than the radio. [note: 1,000,000,000,000MHz (megahertz) = 1,000,000,000GHz (gigahertz) = 1,000,000THz (terahertz) = 1,000PHz (petahertz) = 1EHz (exahertz)] Now project forward another five years. The average device (in developed markets at least) will have 500MHz, with top-end devices at 2GHz+, especially if they are not phones but 3G-connected PCs or MIDs. (These numbers are simplified – in the real world there’s lots of complexity because of different sorts of chips like digital signal processors, graphics accelerators or multicore processors). Set-top boxes, PVRs, game consoles and other CPE devices are growing smarter in parallel.
Now multiply by (say) 8 billion endpoints – mobile handsets, connected PCs, broadband modems, smart consumer electronics and so forth. In developed markets, people may well have 2-4 such devices each. That’s 4 Exahertz (EHz, 10^18) of application-capable computing power in people’s hands or home networks, without even considering ordinary PCs and “smart TVs” as well. And much – probably most – of that power will be uncontrolled by the operators, instead being the playground of user- or vendor-installed applications.
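The back-of-envelope sum above can be checked in a couple of lines. The figures are the article’s own rough assumptions (8 billion endpoints at an average 500MHz of application-capable clock), not measured data:

```python
# Back-of-envelope estimate of aggregate "edge" compute power.
# Inputs are the article's rough assumptions, not measured data.
endpoints = 8e9        # mobile handsets, connected PCs, modems, smart CE...
avg_clock_hz = 500e6   # projected average application-capable clock (500 MHz)

edge_hz = endpoints * avg_clock_hz
print(f"Edge total: {edge_hz / 1e18:g} EHz")
# → Edge total: 4 EHz
```

Treat the result as an order-of-magnitude indicator only; the real number depends heavily on how much of each device’s clock is genuinely “application-capable”.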
Even smart pipes are dumb in comparison
It’s tricky to calculate an equivalent figure for “the network”, but let’s take an approximation of 10 million network nodes (datapoint: there are 3 million cell sites worldwide), at a generous 5GHz each. That means there would be 50 Petahertz (PHz, 10^15) in the carrier cloud. In other words, about an 80th of the collective compute power of the edge.
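The same sort of sketch confirms the ratio. Again, the node count and per-node clock are the article’s generous approximations, not audited figures:

```python
# Rough comparison of "network" vs "edge" compute, using the
# article's assumed figures (approximations, not audited data).
network_nodes = 10e6    # ~10 million network nodes (cf. ~3M cell sites)
node_clock_hz = 5e9     # a generous 5 GHz per node

network_hz = network_nodes * node_clock_hz   # 5e16 Hz = 50 PHz
edge_hz = 8e9 * 500e6                        # 4e18 Hz = 4 EHz

print(f"Network total: {network_hz / 1e15:g} PHz")
print(f"Edge is {edge_hz / network_hz:g}x the network")
# → Network total: 50 PHz
# → Edge is 80x the network
```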
Now clearly, it’s not quite as bad as that makes it sound – the network can obviously leverage intelligence in a few big control points in the core like DPI boxes, as traffic funnels through them. But at the other end of the pipe is the Internet, with Google’s, Amazon’s and countless other companies’ servers and “cloud computing” infrastructures. Trying to calculate the aggregate computing power of the web isn’t easy either, but it’s likely to be in the Exahertz range too. Google alone is thought to have 0.5-1.0 million servers, for example.
So one thing is certain – the word “terminal” is obsolete. Whatever else happens, the pipe will inevitably become “dumber” (OK, less smart) than the edge, irrespective of smart Telco 2.0 platforms and 4G/NGN networks.
Now, add in all the cool new “web telco” companies (eComm 2009 was full of them) like BT/Ribbit, Voxeo, Jaduka, IfByPhone, Adhearsion and the Telco 2.0 wings of longtime infrastructure players like Broadsoft and Metaswitch (not to mention Skype and Google Voice), and the legacy carrier network platforms look even further disadvantaged.
Intelligent mobile devices tend to be especially hard to control, because they can typically connect to multiple networks – the operator cellular domain, public or private WiFi, Bluetooth, USB and so forth – which makes it easier for applications to “arbitrage” between them for access, content and services – and price.
Controlling device software vs. hardware
The answer is for telcos to try to take control of more of this enormous “edge intelligence”, and exploit it for their own benefit – whether in-house services or two-sided strategies. There are three main strategies for operators wanting to exert influence on edge devices:
- Provide dedicated and fully-controlled and customised hardware and software end-points which are “locked down” – such as cable set-top boxes, or operator-developed phones in Japan. This is essentially an evolution of the old approach of providing “terminals” that exist solely to act as access points for network-based services. This concept is being reinvented with new Telco-developed consumer electronic products like digital picture frames, but is a struggle for variants of multi-function devices like PCs and smartphones.
- Provide separate hardware products that sit “at the edge” between the user’s own smart device and the network, such as cable modems, femtocells, or 3G modems for PCs. These can act as hosts for certain new services, and may also exert policy and QoS control on the connection. Arguably the SIM card fits into this category as well.
- Develop control points, in hardware or software, that live inside otherwise notionally “open” devices. This includes Telco-customised UI and OS layers, “policy-capable” connection manager software for notebooks, application certification for smartphones, or secured APIs for handset browsers.
Controlling mobile is even harder than fixed
Fixed operators have long known what their mobile peers are now learning – as intelligence increases in the devices at the edge, it becomes far more difficult to control how they are used. And as control ebbs away, it becomes progressively easier for those devices to be used in conjunction with services or software provided by third parties, often competitive or substitutive to the operators’ own-brand offerings.
But there is a difference between fixed and mobile worlds – fixed broadband operators have been able to employ the second strategy outlined above – pushing out their own fully-controlled edge devices closer to the customer. Smart home gateways, set-top boxes and similar devices are able to sit “in front” of the TV and PC, and can therefore perform a number of valuable roles. IPTV, operator VoIP, online backups and various other “branded” services can exploit the home gateways, in parallel with Internet applications resident on the PC.
Conversely, mobile operators are still finding it extremely hard to control handset software at the OS level. Initiatives like SavaJe have failed, while more recently LiMO is struggling outside Japan. Endless complexities outside of Telcos’ main competence, such as software integration and device power management, are to blame. Meanwhile, other smartphone OS’s from firms like Nokia, Apple, RIM and Microsoft have continually evolved – albeit given huge investments. But most of the “smarts” are not controlled by the operators, most of the time. Further, low-end devices continue to be dominated by closed and embedded “RTOSs” (realtime operating systems), which tend to be incapable of supporting much carrier control either.
In fact, operators are continually facing a “one step forward, two steps back” battle for handset application and UI control. For every new Telco-controlled initiative like branded on-device portals, customised/locked smartphone OS’s, BONDI-type web security, or managed “policy” engines, there is another new source of “control leakage” – Apple’s device management, Nokia’s Ovi client, or even just open OS’s and usable appstores enabling easy download of competing (and often better/free) software apps.
The growing use of mobile broadband computing devices – mostly bought through non-operator channels – makes things worse. Even when sold by Telcos, most end users will not accept onerous operator control-points in their PCs’ application or operating systems, even where those computers are subsidised. There may be 300m+ mobile-connected computers by 2014.
Telcos need to face the inevitable – in most cases, they will not be able to control more than a fraction of the total computing and application power of the network edge, especially in mobile or for “contested” general-purpose devices. But that does not mean they should give up trying to exert influence wherever possible. Single-application “locked” mobile devices, perhaps optimised for gaming, navigation or similar functions, have a lot of potential as true “terminals”, albeit used in parallel with users’ other smart devices.
It is far easier for the operator to exert its control at the edge with a wholly-owned and managed device, than via a software agent on a general computing device like a smartphone or notebook PC. Femtocells may turn out to be critical application control points for mobile operators in future. Telcos should look to exploit home networking gateways and other CPE with added-value software and services as soon as possible. Otherwise, consumer electronic devices like TVs and HiFi’s will adopt “smarts” themselves and start to work around the carrier core, perhaps accessing YouTube or Facebook directly from the remote control.
For handsets, controlling smartphone OS’s looks like a lost battle. But certain tactical or upper layers of the stack – browser, UI and connection-manager in particular – are perhaps still winnable. Even where the edge lies outside Telcos’ spheres of control, there are still many network-side capabilities that could be exploited and offered to those that do control the edge intelligence. Telco 2.0 platforms can manage security, QoS, billing, provide context data on location or roaming and so forth. However, carriers need to push hard and fast, before these are disintermediated as well. Google’s clever mapping and location capabilities should be seen as a warning sign that there will be work-arounds for “exposable” network capabilities, if Telcos’ offerings are too slow or too expensive.
Overall, the battle for control of the edge is multi-dimensional, and outcomes are highly uncertain, particularly given the economy and wide national variations in areas like device subsidy and brand preference. But Telcos need to focus on winnable battles – and exploit Moore’s Law rather than futilely battle against it.
We’ll be drilling into this area in much more depth during the Devices panel session at the upcoming Telco 2.0 Brainstorm in Nice in early May 2009.