Decouple from cloud connectivity to succeed in edge compute

To make the case for telco edge compute, we tend to focus on applications that require very low latency: 5G and edge compute's ability to deliver millisecond round-trip times enables futuristic, headline-grabbing examples such as self-flying drones or AR-assisted repairs.

However, there is another group of more mundane requirements that edge compute supports: high-throughput data ingest and processing. In exploring edge compute, less attention has been given to reducing network loads by moving processing from hyperscale cloud to the edge of a telco’s network. This is sometimes mentioned as a potential cost saving or internal efficiency – a marginal side-benefit for the carrier.

There are already near-term use cases relating to reduced backhaul. For example, with applications that generate large volumes of data to be analysed quickly (such as high-definition video), processing and abstraction near the data source can streamline the ingest and ensure only limited amounts of relevant information are sent over the network. Relevant information could include footfall statistics, a facial recognition match, or a potential road hazard such as a pet crossing a road.
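As a minimal sketch of this pattern (assuming hypothetical detect_events() analytics and send_to_cloud() uplink helpers, not any specific product API), an edge node could consume the raw feed locally and forward only compact event summaries:

```python
# Minimal sketch of edge-side filtering: analyse the raw feed locally and
# forward only compact event summaries over the backhaul.
# detect_events() and send_to_cloud() are hypothetical stand-ins, not a real API.
import json
import time

def detect_events(frame: bytes) -> list[str]:
    # Placeholder for on-edge analytics (footfall count, hazard detection, ...).
    return ["road_hazard"] if b"HAZARD" in frame else []

def send_to_cloud(payload: bytes) -> None:
    # Placeholder for the small upstream message sent towards the core cloud.
    print(f"uplink: {len(payload)} bytes")

def process_stream(frames: list[bytes]) -> None:
    for frame in frames:                 # the raw high-bandwidth feed never leaves the edge
        events = detect_events(frame)    # heavy processing happens near the camera
        if events:                       # only relevant information goes upstream
            summary = {"ts": time.time(), "events": events}
            send_to_cloud(json.dumps(summary).encode())

process_stream([b"frame-1", b"frame-2 HAZARD", b"frame-3"])
```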

For example, a camera generates raw data at 10 Gbps. That's around 4.5 TB of data an hour.

  • The actual information needed from the camera, such as an incident alert, might only represent a 10 GB file that is generated once a week.
  • If analytics happened in the core cloud, the network would need to continuously deliver 10 Gbps of end-to-end connectivity. And although this is possible with 5G, it would have negative implications. More cameras would mean more backhaul capacity – and that would mean more cloud connectivity. The core would also need to process vast amounts of data.
  • In practice, the preference would be to move some of the analytics much closer to the camera, dramatically reducing the volume of data sent over the network to the core cloud. This edge compute could sit on the telco network – in which case the full data volume would still need to be carried over the access network, but only the much smaller data set would be transferred over the backhaul (see the sketch below).
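To put numbers on the reduction, here is a quick back-of-the-envelope calculation using the figures above (a 10 Gbps raw feed versus one 10 GB incident file per week):

```python
# Back-of-the-envelope check of the backhaul saving, using the figures above.
GBPS_RAW = 10                               # raw camera output, gigabits per second
SECONDS_PER_HOUR = 3600
SECONDS_PER_WEEK = 7 * 24 * SECONDS_PER_HOUR

raw_tb_per_hour = GBPS_RAW * SECONDS_PER_HOUR / 8 / 1000   # Gb -> GB -> TB
raw_tb_per_week = GBPS_RAW * SECONDS_PER_WEEK / 8 / 1000

useful_tb_per_week = 10 / 1000              # one 10 GB incident file per week

print(f"raw data:    {raw_tb_per_hour:.1f} TB/hour, {raw_tb_per_week:.0f} TB/week")
print(f"useful data: {useful_tb_per_week:.2f} TB/week")
print(f"reduction:   ~{raw_tb_per_week / useful_tb_per_week:,.0f}x less backhaul traffic")
```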

Reduced backhaul costs are a key benefit of edge computing

Telcos need to see reduced backhaul costs as a key benefit of telco edge computing, alongside low latency and other factors such as localised autonomy, resilience and sovereignty. However, the cost saving will be shared with the application provider, which could be a third party or another division within the operator. Sharing it in turn means separating access connectivity from backhaul connectivity in pricing, which encourages application providers to run as many of their applications as possible on the telco edge rather than in a centralised cloud or off-net.

Our graph shows the cost breakdown from an application customer's perspective for a data ingest application. It separates mobile connectivity costs into access and backhaul (though this is not how telcos currently charge for connectivity) and sets out three types of compute (telco edge, other customer edge/off-net and core cloud). Four application architectures are compared:

A. All data uploaded to central cloud indiscriminately – no processing or filtering of data at/near the data source

B. Processing and filtering of data on the telco edge, but with connectivity charged at the full end-to-end rate, even though the full data set only travels over the access network as far as the telco edge

C. Processing and filtering of data on (potentially non-telco) customer edge compute infrastructure. Telco sees connectivity demand after this process (priced as end-to-end connectivity, but with less data to move)

D. Processing and filtering of data on the telco edge with connectivity charged according to how far the data travels (decoupled pricing)

Chart: edge and connectivity pricing across scenarios A–D (illustrative numbers only)
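One rough way to reproduce the shape of the chart is a simple per-scenario cost model. The sketch below uses entirely invented, illustrative unit prices (assumptions, not measured figures) for access, backhaul, edge compute and cloud compute, and applies them to the four architectures A–D:

```python
# Illustrative cost model for scenarios A-D from the application customer's
# perspective. Every price below is an invented placeholder (cost per unit of
# data moved or processed); only the relative shape of the results matters.
RAW_DATA = 100.0           # units of data generated at the source
FILTERED_DATA = 1.0        # units of data left after processing/filtering

PRICE_ACCESS = 0.4         # connectivity, per unit carried over the access network
PRICE_BACKHAUL = 0.6       # connectivity, per unit carried over the backhaul
PRICE_E2E = PRICE_ACCESS + PRICE_BACKHAUL   # "end-to-end" connectivity rate
PRICE_TELCO_EDGE = 0.3     # telco edge compute, per unit processed
PRICE_CUSTOMER_EDGE = 0.9  # customer/off-net edge compute, per unit processed
PRICE_CLOUD = 0.1          # core cloud compute, per unit processed

scenarios = {
    # A: all raw data shipped end-to-end and processed in the core cloud
    "A": RAW_DATA * PRICE_E2E + RAW_DATA * PRICE_CLOUD,
    # B: filtered at the telco edge, but connectivity still charged end-to-end on the raw volume
    "B": RAW_DATA * PRICE_E2E + RAW_DATA * PRICE_TELCO_EDGE + FILTERED_DATA * PRICE_CLOUD,
    # C: filtered on customer/off-net edge; the telco only carries (and charges for) the filtered data
    "C": FILTERED_DATA * PRICE_E2E + RAW_DATA * PRICE_CUSTOMER_EDGE + FILTERED_DATA * PRICE_CLOUD,
    # D: filtered at the telco edge with decoupled pricing: raw data pays for access only,
    #    filtered data pays for backhaul
    "D": RAW_DATA * PRICE_ACCESS + FILTERED_DATA * PRICE_BACKHAUL
         + RAW_DATA * PRICE_TELCO_EDGE + FILTERED_DATA * PRICE_CLOUD,
}

for name, cost in scenarios.items():
    print(f"Scenario {name}: {cost:6.1f} cost units to the application customer")
```

With these placeholder prices, B is the most expensive for the customer (and the highest revenue for the operator), C undercuts it by moving compute off-net, and D is the cheapest for the customer while keeping both the compute and most of the connectivity spend with the telco.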

Simply charging the application provider for the extra edge computing without offering a reduction in connectivity charges (scenario B) may look like a higher revenue opportunity for the operator (it is), but the application provider will be incentivised to pursue solutions where the compute occurs on the customer edge, potentially leaving even less revenue for the telco (as in scenario C). Though computing costs are higher for local infrastructure in scenario C, there is far less data to upload to the cloud by the time the application is using (and paying for) telco connectivity.

The ubiquity, flexibility and rapid scalability of telco edge computing will help with its adoption over off-net edge computing, but these advantages also apply to (cheaper) core cloud. Telco edge compute is not the default (proven) option. It is the challenger and needs all the help it can get, including a compelling economic case, to succeed.