Artificial intelligence: a killer app for edge computing?

Artificial intelligence and edge computing are becoming increasingly intertwined – AI is a key use case for edge computing, and edge computing is a key enabler for AI to deliver on performance and keep costs down.

Artificial intelligence (AI) is a data-heavy, compute-intensive technology – a perfect candidate for edge computing. Edge computing benefits AI by helping to overcome the technological challenges associated with AI-enabled applications, and it is particularly well positioned to deliver on: 

1. Reduced data transfer to the central cloud – machine learning algorithms must ingest very large amounts of data in order to detect trends and provide accurate recommendations. Rather than streaming all of this to the cloud, more processing can happen at the edge, reducing backhaul costs. This is particularly important for use cases that require analysis of high-definition video, where streaming to the cloud would consume huge amounts of bandwidth. Edge computing reduces the strain on central cloud resources and networking, ensuring that the infrastructure is optimised and scalable. 

2. Real-time decision making (with reduced latency) – when machine learning drives real-time actions, minimising latency is essential. Instead of sending all raw data to a remote cloud for centralised processing, edge computing allows decisions to be made near the data source, enabling faster responses and triggering actions directly at the edge. By processing data locally, edge compute can achieve ultra-low latency, which is critical for real-time applications. By keeping decision-making processes near the data source, edge computing infrastructure bolsters system reliability and minimises the risks associated with connectivity disruptions. 

3. Local data storage and processing – with edge computing, sensitive or proprietary information, such as customer location data, can be stored locally rather than in the cloud. By performing AI inferencing or model fine-tuning at the edge, only parsed data and key insights are streamed to the cloud while the raw data remains local. This mitigates enterprise vulnerability to security threats and plays a crucial role in ensuring compliance with stringent regulatory standards in highly regulated industries. Additionally, local processing helps enterprises to better manage data lifecycle costs and reduce overreliance on cloud compute. 
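Points 1 and 3 can be pictured as an edge node reducing raw sensor batches to compact summaries before anything leaves the site. The sketch below is purely illustrative – the anomaly threshold and payload fields are assumptions, not part of any specific product:

```python
from statistics import mean

# Hypothetical per-sensor threshold; a real deployment would tune this.
ANOMALY_THRESHOLD = 90.0

def summarise_at_edge(readings):
    """Reduce a batch of raw sensor readings to a compact payload.

    Only this summary is streamed to the central cloud; the raw
    readings stay on the edge node, cutting backhaul volume and
    keeping potentially sensitive data local.
    """
    anomalies = [r for r in readings if r > ANOMALY_THRESHOLD]
    return {
        "count": len(readings),
        "mean": round(mean(readings), 2),
        "max": max(readings),
        "anomaly_count": len(anomalies),
    }

# A batch of readings stays local; only four numbers go to the cloud.
payload = summarise_at_edge([71.2, 70.8, 93.5, 72.0, 91.1])
print(payload)
```

In practice the summary window, fields and thresholds would be dictated by the application, but the pattern is the same: aggregate locally, transmit insights.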

Why would you use edge computing for AI and advanced data analytics?  

Currently, most AI inferencing is happening on a device (e.g. on smartphones for virtual assistants like Siri or Cortana) or in the cloud, and most AI model training is happening in the cloud. On-device AI has real limitations for IoT use cases where devices are unlikely to have the compute, storage or battery power to ingest, label and analyse very large amounts of data. On top of this, machine learning algorithms must be fed massive amounts of data aggregated from many devices (rather than just one) for outcomes to be accurate and useful. 

However, the reliance on cloud-based systems comes with its own set of challenges. The need for continuous data streaming to centralised data centres introduces latency and bandwidth constraints, which can hinder performance in applications demanding real-time responsiveness. This centralisation can create bottlenecks during peak usage, further exacerbating latency and leading to degraded or incomplete performance. 

Equally, as the number of IoT devices increases, it will not be feasible in all cases to rely on the cloud to process and analyse data for real-time decision-making. Streaming and storing huge amounts of data may become prohibitively expensive in backhaul and cloud storage costs. As IoT ecosystems grow exponentially, the operational costs associated with transferring data to the cloud are predicted to rise significantly, necessitating more distributed approaches to data processing. 

Instead, big data analytics can occur closer to the end device, at an edge computing location. By processing data at the edge, organisations can reduce their dependency on centralised infrastructures, which not only lowers costs but also ensures greater data sovereignty and compliance with regional data regulations. 

Only updates to algorithms need to be sent back to the cloud to synchronise learnings across multiple sites. This hybrid approach between edge and cloud ensures that macro learnings are preserved while local optimisations are achieved, striking a balance between scalability and efficiency. Synchronising model updates is far less data-intensive than synchronising raw data sets, reducing the strain on network resources while maintaining the accuracy of the deployed AI models. 

This approach not only enhances operational efficiency but also ensures a more secure and controlled handling of data. By transferring aggregated insights or model updates, enterprises can minimise exposure to risks associated with raw data transmission. The hybrid model allows AI applications to remain resilient even in environments with unreliable connectivity, enabling greater functionality without interruptions. The benefits of edge AI infrastructure will continue to support the scaling of future-ready AI applications and data analytics.
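One way to make the update-only synchronisation concrete is a federated-averaging-style scheme: each edge site sends only small weight deltas, and the cloud merges them into a new global model. This is a minimal sketch under simplifying assumptions (equal weighting per site, models as plain lists of weights), not a description of any specific platform:

```python
def average_updates(site_updates):
    """Combine per-site model updates (lists of weight deltas) by
    simple averaging, in the spirit of federated averaging. Only
    these deltas - not raw training data - cross the network."""
    n_sites = len(site_updates)
    n_weights = len(site_updates[0])
    return [
        sum(update[i] for update in site_updates) / n_sites
        for i in range(n_weights)
    ]

def apply_update(weights, update, lr=1.0):
    """Apply the merged delta to the global model in the cloud."""
    return [w + lr * u for w, u in zip(weights, update)]

# Three edge sites report small weight deltas after local training.
global_weights = [0.5, -0.2]
site_updates = [[0.1, 0.0], [0.3, -0.1], [0.2, 0.1]]
merged = average_updates(site_updates)
global_weights = apply_update(global_weights, merged)
print(global_weights)
```

The refreshed global weights would then be pushed back out to each site, so macro learnings are shared while raw data never leaves the edge.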

See how STL can help you capitalise on the edge computing opportunity

Develop a rich understanding of the edge computing opportunity with our research service. Book your demo today


What are examples of edge AI applications? 

Use case 1: Precision monitoring and control of manufacturing machinery  

Precision monitoring and control of machinery is one example of a use case well suited to AI at the edge. It requires very large amounts of sensor data to be collected and analysed, and, based on that analysis, changes to the machinery (fine-tuning of movements, temperature reduction, vibration control, etc.) and to the overall manufacturing process must be made in real time. In a high-speed production line, latency must be kept to a minimum, so processing the data close to the manufacturing plant is highly valuable.  
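The real-time control element can be pictured as a local feedback loop: the edge node computes the correction itself rather than waiting on a cloud round-trip. The proportional-control sketch below is deliberately simplified, and the gain and temperature values are made-up illustrative numbers:

```python
def adjust_setpoint(current_temp, target_temp, gain=0.5):
    """Compute a proportional correction locally at the edge,
    so the control loop never waits on a cloud round-trip.

    A negative result commands cooling; a positive one, heating.
    """
    error = target_temp - current_temp
    return gain * error

# Machine running 4 degrees hot: command a cooling correction.
correction = adjust_setpoint(current_temp=84.0, target_temp=80.0)
print(correction)  # -2.0
```

Real controllers would add integral and derivative terms and safety limits, but the point stands: the decision is made next to the machine, in milliseconds.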

Use case 2: Video analytics – video surveillance, facial recognition and flow analysis 

There are several specific use cases in various verticals that will make use of video analytics. For example, in retail, video analytics can be used to track customer footfall and analyse the buying patterns of particular customer profile types to improve products, product placement and customer service. In comparison, in smart cities, video analytics may be used for surveillance and tracking of criminals using facial recognition software. With recent high court rulings deeming the use of automated facial recognition as lawful in the UK, it seems likely that adoption of this video analytics application will rise. Advanced pattern and facial recognition requires significant compute power, and doing this at the edge, rather than in a centralised cloud, will reduce latency and backhaul costs.


Use case 3: Advanced predictive maintenance and analysis 

Advanced predictive maintenance is an important application of edge AI, particularly in industries like manufacturing and energy. By processing data from thousands of sensors locally, edge AI enables real-time monitoring of equipment conditions, identifying potential failures before they occur. This reduces the need for scheduled maintenance, cutting downtime and maximising efficiency. Edge infrastructure can handle the vast amount of data required for predictive maintenance, which would be too costly and slow to process in a central cloud. 
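A minimal version of this local monitoring is an anomaly test an edge node can run per machine: flag a reading that deviates sharply from that machine's own recent baseline. The z-score sketch below is a hypothetical illustration (the threshold and vibration figures are invented), not a production predictive-maintenance algorithm:

```python
from statistics import mean, stdev

def needs_maintenance(history, latest, z_threshold=3.0):
    """Flag a machine when its latest vibration reading deviates
    from its recent baseline by more than z_threshold standard
    deviations - a simple local check requiring no cloud round-trip."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return latest != mu
    return abs(latest - mu) / sigma > z_threshold

baseline = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95]
print(needs_maintenance(baseline, 1.02))  # normal reading
print(needs_maintenance(baseline, 2.5))   # sudden spike
```

Only the flagged events (and perhaps periodic baselines) would be forwarded to the cloud, rather than the full sensor stream.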

Edge AI can also be extended into more general predictive analysis, which can help support enterprise decision making. ML algorithms can identify macro trends and forecast the challenges and opportunities within a business cycle. For example, in agriculture, edge AI applications can analyse weather, soil and crop health data to optimise irrigation, fertilisation and pest management. This helps farmers make pre-emptive decisions and tailor their approach. Predictive analysis is already an important part of the decision-making process across industries, including energy, where it forecasts grid demand and detects vulnerabilities, and logistics, where it optimises routes and inventory management. 

Challenges to using edge computing for AI 

One of the key challenges to using edge compute sites (either on-premises or network edge) for AI is the need for heavy-duty storage and compute power. Most AI use cases rely on GPUs in the centralised cloud to provide this, but it is not clear to what extent GPUs will be part of operators’ edge compute roll-out strategies, nor a key component of an on-premises edge at an enterprise site. However, Google has developed its own Edge TPUs, purpose-built ASICs designed to run AI at the edge. These may prove a viable alternative to the heavy-duty compute capabilities of the centralised cloud.  

Beyond compute power, managing the scale and complexity of distributed edge networks poses another challenge. Unlike centralised cloud systems, edge environments involve numerous geographically dispersed sites, each requiring consistent maintenance, updates and monitoring. This fragmentation increases the complexity of scaling and managing data-hungry AI workloads, demanding new tools and processes for orchestration across sites. 

While edge computing offers advantages in terms of localised data processing, it also introduces security vulnerabilities. Each edge site and device could become a target for cyberattacks if not adequately secured, and interfacing between on-site workloads and public/private cloud also poses security risks. Implementing security measures across a wide array of devices and locations can be challenging for an enterprise. Standardisation between edge and IoT protocols also remains a roadblock. The lack of uniformity in edge computing architectures complicates the integration of hardware and software across different vendors. This often forces enterprises to adopt bespoke solutions, increasing costs and limiting scalability. 

Artificial intelligence’s power consumption raises significant concerns, particularly in resource-constrained environments. While edge computing reduces the energy demands associated with transmitting large volumes of data to centralised clouds, running AI workloads locally still imposes substantial power requirements. This is particularly the case for machine learning models that require continuous processing, and for devices operating in remote locations or with limited battery capacity. The World Economic Forum estimates that generative AI systems consume 33 times more energy to complete a task than task-specific software. Although edge computing can mitigate some of this consumption by enabling localised processing, balancing the computational needs of AI with sustainable energy practices remains a challenge.  

Tilly Gilbert

Director, Consulting & Edge Practice Lead

As a Director in STL’s consulting business Tilly has more than five years of experience leading growth projects for technology and telecoms firms. She heads up our research and consulting practice focused on edge and cloud computing and was nominated for Edge Computing Woman of the Year in 2022. Tilly has a BA from Oxford University and an MA from the University of Pennsylvania.


Are you looking for advisory services in edge computing?

Read more about edge computing

Edge computing market overview

This 33-page document provides a summary of our insights from our edge computing research and consulting work.

Edge Use Case Directory – Update

At STL Partners, our Edge Use Case Directory documents the top 50 use cases we have encountered across our work. We are constantly updating this to reflect the use cases which are garnering the greatest demand, and we have recently introduced three innovative use cases: High Frequency Trading (HFT), Smart ATMs and Sustainability Monitoring/Mapping. These cutting-edge applications highlight how edge computing is driving rapid transformation in financial transactions, banking security and environmental surveillance.

Edge computing in sports: use cases at the 2022 FIFA World Cup and beyond

From real-time player tracking to wearable technology and enhanced fan experiences, edge computing is rapidly moving into the mainstream within the sporting world. In this article, we deep-dive into four examples of how a theoretical use case has been spun into a live deployment, both at the World Cup and across the wider sporting world.

AWS & Edge computing: Wavelength use cases and applications​

AWS, together with telecoms partners like Vodafone and Verizon, has deployed Wavelength locations across the UK and the US.