

Computer vision enables machines to interpret and analyse visual data like images and video. This article delves into the emerging role of edge computing in computer vision, highlighting its advantages in real-time processing and illustrating key use cases across industry verticals including manufacturing, transport and retail.
What is computer vision?
Computer vision enables computers to “see” in a way that mimics how humans see and understand the world. It is a subfield of artificial intelligence (AI) that allows machines to interpret, process, and extract meaningful insights from raw visual data captured by devices like cameras or scanners.
The visual data is analysed by AI algorithms to identify features and anomalies in the image or video. This process relies on deep learning, which enables machines to learn directly from large volumes of data rather than from manually coded rules. These systems automatically identify patterns and improve over time; the deep learning model most widely used for visual data is the convolutional neural network (CNN). CNNs are specifically designed to process visual data by scanning images in layers: early layers detect simple features like edges and colours, while deeper layers recognise more complex shapes and objects. This layered analysis allows the system to interpret images accurately as it “learns” visual distinctions across vast datasets. The computer vision process can be broken down into four key stages (a minimal code sketch follows the list below):
Figure 1: The computer vision process
Source: STL Partners
1. Capture the image: Devices like cameras, drones, or scanners collect raw visual data in the form of images or video.
2. Interpret the image: After capture, the image is typically pre-processed to improve quality and consistency by resizing, adjusting brightness, or removing noise. An AI-powered system then processes the data, recognising patterns it has learned from extensive training data, which may include objects, faces, and even medical images.
3. Analyse image data: Once the system identifies known patterns, it analyses the image, allowing it to contextualise and understand the contents. This could mean recognising objects in a factory setting or identifying individuals in security footage.
4. Deliver insights or take action: Based on its analysis, the system might generate a report, flag an issue, or trigger an automated response such as halting a machine or sending an alert.
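To make these four stages concrete, below is a minimal sketch of the loop in Python, using OpenCV for capture and PyTorch for a toy CNN. The network architecture, class labels, and alert threshold are illustrative assumptions rather than a production design, and the model here is untrained.

```python
# Minimal sketch of the four-stage computer vision loop
# (capture -> interpret -> analyse -> act) using OpenCV and PyTorch.
# The tiny CNN, class labels and alert threshold are illustrative only.
import cv2
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    """Toy CNN: early convolutional layers pick up edges and colours,
    deeper layers combine them into higher-level features."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),   # 224 -> 112
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),   # 112 -> 56
        )
        self.classifier = nn.Linear(32 * 56 * 56, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

LABELS = ["ok", "defect"]            # hypothetical classes
model = TinyCNN(num_classes=len(LABELS)).eval()

cap = cv2.VideoCapture(0)            # 1. Capture: read a frame from a local camera
ok, frame = cap.read()
if ok:
    # 2. Interpret: pre-process for quality and consistency
    frame = cv2.resize(frame, (224, 224))
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    tensor = torch.from_numpy(rgb).permute(2, 0, 1).float().unsqueeze(0) / 255.0

    # 3. Analyse: run the model and read off the most likely class
    with torch.no_grad():
        probs = model(tensor).softmax(dim=1)[0]
    label = LABELS[int(probs.argmax())]

    # 4. Act: flag an issue or trigger an automated response
    if label == "defect" and probs.max() > 0.8:
        print("ALERT: possible defect detected")
cap.release()
```

In a real deployment the model would be trained on labelled examples and the loop would run continuously over a video stream.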
What is edge-based computer vision?
Edge computing refers to processing data locally, on or near the device where it is generated, rather than relying on centralised cloud servers. For computer vision, this means analysing images and video directly on edge devices like smart cameras or sensors that typically interact with local compute infrastructure rather than sending data to the cloud or a data centre.
Edge-based computer vision minimises latency because the data travels a shorter distance. This enables real-time insights and reduces dependency on network connectivity compared with cloud-based alternatives. Local processing also improves data privacy by keeping sensitive data on-site rather than transmitting it to external servers. As such, edge computer vision is ideal for applications where speed, data sensitivity, and operational resilience are critical (a short code sketch after Figure 2 illustrates the pattern).
Figure 2: Comparison of edge-based and cloud-based computer vision
| | Edge computer vision | Cloud computer vision |
| --- | --- | --- |
| Processing location | On device (e.g. camera, sensor, gateway) | Centralised cloud servers |
| Latency | Very low – real-time response | Higher – depends on network speed |
| Bandwidth usage | Low – only essential data is transmitted | High – raw visual data often sent to the cloud |
| Privacy & security | High – data stays local, reducing exposure to risk | Lower – sensitive data travels over networks |
| Compute capacity | Limited | Virtually unlimited |
Source: STL Partners
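The bandwidth and privacy rows above follow from a simple design choice: an edge device analyses each frame locally and sends only compact results upstream, whereas a cloud design must ship raw frames over the network. Below is a minimal sketch of that edge-side pattern; detect_objects and send_to_backend are hypothetical placeholders for the on-device model and whatever uplink (MQTT, HTTPS, etc.) is used.

```python
# Sketch of the edge-side pattern behind the table above: analyse each
# frame locally and transmit only small, essential results upstream.
# detect_objects() and send_to_backend() are hypothetical placeholders.
import json
import time

def detect_objects(frame) -> list[dict]:
    """Placeholder for an on-device model (e.g. the CNN sketched earlier)."""
    return [{"label": "person", "confidence": 0.93}]

def send_to_backend(payload: bytes) -> None:
    """Placeholder uplink; in practice MQTT, HTTPS, etc."""
    print(f"uplink: {len(payload)} bytes")

def process_frame(frame) -> None:
    detections = detect_objects(frame)   # raw pixels never leave the device
    events = [d for d in detections if d["confidence"] > 0.8]
    if events:                           # send only essential metadata
        message = {"ts": time.time(), "events": events}
        send_to_backend(json.dumps(message).encode())

process_frame(frame=None)  # stand-in frame; a real loop would read from a camera
```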
What are the key edge-based computer vision applications?
Some computer vision workloads are particularly suited to being deployed at the edge. We have identified the four key use cases driving the adoption of edge computer vision:
1. Asset monitoring: Computer vision enables continuous, automated monitoring of physical assets like machines or production lines using edge-based cameras and sensors. These systems can detect defects, surface imperfections, or assembly errors in real time. The local data analysis ensures rapid responses that can trigger alerts, signalling anomalies or equipment failures. Asset monitoring at the edge improves quality control, enables predictive maintenance and promotes operational efficiency.
2. Safety and security: Computer vision enhances workplace safety and physical security by monitoring environments in real time. For instance, it can verify whether workers wear proper personal protective equipment (PPE) and detect hazards like spills or unauthorised access, triggering consequent alerts. In public or corporate spaces, AI-powered surveillance systems can identify unusual behaviour or threats, improving response times and reducing risks.
3. Flow analysis: Computer vision systems track foot traffic and customer movement to understand how people interact with spaces, which is particularly useful in retail environments. These insights can inform store layout optimisation, targeted promotions, and inventory planning. For example, analysing dwell time in certain aisles can prompt strategic product placement or reveal operational bottlenecks (see the sketch after this list).
4. Connected car driver assistance: Computer vision is a cornerstone of advanced driver-assistance systems. Cameras embedded in vehicles can identify lane markings, traffic signs, pedestrians, and other vehicles in real time. These insights support features like automatic braking, lane-keeping, and collision avoidance. By processing visual data directly on the vehicle, edge-based systems reduce latency and improve safety on the road.
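As an illustration of the flow analysis use case, the sketch below accumulates dwell time per zone once an upstream tracker has assigned each shopper a stable ID. The per-second observation format, zone names, and synthetic data are assumptions made for the example.

```python
# Sketch of dwell-time flow analysis (use case 3): given per-frame
# observations of which zone each tracked person is in, accumulate how
# long shoppers linger in each zone. The observations are synthetic.
from collections import defaultdict

FRAME_SECONDS = 1.0  # assume one observation per second per person

def dwell_times(observations: list[tuple[int, str]]) -> dict[str, float]:
    """observations: (track_id, zone) pairs, one per frame per person."""
    per_zone = defaultdict(float)
    for _track_id, zone in observations:
        per_zone[zone] += FRAME_SECONDS
    return dict(per_zone)

# Synthetic example: shopper 1 spends 3s in aisle A; shopper 2 spends
# 2s in aisle A and 1s at the checkout.
obs = [(1, "aisle_A"), (1, "aisle_A"), (1, "aisle_A"),
       (2, "aisle_A"), (2, "aisle_A"), (2, "checkout")]
print(dwell_times(obs))  # {'aisle_A': 5.0, 'checkout': 1.0}
```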
Real-world examples of computer vision
An increasing number of companies are developing and deploying computer vision solutions across real-world environments, demonstrating a range of practical applications in diverse sectors such as retail, transport, and manufacturing. Here are three useful examples:
#1 IBM: IBM Maximo Visual Inspection
Type: Hybrid (edge + cloud)
Use case: Asset monitoring
IBM offers robust computer vision tools through platforms like IBM Maximo Visual Inspection. This software enables enterprises to deploy edge-based vision systems for real-time asset monitoring, detecting defects, wear, or anomalies on production and assembly lines. Typically, models are trained in the cloud and deployed on edge devices like cameras or gateways on factory floors. Real-time detection happens at the edge, while insights and model management are synced back to the cloud for broader analytics.
#2 VusionGroup: Captana and EdgeSense
Type: Hybrid (edge + cloud)
Use case: Flow analysis
VusionGroup is deploying computer vision systems in retail environments to support shelf monitoring, in-store analytics, and customer flow analysis, informing store layout adjustments in real time. These systems combine edge cameras and sensors with on-premise edge computing infrastructure to perform in-store analysis, while model training and long-term optimisation are typically carried out on cloud infrastructure.
#3 Wayve: Wayve AI Driver
Type: Edge computer vision
Use case: Connected car driver assistance
Wayve is a UK-based startup developing AI-first self-driving systems that use computer vision and deep learning. Rather than relying on HD maps and rules-based programming, Wayve’s models are trained on visual data collected from real-world driving environments, learning to interpret traffic signs, road conditions, and pedestrian behaviour to enable adaptive, camera-led navigation. These autonomous driving systems use cameras as primary sensors and process data on the vehicle in real time. In 2024, Wayve received significant investment from NVIDIA and Microsoft in a funding round led by SoftBank. In April 2025, the company secured a deal with Japanese carmaker Nissan, which will install its software in vehicles from 2027.
Final thoughts
Undoubtedly, computer vision is poised to be a transformative force across industries, with edge computing enabling real-time, on-site data processing for applications ranging from asset monitoring to connected car driver assistance. As the market evolves, we expect computer vision to emerge as the largest use case for edge AI by 2030, accounting for 50% of the entire addressable edge AI market. If you would like to learn more about this, have a look at STL’s Edge AI market forecast (2025).