Published in Data content in November 2020. 4 minute read

Will Edge AI be the ML architecture of the future?

Learn the fundamentals of Edge AI and its growing importance in the field of artificial intelligence.

Edge AI describes a class of ML architecture in which AI algorithms are processed locally on devices, at the edge of the network. A device using Edge AI does not need a network connection to work properly: it can process data and make decisions independently. Learn why this is becoming increasingly important in modern applications of AI.

A typical ML architecture

One that should be familiar to you will feature an ML model, lovingly crafted, trained, and hosted on cloud infrastructure, to which prediction requests are sent from devices.

These requests involve sending a request to a cloud-based API, then receiving a response over the internet. This works well when the data being transferred is small, such as snippets of text, but it breaks down with larger data like high-quality photos or videos. Even moderate data sizes can pose a problem in areas with poor (or no) network coverage.
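To see why large inputs strain this architecture, consider the size of the request body itself. The sketch below (the payload format and 2 MB photo are illustrative assumptions, not a real API) compares a JSON request carrying a text snippet with one carrying an image, which base64 encoding inflates by roughly a third:

```python
import base64
import json

def payload_size(data: bytes) -> int:
    """Size in bytes of a JSON request body carrying the data base64-encoded."""
    body = json.dumps({"input": base64.b64encode(data).decode("ascii")})
    return len(body.encode("utf-8"))

# A short text snippet vs. a (simulated) 2 MB photo.
text_bytes = "turn on the lights".encode("utf-8")
photo_bytes = bytes(2 * 1024 * 1024)  # stand-in for a 2 MB image

print(payload_size(text_bytes))   # a few tens of bytes
print(payload_size(photo_bytes))  # well over 2.7 MB after base64 inflation
```

On a slow or metered connection, every prediction pays that multi-megabyte cost in both directions, which is exactly the overhead Edge AI avoids.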

Edge AI

The idea of Edge AI is for the model to live instead on the devices at the edge (hence the name) of the network. The AI algorithms are then processed locally on the device, removing the need for an internet connection to process data and generate useful results.
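In miniature, on-device inference looks like this: the model ships with the app or firmware, and prediction is pure local computation. The tiny logistic model below is a made-up stand-in for a real packaged model (in practice you would deploy something like a TensorFlow Lite model); the weights and features are invented for illustration:

```python
# Minimal sketch of on-device inference: the "model" ships with the app as
# hardcoded weights, so prediction involves no network call at all.
# Weights, bias, and feature values are illustrative, not from a real model.
import math

WEIGHTS = [0.8, -1.2, 0.5]   # bundled with the device firmware/app
BIAS = -0.1

def predict(features: list) -> float:
    """Run a tiny logistic model entirely on the device."""
    z = BIAS + sum(w * x for w, x in zip(WEIGHTS, features))
    return 1.0 / (1.0 + math.exp(-z))  # probability, computed locally

score = predict([1.0, 0.3, 2.0])
print(round(score, 3))
```

The same pattern scales up: swap the hardcoded weights for a compiled model file and the arithmetic for an interpreter call, and the device still never needs a connection to produce a result.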


Deloitte predicts that in 2020 more than 750 million Edge AI chips (chips that perform or accelerate machine learning tasks on-device, rather than in a remote data center) will be sold, representing US$2.6 billion in revenue.

Advantages of operating at the edge

Edge AI offers many improvements over conventional ML architectures. First of all, the latency involved with any network transfer is removed, which can be critical in some use cases. The battery drain involved with streaming data is no longer an issue, allowing for better battery life, and the associated costs for data communication are significantly reduced.

This is highly beneficial for a number of use cases. Sensors in remote locations like offshore wind farms can come pre-loaded with the algorithms that enable them to make decisions without the complex infrastructure of getting them internet-connected.

Similarly, this approach is being used to monitor the flow rate in underground gas pipes, where a cloud-based strategy is not feasible. Sensors measure flow rate and pressure to determine the health of the pipeline, and valves can be shut off if symptoms of a leak are detected.
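A decision rule of that kind can be evaluated entirely on the sensor. The function below is a deliberately simple illustration (the thresholds and readings are invented, not real pipeline parameters): flow rising while pressure falls is treated as a possible leak, and the valve can be closed without any uplink.

```python
# Illustrative edge rule for pipeline monitoring; thresholds are invented.
# The sensor evaluates readings locally and can close a valve with no uplink.
def leak_suspected(flow_rate: float, pressure: float,
                   expected_flow: float, min_pressure: float) -> bool:
    """Flag a possible leak when flow rises sharply while pressure drops."""
    return flow_rate > 1.2 * expected_flow and pressure < min_pressure

# A normal reading vs. a leak-like reading.
print(leak_suspected(10.2, 5.1, expected_flow=10.0, min_pressure=4.0))  # False
print(leak_suspected(13.5, 3.2, expected_flow=10.0, min_pressure=4.0))  # True
```

Real deployments would use a trained model rather than fixed thresholds, but the architectural point is the same: the decision happens at the edge, in milliseconds, with no dependence on connectivity.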


Other real-world applications of Edge AI

Edge AI is not exclusive to remote locations; it is already being adopted closer to home on the high street.


UK cosmetics brand Lush has taken an Edge AI approach with a new initiative: the Lush Lens feature, recently added to their Lush Labs app.

Designed to help reduce the need for packaging, the Lens is used by scanning a product with your smartphone’s camera. Under the hood, an image recognition model lives within the app, taking advantage of Edge AI to reduce battery consumption and network requirements. Once the product is correctly identified, the user is given detailed product information, without the need for packaging.

Learn more about how the Lush Lens uses AI to reduce packaging here.

Finally, Edge AI chips will likely find their way into an increasing number of consumer devices, such as high-end smartphones, tablets, smart speakers, wearables, and bio-implants. They will also be used in many enterprise markets: robots, cameras, sensors, and other IoT devices.

Are there any drawbacks?


Complex machine learning models are often quite large, and in some cases it’s not feasible to shift these models onto small devices. Models need to be simplified, which inevitably reduces accuracy.

Compute power is limited on edge devices, further restricting the AI tasks that can be performed.

Edge AI often involves deploying a model to a wide range of device types (and operating system versions), and this can increase the likelihood of failures. A lot of testing is therefore typically needed before a chip is ready for circulation.

Next steps

1. Learn more from leading Edge AI chip maker, ARM

2. Learn more about Ancoris Data, Analytics & AI

Free resources

Please download any of our resources to help with your research and project specifications.