---
title: Machine learning (ML) inference
summary: null
url: https://www.fastly.com/documentation/solutions/demos/edgeml
---


This demo was created to push the boundaries of the platform and inspire new ideas!

## A brief introduction to ML and inference

Machine learning (ML) is an application of artificial intelligence in which algorithms parse data, learn from that data, and then apply what they've learned to make informed decisions.

Deep learning is a subfield of ML that emulates the way the human brain learns through networks of connected neurons. A deep learning model adjusts the weights of its neural network based on how accurate its predictions are, and is "trained" on large (and hopefully diverse) datasets to enhance its predictive accuracy. Inference is where the capabilities learned during training are put to work: the trained model is applied to new, unseen data to produce predictions.
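To make the training/inference distinction concrete, here is a toy sketch in Rust. The weights below are hypothetical stand-ins for parameters that training would have produced; inference is simply applying them to new input:

```rust
// Hypothetical parameters "learned" during training for a tiny
// two-feature binary classifier. In a real model there would be
// millions of these, arranged in layers.
const WEIGHTS: [f32; 2] = [1.5, -2.0];
const BIAS: f32 = 0.25;

/// Inference: apply the already-trained parameters to a new input
/// to produce a prediction (here, a probability between 0 and 1).
fn predict(input: [f32; 2]) -> f32 {
    let z = WEIGHTS[0] * input[0] + WEIGHTS[1] * input[1] + BIAS;
    // Sigmoid squashes the raw score into a probability.
    1.0 / (1.0 + (-z).exp())
}

fn main() {
    println!("p = {:.3}", predict([1.0, 0.0]));
}
```

Training is the expensive part that discovers the weights; inference, as above, is comparatively cheap, which is what makes running it at the edge practical.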

## What's happening here?

This demo runs inference with a [TensorFlow Lite](https://www.tensorflow.org/lite/guide/hosted_models) model: [MobileNet v2_1.4_224](https://arxiv.org/abs/1801.04381), an efficient convolutional neural network for mobile vision, pre-trained for image classification with a top-1 prediction accuracy of **74.9%** (top-5: **92.5%**). Inference is handled entirely by a **single, originless** Fastly Compute service written in Rust 🦀 and compiled to WebAssembly.
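The top-1/top-5 figures refer to whether the correct label appears among the model's 1 or 5 highest-scoring classes. A minimal sketch of that ranking step in Rust (the `top_k` helper and the scores are illustrative, not taken from the demo's code):

```rust
/// Return the indices and scores of the `k` highest-scoring classes,
/// ordered from most to least likely. For MobileNet, `scores` would
/// hold one entry per ImageNet class.
fn top_k(scores: &[f32], k: usize) -> Vec<(usize, f32)> {
    let mut indexed: Vec<(usize, f32)> = scores.iter().copied().enumerate().collect();
    // Sort descending by score; class probabilities are never NaN,
    // so partial_cmp is safe to unwrap here.
    indexed.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());
    indexed.truncate(k);
    indexed
}

fn main() {
    // Hypothetical output scores for a 4-class model.
    let scores = [0.05, 0.70, 0.20, 0.05];
    println!("{:?}", top_k(&scores, 2));
}
```

A prediction counts as top-5 correct when the true label's index appears anywhere in `top_k(&scores, 5)`.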

The code lives at [github.com/doramatadora/edgeml](https://github.com/doramatadora/edgeml).
