AI project: Audio Classification Equipment
Towards the end of 2019, we decided to invest in an internal Artificial Intelligence (AI) R&D project, exploring and developing our skills in neural networks and deep learning. Our project work was also backed by funding from AeroSpace Cornwall.
AI has been (and continues to be) an area we’ve wanted to explore further. Various types of AI are increasingly used in a variety of settings and environments, but their application in embedded systems still offers plenty of opportunity for exploration.
Critically, our aim was not to discover how to become an AI company, but to explore and learn how we can use neural networks and machine learning in our existing embedded space. We also wanted to see how we could add value for our customers through AI.
We started the Audio Classification Equipment (ACE) project in early 2020, and within a few sprints we had built up a body of knowledge and experience we could bring to future projects.
What is ACE?
ACE is an embedded software application that can take a pre-recorded audio sample (or listen to live audio) and categorise it into pre-trained categories. It does this by converting the sample into an image (a spectrogram), then running that image through a deep learning convolutional neural network (CNN), which infers the category along with a confidence level. ACE is built on existing image-based AI, readily available from Fast AI, TensorFlow and others, and is an example of edge AI in action.
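To make that pipeline concrete, here is a minimal sketch of the audio-to-spectrogram-to-CNN flow. It assumes librosa for the spectrogram and a TensorFlow Lite model for on-device inference; the model file, category labels, input shape and function names are all illustrative, as the article doesn’t describe ACE’s internals.

```python
# Sketch of an ACE-style pipeline: audio -> spectrogram image -> CNN inference.
# librosa/TFLite and every name here ("ace_model.tflite", CATEGORIES, etc.) are
# assumptions for illustration, not ACE's actual implementation.
import numpy as np
import librosa
import tensorflow as tf

CATEGORIES = ["category_a", "category_b", "category_c"]  # placeholder labels

def audio_to_spectrogram(path, sr=16000, size=(224, 224)):
    """Convert an audio file into a normalised log-mel spectrogram 'image'."""
    y, sr = librosa.load(path, sr=sr)
    mel = librosa.feature.melspectrogram(y=y, sr=sr)
    log_mel = librosa.power_to_db(mel, ref=np.max)
    # Scale to 0..1 so the network sees the same range it was trained on.
    img = (log_mel - log_mel.min()) / (log_mel.max() - log_mel.min() + 1e-9)
    img = tf.image.resize(img[..., np.newaxis], size).numpy()
    return np.repeat(img, 3, axis=-1)  # replicate to 3 channels for an image-based CNN

def classify(path, model_path="ace_model.tflite"):
    """Run the spectrogram through a TFLite model; return (category, confidence).

    Assumes the model takes a (1, 224, 224, 3) float32 input and outputs
    one probability per category.
    """
    interpreter = tf.lite.Interpreter(model_path=model_path)
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]
    batch = audio_to_spectrogram(path)[np.newaxis].astype(np.float32)
    interpreter.set_tensor(inp["index"], batch)
    interpreter.invoke()
    probs = interpreter.get_tensor(out["index"])[0]
    best = int(np.argmax(probs))
    return CATEGORIES[best], float(probs[best])

print(classify("sample.wav"))
```

Running inference from a compact .tflite file rather than a full framework is one common way to keep the footprint small enough for edge devices, which fits the edge AI goal described above.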
Why an AI project and why ACE?
A host of reasons convinced us that ACE would be a good match for what we were looking to achieve, including:
- Wanting to build our own embedded toolkit, both skills and libraries, that includes this new technology.
- Looking for a project that was embedded-friendly, with a small footprint and edge-ready (not dependent on a continuous cloud connection).
- Aiming to take an existing innovation and process from the AI space and apply it to a new use.
- Focusing on taking image-based AI and applying it to alternative data sources, such as audio, motion, sonar, electromagnetic signals and so on.
- Starting in a familiar space (sound), an area we’d already worked in for a previous client on an engine monitoring system that used AI.
- Discovering how we could implement AI as a feature or component in possible client projects.
- Exploring what compliance looks like for AI, and how we might meet medical compliance when using AI as part of an embedded system.
Achievements of the ACE project
How did we do? Well:
- Within the first eight weeks, we were able to train a neural net and get it up and running on a standalone embedded device (a prototype on a Raspberry Pi 4 with a touchscreen and microphone).
- We achieved accuracy of over 90% on a dataset of 1,000 audio samples, with only 32 training samples per category (a minimal training sketch follows below).
- Using the principles outlined in AAMI TIR45, we developed Verification and Validation practices specifically for convolutional neural networks that align with IEC 62304.
And that’s just the start.
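For the small-data result above, transfer learning is the standard approach: fine-tune a network pre-trained on everyday images so it only has to learn the final classification step. Below is a minimal sketch, assuming the spectrograms are saved as images in one folder per category. The fastai calls are real, but the folder layout, backbone choice (resnet18) and file names are illustrative assumptions, since the article doesn’t publish ACE’s training code.

```python
# A minimal transfer-learning sketch with fastai. Paths, category folders and
# the resnet18 backbone are illustrative assumptions, not ACE's actual setup.
from fastai.vision.all import *

path = Path("data")  # hypothetical folder of spectrogram images, one subfolder per category

dls = ImageDataLoaders.from_folder(
    path,
    valid_pct=0.2,          # hold back 20% of images for validation
    item_tfms=Resize(224),  # CNN backbones expect a fixed input size
)

# Transfer learning: start from an ImageNet-pretrained backbone so that
# a few dozen samples per category can still reach high accuracy.
learn = vision_learner(dls, resnet18, metrics=accuracy)
learn.fine_tune(5)

# Inference returns the predicted category plus per-class probabilities.
category, idx, probs = learn.predict(PILImage.create("data/hum/sample_01.png"))
print(category, float(probs[idx]))
```

Starting from a pretrained backbone is what makes 32 samples per category viable; a CNN trained from scratch on that little data would struggle to reach 90% accuracy.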
Check out our demo videos to see how ACE works: “AI in action” and “Living documentation”.
Your team for AI in embedded systems
Looking for embedded software engineers and testers? Over the past 20 years, Bluefruit Software has delivered quality software to a range of sectors, including industrial, medical devices and scientific instruments.