The Coming Wave of AI-Enabled Apps — GitHub Edition

At WWDC in June 2017, Apple released Core ML and “introduc[ed] machine learning frameworks just for you guys to enable all the awesome things.” In the announcement, they promised image recognition, word prediction in keyboards, and photo searches that surface only red flowers, all happening directly on the device. So eight months later, what awesome things are we doing with Core ML and AI on mobile devices?

A few notable apps have made waves on Hacker News (Not Hotdog, InstaSaber), but the machine learning capabilities Apple promised haven’t shown up in many apps yet.

Getting a machine learning model running on a mobile device still takes many steps. Most of the time, you have to train a model using TensorFlow, hope it converts cleanly to Core ML, and then write code to interface with low-level inputs such as accelerometer data and camera output. None of these steps is straightforward, and each has pitfalls that not every developer can easily navigate.
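
To make that last step concrete, here is a minimal sketch of running a converted model against a single camera frame with the Vision framework. It assumes Apple’s pre-converted MobileNet.mlmodel has been dragged into an Xcode project, which auto-generates the `MobileNet` class used below; treat it as a sketch, not a drop-in snippet.

```swift
import CoreML
import Vision

// A minimal sketch of running a converted model on a camera frame.
// Assumes MobileNet.mlmodel has been added to the Xcode project, which
// auto-generates the `MobileNet` class used below.
func classify(pixelBuffer: CVPixelBuffer, completion: @escaping (String) -> Void) {
    guard let model = try? VNCoreMLModel(for: MobileNet().model) else { return }

    let request = VNCoreMLRequest(model: model) { request, _ in
        // Vision returns classification results sorted by confidence.
        guard let top = (request.results as? [VNClassificationObservation])?.first else { return }
        completion("\(top.identifier) (\(Int(top.confidence * 100))%)")
    }

    // The handler wraps the raw CVPixelBuffer coming off the camera.
    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
    try? handler.perform([request])
}
```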

But fret not! Developers are building apps using Core ML. I wrote a script to search GitHub for repositories that include Core ML model files. The search surfaced many interesting projects and trends.

Adoption has been relatively slow but has increased steadily over time, with a flurry of activity right after the WWDC announcement. Apple provided a handful of open source models (MobileNet, SqueezeNet, GoogLeNet, ResNet50, VGG16, and Inceptionv3) pre-converted to Core ML files. About 70% of the repos use a pre-converted Core ML model.

Many of these repos are SeeFood (Not Hotdog) variants, and they mostly use the InceptionV3 model provided by Apple; no data science knowledge required!

Top 10 Core ML GitHub Repositories (by stars)

CoreML-in-ARKit (810 ⭐)

A simple project that detects objects and displays 3D labels above them in AR. It serves as a basic template for an ARKit project that uses Core ML.

Uses the MobileNet model for image recognition.
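
The general pattern here (sketched below, not the repo’s exact code) is to grab the current ARKit camera frame, classify it, hit-test the center of the screen, and hang a 3D text node at the resulting world position. The `classify(pixelBuffer:)` helper is the hypothetical one sketched earlier.

```swift
import ARKit
import SceneKit

// Sketch: classify whatever is in front of the camera and pin a 3D text label
// at a feature point near the center of the screen. `classify(pixelBuffer:)`
// is the hypothetical helper sketched above; this is not the repo's exact code.
func labelCenterObject(in sceneView: ARSCNView) {
    guard let pixelBuffer = sceneView.session.currentFrame?.capturedImage else { return }

    classify(pixelBuffer: pixelBuffer) { label in
        let center = CGPoint(x: sceneView.bounds.midX, y: sceneView.bounds.midY)
        guard let hit = sceneView.hitTest(center, types: .featurePoint).first else { return }

        // Build a small 3D text geometry and place it at the hit location.
        let text = SCNText(string: label, extrusionDepth: 0.01)
        let node = SCNNode(geometry: text)
        node.scale = SCNVector3(0.01, 0.01, 0.01)
        node.position = SCNVector3(hit.worldTransform.columns.3.x,
                                   hit.worldTransform.columns.3.y,
                                   hit.worldTransform.columns.3.z)
        sceneView.scene.rootNode.addChildNode(node)
    }
}
```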

YOLO-CoreML-MPSNNGraph (355 ⭐)

Tiny YOLO for iOS implemented using Core ML but also using the new MPS graph API.

The YOLO model detects objects and places bounding boxes around them. The author also documents how he converted the YOLO model to Core ML.
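
Decoding YOLO’s raw output is the fiddly part: the model emits a grid of overlapping candidate boxes that must be filtered before drawing. Below is a generic, hedged sketch of that non-maximum suppression step, with an illustrative `Detection` type; it is not the repo’s implementation.

```swift
import CoreGraphics

// A candidate detection: class label, confidence score, and a bounding box
// in normalized image coordinates. (Illustrative type, not the repo's.)
struct Detection {
    let label: String
    let confidence: Float
    let rect: CGRect
}

// Intersection-over-union of two boxes, used to decide whether they overlap.
func iou(_ a: CGRect, _ b: CGRect) -> Float {
    let intersection = a.intersection(b)
    guard !intersection.isNull else { return 0 }
    let interArea = intersection.width * intersection.height
    let unionArea = a.width * a.height + b.width * b.height - interArea
    return unionArea > 0 ? Float(interArea / unionArea) : 0
}

// Non-maximum suppression: keep the highest-confidence box, drop any box that
// overlaps it too much, and repeat with what's left (often applied per class).
func nonMaxSuppression(_ detections: [Detection], iouThreshold: Float = 0.5) -> [Detection] {
    var remaining = detections.sorted { $0.confidence > $1.confidence }
    var kept: [Detection] = []
    while let best = remaining.first {
        kept.append(best)
        remaining = remaining.dropFirst().filter { iou($0.rect, best.rect) < iouThreshold }
    }
    return kept
}
```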

UnsplashExplorer-CoreML (348 ⭐)

This app takes a random photo from Unsplash and makes predictions about what’s inside using the Core ML framework and the InceptionV3 model.

A demo app that shows predictions from InceptionV3.
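
The core of that flow is small: fetch the image data, then hand the decoded CGImage to Vision. A rough sketch, assuming Apple’s pre-converted Inceptionv3.mlmodel is in the project (generating the `Inceptionv3` class) and using a placeholder URL rather than Unsplash’s actual API:

```swift
import UIKit
import Vision

// Sketch: download a photo and classify it with Apple's pre-converted
// Inceptionv3 model. The URL passed in is a placeholder, not Unsplash's
// real endpoint.
func classifyRemotePhoto(at url: URL) {
    URLSession.shared.dataTask(with: url) { data, _, _ in
        guard let data = data,
              let cgImage = UIImage(data: data)?.cgImage,
              let model = try? VNCoreMLModel(for: Inceptionv3().model) else { return }

        let request = VNCoreMLRequest(model: model) { request, _ in
            guard let top = (request.results as? [VNClassificationObservation])?.first else { return }
            print("Prediction: \(top.identifier), confidence: \(top.confidence)")
        }
        try? VNImageRequestHandler(cgImage: cgImage, options: [:]).perform([request])
    }.resume()
}
```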

ShowAndTell (101 ⭐)

Uses Core ML to generate captions from an image.

Uses a custom model built from the Show and Tell paper.

Lumina (96 ⭐)

A camera framework written in Swift that can use any Core ML model for object recognition, and also handles streaming video, still images, and QR/barcode detection.

Lumina is an SDK that you can use to easily integrate a camera with a CoreML model.

StyleTransfer-iOS (94 ⭐)

App that runs a style transfer model on a set of provided images.

Includes 6 different style transfer models.
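
Image-to-image models like these work differently from classifiers: the output is a pixel buffer rather than a label. Here is a hedged sketch of one stylization pass, using a hypothetical generated class `StyleModel`; the repo’s bundled models will have their own names, input sizes, and output fields.

```swift
import CoreML
import CoreImage
import UIKit

// Sketch: run a style-transfer model whose input and output are both images.
// `StyleModel` and its `stylizedImage` output are hypothetical; the models in
// the repo have their own generated class names.
func stylize(_ inputBuffer: CVPixelBuffer) -> UIImage? {
    guard let output = try? StyleModel().prediction(image: inputBuffer) else { return nil }

    // The model returns a CVPixelBuffer; convert it back to a UIImage for display.
    let ciImage = CIImage(cvPixelBuffer: output.stylizedImage)
    let context = CIContext()
    guard let cgImage = context.createCGImage(ciImage, from: ciImage.extent) else { return nil }
    return UIImage(cgImage: cgImage)
}
```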

Complex-gestures-demo (82 ⭐)

A demonstration of using machine learning to recognize 13 complex gestures in an iOS app.

This project contains two apps. GestureInput is used to generate training data for the model; GestureRecognizer takes a Core ML model trained on that data and classifies gestures drawn by users in real time.
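
Conceptually, the recognizer has to turn a drawn stroke into a fixed-size input the model understands and read back a class label. The sketch below illustrates that idea with a hypothetical `GestureModel` and a flattened array of touch points; the actual project prepares its inputs differently.

```swift
import CoreML
import UIKit

// Sketch: pack a drawn stroke into a fixed-length MLMultiArray and classify it.
// `GestureModel`, its input shape, and its output field are all hypothetical.
func classifyGesture(points: [CGPoint]) -> String? {
    let maxPoints = 64  // hypothetical fixed input length
    guard let input = try? MLMultiArray(shape: [NSNumber(value: maxPoints * 2)],
                                        dataType: .double) else { return nil }

    // Write (x, y) pairs into the array, zero-padding if the stroke is short.
    for i in 0..<maxPoints {
        let point = i < points.count ? points[i] : .zero
        input[2 * i] = NSNumber(value: Double(point.x))
        input[2 * i + 1] = NSNumber(value: Double(point.y))
    }

    let output = try? GestureModel().prediction(stroke: input)
    return output?.classLabel
}
```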

Axolotl (51 ⭐)

A machine learning framework to extract information from a mobile phone’s accelerometer and gyroscope data.

This app uses two models that take accelerometer data as input and predict where a user is touching the screen. It is a proof of concept showing how deep learning could be used to steal passwords and other sensitive data.
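
The data-collection side of such a proof of concept looks roughly like the sketch below: sample the accelerometer with Core Motion, buffer a window of readings, and feed it to a model. `TouchModel`, the window size, and the output fields are all hypothetical.

```swift
import CoreMotion
import CoreML

// Sketch: collect a sliding window of accelerometer samples and hand it to a
// model that predicts where the user is touching. `TouchModel`, the window
// size, and the output fields are hypothetical.
let motionManager = CMMotionManager()
var window: [Double] = []
let windowSize = 50  // hypothetical: 50 samples of (x, y, z)

func startSniffing() {
    motionManager.accelerometerUpdateInterval = 1.0 / 100.0  // 100 Hz
    motionManager.startAccelerometerUpdates(to: .main) { data, _ in
        guard let a = data?.acceleration else { return }
        window.append(contentsOf: [a.x, a.y, a.z])

        if window.count >= windowSize * 3 {
            if let input = try? MLMultiArray(shape: [NSNumber(value: windowSize * 3)],
                                             dataType: .double) {
                for (i, value) in window.prefix(windowSize * 3).enumerated() {
                    input[i] = NSNumber(value: value)
                }
                if let output = try? TouchModel().prediction(motion: input) {
                    print("Predicted touch at (\(output.x), \(output.y))")
                }
            }
            window.removeAll()
        }
    }
}
```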

TestCoreML (50 ⭐)

A camera object recognition demo using the Core ML and AVCam frameworks. Requires Xcode 9 and iOS 11.

Uses InceptionV3 to categorize images.
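
The AVFoundation side of these camera demos follows the same recipe: configure a capture session, receive frames on a sample-buffer delegate, and pull out the pixel buffer for Core ML. A minimal sketch (error handling, preview layer, and orientation omitted), again leaning on the hypothetical `classify(pixelBuffer:)` helper from earlier:

```swift
import AVFoundation
import UIKit

// Sketch: a minimal capture pipeline that delivers camera frames as
// CVPixelBuffers, ready to hand to Core ML or Vision.
final class CameraFeed: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    let session = AVCaptureSession()

    func start() {
        guard let camera = AVCaptureDevice.default(for: .video),
              let input = try? AVCaptureDeviceInput(device: camera) else { return }
        session.addInput(input)

        let output = AVCaptureVideoDataOutput()
        output.setSampleBufferDelegate(self, queue: DispatchQueue(label: "camera.frames"))
        session.addOutput(output)
        session.startRunning()
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        // Hand the frame to Core ML, e.g. the classify(pixelBuffer:) sketch above.
        classify(pixelBuffer: pixelBuffer) { label in
            print(label)
        }
    }
}
```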

ios-vision-example (41 ⭐)

Using iOS 11 SDKs with pre-trained neural networks to allow your iOS 11 devices to see.

Uses ResNet50 and InceptionV3 to categorize an image.

Takeaways from top 10 Core ML Repos

Computer vision models power 8/10 of these apps

It’s clear that many of the first apps using Core ML are taking advantage of computer vision. Image recognition has a clear use case and is well supported by the pre-trained models Apple provides.

Core ML developer tools are lacking

Most of the repos are demos of Core ML in one way or another. Only one, Lumina, is an actual SDK designed to make building Core ML apps easier. In the other cases, developers roll their own integrations, even though most of them do similar things. Hopefully tools like Lumina catch on and drastically lower the barrier to entry for using Core ML.

The best is yet to come

We are still in the very early days of mobile machine learning. Developers and data scientists are testing the waters and seeing what is possible. Right now we are still exploring use cases; as those explorations translate into actual products, the needs of developers will help the mobile ML community mature. I’m extremely excited to see how it develops.

In a future post, I will take a look at the hundreds of smaller projects that point at the future direction of mobile ML.
