Using coremltools to Convert a Keras Model to Core ML for iOS

Turning a Python-coded neural net into an iOS .mlmodel

So you’ve got your Keras model set up, and it can do everything you want it to do. But how do you get it onto an iOS device? Thanks to Apple’s Core ML library, this process is painless and can be done in fewer than 10 lines of code. Better yet, once you write the code I’ll show you below, there’s very little you’ll have to change the next time you need to convert a model. Here’s a link to the GitHub repo:

Now let’s begin!

Overview

Before we start, what does the whole process look like? Let me break it down step by step:

  1. Create our model in Keras
  2. Install coremltools with pip (if you haven’t done so before)
  3. Save model as .h5
  4. Set Xcode metadata (optional)
  5. Convert our model
  6. Save as .mlmodel

This may seem like quite a few steps, but most of them only require 1 or 2 lines of code. Let’s start with our model.

Creating our model in Keras

First we have to have a model to port. I’m not going to go into Keras in depth in this tutorial since there are plenty of resources online. I’ll just show you the code for the model I’m porting over.

from keras.models import Sequential
from keras.layers import Dense, Conv2D, Flatten, MaxPooling2D
from sklearn import datasets
from sklearn.model_selection import train_test_split


# Load scikit-learn's 8x8 handwritten digits dataset
digits = datasets.load_digits()
X = digits["images"]
y = digits["target"]

# Hold out 10% of the 1,797 samples for evaluation
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1)

# Add the single grayscale channel dimension expected by Conv2D
X_train = X_train.reshape(1617,8,8,1)
X_test = X_test.reshape(180,8,8,1)

model = Sequential()
model.add(Conv2D(32, (3, 3), activation="relu", input_shape=(8,8,1)))
model.add(MaxPooling2D(pool_size=(2 , 2)))
model.add(Flatten())
model.add(Dense(units=128,activation="relu"))
model.add(Dense(units=10,activation="softmax"))

model.compile(optimizer = 'adam', loss = 'sparse_categorical_crossentropy', metrics = ['accuracy'])

model.fit(X_train,y_train,epochs=10)

results = model.evaluate(X_test,y_test)

If you’ve used Keras before, you’ll know that there’s nothing special here. The model above is a convolutional neural network trained on scikit-learn’s handwritten digits dataset, a small 8×8-pixel relative of the famous MNIST database. The model takes an image of a handwritten number as input and predicts which digit is in that image. Here is the Wikipedia entry for the database if you’d like more info, and I’ve linked a reference on CNNs above in case you’re not familiar with them. Now we can move on to porting our model.

Porting Our Model

If you haven’t installed coremltools before, go ahead and do that first:
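pip install coremltools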

After that, we go back to our text editor and start coding. We’ll first need to add our imports — please be sure that all the code from here on out is in the same file that your model was written in.

from keras.models import load_model
import coremltools

Then we can save our model as a .h5:

model.save('your_model.h5')

Before we continue, I want to add that saving your model as a .h5 file is not required, but I always like to keep a Keras copy of my model in case something goes wrong with the .mlmodel. We can also set metadata for Xcode to interpret later. Although this step is not required either, it allows for better documentation and lets anyone using the model quickly understand how to use it. Note that your_model below refers to the converted Core ML model we create in the next step, so these lines actually run after the conversion call (just as they do in the full source code at the end).

your_model.author = 'your name'
your_model.short_description = 'Digit Recognition with MNIST'
your_model.input_description['image'] = 'Takes as input an image of a handwritten digit'
your_model.output_description['output'] = 'Prediction of Digit'

Finally, we go ahead and convert and save our file:

output_labels = ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9']
your_model = coremltools.converters.keras.convert('your_model.h5', input_names=['image'], output_names=['output'], 
                                                   class_labels=output_labels, image_input_names='image')

your_model.save('your_model_name.mlmodel')

Let me explain a bit more about this last code block: First we set the labels of our outputs, then use keras.convert from coremltools in order to convert our Keras model into a .mlmodel, and lastly we save our .mlmodel. Setting our output labels as shown above is not necessary, though it better documents the .mlmodel, which is always a plus.
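If you’re working on a Mac, you can also give the converted model a quick sanity check straight from Python before moving over to Xcode. Here’s a minimal sketch, assuming the training script above has just run so X_test is still in memory (note that MLModel’s predict only runs on macOS):

from PIL import Image

# Optional sanity check (macOS only): feed one of our 8x8 test digits
# to the converted Core ML model and print its prediction.
test_image = Image.fromarray(X_test[0].reshape(8, 8).astype('uint8'))
print(your_model.predict({'image': test_image}))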

And that’s it! Now drag and drop the .mlmodel that was saved in your working directory into your Xcode project, and you’re ready to start making your next billion dollar app idea 😊

Common Pitfalls

There are 3 common mistakes that people make when first porting a model over.

1. Not knowing the inputs and outputs of how a model will be used

First, it’s crucial that you know the inputs and outputs of your model and how the targeted iOS device will interact with them.

For example, if you’re going to be doing image recognition via an iPhone’s camera, you need to be certain that the image captured by the camera is resized to the dimensions your model requires.

This could be done by either increasing the height and width of the image, or by shrinking it down. This is hands down the most common mistake made, but it can be avoided by thinking ahead and working out the dimensions beforehand. Worst case, you can change your model and port it over to iOS again when it has the correct dimensions.
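To make that concrete, here’s a rough sketch in Python using Pillow of the kind of resizing this particular model needs (the file name is just a placeholder); in a real app you’d do the equivalent scaling on-device before handing the pixels to Core ML:

from PIL import Image
import numpy as np

# Hypothetical example: shrink an arbitrary photo down to the 8x8
# grayscale input our digits model expects.
img = Image.open('some_digit_photo.png').convert('L').resize((8, 8))

# Scale the 0-255 pixel values down to roughly the 0-16 range seen in training.
pixels = np.array(img, dtype='float32') / 255.0 * 16.0

# Add the batch and channel dimensions: (1, 8, 8, 1).
pixels = pixels.reshape(1, 8, 8, 1)

print(model.predict(pixels))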

2. Not setting the correct input_names parameter

This second issue only applies to models that take an image as input. The input_names parameter in the keras.convert function must be set to ['image'] any time your model’s input is an image. It’s something that can easily be forgotten, but it’s essential to get right; otherwise, you’ll end up with cryptic errors down the road. For models that don’t take an image as input, there’s no need to worry about this.

3. Not keeping coremltools up to date

Lastly, coremltools needs to be kept up to date. Keras is constantly being updated, and Apple is continuously putting out updates to coremltools to make porting models as simple as possible. While it’s unlikely that you’ll build a model that can’t be ported with an older version of coremltools, it’s better to be safe than sorry. To keep coremltools up to date, run this line in your terminal:
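pip install --upgrade coremltools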

Full Source Code

from keras.models import load_model
import coremltools

# (Keras model definition and training code from the first section goes here)

model.save('your_model.h5')

output_labels = ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9']
your_model = coremltools.converters.keras.convert('your_model.h5', input_names=['image'], output_names=['output'], 
                                                   class_labels=output_labels, image_input_names='image')

#your_model.author = 'your name'
#your_model.short_description = 'Digit Recognition with MNIST'
#your_model.input_description['image'] = 'Takes as input an image'
#your_model.output_description['output'] = 'Prediction of Digit'

your_model.save('your_model_name.mlmodel')

Recap

Let’s make sure we’re clear on what we did here. First we created our model using Keras. Next we installed coremltools. Then, we saved our Keras model in order to have it as a backup. And finally, we converted our model and saved it as a .mlmodel. That sounds like a lot for 6 lines of code! But whew, we did it!

If you’d like to further explore porting your Keras models, I recommend these two sources. And as always, you’ll find the full source code above, along with the link to the repo.

