Exploring Use Cases of Core ML Tools

Evaluation, transformations, updatable models, and more

Apple’s Core ML is a powerful machine learning framework with an easy-to-use, drag-and-drop model integration in Xcode. Its latest iteration, Core ML 3, brought in lots of new layers and introduced updatable models.

With the release of so many features, one thing that often gets sidelined is what you can do with a model outside of Xcode. There’s a lot of functionality for fine-tuning, customization, and model testing available even before you deploy a Core ML model in your applications.

Using the coremltools Python package, you can not only convert models but also use the utility classes for debugging layers, modifying feature shapes, setting hyperparameters, and even running predictions.

With the advent of coremltools 3.0, around 100 more layers have been added in comparison to Core ML 2. Also, it’s now possible to mark layers as updatable to allow for on-device training.

In the following sections, we’ll walk through the different use cases and scenarios where coremltools comes in handy for working with our ML models.

Before we get started, go ahead and install coremltools 3.0 using the following command:
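pip install coremltools==3.0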

Preprocessing and Model Conversion

Core ML Tools provides converters to convert models from popular machine learning libraries such as Keras, Caffe, scikit-learn, LIBSVM, and XGBoost to Core ML.

Additionally, onnx-coreml and tf-coreml neural network converters are built on top of coremltools.

tf-coreml requires setting a minimum deployment target flag in the convert function. This is because the under-the-hood implementation for iOS 13 deployed models is different from the older versions.

For iOS 13 and above, node names need to be passed—instead of tensor shapes—in the parameter input_name_shape_dict.
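For example, converting a TensorFlow frozen graph while targeting iOS 13 could look like the sketch below. The model path and the 'image'/'Softmax' node names are placeholders for your own graph; note the node names are used without the ':0' tensor suffix.

import tfcoreml

# Convert a frozen TensorFlow graph, targeting the iOS 13 model format.
# 'image' and 'Softmax' are hypothetical input/output node names.
tfcoreml.convert(
    tf_model_path='frozen_model.pb',
    mlmodel_path='model.mlmodel',
    input_name_shape_dict={'image': [1, 224, 224, 3]},
    output_feature_names=['Softmax'],
    minimum_ios_deployment_target='13'
)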

The following code snippet showcases how to convert a Keras model to Core ML:

import coremltools

# Convert the Keras model, treating the 'image' input as an image
coreml_model = coremltools.converters.keras.convert(
    'model.h5',
    input_names=['image'],
    output_names=['output'],
    image_input_names='image'
)

# Metadata that shows up in Xcode's model viewer
coreml_model.author = 'Anupam Chugh'
coreml_model.short_description = 'Cat Dog Classifier converted from a Keras model'
coreml_model.input_description['image'] = 'Takes as input an image'
coreml_model.output_description['output'] = 'Prediction as cat or dog'

coreml_model.save('catdogcoreml.mlmodel')

Using the Python script shown above, we can do a variety of things, such as changing input and output names, preprocessing, etc.

The image_input_names argument indicates that the named input should be treated as an image. Otherwise, by default, Core ML creates inputs as multi-dimensional arrays.

image_scale specifies the value by which the input is scaled; each pixel gets multiplied by that number. This argument is applicable only if image_input_names was set.

red_bias, green_bias, blue_bias, and gray_bias add a bias to the red, green, blue, or grayscale channel of each pixel after scaling.

For classifier models, we can pass an argument class_labels with an array or file containing the class labels, which are mapped to the neural network’s output indexes.
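As an illustration, here’s how those preprocessing arguments could be passed to the Keras converter. The scale, bias, and label values below are placeholders; the right values depend on how the original model was trained.

coreml_model = coremltools.converters.keras.convert(
    'model.h5',
    input_names=['image'],
    output_names=['output'],
    image_input_names='image',
    image_scale=1.0 / 255.0,      # scale pixels into the [0, 1] range
    red_bias=0.0,                 # per-channel biases added after scaling
    green_bias=0.0,
    blue_bias=0.0,
    class_labels=['cat', 'dog']   # maps output indexes 0 and 1 to labels
)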

Modifying Input and Output Types

For cases where you already have a Core ML model in hand, but not with the desired input constraints, coremltools is a handy utility. It not only allows resizing the inputs and outputs, but it also lets you change their types. For example, if you need to convert an input of type MLMultiArray to an image type with a certain color space, a sketch along the following lines does that for you (it edits the protobuf spec directly and assumes the input is a standard [channels, height, width] multi-array):
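import coremltools
from coremltools.proto import FeatureTypes_pb2 as ft

spec = coremltools.utils.load_spec('catdogcoreml.mlmodel')
input_feature = spec.description.input[0]

# Read the existing multi-array shape, assumed to be [channels, height, width]
channels, height, width = input_feature.type.multiArrayType.shape

# Re-declare the same input as an RGB image of matching size
input_feature.type.imageType.colorSpace = ft.ImageFeatureType.RGB
input_feature.type.imageType.height = height
input_feature.type.imageType.width = width

coremltools.utils.save_spec(spec, 'catdogcoreml_image_input.mlmodel')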

Quantization

Application size matters a lot, and certain Core ML models can take up huge chunks of storage space. Quantization reduces model size by lowering the precision of the weights, usually without any significant loss of accuracy.

The following code shows one such example of quantizing a Core ML model down to 8-bit weights:
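import coremltools
from coremltools.models.neural_network import quantization_utils

model = coremltools.models.MLModel('catdogcoreml.mlmodel')

# Quantize the 32-bit weights down to 8 bits using linear quantization.
# Note: weight quantization needs to be run on macOS.
quantized_model = quantization_utils.quantize_weights(
    model, nbits=8, quantization_mode='linear'
)
quantized_model.save('catdogcoreml_quantized.mlmodel')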

Some of the quantization modes currently supported are linear, kmeans, linear_symmetric, and linear_lut.

Currently, the Caffe and Keras converters support full-precision and half-precision quantization. This can be set via the model_precision argument of the converter functions, which defaults to float32.
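For instance, the earlier Keras model could be converted straight to half-precision weights roughly like this (assuming the 'float16' value for model_precision, as in coremltools 3):

# Convert the Keras model directly to half-precision (16-bit) weights
coreml_fp16_model = coremltools.converters.keras.convert(
    'model.h5',
    input_names=['image'],
    output_names=['output'],
    image_input_names='image',
    model_precision='float16'
)
coreml_fp16_model.save('catdogcoreml_fp16.mlmodel')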

Modifying Layers

Core ML Tools allows us to inspect, add, delete, or modify layers. For layers that coremltools can’t convert, it allows us to insert a placeholder layer by setting the argument add_custom_layers to True in the convert function, for example:
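# Insert placeholder custom layers for any Keras layer the converter
# doesn't support, instead of failing the conversion
coreml_model = coremltools.converters.keras.convert(
    'model.h5',
    input_names=['image'],
    output_names=['output'],
    add_custom_layers=True
)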

Also, we can inspect the model’s layers by invoking inspect_layers() on the NeuralNetworkBuilder instance.

The following code sketches how to inspect, remove, and add layers through the builder specs (the layer and blob names used here are hypothetical):
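import coremltools
from coremltools.models.neural_network import NeuralNetworkBuilder

spec = coremltools.utils.load_spec('catdogcoreml.mlmodel')
builder = NeuralNetworkBuilder(spec=spec)

# Print a summary of the last three layers
builder.inspect_layers(last=3)

# Remove the network's last layer
del builder.nn_spec.layers[-1]

# Append a softmax layer in its place; 'dense_1_output' is a
# hypothetical name for the preceding layer's output blob
builder.add_softmax(name='custom_softmax',
                    input_name='dense_1_output',
                    output_name='output')

coremltools.utils.save_spec(builder.spec, 'modified_catdog.mlmodel')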

Allowing On-Device Model Training

On-device model training is one of the biggest advancements in Core ML 3. It allows us to personalize models from the device itself, without having to retrain server-side. In order to allow ML models to be updated on the device, we need to:

  • Mark certain layers as updatable
  • Set the loss functions and hyperparameters
  • Add training input specs to the builder specification.

In another scenario, you can pass the respect_trainable=True argument to coremltools.converters.keras.convert() during model conversion, if you wish to build directly updatable Core ML models instead of modifying them later.

Currently, only neural network and k-nearest neighbor (kNN) models can be made updatable using coremltools.
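Building on the builder instance from the previous section, marking a layer as updatable is a one-liner ('dense_2' is a hypothetical layer name; use inspect_layers() to find the ones in your own model):

# Mark the weights of the last dense layer as trainable on-device
builder.make_updatable(['dense_2'])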

Also, we need to set hyperparameters such as the number of epochs, the learning rate, and the training batch size for updatable models, as shown below (the values here are just illustrative):
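from coremltools.models.neural_network import SgdParams

# Illustrative values: plain SGD with a learning rate of 0.01,
# mini-batches of 8 training samples, and 10 epochs
builder.set_sgd_optimizer(SgdParams(lr=0.01, batch=8))
builder.set_epochs(10)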

Loss functions inside models are treated just like layers. Currently, binary cross-entropy and categorical cross-entropy (for more than two classes) are among the few loss functions that are supported.
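For the classifier converted earlier, attaching a categorical cross-entropy loss to the softmax output could look like this:

# Attach a categorical cross-entropy loss that reads the model's
# 'output' (softmax) feature
builder.set_categorical_cross_entropy_loss(name='loss_layer', input='output')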

Finally, you need to set the isUpdatable flag on the model specification, alongside the minimum specification version that supports updatable models (version 4, introduced with Core ML 3), as shown below:
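# Flag the model as updatable and require spec version 4,
# the first version that supports on-device training (Core ML 3)
builder.spec.isUpdatable = True
builder.spec.specificationVersion = 4

coremltools.utils.save_spec(builder.spec, 'updatable_catdog.mlmodel')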

Running Predictions

It’s easy to run predictions from a coremltools Python script itself. To start, you need to load your .mlmodel using coremltools.

The following code loads an image classification .mlmodel and runs a prediction on it (the test image filename is a placeholder):
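import coremltools
from PIL import Image

# Load the converted model; useCPUOnly keeps inference on the CPU.
# Note: model.predict() only works when running on macOS.
model = coremltools.models.MLModel('catdogcoreml.mlmodel', useCPUOnly=True)

# Load a test image and resize it to the model's expected input size
image = Image.open('cat.jpg').resize((150, 150))

prediction = model.predict({'image': image})
print(prediction)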

In the above code, we pass the model’s filename to the MLModel class constructor. Optionally, you can restrict the model to run on the CPU only by setting the boolean argument useCPUOnly=True in the constructor.

Next, we load the image using PIL (the Pillow package), resize it to fit the model’s input constraints (150×150 for this model), and pass it to our Core ML model.

We ran the above Python script using the following image and got the output as a cat (exact output is available in image caption).

Besides testing your model’s accuracy as we did above, you can also debug your model’s layers, print out the spec or a summary, or visualize your model by invoking visualize_spec() on the MLModel instance.
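For example, a quick way to poke at the loaded model from Python:

# Print the model's input/output interface and open the
# browser-based visualization of the network graph
print(model.get_spec().description)
model.visualize_spec()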

Conclusion

Core ML 3 brings in lots of new control flow layers that make it possible to build different neural network Core ML models programmatically using the NeuralNetworkBuilder. There’s an example of that right in the docs.

The new release of coremltools also brings in support for TensorFlow 2.0 converters. Moving forward, you can try adding activation layers to models, quantizing them, and evaluating them before you deploy them in your applications.

That wraps up this piece. I hope you enjoyed reading.

