Dog Breed Classification on Mobile with Flutter and TensorFlow Lite

Machine learning and AI are having an ever-growing influence on mobile technology. Mobile hardware is becoming increasingly AI-capable, and machine learning methods are being integrated into mobile apps to optimize and enhance user experiences.

In this tutorial, we’re going to apply machine learning methods provided by the TensorFlow Lite library to perform image classification in a Flutter app. Flutter, Google’s cross-platform UI toolkit, has a growing ecosystem of libraries related to machine learning.

The TensorFlow Lite library will help us load our model as well as run it for image classification. Image classification is a computer vision task that allows us to recognize and identify different objects and aspects of an image or video stream.

For testing purposes, we’re going to use a pre-trained model provided by TensorFlow to classify images of dogs by breed. The idea is to use the Image Picker library to fetch an image from the device gallery and then run image classification on it to determine the breed of the dog in the image.

So, let’s get started!

Create a new Flutter project

First, we need to create a new Flutter project. For that, make sure that the Flutter SDK and other Flutter app development related requirements are properly installed. If everything is properly set up, then in order to create a project, we can simply run the following command in the desired local directory:
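A sketch of the command; the project name dog_breed_classifier is an assumption you can replace with your own:

```shell
flutter create dog_breed_classifier
```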

After the project has been set up, we can navigate inside the project directory and execute the following command in the terminal to run the project in either an available emulator or on an actual device:
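With an emulator running or a device connected, the command would look like:

```shell
cd dog_breed_classifier
flutter run
```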

After a successful build, we will get the following result on the emulator screen:

Creating Image View on Screen

Here, we’re going to implement the UI to fetch an image from the device library and display it on the app screen. For fetching the image from the gallery, we’re going to make use of the Image Picker library. This library offers modules to fetch images and videos from a device’s camera, gallery, etc.

First, we need to install the image_picker library. For that, we need to copy the text provided in the following code snippet and paste it into the pubspec.yaml file of our project:
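A sketch of the dependency entry; the version number shown here is an assumption, so check pub.dev for the latest release:

```yaml
dependencies:
  flutter:
    sdk: flutter
  image_picker: ^0.6.7
```

After saving the file, run flutter pub get to fetch the package.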

Next, we need to import the necessary packages in the main.dart file of our project:
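The imports would look something like this; dart:io is needed for the File type used to hold the selected image:

```dart
import 'dart:io';

import 'package:flutter/material.dart';
import 'package:image_picker/image_picker.dart';
```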

In the main.dart file, we will have the MyHomePage stateful widget class. In its State class, we need to declare a variable to store the image file once it’s fetched. Here, we’ll do that with a File-type variable named _imageFile:
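A minimal sketch of the state declaration:

```dart
class _MyHomePageState extends State<MyHomePage> {
  // Holds the image picked from the gallery; null until one is selected.
  File _imageFile;

  // ... build method and helpers follow.
}
```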

Now we need to implement the UI, which will enable users to pick and display the image. The UI will have an image view section and a button that allows users to pick an image from the gallery. The overall UI template is provided in the code snippet below:
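A sketch of what the build method might look like; the placeholder asset name assets/placeholder.png is an assumption, so substitute any image you’ve added to your assets:

```dart
@override
Widget build(BuildContext context) {
  return Scaffold(
    appBar: AppBar(title: Text('Dog Breed Classification')),
    body: Column(
      children: <Widget>[
        Card(
          margin: EdgeInsets.all(16.0),
          child: Container(
            height: 250,
            width: double.infinity,
            // Show a placeholder until an image is selected and loaded.
            child: _imageFile == null
                ? Image.asset('assets/placeholder.png', fit: BoxFit.cover)
                : Image.file(_imageFile, fit: BoxFit.cover),
          ),
        ),
        RaisedButton(
          child: Text('Select Image'),
          onPressed: () {}, // wired up in the next section
        ),
      ],
    ),
  );
}
```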

Here, we have used a Container widget with a card-like style for the image display. We have used conditional rendering to display a placeholder image until the actual image is selected and loaded to the display. We have used a RaisedButton widget to render a button just below the image view section.

Hence, we’ll get the result as shown in the emulator screenshot below:

Function to fetch and display the image

We’re now going to implement a function that enables users to open the gallery, select an image, and then show the image in the image view section. The overall implementation of the function is provided in the code snippet below:
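A sketch of the function, using the getImage method from the image_picker plugin’s older API (the one this tutorial’s version of the plugin exposes):

```dart
Future<void> selectImage() async {
  // Open the device gallery and wait for the user to pick an image.
  final picker = ImagePicker();
  final pickedFile = await picker.getImage(source: ImageSource.gallery);

  if (pickedFile != null) {
    // Store the picked file and trigger a rebuild to display it.
    setState(() {
      _imageFile = File(pickedFile.path);
    });
  }
}
```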

Here, we’ve initialized an ImagePicker instance and used its getImage method to fetch an image from the gallery into the image variable. Then, we set the _imageFile state to the fetched image using the setState method. This causes the build method to re-render and show the image on the screen.

Now, we need to call the selectImage function in the onPressed property of the RaisedButton widget, as shown in the code snippet below:
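The wiring would look something like this, assuming the button from the earlier UI template:

```dart
RaisedButton(
  child: Text('Select Image'),
  // Open the gallery picker when the button is pressed.
  onPressed: selectImage,
),
```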

We should get the following result:

As we can see, as soon as we select the image from the gallery, it’s shown on the screen instead of the placeholder image.

Performing dog breed classification with TensorFlow Lite

It’s time to configure our image classification model. As previously mentioned, we’ll be working with a pre-trained “starter” model, which we can download here. The overall information about the model is provided in the documentation itself.

This model was trained on images of dogs of various breeds, so we’re going to classify an image of a dog for testing purposes. There are also models trained for other animals and objects, which you can check out at the starter models link above.

Once downloaded, we’ll get a zip file containing two model files:

  • mobilenet_v1_1.0_224.txt
  • mobilenet_v1_1.0_224.tflite

Move the two files to the ./assets folder in the main project directory.

Then, we need to enable access to assets files in pubspec.yaml:
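The pubspec.yaml entry would look like this, exposing everything under the assets folder:

```yaml
flutter:
  assets:
    - assets/
```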

Installing TensorFlow Lite

Next, we need to install the TensorFlow Lite package, a Flutter plugin for accessing the TensorFlow Lite APIs. This library supports image classification, object detection, Pix2Pix, Deeplab, and PoseNet on both iOS and Android.

In order to install the plugin, we need to add the following line to the pubspec.yaml file of our project:
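A sketch of the dependency entry; the version number is an assumption, so check pub.dev for the current release of the tflite plugin:

```yaml
dependencies:
  tflite: ^1.1.2
```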

For Android, we need to add the following setting to the Android object of the ./android/app/build.gradle file:
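The setting tells the Android build not to compress the model files, which the plugin requires in order to memory-map them:

```gradle
android {
    aaptOptions {
        noCompress 'tflite'
        noCompress 'lite'
    }
}
```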

Check that the app still builds properly by executing the flutter run command.

If an error occurs, we may need to increase the minimum SDK version to ≥19 for the TFLite plugin to work.
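The minimum SDK version is set in the defaultConfig block of ./android/app/build.gradle; a sketch of the change:

```gradle
defaultConfig {
    // Raise the minimum SDK version so the TFLite plugin can run.
    minSdkVersion 19
}
```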

Once the app builds properly, we’re ready to implement our model.

Implementing the classification model

First, we need to import the package into our main.dart file as shown in the code snippet below:
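The import would look like:

```dart
import 'package:tflite/tflite.dart';
```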

Loading the Model

Next, we need to load the model files in the app. For that, we’re going to configure a function called loadImageModel. Then, using the loadModel method provided by the Tflite instance, we’re going to load the model files from the assets folder into our app. We set the model and labels parameters inside the loadModel method, as shown in the code snippet below:
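A sketch of the function, using the tflite plugin’s loadModel method and the asset paths from the previous step:

```dart
Future<void> loadImageModel() async {
  // Load the MobileNet model and its label file from the assets folder.
  await Tflite.loadModel(
    model: 'assets/mobilenet_v1_1.0_224.tflite',
    labels: 'assets/mobilenet_v1_1.0_224.txt',
  );
}
```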

Next, we need to call the function inside the initState method so that the function triggers as soon as we enter the screen:
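The call would look something like this:

```dart
@override
void initState() {
  super.initState();
  // Load the model as soon as the screen is initialized.
  loadImageModel();
}
```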

Perform image classification

Now, we’re going to write code to perform the classification itself. First, we need to initialize a variable to store the result of classification as shown in the code snippet below:
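A minimal sketch of the declaration:

```dart
// Holds the label/confidence maps returned by the classifier.
List _classifiedResult = [];
```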

This List-type variable, _classifiedResult, will store the results of the classification.

Next, we need to define a function called classifyImage that takes an image file as a parameter. The overall implementation of the function is provided in the code snippet below:
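A sketch of the function, using the plugin’s runModelOnImage method; the numResults and threshold values shown are assumptions you can tune, and imageMean/imageStd of 127.5 match MobileNet’s expected input normalization:

```dart
Future<void> classifyImage(File image) async {
  // Run the MobileNet model on the selected image file.
  final results = await Tflite.runModelOnImage(
    path: image.path,
    numResults: 6,    // maximum number of labels to return
    threshold: 0.05,  // minimum confidence to include a result
    imageMean: 127.5, // input normalization used by MobileNet
    imageStd: 127.5,
  );

  // Store the results and trigger a rebuild to display them.
  setState(() {
    _classifiedResult = results;
  });
}
```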

Here, we’ve used the runModelOnImage method provided by the Tflite instance to classify the selected image. As parameters, we passed the image path, the number of results, the classification threshold, and other optional configurations for better classification. After the model runs successfully, we set the result to the _classifiedResult list.

Next, we need to call the function inside the selectImage function and pass the image file as a parameter, as shown in the code snippet below:
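One way to wire this up, extending the selectImage function from earlier:

```dart
Future<void> selectImage() async {
  final pickedFile =
      await ImagePicker().getImage(source: ImageSource.gallery);

  if (pickedFile != null) {
    final image = File(pickedFile.path);
    setState(() {
      _imageFile = image;
    });
    // Classify the image as soon as it has been selected.
    classifyImage(image);
  }
}
```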

This will allow us to set the image to the image view, as well as classify the image as soon as we select it from the gallery.

Now we need to configure the UI template to display the results of the classification. We’re going to show the classification results as a list of cards just below the RaisedButton widget.

The implementation of the overall UI of the screen is provided in the code snippet below:
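A sketch of the full build method, combining the earlier image view with the results list; the placeholder asset name is still an assumption, and each result map from the tflite plugin exposes 'label' and 'confidence' keys:

```dart
@override
Widget build(BuildContext context) {
  return Scaffold(
    appBar: AppBar(title: Text('Dog Breed Classification')),
    body: SingleChildScrollView(
      child: Column(
        children: <Widget>[
          Card(
            margin: EdgeInsets.all(16.0),
            child: Container(
              height: 250,
              width: double.infinity,
              child: _imageFile == null
                  ? Image.asset('assets/placeholder.png', fit: BoxFit.cover)
                  : Image.file(_imageFile, fit: BoxFit.cover),
            ),
          ),
          RaisedButton(
            child: Text('Select Image'),
            onPressed: selectImage,
          ),
          // Map each classification result to a Card showing the
          // label and its confidence as a percentage.
          ..._classifiedResult.map((result) {
            return Card(
              margin: EdgeInsets.symmetric(horizontal: 16.0, vertical: 4.0),
              child: ListTile(
                title: Text(result['label']),
                trailing: Text(
                  '${(result['confidence'] * 100).toStringAsFixed(1)} %',
                ),
              ),
            );
          }).toList(),
        ],
      ),
    ),
  );
}
```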

Here, just below the RaisedButton widget, we’ve applied the SingleChildScrollView widget so that the content inside it is scrollable. Then, we’ve used a Column widget to lay out the widgets inside it vertically. Inside the Column widget, we’ve mapped over the classification results using the map method and displayed each result in percentage format inside a Card widget.

Hence, we will get the result as shown in the demo below:

We can see that as soon as we select the image from the gallery, the classification result is displayed on the screen as well.

And that’s it! We have successfully implemented a dog breed classification model in a Flutter app using TensorFlow Lite.

Conclusion

In this tutorial, we performed dog breed classification on an image in a Flutter app. The TensorFlow Lite library made the overall process relatively simple and easy to follow. This can be an essential first step toward learning the applications of machine learning in a Flutter app.

We could also classify images of other animals using the various other starter models TensorFlow offers. Beyond that, the challenge is to train your own model and implement it in a Flutter app. The TensorFlow Lite library is capable of other machine learning tasks as well, such as object detection and pose estimation.

The complete code is available in this GitHub repo.

Fritz

Our team has been at the forefront of Artificial Intelligence and Machine Learning research for more than 15 years and we're using our collective intelligence to help others learn, understand and grow using these new technologies in ethical and sustainable ways.
