On-Device Face Detection on Android using Google’s ML Kit

Detecting faces in an image with the power of mobile machine learning

Creating accurate machine learning models capable of identifying multiple faces (or other target objects) in a single image remains a core challenge in computer vision. But with recent advances in deep learning and mobile-ready computer vision models, detecting faces in an image has become much easier.

In this article, we’ll do just that on Android with the help of Google ML Kit’s Face Detection API.

What Is Face Detection?

Face detection is a computer vision task that lets you detect faces in an image. It can also detect the various parts of a face, known as landmarks. Landmarks here are parts of the human face like the eyes, ears, nose, cheeks, mouth, etc.

You can also get the contours of detected faces. That is, if you have face contour detection enabled, you also get a list of points for each facial feature that was detected, like LEFT_EYE, RIGHT_EYE, or NOSE_BOTTOM. For reference, below is an image of a detected face with its contours. 👇

One thing to note is that we’re talking about face detection, not facial recognition. The API will detect the areas of the image where there are faces, but it will not predict who those faces belong to; that’s a separate task known as facial recognition. You can see a quick visual comparison below. 👇

Possible Use Cases

There are many mobile applications where human faces play a pivotal role, and where face detection can be particularly useful. Some of these use cases are mentioned below.

  • Photo tagging
  • Facial verification
  • Face filters/lenses
  • Crowd counting

What is ML Kit?

ML Kit is a cross-platform mobile SDK (Android and iOS) developed by Google that allows developers to easily access on-device mobile machine learning models.

All of ML Kit’s APIs run on-device, enabling real-time and offline capabilities.

To use the standalone ML Kit on-device SDK, we can just implement it directly. We don’t need to create a project on Firebase or add an accompanying google-services.json file.

If you are using Firebase Machine Learning, you can check this link for help with migrating.

What You’ll Build

In this article, we’re going to build a simple Android app that shows you how to implement ML Kit’s on-device Face Detection API.

There are two ways to integrate face detection on Android:

  • A bundled model (which is part of your application)
  • An unbundled model (which depends on Google Play Services)

For the purposes of this demo project, I’ve implemented the unbundled model, which is dynamically downloaded via Google Play Services.

By the end of this tutorial, you should see something similar to the screenshot below:

Step 1: Add Dependency

First things first, we need to add the ML Kit face detection dependency to our Android project in the app/build.gradle file.

For bundling the model in your app:
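A minimal sketch of the dependency block for the bundled model (the version number is illustrative; check the ML Kit release notes for the latest):

```groovy
dependencies {
    // Bundled model: ships inside your APK (larger app size, available immediately)
    implementation 'com.google.mlkit:face-detection:16.1.5'
}
```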

For using the model within Google Play Services:
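And a sketch for the unbundled approach (again, verify the current version in the docs):

```groovy
dependencies {
    // Unbundled model: downloaded on demand via Google Play Services (smaller APK)
    implementation 'com.google.android.gms:play-services-mlkit-face-detection:17.1.0'
}
```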

Sync the Project

After successfully adding the dependency, just sync the project, as shown in the screenshot below:

Step 2: Configure the face detector

Before detecting faces, we can change the face detector’s default settings, which we specify with a FaceDetectorOptions object:

  • setPerformanceMode: Determines whether to favor speed or accuracy when detecting faces. The default value is FAST
  • setLandmarkMode: Determines whether to detect facial landmarks, like eyes, ears, nose, cheeks, mouth, etc. The default value is NO_LANDMARKS
  • setContourMode: Determines whether to detect the contours of facial features. The default value is NO_CONTOURS
  • setClassificationMode: With this setting, you can classify faces into categories like “smiling” or “eyes open”. The default value is NO_CLASSIFICATIONS
  • setMinFaceSize: Defines the smallest desired face size, expressed as the ratio of the width of the head to the width of the image. The default value is 0.1f
  • enableTracking: Determines whether to assign faces an ID that can be used to track them across images. The default value is false

Now let’s jump into some code to see how these above steps look in practice:
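Here’s a sketch of a configuration that enables most of these settings (the specific values are illustrative):

```kotlin
import com.google.mlkit.vision.face.FaceDetectorOptions

val options = FaceDetectorOptions.Builder()
    .setPerformanceMode(FaceDetectorOptions.PERFORMANCE_MODE_ACCURATE) // favor accuracy over speed
    .setLandmarkMode(FaceDetectorOptions.LANDMARK_MODE_ALL)            // detect eyes, ears, nose, etc.
    .setClassificationMode(FaceDetectorOptions.CLASSIFICATION_MODE_ALL) // "smiling" / "eyes open" probabilities
    .setMinFaceSize(0.15f)   // ignore faces narrower than 15% of the image width
    .enableTracking()        // assign IDs to track faces across frames
    .build()
```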

Step 3: Prepare the input image

To detect faces in an image, we need to prepare the input image. We have a few different options to do this.

  • Bitmap
  • media.Image
  • ByteBuffer
  • ByteArray
  • File

media.Image:

This is used when capturing an image from the device’s camera.
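A minimal sketch, assuming a CameraX ImageProxy named imageProxy (the proxy and its rotation info are assumptions for illustration):

```kotlin
// Inside an ImageAnalysis.Analyzer, wrap the camera frame in an InputImage
val mediaImage = imageProxy.image
if (mediaImage != null) {
    val image = InputImage.fromMediaImage(mediaImage, imageProxy.imageInfo.rotationDegrees)
}
```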

ByteBuffer or ByteArray

We can also create an input image using a ByteBuffer or ByteArray. To do so, we need to create the InputImage object with the buffer or array, together with the image’s height, width, color encoding format, and rotation degrees:
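For example (the width, height, and format values here are illustrative):

```kotlin
import com.google.mlkit.vision.common.InputImage

val image = InputImage.fromByteBuffer(
    byteBuffer,       // the ByteBuffer holding the image data
    480,              // image width in pixels
    360,              // image height in pixels
    rotationDegrees,  // 0, 90, 180, or 270
    InputImage.IMAGE_FORMAT_NV21 // or IMAGE_FORMAT_YV12
)
```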

File:

To create an input image instance from a file, we need to pass the app context and the file URI to InputImage.fromFilePath:
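For example, assuming uri points at an image file:

```kotlin
// fromFilePath can throw an IOException if the file can't be read
try {
    val image = InputImage.fromFilePath(context, uri)
} catch (e: IOException) {
    e.printStackTrace()
}
```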

For this demo, I’m using a Bitmap to prepare the input image. Let’s jump to the code to see how it looks:
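A minimal sketch, inside an Activity; R.drawable.faces is an assumed resource name for the demo image:

```kotlin
// Decode the demo image from resources, then wrap it in an InputImage
val bitmap = BitmapFactory.decodeResource(resources, R.drawable.faces)
val image = InputImage.fromBitmap(bitmap, 0) // second argument = rotation in degrees
```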

Step 4: Create an Instance of a Face Detector

In step 4, we create an instance of a face detector using the FaceDetection.getClient method, passing the FaceDetectorOptions instance as a parameter. If you want to use the default settings, there’s no need to pass a FaceDetectorOptions instance at all.

Now let’s jump into some code to see how these above steps look in practice:

With FaceDetectorOptions:
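Using the options object from step 2:

```kotlin
import com.google.mlkit.vision.face.FaceDetection

val detector = FaceDetection.getClient(options)
```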

Without FaceDetectorOptions:
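ML Kit will then fall back to its default settings:

```kotlin
val detector = FaceDetection.getClient()
```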

Step 5: Send Image to the Face Detector and Process the Image

In step 5, we need to pass an image to the face detector via process().

We call process on the face detector, passing the input image as a parameter, and use an OnSuccessListener to determine when face detection is complete. If successful, we can access the list of Faces and draw a Rect (the bounding box) around each detected face.

We also add an OnFailureListener—if image processing fails, we’ll be able to show the user an error.

Now let’s jump into some code to see how these above steps look in practice:
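A sketch, assuming image is the InputImage from step 3 and detector is the client from step 4 (the Toast message is illustrative):

```kotlin
detector.process(image)
    .addOnSuccessListener { faces ->
        // faces is a List<Face>; each Face exposes boundingBox, landmarks, etc.
        // We'll draw the bounding boxes in the next step.
    }
    .addOnFailureListener { e ->
        // "this" assumes we're inside an Activity
        Toast.makeText(this, "Face detection failed: ${e.message}", Toast.LENGTH_SHORT).show()
    }
```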

Draw a Canvas on Detected Faces:

To draw a canvas on the detected faces, we need to first create a class that takes the List<Face> as a constructor parameter, and extend that class from View.

In the onDraw method, we first create a Paint object and then iterate over the list of Face objects. In this loop, we set the paint settings like color, stroke width, and style, and then pass each face’s bounding box coordinates and the Paint object to the canvas.drawRect method.

Here’s the DrawRect.kt class, to see how these above steps look in practice:
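A minimal sketch of the class (the paint color and stroke width are illustrative):

```kotlin
import android.content.Context
import android.graphics.Canvas
import android.graphics.Color
import android.graphics.Paint
import android.view.View
import com.google.mlkit.vision.face.Face

class DrawRect(context: Context, private val faces: List<Face>) : View(context) {

    override fun onDraw(canvas: Canvas) {
        super.onDraw(canvas)
        // Configure how the bounding boxes are painted
        val paint = Paint().apply {
            color = Color.GREEN
            style = Paint.Style.STROKE
            strokeWidth = 8f
        }
        // Draw one rectangle per detected face
        for (face in faces) {
            canvas.drawRect(face.boundingBox, paint)
        }
    }
}
```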

Calling the DrawRect Class to Draw the Canvas:

After we’ve successfully created a DrawRect class, now it’s time to call that class to draw the canvas on our detected faces.

In the OnSuccessListener, we first create an instance of our DrawRect class and call its draw method, passing in the bitmap we created earlier. After that, we set the resulting bitmap as our detected-face image.

Now let’s jump into some code to see how these above steps look in practice:
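A sketch, assuming bitmap is the Bitmap from step 3 and imageView is an ImageView in the layout (both names are assumptions):

```kotlin
detector.process(image)
    .addOnSuccessListener { faces ->
        // Draw onto a mutable copy of the original bitmap
        val mutableBitmap = bitmap.copy(Bitmap.Config.ARGB_8888, true)
        DrawRect(this, faces).draw(Canvas(mutableBitmap)) // View.draw runs onDraw against our canvas
        imageView.setImageBitmap(mutableBitmap)
    }
```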

Result

Let’s build and run the application to see our face detection model in practice:

Conclusion

This article taught you how to implement ML Kit’s on-device Face Detection API on Android. To do this, we learned how to configure FaceDetectorOptions, prepare an input image, create a FaceDetector instance, and finally process the selected image.

We also created a simple application that pulls images locally from the res/drawable folder and passes the resulting bitmap to InputImage.fromBitmap(bitmap, 0). After the API detected the faces, we got back a list of faces, and from that list we were able to draw an accurate bounding box around each one.

If you want to explore face detection in more detail, check out the official docs:

I hope this article was helpful. If you think something is missing, have questions, or would like to offer any thoughts or suggestions, go ahead and leave a comment below. I’d appreciate the feedback.

I’ve written some other Android-related content, and if you liked what you read here, you’ll probably also enjoy these:

