Lens Studio 3.0 introduces SnapML for adding custom neural networks directly to Snapchat

Including a conversation with Hart Woolery, CEO of 2020CV and a SnapML creator

Snapchat: A pioneer in mobile machine learning

Whenever someone asks me to explain what mobile machine learning is, I instinctively bring up Snapchat as a core example.

In 2015, the incredibly popular social content platform added Lenses to their mobile app—if you’ve ever played with Snapchat, you know these well. They’re essentially augmented reality (AR) filters that give you big strange teeth, turn your face into an alpaca, or trigger digital brand-based experiences.

Here’s me, happy to find my all-time favorite Lens:

In addition to AR, the other core underlying technology is mobile machine learning—neural networks running on-device that do things like create a precise map of your face or separate an image/video’s background from its foreground.

While Lenses have remained a gold standard for mobile ML, the wider developer community has struggled to keep up. And for good reason—even for those who have the infrastructure, ML teams, and resources to build amazing on-device experiences, distribution and access to users remain lingering hurdles.

But that equation is rapidly changing—and once again, Snapchat is leading the way, this time with the introduction of Lens Studio 3.0, which includes a new platform that allows developers to drop custom neural networks directly into Snapchat Lenses: SnapML.

In this look at Snap’s new platform, we’ll provide a quick rundown, and then go right to the source, chatting with one of the platform’s early creators: Hart Woolery, CEO of 2020CV.

What is SnapML?

Before we jump into our conversation with Hart, here’s a quick look at how SnapML works:

  • Using SnapML, engineers, dev teams, and other creators can drop custom ML models directly into Snapchat—these models must be compatible with the ONNX model format (a rough export sketch follows this list).
  • The custom models are uploaded using Snap’s ML Component, which defines the model’s input data, output data, and how the model should run (e.g. every frame, upon a particular user action, or in real time).
  • While these models must be built outside of Snapchat and imported, there are templates available (via Notebooks) for a number of use cases: classification, object detection, style transfer, custom segmentation, ground segmentation.
  • Perhaps most importantly, because these models can be dropped directly into Snapchat as Lenses (model pre- and post-processing happens under SnapML’s hood), they instantly become available to millions of users around the world.
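
To make the first bullet a bit more concrete, here’s a minimal, hypothetical sketch of preparing a model for that pipeline: a toy PyTorch segmentation-style network exported to the ONNX format SnapML requires. The `TinySegmenter` class, the 256×256 input size, and the opset version are assumptions for illustration only (they are not one of Snap’s templates); the resulting `.onnx` file is what you would then wire up through the ML Component in Lens Studio.

```python
# Sketch only: export a toy PyTorch model to ONNX so it could be imported
# into Lens Studio via the ML Component. Model, input size, and opset are
# illustrative assumptions, not Snap-provided values.
import torch
import torch.nn as nn

class TinySegmenter(nn.Module):
    """Toy stand-in for a segmentation-style network."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=1),
            nn.Sigmoid(),  # per-pixel foreground probability
        )

    def forward(self, x):
        return self.net(x)

model = TinySegmenter().eval()

# Export with a fixed, known input shape, since the ML Component is where
# the model's input and output data get defined on the Lens Studio side.
dummy_input = torch.randn(1, 3, 256, 256)

torch.onnx.export(
    model,
    dummy_input,
    "segmenter.onnx",      # the file you'd then import into Lens Studio
    input_names=["image"],
    output_names=["mask"],
    opset_version=11,
)
```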

If you’re interested in digging deeper, the Lens Studio team has put together some great documentation:

Now that we have a better sense of what SnapML is, let’s hear more from one of its early creators and partners: Hart Woolery, CEO of 2020CV.

In recent years, Hart has also been at the forefront of mobile ML, building a number of impressive, immersive mobile AR experiences powered by on-device neural networks. We’ll share some of his projects below, but first, here’s our conversation. Enjoy!

Our interview with Hart Woolery (CEO, 2020CV)

Austin: To start, I wanted to hear your elevator pitch for SnapML — what it is, and what kinds of experiences you see it enabling.

Hart: Here’s more or less what I told Snap’s team when I first heard about it:

If you think about those viral moments with filters, like the ones that age you or change your gender, SnapML is going to enable a new wave of novel AR experiences like these.

Austin: As you mentioned, you’ve created a whole bunch of incredible mobile ML experiences (we’ll share some of those below). As someone who’s at the forefront of this technology, what are some of the specific values or benefits of SnapML you envision for folks like yourself (ML engineers, development teams, etc.)?

Hart: I would say the main benefit is reducing barriers to entry. Every time I build an app, it’s an arduous process, despite copying over large chunks of boilerplate code from previous apps. There’s design, development (features like recording and sharing, which SnapML now handles), testing, etc. With SnapML, that effort is reduced by a factor of 3–4, in my opinion. Not only that, but it’s cross-platform by default (Android, iOS, and Desktop).

Austin: What do you think the implications of this new release are for both the broader ML community and specifically folks who’ve been working with mobile ML?

Hart: It’s a game-changer, at least within the subset of people working on live-video ML models. This now becomes the easiest way for ML developers to put their work in front of a large audience. I would say it’s analogous to how YouTube democratized video publishing. It also lowers the investment in publishing, which means developers can take increased risks or test more ideas at the same cost.

Austin: We’ve featured a number of your projects in the past (I’m partial to FaceReplaced myself). Do you have a favorite? Either one that was the most fun/challenging to work on, or maybe one that you’re most pleased with where it ended up?

Hart: I think InstaSaber had the best response, but I probably invested the most effort into YoPuppet. It was also how I first landed an investment in 2020CV — by sending a cold email to Mark Cuban with this video attached (no ML was used for the prototype and it was incredibly unreliable at tracking):

Austin: Anything in the works that you’d like to share?

Hart: I’m going to be investing the next few months into prototyping Lenses, most likely. Originally I only had ~30 days to port my models to Snapchat, and a large part of that was converting everything from TF to PyTorch because of some limitations at the time. Then, right when I made the deadline, COVID-19 hit and threw everything off. So now that I know publishing is live, I should be able to make a new Lens every week or two.

Austin: Anything else you’d like to add or that our readers should know?

Hart: I’ve received a lot of questions about monetization. Currently, there are two indirect ways to monetize your model: one is to build a branded Lens for another company; the other is to license your model to Snap. Neither is guaranteed money, but at the very least you can get some exposure and test the market. Take a look at Lens Studio 3.0!

Some of Hart’s Work

There’s no better way to showcase the power of on-device ML (and what might be possible when developing for SnapML) than by…well, showcasing it. And Hart’s apps are shining examples:

FaceReplaced

From the app store: “FaceReplaced is a brand new way to create 3D face filters out of virtually anything! Just snap a photo or import one, erase the background, and like magic it creates a mask for you. It works on pets, humans, stuffed animals, household objects, PNG images, and many more things!”

YoPuppet

From the app store: “YoPuppet brings virtual hand puppets anywhere you want to go. Easily create your own puppet shows, record and share with friends, or just enjoy with your family at home. Endless possibilities and new puppets added every week. YoPuppet uses advanced technology to capture 22 points on your hand in realtime and map them to the puppet’s face.”

Say BARK!

From the app store: “Send the best (and weirdest) greeting with Say BARK!, the app that lets you create and share animated cards starring your dog. Take a photo of your pup, add one of our filters and lip synced audio, and share your personalized message with the world.”

InstaSaber

From the app store: “InstaSaber is the first AR app that allows you to instantly turn an ordinary piece of paper into a virtual saber and swing it around in realtime! Choose your saber effects and take a photo or record a video to share with your friends.”
