6 Takeaways from Snapchat’s Lens Fest

ML in Lens Creation, Lens Studio 3.3, LiDAR, and more

I’ve never attended an event quite like Lens Fest. Not just because it was entirely virtual, but largely because of the community of people it was celebrating — AR developers, graphic designers, 3D artists, software engineers, ML engineers, and a wide range of other people from around the world who, put simply, create really cool stuff. I was blown away by all the amazing experiences people from all walks of life are building with Lens Studio.

From immersive games, unique brand experiences, and transformations of famous architectural landmarks, to fun Lenses that make you look like a pile of noodles or allow you to wield an AR lightsaber — we’re getting to the point where there truly is something for everyone when it comes to Snapchat Lenses.

For a bit of context, I attended Lens Fest as part of the team at Fritz AI, one of the machine learning partners and session participants. While I was focused primarily on how the Lens Studio team is enabling Creators to leverage their own ML models in their creation (covered in more detail below), I also left with a few other takeaways from a number of the other sessions and conversations I had with Creators.

I’d like to highlight those takeaways in this post.

1. Machine learning front & center

I might be a bit biased, but I don’t think it’s an overstatement to say that machine learning was one theme that popped up again and again at Lens Fest, both in terms of the relatively new capabilities of SnapML and in terms of what Lens Studio provides out of the box.

SnapML received a dedicated session along with a networking hour, giving Creators a chance to familiarize themselves with the capabilities of the framework, see some incredible custom ML models in action, and chat with the ML partners (that’s us!) about how to get started creating Lenses with custom ML.

One line I heard from SnapML interactive engineer Jonathan Solichin really stood out to me—“SnapML is really any feature.” While some of those features might still be really challenging to access (machine learning can be tough and take a lot of time and resource investment), this conception of SnapML as something that extends Lens Studio is incredibly appealing. The ability to add ML-powered features (e.g. custom object tracking) that don’t already exist in the platform has the chance to really change how Lenses are created and interacted with.

All that said, here are my key ML takeaways:

  • Machine learning is integral to almost everything Snapchat does with Lenses. From face tracking to object/scene segmentation and beyond, so much of the Lens creation process happens on top of really powerful mobile ML models.
  • Snapchat is committed to putting this technology in the hands of all kinds of creative people. I’ve had my doubts about other mobile platforms when it comes to things like ML and AR, but the Lens Studio team is hard at work to make SnapML more accessible, and we’re excited to be an integral part of that process as an ML partner.
  • GANs, GANs, GANs. These are all the rage — the viral face Lenses over the past year (age changer, gender swap, baby face, anime, Pixar character) have all employed generative adversarial networks to some extent to create those mind-blowing experiences. And the fervor has caught on among Creators. GANs are still really difficult to run on mobile phones, but the hardware and open source model architectures are getting closer, so don’t be surprised to see a GAN template or a Python notebook pop up from someone in the future (a rough sketch of the general model-export path follows this list).
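To make the custom-model idea a bit more concrete, here’s a rough sketch of the kind of export step a model typically goes through before it can be brought into Lens Studio as a SnapML asset. This assumes a PyTorch model and uses a tiny placeholder network for illustration; the exact input size, output format, and naming depend on the feature and template you’re targeting, so treat it as a starting point rather than Snap-provided code.

```python
# Illustrative sketch: exporting a small PyTorch model to ONNX so it can be
# imported into Lens Studio as a SnapML asset. The network, input size, and
# file name are placeholders, not Snap-provided code.
import torch
import torch.nn as nn


class TinySegmenter(nn.Module):
    """Stand-in for whatever custom model you train (e.g. a lightweight segmenter)."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 8, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(8, 1, kernel_size=1),
            nn.Sigmoid(),  # per-pixel mask in [0, 1]
        )

    def forward(self, x):
        return self.net(x)


model = TinySegmenter().eval()

# SnapML models run on a fixed-size camera input; 256x256 RGB is a common choice.
dummy_input = torch.randn(1, 3, 256, 256)

torch.onnx.export(
    model,
    dummy_input,
    "tiny_segmenter.onnx",  # the file you'd then import into Lens Studio
    input_names=["input"],
    output_names=["mask"],
    opset_version=11,
)
```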

2. Lens Studio 3.3: Visual Scripting, Texture Compression, & more

A big part of Lens Fest was also the official unveiling of Lens Studio 3.3. I’ve been impressed by the features the Lens Studio team has been able to pack into iterative 3.x updates since 3.0 launched back in the early summer. And 3.3, from a Creator’s perspective, has to be pretty exciting.

Here’s a quick rundown of what seem like the most impactful new additions:

  • Texture Compression: This new feature allows you to compress texture assets directly inside Lens Studio. From what I could tell in the comments when this was announced, it’s going to be a really important workflow improvement, especially for those who commonly have to bounce back and forth between graphics engines and Lens Studio for this task.
  • Visual Scripting: As someone who isn’t a programmer but is code-curious, this was the announcement that most intrigued me. Instead of programming scripts in JavaScript, it lets you build the same custom Lens logic as a graph, using a node system similar to (or maybe the same as?) material graphs. When I hop back into Lens Studio, this is certainly something I want to try.
  • New Templates: In addition to a template that gets you up and running with visual scripting, the Lens Studio team also added a Face Morph template that allows you to morph faces with custom 3D meshes; a Tween template that lets you use a dropdown menu to set up animations; and a Configuration template that leverages various UI widgets to help you create adjustable Lenses.

In addition to these new features, there are a number of performance enhancements, bug fixes, and other improvements, with quite a few of those focused on SnapML. You can check those out in the release notes.

3. LiDAR moves Lens Studio closer to enabling full occlusion

With the iPhone 12 Pro and Pro Max, LiDAR has made its way to the smartphone. What is LiDAR? Simply put, it’s a sensor that sends out bursts of light (in the form of a laser) and measures the time it takes for that light to reflect off a surface and return to the scanner. In practice, this allows the camera to build a 3D depth map of a scene.
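To put a number on “measures the time it takes”: light moves fast enough that these round trips are measured in nanoseconds. The toy calculation below is just the underlying time-of-flight physics, not Snap or Apple code.

```python
# Back-of-the-envelope time-of-flight math behind a LiDAR depth reading:
# the pulse travels out to a surface and back, so
# distance = (speed of light * round-trip time) / 2.
SPEED_OF_LIGHT_M_PER_S = 299_792_458


def depth_from_round_trip(round_trip_seconds: float) -> float:
    """Distance in meters to a surface, given the measured round-trip time of a light pulse."""
    return SPEED_OF_LIGHT_M_PER_S * round_trip_seconds / 2


# A wall about 3 meters away reflects the pulse back in roughly 20 nanoseconds.
print(depth_from_round_trip(20e-9))  # ~3.0 meters
```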

In one Q&A session with some folks from Apple, their team used an analogy that I really liked. Imagine your entire living room covered tightly in plastic wrap — this sealed wrap basically creates a transparent overlay over all the textures in the space. LiDAR in the iPhone 12 allows the Snapchat camera to create this kind of overlay of any environment, complete with hyperrealistic 3D depth information about it.

In the demo Snapchat and Apple co-produced, the environmental textures and graphics (grass, rocks, vines, starry sky) grow dynamically with the camera scene as it changes—vines only grow on walls, the starry sky only blankets ceilings, etc.

Essentially, this is a huge step towards fine-grained scene understanding, more complete control over occlusion, and more realistic AR experiences more broadly. This tech is in its infancy (at least in terms of mobile experiences), so be on the lookout for some incredible experiments and experiences on this next generation of smartphones powered at least in part by LiDAR.

4. Snapchat announces $3.5M fund for Creators

There’s already been quite a bit of coverage on this item from other awesome outlets, so I won’t go into too much detail, but this, along with the ongoing residency series, feels like a first step in Snapchat’s deeper investment in Lenses as (at least in part) commercial products, and in Lens Creators as the pioneers of the next era of immersive AR experiences. It will be exciting to see how this fund impacts the space.

5. Snapchat Lenses are more than just face filters

This was certainly already true before this week, but it really hit home for me that the Lens ecosystem extends far beyond face filters — as I used to call them back in the day, when the Big Mouth Lens was a personal revelation (to be honest, it still is).

Snapchat Lenses truly are lenses through which to view the world, not simply layers to place on top of the world. It’s a subtle but powerful difference when you consider features like World Lenses, Landmarkers, and now, SnapML. Combine these features with incredible new tech like LiDAR and Spectacles (more on Snapchat Spectacles here — not something I knew much about before this week), and we’re just scratching the surface of this new era in social AR — and mixed reality experiences more broadly.

I especially appreciated the in-depth conversation among three of the engineers who work on the Lens Studio team.

6. Shoutout to the Lens Creator Community

For my last takeaway, I’d like to give a quick shoutout to the Lens Creator community. Our team and I have had the privilege of talking to a whole bunch of Creators as we’ve built out support for SnapML in Fritz AI, and though we really enjoyed those 1:1 conversations, seeing this community come together in the most difficult and trying of circumstances was truly special.

I repeatedly saw such healthy, helpful, and encouraging interactions in session chats, networking events, and in other public squares. No negativity, super inclusive, and just a really interesting, diverse group of people.

Additionally, kudos to the Snap team that put on this event and clearly cares deeply about its Creator community, partners, and everyone else who makes all this magic happen. Thank you for including us, and we hope this is the beginning of a long journey together.

