How ARKit 3.5 Enables Immersive Augmented Reality Experiences on iOS

Learn about the cutting-edge technology behind AR apps on iOS

Augmented reality (AR) is all the rage these days, and AR apps keep getting better. Now, with the release of the new iPad Pro with a LiDAR scanner and the latest version of the ARKit framework (version 3.5), it’s more important than ever to understand what makes this amazing technology tick.

With ARKit 3.5, Apple has further improved its augmented reality technology, adding accurate scene geometry, instant-start AR experiences, and enhanced people occlusion. From motion capture to face tracking, this article discusses the features that allow ARKit to deliver immersive, realistic experiences to end users.

People Occlusion

Conventionally, a virtual object placed in an augmented reality scene is drawn over everything behind it. In other words, the entire camera feed is treated as a “background” for the virtual objects. When people are involved, this often makes augmented reality experiences appear glitchy, unrealistic, or inconsistent.

Think about it: if a person passes “between” your device and your virtual object, you’d expect the object to remain behind the person.

With people occlusion in ARKit, developers can take advantage of machine learning technology to detect people and adjust virtual objects in the scene accordingly.

As the name suggests, people occlusion allows virtual objects to be partially occluded, or covered up, by people standing between them and the device. Surprisingly, this technology works even when the entire person isn’t visible in the scene: it works with hands, fingers, and even feet! This creates a more realistic effect, making the virtual objects appear as if they’re actually present in the scene.
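In code, people occlusion is opted into through a configuration’s frame semantics. Below is a minimal sketch: it prefers depth-aware segmentation (which lets people stand between the camera and virtual content) and falls back to plain segmentation on hardware that doesn’t support it.

```swift
import ARKit

// Sketch: build a world-tracking configuration with people occlusion enabled.
// Availability checks matter here, since occlusion requires A12-class hardware.
func makeOcclusionConfiguration() -> ARWorldTrackingConfiguration {
    let configuration = ARWorldTrackingConfiguration()

    if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
        // People can occlude virtual content based on their estimated depth.
        configuration.frameSemantics.insert(.personSegmentationWithDepth)
    } else if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentation) {
        // Fallback: segment people without per-pixel depth.
        configuration.frameSemantics.insert(.personSegmentation)
    }
    return configuration
}
```

You would then run this configuration on your view’s session, e.g. `arView.session.run(makeOcclusionConfiguration())`.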

Motion Capture

To further support people’s interactions with virtual objects, ARKit can track body movements, allowing people not only to occlude virtual objects but also to control them.

When ARKit detects a person in a scene, it creates a wireframe skeleton under the hood, which it uses to track the motion of the most common joints, such as the knees, hips, and elbows. The software can also detect faces, similar to face detection in photos.
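That skeleton surfaces in code as an `ARBodyAnchor`. The sketch below, assuming a device that supports body tracking, runs a body-tracking session and reads a joint transform from the tracked skeleton:

```swift
import ARKit

// Sketch: track a person's skeleton with ARBodyTrackingConfiguration.
final class BodyTracker: NSObject, ARSessionDelegate {
    func startBodyTracking(in session: ARSession) {
        guard ARBodyTrackingConfiguration.isSupported else { return }
        session.delegate = self
        session.run(ARBodyTrackingConfiguration())
    }

    // ARKit publishes the detected person as an ARBodyAnchor.
    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for case let bodyAnchor as ARBodyAnchor in anchors {
            let skeleton = bodyAnchor.skeleton
            // Look up a joint's transform relative to the skeleton's root.
            if let rootTransform = skeleton.modelTransform(for: .root) {
                _ = rootTransform // e.g. drive an avatar rig from here
            }
        }
    }
}
```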

Ultimately, detecting body movements with this level of precision can prove useful in a variety of applications. In one of Apple’s own demos, the technology drove an avatar that mimicked the user’s every move. Developers can use this capability to build imaginative experiences for both entertainment and productivity.

Simultaneous Front and Back Camera

In the early 2010s, when devices began getting two cameras, they were never intended to be used at the same time; they were designed to allow switching based on the use case.

In ARKit, these cameras can be used at the same time, allowing users to control virtual objects without even touching the screen! Using advanced face detection and dedicated processors, the simultaneous use of both front and back cameras is made possible.

Similar to the full-body movement tracking from earlier, the use of both cameras allows the user to control a scene through their actions. Unlike body tracking, though, face detection is much more precise: it uses the TrueDepth camera to build a 3D map of the user’s facial features rather than relying solely on 2D images.
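In ARKit, this simultaneous mode is enabled with a single flag on the world-tracking configuration, guarded by a hardware check. A minimal sketch:

```swift
import ARKit

// Sketch: world tracking with the rear camera while also tracking the
// user's face with the front TrueDepth camera (supported hardware only).
func makeDualCameraConfiguration() -> ARWorldTrackingConfiguration? {
    guard ARWorldTrackingConfiguration.supportsUserFaceTracking else { return nil }
    let configuration = ARWorldTrackingConfiguration()
    configuration.userFaceTrackingEnabled = true
    return configuration
}
```

With this configuration running, face data arrives as `ARFaceAnchor` updates in the session delegate, so expressions captured by the front camera can drive content placed in the rear-camera scene.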

By combining this hardware with machine learning technology, the simultaneous use of front and back cameras can immerse users in a scene, making the experience more realistic than ever before.

Collaborative Scenes

Traditionally, augmented reality has been a lonely experience, only allowing one user at a time to interact with its magic. But, in recent times, that’s become far from the truth. With the introduction of saved 3D depth maps and a compressed file format, it has become possible to combine the power of more than one mobile device to create a collaborative augmented reality scene using ARKit.

Whether you’re building an imaginary castle or playing a quick game of topple the blocks, the combination of network capabilities and saved depth maps has made it possible to share the joy of augmented reality with your friends and family. And with live location tracking added to the mix, collaborative scenes offer game developers a great way to bring augmented reality to their existing multiplayer games.
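Collaboration is opted into on the configuration, after which ARKit periodically emits compressed collaboration data for you to ship over the network and feed into peers’ sessions. The sketch below assumes a hypothetical `sendToPeers` closure you would back with MultipeerConnectivity or your own transport:

```swift
import ARKit

// Sketch of a collaborative session. `sendToPeers` is a hypothetical hook
// you'd implement with MultipeerConnectivity or another network layer.
final class CollaborationHandler: NSObject, ARSessionDelegate {
    var sendToPeers: ((Data) -> Void)?

    func startCollaborativeSession(_ session: ARSession) {
        let configuration = ARWorldTrackingConfiguration()
        configuration.isCollaborationEnabled = true
        session.delegate = self
        session.run(configuration)
    }

    // ARKit periodically outputs collaboration data; serialize and send it.
    func session(_ session: ARSession,
                 didOutputCollaborationData data: ARSession.CollaborationData) {
        if let encoded = try? NSKeyedArchiver.archivedData(
            withRootObject: data, requiringSecureCoding: true) {
            sendToPeers?(encoded)
        }
    }

    // When a peer's data arrives, merge it into the local session.
    func receive(_ encoded: Data, in session: ARSession) {
        if let data = try? NSKeyedUnarchiver.unarchivedObject(
            ofClass: ARSession.CollaborationData.self, from: encoded) {
            session.update(with: data)
        }
    }
}
```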

Scene Geometry

We’ve slowly been heading towards creating accurate 3D depth maps of surroundings for augmented reality purposes. And, with constant updates to hardware technology, we’re getting closer than ever to that new reality.

Depth maps are important because they allow for more realistic AR object placements and better reliability. For example, in augmented reality measurement apps, the device’s understanding of a scene can mean the difference between a close-to-accurate reading and a seemingly random guess.

With the introduction of a LiDAR (Light Detection and Ranging) scanner on the newest member of the iPad Pro lineup, accurate 3D maps of a scene are now possible through the use of time-of-flight calculations. In essence, the device can now have both an image and 3D scan of the surroundings to produce more reliable results for augmented reality experiences.
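On LiDAR-equipped devices, this scan is exposed through ARKit’s scene reconstruction option. A minimal sketch, again guarded by a hardware check:

```swift
import ARKit

// Sketch: enable LiDAR-based scene reconstruction (2020 iPad Pro onward).
func makeSceneGeometryConfiguration() -> ARWorldTrackingConfiguration? {
    guard ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) else {
        return nil // no LiDAR scanner on this device
    }
    let configuration = ARWorldTrackingConfiguration()
    configuration.sceneReconstruction = .mesh // or .meshWithClassification
    return configuration
}
```

Once running, the reconstructed surroundings arrive as `ARMeshAnchor` instances, each carrying vertex and face geometry you can use for physics, occlusion, or measurement.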

Conclusion

ARKit is an excellent framework for creating immersive and responsive augmented reality apps. Besides improving by leaps and bounds with each new revision, ARKit utilizes cutting-edge technology to give developers the hardware and software tools needed to provide their users with the best possible augmented reality experiences.

Whether it’s support for people occlusion in augmented reality scenes or a new sensor on the latest device, Apple continues to be a pioneer in augmented reality technology and provides excellent tools to make use of it. Hopefully, you’ve learned a lot about how augmented reality works under the hood and are now inspired to create your own augmented reality experiences! Check out some of my other work to learn more about how you can do this.

Be sure to smash that “clap” button as many times as you can, share this tutorial on social media, and follow me on Twitter.

Fritz

Our team has been at the forefront of Artificial Intelligence and Machine Learning research for more than 15 years and we're using our collective intelligence to help others learn, understand and grow using these new technologies in ethical and sustainable ways.
