Snap previews its real-time image model that can generate AR experiences
At the Augmented World Expo on Tuesday, Snap teased an early version of its real-time, on-device image diffusion model that can generate lifelike AR experiences. The company also unveiled a generative AI tool for AR creators.
Snap co-founder and CTO Bobby Murphy said on stage that the model is small enough to run on a smartphone and fast enough to re-render frames in real time based on text prompts.
Murphy said that while generative AI image diffusion models are exciting, they need to be much faster to be effective for augmented reality, which is why his teams are working to accelerate machine learning models.
Snapchat users will start seeing Lenses with this generative model in the coming months, and Snap plans to bring it to creators by the end of the year.
“This and future real-time device generative ML models point to an exciting new direction for augmented reality, and are giving us space to rethink how we imagine and create AR experiences,” Murphy said.
Murphy also announced that Lens Studio 5.0 is launching today for developers, with access to new generative AI tools that will help them create AR effects far faster than is currently possible, saving weeks or even months of work.
With the new tools, AR creators can build selfie Lenses featuring highly realistic ML face effects, generate custom stylization effects that apply realistic transformations to a user's face, body, and surroundings in real time, and produce 3D assets in minutes to incorporate into their Lenses.
In addition, AR creators can generate characters such as aliens or wizards with text or image prompts using the company’s Face Mesh technology. They can also create face masks, textures, and materials in minutes.
The latest version of Lens Studio also includes an AI assistant that can answer questions from AR creators.