At the Augmented World Expo, Snapchat introduced new tools powered by generative Artificial Intelligence (AI) to enhance augmented reality (AR) experiences and transform images in real time on mobile devices. These new features were showcased by co-founder and CTO Bobby Murphy, who demonstrated how they work.
Snapchat’s teams have been working to make machine learning models run faster on-device so they can play a larger role in augmented reality. They have developed new generative AI tools for the mobile app that simplify the creation of personalized AR effects. For example, users can now describe a lens in their own words and watch it transform their photos in real time based on that description.
These generative AI techniques also power other features of the platform, such as Dreams, Bitmoji Backgrounds, Chat Wallpapers, and AI Pets. Snapchat is integrating the technology into Lens Studio as well, allowing creators to generate custom machine learning models for their lenses. With these new tools, creators can build more ambitious projects with improved productivity, modularity, and speed.
The new Lens Studio 5.0 has been rebuilt from scratch to give users new ways to express themselves and to open up new dimensions of creativity through the tools in Lens Studio and its GenAI Suite. Through these innovations, Snapchat aims to empower users to explore new possibilities and push the boundaries of what AR technology can do on mobile devices.