HeadlinesBriefing.com

Google Photos AI Now Lets You Re-compose Shots After the Fact

Source: Google AI Blog

Google has launched a new AI-powered feature in Google Photos that lets users re-compose their photos from different angles after the shot was taken. The tool, part of the Auto frame feature, uses machine learning to understand a scene's 3D layout and generates new perspectives that were impossible to capture originally. This addresses the common problem of "almost perfect" shots where the angle or framing feels slightly off.

The system works in two stages. First, it estimates a 3D point map of the scene, focusing in particular on human faces and bodies so that identity is preserved. Second, a latent diffusion model fills in content that was hidden in the original shot but becomes visible when the virtual camera moves. Google trained this model specifically on image pairs with known camera parameters so it could handle this inpainting task.
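Google has not published the implementation, but the core geometric problem in the second stage can be illustrated with a toy sketch: once you know scene depth, moving the camera shifts each pixel by a depth-dependent amount, and the target pixels that no source pixel lands on are exactly the "disoccluded" regions a generative model must fill. Everything below (the 1D scene, the function name, the numbers) is illustrative, not Google's code.

```python
import numpy as np

# Stage 1 stand-in: a dense depth map plays the role of the 3D point map.
# Stage 2 stand-in: we only compute the disocclusion mask that a diffusion
# model would then be asked to inpaint.

def disocclusion_mask(depth, fx, baseline, width):
    """Shift each source pixel by its disparity (fx * baseline / depth)
    for a sideways camera move, and mark target pixels that no source
    pixel reaches: those are the newly revealed regions to hallucinate."""
    cols = np.arange(width)
    disparity = np.round(fx * baseline / depth).astype(int)
    hit = np.zeros(width, dtype=bool)
    new_cols = cols + disparity
    valid = (new_cols >= 0) & (new_cols < width)
    hit[new_cols[valid]] = True
    return ~hit  # True where content must be generated, not copied

# Toy 1D scene: a far wall (depth 2 m) next to a near object (depth 1 m).
depth = np.array([2.0] * 4 + [1.0] * 4)
mask = disocclusion_mask(depth, fx=4.0, baseline=0.5, width=8)
print(mask.astype(int))  # → [1 0 0 0 0 1 0 0]
```

Because the near object shifts farther than the wall behind it, a gap opens at the depth boundary (index 5) and at the image edge (index 0); those are the pixels the diffusion model would synthesize.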

The technology can adjust both camera pose and focal length, automatically correcting the wide-angle distortion that makes features closest to the lens appear unnaturally large. For portraits, it detects face position and orientation to compute an ideal framing. The feature is now live in Google Photos as a single-action improvement for eligible photos containing people; it was developed jointly by the Google DeepMind and Google Platforms & Devices teams.
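Why adjusting pose and focal length together fixes wide-angle distortion follows from the pinhole camera model, where a feature's image size is proportional to focal length divided by depth. The sketch below (all distances and focal lengths are made-up numbers, not from Google's system) shows that virtually dollying the camera back while lengthening the focal length keeps the subject the same size but shrinks the size mismatch between near features (a nose) and farther ones (the ears):

```python
def projected_size(f, real_size, depth):
    # Pinhole model: image size = focal_length * real_size / depth.
    return f * real_size / depth

# Close-up wide-angle shot (illustrative numbers): nose at 0.5 m, ears at 0.62 m.
f_wide, nose, ears = 24.0, 0.5, 0.62
wide_ratio = projected_size(f_wide, 1.0, nose) / projected_size(f_wide, 1.0, ears)

# Virtually dolly back 1.5 m, lengthening the focal length so the nose
# stays the same size on the image plane.
d = 1.5
f_tele = f_wide * (nose + d) / nose
tele_ratio = (projected_size(f_tele, 1.0, nose + d)
              / projected_size(f_tele, 1.0, ears + d))

print(round(wide_ratio, 3))  # → 1.24  (nose rendered 24% larger than ears)
print(round(tele_ratio, 3))  # → 1.06  (distortion mostly gone)
```

The ratio depends only on the two depths, so pushing the virtual camera back drives it toward 1; the longer focal length merely compensates for the size loss. This is the same geometry a photographer exploits by stepping back and zooming in.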