Luma, an AI startup backed by venture firm Andreessen Horowitz (a16z), released its latest generative video model, Ray3 Modify, which allows users to alter existing video footage by uploading character reference images.
The company said the model can preserve original performance elements, such as motion, timing, eyeline, and emotional delivery, while transforming a scene. Ray3 Modify also lets users guide transitions by submitting start and end reference frames.
According to Luma, the model helps creative studios preserve human performances in AI-generated edits and effects. Because it closely follows the input footage, the tool supports production with human actors while enabling post-production transformations, such as changing appearances, costumes, or settings, without reshooting.
“This means creative teams can capture performances with a camera and then immediately modify them to be in any location imaginable, change costumes, or even go back and reshoot the scene with AI, without recreating the physical shoot,” Luma co-founder and CEO Amit Jain said in a statement.
Ray3 Modify is available through Luma’s Dream Machine platform. Luma, which first launched video modification features in June 2025, competes with other AI video platforms such as Runway and Kling.
The model’s debut follows Luma’s $900 million funding round in November, led by Humain, an AI company owned by Saudi Arabia’s Public Investment Fund. Existing investors, including a16z, Amplify Partners, and Matrix Partners, also participated. Luma also plans to build a 2-gigawatt AI compute cluster in Saudi Arabia in partnership with Humain.








