Lightricks Partners with ElevenLabs to Launch Audio-Driven Video Generation
Lightricks has announced a new feature for its LTX-2 model: the ability to generate video directly from an audio file. The capability is launching in an exclusive initial partnership with AI audio company ElevenLabs.
The feature, called audio-to-video, positions sound as the primary input for creating video content. According to the announcement, this approach allows the pacing, motion, and scene changes in the generated video to be driven by the timing, rhythm, and emotion of the provided audio—such as a voiceover, dialogue, or music track—rather than starting from a text description.
Solving a Creative Workflow Challenge
Lightricks frames the development as a solution to a common workflow issue where audio is traditionally added to visuals after they are created. The company states this new method is intended for early creative exploration, allowing teams to quickly build video concepts that are inherently synchronized with their audio foundation.
Partnership and Availability
The feature is launching first within ElevenLabs’ platform. Daniel Berkovitz, Chief Product Officer at Lightricks, stated the partnership with the audio AI leader is a “natural” step in expanding creative control for users.
Luke Harries from ElevenLabs added that the integration allows their community to “build professional-grade videos quickly” by turning audio stories into visual ones.
Audio-to-video generation has been available in LTX and ElevenLabs since January 20, with broader access via API and an open-source release scheduled for January 27. The tool is presented as infrastructure for developers and studios, designed for integration into real production pipelines rather than as a standalone demonstration.
About the Author

Ryan Chen
Ryan Chen is an AI correspondent from Chain.