
Operationalizing TwelveLabs AI with Embrace Orchestration on AWS
Paris, August 28, 2025
Unlocking Video Intelligence at Scale
As the complexity of video workflows grows, Media and Entertainment companies are racing to extract more value from their vast content libraries. With the recent general availability of TwelveLabs’ Marengo and Pegasus video-native foundation models in Amazon Bedrock, AI-powered video understanding is now more accessible than ever. To truly operationalize these capabilities at scale, customers need workflow orchestration, metadata automation, and search infrastructure integration.
Unlocking the Value of Video
Today, many companies still struggle to extract full value from their existing video archives. TwelveLabs, through its API-first video foundation models, is making it easier to search and analyze video at scale. Its Marengo embedding model generates vector embeddings that power semantic search and pattern recognition, while Pegasus allows users to summarize, classify, and describe videos in natural language.
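To make this concrete, here is a minimal Python sketch of the kind of Bedrock call an orchestration step might issue to a model such as Pegasus. It assumes boto3 and the Bedrock runtime API; the model ID, request fields, and S3 location are illustrative placeholders rather than the confirmed schema, which should be taken from the Amazon Bedrock documentation for the TwelveLabs models.

    import json
    import boto3

    # Bedrock runtime client (use a region where the TwelveLabs models are available)
    bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

    # NOTE: the model ID and request fields below are illustrative assumptions;
    # consult the Amazon Bedrock documentation for the exact TwelveLabs schema.
    MODEL_ID = "twelvelabs.pegasus-1-2-v1:0"  # assumed identifier

    request = {
        "inputPrompt": "Summarize this clip and list its main themes.",
        "mediaSource": {
            "s3Location": {"uri": "s3://my-archive/clips/episode-042.mp4"}  # hypothetical asset
        },
    }

    response = bedrock.invoke_model(
        modelId=MODEL_ID,
        body=json.dumps(request),
        contentType="application/json",
        accept="application/json",
    )

    result = json.loads(response["body"].read())
    print(result)  # e.g. summary, chapters, classifications to route downstream

In an orchestrated pipeline, a workflow engine would issue calls like this per asset and pass the returned metadata and embeddings to downstream systems.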
To unlock the full potential of their video archives, many Media and Entertainment companies also require orchestration, media and production asset management (MAM and PAM) metadata integration, and business process frameworks. Embrace provides these capabilities through its Pulse-IT and Automate-IT platforms.
Together, Embrace and TwelveLabs deliver an end-to-end workflow to analyze, enrich, and activate video content at scale across the entire media supply chain: AI-powered understanding with TwelveLabs, followed by orchestrated metadata injection, content adaptation, and automated distribution with Embrace.
Embrace + TwelveLabs: Actionable Workflows
Embrace’s Pulse-IT and Automate-IT platforms connect TwelveLabs models in Amazon Bedrock with your media supply chain, metadata systems, and production teams.
Real-World Use Case: Enabling Searchable Video Archives & AI-Powered Promo Creation
Imagine sitting on a 10,000-hour video archive: a goldmine, but only if you can unlock it.
With Embrace + TwelveLabs, that potential becomes action:
- Pulse-IT uses its low-code orchestration engine to manage the entire flow:
- Trigger TwelveLabs in Amazon Bedrock to analyze the content and return rich metadata and embeddings
- Route results to the right downstream tools (MAM, CMS, editors) to enrich metadata and power intelligent search (see the indexing sketch after this list)
- Create metadata at scale so assets become easily searchable with traditional MAM search engines
- Facilitate human-in-the-loop interactions where and when it makes sense (e.g. notifications, review and approval, version management)
- Surface summaries, chapters, themes, or content categories to delivery endpoints for enhanced audience discoverability
- Automate-IT complements this with creative automation:
- Automatically uses enriched metadata (e.g. titles, highlights, detected objects) to generate promos and social videos with the correct branding and subtitles, without requiring an editor to open Adobe After Effects
- Enables hands-free content adaptation across languages, regions, or platforms
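As a sketch of what routing results to downstream tools could look like in practice, the Python example below indexes TwelveLabs-style embeddings and metadata into an OpenSearch k-NN index so that a MAM or CMS layer can run semantic search over the archive. The endpoint, index name, field names, and vector dimension are assumptions for illustration, not part of the Pulse-IT product itself.

    from opensearchpy import OpenSearch

    # Connection details are placeholders for an Amazon OpenSearch Service domain.
    client = OpenSearch(
        hosts=[{"host": "search-my-archive.example.com", "port": 443}],
        use_ssl=True,
    )

    INDEX = "video-archive"

    # k-NN index for semantic search over Marengo-style embeddings
    # (the vector dimension is an assumption; match it to the model output).
    if not client.indices.exists(index=INDEX):
        client.indices.create(
            index=INDEX,
            body={
                "settings": {"index": {"knn": True}},
                "mappings": {
                    "properties": {
                        "embedding": {"type": "knn_vector", "dimension": 1024},
                        "title": {"type": "text"},
                        "summary": {"type": "text"},
                        "chapters": {"type": "keyword"},
                    }
                },
            },
        )

    def index_asset(asset_id, embedding, metadata):
        """Store one enriched asset so users can find it by meaning, not just keywords."""
        client.index(index=INDEX, id=asset_id, body={"embedding": embedding, **metadata})

    def semantic_search(query_embedding, k=5):
        """Return the k assets whose embeddings are closest to the query embedding."""
        hits = client.search(
            index=INDEX,
            body={"size": k, "query": {"knn": {"embedding": {"vector": query_embedding, "k": k}}}},
        )
        return [h["_source"] for h in hits["hits"]["hits"]]

A production orchestration flow would typically pair this indexing step with notifications and review tasks before assets are exposed to editorial or delivery endpoints.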
Seamless Integration with AWS
- Integrate directly with Amazon Bedrock
- Scale dynamically with Kubernetes on Amazon EKS
- Enforce security with SSO and role-based access via Amazon Cognito
- Run workloads across cloud and on-premises as needed
Who Benefits?
- Media Archives: unlock searchable knowledge from legacy footage.
- Broadcasters & Studios: accelerate promo versioning and content repurposing.
- Sports Leagues: index and republish highlight reels faster than ever.
- Marketing Teams: personalize campaigns at scale using video insights.
- News & Documentary Teams: quickly locate quotes, scenes, or themes.
Let’s Talk at IBC 2025!
Both Embrace and TwelveLabs will be at IBC 2025. Visit us at Hall 6.C11 to see this game-changing integration in action. We invite all media professionals, technologists, and curious minds to explore how AI and orchestration can turn your video data into performance and creativity.
About TwelveLabs
TwelveLabs delivers industry-leading video AI solutions that unlock the full potential of vast video libraries. Our proprietary multimodal foundation models bring human-like understanding to videos, enabling precise semantic search, summarization, analysis and Q&A through easy-to-integrate APIs. This empowers enterprises to effortlessly search, monetize IP, extract insights, and repurpose content at scale.
Unlike conventional methods that struggle with video, TwelveLabs overcomes the limitations of manual tagging and inadequate computer vision techniques, streamlining processes with customizable models. These models make previously inaccessible video assets searchable and integrate seamlessly into existing workflows. From industry leaders in media and entertainment to governments around the world, TwelveLabs is changing the way the world works with video.
About EMBRACE
Since 2015, Embrace has been transforming content creation at scale by connecting people, systems and processes. The company develops advanced automation, orchestration and collaboration solutions for the Media & Entertainment industry and global brands. Embrace aims to unleash creativity and improve performance around video and graphics supply chains.
Our products are heavily used 24/7 by leading media groups such as AMC Networks, Arte Studio, BCE, Be tv, CANAL+, Disney-ABC News, Euronews, Eurosport, Hearst Networks EMEA, Madison Square Garden Networks, Mediawan Thematics, M6, Mercedes-AMG, Orange, Red Bee Media, RTL Group, ProSiebenSat1, Sinclair, TF1, TV5MONDE, Warner Bros. Discovery.
For more information, visit www.twelvelabs.io or www.embrace.fr.
Contact Information:
Aline Rolland, Head of MarCom at Embrace | aline@embrace.fr