From edge inference to generative pipelines: building video AI that works

Central Room
Everywhere Optimization

AI for video now spans two radically different architectures:

  • real-time inference stacks optimized for millisecond latency on edge devices (XXII), and
  • generative pipelines, capable of multimodal synthesis and large-scale post-production (Aive, Veed).

Between them, broadcasters like France Télévisions must guarantee reliability, content provenance, and metadata integrity under production constraints.

This panel breaks down the engineering behind each layer: inference pipelines, asynchronous rendering, edge vs. cloud trade-offs, and how standards are reshaping video AI at scale.

A technical deep dive into how video AI systems are actually built, deployed, and trusted in production.