Diffusion models have recently achieved remarkable results in video generation. Despite this encouraging performance, the generated videos are typically constrained to a small number of frames, resulting in clips lasting merely a few seconds. The primary challenges in producing longer videos are the substantial memory requirements and the extended processing time on a single GPU. A straightforward solution is to split the workload across multiple GPUs, which, however, raises two issues: (1) ensuring that all GPUs communicate effectively to share timing and context information, and (2) adapting existing video diffusion models, which are usually trained on short sequences, to create longer videos without additional training. To tackle these, in this paper we introduce Video-Infinity, a distributed inference pipeline that enables parallel processing across multiple GPUs for long-form video generation. Specifically, we propose two coherent mechanisms: Clip parallelism and Dual-scope attention. Clip parallelism optimizes the gathering and sharing of context information across GPUs, minimizing communication overhead, while Dual-scope attention modulates the temporal self-attention to balance local and global contexts efficiently across the devices. Together, the two mechanisms distribute the workload and enable the fast generation of long videos. Under an 8× Nvidia 6000 Ada (48 GB) GPU setup, our method generates videos of up to 2,300 frames in approximately 5 minutes, making long-video generation up to 100 times faster than prior methods.
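The idea behind Dual-scope attention can be illustrated with a minimal sketch: each device's queries attend over a concatenation of local context (frames from its own clip and neighbours) and a small global context shared by all devices. The function name, shapes, and NumPy formulation below are our own illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def dual_scope_attention(q, local_kv, global_kv):
    """Illustrative temporal self-attention over a dual-scope context.

    q         : (T, d) query frames for this device's clip
    local_kv  : (L, d) local-scope context (this clip and its neighbours)
    global_kv : (G, d) global-scope context (sparse frames shared across devices)
    Returns   : (T, d) attended features.
    """
    # Each query sees both scopes in a single attention pass.
    kv = np.concatenate([local_kv, global_kv], axis=0)   # (L + G, d)
    d = q.shape[-1]
    attn = softmax(q @ kv.T / np.sqrt(d))                # (T, L + G)
    return attn @ kv
```

In the actual pipeline the global-scope tokens would be gathered across GPUs (the role Clip parallelism plays); here they are simply passed in as an array to keep the sketch self-contained.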
Long Video Generation
Capable of generating videos with 2,300 frames in 5 minutes (7.6 fps), 100 times faster than prior methods *
Comparison with Previous Methods *
Maximum Frames
Video-Infinity: 2,300
Streaming T2V: 1,200 *
OpenSora V1.1: 128
Free Noise: 120
Time Cost (120 frames)
Video-Infinity: 20s
Free Noise: 187s
OpenSora V1.1: 217s
Streaming T2V: 1,604s
Generated Videos
Video-Infinity *
Free Noise *
Open Sora V1.1
Streaming T2V
Ablation
Figure: Clip Parallelism shares Attention and Conv & GroupNorm context between devices (GPU1, GPU2); Dual-scope Attention combines a Global-scope and a Local-scope context.
Multi-Prompts
Gallery
Base Model
[1] Our method generates videos with 2,300 frames in 5 minutes, achieving a frame rate of 7.6 fps, using 30 sampling steps.
[2] ... which is approximately 100 times faster than previous methods. The comparison uses the time taken by the 'Streaming T2V' method under its settings for generating extremely long videos with 1,024 frames.
[3] The maximum frame count for 'Streaming T2V' is listed as 1,200 because that is the longest sequence mentioned in the original text, and no videos generated by it currently exceed 1,200 frames.
[4] Our comparison experiments were conducted on 8× Nvidia 6000 Ada GPUs.
[5] The methods 'Free Noise' and 'Video-Infinity' are based on the 'VideoCrafter2' model.