X2Video: Adapting Diffusion Models for Multimodal Controllable Neural Video Rendering

Contents:

Abstract

Comparisons

Multimodal Controls

Generalization to Other Scenes

Abstract

We present X2Video, the first diffusion model for generating photorealistic videos guided by intrinsic channels, including albedo, normal, roughness, metallicity, and irradiance, while supporting intuitive multimodal controls with reference images and text prompts for both global and local regions. The intrinsic guidance enables accurate manipulation of color, material, geometry, and lighting, while reference images and text prompts provide intuitive adjustments when intrinsic information is absent. To enable these functionalities, we extend the intrinsic-guided image generation model XRGB to video generation with a novel and efficient Hybrid Self-Attention, which ensures temporal consistency across video frames and enhances fidelity to reference images. We further develop a Masked Cross-Attention to handle global and local text prompts separately, applying each prompt to its respective region. For generating long videos, our novel Recursive Sampling method incorporates progressive frame sampling, combining keyframe prediction and frame interpolation to maintain long-range temporal consistency while preventing error accumulation. To support the training of X2Video, we assembled a video dataset named InteriorVideo, featuring 1,154 rooms from 295 interior scenes, complete with reliable ground-truth intrinsic channel sequences and smooth camera trajectories. Both qualitative and quantitative evaluations demonstrate that X2Video can produce long, temporally consistent, and photorealistic videos from intrinsic conditions. Additionally, X2Video accommodates multimodal controls with reference images and with global and local text prompts, and supports editing of color, material, geometry, and lighting through parametric tuning.
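The paper releases no code here, so the following is a minimal PyTorch sketch of one plausible form of the Hybrid Self-Attention: queries come from each frame, while keys and values pool tokens from all frames together with the reference image, so a single attention pass couples temporal consistency with reference fidelity. All dimensions and the single-head, shared-projection layout are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy dimensions; the real model's sizes are not published.
B, T, N, M, C = 1, 4, 64, 77, 128  # batch, frames, tokens/frame, ref tokens, channels
q_proj, k_proj, v_proj = nn.Linear(C, C), nn.Linear(C, C), nn.Linear(C, C)

frame_tokens = torch.randn(B, T, N, C)  # latent tokens of the video frames
ref_tokens = torch.randn(B, M, C)       # tokens encoded from the reference image

# Keys/values mix every frame's tokens with the reference-image tokens,
# so each frame attends both across time and to the reference.
q = q_proj(frame_tokens).reshape(B, T * N, C)
ctx = torch.cat([frame_tokens.reshape(B, T * N, C), ref_tokens], dim=1)
k, v = k_proj(ctx), v_proj(ctx)
out = F.scaled_dot_product_attention(
    q.unsqueeze(1), k.unsqueeze(1), v.unsqueeze(1)  # add a single head dim
).squeeze(1).reshape(B, T, N, C)
print(out.shape)  # torch.Size([1, 4, 64, 128])
```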
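Similarly, one plausible reading of the Masked Cross-Attention is to run cross-attention twice, once against the global prompt and once against the local prompt, and blend the results with a region mask. The sketch below simplifies by reusing the text context as both keys and values; the mask source, shapes, and blending rule are assumptions.

```python
import torch
import torch.nn.functional as F

def masked_cross_attention(q, ctx_global, ctx_local, mask):
    """Attend to the global prompt everywhere and to the local prompt
    inside the masked region, then blend the two results per token.

    q:          (B, N, C)  image-token queries (N = H*W)
    ctx_global: (B, Lg, C) projected global text embeddings
    ctx_local:  (B, Ll, C) projected local text embeddings
    mask:       (B, N, 1)  1 inside the locally edited region, 0 elsewhere
    """
    def attend(ctx):
        return F.scaled_dot_product_attention(
            q.unsqueeze(1), ctx.unsqueeze(1), ctx.unsqueeze(1)).squeeze(1)
    return mask * attend(ctx_local) + (1.0 - mask) * attend(ctx_global)

# Toy usage with made-up sizes.
B, N, C = 1, 64, 128
out = masked_cross_attention(
    torch.randn(B, N, C), torch.randn(B, 77, C), torch.randn(B, 77, C),
    (torch.rand(B, N, 1) > 0.5).float())
print(out.shape)  # torch.Size([1, 64, 128])
```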
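For Recursive Sampling, the abstract describes keyframe prediction followed by interpolation. A minimal sketch of that schedule, assuming a hypothetical `sample` call and a stride that divides the sequence length, is below: sparse keyframes are generated first, and each gap is then filled by recursive bisection, always conditioning on two already-generated bounding frames so errors cannot accumulate along the sequence.

```python
def recursive_sampling(num_frames, stride, sample):
    """Progressive long-video sampling: keyframes first, then recursive
    interpolation of each gap between two known frames.

    `sample(cond, ts)` is a hypothetical denoiser call returning frames at
    timestamps `ts`, conditioned on a dict `cond` of already-known frames.
    Assumes `stride` divides `num_frames - 1`.
    """
    frames = {}
    keys = list(range(0, num_frames, stride))
    for t, f in zip(keys, sample({}, keys)):  # 1) keyframe prediction
        frames[t] = f

    def interpolate(t0, t1):                  # 2) recursive bisection
        if t1 - t0 <= 1:
            return
        mid = (t0 + t1) // 2
        frames[mid] = sample({t0: frames[t0], t1: frames[t1]}, [mid])[0]
        interpolate(t0, mid)
        interpolate(mid, t1)

    for t0, t1 in zip(keys, keys[1:]):
        interpolate(t0, t1)
    return [frames[t] for t in sorted(frames)]

# Toy usage: a fake sampler that just returns the timestamps themselves.
print(recursive_sampling(9, 4, lambda cond, ts: list(ts)))
# [0, 1, 2, 3, 4, 5, 6, 7, 8]
```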

Comparisons

Qualitative comparison with XRGB and SVD + ControlNet

Qualitative comparison 1

Qualitative comparison 2

Qualitative comparison 3

Qualitative comparison 4

Multimodal Controls

Multimodal controls with intrinsic channels, reference images, and text prompts on both global and local regions.

Control 1

Control 2

Control 3

Control 4

Control 5

Control 6

Control 7

Control 8

Generalization to Other Scenes

Although our model is trained only on interior scenes in a single rendering style, with intrinsic channels exported from VRay, it generalizes to unseen scenes and styles. We take 3D models from other sources and extract their intrinsic channels with the open-source software Blender. Below, we show videos rendered by our framework on unseen scenes, including dynamic and outdoor scenes.
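The paper states only that Blender was used to extract the channels; as a concrete illustration, the snippet below is a hypothetical Cycles setup that exports comparable intrinsic channels via built-in render passes and custom AOVs. The pass choices, AOV names, and the mapping of diffuse-light passes to irradiance are assumptions, not the authors' pipeline.

```python
import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'
view_layer = bpy.context.view_layer

# Built-in passes that map onto intrinsic channels.
view_layer.use_pass_normal = True            # normal
view_layer.use_pass_diffuse_color = True     # albedo
view_layer.use_pass_diffuse_direct = True    # direct irradiance
view_layer.use_pass_diffuse_indirect = True  # indirect irradiance

# Roughness and metallic have no built-in pass; expose them as custom
# AOVs, written by an "AOV Output" node in each material's shader graph.
for name in ("Roughness", "Metallic"):
    aov = view_layer.aovs.add()
    aov.name = name
    aov.type = 'VALUE'

bpy.ops.render.render(write_still=True)
```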