nielsr (HF Staff) committed · verified
Commit 4a1d008 · 1 Parent(s): d499350

Add pipeline tag, library name, and links to model card


This PR improves the model card for [GenCompositor: Generative Video Compositing with Diffusion Transformer](https://huggingface.co/papers/2509.02460).

Key changes include:
- Adding the `pipeline_tag: any-to-any`, which better categorizes the model on the Hub for generative video tasks.
- Specifying `library_name: diffusers` based on evidence in the `config.json` files, enabling automated usage snippets for users (see the loading sketch below).
- Including direct links to the associated paper, project page, and GitHub repository.
- Adding the paper's abstract for comprehensive context.

These additions enhance the model's discoverability, usability, and documentation.
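With `library_name: diffusers` declared, the Hub can surface a loading snippet for this checkpoint. The minimal sketch below shows what such usage could look like; it assumes the checkpoint loads as a standard diffusers pipeline, and the repo id `TencentARC/GenCompositor` and dtype are illustrative. The custom DiT pipeline may additionally require code from the GitHub repository.

```python
# Hedged sketch only: assumes the checkpoint is loadable as a diffusers pipeline.
# The repo id below is hypothetical; the actual GenCompositor pipeline may need
# the custom code from https://github.com/TencentARC/GenCompositor.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "TencentARC/GenCompositor",   # hypothetical Hub repo id
    torch_dtype=torch.bfloat16,   # lower-precision weights to reduce GPU memory
)
pipe.to("cuda")
```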

Files changed (1)
  1. README.md +17 -3
README.md CHANGED
@@ -1,3 +1,17 @@
1
- ---
2
- license: mit
3
- ---
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ ---
2
+ license: mit
3
+ pipeline_tag: any-to-any
4
+ library_name: diffusers
5
+ ---
6
+
7
+ # GenCompositor: Generative Video Compositing with Diffusion Transformer
8
+
9
+ This repository contains the official model for the paper [GenCompositor: Generative Video Compositing with Diffusion Transformer](https://huggingface.co/papers/2509.02460).
10
+
11
+ GenCompositor automates video compositing with generative models, allowing adaptive injection of identity and motion information from foreground video into a target video. Users can interactively customize the size, motion trajectory, and other attributes of dynamic elements in the final video.
12
+
13
+ **Project Page:** [https://gencompositor.github.io/](https://gencompositor.github.io/)
14
+ **Code:** [https://github.com/TencentARC/GenCompositor](https://github.com/TencentARC/GenCompositor)
15
+
16
+ ## Abstract
17
+ Video compositing combines live-action footage to create video production, serving as a crucial technique in video creation and film production. Traditional pipelines require intensive labor efforts and expert collaboration, resulting in lengthy production cycles and high manpower costs. To address this issue, we automate this process with generative models, called generative video compositing. This new task strives to adaptively inject identity and motion information of foreground video to the target video in an interactive manner, allowing users to customize the size, motion trajectory, and other attributes of the dynamic elements added in final video. Specifically, we designed a novel Diffusion Transformer (DiT) pipeline based on its intrinsic properties. To maintain consistency of the target video before and after editing, we revised a light-weight DiT-based background preservation branch with masked token injection. As to inherit dynamic elements from other sources, a DiT fusion block is proposed using full self-attention, along with a simple yet effective foreground augmentation for training. Besides, for fusing background and foreground videos with different layouts based on user control, we developed a novel position embedding, named Extended Rotary Position Embedding (ERoPE). Finally, we curated a dataset comprising 61K sets of videos for our new task, called VideoComp. This data includes complete dynamic elements and high-quality target videos. Experiments demonstrate that our method effectively realizes generative video compositing, outperforming existing possible solutions in fidelity and consistency.