
Scaling Video Matting via a Learned Quality Evaluator

1S-Lab, Nanyang Technological University  2SenseTime Research, Singapore 

MatAnyone 2 is a practical human video matting framework that preserves fine details by avoiding segmentation-like boundaries, while also showing enhanced robustness under challenging real-world conditions.

🎥 For more visual results, check out our project page.


How to use

You can run the following commands to install the package and start working with the model:

```shell
pip install -qqU git+https://github.com/pq-yang/MatAnyone2.git
```

```python
from matanyone2 import MatAnyone2, InferenceCore

# load the pretrained model and build an inference processor on GPU
model = MatAnyone2.from_pretrained("PeiqingYang/MatAnyone2")
processor = InferenceCore(model, device="cuda:0")

# run inference: matte the video guided by its first-frame mask
processor.process_video(input_path="inputs/video/test-sample2.mp4",
                        mask_path="inputs/mask/test-sample2.png",
                        output_path="results")
```
