By Soumyadip Sengupta, Vivek Jayaram, Brian Curless, Steve Seitz, and Ira Kemelmacher-Shlizerman
This paper will be presented at IEEE CVPR 2020.
Go to the Project page for additional details and results.
This work is licensed under the Creative Commons Attribution NonCommercial ShareAlike 4.0 License.
Clone repository:
git clone https://github.com/senguptaumd/Background-Matting.git
Please use Python 3. Create an Anaconda environment and install the dependencies. Our code is tested with PyTorch 1.1.0 and TensorFlow 1.14 with CUDA 10.0.
conda create --name back-matting python=3.6
conda activate back-matting
Make sure CUDA 10.0 is your default CUDA. If CUDA 10.0 is installed in /usr/local/cuda-10.0, apply:

export LD_LIBRARY_PATH=/usr/local/cuda-10.0/lib64
export PATH=$PATH:/usr/local/cuda-10.0/bin
Install PyTorch, TensorFlow (needed for segmentation) and the remaining dependencies:
conda install pytorch=1.1.0 torchvision cudatoolkit=10.0 -c pytorch
pip install tensorflow-gpu==1.14.0
pip install -r requirements.txt
Note: The code is likely to work with other PyTorch and TensorFlow versions compatible with your system CUDA. If you already have a working environment with PyTorch and TensorFlow, install only the remaining dependencies with pip install -r requirements.txt. If our code fails due to version differences, install the specific CUDA, PyTorch and TensorFlow versions listed above.
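As a quick optional sanity check (our suggestion, not part of the repository), you can confirm that both frameworks were built against CUDA and can see the GPU:

```python
# Optional sanity check: verify PyTorch and TensorFlow versions and GPU access.
import torch
import tensorflow as tf

print(torch.__version__, torch.version.cuda, torch.cuda.is_available())  # expect 1.1.0, 10.0, True
print(tf.__version__, tf.test.is_gpu_available())                        # expect 1.14.0, True
```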
To perform Background Matting based green-screening, you need to capture:
(a) an image with the subject (with _img.png extension)
(b) an image of the background without the subject (with _back.png extension)
(c) the target background to insert the subject into (placed in data/background)
Use the sample_data/ folder for testing and prepare your own data based on that.
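For reference, the test scripts expect a layout like the following (the 0001 prefix is just the sample's naming; only the suffixes and folder names matter):

```
Background-Matting/
├── sample_data/
│   ├── input/
│   │   ├── 0001_img.png       # image with the subject
│   │   ├── 0001_back.png      # same framing, without the subject
│   │   └── 0001_masksDL.png   # segmentation mask (generated below)
│   └── background/
│       └── 0001.png           # target background to composite onto
└── Models/                    # pre-trained weights (see below)
```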
Please download the pre-trained models from Google Drive and place the Models/ folder inside Background-Matting/.
Background Matting needs a segmentation mask for the subject. We use the TensorFlow version of DeepLabv3+.
cd Background-Matting/
git clone https://github.com/tensorflow/models.git
cd models/research/
export PYTHONPATH=$PYTHONPATH:`pwd`:`pwd`/slim
cd ../..
python test_segmentation_deeplab.py -i sample_data/input
You can replace DeepLabv3+ with any segmentation network of your choice. Save the segmentation results with the extension _masksDL.png.
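If you do swap in your own network, the only contract is the output naming. As an illustrative sketch (not the repository's script), here is how one might generate the masks with torchvision's off-the-shelf DeepLabv3 instead of the TensorFlow DeepLabv3+:

```python
# Illustrative sketch: person masks from torchvision's pretrained DeepLabv3.
# Any segmentation network works, as long as the binary mask for each
# *_img.png input is saved next to it as *_masksDL.png.
import glob
import numpy as np
import torch
import torchvision
from PIL import Image

model = torchvision.models.segmentation.deeplabv3_resnet101(pretrained=True).eval()
preprocess = torchvision.transforms.Compose([
    torchvision.transforms.ToTensor(),
    torchvision.transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                     std=[0.229, 0.224, 0.225]),
])

for path in glob.glob('sample_data/input/*_img.png'):
    img = Image.open(path).convert('RGB')
    with torch.no_grad():
        out = model(preprocess(img).unsqueeze(0))['out'][0]
    # Class 15 is 'person' in the VOC label set used by this model;
    # write a binary 0/255 mask.
    mask = (out.argmax(0) == 15).numpy().astype(np.uint8) * 255
    Image.fromarray(mask).save(path.replace('_img.png', '_masksDL.png'))
```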
Run python test_pre_process.py -i sample_data/input for pre-processing. It aligns the background image _back.png and adjusts its bias and gain to match the input image _img.png.
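For intuition, the bias-gain step fits a per-channel affine map gain * back + bias ≈ img, so that small exposure or white-balance drift between the two captures does not leak into the matte. A minimal sketch of that idea (illustrative only; the repository's test_pre_process.py also performs the spatial alignment, which is omitted here):

```python
# Minimal bias-gain matching sketch: fit gain*back + bias ~= img per channel
# by least squares. In practice you would exclude subject pixels from the
# fit, e.g. using the segmentation mask; they are included here for brevity.
import numpy as np
from PIL import Image

img = np.asarray(Image.open('sample_data/input/0001_img.png').convert('RGB'), np.float64)
back = np.asarray(Image.open('sample_data/input/0001_back.png').convert('RGB'), np.float64)

adjusted = np.empty_like(back)
for c in range(3):
    x, y = back[..., c].ravel(), img[..., c].ravel()
    A = np.stack([x, np.ones_like(x)], axis=1)
    (gain, bias), *_ = np.linalg.lstsq(A, y, rcond=None)
    adjusted[..., c] = gain * back[..., c] + bias

Image.fromarray(np.clip(adjusted, 0, 255).astype(np.uint8)).save(
    'sample_data/input/0001_back_adjusted.png')  # hypothetical output name
```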
python test_background-matting_image.py -m real-hand-held -i sample_data/input/ -o sample_data/output/ -tb sample_data/background/0001.png
For images taken with a fixed camera (on a tripod), choose -m real-fixed-cam for best results. -m syn-comp-adobe uses the model trained on the synthetic-composite Adobe dataset, without real data (worse performance).
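Under the hood, the green-screening itself reduces to standard alpha compositing, composite = alpha * F + (1 - alpha) * B', with F the predicted foreground and alpha the predicted matte. The test script already writes the composites for you; this sketch (with hypothetical file names foreground.png and alpha.png) only illustrates the equation:

```python
# Alpha compositing sketch: composite = alpha * F + (1 - alpha) * B'.
# 'foreground.png' and 'alpha.png' are hypothetical names for the network's
# predicted foreground and matte; the test script saves its own outputs.
import numpy as np
from PIL import Image

fg = np.asarray(Image.open('foreground.png').convert('RGB'), np.float64)
alpha = np.asarray(Image.open('alpha.png').convert('L'), np.float64)[..., None] / 255.0
target = Image.open('sample_data/background/0001.png').convert('RGB').resize(
    (fg.shape[1], fg.shape[0]))  # resize target background to (width, height)

comp = alpha * fg + (1.0 - alpha) * np.asarray(target, np.float64)
Image.fromarray(np.clip(comp, 0, 255).astype(np.uint8)).save('composite.png')
```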
For best results, capture images following the guidelines on the project page.
We will also release the training code, which will allow users to train on labeled data as well as on unlabeled real data.
Coming soon ...
We collected 50 videos with both fixed and hand-held cameras in indoor and outdoor settings. We plan to release this data to encourage future research on improving background matting.
Coming soon ...
We are eager to hear how our algorithm works on your images/videos. If the algorithm fails on your data, please feel free to share it with us at soumya91@cs.washington.edu. This will help us improve our algorithm for future research. Also, feel free to share any cool results.
If you use this code for your research, please consider citing:
@InProceedings{BMSengupta20,
title={Background Matting: The World is Your Green Screen},
author = {Soumyadip Sengupta and Vivek Jayaram and Brian Curless and Steve Seitz and Ira Kemelmacher-Shlizerman},
booktitle={Computer Vision and Pattern Recognition (CVPR)},
year={2020}
}