Video-to-Video Synthesis

I’m interested in video synthesis and video imitation for academic research. I am trying to run pose training and testing.

I have tried running it on Google Colab.

I have some technical issues, described below:

Issue 1)
%cd /content/few-shot-vid2vid
!python train.py --name pose --dataset_mode fewshot_pose --adaptive_spade --warp_ref --spade_combine --remove_face_labels --add_face_D --niter_single 100 --niter 200 --batchSize 2


File "/content/few-shot-vid2vid/data/image_folder.py", line 65, in make_grouped_dataset
assert os.path.isdir(dir), '%s is not a valid directory' % dir
AssertionError: datasets/pose/train_openpose is not a valid directory

How should I use DensePose and/or OpenPose? I think they are deprecated. Where can I find data for datasets/pose/train_openpose, datasets/pose/train_images, and datasets/pose/train_densepose?

Issue 2) What are sample values for PATH_TO_SEQ and PATH_TO_REF_IMG in test.py?

Poses

To test the trained model (bash ./scripts/pose/test.sh):
python test.py --name pose --dataset_mode fewshot_pose --adaptive_spade --warp_ref --spade_combine --remove_face_labels --finetune --seq_path [PATH_TO_SEQ] --ref_img_path [PATH_TO_REF_IMG]
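For context, the two placeholders are just filesystem paths: --seq_path should point at a driving pose sequence and --ref_img_path at the reference image(s) of the target person. The paths below are hypothetical examples only (the real layout depends on how your test data is organized); the snippet dry-runs the command by echoing it:

```shell
# Hypothetical example values -- adjust to wherever your test data lives.
SEQ_PATH="datasets/pose/test_openpose/driving_seq"   # assumed layout
REF_IMG_PATH="datasets/pose/test_images/ref_person"  # assumed layout

# Dry run: print the command that would be executed.
echo python test.py --name pose --dataset_mode fewshot_pose --adaptive_spade \
  --warp_ref --spade_combine --remove_face_labels --finetune \
  --seq_path "$SEQ_PATH" --ref_img_path "$REF_IMG_PATH"
```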

I am looking forward to hearing from you soon.
Thank you.

Sorry for the late response; we will investigate this issue and see if we can provide suggestions.

Hi @papatya222,
For pose, would you try GitHub - NVIDIA-AI-IOT/trt_pose: Real-time pose estimation accelerated with NVIDIA TensorRT?

Also, GitHub - NVlabs/few-shot-vid2vid: Pytorch implementation for few-shot photorealistic video-to-video translation is deprecated; please use GitHub - NVlabs/imaginaire: NVIDIA's Deep Imagination Team's PyTorch Library.

Thanks!