Hello.
The data in the tutorial notebook for TAO FPEnet model inference is a static image.
What should I do if I would like to run FPEnet model inference on video data?
Which commands should I run to convert the video data and perform FPEnet model inference?
Thank you for your help in advance.
Please refer to the README: https://github.com/NVIDIA-AI-IOT/deepstream_tao_apps/blob/master/README.md
Start to run the facial landmark application
cd deepstream-faciallandmark-app
./deepstream-faciallandmark-app [1:file sink|2:fakesink|3:display sink] \
<faciallandmark model config file> <input uri> ... <input uri> <out filename>
OR
./deepstream-faciallandmark-app <app YAML config file>
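For example, assuming the sample stream that ships with DeepStream as the video input, a minimal file-sink invocation following the usage above could look like this (the config file name and output file name below are placeholders, not the exact names from the repo):

# Sketch: "1" selects the file sink; replace the config path with the
# actual faciallandmark model config file from the repo.
./deepstream-faciallandmark-app 1 \
    ./faciallandmark_model_config.txt \
    file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 \
    ./landmarks_out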
Thank you @Morganh. Will a sample video named /opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 exist after I run the following steps?
sudo apt-get install git-lfs
git lfs install --skip-repo
git clone -b master https://github.com/NVIDIA-AI-IOT/deepstream_tao_apps.git
make
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/nvidia/deepstream/deepstream/lib/cvcore_libs
This mp4 exists by default if users install DeepStream or use the DeepStream docker.
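If in doubt, you can check from inside the container (or the host install) that the sample stream is there:

ls -lh /opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4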
Excuse me @Morganh, so do I have to do the following steps if I would like to run TAO FPEnet model inference in the DeepStream container?
docker pull nvcr.io/nvidia/deepstream:6.2-devel
export DISPLAY=:0
xhost +
docker run --gpus all --name deepstream_test -it -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY nvcr.io/nvidia/deepstream:6.2-devel
docker exec -it deepstream_test bash
cd sources/apps/sample_apps/deepstream-faciallandmark-app
wget --content-disposition https://api.ngc.nvidia.com/v2/models/nvidia/tao/fpenet/versions/deployable_v3.0/files/model.etlt -O faciallandmarks.etlt
wget --content-disposition https://api.ngc.nvidia.com/v2/models/nvidia/tao/fpenet/versions/deployable_v3.0/files/int8_calibration.txt -O int8_calibration.txt
./deepstream-faciallandmark-app faciallandmark_app_config.yml
Do I have to adjust the steps above?
But there is no description of how to do model inference with the DeepStream container.
In the DeepStream docker, it is still needed to git clone the GitHub repo.
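Putting the replies together, a plausible adjusted sequence inside the container would be the following sketch; the build commands are copied from the earlier posts, and the exact paths and any extra environment variables should be checked against the repo README:

# Inside the deepstream_test container: fetch and build the TAO apps first.
apt-get update && apt-get install -y git-lfs
git lfs install --skip-repo
git clone -b master https://github.com/NVIDIA-AI-IOT/deepstream_tao_apps.git
cd deepstream_tao_apps
make   # the README may require extra environment variables before building
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/nvidia/deepstream/deepstream/lib/cvcore_libs
# Then continue with the wget and ./deepstream-faciallandmark-app steps above.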
So are the first five steps below correct?
docker pull nvcr.io/nvidia/deepstream:6.2-devel
export DISPLAY=:0
xhost +
docker run --gpus all --name deepstream_test -it -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY nvcr.io/nvidia/deepstream:6.2-devel
docker exec -it deepstream_test bash
The steps above are the normal steps for logging into docker. I think it is OK.
Could I change -v /tmp/.X11-unix:/tmp/.X11-unix to another folder?
There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.
Actually it is for the display. It is not related to FPEnet or TAO.
You can search for more details about it.
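In other words, the X11 mount only matters when you pick the display sink. Assuming that, a sketch of a display-free run would drop the X11 options entirely and use the fakesink mode from the usage string above:

# No display: skip xhost, DISPLAY, and the /tmp/.X11-unix mount entirely.
docker run --gpus all --name deepstream_test -it nvcr.io/nvidia/deepstream:6.2-devel
# Inside the container, "2" selects fakesink instead of "3" (display sink):
./deepstream-faciallandmark-app 2 <faciallandmark model config file> <input uri> <out filename>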
This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.