How do I use video input in BodyPoseNet?

• Network Type: BodyPoseNet

I proceeded with reference to TAO Toolkit 4.0, and I have now succeeded with the inference part of BodyPoseNet.

But in this process, I have a question.
(I'm currently running BodyPoseNet inference using the pretrained model.)
The sample runs the model on images, but how do I feed it a video and run pose estimation directly?

    !tao bpnet inference --inference_spec $SPECS_DIR/infer_spec.yaml \
        --model_filename $USER_EXPERIMENT_DIR/pretrained_model/bodyposenet_vtrainable_v1.0/model.tlt \
        --input_type json \
        --input $USER_EXPERIMENT_DIR/data/viz_example_data.json \
        --results_dir $USER_EXPERIMENT_DIR/results/exp_m1_unpruned/infer_default \
        --dump_visualizations \
        -k $KEY


(In the inference setup, viz_example_data.json just contains a list of image paths.)
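
For illustration, mine looks roughly like this (paths shortened; the exact contents are just whatever the sample notebook generated):

    [
        "/workspace/tao-experiments/bpnet/data/sample_images/img_0001.jpg",
        "/workspace/tao-experiments/bpnet/data/sample_images/img_0002.jpg"
    ]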

If I export the model to .etlt and generate a TensorRT engine with another tool, can I use video as the input data?
Or is there a way to handle real-time data?
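
As a stopgap I could decode the video into frames with ffmpeg and regenerate that JSON (assuming it really is just an array of paths, like my copy), but I would prefer direct video input:

    # Decode the video into individual frames (file names are just examples)
    mkdir -p frames
    ffmpeg -i input_video.mp4 -qscale:v 2 frames/frame_%06d.jpg

    # Rebuild the input JSON as a plain array of frame paths
    ls frames/*.jpg | python3 -c "import json,sys; print(json.dumps([l.strip() for l in sys.stdin]))" > frames.json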

Please refer to deepstream_tao_apps/apps/tao_others at master · NVIDIA-AI-IOT/deepstream_tao_apps · GitHub .
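
The sample apps there take URIs as input, so once the engine is built you can feed a local video file (file:///…) or an RTSP stream directly. The exact command line is in the repo README and is printed by the app when run without arguments; as a rough illustration only (the argument list varies by release):

    # Illustrative only -- check the usage printed by the app itself.
    # Sink type 1 = file sink; the config ships in the repo's configs/ tree.
    ./apps/tao_others/deepstream-bodypose2d-app/deepstream-bodypose2d-app 1 \
        ./configs/bodypose2d_tao/sample_bodypose2d_model_config.txt \
        file:///home/user/videos/input.mp4 ./body2dout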

Following the explanation in deepstream_tao_apps/apps/tao_others at master · NVIDIA-AI-IOT/deepstream_tao_apps · GitHub, I ran:

    cd deepstream_tao_apps
    chmod 755 download_models.sh
    export TAO_CONVERTER=the file path of tao-converter
    export MODEL_PRECISION=fp16
    ./download_models.sh

Where is tao-converter? Do I have to download it separately?
I only have the pretrained model (bodyposenet_vdeployable_v1.0.1-model.etlt).

Please download it following the guide in TAO Converter — TAO Toolkit 4.0 documentation.
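
Once you have it, the conversion itself looks roughly like this (the key and the input shapes below are placeholders; take the real values from the deployable model's card):

    # Illustrative sketch: -k is the model key, -t the precision,
    # -p an optimization profile (name,min,opt,max shapes), -e the output engine.
    ./tao-converter bodyposenet_vdeployable_v1.0.1-model.etlt \
        -k nvidia_tlt \
        -t fp16 \
        -p input_1:0,1x288x384x3,1x288x384x3,1x288x384x3 \
        -e bodyposenet.fp16.engine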

    export CUDA_VER=cuda version in the device
    make
    export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/nvidia/deepstream/deepstream/lib/cvcore_libs
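
For example, on a device with CUDA 11.7 installed (the value is an example; check yours first):

    nvcc --version          # prints the installed CUDA version
    export CUDA_VER=11.7    # use the version printed above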

I confirmed that tao-converter works normally after downloading it. However, when I run “make” in the “Build the application” step, the following error occurs.

The error says “fatal error: NvCaffeParser.h: No such file or directory”.

When I searched for the location of this file, I found that it only exists inside the Docker container I used before. Currently, I have installed DeepStream 6.1.1 and am running in a local environment. Do I have to use Docker to proceed with the GitHub method?

You did not run inside any docker, right?
Did you install TensorRT?

You can try running with a TensorRT docker or a DeepStream docker.
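
For example (the tag is illustrative; pick the DeepStream release matching your setup):

    # Requires the NVIDIA Container Toolkit; volume mounts omitted for brevity
    docker run --gpus all -it --rm nvcr.io/nvidia/deepstream:6.1.1-devel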

No, I didn’t run inside docker. I installed TensorRT and tried to initialize DeepStream, but the same error occurs. I am referring to these two sites:

https://docs.nvidia.com/tao/tao-toolkit/text/ds_tao/deepstream_tao_integration.html#deepstream-tao-others

There has been no update from you for a while, so we assume this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks

Please refer to Error while building deepstream_tlt_apps - #6 by octa.marian.
Again, please double check the TensorRT installation.
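
NvCaffeParser.h ships with the TensorRT development packages, so if the header is missing, those packages likely are too. A quick check on a Debian-based setup (package names may differ across installs):

    dpkg -l | grep -E "nvinfer|nvparsers"      # TensorRT runtime/dev packages
    find /usr -name NvCaffeParser.h 2>/dev/null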
