I have been working with reference to TAO Toolkit 4.0 and have now succeeded with the inference part of BodyPoseNet.
But in this process, I have a question.
(I’m currently running inference for BodyPoseNet using the pretrained model.)
The sample runs the model on input images, but how can I feed it a video and run pose estimation on it directly?
(In the inference code, viz_example_data.json just contains a list of image paths.)
If I export the model to .etlt and generate a TensorRT engine with another tool, can I use a video as the input data?
Or
Is there a way to handle real-time data?
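For reference, a minimal sketch of how a video file could be pushed through a TensorRT engine in DeepStream is shown below. The video path and the nvinfer config file are placeholders, and BodyPoseNet additionally needs the custom post-processing from the deepstream_tao_apps sample, so this only illustrates the video-input side, not a complete pose-estimation pipeline:
gst-launch-1.0 uridecodebin uri=file:///path/to/input_video.mp4 ! \
  m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! \
  nvinfer config-file-path=<bodypose2d_nvinfer_config.txt> ! \
  nvvideoconvert ! nvdsosd ! nveglglessink
For real-time input, the uridecodebin source can point at an RTSP URI instead of a local file, or be replaced with a camera source such as v4l2src.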
export CUDA_VER=<CUDA version on the device>
make
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/nvidia/deepstream/deepstream/lib/cvcore_libs
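For example, if the device has CUDA 11.7 installed (11.7 here is only an illustration; use whatever version is actually reported), the first export would look like this:
nvcc --version          # check the CUDA version installed on the device
export CUDA_VER=11.7    # substitute the version reported above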
I confirmed that tao-converter works normally after downloading it. However, when I run “make” in the “Build the application” step, the following error occurs.
The error says “fatal error: NvCaffeParser.h: No such file or directory”.
When I searched for this file, I found that it exists inside the Docker container I used before. Currently I have installed DeepStream 6.1.1 and am running it in a local environment. Do I have to use the Docker container to proceed with the GitHub method?
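If you do want to try the container route instead of the bare-metal install, one option is to build inside the DeepStream development image, which bundles TensorRT and its headers. The tag below is an assumption based on DeepStream 6.1.1 and the mount path is only an example; check NGC for the exact tag for your release:
docker run --gpus all -it --rm \
  -v $(pwd)/deepstream_tao_apps:/workspace/deepstream_tao_apps \
  nvcr.io/nvidia/deepstream:6.1.1-devel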
There has been no update from you for a while, so we are assuming this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks
Please refer to Error while building deepstream_tlt_apps - #6 by octa.marian.
Again, please double-check the TensorRT installation.
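As a rough way to verify this, you could confirm that the TensorRT development packages (which provide NvCaffeParser.h on TensorRT 8.x apt installs) are present on the local system, for example:
dpkg -l | grep -iE 'tensorrt|nvinfer'            # list installed TensorRT packages
dpkg -L libnvparsers-dev | grep NvCaffeParser.h  # the header ships with the parsers dev package
sudo apt-get install libnvinfer-dev libnvparsers-dev   # install the dev headers if they are missing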