Error loading custom TAO model into PoseEstimation3D

• Hardware: T4, A100
• Network Type: PoseEstimation

Hey there, I’m trying to test whether I can use the network from the TAO Pose Estimation notebook in the 3DPoseEstimation repo, but I keep getting an error.

root@:/opt/nvidia/deepstream/deepstream-6.2/sources/apps/sample_apps/deepstream_reference_apps/deepstream-bodypose-3d/sources# ./deepstream-pose-estimation-app --input file://$BODYPOSE3D_HOME/streams/bodypose.mp4 --output $BODYPOSE3D_HOME/streams/bodypose_3dbp.mp4 --focal 800.0 --width 1280 --height 720 --fps --save-pose $BODYPOSE3D_HOME/streams/bodypose_3dbp.json
Now playing: file:///opt/nvidia/deepstream/deepstream-6.2/sources/apps/sample_apps/deepstream_reference_apps/deepstream-bodypose-3d/streams/bodypose.mp4
0:00:00.169799881  1243 0x56024391c300 INFO                 nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger:<secondary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1923> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See `CUDA_MODULE_LOADING` in https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#env-vars
WARNING: [TRT]: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See `CUDA_MODULE_LOADING` in https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#env-vars
ERROR: [TRT]: 3: [builder.cpp::~Builder::307] Error Code 3: API Usage Error (Parameter check failed at: optimizer/api/builder.cpp::~Builder::307, condition: mObjectCounter.use_count() == 1. Destroying a builder object before destroying objects it created leads to undefined behavior.
)
deepstream-pose-estimation-app: ../nvdsinfer/nvdsinfer_model_builder.cpp:1326: NvDsInferStatus nvdsinfer::TrtModelBuilder::configExplicitOptions(nvdsinfer::ExplicitBuildParams&): Assertion `(int)params.inputProfileDims.size() <= network.getNbInputs()' failed.
Aborted (core dumped)

For configuration, I simply followed the instructions in the TAO Jupyter notebook and used the provided guide to install 3D pose estimation inside the DeepStream Docker container. The only change I made was to update the configuration file with the new path to the models produced by TAO.
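Roughly, the lines I changed in the nvinfer config look like this; the paths and the model key below are placeholders, not my exact values:

[property]
tlt-model-key=<my_tao_key>
tlt-encoded-model=/workspace/tao-experiments/bpnet/export/bpnet_model.etlt
# engine file left for DeepStream to generate on first run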

The TAO model was originally trained on an A100, while the inference is currently being conducted on a T4.

Any help is welcome.

For TAO body pose, please follow the user guide at the bottom of deepstream_tao_apps/apps/tao_others/deepstream-bodypose2d-app at master · NVIDIA-AI-IOT/deepstream_tao_apps · GitHub.

Run it with the DeepStream TAO integration for BodyPoseNet.
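A rough sketch of the steps from that repo's README; the CUDA version and the run arguments below are illustrative, so check the README of your branch for the exact usage:

git clone https://github.com/NVIDIA-AI-IOT/deepstream_tao_apps.git
cd deepstream_tao_apps
export CUDA_VER=11.8    # match the CUDA version inside your DeepStream container
make                    # builds the sample apps, including deepstream-bodypose2d-app
# Usage pattern per the app README (sink type, pgie config, input URI(s), output name):
./apps/tao_others/deepstream-bodypose2d-app/deepstream-bodypose2d-app 1 \
    configs/bodypose2d_tao/bodypose2d_pgie_config.txt \
    file:///path/to/input.mp4 ./bodypose2d_out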

Hey there!

I tried a few times using the documentation as a guide, and now I’m getting a different error, but I’m not sure why. It’s strange because inference works fine inside the TAO container using the TAO commands.

Any help would be appreciated!

E3D_HOME/streams/bodypose_3dbp.mp4 --focal 800.0 --width 1280 --height 720 --fps --save-pose $BODYPOSE3D_HOME/streams/bodypose_3dbp.json
Now playing: file:///opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream_reference_apps/deepstream-bodypose-3d/streams/bodypose.mp4
0:00:00.167128994   294 0x55c9ab188700 INFO                 nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger:<secondary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1923> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See `CUDA_MODULE_LOADING` in https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#env-vars
WARNING: [TRT]: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See `CUDA_MODULE_LOADING` in https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#env-vars
ERROR: [TRT]: 3: [builder.cpp::~Builder::307] Error Code 3: API Usage Error (Parameter check failed at: optimizer/api/builder.cpp::~Builder::307, condition: mObjectCounter.use_count() == 1. Destroying a builder object before destroying objects it created leads to undefined behavior.
)
parseModel: Failed to parse ONNX model
ERROR: tlt/tlt_decode.cpp:389 Failed to build network, error in model parsing.
ERROR: [TRT]: 3: [builder.cpp::~Builder::307] Error Code 3: API Usage Error (Parameter check failed at: optimizer/api/builder.cpp::~Builder::307, condition: mObjectCounter.use_count() == 1. Destroying a builder object before destroying objects it created leads to undefined behavior.
)
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:728 Failed to create network using custom network creation function
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:794 Failed to get cuda engine from custom library API
0:00:02.492321007   294 0x55c9ab188700 ERROR                nvinfer gstnvinfer.cpp:674:gst_nvinfer_logger:<secondary-nvinference-engine> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1943> [UID = 1]: build engine file failed
Segmentation fault (core dumped)

Please try to run the official application deepstream_tao_apps/apps/tao_others/deepstream-bodypose2d-app at master · NVIDIA-AI-IOT/deepstream_tao_apps · GitHub to check whether it works.

Sadly, it didn’t work. But after reading this post I checked the versions in the TAO container and the DeepStream devel container, and this is the result:

DeepStream-6.2-devel:
TensorRT: 8.5.2-1+cuda11.8
CUDNN: 8.7.0
CUDA Version: 11.8

TAO 4.0 BPNET:
TensorRT: 8.2.5-1+cuda11.4
CUDNN: 8.3.2
CUDA Version: 11.6
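For reference, I read these with the standard commands inside each container (any equivalent check works):

dpkg -l | grep -i tensorrt    # TensorRT package version
dpkg -l | grep -i cudnn       # cuDNN package version
nvcc --version                # CUDA toolkit version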

So I wonder if this small version difference could be the problem. To test it, I would need to downgrade the TensorRT version in the DeepStream container to match the one in the TAO Docker image. Is there any way to do this without breaking everything?

EDIT: After looking at the tao bpnet export command I noticed there’s a flag called --engine_file ENGINE_FILE. This flag lets you specify the path of the exported TRT engine. I was wondering whether it’s possible to use this flag to adjust the TensorRT version of the model. Unfortunately, I haven’t been able to find an example that uses this flag. Could you please share an example use of engine_file for testing purposes?

No. This flag tells the exporter which engine file to generate. A TensorRT engine is specific to the GPU and TensorRT version it was built with, so an engine exported inside the TAO container on an A100 would not be usable by DeepStream on a T4.
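For completeness, the flag only names where that locally built engine is written, along the lines of the usual TAO export pattern (all arguments here besides --engine_file are illustrative):

tao bpnet export -m /workspace/bpnet_model.tlt \
                 -k $KEY \
                 -e /workspace/export_spec.yaml \
                 -o /workspace/bpnet_model.etlt \
                 --engine_file /workspace/bpnet_model.engine   # built with this machine's GPU and TensorRT version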

You can log in to the DeepStream-6.2-devel Docker container, git clone GitHub - NVIDIA-AI-IOT/deepstream_tao_apps: Sample apps to demonstrate how to deploy models trained with TAO on DeepStream, and run everything inside the container. Just put the .etlt model in place. Comment out the line https://github.com/NVIDIA-AI-IOT/deepstream_tao_apps/blob/master/configs/bodypose2d_tao/bodypose2d_pgie_config.txt#L51. This will let DeepStream generate the .engine file by itself.
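Concretely, assuming that line of the config is the cached-engine entry, the change is just commenting it out (the paths and key shown here are illustrative):

[property]
tlt-encoded-model=../../models/bodypose2d/model.etlt
tlt-model-key=nvidia_tlt
#model-engine-file=../../models/bodypose2d/model.etlt_b32_gpu0_fp16.engine
# with model-engine-file commented out, nvinfer builds the engine from the .etlt at startup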

Hi, after a few days of trying I could not make it work. I cannot use the bpnet model in the 3D pose estimation app. After reading this old post I want to ask whether the model used by 3DPoseEstimation is the same as the TAO BPNET. If it is not, I wonder whether a trainable .tlt version can be found somewhere now.

The model used by 3DPoseEstimation is not the same as the TAO BPNET.

TAO bpnet is a 2D body pose network. The way to run inference with it in DeepStream is shown in deepstream_tao_apps/apps/tao_others/deepstream-bodypose2d-app at master · NVIDIA-AI-IOT/deepstream_tao_apps · GitHub.

In BodyPose3DNet | NVIDIA NGC, only deployable files are available; there are no trainable files. The TAO user guide currently does not cover bodypose3dnet, so users can only run inference with the .etlt models. Retraining is not supported, since retraining usually requires a .tlt model.
The application can run on dGPU devices or Jetson devices.
