Model: NGC body pose deployable

• Hardware: Jetson AGX Orin
• DeepStream 6.2

I downloaded the body pose estimation model from NGC. I am getting the following error while building the engine. How do I build the engine for body pose estimation from the deployable model on NGC?

python_deepstream.py:39: PyGIDeprecationWarning: Since version 3.11, calling threads_init is no longer needed. See: Projects/PyGObject/Threading - GNOME Wiki!
GObject.threads_init()
Creating source bin
source-bin-00
Creating source bin
source-bin-01
python_deepstream.py:162: PyGIDeprecationWarning: GObject.MainLoop is deprecated; use GLib.MainLoop instead
loop = GObject.MainLoop()

Using winsys: x11
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream-6.2/lib/libnvds_nvmultiobjecttracker.so
gstnvtracker: Batch processing is ON
gstnvtracker: Past frame output is ON
[NvMultiObjectTracker] Initialized
WARNING: Deserialize engine failed because file path: /home/tim/tim_deepstream/models/posenet/model.etlt_b1_gpu0_int8.engine open error
0:00:04.434365244 42325 0x355a0470 WARN nvinfer gstnvinfer.cpp:677:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1897> [UID = 1]: deserialize engine from file :/home/tim/tim_deepstream/models/posenet/model.etlt_b1_gpu0_int8.engine failed
0:00:04.596554034 42325 0x355a0470 WARN nvinfer gstnvinfer.cpp:677:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2002> [UID = 1]: deserialize backend context from engine from file :/home/tim/tim_deepstream/models/posenet/model.etlt_b1_gpu0_int8.engine failed, try rebuild
0:00:04.596608307 42325 0x355a0470 INFO nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1923> [UID = 1]: Trying to create engine from model files
NvDsInferCudaEngineGetFromTltModel: Failed to open TLT encoded model file /home/tim/tim_deepstream/models/pose
ERROR: Failed to create network using custom network creation function
ERROR: Failed to get cuda engine from custom library API
0:00:07.836902023 42325 0x355a0470 ERROR nvinfer gstnvinfer.cpp:674:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1943> [UID = 1]: build engine file failed
ERROR: [TRT]: 2: [logging.cpp::decRefCount::61] Error Code 2: Internal Error (Assertion mRefCount > 0 failed. )
corrupted size vs. prev_size while consolidating
Aborted (core dumped)

Please double-check that the .etlt model is available and that its path is set correctly in the config file.
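The "Failed to open TLT encoded model file" error above is almost always a bad path in the nvinfer config. A minimal sketch that automates the check suggested here (the key names follow the config format used in this thread; adjust for your setup — note relative paths in nvinfer configs resolve against the config file's directory):

```python
import os

# Path-valued nvinfer keys to verify. model-engine-file is deliberately
# excluded: it may legitimately be absent on first run, since nvinfer
# rebuilds the engine when deserialization fails.
PATH_KEYS = {"tlt-encoded-model", "labelfile-path", "int8-calib-file"}

def check_config_paths(config_file):
    """Return a list of (key, resolved_path) pairs whose file is missing."""
    missing = []
    base = os.path.dirname(os.path.abspath(config_file))
    with open(config_file) as f:
        for line in f:
            line = line.strip()
            if "=" not in line or line.startswith("#"):
                continue  # skip section headers and comments
            key, _, value = line.partition("=")
            if key.strip() in PATH_KEYS:
                # Relative paths resolve against the config file's directory
                path = os.path.normpath(os.path.join(base, value.strip()))
                if not os.path.isfile(path):
                    missing.append((key.strip(), path))
    return missing
```

Running this against the pgie config before launching the pipeline prints exactly which file nvinfer will fail to open.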

Yes, I corrected that and still get this error message.

python_deepstream.py:39: PyGIDeprecationWarning: Since version 3.11, calling threads_init is no longer needed. See: Projects/PyGObject/Threading - GNOME Wiki!
GObject.threads_init()
Creating source bin
source-bin-00
Creating source bin
source-bin-01
python_deepstream.py:162: PyGIDeprecationWarning: GObject.MainLoop is deprecated; use GLib.MainLoop instead
loop = GObject.MainLoop()

Using winsys: x11
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream-6.2/lib/libnvds_nvmultiobjecttracker.so
gstnvtracker: Batch processing is ON
gstnvtracker: Past frame output is ON
[NvMultiObjectTracker] Initialized
WARNING: Deserialize engine failed because file path: /home/tim/tim_deepstream/models/posenet/model.etlt_b1_gpu0_int8.engine open error
0:00:04.053186416 84286 0x3ef23c70 WARN nvinfer gstnvinfer.cpp:677:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1897> [UID = 1]: deserialize engine from file :/home/tim/tim_deepstream/models/posenet/model.etlt_b1_gpu0_int8.engine failed
0:00:04.222679239 84286 0x3ef23c70 WARN nvinfer gstnvinfer.cpp:677:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2002> [UID = 1]: deserialize backend context from engine from file :/home/tim/tim_deepstream/models/posenet/model.etlt_b1_gpu0_int8.engine failed, try rebuild
0:00:04.222732295 84286 0x3ef23c70 INFO nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1923> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: onnx2trt_utils.cpp:367: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
ERROR: [TRT]: 4: [network.cpp::validate::3004] Error Code 4: Internal Error (input_1:0: for dimension number 3 in profile 0 does not match network definition (got min=384, opt=384, max=384), expected min=opt=max=3).)
ERROR: Build engine failed from config file
Segmentation fault (core dumped)
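The Error Code 4 above says dimension number 3 of the input (the channel axis of an NHWC network) received 384 where 3 was expected, i.e. the dims/order settings disagree with the network layout. A hedged fragment, consistent with the values in the config shared later in this thread (verify against your model):

```
[property]
# input_1:0 is NHWC, so tell nvinfer the input order explicitly
network-input-order=1
infer-dims=3;288;384
```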

Please refer to https://github.com/NVIDIA-AI-IOT/deepstream_tao_apps/blob/master/configs/nvinfer/bodypose2d_tao/bodypose2d_pgie_config.txt . For running bodypose with the deepstream_tao_apps repo, please refer to https://github.com/NVIDIA-AI-IOT/deepstream_tao_apps/tree/master/apps/tao_others/deepstream-bodypose2d-app

I redid the dimensions and the config file. Now I am getting this warning and no video output.

WARNING: [TRT]: - Values less than smallest positive FP16 Subnormal value detected. Converting to FP16 minimum subnormalized value.
WARNING: [TRT]: If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.

Could you please share the full command and full log when you try to run https://github.com/NVIDIA-AI-IOT/deepstream_tao_apps/tree/master/apps/tao_others/deepstream-bodypose2d-app ?

Actually, I have it working now: the engine is created and the video comes up. I just need to look into why there is no overlay on the video.

Could you share a screenshot of it? May I know whether you are running with the default settings and the official model? Also, please run with the official repo https://github.com/NVIDIA-AI-IOT/deepstream_tao_apps/tree/master/apps/tao_others/deepstream-bodypose2d-app

Yes, I am running with the default settings; see below. For the config file, is there a “parse-bbox-instance” entry I should be using? Looking over the segmentation and YOLO config files, I see there is a parse-bbox-instance entry.

[property]
# model-specific params. The paths will be different if the user sets up in a different directory.
gpu-id=0
labelfile-path=../models/posenet/labels.txt
tlt-encoded-model=../models/posenet/model.etlt
int8-calib-file=../models/posenet/int8_calibration_288_384.txt
model-engine-file=../models/posenet/model.etlt_b16_gpu0_fp16.engine
tlt-model-key=nvidia_tlt

network-input-order=1
infer-dims=3;288;384
# dynamic batch size
batch-size=16

# 0=FP32, 1=INT8, 2=FP16 mode
network-mode=2

num-detected-classes=1
gie-unique-id=1
output-blob-names=conv2d_transpose_1/BiasAdd:0;heatmap_out/BiasAdd:0

# 0=Detection 1=Classifier 2=Segmentation 100=other
network-type=100

# Enable tensor metadata output
output-tensor-meta=1

# 1=Primary 2=Secondary
process-mode=1
net-scale-factor=0.00390625
offsets=128.0;128.0;128.0

# 0=RGB 1=BGR 2=GRAY
model-color-format=0
maintain-aspect-ratio=1
symmetric-padding=1
scaling-filter=1
scaling-compute-hw=1

[class-attrs-all]
threshold=0.0001
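Worth noting about the config above: with network-type=100 and output-tensor-meta=1, nvinfer only attaches the raw output tensors as user metadata — it does not run any built-in bbox parser, so no overlay appears unless the app itself reads the NvDsInferTensorMeta in a pad probe, decodes the heatmaps, and draws via nvdsosd display meta. A hedged sketch of just the decoding step (the pyds plumbing is omitted; the function name is illustrative, not a pyds API):

```python
# Hypothetical decoder sketch: turns one pose heatmap tensor into keypoints.
# Assumes the tensor (e.g. heatmap_out/BiasAdd:0) has already been copied
# out of NvDsInferTensorMeta into a numpy array of shape (H, W, K) --
# one channel per keypoint, channel-last as in an NHWC network.
import numpy as np

def heatmap_to_keypoints(heatmap, threshold=0.1):
    """Return per-channel (x, y, confidence) in network coordinates,
    or None for channels whose peak falls below threshold."""
    h, w, k = heatmap.shape
    keypoints = []
    for c in range(k):
        channel = heatmap[:, :, c]
        idx = np.argmax(channel)       # flat index of the strongest response
        y, x = divmod(int(idx), w)     # recover 2-D peak location
        conf = float(channel[y, x])
        keypoints.append((int(x), int(y), conf) if conf >= threshold else None)
    return keypoints
```

The resulting coordinates are in heatmap/network space and still need scaling back to the frame before drawing.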

Please refer to "How to run bpnet in tao toolkit? - #47 by Morganh" and double-check the commands and models. In that topic, it runs on Orin successfully.

I want to build it using the Python bindings. I have the DeepStream pipeline made and the engine is created. I get no errors and the live stream comes up; I just don't get the overlay at all.
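One common cause of a missing or misplaced overlay with this config: the pgie uses maintain-aspect-ratio=1 and symmetric-padding=1, so keypoints decoded in network coordinates (384x288 input) must be mapped back through the letterbox scaling before drawing on the frame, or the points land in the wrong place. A minimal sketch of that inverse mapping (pure arithmetic; the helper name is hypothetical):

```python
# Hypothetical helper: invert nvinfer's aspect-preserving, symmetrically
# padded letterbox scaling (maintain-aspect-ratio=1, symmetric-padding=1).
# net_w/net_h come from infer-dims (384x288 for this model).
def net_to_frame(x, y, net_w, net_h, frame_w, frame_h):
    # nvinfer scales the frame by the smaller ratio, then pads both sides
    scale = min(net_w / frame_w, net_h / frame_h)
    pad_x = (net_w - frame_w * scale) / 2.0
    pad_y = (net_h - frame_h * scale) / 2.0
    # undo padding, then undo scaling
    return (x - pad_x) / scale, (y - pad_y) / scale
```

For a 1920x1080 source, the network-space center (192, 144) should map back to the frame center (960, 540); if your drawn points drift vertically, the padding term is the usual suspect.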

Currently, the official way to run this model is https://github.com/NVIDIA-AI-IOT/deepstream_tao_apps/tree/master/apps/tao_others/deepstream-bodypose2d-app . The TAO team did not verify it with the Python bindings, so I am not sure whether that works. To narrow this down, may I know whether you can run https://github.com/NVIDIA-AI-IOT/deepstream_tao_apps/tree/master/apps/tao_others/deepstream-bodypose2d-app successfully? Thanks for your time.

I have run it that way; it's not the way I want to use it.

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

So, there is no issue when you run with https://github.com/NVIDIA-AI-IOT/deepstream_tao_apps/tree/master/apps/tao_others/deepstream-bodypose2d-app, right?

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.