• Hardware: Jetson AGX Orin
• DeepStream 6.2
I downloaded the body pose estimation model from NGC. I am getting the following error while building the engine. How do I build the engine for the body pose estimation model deployed on NGC?
python_deepstream.py:39: PyGIDeprecationWarning: Since version 3.11, calling threads_init is no longer needed. See: Projects/PyGObject/Threading - GNOME Wiki!
GObject.threads_init()
Creating source bin
source-bin-00
Creating source bin
source-bin-01
python_deepstream.py:162: PyGIDeprecationWarning: GObject.MainLoop is deprecated; use GLib.MainLoop instead
loop = GObject.MainLoop()
Using winsys: x11
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream-6.2/lib/libnvds_nvmultiobjecttracker.so
gstnvtracker: Batch processing is ON
gstnvtracker: Past frame output is ON
[NvMultiObjectTracker] Initialized
WARNING: Deserialize engine failed because file path: /home/tim/tim_deepstream/models/posenet/model.etlt_b1_gpu0_int8.engine open error
0:00:04.434365244 42325 0x355a0470 WARN nvinfer gstnvinfer.cpp:677:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1897> [UID = 1]: deserialize engine from file :/home/tim/tim_deepstream/models/posenet/model.etlt_b1_gpu0_int8.engine failed
0:00:04.596554034 42325 0x355a0470 WARN nvinfer gstnvinfer.cpp:677:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2002> [UID = 1]: deserialize backend context from engine from file :/home/tim/tim_deepstream/models/posenet/model.etlt_b1_gpu0_int8.engine failed, try rebuild
0:00:04.596608307 42325 0x355a0470 INFO nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1923> [UID = 1]: Trying to create engine from model files
NvDsInferCudaEngineGetFromTltModel: Failed to open TLT encoded model file /home/tim/tim_deepstream/models/pose
ERROR: Failed to create network using custom network creation function
ERROR: Failed to get cuda engine from custom library API
0:00:07.836902023 42325 0x355a0470 ERROR nvinfer gstnvinfer.cpp:674:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1943> [UID = 1]: build engine file failed
ERROR: [TRT]: 2: [logging.cpp::decRefCount::61] Error Code 2: Internal Error (Assertion mRefCount > 0 failed. )
corrupted size vs. prev_size while consolidating
Aborted (core dumped)
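The first warning ("Deserialize engine failed ... open error") is expected on a first run, since the engine file does not exist yet. The fatal line is "Failed to open TLT encoded model file", which means the `tlt-encoded-model` path in the nvinfer config does not point at the downloaded `.etlt` file. A minimal sketch of the path-related keys in the `[property]` group (the key value and exact paths here are assumptions; check the NGC model card and your own directory layout):

```ini
[property]
# Full path to the encoded TAO/TLT model downloaded from NGC
tlt-encoded-model=/home/tim/tim_deepstream/models/posenet/model.etlt
# Encryption key the model was exported with (model-specific;
# the value below is an assumption -- see the NGC model card)
tlt-model-key=nvidia_tlt
# Engine file nvinfer writes after a successful build, and tries
# to deserialize on later runs (matches the path in the log above)
model-engine-file=/home/tim/tim_deepstream/models/posenet/model.etlt_b1_gpu0_int8.engine
```

If any of these paths is wrong or truncated, nvinfer falls through to the custom TLT model loader and fails exactly as shown in the log.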
Yes, I corrected that and still get this error message.
python_deepstream.py:39: PyGIDeprecationWarning: Since version 3.11, calling threads_init is no longer needed. See: Projects/PyGObject/Threading - GNOME Wiki!
GObject.threads_init()
Creating source bin
source-bin-00
Creating source bin
source-bin-01
python_deepstream.py:162: PyGIDeprecationWarning: GObject.MainLoop is deprecated; use GLib.MainLoop instead
loop = GObject.MainLoop()
Using winsys: x11
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream-6.2/lib/libnvds_nvmultiobjecttracker.so
gstnvtracker: Batch processing is ON
gstnvtracker: Past frame output is ON
[NvMultiObjectTracker] Initialized
WARNING: Deserialize engine failed because file path: /home/tim/tim_deepstream/models/posenet/model.etlt_b1_gpu0_int8.engine open error
0:00:04.053186416 84286 0x3ef23c70 WARN nvinfer gstnvinfer.cpp:677:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1897> [UID = 1]: deserialize engine from file :/home/tim/tim_deepstream/models/posenet/model.etlt_b1_gpu0_int8.engine failed
0:00:04.222679239 84286 0x3ef23c70 WARN nvinfer gstnvinfer.cpp:677:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2002> [UID = 1]: deserialize backend context from engine from file :/home/tim/tim_deepstream/models/posenet/model.etlt_b1_gpu0_int8.engine failed, try rebuild
0:00:04.222732295 84286 0x3ef23c70 INFO nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1923> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: onnx2trt_utils.cpp:367: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
ERROR: [TRT]: 4: [network.cpp::validate::3004] Error Code 4: Internal Error (input_1:0: for dimension number 3 in profile 0 does not match network definition (got min=384, opt=384, max=384), expected min=opt=max=3).)
ERROR: Build engine failed from config file
Segmentation fault (core dumped)
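This TensorRT error says the network's last input dimension (dimension number 3) must be 3, i.e. the model takes channels-last (NHWC) input, while the config supplied 384 there, which suggests the dims were given in channel-first (CHW) order. A small illustrative helper (hypothetical, not part of DeepStream) that checks where the channel count sits in an `infer-dims` string:

```python
# Hypothetical helper illustrating the mismatch behind the TensorRT error
# "dimension number 3 ... got 384 ... expected 3": the network's last input
# dimension is the channel count (NHWC), but the config supplied dims in
# channel-first (CHW) order.
def dims_match_order(infer_dims: str, channels_last: bool) -> bool:
    """Check that a 3-value nvinfer 'infer-dims' string puts the channel
    count (1 or 3) where the network expects it."""
    dims = [int(d) for d in infer_dims.split(";")]
    if len(dims) != 3:
        return False
    channel = dims[2] if channels_last else dims[0]
    return channel in (1, 3)

# A channels-last network (as the error message implies) wants H;W;C:
print(dims_match_order("288;384;3", channels_last=True))   # True
# Supplying CHW dims to the same network reproduces the mismatch:
print(dims_match_order("3;288;384", channels_last=True))   # False
```

Alongside the dims themselves, nvinfer also has a `network-input-order` key (0 = NCHW, 1 = NHWC) that should agree with the model's actual input layout; the exact height/width values above are taken from the log and may differ for your export.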
I fixed the dimensions and the config file now. I am getting this warning and no video output.
WARNING: [TRT]: - Values less than smallest positive FP16 Subnormal value detected. Converting to FP16 minimum subnormalized value.
WARNING: [TRT]: If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
Yes, I am running with the default settings; see below. For the config file, is there a "parse-bbox-instance" setting I should be using? Looking over the segmentation and YOLO config files, I see there is a parse-bbox-instance entry.
[property]
# model-specific params; the paths will differ if you set up in a different directory
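The bbox-parsing hooks only apply to detector-style outputs that nvinfer can turn into bounding boxes itself. A pose network typically emits raw heatmap/part-affinity tensors that the application must post-process, so a common nvinfer setup (a sketch; verify against the sample config shipped with the model) looks like:

```ini
[property]
# Treat the model as "other" (not detector/classifier/segmentation),
# so nvinfer does not try to run a bbox parser on its outputs
network-type=100
# Attach the raw output tensors to the frame metadata so the Python
# app can decode them into keypoints and draw the overlay itself
output-tensor-meta=1
```

With this setup, no overlay appears unless the application reads the tensor meta in a pad probe and adds display meta itself, which matches the "engine builds, stream plays, no overlay" symptom described below.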
I wanted to build it using the Python bindings. I have the DeepStream pipeline made and the engine is being created. I get no error, and the live stream comes up; I just don't get the overlay at all.
There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.