• Hardware Platform (Jetson / GPU) RTX 3500
• DeepStream Version 7.1
• TensorRT Version 10.5.0.18
• NVIDIA GPU Driver Version (valid for GPU only) 553.05
• Issue Type (questions, new requirements, bugs) Question
Hello,
I’d like to ask for your help with integrating a custom model into a DeepStream pipeline.
I have a pipeline that successfully runs inference on a YOLOv7 pose estimation model using the custom output parser functions provided here. I would now like to replace the YOLOv7 model with the RTMO pose estimation model. I converted the corresponding .pth file to a .onnx file following the model creators’ instructions, then duplicated the working YOLOv7 nvdsinfer configuration and modified the onnx-file property accordingly.
The .engine file of the RTMO model is successfully created when the pipeline is launched.
However, the pipeline then stops with this error:
python: nvdsinfer_backend.cpp:274: virtual NvDsInferStatus nvdsinfer::FullDimTrtBackendContext::initialize(): Assertion `!hasWildcard(fullInferDims)' failed.
As far as I understand, this indicates that at least one of the model’s layers has a dynamic size in at least one dimension. I tried to insert a logging statement before the assertion in FullDimTrtBackendContext::initialize() to identify the layers with dynamic dimensions; however, after running make and make install, the pipeline launch fails with "Failed to allocate cuda output buffer during context initialization" before the logging statements appear in the console. I therefore abandoned this debugging approach.
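For reference, the declared input/output shapes of the exported model can also be inspected directly with the onnx Python package (a minimal sketch, assuming the exported file is rtmo.onnx); dynamic axes show up as symbolic names instead of fixed values:

import onnx

model = onnx.load("rtmo.onnx")  # assumption: the exported RTMO model
for tensor in list(model.graph.input) + list(model.graph.output):
    dims = tensor.type.tensor_type.shape.dim
    # An axis carrying a symbolic dim_param (or neither dim_param nor a
    # positive dim_value) is dynamic; fixed axes carry a positive dim_value.
    shape = [d.dim_param if d.dim_param else d.dim_value for d in dims]
    print(tensor.name, shape)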
I am aware that the custom output parser function has to be adjusted, since the RTMO model’s output shape differs from the YOLOv7 model’s. However, the backtrace below shows that the error already occurs during deserialization of the .engine file, so I assume it is not caused by the not-yet-adjusted custom output parser function. Is that assumption valid?
I’m afraid I can’t share the pipeline, as it is complex and not suited as a minimal reproducer. However, I don’t think the error is linked to the pipeline itself, since the same pipeline worked well with the YOLOv7 pose estimation model. You can find the .engine file attached below if you’d like to reproduce the error with another pipeline.
What could be the cause of the error stated above and how could it be solved?
Thank you for your time!
Appendix
- output of an isolated call to trtexec to build the engine file
trtexec --onnx=rtmo.onnx --saveEngine=rtmo.engine --verbose
trtexec_output.txt (5.7 MB)
- backtrace:
#0 __pthread_kill_implementation (no_tid=0, signo=6, threadid=139862181790336) at ./nptl/pthread_kill.c:44
#1 __pthread_kill_internal (signo=6, threadid=139862181790336) at ./nptl/pthread_kill.c:78
#2 __GI___pthread_kill (threadid=139862181790336, signo=signo@entry=6) at ./nptl/pthread_kill.c:89
#3 0x00007f3433adf476 in __GI_raise (sig=sig@entry=6) at ../sysdeps/posix/raise.c:26
#4 0x00007f3433ac57f3 in __GI_abort () at ./stdlib/abort.c:79
#5 0x00007f3433ac571b in __assert_fail_base
(fmt=0x7f3433c7a130 "%s%s%s:%u: %s%sAssertion `%s' failed.\n%n", assertion=0x7f315f0bcbee "!hasWildcard(fullInferDims)", file=0x7f315f0bcbac "nvdsinfer_backend.cpp", line=274, function=<optimized out>)
at ./assert/assert.c:92
#6 0x00007f3433ad6e96 in __GI___assert_fail
(assertion=0x7f315f0bcbee "!hasWildcard(fullInferDims)", file=0x7f315f0bcbac "nvdsinfer_backend.cpp", line=274, function=0x7f315f0bcf28 "virtual NvDsInferStatus nvdsinfer::FullDimTrtBackendContext::initialize()") at ./assert/assert.c:101
#7 0x00007f315f0b17d1 in nvdsinfer::FullDimTrtBackendContext::initialize() () at ///opt/nvidia/deepstream/deepstream-7.1/lib/libnvds_infer.so
#8 0x00007f315f0b2a17 in nvdsinfer::createBackendContext(std::shared_ptr<nvdsinfer::TrtEngine> const&) () at ///opt/nvidia/deepstream/deepstream-7.1/lib/libnvds_infer.so
#9 0x00007f315f082b1d in nvdsinfer::NvDsInferContextImpl::deserializeEngineAndBackend(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, int, std::shared_ptr<nvdsinfer::TrtEngine>&, std::unique_ptr<nvdsinfer::BackendContext, std::default_delete<nvdsinfer::BackendContext> >&) () at ///opt/nvidia/deepstream/deepstream-7.1/lib/libnvds_infer.so
#10 0x00007f315f083873 in nvdsinfer::NvDsInferContextImpl::generateBackendContext(_NvDsInferContextInitParams&) () at ///opt/nvidia/deepstream/deepstream-7.1/lib/libnvds_infer.so
#11 0x00007f315f08a774 in nvdsinfer::NvDsInferContextImpl::initialize(_NvDsInferContextInitParams&, void*, void (*)(INvDsInferContext*, unsigned int, NvDsInferLogLevel, char const*, void*)) ()
at ///opt/nvidia/deepstream/deepstream-7.1/lib/libnvds_infer.so
#12 0x00007f315f08b0ee in createNvDsInferContext(INvDsInferContext**, _NvDsInferContextInitParams&, void*, void (*)(INvDsInferContext*, unsigned int, NvDsInferLogLevel, char const*, void*)) ()
at ///opt/nvidia/deepstream/deepstream-7.1/lib/libnvds_infer.so
#13 0x00007f315f875f48 in () at /usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_infer.so
#14 0x00007f333bce3441 in () at /lib/x86_64-linux-gnu/libgstbase-1.0.so.0
#15 0x00007f333bce3675 in () at /lib/x86_64-linux-gnu/libgstbase-1.0.so.0
- nvdsinfer configuration file:
[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=0
onnx-file=rtmo.onnx
model-engine-file=rtmo.engine
#int8-calib-file=calib.table
labelfile-path=labels.txt
batch-size=1
network-mode=2
num-detected-classes=1
interval=0
gie-unique-id=1
process-mode=1
infer-dims=3;640;640
network-type=2
cluster-mode=4
maintain-aspect-ratio=1
symmetric-padding=1
parse-bbox-instance-mask-func-name=NvDsInferParseYoloPose
custom-lib-path=nvdsinfer_custom_impl_yolo_pose/libnvdsinfer_custom_impl_yolo_pose.so
output-instance-mask=1
[class-attrs-all]
pre-cluster-threshold=0.4
topk=300
- .engine file
engine.zip (81.9 MB)
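In case it helps with checking the attached engine outside of a DeepStream pipeline, its I/O tensor shapes can be printed with the TensorRT Python bindings (a minimal sketch for TensorRT 10.x; -1 marks a dynamic dimension):

import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
runtime = trt.Runtime(logger)
with open("rtmo.engine", "rb") as f:  # assumption: the attached engine file
    engine = runtime.deserialize_cuda_engine(f.read())

for i in range(engine.num_io_tensors):
    name = engine.get_tensor_name(i)
    # get_tensor_mode() reports whether the tensor is an input or an output;
    # get_tensor_shape() returns -1 for any dynamic dimension.
    print(engine.get_tensor_mode(name), name, engine.get_tensor_shape(name))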