DeepStream input image in NCHW or NHWC

Hello experts,
I saw the "uff-input-dims" property for setting NCHW or NHWC mode in the DeepStream guide.
But how can I change the input layout for ONNX and Caffe models?
And what is the default input layout for each model type?

Hi @BIgPeng_XX,
Caffe and ONNX models both use NCHW by default.
For ONNX you can refer to the link below -

For more details related to DeepStream, we request you to raise it on the respective forum.
Thanks!
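To make the layout difference concrete (a minimal NumPy sketch, not DeepStream API code): NHWC keeps the channel axis last, as most image loaders produce, while the NCHW layout that ONNX and Caffe models expect moves channels to the front.

```python
import numpy as np

# A single 384x384 RGB image in NHWC layout (height, width, channels),
# as typically produced by image-loading libraries.
nhwc = np.zeros((384, 384, 3), dtype=np.float32)

# ONNX and Caffe models expect NCHW by default, so the channel axis
# must be moved in front of the spatial axes before inference.
nchw = np.transpose(nhwc, (2, 0, 1))

print(nchw.shape)  # (3, 384, 384)
```

The 384x384 size here is only an assumption borrowed from the model discussed later in this thread; any H/W works the same way.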

Hello expert, I use this model for pose estimation with DeepStream 5.0: https://github.com/tensorlayer/hyperpose
But I get different inference results. DeepStream 5.0 also uses TensorRT 7.0 for the inference, and I process the input and output the same way, yet I can't reproduce the same result. Why do TensorRT and DeepStream give different results with the same model?

Hi @BIgPeng_XX,

This is possible if the input is different.
DeepStream supports the RGB/BGR/GRAY color formats.
We request you to first check whether the color format is the same.

For a detailed discussion of DeepStream queries, kindly raise them on the DeepStream forum.

Thanks!
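To see why a color-format mismatch alone changes the result (a minimal NumPy sketch, not DeepStream code): reversing the channel axis is the standard BGR/RGB conversion, and a model trained on one ordering sees completely different channel values under the other.

```python
import numpy as np

# A dummy 1-pixel image loaded as BGR (e.g. the default in OpenCV):
# this pixel is pure blue in BGR ordering.
bgr = np.array([[[255, 0, 0]]], dtype=np.uint8)

# Reversing the last axis converts BGR to RGB (and vice versa).
rgb = bgr[..., ::-1]

# The same bytes become a different tensor: a model trained on RGB
# would interpret the BGR-fed pixel as red instead of blue.
print(bgr[0, 0].tolist())  # [255, 0, 0]
print(rgb[0, 0].tolist())  # [0, 0, 255]
```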

Thanks, I have already detected the keypoints, but I am using the detector mode. I don't know how to pass the PAF information to the tracker. How can I pass the PAF information and draw the limb lines?

Hi @BIgPeng_XX,
I think the DeepStream forum should be able to help you better here.
https://forums.developer.nvidia.com/c/accelerated-computing/intelligent-video-analytics/deepstream-sdk/15
Thanks!

Hey BIgPeng_XX and AakankshaS, I am trying to implement the same human-pose-detection model in DeepStream via ONNX models, but I get the following error once the engine has been generated:

0:00:01.412893446 2026 0x7fdd2c002230 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1701> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-5.0/samples/models/hyperpose/ppn-resnet50-V2-HW-384x384.onnx_b1_gpu0_fp32.engine
INFO: …/nvdsinfer/nvdsinfer_model_builder.cpp:685 [Implicit Engine Info]: layers num: 8
0 INPUT kFLOAT x:0 3x384x384
1 OUTPUT kFLOAT Identity_1:0 18x12x12
2 OUTPUT kFLOAT Identity:0 18x12x12
3 OUTPUT kFLOAT Identity_6:0 17x9x9x12x12
4 OUTPUT kFLOAT Identity_5:0 18x12x12
5 OUTPUT kFLOAT Identity_4:0 18x12x12
6 OUTPUT kFLOAT Identity_3:0 18x12x12
7 OUTPUT kFLOAT Identity_2:0 18x12x12

0:00:01.412977734 2026 0x7fdd2c002230 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1805> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-5.0/samples/models/hyperpose/ppn-resnet50-V2-HW-384x384.onnx_b1_gpu0_fp32.engine
0:00:01.413902015 2026 0x7fdd2c002230 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app/config_infer_primary_jiro.txt sucessfully

Runtime commands:
h: Print this help
q: Quit

p: Pause
r: Resume

NOTE: To expand a source in the 2D tiled display and view object details, left-click on the source.
To go back to the tiled display, right-click anywhere on the window.

** INFO: <bus_callback:181>: Pipeline ready

** INFO: <bus_callback:167>: Pipeline running

0:00:02.138895872 2026 0x55b49f93d630 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::parseBoundingBox() <nvdsinfer_context_impl_output_parsing.cpp:59> [UID = 1]: Could not find output coverage layer for parsing objects
0:00:02.138915718 2026 0x55b49f93d630 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::fillDetectionOutput() <nvdsinfer_context_impl_output_parsing.cpp:734> [UID = 1]: Failed to parse bboxes
Segmentation fault (core dumped)

In order to achieve this, did you have to create anything additional to be able to run the models?
Thanks
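The thread does not give the fix, but the error message itself points at a likely cause, offered here as an assumption: gst-nvinfer defaults to detector mode and runs its built-in bounding-box parser, which looks for DetectNet-style "coverage" layers that a pose-estimation model does not have. One common approach is to tell nvinfer not to parse the outputs at all and instead attach the raw tensors to the metadata for custom post-processing. A hedged config sketch (the property names are real nvinfer keys; the file contents are otherwise assumed):

```ini
# Hypothetical excerpt from config_infer_primary_jiro.txt
[property]
onnx-file=ppn-resnet50-V2-HW-384x384.onnx
batch-size=1
# Treat the model as "other" (100) instead of a detector, so nvinfer
# does not run its built-in bounding-box parser on the pose outputs.
network-type=100
# Attach the raw output tensors (the Identity* layers above) to the frame
# metadata so a downstream pad probe can run the PAF decoding itself.
output-tensor-meta=1
```

With this setup the PAF/confidence post-processing and the drawing of keypoints would have to be implemented in application code (e.g. a probe on the nvinfer source pad).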