Infer Context default input_layer is not a image[CHW] error in DeepStream

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): NVIDIA GeForce GTX 1650
• DeepStream Version: 6.1
• TensorRT Version: 8.2.5.1
• NVIDIA GPU Driver Version (valid for GPU only): 510.47
• Issue Type( questions, new requirements, bugs): Question

I’m trying to implement a model that uses PeopleNet as the primary inference and the Pose Classification pretrained model from the NGC catalog as the secondary inference. When I run the app, it fails with the error below.

0:00:02.230412387 28834 0x7f30f4f48d60 INFO                 nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<pose_detection> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 2]: deserialized trt engine from :/home/forgottenlight/GitHub/analytics/data/sgies/st-gcn_3dbp_nvidia.etlt_b10_gpu0_fp32.engine
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [FullDims Engine Info]: layers num: 2
0   INPUT  kFLOAT input           3x300x34x1      min: 1x3x300x34x1    opt: 10x3x300x34x1   Max: 10x3x300x34x1   
1   OUTPUT kFLOAT fc_pred         6               min: 0               opt: 0               Max: 0               

0:00:02.242572470 28834 0x7f30f4f48d60 INFO                 nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<pose_detection> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2003> [UID = 2]: Use deserialized engine model: /home/forgottenlight/GitHub/analytics/data/sgies/st-gcn_3dbp_nvidia.etlt_b10_gpu0_fp32.engine
0:00:02.242593616 28834 0x7f30f4f48d60 ERROR                nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger:<pose_detection> NvDsInferContext[UID 2]: Error in NvDsInferContextImpl::initInferenceInfo() <nvdsinfer_context_impl.cpp:1112> [UID = 2]: Infer Context default input_layer is not a image[CHW]
ERROR: nvdsinfer_context_impl.cpp:1261 Infer context initialize inference info failed, nvinfer error:NVDSINFER_TENSORRT_ERROR
0:00:02.249977075 28834 0x7f30f4f48d60 WARN                 nvinfer gstnvinfer.cpp:846:gst_nvinfer_start:<pose_detection> error: Failed to create NvDsInferContext instance
0:00:02.250021728 28834 0x7f30f4f48d60 WARN                 nvinfer gstnvinfer.cpp:846:gst_nvinfer_start:<pose_detection> error: Config file path: configs/sgies/pose_detection_sgie.txt, NvDsInfer Error: NVDSINFER_TENSORRT_ERROR
Error: gst-resource-error-quark: Failed to create NvDsInferContext instance (1): gstnvinfer.cpp(846): gst_nvinfer_start (): /GstPipeline:pipeline0/GstNvInfer:pose_detection:
Config file path: configs/sgies/pose_detection_sgie.txt, NvDsInfer Error: NVDSINFER_TENSORRT_ERROR
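For context on what the error means: in its default detector/classifier modes, nvinfer expects the model's input layer to be an image tensor in CHW layout, i.e. three dimensions with a channel count of 1, 3, or 4. The ST-GCN input above (3x300x34x1) has four dimensions and is a pose-sequence tensor, not an image, so initInferenceInfo() rejects it. A rough sketch of that validation rule (a simplified approximation, not the actual nvinfer source):

```python
def looks_like_chw_image(shape):
    """Return True if `shape` (excluding the batch dimension) could be a
    CHW image: exactly three dimensions with a plausible channel count.

    This approximates the check behind the "input_layer is not a
    image[CHW]" error; the real implementation lives in
    nvdsinfer_context_impl.cpp.
    """
    return len(shape) == 3 and shape[0] in (1, 3, 4)

# The ST-GCN pose-classification input from the log: 3x300x34x1
print(looks_like_chw_image((3, 300, 34, 1)))  # False -> rejected by nvinfer
# A typical image input such as PeopleNet's: 3x544x960
print(looks_like_chw_image((3, 544, 960)))    # True  -> accepted
```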

Following is my SGIE config:

[property]
gpu-id=0
net-scale-factor=1
tlt-model-key=nvidia_tao

tlt-encoded-model=../../data/sgies/st-gcn_3dbp_nvidia.etlt
model-engine-file=../../data/sgies/st-gcn_3dbp_nvidia.etlt_b10_gpu0_fp32.engine
labelfile-path=../../data/labels/labels_pose_detection.txt


batch-size=10

# 0=FP32, 1=INT8, 2=FP16 mode
network-mode=2

network-type=1
num-detected-classes=6
interval=0
process-mode=2
model-color-format=1

gie-unique-id=2
operate-on-gie-id=1
operate-on-class-ids=0

input-object-min-width=34
input-object-min-height=300

output-blob-names=fc_pred
classifier-async-mode=1
classifier-threshold=0.51
scaling-filter=1
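One approach commonly used for models whose input is a tensor rather than an image is to bypass nvinfer's built-in image preprocessing entirely: attach preformed input tensors as metadata (e.g. from a Gst-nvdspreprocess element) and tell nvinfer to consume them. A sketch of the relevant property changes (not verified against this exact model; the nvdspreprocess config that would generate the pose tensors is a separate, model-specific file):

```ini
[property]
# Consume input tensors attached as meta by an upstream nvdspreprocess
# element instead of scaling/converting video frames
input-tensor-from-meta=1
# 100 = other/custom network type; skips the built-in detector/classifier
# input checks and output parsing
network-type=100
# Attach raw output tensors as meta so a probe can parse fc_pred itself
output-tensor-meta=1
```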

My pipeline looks something like this:


self.streammux.link(self.queue[0])
self.queue[0].link(self.pgie)
self.pgie.link(self.queue[1])
self.queue[1].link(self.tracker)
self.tracker.link(self.queue[2])
self.queue[2].link(self.sgie)
self.sgie.link(self.queue[3])
self.queue[3].link(self.nvdsanalytics)
self.nvdsanalytics.link(self.queue[4])
self.queue[4].link(self.tiler)
self.tiler.link(self.queue[5])
self.queue[5].link(self.nvvidconv)
self.nvvidconv.link(self.queue[6])
self.queue[6].link(self.nvosd)
self.nvosd.link(self.queue[7])
self.queue[7].link(self.nvvidconv_postosd)
self.nvvidconv_postosd.link(self.queue[8])
self.queue[8].link(self.caps)
self.caps.link(self.queue[9])
self.queue[9].link(self.encoder)
self.encoder.link(self.queue[10])
self.queue[10].link(self.rtppay)
self.rtppay.link(self.queue[11])
self.queue[11].link(self.sink)

Has anyone been able to reproduce this problem, or does anyone have a solution?

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one.
Thanks

Please refer to deepstream_reference_apps/deepstream-bodypose-3d at master · NVIDIA-AI-IOT/deepstream_reference_apps · GitHub. It is a ready-made sample that uses PeopleNet as the PGIE and BodyPose3DNet as the SGIE.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.