Using a custom segmentation model as SGIE: getting "network input channels is not 3"

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) Jetson Orin Nano 8GB
• DeepStream Version 7.1
• JetPack Version (valid for Jetson only) 6.2.1
• TensorRT Version 10.3.0
• Issue Type( questions, new requirements, bugs) questions

Problem Summary:

I’m trying to use a custom segmentation model (ResNet50+UNet) as an SGIE in a DeepStream pipeline. The model uses the NHWC input format instead of the standard NCHW format, and I keep getting the error: RGB/BGR input format specified but network input channels is not 3.

Pipeline Structure:

PGIE (YOLO detector) → SGIE (segmentation on detected bboxes)

Model Details:

The segmentation ONNX model has the following I/O (verified in Netron):
name: inputs
tensor: float32[unk__995,416,608,3]

name: output_0
tensor: float32[unk__996,63232,3]

Configuration Files:

PGIE Detector config (works fine):

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=0
onnx-file=/my/path/to/colon_lesion_P_O_I.pt.onnx
model-engine-file=/my/path/to/colon_lesion_P_O_I.engine
labelfile-path=/my/path/to/colon_lesion_P_O_I_labels.txt
batch-size=1
network-mode=0
num-detected-classes=3
interval=0
gie-unique-id=62
process-mode=1
network-type=0
cluster-mode=2
maintain-aspect-ratio=1
symmetric-padding=1
parse-bbox-func-name=NvDsInferParseYolo
custom-lib-path=/opt/nvidia/deepstream/deepstream/lib/libnvdsinfer_custom_impl_Yolo.so
engine-create-func-name=NvDsInferYoloCudaEngineGet

[class-attrs-all]
nms-iou-threshold=0.5
pre-cluster-threshold=0.4
topk=20

SGIE Segmentation config (fails with channel error):

[property]
gpu-id=0
network-type=100
gie-unique-id=481
interval=0
process-mode=1
network-mode=0
operate-on-gie-id=62
operate-on-class-ids=0
onnx-file=/my/path/to/resnet50_unet_last.onnx
model-engine-file=/my/path/to/resnet50_unet_last.onnx_b1_gpu0_fp32.engine
batch-size=1
maintain-aspect-ratio=0
network-input-order=1
num-detected-classes=3
output-tensor-meta=1
output-blob-names=output_0
labelfile-path=/my/path/to/colon_lesion_segment_labels.txt

Error Log:

Using winsys: x11
Opening in BLOCKING MODE
Setting min object dimensions as 16x16 instead of 1x1 to support VIC compute mode.
WARNING: Deserialize engine failed because file path: /my/path/to/resnet50_unet_last.onnx_b1_gpu0_fp32.engine open error
0:00:00.208681992  5010 0xaaab09ca2610 WARN                 nvinfer gstnvinfer.cpp:681:gst_nvinfer_logger:<sgie-segment> NvDsInferContext[UID 481]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2080> [UID = 481]: deserialize engine from file :/my/path/to/resnet50_unet_last.onnx_b1_gpu0_fp32.engine failed
0:00:00.208711593  5010 0xaaab09ca2610 WARN                 nvinfer gstnvinfer.cpp:681:gst_nvinfer_logger:<sgie-segment> NvDsInferContext[UID 481]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2185> [UID = 481]: deserialize backend context from engine from file :/my/path/to/resnet50_unet_last.onnx_b1_gpu0_fp32.engine failed, try rebuild
0:00:00.208725546  5010 0xaaab09ca2610 INFO                 nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger:<sgie-segment> NvDsInferContext[UID 481]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2106> [UID = 481]: Trying to create engine from model files
0:02:34.916451819  5010 0xaaab09ca2610 INFO                 nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger:<sgie-segment> NvDsInferContext[UID 481]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2138> [UID = 481]: serialize cuda engine to file: /my/path/to/resnet50_unet_last.onnx_b1_gpu0_fp32.engine successfully
Implicit layer support has been deprecated
INFO: [Implicit Engine Info]: layers num: 0

0:02:35.325581917  5010 0xaaab09ca2610 ERROR                nvinfer gstnvinfer.cpp:678:gst_nvinfer_logger:<sgie-segment> NvDsInferContext[UID 481]: Error in NvDsInferContextImpl::preparePreprocess() <nvdsinfer_context_impl.cpp:1034> [UID = 481]: RGB/BGR input format specified but network input channels is not 3
ERROR: Infer Context prepare preprocessing resource failed., nvinfer error:NVDSINFER_CONFIG_FAILED
0:02:35.339377178  5010 0xaaab09ca2610 WARN                 nvinfer gstnvinfer.cpp:914:gst_nvinfer_start:<sgie-segment> error: Failed to create NvDsInferContext instance
0:02:35.339426108  5010 0xaaab09ca2610 WARN                 nvinfer gstnvinfer.cpp:914:gst_nvinfer_start:<sgie-segment> error: Config file path: configs/segment_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
[Pipeline] Failed to set pipeline to PLAYING state

Questions:

How can I properly configure DeepStream to work with the above custom segmentation model?

Any guidance would be greatly appreciated!

  1. If the SGIE works on detected bboxes, please set process-mode to 2 for the SGIE.
  2. As the log shows, the engine is invalid because the layer count is 0. You can comment out the model-engine-file configuration to regenerate the TensorRT engine, as sketched below.
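
For example, the relevant part of the SGIE [property] section with both changes applied might look like this (other keys unchanged; this is only an illustration, not a verified configuration):

[property]
# 1. run on objects detected by the PGIE (gie-unique-id=62), not on full frames
process-mode=2
operate-on-gie-id=62
operate-on-class-ids=0
# 2. comment out the stale engine path so nvinfer rebuilds the engine from the ONNX file
# model-engine-file=/my/path/to/resnet50_unet_last.onnx_b1_gpu0_fp32.engine
onnx-file=/my/path/to/resnet50_unet_last.onnx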

Thank you for your reply!

I changed process-mode to 2, commented out model-engine-file, and re-ran the pipeline. However, the newly built engine is still invalid and shows the same log:

0:00:00.208526978  5914 0xaaaab06f95f0 INFO                 nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger:<sgie-segment> NvDsInferContext[UID 481]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2106> [UID = 481]: Trying to create engine from model files
0:02:34.134985802  5914 0xaaaab06f95f0 INFO                 nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger:<sgie-segment> NvDsInferContext[UID 481]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2138> [UID = 481]: serialize cuda engine to file: /path/resnet50_unet_last.onnx_b1_gpu0_fp32.engine successfully
Implicit layer support has been deprecated
INFO: [Implicit Engine Info]: layers num: 0

0:02:34.539228208  5914 0xaaaab06f95f0 ERROR                nvinfer gstnvinfer.cpp:678:gst_nvinfer_logger:<sgie-segment> NvDsInferContext[UID 481]: Error in NvDsInferContextImpl::preparePreprocess() <nvdsinfer_context_impl.cpp:1034> [UID = 481]: RGB/BGR input format specified but network input channels is not 3

Could you try “force-implicit-batch-dim=1” in the SGIE configuration file? If it still doesn’t work, could you share a complete log?

After adding force-implicit-batch-dim=1, the log shows the following error:

ERROR: [TRT]: IBuilder::buildSerializedNetwork: Error Code 4: API Usage Error (Network has dynamic or shape inputs, but no optimization profile has been defined.)

The process then aborted.

Full log attached:
log_with_force-implicit-batch-dim=1.txt (12.5 KB)

Thanks for the update. Let’s go back to the “network input channels is not 3” error: please remove “force-implicit-batch-dim=1”. Since the nvinfer plugin and the low-level library are open source, could you add a log in NvDsInferContextImpl::preparePreprocess in /opt/nvidia/deepstream/deepstream/sources/libs/nvdsinfer/nvdsinfer_context_impl.cpp to print m_NetworkInfo.channels, m_NetworkInfo.width, and m_NetworkInfo.height? Please refer to the README for how to build, then replace /opt/nvidia/deepstream/deepstream/lib/libnvds_infer.so with the newly built .so.

Thank you for your suggestion.
I think I found the problem. For some reason, the NHWC model with network-input-order=1 doesn’t work. After regenerating the ONNX model in NCHW format and setting network-input-order=0, everything works perfectly.
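
For anyone hitting the same issue, here is a minimal sketch of the SGIE [property] section that works for me with the NCHW export (resnet50_unet_last_nchw.onnx is just a placeholder name for the re-exported model; the remaining keys are unchanged from my earlier config):

[property]
gpu-id=0
network-type=100
gie-unique-id=481
process-mode=2
operate-on-gie-id=62
operate-on-class-ids=0
# re-exported model with an NCHW input, e.g. float32[N,3,416,608]
onnx-file=/my/path/to/resnet50_unet_last_nchw.onnx
batch-size=1
network-mode=0
# 0 = NCHW (the default)
network-input-order=0
output-tensor-meta=1
output-blob-names=output_0
labelfile-path=/my/path/to/colon_lesion_segment_labels.txt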

Glad to know you fixed it, thanks for the update! Could you provide the NHWC model? If this is a bug, maybe we can fix it.

Sure. But I’m not able to upload the model file. It shows:
“Sorry, the file you are trying to upload is not authorized (authorized extensions: woff2, woff, pdf, doc, docx, txt, gz, zip, log, gif, jpeg, png, jpg, mov, mp4, webm, m4v, 3gp, ogv, avi, mpeg).”

However, after some additional testing, I found that when using an NHWC model with network-input-order=1, you need to specify infer-dims, model-color-format, offsets, and net-scale-factor in the config file for the generated engine to work properly.

I’m not sure which of these parameters actually makes the difference, since I added or removed all four together during testing.

On the other hand, for an NCHW model with network-input-order=0, these parameters are not required.
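
For completeness, this is roughly what the NHWC variant of my SGIE config looks like with those four keys added. The values below are specific to my model’s preprocessing and partly assumptions on my side (in particular, I am assuming infer-dims is given in the model’s own HWC binding order and that the model expects inputs scaled to [0,1] with no mean subtraction), so treat them as placeholders rather than recommended settings:

[property]
# 1 = NHWC
network-input-order=1
# dimensions of the NHWC input binding, assumed to be height;width;channels
infer-dims=416;608;3
# 0 = RGB
model-color-format=0
# per-channel mean values subtracted before scaling (placeholder)
offsets=0;0;0
# 1/255, i.e. scale pixel values to [0,1] (placeholder)
net-scale-factor=0.0039215697906911373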

You can compress the model with a zip tool, then upload it. Or you can send it via a forum private message: click your forum avatar → “personal messages” → “view all personal messages” → “new message”. Or you can upload the model to a cloud drive and I will download it. Thanks!

Thanks for sharing! Please open a new topic if you have other DeepStream problems.