• Hardware Platform (Jetson / GPU): Jetson AGX Orin Dev Kit
• DeepStream Version: 6.3
• JetPack Version (valid for Jetson only): 5.1.2
• TensorRT Version: 8.5.2
• NVIDIA GPU Driver Version (valid for GPU only): R35.6.0
• Issue Type (questions, new requirements, bugs): questions
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or which sample application, and the function description.)
tcambin name=tcam0 ! video/x-raw,format=GRAY8,width=1920,height=1080,framerate=30/1
! videoconvert ! "video/x-raw,format=BGR"
! nvvideoconvert ! "video/x-raw(memory:NVMM),format=NV12,width=1920,height=1080"
! mux.sink_0 nvstreammux name=mux batch-size=1 width=1920 height=1080 live-source=1
! nvinfer config-file-path=$HOME/my_code/config_infer_primary.txt
! nvvidconv ! nvdsosd
! nvvideoconvert ! "video/x-raw(memory:NVMM),format=NV12"
! nvv4l2h264enc bitrate=2000000
! h264parse ! mp4mux
! filesink location=$HOME/Videos/output.mp4 -e
Setting pipeline to PAUSED ...
Opening in BLOCKING MODE
0:00:03.507798404 483809 0xaaaad914fa40 WARN nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1174> [UID = 1]: Warning, OpenCV has been deprecated. Using NMS for clustering instead of cv::groupRectangles with topK = 20 and NMS Threshold = 0.5
0:00:06.302814168 483809 0xaaaad914fa40 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1988> [UID = 1]: deserialized trt engine from :/home/fpascal/my_code/model.engine
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
INFO: [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_6 224x224x3
1 OUTPUT kFLOAT cls 10x4
2 OUTPUT kFLOAT bbox 10x4
0:00:06.470124552 483809 0xaaaad914fa40 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2091> [UID = 1]: Use deserialized engine model: /home/fpascal/my_code/model.engine
0:00:06.470188174 483809 0xaaaad914fa40 ERROR nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::preparePreprocess() <nvdsinfer_context_impl.cpp:974> [UID = 1]: RGB/BGR input format specified but network input channels is not 3
ERROR: Infer Context prepare preprocessing resource failed., nvinfer error:NVDSINFER_CONFIG_FAILED
0:00:06.477269765 483809 0xaaaad914fa40 WARN nvinfer gstnvinfer.cpp:898:gst_nvinfer_start: error: Failed to create NvDsInferContext instance
0:00:06.477317674 483809 0xaaaad914fa40 WARN nvinfer gstnvinfer.cpp:898:gst_nvinfer_start: error: Config file path: /home/fpascal/my_code/config_infer_primary.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
ERROR: Pipeline doesn't want to pause.
ERROR: from element /GstPipeline:pipeline0/GstNvInfer:nvinfer0: Failed to create NvDsInferContext instance
Additional debug info:
/dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(898): gst_nvinfer_start (): /GstPipeline:pipeline0/GstNvInfer:nvinfer0:
Config file path: /home/fpascal/my_code/config_infer_primary.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Setting pipeline to NULL ...
Freeing pipeline ...
From the model, network-input-order should be set to 1, meaning NHWC. Please also change infer-dims=224;224;3 to infer-dims=3;224;224, because infer-dims expects the dimensions in [c;h;w] order, i.e. the 3 (channels) must come first. A sketch of the relevant keys is shown below.
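A minimal sketch of how the [property] section of config_infer_primary.txt could look after these two changes, assuming the engine path from the log; the remaining keys are only illustrative placeholders and should match your actual file:

[property]
model-engine-file=/home/fpascal/my_code/model.engine
# input tensor layout of the model: 0 = NCHW, 1 = NHWC
network-input-order=1
# dimensions are always given as c;h;w, so channels first
infer-dims=3;224;224
# 0 = RGB, 1 = BGR, 2 = GRAY (placeholder, keep whatever matches your model)
model-color-format=0
batch-size=1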
This is a new error. The default post-processing function is for the resnet10 model. Noticing that parse-bbox-func-name is set, you also need to set custom-lib-path; please refer to this cfg and the sketch below.
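A hedged sketch of how the two keys pair up in the [property] section; the function name and library path here are placeholders, not the actual parser for this model:

[property]
# exported symbol of the custom bounding-box parser in the library below (placeholder name)
parse-bbox-func-name=NvDsInferParseCustomMyModel
# shared library that implements that parsing function (placeholder path)
custom-lib-path=/home/fpascal/my_code/libnvds_infercustomparser.so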