I am trying to create a pipeline with our own models, based on the deepstream_test_3 example, but I have run into a problem that I cannot resolve after a lot of searching. In the original deepstream_test_3 model, the input layer shape is 3x368x640 (channel-first), whereas our model's input layer shape is 416x416x3 (channel-last). As a result I get the following error:
WARNING: ../nvdsinfer/nvdsinfer_func_utils.cpp:36 [TRT]: Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
0:00:02.966757544 32253 0x20f1e10 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1701> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-5.0/sources/utvm_models/yolov4_-1_3_416_416_dynamic.engine
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:685 [FullDims Engine Info]: layers num: 3
0 INPUT kFLOAT input 416x416x3 min: 1x416x416x3 opt: 4x416x416x3 Max: 24x416x416x3
1 OUTPUT kFLOAT boxes 10647x1x4 min: 0 opt: 0 Max: 0
2 OUTPUT kFLOAT confs 10647x6 min: 0 opt: 0 Max: 0
0:00:02.966959085 32253 0x20f1e10 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1805> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-5.0/sources/utvm_models/yolov4_-1_3_416_416_dynamic.engine
0:00:02.966992536 32253 0x20f1e10 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::preparePreprocess() <nvdsinfer_context_impl.cpp:874> [UID = 1]: RGB/BGR input format specified but network input channels is not 3
ERROR: nvdsinfer_context_impl.cpp:1157 Infer Context prepare preprocessing resource failed., nvinfer error:NVDSINFER_CONFIG_FAILED
0:00:02.995604710 32253 0x20f1e10 WARN nvinfer gstnvinfer.cpp:809:gst_nvinfer_start:<primary-inference> error: Failed to create NvDsInferContext instance
0:00:02.995651649 32253 0x20f1e10 WARN nvinfer gstnvinfer.cpp:809:gst_nvinfer_start:<primary-inference> error: Config file path: dstest3_pgie_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Error: gst-resource-error-quark: Failed to create NvDsInferContext instance (1): gstnvinfer.cpp(809): gst_nvinfer_start (): /GstPipeline:pipeline0/GstNvInfer:primary-inference:
Config file path: dstest3_pgie_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Exiting app
The question is: how can I convert the input stream from channel-last to channel-first format? What needs to be added to the pipeline?
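Just to illustrate the transform I am after (this is plain NumPy, not DeepStream code; the 416x416x3 shape is our model's, everything else is a made-up example): the mismatch between what the preprocessor produces and what the network expects is a single axis permutation.

```python
import numpy as np

# nvinfer's preprocessor hands the network a channel-first (NCHW) tensor,
# while our model expects channel-last (NHWC). The difference is just an
# axis permutation; shapes below assume a batch of one 416x416 RGB frame.
frame_nchw = np.zeros((1, 3, 416, 416), dtype=np.float32)  # what the preprocessor produces
frame_nhwc = np.transpose(frame_nchw, (0, 2, 3, 1))        # what our model expects

print(frame_nchw.shape)  # (1, 3, 416, 416)
print(frame_nhwc.shape)  # (1, 416, 416, 3)
```

Is there an element or configuration key that applies this permutation inside the pipeline, or does it have to happen in the model itself?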