Creating LL OSD context new
0:00:04.254747581 27928 0x5637940944f0 WARN nvinfer gstnvinfer.cpp:515:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:log(): TensorRT was linked against cuBLAS 10.2.0 but loaded cuBLAS 10.1.0
0:00:04.262776487 27928 0x5637940944f0 ERROR nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:initialize(): RGB/BGR input format specified but network input channels is not 3
0:00:04.263354770 27928 0x5637940944f0 WARN nvinfer gstnvinfer.cpp:692:gst_nvinfer_start:<primary_gie_classifier> error: Failed to create NvDsInferContext instance
0:00:04.263365019 27928 0x5637940944f0 WARN nvinfer gstnvinfer.cpp:692:gst_nvinfer_start:<primary_gie_classifier> error: Config file path: /deepstream-4.0/samples/configs/deepstream-app/config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
** ERROR: <main:651>: Failed to set pipeline to PAUSED
Quitting
ERROR from primary_gie_classifier: Failed to create NvDsInferContext instance
Debug info: gstnvinfer.cpp(692): gst_nvinfer_start (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie_classifier:
Config file path: /deepstream-4.0/samples/configs/deepstream-app/config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
App run failed
I checked my ONNX model; the input size is (1, 3, 240, 320), just like a usual detection model. I don't know what is wrong with this model that causes the error: "RGB/BGR input format specified but network input channels is not 3".
Please note that there is also an ONNX parser inside the DeepStream SDK, which is integrated into TensorRT.
Would you mind removing the TensorRT engine and its path from the config, so that the DeepStream ONNX parser is used instead, as a quick test?
If it is still not working, would you mind sharing your ONNX file so we can check it?
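For example, in the nvinfer config (just a sketch: the engine filename below is hypothetical, and your config.txt may use additional keys):

```
[property]
# Let DeepStream build the engine from the ONNX model via its own parser:
onnx-file=version-RFB-320.onnx
# Comment out the pre-built TensorRT engine so it is not loaded directly:
# model-engine-file=version-RFB-320.engine
```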
We tried your model with TensorRT directly and found this error:
WARNING: ONNX model has a newer ir_version (0.0.4) than this parser was built against (0.0.3).
WARNING: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
Successfully casted down to INT32.
While parsing node number 76 [Gather]:
ERROR: onnx2trt_utils.hpp:347 In function convert_axis:
[8] Assertion failed: axis >= 0 && axis < nbDims
[02/06/2020-14:25:12] [E] Failed to parse onnx file
[02/06/2020-14:25:12] [E] Parsing model failed
[02/06/2020-14:25:12] [E] Engine could not be created
&&&& FAILED TensorRT.trtexec # /usr/src/tensorrt/bin/trtexec --onnx=./version-RFB-320.onnx
So this issue comes from TensorRT and, more precisely, from the Gather layer operation.
As for @giangblackk, you should modify the nvdsinfer_context_impl.cpp file to make it work with this branch.
In NvDsInferContextImpl::generateTRTModel, you should modify the network instantiation as shown below.
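A minimal sketch of that change, assuming the TensorRT 6 explicit-batch API (the exact variable names in nvdsinfer_context_impl.cpp may differ):

```cpp
#include "NvInfer.h"

// Old (implicit batch, not accepted by the full-dims ONNX parser):
//   nvinfer1::INetworkDefinition* network = builder->createNetwork();

// New: request an explicit batch dimension when creating the network.
const auto explicitBatch = 1U << static_cast<uint32_t>(
    nvinfer1::NetworkDefinitionCreationFlag::kEXPLICIT_BATCH);
nvinfer1::INetworkDefinition* network = builder->createNetworkV2(explicitBatch);
```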
Nvinfer is using “createNetwork”, which (I think) is deprecated. It doesn’t specify the “explicitBatch” value, needed by the onnx parser, hence your error.
After doing that, we should all be at the same point:
We can successfully build a TRT engine with onnx2trt AND DeepStream
This engine runs OK with trtexec
But we have the "RGB/BGR input format specified but network input channels is not 3" error
We’ve tested giangblackk’s model with the onnx-tensorrt 6.0-full-dims branch and still hit the same error:
Parsing model
While parsing node number 76 [Gather -> "321"]:
ERROR: /home/nvidia/topic_112526/onnx-tensorrt/builtin_op_importers.cpp:703 In function importGather:
[8] Assertion failed: !(data->getType() == nvinfer1::DataType::kINT32 && nbDims == 1) && "Cannot perform gather on a shape tensor!"
Based on our experience, this error usually occurs when a model applies Gather on axis 0, which is not supported by TensorRT yet.
As for your error, DeepStream expects the network input to have 3 dimensions (channels, height, width) for the RGB/BGR format.
So the input of ONNX should be [3, 240, 320] rather than [1, 3, 240, 320].
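If you want to check what TensorRT actually sees for a given ONNX file, here is a small sketch using the standard TensorRT 6 builder/ONNX-parser API (for a model the parser accepts; the file name is taken from the trtexec command above):

```cpp
#include <iostream>
#include "NvInfer.h"
#include "NvOnnxParser.h"

// Minimal logger required by the TensorRT builder.
class Logger : public nvinfer1::ILogger {
    void log(Severity severity, const char* msg) noexcept override {
        if (severity <= Severity::kWARNING) std::cout << msg << std::endl;
    }
};

int main() {
    Logger logger;
    auto* builder = nvinfer1::createInferBuilder(logger);
    const auto explicitBatch = 1U << static_cast<uint32_t>(
        nvinfer1::NetworkDefinitionCreationFlag::kEXPLICIT_BATCH);
    auto* network = builder->createNetworkV2(explicitBatch);
    auto* parser = nvonnxparser::createParser(*network, logger);

    // Model file name taken from the trtexec command above.
    if (!parser->parseFromFile("./version-RFB-320.onnx",
            static_cast<int>(nvinfer1::ILogger::Severity::kWARNING))) {
        std::cerr << "Failed to parse the ONNX model" << std::endl;
        return 1;
    }

    // DeepStream wants a 3-dim (C, H, W) input, so a 4-dim input such as
    // [1, 3, 240, 320] triggers the "input channels is not 3" error.
    nvinfer1::Dims dims = network->getInput(0)->getDimensions();
    std::cout << "Input has " << dims.nbDims << " dims:";
    for (int i = 0; i < dims.nbDims; ++i) std::cout << " " << dims.d[i];
    std::cout << std::endl;
    return 0;
}
```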
I also noticed afterwards that switching branches would not change the Gather error.
Maybe giangblackk modified the network before exporting it as an ONNX file.
On my side, I initially also had these Gather errors, but I solved them by changing some pytorch .view operations (it seems to be a common problem).
Because I have access to the network code, I might also be able to change the input shape, by removing the first dimension (which is the batch size).
However, what should we do if we can’t change the network (e.g. if only the ONNX file is available)?
The ONNX parser seems to handle (batch_size, n_channels, width, height) inputs (hence the “hasImplicitBatchDimension” error), so could DeepStream also do it? By any chance, can we override getBindingDimensions?
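For instance, I imagine something like this could work (squeezeBatchDim is just a hypothetical helper, and where exactly it would have to be called inside nvdsinfer_context_impl.cpp is an open question):

```cpp
#include "NvInfer.h"

// Hypothetical helper: if an engine binding reports 4 dims with a leading
// batch of 1 (e.g. [1, 3, 240, 320]), return the trailing 3 dims (C, H, W)
// that DeepStream expects; otherwise return the dims unchanged.
static nvinfer1::Dims squeezeBatchDim(const nvinfer1::Dims& dims) {
    if (dims.nbDims == 4 && dims.d[0] == 1) {
        nvinfer1::Dims out{};
        out.nbDims = 3;
        for (int i = 0; i < 3; ++i) out.d[i] = dims.d[i + 1];
        return out;
    }
    return dims;
}

// Usage sketch, wherever the input binding is inspected:
//   nvinfer1::Dims inputDims =
//       squeezeBatchDim(engine->getBindingDimensions(inputBindingIndex));
```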