ONNX implicit batch dimension

Please provide complete information as applicable to your setup.

• Hardware Platform: GPU
• DeepStream Version: 5.0
• JetPack Version (valid for Jetson only)
• TensorRT Version: 7.0
• NVIDIA GPU Driver Version: 440.64
I used a custom ONNX classification model as the primary inference engine. However, I got an error about the implicit batch dimension:

Input filename: /opt/nvidia/deepstream/deepstream-5.0/sources/fire_detect/model.onnx
ONNX IR version: 0.0.4
Opset version: 8
Producer name: tf2onnx
Producer version: 1.5.6
Model version: 0
Doc string:

ERROR: ModelImporter.cpp:457 In function importModel:
[4] Assertion failed: !_importer_ctx.network()->hasImplicitBatchDimension() && "This version of the ONNX parser only supports TensorRT INetworkDefinitions with an explicit batch dimension. Please ensure the network was created using the EXPLICIT_BATCH NetworkDefinitionCreationFlag."
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:390 Failed to parse onnx file
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:971 failed to build network since parsing model errors.
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:872 failed to build network.

https://github.com/sherlockking/models/blob/master/model.onnx is the model I used
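In case it helps to isolate the problem: one way to check whether the model itself parses with an explicit batch dimension, independently of DeepStream, is to build an engine with `trtexec` (a sketch; the paths are assumptions, and `--explicitBatch` is the TensorRT 7.0 `trtexec` flag). If the export itself is the issue, re-exporting with a newer tf2onnx and a higher opset is also worth trying:

```shell
# Try building a standalone engine with an explicit batch dimension.
/usr/src/tensorrt/bin/trtexec \
    --onnx=/opt/nvidia/deepstream/deepstream-5.0/sources/fire_detect/model.onnx \
    --explicitBatch \
    --saveEngine=model.engine

# Alternatively, re-export the model from the TensorFlow saved model
# with a newer tf2onnx and opset (./saved_model is a placeholder path).
python -m tf2onnx.convert --saved-model ./saved_model --opset 11 --output model.onnx
```

If the `trtexec` build succeeds, the prebuilt engine can be referenced from the nvinfer config (`model-engine-file`) instead of having DeepStream parse the ONNX file.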


Could you first check whether this suggestion works for you?