Generate model engine file from .onnx

Description

[property]
gpu-id=0
net-scale-factor=1.0
offsets=103.939;116.779;123.68

tlt-model-key=
#tlt-encoded-model
onnx-file=/root/data/export/age_gender_classification.onnx

#model-engine-file=/root/data/Resnet18_Train_SecurityUniformClassification_V1.1.etlt_b16_gpu0_fp16.engine
#labelfile-path=/root/data/labels.txt
infer-dims=3;224;224

Environment

TensorRT Version: 8.4.3-1
GPU Type: NVIDIA GeForce RTX 3070 Ti
Nvidia Driver Version: 535.113.01
CUDA Version: 11.6
CUDNN Version:
Operating System + Version: Ubuntu 20.04.4 LTS
Python Version (if applicable): Python 3.8.10
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):

Relevant Files

Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)

Steps To Reproduce

0:00:00.773280371 646 0x4635d30 INFO nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger: NvDsInferContext[UID 18]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 18]: Trying to create engine from model files
WARNING: [TRT]: The implicit batch dimension mode has been deprecated. Please create the network with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag whenever possible.
ERROR: [TRT]: ModelImporter.cpp:779: ERROR: ModelImporter.cpp:566 In function importModel:
[4] Assertion failed: !_importer_ctx.network()->hasImplicitBatchDimension() && "This version of the ONNX parser only supports TensorRT INetworkDefinitions with an explicit batch dimension. Please ensure the network was created using the EXPLICIT_BATCH NetworkDefinitionCreationFlag."
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:315 Failed to parse onnx file
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:966 failed to build network since parsing model errors.
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:799 failed to build network.
0:00:02.938685530 646 0x4635d30 ERROR nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger: NvDsInferContext[UID 18]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1934> [UID = 18]: build engine file failed
0:00:03.028925764 646 0x4635d30 ERROR nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger: NvDsInferContext[UID 18]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2020> [UID = 18]: build backend context failed
0:00:03.028947884 646 0x4635d30 ERROR nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger: NvDsInferContext[UID 18]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1257> [UID = 18]: generate backend failed, check config file settings
0:00:03.028968383 646 0x4635d30 WARN nvinfer gstnvinfer.cpp:846:gst_nvinfer_start: error: Failed to create NvDsInferContext instance
0:00:03.028973381 646 0x4635d30 WARN nvinfer gstnvinfer.cpp:846:gst_nvinfer_start: error: Config file path: wapon_classification_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Error: gst-resource-error-quark: Failed to create NvDsInferContext instance (1): gstnvinfer.cpp(846): gst_nvinfer_start (): /GstPipeline:pipeline0/GstNvInfer:secondary-inference:
Config file path: wapon_classification_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
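The assertion in the log above says the ONNX parser was handed a network created in implicit batch mode. In gst-nvinfer configs, one common cause is forcing implicit batch mode via the config file. A minimal sketch of the relevant keys follows; the values here are assumptions for illustration, not confirmed from the logs:

```ini
[property]
# 0 (the default) lets nvinfer build the network with an explicit batch
# dimension, which the ONNX parser requires; 1 forces implicit batch mode
# and triggers the assertion seen in the log.
force-implicit-batch-dim=0
# Hypothetical batch size for this classifier; set to match your pipeline.
batch-size=16
```

If `force-implicit-batch-dim` is absent from the config entirely, the plugin version itself may be building in implicit batch mode, in which case upgrading DeepStream/TensorRT (as suggested in the reply below) is the path forward.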

Hi,

We recommend trying the latest TensorRT 8.6.1.
If the issue persists, please share the complete verbose logs from trtexec and a minimal ONNX model that reproduces the issue so we can debug further.
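To capture those verbose logs, the parse can be reproduced outside DeepStream with trtexec. A sketch, assuming the ONNX path from the config above and that trtexec is on the PATH (typically under /usr/src/tensorrt/bin):

```shell
# Parse the ONNX model with full verbose output; trtexec uses explicit
# batch for ONNX models, so this isolates whether the model itself parses.
trtexec --onnx=/root/data/export/age_gender_classification.onnx \
        --verbose \
        --saveEngine=/root/data/export/age_gender_classification.engine
```

If trtexec builds the engine successfully, the failure is specific to how gst-nvinfer creates the network, and the resulting `.engine` file can be pointed to directly via `model-engine-file` in the config.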

Thank you.