Generate a dynamic batch size engine with TensorRT for DLA-based CNN inference

I am new to using TensorRT, especially for DLA. I have a ResNet-50 model which I am converting to ONNX format (using Python). Then I use the trtexec CLI to build the engine file. Now I want to run the model on the DLA with varying batch sizes. This is the command I used:

/usr/src/tensorrt/bin/trtexec --onnx=model.onnx --minShapes=input:1x3x224x224 --optShapes=input:16x3x224x224 --maxShapes=input:32x3x224x224 --saveEngine=model.engine --fp16 --useDLACore=0 --allowGPUFallback

However, the engine setup fails and I get the following logs:

&&&& RUNNING TensorRT.trtexec [TensorRT v8502] # /usr/src/tensorrt/bin/trtexec --onnx=../model.onnx --minShapes=input:1x3x224x224 --optShapes=input:16x3x224x224 --maxShapes=input:32x3x224x224 --saveEngine=model.engine --int8 --useDLACore=0 --allowGPUFallback
[09/27/2024-21:21:01] [I] === Model Options ===
[09/27/2024-21:21:01] [I] Format: ONNX
[09/27/2024-21:21:01] [I] Model: ../model.onnx
[09/27/2024-21:21:01] [I] Output:
[09/27/2024-21:21:01] [I] === Build Options ===
[09/27/2024-21:21:01] [I] Max batch: explicit batch
[09/27/2024-21:21:01] [I] Memory Pools: workspace: default, dlaSRAM: default, dlaLocalDRAM: default, dlaGlobalDRAM: default
[09/27/2024-21:21:01] [I] minTiming: 1
[09/27/2024-21:21:01] [I] avgTiming: 8
[09/27/2024-21:21:01] [I] Precision: FP32+INT8
[09/27/2024-21:21:01] [I] LayerPrecisions: 
[09/27/2024-21:21:01] [I] Calibration: Dynamic
[09/27/2024-21:21:01] [I] Refit: Disabled
[09/27/2024-21:21:01] [I] Sparsity: Disabled
[09/27/2024-21:21:01] [I] Safe mode: Disabled
[09/27/2024-21:21:01] [I] DirectIO mode: Disabled
[09/27/2024-21:21:01] [I] Restricted mode: Disabled
[09/27/2024-21:21:01] [I] Build only: Disabled
[09/27/2024-21:21:01] [I] Save engine: model.engine
[09/27/2024-21:21:01] [I] Load engine: 
[09/27/2024-21:21:01] [I] Profiling verbosity: 0
[09/27/2024-21:21:01] [I] Tactic sources: Using default tactic sources
[09/27/2024-21:21:01] [I] timingCacheMode: local
[09/27/2024-21:21:01] [I] timingCacheFile: 
[09/27/2024-21:21:01] [I] Heuristic: Disabled
[09/27/2024-21:21:01] [I] Preview Features: Use default preview flags.
[09/27/2024-21:21:01] [I] Input(s)s format: fp32:CHW
[09/27/2024-21:21:01] [I] Output(s)s format: fp32:CHW
[09/27/2024-21:21:01] [I] Input build shape: input=1x3x224x224+16x3x224x224+32x3x224x224
[09/27/2024-21:21:01] [I] Input calibration shapes: model
[09/27/2024-21:21:01] [I] === System Options ===
[09/27/2024-21:21:01] [I] Device: 0
[09/27/2024-21:21:01] [I] DLACore: 0(With GPU fallback)
[09/27/2024-21:21:01] [I] Plugins:
[09/27/2024-21:21:01] [I] === Inference Options ===
[09/27/2024-21:21:01] [I] Batch: Explicit
[09/27/2024-21:21:01] [I] Input inference shape: input=16x3x224x224
[09/27/2024-21:21:01] [I] Iterations: 10
[09/27/2024-21:21:01] [I] Duration: 3s (+ 200ms warm up)
[09/27/2024-21:21:01] [I] Sleep time: 0ms
[09/27/2024-21:21:01] [I] Idle time: 0ms
[09/27/2024-21:21:01] [I] Streams: 1
[09/27/2024-21:21:01] [I] ExposeDMA: Disabled
[09/27/2024-21:21:01] [I] Data transfers: Enabled
[09/27/2024-21:21:01] [I] Spin-wait: Disabled
[09/27/2024-21:21:01] [I] Multithreading: Disabled
[09/27/2024-21:21:01] [I] CUDA Graph: Disabled
[09/27/2024-21:21:01] [I] Separate profiling: Disabled
[09/27/2024-21:21:01] [I] Time Deserialize: Disabled
[09/27/2024-21:21:01] [I] Time Refit: Disabled
[09/27/2024-21:21:01] [I] NVTX verbosity: 0
[09/27/2024-21:21:01] [I] Persistent Cache Ratio: 0
[09/27/2024-21:21:01] [I] Inputs:
[09/27/2024-21:21:01] [I] === Reporting Options ===
[09/27/2024-21:21:01] [I] Verbose: Disabled
[09/27/2024-21:21:01] [I] Averages: 10 inferences
[09/27/2024-21:21:01] [I] Percentiles: 90,95,99
[09/27/2024-21:21:01] [I] Dump refittable layers:Disabled
[09/27/2024-21:21:01] [I] Dump output: Disabled
[09/27/2024-21:21:01] [I] Profile: Disabled
[09/27/2024-21:21:01] [I] Export timing to JSON file: 
[09/27/2024-21:21:01] [I] Export output to JSON file: 
[09/27/2024-21:21:01] [I] Export profile to JSON file: 
[09/27/2024-21:21:01] [I] 
[09/27/2024-21:21:01] [I] === Device Information ===
[09/27/2024-21:21:01] [I] Selected Device: Orin
[09/27/2024-21:21:01] [I] Compute Capability: 8.7
[09/27/2024-21:21:01] [I] SMs: 16
[09/27/2024-21:21:01] [I] Compute Clock Rate: 1.3 GHz
[09/27/2024-21:21:01] [I] Device Global Memory: 30588 MiB
[09/27/2024-21:21:01] [I] Shared Memory per SM: 164 KiB
[09/27/2024-21:21:01] [I] Memory Bus Width: 128 bits (ECC disabled)
[09/27/2024-21:21:01] [I] Memory Clock Rate: 1.3 GHz
[09/27/2024-21:21:01] [I] 
[09/27/2024-21:21:01] [I] TensorRT version: 8.5.2
[09/27/2024-21:21:02] [I] [TRT] [MemUsageChange] Init CUDA: CPU +220, GPU +0, now: CPU 246, GPU 4086 (MiB)
[09/27/2024-21:21:05] [I] [TRT] [MemUsageChange] Init builder kernel library: CPU +302, GPU +403, now: CPU 571, GPU 4511 (MiB)
[09/27/2024-21:21:05] [I] Start parsing network model
Could not open file ../model.onnx
Could not open file ../model.onnx
[09/27/2024-21:21:05] [E] [TRT] ModelImporter.cpp:688: Failed to parse ONNX model from file: ../model.onnx
[09/27/2024-21:21:05] [E] Failed to parse onnx file
[09/27/2024-21:21:05] [I] Finish parsing network model
[09/27/2024-21:21:05] [E] Parsing model failed
[09/27/2024-21:21:05] [E] Failed to create engine from model or file.
[09/27/2024-21:21:05] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec [TensorRT v8502] # /usr/src/tensorrt/bin/trtexec --onnx=../model.onnx --minShapes=input:1x3x224x224 --optShapes=input:16x3x224x224 --maxShapes=input:32x3x224x224 --saveEngine=model.engine --int8 --useDLACore=0 --allowGPUFallback

Here is the Python script I am using to convert the model to ONNX format:

import torch
import torchvision.models as models

# Load the model
model = models.resnet50(pretrained=True)
model.eval()

# Create a dummy input tensor
dummy_input = torch.randn(1, 3, 224, 224)

# Export the model
torch.onnx.export(model, dummy_input, "model.onnx",
    export_params=True,
    opset_version=11,
    do_constant_folding=True,
    input_names=["input"],
    output_names=["output"],
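    # Mark the batch dimension (axis 0) as dynamic for both input and output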
    dynamic_axes={
        "input": {0: "batch_size"},
        "output": {0: "batch_size"}
    })
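
(As a side note, a quick way to confirm that the exported file really has a symbolic batch dimension, assuming the onnx Python package is installed, would be something like this:)

import onnx

# Load the exported model, run the ONNX checker, and print the input shape;
# the first dimension should appear as dim_param: "batch_size" rather than a fixed size.
onnx_model = onnx.load("model.onnx")
onnx.checker.check_model(onnx_model)
print(onnx_model.graph.input[0].type.tensor_type.shape)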

I am able to generate the engine for a static batch size. How am I supposed to do it for a dynamic batch size?

Am I doing something incorrectly? Sorry if this seems too novice, but I have already checked the documentation and other questions asked on this forum about this, and none of them has helped. Thanks!

Hi,

In your trtexec log, the error message is:

[09/27/2024-21:21:05] [I] Start parsing network model
Could not open file ../model.onnx
Could not open file ../model.onnx

Please check whether the model file actually exists at the path you passed (../model.onnx), relative to the directory you ran trtexec from.
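
For example, a minimal check (assuming you run it from the same working directory where you launched trtexec) could be:

import os

# trtexec was given "../model.onnx"; verify that this path resolves to an
# existing file relative to the current working directory.
print(os.path.abspath("../model.onnx"), os.path.isfile("../model.onnx"))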

Thanks.