I have a .etlt file of a YOLOv3 model trained on images of size (HxW) 704x960. However, when I try to create an .engine file from it via the DeepStream model config file and change the inference input dimension to something else (e.g., 1376x1920), I get a dimension mismatch error. The relevant DeepStream property is infer-dims.
I want to double-check: does infer-dims need to match the training dimensions? It is not clear from the documentation of either TLT or DeepStream whether the inference dims need to match the training dims.
My reading of the TLT docs is that the inference dimension must match the training dimension, but maybe I'm wrong?
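For concreteness, this is roughly the nvinfer config fragment in question (the model path and key are placeholders; the dims shown are the training dims, in C;H;W order):

```
[property]
tlt-encoded-model=yolov3.etlt
tlt-model-key=<key>
# infer-dims is channels;height;width
infer-dims=3;704;960
```

Changing the H;W here to anything other than 704;960 is what triggers the mismatch error.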
From the docs of tlt-converter:
-d <input_dimensions>
Comma-separated list of input dimensions that should match the dimensions used for tlt-export.
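So the converter invocation would look something like the following (the -o output node name and file paths are placeholders from my setup, not taken from the docs), with -d pinned to the export-time dims:

```shell
tlt-converter -k <key> \
    -d 3,704,960 \
    -o BatchedNMS \
    -e yolov3.engine \
    yolov3.etlt
```

If -d must match the dims used at tlt-export time, the question reduces to where tlt-export gets its dims from.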
from the doc of tlt-export:
tlt-export [-h] {classification, detectnet_v2, ssd, dssd, faster_rcnn, yolo, retinanet}
-m <path to the .tlt model file generated by tlt train>
-k <key>
[-o <path to output file>]
[--cal_data_file <path to tensor file>]
              [--cal_image_dir <path to the directory images to calibrate the model>]
[--cal_cache_file <path to output calibration file>]
[--data_type <Data type for the TensorRT backend during export>]
[--batches <Number of batches to calibrate over>]
[--max_batch_size <maximum trt batch size>]
              [--max_workspace_size <maximum workspace size>]
[--batch_size <batch size to TensorRT engine>]
[--experiment_spec <path to experiment spec file>]
[--engine_file <path to the TensorRT engine file>]
[--verbose Verbosity of the logger]
[--force_ptq Flag to force PTQ]
I see no explicit option to specify the input dimension, which suggests it is inferred from the training config (the --experiment_spec file) or from the model's input layer?
Since the input dims for the engine-conversion step (.etlt → .engine) must match the input dims used for the export step (.tlt → .etlt), and the export dims are in turn the same as the training dims, I think we can't change the input dims at the engine-conversion step?
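In other words, my assumed chain is the following (paths and the spec filename are placeholders; the comment reflects my reading of the docs, not a confirmed behavior):

```shell
# dims fixed at training time, recorded in the experiment spec
tlt-export yolo -m yolov3.tlt -k <key> \
    --experiment_spec spec.txt \
    -o yolov3.etlt
```

so every downstream step, including infer-dims in the DeepStream config, would be locked to the training dims. Can someone confirm?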