Error when exporting to a TRT engine in TLT

I used the command below:
tlt-export detectnet_v2 -m /workspace/tlt-experiments/detectnet_v2/weights/resnet_18retrain.tlt -o /workspace/tlt-experiments/detectnet_v2/weights/resnet18_int8.etlt -k NHRvZzAwbHFncTk0MXJ0YmwwbXB1bGxhbnU6MjYzNzc2MDctYzQ5MC00NjkxLThkODAtODM0NDc3ZTRhNTNh --cal_data_file /workspace/tlt-experiments/detectnet_v2/weights/calibration.tensor --data_type int8 --batches 8 --max_batch_size 8 --cal_cache_file /workspace/tlt-experiments/detectnet_v2/weights/calibration_cache.bin --engine_file /workspace/tlt-experiments/detectnet_v2/weights/resnet_18trt.engine

The tensorfile conversion command is:
tlt-int8-tensorfile detectnet_v2 -e /workspace/tlt-experiments/detectnet_v2/detectnet_v2_prune_resnet18_kitti.txt -o /workspace/tlt-experiments/detectnet_v2/weights/calibration.tensor -m 8
The tensorfile was generated successfully.

The error occurs in the tlt-export command:

Using TensorFlow backend.
NOTE: UFF has been tested with TensorFlow 1.14.0.
WARNING: The version of TensorFlow installed on this system is not guaranteed to work with UFF.
DEBUG [/usr/lib/python2.7/dist-packages/uff/converters/tensorflow/converter.py:96] Marking ['output_cov/Sigmoid', 'output_bbox/BiasAdd'] as outputs
[TensorRT] INFO: Detected 1 inputs and 2 output network tensors.
[TensorRT] WARNING: Current optimization profile is: 0. Please ensure there are no enqueued operations pending in this context prior to switching profiles
[TensorRT] INFO: Starting Calibration with batch size 16.
DEPRECATED: This variant of get_batch is deprecated. Please use the single argument variant described in the documentation instead.
Traceback (most recent call last):
  File "/usr/local/bin/tlt-export", line 8, in <module>
    sys.exit(main())
  File "./common/export/app.py", line 234, in main
  File "./common/export/base_exporter.py", line 411, in export
  File "./modulus/export/_tensorrt.py", line 515, in __init__
  File "./modulus/export/_tensorrt.py", line 414, in __init__
  File "./modulus/export/_tensorrt.py", line 104, in get_batch
ValueError: Data file batch size (4) < request batch size (16)

Another query: why is the TensorFlow version in the docker not 1.14? My docker is TLT-2.

I see “Starting Calibration with batch size 16” in the log. Could you please confirm the exact tlt-export command you ran?

It is solved now. The calibration tensorfile inherits its batch size from the training spec, and that batch size must be at least the calibrator's requested batch size (16 here). I retrained the pruned model with batch size 16, then recreated the tensorfile and exported with batch size 16, and it works.

My command is:

tlt-export detectnet_v2 -m /workspace/tlt-experiments/detectnet_v2/resnet18/prune_0.5/pruned_models/weights/resnet_18retrain.tlt -o /workspace/tlt-experiments/detectnet_v2/resnet18/prune_0.5/pruned_models/weights/resnet18_int8.etlt -k NHRvZzAwbHFncTk0MXJ0YmwwbXB1bGxhbnU6MjYzNzc2MDctYzQ5MC00NjkxLThkODAtODM0NDc3ZTRhNTNh --cal_data_file /workspace/tlt-experiments/detectnet_v2/resnet18/prune_0.5/pruned_models/weights/calibration.tensor --data_type int8 --batches 16 --max_batch_size 8 --cal_cache_file /workspace/tlt-experiments/detectnet_v2/resnet18/prune_0.5/pruned_models/weights/calibration_cache.bin --engine_file /workspace/tlt-experiments/detectnet_v2/resnet18/prune_0.5/pruned_models/weights/resnet_18trt.engine
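To summarize the fix, the batch sizes have to agree across the whole pipeline. A non-runnable sketch with abbreviated paths and placeholder names (SPEC.txt, model.tlt, $KEY are stand-ins; the flags are the ones used in the commands above, and I am assuming -m here means the number of batches to dump, matching --batches in the export):

```shell
# Sketch only: keep one calibration batch size across the pipeline.
BS=16

# 1. Retrain the pruned model with batch size $BS in the training spec.
# 2. Generate the calibration tensorfile with the same number of batches:
tlt-int8-tensorfile detectnet_v2 -e SPEC.txt -o calibration.tensor -m $BS
# 3. Export, requesting the same number of calibration batches:
tlt-export detectnet_v2 -m model.tlt -o model.etlt -k $KEY \
    --cal_data_file calibration.tensor --data_type int8 \
    --batches $BS --max_batch_size 8 \
    --cal_cache_file calibration_cache.bin --engine_file model.engine
```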

In the export command above, there is --max_batch_size 8. Is that the batch size I need to use when running the TensorRT Engine?

Then there is a warning:

NOTE: UFF has been tested with TensorFlow 1.14.0.
WARNING: The version of TensorFlow installed on this system is not guaranteed to work with UFF.

I used the TLT 2.0 docker. Why is the docker not updated to TensorFlow 1.14?
Is that warning unimportant?

The UFF warning can be ignored. The 2.0_dp docker uses TensorFlow 1.13, but we have found no issues with UFF during export, so it is safe to ignore.

Additionally, I followed this document: https://developer.nvidia.com/blog/creating-a-real-time-license-plate-detection-and-recognition-app/

The following command was executed to export a model.

$ tlt detectnet_v2 export -m /workspace/openalpr/exp_unpruned/weights/model.tlt -o /workspace/openalpr/export/unpruned_model.etlt --cal_cache_file /workspace/openalpr/export/calibration.bin -e /workspace/openalpr/SPECS_train.txt -k nvidia_tlt --cal_image_dir /workspace/openalpr/lpd/data/image --data_type int8 --batch_size 4 --batches 10 –-engine_file /workspace/openalpr/export/unpruned_int8.trt

The command shows the following error:
detectnet_v2 export: error: invalid choice: ‘–-engine_file’ (choose from ‘calibration_tensorfile’, ‘dataset_convert’, ‘evaluate’, ‘export’, ‘inference’, ‘prune’, ‘train’)
2021-08-20 17:25:54,051 [INFO] tlt.components.docker_handler.docker_handler: Stopping container.

How can I solve this error?

@moche.chan
Please create a new forum topic. Thanks.

Your pasted command contains a long dash (an en-dash, “–”) instead of two ASCII hyphens. Please change

–-engine_file

to

--engine_file
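For anyone hitting the same “invalid choice” error after copying commands from the blog post: web pages often render two ASCII hyphens as a single en-dash (U+2013), which the CLI's argument parser does not recognize as a flag prefix. A quick way to check a pasted flag (the strings below are illustrative):

```python
# The flag as rendered by many web pages: en-dash (U+2013) followed by one hyphen
pasted = "\u2013-engine_file"
# The flag the parser actually expects: two ASCII hyphen-minus characters (U+002D)
correct = "--engine_file"

print(pasted == correct)      # False: the first character differs
print(hex(ord(pasted[0])))    # 0x2013 (en-dash)
print(hex(ord(correct[0])))   # 0x2d (hyphen-minus)
```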