Name of generated engine file

I am working with DeepStream inference using TAO pretrained models, specifically the deepstream_lpr_app sample. When I run the inference pipeline, it generates the following engines from the .etlt models:

us_lprnet_baseline18_deployable.etlt_bX_gpu0_fp16.engine
resnet18_trafficcamnet_pruned.etlt_bX_gpu0_fp16.engine
usa_pruned.etlt_bX_gpu0_fp16.engine

I have been investigating and I can't work out where the bX part of the name comes from. I thought it was the batch size specified in the config files; however, when I generated these engines on a Jetson Xavier NX and a Jetson Nano, bX had different values even though I used the same batch-size parameter. Is there any documentation describing the engine file name format?

The engine file name depends on the command you use to build it. For example, in the deepstream_lpr_app repository on GitHub, the tao-converter command names the output engine explicitly with the -e option:

 ./tao-converter -k nvidia_tlt -p image_input,1x3x48x96,4x3x48x96,16x3x48x96 \
           models/LP/LPR/us_lprnet_baseline18_deployable.etlt -t fp16 \
           -e models/LP/LPR/us_lprnet_baseline18_deployable.etlt_b16_gpu0_fp16.engine
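For reference, here is a minimal sketch of how a converted engine is typically referenced from an nvinfer config; the exact paths, file names, and values are illustrative assumptions and may differ from the configs shipped with the app:

 # Hypothetical nvinfer config excerpt (paths and values are assumptions)
 [property]
 tlt-encoded-model=../models/LP/LPR/us_lprnet_baseline18_deployable.etlt
 model-engine-file=../models/LP/LPR/us_lprnet_baseline18_deployable.etlt_b16_gpu0_fp16.engine
 batch-size=16
 network-mode=2

If the engine named in model-engine-file is not found, nvinfer will build one itself from the .etlt model, so keeping the name consistent with the tao-converter output avoids an unnecessary rebuild at startup.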
