I am working with DeepStream inference using TAO pretrained models, specifically Deepstream_LPR_app. When I run the inference pipeline, it generates the following engines from the .etlt models:
us_lprnet_baseline18_deployable.etlt_bX_gpu0_fp16.engine
resnet18_trafficcamnet_pruned.etlt_bX_gpu0_fp16.engine
usa_pruned.etlt_bX_gpu0_fp16.engine
I have been investigating and I don't know where the bX part of the name comes from. I thought it was the batch size specified in the config files; however, when I generated those engines on a Jetson Xavier NX and a Nano, bX had different values even though I used the same batch-size parameter. Is there any documentation on the engine filename format?
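For context, the filenames above appear to follow a fixed pattern. This is a minimal sketch of that apparent pattern; the field meanings (batch size, GPU id, precision) are my reading of the names, not something confirmed by documentation:

```python
# Apparent pattern of the auto-generated engine filenames shown above
# (an assumption inferred from the names, not an official specification):
#   <model file>_b<batch size>_gpu<gpu id>_<precision>.engine

def engine_name(model_file: str, batch_size: int, gpu_id: int, precision: str) -> str:
    """Reconstruct the engine filename pattern seen in the question."""
    return f"{model_file}_b{batch_size}_gpu{gpu_id}_{precision}.engine"

# Example: with batch size 16 this would reproduce the LPR engine name above
print(engine_name("us_lprnet_baseline18_deployable.etlt", 16, 0, "fp16"))
# us_lprnet_baseline18_deployable.etlt_b16_gpu0_fp16.engine
```

If the bX field really is the build-time batch size, the question becomes why it differs between the two Jetson boards when the config value is the same.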