I am working with DeepStream inference using TAO pretrained models, specifically Deepstream_LPR_app. When I run the inference pipeline, it generates the following engines from the .etlt models:
I have been investigating, but I can't figure out where the bX notation in the engine names comes from. I thought it was the batch size specified in the config files, but when I generated these engines on a Jetson Xavier NX and a Nano, the bX value differed even though I used the same batch-size parameter on both. Is there any documentation on the engine file-name format?
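For context, the engine names I'm seeing appear to follow the pattern `<model-file>_b<batch>_gpu<device-id>_<precision>.engine`, which I believe is what the nvinfer element emits when it serializes an engine. A minimal sketch that splits a name of that shape into its parts (the pattern and the example filename are my assumptions, not from any official spec):

```python
import re

# Assumed DeepStream serialized-engine name pattern:
#   <model>_b<batch>_gpu<device-id>_<precision>.engine
ENGINE_RE = re.compile(
    r"(?P<model>.+)_b(?P<batch>\d+)_gpu(?P<gpu>\d+)_(?P<precision>fp32|fp16|int8)\.engine$"
)

def parse_engine_name(name: str) -> dict:
    """Split a serialized-engine file name into its components."""
    m = ENGINE_RE.match(name)
    if m is None:
        raise ValueError(f"unrecognized engine name: {name}")
    return m.groupdict()

# Hypothetical example filename for illustration:
parts = parse_engine_name("lpd_model.etlt_b16_gpu0_fp16.engine")
print(parts)
```

If this pattern is right, then bX should encode the batch size baked into the engine, which makes the differing values across the Xavier NX and Nano all the more confusing.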