Hello!
Is there any way to create an explicit-batch detectnet_v2 engine? I tried it with tlt-converter using the command tlt-converter -k key -p input_1,1x3x544x960,4x3x544x960,16x3x544x960 -t int8 -c cal.bin -w 2048 -d 3,544,960 -o output_cov/Sigmoid,output_bbox/BiasAdd -e detectnet_v2.engine ./detectnet_v2.etlt.
Then I tested it in Python TensorRT and engine.has_implicit_batch_dimension was True. The converter set batch 16 as max_batch_size. Without -d the converter doesn't work, but for other TLT models (fpenet, lprnet) it works with only -p. Also, the engine works with the execute_async function, which only works with implicit-batch engines.
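For context, the -p argument encodes a dynamic-shape optimization profile as input name plus min, opt, and max shapes. Here is a minimal illustrative sketch of how that spec can be read (parse_profile_spec is a hypothetical helper written for this post, not part of tlt-converter); note the max shape's leading dimension (16) matches the max_batch_size I observed on the built engine:

```python
def parse_profile_spec(spec):
    """Parse a tlt-converter -p value of the form
    name,min,opt,max where each shape is 'x'-separated,
    e.g. 1x3x544x960. Illustrative helper only."""
    name, min_s, opt_s, max_s = spec.split(",")
    to_shape = lambda s: tuple(int(d) for d in s.split("x"))
    return {"input": name,
            "min": to_shape(min_s),
            "opt": to_shape(opt_s),
            "max": to_shape(max_s)}

# The profile from my command line:
profile = parse_profile_spec("input_1,1x3x544x960,4x3x544x960,16x3x544x960")
print(profile["max"][0])  # leading dim of max shape, i.e. batch 16
```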
Could you share the error you are seeing?
According to feedback from other users, they can run inference with both the etlt model and the detectnet_v2 TRT engine.
You can deploy the etlt model in DeepStream and run inference.
I don't have any errors or problems running detectnet inference with implicit batch. I'm just asking whether I can build a detectnet_v2 engine with explicit batch. DeepStream creates an implicit-batch engine from the etlt.