Convert detectnet_v2 model to an engine

Hello!
Is there any way to create an explicit-batch detectnet_v2 engine? I tried it with tlt-converter using this command:

$ tlt-converter -k key -p input_1,1x3x544x960,4x3x544x960,16x3x544x960 -t int8 -c cal.bin -w 2048 -d 3,544,960 -o output_cov/Sigmoid,output_bbox/BiasAdd -e detectnet_v2.engine ./detectnet_v2.etlt
Then I tested it in Python TensorRT, and engine.has_implicit_batch_dimension was True. The converter set 16 as max_batch_size. Without -d the converter does not work, although for other TLT models (fpenet, lprnet) it works with only -p. Also, the engine works with the execute_async function, which only works for implicit-batch engines.
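For reference, the -p value packs a tensor name followed by three NxCxHxW shapes (min, opt, max) for the optimization profile. A small sketch of that format (parse_profile is a hypothetical helper, not part of tlt-converter):

```python
def parse_profile(arg):
    """Split a tlt-converter -p value like
    'input_1,1x3x544x960,4x3x544x960,16x3x544x960'
    into the tensor name and (min, opt, max) shape tuples."""
    name, *shapes = arg.split(",")
    min_s, opt_s, max_s = (tuple(int(d) for d in s.split("x")) for s in shapes)
    return name, min_s, opt_s, max_s

name, mn, opt, mx = parse_profile("input_1,1x3x544x960,4x3x544x960,16x3x544x960")
print(name, mn, opt, mx)
# input_1 (1, 3, 544, 960) (4, 3, 544, 960) (16, 3, 544, 960)
```

The max shape (16x3x544x960) is why the converter ended up with max_batch_size 16 here.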

System:
Jetson NX
Jetpack 4.4.1
Python 3.6
TensorRT 7.1.3
CUDA 10.2

Model was created in TLT 2.0.

For example,
For PeopleNet (from NVIDIA NGC):

$ wget https://api.ngc.nvidia.com/v2/models/nvidia/tlt_peoplenet/versions/pruned_v2.0/files/resnet18_peoplenet_pruned.etlt

$ wget https://api.ngc.nvidia.com/v2/models/nvidia/tlt_peoplenet/versions/pruned_v2.0/files/resnet18_peoplenet_int8.txt

$ ./tlt-converter resnet18_peoplenet_pruned.etlt -k tlt_encode -c resnet18_peoplenet_int8.txt -o output_cov/Sigmoid,output_bbox/BiasAdd -d 3,544,960 -i nchw -e peoplenet_int8.engine -m $MAX_BATCH_SIZE -t $INFERENCE_PRECISION -b $BATCH_SIZE

For example,

$ ./tlt-converter resnet18_peoplenet_pruned.etlt -k tlt_encode -c resnet18_peoplenet_int8.txt -o output_cov/Sigmoid,output_bbox/BiasAdd -d 3,544,960 -i nchw -e peoplenet_int8.engine -m 64 -t int8 -b 64

Tried this, but still got an implicit-batch engine.

What do you mean by “implicit engine”? And what is the effect of it?

An engine with implicit batch dimensions.
With an explicit-batch engine I can defer specifying the batch size until runtime.
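To illustrate the difference being asked about: with implicit batch the binding shape omits N (e.g. (3, 544, 960)) and the batch is capped by the build-time max_batch_size, while with explicit batch N is the first dimension of the tensor shape and can be chosen per inference within the optimization profile's min/max range. A plain-Python sketch of that rule (resolve_batch is a hypothetical illustration, not a TensorRT API):

```python
def resolve_batch(binding_shape, requested_batch, max_batch=None, profile=None):
    """Return the full input shape for a requested batch size.

    Implicit batch: binding_shape has no N; requested_batch must not
    exceed the build-time max_batch_size.
    Explicit batch: N becomes the first dimension and may vary at
    runtime between the profile's (min, max) bounds.
    """
    if max_batch is not None:  # implicit-batch engine
        if requested_batch > max_batch:
            raise ValueError("batch exceeds build-time max_batch_size")
    else:  # explicit-batch engine
        min_n, max_n = profile
        if not min_n <= requested_batch <= max_n:
            raise ValueError("batch outside optimization profile range")
    return (requested_batch,) + tuple(binding_shape)

# Implicit: batch bounded by max_batch_size chosen at build time.
print(resolve_batch((3, 544, 960), 4, max_batch=16))    # (4, 3, 544, 960)
# Explicit: batch deferred to runtime within the 1..16 profile.
print(resolve_batch((3, 544, 960), 9, profile=(1, 16))) # (9, 3, 544, 960)
```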

Can you share the error you met?
According to feedback from other users, they can run inference with the etlt model or the detectnet_v2 TRT engine.
You can deploy the etlt model in DeepStream and run inference.

I don’t have any errors or problems running detectnet inference with implicit batch. I’m just asking whether I can build a detectnet_v2 engine with explicit batch. DeepStream creates an implicit-batch engine from the etlt.

The detectnet_v2 engine does not support explicit batch.

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.