TensorRT

The container does not seem to include TensorRT. Is there a way to test the exported model in TensorRT and eventually in the DeepStream SDK? I have downloaded the latest version of the TLT container, but it still bundles CUDA 9, while DeepStream requires CUDA 10. Will the models exported from the TLT container be compatible with DeepStream? The user's guide and website suggest this compatibility, but with the different CUDA versions I'm not sure how to accomplish it.

Solved: I copied the exported .etlt model and the tlt-converter executable to my target environment with TensorRT and DeepStream, and engine generation works fine there.
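For reference, the conversion command looked roughly like this; the key, output blob names, and input dimensions below are placeholders, so substitute the ones that match your own exported model:

# illustrative only: key, blob names and dims depend on your model
tlt-converter -k $NGC_API_KEY \
  -o output_cov/Sigmoid,output_bbox/BiasAdd \
  -d 3,384,1248 \
  -m 1 \
  -t fp16 \
  -e resnet18_detector.engine \
  resnet18_detector.etlt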

Glad it worked for you, Rohit. That is the correct way.

Thanks,
Adil

@rohit.rawat
How is the generated engine used in DeepStream?

Thanks

@szRyan
To use the generated engine, just update the model-engine-file parameter to point to the engine file you created with tlt-converter.
Also make sure the batch-size in the configuration file matches the batch size used in tlt-converter, since DeepStream cannot modify the engine at runtime.
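For example (values here are hypothetical), an engine built with a maximum batch size of 4:

tlt-converter -k $NGC_API_KEY -o output_cov/Sigmoid,output_bbox/BiasAdd \
  -d 3,384,1248 -m 4 -t fp16 -e model_b4.engine resnet18_detector.etlt

should be paired with batch-size=4 in the nvinfer config file.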

@vpraveen

Thank you for your reply.

I have generated the engine in the DeepStream environment from the TLT-exported model. However, I could not find anything in the DeepStream documentation showing how to use this engine, i.e. an example of referencing it in a DeepStream configuration file. I am using a ResNet18 detection model.

@szRyan

You can take an existing config file, such as this one from the deepstream-test4 sample:

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-engine-file=../models/Primary_Detector/resnet10.caffemodel_b1_int8.engine
model-file=../models/Primary_Detector/resnet10.caffemodel
proto-file=../models/Primary_Detector/resnet10.prototxt
labelfile-path=../models/Primary_Detector/labels.txt
int8-calib-file=../models/Primary_Detector/cal_trt4.bin
batch-size=1
network-mode=1
num-detected-classes=4
interval=0
gie-unique-id=1
parse-func=4
output-blob-names=conv2d_bbox;conv2d_cov/Sigmoid

Delete the model-file and proto-file lines, then update model-engine-file, int8-calib-file (only needed if you are running INT8) and labelfile-path to point to your engine file, calibration cache and labels. (network-mode selects the precision: 0 for FP32, 1 for INT8, 2 for FP16.)

[property]
gpu-id=0
model-engine-file=../models/mymodel/model.engine
labelfile-path=../models/mymodel/labels.txt
batch-size=1
network-mode=1
num-detected-classes=4
interval=0
gie-unique-id=1
parse-func=4
output-blob-names=conv2d_bbox;conv2d_cov/Sigmoid
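
If you are launching through deepstream-app rather than one of the test apps, reference this inference config from the [primary-gie] group of the top-level application config. The file names below are just examples:

[primary-gie]
enable=1
gpu-id=0
gie-unique-id=1
batch-size=1
config-file=config_infer_primary_mymodel.txt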