Inferring a detectnet_v2 .trt model in Python

I have trained a detectnet_v2 model and converted it to a .trt engine… I am able to run inference with tlt-infer inside the docker, but how can I use this .trt file outside the docker in my own Python application?

@jazeel.jk
Please refer to Run PeopleNet with tensorrt - #21 by carlos.alvarez
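For reference, here is a minimal sketch of the flow described in that topic, using the TensorRT Python API with PyCUDA. The engine path, the assumption that binding 0 is the input, the implicit-batch call and the placeholder preprocessing are all things you will need to adapt to your own detectnet_v2 engine:

import tensorrt as trt
import pycuda.autoinit  # creates a CUDA context on import
import pycuda.driver as cuda

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def load_engine(path):
    # Register TensorRT's built-in plugins before deserializing.
    trt.init_libnvinfer_plugins(TRT_LOGGER, "")
    with open(path, "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
        return runtime.deserialize_cuda_engine(f.read())

engine = load_engine("detectnet_v2.trt")  # hypothetical path
context = engine.create_execution_context()

# Allocate one pinned host buffer and one device buffer per binding.
stream = cuda.Stream()
host_bufs, dev_bufs, bindings = [], [], []
for binding in engine:
    size = trt.volume(engine.get_binding_shape(binding)) * engine.max_batch_size
    dtype = trt.nptype(engine.get_binding_dtype(binding))
    host = cuda.pagelocked_empty(size, dtype)
    dev = cuda.mem_alloc(host.nbytes)
    host_bufs.append(host)
    dev_bufs.append(dev)
    bindings.append(int(dev))

# Copy a preprocessed CHW image into the input buffer (zeros are only a
# placeholder; normalization must match how the model was trained).
host_bufs[0][:] = 0
cuda.memcpy_htod_async(dev_bufs[0], host_bufs[0], stream)
# For an explicit-batch engine, use context.execute_async_v2(bindings, stream.handle) instead.
context.execute_async(batch_size=1, bindings=bindings, stream_handle=stream.handle)
for host, dev in zip(host_bufs[1:], dev_bufs[1:]):
    cuda.memcpy_dtoh_async(host, dev, stream)
stream.synchronize()
# host_bufs[1:] now hold the raw coverage/bbox tensors to post-process.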

Hi @Morganh,
Thanks, that worked for inference with the .trt file built from detectnet_v2. But now I have created a .engine file from a YOLO model and I am getting the following error. I am running my code inside the container itself.

[TensorRT] ERROR: INVALID_ARGUMENT: getPluginCreator could not find plugin BatchTilePlugin_TRT version 1
[TensorRT] ERROR: safeDeserializationUtils.cpp (293) - Serialization Error in load: 0 (Cannot deserialize plugin since corresponding IPluginCreator not found in Plugin Registry)
[TensorRT] ERROR: INVALID_STATE: std::exception
[TensorRT] ERROR: INVALID_CONFIG: Deserialize the cuda engine failed.
Traceback (most recent call last):
  File "main.py", line 36, in <module>
    context = trt_engine.create_execution_context()
AttributeError: 'NoneType' object has no attribute 'create_execution_context'
root@605c1edbf876:/workspace/tlt-experiments/box_jeston# 

Could you please help me if you have a solution?
Thanks…

Should I build TensorRT OSS inside the container? I read that these plugins are required for YOLOv3: batchTilePlugin, resizeNearestPlugin and batchedNMSPlugin. But can I build TensorRT OSS inside the NVIDIA TLT container?

Please refer to GitHub - NVIDIA-AI-IOT/deepstream_tao_apps: Sample apps to demonstrate how to deploy models trained with TAO on DeepStream, mentioned in https://docs.nvidia.com/metropolis/TLT/tlt-getting-started-guide/text/deploying_to_deepstream.html#generating-an-engine-using-tlt-converter

This is ONLY needed when running SSD, DSSD, RetinaNet, YOLOV3 and MaskRCNN models, because some TRT plugins required by these models, such as BatchTilePlugin, are not supported by the native TensorRT 7.x package.
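As a quick way to see what the package inside the container actually provides, you can list the registered plugin creators from Python (a small sketch; BatchTilePlugin_TRT will only appear once an OSS-built libnvinfer_plugin is in place):

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
# Register the plugins shipped with the currently installed libnvinfer_plugin.
trt.init_libnvinfer_plugins(TRT_LOGGER, "")

# Print every plugin creator TensorRT can see in this environment.
for creator in trt.get_plugin_registry().plugin_creator_list:
    print(creator.name, creator.plugin_version)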

Moving this topic into TLT forum.

Yes, I saw that (“TensorRT OSS on x86”)… But I want to run the code inside the NVIDIA TLT container itself. Is it possible to build TensorRT OSS inside the container?

Also, inside the NVIDIA TLT container the TensorRT version is 7.0.0.11. Should I downgrade it so that batchTilePlugin is supported?

See Invalid device function error when export .tlt file to .etlt - #16 by Morganh; it is possible to build OSS inside the docker.

Also, inside the NVIDIA TLT container the TensorRT version is 7.0.0.11. Should I downgrade it so that batchTilePlugin is supported?

No need to downgrade. Build the OSS plugins from the branch matching the container's TensorRT version:

git clone -b release/7.0 https://github.com/NVIDIA/TensorRT.git TensorRT && \

Thank you so much… And how can I determine the GPU_ARCHS value (the -DGPU_ARCHS cmake variable) for my GPU? My GPU is a GeForce GTX 1660 Ti/PCIe/SSE2.

I was not able to get it from deviceQuery.cpp.

This is my nvidia-smi output:

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.100      Driver Version: 440.100      CUDA Version: 10.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 166...  Off  | 00000000:01:00.0  On |                  N/A |
| N/A   49C    P8     8W /  N/A |   1159MiB /  5941MiB |     32%      Default |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
+-----------------------------------------------------------------------------+

Reference:

Your card's compute capability is 7.5. Please set GPU_ARCHS to 75.
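If deviceQuery is not handy, the compute capability can also be read from Python with PyCUDA (a small sketch, assuming pycuda is installed):

import pycuda.driver as cuda

cuda.init()
major, minor = cuda.Device(0).compute_capability()
# GPU_ARCHS is the compute capability without the dot, e.g. 7.5 -> 75.
print(f"GPU_ARCHS = {major}{minor}")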

Thanks @Morganh… I followed the full steps you mentioned for installing TRT OSS into the base docker, but the same error remains. After installing TRT OSS it should pick up those required plugins, right? But the “could not find plugin BatchTilePlugin_TRT version 1” error is still there…

[TensorRT] ERROR: INVALID_ARGUMENT: getPluginCreator could not find plugin BatchTilePlugin_TRT version 1
[TensorRT] ERROR: safeDeserializationUtils.cpp (293) - Serialization Error in load: 0 (Cannot deserialize plugin since corresponding IPluginCreator not found in Plugin Registry)
[TensorRT] ERROR: INVALID_STATE: std::exception
[TensorRT] ERROR: INVALID_CONFIG: Deserialize the cuda engine failed.
Traceback (most recent call last):
  File "main.py", line 36, in <module>
    context = trt_engine.create_execution_context()
AttributeError: 'NoneType' object has no attribute 'create_execution_context'
root@605c1edbf876:/workspace/tlt-experiments/box_jeston#

Have you replaced the plugin “libnvinfer_plugin.so*”?
See more in deepstream_tao_apps/TRT-OSS/Jetson at master · NVIDIA-AI-IOT/deepstream_tao_apps · GitHub
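Besides checking the files on disk, you can also confirm which libnvinfer_plugin the Python process itself resolves at runtime. A rough sketch (it assumes the tensorrt bindings pull in libnvinfer_plugin and that /proc/self/maps is available, i.e. Linux):

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
# Initializing the plugins forces libnvinfer_plugin to be loaded.
trt.init_libnvinfer_plugins(TRT_LOGGER, "")

# /proc/self/maps lists the shared objects mapped into this process.
with open("/proc/self/maps") as maps:
    plugin_libs = {line.split()[-1] for line in maps if "libnvinfer_plugin" in line}
print(plugin_libs)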

Yes, I did these steps:

cp out/libnvinfer_plugin.so.7.0.0.1 /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.7.0.0 && \

cp out/libnvinfer_plugin_static.a /usr/lib/x86_64-linux-gnu/libnvinfer_plugin_static.a && \

cd ../../../ && \

rm -rf trt_oss_src

Can you share the result of the command below?
$ ll -sh /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so*

Please take a look at my comment shared in the topic below.

This is the output of ll -sh /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so*:

 0 lrwxrwxrwx 1 root root   26 Dec 17  2019 /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so -> libnvinfer_plugin.so.7.0.0
   0 lrwxrwxrwx 1 root root   26 Dec 17  2019 /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.7 -> libnvinfer_plugin.so.7.0.0
4.1M -rw-r--r-- 1 root root 4.1M Jan 13 16:47 /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.7.0.0

Please check my comment in the above link.
Please see below.
Please follow the exact steps shared in the GitHub repo.

Original:

$ ll -sh /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so*

0 lrwxrwxrwx 1 root root 26 Apr 26 16:41 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so -> libnvinfer_plugin.so.7.1.0
0 lrwxrwxrwx 1 root root 26 Apr 26 16:41 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7 -> libnvinfer_plugin.so.7.1.0
4.5M -rw-r--r-- 1 root root 4.5M Apr 26 16:38 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.1.0

If you run:

$ sudo mv /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.1.0 ~/libnvinfer_plugin.so.7.1.0.bak

$ sudo cp {TRT_SOURCE}/build/out/libnvinfer_plugin.so.7.0.0.1 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.1.0

$ sudo ldconfig

then:

nvidia@nvidia:~$ ll /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so*
lrwxrwxrwx 1 root root 26 Apr 27 08:53 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so -> libnvinfer_plugin.so.7.1.0*
lrwxrwxrwx 1 root root 26 Apr 27 08:53 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7 -> libnvinfer_plugin.so.7.1.0*
lrwxrwxrwx 1 root root 26 Apr 27 08:55 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.0.0 -> libnvinfer_plugin.so.7.1.0*
-rwxr-xr-x 1 root root 4652648 Apr 27 08:55 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.1.0*

That’s the expected result now.

Thanks… I followed the steps given in the GitHub repo and the installation was successful. But sudo ldconfig is not returning any logs.