Hello,
I use a GTX 1080Ti
I created a small repo with a very simplified version of the code. You can set TRT_mode to true or false. Don't forget to set PATH_TO_CKPT and PATH_TO_LABELS as well.
Link : https://github.com/jcRisch/TMP_NVIDIA/blob/master/DEMO_NVIDIA.py
Also, lowering the batch size doesn’t change anything.
Maybe my TensorRT install is wrong (although I got no errors when using it). Is there a way to check the installation?
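One quick sanity check (a minimal sketch, assuming the TensorRT Python bindings were installed alongside the C++ libraries; `check_tensorrt` is just an illustrative helper name) is to try importing the `tensorrt` module and printing its version:

```python
def check_tensorrt():
    """Return the installed TensorRT version string, or None if the
    Python bindings cannot be imported."""
    try:
        import tensorrt as trt
    except ImportError:
        return None
    return trt.__version__

if __name__ == "__main__":
    version = check_tensorrt()
    if version is None:
        print("TensorRT Python bindings not found")
    else:
        print("TensorRT version:", version)
```

If the import succeeds and the reported version matches the libraries on disk, the Python side of the install is at least consistent.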
Regarding TensorRT installation: I usually recommend trying the TensorRT container, which removes a lot of the dependency issues. The containers are available from NVIDIA GPU Cloud (NGC), and accounts are free: https://www.nvidia.com/en-us/gpu-cloud/
Per Engineering, you can run the SSD network in native TRT; we have a Python and a C++ sample demonstrating that. The Python sample can also compare against native TF.
See sampleUffSSD for C++ and the uff_ssd sample for Python.
I tried TRT 5 GA, but it doesn't work.
Error:
tensorflow.python.framework.errors_impl.NotFoundError: libnvinfer.so.4: cannot open shared object file: No such file or directory
I think TensorFlow asked for TRT 4 but found TRT 5:
me@me /opt/TensorRT-5.0.2.6/lib$ ls
libnvcaffe_parser.a          libnvonnxparser_runtime.so.0
libnvcaffe_parser.so         libnvonnxparser_runtime.so.0.1.0
libnvcaffe_parser.so.5       libnvonnxparser_runtime_static.a
libnvcaffe_parser.so.5.0.2   libnvonnxparser.so
libnvinfer_plugin.so         libnvonnxparser.so.0
libnvinfer_plugin.so.5       libnvonnxparser.so.0.1.0
libnvinfer_plugin.so.5.0.2   libnvonnxparser_static.a
libnvinfer_plugin_static.a   libnvparsers.so
libnvinfer.so                libnvparsers.so.5
libnvinfer.so.5              libnvparsers.so.5.0.2
libnvinfer.so.5.0.2          libnvparsers_static.a
libnvinfer_static.a          libprotobuf.a
libnvonnxparser_runtime.so   libprotobuf-lite.a
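The listing only contains the 5.x sonames, which is consistent with the `libnvinfer.so.4` NotFoundError: the TensorFlow build apparently expects the TRT 4 library. A small sketch (assuming the standard ctypes module and that the TensorRT lib directory is on the loader path; `probe_libnvinfer` is an illustrative helper name) that probes which libnvinfer major versions the dynamic loader can actually resolve:

```python
import ctypes

def probe_libnvinfer(majors=(4, 5)):
    """Try to dlopen each libnvinfer major version and return a
    {soname: bool} mapping of which ones the dynamic loader resolves."""
    results = {}
    for major in majors:
        soname = "libnvinfer.so.%d" % major
        try:
            ctypes.CDLL(soname)
            results[soname] = True
        except OSError:
            results[soname] = False
    return results

if __name__ == "__main__":
    for soname, found in sorted(probe_libnvinfer().items()):
        print(soname, "->", "found" if found else "not found")
```

If `libnvinfer.so.4` comes back "not found" while `libnvinfer.so.5` is found, the mismatch is confirmed: that TensorFlow build needs a TRT 4.x install (or a TensorFlow build linked against TRT 5).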