I have already installed TensorRT following the tutorial, using the Debian installation method.
After installation, I verified the installed packages with dpkg -l | grep TensorRT. The console output was:
itc@itc-Precision-7920-Tower:/usr/src/tensorrt/samples/sampleUffFasterRCNN$ dpkg -l | grep TensorRT
ii graphsurgeon-tf 7.0.0-1+cuda10.0 amd64 GraphSurgeon for TensorRT package
ii libnvinfer-bin 7.0.0-1+cuda10.0 amd64 TensorRT binaries
ii libnvinfer-dev 7.0.0-1+cuda10.0 amd64 TensorRT development libraries and headers
ii libnvinfer-doc 7.0.0-1+cuda10.0 all TensorRT documentation
ii libnvinfer-plugin-dev 7.0.0-1+cuda10.0 amd64 TensorRT plugin libraries
ii libnvinfer-plugin7 7.0.0-1+cuda10.0 amd64 TensorRT plugin libraries
ii libnvinfer-samples 7.0.0-1+cuda10.0 all TensorRT samples
ii libnvinfer5 5.1.5-1+cuda10.0 amd64 TensorRT runtime libraries
ii libnvinfer7 7.0.0-1+cuda10.0 amd64 TensorRT runtime libraries
ii libnvonnxparsers-dev 7.0.0-1+cuda10.0 amd64 TensorRT ONNX libraries
ii libnvonnxparsers7 7.0.0-1+cuda10.0 amd64 TensorRT ONNX libraries
ii libnvparsers-dev 7.0.0-1+cuda10.0 amd64 TensorRT parsers libraries
ii libnvparsers7 7.0.0-1+cuda10.0 amd64 TensorRT parsers libraries
ii python3-libnvinfer 7.0.0-1+cuda10.0 amd64 Python 3 bindings for TensorRT
ii python3-libnvinfer-dev 7.0.0-1+cuda10.0 amd64 Python 3 development package for TensorRT
ii tensorrt 7.0.0.11-1+cuda10.0 amd64 Meta package of TensorRT
ii uff-converter-tf 7.0.0-1+cuda10.0 amd64 UFF converter for TensorRT package
So uff-converter-tf was installed.
But when I run the command
convert-to-uff -p config.py -O dense_class/Softmax -O dense_regress/BiasAdd -O proposal uff_faster_rcnn/faster_rcnn.pb
I get the error
convert-to-uff: command not found
What could be wrong?
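For reference, here is a diagnostic sketch I can run to narrow this down (assumes a Debian-based system with dpkg; the candidate directories are guesses, not confirmed install locations, and dpkg -L uff-converter-tf would list the real ones):

```shell
#!/bin/sh
# Check whether the converter entry point is on PATH; if not, look in
# directories where Python-based CLI tools are commonly placed.
if command -v convert-to-uff >/dev/null 2>&1; then
    echo "convert-to-uff found at: $(command -v convert-to-uff)"
else
    echo "convert-to-uff is not on PATH"
    # Hypothetical candidate locations -- verify with: dpkg -L uff-converter-tf
    for d in /usr/local/bin /usr/lib/python3/dist-packages/uff/bin; do
        ls "$d" 2>/dev/null | grep -i uff
    done
fi
```

If the script reports the converter is not on PATH, the question becomes where the Debian package actually installed it and whether that location needs to be added to PATH or invoked directly.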