I decided to explore the DRIVE SDK on my Ubuntu host PC. I installed the complete DRIVE SDK on the Ubuntu host PC using the most recent DriveInstall 5.0.5.0bL. After installation, I launched the installer again and verified that all components were showing as “Installed”.
Now I am following a recent NVIDIA webinar on converting a TensorFlow model to TensorRT (https://github.com/NVIDIA-Jetson/tf_to_trt_image_classification). I understand that this workflow is meant to be run on the Jetson TX2 kit, but I assume I should be able to replicate the creation of the inference engine on my Ubuntu host PC as well. The project requires both OpenCV and TensorFlow, which are not bundled with the DRIVE SDK; I installed those separately.
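For completeness, this is roughly how I installed the two prerequisites (a minimal sketch; the exact package names and TensorFlow build depend on your Python and CUDA setup):

```bash
# OpenCV from the Ubuntu repositories (building from source is also an option).
sudo apt-get update
sudo apt-get install libopencv-dev python-opencv

# TensorFlow via pip; the GPU build assumes a matching CUDA/cuDNN install.
pip install tensorflow-gpu
```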
My problem is that I am not able to reference the installed TensorRT component on my Ubuntu host PC. I do have a folder named “tensorrt” under /usr/local/nvidia/, with all the relevant files. But when I try to build the project as per the README in the link above, I get an error saying “NvInfer.h: no such file or directory”. If I plow on somehow, the build throws further errors about missing nvinfer and nvparsers libraries, even though they are present on my PC. Following another forum post, I tried “dpkg -l | grep tensorrt” in the shell, but it returned nothing.
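By “plow on” I mean pointing the compiler and linker at the DriveInstall location by hand, roughly like this (a minimal sketch; my_app.cpp is a placeholder, and I am assuming the headers and libraries sit directly under /usr/local/nvidia/tensorrt/):

```bash
# Point g++ at the DriveInstall copy of TensorRT explicitly
# (placeholder source file and assumed directory layout).
g++ -std=c++11 my_app.cpp \
    -I/usr/local/nvidia/tensorrt/include \
    -L/usr/local/nvidia/tensorrt/lib \
    -lnvinfer -lnvparsers \
    -o my_app

# The shared objects also have to be found at run time.
export LD_LIBRARY_PATH=/usr/local/nvidia/tensorrt/lib:$LD_LIBRARY_PATH
```

Even with the paths given explicitly like this, the build still complains as described above.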
I could have posted this under the Jetson forum, but since the TensorRT was installed on my PC with DriveInstall, I decided to post here. If anyone has faced a similar situation, please let me know how you resolved it.
I think this content will help with your topic. Thanks.
“Running jetson-inference on the host (which is not officially supported), you are missing NVIDIA TensorRT (TensorRT SDK | NVIDIA Developer) and gstreamer.”
Thank you, but I had already viewed that content before posting my query here. My situation is different. I have verified that the GStreamer components are present on my PC. Since I used DriveInstall and asked it to install ‘TensorRT for Host’, I thought TensorRT was also installed properly.
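For what it is worth, this is how I checked for GStreamer (a minimal sketch):

```bash
# Report the installed GStreamer version.
gst-inspect-1.0 --version

# List any GStreamer packages known to dpkg.
dpkg -l | grep -i gstreamer
```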
How can I check if TensorRT is installed on the Ubuntu host? Is there a shell command that I can use to do that?
One update - I checked NvInfer.h on my PC, and it shows the following:

#define NV_TENSORRT_MAJOR 3 //!< TensorRT major version
#define NV_TENSORRT_MINOR 0 //!< TensorRT minor version
#define NV_TENSORRT_PATCH 2 //!< TensorRT patch version
I assume this means that DriveInstall installed TensorRT 3.0.2 on my Ubuntu host PC.
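To partly answer my own question about a shell command, this is what I ended up using (a minimal sketch; adjust the path to wherever NvInfer.h sits under /usr/local/nvidia/tensorrt/ on your install):

```bash
# Read the TensorRT version out of the header that DriveInstall copied to the host.
grep "NV_TENSORRT" /usr/local/nvidia/tensorrt/include/NvInfer.h

# Check whether the dynamic linker knows about the TensorRT libraries.
ldconfig -p | grep nvinfer
```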
OK, I suppose you wanted me to verify whether TensorRT is installed using the same ‘dpkg’ command. I have already shared the output from my PC in an earlier reply: I am definitely NOT seeing the list of libraries shown in the installation guide.
I apologize for the confusion.
Could you please refer to the link below?
This link takes you to the NVIDIA® TensorRT™ pages for information on the NVIDIA deep learning inference engine. TensorRT is installed on the x86 host by the DriveInstall application at /usr/local/nvidia/tensorrt/. Both x86 and aarch64 components for cross-compilation are installed.
(1) I am working on my Ubuntu host PC. There is no Drive PX 2.
(2) DriveInstall 5.0.5.0bL shows it installed TensorRT 3.0 on my Ubuntu host PC, for both x86 and aarch64.
(3) I can see the relevant TensorRT files at /usr/local/nvidia/tensorrt/.
(4) When I check for installed TensorRT libraries (dpkg -l | grep TensorRT), I get nothing; the exact checks are sketched after this list.
(5) When I build application code that refers to TensorRT headers, the headers are not found and I get build errors.
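These are the checks behind points (4) and (5). Note that grep is case-sensitive, so I tried both spellings as well as a case-insensitive match:

```bash
# Look for TensorRT packages registered with dpkg.
dpkg -l | grep TensorRT
dpkg -l | grep -i tensorrt

# Both return nothing, even though the files themselves are present:
ls /usr/local/nvidia/tensorrt/
```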
May I ask whether you are a DPX2 user?
As I said, the TensorRT in DriveInstall is for cross-compilation.
So to get TensorRT for the host PC itself, I think you should install it separately using the following link: https://developer.nvidia.com/nvidia-tensorrt-download
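After downloading the local repo package from that page, the installation would look roughly like this (a sketch only; the .deb filename below is a placeholder and depends on the exact Ubuntu, CUDA, and TensorRT versions you download):

```bash
# Register the downloaded local repository
# (placeholder filename; use the actual name from the download page).
sudo dpkg -i nv-tensorrt-repo-<ubuntu-cuda-trt-version>_amd64.deb
sudo apt-get update

# Install TensorRT from that repository.
sudo apt-get install tensorrt

# Afterwards this should list the TensorRT packages.
dpkg -l | grep TensorRT
```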
And since you are trying to use a Jetson example on a host PC, I think you should raise the problem on the Jetson forum.
I found two TensorRT components installed by DriveInstall on my host PC: aarch64 and x86. From your reply, I understand that the ‘aarch64’ component is to be flashed onto the DPX2 and used for native compilation, and the ‘x86’ component is to be used on the host PC, but only for cross-compilation (targeting the DPX2). So in both cases these components support the DPX2, not the host PC. Please confirm whether my understanding is correct.
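Just so we mean the same thing by native versus cross-compilation, here is a generic illustration (compiler names only; no TensorRT specifics implied):

```bash
# Native compilation: build and run on the same architecture,
# e.g. on the DPX2 itself (aarch64).
g++ my_app.cpp -o my_app

# Cross-compilation: build on the x86 host, run on the aarch64 target,
# using a cross toolchain such as aarch64-linux-gnu-g++.
aarch64-linux-gnu-g++ my_app.cpp -o my_app_aarch64
```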
And yes, I understand that if I want to use TensorRT on the host PC itself, I will have to download and install it separately.
I really appreciate your help so far. Could you please review and confirm my statements regarding the DriveInstall TensorRT components from my last reply?
To answer one of your earlier questions, yes, I am a DPX2 user. But I do not have the hardware yet since it is being procured right now.