Installing TensorRT in a VirtualBox VM

Description

Our software is deployed on NVIDIA Jetson devices such as the Nano and Xavier NX, but I’d like to continue doing my development in a VirtualBox VM running Ubuntu 18.04, as I have been until recently.

However, I’m now at the stage where I’m adding calls to the TensorRT C++ API into our application. For example:

nvinfer1::createInferRuntime();
nvinfer1::IRuntime::deserializeCudaEngine();
nvinfer1::IRuntime::setDLACore();

I believe these are in libnvinfer.so, located in /usr/lib/aarch64-linux-gnu/ on my Jetson device.
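For reference, here is a minimal sketch of how I expect these calls to fit together at runtime. The nvinfer1 calls are shown as comments, since they require the TensorRT SDK; reading the serialized engine file into memory is plain C++:

```cpp
// Sketch: loading a serialized TensorRT engine produced on the Jetson.
// Only the file-reading part is standard C++; the nvinfer1 calls
// (commented out) need libnvinfer and its headers.
#include <fstream>
#include <string>
#include <vector>

// Read the serialized engine file (e.g. produced by trtexec) into a buffer.
std::vector<char> readEngineFile(const std::string& path) {
    std::ifstream file(path, std::ios::binary | std::ios::ate);
    if (!file) return {};  // empty buffer signals failure
    const auto size = file.tellg();
    std::vector<char> buffer(static_cast<size_t>(size));
    file.seekg(0);
    file.read(buffer.data(), buffer.size());
    return buffer;
}

// With the SDK available, the buffer would then be handed to TensorRT:
//
//   nvinfer1::IRuntime* runtime = nvinfer1::createInferRuntime(logger);
//   runtime->setDLACore(0);  // optional; DLA cores exist on Xavier NX
//   nvinfer1::ICudaEngine* engine =
//       runtime->deserializeCudaEngine(buffer.data(), buffer.size());
```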

What I would like to know is whether there is a CPU-only version of TensorRT that I can install in an Ubuntu 18.04 VM without a GPU, so I can continue developing the application on my desktop. Because as much as I love working with the Jetson devices, they’re not my desktop, and their extremely long compile times slow things down considerably.

I’ve attempted to read through the TensorRT installation guide, but did not see a section that addresses installing it on a machine without an NVIDIA GPU.

Thanks in advance for any instructions or hints you can provide.

Hi @stephanecharette,

TensorRT is supported only on NVIDIA GPUs. Please check the support matrix below.

Thank you.

I don’t need to run it. I just need to figure out what to install so I can build. Is there a way to do that?

Hi @stephanecharette,

Please check the NVIDIA GPU Cloud (NGC) TensorRT optimized containers, which remove many of the host-side dependencies.

Thank you.

I followed your link, and I’m logged in, but I fail to see how this helps people understand how to install the libraries and headers into a VM for C++ development. This is what I see:

Hi @stephanecharette,

Please refer to the TensorRT NGC container.

Thank you.