PyCUDA Required for TensorRT Python API?

Hello,

I am experimenting with the Python API for TensorRT that was included in the latest version of JetPack.

One thing that wasn’t immediately clear to me is how to allocate memory to be used by the inference engine. More specifically, I want to use mapped pinned memory (i.e., I want to pass cudaHostAllocMapped to cudaHostAlloc()) since that allocation method has proven to be the fastest on the TX2 in benchmarks.
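For reference, I believe the allocation I have in mind would look roughly like this through PyCUDA (just a sketch, assuming pagelocked_empty with host_alloc_flags.DEVICEMAP corresponds to cudaHostAllocMapped; the shape is only an example):

[code]
# Sketch (not official TensorRT API): mapped pinned ("zero-copy") host memory via PyCUDA,
# the Python-side counterpart of cudaHostAlloc(..., cudaHostAllocMapped) in C.
import numpy as np
import pycuda.autoinit          # creates a CUDA context on import
import pycuda.driver as cuda

# Page-locked host buffer that is also mapped into the device address space.
h_buf = cuda.pagelocked_empty(
    (1, 3, 224, 224),                          # example shape, for illustration only
    dtype=np.float32,
    mem_flags=cuda.host_alloc_flags.DEVICEMAP)

# Device pointer aliasing the same physical memory (no explicit copies needed on TX2).
d_ptr = np.intp(h_buf.base.get_device_pointer())

# d_ptr could then be passed (as an int) in the bindings list of an execution context.
[/code]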

Is there any way to allocate memory using the TensorRT Python API, or is PyCUDA effectively required to do so? If PyCUDA is required to allocate such buffers, are there any plans to include it in JetPack so that users don’t have to install it manually? I think it would be helpful to include PyCUDA so that, at a minimum, users can run the Python samples (which use PyCUDA) without manually installing additional libraries.

Thanks in advance for the help.

Hi,

You can install PyCUDA with the command shared here:
[url]https://devtalk.nvidia.com/default/topic/1013387/jetson-tx2/is-the-memory-management-method-of-tx1-and-tx2-different-/post/5167500/#5167500[/url]

Thanks.

There seems to be some confusion. I didn’t ask how to install PyCUDA.

I’m wondering if PyCUDA is required to use the TensorRT Python API. If it is required, why is it not included in JetPack automatically?

Hi,

PyCUDA is required to allocate GPU memory through the Python interface.
We are not able to include it in SDK Manager since it is a third-party library, which could raise legal issues.

However, we do list it as a dependency of the Python API and provide an installation guide:
[url]https://docs.nvidia.com/deeplearning/sdk/tensorrt-install-guide/index.html#installing-pycuda[/url]
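
In case it helps others reading this thread, here is a minimal sketch of how the Python samples pair PyCUDA allocations with a TensorRT execution context (assuming `engine` is an already-deserialized engine with one input and one output binding; names and shapes are illustrative only):

[code]
# Sketch of the buffer-handling pattern used by the TensorRT Python samples.
# Assumes `engine` is an already-built/deserialized trt.ICudaEngine with
# binding 0 = input and binding 1 = output; adapt shapes/dtypes to your model.
import numpy as np
import pycuda.autoinit          # creates a CUDA context on import
import pycuda.driver as cuda
import tensorrt as trt

# Page-locked host buffers plus matching device buffers for each binding.
h_input  = cuda.pagelocked_empty(trt.volume(engine.get_binding_shape(0)), dtype=np.float32)
h_output = cuda.pagelocked_empty(trt.volume(engine.get_binding_shape(1)), dtype=np.float32)
d_input  = cuda.mem_alloc(h_input.nbytes)
d_output = cuda.mem_alloc(h_output.nbytes)
stream   = cuda.Stream()

with engine.create_execution_context() as context:
    # h_input[:] = ...  (fill with preprocessed input data)
    cuda.memcpy_htod_async(d_input, h_input, stream)
    context.execute_async(bindings=[int(d_input), int(d_output)],
                          stream_handle=stream.handle)
    cuda.memcpy_dtoh_async(h_output, d_output, stream)
    stream.synchronize()
    # h_output now holds the inference results.
[/code]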

Thanks.