Combine CUDA container with Xilinx container

Hi everyone, I’m trying to do something that did not look that complex at first, but it’s giving me quite a few headaches… Basically, I want to use the “resnet18_baseline_att_224x224_A” trained model from the NVIDIA-AI-IOT/trt_pose GitHub project (real-time pose estimation accelerated with NVIDIA TensorRT) in conjunction with Xilinx’s Vitis-AI tools to quantize and compile said model so it can run on Xilinx’s DPU.

Now, I understand if you tell me to just ask in Xilinx’s forums, but the problem I have right now has more to do with NVIDIA containers than Xilinx ones. I want to be able to access CUDA’s libraries from within a Xilinx container (which has all the quantization tools) so that I can install the trt_pose package there. I need this package because some of the project’s data-loading utilities depend on it, and Xilinx’s tools need a part of the dataset to calibrate models, so I’d like to keep these utilities as close to the original project as possible.
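If it helps, this is the install step I’m trying to run inside the Xilinx container (as far as I can tell from the trt_pose README); it fails for me because the CUDA toolkit isn’t visible there:

```
# clone and install trt_pose; the setup step builds CUDA extensions,
# which is why I need the CUDA toolkit available inside the container
git clone https://github.com/NVIDIA-AI-IOT/trt_pose
cd trt_pose
python3 setup.py install
```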

However, I cannot see CUDA’s libraries under /usr/local/cuda even when I run the container with the option “--runtime=nvidia”. When running with “--gpus all” I am able to execute nvidia-smi and my GPU is correctly recognized, but that is not what I need.
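For reference, these are roughly the invocations I’ve tried (the Vitis-AI image tag is just a placeholder for the one I actually use):

```
# runs, but /usr/local/cuda does not exist inside the container
docker run -it --runtime=nvidia xilinx/vitis-ai:latest bash

# nvidia-smi works here and sees the GPU, but /usr/local/cuda is still missing
docker run -it --gpus all xilinx/vitis-ai:latest bash
```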

So, I was wondering if there is a simple way to map the nvidia/cuda:11.0 container’s libraries into the Xilinx container. I’ve been looking around and read some information about multi-stage builds, but I don’t know enough Docker to make use of that.
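This is the kind of multi-stage Dockerfile I have in mind, in case it makes my question clearer. Both image tags are assumptions on my part (I’m not sure which CUDA tag matches my setup, or whether copying /usr/local/cuda across stages is even the right approach):

```
# Stage 1: a CUDA devel image that actually contains the toolkit
# under /usr/local/cuda (the exact tag here is an assumption)
FROM nvidia/cuda:11.0-devel-ubuntu18.04 AS cuda

# Stage 2: the Xilinx Vitis-AI image I want to extend
FROM xilinx/vitis-ai:latest

# Copy the whole CUDA toolkit tree out of the first stage
COPY --from=cuda /usr/local/cuda/ /usr/local/cuda/
ENV PATH=/usr/local/cuda/bin:${PATH}
ENV LD_LIBRARY_PATH=/usr/local/cuda/lib64:${LD_LIBRARY_PATH}
```

I’m also not sure whether the driver libraries that “--runtime=nvidia” injects at run time would still be needed on top of this.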

Any suggestions?

Thanks in advance