Does anybody have instructions on how to set up a build environment under Linux (x64/ARM64) to build DeepStream apps without using the actual hardware or Nsight? I can use either an x64 or an ARM64 machine for this task.
Here are some things I have tried:
- Used the DeepStream container image from Cloud-Native on Jetson | NVIDIA Developer. However, this container does not have CUDA built in, so any compilation fails at link time because the CUDA .so files are empty stubs. The instructions at Docker Containers — DeepStream 5.1 Release documentation (nvidia.com) say the containers must be run on the device, since CUDA and other dependencies are mounted in from the host.
- Used the CUDA container image from the link above. However, it lacks many of the dependencies DeepStream needs.
- Looked through most of the Dockerfiles listed at GitHub - dusty-nv/jetson-containers: Machine Learning Containers for NVIDIA Jetson and JetPack-L4T, but couldn't find one that includes DeepStream.
- Found this thread, but it does not explain how the build would work for DeepStream: TensorRT 5.1 Cross Compile for Jetson AGX Xavier - Jetson & Embedded Systems / Jetson AGX Xavier - NVIDIA Developer Forums.
- Downloaded the development version of the dGPU Docker container, assuming I could cross-compile in it, but it has a newer version of CUDA built in than the Jetson target (11.x vs. 10.2): Docker Containers — DeepStream 5.1 Release documentation (nvidia.com)
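To illustrate the linker problem from the first attempt: inside the l4t container on a non-Jetson host, the CUDA libraries are zero-byte placeholders that the NVIDIA container runtime would normally populate from the device. A minimal, self-contained reproduction of that symptom (the paths here are made up for the demo; the real ones live under the container's CUDA install directory):

```shell
# Create a fake library directory with a zero-byte .so, mimicking the
# placeholder stubs found in the deepstream-l4t container off-device.
mkdir -p /tmp/fake-cuda/lib64
touch /tmp/fake-cuda/lib64/libcudart.so   # empty stub, nothing for ld to link

# List all zero-byte shared objects -- this is the quick check I used to
# confirm the libraries were empty rather than merely incompatible.
find /tmp/fake-cuda/lib64 -name '*.so*' -size 0c
```

Linking against such a stub is what produces the "file format not recognized" style linker errors rather than a missing-library error.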
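One direction I am considering but have not verified: running the aarch64 deepstream-l4t container on the x64 machine under QEMU user-mode emulation via binfmt_misc. A rough sketch, assuming Docker plus the multiarch/qemu-user-static helper image; the image tag shown is my guess at the 5.1-era tag and may be wrong:

```shell
# Register QEMU binfmt handlers so the x86_64 host can execute aarch64 binaries.
docker run --rm --privileged multiarch/qemu-user-static --reset -p yes

# Start the aarch64 DeepStream container under emulation.
# The tag is an assumption based on the DeepStream 5.1 docs; adjust as needed.
docker run -it --platform linux/arm64 \
    nvcr.io/nvidia/deepstream-l4t:5.1-21.02-samples /bin/bash
```

Even if this works, I suspect the CUDA libraries would still have to be copied into the container from a JetPack install, since the l4t runtime normally mounts them from the device. Has anyone gotten a setup like this to build DeepStream samples?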