Build Hello AI World (jetson-inference) in a Docker container

Hello,

I recently began dabbling with Docker containers. I am trying to build the jetson-inference files in a Docker container, using l4t-base:r32.3.1 as the base image and following the build instructions from the jetson-inference GitHub page. I successfully built this on the host OS before, but I have not been successful in the container.
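
For reference, here is roughly what I'm running — a minimal sketch, assuming the tag matches my host's L4T release; the package list is taken from the jetson-inference README:

```bash
# Start an interactive container from the base image
# (tag assumed to match the host's L4T release)
sudo docker run -it --runtime nvidia nvcr.io/nvidia/l4t-base:r32.3.1

# Inside the container: the standard build steps from the jetson-inference README
apt-get update && apt-get install -y git cmake libpython3-dev python3-numpy
git clone --recursive https://github.com/dusty-nv/jetson-inference
cd jetson-inference
mkdir build && cd build
cmake ../
make -j$(nproc)
make install
```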

When building with CMake, I get the following errors after the models and PyTorch are installed:
CMake Error:
> The following variables are used in this project, but they are set to NOTFOUND.
> Please set them or make sure they are set and tested correctly in the CMake files:
> CUDA_nppicc_LIBRARY (ADVANCED)
> linked by target "jetson-utils" in directory /home/jetson-inference/utils
>
> -- Configuring incomplete, errors occurred!
> See also "/home/jetson-inference/build/CMakeFiles/CMakeOutput.log".
> See also "/home/jetson-inference/build/CMakeFiles/CMakeError.log".

Diving further into the error logs, it looks like the "pthread_create" function doesn't exist.

Please advise whether I'm performing the steps correctly or if I should be following a different procedure.

Many thanks!!
Anthony

Hi,

First, please note that there is a dependency between the L4T Docker image and the OS version used on the Xavier.
Please make sure the OS versions are identical.
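
You can check the host's L4T release like this (a standard file on Jetson devices):

```bash
# On the Xavier host: read the L4T release that the container tag must match
cat /etc/nv_tegra_release
# e.g. "# R32 (release), REVISION: 3.1, ..." -> pair with an r32.3.1 image
```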

It's not recommended to use our l4t-base image since it doesn't have the cuDNN/TensorRT libraries installed.
You can use any of the following DL images instead:

If you are looking for an image compatible with r32.3.1, please use the 4.0.2-19.12-base container from this page:
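
For example — assuming that tag lives in the deepstream-l4t repository on NGC; please verify the exact path on the page above:

```bash
# Pull the r32.3.1-compatible image; the repository path here is an
# assumption, so confirm it against the NGC page linked above
sudo docker pull nvcr.io/nvidia/deepstream-l4t:4.0.2-19.12-base
```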

Thanks.