Error while building the NanoSAM jetson container

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): Jetson Orin Nano
• JetPack Version: 6.2
• Driver Version: 540.4.0
• CUDA Version: 12.6
• Issue Type (questions, new requirements, bugs)

Hello,

I want to try the NanoSAM example from the Jetson AI Lab.
I execute sudo jetson-containers build $(autotag nanosam).

It breaks at this step:

Step 5/6 : RUN cd /opt && git clone --depth=1 https://github.com/NVIDIA-AI-IOT/torch2trt && cd torch2trt && cp /tmp/patches/flattener.py torch2trt && pip3 install . && sed 's|^set(CUDA_ARCHITECTURES.*|#|g' -i CMakeLists.txt && sed 's|Catch2_FOUND|False|g' -i CMakeLists.txt && cmake -B build -DCUDA_ARCHITECTURES=${CUDA_ARCHITECTURES} . && cmake --build build --target install && ldconfig && pip3 install nvidia-pyindex && pip3 install onnx-graphsurgeon
—> Running in 0f608dfdd2e4
Cloning into 'torch2trt'...
Using pip 25.0.1 from /usr/local/lib/python3.10/dist-packages/pip (python 3.10)
Non-user install because site-packages writeable
Created temporary directory: /tmp/pip-build-tracker-1cjjmp7j
Initialized build tracking at /tmp/pip-build-tracker-1cjjmp7j
Created build tracker: /tmp/pip-build-tracker-1cjjmp7j
Entered build tracker: /tmp/pip-build-tracker-1cjjmp7j
Created temporary directory: /tmp/pip-install-0ruoniag
Created temporary directory: /tmp/pip-ephem-wheel-cache-yg6n3vus
Looking in indexes: https://pypi.jetson-ai-lab.dev/jp6/cu126
Processing /opt/torch2trt
Added file:///opt/torch2trt to build tracker '/tmp/pip-build-tracker-1cjjmp7j'
Running setup.py (path:/opt/torch2trt/setup.py) egg_info for package from file:///opt/torch2trt
Created temporary directory: /tmp/pip-pip-egg-info-cfv0uzw2
Preparing metadata (setup.py): started
Running command python setup.py egg_info
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "/opt/torch2trt/setup.py", line 2, in <module>
import tensorrt
File "/usr/local/lib/python3.10/dist-packages/tensorrt/__init__.py", line 75, in <module>
from .tensorrt import *
ImportError: libnvdla_compiler.so: cannot open shared object file: No such file or directory
error: subprocess-exited-with-error

I made sure that /usr/lib/aarch64-linux-gnu/nvidia/libnvdla_compiler.so is available on the host system.
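For reference, here is how I checked. The path below is the standard JetPack 6.x location for the NVDLA libraries; they are mounted into containers by the NVIDIA container runtime, so having the file on the host alone is not enough if the build container is not started with that runtime (a quick hypothetical check, adjust the path if your install differs):

```shell
# Hypothetical check: is the NVDLA compiler library present on the host?
# The NVIDIA runtime is responsible for mounting it into containers.
LIB=/usr/lib/aarch64-linux-gnu/nvidia/libnvdla_compiler.so
if [ -e "$LIB" ]; then
  echo "host: libnvdla_compiler.so present"
else
  echo "host: libnvdla_compiler.so missing"
fi
```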

Hi,

We are testing this internally.
We will provide more info to you later.

Thanks.

Yes, please. This is quite urgent for us because of internal deadlines.

Hi,

We can build nanosam on JetPack 6.2 successfully.
Please find the steps below:

jetson-containers

$ git clone https://github.com/dusty-nv/jetson-containers
$ cd jetson-containers/ && git checkout 3631c5c && cd ../
$ bash jetson-containers/install.sh

Build

$ docker pull dustynv/nanosam:r36.2.0
$ docker cp $(docker create --name tmp dustynv/nanosam:r36.2.0):/opt/nanosam/data/resnet18_image_encoder.onnx ./resnet18_image_encoder.onnx && docker rm tmp
$ cp resnet18_image_encoder.onnx jetson-containers/packages/vit/nanosam/

Apply the change below:

diff --git a/packages/vit/nanosam/Dockerfile b/packages/vit/nanosam/Dockerfile
index 673fb733..2dc5e73a 100644
--- a/packages/vit/nanosam/Dockerfile
+++ b/packages/vit/nanosam/Dockerfile
@@ -47,11 +47,8 @@ RUN cd /opt/nanosam && \
         --maxShapes=point_coords:1x10x2,point_labels:1x10
 
 # 4. Build the TensorRT engine for the NanoSAM image encoder
-RUN pip3 install gdown && \
-    cd /opt/nanosam/data/ && \
-    gdown https://drive.google.com/uc?id=14-SsvoaTl-esC3JOzomHDnI9OGgdO2OR && \
-    ls -lh && \
-    cd /opt/nanosam/ && \
+COPY resnet18_image_encoder.onnx /opt/nanosam/data
+RUN cd /opt/nanosam/ && \
     /usr/src/tensorrt/bin/trtexec \
         --onnx=data/resnet18_image_encoder.onnx \
         --saveEngine=data/resnet18_image_encoder.engine \
$ jetson-containers build nanosam
-- Done building container nanosam:r36.4.3

Verify

$ sudo docker run -it --rm nanosam:r36.4.3
# cd /opt/nanosam/
# python3 examples/basic_usage.py --image_encoder="data/resnet18_image_encoder.engine" --mask_decoder="data/mobile_sam_mask_decoder.engine"

The result can then be found at /opt/nanosam/data/basic_usage_out.jpg.

Thanks.

Thank you for the instructions, but it was still not working.

The problem was the build of the torch2trt container: the default Docker runtime was not set correctly, similar to the issue described here.

I changed /etc/docker/daemon.json to

{
  "default-runtime": "nvidia",
  "runtimes": {
    "nvidia": {
      "path": "/usr/bin/nvidia-container-runtime",
      "runtimeArgs": []
    }
  }
}

and then the build worked.
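One thing worth noting for anyone else editing this file: malformed JSON in daemon.json (e.g. a stray trailing comma) will stop the Docker daemon from restarting at all. A minimal sanity check before restarting (a sketch; the inline string stands in for reading /etc/docker/daemon.json):

```python
import json

# The daemon.json content from above; in practice, read it from
# /etc/docker/daemon.json before restarting the Docker daemon.
daemon_json = """
{
  "default-runtime": "nvidia",
  "runtimes": {
    "nvidia": {
      "path": "/usr/bin/nvidia-container-runtime",
      "runtimeArgs": []
    }
  }
}
"""

config = json.loads(daemon_json)  # raises ValueError on malformed JSON
print(config["default-runtime"])  # nvidia
```

After the file parses cleanly, restart the daemon with sudo systemctl restart docker and verify the setting with docker info --format '{{.DefaultRuntime}}', which should print nvidia.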

Hi,

After the change, are you able to run the NanoSAM container in your environment?
Thanks.

Yes, both changes were necessary: the Dockerfile change and the daemon.json change.