Container torch_tensorrt not working

I’m trying to run the container on my Jetson AGX Orin (JetPack 6.2+b77):

jetson-containers run $(autotag torch_tensorrt)

and the whole process gets stuck here (Loading: 0 packages loaded).

I killed the process after 30 minutes of the same message, and the container doesn’t have torch_tensorrt:

My settings:

It’s

import tensorrt
print(tensorrt.__version__)

not torch_tensorrt

No. It’s torch_tensorrt. Just like the default test code example included in the same package.
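
For reference, this is the kind of check I mean, a minimal sketch modeled on the package’s own test (the actual test script in jetson-containers may differ):

# Minimal sketch of the version check (modeled on the package's test;
# the real test script may differ). On a working image both imports succeed.
import torch
import torch_tensorrt

print("torch:", torch.__version__)
print("torch_tensorrt:", torch_tensorrt.__version__)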

Dear @esteban.gallardo ,
Just checking: do you notice issues with other containers as well, such as pytorch?

XTTS container no longer works:

jetson-containers run dustynv/xtts:r36.3.0

Running the test.py:

Gives the error:

I need both services. Is there any roadmap to fix the containers? I need the XTTS container to work again urgently.

Dear @esteban.gallardo ,
JetPack 6.2 has BSP 36.4.3. Did you try building locally with jetson-containers and notice the issue for xtts as well?

Building also fails. I just want something that has been working fine for a year to work again.

The command '/bin/sh -c cd /opt && git clone --depth=1 https://github.com/NVIDIA-AI-IOT/torch2trt && cd torch2trt && cp /tmp/patches/flattener.py torch2trt && pip3 install --verbose . && sed 's|^set(CUDA_ARCHITECTURES.*|#|g' -i CMakeLists.txt && sed 's|Catch2_FOUND|False|g' -i CMakeLists.txt && cmake -B build -DCUDA_ARCHITECTURES=${CUDA_ARCHITECTURES} . && cmake --build build --target install && ldconfig && pip3 install --no-cache-dir --verbose nvidia-pyindex &&     pip3 install --no-cache-dir --verbose onnx-graphsurgeon' returned a non-zero code: 1
Traceback (most recent call last):
  File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/home/esteban/Workspace/jetson-containers/jetson_containers/build.py", line 122, in <module>
    build_container(args.name, args.packages, args.base, args.build_flags, args.build_args, args.simulate, args.skip_tests, args.test_only, args.push, args.no_github_api, args.skip_packages)
  File "/home/esteban/Workspace/jetson-containers/jetson_containers/container.py", line 147, in build_container
    status = subprocess.run(cmd.replace(_NEWLINE_, ' '), executable='/bin/bash',shell=True, check=True)
  File "/usr/lib/python3.10/subprocess.py", line 526, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command 'DOCKER_BUILDKIT=0 docker build --network=host --tag xtts:r36.4.0-torch2trt --file /home/esteban/Workspace/jetson-containers/packages/pytorch/torch2trt/Dockerfile --build-arg BASE_IMAGE=xtts:r36.4.0-tensorrt /home/esteban/Workspace/jetson-containers/packages/pytorch/torch2trt 2>&1 | tee /home/esteban/Workspace/jetson-containers/logs/20250211_090007/build/xtts_r36.4.0-torch2trt.txt; exit ${PIPESTATUS[0]}' returned non-zero exit status 1.
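
The full Docker output goes to the log file named in that error; a quick way to pull the first real failure out of it is something like this (a sketch using the log path from my run above; adjust the path to your own build):

# Sketch: scan the jetson-containers build log referenced in the error above
# for lines that look like the actual failure (the CalledProcessError itself
# only reports the exit status). Log path taken from my run; adjust as needed.
from pathlib import Path

log = Path("/home/esteban/Workspace/jetson-containers/logs/20250211_090007/build/xtts_r36.4.0-torch2trt.txt")

for line in log.read_text(errors="replace").splitlines():
    if "error" in line.lower() or "No space left on device" in line:
        print(line)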

Dear @esteban.gallardo,
Could you double-check that you have enough disk space (space-related errors may not show up in the terminal log)? On my side I see an insufficient-space issue with torch_tensorrt, and when I log in to the container I notice the same behavior as you. I will verify the same with an external SSD and see if it works.

I have free space. I have a 500 GB SSD with 140 GB left.

esteban@esteban-desktop:~/Workspace/StoryBookEditor$ df -h
Filesystem       Size  Used Avail Use% Mounted on
/dev/nvme0n1p1   456G  294G  140G  68% /
tmpfs             31G   84K   31G   1% /dev/shm
tmpfs             13G   35M   13G   1% /run
tmpfs            5,0M  4,0K  5,0M   1% /run/lock
/dev/nvme0n1p10   63M  118K   63M   1% /boot/efi
tmpfs            6,2G   80K  6,2G   1% /run/user/1000
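
And checking programmatically on the filesystem that holds the Docker data root (assuming the default /var/lib/docker location, which sits on / here):

# Sketch: free space on the filesystem holding the Docker data root
# (assuming the default /var/lib/docker, which lives on / on this system).
import shutil

total, used, free = shutil.disk_usage("/")
print(f"free: {free / 2**30:.0f} GiB of {total / 2**30:.0f} GiB")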