Thor CUDA available: False

Following the page Docker Setup — Jetson AGX Thor Developer Kit - User Guide, I ran:

docker run --rm -it \
  -v "$PWD":/workspace \
  -w /workspace \
  nvcr.io/nvidia/pytorch:25.08-py3

output:

Status: Downloaded newer image for nvcr.io/nvidia/pytorch:25.08-py3

=============
== PyTorch ==
=============

NVIDIA Release 25.08 (build 197421315)
PyTorch Version 2.8.0a0+34c6371
Container image Copyright (c) 2025, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
Copyright (c) 2014-2024 Facebook Inc.
Copyright (c) 2011-2014 Idiap Research Institute (Ronan Collobert)
Copyright (c) 2012-2014 Deepmind Technologies (Koray Kavukcuoglu)
Copyright (c) 2011-2012 NEC Laboratories America (Koray Kavukcuoglu)
Copyright (c) 2011-2013 NYU (Clement Farabet)
Copyright (c) 2006-2010 NEC Laboratories America (Ronan Collobert, Leon Bottou, Iain Melvin, Jason Weston)
Copyright (c) 2006 Idiap Research Institute (Samy Bengio)
Copyright (c) 2001-2004 Idiap Research Institute (Ronan Collobert, Samy Bengio, Johnny Mariethoz)
Copyright (c) 2015 Google Inc.
Copyright (c) 2015 Yangqing Jia
Copyright (c) 2013-2016 The Caffe contributors
All rights reserved.

Various files include modifications (c) NVIDIA CORPORATION & AFFILIATES. All rights reserved.

GOVERNING TERMS: The software and materials are governed by the NVIDIA Software License Agreement
(found at https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-software-license-agreement/)
and the Product-Specific Terms for NVIDIA AI Products
(found at https://www.nvidia.com/en-us/agreements/enterprise-software/product-specific-terms-for-ai-products/).

WARNING: The NVIDIA Driver was not detected. GPU functionality will not be available.
Use the NVIDIA Container Toolkit to start this container with GPU support; see
the NVIDIA Cloud Native Technologies page in NVIDIA Docs.

NOTE: The SHMEM allocation limit is set to the default of 64MB. This may be
insufficient for PyTorch. NVIDIA recommends the use of the following flags:
docker run --gpus all --ipc=host --ulimit memlock=-1 --ulimit stack=67108864 …

root@0cf386b9fc26:/workspace#

then:

root@0cf386b9fc26:/workspace# python3 <<'EOF'
import torch
print("PyTorch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU name:", torch.cuda.get_device_name(0))
    x = torch.rand(10000, 10000, device="cuda")
    print("Tensor sum:", x.sum().item())
EOF

But the output was:

PyTorch version: 2.8.0a0+34c6371d24.nv25.08
CUDA available: False

Why is that?

Try this docker run line.

docker run -it --net=host --runtime nvidia --privileged --ipc=host --ulimit memlock=-1 \
   --ulimit stack=67108864 -v "$(pwd)":/workspace nvcr.io/nvidia/pytorch:25.08-py3 bash


python
import torch
print(torch.cuda.is_available())
print(torch.cuda.get_device_name(0))
device = torch.device("cuda")
x = torch.rand(10000, 10000, device=device)
print(x.sum().item())

Thanks for the reply, it works now.

But does the Thor GPU only work inside Docker?

Hi,

Please add --runtime nvidia to allow GPU access within the container.
The GPU also works locally, outside the container; please check its status with nvidia-smi.
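If you would rather not pass --runtime nvidia on every invocation, the NVIDIA runtime can be made Docker's default. A sketch of /etc/docker/daemon.json, assuming the NVIDIA Container Toolkit is already installed (as it is on a standard JetPack setup); your existing daemon.json may have other keys that should be preserved:

```json
{
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    },
    "default-runtime": "nvidia"
}
```

After saving, restart Docker (for example with sudo systemctl restart docker) so the new default runtime takes effect.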

For more information, please see the topic below:

Thanks.

Monitoring GPU Metrics on Jetson Thor with nvidia-smi

When running workloads on NVIDIA Jetson Thor, you can monitor GPU metrics using:

nvidia-smi dmon -s puc

Example Output

# gpu   pwr gtemp mtemp sm mem enc dec jpg ofa mclk pclk
# Idx   W   C     C    %  %   %   %   %   %   MHz  MHz
    0   -   -     -    8  0   0   0   0   0   -    -
    0   -   -     -    3  0   0   0   0   0   -    -
    0   -   -     -   12  0   0   0   0   0   -    -

Key Metrics

  • sm → GPU core (streaming multiprocessor) utilization (%)
  • mem → GPU memory controller utilization (%)
  • enc → Hardware video encoder utilization (%)
  • dec → Hardware video decoder utilization (%)
  • jpg → JPEG codec utilization (%)
  • ofa → Optical Flow Accelerator utilization (%)

This method provides a lightweight way to track GPU utilization in real time on Jetson Thor.
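For logging or scripting, the dmon text output is easy to post-process. A small sketch that parses sample rows into dicts and averages SM utilization; the helper name parse_dmon is our own, not part of nvidia-smi, and the sample text mirrors the example output above:

```python
def parse_dmon(text):
    """Parse `nvidia-smi dmon` text output into a list of per-sample dicts.

    The first comment line holds the column names; the second holds units
    and is skipped. '-' marks metrics the platform does not report.
    """
    rows = []
    header = None
    for line in text.splitlines():
        if line.lstrip().startswith("#"):
            fields = line.lstrip()[1:].split()
            if header is None:
                header = fields  # column names from the first comment line
            continue
        values = line.split()
        if header and len(values) == len(header):
            rows.append(dict(zip(header, values)))
    return rows

sample = """\
# gpu   pwr gtemp mtemp sm mem enc dec jpg ofa mclk pclk
# Idx   W   C     C    %  %   %   %   %   %   MHz  MHz
    0   -   -     -    8  0   0   0   0   0   -    -
    0   -   -     -    3  0   0   0   0   0   -    -
    0   -   -     -   12  0   0   0   0   0   -    -
"""

rows = parse_dmon(sample)
avg_sm = sum(int(r["sm"]) for r in rows) / len(rows)
print(f"samples={len(rows)} avg sm%={avg_sm:.1f}")  # → samples=3 avg sm%=7.7
```

In a real script you would feed it the captured stdout of nvidia-smi dmon (for example via subprocess) rather than a hard-coded sample.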


Hi,

Do you get the PyTorch container working?
The PyTorch container can work on Thor as expected like below:

$ sudo docker run --rm -it --network=host -e NVIDIA_DRIVER_CAPABILITIES=compute,utility,video,graphics --runtime nvidia --privileged -v /tmp/.X11-unix:/tmp/.X11-unix -v /etc/X11:/etc/X11 --device /dev/nvhost-vic -v /dev:/dev nvcr.io/nvidia/pytorch:25.08-py3

=============
== PyTorch ==
=============

NVIDIA Release 25.08 (build 197421315)
PyTorch Version 2.8.0a0+34c6371
Container image Copyright (c) 2025, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
Copyright (c) 2014-2024 Facebook Inc.
Copyright (c) 2011-2014 Idiap Research Institute (Ronan Collobert)
Copyright (c) 2012-2014 Deepmind Technologies    (Koray Kavukcuoglu)
Copyright (c) 2011-2012 NEC Laboratories America (Koray Kavukcuoglu)
Copyright (c) 2011-2013 NYU                      (Clement Farabet)
Copyright (c) 2006-2010 NEC Laboratories America (Ronan Collobert, Leon Bottou, Iain Melvin, Jason Weston)
Copyright (c) 2006      Idiap Research Institute (Samy Bengio)
Copyright (c) 2001-2004 Idiap Research Institute (Ronan Collobert, Samy Bengio, Johnny Mariethoz)
Copyright (c) 2015      Google Inc.
Copyright (c) 2015      Yangqing Jia
Copyright (c) 2013-2016 The Caffe contributors
All rights reserved.

Various files include modifications (c) NVIDIA CORPORATION & AFFILIATES.  All rights reserved.

GOVERNING TERMS: The software and materials are governed by the NVIDIA Software License Agreement
(found at https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-software-license-agreement/)
and the Product-Specific Terms for NVIDIA AI Products
(found at https://www.nvidia.com/en-us/agreements/enterprise-software/product-specific-terms-for-ai-products/).

root@tegra-ubuntu:/workspace# python3 <<'EOF'
import torch
print("PyTorch version:", torch.version)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
  print("GPU name:", torch.cuda.get_device_name(0))
  x = torch.rand(10000, 10000, device="cuda")
  print("Tensor sum:", x.sum().item())
EOF
PyTorch version: <module 'torch.version' from '/usr/local/lib/python3.12/dist-packages/torch/version.py'>
CUDA available: True
GPU name: NVIDIA Thor
Tensor sum: 50001224.0

Thanks.
