NVIDIA Release 25.08 (build 197421315)
PyTorch Version 2.8.0a0+34c6371
Container image Copyright (c) 2025, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
Copyright (c) 2014-2024 Facebook Inc.
Copyright (c) 2011-2014 Idiap Research Institute (Ronan Collobert)
Copyright (c) 2012-2014 Deepmind Technologies (Koray Kavukcuoglu)
Copyright (c) 2011-2012 NEC Laboratories America (Koray Kavukcuoglu)
Copyright (c) 2011-2013 NYU (Clement Farabet)
Copyright (c) 2006-2010 NEC Laboratories America (Ronan Collobert, Leon Bottou, Iain Melvin, Jason Weston)
Copyright (c) 2006 Idiap Research Institute (Samy Bengio)
Copyright (c) 2001-2004 Idiap Research Institute (Ronan Collobert, Samy Bengio, Johnny Mariethoz)
Copyright (c) 2015 Google Inc.
Copyright (c) 2015 Yangqing Jia
Copyright (c) 2013-2016 The Caffe contributors
All rights reserved.
Various files include modifications (c) NVIDIA CORPORATION & AFFILIATES. All rights reserved.
WARNING: The NVIDIA Driver was not detected. GPU functionality will not be available.
Use the NVIDIA Container Toolkit to start this container with GPU support; see NVIDIA Cloud Native Technologies - NVIDIA Docs.
NOTE: The SHMEM allocation limit is set to the default of 64MB. This may be
insufficient for PyTorch. NVIDIA recommends the use of the following flags:
docker run --gpus all --ipc=host --ulimit memlock=-1 --ulimit stack=67108864 …
Please add --runtime nvidia to allow GPU access within the container.
The GPU works on the host itself; you can verify its status there with nvidia-smi.
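If Docker rejects the --runtime nvidia flag, the NVIDIA runtime may not be registered with Docker yet. A setup sketch, assuming the NVIDIA Container Toolkit is already installed (the install steps vary by platform; see the toolkit documentation):

```shell
# Register the NVIDIA runtime with Docker and restart the daemon
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker

# Then verify GPU visibility from inside a container
sudo docker run --rm --runtime nvidia nvcr.io/nvidia/pytorch:25.08-py3 nvidia-smi
```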
For more details, please see the topic below:
Do you get the PyTorch container working?
The PyTorch container works on Thor as expected, as shown below:
$ sudo docker run --rm -it --network=host -e NVIDIA_DRIVER_CAPABILITIES=compute,utility,video,graphics --runtime nvidia --privileged -v /tmp/.X11-unix:/tmp/.X11-unix -v /etc/X11:/etc/X11 --device /dev/nvhost-vic -v /dev:/dev nvcr.io/nvidia/pytorch:25.08-py3
=============
== PyTorch ==
=============
NVIDIA Release 25.08 (build 197421315)
PyTorch Version 2.8.0a0+34c6371
[... copyright notices identical to those above ...]
GOVERNING TERMS: The software and materials are governed by the NVIDIA Software License Agreement
(found at https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-software-license-agreement/)
and the Product-Specific Terms for NVIDIA AI Products
(found at https://www.nvidia.com/en-us/agreements/enterprise-software/product-specific-terms-for-ai-products/).
root@tegra-ubuntu:/workspace# python3 <<'EOF'
import torch
print("PyTorch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU name:", torch.cuda.get_device_name(0))
    x = torch.rand(10000, 10000, device="cuda")
    print("Tensor sum:", x.sum().item())
EOF
PyTorch version: 2.8.0a0+34c6371
CUDA available: True
GPU name: NVIDIA Thor
Tensor sum: 50001224.0
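As a quick sanity check on that last number: a 10000×10000 tensor of uniform [0, 1) values has an expected sum of 0.5 × 10^8 = 50,000,000, and the reported 50001224.0 is within about 0.002% of that. A minimal sketch of the check (plain Python arithmetic, no GPU needed; the variable names are illustrative):

```python
# Expected sum of n*n uniform [0, 1) samples: each has mean 0.5.
n = 10000
expected = 0.5 * n * n              # 50,000,000.0
observed = 50001224.0               # value printed inside the container

rel_err = abs(observed - expected) / expected
print(f"relative error: {rel_err:.2e}")

# The standard deviation of the sum is sqrt(n*n / 12) ~= 2887, so an
# absolute deviation of ~1224 is well within normal statistical variation.
assert rel_err < 1e-3
```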