Optix denoiser error

Recently, when using the PathTracing render mode in Omniverse, I started getting lots of these errors.

2023-03-16 12:13:18 [47,201ms] [Error] [rtx.optixdenoising.plugin] result failed. Optix Error: OPTIX_ERROR_INTERNAL_ERROR.
Internal error
2023-03-16 12:13:18 [47,374ms] [Error] [rtx.optixdenoising.plugin] [Optix] [ERROR] Unable to load denoiser weights
Unable to load denoiser weights

The rendered images contain a lot of noise.

The last thing I was doing was rewriting my Python script (which runs Omniverse via SimulationApp) and my configuration files to use centimeters instead of meters. But even after reverting to the original state with meters, the error still persists and the renders are noisy and unusable.

I do not know much about the OptiX denoiser.

What could be the cause of this problem and how can I solve it?

Thank you

Hello @michal.stanik! Do you remember what application you were using and what version it was? If you go to your logs folder (usually found here: C:\Users\<USERNAME>\.nvidia-omniverse\logs\Kit\<APPNAME>) and attach the logs here, I can have the RTX Team take a look at what the issue might be.

Also, what is your operating system and GPU/GPU Driver?

Hello.
I was using Isaac Sim 2022.1.
I think I found the logs folder but I can see no recent logs (the newest is from February) :/.
I am running it inside a Docker container (nvcr.io/nvidia/isaac-sim:2022.1.1 with some minor changes), which is based on Ubuntu 18.04.6 LTS.
From nvidia-smi output in the container I have:
Driver Version: 515.48.07 CUDA Version: 11.7

I finally found the logs. They were not under .nvidia-omniverse/logs but rather under /isaac-sim/kit/logs/Kit/Isaac-Sim/2022.1/ (/isaac-sim is the workdir).

In the logs I found the following warning:

[Warning] [rtx.optixdenoising.plugin] Using OptiX denoiser with `normals` pass requires driver v440 or newer

Where can I check the driver version? I am not sure which driver the warning refers to. nvidia-smi outputs:

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 515.48.07    Driver Version: 515.48.07    CUDA Version: 11.7     |
|-------------------------------+----------------------+----------------------+

The denoiser appears to work on another computer (running inside docker as well).
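
For reference, the Driver Version field reported by nvidia-smi (515.48.07 here) should be the version the warning refers to, and it is well above v440. On Linux it can also be read directly from the loaded kernel module, assuming a standard NVIDIA driver install:

cat /proc/driver/nvidia/version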

@WendyGram Is there any update on this? I am having exactly the same problem on the cloud.
We are running with this image, which uses NVIDIA driver 535.183.01: bottlerocket-aws-k8s-1.29-nvidia-x86_64-v1.20.4-b6163b2a

The hardware is g5.2xlarge, which has an A10G graphics card.

Any help would be greatly appreciated!

Hi Edward and Michal.
To resolve OptiX failures in Docker containers, please follow these steps:

In addition to the NVIDIA Container Toolkit v1.15.0+, please make sure you are running with the NVIDIA runtime and have specified NVIDIA_DRIVER_CAPABILITIES=all:

  1. Configure the container runtime by using the nvidia-ctk command:
sudo nvidia-ctk runtime configure --runtime=docker
  2. Restart the Docker daemon:
sudo systemctl restart docker
  3. Start the container requesting --runtime=nvidia and NVIDIA_DRIVER_CAPABILITIES=all, plus other desired options.

For example:

docker run -it --rm --runtime=nvidia --gpus all --name=xyz -e NVIDIA_DRIVER_CAPABILITIES=all <image>

@nnikfetrat I think @edward.schneeweiss is likely talking about a Kubernetes deployment based on his AMI

You need to map it from disk; it's a driver issue:

-v /usr/share/nvidia/nvoptix.bin:/usr/share/nvidia/nvoptix.bin
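
Putting that together with the runtime flags from the earlier post, a full command might look like this (using the Isaac Sim image mentioned earlier in the thread as an example; adjust the image and container name to your setup):

docker run -it --rm --runtime=nvidia --gpus all --name=xyz \
  -e NVIDIA_DRIVER_CAPABILITIES=all \
  -v /usr/share/nvidia/nvoptix.bin:/usr/share/nvidia/nvoptix.bin \
  nvcr.io/nvidia/isaac-sim:2022.1.1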

Missing mount og nvoptix.bin from libnvidia-gl-535 · Issue #127 · NVIDIA/nvidia-container-toolkit · GitHub

Thanks for responding @Richard3D! Locally I can mount /usr/share/nvidia/nvoptix.bin, but on the cloud I cannot find nvoptix.bin. I'm using the Bottlerocket image recommended for OV Farm (NVIDIA driver 535.183.01): bottlerocket-aws-k8s-1.29-nvidia-x86_64-v1.20.4-b6163b2a
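
On an Ubuntu host the file ships with the driver's GL package (per the GitHub issue linked above), so it can be located with something like:

dpkg -L libnvidia-gl-535 | grep nvoptix

but I could not find an equivalent location on the Bottlerocket node.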

Any idea what is wrong?

@Richard3D @edward.schneeweiss I could not find that mount either when looking within that specific AMI. Is there another AMI that NVIDIA recommends that comes with OptiX pre-installed?

The best I can do is forward you to our official docs on Containers:
Installing the NVIDIA Container Toolkit — NVIDIA Container Toolkit 1.16.0 documentation

@edward.schneeweiss you should try a different AMI such as amazon-eks-gpu-node-1.29-v20240703 (the latest AL2 EKS-optimized AMI with GPU support for K8s 1.29) instead of Bottlerocket. Even though Bottlerocket has the NVIDIA Container Toolkit pre-installed, this other AMI worked for me out of the box and is also mentioned in the Omniverse Farm docs.

Thank you so much @jt122, that fixed the issue! I didn't even need to mount nvoptix.bin!

Great to hear!
