Cannot create AOVs with Python API

test.zip (5.2 KB)
Here are the tests run from ./repo.sh test. If there are better tests to run, let me know.

Come Monday I will do the same test on a machine with a single GPU: a 48 GB Ampere-generation RTX A6000.

So it’s not the scene. It may be something with 32-bit EXRs. Can you try disabling the 32-bit override and just try the default? You need to test the variables here to isolate it. We know that with no AOVs it is fine, and that PNGs are fine. So now let’s look at EXRs and 32-bit EXRs. Maybe also look at compression types. It may also be one specific AOV that is causing the crash.

I will test the same on my 3090 on windows.

So try:

  1. Standard EXRs - no AOVs
  2. 32-bit EXRs - no AOVs
  3. Standard EXRs - with AOVs
  4. 32-bit EXRs - with AOVs

If that provides some clues, then we can look at compression types and specific AOVs.
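The four cases above (and the optional compression variable) can be enumerated mechanically so no combination is skipped; a minimal sketch in plain Python, with the actual render launch left out:

```python
from itertools import product

# Variables to isolate, matching the test plan above.
bit_depths = ["standard", "32bit"]   # standard vs. 32-bit EXR
aov_sets = [[], ["PtZDepth"]]        # no AOVs vs. with AOVs
compressions = ["zip", "none"]       # optional follow-up variable

# Every combination of the variables; run each as its own render.
cases = list(product(bit_depths, aov_sets, compressions))
for bit_depth, aovs, compression in cases:
    print(f"test case: {bit_depth} EXR, AOVs={aovs or 'none'}, compression={compression}")
```

Walking the full matrix like this (rather than flipping one setting at a time by hand) makes it obvious which single variable correlates with the crash.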

The other variable is Linux versus Windows.

Thanks for your patience.

I can certainly try without the 32-bit flag for AOVs in the AOVs tab.

But that ignores the fact that this flag is needed to produce the pass labelled PtZDepth.

Without the 32-bit flag on, the PtZDepth pass is not produced, so functionally it is not acceptable.

I discovered this 6 days ago

This is why I repeatedly asked “how can you produce an 8-bit PtZDepth pass?” on at least three occasions, as an attempt to forgo accuracy and get something functional as a depth pass.

I will do the test now, and if by some miracle the PtZDepth pass is written to disk without corrupting the integrity of the file, then I can have my depth pass and move on.

Ok, so that flag actually does not need to be engaged. Even if you leave it off, it still produces 32-bit EXRs just fine. I just ran a test. That flag is only for seeing the raw 32-bit output in the AOV preview in the viewport. Again, using the astronaut, I am able to produce the entire set of RenderVar AOVs in 32-bit, no problem, including PtZDepth.

Here is a video of my workflow


Here is my Windows test on a 3090, rendering 100 frames of animation of the astronaut file with 32-bit EXRs on and 6 AOVs. Everything rendered fine.

I would certainly look at other hardware factors, like disabling the second card, updating drivers, trying Windows, trying a new machine, etc. I have also tested several other machines. And no memory leaks. And more importantly, no errors at all in the console. Not even warnings.

Thanks Richard. When I was initially testing making PtZDepth, there were a number of steps mentioned in the video tutorial:

1. Adding the AOV via the creation menu to create the render product
2. Setting the camera on the render product
3. Setting Movie Capture to use the render product
4. Setting the output format to OpenEXR
5. Setting the bit depth per channel from 8-bit to 32-bit

I wrongly assumed these ALL had to be done to produce a PtZDepth pass.

Because when I did all of this AND the near and far clipping values matched the scale of my scene, I saw values in the depth pass for the first time.

At this point I had a recipe that matched the video documentation and there was no need to change the recipe

Now that I know the output is 32 bits per channel regardless of that flag’s state, I can see whether it triggers a new code path on Linux with the astronaut over a sequence of 100 frames, and inspecting the fidelity of the PtZDepth pass will tell me if this is functional for my use case.

How much memory was consumed rendering 100 frames?

Given that you have asked me to test with alternative hardware, can you please test with your known-good hardware under Linux?

I would, but I honestly don’t know Linux. I’ve never used it and do not have access to it, but I can have some of the engineers test it next week.

Yes, all of those steps you mentioned are no longer required. You simply start a fresh scene, right-click, choose “Create AOV”, and that is it. You are done. Just render.


In that case, a TODO would be to update the documentation and remove the deprecated steps.

In your videos, on the first frame, the Render Product drop-down is already populated with /Render/RenderVar.

Is that done automatically when you use the menu “Create > AOV”?

Because this step is manual under Linux.
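For what it’s worth, the /Render/RenderVar path suggests that the menu authors standard UsdRender prims on the stage. A rough sketch of what those prims might look like in .usda (the prim names, sourceName value, and camera path here are assumptions; attribute names follow the UsdRender schema):

```usda
def Scope "Render"
{
    def RenderVar "RenderVar"
    {
        string sourceName = "PtZDepth"
    }

    def RenderProduct "RenderProduct"
    {
        rel camera = </World/Camera>
        rel orderedVars = </Render/RenderVar>
        token productName = "output.exr"
    }
}
```

If the menu authors these automatically, inspecting the stage after “Create > AOV” on each OS would show whether the Linux build is skipping one of them.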

logs_20250622_samh.zip (93.2 KB)
stdout_20250622_samh.zip (6.8 KB)

Still leaking memory without 32-bit OpenEXR specified

See video attached

Now testing on the same hardware booted into Windows.

PS C:\Users\sam> nvidia-smi pci -gErrCnt
GPU 0: NVIDIA GeForce RTX 3090 (UUID: GPU-3c6a2629-8def-c540-f517-f608559bc76b)
    REPLAY_COUNTER:          0
    REPLAY_ROLLOVER_COUNTER: 0
    L0_TO_RECOVERY_COUNTER:  80
    CORRECTABLE_ERRORS:      0
    NAKS_RECEIVED:           0
    RECEIVER_ERROR:          0
    BAD_TLP:                 0
    NAKS_SENT:               0
    BAD_DLLP:                0
    NON_FATAL_ERROR:         0
    FATAL_ERROR:             0
    UNSUPPORTED_REQ:         0
    LCRC_ERROR:              0
    LANE_ERROR:
         lane  0: 0
         lane  1: 0
         lane  2: 0
         lane  3: 0
         lane  4: 0
         lane  5: 0
         lane  6: 0
         lane  7: 0
         lane  8: 0
         lane  9: 0
         lane 10: 0
         lane 11: 0
         lane 12: 0
GPU 1: NVIDIA GeForce RTX 3090 (UUID: GPU-f6b69e65-5bc8-f1bb-dd15-10b2b7f1d4e6)
    REPLAY_COUNTER:          0
    REPLAY_ROLLOVER_COUNTER: 0
    L0_TO_RECOVERY_COUNTER:  285
    CORRECTABLE_ERRORS:      0
    NAKS_RECEIVED:           0
    RECEIVER_ERROR:          0
    BAD_TLP:                 0
    NAKS_SENT:               0
    BAD_DLLP:                0
    NON_FATAL_ERROR:         0
    FATAL_ERROR:             0
    UNSUPPORTED_REQ:         0
    LCRC_ERROR:              0
    LANE_ERROR:
         lane  0: 0
         lane  1: 0
         lane  2: 0
         lane  3: 0
         lane  4: 0
         lane  5: 0
         lane  6: 0
         lane  7: 0
         lane  8: 0
         lane  9: 0
         lane 10: 0
         lane 11: 0
         lane 12: 0

Same hardware

It leaks memory in Windows 11 identically to the way it leaks memory in Ubuntu 24.04.2

see video

and logs
omniverse_107.3_install_run_logs.zip (59.2 KB)

Whoops, that video is the warm-up video

Now for the season finale, with the memory leak and the inability to cancel the job

Note the lack of errors in the log file

I look forward to the engineer’s report into this memory leak.

@Richard3D

Can you honestly tell me that you rendered 100 frames of the astronaut scene, with 64 samples per pixel, 3 time samples, and a shutter of -0.25 to 0.25, and only used a small amount of system memory, under Windows 11 with Omniverse Composer built from the Omniverse Kit repository?

Because it allocated on the order of 20 gigabytes of system memory within rendering two frames, gaining memory with each time sample within each frame.

Meaning the amount of system memory allocated at the end of 100 frames would be on the order of 1 terabyte of system memory.
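The arithmetic behind that projection, assuming the growth stayed linear over the whole run:

```python
# Observed: roughly 20 GB of system memory allocated across the first 2 frames.
gb_per_frame = 20 / 2                # ~10 GB per frame if growth is linear
projected_gb = gb_per_frame * 100    # extrapolated to the full 100 frames
print(f"projected: {projected_gb:.0f} GB (~{projected_gb / 1000:.0f} TB)")
# → projected: 1000 GB (~1 TB)
```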

My machine doesn’t have that kind of resource

Can you report back with a screenshot of the memory allocation over 15 minutes of rendering on your machine?

A simple video of the Task Manager with the memory highlight would be sufficient
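A Task Manager video works, but logged numbers are easier to compare across machines. A minimal sketch of a memory logger using only the Python standard library (Linux-style, reading /proc; the renderer’s PID is an assumption you would supply, and on Windows, Task Manager or `tasklist` plays the same role):

```python
import os
import time

def rss_mib(pid: int) -> float:
    """Resident set size of a process in MiB, read from /proc (Linux only)."""
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1]) / 1024.0  # VmRSS is reported in kB
    return 0.0

def log_memory(pid: int, interval_s: float = 5.0, samples: int = 180) -> list[float]:
    """Sample a process's RSS at a fixed interval (defaults cover ~15 minutes).
    A series that keeps rising over a long run suggests a leak; one that rises
    and then plateaus is consistent with intentional caching."""
    readings = []
    for _ in range(samples):
        readings.append(rss_mib(pid))
        time.sleep(interval_s)
    return readings

# Example: watch this process itself (substitute the renderer's PID).
print(log_memory(os.getpid(), interval_s=0.1, samples=3))
```

Logging like this would settle the leak-versus-cache question with data rather than with two people watching two different Task Managers.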

Because we are on the same operating system with similar hardware and seeing different results, there must be a measurement error.

@Richard3D

Looking here

I can see you have only 10 samples per pixel, not 64 samples per pixel

No motion blur, not motion blur enabled

No time samples, not three time samples

Please compare like with like

Assuming that my test case and your test case are equivalent is a mistake

If you make your settings equivalent to my test case, I presume you will see a large memory leak

If this is not the case, it will come down to a more detailed comparison of the hardware and runtime environment

Ok, I can add some more samples and try to match those settings. I will run some more tests, but I don’t see it making any difference. The difference between 10 samples and 64 samples is just more time; it does not allocate any more memory. Only a bigger scene would allocate more memory. Same with motion blur and the other settings: they may add processing time but not memory. This is a tiny scene. I have rendered projects 10x as complex on the same computer and it’s stable.

But I can do another test and show you the task manager.

“Meaning the amount of system memory allocated at the end of 100 frames would be on the order of 1 terabyte of system memory” - this is not a linear relationship. Most programs grow their memory footprint dynamically with changing conditions to fill approximately 80-90% of the RAM available. That is perfectly normal. A good example is After Effects: when you start rendering frames it climbs quickly from 10% to 90% and then stabilizes at 90%.

I just want to clarify that memory increasing like that over time is not necessarily a memory leak. It could be intentional program caching. We are quite RAM hungry, I acknowledge that. We always have been. We want to squeeze the most out of the system resources. But even if it goes toward 99%, that does not mean that we should crash. High memory, but stable.

If you want, just render 1000 frames and you will see that it will not, and cannot, use more system memory than you have. It will just grow to 99% max and stabilize.


Here is a long 20-minute video of me rendering the first 15 frames of the astronaut with EXRs and full AOVs, with the exact settings you asked for: 64 samples, 3 motion-blur samples, etc. As expected it was slower, but still very stable. And as I said, the RAM ramped up linearly to 99%. High, but stable. I also showed that I could cancel the rendering successfully, and the program was stable afterwards.

So as mentioned, if you are crashing that is not good, but we can look at some factors that relate to your particular system. The software itself is sound.
