Docker image ephemeral storage requirements are huge


I am rendering a scene with many objects and a very large number of texture pixels.

As a result, I need a lot of ephemeral storage for my pods in Kubernetes.

Is there a way I can map this to a persistent volume claim shared by many pods?

Or is there a way to use a nucleus cache?

I am all ears, because we do not have a lot of ephemeral storage, but we do have lots of GPUs.

It would be great if a number of GPUs could share the texture cache together.
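One way to sketch the shared-cache idea is a `ReadWriteMany` persistent volume claim mounted at the cache directory of every render pod. This is only an illustration under stated assumptions: the names, `storageClassName`, image, and cache path below are placeholders, not values from this thread, and `ReadWriteMany` requires a storage class that actually supports it (e.g. NFS or CephFS).

```yaml
# Hypothetical sketch: a shared RWX PVC mounted as the texture-cache
# directory in each render pod. All names and paths are placeholders.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: texture-cache
spec:
  accessModes: ["ReadWriteMany"]       # needs an RWX-capable storage class
  storageClassName: nfs-client         # placeholder
  resources:
    requests:
      storage: 500Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: kit-render
spec:
  containers:
    - name: kit
      image: my-kit-app:latest         # placeholder image
      volumeMounts:
        - name: texture-cache
          mountPath: /root/.cache/ov   # assumed cache location, adjust to your app
  volumes:
    - name: texture-cache
      persistentVolumeClaim:
        claimName: texture-cache
```

Whether concurrent writers to one cache directory are safe depends on the application's cache implementation, so this would need testing before relying on it.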

Oh and while we are on the topic of textures, how do I use pre-mip-mapped textures in Omniverse?

It seems that most of this issue could be solved by using tiled OpenEXR textures and looking up pixels through an OpenImageIO memory/disk-based texture cache, because we do not need the highest-resolution textures for what I am doing.
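The savings from pre-mip-mapped, tiled textures come from reading only the mip level that matches the on-screen footprint instead of the full-resolution image. A minimal sketch of that level selection, with a hypothetical helper (not any OpenImageIO or Omniverse API), assuming power-of-two textures where each mip level halves the resolution:

```python
import math

def mip_level(full_res: int, needed_res: int) -> int:
    """Pick the coarsest mip level whose resolution still covers the
    on-screen footprint. Hypothetical helper for illustration only."""
    if needed_res >= full_res:
        return 0  # full resolution required
    # Each mip level halves the resolution, so the level is log2 of the ratio.
    level = int(math.floor(math.log2(full_res / needed_res)))
    max_level = int(math.log2(full_res))  # coarsest (1x1) level, power-of-two assumed
    return min(level, max_level)

# An 8K texture covering roughly 1K pixels on screen only needs level 3
# (8192 -> 4096 -> 2048 -> 1024), a ~64x reduction in pixels read.
print(mip_level(8192, 1024))  # 3
print(mip_level(8192, 8192))  # 0
```

With tiled EXR/.tx files, a texture cache can then page in just the tiles of that one level, which is why the ephemeral-storage footprint drops so sharply.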

Each server in the pod will have its own GPUs and its own copy of the Kit app or Kit code you are running. Each time you run your files on a server, it should build up a large texture cache on that server's drive. HOWEVER, I am not sure how this applies to containers and pods. It is my understanding that all of this cache is temporary and is destroyed every time you close the pod.

Do you have to run each server instance in a container? Can you not run each server as its own Replicator directly? Then it would store its texture cache in a normal local hard-drive folder.

The cluster I am using is Kubernetes-based, so it must run in a container.

This is infrastructure as code, not a desktop application.

Are you an Enterprise Customer with NVIDIA support?

I do not think so.