Manifest profile fails with ManifestDownloadError: Object not found

Hi,

I’m trying to run MAISI on NVIDIA NIM with a fine-tuned bundle (diffusion + controlnet trained on my own data), following the official Model fine-tuning guide.

It fails with both the fine-tuning and the default profile, giving ManifestDownloadError. The full command is the following:

docker run --rm -it --name maisi --runtime=nvidia -e CUDA_VISIBLE_DEVICES=0 -p 8000:8000 -e NGC_API_KEY=$NGC_API_KEY -e NIM_MANIFEST_PROFILE=43d0baebee73b7bdc2f9c7edf5c726b2323b4f7283eab92fab893d451592656d -v /home/areu/PROJECTS/MAISI/tutorials/generation/maisi/models_nim:/opt/bundle nvcr.io/nim/nvidia/maisi:1

And the error (extract) is like this:

Downloading manifest profile: 43d0…5656d
ERROR nim_sdk::hub::repo: Object not found (repeated)
nimlib.exceptions.ManifestDownloadError: Error downloading manifest: Object not found

I already checked that the API key is valid (NGC, Docker, env) and the names of the models in the folder models_nim are:

autoencoder_epoch273.pt
controlnet_3d_rflow.pt
diff_unet_3d_rflow.pt
mask_generation_autoencoder.pt
mask_generation_diffusion_unet.pt

Specifications:

OS: Ubuntu 22.04.5 LTS
Docker: 28.0.1 (build 068a01e)
NVIDIA Container Toolkit CLI: 1.17.5 (f785e908a7f72149f8912617058644fd84e38cde)
GPU Driver: 535.230.02
CUDA: 12.2

I’d be very grateful if anyone could help me with this :)

Hi Areu,

I checked in with the NIM team that handles MAISI.

They were able to reproduce your issue: the manifest expects exact, specific filenames for the .pt files.

It looks like you have autoencoder_epoch273.pt, controlnet_3d_rflow.pt, and diff_unet_3d_rflow.pt; these should be renamed to autoencoder.pt, controlnet.pt, and diffusion_unet.pt.

Here is the list of names that worked for the team:

autoencoder.pt
controlnet.pt
diffusion_unet.pt
mask_generation_autoencoder.pt
mask_generation_diffusion_unet.pt

Even a single character's difference will fail the manifest check and throw the same error you were seeing.
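A minimal sketch of the renames. The scratch directory /tmp/models_nim and the touch lines below are only there to make the demonstration self-contained; with your real weights, run just the mv lines inside your models_nim folder:

```shell
# Demonstration in a scratch directory (hypothetical path).
mkdir -p /tmp/models_nim && cd /tmp/models_nim
touch autoencoder_epoch273.pt controlnet_3d_rflow.pt diff_unet_3d_rflow.pt \
      mask_generation_autoencoder.pt mask_generation_diffusion_unet.pt

# Rename to the exact filenames the manifest expects.
mv autoencoder_epoch273.pt autoencoder.pt
mv controlnet_3d_rflow.pt  controlnet.pt
mv diff_unet_3d_rflow.pt   diffusion_unet.pt

# The two mask_generation_*.pt files already match and stay as-is.
ls
```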

Can you try the above and let us know if you succeed?

Thank you!

Hi Aharpster,

Thank you very much for your response. I did that, but it still raises the same error.

I also succeeded in running it with the default manifest profile, got into the resulting container, inspected the /opt/nim/workspace/models/maisi/bundle folder, and recreated it in a local folder, but still the same error.

Could you please share the folder structure of the /opt/bundle in the container after starting it? I am wondering if there are extra folder levels that cause this issue.

Sure, this is the structure of the bundle folder after running the container with the default manifest profile (default models):

The folder /opt/bundle doesn’t exist.

For the default container, there won't be an /opt/bundle folder.

However, I saw you ran this command to start the container:

docker run --rm -it --name maisi --runtime=nvidia -e CUDA_VISIBLE_DEVICES=0 -p 8000:8000 -e NGC_API_KEY=$NGC_API_KEY -e NIM_MANIFEST_PROFILE=43d0baebee73b7bdc2f9c7edf5c726b2323b4f7283eab92fab893d451592656d -v /home/areu/PROJECTS/MAISI/tutorials/generation/maisi/models_nim:/opt/bundle nvcr.io/nim/nvidia/maisi:1

This manifest profile means you want to mount local fine-tuned models for NIM inference, and the whole bundle should be mounted at the /opt/bundle path. So could you please try to run this command and record the files under the /opt/bundle folder?

Actually, in order to run your local fine-tuned model, you need exactly the same files and folder structure as the default one under /opt/nim/workspace/models/maisi/bundle. The easiest way is to copy the bundle folder from the default container, replace the model weights with yours, and then mount it at the /opt/bundle path.

Thanks for your suggestion, but I cannot see the folder structure inside the container because it can't be started in the first place, so my folder with the models cannot be mounted there. Am I mistaken about this?

Could you please show me what is in the folder /home/areu/PROJECTS/MAISI/tutorials/generation/maisi/models_nim, since you are trying to mount it to the /opt/bundle folder?

Yes, it’s this:

It was renamed from models_nim to models. Also, as I wrote to Aharpster in the first reply:

I also succeeded in running it with the default manifest profile, got into the resulting container, inspected the /opt/nim/workspace/models/maisi/bundle folder, and recreated it in a local folder, but still the same error.

That’s the problem. You cannot just mount the model weights to the /opt/bundle folder when running this NIM with your fine-tuned model. You need to mount a folder with the same content as the /opt/nim/workspace/models/maisi/bundle folder. As I said, the easiest way to do this is:

1. Copy the /opt/nim/workspace/models/maisi/bundle folder from the default container to somewhere outside the container, e.g. /tmp/maisi/bundle
2. Replace the corresponding model weights with your fine-tuned weights
3. Mount the whole folder to the /opt/bundle folder when starting Docker, e.g. -v /tmp/maisi/bundle:/opt/bundle
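The steps above could be sketched roughly like this. The paths, filenames, and image tag are taken from this thread; the internal layout of the bundle is whatever the default container ships, so the find-based overwrite below is an assumption to avoid guessing exact subfolder paths. Adjust everything to your setup before running:

```shell
# 1. Copy the default bundle out of a stopped default container.
mkdir -p /tmp/maisi
docker create --name maisi-tmp nvcr.io/nim/nvidia/maisi:1
docker cp maisi-tmp:/opt/nim/workspace/models/maisi/bundle /tmp/maisi/bundle
docker rm maisi-tmp

# 2. Overwrite each matching default weight in place with the fine-tuned
#    one (wherever it sits inside the copied bundle).
for f in autoencoder.pt controlnet.pt diffusion_unet.pt; do
  find /tmp/maisi/bundle -name "$f" -exec cp "models_nim/$f" {} \;
done

# 3. Mount the whole bundle at /opt/bundle when starting the NIM.
docker run --rm -it --runtime=nvidia -e CUDA_VISIBLE_DEVICES=0 \
  -p 8000:8000 -e NGC_API_KEY=$NGC_API_KEY \
  -e NIM_MANIFEST_PROFILE=43d0baebee73b7bdc2f9c7edf5c726b2323b4f7283eab92fab893d451592656d \
  -v /tmp/maisi/bundle:/opt/bundle nvcr.io/nim/nvidia/maisi:1
```

This requires a working Docker daemon with access to nvcr.io, so it can only be verified on your machine.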