FLUX.2 Workflow Loading Failure (ComfyUI Portable)

Operating System:
Windows

GPU Hardware:
A series (Blackwell)

GPU Driver:
Latest (581.80)

I am following this FLUX.2 guide and trying to use the model with my RTX Pro 6000 (Blackwell).

I have run into serious issues; please help if you can.


Environment & Goal

  • System/Setup: ComfyUI Portable (Windows) on an NVIDIA RTX Pro 6000 GPU.

  • Goal: Load and run the official Black Forest Labs FLUX.2 model.

  • File Status: All necessary model files are downloaded and confirmed to be in their correct model subfolders:

    • diffusion_models/: flux2-dev.safetensors

    • vae/: ae.safetensors

    • text_encoders/: 10 sharded files (model-00001-of-00010.safetensors…) + model.safetensors.index.json
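
A quick sanity check for the sharded encoder is to compare the files on disk against the index. The sketch below is a hypothetical helper; it assumes the standard Hugging Face sharded-safetensors index format, where model.safetensors.index.json carries a weight_map of tensor name → shard filename:

```python
import json
from pathlib import Path

def missing_shards(encoder_dir: str) -> list:
    """Return shard files named in model.safetensors.index.json
    that are absent from the given text_encoders directory."""
    d = Path(encoder_dir)
    index = json.loads((d / "model.safetensors.index.json").read_text())
    # weight_map maps tensor name -> shard filename (Hugging Face convention)
    shards = sorted(set(index["weight_map"].values()))
    return [s for s in shards if not (d / s).exists()]
```

Running it against text_encoders/ should return an empty list when all ten shards are present.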


Chronological Troubleshooting Summary

Phase 1: Initial Blocking

The original workflow failed immediately because the FLUX.2 Text Encoder is split into 10 sharded files, which require custom assembly logic.

  • Required Fix: Install the specific FLUX custom nodes (Load Diffusion Model (FLUX), Load CLIP (FLUX), etc.).

  • Blocker: The ComfyUI Manager is blocked from installing new custom nodes via Git URL due to a security/portable config restriction (Error: This action is not allowed with this security level configuration).

  • Result: The model cannot be loaded; the standard loader raises RuntimeError: mat1 and mat2 shapes cannot be multiplied... when it tries to process the sharded Text Encoder.
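
For reference, the Manager's restriction is governed by the security_level key in its config.ini. Below is a minimal sketch for lowering it programmatically; the [default] section and key name follow ComfyUI-Manager's config convention, but the exact file location on a portable install (e.g. under ComfyUI/user/default/ComfyUI-Manager/) is an assumption. Lowering the level weakens security, so it should be restored after installing:

```python
import configparser

def set_security_level(config_path: str, level: str = "weak") -> str:
    """Set ComfyUI-Manager's security_level (e.g. 'strong', 'normal', 'weak').
    Section/key names follow ComfyUI-Manager's config.ini convention;
    the file path on a portable install is an assumption - verify it locally."""
    cfg = configparser.ConfigParser()
    cfg.read(config_path)
    if not cfg.has_section("default"):
        cfg.add_section("default")
    cfg.set("default", "security_level", level)
    with open(config_path, "w") as f:
        cfg.write(f)
    return level
```

After editing, ComfyUI must be restarted for the Manager to pick up the new level.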

Phase 2: Workaround (Bypassing Custom Nodes)

To bypass the installation block, the three custom FLUX loader nodes were deleted and replaced with a single, generic Load Checkpoint node.

  • Blocker: The Load Checkpoint node exhibited a bug where its dropdown was blank/empty and would not list the available flux2-dev.safetensors file, despite it being correctly placed.

  • Fix Applied (Manual JSON Edit): After extensive searching, the saved workflow JSON was located in a temporary folder. The file was manually edited to hardcode the model name:

    • Changed "widgets_values": [null] to "widgets_values": ["flux2-dev.safetensors"] in the Load Checkpoint node block.

  • Fix Applied (Connections): After loading the edited JSON, the model was manually connected: MODEL → BasicGuider, CLIP → CLIP Text Encode, and VAE → two VAE Encode nodes (as shown in the final workflow image).
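
The widgets_values edit can also be scripted rather than done by hand. This sketch assumes the exported workflow JSON keeps its nodes in a top-level nodes array and that the built-in Load Checkpoint node's internal type is CheckpointLoaderSimple (worth verifying against your own file):

```python
import json
from pathlib import Path

def hardcode_checkpoint(workflow_path: str, ckpt_name: str) -> int:
    """Hardcode the checkpoint filename into every Load Checkpoint node
    in a saved workflow JSON. Returns how many nodes were patched.
    'CheckpointLoaderSimple' is assumed to be the built-in node's type."""
    p = Path(workflow_path)
    wf = json.loads(p.read_text())
    patched = 0
    for node in wf.get("nodes", []):
        if node.get("type") == "CheckpointLoaderSimple":
            node["widgets_values"] = [ckpt_name]
            patched += 1
    p.write_text(json.dumps(wf, indent=2))
    return patched
```

Re-opening the patched JSON in ComfyUI should then show the filename in the dropdown even if the directory scan fails.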

Phase 3: Current State (Final Error)

With the workflow fully connected and pointing to the model file, the execution stopped at the SamplerCustomAdvanced step, generating the definitive error in the console:

KeyError: 'text_model.encoder.layers.0.mlp.c_fc.weight'


Conclusion & Request for Assistance

The KeyError confirms that the standard Load Checkpoint node, even when manually configured, cannot assemble the sharded Text Encoder weights from the separate files in the text_encoders directory.
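
This matches how sharded checkpoints work: the mapping from a tensor name such as the failing text_model.encoder.layers.0.mlp.c_fc.weight to its shard file lives only in the index's weight_map, which the generic Load Checkpoint node never reads. A toy illustration (the shard assignment below is made up for the example):

```python
import json

# Toy index in the Hugging Face sharded-safetensors format.
# The shard assignment is invented for illustration only.
index = {
    "weight_map": {
        "text_model.encoder.layers.0.mlp.c_fc.weight":
            "model-00001-of-00010.safetensors",
    }
}

def shard_for(tensor_name: str) -> str:
    """Look up which shard file holds a given tensor; a loader that skips
    this lookup cannot find sharded weights and fails with a KeyError."""
    return index["weight_map"][tensor_name]
```

A loader that only opens the single checkpoint file has no weight_map to consult, hence the missing key.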

The system is definitively blocked by the missing custom code.

We request assistance with one of the following:

  1. A manual, command-line, or direct file-copy method for installing the required FLUX custom nodes without using the blocked ComfyUI Manager/Git functionality.

  2. Guidance on a native ComfyUI configuration (e.g., extra_model_paths.yaml or a specialized node) that can correctly load the sharded Text Encoder files alongside the main checkpoint.

Thank you for your expertise. We have exhausted all standard workarounds.

Pekka Varis

I am sorry, but this is not related to NVIDIA Omniverse, so I would suggest asking in the ComfyUI help section. Or you can try asking an AI assistant.

You are correct, sorry. I got help from the NVIDIA & Dell Pro Max Ambassador community and can run FLUX.2 with my RTX Pro 6000 (Blackwell) in ComfyUI now!

Ok thank you.