[SUPPORT] Workbench Example Project: Agentic RAG

6/4/2025

Updated with local NIM support via the compose feature in AI Workbench

Works perfectly now, thanks a lot Edward!

08/04 - Update default NV-Embed-QA embedding model endpoint to nvidia/nv-embedqa-e5-v5

When I try to upload files or URLs for RAG, I receive this error:

—DECISION: GENERATION ADDRESSES QUESTION—

Failed to send telemetry event ClientStartEvent: capture() takes 1 positional argument but 3 were given

Failed to send telemetry event ClientCreateCollectionEvent: capture() takes 1 positional argument but 3 were given

[upload] Vectorstore creation failed: [504] Gateway Timeout

{'_content': b'', '_content_consumed': True, '_next': None, 'status_code': 504, 'headers': {'Date': 'Tue, 05 Aug 2025 16:41:40 GMT', 'Transfer-Encoding': 'chunked', 'Connection': 'keep-alive', 'Access-Control-Allow-Credentials': 'true', 'Access-Control-Expose-Headers': 'nvcf-reqid', 'Nvcf-Reqid': '2aa7a6bf-8182-4375-bb83-49f50be9a389', 'Nvcf-Status': 'errored', 'Vary': 'Origin, origin, access-control-request-method, access-control-request-headers'}, 'raw': <urllib3.response.HTTPResponse object at 0x74110702bd30>, 'url': 'https://ai.api.nvidia.com/v1/retrieval/nvidia/embeddings', 'encoding': None, 'history': [], 'reason': 'Gateway Timeout', 'cookies': <RequestsCookieJar>, 'elapsed': datetime.timedelta(seconds=302, microseconds=543075), 'request': <PreparedRequest [POST]>, 'connection': <requests.adapters.HTTPAdapter object at 0x7410fcb1f6a0>}
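Since a 504 is a server-side timeout, transient failures can sometimes be ridden out with a client-side retry and exponential backoff. A minimal sketch (illustrative only, not the project's actual upload code; `call` is a hypothetical callable that returns a `(status_code, body)` tuple):

```python
import time

def retry_with_backoff(call, retries=4, base_delay=1.0, retryable=(502, 503, 504)):
    """Invoke `call()` until it returns a non-retryable status or retries run out.

    `call` must return a (status_code, body) tuple; gateway-type errors
    (502/503/504) trigger a retry with exponentially growing delays.
    """
    status, body = call()
    for attempt in range(1, retries):
        if status not in retryable:
            break
        time.sleep(base_delay * (2 ** (attempt - 1)))
        status, body = call()
    return status, body
```

Retries only help if the failures are transient, though; if the endpoint itself has been retired, the project configuration has to point at a live one.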

The https://ai.api.nvidia.com endpoint appears to no longer exist.

Any suggestions?

Thanks!

08/04 - Update default NV-Embed-QA embedding model endpoint to nvidia/nv-embedqa-e5-v5

Looks like the default embedding model endpoint has been removed. The project has since been updated to a new embedding model endpoint. Can you pull down the most recent changes and try again? Thanks!

Hi,

I’m having an error in the initial build process:

[16/19] RUN pip install --user -r /opt/project/build/requirements.txt:

836.1 status = run_func(*args)

836.1 File "/usr/lib/python3/dist-packages/pip/_internal/cli/req_command.py", line 205, in wrapper

836.1 return func(self, options, args)

836.1 File "/usr/lib/python3/dist-packages/pip/_internal/commands/install.py", line 389, in run

836.1 to_install = resolver.get_installation_order(requirement_set)

836.1 File "/usr/lib/python3/dist-packages/pip/_internal/resolution/resolvelib/resolver.py", line 188, in get_installation_order

836.1 weights = get_topological_weights(

836.1 File "/usr/lib/python3/dist-packages/pip/_internal/resolution/resolvelib/resolver.py", line 276, in get_topological_weights

836.1 assert len(weights) == expected_node_count

836.1 AssertionError

------

ERROR: failed to build: failed to solve: process "/bin/bash -c pip install --user -r /opt/project/build/requirements.txt" did not complete successfully: exit code: 2

1 Like

I’m having the same error.

[16/19] RUN pip install --user -r /opt/project/build/requirements.txt:

44.50 weights = get_topological_weights(

44.50 File "/usr/lib/python3/dist-packages/pip/_internal/resolution/resolvelib/resolver.py", line 276, in get_topological_weights

44.50 assert len(weights) == expected_node_count

44.50 AssertionError

------

Containerfile:61

--------------------

60 |

61 | >>> RUN pip install --user \

62 | >>> -r /opt/project/build/requirements.txt

63 |

--------------------

ERROR: failed to build: failed to solve: process "/bin/bash -c pip install --user -r /opt/project/build/requirements.txt" did not complete successfully: exit code: 2

The README doesn’t describe how the two different endpoints are used. I looked through the agentic_rag docs, but don’t see it. There is a diagram for the intermediate mode, but it doesn’t tell me which conceptual model is used at any of the stages.

  • NVIDIA_API_KEY
  • TAVILY_API_KEY
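Both keys need to be present in the project environment before the endpoints can be reached; a minimal presence check along these lines (a sketch; only the two variable names above come from the project) can rule out a missing or empty key:

```python
import os

REQUIRED_KEYS = ("NVIDIA_API_KEY", "TAVILY_API_KEY")

def missing_keys(env=os.environ):
    """Return the required API-key variables that are unset or empty."""
    return [key for key in REQUIRED_KEYS if not env.get(key)]
```

Running this inside the project container (where AI Workbench injects the secrets) tells you at a glance which key, if any, never made it into the environment.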

This comment

Same error as shown by others above.

Host system: Mac

Hi

Same issue on DGX Spark:

[16/19] RUN pip install --user -r /opt/project/build/requirements.txt:

44.50 weights = get_topological_weights(

44.50 File "/usr/lib/python3/dist-packages/pip/_internal/resolution/resolvelib/resolver.py", line 276, in get_topological_weights

44.50 assert len(weights) == expected_node_count

44.50 AssertionError

------

Containerfile:61

--------------------

60 |

61 | >>> RUN pip install --user \

62 | >>> -r /opt/project/build/requirements.txt

63 |

--------------------

ERROR: failed to build: failed to solve: process "/bin/bash -c pip install --user -r /opt/project/build/requirements.txt" did not complete successfully: exit code: 2

==> SOLVED BY:

Add these lines to the preBuild.bash file (edit it from the AI Workbench console):

# Upgrade pip to fix resolver issues

pip install --upgrade pip

If that's not enough, add this as the first line of requirements.txt:

pip>=23.3

=> ISSUE SOLVED

I hope the same works for you!
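Put together, a preBuild.bash along these lines (a sketch; the file name and the pip fix come from the post above, while `set -e` and the `python3 -m pip` invocation are my additions) runs before the requirements install and sidesteps the old resolver:

```shell
#!/bin/bash
# preBuild.bash (sketch): upgrade pip before `pip install -r requirements.txt`
# runs, so the newer resolver avoids the get_topological_weights AssertionError.
set -e
python3 -m pip install --upgrade "pip>=23.3"
```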

2 Likes

Inserting the line above as the first line in preBuild.bash fixed it for me.

1 Like

I updated the GitHub issue as well, in case you are following the instructions at DGX Spark | Try NVIDIA NIM APIs.

(11/04) - Bump pip version and base container version tag.

Thanks for the catch, all. Looks like an out-of-date pip version. A simple version bump has been pushed.

I've tried everything and I'm still getting a 403 error.

I used echo $(EACH KEY) in the CLI to confirm the keys are set ("nvapi…"). Double-checked the NVIDIA key is set for public endpoints. Generated a second key. Tried generating the key for a specific model. Restarted the container and rebuilt it. Then re-cloned the git repo and did everything again.

But still 403 for me. Any ideas out there?

File "/home/workbench/.local/lib/python3.10/site-packages/langchain_nvidia_ai_endpoints/_common.py", line 462, in _try_raise

raise Exception(f"{header}\n{body}") from None

Exception: [403] Forbidden

Authorization failed

Hi, a 403 typically means you have authenticated properly via the API key (i.e. it is a valid key), but that the permissions set for that key are insufficient.

Ensure you included the "Public API Endpoints" permission when you generated the key, if it was generated on NGC.

If it was generated from a model card on build.nvidia.com, ensure the Org selected under your profile dropdown menu has public API key access (the badge should show green with a checkmark instead of gray).
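The distinction above between "bad key" and "valid key, wrong permissions" can be captured in a small triage helper (illustrative only; the mapping reflects the explanation in this thread, not an official API error table):

```python
def diagnose_auth_status(status_code):
    """Map an HTTP auth-related status code to its likely cause, per the notes above."""
    causes = {
        401: "Unauthorized: the API key is missing or invalid.",
        403: "Forbidden: the key is valid but lacks the 'Public API Endpoints' permission.",
        404: "Not Found: the endpoint or model name is wrong or has been removed.",
    }
    return causes.get(status_code, f"Unexpected status {status_code}; check the response body.")
```

So the [403] Forbidden above points at key permissions (NGC permission scope or Org access on build.nvidia.com) rather than at the key string itself.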

01/15/2026

  • Updated model selection to include Qwen 253B
  • Updated Llama 70B from 3.1 to 3.3
  • Updated documents web links list to reflect the current AI Workbench documentation sites