Error when setting up RAG Chat backend

Hi,
I installed NVIDIA AI Workbench and am able to run some projects, so I know the installation is working.
I cloned the hybrid RAG project (GitHub - NVIDIA/workbench-example-hybrid-rag: An NVIDIA AI Workbench example project for Retrieval Augmented Generation (RAG)), built it without errors, and can start JupyterLab. When I start the chat and build the chat backend, I get this error:

curl: /opt/conda/lib/libcurl.so.4: no version information available (required by curl)
Polling inference server. Awaiting status 200; trying again in 5s.
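
For context, the polling message suggests the backend start-up script simply retries a health check until the inference server returns HTTP 200. A minimal sketch of that kind of loop, assuming a hypothetical local health endpoint (the URL, port, and path below are illustrative, not the project's actual values):

```python
import time

import requests  # third-party: pip install requests

# Hypothetical endpoint -- the actual project may poll a different
# host, port, or path for its local inference server.
HEALTH_URL = "http://localhost:8080/health"


def wait_for_inference_server(url: str = HEALTH_URL, interval: float = 5.0) -> None:
    """Poll the inference server until it answers with HTTP 200."""
    while True:
        try:
            if requests.get(url, timeout=2).status_code == 200:
                print("Inference server is ready.")
                return
        except requests.RequestException:
            pass  # server not reachable yet; keep polling
        print(f"Polling inference server. Awaiting status 200; trying again in {interval:.0f}s.")
        time.sleep(interval)


if __name__ == "__main__":
    wait_for_inference_server()
```

If the backend never comes up, a loop like this will print the polling message indefinitely, which matches the behavior described above.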

Hi, were you able to solve this? I am having the exact same issue …

Known issue in the hybrid RAG project, working on a fix now. Feel free to follow along on the main thread. Thanks!

The project has been updated. Details