Workbench Example Project: Hybrid RAG - Stuck at setting up backend polling inference server

Hello NVIDIA team,

I am currently working on the Hybrid Retrieval-Augmented Generation (RAG) quickstart project using NVIDIA AI Workbench. I followed the steps outlined in the documentation, but I encountered an issue during the “Setup RAG Backend” step.

Error Details: At the step where the backend setup was polling the inference server, it got stuck with the following error:

Polling inference server. Awaiting status 200; trying again in 5s.
curl: /opt/conda/lib/libcurl.so.4: no version information available (required by curl)
Max attempts reached: 30. Server may have timed out. Stop the container and try again.
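
For reference, the warning can be reproduced with any curl invocation inside the container, not just during polling, and the library being resolved can be checked like this:

# Any curl call in the container reproduces the warning from the log above
curl --version

# Show which libcurl the dynamic linker actually resolves for the curl binary
ldd "$(which curl)" | grep libcurl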

Steps to Reproduce:

  1. Cloned the hybrid RAG project from the NVIDIA GitHub repo.
  2. Configured the NVCF_RUN_KEY and attempted to set up the backend via the Gradio Chat App.
  3. At the “Set Up RAG Backend” step, the build was triggered but failed with the above error.

System Details:

  • libcurl path: /opt/conda/lib/libcurl.so.4
  • NVIDIA AI Workbench installed on local machine
  • Using conda environment
  • Followed all prerequisite steps as per the quickstart guide

Troubleshooting Attempts:

  • Verified libcurl is installed and correctly linked (commands sketched below).
  • Tried reinstalling libcurl within the conda environment.
  • Checked that /opt/conda/lib is on the LD_LIBRARY_PATH.
  • Attempted to manually link the system version of libcurl.
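
Roughly, those checks looked like this (paths are from my environment):

# List the libcurl copies registered in the loader cache
ldconfig -p | grep libcurl

# Inspect the conda copy that the warning points at
ls -l /opt/conda/lib/libcurl.so.4*

# Confirm /opt/conda/lib is on the library search path
echo $LD_LIBRARY_PATH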

Unfortunately, none of these steps resolved the issue.

Could you please assist in diagnosing and resolving this issue? I am also happy to provide additional logs or details if needed.


The problem here lies in which library libcurl.so.4 resolves to. The conda copy, libcurl.so.4.8, is not the version curl was originally linked against, so curl reports the mismatch with the "no version information available" warning. You must make the system copy, libcurl.so.4.7, take precedence by putting its directory first on LD_LIBRARY_PATH:

export LD_LIBRARY_PATH=/usr/lib/x86_64-linux-gnu:$LD_LIBRARY_PATH
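
You can verify the change took effect before re-running the setup:

# With the system directory first on the search path, the
# "no version information available" warning should be gone
curl --version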


Hi, thanks for reaching out! Do you mind sharing what your runtime logs look like? You can access them at Output > Chat.

I’ll attach a picture of my own logs for reference.

If your logs appear to be progressing similarly to what you see here, your system may simply be starting the inference server normally, just more slowly than expected. If that is the case, you can increase your MAX_ATTEMPTS by editing this line here.
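
For context, the backend polls the server with a loop along these lines. This is a simplified sketch with a placeholder health-check URL, not the project's exact code:

MAX_ATTEMPTS=30   # raise this if your server just needs more time to start
attempt=0
while [ "$attempt" -lt "$MAX_ATTEMPTS" ]; do
    # Placeholder endpoint; the project polls its own inference server URL
    code=$(curl -s -o /dev/null -w "%{http_code}" http://localhost:8000/health)
    [ "$code" = "200" ] && break
    echo "Polling inference server. Awaiting status 200; trying again in 5s."
    sleep 5
    attempt=$((attempt + 1))
done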

If there is another error causing the message you see, that error should similarly be captured in the logs. If so, let us know what you see so we can help address it accordingly. Hope this helps!


Hi,
I’m experiencing the same issue as @Malay_Kumar.
After reviewing the conversation and the solution provided by @bfurtaw, I followed these steps:

I navigated to AI Workbench > Environment > Variable > Add and added a new environment variable with the following details:

Name: LD_LIBRARY_PATH
Value: /usr/lib/x86_64-linux-gnu:$LD_LIBRARY_PATH
After clicking “Add,” it prompted me to restart. However, even after waiting more than half an hour, the issue still persists, and no change is reflected in the chat.
You can refer to the attached screenshot for more details.
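
Is there a way to confirm the variable actually reached the container? From the project's terminal I would expect something like this if it had been applied:

echo $LD_LIBRARY_PATH
# expected to start with /usr/lib/x86_64-linux-gnu if the variable took effect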

Kindly help.

Hello admin @bfurtaw @edwli

I would greatly appreciate any assistance or feedback on the issue I’ve raised. Your help would mean a lot.

Thank you.

What I did was execute this command directly in the container’s bash terminal, and then I ran “Set Up RAG Backend”:

export LD_LIBRARY_PATH=/usr/lib/x86_64-linux-gnu:$LD_LIBRARY_PATH
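
Note that an export in the terminal only lasts for that shell session. To avoid re-running it after each container restart, one option (assuming the container's shell reads ~/.bashrc) is to append it there:

echo 'export LD_LIBRARY_PATH=/usr/lib/x86_64-linux-gnu:$LD_LIBRARY_PATH' >> ~/.bashrc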
