Curl: no version information available Error

When I try to upload a document I get the following error message:
curl: /opt/conda/lib/libcurl.so.4: no version information available (required by curl)

I am using the latest version of AI Workbench as well as the Hybrid RAG project.
Was this ever tested?

Please tick the appropriate box to help us categorize your post
Bug or Error
Feature Request
Documentation Issue
Other

Hi Max,

Thanks for reaching out. I just pulled the project down to try to replicate this, and it appears the message you are seeing is benign. The project was running nominally for me. Are you running into any issues or errors on the frontend UI?

Here is some sample log output where you can see the curl message periodically in the logs.

2025-07-21T21:25:34.300330Z INFO text_generation_launcher: Server started at unix:///tmp/text-generation-server-0
2025-07-21T21:25:34.397730Z INFO shard-manager: text_generation_launcher: Shard ready in 16.918901974s rank=0
2025-07-21T21:25:34.492511Z INFO text_generation_launcher: Starting Webserver
2025-07-21T21:25:34.599201Z INFO text_generation_router_v3: backends/v3/src/lib.rs:90: Warming up model
2025-07-21T21:25:35.417099Z INFO text_generation_launcher: Cuda Graphs are enabled for sizes [32, 16, 8, 4, 2, 1]
2025-07-21T21:25:36.135545Z INFO text_generation_router_v3: backends/v3/src/lib.rs:102: Setting max batch total tokens to 471453
2025-07-21T21:25:36.135604Z INFO text_generation_router_v3: backends/v3/src/lib.rs:126: Using backend V3
2025-07-21T21:25:36.135655Z INFO text_generation_router::server: router/src/server.rs:1797: Using the Hugging Face API
2025-07-21T21:25:36.135677Z INFO hf_hub: /usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/hf-hub-0.3.2/src/lib.rs:55: Token file not found "/data/token"
2025-07-21T21:25:36.652680Z INFO text_generation_router::server: router/src/server.rs:2515: Serving revision 3b98162e3f97550d62aeeb19ea50208f968c678a of model nvidia/Llama3-ChatQA-1.5-8B
2025-07-21T21:25:38.999901Z INFO text_generation_router::server: router/src/server.rs:1943: Using config Some(Llama)
2025-07-21T21:25:38.999935Z WARN text_generation_router::server: router/src/server.rs:2090: Invalid hostname, defaulting to 0.0.0.0
curl: /opt/conda/lib/libcurl.so.4: no version information available (required by curl)
Polling inference server. Awaiting status 200; trying again in 5s.
2025-07-21T21:25:39.049822Z INFO text_generation_router::server: router/src/server.rs:2477: Connected
curl: /opt/conda/lib/libcurl.so.4: no version information available (required by curl)
Service reachable. Happy chatting!
INFO: 127.0.0.1:50370 - "POST /generate HTTP/1.1" 200 OK
2025-07-21T21:27:21.296160Z INFO text_generation_router_v3::radix: backends/v3/src/radix.rs:108: Prefix 0 - Suffix 570
2025-07-21T21:27:21.433834Z INFO compat_generate{default_return_full_text=true compute_type=Extension(ComputeType("1-nvidia-h100-nvl"))}:generate_stream{parameters=GenerateParameters { best_of: None, temperature: Some(0.7), repetition_penalty: Some(1.0), frequency_penalty: None, top_k: Some(10), top_p: Some(0.999), typical_p: Some(0.95), do_sample: false, max_new_tokens: Some(512), return_full_text: Some(false), stop: , truncate: None, watermark: false, details: true, decoder_input_details: false, seed: None, top_n_tokens: None, grammar: None, adapter_id: None } total_time="139.001694ms" validation_time="1.28671ms" queue_time="153.802µs" inference_time="137.561457ms" time_per_token="13.756145ms" seed="Some(2323640628534126557)"}: text_generation_router::server: router/src/server.rs:645: Success

I was able to see the curl warning in the logs, but the project still ran fine for me with the Cloud inference option and with the Local inference option.

If inference is not working for you, there may be another, more helpful error message in the logs. Let me know if you are able to find it and we can help debug further. Thanks!
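For reference, the "Polling inference server. Awaiting status 200; trying again in 5s." lines in the log come from a simple readiness loop that retries until the router answers with HTTP 200. A minimal Python sketch of that pattern (the function names and the `/health` URL below are my own placeholders, not the project's actual startup script):

```python
import time
import urllib.error
import urllib.request


def wait_for_server(check_ready, interval=5.0, max_tries=60):
    """Retry check_ready() every `interval` seconds until it returns True,
    mirroring the 'Awaiting status 200; trying again in 5s' loop in the logs."""
    for _ in range(max_tries):
        if check_ready():
            return True
        print(f"Polling inference server. Awaiting status 200; "
              f"trying again in {interval:g}s.")
        time.sleep(interval)
    return False


def http_ok(url):
    """True if a GET to `url` returns HTTP 200, False on any failure."""
    try:
        return urllib.request.urlopen(url, timeout=5).status == 200
    except (urllib.error.URLError, OSError):
        return False


# Example (hypothetical endpoint):
# wait_for_server(lambda: http_ok("http://localhost:8080/health"))
```

The check is injected as a callable so the same loop works for any readiness probe; the real script likely shells out to curl instead, which is why the benign libcurl warning appears next to each polling line.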

Hello,

When I run inference, it starts to generate and then suddenly I get an error message:
Server error: Value out of range: -1224647480

return self._loop.run_until_complete(task)

File "/opt/conda/lib/python3.11/asyncio/base_events.py", line 641, in run_until_complete
self.run_forever()
File "/opt/conda/lib/python3.11/asyncio/base_events.py", line 608, in run_forever
self._run_once()
File "/opt/conda/lib/python3.11/asyncio/base_events.py", line 1936, in _run_once
handle._run()
File "/opt/conda/lib/python3.11/asyncio/events.py", line 84, in _run
self._context.run(self._callback, *self._args)
File "/opt/conda/lib/python3.11/site-packages/grpc_interceptor/server.py", line 165, in invoke_intercept_method
return await self.intercept(

File "/opt/conda/lib/python3.11/site-packages/text_generation_server/interceptor.py", line 21, in intercept
return await response
File "/opt/conda/lib/python3.11/site-packages/opentelemetry/instrumentation/grpc/_aio_server.py", line 120, in _unary_interceptor
raise error
File "/opt/conda/lib/python3.11/site-packages/opentelemetry/instrumentation/grpc/_aio_server.py", line 111, in _unary_interceptor
return await behavior(request_or_iterator, context)
File "/opt/conda/lib/python3.11/site-packages/text_generation_server/server.py", line 184, in Decode
return generate_pb2.DecodeResponse(
ValueError: Value out of range: -1224647480
2025-07-25T19:20:38.239491Z ERROR batch{batch_size=1}:decode:decode{size=1}:decode{size=1}: text_generation_router_v3::client: backends/v3/src/client/mod.rs:54: Server error: Value out of range: -1224647480
2025-07-25T19:20:38.240931Z ERROR compat_generate{default_return_full_text=true compute_type=Extension(ComputeType("1-nvidia-geforce-rtx-3090"))}:generate_stream{parameters=GenerateParameters { best_of: None, temperature: Some(0.7), repetition_penalty: Some(1.0), frequency_penalty: None, top_k: Some(10), top_p: Some(0.999), typical_p: Some(0.95), do_sample: false, max_new_tokens: Some(256), return_full_text: Some(false), stop: , truncate: None, watermark: false, details: true, decoder_input_details: false, seed: None, top_n_tokens: None, grammar: None, adapter_id: None }}:async_stream:generate_stream:schedule:infer:send_error: text_generation_router_v3::backend: backends/v3/src/backend.rs:488: Request failed during generation: Server error: Value out of range: -1224647480
Exception in thread Thread-1 (wrapped_llm_predict):
Traceback (most recent call last):
File "/home/workbench/.conda/envs/api-env/lib/python3.10/site-packages/text_generation/client.py", line 259, in generate_stream
response = StreamResponse(**json_payload)
File "/home/workbench/.conda/envs/api-env/lib/python3.10/site-packages/pydantic/main.py", line 253, in __init__
File "/home/workbench/.conda/envs/api-env/lib/python3.10/site-packages/langchain/llms/base.py", line 1053, in _generate
self._call(prompt, stop=stop, run_manager=run_manager, **kwargs)
File "/home/workbench/.conda/envs/api-env/lib/python3.10/site-packages/langchain/llms/huggingface_text_gen_inference.py", line 194, in _call
for chunk in self._stream(prompt, stop, run_manager, **kwargs):
File "/home/workbench/.conda/envs/api-env/lib/python3.10/site-packages/langchain/llms/huggingface_text_gen_inference.py", line 240, in _stream
for res in self.client.generate_stream(prompt, **invocation_params):
File "/home/workbench/.conda/envs/api-env/lib/python3.10/site-packages/text_generation/client.py", line 262, in generate_stream
raise parse_error(resp.status_code, json_payload)
text_generation.errors.GenerationError: Request failed during generation: Server error: Value out of range: -1224647480
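A note on the number itself: the ValueError is raised while constructing `generate_pb2.DecodeResponse`, i.e. protobuf is rejecting a negative value for a field it expects to be non-negative. One plausible (but unconfirmed) reading is that an unsigned 32-bit quantity overflowed and wrapped negative somewhere upstream; reinterpreting the reported value's bit pattern as unsigned shows what it may have been before wrapping:

```python
import ctypes

# The value reported by the server in the error above.
val = -1224647480

# Reinterpret the same 32 bits as an unsigned integer. If an int32
# overflow is the cause (an assumption; the logs alone don't confirm
# where the value came from), this is the pre-wrap value.
as_unsigned = ctypes.c_uint32(val).value
print(as_unsigned)  # → 3070319816
```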