| Topic | Replies | Views | Activity |
| --- | --- | --- | --- |
| Request: Implement “429 [error_message] – avoid rate limit errors by adding a plan-before-execution guideline to AGENTS.md” & Limit RPM to 10 | 0 | 0 | April 21, 2026 |
| Kimi K2.6 | 3 | 210 | April 21, 2026 |
| URGENT: GLM-5 Deprecation (April 20, 2026) — Replacement z-ai/glm-5.1 Not Available in NIM API | 5 | 732 | April 21, 2026 |
| Issue with --gpu-memory-utilization — parameter seems ignored | 0 | 14 | April 20, 2026 |
| Issue running NIM Llama 3.1 8B in air‑gapped environment: corrupted output on chat/completions | 3 | 55 | April 20, 2026 |
| Give us qwen 3.6 | 0 | 64 | April 19, 2026 |
| Request for NVIDIA NIM API Rate Limit Increase (40 → 200 RPM) – OpenClaw Agent Development | 1 | 61 | April 19, 2026 |
| Rate Limit Increase Request: Developer Research & Model Fine-Tuning | 12 | 304 | April 18, 2026 |
| Please add more deepseek models and fix a issue that exists with deepseek 3.2 | 1 | 119 | April 17, 2026 |
| BAN OPENCLAW | 4 | 258 | April 17, 2026 |
| Request for Exception: API Rate Limit Increase for NVIDIA NIM Access | 1 | 128 | April 16, 2026 |
| Request for Rate Limit Increase – NVIDIA NIM (OpenClaw AI Assistant) | 1 | 133 | April 16, 2026 |
| Request to Increase API Rate Limit for AI Customer Service Chatbot | 1 | 66 | April 16, 2026 |
| Request: Replace GLM-5 with GLM-5.1 from Z-ai on NIM | 8 | 251 | April 15, 2026 |
| Bug Report: NVIDIA NIM Hosted Endpoint Reliability Issues - bugs requiring extensive client-side workarounds | 3 | 142 | April 14, 2026 |
| Request: Replace GLM-5 with GLM-5.1 from Z-ai on NIM | 5 | 2985 | April 14, 2026 |
| Minimax-M2.7 Error | 2 | 169 | April 14, 2026 |
| Request: Add GLM 5.1 from Z-ai on NIM | 8 | 486 | April 13, 2026 |
| Default reasoning effort for `openai/gpt-oss-120b`? | 1 | 61 | April 11, 2026 |
| Whitespaces issues after 40%~ context on GLM5 and Kimi K2.5 | 4 | 96 | April 10, 2026 |
| Reliability issues across glm5, kimi-k2.5, and minimax-m2.5; temporary mitigations exist but a permanent fix is needed | 2 | 253 | April 10, 2026 |
| Is meta/llama-4-scout-17b-16e-instruct disabled for api? | 0 | 34 | April 6, 2026 |
| Llama-4-scout-17b-16e-instruct not accessible through API | 2 | 74 | April 6, 2026 |
| Api url not working | 3 | 785 | April 2, 2026 |
| How to use tool calling with the NIM api in the cloud | 4 | 109 | March 30, 2026 |
| Minimaxai/minimax-m2.5 leaks reasoning into choices[0].message.content on /v1/chat/completions; larger max_tokens only masks it | 0 | 120 | March 26, 2026 |
| Moonshotai/kimi-k2.5 on Hosted Integrate returns success-shaped failures: repeated HTTP 200 responses with unusable content | 0 | 110 | March 26, 2026 |
| Z-ai/glm5 on Hosted Integrate appears unhealthy: repeated direct requests stall with zero bytes returned | 0 | 148 | March 26, 2026 |
| Hosted Integrate /v1/responses returns 404 across multiple models while /v1/models and /v1/chat/completions work | 0 | 205 | March 25, 2026 |
| VisionAI deployment using Nvidia NIM | 0 | 21 | March 25, 2026 |