Hello,
I’m facing an issue with the DiffDock and AlphaFold2 NIM endpoints. DiffDock was working fine when I checked on Friday, but it’s failing now. I’m seeing the same error for the AlphaFold2 endpoint as well. I’ve added the error message below for reference.
Error:
{"type":"urn:inference-service:problem-details:internal-server-error", "title":"Internal Server Error", "status":500, "detail":"Inference error"}
Please let me know if this is a known issue or if anything changed recently.
Thanks.
Hello,
Thanks for bringing this up — I’m experiencing something very similar on my end as well.
Both the DiffDock and AlphaFold2 NIM endpoints returning a 500 “Inference error” suggests this might not be an isolated issue. Since DiffDock was working recently (as of Friday), it’s possible there has been a backend update, model deployment change, or temporary service instability affecting both endpoints.
From a Clinical Research perspective, this kind of disruption can significantly impact workflows, especially when these models are being used for protein structure prediction and drug docking pipelines. It would be helpful to know if there has been any recent change in:
- Model versions or configurations
- API schema or request formats
- Backend infrastructure or inference services
In the meantime, you might want to:
- Double-check request payloads and headers (in case of silent changes)
- Review any recent announcements or changelogs
- Retry with minimal inputs to rule out data-related issues
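If you're scripting those retries, a small helper can separate this specific 500 problem-details payload from other failures (malformed input, auth errors, etc.) so your retry loop only backs off on the server-side error. This is just a sketch; the function name and structure are my own, not from any NIM SDK — the error signature is taken from the response body posted above:

```python
import json

# Failure signature reported for both DiffDock and AlphaFold2 endpoints
KNOWN_ERROR_TYPE = "urn:inference-service:problem-details:internal-server-error"

def is_known_inference_error(body: str) -> bool:
    """Return True if the response body matches the 500 'Internal Server
    Error' problem-details payload seen on both endpoints."""
    try:
        payload = json.loads(body)
    except (json.JSONDecodeError, TypeError):
        return False  # not JSON at all -> some other failure mode
    return (
        payload.get("status") == 500
        and payload.get("type") == KNOWN_ERROR_TYPE
    )

# Example: classify the exact body from the original post
body = (
    '{"type":"urn:inference-service:problem-details:internal-server-error",'
    ' "title":"Internal Server Error", "status":500,'
    ' "detail":"Inference error"}'
)
print(is_known_inference_error(body))  # True -> back off and retry later
```

That way a client-side problem (e.g. a 4xx from a changed request schema) fails fast instead of being retried alongside the backend error.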
Would appreciate any official confirmation from the team on whether this is a known issue or if a fix is in progress.
Thanks!