Hello, I’m running the example code from the Advanced Usage — NVIDIA NIM for DiffDock page for running inference with a Bash script. However, even when I use the exact example listed, I still get scores of -1000 in my output.json file.
I’ve tried other variations, including the example on the Getting Started — NVIDIA NIM for DiffDock page.
I’m running the NVIDIA NIM on an AWS g4dn.xlarge instance, and the DiffDock API is up and responding. Is there something else I’m missing?
TIA.
Hi @emily.theis I’ve reached out to the diffdock team and will get back to you as soon as possible.
Best,
Sophie
Hi! Any updates?
Hi @emily.theis, the bug has been prioritised by the DiffDock team and assigned to an engineer to fix.
Best,
Sophie
Hi @sophwats, can you advise any workarounds to get results from DiffDock until the bug is fixed?
Dear Emily,
- Could you please confirm that you’re using the up-to-date version, 2.0.1?
- Could you please share the resulting output.json file with us so we can investigate?
Thanks!
Yuxing
Hi Yuxing,
I have verified that I’m using diffdock:2.0.1
I’ve also attached my output.json file (saved as a text file to attach).
output.txt (65.7 KB)
When I use the dump_output.py script from the example, this is also what I get:
rank01_confidence_-1000.00.sdf
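For context, here is a minimal sketch of what a dump script like that does: split the response JSON into one SDF file per pose, named by rank and confidence. The field names are assumptions based on the docs example, not a verified schema.

```python
def dump_poses(response: dict) -> list[str]:
    """Write each predicted pose to rankNN_confidence_X.XX.sdf.

    Assumed response fields (guessed from the docs example, not a
    verified schema): "ligand_positions" is a list of SDF strings,
    "position_confidence" is a parallel list of confidence scores.
    """
    names = []
    poses = response.get("ligand_positions", [])
    confidences = response.get("position_confidence", [])
    for rank, (sdf, conf) in enumerate(zip(poses, confidences), start=1):
        # A confidence of -1000.00 here is the sentinel I'm seeing,
        # not a real score.
        name = f"rank{rank:02d}_confidence_{conf:.2f}.sdf"
        with open(name, "w") as fh:
            fh.write(sdf)
        names.append(name)
    return names
```

So a single pose with confidence -1000.0 produces exactly the rank01_confidence_-1000.00.sdf filename above.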
Thanks!
Emily
Thanks! I noticed that g4dn.xlarge uses a T4 GPU, which is outside our support matrix. I suspect the issue stems from the TF32 data type, which DiffDock uses but which is not supported on the T4. We provide an environment variable for switching to FP32 in model inference. You can add the following option to the Docker command:
-e DD_USING_FP32=1
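For example, a full launch command might look like this (the image path/tag, port mapping, and NGC key variable are illustrative of the standard NIM launch pattern; check the DiffDock NIM docs for the exact values):

```shell
# Illustrative docker run command -- image path, tag, and port are
# assumptions; only -e DD_USING_FP32=1 is the fix being suggested here.
docker run --rm --gpus all \
    -e NGC_API_KEY="$NGC_API_KEY" \
    -e DD_USING_FP32=1 \
    -p 8000:8000 \
    nvcr.io/nim/mit/diffdock:2.0.1
```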
Meanwhile, we will also validate/debug the model support on T4 GPUs.
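As background, TF32 requires an NVIDIA GPU with compute capability 8.0 (Ampere) or newer, and the T4 is a Turing-class GPU at 7.5. A quick way to check is a helper like this (on a machine with PyTorch installed, you could feed it the tuple from torch.cuda.get_device_capability()):

```python
def supports_tf32(capability: tuple[int, int]) -> bool:
    """TF32 math requires compute capability >= 8.0 (Ampere or newer)."""
    return capability >= (8, 0)

# T4 (Turing) is 7.5 -> no TF32; A100 (Ampere) is 8.0 -> TF32 available.
```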
Best,
Yuxing