Hi, I’m trying to reproduce the MLPerf Inference v3.1 results on an AGX Orin system by following the steps from https://github.com/mlcommons/inference_results_v3.1/closed/NVIDIA. The system is flashed with JetPack 6.0.
However, the Docker build fails with:
"/usr/local/cuda/include/cuda_bf16.hpp(4132): error: no suitable conversion function from “const __nv_bfloat16” to “unsigned short” exists".
Does anyone know how to resolve it?
The detailed error message is included in errorMessage.txt.
errorMessage.txt (48.1 KB)
MLPerf Inference v3.1 was originally tested with JetPack 5.1.1.
There is no reference for JetPack 6.0 yet; please roll back to JetPack 5.1.1.
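After re-flashing, you can confirm which L4T release is on the device by reading /etc/nv_tegra_release. Below is a minimal sketch; the sample release string is illustrative, and it assumes the usual mapping of JetPack 5.1.1 to L4T R35.3.1 from NVIDIA's release notes:

```shell
# On a real device the string comes from: cat /etc/nv_tegra_release
# A sample line is hard-coded here for illustration.
release_line='# R35 (release), REVISION: 3.1'

# JetPack 5.1.1 ships L4T R35.3.1 (assumed mapping per NVIDIA release notes).
case "$release_line" in
  *'R35 (release), REVISION: 3.1'*)
    echo "L4T R35.3.1 detected (JetPack 5.1.1)"
    detected=yes
    ;;
  *)
    echo "Unexpected L4T release: $release_line"
    detected=no
    ;;
esac
```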
Thanks