I downloaded the mlpinf-v4.1-cuda12.4-pytorch24.04-ubuntu22.04-x86_64-release container from NGC.
Can I run the MLPerf Inference benchmarks with this container?
If it is possible to run the MLPerf benchmarks with this container,
could you let me know how to launch a run and where the executable or entry point lives inside the container?
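For context, from what I can tell in the public MLPerf inference results repositories, NVIDIA's runs seem to be driven through make targets rather than a single standalone binary. Is something along these lines the intended workflow inside this container? (This is just my assumption; the benchmark and scenario names below are placeholder examples.)

```bash
# My assumption: inside the container, from NVIDIA's MLPerf inference code tree
# (e.g. the closed/NVIDIA directory of an inference_results_v4.1 checkout)
make generate_engines RUN_ARGS="--benchmarks=resnet50 --scenarios=Offline"
make run_harness RUN_ARGS="--benchmarks=resnet50 --scenarios=Offline"
```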
If not, do I need to build MLPerf LoadGen manually?
If a manual build is required, could you point me to the build steps?
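In case a manual build is needed, is the standard build from the mlcommons/inference repository the right approach? This is only a sketch of what I would try, based on the upstream loadgen README:

```bash
# My assumption of a manual LoadGen build from the upstream repo:
pip install absl-py numpy
git clone --recursive https://github.com/mlcommons/inference.git
cd inference/loadgen

# Build and install the Python bindings (mlperf_loadgen module)
CFLAGS="-std=c++14 -O3" python3 -m pip install .

# Alternatively, build the C++ library with CMake
cmake -S . -B build && cmake --build build
```

Or is a prebuilt LoadGen already included somewhere in the container image?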