Originally published at: Extending NVIDIA Performance Leadership with MLPerf Inference 1.0 Results | NVIDIA Technical Blog
Inference is where we interact with AI. Chatbots, digital assistants, recommendation engines, fraud protection services, and other applications that you use every day—all are powered by AI. Those deployed applications use inference to get you the information that you need. Given the wide array of usages for AI inference, evaluating performance poses numerous challenges…
This is very exciting. I am a small-time developer just getting on my feet, but it seems clear that team green will remain supreme. I really hope I can raise the money to get my hands on this technology when it’s ready.
It seems to me that when NVIDIA chose the Arm architecture, it snowballed into the biggest jump-start on the next level of innovation since x86 started its run.
Even if the above tests were hand-picked, the graphs are too impressive to ignore. I want to know EVERYTHING!!!
Hi Ronald. We’re pleased with our MLPerf Inference 1.0 results, and we submitted across ALL usages: CV, medical imaging, natural language processing, translation, and recommender systems. Our Triton Inference Server software did very well, as did our MIG technology, demonstrating the completeness of our data center platform.
Cheers,
Dee
Great to hear about NVIDIA’s performance leadership in MLPerf Inference 1.0 results! They continue to push boundaries in the field.