Minimizing Deep Learning Inference Latency with NVIDIA Multi-Instance GPU

Originally published at: https://developer.nvidia.com/blog/minimizing-dl-inference-latency-with-mig/

Recently, NVIDIA unveiled the A100 GPU, based on the NVIDIA Ampere architecture. Ampere introduced many features, including Multi-Instance GPU (MIG), which plays a special role for deep learning (DL) applications. MIG makes it possible to use a single A100 GPU as if it were multiple smaller GPUs, maximizing utilization for DL workloads and providing…
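For reference, the GPU partitioning the excerpt describes is driven through nvidia-smi. A minimal sketch of setting it up on an A100 might look like the following; the GPU index and the 1g.5gb profile ID 19 are illustrative, so check nvidia-smi mig -lgip on your own system:

# Enable MIG mode on GPU 0 (may require draining workloads and resetting the GPU)
sudo nvidia-smi -i 0 -mig 1

# List the GPU instance profiles supported on this GPU
sudo nvidia-smi mig -lgip

# Create seven 1g.5gb GPU instances (profile ID 19) together with default compute instances
sudo nvidia-smi mig -cgi 19,19,19,19,19,19,19 -C

# Confirm the MIG devices are enumerated
nvidia-smi -L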

Hi, I have been trying to reproduce the Flower Demo from this article on our system: https://developer.nvidia.com/blog/minimizing-dl-inference-latency-with-mig. However, I ran into a problem when running the client GUI: it could not open the image directory.

root@dgxs:/flowerDemo# pwd
/flowerDemo
root@dgxs:/flowerDemo# ./FlowerDemo 10.1.224.173 8001
Connecting to addr: 10.1.224.173 port: 8001
Could not open the image directory
Failed to load media!
root@dgxs:/flowerDemo#

Could someone point out where I should put the images?
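In case it helps with diagnosing this, one way I could see which path the binary actually tries to open (assuming strace is available on the box; the ENOENT filter is just a guess at the failure mode) would be:

# Trace the open calls of the client and keep only the ones that fail with "no such file or directory"
strace -f -e trace=open,openat ./FlowerDemo 10.1.224.173 8001 2>&1 | grep ENOENT

Whatever relative or absolute path shows up there is presumably where the demo expects the images to live.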