Deploying a Scalable Object Detection Inference Pipeline: Optimization and Deployment, Part 3

Originally published at: https://developer.nvidia.com/blog/deploying-a-scalable-object-detection-inference-pipeline-optimization-and-deployment-part-3/

This post is the third in a series on Autonomous Driving at Scale, developed with Tata Consultancy Services (TCS). The previous posts provided a general overview of deep learning inference for object detection and covered the object detection inference process and object detection metrics. In this post, we conclude with a brief look at the…

Hi,
A couple of questions, please.

  • Figure 2: which GPUs were used for the inference?
  • Figure 2: unless I'm missing something, the graph actually shows that the work, for a given batch size, does NOT scale well across multiple GPUs, no?

Thanks,
Eyal