Originally published at: MLPerf Inference: NVIDIA Innovations Bring Leading Performance | NVIDIA Technical Blog
New TensorRT 6 Features Combine with Open-Source Plugins to Further Accelerate Inference

Inference is where AI goes to work: identifying diseases, answering questions, recommending products and services. The inference market is also diffuse; it spans everywhere from the data center to the edge to IoT devices, across multiple use cases including image, speech, and recommender systems…