Announcing general availability for Transfer Learning Toolkit 2.0

In this release, we’re introducing:

• New NVIDIA purpose-built models for people counting, vehicle tracking, heatmap generation, and more. The full list of supported backbone networks, video demos, and performance metrics for production-quality models is available on our product page

• 2x inference speedup with INT8 precision while maintaining accuracy comparable to FP16/FP32, using quantization-aware training

• Faster training with Automatic Mixed Precision (AMP) running on Tensor Cores on NVIDIA Volta and Turing GPUs

• Pixel-level accuracy with instance segmentation using the MaskRCNN network architecture

• Extended support for object detection models, including YOLO-V3, SSD, DSSD, FasterRCNN, RetinaNet, and DetectNet_v2
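To give an intuition for the quantization-aware training feature above: during training, values are "fake-quantized" (rounded to the INT8 grid and immediately dequantized) in the forward pass, so the network learns weights that tolerate INT8 rounding error at inference time. The sketch below illustrates only the basic idea with symmetric per-tensor scaling; it is not TLT's actual implementation, and the function name is hypothetical.

```python
def fake_quantize_int8(values, max_abs):
    """Simulate INT8 quantization error: quantize to signed 8-bit, then
    dequantize back to float (the round trip used in fake-quant nodes).

    max_abs is the calibration range; symmetric scaling is assumed here
    purely for illustration.
    """
    scale = max_abs / 127.0              # size of one INT8 grid step
    out = []
    for v in values:
        q = round(v / scale)             # snap to the integer grid
        q = max(-128, min(127, q))       # clamp to the INT8 range
        out.append(q * scale)            # dequantize back to float
    return out

# Weights near the grid survive almost unchanged; tiny values collapse to 0.
weights = [0.51, -1.27, 0.003, 2.0]
print(fake_quantize_int8(weights, max_abs=2.0))
```

Because the rounding happens inside the training loop, gradient updates steer the weights toward values that lose little accuracy when the network is later deployed in pure INT8, which is how comparable-to-FP16/FP32 accuracy is retained.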

Product page: https://developer.nvidia.com/transfer-learning-toolkit

Getting started: https://developer.nvidia.com/tlt-getting-started

Pull the container from NGC: https://ngc.nvidia.com/catalog/containers/nvidia:tlt-streamanalytics

New blog post “Improving INT8 accuracy using Quantization Aware Training and NVIDIA Transfer Learning Toolkit” https://developer.nvidia.com/blog/improving-int8-accuracy-using-quantization-aware-training-and-the-transfer-learning-toolkit/

New developer tutorial “Training Instance Segmentation Models using MaskRCNN on NVIDIA Transfer Learning Toolkit” https://developer.nvidia.com/blog/training-instance-segmentation-models-using-maskrcnn-on-the-transfer-learning-toolkit/

New webinar: Create Intelligent Places using NVIDIA pre-trained vision models and DeepStream SDK https://info.nvidia.com/iva-occupancy-webinar-reg-page.html

DeepStream 5.0 general availability – To find out more, see Announcing general availability for DeepStream 5.0
