What models require TensorRT OSS on DeepStream 6.4?

Please provide the following information when requesting support.

• Hardware (T4/V100/Xavier/Nano/etc): GeForce 4090
• Network Type (Detectnet_v2/Faster_rcnn/Yolo_v4/LPRnet/Mask_rcnn/Classification/etc): Yolo_v4 & Classification
• TLT Version (Please run “tlt info --verbose” and share “docker_tag” here): v5.5.0
• Training spec file (if you have one, please share it here): NA
• How to reproduce the issue? (This is for errors. Please share the command line and the detailed log here.): NA

In GitHub - NVIDIA-AI-IOT/deepstream_tao_apps at release/tao5.1_ds6.4ga, the instructions say:

The OSS plugins are needed for some models with DeepStream 6.4.

  1. Do Yolo_v4 & Classification engines created from TAO models require TensorRT OSS?

  2. The picture below says that for DeepStream 6.4, TRT_OSS_CHECKOUT_TAG is “binary plugin only”. What does that mean?

For question 1: in DeepStream 6.4, these two networks (Yolo_v4 and Classification) do not need a rebuilt TensorRT OSS plugin (libnvinfer_plugin.so).
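
If you want to confirm which plugin library is currently in place, you can simply list it; the path below assumes a standard x86_64 Ubuntu TensorRT package install.

```
# Show the installed TensorRT plugin library and its version suffix
ls -l /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so*
```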

For question 2: “binary plugin only” means building the plugin binary. For models that do need the OSS plugins, build the plugin (libnvinfer_plugin.so) and replace the existing library with it; a sketch of the steps follows the guide links below.
Some guides:
tao_deploy/docker/Dockerfile at main · NVIDIA/tao_deploy · GitHub,
deepstream_tao_apps/TRT-OSS/x86 at release/tao5.1_ds6.4ga · NVIDIA-AI-IOT/deepstream_tao_apps · GitHub.
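
A minimal sketch of that build-and-replace flow, assuming an x86_64 host, TensorRT 8.6 (the version DeepStream 6.4 ships with), and the GeForce 4090 from the template above (compute capability 8.9, so GPU_ARCHS=89). Branch names, cmake flags, and the library version suffix can differ per release, so treat the linked guides as authoritative:

```
# Clone TensorRT OSS at the branch matching your installed TensorRT.
# DeepStream 6.4 ships TensorRT 8.6, hence release/8.6 here.
git clone -b release/8.6 https://github.com/NVIDIA/TensorRT.git
cd TensorRT
git submodule update --init --recursive

# Configure and build only the plugin target. GPU_ARCHS=89 targets the
# RTX 4090 (Ada); set it to your own GPU's compute capability.
mkdir -p build && cd build
cmake .. -DGPU_ARCHS=89 \
         -DTRT_LIB_DIR=/usr/lib/x86_64-linux-gnu/ \
         -DTRT_OUT_DIR=`pwd`/out
make nvinfer_plugin -j$(nproc)

# Back up the stock plugin and replace it with the rebuilt one; the
# exact .so version suffix depends on the installed TensorRT version.
sudo cp /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.8.6.1 \
        /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.8.6.1.bak
sudo cp out/libnvinfer_plugin.so.8.6.1 \
        /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.8.6.1
sudo ldconfig
```

If engine files were already generated against the old plugin, deleting them so they are rebuilt after the swap is usually the safest step.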
