Is it necessary to build TensorRT OSS from source inside DeepStream for better performance?

Hi @Morganh
I am working on DeepStream 5.0. I came across a blog where the author mentioned building TensorRT OSS from source inside the DeepStream container. I can see TensorRT is already installed inside DS 5.0. I would like to know the difference between using the existing installation and building again from source.

Moved to TAO Forum.

Usually, the purpose of building TRT OSS is to get a new libnvinfer_plugin.so that supports some additional plugins not included in the prebuilt library.

See GitHub - NVIDIA-AI-IOT/deepstream_tao_apps: Sample apps to demonstrate how to deploy models trained with TAO on DeepStream.

This is ONLY needed when running SSD, DSSD, RetinaNet, YOLOV3, YOLOV4 and PeopleSegNet models, because some TRT plugins required by these models, such as BatchTilePlugin, are not supported by the TensorRT 7.x native package.

Note: This is also needed for YOLOV3 and YOLOV4 if you are using a TRT 8.x version (such as TRT 8.0.6).
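Before building, it can help to confirm what your installed libnvinfer_plugin.so already provides. Below is a minimal sketch (not from the original thread), assuming the tensorrt Python bindings are importable inside the container; the registry names BatchTilePlugin_TRT and BatchedNMS_TRT follow the usual TRT OSS plugin naming, but verify them against your TRT version.

```python
# Minimal sketch: list the plugin creators registered by the installed
# libnvinfer_plugin.so and check for specific plugins before deciding
# whether a TRT OSS build is needed.
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)

# Load and register all plugins shipped with the current libnvinfer_plugin.so
trt.init_libnvinfer_plugins(logger, "")

registry = trt.get_plugin_registry()
names = {creator.name for creator in registry.plugin_creator_list}

print(f"TensorRT {trt.__version__}: {len(names)} registered plugin creators")
# Assumed registry names; adjust for the plugins your model actually needs.
for wanted in ("BatchTilePlugin_TRT", "BatchedNMS_TRT"):
    status = "found" if wanted in names else "MISSING (TRT OSS build needed)"
    print(f"{wanted}: {status}")
```

If a required plugin shows up as missing, building TRT OSS and replacing the installed libnvinfer_plugin.so with the freshly built one, as described in the repo's README, is the usual fix.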

There has been no update from you for a while, so we assume this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.