Hi,
If you have a YOLOv5 model in ONNX format, it's recommended to first check whether all of its operations are supported by TensorRT:
$ /usr/src/tensorrt/bin/trtexec --onnx=[model]
If the command succeeds, the model should be deployable with Deepstream.
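As a sketch, the check can be wrapped in a small script that also builds a serialized engine when trtexec is available (the model filename `yolov5s.onnx` and the default TensorRT install path are assumptions; adjust them for your setup):

```shell
# Hypothetical paths: adjust TRTEXEC and the ONNX filename to your environment.
TRTEXEC=/usr/src/tensorrt/bin/trtexec

if [ -x "$TRTEXEC" ]; then
  # Parse the ONNX model, verify operator support, and save a TensorRT engine.
  # --fp16 is optional; it enables FP16 kernels on GPUs that support them.
  "$TRTEXEC" --onnx=yolov5s.onnx --saveEngine=yolov5s.engine --fp16
else
  echo "trtexec not found at $TRTEXEC; please install TensorRT first"
fi
```

A non-zero exit code usually indicates an unsupported layer; the log names the failing operator, which can then be handled with a TensorRT plugin.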
Here is a sample to run YOLOv4 with Deepstream for your reference:
Thanks.