Custom parser for YOLOv3 running on DeepStream, on Jetson Nano

Hi,
I am running DeepStream 7.1 on my Jetson Nano with JetPack 6.1.
I have trained a YOLOv3 model with NVIDIA TAO.
After converting the ONNX file to FP16 using TensorRT, I want to run the model on my Jetson.
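For anyone following along, the conversion step is typically done with trtexec, which ships with TensorRT on JetPack. The file names below are placeholders for whatever your TAO export produced, not the actual paths from this setup:

```shell
# Sketch only -- file names are placeholders for your TAO-exported model.
ONNX=yolov3_resnet18.onnx
ENGINE=yolov3_resnet18_fp16.engine

# trtexec is installed with TensorRT on JetPack under /usr/src/tensorrt/bin
/usr/src/tensorrt/bin/trtexec \
  --onnx=$ONNX \
  --saveEngine=$ENGINE \
  --fp16
```

Note that a TensorRT engine is tied to the GPU and TensorRT version it was built on, so the engine should be generated on the Jetson itself rather than copied over from another machine.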

Issue faced -
I cannot find the objectDetector_Yolo folder, which contains the custom library files for YOLO, in my current DeepStream installation on the Jetson. I installed DeepStream via the tar package.

Alternative method tried -
I tried setting up DeepStream in Docker on a VM; there I can successfully build my own custom parser (.so) file, with which I can run my model.
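In case it helps others hitting the same gap: the objectDetector_Yolo sample was dropped from recent DeepStream releases, and for TAO-trained detectors the custom parser is commonly built from the NVIDIA-AI-IOT/deepstream_tao_apps repository. A rough sketch of that build on a JetPack 6.x board follows; the CUDA version and output library name are assumptions to verify against your own install:

```shell
# Sketch only -- repo layout, CUDA version, and output name are assumptions.
git clone https://github.com/NVIDIA-AI-IOT/deepstream_tao_apps.git
cd deepstream_tao_apps/post_processor

# CUDA_VER must match the CUDA shipped with your JetPack (12.6 on JetPack 6.1)
export CUDA_VER=12.6
make

# The build should produce a libnvds_infercustomparser_tao.so (name may vary
# by release) that can be referenced from the nvinfer config file.
```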

Please suggest a solution for the Jetson Nano.

Sorry for the late reply! Please refer to this FAQ for YOLO samples.
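Once a custom parser library is built (whether from the FAQ's YOLO sample or the TAO apps repo), it is wired into the nvinfer configuration roughly as below. The library path and parsing function name are illustrative and must match what your build actually produced; NvDsInferParseCustomBatchedNMSTLT is the parser commonly used for TAO detection models exported with batched NMS:

```ini
[property]
# ... model-engine-file, labels, etc. ...
network-type=0
# Path and function name are examples -- use the ones from your own build
custom-lib-path=/path/to/libnvds_infercustomparser_tao.so
parse-bbox-func-name=NvDsInferParseCustomBatchedNMSTLT
```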

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks