Improve YOLOv5 inference performance

Hello everyone. I am trying to deploy a small application on the Jetson Nano 2GB to do real-time object detection. I am currently using the yolov5n model (compiled to TensorRT) for inference, but it needs ~140 ms to process a frame. I am a newbie and my knowledge is very limited. I read something about DeepStream and how it could improve performance, but I have no idea whether and how it can be used with my current software (written in Python). Is there something I can do to improve performance, using DeepStream or something else?

This is how I load and use my model:
model = torch.hub.load('yolov5', 'custom', path='path_to_model.engine', force_reload=True, source='local')
predictions = model(opencv_image)

I'd be very grateful for any help you can give me with this.

Hi,

Have you tried it with pure TensorRT? The API you are using goes through PyTorch.

Thanks.

I did use TensorRT; the problem was related to how I handled frame capture from the camera. Using multiple threads improved performance as desired. Thanks for your reply though.
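For anyone who finds this later, here is a minimal sketch of that threaded-capture idea (the class and names are illustrative, not the actual code from my project): a daemon thread keeps grabbing the newest frame so the inference loop never blocks on cv2.VideoCapture.read().

import threading
import cv2

class ThreadedCamera:
    # Grab frames on a background thread; keep only the newest one.
    def __init__(self, src=0):
        self.cap = cv2.VideoCapture(src)
        self.frame = None
        self.lock = threading.Lock()
        threading.Thread(target=self._update, daemon=True).start()

    def _update(self):
        while True:
            ok, frame = self.cap.read()
            if ok:
                with self.lock:
                    self.frame = frame  # stale frames are overwritten, never queued

    def read(self):
        with self.lock:
            return self.frame

cam = ThreadedCamera(0)
while True:
    frame = cam.read()
    if frame is None:  # camera may not have delivered a frame yet
        continue
    predictions = model(frame)  # model loaded as in the first post

With this pattern the main loop spends its time on inference instead of waiting for the camera.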

Hi,

Do you run it with the API below?
If so, you might be using PyTorch's built-in TensorRT integration rather than pure TensorRT.

model = torch.hub.load('yolov5', 'custom', path='path_to_model.engine', force_reload=True, source='local')
predictions = model(opencv_image)
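For reference, this is roughly what pure TensorRT inference looks like with the tensorrt and pycuda Python packages. It is only a sketch: the engine path, the assumption that binding 0 is the input and binding 1 the output, and the preprocessing all have to match how your engine was exported, and the raw output still needs YOLOv5 post-processing (NMS).

import numpy as np
import tensorrt as trt
import pycuda.autoinit  # creates a CUDA context
import pycuda.driver as cuda

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Deserialize the prebuilt engine (e.g. from YOLOv5's export.py or trtexec).
with open('path_to_model.engine', 'rb') as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

# Allocate one host/device buffer pair per binding (inputs and outputs).
bindings, host_bufs, dev_bufs = [], [], []
for i in range(engine.num_bindings):
    size = trt.volume(engine.get_binding_shape(i))
    dtype = trt.nptype(engine.get_binding_dtype(i))
    host = cuda.pagelocked_empty(size, dtype)
    dev = cuda.mem_alloc(host.nbytes)
    host_bufs.append(host)
    dev_bufs.append(dev)
    bindings.append(int(dev))

def infer(image_chw):
    # image_chw: preprocessed float32 array, e.g. shape (1, 3, 640, 640).
    np.copyto(host_bufs[0], image_chw.ravel())
    cuda.memcpy_htod(dev_bufs[0], host_bufs[0])
    context.execute_v2(bindings)  # synchronous inference
    cuda.memcpy_dtoh(host_bufs[1], dev_bufs[1])
    return host_bufs[1]  # raw detections, still need NMS

This avoids the PyTorch wrapper overhead entirely, at the cost of doing the preprocessing and post-processing yourself.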

If you want to accelerate a camera pipeline, it’s recommended to try our DeepStream SDK.
Below is a sample for YOLOv5 from the community for your reference:
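As a rough illustration only (this is not the linked sample, and the config file name is a placeholder), a DeepStream USB-camera pipeline on Jetson typically looks like the gst-launch command below. The nvinfer config would point at your .engine via its model-engine-file property and at a custom YOLOv5 output parser library, which the community sample provides.

gst-launch-1.0 v4l2src device=/dev/video0 ! videoconvert ! nvvideoconvert ! 'video/x-raw(memory:NVMM)' ! m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! nvinfer config-file-path=config_infer_yolov5.txt ! nvvideoconvert ! nvdsosd ! nvegltransform ! nveglglessink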

Thanks.

