Hello everyone. I am trying to deploy a small piece of software on the Jetson Nano 2GB to do real-time object detection. I am currently using the yolov5n model (compiled to TensorRT) for inference, but it needs ~140 ms to process a frame. I am a newbie and my knowledge is very limited; I read something about DeepStream and how it could improve performance, but I have no idea if and how it can be used with my current software (written in Python). Is there something I can do to improve performance, using DeepStream or something else?
This is how I load and use my model:

import torch

# load the custom YOLOv5 TensorRT engine from the local yolov5 repo
model = torch.hub.load('yolov5', 'custom', path='path_to_model.engine', force_reload=True, source='local')
predictions = model(opencv_image)
I'll be very grateful for any help you can give me with this.
I used TensorRT; the problem was in how I handled frame capture from the camera, and using multiple threads improved performance as desired. Thanks for your reply though.
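In case it helps anyone else, this is roughly the pattern I ended up with. It is a minimal sketch, the class and variable names are mine and details may differ from your setup: a background thread keeps grabbing frames so the model never waits on the camera and always sees the most recent frame.

import threading
import cv2

class ThreadedCapture:
    """Grabs frames in a background thread so inference never blocks on the camera."""

    def __init__(self, source=0):
        self.cap = cv2.VideoCapture(source)
        self.lock = threading.Lock()
        self.frame = None
        self.running = True
        threading.Thread(target=self._reader, daemon=True).start()

    def _reader(self):
        # Keep only the most recent frame; stale frames are overwritten, not queued.
        while self.running:
            ok, frame = self.cap.read()
            if not ok:
                continue
            with self.lock:
                self.frame = frame

    def read(self):
        # Return a copy of the latest frame, or None if nothing has arrived yet.
        with self.lock:
            return None if self.frame is None else self.frame.copy()

    def release(self):
        self.running = False
        self.cap.release()

# usage in the detection loop:
# cam = ThreadedCapture(0)
# frame = cam.read()
# if frame is not None:
#     predictions = model(frame)

This way the ~140 ms inference and the camera I/O overlap instead of running back to back.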
If you want to accelerate a camera pipeline, it's recommended to try our DeepStream SDK.
Below is a sample for YOLOv5 from the community for your reference:
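Independent of that sample, below is a minimal sketch of what a DeepStream camera pipeline can look like when driven from Python via the GStreamer bindings. The nvinfer config file name (config_infer_yolov5.txt), the camera source, and the resolution are placeholder assumptions, not values taken from the sample; you would replace them with the ones from the sample and your own setup.

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst, GLib

Gst.init(None)

# Placeholder pipeline: CSI camera -> batching (nvstreammux) -> TensorRT
# inference (nvinfer) -> overlay (nvdsosd) -> on-screen display.
# "config_infer_yolov5.txt" is a hypothetical nvinfer config that would
# point at your YOLOv5 .engine file.
pipeline = Gst.parse_launch(
    "nvarguscamerasrc ! "
    "video/x-raw(memory:NVMM),width=1280,height=720,framerate=30/1 ! "
    "m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! "
    "nvinfer config-file-path=config_infer_yolov5.txt ! "
    "nvvideoconvert ! nvdsosd ! nvegltransform ! nveglglessink"
)

pipeline.set_state(Gst.State.PLAYING)
loop = GLib.MainLoop()
try:
    loop.run()  # run until interrupted (Ctrl+C)
except KeyboardInterrupt:
    pass
finally:
    pipeline.set_state(Gst.State.NULL)

The benefit of this approach is that capture, batching, inference, and rendering all stay in the GPU/NVMM path instead of going through OpenCV on the CPU.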