Hello! I have trained YOLOv3 on my own dataset of 600 images. I can run detection on images and videos that are already saved on my PC, but I want real-time object detection from a live camera feed on the Nano. I do not know how to do that, so please suggest a path I can follow to achieve this. Thanks.
Hi farjadhaider3253,
Please refer to jetson-inference/aux-streaming.md at master · dusty-nv/jetson-inference · GitHub
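As a rough sketch of what that streaming guide covers: the Nano's CSI camera is usually opened through a GStreamer pipeline. The helper below is a hypothetical example (not from the guide); `nvarguscamerasrc` and `nvvidconv` are the standard Jetson GStreamer elements, but the exact caps may need tuning for your camera.

```python
def csi_pipeline(sensor_id=0, width=1280, height=720, fps=30):
    """Build a GStreamer pipeline string for a Jetson CSI camera,
    suitable for cv2.VideoCapture(..., cv2.CAP_GSTREAMER)."""
    return (
        f"nvarguscamerasrc sensor-id={sensor_id} ! "
        f"video/x-raw(memory:NVMM), width={width}, height={height}, "
        f"framerate={fps}/1 ! "
        # Convert from NVMM memory to a CPU-accessible BGR frame.
        "nvvidconv ! video/x-raw, format=BGRx ! "
        "videoconvert ! video/x-raw, format=BGR ! appsink drop=1"
    )
```

If your OpenCV build has GStreamer support, `cv2.VideoCapture(csi_pipeline(), cv2.CAP_GSTREAMER)` then gives you live frames you can feed into your existing YOLOv3 detection code.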
Hi,
Please check our DeepStream SDK.
If a 416x416 input resolution and running inference every 5 frames are acceptable, YOLOv3 can reach 20 fps on the Jetson Nano.
Thanks.
@AastaLLL I do not get this point. Could you elaborate? Thanks.
Hi,
As mentioned in the comment above, the YOLOv3 model can run on the Nano at 20 fps.
Does this meet your requirement?
But please note that to reach this, the model needs to use a 416x416 input resolution,
and inference should run only every 5 frames, with a tracker propagating the bounding boxes on the frames in between.
Thanks.
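The interval-plus-tracker idea above can be sketched in a few lines of Python: run the detector only every Nth frame and carry the last boxes forward in between. This is an illustrative stand-in, not DeepStream code; in a real pipeline a tracker such as DeepStream's nvtracker would refine the boxes on the skipped frames rather than simply reusing them.

```python
class SkipDetector:
    """Run a detect_fn only every `interval` frames and reuse the
    last boxes on the frames in between (tracker stand-in)."""

    def __init__(self, detect_fn, interval=5):
        self.detect_fn = detect_fn  # any function: frame -> list of boxes
        self.interval = interval
        self.frame_idx = 0
        self.last_boxes = []

    def process(self, frame):
        if self.frame_idx % self.interval == 0:
            # Full (expensive) YOLOv3 inference on this frame.
            self.last_boxes = self.detect_fn(frame)
        # else: a real tracker would propagate/update last_boxes here;
        # in this sketch we simply return the previous detections.
        self.frame_idx += 1
        return self.last_boxes
```

With `interval=5`, the heavy model runs on frames 0, 5, 10, ..., which is what makes the 20 fps figure reachable on the Nano.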
Yes, 20 fps will meet my requirement, but what does the line below mean?
Lastly, kindly tell me how I can deploy YOLOv3 weights on the Nano. As I am a beginner in this field, is there an easy way to do so?
Hi,
Please install the DeepStream SDK first.
Then you can find the YOLO sample in the folder below; please apply the changes mentioned above to reach 20 fps.
$ /opt/nvidia/deepstream/deepstream-[ver]/sources/objectDetector_Yolo/
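For reference, the "inference every 5 frames plus tracker" change maps to settings in the sample's application config (deepstream_app_config_yoloV3.txt in the objectDetector_Yolo folder). Treat the excerpt below as a hedged starting point rather than official guidance; in DeepStream, `interval=4` means 4 batches are skipped between inferences, i.e. inference runs on every 5th frame.

```ini
; deepstream_app_config_yoloV3.txt (excerpt, assumed layout)
[primary-gie]
enable=1
; skip 4 frames between inferences -> infer every 5th frame
interval=4

[tracker]
; the tracker propagates bounding boxes on the skipped frames
enable=1
```

The 416x416 input resolution is taken from the width/height fields of the yolov3.cfg file that the sample loads, so set those there.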
Thanks.
What is the DeepStream SDK? I am not getting your point.
Hi,
The DeepStream SDK is our streaming analytics software library.
Currently, the latest version is v5.0, and it is available for the JetPack 4.4 DP environment.
Thanks.
OK, got that. But can TensorRT also be used in place of DeepStream?