I have been working with the Jetson Nano B01 for a few weeks now on a fresh SD card image. I have followed a few tutorials (JetsonHacks and Toptechboy.com), and I am not getting anywhere close to the frame rates they are getting. I followed the setups EXACTLY; the main difference is that my Nano is hooked up via DisplayPort to a 30-inch monitor at 2560x1600.
Is it fair to say that rendering the Ubuntu desktop at that resolution is causing GStreamer to chug? jtop shows GPU spikes even with no code running, and roughly the same spikes show up when my code is executing.
I am starting to think I have a bad Nano. I have pulled all-nighters trying to figure out what is different. I am not going to get into specific code, because it is code from tutorials that obviously works well.
It depends on what OpenCV code you are running. You mentioned GStreamer, so is it a pipeline inside OpenCV code with an appsink? If you have a videoconvert element in your pipeline, it is possible that the performance is poor because videoconvert is a purely CPU-based converter.
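As an illustration, a capture string along the lines of the sketch below keeps the heavy format conversion on the hardware nvvidconv element and only uses videoconvert for the cheap BGRx-to-BGR step at the end. This is only a sketch and not necessarily the tutorial's exact pipeline; it assumes a CSI camera through nvarguscamerasrc, so adjust the width, height, and frame rate to your sensor mode:

```python
import cv2

# Sketch of a capture pipeline for OpenCV's GStreamer backend (assumes a CSI
# camera via nvarguscamerasrc; adjust width/height/framerate to your sensor).
# nvvidconv does the heavy NV12 -> BGRx conversion in hardware; videoconvert
# only strips the alpha channel (BGRx -> BGR), which is cheap on the CPU.
# drop=true / max-buffers=1 keeps the appsink from queueing stale frames,
# so display latency does not build up if the camera frame rate is raised.
pipeline = (
    "nvarguscamerasrc ! "
    "video/x-raw(memory:NVMM), width=1280, height=720, framerate=30/1 ! "
    "nvvidconv ! video/x-raw, format=BGRx ! "
    "videoconvert ! video/x-raw, format=BGR ! "
    "appsink drop=true max-buffers=1"
)

cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("camera", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```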
Generally, OpenCV uses an X-based API, so the Ubuntu desktop is indeed sharing resources with it. However, this should not cause low FPS unless your GPU is occupied by something else.
If you suspect a defective Nano, you could also try other Nano devices if you have spares.
I am sorry for not being clear. I am using code from JetsonHacks that clearly works really well. I have not changed anything. It uses hardware encoding.
I have tried everything. Yes, I never said JetsonHacks was official; that was not my point. My point was that you can clearly see in his videos, running his GitHub code, that it performs as expected. I was just curious about the monitor, and you answered that question. Thanks.
No offense, but I was hoping you could at least point us to the specific code you are referring to from JetsonHacks, or at least share the video link. There is quite a lot of code across the JetsonHacks GitHub repositories, and it would be better to make sure we are on the same page.
Maybe it is a general bug that we could try to reproduce on our devkit as well. Your kind help may also benefit other forum users.
I have been trying that pipeline string in all sorts of variations over the past couple of weeks with no luck. Keep in mind, I am a hobbyist and not a professional, and I am finding the Nano is just not cut out for simple face detection development. Maybe it will be fine in a headless configuration with no OpenCV window created. I should try that next.
If I increase the frame rate in that string, the delay gets worse. Obviously there is some buffer de-sync going on, but I just cannot figure out why I cannot reproduce their results.
You could first run a similar use case from our GStreamer user guide.
You could first try rendering an mp4 file on the monitor with a pipeline like the one below. This can show whether the problem is due to the GPU or not: only xvimagesink in the pipeline uses the GPU; all the other components go through other hardware blocks.
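The exact command from the user guide is not quoted here, but a minimal sketch of that kind of playback test, written with the GStreamer Python bindings, looks like this. The file path and the H.264/omxh264dec choice are placeholders; substitute your own file and the elements from the guide:

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# Placeholder path and codec: assumes an H.264 .mp4; substitute your own file.
# Decoding runs on the hardware decoder (omxh264dec), nvvidconv copies the
# frames out of NVMM memory, and only xvimagesink touches the GPU. If this
# plays back smoothly on your 2560x1600 desktop, the GPU itself is likely fine.
pipeline = Gst.parse_launch(
    "filesrc location=/home/nano/test.mp4 ! qtdemux ! h264parse ! "
    "omxh264dec ! nvvidconv ! xvimagesink"
)
pipeline.set_state(Gst.State.PLAYING)

# Wait until playback finishes or an error is reported, then shut down.
bus = pipeline.get_bus()
bus.timed_pop_filtered(
    Gst.CLOCK_TIME_NONE, Gst.MessageType.EOS | Gst.MessageType.ERROR
)
pipeline.set_state(Gst.State.NULL)
```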
This issue is more related to darknet and face recognition. If your OpenCV code runs smoothly when face recognition is disabled (pure rendering), then you might refer to that thread.
Yes, when I disable face recognition, frame rates come back to normal. I completely understand that machine vision eats up CPU/GPU, but it should not be running at 1-3 fps.