Hello, everyone.
I have a JetBot with a Jetson Nano and I want to use it for real-time detection. When I test the onboard camera, the video stream lags my actions by about 400 ms. When I change the resolution to 320x240 it gets better, but at 640x480 or larger resolutions the lag comes back. Has anybody met this problem, or does anybody know what causes it?
Hi.
I have already executed ‘sudo jetson_clocks’. I tested the https://github.com/NVIDIA-AI-IOT/jetbot/blob/master/jetbot/camera.py code and displayed the frames with cv2.imshow, both over SSH and from the Nano's own terminal, and the stream lags my actions; but the video looks normal from JupyterLab. I even adjusted the framerate/flip_method/width/height in the GStreamer pipeline, but it doesn't help, so I really don't know what's wrong.
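For reference, this is roughly what I am running (a minimal sketch; the caps strings are assumptions based on the usual Jetson CSI-camera pipeline, not copied from camera.py, and the exact values depend on your sensor mode):

```python
import cv2

# Sketch of the capture pipeline I am testing; width/height/framerate/flip
# are the parameters I have been adjusting.
def gst_pipeline(width=640, height=480, fps=30, flip=0):
    return (
        "nvarguscamerasrc ! "
        f"video/x-raw(memory:NVMM), width={width}, height={height}, "
        f"format=NV12, framerate={fps}/1 ! "
        f"nvvidconv flip-method={flip} ! "
        "video/x-raw, format=BGRx ! videoconvert ! "
        "video/x-raw, format=BGR ! "
        "appsink drop=true max-buffers=1"  # discard stale frames instead of queueing
    )

cap = cv2.VideoCapture(gst_pipeline(), cv2.CAP_GSTREAMER)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("onboard camera", frame)
    if cv2.waitKey(1) == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```

The drop=true / max-buffers=1 on the appsink is there so old frames get discarded rather than queued, which is one common source of apparent lag.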
I find that if I connect to the Jetson over SSH and run my camera.py, the video lags; if I run it from the HDMI screen it does not. So I think the Ethernet connection has a big influence on the video streaming of OpenCV with GStreamer.
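To separate capture lag from display lag, the two calls can be timed per frame (a rough sketch; over SSH, cv2.imshow renders through X forwarding, so I would expect the "show" time to dominate there but stay small on the HDMI desktop):

```python
import time
import cv2

# Time cap.read() and cv2.imshow() separately: if "show" is large over SSH
# but small on the local desktop, the display path over the network is the
# bottleneck, not the camera pipeline itself.
pipeline = (
    "nvarguscamerasrc ! video/x-raw(memory:NVMM), width=640, height=480, "
    "framerate=30/1 ! nvvidconv ! video/x-raw, format=BGRx ! videoconvert ! "
    "video/x-raw, format=BGR ! appsink drop=true max-buffers=1"
)
cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
for _ in range(100):
    t0 = time.monotonic()
    ok, frame = cap.read()
    t1 = time.monotonic()
    if not ok:
        break
    cv2.imshow("camera", frame)
    cv2.waitKey(1)
    t2 = time.monotonic()
    print(f"read: {(t1 - t0) * 1000:6.1f} ms   show: {(t2 - t1) * 1000:6.1f} ms")
cap.release()
cv2.destroyAllWindows()
```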
JetBot was tested against relatively low-resolution images: 224x224, 300x300. It’s possible that the simple JPEG streaming would slow down significantly for larger images.
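For example, one quick check is how long the JPEG encode alone takes at each size (a rough sketch using OpenCV on a synthetic frame; the actual widget streaming path may differ):

```python
import time
import cv2
import numpy as np

# Time 100 JPEG encodes per resolution on a random frame. If the per-encode
# time scales roughly with pixel count, the encode step alone could account
# for a noticeable part of the added latency at 640x480.
for w, h in [(224, 224), (640, 480)]:
    frame = np.random.randint(0, 256, (h, w, 3), dtype=np.uint8)
    t0 = time.monotonic()
    for _ in range(100):
        ok, buf = cv2.imencode(".jpg", frame)
    dt = (time.monotonic() - t0) / 100
    print(f"{w}x{h}: {dt * 1000:.2f} ms/encode, {len(buf)} bytes")
```

If the encode turns out to be the bottleneck, streaming a downscaled copy to the browser while keeping the full-size frame for detection would be one possible optimization.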
Do you mind sharing your use case for streaming 640x480 images? Perhaps there are optimizations that could be done to improve the latency.
First, if you don’t have an extra screen for the Jetson Nano, you may run into this problem.
I use the JetBot board as a gesture-control endpoint: when you wave your hand to the left, the page on the monitor goes to the previous one; when you wave right, it goes to the next. So I want a bigger video window to see whether I am in the right place to make these gestures.
But now I find that when you don’t have a screen for the Jetson, Jupyter can be a good choice.