For this specific “road_following” example: when I move the JetBot camera, it takes more than 10 seconds to see the change on my computer. This lag does not seem to be present in other modules. I am running it remotely from my Mac through a Jupyter notebook.
Have you tried maximizing system performance with nvpmodel?
Yes I tried running it at MAXN, the camera response was still very slow. I then tried to use the 0.3.2 image, but it won’t boot - is it not compatible with the B01 Jetson?
Thanks for reaching out.
Just to clarify, you were able to run the collision avoidance and teleoperation examples (which both use a live camera) without issues?
Yes. I turned the frame rate down to about 5 fps, reduced the speed as low as possible, and moved my router next to the JetBot. The Waveshare JetBot variant still moves too fast for the camera, so I had to stop it about every second for the camera to keep up. It doesn’t seem to have enough torque to move itself at slow, controlled speeds. https://github.com/loibucket/autobus/blob/master/combined_code/autobus_live--Seeed-Waveshare-steer-obstacle-colors.ipynb
My best guess is that there may have been a performance regression that caused the road following neural network’s execution time to exceed the camera’s frame interval. This would cause the image queue to build up.
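A rough back-of-the-envelope sketch of that hypothesis (the frame and processing times below are illustrative assumptions, not measurements from the JetBot):

```python
# Illustrative numbers only (assumed, not measured on the JetBot).
camera_period_ms = 48    # camera delivers a frame every ~48 ms (~21 fps)
process_ms = 100         # inference + transfer takes ~100 ms per frame

window_ms = 10_000                        # watch the queue for 10 seconds
produced = window_ms // camera_period_ms  # frames the camera enqueued
consumed = window_ms // process_ms        # frames the notebook dequeued
backlog = produced - consumed             # frames stuck in the queue
lag_ms = backlog * process_ms             # time to drain = perceived lag
print(produced, consumed, backlog, lag_ms)
```

With these assumed numbers, after only 10 seconds the queue already holds over a hundred unprocessed frames, which by itself would account for a lag of more than 10 seconds.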
There is an experimental fix for this in the master branch that I’m curious if you can test. It runs a camera publisher in a separate process; the notebook then subscribes and “drops frames” if it can’t process them fast enough. This prevents the queue from building up and resolved some lag issues I was seeing, so I’m curious whether it resolves your issue.
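To illustrate the “drop frames” idea (this is a minimal sketch of the pattern, not the actual JetBot implementation): the producer overwrites a one-slot mailbox, so a slow consumer always reads the newest frame instead of working through a backlog.

```python
# Sketch of a latest-frame-only mailbox: publishing replaces any
# unread frame, so no queue can build up behind a slow consumer.
import threading

class LatestFrame:
    def __init__(self):
        self._lock = threading.Lock()
        self._frame = None

    def publish(self, frame):
        with self._lock:
            self._frame = frame   # any older unread frame is dropped

    def read(self):
        with self._lock:
            return self._frame

mailbox = LatestFrame()
for i in range(100):      # fast producer publishes 100 frames
    mailbox.publish(i)
print(mailbox.read())     # slow consumer only ever sees the newest frame
```

The same effect can be had with a ZMQ subscriber configured to keep only the most recent message, which is presumably what the publisher script relies on.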
To launch the publisher:
git clone https://github.com/NVIDIA-AI-IOT/jetbot
cd jetbot
python3 scripts/zmq_camera_publisher.py
To use the new camera class:
from jetbot.camera.zmq_camera import ZmqCamera

camera = ZmqCamera()  # this should have the same interface as the current camera
Please let me know if this helps or you run into issues.
Please note this is highly subject to change.
When I follow these instructions, I get the following error:
ModuleNotFoundError                       Traceback (most recent call last)
<ipython-input-4-7eaed894edf3> in <module>
----> 1 from jetbot.camera.zmq_camera import ZmqCamera
      2
      3 camera = ZmqCamera() # this should have same interface as current camera

ModuleNotFoundError: No module named 'jetbot.camera.zmq_camera'; 'jetbot.camera' is not a package
I cloned the repo and started the publisher just fine. It gives the following output:
GST_ARGUS: Setup Complete, Starting captures for 0 seconds
GST_ARGUS: Starting repeat capture requests.
CONSUMER: Producer has connected; continuing.
I can see the zmq_camera.py file in jetbot/scripts, so I must have cloned the correct repo.
What could be causing this issue?
Please open a new topic if this is still an issue. Thanks
Hello, I think I solved this problem. I also used the new camera class, ZmqCamera. The first thing to make sure is that you have completed the camera changes and the publisher is running; it should print:
“GST_ARGUS: Setup Complete, Starting captures for 0 seconds
GST_ARGUS: Starting repeat capture requests.
CONSUMER: Producer has connected; continuing.”
For your problem, the following solutions worked for me.
“ModuleNotFoundError: No module named ‘jetbot.camera.zmq_camera’; ‘jetbot.camera’ is not a package”
All you have to do is change the path of the cloned jetbot: put the jetbot folder in the directory your notebook runs from, so that Python can import jetbot.camera from the repo rather than from the installed package.
Then change the code (note the parentheses to actually construct the camera):
“from jetbot.camera.zmq_camera import ZmqCamera
from jetbot import bgr8_to_jpeg

camera = ZmqCamera()”
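Another way (an assumption on my part, not the exact fix the poster used) to make the cloned repo importable without moving folders is to put its path on sys.path before importing:

```python
# Put the cloned repo on the import path so `import jetbot` resolves
# to the checkout that contains jetbot/camera/zmq_camera.py.
import os
import sys

repo_path = os.path.expanduser("~/jetbot")   # hypothetical clone location
if repo_path not in sys.path:
    sys.path.insert(0, repo_path)            # takes precedence over installed jetbot
```

Because the path is inserted at position 0, the repo’s jetbot package shadows any pip-installed copy that lacks the new camera module.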
But you may run into a new problem:
“AttributeError: ‘Robot’ object has no attribute ‘makerobo_servo’”
The remaining problem was makerobo_servo: that servo call could no longer adjust the angle. Since I can set the angle in advance through other program code, I deleted that line, and after that the whole notebook ran correctly.
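The workaround above can also be written defensively instead of deleting the line. This is a hedged sketch: makerobo_servo appears to come from a vendor-modified Robot class, not stock JetBot, so the code simply skips the call when the attribute is absent.

```python
# Guard the vendor-specific servo call so the same notebook runs on
# both stock JetBot Robot objects and vendor builds with makerobo_servo.
class Robot:          # stand-in for jetbot.Robot without the vendor servo
    pass

robot = Robot()
if hasattr(robot, "makerobo_servo"):
    robot.makerobo_servo.angle = 90   # only runs on builds that have it
else:
    print("no servo attribute; steering angle was set elsewhere")
```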