Is this normal?
If I kill the process, a new one is spawned, taking the same load.
This does not depend on my activity on the Jetson. The process is there even after a reboot.
Hi,
Could you share the process name with us?
Also, does the process take one CPU or all CPUs?
There are some monitor-type processes that are always running and may use a lot of resources in certain scenarios.
Please share the process name with us for further suggestions.
Thanks.
Hi, thank you for your reply.
After installing htop on my JetBot to help with my investigation, I believe the process I am referring to is:
python3 -m jetbot.apps.stats
This almost constantly takes 90-100% of one CPU, and the system immediately replaces it with a new process if I kill it.
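In case it is useful, one quick way to see which service keeps respawning it (a rough sketch; it assumes pgrep matches only that one stats process) is:
systemctl status $(pgrep -f jetbot.apps.stats | head -n 1)
which reports the systemd unit that owns the PID.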
Is it possible this is related to my running the JetBot without a PiOLED? I removed it because the display would not come on every time I restarted my JetBot; in fact, it failed to come on most of the time.
Please let me know if there are any other diagnostics you would like to see.
Hi,
You can disable the stats display service by running
sudo systemctl disable jetbot_stats
sudo systemctl stop jetbot_stats
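If you want to double-check the result, or bring the service back later, the standard systemctl commands apply to the same unit name:
systemctl status jetbot_stats
sudo systemctl enable jetbot_stats
sudo systemctl start jetbot_stats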
That said, it may be worth tracking down why the stats display was not showing. Could you create a new issue on the JetBot GitHub page?
Thanks!
John
Thank you. That worked.
I will start a new issue for the display not working.
Great! Thank you.
John
@fciampoli, do you have a link to the issue report? I would like to subscribe to it, as I have the same problem.
The issue is still outstanding; I just reproduced it on a newly arrived Jetson Nano with jetbot_image_v0p4p0.zip.
Same issue here, using the same image as above. Everything in the Jupyter collision avoidance demo is about 3-5 minutes behind. It seems to be working, just not able to keep up in real time.
top - 15:57:21 up 16 min, 2 users, load average: 1.45, 1.72, 1.40
Tasks: 266 total, 1 running, 265 sleeping, 0 stopped, 0 zombie
%Cpu(s): 13.3 us, 18.3 sy, 0.0 ni, 64.0 id, 0.2 wa, 3.5 hi, 0.7 si, 0.0 st
KiB Mem : 4059484 total, 526484 free, 3289716 used, 243284 buff/cache
KiB Swap: 6224036 total, 5380108 free, 843928 used. 604312 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
5247 root 20 0 9823100 49972 5196 S 18.4 1.2 2:26.78 nvargus-daemon
833 root -51 0 0 0 0 S 18.1 0.0 1:53.33 irq/70-host_sta
8448 jetbot 20 0 12.810g 1.688g 458596 S 10.9 43.6 3:01.67 python3
3828 jetbot 20 0 1935424 16816 3716 S 3.9 0.4 0:45.98 python3
7491 jetbot 20 0 9260 1740 1096 R 1.3 0.0 0:13.96 top
5477 root 20 0 77960 1176 952 S 1.0 0.0 0:07.26 nvphsd
832 root -51 0 0 0 0 S 0.7 0.0 0:10.06 irq/69-host_syn
6663 jetbot 20 0 569320 4012 3016 S 0.7 0.1 0:02.84 python3
736 root 20 0 0 0 0 S 0.3 0.0 0:00.99 nvmap-bz
3764 message+ 20 0 8216 2708 1544 S 0.3 0.1 0:03.18 dbus-daemon
5189 root 20 0 0 0 0 S 0.3 0.0 0:04.26 nvgpu_channel_p
15155 root 20 0 0 0 0 S 0.3 0.0 0:00.80 kworker/u8:0
15933 root 20 0 0 0 0 S 0.3 0.0 0:00.03 kworker/1:1
1 root 20 0 95612 4252 3048 S 0.0 0.1 0:02.65 systemd
2 root 20 0 0 0 0 S 0.0 0.0 0:00.01 kthreadd
3 root 20 0 0 0 0 S 0.0 0.0 0:01.34 ksoftirqd/0
5 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 kworker/0:0H
7 root 20 0 0 0 0 S 0.0 0.0 0:00.85 rcu_preempt
8 root 20 0 0 0 0 S 0.0 0.0 0:00.12 rcu_sched
9 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcu_bh
10 root rt 0 0 0 0 S 0.0 0.0 0:00.06 migration/0
11 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 lru-add
I have an update! I think my issue was the TP-Link USB WiFi adapter I was using. I tried plugging the bot into the Ethernet port, and it now seems to respond in near real time. So I think the bottleneck was streaming back to the notebook. I am going to try buying the recommended WiFi card and see if that also works better.