Jetson AGX Orin Running Heavy Code

I am trying to run some heavy code on my Jetson AGX Orin from the Thonny terminal. While the code is running, Thonny keeps minimizing and reopening as a blank white screen. Should I supply maximum power to the Jetson? Should I overclock it?

Hi,
By default, DFS (dynamic frequency scaling) is enabled for the CPU, GPU, and EMC. You can run $ sudo tegrastats to check the status of the engines. DFS saves power, but if your use case is usually under heavy load, you can run $ sudo jetson_clocks to keep the engines always at their maximum clocks for optimal throughput.

Hello,

I enabled the maximum clock for my GPU and enabled all my CPUs. Still, while sorting a huge set of numbers, the Python IDE crashes. The resulting file is created, but the IDE crashes and I am unable to run the rest of the program.

Hi,
You may run $ sudo tegrastats to clarify which component constrains system performance.

I tried that, but I don't know what I am supposed to look for.
Is there a way I can take a copy of those logs and post them here for you to have a look at?

Hi,
Do you see the CPU cores and the GPU engine at 100% load in sudo tegrastats? Or only the GPU engine at full load?

8 of the CPU cores run at around 10-15%, and 4 of them are off. The GPU goes up to 6% (seen in the GUI). In tegrastats I can't see a percentage for the GPU, only the power value, which sometimes overshoots its budget (e.g. 3594/2920 mW). Thonny or IDLE, whichever IDE I run, gets stuck. This mostly happens when I try to write the sorted result into a file. The file is created and the values are in it when I reboot the system, though.

Hi,
It sounds like the issue is file I/O speed rather than the GPU or CPU engines. Please check whether you can eliminate the code that writes to the file and give it a try, to clarify whether file I/O is the bottleneck.
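A rough way to check this from Python is to time the sort and the write separately. The sketch below uses placeholder data and a placeholder file name (sorted_output.txt), so adapt it to your actual workload:

import random
import time

# Placeholder data set; substitute your real workload
data = [random.random() for _ in range(5_000_000)]

t0 = time.perf_counter()
data.sort()
t1 = time.perf_counter()
print(f"sort took {t1 - t0:.2f} s")

# Write the whole result with a single open/close
t2 = time.perf_counter()
with open("sorted_output.txt", "w") as f:
    f.writelines(f"{x}\n" for x in data)
t3 = time.perf_counter()
print(f"write took {t3 - t2:.2f} s")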

Yeah, that worked. Is there a way to work around it so that I can continue with the file writing, or is it a Python issue?

A lot of file I/O performance issues occur during the open/close of the file. A naive program will open a file when it is needed, operate on it, and then close it. Unfortunately, if that happens in a loop, the overhead is extraordinary. I'm not sure how to profile in Python, but if you can, then profiling would be the way to actually "know" where the issue is. Without that, I'm going to suggest that you make sure the file descriptor remains open.
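As a rough illustration of keeping the descriptor open (the file names and data here are made up), compare a loop that opens and closes the file on every iteration with one that opens it once:

results = [str(i) for i in range(100_000)]  # placeholder data

# Slow: opening and closing the file inside the loop adds overhead on every iteration
for item in results:
    with open("output_slow.txt", "a") as f:
        f.write(item + "\n")

# Faster: open once, write everything, close once
with open("output_fast.txt", "w") as f:
    for item in results:
        f.write(item + "\n")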

An example of a case where there is no way around this is with pipes. It isn’t unusual for someone to use a program like “find”, and then pipe it to something else, e.g.:
find . -iname 'something*' -print0 | xargs -0 ...something to do with file...

In that case, each file found gets piped to a program via xargs, and the operation results in an open/close for each file. The same command set, operating on a single large file without the repeated open/close, would probably be at least an order of magnitude faster. This might not have anything to do with your situation, but consider making sure any file I/O uses open/close only once. It's a good place to start.


Ok, thanks, I will work around the problem. I will also try to check whether there is profiling in Python, and I will post the solutions here, along with the results of the other suggested options.

Thanks.
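For reference, the Python standard library ships a profiler, cProfile. A minimal sketch (sort_and_write is just a placeholder name for the actual sorting and writing code) could look like this:

import cProfile
import pstats

def sort_and_write():
    # Placeholder for the actual sorting and file-writing code
    data = sorted(range(1_000_000), reverse=True)
    with open("profiled_output.txt", "w") as f:
        f.writelines(f"{x}\n" for x in data)

# Run the function under the profiler and dump the stats to a file
cProfile.run("sort_and_write()", "profile_stats")

# Print the ten entries with the largest cumulative time
pstats.Stats("profile_stats").sort_stats("cumulative").print_stats(10)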
