Power Consumption Developer Kit vs. own build

Hello,

I am currently working on a passive cooling system for the Jetson Thor. We have already developed our own carrier board for the Thor and are now stress-testing the hardware to validate the cooling system.

During comparative measurements, we noticed that the Developer Kit draws more power than our carrier under the same workload and with the same power mode (including jetson_clocks).

Do you know what could explain this difference? Are there any other parameters besides power modes and jetson_clocks that can influence power consumption?

Could you provide more data as a reference? How do you run the measurements and the tests?

You may need to fully compare the tegrastats or jtop output between the two devices to check what is different.
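
For example, a minimal capture on each board might look like this (assuming a standard JetPack install where tegrastats is on the PATH; the file name and the 60 s duration are just placeholders):

  # Log tegrastats once per second in the background while the workload runs
  sudo tegrastats --start --interval 1000 --logfile devkit_tegrastats.log
  sleep 60
  sudo tegrastats --stop

Comparing the resulting logs line by line should show which rails, clocks, or utilization figures diverge between the two boards.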

What I did:

  1. I created a custom power mode file (nvpmodel.conf) with all settings set to maximum, so both devices now use the same power configuration.

  2. I activated jetson_clocks on both devices.

  3. I ran gpu_burn.

  4. I used tegrastats to compare VDD_GPU (see the sketch after this list).
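
For reference, a minimal sketch of that sequence (the mode ID 0, the gpu_burn location, and the durations are assumptions specific to my setup):

  # 1) Select the custom max-performance power mode (mode ID is an example)
  sudo nvpmodel -m 0
  # 2) Lock all clocks to their maximum
  sudo jetson_clocks
  # 3) Stress the GPU; gpu_burn takes the run time in seconds
  ./gpu_burn 120 &
  # 4) Log tegrastats while the load runs, then pull out the GPU rail
  sudo tegrastats --start --interval 1000 --logfile stress.log
  sleep 120
  sudo tegrastats --stop
  # The exact rail name varies by module (e.g. VDD_GPU_SOC on some boards)
  grep -oE 'VDD_GPU[A-Z_]* [0-9]+mW/[0-9]+mW' stress.log | tail -5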

What I observed:

On our device, vdd_gpu is around 20 W lower than on the developer kit.
I also ran a YOLO benchmark, which showed that the developer kit delivers significantly better performance.

Now I am wondering why there is such a difference, even though both devices were set up with the same power settings.

devkit vs. own build.pdf (285.5 KB)

Have you confirmed the exact same results when you run # nvpmodel -q --verbose on both the devkit and your custom carrier board?
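
One quick way to rule out configuration drift is to dump the output on each board and diff the two files (file names here are only examples):

  # On each board:
  sudo nvpmodel -q --verbose > nvpmodel_devkit.txt    # nvpmodel_t5000.txt on the carrier
  # After copying both files to one machine:
  diff nvpmodel_devkit.txt nvpmodel_t5000.txt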

From the result you shared, it seems the devkit is using more CPU (especially CPU1/CPU9) and EMC resources.

They show the same results.

That's right, these two CPUs show higher utilization. But to me this does not explain the 20 W difference that can be seen on VDD_GPU.

You’re right, higher CPU utilization alone doesn’t explain a +20 W difference on VDD_GPU.
This points to the GPU rail (clocks / power limit / throttling) behaving differently between devkit and your carrier.
Could you share a short tegrastats log from both boards under the same workload (with GR3D_FREQ, temps and VDD_GPU), plus sudo nvpmodel -q --verbose and sudo jetson_clocks --show, so we can see if the devkit GPU is really running at a higher effective frequency or under a different limit?
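
Something like the following on each board would cover all three (assuming the workload is already running; file names and the 60 s window are placeholders):

  # 60 s of tegrastats (includes GR3D_FREQ, temps and the VDD_* rails)
  sudo tegrastats --start --interval 1000 --logfile board.log
  sleep 60
  sudo tegrastats --stop
  # Active power-mode and clock configuration
  sudo nvpmodel -q --verbose > nvpmodel.txt
  sudo jetson_clocks --show > clocks.txt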

DEVKIT.txt (27.8 KB)

T5000.txt (24.0 KB)

T5000.txt contains the data from our carrier, DEVKIT.txt contains the data from the devkit. Let me know if something is missing.

From the logs both boards are in MODE_66_0_W with the same GPU/EMC Fmax, and GR3D is ~1.57 GHz on both.
The big difference is that on the devkit the CPUs actually run at 2.6 GHz with EMC ~90%, while on your carrier they sit around 650 MHz with EMC ~58%, so the GPU + memory are doing less real work there – which matches both the lower performance and the ~20 W lower VDD_GPU.
I’d suggest re-capturing tegrastats right after enabling jetson_clocks on your board and making sure the full workload pipeline (data loading / preprocessing etc.) is identical; once CPU/EMC behavior matches the devkit, VDD_GPU should get much closer as well.
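
A minimal check on the carrier, assuming the standard sysfs cpufreq paths, would be:

  # Re-apply max clocks and confirm the CPUs report their max frequency
  sudo jetson_clocks
  sudo jetson_clocks --show | grep -i cpu
  # Current per-core frequency straight from sysfs (values are in kHz)
  cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_cur_freq
  # If cores still sit near 650 MHz under load, check the governor too
  cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor

If the carrier's cores stay pinned at 2.6 GHz and EMC utilization rises to match the devkit, VDD_GPU should converge as well.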