JetPack 6.2: how to run AI at full load

Hi NV

For a custom carrier board with an Orin NX 16GB module, what is a simple way to run an AI workload at full load in order to test MAXN_SUPER mode reliability?
According to the link "Exploring NVIDIA Jetson Orin Nano Super Mode performance using Generative AI",
the bandwidthTest finishes quickly, and the power GUI does not show the DLA working. Is that expected? How can I confirm that the DLA engines are running at full load?

BTW, we have already configured the NX module into MAXN_SUPER and run the CPU/GPU at full load (frequencies are correct and usage is 100%, as the picture shows), but VDD_IN power only reaches around 33 W. How can we bring it to 40 W or more (super mode)? Please suggest an AI test for maximum power consumption. Thanks!

Jasper
20250214

Hi,

Do you want to run a stress test, or do you want to increase the AI throughput?

For the stress test, please find more info below:

https://elinux.org/Jetson/L4T/TRT_Customized_Example#Stress_Test_for_Orin
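As a concrete starting point, one common way to put sustained load on the GPU is to loop an engine through `trtexec`. This is a hedged sketch, not the exact procedure from the page above: paths assume the stock TensorRT sample data shipped with JetPack 6.2, and the `nvpmodel` mode index for MAXN_SUPER may differ on your module.

```shell
# Hedged sketch: sustained GPU load via TensorRT on a stock JetPack 6.2 install.
sudo nvpmodel -m 0        # select the max-power mode (index may differ on your module)
sudo jetson_clocks        # lock all clocks to maximum
# Build an engine from the bundled MNIST sample model, then run it continuously:
/usr/src/tensorrt/bin/trtexec \
  --onnx=/usr/src/tensorrt/data/mnist/mnist.onnx \
  --saveEngine=/tmp/stress.engine
/usr/src/tensorrt/bin/trtexec \
  --loadEngine=/tmp/stress.engine \
  --duration=600 --streams=4    # ~10 minutes, 4 concurrent streams
```

Monitor VDD_IN with `sudo tegrastats` while this runs; stacking CPU load (e.g. a `stress-ng` instance) on top pushes total power further.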

For benchmark, you can check below to deploy LLM with MLC:

Thanks.

Hi,

According to the topic “How to verify Orin the TOPS performance”:
the Orin NX 16GB AI TOPS (GPU) = 100 TOPS (total) − 40 TOPS (2x DLA) = 60 TOPS;
the Orin NX 8GB AI TOPS (GPU) = 70 TOPS (total) − 20 TOPS (2x DLA) = 50 TOPS.

Applying the same calculation to MAXN_SUPER mode, we get:
the Orin NX 16GB MAXN_SUPER: AI TOPS (GPU) = 157 TOPS (total) − 80 TOPS (2x DLA) = 77 TOPS;
the Orin NX 8GB MAXN_SUPER: AI TOPS (GPU) = 117 TOPS (total) − 40 TOPS (2x DLA) = 77 TOPS.

When running the CUTLASS test, the real AI (GPU) throughput we measured was:
the Orin NX 16GB MAXN_SUPER: 42.398 TOPS, around 55% of the maximum;
the Orin NX 8GB MAXN_SUPER: 42.391 TOPS, around 55% of the maximum.
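As a quick sanity check, the arithmetic above can be reproduced in a couple of shell lines (the TOPS figures are the ones quoted in this thread, not independently measured here):

```shell
# MAXN_SUPER figures quoted above for the Orin NX 16GB module
total_tops=157      # total module TOPS in MAXN_SUPER
dla_tops=80         # 2x DLA share
gpu_tops=$((total_tops - dla_tops))
echo "GPU TOPS budget: $gpu_tops"                 # prints 77
# Measured CUTLASS throughput vs. the derived GPU budget:
awk -v m=42.398 -v g="$gpu_tops" 'BEGIN { printf "efficiency: %.0f%%\n", 100*m/g }'
```

This reproduces the ~55% efficiency figure for the 16GB module.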

My question is: how can we run a DLA burn-in test and validate the DLA AI performance?

Jasper
20250218

Hi,

You can find some samples showing how to deploy a model on the DLA below:
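As a minimal example, `trtexec` can target a DLA core directly. This is a sketch under assumptions: `model.onnx` is a placeholder for whatever model you have on hand, and the duration/stream settings are illustrative.

```shell
# Hedged sketch: run an engine on DLA core 0 in FP16, falling back to the GPU
# only for layers the DLA does not support.
/usr/src/tensorrt/bin/trtexec \
  --onnx=model.onnx \
  --useDLACore=0 --fp16 \
  --allowGPUFallback \
  --duration=600
# Launch a second instance with --useDLACore=1 to load both DLA engines,
# and watch the power draw in parallel with: sudo tegrastats
```

The throughput summary that `trtexec` prints at the end gives you a per-DLA performance number to compare against the GPU run.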

Thanks.

Hi,

When installing TensorFlow v2.16.1, some errors pop up. Is that OK or not?

install tensorflow V2.16.1 under JP6.2 on Orin NX 16GB.txt (43.1 KB)

Jasper
20250220

Hi,

>>> import tensorflow
2025-02-20 15:57:42.123316: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT: INTERNAL: Cannot dlopen all TensorRT libraries: FAILED_PRECONDITION: Could not load dynamic library 'libnvinfer.so.10.3.0'; dlerror: libnvinfer.so.10.3.0: cannot open shared object file: No such file or directory
/usr/lib/python3/dist-packages/scipy/__init__.py:146: UserWarning: A NumPy version >=1.17.3 and <1.25.0 is required for this version of SciPy (detected version 1.26.4
  warnings.warn(f"A NumPy version >={np_minversion} and <{np_maxversion}"

The error indicates that TensorFlow requires TensorRT 10.3.0 but is not able to find it.
TensorRT 10.3 is the default version in JetPack 6.2,
so the library is expected to exist if no manual upgrade has been applied.
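To verify this on the device, a quick check looks like the following (the library path and package name below are the JetPack 6.2 defaults as I understand them; adjust if your install differs):

```shell
# Check that the TensorRT runtime library TensorFlow is dlopen-ing is present:
ls /usr/lib/aarch64-linux-gnu/libnvinfer.so*
dpkg -l | grep -i nvinfer
# If libnvinfer.so.10.3.0 is missing, reinstalling the runtime package
# should restore it:
# sudo apt install --reinstall libnvinfer10
```

If the library is present but TensorFlow still cannot find it, check that its directory is on the loader path (`ldconfig -p | grep nvinfer`).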

Thanks.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.