Jetson Nano: how can I find a TensorRT model size that avoids thermal overheating?

Hi,
I am using the Jetson Nano DevKit (JetPack 4.2.2).
My application uses 4 TensorRT models (face detection, etc.).
First, camera frames are passed to the 'Face Detection' engine.
My face detection TensorRT model takes about 20 ms per inference and is 4.8 MB in size (TRT FP16 mode).

Another face detection TensorRT model takes about 45 ms per inference and is 105 MB in size (TRT FP16 mode).
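Since the comparison above is based on the on-disk size of each serialized engine, that size can be checked directly. A minimal sketch (the engine file names below are hypothetical placeholders, not files from this thread):

```python
import os

def engine_size_mb(path):
    """Return the size of a serialized TensorRT engine file in megabytes."""
    return os.path.getsize(path) / (1024 * 1024)

# Hypothetical engine paths; replace with your own .trt/.engine files.
for engine in ["face_detect_small.trt", "face_detect_large.trt"]:
    if os.path.exists(engine):
        print(f"{engine}: {engine_size_mb(engine):.1f} MB")
```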

With the first model (4.8 MB), the Jetson Nano's temperature is about 60 °C.
With the second model (105 MB), it is about 72 °C.
(If I also run the other three models, the temperature reaches 80 °C.)

According to the documentation, the Tegra X1 shutdown temperature is 102-103 °C.

I am wondering: is my application thermally stable?
The overall temperature is about 80 °C; the A0 value is 100 °C, and the CPU and GPU values are 90 °C.

How can I find a suitable model size that keeps the temperature stable?

Thank you.

Hi kkuzuri,

You may need to refer to the Jetson Nano Thermal Design Guide - https://developer.nvidia.com/embedded/dlc/jetson-nano-thermal-design-guide-1.3 - to implement a suitable thermal solution and keep the device stable.

Hi, kayccc.

Thanks!