Error running DeepStream 5.1 YOLOv3 sample on a Jetson Nano device

I am trying to run the YOLOv3 sample of DeepStream 5.1 on a Jetson Nano device:

deepstream-app -c deepstream_app_config_yoloV3.txt

The log shows the following information:

Using winsys: x11
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream-5.1/lib/libnvds_mot_klt.so
gstnvtracker: Optional NvMOT_RemoveStreams not implemented
gstnvtracker: Batch processing is OFF
gstnvtracker: Past frame output is OFF
0:00:01.279524238 32370 0x13430d50 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1716> [UID = 1]: Trying to create engine from model files
WARNING: INT8 not supported by platform. Trying FP16 mode.
Loading pre-trained weights…
Loading weights of yolov3 complete!
Total Number of weights read : 62001757
Loading pre-trained weights…
Loading weights of yolov3 complete!
Total Number of weights read : 62001757
Building Yolo network…
layer inp_size out_size weightPtr
(0) conv-bn-leaky 3 x 608 x 608 32 x 608 x 608 992
(1) conv-bn-leaky 32 x 608 x 608 64 x 304 x 304 19680
(2) conv-bn-leaky 64 x 304 x 304 32 x 304 x 304 21856
(3) conv-bn-leaky 32 x 304 x 304 64 x 304 x 304 40544

(105) conv-linear 256 x 76 x 76 255 x 76 x 76 62001757
(106) yolo 255 x 76 x 76 255 x 76 x 76 62001757
Output yolo blob names :
yolo_83
yolo_95
yolo_107
Total number of yolo layers: 257
Building yolo network complete!
Building the TensorRT Engine…
INFO: [TRT]: mm1_85: broadcasting input0 to make tensors conform, dims(input0)=[1,38,19][NONE] dims(input1)=[256,19,19][NONE].
INFO: [TRT]: mm2_85: broadcasting input1 to make tensors conform, dims(input0)=[256,38,19][NONE] dims(input1)=[1,19,38][NONE].
INFO: [TRT]: mm1_97: broadcasting input0 to make tensors conform, dims(input0)=[1,76,38][NONE] dims(input1)=[128,38,38][NONE].
INFO: [TRT]: mm2_97: broadcasting input1 to make tensors conform, dims(input0)=[128,76,38][NONE] dims(input1)=[1,38,76][NONE].

Killed


After waiting for a long time, it finally showed “Killed”.
Does the Jetson Nano not support this kind of model, or is something else wrong?

Hi,

Please share the complete environment information below first.
It’s also recommended to upgrade to our latest DeepStream 6.0.1 + JetPack 4.6.1.

• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the content of the configuration files, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name, i.e. for which plugin or which sample application, and the function description.)

Usually, “Killed” is caused by running out of memory.
May I know which Nano you use, the 2GB or the 4GB model?
You can also monitor the memory status with tegrastats:

$ sudo tegrastats
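
If it helps, tegrastats can also sample at a fixed interval and write to a log file, so you can review the memory trend after the crash (a small sketch; the --interval and --logfile options should be available on recent JetPack releases):

$ sudo tegrastats --interval 1000 --logfile tegrastats.log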

Thanks.

We use the 4GB Nano.
The command sudo tegrastats shows:

RAM 2100/3956MB (lfb 107x4MB) SWAP 1201/1978MB (cached 9MB) IRAM 0/252kB(lfb 252kB) CPU [9%@102,14%@102,9%@102,10%@204] EMC_FREQ 2%@1600 GR3D_FREQ 0%@76 VIC_FREQ 0%@140 APE 25 PLL@34C CPU@37.5C PMIC@100C GPU@34.5C AO@41.5C thermal@35.75C POM_5V_IN 2121/2121 POM_5V_GPU 41/41 POM_5V_CPU 207/207
RAM 2100/3956MB (lfb 107x4MB) SWAP 1201/1978MB (cached 9MB) IRAM 0/252kB(lfb 252kB) CPU [25%@518,22%@518,20%@518,22%@518] EMC_FREQ 3%@1600 GR3D_FREQ 25%@76 VIC_FREQ 0%@140 APE 25 PLL@34.5C CPU@37C PMIC@100C GPU@34.5C AO@42C thermal@35.75C POM_5V_IN 2529/2325 POM_5V_GPU 124/82 POM_5V_CPU 497/352
RAM 2100/3956MB (lfb 107x4MB) SWAP 1201/1978MB (cached 9MB) IRAM 0/252kB(lfb 252kB) CPU [14%@307,16%@307,15%@307,10%@307] EMC_FREQ 3%@1600 GR3D_FREQ 0%@76 VIC_FREQ 0%@140 APE 25 PLL@34.5C CPU@37.5C PMIC@100C GPU@34.5C AO@41.5C thermal@35.75C POM_5V_IN 2159/2269 POM_5V_GPU 83/82 POM_5V_CPU 249/317
RAM 2100/3956MB (lfb 107x4MB) SWAP 1201/1978MB (cached 9MB) IRAM 0/252kB(lfb 252kB) CPU [9%@518,16%@518,11%@518,11%@518] EMC_FREQ 2%@1600 GR3D_FREQ 0%@76 VIC_FREQ 0%@140 APE 25 PLL@34.5C CPU@37.5C PMIC@100C GPU@34.5C AO@41.5C thermal@35.75C POM_5V_IN 2201/2252 POM_5V_GPU 41/72 POM_5V_CPU 290/310
RAM 2099/3956MB (lfb 107x4MB) SWAP 1201/1978MB (cached 9MB) IRAM 0/252kB(lfb 252kB) CPU [19%@204,16%@204,10%@204,14%@204] EMC_FREQ 2%@1600 GR3D_FREQ 0%@76 VIC_FREQ 0%@140 APE 25 PLL@34.5C CPU@37C PMIC@100C GPU@34.5C AO@41.5C thermal@36.75C POM_5V_IN 2038/2209 POM_5V_GPU 41/66 POM_5V_CPU 124/273
RAM 2099/3956MB (lfb 107x4MB) SWAP 1201/1978MB (cached 9MB) IRAM 0/252kB(lfb 252kB) CPU [15%@204,13%@204,8%@204,11%@204] EMC_FREQ 2%@1600 GR3D_FREQ 0%@76 VIC_FREQ 0%@140 APE 25 PLL@34.5C CPU@37C PMIC@100C GPU@34.5C AO@41.5C thermal@35.25C POM_5V_IN 2163/2201 POM_5V_GPU 83/68 POM_5V_CPU 207/262
RAM 2099/3956MB (lfb 107x4MB) SWAP 1201/1978MB (cached 9MB) IRAM 0/252kB(lfb 252kB) CPU [15%@204,7%@204,9%@204,13%@204] EMC_FREQ 2%@1600 GR3D_FREQ 0%@76 VIC_FREQ 0%@140 APE 25 PLL@34C CPU@37C PMIC@100C GPU@34.5C AO@41.5C thermal@35.75C POM_5V_IN 2038/2178 POM_5V_GPU 41/64 POM_5V_CPU 124/242
RAM 2099/3956MB (lfb 107x4MB) SWAP 1201/1978MB (cached 9MB) IRAM 0/252kB(lfb 252kB) CPU [18%@204,10%@204,6%@204,11%@204] EMC_FREQ 2%@1600 GR3D_FREQ 0%@76 VIC_FREQ 0%@140 APE 25 PLL@34C CPU@37C PMIC@100C GPU@34.5C AO@42C thermal@35.75C POM_5V_IN 2038/2160 POM_5V_GPU 41/61 POM_5V_CPU 166/233
RAM 2100/3956MB (lfb 107x4MB) SWAP 1201/1978MB (cached 9MB) IRAM 0/252kB(lfb 252kB) CPU [20%@204,12%@204,13%@204,16%@204] EMC_FREQ 2%@1600 GR3D_FREQ 2%@76 VIC_FREQ 0%@140 APE 25 PLL@34.5C CPU@37C PMIC@100C GPU@34.5C AO@41.5C thermal@35.75C POM_5V_IN 2201/2165 POM_5V_GPU 83/64 POM_5V_CPU 290/239
RAM 2100/3956MB (lfb 107x4MB) SWAP 1201/1978MB (cached 9MB) IRAM 0/252kB(lfb 252kB) CPU [17%@204,13%@204,15%@204,11%@204] EMC_FREQ 2%@1600 GR3D_FREQ 0%@76 VIC_FREQ 0%@140 APE 25 PLL@34.5C CPU@37C PMIC@100C GPU@34.5C AO@41.5C thermal@35.75C POM_5V_IN 2201/2168 POM_5V_GPU 41/61 POM_5V_CPU 166/232
RAM 2100/3956MB (lfb 107x4MB) SWAP 1201/1978MB (cached 9MB) IRAM 0/252kB(lfb 252kB) CPU [7%@204,11%@204,14%@204,10%@204] EMC_FREQ 2%@1600 GR3D_FREQ 0%@76 VIC_FREQ 0%@140 APE 25 PLL@34.5C CPU@37.5C PMIC@100C GPU@34.5C AO@41.5C thermal@35.75C POM_5V_IN 2038/2157 POM_5V_GPU 41/60 POM_5V_CPU 166/226
RAM 2100/3956MB (lfb 107x4MB) SWAP 1201/1978MB (cached 9MB) IRAM 0/252kB(lfb 252kB) CPU [10%@102,10%@102,13%@102,10%@102] EMC_FREQ 2%@1600 GR3D_FREQ 0%@76 VIC_FREQ 0%@140 APE 25 PLL@34.5C CPU@37C PMIC@100C GPU@34.5C AO@41.5C thermal@35.5C POM_5V_IN 2038/2147 POM_5V_GPU 41/58 POM_5V_CPU 166/221
RAM 2100/3956MB (lfb 107x4MB) SWAP 1201/1978MB (cached 9MB) IRAM 0/252kB(lfb 252kB) CPU [9%@204,12%@204,12%@204,11%@204] EMC_FREQ 2%@1600 GR3D_FREQ 0%@76 VIC_FREQ 0%@140 APE 25 PLL@34.5C CPU@37C PMIC@100C GPU@34.5C AO@41.5C thermal@35.75C POM_5V_IN 2080/2141 POM_5V_GPU 41/57 POM_5V_CPU 207/219
RAM 2100/3956MB (lfb 107x4MB) SWAP 1201/1978MB (cached 9MB) IRAM 0/252kB(lfb 252kB) CPU [14%@825,9%@825,11%@825,15%@825] EMC_FREQ 2%@1600 GR3D_FREQ 0%@76 VIC_FREQ 0%@140 APE 25 PLL@34.5C CPU@37C PMIC@100C GPU@34.5C AO@41.5C thermal@35.75C POM_5V_IN 2080/2137 POM_5V_GPU 41/55 POM_5V_CPU 290/224
RAM 2100/3956MB (lfb 107x4MB) SWAP 1201/1978MB (cached 9MB) IRAM 0/252kB(lfb 252kB) CPU [12%@102,10%@102,14%@102,8%@102] EMC_FREQ 2%@1600 GR3D_FREQ 0%@76 VIC_FREQ 0%@140 APE 25 PLL@34.5C CPU@37C PMIC@100C GPU@34C AO@41.5C thermal@35.75C POM_5V_IN 2038/2130 POM_5V_GPU 41/54 POM_5V_CPU 166/221
RAM 2100/3956MB (lfb 107x4MB) SWAP 1201/1978MB (cached 9MB) IRAM 0/252kB(lfb 252kB) CPU [13%@102,9%@102,14%@102,7%@102] EMC_FREQ 2%@1600 GR3D_FREQ 0%@76 VIC_FREQ 0%@140 APE 25 PLL@34.5C CPU@37C PMIC@100C GPU@34.5C AO@41.5C thermal@35.75C POM_5V_IN 2038/2125 POM_5V_GPU 41/54 POM_5V_CPU 166/217

Hi,

Did you run tegrastats at the same time the inference was running?
Based on your log, the GPU utilization is 0%:

.. GR3D_FREQ 0%@76 ...

Thanks.

How can I fix it?
Thanks in advance.

Hi,

Please run tegrastats and the DeepStream app on different consoles at the same time.

While DeepStream is running, you should see high GPU utilization (~99%).
You can then check whether the memory usage exceeds the Nano’s capacity.
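
For example, a rough sketch, assuming the sample sits in its default location under /opt/nvidia/deepstream/deepstream-5.1/sources:

# Console 1: keep the memory monitor running
$ sudo tegrastats

# Console 2: start the sample while the monitor is active
$ cd /opt/nvidia/deepstream/deepstream-5.1/sources/objectDetector_Yolo
$ deepstream-app -c deepstream_app_config_yoloV3.txt

If the RAM value climbs toward 3956MB (and swap fills up) right before the process is killed, the TensorRT engine build is running out of memory on the 4GB Nano.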

Thanks.
