Jetson Nano crashes when using tiny YOLOv3 model

Hi, I set up the darknet environment according to the link below.

I trained my own tiny YOLOv3 model and tried to detect objects.
The speed only reaches 8 FPS, which is far below the speed mentioned in the link:
https://devblogs.nvidia.com/jetson-nano-ai-computing/

In addition, the Nano was not stable and crashed from time to time.
Can you help investigate the crash issue and the performance problem? Thanks.

Hi, bobzeng

The benchmark is based on TensorRT and FP16 inference.
You need to deploy your model as a TensorRT engine operating in FP16 mode to get the best performance.
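As a rough sketch (assuming you have an ONNX export of your model; the file name below is just a placeholder), the trtexec tool that ships with TensorRT can build an FP16 engine:

/usr/src/tensorrt/bin/trtexec --onnx=yolov3-tiny.onnx --fp16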

For the Nano dev kit, I recommend using the DC power input. The USB power supply does not meet the requirement when the Nano is fully loaded.

Using the downloaded configs and weights, tiny YOLOv3 is very stable.
My command line is
./darknet detector demo cfg/coco.data cfg/yolov3-tiny.cfg yolov3-tiny.weights "rtsp://admin:admin@192.168.1.99:554/av0_0"

Performance is so-so; I get about 1 to 2 FPS.

I am using the 5V 2.5A power jack input.

Dear bobzeng,
I had the same problem, and I want to ask: when your Nano crashed, was it the Ubuntu system or your program that crashed? In my case, the problem is with the system.

@327985736

I think the problem is related to the program. Algorithms like MobileNet-SSD are stable on the Jetson Nano.

@Alanz

Can you please provide an example of how to get yolov3-tiny running on the Jetson Nano? I went through deepstream_reference_apps and was able to get it running on my host machine, but no luck on the Jetson. One thing I stumbled upon: where can I find the DeepStream SDK headers so I can compile deepstream_reference_apps?

Thanks for your help!

Best
Alexander

Hi,

I have the same issue with darknet. Has anyone succeeded in running darknet smoothly on the Jetson Nano? I compiled darknet with GPU and OpenCV enabled. The Jetson Nano just freezes when I try to detect objects using the GPU. When I use the -nogpu flag, it is fine though.
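For reference, these are roughly the Makefile settings I used; the ARCH line is my own addition to target the Nano's Maxwell GPU (compute capability 5.3):

GPU=1
OPENCV=1
ARCH= -gencode arch=compute_53,code=[sm_53,compute_53]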

Hi,

DeepStream for Jetson Nano is not released yet; it is planned for Q2 2019.
You can sign up for a notification here: https://developer.nvidia.com/deepstream-sdk/jetson-nano-notify-me

Although the DeepStream version is not available yet, you can first try the trt-yolo-app, which depends only on TensorRT:
https://github.com/NVIDIA-AI-IOT/deepstream_reference_apps/tree/master/yolo#trt-yolo-app
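For reference, the invocation looks roughly like this (the config path is relative to the repo checkout). If I read the repo correctly, precision can be switched to FP16 by setting --precision=kHALF in the config file, but please double-check against the README:

trt-yolo-app --flagfile=config/yolov3-tiny.txt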

Thanks.

Hi,

Would you mind sharing the system log from tegrastats with us first?

sudo tegrastats

Thanks.

I managed to run tiny YOLO on darknet on the Jetson Nano at 18 FPS with a Logitech webcam in real time. Pretty decent FPS, and this is without TensorRT.

Does anyone know how to check the GPU temperature on the Jetson Nano? When I run YOLO on darknet and touch the heat sink, it is very, very hot, so I just want to know how to check the temperature. On my PC I have the NVIDIA drivers installed, so when I type nvidia-smi I get the GPU memory usage and its temperature, but on the Jetson Nano I don't know how to do that. Any help would be great.

If anyone wants my working darknet tiny YOLO code, I can provide the git link, but I suggest you use TensorRT because I faced a heating issue (maybe it's a Jetson problem). Also, I don't think you can run full yolov3 on the Jetson Nano: when I tried, it got stuck and after some time the process was killed. So if any NVIDIA member is seeing this, can you help me run yolov3 (not tiny yolov3) on the Jetson Nano? It can be with TensorRT or with darknet.

As mentioned above, you may use tegrastats for monitoring resource usage and temperature:

sudo /usr/bin/tegrastats
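If you prefer reading the sensors directly, the thermal zones are also exposed through sysfs; the temp values are in millidegrees Celsius:

cat /sys/devices/virtual/thermal/thermal_zone*/type
cat /sys/devices/virtual/thermal/thermal_zone*/temp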

@cudanexus, can you share the git link for your tiny YOLOv3 running at 18 FPS? Thanks.

By the way, you can find sample code for yolov3_onnx on the Jetson Nano:
/usr/src/tensorrt/samples/python/yolov3_onnx
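If I remember the sample layout correctly, it first converts the darknet model to ONNX and then builds a TensorRT engine from it, roughly like this:

cd /usr/src/tensorrt/samples/python/yolov3_onnx
python yolov3_to_onnx.py      # darknet cfg/weights -> ONNX model
python onnx_to_tensorrt.py    # ONNX model -> TensorRT engine + inference on a test image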

In my experience, "Killed" means out of memory. I loaded LXQt and Openbox to free up more memory. It helped a lot.
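If you want to go further, a standard systemd option is to boot to the console target so the desktop does not use memory at all:

sudo systemctl set-default multi-user.target   # disable the GUI on next boot
sudo systemctl set-default graphical.target    # restore it later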

I have tried https://github.com/NVIDIA-AI-IOT/deepstream_reference_apps/tree/master/yolo#trt-yolo-app on the Jetson Nano. When I run the command

$ trt-yolo-app --flagfile=/home/teric-ai/Desktop/deepstream_reference_apps/yolo/config/yolov3-tiny.txt
Loading pre-trained weights...
Loading complete!
Total Number of weights read : 8858734
      layer               inp_size            out_size       weightPtr
(1)   conv-bn-leaky     3 x 416 x 416      16 x 416 x 416    496   
(2)   maxpool          16 x 416 x 416      16 x 208 x 208    496   
(3)   conv-bn-leaky    16 x 208 x 208      32 x 208 x 208    5232  
(4)   maxpool          32 x 208 x 208      32 x 104 x 104    5232  
(5)   conv-bn-leaky    32 x 104 x 104      64 x 104 x 104    23920 
(6)   maxpool          64 x 104 x 104      64 x  52 x  52    23920 
(7)   conv-bn-leaky    64 x  52 x  52     128 x  52 x  52    98160 
(8)   maxpool         128 x  52 x  52     128 x  26 x  26    98160 
(9)   conv-bn-leaky   128 x  26 x  26     256 x  26 x  26    394096
(10)  maxpool         256 x  26 x  26     256 x  13 x  13    394096
(11)  conv-bn-leaky   256 x  13 x  13     512 x  13 x  13    1575792
(12)  maxpool         512 x  13 x  13     512 x  13 x  13    1575792
(13)  conv-bn-leaky   512 x  13 x  13    1024 x  13 x  13    6298480
(14)  conv-bn-leaky  1024 x  13 x  13     256 x  13 x  13    6561648
(15)  conv-bn-leaky   256 x  13 x  13     512 x  13 x  13    7743344
(16)  conv-linear     512 x  13 x  13     255 x  13 x  13    7874159
(17)  yolo            255 x  13 x  13     255 x  13 x  13    7874159
(18)  route                  -            256 x  13 x  13    7874159
(19)  conv-bn-leaky   256 x  13 x  13     128 x  13 x  13    7907439
(20)  upsample        128 x  13 x  13     128 x  26 x  26        - 
(21)  route                  -            384 x  26 x  26    7907439
(22)  conv-bn-leaky   384 x  26 x  26     256 x  26 x  26    8793199
(23)  conv-linear     256 x  26 x  26     255 x  26 x  26    8858734
(24)  yolo            255 x  26 x  26     255 x  26 x  26    8858734
Output blob names :
yolo_17
yolo_24
Using previously generated plan file located at data/yolov3-tiny-kFLOAT-kGPU-batch1.engine
Loading TRT Engine...
Loading Complete!
Total number of images used for inference : 1
[======================================================================] 100 %
Network Type : yolov3-tiny Precision : kFLOAT Batch Size : 1 Inference time per image : 216.834 ms

How can I run TensorRT on a video or an image? There was no documentation on how to run it. On darknet I get an inference time per image of 150 ms, so I just want to check whether TensorRT is faster than darknet.

Hi,

To prevent the Linux OOM killer, I enabled a swapfile of around 6 GB.
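In case it helps, this is roughly how a swapfile can be set up (assuming an ext4 root filesystem; add an entry to /etc/fstab if you want it to persist across reboots):

sudo fallocate -l 6G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile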

Are you talking about yolov3? If yes, how can I do that on the Jetson Nano?

Hi,

I'm using a barrel-jack 4.0 A power supply. I get 5 FPS with lightweight YOLO for the following video format:

1920 * 1080
H.265
30 fps
bitrate: 7647kbps

Do you think it is normal to get 5 FPS with lightweight yolov3 for that video?

Thanks

First of all, I don't have that much experience with lightweight yolov3, but I have worked with yolov3-tiny and got around 13 to 15 FPS on 1920x1080 live-streamed video; if I decrease the video resolution, the FPS increases. Can you describe how many layers lightweight yolov3 uses, and what is its git repo? We have 4 lightweight repos, so just share the layers and I will let you know; if possible, I can even point you to an optimized repo so you can give it a try.

Hi,

Well, by saying lightweight I meant yolov3-tiny. I'm also new to neural networks. I used the official yolov3 repo and compiled it with the CUDA and OpenCV flags, so I think we are using the same settings.

Can you share your code, please?
I have only seen code for the Jetson TX1 and TX2, and I believe the Makefile configuration there is different from the Nano's.