Jetson Nano crashes when trying to load and run a Tiny YOLOv3 model (TensorRT-optimized)

Hi.

I recently trained a face detection model using the popular Keras implementation of YOLOv3 (https://github.com/qqwweee/keras-yolo3). I managed to convert this model to a TensorRT-optimized frozen graph by loading the model and extracting the graph from Keras's global session. I used FP16 precision and set max_segments to 50 when converting the graph.
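For context, the conversion followed roughly this TF-TRT flow. This is a sketch rather than the exact script: the model path, output handling, and the mapping of "max_segments = 50" onto the minimum_segment_size argument are assumptions.

import tensorflow as tf
import tensorflow.contrib.tensorrt as trt
from tensorflow.python.framework import graph_util
from keras import backend as K
from keras.models import load_model

# Load the trained keras-yolo3 model in inference mode.
K.set_learning_phase(0)
model = load_model('yolo_face.h5', compile=False)   # hypothetical path
sess = K.get_session()
output_names = [out.op.name for out in model.outputs]

# Freeze variables into constants so the graph can be optimized offline.
frozen_graph = graph_util.convert_variables_to_constants(
    sess, sess.graph.as_graph_def(), output_names)

# TF-TRT optimization pass (FP16, as described above).
trt_graph = trt.create_inference_graph(
    input_graph_def=frozen_graph,
    outputs=output_names,
    max_batch_size=1,
    max_workspace_size_bytes=1 << 28,   # keep the workspace small on the Nano
    precision_mode='FP16',
    minimum_segment_size=50)            # assumed to correspond to "max_segments = 50"

with tf.gfile.GFile('trt_yolo_face.pb', 'wb') as f:
    f.write(trt_graph.SerializeToString())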

Loading this graph takes up a large share of my RAM (around 50%), and as soon as the model starts running the Nano crashes. Could anyone help me understand why this is happening? I'm powering the Nano with a 2A micro-USB supply.
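For reference, one way to keep TensorFlow from claiming most of the shared memory up front would be a session configuration along these lines. This is only a sketch of that option, not something verified on this setup, and the fraction value is illustrative.

import tensorflow as tf
from keras import backend as K

# The Nano's 4 GB is shared between CPU and GPU, and by default
# TensorFlow tries to claim most of it when the session is created.
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
# Or hard-cap the share TensorFlow may claim (value illustrative):
# config.gpu_options.per_process_gpu_memory_fraction = 0.5
K.set_session(tf.Session(config=config))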

Thank you

Hi,

Have you tried our DeepStream sample?
We can run the YOLOv3 model with the DeepStream SDK on the Nano.
https://docs.nvidia.com/metropolis/deepstream/Custom_YOLO_Model_in_the_DeepStream_YOLO_App.pdf

Thanks.