Face Recognition Keeps Crashing on Jetson Nano 2GB

I am currently following the lessons taught by Paul McWhorter on his YouTube channel. However, at the lesson on the face recognition system (Lesson 43), I failed to obtain the same results as his: he gets 10-15 fps and his system runs continuously without any issues, while mine crashes as soon as I run the program.

At the moment I have no idea whether it is the different operating system version or the device (hardware) itself that causes this problem, because my system setup differs from the one Paul has in his video.

Several things in my system setup that I know to be different from the tutorial video:

  • Type of device - Nvidia Jetson Nano 2GB
  • Version of Jetpack - 4.5.1 [L4T 32.5.1]
  • Camera sensor - IMX219-77
  • Version of the face recognition library 'dlib' - 19.21
  • Micro SD card storage capacity - 64 GB (SanDisk Ultra)
  • Swap (two swap areas) - P=-1 (4.3 GB) and P=-2 (6.4 GB)

This is my code, which is entirely based on the tutorial:

import face_recognition
import cv2
import os
import pickle
import time
print(cv2.__version__)

fpsReport=0  # running (filtered) fps estimate
scaleFactor=0.4

Encodings=[]
Names=[]

with open('train.pkl','rb') as f:
    Names=pickle.load(f)
    Encodings=pickle.load(f)

font=cv2.FONT_HERSHEY_SIMPLEX

dispW=320
dispH=240
flip=2
camSet='nvarguscamerasrc ! video/x-raw(memory:NVMM), width=3264, height=2464, format=NV12, framerate=21/1 ! nvvidconv flip-method='+str(flip)+' ! video/x-raw, width='+str(dispW)+', height='+str(dispH)+', format=BGRx ! videoconvert ! video/x-raw, format=BGR ! appsink'
cam=cv2.VideoCapture(camSet)  # CSI camera opened through the GStreamer pipeline above

timeStamp=time.time()  # present time

while True:
    _,frame=cam.read()

    frameSmall=cv2.resize(frame,(0,0),fx=scaleFactor,fy=scaleFactor)
    frameRGB=cv2.cvtColor(frameSmall,cv2.COLOR_BGR2RGB)
    facePositions=face_recognition.face_locations(frameRGB,model='cnn')
    allEncodings=face_recognition.face_encodings(frameRGB,facePositions)
    for (top,right,bottom,left),face_encoding in zip(facePositions,allEncodings):
        name='Unknown Person'
        matches=face_recognition.compare_faces(Encodings,face_encoding)
        if True in matches:
            first_match_index=matches.index(True)  # index of the first match
            name=Names[first_match_index]

        # scale the face box back up to the full-size frame
        top=int(top/scaleFactor)
        right=int(right/scaleFactor)
        left=int(left/scaleFactor)
        bottom=int(bottom/scaleFactor)

        cv2.rectangle(frame,(left,top),(right,bottom),(0,0,255),2)
        cv2.putText(frame,name,(left,top-6),font,.75,(0,0,255),2)

    dt=time.time()-timeStamp
    fps=1/dt
    fpsReport=.9*fpsReport+.1*fps  # low-pass filter the fps reading
    #print('FPS = {:.2f}'.format(fpsReport))
    timeStamp=time.time()

    cv2.rectangle(frame,(0,0),(100,40),(0,0,255),-1)
    cv2.putText(frame,str(round(fpsReport,1))+'fps',(0,25),font,.75,(0,255,255),2)

    cv2.imshow('piCam',frame)
    cv2.moveWindow('piCam',0,0)
    if cv2.waitKey(1)==ord('q'):
        break

cam.release()
cv2.destroyAllWindows()

The error is shown below:

mfaiz269@mfaiz269-desktop:~/Desktop/pyPro$ /usr/bin/python3 /home/mfaiz269/Desktop/pyPro/FaceRecognizer/FaceRecognize-6liveFPS.py
4.1.1
GST_ARGUS: Creating output stream
CONSUMER: Waiting until producer is connected

GST_ARGUS: Available Sensor modes :
GST_ARGUS: 3264 x 2464 FR = 21.000000 fps Duration = 47619048 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 3264 x 1848 FR = 28.000001 fps Duration = 35714284 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 1920 x 1080 FR = 29.999999 fps Duration = 33333334 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 1640 x 1232 FR = 29.999999 fps Duration = 33333334 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 1280 x 720 FR = 59.999999 fps Duration = 16666667 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 1280 x 720 FR = 120.000005 fps Duration = 8333333 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: Running with following settings:
Camera index = 0
Camera mode = 0
Output Stream W = 3264 H = 2464
seconds to Run = 0
Frame Rate = 21.000000
GST_ARGUS: Setup Complete, Starting captures for 0 seconds
GST_ARGUS: Starting repeat capture requests.
CONSUMER: Producer has connected; continuing.
[ WARN:0] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_gstreamer.cpp (933) open OpenCV | GStreamer warning: Cannot query video position: status=0, value=-1, duration=-1
CONSUMER: Done Success
Traceback (most recent call last):
File "/home/mfaiz269/Desktop/pyPro/FaceRecognizer/FaceRecognize-6liveFPS.py", line 34, in
frameSmall=cv2.resize(frame,(0,0),fx=scaleFactor,fy=scaleFactor) #change the frame dimensions
cv2.error: OpenCV(4.1.1) /home/nvidia/host/build_opencv/nv_opencv/modules/imgproc/src/resize.cpp:3720: error: (-215:Assertion failed) !ssize.empty() in function 'resize'

(Argus) Error Timeout: (propagating from src/rpc/socket/client/ClientSocketManager.cpp, function send(), line 137)
(Argus) Error Timeout: (propagating from src/rpc/socket/client/SocketClientDispatch.cpp, function dispatch(), line 87)
GST_ARGUS: Cleaning up
(Argus) Error Timeout: (propagating from src/rpc/socket/client/ClientSocketManager.cpp, function send(), line 137)
(Argus) Error Timeout: (propagating from src/rpc/socket/client/SocketClientDispatch.cpp, function dispatch(), line 87)
GST_ARGUS: Done Success
(Argus) Error Timeout: (propagating from src/rpc/socket/client/ClientSocketManager.cpp, function send(), line 137)
(Argus) Error Timeout: (propagating from src/rpc/socket/client/SocketClientDispatch.cpp, function dispatch(), line 87)
(Argus) Error Timeout: (propagating from src/rpc/socket/client/ClientSocketManager.cpp, function send(), line 137)
(Argus) Error Timeout: (propagating from src/rpc/socket/client/SocketClientDispatch.cpp, function dispatch(), line 87)
(Argus) Error InvalidState: Argus client is exiting with 4 outstanding client threads (in src/rpc/socket/client/ClientSocketManager.cpp, function recvThreadCore(), line 357)

I am sorry for the long description of my problem in this post. I am very new to this field of study; thanks in advance to anybody willing to help me solve this problem, and greetings from Malaysia =D


Hi,

(-215:Assertion failed) !ssize.empty() in function ‘resize’

The error indicates that OpenCV cannot read the camera data correctly.
Could you try the GStreamer command below to see if it works first?

$ gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM), width=3264, height=2464, format=NV12, framerate=21/1' ! nvvidconv flip-method=2 ! 'video/x-raw,width=320, height=240' ! nvvidconv ! nvegltransform ! nveglglessink -e
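
If the pipeline itself runs, it may also be worth guarding the capture loop against empty frames, since cv2.resize raises exactly this assertion when cam.read() returns nothing. A minimal sketch inside the while loop, using the variable names from the script above:

ret,frame=cam.read()
if not ret or frame is None:
    continue  # no data from the capture; skip this iteration instead of crashing in resize
frameSmall=cv2.resize(frame,(0,0),fx=scaleFactor,fy=scaleFactor)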

Thanks.

Yes, it works as expected after I tried that command in an LXTerminal.

Hi AastaLLL,

Yesterday I continued my learning with "Lesson 47: Facial Recognition on Multiple Cameras in OpenCV", where two cameras are streamed simultaneously via threading and faces are recognized independently on each. For my setup this time, I am still using the IMX219-77 camera module (connected via the MIPI CSI connector) and a webcam connected via a USB port.

There is good news and bad news from this lesson.

The bad news:
At first, both cameras were able to run simultaneously with very minimal latency between them and a very good frame rate (10-30 fps). But once I showed my face, the video stream froze for approximately one minute while it processed and recognized my face. Then, once it managed to recognize my face, the video stream from the IMX219-77 stopped, froze at the last frame it captured, and the error shown below appeared.

CONSUMER: Done Success
Exception in thread Thread-2:
Traceback (most recent call last):
File "/usr/lib/python3.6/threading.py", line 916, in _bootstrap_inner
self.run()
File "/usr/lib/python3.6/threading.py", line 864, in run
self._target(*self._args, **self._kwargs)
File "/home/mfaiz269/Desktop/pyPro/FaceRecognizer/FaceRecognize-12twoCamRecognize.py", line 25, in update
self.frame2=cv2.resize(self.frame,(self.width,self.height))
cv2.error: OpenCV(4.1.1) /home/nvidia/host/build_opencv/nv_opencv/modules/imgproc/src/resize.cpp:3720: error: (-215:Assertion failed) !ssize.empty() in function 'resize'

At the same time, the video stream from the webcam works smoothly and recognizes my face without any problem until I decide to kill the program.

Also, once the program fails or freezes and I kill it with the designated key (Q), I need to reboot my Jetson in order to re-launch the program properly. I don't know why this happens, but I think it is because the camera or video stream is not shut down properly.
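
If that is the case, I guess the fix would be to have the capture thread itself stop and release the camera before the program exits. A rough sketch of what I mean (the class and attribute names below are just mine, not from the tutorial):

import threading
import cv2

class CamStream:
    def __init__(self,src,width,height):
        self.cam=cv2.VideoCapture(src)
        self.width=width
        self.height=height
        self.frame2=None
        self.running=True
        self.thread=threading.Thread(target=self.update,daemon=True)
        self.thread.start()

    def update(self):
        while self.running:
            ret,frame=self.cam.read()
            if not ret or frame is None:
                continue  # skip empty frames instead of crashing in cv2.resize
            self.frame2=cv2.resize(frame,(self.width,self.height))
        self.cam.release()  # release the pipeline from inside the thread that owns it

    def stop(self):
        self.running=False
        self.thread.join()

My hope is that calling stop() on both streams when I press Q, before cv2.destroyAllWindows(), would let nvargus shut down cleanly instead of leaving those outstanding client threads.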

The good news:
After thinking for a while and asking myself "Why does the IMX219-77 always have trouble compared to the webcam?", my guess was that the webcam runs at a lower resolution (2 MP) than the IMX219-77 (8 MP). So I tried changing the IMX219-77 streaming mode from 3264 x 1848 (21 fps) down to 1280 x 720 (60 fps), in the hope that this would reduce the computational load on the Jetson and let the face recognition program work as expected. And indeed, both cameras now work properly and recognize my face at 10-25 fps, but the scale factor for the cv2.resize command has to be 0.3 or lower for the IMX219-77 to work. If the scale is larger than 0.3, it crashes and the issue described above repeats. However, at such a small scale factor I have to hold my face quite close to each camera sensor for it to be recognized, which is quite impractical for me :-(. On the bright side, it still works.
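
For reference, the pipeline string after switching modes looks roughly like this (the 1280 x 720 @ 60 fps mode is one of the sensor modes listed in the GST_ARGUS output above; the rest of the string is unchanged from my code):

camSet='nvarguscamerasrc ! video/x-raw(memory:NVMM), width=1280, height=720, format=NV12, framerate=60/1 ! nvvidconv flip-method='+str(flip)+' ! video/x-raw, width='+str(dispW)+', height='+str(dispH)+', format=BGRx ! videoconvert ! video/x-raw, format=BGR ! appsink'
cam=cv2.VideoCapture(camSet)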

Therefore, my question is: is the high-resolution video streamed from the IMX219-77 the actual cause of this problem, or might there be something else?

Thank you.

Hi,

Thanks for the testing.

It’s possible.
Since the Nano 2GB has extremely limited resources, it may not be able to run deep-learning inference on high-resolution input.

Does the workaround meet your requirement?

Thanks.

Hi,

Oh, thank you for the confirmation. For now, I think I will continue my project with a webcam instead of the IMX219-77, because the webcam seems to be the easier, faster and more effective choice for me right now. Still, I wish I could use the IMX219-77.

In the future, if there are any optimizations or I manage to find a way to fix this problem, I would prefer to use the IMX219-77, because it gives more access to camera settings and saves a USB port.

Thank you for the response and support, much appreciated.

Hi faiz26,
I have also encountered the exact same problem following the same tutorial. I have the Nano 2GB and a Raspberry Pi Camera V2 connected to the CSI interface.
What I learned from Paul's comments was: "Make sure when you set the camera up for a web cam to use ('/dev/video1') not just (1). You are probably on jetpack 4.4, and things changed. Beyond that, face recognizer runs slow on jetpack 4.4, at least last time I checked. I would suggest going back and downloading jetpack 4.3 from the archive and then using the above address of the WEB cam and you should be running fast"
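
If I read that right, the webcam capture line in the lesson's code would then become something like this (cam2 is just my placeholder name, and I have not tested this on JetPack 4.4):

cam2=cv2.VideoCapture('/dev/video1')  # open the USB webcam by device path instead of the bare index 1
if not cam2.isOpened():
    print('could not open /dev/video1')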

Maybe it would be an idea to try to downgrade the JetPack version? If anyone has done so and tried face recognition with a setup similar to mine and faiz26's, I would be grateful for a comment on the performance. In any case, I might do it in a few weeks and update this thread.

Kind regards