CSI camera not found after running a face recognition script on live video

I am using a Jetson Nano 4GB with a Waveshare IMX219 binocular camera. All the OpenCV and face recognition scripts (on still images) work properly, but face recognition on live video never works and stops midway, so I have to force-close it. After closing it, the computer no longer detects any camera on the board. This happens only with face recognition on live video; everything else works perfectly before running the following file:

import face_recognition
import cv2
import os
import pickle
import time
print(cv2.__version__)

fpsReport = 0
timestamp = time.time()
scaleFactor = 0.10
Encodings = []
Names = []
font = cv2.FONT_HERSHEY_SIMPLEX
with open('train.pkl','rb') as f:
    Encodings = pickle.load(f)
    Names = pickle.load(f)

dispW = 1200
dispH = 1000
flip =2
camSet='nvarguscamerasrc !  video/x-raw(memory:NVMM), width=1280, height=720, format=NV12, framerate=21/1 ! nvvidconv flip-method='+str(flip)+' ! video/x-raw, width='+str(dispW)+', height='+str(dispH)+', format=BGRx ! videoconvert ! video/x-raw, format=BGR ! appsink'
cam=cv2.VideoCapture(camSet)

while True :
    _,frame = cam.read()
    frameSmall = cv2.resize(frame,(0,0),fx=scaleFactor,fy=scaleFactor)
    frameRGB = cv2.cvtColor(frame,cv2.COLOR_BGR2RGB)
    testPositions = face_recognition.face_locations(frameRGB,model="cnn")
    allEncodings = face_recognition.face_encodings(frameRGB,testPositions)

    for (y1,x1,y2,x2),face_encoding in zip(testPositions,allEncodings):
        name = "Unknown Person"
        matches = face_recognition.compare_faces(Encodings,face_encoding)

        if True in matches :
            first_person_index = matches.index(True)
            Names = [first_person_index]
        x1 = int(x1/scaleFactor)
        x2 = int(x2/scaleFactor)
        y1 = int(y1/scaleFactor)
        y2 = int(y2/scaleFactor)
        cv2.rectangle(frameRGB,(x1,y1),(x2,y2),(0,0,0),3)
        cv2.putText(frameRGB,name,(x1,y1-7),font,.75,(0,0,0),2)
    ct = time.time()-timestamp
    fps = 1/ct
    fpsReport = .9*fpsReport + .1*fps
    cv2.rectangle(frameRGB,(0,0),(100,40),(0,0,0),-1)
    cv2.putText(frameRGB,str(round(fpsReport,1)) + "fps", (0,25),font,.75,(255,255,255))
    cv2.imshow("PiCam",frameRGB)
    if cv2.waitKey(1) == ord("q"):
        break

cam.release()
cv2.destroyAllWindows()

The following GStreamer pipeline also fails to detect the camera:


gst-launch-1.0 nvarguscamerasrc sensor_id=0 ! \
   'video/x-raw(memory:NVMM),width=1920, height=1080, framerate=30/1' ! \
   nvvidconv flip-method=0 ! 'video/x-raw,width=960, height=540' ! \
   nvvidconv ! nvegltransform ! nveglglessink -e

Kindly help me with this; the sooner I know what the problem is, the sooner I can solve it.
Thank you
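
For reference, here is a cleaned-up sketch of the recognition loop above. It is untested and assumes train.pkl was written with the encodings dumped first and the names second, and it uses the same nvarguscamerasrc pipeline as the script. The main script-side fixes are running detection on the downscaled frame (instead of the full frame whose coordinates then get divided by scaleFactor), looking the name up from Names rather than overwriting the list, updating the FPS timestamp every iteration, and drawing on the BGR frame that is actually displayed:

import face_recognition
import cv2
import pickle
import time

scaleFactor = 0.25   # fraction to downscale by before detection (tune for speed vs. detection range)
font = cv2.FONT_HERSHEY_SIMPLEX

with open('train.pkl', 'rb') as f:
    Encodings = pickle.load(f)   # assumes the encodings were dumped first...
    Names = pickle.load(f)       # ...and the names second

camSet = ('nvarguscamerasrc ! video/x-raw(memory:NVMM), width=1280, height=720, '
          'format=NV12, framerate=21/1 ! nvvidconv flip-method=2 ! '
          'video/x-raw, format=BGRx ! videoconvert ! video/x-raw, format=BGR ! appsink')
cam = cv2.VideoCapture(camSet)

fpsReport = 0
timestamp = time.time()

while True:
    ret, frame = cam.read()
    if not ret:
        break

    # detect and encode on a small RGB copy; the default "hog" model is much lighter than "cnn"
    frameSmall = cv2.resize(frame, (0, 0), fx=scaleFactor, fy=scaleFactor)
    frameRGB = cv2.cvtColor(frameSmall, cv2.COLOR_BGR2RGB)
    positions = face_recognition.face_locations(frameRGB)
    encodings = face_recognition.face_encodings(frameRGB, positions)

    for (top, right, bottom, left), face_encoding in zip(positions, encodings):
        name = "Unknown Person"
        matches = face_recognition.compare_faces(Encodings, face_encoding)
        if True in matches:
            name = Names[matches.index(True)]   # look the name up; do not overwrite Names

        # scale the box back up to full-frame coordinates and draw on the BGR frame
        top, right, bottom, left = (int(v / scaleFactor) for v in (top, right, bottom, left))
        cv2.rectangle(frame, (left, top), (right, bottom), (0, 0, 255), 2)
        cv2.putText(frame, name, (left, top - 7), font, .75, (0, 0, 255), 2)

    # update the timestamp each pass so the FPS estimate stays meaningful
    now = time.time()
    fps = 1 / (now - timestamp)
    timestamp = now
    fpsReport = .9 * fpsReport + .1 * fps
    cv2.putText(frame, str(round(fpsReport, 1)) + " fps", (0, 25), font, .75, (255, 255, 255), 2)

    cv2.imshow("PiCam", frame)
    if cv2.waitKey(1) == ord("q"):
        break

cam.release()
cv2.destroyAllWindows()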

Hi,
Please check if you can run this:

gst-launch-1.0 nvarguscamerasrc ! nvoverlaysink

If it fails to run, we suggest contacting the camera vendor to make sure the sensor driver is ready.

You may also try the Raspberry Pi Camera v2; it is supported on Jetson Nano by default.
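
The same quick check can also be done from Python with OpenCV. This is a minimal sketch; it relies on the GStreamer-enabled OpenCV build that the script above already uses:

import cv2

# minimal nvarguscamerasrc -> appsink pipeline, just to confirm the sensor can be opened
pipeline = ('nvarguscamerasrc ! video/x-raw(memory:NVMM), width=1280, height=720, '
            'format=NV12, framerate=21/1 ! nvvidconv ! video/x-raw, format=BGRx ! '
            'videoconvert ! video/x-raw, format=BGR ! appsink')

cam = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
print('camera opened:', cam.isOpened())
if cam.isOpened():
    ok, frame = cam.read()
    print('got a frame:', ok, frame.shape if ok else None)
cam.release()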

Hi,
It does not work either. It works before running the face recognition script, but not after; it shows the following error:

Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
Error generated. /dvs/git/dirty/git-master_linux/multimedia/nvgstreamer/gst-nvarguscamera/gstnvarguscamerasrc.cpp, execute:736 Failed to create CaptureSession
Got EOS from element "pipeline0".
Execution ended after 0:00:00.005033800
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...

So, after I run the face recognition file, the camera is not detected (the module has two cameras, and neither of them works). As for the RPi camera, I think I will try a webcam instead, as that seems more reliable. The Waveshare site does not provide any drivers as such. Also, after I reboot or power the Jetson Nano off and on, the camera starts working again as usual.
Thank you

Hi,
I tried changing the video width and height to 300 each, and the display width and height to 100 each. Now the camera runs and shows a window at about 0.1 fps. It lags a little, and when I point the camera at my face, the program crashes with the following error:

4.1.1
GST_ARGUS: Creating output stream
CONSUMER: Waiting until producer is connected...
GST_ARGUS: Available Sensor modes :
GST_ARGUS: 3264 x 2464 FR = 21.000000 fps Duration = 47619048 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 3264 x 1848 FR = 28.000001 fps Duration = 35714284 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 1920 x 1080 FR = 29.999999 fps Duration = 33333334 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 1640 x 1232 FR = 29.999999 fps Duration = 33333334 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 1280 x 720 FR = 59.999999 fps Duration = 16666667 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 1280 x 720 FR = 120.000005 fps Duration = 8333333 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: Running with following settings:
   Camera index = 0 
   Camera mode  = 5 
   Output Stream W = 1280 H = 720 
   seconds to Run    = 0 
   Frame Rate = 120.000005 
GST_ARGUS: Setup Complete, Starting captures for 0 seconds
GST_ARGUS: Starting repeat capture requests.
CONSUMER: Producer has connected; continuing.
[ WARN:0] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_gstreamer.cpp (933) open OpenCV | GStreamer warning: Cannot query video position: status=0, value=-1, duration=-1
Traceback (most recent call last):
  File "faceRecognise6_Instrument.py", line 33, in <module>
    matches = face_recognition.compare_faces(Encodings,face_encoding)
  File "/usr/local/lib/python3.6/dist-packages/face_recognition/api.py", line 226, in compare_faces
    return list(face_distance(known_face_encodings, face_encoding_to_check) <= tolerance)
  File "/usr/local/lib/python3.6/dist-packages/face_recognition/api.py", line 75, in face_distance
    return np.linalg.norm(face_encodings - face_to_compare, axis=1)
TypeError: ufunc 'subtract' did not contain a loop with signature matching types dtype('<U32') dtype('<U32') dtype('<U32')
GST_ARGUS: Cleaning up
CONSUMER: Done Success
GST_ARGUS: Done Success
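
The TypeError itself looks like a separate, script-side problem: dtype('<U32') means NumPy is trying to subtract fixed-width unicode strings, which suggests Encodings ended up holding the names (strings) rather than the 128-value float encodings. A common cause is the two pickle.load calls reading the objects in the opposite order from the pickle.dump calls in the training script. A quick sanity check against the same train.pkl (a sketch):

import pickle

with open('train.pkl', 'rb') as f:
    first = pickle.load(f)
    second = pickle.load(f)

# Encodings should be 128-element float arrays and Names should be strings;
# if it is the other way round, swap the order of the load (or dump) calls.
print(type(first[0]), getattr(first[0], 'shape', None))
print(type(second[0]), getattr(second[0], 'shape', None))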

Hi,
It looks like the sensor driver does not handle termination properly, so on the next launch the camera cannot be initialized. We would suggest you contact the vendor for support.
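
Until the driver-side termination issue is addressed, it may also help if the application itself always tears the capture down cleanly: in the traceback above, the script dies on the unhandled TypeError before cam.release() is ever reached, which matches the symptom that the camera only comes back after a reboot. A minimal guard, in sketch form:

import cv2

camSet = ('nvarguscamerasrc ! video/x-raw(memory:NVMM), width=1280, height=720, '
          'format=NV12, framerate=21/1 ! nvvidconv ! video/x-raw, format=BGRx ! '
          'videoconvert ! video/x-raw, format=BGR ! appsink')

cam = cv2.VideoCapture(camSet)
try:
    while True:
        ret, frame = cam.read()
        if not ret:
            break
        # ... face detection / recognition / drawing goes here ...
        cv2.imshow("PiCam", frame)
        if cv2.waitKey(1) == ord("q"):
            break
except KeyboardInterrupt:
    pass                      # Ctrl-C still falls through to the cleanup below
finally:
    cam.release()             # always close the camera pipeline, even after an exception
    cv2.destroyAllWindows()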
