On a fresh Jetson Nano dev kit with a newly flashed SD card image, I tried to run face_recognition with OpenCV in Python.
I loaded a test mp4 file (containing faces) through GStreamer as
cv2.VideoCapture('filesrc location=test.mp4 ! qtdemux ! queue ! h264parse ! omxh264dec ! nvoverlaysink', cv2.CAP_GSTREAMER),
but faces are not recognized as accurately as they were in Google Colab on a GPU.
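One thing worth checking first: a pipeline that ends in nvoverlaysink renders straight to the display and does not hand frames back to VideoCapture, so to feed OpenCV the pipeline normally has to terminate in appsink. A minimal sketch, assuming an OpenCV build with GStreamer support and a JetPack release that still ships omxh264dec (newer releases replace it with nvv4l2decoder):

```python
# Sketch only: a pipeline string that delivers decoded BGR frames to OpenCV.
# nvvidconv converts NVMM buffers to system memory (BGRx), then videoconvert
# produces the BGR layout that cv2 expects; appsink is the handoff point.
pipeline = (
    "filesrc location=test.mp4 ! qtdemux ! queue ! h264parse ! "
    "omxh264dec ! nvvidconv ! video/x-raw,format=BGRx ! "
    "videoconvert ! video/x-raw,format=BGR ! appsink drop=true"
)

# On the Nano (uncomment once OpenCV is built against GStreamer):
# import cv2
# cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
# ok, frame = cap.read()
```

Note this only changes how frames reach Python; recognition accuracy differences versus Colab are more likely down to frame resolution or the face_recognition model/settings than to the transport.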
- Does using GStreamer really speed up the way frames are read? How can I verify/measure that?
- Which performs better on the Jetson Nano: reading a stored file (CCTV footage) or an RTSP stream?
- Can I use the DeepStream library examples to receive video frames, detect/recognize faces, and label the encodings?
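On the first question: one direct way to verify is to time frame reads with each backend and compare. A rough sketch (the helper name `measure_read_fps` is mine, not an OpenCV API; it works on anything with a `cv2.VideoCapture`-style `read()` method):

```python
import time

def measure_read_fps(cap, n_frames=200):
    """Pull up to n_frames from a VideoCapture-like object and return the
    frames-per-second actually achieved by the read loop."""
    start = time.perf_counter()
    read = 0
    while read < n_frames:
        ok, _ = cap.read()
        if not ok:
            break  # end of stream or decode failure
        read += 1
    elapsed = time.perf_counter() - start
    return read / elapsed if elapsed > 0 else 0.0
```

Run it once on a capture opened with `cv2.CAP_GSTREAMER` and once with the default/FFmpeg backend on the same file and compare the numbers. You can also watch `tegrastats` while the loop runs to confirm the hardware decoder is actually being used.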
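On the file-vs-RTSP question: the decode path is the same in both cases, so raw throughput should be similar, but an RTSP source adds network latency, jitter, and depacketization overhead, and a local file lets the pipeline read as fast as it can decode rather than at the camera's frame rate. If you want to benchmark both sides, here is a hypothetical RTSP pipeline string for comparison (the URL is a placeholder and the element choices are an untested assumption, mirroring the file pipeline above):

```python
# Hypothetical RTSP capture pipeline for comparison testing; the camera URL
# is a placeholder. latency= sets the jitter buffer in milliseconds.
rtsp_pipeline = (
    "rtspsrc location=rtsp://<camera-ip>/stream latency=200 ! "
    "rtph264depay ! h264parse ! omxh264dec ! nvvidconv ! "
    "video/x-raw,format=BGRx ! videoconvert ! "
    "video/x-raw,format=BGR ! appsink drop=true"
)
```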