Help needed: RTSP stream from Raspberry Pi to Jetson Nano

Hi,

How do I adapt this script to receive a live stream from a Raspberry Pi-mounted camera on the Jetson Nano?

gst-launch-1.0 -v rtspsrc location=rtsp://login:password@cam_ip:cam_port latency=500 ! application/x-rtp, media=video, encoding-name=H264 ! rtph264depay ! h264parse ! omxh264dec ! nvvidconv ! xvimagesink

Our script:

import os
import pickle
from datetime import datetime

import cv2
from imutils import paths

def save_known_faces():
    with open("known_faces.dat", "wb") as face_data_file:
        face_data = [known_face_encodings, known_face_metadata]
        pickle.dump(face_data, face_data_file)
    print("Known faces backed up to disk.")

def register_new_face_folder(face_encoding, face_image):
    """
    Add a new person to our list of known faces
    """
    # Add the face encoding to the list of known faces
    known_face_encodings_folder.append(face_encoding)

    # Add a matching dictionary entry to our metadata list.
    # We can use this to keep track of how many times a person has visited, when we last saw them, etc.
    known_face_metadata_folder.append({
        "first_seen": datetime.now(),
        "first_seen_this_interaction": datetime.now(),
        "last_seen": datetime.now(),
        "seen_count": 1,
        "seen_frames": 1,
        "face_image": face_image,
    })

def save_known_faces_from_folder():
    imagePaths = list(paths.list_images('/Users/yudiz/Downloads/MyExtraStuff/FaceRecogInbuildLibraryCompare/FaceRecognitionMichael/UnknownFaces'))

    for (i, imagePath) in enumerate(imagePaths):
        # The parent folder name is treated as the person's label
        name = imagePath.split(os.path.sep)[-2]
        print("name", name)
        print("i", i)

        frame1 = cv2.imread(imagePath)
        # small_frame = cv2.resize(frame1, (0, 0), fx=0.25, fy=0.25)
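The loop above derives each person's label from the image's parent folder, so it assumes a one-folder-per-person layout. A small self-contained illustration of that step (the folder and file names here are hypothetical):

```python
import os


def person_name_from_path(image_path):
    """Return the parent-folder name, which the script treats as the
    person's label (layout: <root>/<person>/<image>.jpg)."""
    return image_path.split(os.path.sep)[-2]


# Hypothetical layout: UnknownFaces/michael/img001.jpg
example = os.path.join("UnknownFaces", "michael", "img001.jpg")
print(person_name_from_path(example))  # -> michael
```

If the images sit directly in one flat folder instead, this indexing would return the root folder name for every image, so the layout assumption matters.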

We’re really stuck and need help here.

Thanks

Hi,
Please refer to the python samples:

I got the RTSP video stream on the Jetson Nano, but it only ran for about 50 frames before the capture started returning false.
nvv4l2decoder doesn't work for decoding RTSP in GStreamer + OpenCV.

Hi,

It worked, but when I run it with our script the lag gets much worse. How do we remedy this?

I could post the script here if it helps.

Thanks

Hi,
OpenCV functions are CPU-heavy, so performance may be capped by CPU capability. Please run sudo tegrastats to check the system status.
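While the pipeline stays CPU-bound, a common mitigation is to downscale frames and run the expensive face-recognition step only on every Nth frame. A hedged sketch (process_every_nth is an illustrative helper, and the frame source is simulated here; on the Nano it would be the frames read from the capture):

```python
def process_every_nth(frames, n=3, work=lambda f: f):
    """Run `work` (the expensive face-recognition step) on every n-th
    frame only; the frames in between are skipped entirely."""
    results = []
    for i, frame in enumerate(frames):
        if i % n == 0:
            results.append(work(frame))
    return results


# Simulated 10-frame stream; `work` stands in for detection/encoding.
processed = process_every_nth(range(10), n=3, work=lambda f: f * 2)
print(processed)  # -> [0, 6, 12, 18]
```

Combined with the commented-out cv2.resize(..., fx=0.25, fy=0.25) in your script, this cuts the per-frame CPU cost roughly by the skip factor times the resolution factor, at the price of slightly delayed recognition.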

Yes it does. How do I mitigate this or make it work with GPU resources

Hi,
In C++ code, you can rebuild OpenCV with CUDA enabled and leverage cv::cuda::GpuMat. Please refer to the sample:

But it cannot be applied to Python. You may check the sample and consider using C++.