Hello,
I have a real-time video stream on a Raspberry Pi and I want to receive it on a Jetson Nano. What's a convenient script I can use to do this?
Note: my Jetson Nano runs OpenCV without DeepStream.
Assuming the stream from your RPi is H264 encoded, you may try something like:
```
gst-launch-1.0 -v rtspsrc location=rtsp://login:password@cam_ip:cam_port latency=500 ! application/x-rtp, media=video, encoding-name=H264 ! rtph264depay ! h264parse ! omxh264dec ! nvvidconv ! xvimagesink

# or more general:
gst-launch-1.0 -v uridecodebin uri=rtsp://login:password@cam_ip:cam_port ! nvoverlaysink
```
How can I turn this into a script using OpenCV?
```
video_capture = cv2.VideoCapture('gst-launch-1.0 -v rtspsrc location=rtsp://10.42.0.167:8554/stream latency=500 ! application/x-rtp, media=video, encoding-name=H264 ! rtph264depay ! h264parse ! omxh264dec ! nvvidconv ! xvimagesink', cv2.CAP_GSTREAMER)

# Track how long since we last saved a copy of our known faces to disk as a backup.
number_of_faces_since_save = 0

while True:
    ctime = datetime.now().strftime('%Y-%m-%d %H:%M:%S')
```
I fail to really understand your case, but your capture pipeline does not look correct. It should end with an appsink receiving BGR frames. Don't use gst-launch; it is a shell command, not part of the pipeline string.
```
video_capture = cv2.VideoCapture('rtspsrc location=rtsp://10.42.0.167:8554/stream latency=500 ! application/x-rtp, media=video, encoding-name=H264 ! rtph264depay ! h264parse ! omxh264dec ! nvvidconv ! video/x-raw, format=BGRx ! videoconvert ! video/x-raw, format=BGR ! appsink', cv2.CAP_GSTREAMER)
```
Can I send you the script to take a look at?
Please post it here, so that I would advise only once.
I'm sorry, it doesn't format properly. Can I upload it?
Please enclose your code with 3 ` chars before and same 3 after, so that it makes a code block easy to read and copy.
Also please try my previous suggestion about your capture pipeline.
It keeps giving the same error, but I don't think it comes from the stream pipeline:
```
Traceback (most recent call last):
  File "facerecognitionrasp1.py", line 324, in <module>
    main_loop()
  File "facerecognitionrasp1.py", line 184, in main_loop
    small_frame = cv2.resize(frame, (0, 0), fx=0.25, fy=0.25)
cv2.error: OpenCV(4.1.1) /home/nvidia/host/build_opencv/nv_opencv/modules/imgproc/src/resize.cpp:3720: error: (-215:Assertion failed) !ssize.empty() in function 'resize'
```
Seems it fails to resize to null size (0,0). Try resizing to a real size or try None instead of (0,0).
The error still persists.
You may check the input frame size.
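As a sketch of that check: skip loop iterations where the capture delivered no frame before calling `cv2.resize` (`read_valid_frame` is just a helper name I made up; `video_capture` is the capture object from earlier):

```
def read_valid_frame(cap):
    """Read one frame from a cv2.VideoCapture; return None if nothing arrived."""
    ret, frame = cap.read()
    if not ret or frame is None or frame.size == 0:
        return None
    return frame

# In the main loop, guard the resize like this:
#
#   frame = read_valid_frame(video_capture)
#   if frame is None:
#       continue  # stream hiccup or pipeline failure; try again
#   print(frame.shape)  # should be (height, width, 3) for BGR frames
#   small_frame = cv2.resize(frame, (0, 0), fx=0.25, fy=0.25)
```

The `!ssize.empty()` assertion means the *source* frame was empty, so this kind of guard should tell you whether the capture is actually delivering frames.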
The input frame comes from the stream on the wireless Raspberry Pi. I had to comment out the code that used the camera attached to the Jetson Nano.
I hope I did the right thing here