Hello,
I have a problem in my Python 3 application using OpenCV and GStreamer.
It seems like I only receive a 1-channel (grayscale) image stream.
Hardware: NVIDIA Jetson Nano Developer Kit…
Basically when declaring a variable like this:
vs = cv2.VideoCapture("rtsp://username:password@XXX.XXX.X.XX/PORT", cv2.CAP_GSTREAMER)
And then passing it through a while-loop like this:
while True:
    ret, frame = vs.read()
    if not ret:  # stop if no frame was received
        break
    frame = cv2.resize(frame, (1280, 720))
    cv2.imshow("Frame", frame)
    key = cv2.waitKey(1) & 0xFF
    if key == ord("q"):
        break
cv2.destroyAllWindows()
vs.release()  # cv2.VideoCapture has release(), not stop()
The received video stream in “Frame” is in grayscale.
When calling GStreamer from the command line, the stream is in RGB (or BGR).
In case questions come up:
I need the RTSP stream in colour because I later pass the frames through a face detector, which is not trained to work with 1-channel images.
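A quick way to confirm whether the capture really delivers grayscale is to check the frame's shape. This is just a sketch of mine (the helper name and the dummy arrays standing in for vs.read() output are made up), showing the check the face detector effectively performs:

```python
import numpy as np

def is_color(frame):
    """Return True for a 3-channel (BGR) frame, False for grayscale."""
    return frame.ndim == 3 and frame.shape[2] == 3

# Dummy frames standing in for vs.read() output:
gray = np.zeros((720, 1280), dtype=np.uint8)    # what I seem to get
bgr = np.zeros((720, 1280, 3), dtype=np.uint8)  # what the detector needs

print(is_color(gray), is_color(bgr))  # → False True
```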
Your GStreamer pipeline for video capture is incomplete. You need to depay, decode and convert the stream before OpenCV gets suitable frames. You may try:
vs = cv2.VideoCapture("rtspsrc location=rtsp://username:password@XXX.XXX.X.XX/PORT ! decodebin ! videoconvert ! video/x-raw,format=BGR ! appsink", cv2.CAP_GSTREAMER)
# maybe more efficient if the encoding is h264:
vs = cv2.VideoCapture("rtspsrc location=rtsp://username:password@XXX.XXX.X.XX/PORT latency=200 ! queue ! rtph264depay ! queue ! h264parse ! omxh264dec ! nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! video/x-raw,format=BGR ! appsink", cv2.CAP_GSTREAMER)
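For what it's worth, the long pipeline string is easier to keep readable if built by a small helper. This is only a string-building sketch (the function name and its defaults are my own); it produces the same h264 pipeline as above:

```python
def make_pipeline(url, latency=200):
    """Build the CAP_GSTREAMER pipeline string for an h264 RTSP source."""
    return (
        "rtspsrc location={url} latency={latency} ! queue ! rtph264depay ! "
        "queue ! h264parse ! omxh264dec ! nvvidconv ! "
        "video/x-raw,format=BGRx ! videoconvert ! "
        "video/x-raw,format=BGR ! appsink"
    ).format(url=url, latency=latency)

# usage (URL is a placeholder):
# vs = cv2.VideoCapture(make_pipeline("rtsp://user:pass@host/port"), cv2.CAP_GSTREAMER)
print(make_pipeline("rtsp://example/stream"))
```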
I still get this error when passing a frame from vs through the face detector:
cv2.error: /home/nvidia/build_opencv/opencv/modules/dnn/src/layers/convolution_layer.cpp:199: error: (-215) ngroups > 0 && inpCn % ngroups == 0 && outCn % ngroups == 0 in function getMemoryShapes
Nothing has changed… But thanks for the recommendation!
The face detector is a different issue. Can you view the video with imshow as in the code you’ve posted above?
The video stream opens, but is still in grayscale…
I tried opening the stream without the cv2.CAP_GSTREAMER argument at the end.
Now the stream is in RGB, but the CPU usage is much higher (roughly 70% instead of 25%).
So I am guessing the problem is GStreamer running inside OpenCV and not OpenCV itself…
I also tried setting up JetPack from scratch…
I even checked the CPU usage of the stream with GStreamer running from the command line: still 25% usage, and a coloured stream!
Any suggestions? I am open to trying anything…
I managed to solve the problem on my own; thanks for the recommendation though!
Basically, what I did was use this GStreamer pipeline (maybe not the most efficient one, but the only one that worked for me):
vs = cv2.VideoCapture("rtspsrc location=rtsp://username:password@XXX.XXX.XXX.XXX/PORT/ latency=20 ! queue ! rtph264depay ! queue ! h264parse ! omxh264dec ! nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! video/x-raw,format=BGR ! appsink", cv2.CAP_GSTREAMER)
CPU usage is now at 25% and, most importantly, the stream is in BGR.
I hope this helps anyone with the same problem, since I struggled for several hours to get there…
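If anyone still ends up with single-channel frames despite the pipeline, one defensive workaround before the face detector is to replicate the channel. This is a sketch of mine, not part of the solution above (the helper name is made up):

```python
import numpy as np

def ensure_bgr(frame):
    """Replicate a single-channel frame to 3 channels so a detector
    trained on colour input does not reject it."""
    if frame.ndim == 2:
        return np.stack([frame] * 3, axis=-1)
    return frame

print(ensure_bgr(np.zeros((4, 4), dtype=np.uint8)).shape)  # → (4, 4, 3)
```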