I want to branch the DeepStream pipeline so that I can use OpenCV on one branch.
We are using the DeepStream Python bindings.
DeepStream should run inference on the webcam input, while the signal split off through an appsink is handed to OpenCV:
webcamera --> DeepStream --> ObjectDetection
     |------> appsink ----> opencv --> cv2.imshow
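A branching like the diagram above is usually done with a GStreamer `tee` element. The sketch below writes it as a gst-launch-style pipeline string; the element properties, device path, and config file name are assumptions for illustration, not a tested DeepStream configuration:

```python
# Hedged sketch of the branched pipeline as one gst-launch-style string.
# "pgie_config.txt", the device path, and the mux dimensions are placeholders.
branched = (
    "v4l2src device=/dev/video0 ! videoconvert ! tee name=t "
    # branch 1: into DeepStream for object detection
    "t. ! queue ! nvvideoconvert ! "
    "video/x-raw(memory:NVMM), format=NV12 ! mux.sink_0 "
    "nvstreammux name=mux batch-size=1 width=1280 height=720 ! "
    "nvinfer config-file-path=pgie_config.txt ! nvdsosd ! nveglglessink "
    # branch 2: raw BGR frames handed to OpenCV via appsink
    "t. ! queue ! videoconvert ! video/x-raw, format=BGR ! "
    "appsink drop=true max-buffers=1"
)
```

Each `t. ! queue ! ...` segment is one branch of the tee; the `queue` elements keep the branches from blocking each other.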
What can I do?
I saw this topic and understood that I could change the last sink to an appsink:
Hello. I am a beginner with ROS. I tried to stream video using OpenCV VideoCapture and GStreamer on Ubuntu 18.04 LTS (NVIDIA Jetson TX2) with ROS Melodic. I wanted one node to publish images obtained from cv2.VideoCapture with a GStreamer pipeline, and a subscribing node to show them with cv2.imshow().
However, when I roslaunch the package, no window is shown; it just keeps running with this warning message:
[ WARN:0] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/c…
But I don’t know how to implement it in the Python API.
The Python API builds pipelines with Gst.ElementFactory.make() rather than from a pipeline string, so there is nothing to pass to OpenCV’s VideoCapture.
You can pass a GStreamer pipeline string to OpenCV like this:

```python
import cv2

uri = "rtsp://1.2.3.4/stream"
gst_str = (
    "rtspsrc location={} ! application/x-rtp, media=video ! "
    "rtph264depay ! h264parse ! nvv4l2decoder ! nvvidconv ! "
    "video/x-raw, format=BGRx ! videoconvert ! "
    "video/x-raw, format=BGR ! appsink"
).format(uri)
video = cv2.VideoCapture(gst_str, cv2.CAP_GSTREAMER)
```

Note that OpenCV must be built with GStreamer support for cv2.CAP_GSTREAMER to work.
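For a webcam instead of RTSP, the same idea applies with a v4l2src pipeline ending in an appsink. A minimal sketch, assuming the camera is at /dev/video0 (adjust for your setup); the helper name and loop below are illustrative, not part of the DeepStream API:

```python
def show_webcam(pipeline: str) -> None:
    """Open the GStreamer pipeline with OpenCV and display frames until 'q' is pressed."""
    import cv2  # imported inside the function so the sketch stays self-contained

    cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imshow("appsink branch", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()

# Hypothetical webcam pipeline string (device path is an assumption):
webcam_str = (
    "v4l2src device=/dev/video0 ! videoconvert ! "
    "video/x-raw, format=BGR ! appsink drop=true max-buffers=1"
)
# show_webcam(webcam_str)  # uncomment on a machine with a camera attached
```

The `drop=true max-buffers=1` properties on the appsink keep OpenCV reading the newest frame instead of building up a queue.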