I am using a USB camera with a Jetson Nano. I noticed there are some components with an "nv" prefix on my device when I run the command gst-inspect-1.0. How can I use these kinds of NVIDIA components, like "nvv4l2camerasrc" or "NVMM"? I don't know how to write the GStreamer pipeline string for Python cv2.
Hi, edward871130
I might be able to help you create a GStreamer pipeline, but I would need a little more information about your use case. Do you want to capture raw images, record videos, save snapshots, stream to the web, etc.?
Jafet Chaves,
Embedded SW Engineer at RidgeRun
Contact us: support@ridgerun.com
Developers wiki: https://developer.ridgerun.com/
Website: www.ridgerun.com
Hi jchaves!
I am trying to use Python3 CV2 to “Record MJPG Video”.
Here’s my code:
import cv2

width = 2560
height = 1440
framerate = 30

gs_pipeline = f"nvv4l2camerasrc device=/dev/video0 " \
              f"! video/x-raw(memory:NVMM), width={width}, height={height}, format=MJPG, framerate={framerate}/1 " \
              f"! nvvidconv " \
              f"! video/x-raw, format=(string)BGRx " \
              f"! videoconvert " \
              f"! video/x-raw, format=(string)BGR " \
              f"! appsink"

v_cap = cv2.VideoCapture(gs_pipeline, cv2.CAP_GSTREAMER)
if not v_cap.isOpened():
    print("failed to open video capture")
    exit(-1)

while v_cap.isOpened():
    ret_val, frame = v_cap.read()
    if not ret_val:
        break
    cv2.imshow('', frame)
    input_key = cv2.waitKey(1)
    if input_key != -1:
        print(f"input key = {input_key}")
        if input_key == ord('q'):
            break
Here’s the error I got:
[ WARN:0] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_gstreamer.cpp (711) open OpenCV | GStreamer warning: Error opening bin: could not link nvv4l2camerasrc0 to nvvconv0, neither element can handle caps video/x-raw(memory:NVMM), width=(int)2560, height=(int)1440, format=(string)MJPG, framerate=(fraction)30/1
Hi, puffvayne
My advice in general is to prototype the pipeline you are attempting to implement with gst-launch or gstd first, before moving it into a Python script or application. Since your use case is simple, gst-launch is a good first option.
Now, regarding the issue you encountered, I have a couple of observations:
- First, it seems you are trying to capture from a camera that outputs MJPEG video, so nvv4l2camerasrc is not going to work: it does not support capturing that kind of video. Your next option is to try v4l2src. In general, you can check what formats an element can receive or send using the gst-inspect tool.
- Second, I would avoid using OpenCV for your use case; it just overcomplicates things. You could write a plain GStreamer application using the gst-python bindings instead. Here is an easy-to-follow tutorial that shows how to use them: Python GStreamer Tutorial (there is also a small sketch after the pipeline examples below).
- Now, by "Record MJPG Video", do you mean saving the camera input as a RAW video file (decoded frames), or transcoding it to H.264 and saving it as an MP4 file? Here are pipeline suggestions using gst-launch for each case, for you to try first and then implement in your Python script:
- Record MJPEG video directly into MP4 file
gst-launch-1.0 v4l2src device=/dev/video0 num-buffers=150 ! "image/jpeg, width=2560, height=1440, framerate=30/1" ! jpegparse ! qtmux ! filesink location=test_video.mp4
- Record RAW video (decoded frames)
gst-launch-1.0 v4l2src device=/dev/video0 num-buffers=150 ! "image/jpeg, width=2560, height=1440, framerate=30/1" ! nvv4l2decoder mjpeg=1 ! nvvidconv ! "video/x-raw" ! filesink location=test_video.raw
- Transcode to H.264 and save in MP4 file
gst-launch-1.0 v4l2src device=/dev/video0 num-buffers=150 ! "image/jpeg, width=2560, height=1440, framerate=30/1" ! nvv4l2decoder mjpeg=1 ! nvv4l2h264enc ! h264parse ! mp4mux ! filesink location=test_video.mp4
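As promised above, here is a minimal gst-python sketch (my own illustration, not taken from the tutorial) that runs the first recording pipeline from a script; the other two pipelines can be substituted in the same description string:

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init(None)

# Same "record MJPEG directly into MP4" pipeline as the gst-launch example above.
pipeline = Gst.parse_launch(
    "v4l2src device=/dev/video0 num-buffers=150 "
    "! image/jpeg, width=2560, height=1440, framerate=30/1 "
    "! jpegparse ! qtmux ! filesink location=test_video.mp4"
)
pipeline.set_state(Gst.State.PLAYING)

# Block until EOS (sent after num-buffers buffers) or an error, then shut down.
bus = pipeline.get_bus()
bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                       Gst.MessageType.EOS | Gst.MessageType.ERROR)
pipeline.set_state(Gst.State.NULL)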
Jafet Chaves,
Embedded SW Engineer at RidgeRun
Contact us: support@ridgerun.com
Developers wiki: https://developer.ridgerun.com/
Website: www.ridgerun.com
actually I’d like to make two separate pipeline, one for the cv2.VideoCapture and cv2.VideoWriter()
Hi,
In that case you could use the "appsink" and "appsrc" elements to split the pipeline suggestions above in two. Then again, that would be quite inefficient for your use case at 2K@30fps. Unless you are trying to perform some image processing with OpenCV (in which case it would make total sense), my suggestion is to stick with a plain GStreamer script.
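For reference, here is a rough sketch of that split, assuming your OpenCV build has GStreamer support. The capture pipeline adapts the v4l2src + nvv4l2decoder suggestion above to end in appsink; the writer pipeline is my own adaptation of the H.264 transcode example, not something I have benchmarked:

import cv2

width, height, fps = 2560, 1440, 30

# Capture: decode the camera MJPEG and hand BGR frames to OpenCV via appsink.
capture_pipeline = (
    f"v4l2src device=/dev/video0 "
    f"! image/jpeg, width={width}, height={height}, framerate={fps}/1 "
    f"! nvv4l2decoder mjpeg=1 ! nvvidconv "
    f"! video/x-raw, format=BGRx ! videoconvert "
    f"! video/x-raw, format=BGR ! appsink"
)
# Writer: take BGR frames from appsrc, re-encode to H.264 and mux into MP4.
writer_pipeline = (
    "appsrc ! videoconvert ! nvvidconv ! nvv4l2h264enc "
    "! h264parse ! mp4mux ! filesink location=test_video.mp4"
)

v_cap = cv2.VideoCapture(capture_pipeline, cv2.CAP_GSTREAMER)
v_out = cv2.VideoWriter(writer_pipeline, cv2.CAP_GSTREAMER, 0, fps,
                        (width, height))

while v_cap.isOpened():
    ret_val, frame = v_cap.read()
    if not ret_val:
        break
    # Any OpenCV processing on 'frame' would go here.
    v_out.write(frame)

v_cap.release()
v_out.release()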
Jafet Chaves,
Embedded SW Engineer at RidgeRun
Contact us: support@ridgerun.com
Developers wiki: https://developer.ridgerun.com/
Website: www.ridgerun.com
I have a question here: what's the difference between qtmux and mp4mux?
Hi,
Both produce practically the same results in most cases. In reality, the MP4 file format (mp4mux) is essentially a derivative of the QuickTime file format (qtmux), so the two are closely related; the main practical difference is the set of input formats each muxer accepts, which you can check with gst-inspect. There is extensive documentation for both elements, contained in the isomp4 plugin, in the links below:
https://gstreamer.freedesktop.org/documentation/isomp4/qtmux.html?gi-language=c
https://gstreamer.freedesktop.org/documentation/isomp4/mp4mux.html?gi-language=c
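For example, the H.264 transcode pipeline above should produce an equivalent recording with qtmux in place of mp4mux (the .mov extension here is just the conventional one for QuickTime files):
gst-launch-1.0 v4l2src device=/dev/video0 num-buffers=150 ! "image/jpeg, width=2560, height=1440, framerate=30/1" ! nvv4l2decoder mjpeg=1 ! nvv4l2h264enc ! h264parse ! qtmux ! filesink location=test_video.mov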
Thanks for your explanation. One last question: what does num-buffers=150 do? And in what kind of scenario would I use it to get better performance?
num-buffers is a property inherited by any element that derives from GstBaseSrc. It sets a finite number of buffers for the source to output before sending an EOS (end-of-stream) through the pipeline. It is not really a performance option; it is mainly useful for making a capture stop on its own. For example, in the pipelines above, num-buffers=150 at 30 fps records 150/30 = 5 seconds of video and then ends the stream.
Does that mean instead of sending data to EOS one by one, we collect a chunk of data and push it to EOS at once?
EOS is a GstMessage type (one of many predefined types). Messages are posted on the GstBus of the pipeline. When the count set in num-buffers is reached (if not set, the default is -1, which means run indefinitely), the source element stops producing buffers and sends EOS downstream, so all other elements are notified of the stream termination; once it reaches the sink, the EOS message is posted on the bus so the application is notified as well.
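As a small self-contained sketch (using videotestsrc and fakesink purely for illustration, not elements from the pipelines above), this is how an application waits for that EOS message with the gst-python bindings:

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init(None)

# After 150 buffers, videotestsrc sends EOS and the wait below returns.
pipeline = Gst.parse_launch("videotestsrc num-buffers=150 ! fakesink")
pipeline.set_state(Gst.State.PLAYING)

bus = pipeline.get_bus()
msg = bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                             Gst.MessageType.EOS | Gst.MessageType.ERROR)
if msg and msg.type == Gst.MessageType.EOS:
    print("EOS received after 150 buffers")
pipeline.set_state(Gst.State.NULL)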
Also, since this thread has been flagged as solved, and to avoid violating any forum rules: if you have any other questions, please send a direct message through the forum platform.