Probably the most useful thing to know is that most of the documentation for a plugin is embedded in the plugin itself.
The gst-inspect command lists all available plugins:
gst-inspect-1.0 --version # Displays the gstreamer version
gst-inspect-1.0 # Lists all available plugins and more
You can use it to inspect a plugin:
gst-inspect-1.0 nvarguscamerasrc
It displays the library providing the plugin, then its SRC and SINK capabilities (CAPS), which are the multimedia data types and formats it handles. A plugin can only accept as input one of the types/formats listed in its SINK caps, and these must also match the SRC (output) caps of the previous plugin.
So basically a pipeline goes like this:
SrcPlugin -> CAPS1 -> ProcessPlugin1 -> CAPS2 -> ProcessPlugin2 -> CAPS3 -> SinkPlugin
SrcPlugin has no SINK capabilities (for example a camera). SinkPlugin has no SRC capabilities, so no output (for example a display).
CAPS1 would then belong to the intersection of SrcPlugin's SRC caps and ProcessPlugin1's SINK caps, and so on.
For conciseness, instead of writing the caps between each pair of plugins, you may let gstreamer find caps that match both ends. Running gst-launch with the -v (verbose) option displays the caps actually used for each plugin's input and output.
For your case, I assume you’re using a RPi V2 cam with IMX219 sensor.
If you want to use a native mode of your sensor, you don't need to specify any caps, as long as the next plugin's SINK caps are compatible.
You might however specify caps for resizing to a resolution different from the source, or for dividing the framerate.
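For instance (a sketch for the IMX219; the requested resolution and framerate must be supported by the sensor, and this obviously only runs on a Jetson with the camera attached):

```shell
# Ask nvarguscamerasrc for 1280x720 at 30 fps in NVMM memory, then copy
# into CPU memory with nvvidconv and discard the frames in fakesink.
gst-launch-1.0 nvarguscamerasrc num-buffers=100 ! \
  'video/x-raw(memory:NVMM),width=1280,height=720,framerate=30/1' ! \
  nvvidconv ! 'video/x-raw' ! fakesink
```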
Usually, for video in raw format, the caps specify type video/x-raw. This refers to standard memory and can be used by CPU plugins. For HW accelerators or GPU, contiguous memory is preferred, so on Jetson you have the video/x-raw(memory:NVMM) caps for dedicated HW plugins such as nvarguscamerasrc.
Note that NVMM=NV12 is not valid caps syntax. You would have to use the caps: video/x-raw(memory:NVMM),format=NV12
The nvvidconv plugin copies between both memory types. It can also convert formats and resize, but it expects at least one of its input or output to be in NVMM memory.
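For example, to bring camera frames into CPU memory in a format most CPU plugins accept (BGRx is one common choice; again, this needs a Jetson with a camera):

```shell
# NVMM frames (NV12) from the camera are copied and converted by
# nvvidconv into standard memory as BGRx, then dropped by fakesink.
gst-launch-1.0 nvarguscamerasrc num-buffers=100 ! \
  'video/x-raw(memory:NVMM),format=NV12' ! \
  nvvidconv ! 'video/x-raw,format=BGRx' ! fakesink
```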
I have no knowledge of iPywidget. Is there a way it can be connected to gstreamer? If so, it would probably read frames through appsink on the iPywidget application side, and you would try:
gst-launch-1.0 nvarguscamerasrc ! nvvidconv ! jpegenc ! appsink
# Or specifying some caps
gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM)' ! nvvidconv ! video/x-raw ! jpegenc ! image/jpeg ! appsink
Note that in a shell command, in order to prevent it from interpreting the parenthesis, the caps have to be quoted.
If it works, you would use this pipeline from your application:
pipeline_string="nvarguscamerasrc ! video/x-raw(memory:NVMM) ! nvvidconv ! video/x-raw ! jpegenc ! image/jpeg ! appsink"
gst-inspect-1.0 | grep sink | grep video
would list most of the available video sinks. They use different backends.
nvhdmioverlaysink and the EGL sinks expect a local monitor; the EGL sinks additionally need an X session running for the user.
xvimagesink may be used remotely with ssh -X or ssh -Y; in that case you wouldn't set DISPLAY yourself, since ssh forwards the display for you.
cacasink would work in any terminal (it renders the video as colored ASCII art).
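For a quick test without any display server (assuming cacasink from the gst-plugins-good set is installed):

```shell
# Renders the test pattern as colored ASCII art directly in the terminal.
gst-launch-1.0 videotestsrc num-buffers=100 ! videoconvert ! cacasink
```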