Hey guys, I’ve been really struggling with these questions and finally decided to ask them. As a newbie with only basic Python, I’m having a hard time understanding certain concepts.
Is there anywhere else, other than the NVIDIA Jetson Linux Developer Guide or the Jetson Linux Multimedia API Reference, where I can learn more about building a GStreamer pipeline? I understand the pipeline concept from reading the official GStreamer Foundations documentation, as well as reading about Elements, Bins, the Bus, Pads and capabilities, and all of that makes sense.
My questions are:
Where can I find more information to understand the parameters you pass to nvarguscamerasrc, as well as to the other plugins in the pipeline, especially sinks (video or display sinks)?
For example:
1. If I select sensor-mode=3, which means a width of 1280 and a height of 720 pixels, do I then still need to specify a width and height in the video/x-raw(memory:NVMM) part of the caps? If so, why? What would be the point of using sensor-mode then? I’ve seen it used in some examples online, but then they go on to specify a width, height and frame rate anyway. The same question applies to frame rate: I thought that by picking a sensor mode, you automatically define a frame rate for that mode as well. (There’s a rough sketch of what I mean right after this list.)
2. Are there any other options for the format in video/x-raw(memory:NVMM), other than NV12? Can I please get a reference on what this means, or a link where I can read up more? Or is this more of an “it just is” type of thing? I’m asking because the Input and Output Formats section of the Developer Guide talks about different formats for nvvidconv. What’s not clear is whether, when building the pipeline, we always have to expect NV12 as an input, or whether we can select another input format. This question is an attempt to minimize the transformations and conversions I see in almost every OpenCV example online. If I could immediately get a stream that is JPEG and stream that to an ipywidget, wouldn’t that be more efficient than importing an image as BGR, converting it to JPEG and then displaying it?
3. Where do I find information on video/x-raw? It looks like I have to specify a width and height, but am I stuck using BGR, or can I create a stream that outputs straight to JPEG? (I realize this may already be answered above, but in case I have #2 completely wrong, I’m asking it as a separate question.)
4. Where do I find more information on video sinks? My goal is to display an image/video stream in an ipywidget on a remote machine running JupyterLab (not unlike the DLI courses, except over Ethernet instead of USB). I’m already able to SSH into the Nano and run JupyterLab, and I have even successfully displayed an image in an ipywidget using jetcam. However, the more I learn about GStreamer pipelines, the more questions I have about the examples I see online; they don’t seem to make sense in the context of building the right pipeline. (The sketch after this list is my current best attempt.)
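To make questions 1 and 4 concrete, here is roughly what I’ve pieced together from online examples (Python/OpenCV on the Nano). The sensor-mode, the repeated width/height in the caps, and the 60 fps frame rate are my guesses and may well be part of what I’m getting wrong:

import cv2
import ipywidgets
from IPython.display import display

# My guess at a capture pipeline: sensor-mode picks the mode, but the caps
# after nvarguscamerasrc repeat width/height/framerate anyway (this repetition
# is exactly what question 1 is about). NV12 in NVMM memory, nvvidconv to
# BGRx, then videoconvert to BGR so OpenCV can read the frames.
pipeline = (
    "nvarguscamerasrc sensor-id=0 sensor-mode=3 ! "
    "video/x-raw(memory:NVMM), width=1280, height=720, framerate=60/1, format=NV12 ! "
    "nvvidconv ! video/x-raw, format=BGRx ! "
    "videoconvert ! video/x-raw, format=BGR ! "
    "appsink drop=true max-buffers=1"
)

cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)

# Show a single frame in JupyterLab through an ipywidgets Image widget by
# JPEG-encoding the BGR frame in Python (the step I suspect is redundant).
image_widget = ipywidgets.Image(format="jpeg")
display(image_widget)

ok, frame = cap.read()
if ok:
    image_widget.value = bytes(cv2.imencode(".jpg", frame)[1])

cap.release()

As far as I can tell this goes NVMM/NV12 → BGRx → BGR → JPEG, which is exactly the chain of conversions I’d like to shorten if possible (questions 2 and 3).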
I’ve been through both the Developer Guide and the API reference but can’t seem to find the answers I’m looking for.
I have also run the following commands:
gst-inspect-1.0 nvarguscamerasrc
As well as:
nvgstcapture-1.0 --help
But these still don’t give me a clear picture of how to build what I’m trying to build.
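I assume the SRC and SINK pad templates printed by gst-inspect-1.0 nvvidconv list the formats it will accept and produce, which would partly answer question 2, but I’m not confident I’m reading that output correctly.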
Any help or advice you guys can give would be greatly appreciated; I’ve exhausted the literature I could find, so I need to ask.
Thanks guys.
Cheers!