How to find out what pipeline to create

How do beginners go about finding out what pipeline to construct to get a particular piece of hardware to do something, for example save to disk?

For example, if I have a camera I can find out the supported modes with:
v4l2-ctl -d /dev/video0 --list-formats-ext

So then I use a v4l2src device=/dev/video0

I understand that I have to add a raw caps filter after that, to make sure the rest of the pipeline negotiates raw video…

So at this stage I would have

v4l2src → caps (forcing it to raw)

If I want to write it to an MP4 file, I would need to mux it with an MP4 muxer and save it to file:

splitmuxsink location=/tmp/file%02d.mp4 muxer=mp4mux max-size-time=30000000000

This is still missing something, as:

gst-launch-1.0 -e v4l2src device=/dev/video0 ! video/x-raw ! splitmuxsink location=/tmp/file%02d.mp4 max-size-time=30000000000
WARNING: erroneous pipeline: could not link v4l2src0 to splitmuxsink0, splitmuxsink0 can't handle caps video/x-raw

However, if I run

gst-inspect-1.0 splitmuxsink

I see:

Pad Templates:
  SINK template: 'video'
    Availability: On request

Capabilities: ANY
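A side note on why that error appears despite the ANY caps (my reading, worth verifying with gst-inspect on your system): splitmuxsink's request pad reports ANY because the real constraint comes from the muxer it wraps. The default muxer, mp4mux, only accepts already-encoded streams, which is why raw video from v4l2src cannot be linked directly:

```shell
# splitmuxsink's video pad reports ANY, but the effective constraint is the
# muxer it wraps. Inspecting the default muxer shows what it really accepts:
gst-inspect-1.0 mp4mux
# The 'video_%u' sink pad template lists encoded formats only
# (video/x-h264, video/x-h265, video/mpeg, ...), so raw camera frames
# need an encoder (and usually a parser) before splitmuxsink.
```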

How do I go about finding the information I need to build a working pipeline?
I’ve seen really long pipelines that chain heaps of plugins in between (and also don’t work).
How do I learn this black magic?

Note: I understand that this question is not exactly DeepStream related; however, I am stuck at this level before I can create more complex pipelines with nvinfer.

We have deepstream-app, which supports general use cases. You may try running it with the default config file, then modify the config file to save to a file or do RTSP streaming. Please take a look at the documents:
Quickstart Guide — DeepStream 5.1 Release documentation
DeepStream Reference Application - deepstream-app — DeepStream 5.1 Release documentation

Thanks @DaneLLL. I changed the config in deepstream-app to only write to file, and then dumped the pipeline to a .dot file.

It’s not what I would call a clean example. Is this how everybody gets started with this? It’s a very steep learning curve.

Either way, if I can extract what I need from it I’ll be happy.
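For anyone else trying the same .dot trick: this isn't limited to deepstream-app. Any GStreamer pipeline will dump its graph when the GST_DEBUG_DUMP_DOT_DIR environment variable is set, which is a handy way to see what gst-launch actually built (the exact .dot filename varies; the glob below is an assumption):

```shell
# Dump pipeline graphs for any gst-launch pipeline:
export GST_DEBUG_DUMP_DOT_DIR=/tmp
gst-launch-1.0 videotestsrc num-buffers=100 ! autovideosink
# One .dot file is written per state change; render the PLAYING one:
dot -Tpng /tmp/*PLAYING*.dot -o /tmp/pipeline.png
```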

I assume the tees are not necessary for my example? They would only be in there to handle the other processing (that I switched off in the config).
There is also an NvStreamMux with an NvStreamDemux right after each other… can these two be omitted as well?
Same for the two queues (with a tee in between)?

Multiple use cases are supported in deepstream-app, so certain plugins are included. The config file can easily be modified to run different use cases.

There are posts about constructing pipelines through gst-launch-1.0 commands:
Adding a ghost pad after splitting a pipeline using Tee? - #11 by DaneLLL
Could not link nvarguscamerasrc0 to nvstreammux0 - #10 by DaneLLL
Writing a custom Gstreamer plugin - #3 by DaneLLL
Nvmultistreamtiler/nvstreamdemux with omxh264enc won't work - #5 by DaneLLL
You may refer to the pipelines and construct yours.

Thank you. All these examples are different from what I would like to do:
camera → disk

So these examples set out to do something else.
My question is: how did you get from the use case to that particular configuration?

If I can’t find an exact example of what I need to do, how would I go about constructing a particular pipeline so it does what it needs to do?
Is there a logical process for knowing which plugins to use for which use case?

For my original example I have a video source, then a filter to make sure the data is in raw format…
Then, if I need to write it in MP4 format, is there a standard set of plugins that are always used to convert RAW → MP4 file?
Or is it a guessing game until it works?

I’m not asking for a fish; I would like to learn how to fish.

So I have got it working, but not because I understand what part does what, or even whether everything is necessary:

    gst-launch-1.0 v4l2src device=/dev/video0 ! \
    'video/x-raw, width=1280, height=720, framerate=10/1' ! \
    clockoverlay halignment=right valignment=bottom text="Device Time:" shaded-background=true font-desc="Sans, 12" ! \
    nvvidconv ! \
    'video/x-raw(memory:NVMM), format=(string)I420' ! \
    nvv4l2h265enc bitrate=2000000 ! \
    'video/x-h265, stream-format=(string)byte-stream' ! \
    h265parse ! \
    splitmuxsink location=/home/yoda/Videos/file%02d.mp4 muxer=qtmux max-size-time=30000000000

For example, there is a caps filter between nvv4l2h265enc and h265parse… is this necessary?
If I look at the output of gst-inspect-1.0 for nvv4l2h265enc, the src pad shows:

SRC template: 'src'
    Availability: Always
          stream-format: byte-stream
              alignment: au

Update: it still works if I remove that caps filter.
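That matches the gst-inspect output above: the encoder's src template already fixes stream-format to byte-stream, so the explicit capsfilter adds nothing. One way to confirm what caps were actually negotiated (a software-only sketch so it runs anywhere) is the -v flag, which prints the caps set on every pad:

```shell
# -v prints 'caps = ...' lines for each pad as the pipeline negotiates,
# showing whether an explicit capsfilter changed anything or was already
# implied by the encoder's fixed src template.
gst-launch-1.0 -v videotestsrc num-buffers=30 ! x264enc ! h264parse ! fakesink 2>&1 | grep 'caps ='
```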

Thank you @DaneLLL for thinking along. Sorry if I seem a bit all over the shop; I'm just starting with this and it is a pretty steep learning curve.