Creating a pipeline for all types of inference

Creating a pipeline for all types of inference: image inference, RTSP, and video file upload.

Is there any DeepStream sample app that accepts all sorts of inputs and saves/streams the output?

Yeah, the default deepstream-app does that. Just edit the config file.

Please check this:
https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_ref_app_deepstream.html
source group:
uri

URI of the encoded stream. The URI can be a file, an HTTP URI, or an RTSP live source. Valid when type=2 or 3. With MultiURI, the %d format specifier can also be used to specify multiple sources. The application iterates from 0 to num-sources - 1 to generate the actual URIs.
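As a sketch, a source group using MultiURI might look like the following. The file path and source count here are placeholders, not values from this thread:

```ini
# Hypothetical [source0] group for deepstream-app (values are placeholders).
[source0]
enable=1
# type 3 = MultiURI; %d in the uri is expanded from 0 to num-sources - 1
type=3
uri=file:///tmp/sample_%d.mp4
num-sources=2
gpu-id=0
```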

sink group:
type

Type of sink to use.

1: Fakesink

2: EGL-based windowed sink (nveglglessink); will be deprecated

3: Encode + File Save (encoder + muxer + filesink)

4: Encode + RTSP streaming

5: Overlay (Jetson only); will be removed in a future release

6: Message converter + Message broker
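For example, a sink group for RTSP streaming output (type 4 in the list above) could be sketched like this; the codec, bitrate, and port values are illustrative placeholders:

```ini
# Hypothetical [sink0] group for deepstream-app (values are placeholders).
[sink0]
enable=1
# type 4 = Encode + RTSP streaming
type=4
# codec 1 = H.264
codec=1
bitrate=4000000
rtsp-port=8554
udp-port=5400
```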

Yeah, this works. The remaining issue is image (.jpg) inference in the same pipeline:
using a switch for the above-mentioned pipeline based on input type (video file/RTSP vs. jpg).
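One way to sketch that switch: choose the decode branch for each input by its type before it reaches nvstreammux. This is a minimal illustration, not the poster's actual code; the helper function name is made up, and jpegparse, nvv4l2decoder, and uridecodebin are standard GStreamer/DeepStream elements.

```shell
#!/bin/sh
# build_source_branch: print the GStreamer decode branch suited to the input.
# Hypothetical helper for illustration only.
build_source_branch() {
  input="$1"
  case "$input" in
    *.jpg|*.jpeg)
      # Still image: parse the JPEG, then hardware-decode it
      echo "filesrc location=$input ! jpegparse ! nvv4l2decoder" ;;
    rtsp://*|http://*|file://*)
      # Encoded stream URI: uridecodebin picks parser/decoder automatically
      echo "uridecodebin uri=$input" ;;
    *)
      # Local video file: wrap it as a file:// URI for uridecodebin
      echo "uridecodebin uri=file://$input" ;;
  esac
}

build_source_branch sample.jpg
build_source_branch rtsp://camera/stream
```

Each printed branch would then be linked to a request pad of the same nvstreammux instance (m.sink0, m.sink1, ...), as in the pipeline below.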

Do you have streammux in your pipeline? I see you used nvstreamdemux after nvdsanalytics.

I tried this bins approach but it gets stuck: the elements are successfully added and linked, and
I can see data is flowing.

filesrc location=xxx ! jpegparse ! nvv4l2decoder ! m.sink0
filesrc location=xxx ! jpegparse ! nvv4l2decoder ! m.sink1
nvstreammux name=m width=xxx height=xxx batch-size=2 ! …

There has been no update from you for a while, so we assume this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks
