Can a pipeline's sinks display the video output and save it to a file at the same time?

• Hardware Platform (Jetson / GPU): NVIDIA GeForce RTX 4070 Laptop GPU
• DeepStream Version: 7.0
• JetPack Version (valid for Jetson only)
• TensorRT Version: 8.6.1.6-1+cuda12.0
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs): questions
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

Can we build a pipeline whose sinks display and save the video output to a file at the same time? That is, while displaying the inference result, it also saves the video output.

sink0:
  enable: 1
  #Type - 1=FakeSink 2=EglSink/nv3dsink (Jetson only) 3=File
  type: 2
  sync: 1
  source-id: 0
  gpu-id: 0
  nvbuf-memory-type: 0


sink1:
  enable: 1
  #Type - 1=FakeSink 2=EglSink/nv3dsink (Jetson only) 3=File
  type: 3
  #1=mp4 2=mkv
  container: 1
  #1=h264 2=h265
  codec: 1
  #encoder type 0=Hardware 1=Software
  enc-type: 0
  sync: 1
  #iframeinterval=10
  bitrate: 4000000
  #H264 Profile - 0=Baseline 2=Main 4=High
  #H265 Profile - 0=Main 1=Main10
  # set profile only for hw encoder, sw encoder selects profile based on sw-preset
  profile: 0
  output-file: output.mp4
  source-id: 0

Sink0 is responsible for outputting to the display, and sink1 is responsible for simultaneously writing the result to the file output.mp4.

I hope this can help you.

Sorry for the late reply. Is this still a DeepStream issue that needs support? Thanks!

Thank you for the reply!
I forgot to mention that I am developing with Python (deepstream_python_apps). I tried to implement your previous instruction like this:

463 # Create display sink
464 sink0 = Gst.ElementFactory.make("nveglglessink", "nvvideo-renderer")
465 sink0.set_property("qos", 0)
466 sink0.set_property("sync", 1)
467 pipeline.add(sink0)

469 # Create file sink
470 sink1 = Gst.ElementFactory.make("filesink", "nvvideo-renderer")
471 sink1.set_property("location", output_file)
472 sink1.set_property("sync", 1)
473 pipeline.add(sink1)

But the following ERROR occurs:
File "/opt/nvidia/deepstream/deepstream-7.0/sources/deepstream_python_apps/apps/deepstream-nvdsanalytics/nvdsanalytics_peoplenet_opencv.py", line 473, in main
pipeline.add(sink1)
File "/usr/lib/python3/dist-packages/gi/overrides/Gst.py", line 73, in add
raise AddError(arg)
gi.overrides.Gst.AddError: <gi.GstFileSink object at 0x73e9b3331280 (GstFileSink at 0x5a1024404930)>

Note: When I run individual sink0 or sink1, it works well.

You need to use a different name for each element, for example: sink1 = Gst.ElementFactory.make("filesink", "nvvideo-renderer1")

I implemented your suggestion, but something is still wrong: no display and no file is saved.

# Create display sink

sink0 = Gst.ElementFactory.make("nveglglessink", "nvvideo-renderer")
sink0.set_property("qos",0)    
sink0.set_property("sync",1)
pipeline.add(sink0)
    
# Create file sink    
sink1 = Gst.ElementFactory.make("filesink", "nvvideo-renderer1")
sink1.set_property('location', output_file)
sink1.set_property("sync",1)   
pipeline.add(sink1)

Link elements:

print("Linking elements in the Pipeline \n")
streammux.link(pgie)    
pgie.link(tracker)
tracker.link(nvanalytics)
nvanalytics.link(tiler)           
tiler.link(nvvidconv)    
nvvidconv.link(nvosd)    
nvosd.link(nvvidconv2)
nvvidconv2.link(encoder)
encoder.link(parser1)
parser1.link(mux)
mux.link(sink0)
sink0.link(sink1)  

Here is the full Python code:
nvdsanalytics.txt (22.7 KB)

Below is the console output:
python3 nvdsanalytics_peoplenet_opencv.py file:///home/eduardo/Devel/Video_dataset/02_0164.mp4
Creating Pipeline
Creating streamux
Creating source_bin 0
Creating source bin
source-bin-00
Creating Pgie
Creating nvtracker
Creating nvdsanalytics
Unknown value 'loosue' in for key 'mode' using 'loose'
Creating tiler
Creating nvvidconv
Creating nvosd
Creating nvvidconv
Creating nvv4l2h264enc
Creating qtmux
Creating h264parse
Creating FileSink
Adding elements to Pipeline
Linking elements in the Pipeline
Now playing…
1 : file:///home/eduardo/Devel/Video_dataset/02_0164.mp4
Starting pipeline

libEGL warning: DRI3: Screen seems not DRI3 capable
libEGL warning: DRI2: failed to authenticate
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
[NvMultiObjectTracker] Initialized
0:00:04.725713454 32307 0x563de69cf780 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2095> [UID = 1]: deserialized trt engine from :/home/eduardo/Downloads/deepstream_tao_apps/models/peoplenet/resnet34_peoplenet_int8.onnx_b1_gpu0_int8.engine
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
INFO: …/nvdsinfer/nvdsinfer_model_builder.cpp:612 [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1:0 3x544x960
1 OUTPUT kFLOAT output_cov/Sigmoid:0 3x34x60
2 OUTPUT kFLOAT output_bbox/BiasAdd:0 12x34x60

0:00:04.848379607 32307 0x563de69cf780 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2198> [UID = 1]: Use deserialized engine model: /home/eduardo/Downloads/deepstream_tao_apps/models/peoplenet/resnet34_peoplenet_int8.onnx_b1_gpu0_int8.engine
0:00:04.852607718 32307 0x563de69cf780 INFO nvinfer gstnvinfer_impl.cpp:343:notifyLoadModelStatus: [UID 1]: Load new model:/home/eduardo/Downloads/deepstream_tao_apps/configs/nvinfer/peoplenet_tao/config_infer_primary_peoplenet.txt sucessfully
Decodebin child added: source

Decodebin child added: decodebin0

Decodebin child added: qtdemux0

Decodebin child added: multiqueue0

Decodebin child added: h264parse0

Decodebin child added: capsfilter0

Decodebin child added: nvv4l2decoder0

In cb_newpad

gstname= video/x-raw
features= <Gst.CapsFeatures object at 0x7a3630ba4580 (GstCapsFeatures at 0x7a351430f540)>

**PERF: {'stream0': 0.0}

##################################################
Objs in ROI: {'RF': 0}
Linecrossing Cumulative: {'Exit': 0, 'Entry': 0}
Linecrossing Current Frame: {'Exit': 0, 'Entry': 0}
Frame Number= 0 stream id= 0 Number of Objects= 0 PERSON_count= 0 BAG_count= 0 FACE_count= 0
##################################################
##################################################
Objs in ROI: {'RF': 0}
Linecrossing Cumulative: {'Exit': 0, 'Entry': 0}
Linecrossing Current Frame: {'Exit': 0, 'Entry': 0}
Frame Number= 1 stream id= 0 Number of Objects= 0 PERSON_count= 0 BAG_count= 0 FACE_count= 0
##################################################
##################################################
Objs in ROI: {'RF': 0}
Linecrossing Cumulative: {'Exit': 0, 'Entry': 0}
Linecrossing Current Frame: {'Exit': 0, 'Entry': 0}
Frame Number= 2 stream id= 0 Number of Objects= 0 PERSON_count= 0 BAG_count= 0 FACE_count= 0
##################################################
##################################################
Object 5 roi status: ['RF']
Object 3 roi status: ['RF']
Object 7 roi status: ['RF']
Objs in ROI: {'RF': 3}
Linecrossing Cumulative: {'Exit': 0, 'Entry': 0}
Linecrossing Current Frame: {'Exit': 0, 'Entry': 0}
Frame Number= 3 stream id= 0 Number of Objects= 9 PERSON_count= 6 BAG_count= 0 FACE_count= 3
##################################################
##################################################
Object 5 roi status: ['RF']
Object 3 roi status: ['RF']
Object 7 roi status: ['RF']
Objs in ROI: {'RF': 3}
Linecrossing Cumulative: {'Exit': 0, 'Entry': 0}
Linecrossing Current Frame: {'Exit': 0, 'Entry': 0}
Frame Number= 4 stream id= 0 Number of Objects= 10 PERSON_count= 7 BAG_count= 0 FACE_count= 3
.
.
.
Object 26 moving in direction: DIR:North
Objs in ROI: {'RF': 0}
Linecrossing Cumulative: {'Exit': 4, 'Entry': 2}
Linecrossing Current Frame: {'Exit': 0, 'Entry': 0}
Frame Number= 360 stream id= 0 Number of Objects= 12 PERSON_count= 10 BAG_count= 0 FACE_count= 2
##################################################
nvstreammux: Successfully handled EOS for source_id=0

**PERF: {'stream0': 76.34}

**PERF: {'stream0': 0.0}

**PERF: {'stream0': 0.0}

**PERF: {'stream0': 0.0}

**PERF: {'stream0': 0.0}

**PERF: {'stream0': 0.0}

**PERF: {'stream0': 0.0}

**PERF: {'stream0': 0.0}

**PERF: {'stream0': 0.0}

The pipeline is not correct: a sink has no source pad, so sink0.link(sink1) cannot work. You can use a tee element to create two branches: one branch for display, the other for writing the file. Please refer to deepstream-test4/deepstream_test_4.py, where one branch sends messages to a broker and the other branch is used for display.

Thank you. It works!

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.