Record Deepstream 6.1 output - Input MP4, Output MP4

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): Jetson Orin AGX
• DeepStream Version: 6.1
• JetPack Version (valid for Jetson only): 5.0.1
• TensorRT Version: 8.4
• Issue Type (questions, new requirements, bugs): Question

Hi

I have built upon the DeepStream Test3 Python sample app, which accepts an H265 MP4 file as input. The PGIE is a model that was built using TAO and converted to run on aarch64. That piece is working fine.

My question is how to record the inferred video results to a similar MP4 file. I have tried various incarnations of GStreamer pipelines together with a review of similar questions here. I've seen other solutions, but they were for elementary H264 streams (like the test1 app).

I have also tried this outside of Python:

gst-launch-1.0 uridecodebin uri=file:///home/super/Downloads/HelmetFull.mp4 ! nvv4l2decoder ! m.sink_0 nvstreammux name=m batch-size=1 width=1920 height=1080 ! nvinfer config-file-path=hardhat.txt ! nvvideoconvert ! nvdsosd ! nvvideoconvert ! "video/x-raw,format=I420" ! avenc_mpeg4 bitrate=2000000 ! mpeg4videoparse ! mux.video_0 qtmux name=mux ! filesink location=test.mp4

but to no avail.

I have read the gstreamer documentation but would appreciate any guidance. Thank you.

Cheers.

Hi @IainA. Which format do you want for your output mp4 file (h264, h265, or is either OK)?
Could you share your HelmetFull.mp4 file, model files and config file?
From the pipeline you attached, you used avenc_mpeg4 to encode the file. We suggest using nvv4l2h265enc or nvv4l2h264enc instead. You can refer to the pipeline below, which uses the nvv4l2h265enc plugin. Thanks.

gst-launch-1.0 filesrc location=./XXX.mp4 ! qtdemux ! h265parse ! nvv4l2decoder ! queue ! nvvideoconvert ! "video/x-raw(memory:NVMM), format=I420" ! nvv4l2h265enc ! h265parse ! qtmux ! filesink location=test.mp4

Hi
Sorry for the length of the post, but I wanted to give all the details.

I tried this pipeline and it works:

gst-launch-1.0 uridecodebin uri=file:///home/super/Downloads/HelmetFull.mp4 ! mx.sink_0 nvstreammux width=1920 height=1080 batch-size=1 name=mx ! nvinfer config-file-path=/home/super/AIProgramming/Helmet/Sources/deepstream-test1-usbcam/hardhat.txt unique-id=8 ! nvmultistreamtiler width=1920 height=1080 rows=1 columns=1 ! nvvideoconvert ! nvdsosd ! nvv4l2h264enc ! h264parse ! qtmux ! filesink location=test.mp4

However, I have lost the bounding-box customization I produced in the Python script. I therefore used the test3 Python sample and adjusted the "no-display" argument path to add the nvv4l2h264enc, h264parse, qtmux and filesink elements (this approach was just for convenience and fit the use case). I further adjusted the Python code to check for no-display and add the linking code to connect the added components appropriately. The relevant Python code is as follows:

if no_display:
    print("Creating Filesink \n")

    # Encode the OSD output as H264, mux into an MP4 container and write to file
    enc = Gst.ElementFactory.make("nvv4l2h264enc", "nvv4l2h264enc")
    parse = Gst.ElementFactory.make("h264parse", "h264parse")
    qtm = Gst.ElementFactory.make("qtmux", "qtmux")
    sink = Gst.ElementFactory.make("filesink", "filesink")
    sink.set_property("location", "output.mp4")
    sink.set_property("enable-last-sample", 0)
    sink.set_property("sync", 0)
else:
    if is_aarch64():
        print("Creating transform \n")
        transform = Gst.ElementFactory.make("nvegltransform", "nvegl-transform")
        if not transform:
            sys.stderr.write(" Unable to create transform \n")
    print("Creating EGLSink \n")
    sink = Gst.ElementFactory.make("nveglglessink", "nvvideo-renderer")

and,

print("Adding elements to Pipeline \n")
pipeline.add(pgie)
if nvdslogger:
    pipeline.add(nvdslogger)
pipeline.add(tiler)
pipeline.add(nvvidconv)
pipeline.add(nvosd)
if no_display:
    pipeline.add(enc)
    pipeline.add(parse)
    pipeline.add(qtm)
if transform:
    pipeline.add(transform)
pipeline.add(sink)

Finally,

print("Linking elements in the Pipeline \n")
streammux.link(queue1)
queue1.link(pgie)
pgie.link(queue2)
if nvdslogger:
    queue2.link(nvdslogger)
    nvdslogger.link(tiler)
else:
    queue2.link(tiler)
tiler.link(queue3)
queue3.link(nvvidconv)
nvvidconv.link(queue4)
queue4.link(nvosd)
if transform:
    print("*** in transform")
    nvosd.link(queue5)
    queue5.link(transform)
    transform.link(sink)
elif no_display:
    print("*** in no display")
    nvosd.link(queue5)
    queue5.link(enc)
    enc.link(queue6)
    queue6.link(parse)
    parse.link(queue7)
    queue7.link(qtm)
    qtm.link(queue8)
    queue8.link(sink)
else:
    print("*** in else")
    nvosd.link(queue5)
    queue5.link(sink) 

This fails with the following output:

{'input': ['file:///home/super/Downloads/HelmetFull.mp4'], 'configfile': None, 'pgie': None, 'no_display': True, 'file_loop': False, 'disable_probe': False, 'silent': False}
Creating Pipeline

Creating streamux

Creating source_bin 0

Creating source bin
source-bin-00
Creating Pgie

$$$$$ PGIE make
Creating tiler

Creating nvvidconv

Creating nvosd

Creating Filesink

Adding elements to Pipeline

Linking elements in the Pipeline

*** in no display
Now playing…
0 : file:///home/super/Downloads/HelmetFull.mp4
Starting pipeline

Opening in BLOCKING MODE
0:00:00.169549505 11429 0xaaaaaf57baa0 WARN nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1161> [UID = 1]: Warning, OpenCV has been deprecated. Using NMS for clustering instead of cv::groupRectangles with topK = 20 and NMS Threshold = 0.5
WARNING: [TRT]: Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
0:00:02.625956842 11429 0xaaaaaf57baa0 INFO nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 1]: deserialized trt engine from :/home/super/AIProgramming/Helmet/Models/hardhat/final_model_hardhat.etlt_b1_gpu0_int8.engine
INFO: [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1 3x304x400
1 OUTPUT kFLOAT output_bbox/BiasAdd 12x19x25
2 OUTPUT kFLOAT output_cov/Sigmoid 3x19x25

0:00:02.783831813 11429 0xaaaaaf57baa0 INFO nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2003> [UID = 1]: Use deserialized engine model: /home/super/AIProgramming/Helmet/Models/hardhat/final_model_hardhat.etlt_b1_gpu0_int8.engine
0:00:02.791272220 11429 0xaaaaaf57baa0 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus: [UID 1]: Load new model:hardhat.txt sucessfully
Decodebin child added: source

Decodebin child added: decodebin0

Decodebin child added: qtdemux0

Decodebin child added: multiqueue0

Decodebin child added: h264parse0

Decodebin child added: capsfilter0

Decodebin child added: aacparse0

Decodebin child added: avdec_aac0

Decodebin child added: nvv4l2decoder0

Opening in BLOCKING MODE
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
In cb_newpad

gstname= video/x-raw
features= <Gst.CapsFeatures object at 0xffffb0cbfac0 (GstCapsFeatures at 0xffff04060820)>
In cb_newpad

gstname= audio/x-raw
0:00:03.051217666 11429 0xaaaaaf5862a0 WARN nvinfer gstnvinfer.cpp:2299:gst_nvinfer_output_loop: error: Internal data stream error.
0:00:03.051255362 11429 0xaaaaaf5862a0 WARN nvinfer gstnvinfer.cpp:2299:gst_nvinfer_output_loop: error: streaming stopped, reason not-linked (-1)
Error: gst-stream-error-quark: Internal data stream error. (1): /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(2299): gst_nvinfer_output_loop (): /GstPipeline:pipeline0/GstNvInfer:primary-inference:
streaming stopped, reason not-linked (-1)
Exiting app

I've read on the forum that some components (such as uridecodebin) need a pad-added callback in order to link, but my added components don't appear to need that.
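For reference, this is the shape of the pattern I mean, a minimal sketch rather than the exact test3 code (the handler name is mine, and it assumes the streammux variable from my snippets above). uridecodebin only creates its source pads at runtime, hence the callback; elements like nvv4l2h264enc, h264parse, qtmux and filesink have static "always" pads, so a plain .link() should be enough:

def on_pad_added(decodebin, pad, streammux):
    # Link only the video pad into a streammux request pad
    caps = pad.get_current_caps() or pad.query_caps(None)
    if caps.get_structure(0).get_name().startswith("video"):
        sinkpad = streammux.get_request_pad("sink_0")
        if pad.link(sinkpad) != Gst.PadLinkReturn.OK:
            sys.stderr.write("Failed to link decoder src pad to streammux\n")

uridecodebin.connect("pad-added", on_pad_added, streammux)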

Thank you for any help

Cheers

Fixed - I needed an nvvideoconvert element after the OSD, wired up in Python (needed one more queue), roughly as sketched below.
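For anyone who hits the same not-linked error, the corrected no-display tail looks roughly like this (a sketch following the element and queue names from my snippets above; nvvidconv2 and queue9 are the new pieces and are created and added to the pipeline like the others):

nvvidconv2 = Gst.ElementFactory.make("nvvideoconvert", "convertor-postosd")
pipeline.add(nvvidconv2)

nvosd.link(queue5)
queue5.link(nvvidconv2)   # convert nvdsosd output into a format nvv4l2h264enc accepts
nvvidconv2.link(queue6)
queue6.link(enc)
enc.link(queue7)
queue7.link(parse)
parse.link(queue8)
queue8.link(qtm)
qtm.link(queue9)
queue9.link(sink)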
Thanks
