Hi,
Sorry for the length of the post, but I wanted to give all the details.
I tried this pipeline and it works:
gst-launch-1.0 uridecodebin uri=file:///home/super/Downloads/HelmetFull.mp4 ! mx.sink_0 nvstreammux width=1920 height=1080 batch-size=1 name=mx ! nvinfer config-file-path=/home/super/AIProgramming/Helmet/Sources/deepstream-test1-usbcam/hardhat.txt unique-id=8 ! nvmultistreamtiler width=1920 height=1080 rows=1 columns=1 ! nvvideoconvert ! nvdsosd ! nvv4l2h264enc ! h264parse ! qtmux ! filesink location=test.mp4
However, I have lost the customization for the bboxes I produced in the Python script. Therefore I used the test3 Python sample and adjusted the "no-display" argument to add the nvv4l2h264enc, h264parse, qtmux and filesink elements (this approach was just for convenience and fit the use case). I further adjusted the Python code to check for no-display and add the linking code to connect the added components appropriately. The relevant Python code is as follows:
if no_display:
    print("Creating Filesink \n")
    enc = Gst.ElementFactory.make("nvv4l2h264enc", "nvv4l2h264enc")
    parse = Gst.ElementFactory.make("h264parse", "h264parse")
    qtm = Gst.ElementFactory.make("qtmux", "qtmux")
    sink = Gst.ElementFactory.make("filesink", "filesink")
    sink.set_property("location", "output.mp4")
    sink.set_property("enable-last-sample", 0)
    sink.set_property("sync", 0)
else:
    if is_aarch64():
        print("Creating transform \n ")
        transform = Gst.ElementFactory.make("nvegltransform", "nvegl-transform")
        if not transform:
            sys.stderr.write(" Unable to create transform \n")
        print("Creating EGLSink \n")
        sink = Gst.ElementFactory.make("nveglglessink", "nvvideo-renderer")
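Not shown above: the linking code further down also references queue6 through queue8. I created and added those the same way the sample creates queue1 through queue5, roughly (a sketch; the names are just my own):

# extra queues feeding the encode branch, made like the sample's queue1-queue5
queue6 = Gst.ElementFactory.make("queue", "queue6")
queue7 = Gst.ElementFactory.make("queue", "queue7")
queue8 = Gst.ElementFactory.make("queue", "queue8")
pipeline.add(queue6)
pipeline.add(queue7)
pipeline.add(queue8)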
and,
print("Adding elements to Pipeline \n")
pipeline.add(pgie)
if nvdslogger:
    pipeline.add(nvdslogger)
pipeline.add(tiler)
pipeline.add(nvvidconv)
pipeline.add(nvosd)
if no_display:
    pipeline.add(enc)
    pipeline.add(parse)
    pipeline.add(qtm)
if transform:
    pipeline.add(transform)
pipeline.add(sink)
Finally,
print("Linking elements in the Pipeline \n")
streammux.link(queue1)
queue1.link(pgie)
pgie.link(queue2)
if nvdslogger:
queue2.link(nvdslogger)
nvdslogger.link(tiler)
else:
queue2.link(tiler)
tiler.link(queue3)
queue3.link(nvvidconv)
nvvidconv.link(queue4)
queue4.link(nvosd)
if transform:
print("*** in transform")
nvosd.link(queue5)
queue5.link(transform)
transform.link(sink)
elif no_display:
print("*** in no display")
nvosd.link(queue5)
queue5.link(enc)
enc.link(queue6)
queue6.link(parse)
parse.link(queue7)
queue7.link(qtm)
qtm.link(queue8)
queue8.link(sink)
else:
print("*** in else")
nvosd.link(queue5)
queue5.link(sink)
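Since Gst.Element.link() returns a boolean rather than raising, I'm also planning to wrap each link in a small check to pinpoint which one actually fails (a quick sketch, not the sample's code):

# wrapper to surface a failed link immediately, instead of it only
# showing up later as a "not-linked" streaming error
def link_checked(src, dst):
    if not src.link(dst):
        sys.stderr.write("Failed to link %s -> %s\n" % (src.get_name(), dst.get_name()))

# e.g. for the no-display branch:
link_checked(nvosd, queue5)
link_checked(queue5, enc)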
This fails with the following output:
{'input': ['file:///home/super/Downloads/HelmetFull.mp4'], 'configfile': None, 'pgie': None, 'no_display': True, 'file_loop': False, 'disable_probe': False, 'silent': False}
Creating Pipeline
Creating streamux
Creating source_bin 0
Creating source bin
source-bin-00
Creating Pgie
$$$$$ PGIE make
Creating tiler
Creating nvvidconv
Creating nvosd
Creating Filesink
Adding elements to Pipeline
Linking elements in the Pipeline
*** in no display
Now playing…
0 : file:///home/super/Downloads/HelmetFull.mp4
Starting pipeline
Opening in BLOCKING MODE
0:00:00.169549505 11429 0xaaaaaf57baa0 WARN nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1161> [UID = 1]: Warning, OpenCV has been deprecated. Using NMS for clustering instead of cv::groupRectangles with topK = 20 and NMS Threshold = 0.5
WARNING: [TRT]: Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
0:00:02.625956842 11429 0xaaaaaf57baa0 INFO nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 1]: deserialized trt engine from :/home/super/AIProgramming/Helmet/Models/hardhat/final_model_hardhat.etlt_b1_gpu0_int8.engine
INFO: [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1 3x304x400
1 OUTPUT kFLOAT output_bbox/BiasAdd 12x19x25
2 OUTPUT kFLOAT output_cov/Sigmoid 3x19x25
0:00:02.783831813 11429 0xaaaaaf57baa0 INFO nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2003> [UID = 1]: Use deserialized engine model: /home/super/AIProgramming/Helmet/Models/hardhat/final_model_hardhat.etlt_b1_gpu0_int8.engine
0:00:02.791272220 11429 0xaaaaaf57baa0 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus: [UID 1]: Load new model:hardhat.txt sucessfully
Decodebin child added: source
Decodebin child added: decodebin0
Decodebin child added: qtdemux0
Decodebin child added: multiqueue0
Decodebin child added: h264parse0
Decodebin child added: capsfilter0
Decodebin child added: aacparse0
Decodebin child added: avdec_aac0
Decodebin child added: nvv4l2decoder0
Opening in BLOCKING MODE
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
In cb_newpad
gstname= video/x-raw
features= <Gst.CapsFeatures object at 0xffffb0cbfac0 (GstCapsFeatures at 0xffff04060820)>
In cb_newpad
gstname= audio/x-raw
0:00:03.051217666 11429 0xaaaaaf5862a0 WARN nvinfer gstnvinfer.cpp:2299:gst_nvinfer_output_loop: error: Internal data stream error.
0:00:03.051255362 11429 0xaaaaaf5862a0 WARN nvinfer gstnvinfer.cpp:2299:gst_nvinfer_output_loop: error: streaming stopped, reason not-linked (-1)
Error: gst-stream-error-quark: Internal data stream error. (1): /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(2299): gst_nvinfer_output_loop (): /GstPipeline:pipeline0/GstNvInfer:primary-inference:
streaming stopped, reason not-linked (-1)
Exiting app
I've read on the forum that some components (such as uridecodebin) need a callback in order to link, but my added components don't appear to need that.
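For reference, the callback I mean is the "pad-added" handling the sample already does for uridecodebin, which looks roughly like this (simplified sketch; in the sample the pad is actually linked via a ghost pad on the source bin):

# uridecodebin only creates its src pads once the stream type is known,
# so the link downstream has to happen in a "pad-added" callback
def cb_newpad(decodebin, decoder_src_pad, data):
    caps = decoder_src_pad.get_current_caps()
    gstname = caps.get_structure(0).get_name()
    if gstname.find("video") != -1:
        decoder_src_pad.link(streammux.get_request_pad("sink_0"))

uri_decode_bin.connect("pad-added", cb_newpad, None)

My added elements (encoder, parser, muxer, filesink) all expose their pads up front, so I linked them statically as shown above.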
Thank you for any help.
Cheers