Post processor nvinferserver

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) T4
• DeepStream Version 6.1
• JetPack Version (valid for Jetson only)
• TensorRT Version 8.2
• NVIDIA GPU Driver Version (valid for GPU only) 11.6
• Issue Type( questions, new requirements, bugs)
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

I'm using the post-processor plugin Gst-nvdspostprocess in DeepStream — DeepStream 6.1.1 Release documentation.

In root@39026ff1831b:/opt/nvidia/deepstream/deepstream-6.1/sources/gst-plugins/gst-nvdspostprocess# I modified the config_detector.yml file and built the plugin, which generated libnvdsgst_postprocess.so.

In root@39026ff1831b:/opt/nvidia/deepstream/deepstream-6.1/sources/gst-plugins/gst-nvdspostprocess/postprocesslib_impl# I also built the post-processing library, which generated libpostprocess_impl.so.

I have linked the postprocess component into the pipeline, and I'm running inference with a custom YOLOv4 TLT model, but I'm not able to get the inference output (metadata).

Below is the pipeline:



    srcpad.link(sinkpad)

    streammux.link(queue2)
    queue2.link(pgie)
    pgie.link(nvvidconv1)
    nvvidconv1.link(filter1)
    filter1.link(tiler)
    tiler.link(nvvidconv)
    nvvidconv.link(postprocess)
    postprocess.link(nvosd)

And the probe function:

    postprocess_sink_pad = postprocess.get_static_pad("sink")
    if not postprocess_sink_pad:
        sys.stderr.write(" Unable to get sink pad of postprocess \n")
        return
    else:
        postprocess_sink_pad.add_probe(Gst.PadProbeType.BUFFER, tiler_sink_pad_buffer_probe, 0)

I'm always getting l_obj as None.
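For reference, a minimal sketch of how l_obj is typically obtained inside such a probe, following the standard deepstream_python_apps metadata-iteration pattern (the probe name matches the one registered above; everything else here is illustrative, not the attached app_PostProcessor.py):

    import gi
    gi.require_version("Gst", "1.0")
    from gi.repository import Gst
    import pyds

    def tiler_sink_pad_buffer_probe(pad, info, u_data):
        gst_buffer = info.get_buffer()
        if not gst_buffer:
            return Gst.PadProbeReturn.OK

        # Batch metadata attached by the upstream elements
        batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
        l_frame = batch_meta.frame_meta_list
        while l_frame is not None:
            frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
            # l_obj is the frame's object-meta list; it stays None if no
            # NvDsObjectMeta has been attached to the buffer at this point
            l_obj = frame_meta.obj_meta_list
            while l_obj is not None:
                obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
                print(obj_meta.class_id, obj_meta.obj_label)
                try:
                    l_obj = l_obj.next
                except StopIteration:
                    break
            try:
                l_frame = l_frame.next
            except StopIteration:
                break
        return Gst.PadProbeReturn.OK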
Attached are the config file and the YAML file.

helmetLabels.txt (27 Bytes)
config_infer_postprocess.txt (1.1 KB)
helmet.yaml (1.1 KB)

app_PostProcessor.py (33.7 KB)

From the guide you refer to:

This is done by changing the network-type=100 and output-tensor-meta=1. 

Could you set these two parameters and have a try?
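For example, once output-tensor-meta=1 takes effect, the raw tensor output is attached to each frame as user metadata, which can be checked from a pad probe. A minimal sketch based on the deepstream_python_apps tensor-meta samples (the helper name is illustrative):

    import pyds

    def frame_has_tensor_meta(frame_meta):
        """Return True if NVDSINFER_TENSOR_OUTPUT_META is attached to this frame."""
        l_user = frame_meta.frame_user_meta_list
        while l_user is not None:
            user_meta = pyds.NvDsUserMeta.cast(l_user.data)
            if user_meta.base_meta.meta_type == pyds.NvDsMetaType.NVDSINFER_TENSOR_OUTPUT_META:
                return True
            try:
                l_user = l_user.next
            except StopIteration:
                break
        return False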

I have already set output-tensor-meta=1. But when I set network-type=100 in the helmet.yaml file as below:

property:
  gpu-id: 0
  process-mode: 1
  num-detected-classes: 2
  gie-unique-id: 1 # Operate on gie-unique-id's output
  # 1=DBSCAN, 2=NMS, 3=DBSCAN+NMS Hybrid, 4=None (no clustering)
  cluster-mode: 3
  output-blob-names: BatchedNMS;BatchedNMS_1;BatchedNMS_2;BatchedNMS_3
  network-type: 100
  labelfile-path: /opt/nvidia/deepstream/deepstream-6.1/sources/gst-plugins/gst-nvdspostprocess/helmetLabels.txt
  parse-bbox-func-name: NvDsPostProcessParseCustomBatchedNMSTLT
  is-classifier: 0
  # Use the config params below for dbscan clustering mode
  #class-attrs-all:
  #  detected-min-w: 4
  #  detected-min-h: 4
  #  minBoxes: 3


and run the application, I get the error below:

*** DeepStream: Launched RTSP Streaming at rtsp://0.0.0.0:3075/ds-test ***

Now playing…
1 : file:///app/nvidia-video_Tao/helmet.h264
Starting pipeline

Error in NvDsPostProcess::SetConfigFile() <postprocesslib_impl.cpp:392> : Parsing for network type 100 is not supported
Error: gst-library-error-quark: Library config file /opt/nvidia/deepstream/deepstream-6.1/sources/gst-plugins/gst-nvdspostprocess/helmet1.yml parsing failed (5): gstnvdspostprocess.cpp(307): gst_nvdspostprocess_start (): /GstPipeline:pipeline0/GstNvDsPostProcess:postprocess-plugin:
Postprocess lib config file /opt/nvidia/deepstream/deepstream-6.1/sources/gst-plugins/gst-nvdspostprocess/helmet1.yml parsing failed

Could you try to comment out the network-type in your config file? Also, you can refer to our TAO demo with C code.
It shows the use of multiple models and scenarios.
https://github.com/NVIDIA-AI-IOT/deepstream_tao_apps/blob/master/configs/yolov3_tao/pgie_yolov3_tao_config_dgpu.txt
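For example, commenting out only that line in the property section shown above would look like this (a sketch; everything else unchanged):

property:
  gpu-id: 0
  process-mode: 1
  num-detected-classes: 2
  gie-unique-id: 1
  cluster-mode: 3
  output-blob-names: BatchedNMS;BatchedNMS_1;BatchedNMS_2;BatchedNMS_3
  # network-type: 100  # commented out as suggested
  labelfile-path: /opt/nvidia/deepstream/deepstream-6.1/sources/gst-plugins/gst-nvdspostprocess/helmetLabels.txt
  parse-bbox-func-name: NvDsPostProcessParseCustomBatchedNMSTLT
  is-classifier: 0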

We are using nvinferserver; with nvinfer the app works correctly.
Hence we are trying to use this plugin: Gst-nvdspostprocess in DeepStream — DeepStream 6.1 Release documentation

We have attached the corresponding files in the link.

Hi @h9945394143, in your helmet.yml we cannot find the custom-lib-path parameter. Basically, when you set these parameters successfully, it will use the function you defined. Could you just send us the model and stream, and tell us how you run your demo step by step? Thanks.

We are using the postprocess plugin; you can refer to app_PostProcessor.py, where we have set the custom library path:

    postprocess = Gst.ElementFactory.make("nvdspostprocess", "postprocess-plugin")
    if not postprocess:
        sys.stderr.write(" Unable to create postprocess \n")
        return
    postprocess.set_property("postprocesslib-config-file", "/opt/nvidia/deepstream/deepstream-6.1/sources/gst-plugins/gst-nvdspostprocess/helmet.yml")
    postprocess.set_property("postprocesslib-name", "/opt/nvidia/deepstream/deepstream-6.1/sources/gst-plugins/gst-nvdspostprocess/postprocesslib_impl/libpostprocess_impl.so")

There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one.
Thanks

OK. I tried to run your Python code in my T4 environment, but I don't have some of the required modules, like DSEventHandler. Maybe you can refer to our C++ code and config files from the source code:

/opt/nvidia/deepstream/deepstream-6.1/sources/apps/sample_apps/deepstream-infer-tensor-meta-test/

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.