Create a simple replacement for gst-nvinfer


I am new to DS.
I want to use the same pipeline as deepstream-test4, but replace gst-nvinfer with my own library (its input is a single frame as a cv::Mat, and its output is the frame with bounding boxes drawn). I use an ONNX model with TensorRT as the inference engine, but all of that is handled inside our library.

What is the best way to replace only gst-nvinfer while keeping the rest of the pipeline, including the stream metadata?

Many thanks,

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): Xavier NX
• DeepStream Version: 5.1
• JetPack Version (valid for Jetson only)
• TensorRT Version:
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the content of the configuration files, the command line used, and other details for reproducing)
• Requirement details (This is for new requirements. Include the module name — which plugin or which sample application — and the function description)

Why can’t you deploy your network with gst-nvinfer?

I have pre-processing and post-processing steps between model inference stages, which would take considerable effort to integrate with gst-nvinfer. So I am looking for a way to bypass nvinfer while keeping the stream metadata.

How many models do you have? What kind of "pre-processing and post-processing" steps do you have "between" model inference?

Hi Fiona,

I have only one model. The processing is a kind of smoothing/filtering implemented with a set of OpenCV math functions.

Also, could you show me the input/output GstBuffer handling in the gst-nvinfer source code? AFAIK, it does the same thing as my own library: take every frame and output a frame with bounding boxes?

gst-nvinfer is open source. The code is under /opt/nvidia/deepstream/deepstream/sources/gst-plugins/gst-nvinfer

There is a simple introduction to gst-nvinfer here: DeepStream SDK FAQ - Intelligent Video Analytics / DeepStream SDK - NVIDIA Developer Forums

Hi Fiona,

I am a bit confused about the GstBuffer in/out flow between elements/plugins. Could you show me where in the source code this flow is managed?

I want to insert my own library in the following sequence:
GstBuffer input → convert to an RGB frame → my own lib → convert the RGB frame back → GstBuffer output

e.g.: I don't see the APIs below being invoked by anything else... so I don't know the flow/call order in which to insert my modified code...

/* Implementation of the GObject/GstBaseTransform interfaces. */
static void gst_nvinfer_finalize (GObject * object);
static void gst_nvinfer_set_property (GObject * object, guint prop_id,
    const GValue * value, GParamSpec * pspec);
static void gst_nvinfer_get_property (GObject * object, guint prop_id,
    GValue * value, GParamSpec * pspec);

static gboolean gst_nvinfer_start (GstBaseTransform * btrans);
static gboolean gst_nvinfer_stop (GstBaseTransform * btrans);
static gboolean gst_nvinfer_sink_event (GstBaseTransform * trans,
    GstEvent * event);

static GstFlowReturn gst_nvinfer_submit_input_buffer (GstBaseTransform * btrans,
    gboolean discont, GstBuffer * inbuf);
static GstFlowReturn gst_nvinfer_generate_output (GstBaseTransform * btrans,
    GstBuffer ** outbuf);

DeepStream is based on GStreamer. Please refer to the GStreamer documentation: GStreamer: open source multimedia framework

Please make sure you have basic GStreamer knowledge and coding skills before you start with DeepStream.