Extract Camera Frame from DeepStream Pipeline

I need to extract camera frames from the DeepStream SDK pipeline so that I can run a specific algorithm on them separately (in parallel) from the DeepStream pipeline.

Do you have any ideas, please?

Moving to DeepStream SDK forum so that DeepStream team can take a look.

You could add a tee (or GstInterpipe) to the pipeline and use an appsink in the second branch, then pull the frames from the appsink in your application.
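A minimal sketch of that approach, assuming a V4L2 camera at /dev/video0; the element choices on each branch and the callback name are placeholders to adapt to your setup (the fakesink stands in for the real DeepStream branch):

```c
/* Hypothetical sketch: tee splits the camera stream; one branch stays in
 * the (here abbreviated) DeepStream pipeline, the other ends in an appsink
 * that the application pulls frames from. Build against gstreamer-1.0 and
 * gstreamer-app-1.0. */
#include <gst/gst.h>
#include <gst/app/gstappsink.h>

/* Fires once per frame arriving at the appsink branch. */
static GstFlowReturn
on_new_sample (GstAppSink *sink, gpointer user_data)
{
  GstSample *sample = gst_app_sink_pull_sample (sink);
  if (sample != NULL) {
    GstBuffer *buf = gst_sample_get_buffer (sample);
    GstMapInfo map;
    if (gst_buffer_map (buf, &map, GST_MAP_READ)) {
      /* run the custom algorithm on map.data / map.size here */
      gst_buffer_unmap (buf, &map);
    }
    gst_sample_unref (sample);
  }
  return GST_FLOW_OK;
}

int
main (int argc, char **argv)
{
  gst_init (&argc, &argv);

  /* The queue after each tee branch is important; without queues the two
   * branches can stall each other during preroll. */
  GstElement *pipeline = gst_parse_launch (
      "v4l2src device=/dev/video0 ! tee name=t "
      "t. ! queue ! fakesink "                      /* DeepStream branch */
      "t. ! queue ! videoconvert ! video/x-raw,format=RGB "
      "   ! appsink name=mysink emit-signals=true", NULL);

  GstElement *sink = gst_bin_get_by_name (GST_BIN (pipeline), "mysink");
  g_signal_connect (sink, "new-sample", G_CALLBACK (on_new_sample), NULL);

  gst_element_set_state (pipeline, GST_STATE_PLAYING);
  g_main_loop_run (g_main_loop_new (NULL, FALSE));
  return 0;
}
```

Note that emit-signals=true must be set on the appsink for the "new-sample" signal to fire; alternatively you can poll with gst_app_sink_pull_sample() from your own thread.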

Another option is to implement an element based on GstVideoFilter and add it to the DeepStream pipeline. You can perform the custom algorithm in the _transform_ip_ virtual method. For reference, you can check our perf plugin, which is quite simple: it is based on GstVideoFilter and implements transform_ip: https://github.com/RidgeRun/gst-perf/tree/master/plugins
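For orientation, a stripped-down skeleton of such an element might look like the following. The type and function names (GstMyAlgo, gst_my_algo_*) are placeholders, and pad templates, element metadata, and plugin registration are omitted for brevity; the perf plugin linked above shows the complete boilerplate. Note that on GstVideoFilter the in-place hook is transform_frame_ip, which hands you an already-mapped GstVideoFrame:

```c
/* Hypothetical sketch of a GstVideoFilter-based element; build it as a
 * regular GStreamer plugin against gstreamer-video-1.0. */
#include <gst/gst.h>
#include <gst/video/gstvideofilter.h>

typedef struct _GstMyAlgo { GstVideoFilter parent; } GstMyAlgo;
typedef struct _GstMyAlgoClass { GstVideoFilterClass parent_class; } GstMyAlgoClass;

G_DEFINE_TYPE (GstMyAlgo, gst_my_algo, GST_TYPE_VIDEO_FILTER);

/* Called once per buffer; the frame is already mapped for you, so the
 * custom algorithm can read (or modify) the pixels in place. Mapping the
 * buffer again yourself is unnecessary and a common source of errors. */
static GstFlowReturn
gst_my_algo_transform_frame_ip (GstVideoFilter *filter, GstVideoFrame *frame)
{
  guint8 *pixels = GST_VIDEO_FRAME_PLANE_DATA (frame, 0);
  gint    stride = GST_VIDEO_FRAME_PLANE_STRIDE (frame, 0);

  /* run_my_algorithm (pixels, stride, ...);  <- your code here */
  (void) pixels;
  (void) stride;
  return GST_FLOW_OK;
}

static void
gst_my_algo_class_init (GstMyAlgoClass *klass)
{
  GstVideoFilterClass *vfilter_class = GST_VIDEO_FILTER_CLASS (klass);

  /* Real elements must also install pad templates and metadata here. */
  vfilter_class->transform_frame_ip = gst_my_algo_transform_frame_ip;
}

static void
gst_my_algo_init (GstMyAlgo *self)
{
}
```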

Is there any further reference you could provide? I am trying to get option 2 to work so I can understand how to tailor the example to what I am trying to do; however, I am receiving a null pointer error.