Combining frontend + gstVideoEncode samples

Hello,

I am looking at the frontend and gstVideoEncode samples in the Multimedia API folder. I would like the bounding boxes detected by the TRTStreamConsumer to be drawn into whatever gstVideoEncode outputs.

Currently, the CaptureSession creates two OutputStreams (one for TRT and one for gstVideoEncoder). The GStreamer source is then created as follows:

// Create EGLStream video source.
GstElement *videoSource = gst_element_factory_make("nveglstreamsrc", NULL);
g_object_set(G_OBJECT(videoSource), "display", display, NULL);
g_object_set(G_OBJECT(videoSource), "eglstream", eglStream, NULL);

Could anyone show how to point the GStreamer source at the buffers that are passed to nvosd_draw_rectangles(…) in TRTStreamConsumer::RenderThreadProc()?

So I suppose the two OutputStreams should be merged into one, and the GStreamer source should read the memory buffers that are processed in the TRT consumer.
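One possible approach (a sketch, not taken from the samples): keep a single OutputStream for the TRT consumer, replace nveglstreamsrc with an appsrc, and push each frame into GStreamer after nvosd_draw_rectangles() has run. The element name, caps (I420, 30 fps), and the helper functions below are all assumptions for illustration:

```cpp
// Hypothetical sketch: feed post-OSD frames to GStreamer via appsrc
// instead of nveglstreamsrc. Caps and function names are assumptions.
#include <gst/gst.h>
#include <gst/app/gstappsrc.h>
#include <cstddef>

static GstElement *appSrc = nullptr;

// Build this element instead of nveglstreamsrc when assembling the pipeline.
static GstElement *createAppSrc(int width, int height)
{
    appSrc = gst_element_factory_make("appsrc", "trt-osd-source");
    GstCaps *caps = gst_caps_new_simple("video/x-raw",
        "format",    G_TYPE_STRING, "I420",   // assumed format of the OSD output
        "width",     G_TYPE_INT, width,
        "height",    G_TYPE_INT, height,
        "framerate", GST_TYPE_FRACTION, 30, 1,
        NULL);
    g_object_set(G_OBJECT(appSrc),
                 "caps", caps, "is-live", TRUE,
                 "format", GST_FORMAT_TIME, NULL);
    gst_caps_unref(caps);
    return appSrc;
}

// Call from TRTStreamConsumer::RenderThreadProc() after
// nvosd_draw_rectangles(): copy the rendered frame and hand it downstream.
static void pushOsdFrame(const void *data, size_t size, GstClockTime pts)
{
    GstBuffer *buf = gst_buffer_new_allocate(NULL, size, NULL);
    gst_buffer_fill(buf, 0, data, size);
    GST_BUFFER_PTS(buf) = pts;
    gst_app_src_push_buffer(GST_APP_SRC(appSrc), buf); // takes ownership
}
```

The copy in pushOsdFrame() is the simple-but-slow path; on Jetson you would normally want to avoid the extra memcpy, which is part of why the thread below ends up recommending NvVideoEncoder instead.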

Hi Beerend,
We suggest you use NvVideoEncoder, as demonstrated in VideoEncoder.cpp.

Hello DaneLLL,

Thanks for your answer. However, I do not want to write just to a single file; I would like to feed an hlssink for streaming over HTTP. NvVideoEncoder appears to encode correctly, but it is not clear to me what to pass to GStreamer, or how.
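For the HLS side specifically, the downstream half of the pipeline can be sketched on the command line. This assumes an H.264 elementary stream from the encoder; the file names are placeholders, and in the application the stream would come from the NvVideoEncoder output rather than filesrc:

```shell
# Hypothetical sketch: segment an H.264 elementary stream into HLS.
# filesrc stands in for the encoder output; names are placeholders.
gst-launch-1.0 filesrc location=out.h264 ! h264parse ! mpegtsmux \
    ! hlssink location=segment%05d.ts playlist-location=playlist.m3u8 \
      target-duration=5 max-files=10
```

hlssink expects MPEG-TS input, hence the h264parse ! mpegtsmux stage in front of it.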

Hi Beerend,
You can refer to the patch in the post below:
Gst encoding pipeline with frame processing using CUDA and libargus - Jetson TX1 - NVIDIA Developer Forums