Please provide complete information as applicable to your setup.
• Hardware Platform: Jetson
• DeepStream Version: 5.1
• JetPack Version: 4.5.1
• TensorRT Version:
• Issue Type: questions
I want to composite the camera stream that goes through nvinfer with the original camera stream.
The pipeline is below; it fails with the error “nvcompositor ERROR: input buffer not supported”:

```
gst-launch-1.0 v4l2src ! video/x-raw,width=1920,height=1080 ! tee name=video_tee ! \
  queue ! nvvidconv ! 'video/x-raw(memory:NVMM),format=RGBA,width=960,height=540' ! comp. \
  video_tee. ! queue ! nvvideoconvert ! 'video/x-raw(memory:NVMM),format=(string)NV12' ! streammux.sink_0 \
  nvstreammux name=streammux width=1920 height=1080 batch-size=1 batched-push-timeout=4000000 ! \
  nvinfer config-file-path=./config_infer_primary_yoloV5.txt ! nvvideoconvert ! nvdsosd ! \
  nvvideoconvert ! 'video/x-raw(memory:NVMM),width=960,height=540' ! comp. \
  nvcompositor name=comp sink_0::xpos=0 sink_0::ypos=0 sink_0::width=960 sink_0::height=540 \
    sink_1::xpos=0 sink_1::ypos=540 sink_1::width=960 sink_1::height=540 ! \
  'video/x-raw(memory:NVMM)' ! nv3dsink sync=FALSE
```
I saw that nvcompositor is not compatible with DeepStream, but I don’t know which plugin fits my case. Is it possible to use nvmultistreamtiler to solve my issue?
nvcompositor is not a DeepStream plugin. It cannot be used in a DeepStream pipeline.
DeepStream SDK FAQ - Intelligent Video Analytics / DeepStream SDK - NVIDIA Developer Forums
Thanks for your feedback.
How can I composite the output stream from nvinfer with the original video that does not go through nvinfer? Is there any solution?
DeepStream is mainly for inferencing, so nvmultistreamtiler composites the inferenced videos only. A possible method is to tee the video into two streams and remove the object meta from one stream after nvinfer, so the bboxes and other inference results will not appear in that stream.
My case is to merge the original camera stream with the nvinfer output stream (with the bboxes) into one single stream.
Is there any solution on Jetson to implement such an application?
> A possible method is to tee the video as two streams, and remove the object meta for one stream after nvinfer. So the bbox or other inference result will not appear in the stream.
Sorry, I didn’t get your point.
How do I remove the object meta of one stream after nvinfer? Is there any sample code in the DeepStream samples?
After removing the object meta of one stream, how do I compose the two streams? Which plugin should be used to compose them?
What I want to render is like below:
Just input the two streams (tee’d from your original stream) to nvinfer as normal, and remove the object meta for one of the streams.
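A rough sketch of that topology, for illustration only: both tee branches feed nvstreammux as separate sources (batch-size=2), go through nvinfer once, and nvmultistreamtiler stacks the two streams into one frame. Resolutions and the config file path here are just carried over from the pipeline earlier in the thread; also note that removing the object meta from one stream cannot be done from gst-launch — it needs a pad probe in application code.

```shell
# Sketch only, not a verified pipeline — adjust caps for your camera.
gst-launch-1.0 v4l2src ! video/x-raw,width=1920,height=1080 ! tee name=t \
  t. ! queue ! nvvideoconvert ! 'video/x-raw(memory:NVMM),format=NV12' ! m.sink_0 \
  t. ! queue ! nvvideoconvert ! 'video/x-raw(memory:NVMM),format=NV12' ! m.sink_1 \
  nvstreammux name=m width=1920 height=1080 batch-size=2 batched-push-timeout=4000000 ! \
  nvinfer config-file-path=./config_infer_primary_yoloV5.txt ! \
  nvmultistreamtiler rows=2 columns=1 width=960 height=1080 ! \
  nvvideoconvert ! nvdsosd ! nv3dsink sync=false
```

In an application you would attach a probe before nvdsosd and strip the object meta from the frames of one source_id, so only one tile shows bboxes.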
_NvDsFrameMeta — DeepStream 5.1 documentation, _NvDsObjectMeta — DeepStream 5.1 documentation
Which function should be called to remove the object metadata? Is there any sample code?
There is the `nvds_remove_obj_meta_from_frame()` interface.
Metadata Structures — DeepStream 5.1 documentation
And you can find the interface in /opt/nvidia/deepstream/deepstream/sources/includes/nvdsmeta.h
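For illustration, a minimal sketch of how that interface could be used from a GStreamer pad probe to strip the object meta from one stream. This assumes the DeepStream SDK headers; the probe attachment and the choice of `source_id == 1` as the “clean” stream are assumptions, not from the thread.

```c
/* Sketch only: a pad probe (attached e.g. before nvdsosd) that removes all
 * object meta from frames of one stream, so no bboxes are drawn for it.
 * Requires the DeepStream SDK (gstnvdsmeta.h) to build. */
#include <gst/gst.h>
#include "gstnvdsmeta.h"   /* NvDsBatchMeta, nvds_remove_obj_meta_from_frame() */

static GstPadProbeReturn
remove_obj_meta_probe (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER (info);
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
  if (!batch_meta)
    return GST_PAD_PROBE_OK;

  for (NvDsMetaList *l_frame = batch_meta->frame_meta_list; l_frame;
       l_frame = l_frame->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l_frame->data;

    /* Assumption: source_id 1 is the stream that should stay clean. */
    if (frame_meta->source_id != 1)
      continue;

    /* nvds_remove_obj_meta_from_frame() modifies obj_meta_list, so advance
     * the iterator before removing the current object. */
    NvDsMetaList *l_obj = frame_meta->obj_meta_list;
    while (l_obj) {
      NvDsObjectMeta *obj_meta = (NvDsObjectMeta *) l_obj->data;
      l_obj = l_obj->next;
      nvds_remove_obj_meta_from_frame (frame_meta, obj_meta);
    }
  }
  return GST_PAD_PROBE_OK;
}
```

The probe would be registered with `gst_pad_add_probe (pad, GST_PAD_PROBE_TYPE_BUFFER, remove_obj_meta_probe, NULL, NULL)` on the relevant element pad.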