Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU) Jetson Xavier AGX
• DeepStream Version 6.0
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only) 10.2
• Issue Type( questions, new requirements, bugs)
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)
I have a problem with my pipeline that is slowly driving me crazy.
I am working on a stereo camera that operates in two states:
- Detection (specific object detection)
- Calibration (calibrate stereo camera)
In the detection state, I have to locate, on the slave camera, the objects that the network detected on the master camera. I do this with a template-matching algorithm I wrote in CUDA.
The whole pipeline is implemented in GStreamer and can be seen in the attached pictures.
To do the template matching I use a probe. But since the template-matching algorithm needs both images (master and slave) in RGBA, I had to move the probe and the RGBA conversion upstream of the demuxer.
Since I did this, the stream has been behaving very strangely.
For example, the object labels drawn by nvdsosd flicker. Also, the memory locations of the master and slave images accessed in the probe swap periodically.
This is how I programmed the RGBA conversion:

// create filter element for color format conversion from NV12 to RGBA;
// nvdsosd and the template-matching algorithm need an RGBA input stream
videoconvert_nv12_rgba_algo_probe = gst_element_factory_make("nvvideoconvert", "videoconvert_nv12_rgba_algo_probe");

// describe the video conversion capabilities (NV12 -> RGBA)
capsfilter_nv12_rgba_algo_probe = gst_element_factory_make("capsfilter", "capsfilter_nv12_rgba_algo_probe");
GstCaps *videoconvertcap = gst_caps_new_simple("video/x-raw", "format", G_TYPE_STRING, "RGBA", NULL);
GstCapsFeatures *feature = gst_caps_features_new("memory:NVMM", NULL);
gst_caps_set_features(videoconvertcap, 0, feature);
g_object_set(G_OBJECT(capsfilter_nv12_rgba_algo_probe), "caps", videoconvertcap, NULL);
gst_caps_unref(videoconvertcap);
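For context, here is a minimal, untested sketch of what my probe roughly does (the callback name and the source-id assignment are my own; the assumption is that master is source 0 and slave is source 1). Before the demuxer, the buffer carries a batch of two frames, so I select the frames via frame_meta->source_id rather than by their position in the batch, since I am not sure the order inside the batch is stable:

```c
#include <gst/gst.h>
#include "gstnvdsmeta.h"

/* Buffer probe placed on the src pad of the capsfilter, upstream of the
 * demuxer. The buffer here is a batch of 2 frames (master + slave). */
static GstPadProbeReturn
template_match_probe(GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
    GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER(info);
    NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta(buf);
    if (!batch_meta)
        return GST_PAD_PROBE_OK;

    NvDsFrameMeta *master = NULL, *slave = NULL;
    for (NvDsMetaList *l = batch_meta->frame_meta_list; l; l = l->next) {
        NvDsFrameMeta *fm = (NvDsFrameMeta *)l->data;
        if (fm->source_id == 0)       /* assumption: master camera is source 0 */
            master = fm;
        else if (fm->source_id == 1)  /* assumption: slave camera is source 1 */
            slave = fm;
    }

    if (master && slave) {
        /* map the NvBufSurface and run the CUDA template matching here,
         * indexing surface->surfaceList[frame_meta->batch_id] per frame */
    }
    return GST_PAD_PROBE_OK;
}
```

The probe is attached with gst_pad_add_probe(..., GST_PAD_PROBE_TYPE_BUFFER, template_match_probe, NULL, NULL) on the capsfilter's src pad.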
IMAGE1: Pipeline after the change (start of the problems)
The change is marked in red.
Am I making a specific mistake, or could the problem be that I am now converting on a batched buffer of size 2?
Does anyone have any input?
Thanks in advance :)