Hi everybody,
I have built an OpenVX graph to stabilize video frames read in from a video source (a video file or a camera). The workflow is as follows:
(video source) —> [Reader] —> (frame: vx_image) —> [Graph] —> (stabilized_frame: vx_image)
Inside the graph, (frame) is first converted to grayscale by vxColorConvertNode; the gray image is then passed through several other nodes to produce the output stored in (stabilized_frame). Nothing difficult here!
The problem occurs when I compare two different video readers for filling (frame): the ovxio::FrameSource API from VisionWorks and the GStreamer API.
- The code using ovxio::FrameSource looks like this:
std::unique_ptr<ovxio::FrameSource> source = ovxio::createDefaultFrameSource( context, video_path );
while(1)
{
source->fetch( frame );
///...Process frame
vxProcessGraph( graph );
}
- The code using GStreamer to stabilize live video from a camera looks like this:
GstSample *sample = gst_app_sink_pull_sample((GstAppSink*) vsink);
GstMapInfo map;
GstBuffer *buf = gst_sample_get_buffer(sample);
gst_buffer_map(buf, &map, GST_MAP_READ);
cv::Mat mBgr(1080, 1920, CV_8UC4, map.data);
vx_image frame = nvx_cv::createVXImageFromCVMat(m_context, mBgr);
gst_buffer_unmap(buf, &map);
// The buffer is owned by the sample, so release the sample (not the buffer)
gst_sample_unref(sample);
///... Process frame
When I pass the frame captured by GStreamer to the graph, the average execution time of vxColorConvertNode jumps to about 10 ms (figure 1), while with ovxio::FrameSource it is only around 1.5 ms (figure 2). The only explanation I can think of is that ovxio::FrameSource stores frame data directly with NVX_MEMORY_TYPE_CUDA, whereas the GStreamer path imports it with VX_MEMORY_TYPE_HOST, so the runtime has to create a new CUDA-backed image every frame and copy/convert the host-memory one into it before processing. But I am not sure. Could anyone help me fix this?
https://imgur.com/j0iEdlu
Figure 1. Graph performance when using GStreamer for frame reading
https://imgur.com/7jphl4C
Figure 2. Graph performance when using ovxio::FrameSource for frame reading
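In case it helps the discussion: if the extra time really comes from creating a fresh vx_image from host memory on every frame, one thing I am considering is creating the vx_image once outside the capture loop and copying each mapped GStreamer buffer into it with the standard vxCopyImagePatch API. This is only an untested sketch; the 1920x1080 RGBX format, tightly packed rows, and the variable names (m_context, map, graph) are assumptions carried over from my snippet above:

```cpp
// Create the image once, outside the capture loop, instead of calling
// nvx_cv::createVXImageFromCVMat for every frame.
vx_image frame = vxCreateImage(m_context, 1920, 1080, VX_DF_IMAGE_RGBX);

// ...then per frame, between gst_buffer_map() and gst_buffer_unmap():
vx_rectangle_t rect = { 0, 0, 1920, 1080 };
vx_imagepatch_addressing_t addr;
addr.dim_x    = 1920;
addr.dim_y    = 1080;
addr.stride_x = 4;            // 4 bytes per RGBX pixel
addr.stride_y = 1920 * 4;     // assumes tightly packed rows
vxCopyImagePatch(frame, &rect, 0, &addr, map.data,
                 VX_WRITE_ONLY, VX_MEMORY_TYPE_HOST);
vxProcessGraph(graph);
```

Whether this removes the per-frame allocation cost, or the host-to-device upload still dominates, is exactly what I would like to understand.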
Thanks in advance!