I am working in a TX2, JetPack 3.3, OpenCV 3.4.5 environment. I cannot capture video through GStreamer because I have to use six CSI cameras at the same time, so I am capturing frames through the Argus library instead.
- I can successfully acquire frames in NvBuffer format through Argus and convert them to OpenCV's cv::Mat type. However, the Mat data obtained this way cannot be saved as a JPEG via cv::imwrite, and I do not know what the problem is.
My source code is as follows.
...
// Acquire a frame.
UniqueObj<Frame> frame(i_consumer_->acquireFrame());
IFrame *iFrame = interface_cast<IFrame>(frame);
if (!iFrame)
{
    return false;
}
if (iFrame->getNumber() < skip_frames_)
{
    return false;
}
EGLStream::Image *image_ = iFrame->getImage();

// Optionally save the frame as JPEG directly through the Argus IImageJPEG interface.
if (save_image_)
{
    if ((getTick() - post_save_time_) >= (1000 / save_freq_))
    {
        post_save_time_ = getTick();
        stringstream save_path;
        save_path << image_file_path_ << image_file_name_ << save_count_ << "." << image_file_type_;
        EGLStream::IImageJPEG *iImageJPEG_ = interface_cast<EGLStream::IImageJPEG>(image_);
        status_ = iImageJPEG_->writeJPEG(save_path.str().c_str());
        save_count_++;
        cout << "save image success:" << save_path.str() << endl;
    }
}

// Create NvBuffers for this frame: a pitch-linear ABGR buffer for CPU access
// and a block-linear YUV420 buffer.
native_buffer_fd_ = -1;
native_buffer_fd2_ = -1;
NV::IImageNativeBuffer *iNativeBuffer = interface_cast<NV::IImageNativeBuffer>(image_);
if (native_buffer_fd_ == -1)
{
    native_buffer_fd_ = iNativeBuffer->createNvBuffer(i_stream_->getResolution(),
                                                      NvBufferColorFormat_ABGR32,
                                                      NvBufferLayout_Pitch, &status_);
    native_buffer_fd2_ = iNativeBuffer->createNvBuffer(i_stream_->getResolution(),
                                                       NvBufferColorFormat_YUV420,
                                                       NvBufferLayout_BlockLinear, &status_);
    if (native_buffer_fd_ == -1)
    {
        cout << "Failed to create NvBuffer" << endl;
        return false;
    }
    if (native_buffer_fd2_ == -1)
    {
        cout << "Failed to create NvBuffer" << endl;
        return false;
    }
}

// Map the ABGR buffer into CPU address space and wrap it in a cv::Mat (no copy).
void *pdata = NULL;
NvBufferMemMap(native_buffer_fd_, 0, NvBufferMem_Read_Write, &pdata);
NvBufferMemSyncForCpu(native_buffer_fd_, 0, &pdata);
// NvBufferMemSyncForDevice(native_buffer_fd_, 0, &pdata);
Mat imgbuf = Mat(height_, width_, CV_8UC4, pdata);
if (imgbuf.empty())
{
    cout << "imgbuf empty." << endl;
    return false;
}
cvtColor(imgbuf, frame_, COLOR_RGBA2BGR);
...
With the cv::Mat frame_ produced by the code above, cv::imwrite fails to write any JPEG. What is the problem here? I also tried deep-copying the cv::Mat so that it holds its own pixel data rather than a pointer into the mapped buffer, but that failed as well.
- I also have to run TensorRT-based inference on this input image. I am using jetson-inference as a reference, but I could not find any example in that package that takes images from the Argus library. Do you have any examples or source code I could refer to?
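For reference, my save attempts look roughly like this (the output paths here are just placeholders, not my real paths):

// Attempt 1: write the converted frame directly.
imwrite("/tmp/frame_test.jpg", frame_);

// Attempt 2: deep-copy first, so the Mat owns its own pixel data instead of
// pointing into the mapped NvBuffer, then write the copy.
Mat frame_copy = frame_.clone();
imwrite("/tmp/frame_copy.jpg", frame_copy);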
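For context, this is roughly the preprocessing I expect to need in front of the TensorRT engine; net_width_, net_height_, and input_buffer_ are placeholders I made up for whatever the actual engine expects, not names from jetson-inference:

// Sketch: pack frame_ (8-bit BGR) into a planar float RGB (CHW) buffer,
// which is the usual input layout for a TensorRT engine.
// input_buffer_ is assumed to be float[3 * net_width_ * net_height_].
Mat resized;
resize(frame_, resized, Size(net_width_, net_height_));
for (int y = 0; y < net_height_; y++)
{
    for (int x = 0; x < net_width_; x++)
    {
        Vec3b px = resized.at<Vec3b>(y, x);
        int idx = y * net_width_ + x;
        // BGR -> RGB, scaled to [0, 1].
        input_buffer_[0 * net_height_ * net_width_ + idx] = px[2] / 255.0f;
        input_buffer_[1 * net_height_ * net_width_ + idx] = px[1] / 255.0f;
        input_buffer_[2 * net_height_ * net_width_ + idx] = px[0] / 255.0f;
    }
}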
Thanks.