EGLStream::Image *image = iFrame->getImage(); How is the raw image data I get from *image laid out in memory?

I tried to get the raw image data using the following:

EGLStream::Image *image = iFrame->getImage();
IImage *iimg = interface_cast<IImage>(image);
// mapBuffer() returns const void *, so cast the constness away before reinterpreting.
char *data = reinterpret_cast<char *>(const_cast<void *>(iimg->mapBuffer((uint32_t)0)));

So *data points to where the raw image data for the YCbCr channels is. When I read values from that location I see that they change when I move the camera, so the location I am reading should be correct. But I still cannot figure out how the channels are mapped at that location.

The image buffer looks like this:

buffer 1 127 128 126 128 121 132 122 130 122 130 122 130 …
buffer 0 00 02 00 03 205 213 135 141 132 122 131 143 …

Any ideas?

According to the EGL documentation, the way raw images are laid out in the buffer depends on the producer (LibArgus in this case).
The pixel format I have selected is PIXEL_FMT_YCbCr_420_888.
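
In case it helps: if the data were pitch-linear YCbCr 420 semi-planar (an NV12-style layout, which is only my assumption for PIXEL_FMT_YCbCr_420_888, not something I have confirmed), I would expect per-pixel indexing like this sketch, with the plane strides as placeholders:

#include <cstdint>

// Sketch only: assumes a pitch-linear NV12-style layout, i.e.
// plane 0 = full-resolution Y, plane 1 = half-resolution interleaved Cb/Cr.
// The strides are the per-plane row pitches, which may be wider than the image.
struct Nv12View
{
    const uint8_t *yPlane;   // would come from mapBuffer(0), if it were pitch-linear
    const uint8_t *uvPlane;  // would come from mapBuffer(1), if it were pitch-linear
    uint32_t yStride;
    uint32_t uvStride;
};

static inline uint8_t sampleY(const Nv12View &v, uint32_t x, uint32_t y)
{
    return v.yPlane[y * v.yStride + x];
}

static inline uint8_t sampleCb(const Nv12View &v, uint32_t x, uint32_t y)
{
    // Chroma is subsampled 2x2; Cb and Cr alternate within the row.
    return v.uvPlane[(y / 2) * v.uvStride + (x / 2) * 2 + 0];
}

static inline uint8_t sampleCr(const Nv12View &v, uint32_t x, uint32_t y)
{
    return v.uvPlane[(y / 2) * v.uvStride + (x / 2) * 2 + 1];
}

Buffer 1 hovering around 128 looks like near-neutral chroma, which would fit an interleaved Cb/Cr plane, but the values in buffer 0 do not read like a linear top row of Y, so the real in-memory order must be something else.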

I tried using NvBuffer, but it makes everything slow and it crashes randomly.

In the yuvJpeg sample I see the following code printing some sample data from the buffer:

for (uint32_t i = 0; i < iImage->getBufferCount(); i++)
{
    const uint8_t *d = static_cast<const uint8_t *>(iImage->mapBuffer(i));
    if (!d)
        ORIGINATE_ERROR("\tFailed to map buffer\n");

    Size2D<uint32_t> size = iImage2D->getSize(i);
    CONSUMER_PRINT("\tIImage(2D): "
                   "buffer %u (%ux%u, %u stride), "
                   "%02x %02x %02x %02x %02x %02x %02x %02x %02x %02x %02x %02x\n",
                   i, size.width(), size.height(), iImage2D->getStride(i),
                   d[0], d[1], d[2], d[3], d[4], d[5],
                   d[6], d[7], d[8], d[9], d[10], d[11]);
}

So how do these values d[0], d[1], d[2], … map to YCbCr channels and pixel values?

Hi Ttheekshana, are you on a TX1 or a TX2? Did you flash the image via JetPack 3.1 (r28.1)?

I am using a TX2 flashed with JetPack 3.1, and I am using an IMX185 camera.

iImage->mapBuffer() does not give buffers in pitch linear. Please use the APIs in nvbuf_utils.h:
msync with MS_SYNC option failure - Jetson TX1 - NVIDIA Developer Forums

Also refer to tegra_multimedia_api/samples/09_camera_jpeg_capture.
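
A rough sketch of the flow, based on the r28.1 nvbuf_utils.h (the exact type of createNvBuffer()'s size argument differs between Argus releases, so treat this as an outline and check the sample headers for the exact signatures):

#include <Argus/Argus.h>
#include <EGLStream/EGLStream.h>
#include <EGLStream/NV/ImageNativeBuffer.h>
#include <nvbuf_utils.h>
#include <cstdint>

// Sketch: convert the block-linear Argus image into a pitch-linear NvBuffer
// (a dmabuf), then map the Y plane for CPU access. Error handling trimmed.
bool dumpYPlane(EGLStream::Image *image, uint32_t width, uint32_t height)
{
    EGLStream::NV::IImageNativeBuffer *iNativeBuffer =
        Argus::interface_cast<EGLStream::NV::IImageNativeBuffer>(image);
    if (!iNativeBuffer)
        return false;

    // Older Argus releases take Argus::Size here instead of Size2D.
    int fd = iNativeBuffer->createNvBuffer(Argus::Size2D<uint32_t>(width, height),
                                           NvBufferColorFormat_NV12,
                                           NvBufferLayout_Pitch);
    if (fd < 0)
        return false;

    NvBufferParams params;
    NvBufferGetParams(fd, &params);

    // Map plane 0 (Y) and sync it so the CPU sees up-to-date data.
    void *yBase = NULL;
    NvBufferMemMap(fd, 0, NvBufferMem_Read, &yBase);
    NvBufferMemSyncForCpu(fd, 0, &yBase);

    const uint8_t *y = static_cast<const uint8_t *>(yBase);
    // Row r of the Y plane starts at y + r * params.pitch[0];
    // the pitch is usually wider than the visible width.
    (void)y;

    NvBufferMemUnMap(fd, 0, &yBase);
    NvBufferDestroy(fd);
    return true;
}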

I get these black lines moving around the image randomly in the Y channel when I use the APIs in nvbuf_utils.h:

https://drive.google.com/file/d/0B-WbfH0IbWC_SXFRa29UYUlDTUU/view?usp=sharing

Is there any specific reason why the library does not lay out the image data in pitch linear?

It is in block linear, which is private.

Could you provide a patch for us to reproduce the issue with any sample in ~/tegra_multimedia_api/?
We can take a look to see if anything is wrong in how the APIs are used.

So the image data is there in the buffer, but it is scrambled?

Yes, it is there but not in pitch linear.

Can you try calling NvBufferMemMap() and then NvBufferMemSyncForCpu(), and check whether you still see the black lines in the dumped Y channel?
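
That is, per plane the order is map first, then sync. A minimal sketch (assuming fd is the NvBuffer dmabuf fd and plane is the plane index):

#include <nvbuf_utils.h>

// Per-plane CPU access pattern: map, sync for CPU, touch the pixels,
// sync back for the device if modified, then unmap.
void touchPlane(int fd, unsigned int plane)
{
    void *vaddr = NULL;

    NvBufferMemMap(fd, plane, NvBufferMem_Read_Write, &vaddr); // 1. map first
    NvBufferMemSyncForCpu(fd, plane, &vaddr);                  // 2. then sync for the CPU

    // ... read or modify the pixels through vaddr ...

    NvBufferMemSyncForDevice(fd, plane, &vaddr);               // 3. sync back if you wrote
    NvBufferMemUnMap(fd, plane, &vaddr);                       // 4. unmap when done
}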

Hi Ttheekshana,

Is this still an issue in your project?
Any further assistance required?
Please share the result.

Thanks

With NvBufferMemMap() and then NvBufferMemSyncForCpu(), the video stream I get is not as good as the one from the argus_camera example. That example seems to use OpenGL as a consumer, if I understand correctly.
I am able to get around 30 FPS, but I cannot get 60 FPS from the camera because the functions shown below take too much time (see the timing sketch after the list):

iFrameConsumer->acquireFrame();
NvBufferMemMap();
NvBufferMemSyncForCpu();
NvBufferMemSyncForDevice();
NvBufferMemUnMap();
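
For reference, this is roughly how I am measuring the per-frame cost of those calls (a sketch; timeStep() is my own helper, not an API):

#include <chrono>
#include <cstdio>
#include <functional>

// Minimal timing harness: wraps one per-frame step and reports its cost.
// At 60 FPS the whole per-frame budget is about 16.7 ms, so every step counts.
static double timeStep(const char *name, const std::function<void()> &step)
{
    auto t0 = std::chrono::steady_clock::now();
    step();
    auto t1 = std::chrono::steady_clock::now();
    double ms = std::chrono::duration<double, std::milli>(t1 - t0).count();
    printf("%s: %.3f ms\n", name, ms);
    return ms;
}

// Usage, with the calls listed above (sketched, not compilable as-is):
// timeStep("acquireFrame", [&] { iFrameConsumer->acquireFrame(); });
// timeStep("map+sync",     [&] { /* NvBufferMemMap + NvBufferMemSyncForCpu */ });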

If I take an image from the frame consumer, convert it to RGB, and display it in a GL window, it is not as crisp as what I see in your examples.

Also, sometimes acquireFrame() randomly seems to return a previous frame. (I checked this with a stopwatch in front of the camera.)

The examples that use OpenGL to view the camera work well. The best option would be to read back the GL buffer to get a frame, but unfortunately the buffer seems to be empty all the time.
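
This is roughly how I am trying to read the frame back (a sketch; glReadPixels() has to run on the thread that owns the GL context, after rendering and before the buffer swap, so the empty results may just be a timing or context issue on my side):

#include <GLES2/gl2.h>
#include <vector>

// Sketch: read back the currently bound framebuffer as RGBA bytes.
// Must be called from the GL context's thread, after drawing completes.
std::vector<unsigned char> readFrame(int width, int height)
{
    std::vector<unsigned char> rgba(static_cast<size_t>(width) * height * 4);
    glPixelStorei(GL_PACK_ALIGNMENT, 1);
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, rgba.data());
    return rgba;
}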

It would be better if there were an implementation that uses less CPU.
Currently I am using the IMX185, which is 2 MP at 60 FPS. I am planning to use a 4K camera at 30 FPS in the future, and I am not sure whether it will eat up all the remaining CPU when grabbing images.

I have the same problem as you mentioned:
“I get these black lines moving around the image randomly in the Y channel when I use the APIs in nvbuf_utils.h.”

I am using a TX1 with JetPack 3.0, an LI-TX1-CB, and two IMX290 sensors. Any ideas?

You have to install via JetPack 3.1 to get the new APIs:
msync with MS_SYNC option failure - Jetson TX1 - NVIDIA Developer Forums

I am aware of this, but the person who asked this question was using JetPack 3.1 and had the same issue with black lines. Do you have any explanation for it?

#12 confirmed it is fixed, but it may not achieve 1080p60, as #14 reports.

Which function has to be called first: NvBufferMemMap() and then NvBufferMemSyncForCpu(), or NvBufferMemSyncForCpu() and then NvBufferMemMap()?

And why, with JetPack 3.1 on the TX2, after exporting DISPLAY=:0, can I not run an Argus program via SSH, due to a failure to initialize the EGL display? There was no such issue in the previous version.

Invalid MIT-MAGIC-COOKIE-1 key
(Argus) Error NotSupported: Failed to initialize EGLDisplay (in src/eglutils/EGLUtils.cpp, function getDefaultDisplay(), line 75)
(Argus) Error NotSupported: Failed to get default display (in src/api/OutputStreamImpl.cpp, function initialize(), line 80)
(Argus) Error NotSupported: (propagating from src/api/CaptureSessionImpl.cpp, function createOutputStreamInternal(), line 565)