I compiled the Multimedia API sample 07_video_convert, then tried to convert a block-linear image to a pitch-linear one. However, the output image was not as expected.
my command is:
./video_convert memsync2160.raw 1920 2160 ABGR32 outmemsync4.raw 1920 2160 ABGR32 --input-nvbl --output-nvpl
Attached are the input file memsync2160.raw and the output file outmemsync4.raw; they are identical. They can be viewed with the following GStreamer command:
gst-launch-1.0 multifilesrc location=nv4raw2160.raw loop=true caps="video/x-raw,framerate=1/1" ! videoparse format=11 width=1920 height=2160 framerate=1/1 ! imagefreeze ! 'video/x-raw, width=1920, height=2160, format=RGBA, framerate=30/1' ! videoconvert ! xvimagesink
What I actually expected to see is the attached nv3raw2160.raw (15.8 MB).
In addition, I followed this link, Trying to process with OpenGL an EGLImage created from a dmabuf_fd, to get memsync2160.raw. (After calling NvBufferMemSyncForCpu(dmabuf_fd, 0, &virtual_addr); I could save the output image to a file.) I have read that block-linear memory is not available for CPU access, which might be why the video_convert sample doesn't work. If that is the reason, how could I construct an NvBuffer from that dmabuf_fd in order to feed a block-linear image to video_convert?
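For reference, this is roughly how I dump the buffer today. It is a sketch assuming the Jetson Multimedia API (`nvbuf_utils.h`) is available; the function name `dump_pitch_linear` and the ABGR32 assumption are mine. Note this only works for a pitch-linear buffer, since block-linear layout is not CPU-readable:

```c
/* Sketch: dump a pitch-linear NvBuffer to a file via a CPU mapping.
 * Hardware-dependent: requires the Jetson Multimedia API (nvbuf_utils.h).
 * A block-linear buffer cannot be read this way. */
#include <stdio.h>
#include "nvbuf_utils.h"

static int dump_pitch_linear(int dmabuf_fd, const char *path)
{
    NvBufferParams params;
    void *virtual_addr = NULL;

    if (NvBufferGetParams(dmabuf_fd, &params) != 0)
        return -1;

    /* Map plane 0 for CPU read and sync caches before touching it. */
    if (NvBufferMemMap(dmabuf_fd, 0, NvBufferMem_Read, &virtual_addr) != 0)
        return -1;
    NvBufferMemSyncForCpu(dmabuf_fd, 0, &virtual_addr);

    FILE *fp = fopen(path, "wb");
    if (fp) {
        /* Write row by row to skip the pitch padding. */
        for (unsigned int y = 0; y < params.height[0]; y++)
            fwrite((char *)virtual_addr + y * params.pitch[0], 1,
                   params.width[0] * 4 /* ABGR32: 4 bytes/pixel */, fp);
        fclose(fp);
    }
    NvBufferMemUnMap(dmabuf_fd, 0, &virtual_addr);
    return 0;
}
```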
Hi,
Please share how you get the frame data in block linear. Generally, block-linear frame data comes from the decoder or the Argus stack, and you can convert it to pitch linear through NvBufferTransform(). This should work fine.
For running the 07 sample, we expect both the input and output frame data to be in pitch linear.
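The NvBufferTransform() path can be sketched as follows. This is a minimal example, not the sample's exact code: it assumes src_fd is an existing block-linear ABGR32 dmabuf (e.g. from the decoder or Argus), and the helper name `to_pitch_linear` and the filter choice are illustrative:

```c
/* Sketch: convert a block-linear NvBuffer to pitch linear with
 * NvBufferTransform(). Hardware-dependent: requires the Jetson
 * Multimedia API (nvbuf_utils.h) and a valid block-linear src_fd. */
#include <string.h>
#include "nvbuf_utils.h"

static int to_pitch_linear(int src_fd, int width, int height, int *dst_fd)
{
    /* Allocate a pitch-linear destination buffer of the same size/format. */
    NvBufferCreateParams create;
    memset(&create, 0, sizeof(create));
    create.width = width;
    create.height = height;
    create.layout = NvBufferLayout_Pitch;
    create.colorFormat = NvBufferColorFormat_ABGR32;
    create.payloadType = NvBufferPayload_SurfArray;
    create.nvbuf_tag = NvBufferTag_NONE;
    if (NvBufferCreateEx(dst_fd, &create) != 0)
        return -1;

    /* VIC copies and de-tiles the block-linear source into the
     * pitch-linear destination; the CPU never touches the BL data. */
    NvBufferTransformParams transform;
    memset(&transform, 0, sizeof(transform));
    transform.transform_flag = NVBUFFER_TRANSFORM_FILTER;
    transform.transform_filter = NvBufferTransform_Filter_Smart;
    return NvBufferTransform(src_fd, *dst_fd, &transform);
}
```

After this call, the pitch-linear *dst_fd can be CPU-mapped (NvBufferMemMap / NvBufferMemSyncForCpu) or fed to the 07 sample.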