Hello. Sorry for posting my question here; I didn’t find an appropriate topic.
My question is about gstreamer + cudaconvert.
Current situation:
We have a camera, and we get a 1920 * 1080 stream from it with 3 channels. So the size of one frame should be 1920 * 1080 * 3 = 6220800 bytes.
When I use GStreamer 1.20, I can use a pipeline part such as “appsrc name=source%d ! h264parse ! nvh264dec ! cudaconvert ! video/x-raw(memory:CUDAMemory), format=(string)BGR ! appsink name=sink%d emit-signals=true async=false sync=false”, and after retrieving a frame through the GStreamer API the frame size equals 6220800 bytes.
When I use GStreamer 1.22 with the same pipeline part, the frame I retrieve through the GStreamer API is 6635520 bytes instead.
I’ll try to post in the GStreamer forum too, but my experience with it is bad: I have never gotten an answer there.
The difference seems to be due to alignment (often exposed as stride in an image’s metadata), which may have changed in GStreamer 1.22.
So in 1.20 you have:
6220800 / 1080 / 3 = 1920
1920 is divisible by 128, 64, … so the rows are aligned to 128 bytes or less, and no padding is needed.
And in 1.22 you have: 6635520 / 1080 / 3 = 2048
The extra 128 pixels per row are padding: a 2048-pixel row is what you get when 1920 is rounded up to an alignment anywhere from 256 bytes up to 2048 bytes.
Thank you very much. Today I created a cv::Mat of size 2048 * 1080 from the map.data pointer, as I did with GStreamer 1.20, and I got a correct frame with a 128-pixel-wide black area on the right side. So I think you are right, but I haven’t figured out how to get a pointer to an area of the right size. Or is that impossible and I should handle it myself, what do you think?
You’ll have to handle it yourself. If you are using OpenCV for processing, its functions are aware of stride as long as you pass it (the step argument) to the cv::Mat constructor.