I'm using NvBufferTransform to convert decoded JPEG images from YUV to ABGR. Here are the high-level steps I'm taking (pseudocode):
NvBufferCreateEx(&fd2) → allocate a hardware buffer if needed (i.e. when the image size changes)
As long as consecutive images have the same resolution, or the next image has a higher resolution, everything works fine. But once NvBufferCreateEx allocates a buffer with a smaller resolution, NvBufferTransform fails with the following error:
Can anybody help me identify what I may be doing wrong?
My configuration details:
- Jetson AGX Xavier
- JetPack 4.6
- nvidia-l4t-jetson-multimedia-api/stable,now 32.6.1
This looks to be a duplicate of
NvDdkVicConfugure failed, nvbuffer_transform Failed - #4 by DaneLLL
There is a constraint on the downscaling ratio.
@DaneLLL So, as far as I understand, once a process allocates a hardware buffer of size
(x, y), the minimum size of a hardware buffer that can be allocated by the same process is
(x/16, y/16). Is this correct?
Is this still an issue that needs support? Are there any results you can share? Thanks
@kayccc This is no longer an issue. I have a working solution.
The only outstanding question was to confirm (or correct) the hardware allocation limitations from my post above.
Yes, 16 is the constraint. If you need a ratio >16, please scale twice to reach the desired resolution.