After some research, I believe we have to implement this with nvdspreprocess. However, it seems that every cropped object is resized to processing-height and processing-width. Is there any way to get the cropped object without it being resized?
Alternatively, we could get the whole frame out and do the cropping ourselves, but currently we have no clue how; the API is a bit ambiguous. Please help us with two questions about nvdspreprocess:
Can we get the cropped object without it being scaled?
If not, can we get the full frame data from the frame meta or batch meta?
The problem is that we have to do custom warping on the input image, and the NVIDIA examples don't cover it. Our plan is to build a custom tensor with nvdspreprocess like this: image -> text detection (4 points forming a polygon) -> nvdspreprocess (take the source image, crop and warp using the polygon) -> input tensor -> text recognition.
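To make the warp step concrete, here is a minimal, self-contained sketch of what we have in mind: solve for the 3x3 homography that maps the output rectangle onto the detected 4-point polygon, then sample the source image through it. This has no DeepStream or OpenCV dependency; `computeHomography` and `warpPolygonToRect` are our own hypothetical helpers, and nearest-neighbour sampling is used for brevity.

```cpp
#include <array>
#include <cmath>
#include <vector>

// Solve the 8x8 DLT system for a homography h (h[8] fixed to 1) that maps
// each src point to the corresponding dst point, via Gaussian elimination
// with partial pivoting.
static std::array<double, 9> computeHomography(
    const std::array<std::array<double, 2>, 4>& src,
    const std::array<std::array<double, 2>, 4>& dst) {
  double A[8][9] = {};  // augmented matrix [A | b]
  for (int i = 0; i < 4; ++i) {
    double x = src[i][0], y = src[i][1];
    double u = dst[i][0], v = dst[i][1];
    double r1[9] = {x, y, 1, 0, 0, 0, -u * x, -u * y, u};
    double r2[9] = {0, 0, 0, x, y, 1, -v * x, -v * y, v};
    for (int j = 0; j < 9; ++j) { A[2 * i][j] = r1[j]; A[2 * i + 1][j] = r2[j]; }
  }
  for (int c = 0; c < 8; ++c) {
    int p = c;  // partial pivoting: pick the largest entry in column c
    for (int r = c + 1; r < 8; ++r)
      if (std::fabs(A[r][c]) > std::fabs(A[p][c])) p = r;
    for (int j = 0; j < 9; ++j) std::swap(A[c][j], A[p][j]);
    for (int r = c + 1; r < 8; ++r) {
      double f = A[r][c] / A[c][c];
      for (int j = c; j < 9; ++j) A[r][j] -= f * A[c][j];
    }
  }
  std::array<double, 9> h{};
  for (int c = 7; c >= 0; --c) {  // back substitution
    double s = A[c][8];
    for (int j = c + 1; j < 8; ++j) s -= A[c][j] * h[j];
    h[c] = s / A[c][c];
  }
  h[8] = 1.0;
  return h;
}

// Rectify a quadrilateral region of a single-channel image into an
// outW x outH rectangle: map every output pixel back into the source
// through the homography and sample nearest-neighbour.
static std::vector<unsigned char> warpPolygonToRect(
    const std::vector<unsigned char>& img, int w, int h,
    const std::array<std::array<double, 2>, 4>& poly,  // TL, TR, BR, BL
    int outW, int outH) {
  std::array<std::array<double, 2>, 4> rect = {{
      {0, 0}, {double(outW - 1), 0},
      {double(outW - 1), double(outH - 1)}, {0, double(outH - 1)}}};
  auto H = computeHomography(rect, poly);  // output -> source mapping
  std::vector<unsigned char> out(size_t(outW) * outH, 0);
  for (int y = 0; y < outH; ++y)
    for (int x = 0; x < outW; ++x) {
      double d = H[6] * x + H[7] * y + H[8];
      int sx = int(std::lround((H[0] * x + H[1] * y + H[2]) / d));
      int sy = int(std::lround((H[3] * x + H[4] * y + H[5]) / d));
      if (sx >= 0 && sx < w && sy >= 0 && sy < h)
        out[size_t(y) * outW + x] = img[size_t(sy) * w + sx];
    }
  return out;
}
```

The open question is only where the unscaled source pixels for `img` come from, which is what the two questions above are about.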
We suggest you first refer to the deepstream-pose-classification sample. It covers how to handle the buffer and how to prepare the tensor data. You will need to implement the warp algorithm yourself.
I have already gone through it, but is there any way to get the FULL FRAME with the nvdspreprocess API? We will have to do the cropping ourselves from the full frame, since NvBufSurfaceParams *surf_params = batch->units[i].roi_meta.converted_buffer; already gives the object scaled down.
Yes, I need the cropped detected object, but in DeepStream it is automatically scaled to the network height and width; we are hoping to get the cropped object before it is scaled.
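For what it's worth, outside of nvdspreprocess the full, unscaled decoded frame is reachable from any pad probe downstream of nvstreammux: the GstBuffer maps to a batched NvBufSurface, and `frame_meta->batch_id` indexes each source's surface. A sketch, assuming the usual DeepStream headers and that the probe is attached after nvstreammux (`full_frame_probe` is our own name; on dGPU the surface is device memory, so NvBufSurfaceMap / NvBufSurfaceSyncForCpu or CUDA is needed to actually read the pixels):

```cpp
#include <gst/gst.h>
#include "gstnvdsmeta.h"
#include "nvbufsurface.h"

static GstPadProbeReturn
full_frame_probe (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER (info);
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
  if (!batch_meta)
    return GST_PAD_PROBE_OK;

  GstMapInfo map;
  if (!gst_buffer_map (buf, &map, GST_MAP_READ))
    return GST_PAD_PROBE_OK;
  /* The batched buffer's data is an NvBufSurface holding one surface
   * per frame in the batch. */
  NvBufSurface *surface = (NvBufSurface *) map.data;

  for (NvDsMetaList *l = batch_meta->frame_meta_list; l; l = l->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l->data;
    /* Full decoded frame for this source, at stream resolution —
     * not the ROI scaled to processing-width/height. */
    NvBufSurfaceParams *frame_params =
        &surface->surfaceList[frame_meta->batch_id];
    /* frame_params->width / frame_params->height / frame_params->pitch
     * describe the unscaled frame; crop and warp from here instead of
     * roi_meta.converted_buffer. */
    (void) frame_params;
  }

  gst_buffer_unmap (buf, &map);
  return GST_PAD_PROBE_OK;
}
```

Whether the same NvBufSurface is also reachable from inside the nvdspreprocess custom library is exactly the question here.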