Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): GeForce RTX 3090
• DeepStream Version: 6.1
• TensorRT Version: 8.2
• NVIDIA GPU Driver Version (valid for GPU only): 510
• Issue Type (questions, new requirements, bugs): questions
nvds_obj_enc_process crop() operates on NvDsFrameMeta, which stores two kinds of dimensions: source_frame_height (the camera dimension) vs. pipeline_height (the detector dimension). If the camera dimension and the detector dimension are different, at which dimension are the cropped images produced?
Please provide more context for the use case. In the deepstream-image-meta-test example, the crop is applied at the detector dimension. The original camera frame may be scaled by nvstreammux before the detector, and the position information in the metadata is in the same coordinate frame as the detector, so the detector dimension is used.
Thank you for your reply. Yes, I have an nvstreammux that scales the camera dimensions to the detector dimensions, and I attached the probe after the detector element. Does that mean pipeline_height (the detector dimension) will be used for cropping?
If the camera dimension and detector dimension are different, in what situations would the camera dimension be used for cropping, and in what situations the detector dimension? In other words, what rules does nvds_obj_enc_process crop() follow when picking which dimension to crop at?
There has been no update from you for a while, so we assume this is no longer an issue. Hence we are closing this topic. If you need further support, please open a new one.
Generally speaking, the rules relate only to the parameters of the API. You can refer to the signature below:
bool nvds_obj_enc_process (NvDsObjEncCtxHandle,
NvDsObjEncUsrArgs *,
NvBufSurface *,
NvDsObjectMeta *,
NvDsFrameMeta *);
It relates to the width and height in the NvBufSurface, the scale parameters in NvDsObjEncUsrArgs, and so on.