Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU) - GPU
• DeepStream Version - 6.0.1
• Issue Type (questions, new requirements, bugs) - new requirements
• Requirement details
I want to save a detected object and also the whole scene of that detection. I'm using the same objMeta from frameMeta->obj_meta_list for both. I noticed my object image is also saved at a higher resolution with extra padding. It's like I can't use the same object meta to save two images. Is there a workaround for this?
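For context, the relevant part of my pad probe looks roughly like this (a simplified sketch; obj_ctx, surface, and the exact NvDsObjEncUsrArgs fields I set are illustrative and may differ slightly across DeepStream versions):

```cpp
// Simplified sketch of what I do for each objMeta in frameMeta->obj_meta_list.
// obj_ctx is an NvDsObjEncCtxHandle created earlier with nvds_obj_enc_create_context().

// 1) Enqueue the detected object crop with the original rect_params.
NvDsObjEncUsrArgs crop_args = { 0 };
crop_args.saveImg = true;
crop_args.quality = 80;
nvds_obj_enc_process(obj_ctx, &crop_args, surface, obj_meta, frame_meta);

// 2) Widen rect_params to the full frame and enqueue again for the "scene" image.
obj_meta->rect_params.left   = 0;
obj_meta->rect_params.top    = 0;
obj_meta->rect_params.width  = frame_meta->source_frame_width;
obj_meta->rect_params.height = frame_meta->source_frame_height;

NvDsObjEncUsrArgs scene_args = { 0 };
scene_args.saveImg = true;
scene_args.quality = 80;
nvds_obj_enc_process(obj_ctx, &scene_args, surface, obj_meta, frame_meta);

// 3) After iterating all objects, flush the encode queue.
nvds_obj_enc_finish(obj_ctx);
```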
Hi @miguel.taylor, there was a typo in my source code. My code is exactly the same as your suggestion.
What I understand is that since I updated objMeta->rect_params, it affected the first nvds_obj_enc_process call as well, making the size of that image source_frame_width x source_frame_height with extra black padding.
It seems that nvds_obj_enc_process takes objMeta not as a copy but as a pointer or reference. By the time nvds_obj_enc_finish is called, objMeta->rect_params already holds the new values.
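One workaround I am considering is to leave objMeta untouched and hand the full-frame enqueue its own shallow copy of the meta. This is an untested sketch and assumes nvds_obj_enc_finish() is called later in the same probe, so the stack-local copy is still valid when the queued crops are actually encoded:

```cpp
// Untested sketch: keep objMeta unchanged for the object crop, and give the
// full-frame enqueue its own shallow copy, so the deferred read of rect_params
// at nvds_obj_enc_finish() time sees two different rectangles.
NvDsObjEncUsrArgs crop_args = { 0 };
crop_args.saveImg = true;
nvds_obj_enc_process(obj_ctx, &crop_args, surface, obj_meta, frame_meta);

NvDsObjectMeta scene_meta = *obj_meta;          // shallow copy, read-only use
scene_meta.rect_params.left   = 0;
scene_meta.rect_params.top    = 0;
scene_meta.rect_params.width  = frame_meta->source_frame_width;
scene_meta.rect_params.height = frame_meta->source_frame_height;

NvDsObjEncUsrArgs scene_args = { 0 };
scene_args.saveImg = true;
nvds_obj_enc_process(obj_ctx, &scene_args, surface, &scene_meta, frame_meta);

// scene_meta must still be alive here, i.e. call finish in the same scope/probe.
nvds_obj_enc_finish(obj_ctx);
```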
nvds_obj_enc_process is used to enqueue an object crop for JPEG encoding. If you want to dump the whole frame, you can leverage NvBufSurface. Please refer to How to convert NvBufSurface to jpeg?
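For example, on dGPU the batched frame is typically NV12 in CUDA device memory, so the basic flow is: map the GstBuffer in a pad probe to get the NvBufSurface, copy the two NV12 planes to the host, convert with OpenCV, and write the JPEG. A minimal sketch (error handling omitted; the function name is illustrative):

```cpp
// Rough sketch for dGPU (frame stored as NV12 in CUDA device memory):
// copy the Y and interleaved UV planes to the host, convert with OpenCV, save JPEG.
#include <cuda_runtime.h>
#include <opencv2/imgproc.hpp>
#include <opencv2/imgcodecs.hpp>
#include <vector>
#include "nvbufsurface.h"

static void dump_nv12_frame_as_jpeg(NvBufSurface *surf, unsigned int idx, const char *path)
{
  const NvBufSurfaceParams *p = &surf->surfaceList[idx];
  const unsigned int w = p->width, h = p->height;
  const uint8_t *dev = static_cast<const uint8_t *>(p->dataPtr);

  // Pack the frame tightly on the host: Y plane (h rows) + UV plane (h/2 rows).
  std::vector<uint8_t> nv12(w * h * 3 / 2);
  cudaMemcpy2D(nv12.data(), w,
               dev + p->planeParams.offset[0], p->planeParams.pitch[0],
               w, h, cudaMemcpyDeviceToHost);
  cudaMemcpy2D(nv12.data() + w * h, w,
               dev + p->planeParams.offset[1], p->planeParams.pitch[1],
               w, h / 2, cudaMemcpyDeviceToHost);

  cv::Mat nv12_mat(h * 3 / 2, w, CV_8UC1, nv12.data());
  cv::Mat bgr;
  cv::cvtColor(nv12_mat, bgr, cv::COLOR_YUV2BGR_NV12);
  cv::imwrite(path, bgr);
}
```

The NvBufSurface pointer comes from gst_buffer_map(buf, &map, GST_MAP_READ) in the probe, with map.data cast to NvBufSurface *.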
Thanks @fanzh for the NvBufSurface to jpeg reference. I have a few questions about this workflow.
The reference workflow uses cudaMemcpy to copy the NV12 data from the NvBufSurface to the host side and lets OpenCV convert NV12 to RGB and encode it to JPEG on the CPU. Will that drastically reduce the pipeline FPS?
Is there a way to encode the NV12 device data from the NvBufSurface using the nvJPEG APIs to get better performance?
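Something along these lines is what I had in mind, as an untested sketch based on the nvJPEG encoder docs. My understanding is that nvjpegEncodeYUV expects planar YUV input, so the interleaved UV plane of NV12 would first have to be split into separate U and V planes on the device (for example with a small deinterleave kernel); the pointers and pitches below assume that has already been done:

```cpp
// Untested sketch: encode a planar YUV420 (I420) image that already lives in
// device memory with nvJPEG, entirely on the GPU except for the bitstream copy.
#include <nvjpeg.h>
#include <cuda_runtime.h>
#include <vector>
#include <fstream>

static void encode_i420_device(const uint8_t *d_y, const uint8_t *d_u, const uint8_t *d_v,
                               int pitch_y, int pitch_uv, int width, int height,
                               const char *path, cudaStream_t stream)
{
  nvjpegHandle_t handle;
  nvjpegEncoderState_t state;
  nvjpegEncoderParams_t params;
  nvjpegCreateSimple(&handle);
  nvjpegEncoderStateCreate(handle, &state, stream);
  nvjpegEncoderParamsCreate(handle, &params, stream);
  nvjpegEncoderParamsSetSamplingFactors(params, NVJPEG_CSS_420, stream);
  nvjpegEncoderParamsSetQuality(params, 85, stream);

  // Planar source: channel[0]=Y, channel[1]=U, channel[2]=V (device pointers).
  nvjpegImage_t src{};
  src.channel[0] = const_cast<uint8_t *>(d_y);  src.pitch[0] = pitch_y;
  src.channel[1] = const_cast<uint8_t *>(d_u);  src.pitch[1] = pitch_uv;
  src.channel[2] = const_cast<uint8_t *>(d_v);  src.pitch[2] = pitch_uv;

  nvjpegEncodeYUV(handle, state, params, &src, NVJPEG_CSS_420, width, height, stream);
  cudaStreamSynchronize(stream);

  // Query the bitstream size, then copy it out and write to disk.
  size_t length = 0;
  nvjpegEncodeRetrieveBitstream(handle, state, nullptr, &length, stream);
  std::vector<unsigned char> jpeg(length);
  nvjpegEncodeRetrieveBitstream(handle, state, jpeg.data(), &length, stream);
  cudaStreamSynchronize(stream);
  std::ofstream(path, std::ios::binary).write(reinterpret_cast<char *>(jpeg.data()), length);

  nvjpegEncoderParamsDestroy(params);
  nvjpegEncoderStateDestroy(state);
  nvjpegDestroy(handle);
}
```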