Scaling on Jetson Nano

Hi all,
I saw in the DeepStream SDK that scaling can be accelerated with the VIC/ISP.
I use OpenCV + GStreamer in Python with NVDEC to decode the stream, but resizing is a problem: the OpenCV resize function runs on the CPU.
Q1- What are the VIC/ISP? Are they hardware blocks like NVDEC?
Q2- How can I use the VIC or ISP for resizing?


You can use the VIC engine. It is a hardware block like NVDEC.

To use the VIC in the DeepStream SDK, call NvBufSurfTransform() from the NvBufSurf APIs, or use the nvvideoconvert plugin in GStreamer. You can use either one.
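As a rough sketch of the GStreamer route: with OpenCV built with GStreamer support, you can hand cv2.VideoCapture a pipeline string that decodes with the hardware decoder and scales with nvvideoconvert before frames ever reach the CPU. The helper below only assembles that pipeline string; the element names (rtspsrc, nvv4l2decoder, nvvideoconvert) are from the Jetson/DeepStream plugin set, and the URI and sizes are placeholders for your setup. (On a plain JetPack install without DeepStream, the equivalent plugin is nvvidconv.)

```python
def make_scaling_pipeline(uri, width, height):
    """Build a GStreamer pipeline string that decodes an RTSP stream with
    NVDEC (nvv4l2decoder) and scales it in hardware via nvvideoconvert,
    so no CPU-side cv2.resize() is needed."""
    return (
        f"rtspsrc location={uri} ! rtph264depay ! h264parse ! "
        "nvv4l2decoder ! "                                        # NVDEC: HW decode
        "nvvideoconvert ! "                                       # VIC/GPU: HW scale + convert
        f"video/x-raw,format=BGRx,width={width},height={height} ! "
        "videoconvert ! video/x-raw,format=BGR ! "                # final BGR for OpenCV
        "appsink drop=1"
    )

pipeline = make_scaling_pipeline("rtsp://192.168.1.10/stream", 500, 500)
print(pipeline)
```

You would then open it with `cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)`; frames arrive already resized.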

Thanks so much.
If the image is located in CPU memory and I want to resize it, what is the best solution, especially in Python?
I work with multiple 1080p streams and have two deep models processing them: one takes 1920x1080 input and the other takes 500x500. The 1920x1080 model is no problem, since I feed the streams to it directly, but for the 500x500 model I need to resize, and OpenCV's resize runs on the CPU and is slow. I want to do this task efficiently in hardware or on the GPU. In your opinion, the GPU is the better place for this, right?

Please check

And share which sample is close to your use case and what the deviation is. Then we can check whether it is possible in Python.

In C code, you can refer to dsexample in


which demonstrates scaling through NvBufSurfTransform().

In deepstream_python_apps, we have only a few plugins, right? I think some things, like NvBufSurfTransform() or QR/barcode plugins, do not exist in deepstream_python_apps, right?

I want to use a detection model on multi-stream RTSP.

NvBufSurfTransform() is not in the Python bindings. You can use the nvvideoconvert plugin instead. For multi-stream RTSP, please check deepstream-test3
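To sketch how this fits together, modeled loosely on deepstream-test3: several RTSP sources are batched by nvstreammux, and an nvvideoconvert + caps stage scales the batch in hardware for the 500x500 model. The helper below only builds a gst-launch-style pipeline description; the URIs, 1920x1080 mux size, and config file name are placeholders, and in a real DeepStream app nvinfer itself also scales frames to the network input size on the GPU, so the explicit nvvideoconvert stage is shown mainly to illustrate where VIC scaling slots in.

```python
def make_multistream_pipeline(uris, model_width=500, model_height=500):
    """Build a gst-launch-style description for N RTSP streams, batched by
    nvstreammux and hardware-scaled by nvvideoconvert before inference."""
    # Each source decodes (uridecodebin picks NVDEC) and links to a mux sink pad.
    sources = " ".join(
        f"uridecodebin uri={uri} ! m.sink_{i}" for i, uri in enumerate(uris)
    )
    return (
        f"{sources} "
        f"nvstreammux name=m batch-size={len(uris)} width=1920 height=1080 ! "
        "nvvideoconvert ! "                                  # VIC/GPU scaling
        f"video/x-raw(memory:NVMM),width={model_width},height={model_height} ! "
        "nvinfer config-file-path=infer_config.txt ! fakesink"
    )

desc = make_multistream_pipeline(["rtsp://cam1/stream", "rtsp://cam2/stream"])
print(desc)
```

In a Python app you would pass such a description to `Gst.parse_launch()`, or build the equivalent elements programmatically as deepstream-test3 does.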

I want to use the TLT FaceDetection model with these deepstream_python_apps samples. How do I do that? Is it possible?