• Hardware Platform (Jetson / GPU): Jetson NX
• DeepStream Version: 6.0.1
• JetPack Version (valid for Jetson only): 4.6
• TensorRT Version: 8.0
I want to run inference on only a rectangular region of the input tensor, where the rectangle's position and size depend on the input tensor itself. So first I need to copy the tensor from the GPU and calculate the rectangle; then I should put the data inside the rectangle back onto the GPU (nvinfer->input_tensor_from_meta?). Simply speaking, I want to substitute the original meta_data just before inference.
Please tell me how to do this. Should it go in gst-nvinfer.cpp or in gstnvdspreprocess.cpp?
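For the host-side step in between, a minimal sketch of computing the rectangle is shown below. This is not DeepStream API code: the `Rect` struct, the threshold criterion, and the function name are all hypothetical, and it assumes the tensor has already been copied to host memory (in the real pipeline that copy would be a cudaMemcpyDeviceToHost on the bound input buffer).

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Hypothetical rectangle on the input tensor (inclusive corners).
struct Rect { int x0, y0, x1, y1; };

// Compute the bounding rectangle of all pixels above a threshold in a
// single-channel height x width tensor that was copied to host memory.
// The threshold criterion is only an example; any content-dependent
// rule for picking the rectangle would slot in here.
Rect boundingRect(const std::vector<uint8_t>& tensor, int width, int height,
                  uint8_t threshold) {
    Rect r{width, height, -1, -1};  // "empty" until the first hit
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
            if (tensor[y * width + x] > threshold) {
                r.x0 = std::min(r.x0, x);
                r.y0 = std::min(r.y0, y);
                r.x1 = std::max(r.x1, x);
                r.y1 = std::max(r.y1, y);
            }
    return r;
}
```

The rectangle computed this way would then drive whatever copy you issue back to the device before inference.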
Also, with YUY2 USB camera input, what is the color format?
switch (init_params->networkInputFormat) {
  case NvDsInferFormat_RGB:
  case NvDsInferFormat_BGR:
    if (nvinfer->transform_config_params.compute_mode == NvBufSurfTransformCompute_VIC) {
      color_format = NVBUF_COLOR_FORMAT_RGBA;
    } else {
      color_format = NVBUF_COLOR_FORMAT_RGB;
    }
    break;
  case NvDsInferFormat_GRAY:
    if (nvinfer->transform_config_params.compute_mode == NvBufSurfTransformCompute_VIC) {
      color_format = NVBUF_COLOR_FORMAT_NV12;
    } else {
      color_format = NVBUF_COLOR_FORMAT_GRAY8;
    }
    break;
  /* remaining cases omitted */
}
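The mapping that switch implements can be distilled into a small standalone function. The enum and the returned strings below are simplified stand-ins for the real NvDsInferFormat / NvBufSurface values, not the actual headers; it only illustrates that the VIC path uses RGBA (or NV12 for grayscale) as the intermediate surface format instead of packed 3-channel RGB.

```cpp
#include <string>

// Simplified stand-in for NvDsInferFormat (not the real header).
enum class NetFormat { RGB, BGR, GRAY };

// Mirrors the gst-nvinfer selection logic: on the VIC compute path the
// intermediate surface is RGBA (or NV12 for grayscale networks); on the
// GPU path it matches the network's channel layout directly.
std::string chooseColorFormat(NetFormat fmt, bool useVic) {
    switch (fmt) {
    case NetFormat::RGB:
    case NetFormat::BGR:
        return useVic ? "NVBUF_COLOR_FORMAT_RGBA" : "NVBUF_COLOR_FORMAT_RGB";
    case NetFormat::GRAY:
        return useVic ? "NVBUF_COLOR_FORMAT_NV12" : "NVBUF_COLOR_FORMAT_GRAY8";
    }
    return "unknown";
}
```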
You can check this in your environment:
1. Run gst-inspect-1.0 nvdspreprocess on the command line.
2. Check libnvdsgst_preprocess.so under /opt/nvidia/deepstream/deepstream-6.1/lib.