Inference using a model with 6-channel input

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): GPU
• DeepStream Version: 6.0

We have a dual-encoder semantic segmentation model that takes 2 inputs (each an RGB image). The model is loaded with nvinfer, and we are trying to provide both inputs to it.

We are trying to create our own custom tensor through nvdspreprocess and provide it to nvinfer. This custom tensor should consist of the two RGB input images stacked along the channel dimension (size: 1x6xHxW). To test whether this is possible, we set the tensor dimensions to 1x6xHxW in the nvdspreprocess configuration.
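A minimal sketch of the kind of [property] settings involved (key names as in the shipped nvdspreprocess sample config_preprocess.txt; H, W, the tensor name, and the library path are placeholders for our actual values):

    [property]
    enable=1
    target-unique-ids=1
    # 0 = NCHW
    network-input-order=0
    processing-width=W
    processing-height=H
    # batch;channels;height;width -- this is where the 6 channels go
    network-input-shape=1;6;H;W
    # 0 = RGB (the setting the failing check keys on)
    network-color-format=0
    # 0 = FP32
    tensor-data-type=0
    tensor-name=<name of the model input layer>
    custom-lib-path=<path to the custom preprocess lib>.so
    custom-tensor-preparation-function=CustomTensorPreparation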
However, we’re getting the error “RGB/BGR input format specified but network input channels is not 3” followed by “normalization_mean_subtraction_impl_initialize failed”, which suggests that we cannot set the tensor’s channel count to anything other than 3. Is that correct? If so, what other ways are there to create a 6-channel tensor and send it to nvinfer? Or, alternatively, could we provide 2 RGB frames to nvinfer as input instead of one?

You may have to customize the whole nvdspreprocess_lib. You need to skip the network_color_format check in normalization_mean_subtraction_impl_initialize(), since your network no longer accepts a plain image format (RGB, BGR, …).
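For illustration, the change amounts to either dropping that check or relaxing it so a 6-channel tensor passes. Below is a small standalone sketch of the relaxed variant; the enum and function names are stand-ins, not the actual code in nvdspreprocess_impl.cpp:

    #include <cstdio>

    // Stand-in for the color-format enum used by nvdspreprocess_lib; the real
    // enum and field names live in the DeepStream sources and may differ.
    enum PreProcessFormat { FORMAT_RGB, FORMAT_BGR, FORMAT_GRAY, FORMAT_TENSOR };

    // Relaxed version of the channel check that currently prints
    // "RGB/BGR input format specified but network input channels is not 3":
    // instead of requiring exactly 3 channels for RGB/BGR, accept any multiple
    // of 3, so a 1x6xHxW tensor built from two stacked RGB frames passes.
    bool validate_input_channels(PreProcessFormat fmt, int channels)
    {
      if ((fmt == FORMAT_RGB || fmt == FORMAT_BGR) && (channels % 3 != 0)) {
        std::printf("RGB/BGR format specified but %d channels is not a multiple of 3\n",
                    channels);
        return false;
      }
      if (fmt == FORMAT_GRAY && channels != 1) {
        std::printf("GRAY format specified but channel count is %d, not 1\n", channels);
        return false;
      }
      return true;
    }

    int main()
    {
      // Two RGB frames stacked along the channel dimension -> 6 channels: OK.
      return validate_input_channels(FORMAT_RGB, 6) ? 0 : 1;
    }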

Hi, thanks for the feedback.

What about the other option: providing 2 RGB frames to nvinfer as input instead of one? Is this currently supported by nvinfer?

You would need to customize nvinfer to support two RGB frame input layers.
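Whichever route you take, note that once both frames are in planar float (NCHW) form, stacking them along the channel dimension is cheap. A minimal CUDA sketch, assuming you already have the two converted frames on the GPU (the function below is illustrative, not part of the nvdspreprocess or nvinfer API):

    #include <cuda_runtime.h>

    // Sketch only (not the nvdspreprocess API): pack two planar RGB frames,
    // each already converted to NCHW float planes of size 3*H*W, into a single
    // 1x6xHxW tensor. In NCHW layout, stacking along the channel dimension is
    // just two contiguous device-to-device copies: frame A's three planes
    // followed by frame B's three planes.
    cudaError_t stack_two_rgb_planar(float* dst_6chw,      // device, 6*H*W floats
                                     const float* frame_a, // device, 3*H*W floats
                                     const float* frame_b, // device, 3*H*W floats
                                     int height, int width,
                                     cudaStream_t stream)
    {
      const size_t plane_elems = 3ull * height * width;
      const size_t plane_bytes = plane_elems * sizeof(float);

      // Channels 0..2 of the output: frame A.
      cudaError_t err = cudaMemcpyAsync(dst_6chw, frame_a, plane_bytes,
                                        cudaMemcpyDeviceToDevice, stream);
      if (err != cudaSuccess) return err;

      // Channels 3..5 of the output: frame B.
      return cudaMemcpyAsync(dst_6chw + plane_elems, frame_b, plane_bytes,
                             cudaMemcpyDeviceToDevice, stream);
    }

The resulting 1x6xHxW buffer is the custom tensor you would attach via nvdspreprocess in the first approach.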

There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one.
Thanks
