How do I resize and normalize an image before nvinfer?
You can use nvstreammux to resize the image before it is fed to nvinfer.
Here is an example of the streammux parameters in a DeepStream configuration file:
[streammux]
gpu-id=0
## Boolean property to inform muxer that sources are live
live-source=0
batch-size=1
## timeout in usec, to wait after the first buffer is available
## to push the batch even if the complete batch is not formed
batched-push-timeout=40000
## Set muxer output width and height
width=1280
height=720
## Enable to maintain aspect ratio wrt source, and allow black borders, works
## along with width, height properties
enable-padding=0
nvbuf-memory-type=0
nvinfer will normalize the image for you.
The formula is as follows, where x is the input pixel value and mean is the configured mean value (taken from the offsets property or a mean image file, not computed from the input):
normalized = net-scale-factor * (x - mean)
Here is an example nvinfer configuration showing where net-scale-factor is set:
[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
## 0=RGB, 1=BGR
model-color-format=0
custom-network-config=yolov3-tiny.cfg
model-file=yolov3-tiny.weights
labelfile-path=labels.txt
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=2
num-detected-classes=80
gie-unique-id=1
network-type=0
is-classifier=0
## 0=Group Rectangles, 1=DBSCAN, 2=NMS, 3=DBSCAN+NMS Hybrid, 4=None (no clustering)
cluster-mode=2
maintain-aspect-ratio=1
parse-bbox-func-name=NvDsInferParseCustomYoloV3
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
engine-create-func-name=NvDsInferYoloCudaEngineGet
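As a sketch of the arithmetic above, the following pure-Python function re-implements the per-pixel formula for illustration only; the function name is ours, not a DeepStream API. With net-scale-factor = 1/255 and no mean subtraction (the default when neither offsets nor a mean file is configured), pixel values in 0..255 are mapped to roughly 0..1:

```python
# Illustrative re-implementation of the nvinfer normalization formula
# (assumption: this mirrors y = net-scale-factor * (x - mean); it is not
# DeepStream code).
NET_SCALE_FACTOR = 0.0039215697906911373  # approximately 1/255

def nvinfer_normalize(pixel, mean=0.0):
    """Apply y = net-scale-factor * (x - mean) to one pixel value."""
    return NET_SCALE_FACTOR * (pixel - mean)

print(nvinfer_normalize(255))  # close to 1.0
print(nvinfer_normalize(0))    # 0.0
```

Passing a nonzero mean corresponds to configuring per-channel offsets in the nvinfer config file.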
Can we use different and separate mean and std values for each of the 3 color channels?
I am afraid nvinfer does not directly support separate mean and std values per channel: the offsets property accepts one mean value per color component, but net-scale-factor is a single scalar, so a per-channel std is not possible in the config alone.
However, there are a couple of ways to achieve the same result.
One way is to implement your own NvDsInferConvertFcn conversion function in deepstream-5.0/sources/libs/nvdsinfer/nvdsinfer_context_impl.cpp.
Another way is to add a Batch Normalization or Instance Normalization layer in front of your deep learning model, where you set γ=1 and β=0 and fix the running mean and variance of each channel to your desired per-channel mean and std² yourself, so the layer reduces to (x - mean) / std per channel.
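To see why the frozen-normalization trick works, here is a small pure-Python sketch (not DeepStream or framework code) of a Batch Normalization layer in inference mode. With γ=1, β=0, eps=0, and the running statistics pinned to your chosen per-channel values, it reduces to exactly (x - mean) / std. The ImageNet-style means and stds below are an assumption for illustration:

```python
# Sketch of an inference-mode BatchNorm: y = gamma * (x - mean) / sqrt(var + eps) + beta.
# Freezing gamma=1, beta=0 and pinning the running stats turns it into
# plain per-channel standardization.
def frozen_batchnorm(x, running_mean, running_var, gamma=1.0, beta=0.0, eps=0.0):
    return gamma * (x - running_mean) / ((running_var + eps) ** 0.5) + beta

# Hypothetical per-channel stats (ImageNet-style values, for illustration only)
means = [0.485, 0.456, 0.406]
stds  = [0.229, 0.224, 0.225]

# One RGB pixel, already scaled to 0..1 by net-scale-factor upstream
pixel = [0.5, 0.5, 0.5]
normalized = [frozen_batchnorm(p, m, s * s) for p, m, s in zip(pixel, means, stds)]
print(normalized)
```

In practice you would bake such a layer into the model graph (e.g. in the ONNX or Darknet definition) before building the TensorRT engine, and keep net-scale-factor handling only the 0..255 to 0..1 scaling.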