According to the document below, nvinfer seems to preprocess the input image (it performs normalization and mean subtraction).
https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_plugin_gst-nvinfer.html
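For reference, as I understand that page, the normalization is controlled by keys in the [property] section of the nvinfer config file, and the transform is roughly y = net-scale-factor * (x - mean). A hypothetical excerpt of what those keys look like (the values here are placeholders, not copied from config_infer_primary.txt):

[property]
# scale applied after mean subtraction: y = net-scale-factor * (x - mean)
net-scale-factor=0.0039215686
# per-channel mean values (semicolon separated); a mean image file can be used instead
offsets=0.0;0.0;0.0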
I was able to run inference with the command below.
gst-launch-1.0 -e filesrc location=/opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! m.sink_0 nvstreammux name=m width=1280 height=720 batch-size=1 ! nvinfer config-file-path=/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_infer_primary.txt ! nvvideoconvert ! 'video/x-raw(memory:NVMM), format=RGBA' ! nvdsosd ! nvvideoconvert ! autovideosink sync=0
In this inference, “resnet10.caffemodel” is used.
This model seems to expect NCHW-formatted data with shape (1, 3, 368, 640).
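If it matters, I assume these input dims come from the model's deploy prototxt rather than from the nvinfer config. Something like this hypothetical input definition in resnet10.prototxt (I have not checked the actual file, so the blob name is a guess):

input: "input_1"
input_shape {
  dim: 1
  dim: 3
  dim: 368
  dim: 640
}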
However, the command above feeds frames with width=1280 and height=720.
I think nvinfer automatically resizes the frames as a preprocessing step, because there is no mention of resizing in config_infer_primary.txt.
Does nvinfer automatically resize images as part of its preprocessing?
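In case it is relevant to the answer: the only scaling-related keys I could find in the gst-nvinfer documentation seem to control how the resize is done rather than whether it happens. A hypothetical excerpt (these keys are taken from the plugin docs, not from config_infer_primary.txt, and the values are just examples):

[property]
# keep the source aspect ratio when scaling to the network input resolution (0 = stretch to fit)
maintain-aspect-ratio=0
# pad symmetrically instead of bottom/right when maintain-aspect-ratio=1 (newer DeepStream versions)
symmetric-padding=0
# interpolation filter used for the scaling (see the gst-nvinfer docs for the enum values)
scaling-filter=0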