Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): Jetson
• DeepStream Version: 6.0
• JetPack Version (valid for Jetson only): 4.6.3
• TensorRT Version: 8.2
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or which sample application, and the function description.)
I used the repo https://github.com/NVIDIA-AI-IOT/yolo_deepstream/tree/main/deepstream_yolo and changed these lines to run on a single image:

```
#type=3
#uri=file:/opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4
type=2
uri=file://<path-to-image>
```
With the same config file, a JPEG image works, but a bitmap image does not. test.bmp (798.8 KB)
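DeepStream's URI source decodes images through GStreamer, and there is no NVIDIA hardware decoder for BMP, so the pipeline needs a software image decoder that may not be present. As a stdlib-only diagnostic sketch (the helper name `bmp_info` is mine, not a DeepStream API), one can read the BMP header to confirm what the file actually contains before blaming the pipeline:

```python
# Parse a BMP file header (BITMAPINFOHEADER layout) with the stdlib only.
# This only inspects the file; it does not decode pixels.
import struct

def bmp_info(data: bytes):
    """Return (width, height, bits_per_pixel, compression) from a BMP header."""
    if data[:2] != b"BM":
        raise ValueError("not a BMP file")
    # BITMAPINFOHEADER starts at byte 14; width/height at 18, bpp at 28.
    width, height = struct.unpack_from("<ii", data, 18)
    bpp, = struct.unpack_from("<H", data, 28)
    compression, = struct.unpack_from("<I", data, 30)
    return width, height, bpp, compression

# Build a minimal 1x1 24-bit uncompressed BMP in memory for demonstration.
header = (b"BM" + struct.pack("<IHHI", 58, 0, 0, 54)           # file header
          + struct.pack("<IiiHHIIiiII", 40, 1, 1, 1, 24, 0,    # info header
                        4, 2835, 2835, 0, 0))
pixel = b"\x00\x00\xff\x00"  # one BGR pixel plus row padding
print(bmp_info(header + pixel))  # -> (1, 1, 24, 0)
```

Running this on `test.bmp` (read the file in binary mode) shows its dimensions, bit depth, and whether it is compressed, which helps narrow down why the decoder rejects it.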
I checked on my PC: I can run inference on a bitmap image without setting video-format, but when I add video-format=NV12 for bitmap images, the mAP changes a lot. What do you think is the reason for the difference?
I trained the model on RGB (JPEG) images. What I mean is: when I run this model with two configs (one with video-format=NV12, one without video-format) on the same bitmap images, the mAP difference is large.
Sorry for the lack of information.
I use a Docker container on my PC with DS 6.2 and TRT 8.5, and I can run inference on bitmap images. But between runs with and without video-format=NV12 (everything else the same), there is a large mAP difference. My model was trained on RGB images.
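One plausible cause of the gap (a sketch, assuming BT.601 full-range coefficients; the exact matrix the converter uses may differ): NV12 is a 4:2:0 format, so chroma is stored only once per 2x2 pixel block. An RGB -> NV12 -> RGB round trip therefore blends the colours of neighbouring pixels, which can shift a model trained on full-resolution RGB:

```python
# Demonstrate the chroma loss of a 4:2:0 (NV12-style) round trip
# using BT.601 full-range conversion (an assumption, not DeepStream's code).

def rgb_to_ycbcr(r, g, b):
    # BT.601 full-range RGB -> YCbCr
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128.0
    cr = 0.5 * r - 0.418688 * g - 0.081312 * b + 128.0
    return y, cb, cr

def ycbcr_to_rgb(y, cb, cr):
    r = y + 1.402 * (cr - 128.0)
    g = y - 0.344136 * (cb - 128.0) - 0.714136 * (cr - 128.0)
    b = y + 1.772 * (cb - 128.0)
    return r, g, b

# A 2x2 block with one red pixel among green ones.
block = [(255, 0, 0), (0, 255, 0), (0, 255, 0), (0, 255, 0)]
ycc = [rgb_to_ycbcr(*px) for px in block]

# NV12 keeps per-pixel luma but only ONE (Cb, Cr) pair per 2x2 block.
avg_cb = sum(p[1] for p in ycc) / 4.0
avg_cr = sum(p[2] for p in ycc) / 4.0
restored = [ycbcr_to_rgb(y, avg_cb, avg_cr) for y, _, _ in ycc]

err = max(abs(a - b) for orig, rest in zip(block, restored)
          for a, b in zip(orig, rest))
print(f"max per-channel error after NV12 round trip: {err:.1f}")
```

For uniform-colour blocks the round trip is nearly lossless, but at colour edges the error is large, which is exactly where detection models look. The config without video-format presumably keeps an RGB(A) path and avoids this subsampling step.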
@junshengy
I have one more question. If my (JPEG) image size already equals the engine's input size, does DeepStream preprocess anything other than decoding?
jpeg => rgb (decoding, NVIDIA closed source) => nvinfer
I saw that my image just before TensorRT inference is different from my original image (the quality is lower). Thanks.
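For reference, the nvinfer documentation states that even when the input already matches the network resolution, the plugin still converts the buffer to the network's colour format and applies per-pixel scaling and mean subtraction: y = net-scale-factor * (x - mean). A minimal sketch of that arithmetic (the function name and the parameter values below are illustrative, not taken from any real config):

```python
# Sketch of nvinfer's documented per-pixel preprocessing:
#   y = net-scale-factor * (x - mean)
# net_scale_factor below is the common 1/255 value from the sample configs.

def nvinfer_preprocess(pixel, net_scale_factor=0.0039215697906911373,
                       offsets=(0.0, 0.0, 0.0)):
    # Apply the scale/offset independently to each channel.
    return tuple(net_scale_factor * (c - o) for c, o in zip(pixel, offsets))

print(nvinfer_preprocess((255, 128, 0)))
```

So even with a size-matched JPEG there are at least two transformations before TensorRT sees the tensor: the NV12/RGBA colour conversion from the decoder and this scale/offset step, either of which can make the buffer differ from the original file.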