I am using TensorRT 3.0 to run inference on multiple streams; every stream is the same video. When I use an FP32 YOLOv3 model (TensorRT engine file), the inference results are identical across streams and all of them are correct. But when I use an FP16 or INT8 YOLOv3 model (TensorRT engine file), the results differ between streams and only one stream is correct. I don't know why.
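For context, my setup looks roughly like the sketch below (TensorRT 3.x C++ API; the engine filename, buffer sizes, stream count, and per-frame pre/post-processing are simplified placeholders, not my exact code). Each worker thread owns its own CUDA stream and its own IExecutionContext:

```cpp
// Minimal multi-stream sketch (TensorRT 3.x C++ API). The engine path,
// binding sizes, and frame handling are placeholders, not real values.
#include <NvInfer.h>
#include <cuda_runtime_api.h>
#include <cstdio>
#include <fstream>
#include <sstream>
#include <string>
#include <thread>
#include <vector>

using namespace nvinfer1;

struct Logger : public ILogger {
    void log(Severity severity, const char* msg) override {
        if (severity != Severity::kINFO) std::printf("[TRT] %s\n", msg);
    }
} gLogger;

// One CUDA stream per worker; the IExecutionContext is created in main()
// and passed in, so no context is ever shared between streams.
static void inferWorker(IExecutionContext* ctx, int streamId) {
    cudaStream_t stream;
    cudaStreamCreate(&stream);

    // Placeholder sizes for a 1x3x416x416 input and a flat output buffer;
    // real code would use getBindingIndex()/getBindingDimensions().
    const int kBatch = 1;
    void* bindings[2];
    cudaMalloc(&bindings[0], kBatch * 3 * 416 * 416 * sizeof(float));
    cudaMalloc(&bindings[1], kBatch * 100000 * sizeof(float));

    for (int frame = 0; frame < 100; ++frame) {
        // (cudaMemcpyAsync the preprocessed frame into bindings[0] here)
        ctx->enqueue(kBatch, bindings, stream, nullptr);
        // (cudaMemcpyAsync detections out of bindings[1], postprocess here)
    }
    cudaStreamSynchronize(stream);
    std::printf("stream %d finished\n", streamId);

    cudaFree(bindings[0]);
    cudaFree(bindings[1]);
    cudaStreamDestroy(stream);
}

int main() {
    // Deserialize the engine; the same code path is used for the
    // FP32, FP16, and INT8 engine files.
    std::ifstream f("yolov3.engine", std::ios::binary);
    std::stringstream buf;
    buf << f.rdbuf();
    std::string blob = buf.str();

    IRuntime* runtime = createInferRuntime(gLogger);
    ICudaEngine* engine =
        runtime->deserializeCudaEngine(blob.data(), blob.size(), nullptr);

    // One execution context per stream, created up front.
    const int kNumStreams = 4;
    std::vector<IExecutionContext*> contexts;
    for (int i = 0; i < kNumStreams; ++i)
        contexts.push_back(engine->createExecutionContext());

    std::vector<std::thread> workers;
    for (int i = 0; i < kNumStreams; ++i)
        workers.emplace_back(inferWorker, contexts[i], i);
    for (auto& t : workers) t.join();

    for (auto* c : contexts) c->destroy();
    engine->destroy();
    runtime->destroy();
    return 0;
}
```

Giving each stream its own execution context is intentional: as far as I know, a single IExecutionContext is not safe to enqueue from several streams concurrently.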
Hello, can you provide details on the platform you are using?
Linux distro and version
GPU type
NVIDIA driver version
CUDA version
cuDNN version
Python version [if using Python]
TensorFlow version [if using TensorFlow]
TensorRT version (you mentioned 3.0)
Any source files you can share will help us debug this issue.