Input tensor is unexpectedly modified before being fed to the primary detector

  1. About “directly calling a Triton Inference Server”: do you mean you are using Python + Triton to do inference without DeepStream?
  2. So you are running inference in two ways: a DeepStream pipeline including nvinfer, and Python + Triton without DeepStream, and the DeepStream results are worse. Am I right? Please refer to this yolov8 sample. If using nvinferserver also gives worse results, let’s focus on nvinfer in this topic.
  3. In theory, if the bboxes are different, we need to compare the preprocessing data, the inference results, and the postprocessing data from both paths. Here is the method to dump the preprocessing and postprocessing data.
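Once you have raw tensor dumps from both paths, a quick way to check whether preprocessing matches is to load the binary dumps and compare them element-wise. This is only a minimal sketch (not an official DeepStream tool); the file names, dtype, and shape below are assumptions you should replace with your model's actual input layer layout:

```python
import numpy as np

def load_raw(path, shape, dtype=np.float32):
    """Load a raw binary tensor dump into a numpy array of the given shape."""
    data = np.fromfile(path, dtype=dtype)
    return data.reshape(shape)

def max_abs_diff(a, b):
    """Maximum absolute element-wise difference between two tensors."""
    return float(np.abs(a.astype(np.float64) - b.astype(np.float64)).max())

# Self-contained demo: write two synthetic "dumps" to disk, then diff them.
# In practice you would point load_raw() at the DeepStream dump and the
# python+triton dump instead.
shape = (1, 3, 8, 8)  # stand-in for the real input shape, e.g. (1, 3, 640, 640)
rng = np.random.default_rng(0)
tensor = rng.random(shape).astype(np.float32)
tensor.tofile("dump_a.bin")
(tensor + 1e-6).astype(np.float32).tofile("dump_b.bin")

a = load_raw("dump_a.bin", shape)
b = load_raw("dump_b.bin", shape)
print("max abs diff:", max_abs_diff(a, b))
```

If the preprocessing tensors already differ (e.g. by more than float rounding noise), the bbox mismatch is explained before inference even runs; otherwise repeat the same comparison on the inference outputs and the postprocessed results.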