Sorry for the late reply; I have been busy with other things.
In our application, we pre-process pictures in JPG format, pass the processed images to the inference stage, and the inference stage then hands its outputs to post-processing, where the final results are produced. We don't use the "buffers" at all.
In your MNIST demo, the pictures in PGM format are read and loaded into input buffers, then TensorRT reads those buffers and runs inference, and finally stores the outputs into output buffers.
So my questions are:
Can the buffers recognize JPG files, or do you have a function to convert a JPG file to a PGM file?
Can I directly use the processed images for TensorRT inference without first storing them in the input buffers? If so, how?
Similarly, can I directly use the inference outputs without reading them from the output buffers? If so, how?
In fact, I am confused about the buffer workflow in TensorRT and its advantages; it makes porting our algorithms a bit complicated.