What is the composition of the input memory buffer for C++ inference?
I've trained, pruned, and retrained a VGG16 U-Net model and tested inference in the U-Net notebook; everything works fine.
Now I need to use it in C++.
Using the TensorRT quickstart C++ example, I am able to load the model and query dimensions, input and output layer names, etc.
All the samples seem to use the PPM format, with all the channels for one pixel stored together (interleaved).
I am reading the PNG images with OpenCV into a 3-channel Mat, but I don't know what layout the model expects the buffer to have, i.e. the order of pixels by channel, row, and column. For example, with P as pixel and C as channel, is it interleaved (all channels for one pixel together):

P(1,1)C1 P(1,1)C2 P(1,1)C3 P(2,1)C1 P(2,1)C2 P(2,1)C3 … P(512,512)C1 P(512,512)C2 P(512,512)C3

or planar (one full channel at a time):

P(1,1)C1 P(2,1)C1 P(3,1)C1 P(4,1)C1 P(5,1)C1 … P(512,512)C1 P(1,2)C1 P(2,2)C1 P(3,2)C1 …

Or something else?
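In case it helps clarify the question: if the answer is the planar layout (i.e. NCHW, which OpenCV's interleaved Mat does not match), I assume the repacking would look roughly like the sketch below. `hwcToChw` is my own helper name, and the divide-by-255 normalization is a guess based on typical preprocessing, not something I've confirmed for this model:

```cpp
#include <cstdint>
#include <vector>

// Repack an interleaved HWC uint8 image (the layout of a 3-channel
// cv::Mat, represented here as a flat vector) into a planar CHW float
// buffer, scaling values to [0, 1].
std::vector<float> hwcToChw(const std::vector<std::uint8_t>& hwc,
                            int height, int width, int channels) {
    std::vector<float> chw(
        static_cast<std::size_t>(channels) * height * width);
    for (int c = 0; c < channels; ++c)
        for (int y = 0; y < height; ++y)
            for (int x = 0; x < width; ++x)
                // Source index: row-major pixels, channels interleaved.
                // Destination index: one full channel plane at a time.
                chw[(static_cast<std::size_t>(c) * height + y) * width + x] =
                    hwc[(static_cast<std::size_t>(y) * width + x) * channels + c]
                    / 255.0f;
    return chw;
}
```

Is this the kind of conversion that's needed before copying the buffer to the device, or does the engine accept the interleaved layout directly?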
If the model is quantized to INT8, are the input values uint8_t or float?