TensorRT 2 sample INT8

I have run sampleINT8 on MNIST successfully, and now I want to run it on GoogleNet.

I think the data should be in the form Batches/Batch0, Batch1, …, but I don’t know how to prepare these files.
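For reference, here is a minimal sketch of how one such batch file might be written. The layout is an assumption inferred from what BatchStream.h appears to read (a header of four int32s N, C, H, W, followed by N*C*H*W float32 image values and N float32 labels); verify it against the BatchStream.h shipped with your TensorRT version before relying on it.

```python
import struct

def write_batch_file(path, dims, pixels, labels):
    """Write one calibration batch.

    Assumed layout (check against BatchStream.h in your TensorRT release):
      header : four little-endian int32s  -> N, C, H, W
      body   : N*C*H*W little-endian float32 image values
      tail   : N little-endian float32 labels
    """
    n, c, h, w = dims
    assert len(pixels) == n * c * h * w and len(labels) == n
    with open(path, "wb") as f:
        f.write(struct.pack("<4i", n, c, h, w))
        f.write(struct.pack(f"<{len(pixels)}f", *pixels))
        f.write(struct.pack(f"<{n}f", *labels))

# The calibration set would then be written as Batches/Batch0,
# Batches/Batch1, ... with one file per batch.
```

Each file holds a single batch of preprocessed (already mean-subtracted/scaled) images; the sample is then pointed at the directory containing them.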

Can you tell me how to run sampleINT8 on GoogleNet?

Thanks for your help.


Can you please share the steps required to run sampleINT8 on MNIST with TensorRT?


After building with make, move to the ‘…/…/bin’ folder and run ‘./sample_int8 mnist’; you will see the results.

I use a P4 and a P40 GPU; note that some GPUs do not support INT8 mode.

I am not sure how the calibration works either.

You can read the code in ‘BatchStream.h’ under the ‘sampleINT8’ folder; I think it shows how the program reads the batches. I’m working on it too.
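If it helps, the read path in BatchStream.h can be mirrored in Python to inspect a batch file you have produced. The layout below is an assumption (four int32s N, C, H, W, then float32 image data, then float32 labels); check it against the actual fread calls in BatchStream.h for your TensorRT version.

```python
import struct

def read_batch_file(path):
    """Parse one calibration batch under the assumed layout (verify
    against BatchStream.h): [N, C, H, W] int32 header, then N*C*H*W
    float32 image values, then N float32 labels."""
    with open(path, "rb") as f:
        n, c, h, w = struct.unpack("<4i", f.read(16))
        count = n * c * h * w
        pixels = struct.unpack(f"<{count}f", f.read(4 * count))
        labels = struct.unpack(f"<{n}f", f.read(4 * n))
    return (n, c, h, w), list(pixels), list(labels)
```

Round-tripping a file through a reader like this is a quick way to confirm your batch writer matches what the sample expects.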


Thanks for your reply.

I am also trying to run inference with my own Keras/Theano-trained models using TensorRT. I am following the approach TensorRT provides in the sampleMNISTAPI folder, which builds the network without using Caffe.

I generated the weightsapi.wts file required by sampleMNISTAPI, but my inference results are not correct.
I tried various ways of dumping the Keras/Theano-trained weights into the .wts file (row-wise and column-wise for each Convolution and Dense layer), but it didn’t work.
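In case it helps with debugging, here is a hedged sketch of dumping weights into the text layout that sampleMNISTAPI’s loadWeights appears to parse: a blob count on the first line, then one line per blob with a name, a decimal element count, and the raw float32 bits as hexadecimal words. Both the format and the ordering notes are assumptions to verify against loadWeights in sampleMNISTAPI.cpp. A common cause of wrong inference is layout: Keras stores Dense kernels as (inputs, outputs), whereas TensorRT’s fully connected layer expects row-major (outputs, inputs), so Dense kernels likely need a transpose before flattening.

```python
import struct

def float_bits_hex(v):
    # Raw IEEE-754 bits of a float32, as a lowercase hex string.
    return format(struct.unpack("<I", struct.pack("<f", v))[0], "x")

def write_wts(path, blobs):
    """Write a .wts file in the assumed sampleMNISTAPI text layout:
    first line = number of blobs; each following line =
    '<name> <decimal count> <hex word> <hex word> ...'.
    `blobs` maps blob name -> flat list of float32 values
    (flatten conv kernels row-major; transpose Dense kernels first)."""
    with open(path, "w") as f:
        f.write(f"{len(blobs)}\n")
        for name, vals in blobs.items():
            f.write(f"{name} {len(vals)} ")
            f.write(" ".join(float_bits_hex(v) for v in vals))
            f.write("\n")
```

With a helper like this you can dump one small layer at a time and compare the parsed values against the Keras weights to isolate which layer’s ordering is wrong.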

Can you please let me know if you have been able to run the sampleMNISTAPI example with your own Keras/Theano or Keras/TensorFlow models?