TensorRT

I am trying to run the TensorRT Python API code given in this link: API Reference :: NVIDIA Deep Learning TensorRT Documentation

That code is an example of optimizing an MNIST Caffe model with TensorRT. I could not find the predefined MNIST files inside the caffe/data/mnist folder, so I used mnist.caffemodel, mnist.prototxt, and mean.binaryproto from this GitHub link: https://github.com/DigitalGlobe/gbdx-caffe/tree/master/work/input/model.
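For reference, the core of the script I am running looks roughly like this (a minimal sketch assuming a TensorRT version that ships the Python CaffeParser; the output blob name "prob" is taken from the MNIST sample and may differ for other models):

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_engine(deploy_file, model_file):
    # Parse the Caffe deploy prototxt and trained weights into a TensorRT network
    with trt.Builder(TRT_LOGGER) as builder, \
         builder.create_network() as network, \
         trt.CaffeParser() as parser:
        builder.max_workspace_size = 1 << 28  # scratch GPU memory the builder may use
        model_tensors = parser.parse(deploy=deploy_file, model=model_file,
                                     network=network, dtype=trt.float32)
        # Mark the softmax blob ("prob" in the MNIST sample) as the network output
        network.mark_output(model_tensors.find("prob"))
        return builder.build_cuda_engine(network)

engine = build_engine("mnist.prototxt", "mnist.caffemodel")
```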

While running the code, I am getting this protobuf error:

[libprotobuf FATAL /home/erisuser/p4sw/sw/gpgpu/MachineLearning/DIT/externals/protobuf/x86_64/10.0/include/google/protobuf/repeated_field.h:1078] CHECK failed: (index) < (current_size_):
terminate called after throwing an instance of 'google_private::protobuf::FatalException'
what(): CHECK failed: (index) < (current_size_):
Aborted (core dumped)

Can anyone help me solve this? Also, it would be very helpful if there is a demo on optimizing an MNIST model built using Caffe in TensorRT.

We created a new “Deep Learning Training and Inference” section in Devtalk to improve the experience for deep learning, accelerated computing, and HPC users:
https://devtalk.nvidia.com/default/board/301/deep-learning-training-and-inference-/

We are moving active deep learning threads to the new section.

URLs for topics will not change with the re-categorization, so your bookmarks and links will continue to work as before.

-Siddharth

The data file used is expected to live inside the data directory. How did you install TensorRT: Debian package or tarball? Can you point to the link you used?

Thank you

The issue got solved; it was due to a conflict between the binaryproto path and the other files' paths.

Now I have a few different questions:

What is a binaryproto file? What is it used for?

What is workspace memory?

After creating the engine file, should I just use it for inference? I am not clear on why we need data for inference, since inference is not training related.

The data in the inference samples consists of the weights from training along with any input files that are needed by the application.
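As a rough illustration (a sketch assuming the TensorRT Python API together with pycuda; the binding order and batch size here are illustrative), inference with a built engine looks like this:

```python
import numpy as np
import pycuda.autoinit  # creates a CUDA context
import pycuda.driver as cuda
import tensorrt as trt

def infer(engine, input_image):
    # Host buffers: the input image and space for the network output
    h_input = np.ascontiguousarray(input_image, dtype=np.float32)
    h_output = np.empty(trt.volume(engine.get_binding_shape(1)), dtype=np.float32)
    # Device buffers for each engine binding (input = 0, output = 1 here)
    d_input = cuda.mem_alloc(h_input.nbytes)
    d_output = cuda.mem_alloc(h_output.nbytes)
    with engine.create_execution_context() as context:
        cuda.memcpy_htod(d_input, h_input)    # copy the input to the GPU
        context.execute(batch_size=1, bindings=[int(d_input), int(d_output)])
        cuda.memcpy_dtoh(h_output, d_output)  # copy the result back
    return h_output  # e.g. per-digit scores for MNIST
```

The engine itself only contains the optimized network and the trained weights; the input files (images, the mean file) are the "data" the application feeds through it.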

For more information on protocol buffers, you can read about them here: Protocol Buffers  |  Google Developers

Some frameworks (Caffe, TensorFlow) use them to serialize the network, and TensorRT loads the network in this format from those frameworks.
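The mean.binaryproto file you asked about is one such protocol buffer: it holds the per-pixel mean image computed over the training set, which is subtracted from each input during preprocessing. As a sketch (again assuming a TensorRT build with the Python CaffeParser), it can be decoded directly:

```python
import numpy as np
import tensorrt as trt

with trt.CaffeParser() as parser:
    # parse_binary_proto decodes the serialized Caffe BlobProto into a numpy array
    mean = parser.parse_binary_proto("mean.binaryproto")

# For MNIST the mean has a single 28x28 channel; subtract it from the flattened input
image = np.random.rand(28 * 28).astype(np.float32)  # placeholder input
preprocessed = image - mean
```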