I’ve modified the SampleOnnxMNIST C++ project to load a custom model (and have changed the input and output node names accordingly), but the binary fails in ConstructNetwork, throwing the following error.
Here are the model file and my slightly changed SampleOnnxMNIST.cpp file.
I’ve tried the following two approaches to produce the ONNX model file from TensorFlow 2.5 using tf2onnx:
TF checkpoint → pb → ONNX
TF checkpoint → SavedModel → ONNX
With both approaches, I run into the same error. I am at a loss as to where I am going wrong; any help would be greatly appreciated.
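For reference, a typical tf2onnx command line for the SavedModel route might look like this (a sketch only; the paths and opset value are placeholders, not taken from this thread):

```shell
# Hypothetical paths; adjust to your own SavedModel directory and target opset.
python -m tf2onnx.convert --saved-model ./saved_model_dir --output model.onnx --opset 13
```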
Steps To Reproduce
This C++ file can be used in place of the one at ‘TensorRT-8.0.1.6\samples\sampleOnnxMNIST’,
and the model.onnx file is expected to be in ‘TensorRT-8.0.1.6\data’.
The project was built with Visual Studio 2017. On a successful build, ‘sample_onnx_mnist.exe’ is generated in the bin folder and can be run from the command line (no arguments required).
Hi,
Could you share the ONNX model and the script, if not shared already, so that we can assist you better?
In the meantime, you can try a few things:
1) Validate your model with the snippet below:
check_model.py
import onnx

filename = "yourONNXmodel"  # replace with the path to your .onnx file
model = onnx.load(filename)
onnx.checker.check_model(model)
2) Try running your model with the trtexec command: https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/trtexec
If you are still facing the issue, please share the trtexec “--verbose” log for further debugging.
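For reference, a typical trtexec invocation that captures the verbose log might look like this (the paths are placeholders; run it from the TensorRT bin directory):

```shell
# Hypothetical paths; --onnx and --verbose are standard trtexec flags.
trtexec --onnx=../data/model.onnx --verbose > trtexec_verbose.log 2>&1
```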
Thanks!
We could reproduce the issue. It looks like the model has a dynamic channel size for the convolution layers, which TRT does not support yet. Please allow us some time to work on this.
Hi @ashwin.kannan3
The Google Drive links are no longer accessible, so we cannot reach your model and code.
Could you please grant us access so we can help you better?