Hi,
I’ve upgraded TensorRT to version 5.0.2.6 and now I can’t run the sampleOnnxMNIST sample. I also can’t run my own ONNX model, and I get a core-dumped error.
You can see the error below:
ERROR: Network must have at least one output
sample_onnx_mnist: sampleOnnxMNIST.cpp:64: void onnxToTRTModel(const string&, unsigned int, nvinfer1::IHostMemory*&): Assertion `engine’ failed.
Aborted (core dumped)
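From what I can tell, the error means that no tensor is marked as a network output after parsing, so the builder refuses to build the engine. One workaround I’ve seen suggested (a sketch against the TensorRT 5 C++ API; the helper name is mine, and it assumes the parser did populate the network) is to mark the last layer’s output manually before building:

#include "NvInfer.h"

// Sketch: if the ONNX parser registered no outputs, mark the last layer's
// first output tensor as the network output before building the engine.
// `network` is the nvinfer1::INetworkDefinition the parser populated.
void markLastLayerAsOutput(nvinfer1::INetworkDefinition* network)
{
    int nbLayers = network->getNbLayers();
    if (nbLayers > 0 && network->getNbOutputs() == 0)
    {
        nvinfer1::ITensor* last = network->getLayer(nbLayers - 1)->getOutput(0);
        network->markOutput(*last);
    }
}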
What platform (Windows/Linux) are you on? I’m able to use the TensorRT 18.11 container, and it runs the MNIST sample fine.
NVIDIA Release 18.11 (build 817536)
NVIDIA TensorRT 5.0.2 (c) 2016-2018, NVIDIA CORPORATION. All rights reserved.
Container image (c) 2018, NVIDIA CORPORATION. All rights reserved.
https://developer.nvidia.com/tensorrt
To install Python sample dependencies, run /opt/tensorrt/python/python_setup.sh
root@2ca610df4db7:/workspace# ls
README.md tensorrt
root@2ca610df4db7:/workspace# cd tensorrt/
TensorRT-Release-Notes.pdf bin/ data/ doc/ python/ samples/
root@2ca610df4db7:/workspace# cd tensorrt/samples/
Makefile getDigits/ sampleFasterRCNN/ sampleINT8API/ sampleMNISTAPI/ sampleNMT/ sampleSSD/ trtexec/
Makefile.config python/ sampleGoogleNet/ sampleMLP/ sampleMovieLens/ sampleOnnxMNIST/ sampleUffMNIST/
common/ sampleCharRNN/ sampleINT8/ sampleMNIST/ sampleMovieLensMPS/ samplePlugin/ sampleUffSSD/
root@2ca610df4db7:/workspace# cd tensorrt/samples/sampleOnnxMNIST/
root@2ca610df4db7:/workspace/tensorrt/samples/sampleOnnxMNIST# ls
Makefile README sampleOnnxMNIST.cpp
root@2ca610df4db7:/workspace/tensorrt/samples/sampleOnnxMNIST# cat README
This sample demonstrates conversion of an MNIST network in ONNX format to
a TensorRT network. The network used in this sample can be found at https://github.com/onnx/models/tree/master/mnist
(model.onnx)
root@2ca610df4db7:/workspace/tensorrt/samples/sampleOnnxMNIST# make
../Makefile.config:5: CUDA_INSTALL_DIR variable is not specified, using /usr/local/cuda by default, use CUDA_INSTALL_DIR=<cuda_directory> to change.
../Makefile.config:8: CUDNN_INSTALL_DIR variable is not specified, using $CUDA_INSTALL_DIR by default, use CUDNN_INSTALL_DIR=<cudnn_directory> to change.
:
Compiling: sampleOnnxMNIST.cpp
Linking: ../../bin/sample_onnx_mnist_debug
:
Compiling: sampleOnnxMNIST.cpp
Linking: ../../bin/sample_onnx_mnist
# Copy every EXTRA_FILE of this sample to bin dir
root@2ca610df4db7:/workspace/tensorrt/samples/sampleOnnxMNIST# ../../bin/sample_onnx_mnist
----------------------------------------------------------------
Input filename: ../../data/mnist/mnist.onnx
ONNX IR version: 0.0.3
Opset version: 1
Producer name: CNTK
Producer version: 2.4
Domain:
Model version: 1
Doc string:
----------------------------------------------------------------
----- Parsing of ONNX model ../../data/mnist/mnist.onnx is Done ----
---------------------------
@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@#-:.-=@@@@@@@@@@@@@@
@@@@@%= . *@@@@@@@@@@@@@
@@@@% .:+%%% *@@@@@@@@@@@@@
@@@@+=#@@@@@# @@@@@@@@@@@@@@
@@@@@@@@@@@% @@@@@@@@@@@@@@
@@@@@@@@@@@: *@@@@@@@@@@@@@@
@@@@@@@@@@- .@@@@@@@@@@@@@@@
@@@@@@@@@: #@@@@@@@@@@@@@@@
@@@@@@@@: +*%#@@@@@@@@@@@@
@@@@@@@% :+*@@@@@@@@
@@@@@@@@#*+--.:: +@@@@@@
@@@@@@@@@@@@@@@@#=:. +@@@@@
@@@@@@@@@@@@@@@@@@@@ .@@@@@
@@@@@@@@@@@@@@@@@@@@#. #@@@@
@@@@@@@@@@@@@@@@@@@@# @@@@@
@@@@@@@@@%@@@@@@@@@@- +@@@@@
@@@@@@@@#-@@@@@@@@*. =@@@@@@
@@@@@@@@ .+%%%%+=. =@@@@@@@
@@@@@@@@ =@@@@@@@@
@@@@@@@@*=: :--*@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@
Prob 0 0.0000 Class 0:
Prob 1 0.0000 Class 1:
Prob 2 0.0000 Class 2:
Prob 3 1.0000 Class 3: **********
Prob 4 0.0000 Class 4:
Prob 5 0.0000 Class 5:
Prob 6 0.0000 Class 6:
Prob 7 0.0000 Class 7:
Prob 8 0.0000 Class 8:
Prob 9 0.0000 Class 9:
root@2ca610df4db7:/workspace/tensorrt/samples/sampleOnnxMNIST#
Thank you for your response.
I use Ubuntu 18 and upgraded TensorRT to 5.0.2.6. I also installed onnx-tensorrt to run the YOLO ONNX model in Python; now I want to run the YOLO ONNX model in a C++ framework.
It seems to be because of the ONNX IR version; as you can see, it is ONNX IR version 0.0.3,
and the TensorRT Developer Guide mentions that only ONNX IR version 7 is supported!
And now I don’t know how to convert the ONNX model to version 7!
In general, the ONNX parser is designed to be backward compatible; therefore, a model file produced by an earlier version of the ONNX exporter should not cause a problem.
Hi there, trying to build the sampleOnnxMNIST sample on TensorRT 5.0.2 GA is not working for me. I’ve installed TRT via the deb package and built onnx-tensorrt from source from their Git repository, but when trying to make the sample I get the following:
../Makefile.config:5: CUDA_INSTALL_DIR variable is not specified, using /usr/local/cuda by default, use CUDA_INSTALL_DIR=<cuda_directory> to change.
../Makefile.config:8: CUDNN_INSTALL_DIR variable is not specified, using $CUDA_INSTALL_DIR by default, use CUDNN_INSTALL_DIR=<cudnn_directory> to change.
:
Compiling: sampleOnnxMNIST.cpp
sampleOnnxMNIST.cpp: In function ‘void onnxToTRTModel(const string&, unsigned int, nvinfer1::IHostMemory*&)’:
sampleOnnxMNIST.cpp:45:63: error: cannot convert ‘nvinfer1::INetworkDefinition’ to ‘nvinfer1::INetworkDefinition*’ for argument ‘1’ to ‘nvonnxparser::IParser* nvonnxparser::{anonymous}::createParser(nvinfer1::INetworkDefinition*, nvinfer1::ILogger&)’
auto parser = nvonnxparser::createParser(*network, gLogger);
^
../Makefile.config:172: recipe for target '../../bin/dchobj/sampleOnnxMNIST.o' failed
make: *** [../../bin/dchobj/sampleOnnxMNIST.o] Error 1
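The error message itself hints at the cause: the sample passes *network (a reference), but the NvOnnxParser.h the compiler picked up declares createParser as taking an nvinfer1::INetworkDefinition*. That pointer-taking signature is the one installed by the separately built onnx-tensorrt, while the header that ships with TensorRT 5 takes a reference, which is what the samples are written against. Roughly (paraphrased from the error output, not copied from either header):

// Declaration the compiler found (installed by the onnx-tensorrt build, older API):
nvonnxparser::IParser* createParser(nvinfer1::INetworkDefinition* network,
                                    nvinfer1::ILogger& logger);

// Declaration the TensorRT 5 samples expect (NvOnnxParser.h shipped with TRT 5):
nvonnxparser::IParser* createParser(nvinfer1::INetworkDefinition& network,
                                    nvinfer1::ILogger& logger);

So if the onnx-tensorrt headers shadow the ones from the TensorRT package, the samples fail to compile; removing them, as confirmed later in the thread, resolves the build.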
A note on this: I still encounter this error, but in the TensorRT container it works. I’ve tried to find the Dockerfile to see if I have a version mismatch, but I can’t find it.
The container has this TRT version:
ii tensorrt 5.0.2.6-1+cuda10.0 amd64 Meta package of TensorRT
Hello, regarding the Dockerfile: on NGC.nvidia.com you can look at the layers of the image you are pulling, which should give you an idea of how the image is constructed.
Hi @NVES, I thought we needed to install onnx-tensorrt to have the ONNX parser working. I uninstalled onnx-tensorrt, reinstalled TRT 5 GA, and tried to build the samples, and they are still failing on the ONNX part.
Compiling: sampleINT8API.cpp
g++ -Wall -std=c++11 -I"/usr/local/cuda/include" -I"/usr/local/include" -I"../include" -I"../common" -I"/usr/local/cuda/include" -I"../../include" -D_REENTRANT -g -c -o ../../bin/dchobj/sampleINT8API.o sampleINT8API.cpp
sampleINT8API.cpp: In member function ‘bool sampleINT8API::build()’:
sampleINT8API.cpp:448:102: error: cannot convert ‘nvinfer1::INetworkDefinition’ to ‘nvinfer1::INetworkDefinition*’ for argument ‘1’ to ‘nvonnxparser::IParser* nvonnxparser::{anonymous}::createParser(nvinfer1::INetworkDefinition*, nvinfer1::ILogger&)’
auto parser = SampleUniquePtr<nvonnxparser::IParser>(nvonnxparser::createParser(*network, gLogger));
^
../Makefile.config:172: recipe for target '../../bin/dchobj/sampleINT8API.o' failed
make[1]: *** [../../bin/dchobj/sampleINT8API.o] Error 1
make[1]: Leaving directory '/home/bpinaya/Documents/tensorrt/samples/sampleINT8API'
Makefile:37: recipe for target 'all' failed
make: *** [all] Error 2
I checked the layers of the Docker image, but they reference a script, /nvidia/build-scripts/installTRT.sh, that I can’t find.
On TRT 4, building the (ONNX) samples worked from scratch. I think I’ll try again on a clean install, but it would be awesome if you released the Dockerfiles eventually. I’ll update this if I have luck with a clean build.
OK, so if I understand correctly: with TensorRT we can use the C++ parser API to parse ONNX models, without having to install the onnx/onnx-tensorrt GitHub repository (ONNX-TensorRT: TensorRT backend for ONNX), right?
But for the Python API we need to install it?
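Roughly, the call sequence I’d expect the C++ route to look like, based on the samples (a sketch only; error handling and cleanup are omitted, and gLogger is assumed to be an nvinfer1::ILogger implementation like the one in the samples’ common code):

#include "NvInfer.h"
#include "NvOnnxParser.h"

// Sketch: parse an ONNX file using only the parser that ships with TensorRT 5.
// gLogger is assumed to be an nvinfer1::ILogger implementation, as in the samples.
nvinfer1::INetworkDefinition* parseOnnx(const char* onnxFile, nvinfer1::ILogger& gLogger)
{
    nvinfer1::IBuilder* builder = nvinfer1::createInferBuilder(gLogger);
    nvinfer1::INetworkDefinition* network = builder->createNetwork();
    nvonnxparser::IParser* parser = nvonnxparser::createParser(*network, gLogger);

    int verbosity = static_cast<int>(nvinfer1::ILogger::Severity::kWARNING);
    if (!parser->parseFromFile(onnxFile, verbosity))
        return nullptr; // parser->getError(i) would give the details

    return network; // a real program would also keep and later destroy builder/parser
}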
From the container you linked I can start python and type:
import onnx
That works, of course, but trying to import:
import onnx_tensorrt.backend as backend
results in:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named 'onnx_tensorrt'
My goal is to take the network I have (trained in PyTorch), later exported to ONNX, into TensorRT.
I’ve serialized networks before, but just from Caffe; loading from ONNX is where I seem to be struggling. Thanks for your time!
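Once the ONNX parser has populated the network, building and serializing the engine should look the same as with the Caffe parser. A sketch under that assumption (builder and network are taken from the parsing step sketched above; the plan file name is just an example):

#include <fstream>
#include "NvInfer.h"

// Sketch: build an engine from an already-parsed network and write the
// serialized plan to disk. builder/network are assumed to come from the
// ONNX parsing step above.
void buildAndSerialize(nvinfer1::IBuilder* builder, nvinfer1::INetworkDefinition* network)
{
    builder->setMaxBatchSize(1);
    builder->setMaxWorkspaceSize(1 << 20); // 1 MiB; adjust for the real model

    nvinfer1::ICudaEngine* engine = builder->buildCudaEngine(*network);
    nvinfer1::IHostMemory* plan = engine->serialize();

    std::ofstream planFile("model.plan", std::ios::binary);
    planFile.write(static_cast<const char*>(plan->data()), plan->size());

    plan->destroy();
    engine->destroy();
}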
You are right; from a clean setup all the samples build correctly. I think it was onnx-tensorrt that messed things up.
Thanks for the help!
I want to install Keras on the NVIDIA Jetson Xavier, but the CUDA version is 10.0. Unfortunately, I can’t install Keras successfully and don’t know how to install it on the Xavier, although I have installed TensorFlow successfully. Can anyone help me?