Version of NvOnnxParser used to build samples? (Can't build samples)

Hi there,
I’m having some problems building the samples from TRT 5.0.2 GA. After copying the tensorrt directory somewhere else and trying to build the samples, I get:

g++ -MM -MF ../../bin/dchobj/sampleINT8API.d -MP -MT ../../bin/dchobj/sampleINT8API.o -Wall -std=c++11 -I"/usr/local/cuda/include" -I"/usr/local/include" -I"../include" -I"../common" -I"/usr/local/cuda/include" -I"../../include"  -D_REENTRANT sampleINT8API.cpp
Compiling: sampleINT8API.cpp
g++ -Wall -std=c++11 -I"/usr/local/cuda/include" -I"/usr/local/include" -I"../include" -I"../common" -I"/usr/local/cuda/include" -I"../../include"  -D_REENTRANT -g -c -o ../../bin/dchobj/sampleINT8API.o sampleINT8API.cpp
sampleINT8API.cpp: In member function ‘bool sampleINT8API::build()’:
sampleINT8API.cpp:448:102: error: cannot convert ‘nvinfer1::INetworkDefinition’ to ‘nvinfer1::INetworkDefinition*’ for argument ‘1’ to ‘nvonnxparser::IParser* nvonnxparser::{anonymous}::createParser(nvinfer1::INetworkDefinition*, nvinfer1::ILogger&)’
     auto parser = SampleUniquePtr<nvonnxparser::IParser>(nvonnxparser::createParser(*network, gLogger));
                                                                                                      ^
../Makefile.config:172: recipe for target '../../bin/dchobj/sampleINT8API.o' failed
make[1]: *** [../../bin/dchobj/sampleINT8API.o] Error 1
make[1]: Leaving directory '/home/bpinaya/Documents/tensorrt/samples/sampleINT8API'
Makefile:38: recipe for target 'all' failed
make: *** [all] Error 2

Was anyone able to build that sample? It seems to be ONNX-related.
I’m using CUDA 10, cuDNN 7.4.1.5-1+cuda10.0, and onnx-tensorrt built from source (https://github.com/onnx/onnx-tensorrt).

Building each sample individually works for most of them, but for sampleINT8API I get the following error:

../Makefile.config:5: CUDA_INSTALL_DIR variable is not specified, using /usr/local/cuda by default, use CUDA_INSTALL_DIR=<cuda_directory> to change.
../Makefile.config:8: CUDNN_INSTALL_DIR variable is not specified, using $CUDA_INSTALL_DIR by default, use CUDNN_INSTALL_DIR=<cudnn_directory> to change.
:
Compiling: sampleINT8API.cpp
sampleINT8API.cpp: In member function ‘bool sampleINT8API::build()’:
sampleINT8API.cpp:448:102: error: cannot convert ‘nvinfer1::INetworkDefinition’ to ‘nvinfer1::INetworkDefinition*’ for argument ‘1’ to ‘nvonnxparser::IParser* nvonnxparser::{anonymous}::createParser(nvinfer1::INetworkDefinition*, nvinfer1::ILogger&)’
     auto parser = SampleUniquePtr<nvonnxparser::IParser>(nvonnxparser::createParser(*network, gLogger));
                                                                                                      ^
../Makefile.config:172: recipe for target '../../bin/dchobj/sampleINT8API.o' failed
make: *** [../../bin/dchobj/sampleINT8API.o] Error 1

Also for sampleOnnxMNIST I get:

../Makefile.config:5: CUDA_INSTALL_DIR variable is not specified, using /usr/local/cuda by default, use CUDA_INSTALL_DIR=<cuda_directory> to change.
../Makefile.config:8: CUDNN_INSTALL_DIR variable is not specified, using $CUDA_INSTALL_DIR by default, use CUDNN_INSTALL_DIR=<cudnn_directory> to change.
:
Compiling: sampleOnnxMNIST.cpp
sampleOnnxMNIST.cpp: In function ‘void onnxToTRTModel(const string&, unsigned int, nvinfer1::IHostMemory*&)’:
sampleOnnxMNIST.cpp:45:63: error: cannot convert ‘nvinfer1::INetworkDefinition’ to ‘nvinfer1::INetworkDefinition*’ for argument ‘1’ to ‘nvonnxparser::IParser* nvonnxparser::{anonymous}::createParser(nvinfer1::INetworkDefinition*, nvinfer1::ILogger&)’
     auto parser = nvonnxparser::createParser(*network, gLogger);
                                                               ^
../Makefile.config:172: recipe for target '../../bin/dchobj/sampleOnnxMNIST.o' failed
make: *** [../../bin/dchobj/sampleOnnxMNIST.o] Error 1

It seems to be onnx-tensorrt related; as I said, I built it from source, so maybe there is a version mismatch? Any help is appreciated.
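My working theory, sketched below with stand-in types (no TensorRT headers involved, so treat it as an illustration rather than the actual API): the TRT 5.0.2 samples dereference the network pointer, which only compiles if createParser takes a reference, while the header my source-built onnx-tensorrt installed apparently declares a pointer parameter, judging from the compiler error.

#include <iostream>

struct INetworkDefinition {}; // stand-in for nvinfer1::INetworkDefinition
struct ILogger {};            // stand-in for nvinfer1::ILogger

// Signature the TRT 5.0.2 samples appear to be written against (reference form):
namespace shipped {
void* createParser(INetworkDefinition& network, ILogger&) { return &network; }
}

// Signature my source-built onnx-tensorrt header seems to declare (pointer form),
// judging from the error message above:
namespace sourcebuilt {
void* createParser(INetworkDefinition* network, ILogger&) { return network; }
}

int main() {
    INetworkDefinition net;
    INetworkDefinition* network = &net; // the samples hold the network as a pointer
    ILogger gLogger;

    shipped::createParser(*network, gLogger);    // OK: dereference matches the reference form
    // sourcebuilt::createParser(*network, gLogger); // would fail exactly like the samples:
    //   cannot convert 'INetworkDefinition' to 'INetworkDefinition*'
    sourcebuilt::createParser(network, gLogger); // the pointer form wants the pointer itself
    std::cout << "overload demo done" << std::endl;
    return 0;
}

If that is right, either the source-built header is shadowing the one shipped with TRT 5.0.2 (for example in /usr/local/include, which appears in the sample's include path), or the two headers belong to different onnx-tensorrt versions.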

Hello,

I’m using the TensorRT 18.11 container (TensorRT 5 GA) and am able to build sampleINT8API and sampleOnnxMNIST. Can you try it with the container, to rule out any issues specific to your build environment?
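For reference, I pulled it from NGC with something like the following (the exact tag and run flags may differ depending on your Docker/nvidia-docker setup):

docker pull nvcr.io/nvidia/tensorrt:18.11-py3
nvidia-docker run -it --rm nvcr.io/nvidia/tensorrt:18.11-py3

which drops you into the environment below: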

=====================
== NVIDIA TensorRT ==
=====================

NVIDIA Release 18.11 (build 817536)

NVIDIA TensorRT 5.0.2 (c) 2016-2018, NVIDIA CORPORATION.  All rights reserved.
Container image (c) 2018, NVIDIA CORPORATION.  All rights reserved.

https://developer.nvidia.com/tensorrt

To install Python sample dependencies, run /opt/tensorrt/python/python_setup.sh

root@2715ee833917:/workspace# cd tensorrt/samples/
Makefile            getDigits/          sampleFasterRCNN/   sampleINT8API/      sampleMNISTAPI/     sampleNMT/          sampleSSD/          trtexec/
Makefile.config     python/             sampleGoogleNet/    sampleMLP/          sampleMovieLens/    sampleOnnxMNIST/    sampleUffMNIST/
common/             sampleCharRNN/      sampleINT8/         sampleMNIST/        sampleMovieLensMPS/ samplePlugin/       sampleUffSSD/
root@2715ee833917:/workspace# cd tensorrt/samples/sampleINT8API/
root@2715ee833917:/workspace/tensorrt/samples/sampleINT8API# make clean
../Makefile.config:5: CUDA_INSTALL_DIR variable is not specified, using /usr/local/cuda by default, use CUDA_INSTALL_DIR=<cuda_directory> to change.
../Makefile.config:8: CUDNN_INSTALL_DIR variable is not specified, using $CUDA_INSTALL_DIR by default, use CUDNN_INSTALL_DIR=<cudnn_directory> to change.
Cleaning...
root@2715ee833917:/workspace/tensorrt/samples/sampleINT8API# make
../Makefile.config:5: CUDA_INSTALL_DIR variable is not specified, using /usr/local/cuda by default, use CUDA_INSTALL_DIR=<cuda_directory> to change.
../Makefile.config:8: CUDNN_INSTALL_DIR variable is not specified, using $CUDA_INSTALL_DIR by default, use CUDNN_INSTALL_DIR=<cudnn_directory> to change.
:
Compiling: sampleINT8API.cpp
Linking: ../../bin/sample_int8_api_debug
:
Compiling: sampleINT8API.cpp
Linking: ../../bin/sample_int8_api
# Copy every EXTRA_FILE of this sample to bin dir
root@2715ee833917:/workspace/tensorrt/samples/sampleOnnxMNIST# ls
Makefile  README  sampleOnnxMNIST.cpp
root@2715ee833917:/workspace/tensorrt/samples/sampleOnnxMNIST# make
../Makefile.config:5: CUDA_INSTALL_DIR variable is not specified, using /usr/local/cuda by default, use CUDA_INSTALL_DIR=<cuda_directory> to change.
../Makefile.config:8: CUDNN_INSTALL_DIR variable is not specified, using $CUDA_INSTALL_DIR by default, use CUDNN_INSTALL_DIR=<cudnn_directory> to change.
:
Compiling: sampleOnnxMNIST.cpp
Linking: ../../bin/sample_onnx_mnist_debug
:
Compiling: sampleOnnxMNIST.cpp
Linking: ../../bin/sample_onnx_mnist
# Copy every EXTRA_FILE of this sample to bin dir
root@2715ee833917:/workspace/tensorrt/samples/sampleOnnxMNIST#

Hi there, thanks for the quick answer! Indeed, with the container it works perfectly. This is the output:

root@b2cb8f8f0dc5:/workspace/tensorrt/bin# ./sample_onnx_mnist
----------------------------------------------------------------
Input filename:   ../data/mnist/mnist.onnx
ONNX IR version:  0.0.3
Opset version:    1
Producer name:    CNTK
Producer version: 2.4
Domain:
Model version:    1
Doc string:
----------------------------------------------------------------
 ----- Parsing of ONNX model ../data/mnist/mnist.onnx is Done ----



---------------------------



@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@+  :@@@@@@@@
@@@@@@@@@@@@@@%= :. --%@@@@@
@@@@@@@@@@@@@%. -@= - :@@@@@
@@@@@@@@@@@@@: -@@#%@@ #@@@@
@@@@@@@@@@@@: #@@@@@@@-#@@@@
@@@@@@@@@@@= #@@@@@@@@=%@@@@
@@@@@@@@@@= #@@@@@@@@@:@@@@@
@@@@@@@@@+ -@@@@@@@@@%.@@@@@
@@@@@@@@@::@@@@@@@@@@+-@@@@@
@@@@@@@@-.%@@@@@@@@@@.*@@@@@
@@@@@@@@ *@@@@@@@@@@@ *@@@@@
@@@@@@@% %@@@@@@@@@%.-@@@@@@
@@@@@@@:*@@@@@@@@@+. %@@@@@@
@@@@@@# @@@@@@@@@# .*@@@@@@@
@@@@@@# @@@@@@@@=  +@@@@@@@@
@@@@@@# @@@@@@%. .+@@@@@@@@@
@@@@@@# @@@@@*. -%@@@@@@@@@@
@@@@@@# ---    =@@@@@@@@@@@@
@@@@@@#      *%@@@@@@@@@@@@@
@@@@@@@%: -=%@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@


 Prob 0  0.9998 Class 0: **********
 Prob 1  0.0000 Class 1:
 Prob 2  0.0000 Class 2:
 Prob 3  0.0000 Class 3:
 Prob 4  0.0000 Class 4:
 Prob 5  0.0000 Class 5:
 Prob 6  0.0002 Class 6:
 Prob 7  0.0000 Class 7:
 Prob 8  0.0000 Class 8:
 Prob 9  0.0000 Class 9:

I just found out about ngc.nvidia.com; it’s a very useful resource for no longer fighting with versions and installation steps.
Are the Dockerfiles going to be made available? I’d really like to know where I messed up my installation; I think it was in onnx-tensorrt.

Hello,

As discussed in a separate thread, on ngc.nvidia.com you can also view the “layers” of the image you are pulling, which should give you an idea of how the image is constructed.
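If you want to confirm the header clash on your original host, one quick check (untested here, and the include paths below are just the ones from your build log) is to ask the preprocessor which NvOnnxParser.h it actually resolves:

echo '#include "NvOnnxParser.h"' | g++ -std=c++11 -E -x c++ - -I"/usr/local/cuda/include" -I"/usr/local/include" -I"../../include" | grep NvOnnxParser.h

If the linemarkers point at /usr/local/include (where a source-built onnx-tensorrt typically installs its headers) rather than the TensorRT include directory, the samples are compiling against the source-built header, and the signature mismatch above would follow.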