onnx-tensorrt build failure

Hi,

I have installed the latest TensorRT via dpkg/apt on Ubuntu 16.04, with CUDA 10 already installed and tested. All of the TensorRT samples run successfully. I am now trying to build onnx-tensorrt so that I can use YOLOv3.

When I attempt to build, I run the following:

mkdir build
cd build
cmake .. -DTENSORRT_ROOT=/usr/src/tensorrt
make -j8

How can I get this to build?

This is the build output (it is very long, so I include only the beginning and the end):

me@mylaptop:~/onnx-tensorrt/build$ make -j8 | tee > builderror
In file included from /usr/local/cuda-10.0/include/channel_descriptor.h:61:0,
                 from /usr/local/cuda-10.0/include/cuda_runtime.h:95,
                 from /usr/include/cudnn.h:64,
                 from /home/luke/onnx-tensorrt/InstanceNormalization.hpp:27,
                 from /home/luke/onnx-tensorrt/InstanceNormalization.cpp:23:
/usr/local/cuda-10.0/include/cuda_runtime_api.h:1775:101: error: use of enum ‘cudaDeviceP2PAttr’ without previous declaration
 extern __host__ __cudart_builtin__ cudaError_t CUDARTAPI cudaDeviceGetP2PAttribute(int *value, enum cudaDeviceP2PAttr attr, int srcDevice, int dstDevice);
                                                                                                     ^
/usr/local/cuda-10.0/include/cuda_runtime_api.h:2232:25: error: expected ‘)’ before ‘*’ token
 typedef void (CUDART_CB *cudaStreamCallback_t)(cudaStream_t stream, cudaError_t status, void *userData);
                         ^
/usr/local/cuda-10.0/include/cuda_runtime_api.h:2300:9: error: ‘cudaStreamCallback_t’ has not been declared
         cudaStreamCallback_t callback, void *userData, unsigned int flags);
         ^
/usr/local/cuda-10.0/include/cuda_runtime_api.h:2484:81: error: ‘cudaGraph_t’ has not been declared
 extern __host__ cudaError_t CUDARTAPI cudaStreamEndCapture(cudaStream_t stream, cudaGraph_t *pGraph);
                                                                                 ^
/usr/local/cuda-10.0/include/cuda_runtime_api.h:2523:87: error: use of enum ‘cudaStreamCaptureStatus’ without previous declaration
 extern __host__ cudaError_t CUDARTAPI cudaStreamIsCapturing(cudaStream_t stream, enum cudaStreamCaptureStatus *pCaptureStatus);
 
...

/usr/local/cuda-10.0/include/cuda_runtime_api.h:9135:101: error: expression list treated as compound expression in initializer [-fpermissive]
 extern __host__ cudaError_t CUDARTAPI cudaGraphLaunch(cudaGraphExec_t graphExec, cudaStream_t stream);
                                                                                                     ^
/usr/local/cuda-10.0/include/cuda_runtime_api.h:9156:60: error: ‘cudaGraphExec_t’ was not declared in this scope
 extern __host__ cudaError_t CUDARTAPI cudaGraphExecDestroy(cudaGraphExec_t graphExec);
                                                            ^
/usr/local/cuda-10.0/include/cuda_runtime_api.h:9176:56: error: ‘cudaGraph_t’ was not declared in this scope
 extern __host__ cudaError_t CUDARTAPI cudaGraphDestroy(cudaGraph_t graph);
                                                        ^
make[2]: *** [CMakeFiles/nvonnxparser_plugin.dir/InstanceNormalization.cpp.o] Error 1
make[1]: *** [CMakeFiles/nvonnxparser_plugin.dir/all] Error 2
make: *** [all] Error 2

I get the same error when building PyTorch from source.

Nobody has any ideas? I can build other CUDA-dependent things without issue. Is it possible that onnx-tensorrt does not yet support CUDA 10?

I opened the file:
/usr/include/cudnn.h

and changed the line:
#include "driver_types.h"

to:
#include <driver_types.h>

and now it compiles.
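If you want to script the edit, something like the following sed one-liner should work (a sketch only: back up cudnn.h first, and adjust the path if your package installed the header elsewhere; it is demonstrated here on a scratch copy rather than the real /usr/include/cudnn.h):

```shell
# Make a scratch copy containing the problematic line (stand-in for cudnn.h).
printf '#include "driver_types.h"\n' > /tmp/cudnn_snippet.h

# Swap the quoted include for the angle-bracket form, in place.
sed -i 's|#include "driver_types.h"|#include <driver_types.h>|' /tmp/cudnn_snippet.h

# The file now reads: #include <driver_types.h>
cat /tmp/cudnn_snippet.h
```

On the real file you would run the same sed command against /usr/include/cudnn.h (with sudo), after saving a backup copy.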

This worked for me on Ubuntu 18.04 with CUDA 10 and cuDNN 7.4.

[Reference]
https://devtalk.nvidia.com/default/topic/1025801/cudnn/cudnn-test-did-not-pass/
