onnx-tensorrt install / test failure

Both the Python 2 and Python 3 ONNX backend tests fail with identical output; I include the output for Python 3 below, as that is probably the better-supported path. Can anybody help me resolve this and get a working onnx-tensorrt install?

Ubuntu 16.04, CUDA 10, cuDNN 7.4.2, TensorRT 5.0.3

I installed TensorRT using the NVIDIA Debian install guide: https://docs.nvidia.com/deeplearning/sdk/tensorrt-install-guide/index.html#installing-debian
All of the CUDA, cuDNN, and TensorRT tests passed.
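
A minimal sanity check that the TensorRT Python bindings themselves load would be something like the sketch below (assuming the TensorRT 5 Python API, which exposes trt.__version__):

import tensorrt as trt

# The TensorRT 5 Python bindings should import cleanly and report a 5.0.x version here
print(trt.__version__)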

This is the failed test output:

python3 onnx_backend_test.py OnnxBackendRealModelTest
Traceback (most recent call last):
  File "onnx_backend_test.py", line 31, in <module>
    import onnx_tensorrt.backend as trt
  File "/home/luke/onnx-tensorrt/onnx_tensorrt/__init__.py", line 23, in <module>
    from . import backend
  File "/home/luke/onnx-tensorrt/onnx_tensorrt/backend.py", line 22, in <module>
    from . import parser
  File "/home/luke/onnx-tensorrt/onnx_tensorrt/parser/__init__.py", line 23, in <module>
    from ._nv_onnx_parser_bindings import *
ImportError: No module named _nv_onnx_parser_bindings
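
As a side note on this traceback: the import resolves from the source checkout at /home/luke/onnx-tensorrt, while the run further below resolves from an installed egg. A rough Python 3 sketch to check which copy would be imported and whether the compiled _nv_onnx_parser_bindings extension sits next to it (the directory layout here is only inferred from the traceback paths):

import importlib.util, os

# Where would "import onnx_tensorrt" resolve from? (find_spec does not execute the package)
spec = importlib.util.find_spec("onnx_tensorrt")
print(spec.origin if spec else "onnx_tensorrt not found on sys.path")

# The compiled parser binding should sit in that copy's parser/ directory
if spec and spec.origin:
    parser_dir = os.path.join(os.path.dirname(spec.origin), "parser")
    if os.path.isdir(parser_dir):
        print([f for f in os.listdir(parser_dir) if f.startswith("_nv_onnx_parser_bindings")])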

If I copy and paste the contents of test.py into IPython, it runs as far as downloading the test models but then fails like this:

ERROR: test_single_relu_model_cuda (__main__.OnnxBackendSimpleModelTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/luke/.local/lib/python2.7/site-packages/onnx/backend/test/runner/__init__.py", line 208, in device_test_func
    return test_func(*args, device=device, **kwargs)
  File "/home/luke/.local/lib/python2.7/site-packages/onnx/backend/test/runner/__init__.py", line 229, in run
    prepared_model = self.backend.prepare(model, device)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 178, in prepare
    return TensorRTBackendRep(model, device, **kwargs)
  File "build/bdist.linux-x86_64/egg/onnx_tensorrt/backend.py", line 60, in __init__
    self._logger = trt.infer.ConsoleLogger(trt.infer.LogSeverity.WARNING)
AttributeError: 'module' object has no attribute 'infer'

----------------------------------------------------------------------
Ran 548 tests in 551.829s

FAILED (errors=185, skipped=363)

(Most of the output is abridged for clarity, but all 185 errors are the same: AttributeError: 'module' object has no attribute 'infer'.)

Hello,

It looks like you are calling legacy TensorRT 4 APIs. Please refer to the migration guide: https://docs.nvidia.com/deeplearning/sdk/tensorrt-api/python_api/migrationGuide.html#create-and-destroy-functions

Legacy API:
G_LOGGER = trt.infer.ConsoleLogger

vs.

New API:
TRT_LOGGER = trt.Logger
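
For concreteness, a minimal sketch of the TensorRT 5 replacement for the failing line in backend.py (the Builder call is added context only, not part of backend.py):

import tensorrt as trt

# Legacy TensorRT 4 call used by onnx-tensorrt's backend.py (line 60 in the trace):
#   self._logger = trt.infer.ConsoleLogger(trt.infer.LogSeverity.WARNING)
# trt.infer no longer exists in TensorRT 5, which is exactly the AttributeError above.

# TensorRT 5 equivalent:
TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(TRT_LOGGER)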

I am not sure which version of TensorRT the onnx-tensorrt project on GitHub is tuned for: https://github.com/onnx/onnx-tensorrt (ONNX-TensorRT: TensorRT backend for ONNX).

I see, so onnx-tensorrt has not been tested with TensorRT 5.
I will suggest that this be indicated on their GitHub.