OnnxParser debuggability?

I am running into an error while importing a network in ONNX format.

terminate called after throwing an instance of 'std::out_of_range'
  what():  Attribute not found: shape

How can I add verbosity to this run to find out which layer the parser is complaining about when it fails to find the “shape” attribute? I tried the --verbose switch, but it only controls the logger and does not seem to change the parser’s output at all.

I am using TensorRT 4. Hard to imagine that this is production software.

This is the backtrace I see:

#0  0x00007fffe11b5207 in raise () from /lib64/libc.so.6
#1  0x00007fffe11b68f8 in abort () from /lib64/libc.so.6
#2  0x00007fffe1ac47d5 in __gnu_cxx::__verbose_terminate_handler() () from /lib64/libstdc++.so.6
#3  0x00007fffe1ac2746 in ?? () from /lib64/libstdc++.so.6
#4  0x00007fffe1ac2773 in std::terminate() () from /lib64/libstdc++.so.6
#5  0x00007fffe1ac2993 in __cxa_throw () from /lib64/libstdc++.so.6
#6  0x00007ffff2ab6f40 in nvonnxparser::OnnxAttrs::at(std::string) const ()
   from /home/soroy/work/TensorRT-4.0.1.6/lib/libnvparsers.so.4.1.2
#7  0x00007ffff2ab6f9d in ?? () from /home/soroy/work/TensorRT-4.0.1.6/lib/libnvparsers.so.4.1.2
#8  0x00007ffff2ab3b68 in nvonnxparser::convert_Reshape(nvonnxparser::Converter*, onnx::NodeProto const*) ()
   from /home/soroy/work/TensorRT-4.0.1.6/lib/libnvparsers.so.4.1.2
#9  0x00007ffff2ab848f in nvonnxparser::Converter::convert_node(onnx::NodeProto const*) ()
   from /home/soroy/work/TensorRT-4.0.1.6/lib/libnvparsers.so.4.1.2
#10 0x00007ffff2aba9e2 in nvonnxparser::Converter::convert_tensor_or_weights(std::string) ()
   from /home/soroy/work/TensorRT-4.0.1.6/lib/libnvparsers.so.4.1.2
#11 0x00007ffff2ab1099 in nvonnxparser::convert_Gemm(nvonnxparser::Converter*, onnx::NodeProto const*) ()
   from /home/soroy/work/TensorRT-4.0.1.6/lib/libnvparsers.so.4.1.2
#12 0x00007ffff2ab848f in nvonnxparser::Converter::convert_node(onnx::NodeProto const*) ()
   from /home/soroy/work/TensorRT-4.0.1.6/lib/libnvparsers.so.4.1.2
#13 0x00007ffff2abf9ca in nvonnxparser::parserONNX::convertToTRTNetwork() ()
   from /home/soroy/work/TensorRT-4.0.1.6/lib/libnvparsers.so.4.1.2
#14 0x0000000000405b82 in onnxToTRTModel () at trtexec.cpp:315
#15 0x0000000000407442 in createEngine () at trtexec.cpp:584
#16 0x000000000040789e in main (argc=2, argv=0x7fffffffde98) at trtexec.cpp:657

The ONNX file looks good. I am able to view it using Netron, for instance.
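For what it’s worth, the failing lookup in the backtrace (frame #8, convert_Reshape) is for a “shape” attribute on a Reshape node, so one way to narrow down the offending layer without any parser verbosity is to scan the graph yourself and list every Reshape node that has no such attribute. Below is a minimal sketch; the helper is duck-typed over the same fields onnx.NodeProto exposes (op_type, name, attribute), so it runs standalone here with stand-in objects, and in practice you would pass it onnx.load("model.onnx").graph.node instead. The layer names in the demo are hypothetical.

```python
from types import SimpleNamespace

def reshape_nodes_missing_shape_attr(nodes):
    """Return (index, name) for each Reshape node with no 'shape' attribute.

    `nodes` is any iterable of objects exposing .op_type, .name and
    .attribute (a list of objects with a .name field) -- the same fields
    onnx.NodeProto exposes, so with the onnx package installed you can
    pass onnx.load("model.onnx").graph.node directly.
    """
    missing = []
    for i, node in enumerate(nodes):
        if node.op_type != "Reshape":
            continue
        attr_names = {a.name for a in node.attribute}
        if "shape" not in attr_names:
            missing.append((i, node.name))
    return missing

# Stand-in nodes for demonstration (hypothetical layer names):
nodes = [
    SimpleNamespace(op_type="Conv", name="conv1", attribute=[]),
    SimpleNamespace(op_type="Reshape", name="flatten",
                    attribute=[]),  # no 'shape' attribute
    SimpleNamespace(op_type="Reshape", name="reshape_old",
                    attribute=[SimpleNamespace(name="shape")]),
]
print(reshape_nodes_missing_shape_attr(nodes))  # -> [(1, 'flatten')]
```

Any node this prints is a candidate for the one the parser aborts on.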

Anyone familiar with this issue?

-SR

Hello, can you provide details on the platform you are using?

Linux distro and version
GPU type
NVIDIA driver version
CUDA version
cuDNN version
Python version [if using Python]
TensorFlow version
TensorRT version

Also, can you share the ONNX model file you are using, and show how you are importing the network (the exact command or syntax used)?

thanks.

Hello,

It looks like you are running a very old version of the ONNX parser. Please update.
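One detail worth checking regardless of the parser build is the opset version recorded in the model: the ONNX spec moved Reshape’s target shape from a “shape” attribute (opset 1) to a second input tensor (opset 5), so an export at opset 5 or later would produce exactly an “Attribute not found: shape” failure in a parser that only understands the older attribute form. A hedged sketch of that check follows; it is duck-typed over the fields onnx.ModelProto.opset_import exposes (domain, version) so it runs standalone here, and with the onnx package you would pass onnx.load("model.onnx").opset_import instead.

```python
from types import SimpleNamespace

def max_default_opset(opset_imports):
    """Highest opset version declared for the default ONNX domain.

    `opset_imports` mirrors onnx.ModelProto.opset_import: a list of
    objects with .domain and .version; domain "" (or "ai.onnx")
    denotes the core ONNX operator set.
    """
    versions = [o.version for o in opset_imports
                if o.domain in ("", "ai.onnx")]
    return max(versions) if versions else None

# Hypothetical model metadata for demonstration:
opset_imports = [SimpleNamespace(domain="", version=7)]
v = max_default_opset(opset_imports)
print(v)  # -> 7
if v is not None and v >= 5:
    # Reshape at opset >= 5 carries its shape as an input, not an attribute.
    print("Reshape uses a shape input, not a 'shape' attribute")
```

If the model’s opset is 5 or higher, re-exporting at a lower opset (or updating the parser) would be the usual workaround.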

Hi,

Here is all the info that I am in a position to provide:

  • Linux: RHEL 3.10.0-693.17.1.el7.x86_64
  • GPU: GeForce GTX 1080
  • Nvidia Driver Version: 390.25
  • CUDA Version: 8.0
  • CuDNN Version: 7.0
  • Python Version: Not Applicable
  • TensorFlow Version: Not Applicable
  • TensorRT Version: 4.0

Unfortunately, I am not in a position to share the ONNX model file with you. I am using the “trtexec” application shipped with the samples in the distribution, invoked with this command line: “./bin/trtexec --onnx=”

How could this be a “very old version” of the ONNX parser when TensorRT 4 is the latest download available here?
https://developer.nvidia.com/nvidia-tensorrt-download

-SR