TensorRT Fails to load ONNX file created by CNTK

Provide details on the platforms you are using:
O/S: Windows 10
GPU type: 1080
nvidia driver version:
CUDA version: N/A
CUDNN version: N/A
TensorRT version: 5.1.5.0

Describe the problem

When trying to load an ONNX file exported from CNTK (see link below) into TensorRT, the import fails on the first node (a Slice transforming 1x3x1024x1024 -> 1x3x1024x512):

C:\Software\TensorRT-5.1.5.0\bin>sample_HeteroGenius.exe
&&&& RUNNING TensorRT.sample_onnx_mnist # sample_HeteroGenius.exe

Input filename: …/data/HGdata/TISSUETYPE_8X_DSNET_12D.cntk2.7.onnx
ONNX IR version: 0.0.4
Opset version: 9
Producer name: CNTK
Producer version: 2.7
Domain: ai.cntk
Model version: 1
Doc string:

WARNING: ONNX model has a newer ir_version (0.0.4) than this parser was built against (0.0.3).
While parsing node number 0 [Slice]:
ERROR: builtin_op_importers.cpp:2046 In function importSlice:
[4] Assertion failed: std::all_of(axes.begin(), axes.end(), [nbDims](int d)->bool{return d < nbDims;})
[E] Failure while parsing ONNX file
&&&& FAILED TensorRT.sample_onnx_mnist # sample_HeteroGenius.exe
Assertion failed: trtModelStream != nullptr, file c:\software\tensorrt-5.1.5.0\samples\sampleheterogenius\sampleonnxmnist.cpp, line 214

The ONNX file loads fine into netron (https://lutzroeder.github.io/netron/). It also loads fine with the onnx python module (from Microsoft?):

model = onnx.load("TISSUETYPE_8X_DSNET_12D.cntk2.7.onnx")
onnx.checker.check_model(model)

Files

https://drive.google.com/open?id=1mdLaDgBFWDYU-KNJ9zoToLk2FcMhDtMZ

Hi,

Your Slice layer's input dims are (1, 3, 1024, 1024), but its axes attribute is 4; the only valid axes are 0, 1, 2, and 3.

Thanks
kalyan ch

Please reply if you solved the issue.

Hi kalyan.c,

It may well be that Microsoft (who co-invented ONNX) got the numbering wrong on export (CNTK uses onnxruntime internally to write ONNX files; both are written by Microsoft and have nothing to do with me!), or that NVIDIA interpreted it wrong, who knows. In the meantime, I believe Jonny Hancok (NVIDIA UK) raised this internally with the NVIDIA dev team and it now works. The fix will apparently land in the TRT 7 release, so I've not tried it myself (as it hasn't been released yet), but Jonny has, and he says it now fails on a later node (pool). He was going to raise that internally as well, last time I talked to him.

Thanks

Derek