TensorRT Caffe parser crashes with ArgMax layer

Version: TensorRT 6
ICaffeParser crashes if the Caffe prototxt contains an ArgMax layer.
TensorRT actually supports ArgMax natively, so please add this support to the Caffe parser instead of crashing.
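For reference, a minimal prototxt fragment that should trigger the crash (layer and blob names here are made up for illustration, not taken from my model):

```
layer {
  name: "argmax"
  type: "ArgMax"
  bottom: "prob"
  top: "argmax_out"
  argmax_param { axis: 1 }
}
```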


Can you try converting your Caffe model to ONNX and then using the ONNX parser? It generally supports more ops than the Caffe parser does, and I see ArgMax listed as a supported ONNX op on this page: https://docs.nvidia.com/deeplearning/sdk/tensorrt-support-matrix/index.html

I believe there’s a way to export from Caffe to ONNX directly, but if not, you can probably use PyTorch as an intermediate step to convert from Caffe to ONNX as well.
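Once you have an ONNX file, loading it is just a few calls. A rough sketch with TensorRT 6's C++ ONNX parser, assuming a file named `model.onnx` (the filename and the minimal logger are my placeholders, not from your setup):

```cpp
#include <NvInfer.h>
#include <NvOnnxParser.h>
#include <iostream>

// Minimal logger required by the TensorRT builder and parser.
class Logger : public nvinfer1::ILogger
{
    void log(Severity severity, const char* msg) override
    {
        if (severity <= Severity::kWARNING)
            std::cout << msg << std::endl;
    }
} gLogger;

int main()
{
    auto* builder = nvinfer1::createInferBuilder(gLogger);

    // The ONNX parser requires an explicit-batch network in TensorRT 6.
    const auto flags = 1U << static_cast<uint32_t>(
        nvinfer1::NetworkDefinitionCreationFlag::kEXPLICIT_BATCH);
    auto* network = builder->createNetworkV2(flags);
    auto* parser = nvonnxparser::createParser(*network, gLogger);

    if (!parser->parseFromFile(
            "model.onnx",
            static_cast<int>(nvinfer1::ILogger::Severity::kWARNING)))
    {
        std::cerr << "Failed to parse ONNX model" << std::endl;
        return 1;
    }
    // ... then build the engine with builder->buildEngineWithConfig(...)
    return 0;
}
```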

I worked around this by removing the ArgMax layer from the Caffe prototxt and adding it manually to the parsed network via the TensorRT API.
ONNX gave me far more problems than Caffe.
But again, this should be fixed in the TensorRT Caffe parser. Is the TensorRT Caffe parser open source?
It shouldn’t take more than a day to add this support, since TensorRT already supports the layer.
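For anyone hitting the same crash, here is a rough sketch of that workaround, assuming the pruned prototxt (ArgMax removed) has already been parsed into `network`; the tensor index, names, and axis mask are assumptions about my model, adjust them for yours:

```cpp
#include <NvInfer.h>

// After ICaffeParser has parsed the pruned prototxt, append the ArgMax
// manually. TensorRT exposes ArgMax through ITopKLayer with k = 1:
// output 0 of the layer holds the max values, output 1 holds the indices.
void appendArgMax(nvinfer1::INetworkDefinition* network)
{
    // Assumed: the parsed network's first output is the score tensor.
    nvinfer1::ITensor* scores = network->getOutput(0);
    network->unmarkOutput(*scores);

    // reduceAxes is a bitmask over the non-batch dimensions in
    // implicit-batch mode; bit 0 selects the channel axis here.
    auto* topk = network->addTopK(*scores, nvinfer1::TopKOperation::kMAX,
                                  /*k=*/1, /*reduceAxes=*/1u);

    nvinfer1::ITensor* indices = topk->getOutput(1); // argmax indices
    indices->setName("argmax");
    network->markOutput(*indices);
}
```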

The Caffe parser is open source; you can find it here in the TensorRT OSS repo: https://github.com/NVIDIA/TensorRT/tree/master/parsers/caffe

Once you make the changes, you’ll have to build the parser and link it as described in the README: https://github.com/NVIDIA/TensorRT/blob/master/README.md.
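The build steps in the README boil down to roughly the following (a sketch; the exact CMake flags and library paths depend on your release and install location, so check the README for your branch):

```shell
# Clone the OSS repo and its submodules.
git clone https://github.com/NVIDIA/TensorRT.git
cd TensorRT
git submodule update --init --recursive

# Configure and build; TRT_LIB_DIR must point at your installed
# TensorRT libraries (path below is an assumption).
mkdir -p build && cd build
cmake .. -DTRT_LIB_DIR=/usr/lib/x86_64-linux-gnu -DTRT_OUT_DIR=$(pwd)/out
make -j$(nproc)
```

Then link your application against the freshly built libnvcaffeparser instead of the one shipped with the SDK.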

I’m curious to know what issues you’re experiencing with ONNX; it is generally smoother than the other options in my experience.