Error (Could not find any implementation for node ArgMax_260.)

Description

Trying to get the C++ quickstart example running. Following the instructions here, the first step is to get a working TensorRT engine from an ONNX file. Converting the model with trtexec fails.

I’m using the trtexec built from the repo at GitHub - NVIDIA/TensorRT.

Environment

TensorRT Version: 8.2.4-1+cuda11.4
GPU Type: RTX 3090
Nvidia Driver Version: 11.6
CUDA Version: 11.4
CUDNN Version: 8.4
Operating System + Version: Ubuntu 20.04
Baremetal or Container (if container which image + tag): Docker container nvcr.io/nvidia/pytorch:20.12-py3

Steps To Reproduce

Cloned the repo and built it:

git clone -b master https://github.com/nvidia/TensorRT TensorRT
cd TensorRT
git submodule update --init --recursive
cd quickstart
docker run --rm -it --gpus all -p 8888:8888 -v `pwd`:/workspace -w /workspace/SemanticSegmentation nvcr.io/nvidia/pytorch:20.12-py3 bash
python export.py

Exit the Docker container.

Convert the ONNX model:

trtexec --onnx=fcn-resnet101.onnx --fp16 --workspace=64 --minShapes=input:1x3x256x256 --optShapes=input:1x3x1026x1282 --maxShapes=input:1x3x1440x2560 --buildOnly --saveEngine=fcn-resnet101.engine

This is the error:

[04/29/2022-15:21:52] [E] Error[10]: [optimizer.cpp::computeCosts::2011] Error Code 10: Internal Error (Could not find any implementation for node ArgMax_260.)
[04/29/2022-15:21:52] [E] Error[2]: [builder.cpp::buildSerializedNetwork::609] Error Code 2: Internal Error (Assertion enginePtr != nullptr failed. )
[04/29/2022-15:21:52] [E] Engine could not be created from network
[04/29/2022-15:21:52] [E] Building engine failed
[04/29/2022-15:21:52] [E] Failed to create engine from model.
[04/29/2022-15:21:52] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec [TensorRT v8200] # trtexec --onnx=fcn-resnet101.onnx --fp16 --workspace=64 --minShapes=input:1x3x256x256 --optShapes=input:1x3x1026x1282 --maxShapes=input:1x3x1440x2560 --buildOnly --saveEngine=fcn-resnet101.engine
Segmentation fault (core dumped)

Hi,
We recommend you check the supported features at the link below.

You can refer to the link below for the full list of supported operators.
For unsupported operators, you will need to create a custom plugin to support the operation.

Thanks!

I have no idea what you mean by that.

I am trying to use an NVIDIA product out-of-the-box and it doesn’t work. I guess the NVIDIA developer is the one who needs to look at the Supported ONNX Operators, not me.

If the model fcn-resnet101.onnx is not supported by TensorRT, remove it from the quickstart example and provide something that works.

Please provide instructions on how to get the TensorRT quickstart C++ example working.

Many thanks!


Hi,

We recommend you try the latest TensorRT version, 8.4 EA.
If you still face this issue, please file it on Issues · NVIDIA/TensorRT · GitHub to get better help.

Thank you.


This was bad advice. On the one hand, the problem persists (I already posted the problem on GitHub, with no reply yet…), while on the other hand, all the TensorRT engines created with tao-converter STOPPED WORKING, and there is no version of tao-converter compatible with TensorRT 8.4…

Hi,

Could you please share the ONNX model you’re using so we can try it on our end for better debugging?

Thank you.

@spolisetty It’s the ONNX example given on NVIDIA’s official tutorial page. You can get the ONNX model by running export.py yourself (TensorRT/export.py at main · NVIDIA/TensorRT · GitHub).


To fix this problem, just increase the workspace size with the --workspace=4096 option. This is because the default workspace is not enough for TensorRT 8.x.
Here is an example of the changed command:
trtexec --onnx=fcn-resnet101.onnx --fp16 --workspace=4096 --minShapes=input:1x3x256x256 --optShapes=input:1x3x1026x1282 --maxShapes=input:1x3x1440x2560 --buildOnly --saveEngine=fcn-resnet101.engine
Thanks to jasxu-nvidia’s comments.
Reference: Quick Start, Unable to prepare engine · Issue #1965 · NVIDIA/TensorRT · GitHub
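
If you are building the engine from your own C++ code instead of trtexec (as the quickstart does in its next step), the equivalent knob is the workspace size on the builder config. Below is a minimal sketch, assuming the TensorRT 8.2 C++ API (setMaxWorkspaceSize was later superseded by setMemoryPoolLimit); the file name build_engine.cpp and the Logger class are illustrative, and the shapes mirror the trtexec command above. Link against nvinfer and nvonnxparser when compiling.

// build_engine.cpp - sketch: build fcn-resnet101.engine with a 4 GiB workspace,
// mirroring trtexec --onnx=... --fp16 --workspace=4096 (assumes TensorRT 8.2).
#include <NvInfer.h>
#include <NvOnnxParser.h>
#include <fstream>
#include <iostream>
#include <memory>

class Logger : public nvinfer1::ILogger {
    void log(Severity severity, const char* msg) noexcept override {
        if (severity <= Severity::kWARNING) std::cout << msg << std::endl;
    }
} gLogger;

int main() {
    using namespace nvinfer1;

    std::unique_ptr<IBuilder> builder{createInferBuilder(gLogger)};
    const auto explicitBatch =
        1U << static_cast<uint32_t>(NetworkDefinitionCreationFlag::kEXPLICIT_BATCH);
    std::unique_ptr<INetworkDefinition> network{builder->createNetworkV2(explicitBatch)};

    // Parse the ONNX file exported by export.py.
    std::unique_ptr<nvonnxparser::IParser> parser{
        nvonnxparser::createParser(*network, gLogger)};
    if (!parser->parseFromFile("fcn-resnet101.onnx",
                               static_cast<int>(ILogger::Severity::kWARNING))) {
        std::cerr << "Failed to parse ONNX model" << std::endl;
        return 1;
    }

    std::unique_ptr<IBuilderConfig> config{builder->createBuilderConfig()};
    config->setFlag(BuilderFlag::kFP16);
    // Equivalent of --workspace=4096 (MiB); setMaxWorkspaceSize takes bytes.
    // A larger workspace gives tactic selection enough scratch memory so that
    // layers such as ArgMax have at least one usable implementation.
    config->setMaxWorkspaceSize(4ULL << 30);

    // Dynamic input shapes, matching --minShapes/--optShapes/--maxShapes.
    IOptimizationProfile* profile = builder->createOptimizationProfile();
    profile->setDimensions("input", OptProfileSelector::kMIN, Dims4{1, 3, 256, 256});
    profile->setDimensions("input", OptProfileSelector::kOPT, Dims4{1, 3, 1026, 1282});
    profile->setDimensions("input", OptProfileSelector::kMAX, Dims4{1, 3, 1440, 2560});
    config->addOptimizationProfile(profile);

    std::unique_ptr<IHostMemory> serialized{
        builder->buildSerializedNetwork(*network, *config)};
    if (!serialized) {
        std::cerr << "Engine build failed" << std::endl;
        return 1;
    }

    std::ofstream out("fcn-resnet101.engine", std::ios::binary);
    out.write(static_cast<const char*>(serialized->data()), serialized->size());
    return 0;
}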


I can confirm that the larger workspace allows the command to execute with no errors.

I did not try the converted model in the quickstart code, since I have already implemented my own.
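
If anyone does want to sanity-check the saved engine from C++ before wiring it into the quickstart inference code, here is a small sketch, again assuming the TensorRT 8.2 API (getNbBindings and related calls were deprecated in later releases); the file name inspect_engine.cpp is illustrative. It just deserializes fcn-resnet101.engine and prints the binding names and dimensions.

// inspect_engine.cpp - sketch: load fcn-resnet101.engine and list its bindings
// (assumes TensorRT 8.2; link against nvinfer).
#include <NvInfer.h>
#include <fstream>
#include <iostream>
#include <memory>
#include <vector>

class Logger : public nvinfer1::ILogger {
    void log(Severity severity, const char* msg) noexcept override {
        if (severity <= Severity::kWARNING) std::cout << msg << std::endl;
    }
} gLogger;

int main() {
    // Read the serialized engine produced by trtexec --saveEngine=...
    std::ifstream file("fcn-resnet101.engine", std::ios::binary | std::ios::ate);
    if (!file) { std::cerr << "Cannot open engine file" << std::endl; return 1; }
    const std::streamsize size = file.tellg();
    std::vector<char> blob(static_cast<size_t>(size));
    file.seekg(0);
    file.read(blob.data(), size);

    std::unique_ptr<nvinfer1::IRuntime> runtime{nvinfer1::createInferRuntime(gLogger)};
    std::unique_ptr<nvinfer1::ICudaEngine> engine{
        runtime->deserializeCudaEngine(blob.data(), blob.size())};
    if (!engine) { std::cerr << "Deserialization failed" << std::endl; return 1; }

    // Print each binding's name, direction, and (possibly dynamic, -1) dimensions.
    for (int i = 0; i < engine->getNbBindings(); ++i) {
        const auto dims = engine->getBindingDimensions(i);
        std::cout << (engine->bindingIsInput(i) ? "input : " : "output: ")
                  << engine->getBindingName(i) << " [";
        for (int d = 0; d < dims.nbDims; ++d)
            std::cout << dims.d[d] << (d + 1 < dims.nbDims ? "x" : "");
        std::cout << "]" << std::endl;
    }
    return 0;
}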
