What does the error in ../builder/cudnnBuilderGraph.cpp mean?

I’m writing a converter from ONNX to TensorRT.
When I call buildCudaEngine, this error is raised. There is no source code available for cudnnBuilderGraph.
However, with the official ONNX MNIST models my converter runs correctly.
Thanks

python: …/builder/cudnnBuilderGraph.cpp:386: void nvinfer1::builder::checkSanity(const nvinfer1::builder::Graph&): Assertion `tensors.size() == g.tensors.size()' failed.
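For context, the call sequence in the converter is essentially the standard builder flow. Here is a simplified sketch of it (the actual ONNX-to-layer conversion is elided, and the logger is only a minimal placeholder so the sketch compiles on its own):

#include <iostream>
#include "NvInfer.h"

// Minimal logger, only so the sketch is self-contained.
class SketchLogger : public nvinfer1::ILogger {
    void log(Severity severity, const char *msg) override {
        if (severity <= Severity::kWARNING)
            std::cout << msg << std::endl;
    }
} sketchLogger;

int main() {
    nvinfer1::IBuilder *builder = nvinfer1::createInferBuilder(sketchLogger);
    nvinfer1::INetworkDefinition *network = builder->createNetwork();

    // ... inputs and layers converted from the ONNX graph are added here ...

    builder->setMaxBatchSize(1);
    builder->setMaxWorkspaceSize(1 << 30);
    nvinfer1::ICudaEngine *engine = builder->buildCudaEngine(*network);  // the assertion fires inside this call
    if (engine)
        engine->destroy();
    network->destroy();
    builder->destroy();
    return 0;
}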

I also ran into this problem when I wrote a converter from TensorFlow.
python: …/builder/cudnnBuilderGraph.cpp:386: void nvinfer1::builder::checkSanity(const nvinfer1::builder::Graph&): Assertion `tensors.size() == g.tensors.size()' failed

I also have this problem. Have you solved it?

Here is the code to reproduce the error.
The error message is: …/builder/cudnnBuilderGraph.cpp:386: void nvinfer1::builder::checkSanity(const nvinfer1::builder::Graph&): Assertion `tensors.size() == g.tensors.size()' failed.

#include <iostream>
#include <vector>
#include <malloc.h>
#include <cuda_runtime_api.h>
#include "NvInfer.h"

using namespace nvinfer1;

// Minimal definitions so the snippet is self-contained:
// a logger for the builder and the data type used for all inputs and weights.
class Logger : public ILogger {
    void log(Severity severity, const char *msg) override {
        if (severity <= Severity::kWARNING)
            std::cout << msg << std::endl;
    }
} gLogger;

static const DataType dtype = DataType::kFLOAT;

// Adds a convolution layer with dummy (uninitialized) weights and returns its output tensor.
ITensor *conv(INetworkDefinition *network, ITensor *tensor, int out_channels,
              int kernel_h, int kernel_w, int stride_h, int stride_w,
              int padding_h, int padding_w, int groups) {
    int in_channels = tensor->getDimensions().d[0];
    Weights filter_weights = {dtype, nullptr, (in_channels / groups) * out_channels * kernel_h * kernel_w};
    Weights bias_weights = {dtype, nullptr, out_channels};
    bool free_filter = false, free_bias = false;
    if (filter_weights.values == nullptr) {
        filter_weights.values = malloc(sizeof(float) * filter_weights.count);
        free_filter = true;
    }
    if (bias_weights.values == nullptr) {
        bias_weights.values = malloc(sizeof(float) * bias_weights.count);
        free_bias = true;
    }
    auto conv = network->addConvolution(*tensor, out_channels, DimsHW(kernel_h, kernel_w),
                                        filter_weights, bias_weights);
    // Note: TensorRT holds the weight memory by reference until the engine is built,
    // so freeing it before buildCudaEngine is itself risky.
    if (free_filter)
        free((void *)filter_weights.values);
    if (free_bias)
        free((void *)bias_weights.values);

    conv->setStride(DimsHW(stride_h, stride_w));
    conv->setPadding(DimsHW(padding_h, padding_w));
    conv->setNbGroups(groups);
    return conv->getOutput(0);
}

void test() {
    IBuilder *builder = createInferBuilder(gLogger);
    INetworkDefinition *network = builder->createNetwork();

    ITensor *t = network->addInput("input", dtype, Dims3(3, 5, 5));
    ITensor *t1 = conv(network, t, 5, 1, 1, 1, 1, 0, 0, 1);
    ITensor *t2 = conv(network, t, 5, 1, 1, 1, 1, 0, 0, 1);
    t1 = network->addSlice(*t1, Dims3{0, 0, 0}, Dims3{2, 5, 5}, Dims3{1, 1, 1})->getOutput(0);
    t2 = network->addSlice(*t2, Dims3{0, 0, 0}, Dims3{2, 5, 5}, Dims3{1, 1, 1})->getOutput(0);
    std::vector<ITensor *> vc = {t, t1, t2};
    t = network->addConcatenation(vc.data(), (int)vc.size())->getOutput(0);
    network->markOutput(*t);

    builder->setMaxBatchSize(1);
    builder->setMaxWorkspaceSize(1 << 30);
    ICudaEngine *engine = builder->buildCudaEngine(*network);  // the assertion fires here
    if (engine)
        engine->destroy();
    network->destroy();
    builder->destroy();
}

int main() {
    test();
    return 0;
}

Hi, did you find any solution? I am facing a similar problem.

Hello,
I also get this problem with my ONNX model when running it on my Jetson Xavier, which uses JetPack 4.4.
When I run the same ONNX model on my PC, which has TRT 7.2.1.6, cuDNN 8.0.4, and CUDA 11.0, it works fine.

Does anyone have a clue what the source of the problem is and how to fix it?
Thanks,
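For anyone comparing the two setups, one quick sanity check is to print the TensorRT version a binary was actually compiled against, using the macros from NvInferVersion.h (a minimal sketch; the header and macro names are taken from the TensorRT development package as I understand it):

#include <cstdio>
#include "NvInferVersion.h"  // TensorRT version macros

int main() {
    // Prints the compile-time TensorRT version, e.g. to compare
    // the JetPack install on the Xavier with the desktop install.
    std::printf("TensorRT %d.%d.%d.%d\n",
                NV_TENSORRT_MAJOR, NV_TENSORRT_MINOR,
                NV_TENSORRT_PATCH, NV_TENSORRT_BUILD);
    return 0;
}

If a build-time vs. run-time mismatch is suspected, the value returned by getInferLibVersion() from the runtime library can be compared against these macros as well.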

Hello, did you find out the root cause of this problem? I’m facing the same issue. Thanks