TRT has no valid tactics for this config

I have a network in TensorFlow. I converted it to ONNX and tried to run it with TensorRT, but I get this error:

*************** Autotuning format combination: Float(1,16,256,26624) -> Float(1,32,1024,106496) ***************
--------------- Timing Runner: unpool4/unpool/conv2d_transpose (CudnnDeconvolution)
CudnnDeconvolution has no valid tactics for this config, skipping
--------------- Timing Runner: unpool4/unpool/conv2d_transpose (CaskDeconvolution)
CaskDeconvolution has no valid tactics for this config, skipping
--------------- Timing Runner: unpool4/unpool/conv2d_transpose (GemmDeconvolution)
Tactic: 0 skipped. Scratch requested: 532480, available: 0
Fastest Tactic: -3360065831133338131 Time: 3.40282e+38
Internal error: could not find any implementation for node unpool4/unpool/conv2d_transpose, try increasing the workspace size with IBuilder::setMaxWorkspaceSize()
C:\source\builder\tacticOptimizer.cpp (1523) - OutOfMemory Error in nvinfer1::builder::`anonymous-namespace'::LeafCNode::computeCosts: 0

Windows 10
TensorRT 7
CUDA 10.2
RTX 2080 ti
Tensorflow 1.14
Nvidia driver version 441.87

My code to build the engine:

#define BATCH_SIZE 1
    nvinfer1::IBuilder* builder = nvinfer1::createInferBuilder(gLogger);
    const auto explicitBatch = 1U << static_cast<uint32_t>(nvinfer1::NetworkDefinitionCreationFlag::kEXPLICIT_BATCH);
    nvinfer1::INetworkDefinition* network = builder->createNetworkV2(explicitBatch);
    nvonnxparser::IParser* parser = nvonnxparser::createParser(*network, gLogger);
    parser->parseFromFile(path, 1);
    auto maxsize = 10000000000;
    nvinfer1::IBuilderConfig* config = builder->createBuilderConfig();
    nvinfer1::ICudaEngine* engine = builder->buildEngineWithConfig(*network, *config);

my ONNX model (Google Drive)

Help me please!


Could you please share the ONNX model so we can help better?
Meanwhile, you can also use “trtexec” command line tool for benchmarking & generating serialized engines from models.


my ONNX model (Google Drive)


I tried to use “trtexec”:

trtexec --verbose=true --fp16 --explicitBatch --workspace=1024 --onnx=512_tf1.14_test.onnx --saveEngine=512_tf1.14_test_fp16.trt

and it works: 512_tf1.14_test_fp16.trt is created.
So the problem is in my code?
How do I do it correctly?


trtexec --workspace=1024 --onnx=512_tf1.14_test.onnx --saveEngine=512_tf1.14_test.trt --minShapes='Input:0':1x512x512x3 --optShapes='Input:0':1x1024x1024x3 --maxShapes='Input:0':1x3072x3072x3 --explicitBatch --shapes='Input:0':1x-1x-1x3

Is it possible to set dynamic sizes for Input:0?


The dynamic-shapes command seems to be working on TRT 7. Could you please try it and let us know in case of any issues?

&&&& PASSED TensorRT.trtexec # ./trtexec --onnx=/test/512_tf1.14_test.onnx --saveEngine=512_tf1.14_test.trt --minShapes='Input:0':1x512x512x3 --optShapes='Input:0':1x1024x1024x3 --maxShapes='Input:0':1x3072x3072x3 --explicitBatch --shapes='Input:0':1x512x512x3 --verbose

Please refer to the sample code below:


I solved the error.

I had written:

And the correct way is to write:
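The two snippets above did not survive the copy. Given the code in the first post, the likely fix (an assumption based on the error message, not necessarily the poster's exact code) is to apply the workspace size to the IBuilderConfig rather than the IBuilder, since buildEngineWithConfig() uses the config's workspace value, which defaults to 0 — matching the "Scratch requested: 532480, available: 0" line in the log:

    // Assumed fix: set the workspace on the config passed to buildEngineWithConfig().
    // IBuilder::setMaxWorkspaceSize() is deprecated in TensorRT 7 and is not
    // consulted when building with a config object.
    nvinfer1::IBuilderConfig* config = builder->createBuilderConfig();
    config->setMaxWorkspaceSize(maxsize);  // e.g. 1ULL << 30 for 1 GiB
    nvinfer1::ICudaEngine* engine = builder->buildEngineWithConfig(*network, *config);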



Now I am solving the problem with the dynamic input size.
It looks like my ONNX network has static dimensions, so trtexec skips dynamic mode.
I still do not understand how to set a dynamic network input.
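For reference, making the input dynamic takes two things: the ONNX graph itself must be exported with dynamic (-1) height/width — a model saved with static dimensions stays static, which matches what trtexec reported — and, in the C++ API, an optimization profile must be added to the builder config before building. A minimal sketch, assuming the input tensor is named "Input:0" with NHWC layout as in the trtexec commands above:

    // Assumption: the ONNX input "Input:0" was re-exported as 1x-1x-1x3 (dynamic H/W).
    nvinfer1::IOptimizationProfile* profile = builder->createOptimizationProfile();
    profile->setDimensions("Input:0", nvinfer1::OptProfileSelector::kMIN, nvinfer1::Dims4(1, 512, 512, 3));
    profile->setDimensions("Input:0", nvinfer1::OptProfileSelector::kOPT, nvinfer1::Dims4(1, 1024, 1024, 3));
    profile->setDimensions("Input:0", nvinfer1::OptProfileSelector::kMAX, nvinfer1::Dims4(1, 3072, 3072, 3));
    config->addOptimizationProfile(profile);  // must precede buildEngineWithConfig()

At inference time the actual input dimensions are then set per execution context with IExecutionContext::setBindingDimensions() before enqueueing.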