Assertion `!formats.empty()' failed while using TopK layer

I'm developing two plugins and use a TopK layer on one of the outputs between them:

...
nvinfer1::IPluginV2* pluginBoxGenerator = new tensorrt::DetectionBoxPlugin<Type>(params);
auto pluginlayer = network->addPluginV2(boxgeneratorinputs.data(), boxgeneratorinputs.size(), *pluginBoxGenerator);

std::vector<nvinfer1::ITensor*> boxprocessorinputs;
boxprocessorinputs.push_back(pluginlayer->getOutput(0));

auto topk_layer = network->addTopK(*pluginlayer->getOutput(1), nvinfer1::TopKOperation::kMAX,
                                   int(sizeThreshold_), 1 << 2);

auto identity = network->addIdentity(*topk_layer->getOutput(0));
auto identity1 = network->addIdentity(*topk_layer->getOutput(1));

boxprocessorinputs.push_back(identity->getOutput(0));
boxprocessorinputs.push_back(identity1->getOutput(0));

typename tensorrt::DetectionProcessPlugin<Type>::DetectionBoxProcessParams paramsproc{sharedMode_,
                                                                                      scoreThreshold_,
                                                                                      overlapThreshold_,
                                                                                      sizeThreshold_,
                                                                                      lbls};

nvinfer1::IPluginV2* pluginProcessProcessor = new tensorrt::DetectionProcessPlugin<Type>(paramsproc);
auto pluginprocesslayer = network->addPluginV2(boxprocessorinputs.data(), boxprocessorinputs.size(), *pluginProcessProcessor);
...

and I get the following crash:

Reading input data ... 
Reading neural network ... 
TENSORRT INFO: Plugin Creator registration succeeded - GridAnchor_TRT
TENSORRT INFO: Plugin Creator registration succeeded - NMS_TRT
TENSORRT INFO: Plugin Creator registration succeeded - Reorg_TRT
TENSORRT INFO: Plugin Creator registration succeeded - Region_TRT
TENSORRT INFO: Plugin Creator registration succeeded - Clip_TRT
TENSORRT INFO: Plugin Creator registration succeeded - LReLU_TRT
TENSORRT INFO: Plugin Creator registration succeeded - PriorBox_TRT
TENSORRT INFO: Plugin Creator registration succeeded - Normalize_TRT
TENSORRT INFO: Plugin Creator registration succeeded - RPROI_TRT
TENSORRT INFO: Original: 8 layers
TENSORRT INFO: After dead-layer removal: 8 layers
TENSORRT INFO: After scale fusion: 8 layers
TENSORRT INFO: After vertical fusions: 8 layers
TENSORRT INFO: After swap: 8 layers
TENSORRT INFO: After final dead-layer removal: 8 layers
TENSORRT INFO: After tensor merging: 8 layers
TENSORRT INFO: After concat removal: 8 layers
TENSORRT INFO: Graph construction and optimization completed in 0.000263332 seconds.

v5universal-tensor-test-data: ../builder/cudnnBuilder2.cpp:834: virtual std::vector<nvinfer1::query::RequirementsCombination> nvinfer1::builder::EngineTacticSupply::getSupportedFormats(const nvinfer1::builder::Node&, const nvinfer1::query::Ports<nvinfer1::query::AbstractTensor>&): Assertion `!formats.empty()' failed.

Thread 1 "v5universal-ten" received signal SIGABRT, Aborted.
0x00007fffcf3c5428 in __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:54
54	../sysdeps/unix/sysv/linux/raise.c: No such file or directory.
(gdb) bt
#0  0x00007fffcf3c5428 in __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:54
#1  0x00007fffcf3c702a in __GI_abort () at abort.c:89
#2  0x00007fffcf3bdbd7 in __assert_fail_base (fmt=<optimized out>, assertion=assertion@entry=0x7fffdbda026d "!formats.empty()", 
    file=file@entry=0x7fffdbda0250 "../builder/cudnnBuilder2.cpp", line=line@entry=834, 
    function=function@entry=0x7fffdbda1980 "virtual std::vector<nvinfer1::query::RequirementsCombination> nvinfer1::builder::EngineTacticSupply::getSupportedFormats(const nvinfer1::builder::Node&, const nvinfer1::query::Ports<nvinfer1::query::A"...) at assert.c:92
#3  0x00007fffcf3bdc82 in __GI___assert_fail (assertion=0x7fffdbda026d "!formats.empty()", file=0x7fffdbda0250 "../builder/cudnnBuilder2.cpp", line=834, 
    function=0x7fffdbda1980 "virtual std::vector<nvinfer1::query::RequirementsCombination> nvinfer1::builder::EngineTacticSupply::getSupportedFormats(const nvinfer1::builder::Node&, const nvinfer1::query::Ports<nvinfer1::query::A"...) at assert.c:101
#4  0x00007fffdb8a8831 in nvinfer1::builder::EngineTacticSupply::getSupportedFormats(nvinfer1::builder::Node const&, nvinfer1::query::Ports<nvinfer1::query::AbstractTensor> const&) () from /usr/lib/x86_64-linux-gnu/libnvinfer.so.5
#5  0x00007fffdb892c41 in nvinfer1::builder::chooseFormatsAndTactics(nvinfer1::builder::Graph&, nvinfer1::builder::TacticSupply&, std::unordered_map<std::string, std::vector<float, std::allocator<float> >, std::hash<std::string>, std::equal_to<std::string>, std::allocator<std::pair<std::string const, std::vector<float, std::allocator<float> > > > >*, bool) () from /usr/lib/x86_64-linux-gnu/libnvinfer.so.5
#6  0x00007fffdb8af114 in nvinfer1::builder::makeEngineFromGraph(nvinfer1::CudaEngineBuildConfig const&, nvinfer1::rt::HardwareContext const&, nvinfer1::builder::Graph&, std::unordered_map<std::string, std::vector<float, std::allocator<float> >, std::hash<std::string>, std::equal_to<std::string>, std::allocator<std::pair<std::string const, std::vector<float, std::allocator<float> > > > >*, int) () from /usr/lib/x86_64-linux-gnu/libnvinfer.so.5
#7  0x00007fffdb8b3776 in nvinfer1::builder::buildEngine(nvinfer1::CudaEngineBuildConfig&, nvinfer1::rt::HardwareContext const&, nvinfer1::Network const&) ()
   from /usr/lib/x86_64-linux-gnu/libnvinfer.so.5
#8  0x00007fffdb921e0d in nvinfer1::builder::Builder::buildCudaEngine(nvinfer1::INetworkDefinition&) () from /usr/lib/x86_64-linux-gnu/libnvinfer.so.5
#9  0x00000000007dfb0c in v5::Network<(v5::DataType)4, (v5::PlatformType)3>::build (this=0x7fffffffd630)
    at /home/timofey/Work/v5neural-network-new/tests/check-nn/../../sources/apis/tensorrt/network.h:159
#10 0x00000000007d85f7 in RunTest<(v5::DataType)4, (v5::PlatformType)3> (input_data_name_s=std::vector of length 3, capacity 4 = {...}, 
    neural_network_name="../tests/data/detection_output/detection_false.ascii", reference_data_name_s=std::vector of length 1, capacity 1 = {...})
    at /home/timofey/Work/v5neural-network-new/tests/check-nn/main.cpp:201
#11 0x00000000007d4295 in main (argc=11, argv=0x7fffffffdb58) at /home/timofey/Work/v5neural-network-new/tests/check-nn/main.cpp:386
(gdb)

Both plugins have:

bool supportsFormat(nvinfer1::DataType type, nvinfer1::PluginFormat format) const {
    return (format == nvinfer1::PluginFormat::kNCHW) && (type == nvinfer1::DataType::kFLOAT);
}
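
For what it's worth, a quick way to see which (type, format) combinations the builder actually probes is to log every call — a debugging sketch, assuming the plugin source already pulls in <iostream>:

bool supportsFormat(nvinfer1::DataType type, nvinfer1::PluginFormat format) const {
    // Debugging sketch: print each (type, format) pair the builder asks about,
    // then apply the same acceptance rule as above.
    std::cout << "supportsFormat: type=" << static_cast<int>(type)
              << ", format=" << static_cast<int>(format) << std::endl;
    return (format == nvinfer1::PluginFormat::kNCHW) && (type == nvinfer1::DataType::kFLOAT);
}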

At first I did not use the Identity layers at all; then I added them just to be sure.

So what is wrong? My guess is that the TopK layer's second output has the kINT32 data type, but according to the documentation it should be converted to float automatically.

Ubuntu 14
CUDA 10
TensorRT 5.0.2.6

Hello,

We are triaging and will keep you updated.

Hello,

Per engineering: this question is similar to https://devtalk.nvidia.com/default/topic/1047675/tensorrt/-tensorrt-error-elementwise-elementwise-inputs-must-not-be-int32 .
For the TopK layer, the second output tensor's data type is always Int32. Currently, TensorRT doesn't support automatic or forced conversion of the TopK layer's second output tensor's data type to float.
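
To illustrate why this surfaces as `!formats.empty()`: a plausible reading (a toy model, not TensorRT source) is that the builder keeps only the (type, format) combinations that every port of a node can accept; since the plugin above accepts only (kFLOAT, kNCHW) while the TopK indices port is fixed at kINT32, the intersection is empty and the assertion aborts:

#include <cassert>
#include <initializer_list>
#include <utility>
#include <vector>

// Toy model (not TensorRT source) of the format-selection step.
enum class DataType { kFLOAT, kHALF, kINT8, kINT32 };
enum class Format   { kNCHW };

bool pluginSupports(DataType t, Format f) {
    // Same acceptance rule as the supportsFormat() shown above.
    return f == Format::kNCHW && t == DataType::kFLOAT;
}

int main() {
    const DataType indicesType = DataType::kINT32; // fixed by TopK output 1
    std::vector<std::pair<DataType, Format>> formats;
    for (DataType t : {DataType::kFLOAT, DataType::kHALF,
                       DataType::kINT8, DataType::kINT32}) {
        // Keep only combinations acceptable to both the plugin and the indices port.
        if (pluginSupports(t, Format::kNCHW) && t == indicesType)
            formats.emplace_back(t, Format::kNCHW);
    }
    assert(!formats.empty()); // aborts: no combination satisfies all ports
    return 0;
}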

OK, thanks. But is there any way to determine the exact data type that will arrive on a particular input?
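
(For reference, ITensor::getType() reports the data type declared at network-definition time, so each tensor can be checked before it is wired into a plugin — a sketch reusing topk_layer from the snippet above, assuming <iostream> is included:)

// Sketch: query the declared data type of each TopK output before
// passing it to a plugin (variable names from the snippet above).
auto typeName = [](nvinfer1::DataType t) -> const char* {
    switch (t) {
        case nvinfer1::DataType::kFLOAT: return "kFLOAT";
        case nvinfer1::DataType::kHALF:  return "kHALF";
        case nvinfer1::DataType::kINT8:  return "kINT8";
        case nvinfer1::DataType::kINT32: return "kINT32";
    }
    return "unknown";
};

std::cout << "values:  " << typeName(topk_layer->getOutput(0)->getType()) << std::endl; // same type as input
std::cout << "indices: " << typeName(topk_layer->getOutput(1)->getType()) << std::endl; // always kINT32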