Description
When the network input has a dynamic batch dimension (-1), addSlice rejects a slice size containing -1, so the channel-split layer of ShuffleNet-v2 cannot be built.
Environment
TensorRT Version: 7.1
GPU Type: 2070
Nvidia Driver Version: 440
CUDA Version: 10.2
CUDNN Version: 8.0
Operating System + Version: Ubuntu 18
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):
Taking the channel split in ShuffleNet-v2 as an example: with a fixed input size I could build this layer with the addSlice API. However, I now want to use a dynamic input size:
```cpp
const auto explicitBatch = 1U << static_cast<uint32_t>(NetworkDefinitionCreationFlag::kEXPLICIT_BATCH);
INetworkDefinition* network = builder->createNetworkV2(explicitBatch);
// Batch dimension is -1, i.e. dynamic.
ITensor* data = network->addInput(INPUT_BLOB_NAME, dt, Dims4{-1, 3, INPUT_H, INPUT_W});
assert(data);
```
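For completeness, a dynamic input shape in TensorRT 7 also requires an optimization profile at build time. A minimal sketch; the kMIN/kOPT/kMAX batch sizes 1/8/32 are illustrative assumptions, not values from my setup:

```cpp
// Sketch: register min/opt/max shapes for the dynamic input so the builder
// can plan for the range of runtime batch sizes. Bounds 1/8/32 are examples.
IOptimizationProfile* profile = builder->createOptimizationProfile();
profile->setDimensions(INPUT_BLOB_NAME, OptProfileSelector::kMIN, Dims4{1, 3, INPUT_H, INPUT_W});
profile->setDimensions(INPUT_BLOB_NAME, OptProfileSelector::kOPT, Dims4{8, 3, INPUT_H, INPUT_W});
profile->setDimensions(INPUT_BLOB_NAME, OptProfileSelector::kMAX, Dims4{32, 3, INPUT_H, INPUT_W});
IBuilderConfig* config = builder->createBuilderConfig();
config->addOptimizationProfile(profile);
```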
Then I used addSlice, where d holds the input dimensions ([-1, 116, 40, 40] here):

```cpp
// d.d[0] is -1 at build time, so both slice sizes contain a negative dimension.
ISliceLayer* s1 = network->addSlice(input, Dims4{0, 0, 0, 0}, Dims4{d.d[0], d.d[1] / 2, d.d[2], d.d[3]}, Dims4{1, 1, 1, 1});
ISliceLayer* s2 = network->addSlice(input, Dims4{0, d.d[1] / 2, 0, 0}, Dims4{d.d[0], d.d[1] / 2, d.d[2], d.d[3]}, Dims4{1, 1, 1, 1});
```
Expected Result:
Input to the addSlice layers: [-1, 116, 40, 40]
The two outputs of this layer should be [-1, 58, 40, 40] and [-1, 58, 40, 40].
Problem:
Because the first dimension is -1 (a placeholder for the dynamic batch size), the build fails with:

[E] [TRT] (Unnamed Layer* 20) [Slice]: slice size cannot have negative dimension, size = [-1,58,40,40]
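In case it helps triage: a possible workaround is to stop passing -1 in the static size Dims and instead supply the size as a runtime shape tensor via ISliceLayer::setInput (input index 1 is the start, index 2 the size). A minimal sketch, assuming the TensorRT 7 dynamic-shape API; channelSplitHalf is a hypothetical helper name:

```cpp
#include "NvInfer.h"
using namespace nvinfer1;

// Split a [-1, C, H, W] tensor in half along the channel axis.
// half == 0 returns the first C/2 channels, half == 1 the second C/2.
ISliceLayer* channelSplitHalf(INetworkDefinition* network, ITensor& input, int half)
{
    // Runtime shape of the input, e.g. [N, 116, 40, 40].
    ITensor* shape = network->addShape(input)->getOutput(0);

    // Divisor [1, 2, 1, 1]: halve only the channel dimension.
    // (static storage: Weights data must outlive the build.)
    static const int32_t divisorData[4] = {1, 2, 1, 1};
    Weights divisorWeights{DataType::kINT32, divisorData, 4};
    ITensor* divisor = network->addConstant(Dims{1, {4}}, divisorWeights)->getOutput(0);

    // size = shape / [1, 2, 1, 1]  ->  [N, C/2, H, W], computed at runtime.
    ITensor* size = network->addElementWise(*shape, *divisor,
                                            ElementWiseOperation::kFLOOR_DIV)->getOutput(0);

    // Placeholder start/size; overridden by setInput below.
    ISliceLayer* slice = network->addSlice(input, Dims4{0, 0, 0, 0},
                                           Dims4{0, 0, 0, 0}, Dims4{1, 1, 1, 1});
    if (half == 1)
    {
        // start = [0, C/2, 0, 0], also built as a shape tensor.
        static const int32_t maskData[4] = {0, 1, 0, 0};
        Weights maskWeights{DataType::kINT32, maskData, 4};
        ITensor* mask = network->addConstant(Dims{1, {4}}, maskWeights)->getOutput(0);
        ITensor* start = network->addElementWise(*size, *mask,
                                                 ElementWiseOperation::kPROD)->getOutput(0);
        slice->setInput(1, *start);
    }
    slice->setInput(2, *size);
    return slice;
}
```

With this, the slice size is never a negative constant; it is derived from the actual input shape at inference time, so the dynamic batch dimension flows through to both [-1, 58, 40, 40] outputs.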