TensorRT 3.0 IShuffleLayer cannot transpose tensor from CHW format to HWC

Hi, everyone.

I’m trying to implement my DNN model inference with TensorRT 3. I want to convert input data from HWC format to CHW, but when I use the transpose operation of IShuffleLayer, it seems that I can’t permute the channel dimension with the spatial dimensions. Here’s my code for testing the permutation:

void test(float* in, float* out)
{
        Logger gLogger;
        nvinfer1::IBuilder* builder = createInferBuilder(gLogger);
        nvinfer1::INetworkDefinition* network = builder->createNetwork();

        //  Create input
        auto data = network->addInput("data", nvinfer1::DataType::kFLOAT, nvinfer1::DimsCHW{4, 4, 3});
        assert(data != nullptr);

        // Permute
        auto ps = network->addShuffle(*data);
        assert(ps != nullptr);

        std::cout << "ps transpose" << std::endl;
        ps->setFirstTranspose(nvinfer1::Permutation{1, 2, 0});

        // Set output layer
        ps->getOutput(0)->setName("out");
        network->markOutput(*ps->getOutput(0));

        // Build the engine
        builder->setMaxBatchSize(1);
        builder->setMaxWorkspaceSize(1 << 20);

        nvinfer1::ICudaEngine* engine = builder->buildCudaEngine(*network);
        network->destroy();
        assert(engine != nullptr);
        nvinfer1::IExecutionContext* context = engine->createExecutionContext();
        assert(context != nullptr);

        assert(engine->getNbBindings() == 2);
        int inputIndex = engine->getBindingIndex("data");
        int outputIndex = engine->getBindingIndex("out");

        void* buffers[2];
        CHECK(cudaMalloc(&buffers[inputIndex], 48 * sizeof(float)));
        CHECK(cudaMalloc(&buffers[outputIndex], 48 * sizeof(float)));
        cudaStream_t stream;
        CHECK(cudaStreamCreate(&stream));

        CHECK(cudaMemcpyAsync(buffers[inputIndex], in, 48 * sizeof(float), cudaMemcpyHostToDevice, stream));
        context->enqueue(1, buffers, stream, nullptr);
        CHECK(cudaMemcpyAsync(out, buffers[outputIndex], 48 * sizeof(float), cudaMemcpyDeviceToHost, stream));
        cudaStreamSynchronize(stream);

        // release the stream and the buffers
        cudaStreamDestroy(stream);
        CHECK(cudaFree(buffers[inputIndex]));
        CHECK(cudaFree(buffers[outputIndex]));
        engine->destroy();
        builder->destroy();
    }

Input data is a float buffer, and when I execute this code I get following errors:

helpers.cpp:39: nvinfer1::DimsCHW nvinfer1::getCHW(const nvinfer1::Dims&): Assertion `isIndexedCHW(d)' failed.
The program has unexpectedly finished.

And it seems OK when I set the permutation to [0, 2, 1], which keeps the channel dimension and only transposes the spatial dimensions.

Any suggestion?
Thank you very much in advance!

Hi,

For TensorRT 3, reshape only applies to constant weights.
Tensor reshapes are automatically dropped when creating a TensorRT engine.

Thanks and sorry for the inconvenience.

Hi AastaLLL,

Thanks for your reply. Does it mean that I can only convert the data format from HWC to CHW before feeding it to the network?

Furthermore, I’m still confused about IShuffleLayer. If this layer can’t be used for permuting and reshaping the input or output data of the network, under what circumstances can it be used?

Hi,

Sorry, there was a misreading in our previous post.
Your issue comes from a wrong permutation parameter; you can find more information in the TensorRT documentation:

nvinfer1::Permutation Struct Reference @ /usr/share/doc/tensorrt/html
int nvinfer1::Permutation::order[Dims::MAX_DIMS]

The elements of the permutation. The permutation is applied as outputDimension = permutation.order[inputDimension], so to permute from CHW order to HWC order, the required permutation is [1, 2, 0], and to permute from HWC to CHW, the required permutation is [2, 0, 1].
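To make that mapping concrete, here is a minimal sketch of how a Permutation rearranges the extents of a Dims, assuming the standard nvinfer1 definitions (applyPermutation is a hypothetical helper for illustration, not a TensorRT API):

#include <cassert>
#include <NvInfer.h>

// Hypothetical helper: outputDims.d[i] = inputDims.d[perm.order[i]].
// Only the extents are recomputed here; dimension types are left untouched.
nvinfer1::Dims applyPermutation(const nvinfer1::Dims& in, const nvinfer1::Permutation& perm)
{
    nvinfer1::Dims out = in;
    for (int i = 0; i < in.nbDims; ++i)
        out.d[i] = in.d[perm.order[i]];
    return out;
}

int main()
{
    nvinfer1::DimsCHW chw{3, 4, 4};                          // C=3, H=4, W=4
    nvinfer1::Dims hwc = applyPermutation(chw, nvinfer1::Permutation{1, 2, 0});
    assert(hwc.d[0] == 4 && hwc.d[1] == 4 && hwc.d[2] == 3); // H, W, C
    return 0;
}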
Thanks.

Hi,

I’ve tried different permutation parameters. When it was set to [1, 2, 0] or [2, 0, 1], the same error occurred as in my first post:

helpers.cpp:39: nvinfer1::DimsCHW nvinfer1::getCHW(const nvinfer1::Dims&): Assertion `isIndexedCHW(d)' failed.
The program has unexpectedly finished.

And as I mentioned, it only works when the permutation is [0, 2, 1], which just permutes the HW dimensions.

Any suggestion for that?

Thanks.

Hi,

Thanks for your feedback.
We can reproduce this issue now.

We are checking this problem internally and will update you later.
Thanks and sorry for the inconvenience.

Hi,

Some setup was missing in the shuffle layer.

For example, to convert a format from CHW to HWC:

... ...
//  Create input
auto data = network->addInput("data", nvinfer1::DataType::kFLOAT, nvinfer1::DimsCHW{3, 4, 4});
assert(data != nullptr);

// Permute
auto ps = network->addShuffle(*data);
assert(ps != nullptr);

ps->setReshapeDimensions(DimsCHW(4, 4, 3));
ps->setFirstTranspose(nvinfer1::Permutation{1, 2, 0});

// Set output layer
ps->getOutput(0)->setName("out");
network->markOutput(*ps->getOutput(0));
... ...
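For the HWC-to-CHW direction asked about originally, the same pattern should apply. A minimal sketch, assuming the input buffer holds 4x4x3 HWC data declared with DimsCHW{4, 4, 3} as in the first post:

//  Input buffer holds 4x4x3 HWC data
auto data = network->addInput("data", nvinfer1::DataType::kFLOAT, nvinfer1::DimsCHW{4, 4, 3});
assert(data != nullptr);

auto ps = network->addShuffle(*data);
assert(ps != nullptr);

// The first transpose [2, 0, 1] moves the channel dimension to the front,
// and the reshape re-tags the result as CHW with the same volume (48 elements).
ps->setReshapeDimensions(nvinfer1::DimsCHW(3, 4, 4));
ps->setFirstTranspose(nvinfer1::Permutation{2, 0, 1});

Note that the shuffle layer applies the first transpose before the reshape, regardless of the order of the setter calls.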

Thanks.

Hi AastaLLL,

It works. Thanks for your help.

Furthermore, based on the discussion above, I think the shuffle layer can only be applied to 3-dimensional data. Is it possible to use this layer, or another TensorRT API, to implement a pixel shuffle layer like PyTorch’s pixel_shuffle:

def pixel_shuffle(input, upscale_factor):

    batch_size, channels, in_height, in_width = input.size()
    channels //= upscale_factor ** 2

    out_height = in_height * upscale_factor
    out_width = in_width * upscale_factor

    input_view = input.contiguous().view(
        batch_size, channels, upscale_factor, upscale_factor,
        in_height, in_width)

    shuffle_out = input_view.permute(0, 1, 4, 2, 5, 3).contiguous()
    return shuffle_out.view(batch_size, channels, out_height, out_width)

http://pytorch.org/docs/master/_modules/torch/nn/functional.html#pixel_shuffle

I have implemented pixel shuffle by stacking two shuffle layers. The code looks like this:

......

    //  Create input, say its shape is {4, 3, 3}
    auto data = network->addInput("data", nvinfer1::DataType::kFLOAT, nvinfer1::DimsCHW{4, 3, 3});
    assert(data != nullptr);

    // ps1
    auto ps1 = network->addShuffle(*data);
    assert(ps1 != nullptr);

    // declare the reshape params, two index dimensions followed by CHW dimensions
    nvinfer1::Dims dims1{5, 1, 2, 2, 3, 3};    // nbDims = 5, extents {1, 2, 2, 3, 3}
    dims1.type[1] = dims1.type[0] = DimensionType::kINDEX;
    dims1.type[2] = DimensionType::kCHANNEL;
    dims1.type[3] = dims1.type[4] = DimensionType::kSPATIAL;

    ps1->setReshapeDimensions(dims1);

    // ps2
    auto ps2 = network->addShuffle(*ps1->getOutput(0));
    assert(ps2 != nullptr);

    ps2->setFirstTranspose(nvinfer1::Permutation{0, 3, 1, 4, 2});    // (c, r1, r2, h, w) -> (c, h, r1, w, r2)

    ps2->setReshapeDimensions(DimsCHW{1, 6, 6});    // out shape {1, 6, 6}

    // Set output layer
    ......
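To sanity-check the engine output, a plain CPU reference of the same pixel shuffle on a flat CHW buffer can be used for comparison (a hypothetical test helper, not part of TensorRT):

#include <cassert>
#include <vector>

// Hypothetical CPU reference: input is (c*r*r) x h x w, output is c x (h*r) x (w*r).
std::vector<float> pixelShuffleRef(const std::vector<float>& in, int c, int h, int w, int r)
{
    assert(in.size() == static_cast<size_t>(c * r * r * h * w));
    std::vector<float> out(in.size());
    for (int ci = 0; ci < c; ++ci)
        for (int yi = 0; yi < h; ++yi)
            for (int ry = 0; ry < r; ++ry)
                for (int xi = 0; xi < w; ++xi)
                    for (int rx = 0; rx < r; ++rx)
                    {
                        int src = (((ci * r + ry) * r + rx) * h + yi) * w + xi;
                        int dst = (ci * h * r + yi * r + ry) * w * r + xi * r + rx;
                        out[dst] = in[src];
                    }
    return out;
}

For the {4, 3, 3} input above with upscale factor 2, this produces the expected {1, 6, 6} output.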

Many thanks for your kind help.


Hello, have you solved this problem?

I use this to convert HWC to CHW

Dims dims = prevData->getDimensions();
IShuffleLayer* shu = network->addShuffle(*prevData);
shu->setReshapeDimensions(Dims3(dims.d[1], dims.d[2], dims.d[0]));
shu->setSecondTranspose(nvinfer1::Permutation{2, 0, 1});