TensorRT explicit batch with DLA

I am implementing my own GStreamer inference plugin with TensorRT. While looking at the DeepStream gst-nvinfer source code, I noticed that when the DLA is used, a network with implicit batch dimensions is created in nvdsinfer_model_builder.cpp. However, nvdsinfer_backend.cpp contains a backend context for an engine that has both explicit batch and DLA support:

    if (!(*engine)->hasImplicitBatchDimension())
    {
        /* Engine built with fulldims support */
        assert((*engine)->getNbOptimizationProfiles() > 0);

        if (engine->hasDla())
        {
            backend = std::make_unique<DlaFullDimTrtBackendContext>(
                std::move(cudaCtx), engine, DEFAULT_CONTEXT_PROFILE_IDX);
        }
        else
        {
            backend = std::make_unique<FullDimTrtBackendContext>(
                std::move(cudaCtx), engine, DEFAULT_CONTEXT_PROFILE_IDX);
        }
    }

Is it possible then to build a TRT engine with explicit batch that can run on DLA?

Hi,
Can you try running your model with the trtexec command and share the “--verbose” log in case the issue persists?
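
For example, something along these lines (the model path and DLA core index are placeholders, not taken from this thread):

    trtexec --onnx=model.onnx --useDLACore=0 --allowGPUFallback --fp16 --verbose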

You can refer to the link below for the list of supported operators; in case any operator is not supported, you need to create a custom plugin to support that operation.

Also, we request you to share your model and script, if not shared already, so that we can help you better.

Meanwhile, for some common errors and queries, please refer to the link below:

Thanks!

Hi,

Currently there is no issue; it will take me some time to finish the plugin and test it. However, this is something the engineering team should be aware of and state clearly in the documentation. Is it possible to forward this question to them?

Hi,

We hope the following doc may help you. If you need further assistance, we would like to move this post to the DeepStream forum to get better help.

Thank you.

No, it does not help. I have already read the DLA section of the docs. Forget about DeepStream; my question is TRT-related:

Can one create a network with an explicit batch if a DLA core is used?

Hi,

Explicit batch is always allowed for DLA.
DLA also allows the user to use “implicit batch” mode, but in that mode it can only run at the max batch size.
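
For reference, a rough sketch of an explicit-batch build targeted at a DLA core with the TensorRT C++ builder API could look like the following. The ONNX path, input name, shapes, and DLA core index are placeholders, not from this thread; note that DLA also requires FP16 or INT8 precision, and the optimization profile here keeps min/opt/max dimensions equal:

    #include <iostream>
    #include <NvInfer.h>
    #include <NvOnnxParser.h>

    using namespace nvinfer1;

    // Minimal logger required by the TensorRT API.
    class Logger : public ILogger
    {
        void log(Severity severity, const char* msg) noexcept override
        {
            if (severity <= Severity::kWARNING)
                std::cerr << msg << std::endl;
        }
    } gLogger;

    ICudaEngine* buildDlaExplicitBatchEngine()
    {
        IBuilder* builder = createInferBuilder(gLogger);

        // Explicit batch: create the network with the kEXPLICIT_BATCH flag.
        const uint32_t flags = 1U << static_cast<uint32_t>(
            NetworkDefinitionCreationFlag::kEXPLICIT_BATCH);
        INetworkDefinition* network = builder->createNetworkV2(flags);

        // Parse the model (placeholder path).
        auto parser = nvonnxparser::createParser(*network, gLogger);
        parser->parseFromFile("model.onnx",
            static_cast<int>(ILogger::Severity::kWARNING));

        IBuilderConfig* config = builder->createBuilderConfig();

        // Target DLA core 0 and allow unsupported layers to fall back to the GPU.
        config->setDefaultDeviceType(DeviceType::kDLA);
        config->setDLACore(0);
        config->setFlag(BuilderFlag::kGPU_FALLBACK);
        config->setFlag(BuilderFlag::kFP16); // DLA needs FP16 or INT8

        // Explicit batch needs an optimization profile; min/opt/max kept equal for DLA.
        IOptimizationProfile* profile = builder->createOptimizationProfile();
        const Dims4 dims{4, 3, 224, 224};
        profile->setDimensions("input", OptProfileSelector::kMIN, dims);
        profile->setDimensions("input", OptProfileSelector::kOPT, dims);
        profile->setDimensions("input", OptProfileSelector::kMAX, dims);
        config->addOptimizationProfile(profile);

        return builder->buildEngineWithConfig(*network, *config);
    }

With implicit batch, the maximum batch size would instead be set on the builder via setMaxBatchSize, which is the “max batch” mentioned above.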

Thank you.

Then why does the documentation use implicit batch in the example?

Hi,

Sorry, I missed conveying another point.
Please refer to my edited response.

Thank you.

Alright, thank you for the clarification!
