DriveWorks DNN plugin getOutputDimensions problem

Please provide the following info (tick the boxes after creating this topic):
Software Version
DRIVE OS 6.0.6
[*] DRIVE OS 6.0.5
DRIVE OS 6.0.4 (rev. 1)
DRIVE OS 6.0.4 SDK
other

Target Operating System
[*] Linux
QNX
other

Hardware Platform
[*] DRIVE AGX Orin Developer Kit (940-63710-0010-D00)
DRIVE AGX Orin Developer Kit (940-63710-0010-C00)
DRIVE AGX Orin Developer Kit (not sure its number)
other

SDK Manager Version
1.9.2.10884
other

Host Machine Version
native Ubuntu Linux 20.04 Host installed with SDK Manager
[*] native Ubuntu Linux 20.04 Host installed with DRIVE OS Docker Containers
native Ubuntu Linux 18.04 Host installed with DRIVE OS Docker Containers
other

Hi,
It seems that a model plugin developed with TensorRT's nvinfer1::IPluginV2DynamicExt is incompatible with DriveWorks DNN, so I tried to write my own plugin based on the sample code 'PoolPlugin.cpp'. I have a question about the function dwBlockSize getOutputDimensions(int32_t index, const dwBlockSize* inputs, int32_t numInputDims).
I found that dwBlockSize is a struct with 4 dimensions [batchsize, channels, height, width]. My plugin has 5 inputs, and the dimensions of input[0] and input[2] are both [2×20000×8×4×2×2], i.e. 6 dimensions. I don't know how to implement the getOutputDimensions function for this case. Any ideas?

And I have some other questions about DriveWorks DNN.

  1. Are the preprocessing operations in dwDataConditioner_prepareData GPU accelerated or accelerated by dedicated hardware (Image2D or another hardware block)?
  2. I already have model inference code based on the TensorRT API. Will model inference be faster using DriveWorks DNN than the TensorRT API?

Thanks.

Dear @zhuhuibin,
getOutputDimensions() implements the logic that computes the output size of the plugin layer (in your case, pooling). Do you need both input buffer dimensions to compute the output of the pooling layer? If so, you can access the input buffer dimensions via inputs[0] and inputs[1] in the code snippet.
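
For reference, a minimal sketch of such a function for a pooling layer might look like the following. This assumes dwBlockSize carries the four fields named above (batchsize, channels, height, width, as provided by the DriveWorks headers); kWindow and kStride are hypothetical plugin parameters, not part of the DriveWorks API:

```cpp
// Minimal sketch, not the actual PoolPlugin.cpp. A real plugin would
// deserialize the pooling parameters from its attributes.
dwBlockSize getOutputDimensions(int32_t index, const dwBlockSize* inputs,
                                int32_t numInputDims)
{
    const int32_t kWindow = 2; // hypothetical pooling window size
    const int32_t kStride = 2; // hypothetical pooling stride

    dwBlockSize output{};
    // Batch and channel counts pass through a pooling layer unchanged.
    output.batchsize = inputs[0].batchsize;
    output.channels  = inputs[0].channels;
    // Standard pooling output size: floor((in - window) / stride) + 1.
    output.height = (inputs[0].height - kWindow) / kStride + 1;
    output.width  = (inputs[0].width  - kWindow) / kStride + 1;
    return output;
}
```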

  1. Are the preprocessing operations in dwDataConditioner_prepareData GPU accelerated or accelerated by dedicated hardware (Image2D or another hardware block)?

It uses the GPU backend.

  2. I already have model inference code based on the TensorRT API. Will model inference be faster using DriveWorks DNN than the TensorRT API?

No. The DW DNN APIs are a wrapper over the TensorRT APIs.

Thanks for your quick response.
My plugin's output dimensions are calculated as below:
```
output.dim0 = inputs[0].dim[0]
output.dim1 = inputs[3].dim[0]
output.dim2 = inputs[0].dim[4] * inputs[2].dim[5]
```
inputs[0] and inputs[2] each have 6 dimensions. According to the sample function dwBlockSize getOutputDimensions(int32_t index, const dwBlockSize* inputs, int32_t numInputDims), dwBlockSize can only represent 4 dimensions (batchsize × channels × height × width). What happens if an input tensor has more than 4 dimensions?

Dear @zhuhuibin,
As far as I know, it is not possible to integrate such a plugin into the DNN framework.
Does the input layer also have 6 dimensions?
What do the additional two dimensions represent?

The plugin is one of the layers in the model network, and the plugin's inputs are the outputs of upstream layers. As far as I know, the TensorRT getOutputDimensions API supports input tensors with up to 8 dimensions.
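
For comparison, here is how the output-shape rule above looks with TensorRT's dynamic-shape interface, which works with nvinfer1::DimsExprs and therefore handles up to 8 dimensions. This is a sketch only; MyPlugin stands for a hypothetical nvinfer1::IPluginV2DynamicExt subclass whose class declaration is omitted:

```cpp
#include <NvInfer.h>

using namespace nvinfer1;

// Sketch of the output-shape rule expressed with the dynamic-shape plugin
// interface. DimsExprs supports up to Dims::MAX_DIMS (8) dimensions, so the
// 6-D inputs are representable directly.
DimsExprs MyPlugin::getOutputDimensions(int32_t outputIndex,
                                        const DimsExprs* inputs,
                                        int32_t nbInputs,
                                        IExprBuilder& exprBuilder) noexcept
{
    DimsExprs output;
    output.nbDims = 3;
    output.d[0] = inputs[0].d[0];   // output.dim0 = inputs[0].dim[0]
    output.d[1] = inputs[3].d[0];   // output.dim1 = inputs[3].dim[0]
    // output.dim2 = inputs[0].dim[4] * inputs[2].dim[5]
    output.d[2] = exprBuilder.operation(DimensionOperation::kPROD,
                                        *inputs[0].d[4], *inputs[2].d[5]);
    return output;
}
```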

Yes, you are right (TensorRT: nvinfer1::Dims32 Class Reference).

I am checking with the core team on this issue and will update you.

Glad to hear that, thanks.

Dear @zhuhuibin,
Currently, DW supports up to 4 dims. We are working on supporting up to 8 dims.
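
In the meantime, one possible workaround (my assumption, valid only if the plugin kernel indexes its buffers linearly, so the row-major layout is unaffected by how trailing dims are grouped) is to fold the extra dimensions into the four fields that dwBlockSize does offer:

```cpp
#include <cstdint>

// Stand-in definition for illustration; the real struct comes from the
// DriveWorks headers.
struct dwBlockSize
{
    int32_t batchsize;
    int32_t channels;
    int32_t height;
    int32_t width;
};

// Hypothetical helper: collapse a 6-D shape such as [2, 20000, 8, 4, 2, 2]
// into 4 dims by folding the trailing three dims into width. The element
// count and row-major memory layout are preserved.
dwBlockSize foldTo4D(const int32_t (&d)[6])
{
    dwBlockSize out{};
    out.batchsize = d[0];               // 2
    out.channels  = d[1];               // 20000
    out.height    = d[2];               // 8
    out.width     = d[3] * d[4] * d[5]; // 4 * 2 * 2 = 16
    return out;
}
```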