It’s only used for the 3D output layer. Our demo uses the 25D output layer, so you don’t need to worry about configuring these parameters.