How to run Detectnet_v2_resnet18.trt without DeepStream

Hi all,
1- I trained detectnet_v2_resnet18 on the KITTI dataset using TLT, and the model outputs are (N,12,24,78) and (N,3,24,78). How do I convert these outputs to boxes and scores?
I saw this repo mention the outputs of this model, but the bbox part is different from that link. Is it wrong?

2- I want to know: what preprocessing algorithm is used for this network?

Could you please paste the link for “this repo” again? The link is broken.

Please refer to Detectnet_v2:

  1. For postprocessing, please refer to the code, which is exposed in C++ in /opt/nvidia/deepstream/deepstream/sources/libs/nvdsinfer_customparser/nvdsinfer_custombboxparser.cpp .
  2. For preprocessing, please refer to Run PeopleNet with tensorrt
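For reference, here is a minimal sketch of the preprocessing commonly used for DetectNet_v2 models: resize to the network input resolution, convert BGR to RGB, transpose HWC to NCHW, and scale pixel values to [0, 1]. The 1248x384 input size is an assumption based on the 78x24 output grid at stride 16; the nearest-neighbour resize is a stand-in for cv2.resize so the example stays self-contained. This is not the official implementation, just an illustration:

```python
import numpy as np

def preprocess(image_bgr, net_w=1248, net_h=384):
    """Sketch of DetectNet_v2-style preprocessing (assumed, not official).

    image_bgr: uint8 array of shape (H, W, 3) in BGR channel order.
    Returns a float32 array of shape (1, 3, net_h, net_w) in [0, 1].
    """
    h, w = image_bgr.shape[:2]
    # Nearest-neighbour resize by index selection; real code would use
    # cv2.resize(image_bgr, (net_w, net_h)) instead.
    ys = np.arange(net_h) * h // net_h
    xs = np.arange(net_w) * w // net_w
    resized = image_bgr[ys][:, xs]
    rgb = resized[..., ::-1]          # BGR -> RGB
    chw = rgb.transpose(2, 0, 1)      # HWC -> CHW
    return (chw.astype(np.float32) / 255.0)[None]  # add batch dim, scale
```

If the exported model expects a different normalization (e.g. a mean offset), that would need to match the training-time spec.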

I converted the postprocessing in nvdsinfer_custombboxparser.cpp to Python code, and the bboxes are incorrect.
Is nvdsinfer_custombboxparser.cpp for detectnet_v2_resnet18?

Yes, it is for detectnet_v2.

So in that file:

float bboxNormX = 35.0;
float bboxNormY = 35.0;

detectnet_v2 has a variety of ResNet backbones (10/18/50), and the input dims may change, so in that situation I guess the values above can’t be valid for every input size, right?

When the ResNet backbone changes, the input dims will not change.
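For anyone attempting the same Python port, here is a hypothetical sketch of the grid-cell decode used by nvdsinfer_custombboxparser.cpp: each coverage cell above the threshold yields a box whose offsets are de-normalized with bboxNorm (35.0) relative to the grid-cell center. The stride of 16 and the 0.5 offset are assumptions (the 78x24 grid matches a 1248x384 input at stride 16); the function name and layout are my own, not NVIDIA’s API:

```python
import numpy as np

def decode_detectnet_v2(cov, bbox, threshold=0.2,
                        stride=16, bbox_norm=35.0, offset=0.5):
    """Hypothetical Python port of the DetectNet_v2 bbox decode.

    cov:  (C, H, W)   coverage/confidence map, e.g. (3, 24, 78)
    bbox: (4*C, H, W) normalized box offsets,  e.g. (12, 24, 78)
    Returns a list of (class_id, score, x1, y1, x2, y2) tuples in pixels.
    """
    n_classes, grid_h, grid_w = cov.shape
    # Grid-cell centers expressed in bbox_norm units
    cx = (np.arange(grid_w) * stride + offset) / bbox_norm
    cy = (np.arange(grid_h) * stride + offset) / bbox_norm
    dets = []
    for c in range(n_classes):
        ys, xs = np.where(cov[c] > threshold)
        for y, x in zip(ys, xs):
            # De-normalize the four offsets relative to the cell center
            x1 = (bbox[c * 4 + 0, y, x] - cx[x]) * -bbox_norm
            y1 = (bbox[c * 4 + 1, y, x] - cy[y]) * -bbox_norm
            x2 = (bbox[c * 4 + 2, y, x] + cx[x]) * bbox_norm
            y2 = (bbox[c * 4 + 3, y, x] + cy[y]) * bbox_norm
            dets.append((c, float(cov[c, y, x]), x1, y1, x2, y2))
    return dets
```

The resulting boxes would still need clustering or NMS (DeepStream applies DBSCAN/NMS after this parser), and the threshold/stride values should be checked against the actual exported model.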

Hi. Did you resolve the pre- and post-processing issues? Could you please share a full example of DetectNet inference using C++ and TensorRT? Thanks in advance.