TLT TrafficCamNet 3.0, TRT C++ inference

CUDA_CHECK(cudaMemcpyAsync(output1, buffers[outputIndex], batchSize * 60 * 34 * 16 * sizeof(float), cudaMemcpyDeviceToHost, stream));

CUDA_CHECK(cudaMemcpyAsync(output2, buffers[outputIndex2], batchSize * 60 * 34 * 4 * sizeof(float), cudaMemcpyDeviceToHost, stream));

I get the model outputs with the code above, but I don't know how to use them.
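(For reference, those buffer sizes match DetectNet_v2's two output heads on TrafficCamNet's 960x544 input: a bounding-box tensor and a coverage/confidence tensor on a 60x34 grid. The constants below are an assumption based on that layout, not something stated in this thread.)

// Assumed TrafficCamNet / DetectNet_v2 output layout (960x544 input, stride 16):
const int gridW = 960 / 16;   // 60 grid columns
const int gridH = 544 / 16;   // 34 grid rows
const int numClasses = 4;     // car, bicycle, person, road_sign
// output1: bbox tensor,     numClasses * 4 coords * gridH * gridW = 16 * 34 * 60 floats
// output2: coverage tensor, numClasses * gridH * gridW            =  4 * 34 * 60 floats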

Can you elaborate on your question? Code, commands, and the full log are appreciated.

I think my problem is how to post-process the results in C++. Can I just imitate the Python code, or the NvDsInferParseCustomResnet parser given in “\deepstream_sdk_v5.1.0_x86_64\opt\nvidia\deepstream\deepstream-5.1\sources\libs\nvdsinfer_customparser\nvdsinfer_custombboxparser.cpp”?

By the way, I can't figure out some parameters like gridW, gridH, and stride.

TrafficCamNet is based on the TLT detectnet_v2 network.
For the detectnet_v2 network, the post-processing code is exposed in C++ in /opt/nvidia/deepstream/deepstream/sources/libs/nvdsinfer_customparser/nvdsinfer_custombboxparser.cpp
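
On the gridW/gridH/stride question: stride is 16 for detectnet_v2, and gridW/gridH are simply the output grid dimensions, i.e. inputW/stride and inputH/stride (960/16 = 60 and 544/16 = 34 for TrafficCamNet). Below is a simplified decode sketch modeled on NvDsInferParseCustomResnet; the 0.2 threshold and the bboxNorm value of 35.0 follow the defaults in that file, but treat this as an illustrative sketch rather than a drop-in parser:

#include <vector>

struct Box { int cls; float conf, x1, y1, x2, y2; };

// covBuf:  numClasses * gridH * gridW floats (per-class confidence per cell)
// bboxBuf: numClasses * 4 * gridH * gridW floats (x1, y1, x2, y2 planes per class)
std::vector<Box> decodeDetectNetV2(const float *covBuf, const float *bboxBuf,
                                   int numClasses, int gridW, int gridH,
                                   int stride = 16, float threshold = 0.2f,
                                   float bboxNorm = 35.0f)
{
    std::vector<Box> boxes;
    const int gridSize = gridW * gridH;
    for (int c = 0; c < numClasses; ++c) {
        // Four coordinate planes per class, each gridH x gridW.
        const float *x1 = bboxBuf + (c * 4 + 0) * gridSize;
        const float *y1 = bboxBuf + (c * 4 + 1) * gridSize;
        const float *x2 = bboxBuf + (c * 4 + 2) * gridSize;
        const float *y2 = bboxBuf + (c * 4 + 3) * gridSize;
        for (int h = 0; h < gridH; ++h) {
            for (int w = 0; w < gridW; ++w) {
                const int i = h * gridW + w;
                const float conf = covBuf[c * gridSize + i];
                if (conf < threshold)
                    continue;
                // Grid-cell center in input pixels, normalized by bboxNorm.
                const float cx = (w * stride + 0.5f) / bboxNorm;
                const float cy = (h * stride + 0.5f) / bboxNorm;
                boxes.push_back({c, conf,
                                 (x1[i] - cx) * -bboxNorm,   // left
                                 (y1[i] - cy) * -bboxNorm,   // top
                                 (x2[i] + cx) *  bboxNorm,   // right
                                 (y2[i] + cy) *  bboxNorm}); // bottom
            }
        }
    }
    return boxes; // clustering/NMS is applied to these raw boxes afterwards
}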

Also, another user has written Python code to run inference against the TRT engine: Run PeopleNet with tensorrt