How to look at the tensor output in deepstream-infer-tensor-meta-test

In this sample we can see the object count on the terminal. But what if one wants to see the tensor output for a vehicle when a vehicle is detected, or the tensor output for a person when a person is detected?

Thanks!

Hey,

I need to use the tensor output for further analysis. Could you please suggest how I can achieve this?

@Mrunalkshirsagar

There is a two-step approach you can use to intercept the output tensors for your analysis.

Step 1: Implement a callback function to intercept output tensors.

The callback function would be like this:

extern "C" bool NvDsInferParseCustom(
    std::vector<NvDsInferLayerInfo> const& outputLayersInfo,
    NvDsInferNetworkInfo const& networkInfo,
    NvDsInferParseDetectionParams const& detectionParams,
    std::vector<NvDsInferParseObjectInfo>& objectList)
{
    // TODO:
    // Add your customized post-processing code here.
    // Read the raw output tensors from outputLayersInfo and fill
    // objectList with the parsed detections.
    return true;  // return false on parsing failure
}

Step 2: Update the configuration file (you should change your network-type to 0)

network-type=0   # 0 = detector
parse-bbox-func-name=NvDsInferParseCustom
custom-lib-path=your_custom_parser_dir/your_custom_parser.so
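For context, these keys normally sit together with the rest of the model settings in the [property] group of the nvinfer config file. A hedged sketch of where they fit, with placeholder paths and a placeholder class count:

```
[property]
# Placeholder model paths; use your own engine/label files
model-engine-file=your_model.engine
labelfile-path=labels.txt
num-detected-classes=4
network-type=0   # 0 = detector
parse-bbox-func-name=NvDsInferParseCustom
custom-lib-path=your_custom_parser_dir/your_custom_parser.so
```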

You can refer to the objectDetector_FasterRCNN, objectDetector_SSD, and objectDetector_Yolo samples for more detailed examples of custom post-processing for detection models.