Reading tensor data in a custom DeepStream parser (PyTorch -> ONNX -> TensorRT engine -> DeepStream)

Please provide complete information as applicable to your setup.

• Hardware Platform
Jetson AGX Xavier
• DeepStream Version
6.0
• JetPack Version (valid for Jetson only)
4.6.1
• TensorRT Version

ii  libnvinfer-bin                                        8.2.1-1+cuda10.2                           arm64        TensorRT binaries
ii  libnvinfer-dev                                        8.2.1-1+cuda10.2                           arm64        TensorRT development libraries and headers
ii  libnvinfer-doc                                        8.2.1-1+cuda10.2                           all          TensorRT documentation
ii  libnvinfer-plugin-dev                                 8.2.1-1+cuda10.2                           arm64        TensorRT plugin libraries
ii  libnvinfer-plugin8                                    8.2.1-1+cuda10.2                           arm64        TensorRT plugin libraries
ii  libnvinfer-samples                                    8.2.1-1+cuda10.2                           all          TensorRT samples
ii  libnvinfer8                                           8.2.1-1+cuda10.2                           arm64        TensorRT runtime libraries
ii  python3-libnvinfer                                    8.2.1-1+cuda10.2                           arm64        Python 3 bindings for TensorRT
ii  python3-libnvinfer-dev                                8.2.1-1+cuda10.2                           arm64        Python 3 development package for TensorRT

• Issue Type( questions, new requirements, bugs)
question, bug?, limitation?
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)

  1. Use the PyTorch SSD model

  2. Convert it to ONNX (using the guide provided in the PyTorch docs)

  3. Use the converted model in DeepStream on the Jetson Xavier

• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

P.S.
Apologies, I don’t have the files with me at the moment.

Could you elaborate on your issue? What is your media pipeline? Which DeepStream sample are you testing?

Hi @fanzh,

Thanks a lot for getting back to me.
This is how I ended up here.
I made a PyTorch model and exported it to an ONNX file.
Download the notebook here:
pytorch_to_onnx_to_nvidia_forum.ipynb (377.6 KB)
I also made a gist here so you can see the rendered notebook (this was run on an NVIDIA DGX Station A100).
The model produced by that is this:
ssd300-ganindu_test_gpu_fp16.onnx (43.7 MB)
Then I wanted to turn this into an engine file, so I followed the instructions here (Gst-nvinfer — DeepStream 6.0 Release documentation) to write a pgie config file for a custom parsing function.

(This pattern works with my other DeepStream Python apps, so I’m sure my Python code is fine.)
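For reference, the parser-related part of my pgie config looks roughly like this (the engine file name and library path below are placeholders, not my real paths):

[property]
onnx-file=ssd300-ganindu_test_gpu_fp16.onnx
# placeholder name; nvinfer serializes/loads the TensorRT engine here
model-engine-file=ssd300-ganindu_test_gpu_fp16.onnx_b1_gpu0_fp16.engine
# 2 = FP16
network-mode=2
num-detected-classes=81
# custom bounding-box parser exported by my parser library
parse-bbox-func-name=MyCustomParsingFunction
custom-lib-path=/path/to/libmy_custom_parser.so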

Then let’s look at the C++ parsing function:

#include <iostream>
#include <vector>
#include "nvdsinfer_custom_impl.h"   /* NvDsInferLayerInfo, getDimsCHWFromDims, ... */

/* Prototype */
extern "C" bool MyCustomParsingFunction(std::vector<NvDsInferLayerInfo> const &outputLayersInfo,
                                        NvDsInferNetworkInfo const &networkInfo,
                                        NvDsInferParseDetectionParams const &detectionParams,
                                        std::vector<NvDsInferObjectDetectionInfo> &objectList);

/* Definition */
extern "C" bool MyCustomParsingFunction(std::vector<NvDsInferLayerInfo> const &outputLayersInfo,
                                        NvDsInferNetworkInfo const &networkInfo,
                                        NvDsInferParseDetectionParams const &detectionParams,
                                        std::vector<NvDsInferObjectDetectionInfo> &objectList)
{
    unsigned int my_class_scores_layer_index = 0;

    for (unsigned int i = 0; i < outputLayersInfo.size(); i++)
    {
        NvDsInferDimsCHW bboxLayerDims_temp;
        getDimsCHWFromDims(bboxLayerDims_temp, outputLayersInfo[i].inferDims);
        std::cout << "Layer name " << outputLayersInfo[i].layerName
                  << " index = " << i
                  << " channels " << bboxLayerDims_temp.c
                  << " height " << bboxLayerDims_temp.h
                  << " width " << bboxLayerDims_temp.w << "\n";

        /* Here I find the index of my class_scores layer and save it in
           my_class_scores_layer_index */
    }

    /* I get a pointer to the class_scores layer, which according to Netron and
       getDimsCHWFromDims() above should be 1x81x8732 */
    const NvDsInferLayerInfo *scoreLayer = &outputLayersInfo[my_class_scores_layer_index];

    /* At this point, how do I access an arbitrary element of that tensor? */

    return true;
}

It seems that after that point my attempts to access elements of my multi-dimensional class_scores tensor fail miserably.

Can you please help!

PS.

To further clarify, I haven’t consciously done any non-maximum suppression, as the class scores are still un-normalised and still need to be soft-maxed etc., so I think the output tensors should be full sized, e.g. “scores” should be 81 x n_bbox long. (But I can’t index over that length without getting overruns.) I’m sure I’m doing something wrong here, because these kinds of operations are bread and butter for DeepStream, so apologies in advance if it’s something very silly 🙏
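Written out, this is roughly the access pattern I expect to work for that layer (class_id and box_id are just placeholder indices, not variables from my actual code):

/* Sketch: for a 1x81x8732 (CHW) FP32 score tensor I read h as the number of
   classes (81) and w as the number of boxes (8732), laid out flat. */
NvDsInferDimsCHW dims;
getDimsCHWFromDims(dims, scoreLayer->inferDims);
const float *scores = reinterpret_cast<const float *>(scoreLayer->buffer);
float s = scores[class_id * dims.w + box_id];   /* attempts along these lines overrun */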

Please refer to the samples, e.g. /opt/nvidia/deepstream/deepstream-6.1/sources/libs/nvdsinfer_customparser/nvdsinfer_customclassifierparser.cpp
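The basic pattern in those parser samples is roughly: find the output layer you want in outputLayersInfo, then read its buffer member (which already points at the tensor data in host memory) according to its inferDims and dataType. A minimal sketch of that idea, inside the parsing function (the layer name "scores" and the indices are illustrative, not taken from the sample or your model):

/* Illustrative only: locate an output layer by name, then read it as FP32. */
const NvDsInferLayerInfo *scoresLayer = nullptr;
for (const auto &layer : outputLayersInfo)
{
    if (layer.layerName && strcmp(layer.layerName, "scores") == 0)   /* needs <cstring> */
        scoresLayer = &layer;
}

if (scoresLayer && scoresLayer->dataType == FLOAT)
{
    NvDsInferDimsCHW dims;
    getDimsCHWFromDims(dims, scoresLayer->inferDims);
    const float *data = reinterpret_cast<const float *>(scoresLayer->buffer);
    /* element (class c, box b) of a 1 x numClasses x numBoxes layer
       sits at data[c * dims.w + b]; c = 0, b = 0 used as placeholders */
    float score = data[0 * dims.w + 0];
    (void)score;
}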

Hi @fanzh,

Actually, deepstream_tao_apps/nvdsinfer_custombboxparser_tlt.cpp at release/tlt3.0 · NVIDIA-AI-IOT/deepstream_tao_apps · GitHub has all the answers I was looking for!

I didn’t know this existed!

I tested this on the TAO model and it worked straight away! Will try it on the ONNX one and let you know!!

Thanks a lot!!
Cheers,
Ganindu.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.