Custom model output parsing in DeepStream

Please provide complete information as applicable to your setup.

• Hardware Platform: Jetson TX2
• DeepStream Version: 6.0
• JetPack Version (valid for Jetson only): 4.6
• TensorRT Version: 8.x

I have a custom model that I’ve converted to TensorRT and tested using PyCUDA. The model takes an image as input and outputs a 128-dimensional array as a face embedding, similar to a Siamese network. I want to run it in DeepStream as a secondary model after the facenet detector. How can I write a custom parser for such a model so it can be integrated into DeepStream, and how do I extract its metadata?
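
For context, what I’m after is a second nvinfer instance downstream of the detector, roughly like this (element names and config file paths here are placeholders, not from any sample):

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# Primary inference: the face detector.
pgie = Gst.ElementFactory.make("nvinfer", "primary-face-detector")
pgie.set_property("config-file-path", "facenet_pgie_config.txt")    # placeholder

# Secondary inference: the TensorRT embedding model, run on each detected face.
sgie = Gst.ElementFactory.make("nvinfer", "secondary-face-embedding")
sgie.set_property("config-file-path", "embedding_sgie_config.txt")  # placeholder

# Full pipeline (elided): source -> nvstreammux -> pgie -> sgie -> sink
```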

Hi,

Please check the sample below first:

/opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream-infer-tensor-meta-test
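
That sample sets output-tensor-meta=1 in the nvinfer config, so gst-nvinfer attaches its raw output tensors to the metadata for the application to parse instead of post-processing them itself. For an embedding SGIE like yours, a minimal sketch of the config could look like this (engine path, batch size, and IDs are placeholders to adapt to your setup; network-type=100 means "other", i.e. skip nvinfer's built-in detector/classifier post-processing):

```
[property]
gpu-id=0
# Path to your serialized TensorRT engine (placeholder).
model-engine-file=face_embedding.engine
batch-size=1
# 2 = secondary mode: run on objects produced by the primary detector.
process-mode=2
gie-unique-id=2
operate-on-gie-id=1
# 100 = "other": no built-in post-processing, raw tensors only.
network-type=100
# Attach the raw output tensors (the 128-d embedding) to the metadata.
output-tensor-meta=1
```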

Thanks.

Any Python implementation of this?

Hi,

Please check if the following example can meet your requirement:

Thanks.
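
For reference, in Python the attached tensor metadata can be read from a pad probe via pyds, following the pattern used in the deepstream_python_apps samples. A minimal sketch, assuming the SGIE config above and that the embedding is the model's only output layer (the function name and EMBEDDING_SIZE are illustrative):

```python
import ctypes

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

import numpy as np
import pyds

EMBEDDING_SIZE = 128  # output dimension of the embedding model (assumption)


def sgie_src_pad_buffer_probe(pad, info, u_data):
    """Read the raw SGIE output tensor that nvinfer attaches to each
    detected object when output-tensor-meta=1 is set in the SGIE config."""
    buf = info.get_buffer()
    if not buf:
        return Gst.PadProbeReturn.OK
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(buf))

    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            l_user = obj_meta.obj_user_meta_list
            while l_user is not None:
                user_meta = pyds.NvDsUserMeta.cast(l_user.data)
                if (user_meta.base_meta.meta_type
                        == pyds.NvDsMetaType.NVDSINFER_TENSOR_OUTPUT_META):
                    tensor_meta = pyds.NvDsInferTensorMeta.cast(
                        user_meta.user_meta_data)
                    # Assuming the embedding is output layer 0.
                    layer = pyds.get_nvds_LayerInfo(tensor_meta, 0)
                    ptr = ctypes.cast(pyds.get_ptr(layer.buffer),
                                      ctypes.POINTER(ctypes.c_float))
                    embedding = np.ctypeslib.as_array(
                        ptr, shape=(EMBEDDING_SIZE,)).copy()
                    # Use the embedding here, e.g. match it against a gallery.
                try:
                    l_user = l_user.next
                except StopIteration:
                    break
            try:
                l_obj = l_obj.next
            except StopIteration:
                break
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK
```

The probe would then be attached to the SGIE's src pad, e.g. `sgie.get_static_pad("src").add_probe(Gst.PadProbeType.BUFFER, sgie_src_pad_buffer_probe, 0)`.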
