How to use DeepStream with a customized deep-learning model?

The DeepStream tutorial describes how to use a custom plugin for a new layer with gst-nvinfer.

What if I need to use it for a totally different model, like human pose estimation?

The human-pose model has two parts: a VGG model and PAF processing (post-processing).

I can run the VGG model as a TensorRT engine in all precisions (FP32/FP16/INT8).
Currently, the output tensors from the VGG model's TensorRT engine are post-processed (PAF processing) in Python.

Since I'd like to use the DeepStream SDK, how can I implement the VGG model as the primary GIE? I just need the tensor output from the output layers.

I can implement the post-processing in a custom plugin by referring to the SSD or Faster R-CNN plugin in TensorRT.

  1. Wait for DeepStream 4.0. It will be available in about 10 days.

  2. Refer to the sample “deepstream-infer-tensor-meta-test” to deploy your VGG model.
    “output-blob-names” should be set to your tensor output layer names. Multiple names are supported.
    “output-tensor-meta=1” should be set. Please refer to dstensor_pgie_config.txt.
    Setting “threshold” > 1.0 means the tensor output is not parsed in the nvinfer plugin; instead, you get it in an application probe (callback).
    You can get your tensor output in the “pgie_pad_buffer_probe()” function.

  3. You can set “model-engine-file” to your engine file generated by your TensorRT app, so you don’t need to care about the TensorRT IPlugin, if there is one.
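
For reference, steps 2 and 3 might look like the config fragment below. This is only a sketch: the engine file name and the layer names are placeholders for your own VGG model, and the exact keys should be checked against dstensor_pgie_config.txt in the sample.

```ini
# Sketch of a primary-gie (nvinfer) config for raw tensor output.
# File and layer names below are placeholders, not real artifacts.
[property]
# Engine file generated by your own TensorRT app (step 3)
model-engine-file=vgg_humanpose_fp16.engine
# Tensor output layer names; multiple names are semicolon-separated
output-blob-names=paf_out;heatmap_out
# Attach raw output tensors as metadata for the application probe
output-tensor-meta=1

[class-attrs-all]
# threshold > 1.0: skip parsing in the nvinfer plugin and read the
# tensors in pgie_pad_buffer_probe() instead (step 2)
threshold=1.1
```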

Thanks for the reply.

The sample you mentioned, “deepstream-infer-tensor-meta-test”, is available only in DeepStream 4.0. Is that true?

I can’t find it in DeepStream 3.0.


Yes, “deepstream-infer-tensor-meta-test” is available only in DeepStream 4.0.

Thank you for the info. I’ll wait for it.

I recently flashed my Jetson Xavier with JetPack 4.2. Would I have to wait for the DeepStream 4.0 release to build an intelligent video analysis application, or can I start with DeepStream 3.0?
Also, can you please help me with the installation steps for DeepStream and with deploying the video-analysis pipeline on the Xavier?

Can you open a new topic in deepstream-for-tegra?

I have started a new topic at the following link:


Did you ever get pose estimation working in DeepStream? I’m looking into it as well.

Not in DeepStream yet, but in TensorRT, yes.

Could you share how to deploy the pose estimation model in TensorRT? I always get wrong results.

I haven’t tried it in DeepStream yet. I’ll try soon and share.