Support for a few plugins in DeepStream

In this talk, he says that some plugins in DeepStream are open source. What do I do to use the other plugins?

Yes, some DeepStream plugins are not open source, but you can still use them through the interfaces they currently provide.

Did you run into any difficulty using them?

Thanks!

Is it possible to use the FaceDetection model from TLT in the DeepStream app Python code?

Yes, it is possible. DS5 already includes pruned FaceDetection models. Reference: Integrating TAO Models into DeepStream — TAO Toolkit 3.22.05 documentation

Thanks,
That depends on DeepStream, but I want to use face detection in standalone Python code.

When you run DS inference, a TRT engine will be saved.
The end user can then reuse that TRT engine.
So your question becomes: how to run inference with a TRT engine.
That does not depend on DS.
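
For example, here is a minimal sketch of standalone inference with a saved TRT engine, using the TensorRT Python API and PyCUDA. The engine file name, the input shape, and the assumption of a single-input, explicit-batch engine are all placeholders; adapt them to your own engine.

```python
import numpy as np
import pycuda.autoinit  # noqa: F401 -- creates a CUDA context on import
import pycuda.driver as cuda
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def load_engine(engine_path):
    # Deserialize a TRT engine saved to disk (e.g. by DeepStream or tlt-converter).
    with open(engine_path, "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
        return runtime.deserialize_cuda_engine(f.read())

def infer(engine, input_array):
    # Assumes binding 0 is the only input and the engine uses explicit batch.
    stream = cuda.Stream()
    host_bufs, dev_bufs, bindings = [], [], []
    for i in range(engine.num_bindings):
        dtype = trt.nptype(engine.get_binding_dtype(i))
        size = trt.volume(engine.get_binding_shape(i))
        host = cuda.pagelocked_empty(size, dtype)  # page-locked host buffer
        dev = cuda.mem_alloc(host.nbytes)          # matching device buffer
        host_bufs.append(host)
        dev_bufs.append(dev)
        bindings.append(int(dev))
    with engine.create_execution_context() as context:
        np.copyto(host_bufs[0], input_array.ravel())
        cuda.memcpy_htod_async(dev_bufs[0], host_bufs[0], stream)
        context.execute_async_v2(bindings=bindings, stream_handle=stream.handle)
        for i in range(1, engine.num_bindings):    # copy every output back
            cuda.memcpy_dtoh_async(host_bufs[i], dev_bufs[i], stream)
        stream.synchronize()
        return host_bufs[1:]

# Hypothetical usage -- file name and input dims are placeholders:
# engine = load_engine("facedetectir.engine")
# outputs = infer(engine, np.zeros((1, 3, 240, 384), dtype=np.float32))
```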

Thanks,
I think the TRT engine will be saved by TLT, right?

See the steps in deepstream-5.0/samples/configs/tlt_pretrained_models/README.
After downloading the pruned face model, you can use it to run inference. You will then also see a TRT engine at deepstream/samples/models/tlt_pretrained_models.

Or you can use TLT to train the unpruned face model; see the details in the TLT user guide and https://ngc.nvidia.com/catalog/models/nvidia:tlt_facedetectir. An etlt model or TRT engine will be generated.
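
As an illustration of that second route, the etlt model can be converted into a TRT engine with tlt-converter; below is a sketch of invoking it from Python. The key, input dims, and output node names shown are assumptions based on the DetectNet_v2-based FaceDetectIR model card, so confirm them against the TLT user guide and the NGC page before use.

```python
import subprocess

# All values below are placeholders; verify the key, dims, and output
# node names against the NGC model card and the TLT user guide.
subprocess.run(
    [
        "tlt-converter",
        "-k", "tlt_encode",           # NGC model key (assumption)
        "-d", "3,240,384",            # input dims C,H,W (assumption)
        "-o", "output_cov/Sigmoid,output_bbox/BiasAdd",  # DetectNet_v2 outputs
        "-t", "fp16",                 # engine precision
        "-e", "facedetectir.engine",  # TRT engine written here
        "facedetectir.etlt",          # the etlt model to convert
    ],
    check=True,
)
```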

Thanks.
My last question: is the tracker plugin in DeepStream available in the sample Python code for custom use?

Please create a new topic for your last question; here I focus only on TLT.

Hi @Morganh,
Is this etlt model or .engine file the same as the .trt output from the trtexec command? If so, can it be included in the nvinfer plugin config file like any other .engine file?

The .engine file is the same as a .trt file.
The .engine file can be generated with the tlt-converter tool from the etlt model.

See Integrating TAO Models into DeepStream — TAO Toolkit 3.22.05 documentation.
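
For reference, here is a minimal sketch of how such an engine plugs into a DeepStream Python app: create an nvinfer element and point it at a config file. The file names are placeholders, and the config keys shown in the comments (model-engine-file, tlt-encoded-model, tlt-model-key) are the standard nvinfer options for referencing an engine or an etlt model.

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# nvinfer takes its model settings from a text config file. That file can
# reference either a prebuilt engine:
#   model-engine-file=facedetectir.engine
# or the encrypted TLT model plus its key, so nvinfer builds the engine itself:
#   tlt-encoded-model=facedetectir.etlt
#   tlt-model-key=tlt_encode
pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
pgie.set_property("config-file-path", "config_infer_primary_facedetectir.txt")  # placeholder path
```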

Hi @Morganh,
So are you suggesting that, if we want to be sure whether a particular caffemodel can be deployed in DeepStream for inference, successfully running trtexec to get a .trt file is enough? Or will any supporting files be needed?
Thanks in advance

@GalibaSashi,
Sorry, I do not quite understand your question. What I commented and shared above is mostly based on TLT (Transfer Learning Toolkit). TLT can generate an etlt model or a TRT engine, and customers can deploy either of them for inference in DeepStream.
For your new question, which focuses on DeepStream, please create a new forum topic.
