Human pose estimation has two parts: a VGG-based model and PAF (part affinity field) processing, i.e. post-processing.
I can run the VGG model as a TensorRT engine in all precision modes (FP32/FP16/INT8).
Currently, the output tensors from the TensorRT engine are post-processed (PAF processing) in Python.
Since I would like to use the DeepStream SDK, how can I deploy the VGG model as the primary GIE? I just need the raw tensor output from the output layers.
I can implement the post-processing in a custom plugin, following the SSD or FasterRCNN plugin examples in TensorRT.
Wait for DeepStream 4.0; it will be available in about 10 days.
Refer to the sample "deepstream-infer-tensor-meta-test" to deploy your VGG model.
“output-blob-names” should be set to your tensor output layer names. Multiple names are supported, separated by semicolons.
“output-tensor-meta=1” should be set; please refer to dstensor_pgie_config.txt.
“threshold” can be set > 1.0, which means nvinfer does not parse the tensor output itself; instead you get it in an application probe (callback). A sketch of such a config is below.
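For example, a minimal config sketch along these lines (only the keys discussed here are shown, see dstensor_pgie_config.txt for a complete file; the engine path and layer names are hypothetical placeholders, and putting “threshold” under [class-attrs-all] is an assumption):

```
[property]
# Engine file generated by your own TensorRT app (hypothetical name)
model-engine-file=pose_vgg_fp16.engine
# Hypothetical VGG output layer names; separate multiple names with ';'
output-blob-names=Mconv7_stage2_L1;Mconv7_stage2_L2
# Attach the raw output tensors to the buffer as metadata
output-tensor-meta=1

[class-attrs-all]
# > 1.0: nvinfer skips its own parsing; parse in your app probe instead
threshold=1.1
```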
You can get your tensor output in the “pgie_pad_buffer_probe()” function, as sketched below.
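A minimal sketch of such a probe, modeled on the deepstream-infer-tensor-meta-test sample (it assumes “output-tensor-meta=1” as above and FP32 host output buffers; field names follow the DeepStream 4.0 headers and may differ in other releases):

```c
#include <gst/gst.h>
#include "gstnvdsmeta.h"
#include "gstnvdsinfer.h"   /* NvDsInferTensorMeta */

/* Probe attached to the pgie src pad; reads the raw output tensors
 * that nvinfer attaches when output-tensor-meta=1 is set. */
static GstPadProbeReturn
pgie_pad_buffer_probe (GstPad *pad, GstPadProbeInfo *info, gpointer u_data)
{
  GstBuffer *buf = (GstBuffer *) info->data;
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);

  for (NvDsMetaList *l_frame = batch_meta->frame_meta_list; l_frame;
       l_frame = l_frame->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l_frame->data;

    for (NvDsMetaList *l_user = frame_meta->frame_user_meta_list; l_user;
         l_user = l_user->next) {
      NvDsUserMeta *user_meta = (NvDsUserMeta *) l_user->data;
      if (user_meta->base_meta.meta_type != NVDSINFER_TENSOR_OUTPUT_META)
        continue;

      NvDsInferTensorMeta *tensor_meta =
          (NvDsInferTensorMeta *) user_meta->user_meta_data;

      for (guint i = 0; i < tensor_meta->num_output_layers; i++) {
        NvDsInferLayerInfo *layer = &tensor_meta->output_layers_info[i];
        /* Host copy of this output layer's tensor (assumed FP32) */
        float *data = (float *) tensor_meta->out_buf_ptrs_host[i];
        g_print ("layer %s: first value %f\n", layer->layerName, data[0]);
        /* Run your PAF post-processing on `data` here */
      }
    }
  }
  return GST_PAD_PROBE_OK;
}
```

Attach it to the pgie src pad with gst_pad_add_probe (pgie_src_pad, GST_PAD_PROBE_TYPE_BUFFER, pgie_pad_buffer_probe, NULL, NULL), as done in the sample.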
You can set “model-engine-file” (as in the sketch above) to the engine file generated by your TensorRT app, so you don't need to care about the TensorRT IPlugin, if the engine uses one.
I have flashed my Jetson Xavier with JetPack 4.2 recently. Do I have to wait for the DeepStream 4.0 release to build an intelligent video analytics application, or can I start with DeepStream 3.0?
Also, can you please help me with the installation steps for DeepStream and how to deploy a video analytics pipeline on the Xavier?