Glad to hear that you can run the demo from the TLT CV inference pipeline samples now. That means the Triton server is already running well. It does not need additional preprocessing.
Did you mean that I don’t need to do preprocessing even if I write my own Python script?
For a standalone Python script, there should be preprocessing.
Where could I find the preprocessing for each model in TLT?
Currently, the preprocessing is only documented in the following places.
For example, in the config files: deepstream_tao_apps/configs at release/tlt3.0 · NVIDIA-AI-IOT/deepstream_tao_apps · GitHub
Integrating TLT CV Models with Triton Inference Server — Transfer Learning Toolkit 3.0 documentation
or in some forum topics.
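To make the idea concrete, here is a minimal sketch of the kind of image preprocessing those configs describe. Note that the input size, per-channel offsets, and scale factor below are placeholder assumptions, not the actual bpnet values; check the model's config file for the real parameters, and note the numpy resize is a stand-in for `cv2.resize`.

```python
import numpy as np

def preprocess(frame_hwc_bgr, input_size=(288, 384),
               offsets=(103.939, 116.779, 123.68), scale=1.0 / 255.0):
    """Hypothetical TLT-style preprocessing: resize, mean-subtract,
    scale, then HWC -> NCHW. All constants are placeholders."""
    h, w = input_size
    # Nearest-neighbour resize with plain numpy (stand-in for cv2.resize).
    ys = np.linspace(0, frame_hwc_bgr.shape[0] - 1, h).astype(int)
    xs = np.linspace(0, frame_hwc_bgr.shape[1] - 1, w).astype(int)
    resized = frame_hwc_bgr[ys][:, xs].astype(np.float32)
    # Subtract per-channel mean, then apply the scale factor.
    resized -= np.array(offsets, dtype=np.float32)
    resized *= scale
    # HWC -> CHW and add a batch dimension: shape (1, 3, h, w).
    return resized.transpose(2, 0, 1)[None, ...]
```

The resulting float32 tensor is what a Triton client would send as the model input, assuming an NCHW input layout.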
Thank you, Morganh. I will take a look and give it a try.
Could I know what the values mean after postprocessing?
I saw there are 77 outputs for each person and didn’t know what they mean.
Can you share more details about your observation of the 77 outputs?
The output looks like this.
I think the numbers come in sets of 4.
How did you get the above result? Can you share all the detailed steps?
Here is my code. It opens a USB camera and sends frames to the Triton server.
client_bodypose.py (1.7 KB)
Sorry for the late reply. Actually, your original issue is resolved.
For bpnet inference, there are the following ways:
- By default, bodyposenet supports “bpnet inference xxx”.
- Use the DeepStream app provided by NVIDIA. See deepstream_tao_apps/apps/tao_others/deepstream-bodypose2d-app at release/tao3.0 · NVIDIA-AI-IOT/deepstream_tao_apps · GitHub
- Use the Triton server mentioned above, but it is not supported yet; the postprocessing, etc. would need to be implemented.
- Write your own standalone inference script; this is up to you, and please leverage item 2.
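For the standalone-script route, a rough skeleton of a Triton HTTP client is shown below. The model name, tensor names, and server URL are hypothetical placeholders (query the server's model metadata for the real ones), and the tritonclient import is deferred into the function so the file loads without the package installed.

```python
import numpy as np

def infer_bodypose(batch_nchw, url="localhost:8000",
                   model="bodyposenet", input_name="input_1",
                   output_name="output_1"):
    """Send one preprocessed float32 batch to a Triton HTTP endpoint.

    The model/tensor names here are assumptions for illustration only;
    replace them with the names reported by the server.
    """
    # Deferred import: requires `pip install tritonclient[http]`.
    import tritonclient.http as httpclient

    client = httpclient.InferenceServerClient(url=url)
    inp = httpclient.InferInput(input_name, list(batch_nchw.shape), "FP32")
    inp.set_data_from_numpy(batch_nchw.astype(np.float32))
    out = httpclient.InferRequestedOutput(output_name)
    result = client.infer(model_name=model, inputs=[inp], outputs=[out])
    # Raw network output: postprocessing (part maps, pose assembly)
    # still has to be implemented separately, as noted above.
    return result.as_numpy(output_name)
```

As noted in item 3, the returned tensor is the raw network output; decoding it into keypoints still requires the bodypose postprocessing.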
One more note on item 2, the DeepStream app provided by NVIDIA (deepstream_tao_apps/apps/tao_others/deepstream-bodypose2d-app at release/tao3.0 · NVIDIA-AI-IOT/deepstream_tao_apps · GitHub):
It only works with DS 6.0.