Please provide the following information when requesting support.
• Hardware (T4/V100/Xavier/Nano/etc.)
• Network type (Detectnet_v2/Faster_rcnn/Yolo_v4/LPRnet/Mask_rcnn/Classification/etc.)
• TLT version (please run “tlt info --verbose” and share the “docker_tag” here)
• Training spec file (if you have one, please share it here)
• How to reproduce the issue (for errors: please share the command line and the detailed log here)
Hello, what are the steps to use the resnet18_trafficcamnet_pruned.etlt model with the Python apps, for example deepstream-nvdsanalytics?
I tried just adjusting the config files, but the pipeline failed while parsing the config.
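For context, the Python apps hand the model to the nvinfer element through a config file, so this is usually where parsing fails. A minimal sketch of the `[property]` section for a TLT-encoded TrafficCamNet model is below; the key `tlt_encode`, the input/output blob names, and the 3x544x960 input dims are assumptions taken from typical DetectNet_v2/TrafficCamNet sample configs, so verify them against the model card for your exact model version:

```ini
[property]
gpu-id=0
# 1/255 pixel scaling, assumed from typical DetectNet_v2 preprocessing
net-scale-factor=0.00392156862745098
# .etlt models need both the encoded model path and the decode key
tlt-encoded-model=resnet18_trafficcamnet_pruned.etlt
tlt-model-key=tlt_encode
labelfile-path=labels_trafficcamnet.txt
# channels;height;width;input-order (assumed 3x544x960)
input-dims=3;544;960;0
uff-input-blob-name=input_1
output-blob-names=output_bbox/BiasAdd;output_cov/Sigmoid
batch-size=1
num-detected-classes=4
```

If the pipeline still fails to parse the config, the nvinfer log line usually names the offending key, which is the fastest way to spot a typo or an unsupported property.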
Yes, I read these, but couldn’t find an example with Python. Did I miss it, or is there no such example? If there is none, could you provide us with one, or at least point out the things that need to be changed in the Python test apps?
Officially, we provide the Triton apps for running inference in Python. You can refer to their preprocessing and postprocessing. See GitHub - NVIDIA-AI-IOT/tao-toolkit-triton-apps: Sample app code for deploying TAO Toolkit trained models to Triton
There is also a write-up from another forum user: Run PeopleNet with tensorrt - #21 by carlos.alvarez
You can have a look.
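To illustrate what that preprocessing amounts to for a DetectNet_v2-style model such as TrafficCamNet: resize the frame to the network input size, convert BGR to planar RGB (CHW), and scale pixels to [0, 1]. A minimal NumPy sketch, with the 960x544 input size assumed from typical TrafficCamNet configs (the triton-apps repo is the authoritative reference):

```python
import numpy as np

def preprocess(frame_bgr: np.ndarray, width: int = 960, height: int = 544) -> np.ndarray:
    """Turn an HxWx3 BGR uint8 frame into a 1x3xHxW float32 tensor in [0, 1]."""
    h, w = frame_bgr.shape[:2]
    # Nearest-neighbor resize via index sampling (stand-in for cv2.resize)
    rows = np.arange(height) * h // height
    cols = np.arange(width) * w // width
    resized = frame_bgr[rows][:, cols]
    rgb = resized[..., ::-1]              # BGR -> RGB
    chw = np.transpose(rgb, (2, 0, 1))    # HWC -> CHW
    return chw[np.newaxis].astype(np.float32) / 255.0
```

The postprocessing side (decoding the coverage and bbox tensors into boxes) is model-specific; the triton-apps repo contains the reference implementation for DetectNet_v2 outputs.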
There has been no update from you for a while, so we assume this is no longer an issue.
Hence, we are closing this topic. If you need further support, please open a new one.