Please provide the following information when requesting support.
• Hardware (T4)
• Network Type (Detectnet_v2)
• TLT Version (Please run “tlt info --verbose” and share “docker_tag” here)
• Training spec file (If you have one, please share it here)
• How to reproduce the issue? (This is for errors. Please share the command line and the detailed log here.)
I trained a model in TAO. How can I generate an engine file from the .etlt or .tlt file on a dGPU?
Thank you very much.
Usually there are 5 ways.
- Use " tao detectnet_v2 export xxx ". There is an option “
--engine_file” . It is the expected tensorrt engine.
- Use " tao converter xxx ". The converter is inside the docker, it can generate tensorrt engine based on .etlt file.
- Actually this is the same as method 2, but it runs outside the docker via the standalone tao-converter binary. See DetectNet_v2 — TAO Toolkit 3.22.05 documentation and TensorRT — TAO Toolkit 3.22.05 documentation.
- Use DeepStream. Configure the .etlt file in the DeepStream config file. Run DeepStream and it will generate the TensorRT engine (see the config snippet below). See DetectNet_v2 — TAO Toolkit 3.22.05 documentation.
- Use the triton-app. Configure the .etlt file in GitHub - NVIDIA-AI-IOT/tao-toolkit-triton-apps: Sample app code for deploying TAO Toolkit trained models to Triton. Let it generate model.plan (i.e., the TensorRT engine).
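
For method 1, here is a minimal sketch of the export command. The paths, the $KEY variable, and the fp16 data type are placeholders; substitute your own training key, spec file, and file locations:

```
# Sketch only: the paths and $KEY are example values, not your actual files.
tao detectnet_v2 export \
    -m /workspace/tao-experiments/detectnet_v2/model.tlt \
    -k $KEY \
    -e /workspace/tao-experiments/detectnet_v2/spec.txt \
    -o /workspace/tao-experiments/detectnet_v2/model.etlt \
    --data_type fp16 \
    --engine_file /workspace/tao-experiments/detectnet_v2/model.engine
```

Note that a TensorRT engine is specific to the GPU it was built on, so generate the engine on the same dGPU model you plan to deploy to.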
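For methods 2 and 3, a sketch of the converter invocation. The input dimensions (-d 3,384,1248), max batch size, and file names are assumptions that must match your training spec; output_cov/Sigmoid and output_bbox/BiasAdd are the standard DetectNet_v2 output nodes:

```
# Inside the docker this runs as "tao converter ..."; outside the docker it is
# the standalone tao-converter binary. -d is the C,H,W of the model input
# (example values here; use your training resolution).
tao-converter -k $KEY \
              -d 3,384,1248 \
              -o output_cov/Sigmoid,output_bbox/BiasAdd \
              -t fp16 \
              -m 16 \
              -e model.engine \
              model.etlt
```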
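For method 4, a sketch of the relevant keys in the nvinfer config file. The file names, key, and class count are placeholders; DeepStream builds and serializes the engine on the first run if the file named by model-engine-file does not exist yet:

```
[property]
tlt-encoded-model=model.etlt
tlt-model-key=<your key>
# Generated here on the first run if the file is absent:
model-engine-file=model_b1_gpu0_fp16.engine
output-blob-names=output_cov/Sigmoid;output_bbox/BiasAdd
network-mode=2   # 0=FP32, 1=INT8, 2=FP16
num-detected-classes=3
```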
That is a very detailed answer. Thank you very much.