Please provide the following information when requesting support.
• Hardware (T4/V100/Xavier/Nano/etc) : Orin AGX
• Network Type (Detectnet_v2/Faster_rcnn/Yolo_v4/LPRnet/Mask_rcnn/Classification/etc) : ALL
• TLT Version (Please run “tlt info --verbose” and share “docker_tag” here) : 4.0
• Training spec file (If you have one, please share it here)
• How to reproduce the issue? (This is for errors. Please share the command line and the detailed log here.)
I need to convert a .etlt model to a TensorRT engine to run on Orin AGX, so I followed this documentation, specifically the “Installing TAO Deploy through wheel” section. The installation went fine without any errors, but when I try to run the sample command it fails with an error.
Here’s the content of the Dockerfile I used to create the environment:
# Base image: L4T TensorRT 8.5.2 devel image for Jetson
FROM nvcr.io/nvidia/l4t-tensorrt:r8.5.2.2-devel

# System packages (mpich provides the MPI runtime that mpi4py needs)
RUN apt update && \
    apt --fix-broken install -y && \
    apt install -y mpich

# Additional Python packages
RUN pip install mpi4py torchinfo clearml segmentation-models-pytorch transformers

# TAO Deploy wheel
RUN pip install nvidia-tao-deploy
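In case it helps reproduce the setup, here is a minimal sketch of how the image can be built and entered (the image tag and mount path are placeholders; --runtime nvidia exposes the GPU to the container on Jetson):

# Build the image from the Dockerfile above
docker build -t tao-deploy-orin .

# Open a shell in the container with GPU access and the model directory mounted
docker run --runtime nvidia -it --rm \
  -v /path/to/models:/workspace/models \
  tao-deploy-orin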
Please help, I don’t know how to proceed further. Any guidance on how to fix this issue is appreciated.
I’m trying to get a TensorRT engine for my custom application. From the documents, it seems that tao-deploy should work on Jetson platforms…
Could NVIDIA make a tao-deploy docker image that supports Jetson platforms? This would be much better than the semi-complete/semi-accurate documentation that leaves us confused when we follow it step by step and it doesn’t work…
Yes, currently I’m exploring tao-converter. Could you please point me to a specific example of exporting a RetinaNet QAT model with tao export (TAO 4.0) and then converting it with tao-converter? The sample TAO notebooks only contain tao-deploy examples…
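For reference, a sketch of what a tao-converter call for an INT8 (QAT) RetinaNet .etlt usually looks like; the key, input dimensions, and paths are placeholders, the calibration cache is the one written during tao export of the QAT model, and NMS is the output node name typically used by TAO RetinaNet (confirm against your export log):

# Key, input dimensions, and all paths below are placeholders
tao-converter \
  -k <encryption_key> \
  -d 3,544,960 \
  -o NMS \
  -c /export/cal.bin \
  -t int8 \
  -m 8 \
  -e /export/retinanet_qat.engine \
  /export/retinanet_qat.etlt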
Good morning,
I’ve got the same error as @glingk when running detectnet_v2 gen_trt_engine natively on Jetson Orin AGX.
I installed tao-deploy natively on Orin with the following command:
python3.8 -m pip install nvidia-tao-deploy
Is it possible to know from NVIDIA what the “official” way is to take an .etlt model file, convert it to a TensorRT engine, and use it to run inference with the TensorRT C++ or Python API on Jetson Orin?
It is frustrating to waste days on one of the basic operations you would expect to do with an Orin.
I downloaded the binary, but it does not run on Jetson Orin with JetPack 5.1.1 since the binary is built against TensorRT 7.x.
I tried the following workaround: create symbolic links for libnvinfer.so.7, libnvinfer_plugin.so.7, and libnvparsers.so.7 pointing to the .so.8 versions available on Orin (see the sketch below). Now the binary starts, but I still get errors when I run it.
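A minimal sketch of that symlink workaround, assuming the default JetPack 5.x library location and .so.8 names (check the exact versions installed on your device first):

# Point the .so.7 names the binary expects at the .so.8 libraries shipped with JetPack 5.x
cd /usr/lib/aarch64-linux-gnu
sudo ln -s libnvinfer.so.8 libnvinfer.so.7
sudo ln -s libnvinfer_plugin.so.8 libnvinfer_plugin.so.7
sudo ln -s libnvparsers.so.8 libnvparsers.so.7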
Hi @Morganh,
Thank you for your suggestion. I tried the binary and it works. I successfully converted the model and ran the engine with TensorRT (Python).
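In case it is useful to others, a quick way to sanity-check a generated engine on JetPack without writing any application code (trtexec path assumed from the standard TensorRT install; the engine path is a placeholder):

# Load and time the engine
/usr/src/tensorrt/bin/trtexec --loadEngine=/path/to/model.engine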