I have no experience with the NVIDIA TAO Toolkit. I want to convert a PyTorch model to an .etlt model, but I don't know whether the two can be converted directly, or whether I need to get a .tlt model first and then convert that .tlt model to an .etlt model.
So is there any way to convert a PyTorch model to a .tlt model?
Do I have to train my own dataset with NVIDIA TAO Toolkit to get a .tlt model?
No, it is not supported.
Yes. You can also download pretrained models from NGC; .tlt models are available there as well.
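For reference, a rough sketch of how pulling a pretrained model with the NGC CLI might look; the model name and version below are placeholders, so browse ngc.nvidia.com for the actual entries:

```
# Illustrative only: list TAO models on NGC, then download one.
# "pretrained_object_detection:resnet18" is a placeholder name/version.
ngc registry model list "nvidia/tao/*"
ngc registry model download-version "nvidia/tao/pretrained_object_detection:resnet18"
```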
Thank you for your reply!
I want to train on our own dataset with YOLOv5 in NVIDIA TAO, but is that not supported? The documentation on the official website says that only YOLOv3 and YOLOv4 are currently supported.
Correct, YOLOv5 is not supported in TAO. It is currently not on the roadmap, since YOLOv5 is not clearly better than YOLOv4.
YOLOv5 now supports converting a .pt model to an .engine model, and NVIDIA TAO can also convert an .etlt model to an .engine model.
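For anyone following along, here is a rough sketch of that .pt-to-.engine path, assuming the export.py script from the ultralytics/yolov5 repo; file names are placeholders, and the flags should be verified against your checkout:

```
# Sketch based on ultralytics/yolov5 export.py; yolov5s.pt is a placeholder.
# Direct route: export straight to a TensorRT engine (needs a GPU, hence --device 0).
python export.py --weights yolov5s.pt --include engine --device 0

# Alternative route: export to ONNX first, then build the engine with TensorRT's trtexec.
python export.py --weights yolov5s.pt --include onnx
trtexec --onnx=yolov5s.onnx --saveEngine=yolov5s.engine
```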
What is the difference between using an .etlt model and an .engine model on the DeepStream SDK?
In DeepStream, the end user can configure the model in either of two ways:
- The .etlt model and its key
- The .engine file.
If the first way is used, DeepStream will convert the .etlt model into an .engine file itself for running inference.
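For illustration, a minimal sketch of how the two options look in a Gst-nvinfer config file; the paths and key value are placeholders:

```
[property]
# Option 1: the encoded TAO model plus its key; DeepStream builds the engine itself.
tlt-encoded-model=/path/to/model.etlt
tlt-model-key=<your-model-key>

# Option 2: point directly at a prebuilt engine file.
model-engine-file=/path/to/model.engine
```

In practice you would configure one path or the other.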
So even if the YOLOv5 .pt model cannot be converted to an .etlt model, it can still be converted to an .engine model for use on DeepStream.
Is the performance of the .engine model and the .etlt model the same on DeepStream?
Similar mAP is expected between the .tlt model and the .engine model.
Many thanks!
Speeding Up Deep Learning Inference Using TensorFlow, ONNX, and NVIDIA TensorRT | NVIDIA Technical Blog
This link may answer your problem.
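To make that concrete, here is a minimal sketch of the PyTorch-to-ONNX step (the ONNX-to-TensorRT step then works as in the blog post); the model, input shape, and file names are placeholders:

```
# Minimal sketch: export a PyTorch model to ONNX so TensorRT can consume it.
# torchvision's resnet18 stands in for your own network.
import torch
import torchvision

model = torchvision.models.resnet18().eval()
dummy = torch.randn(1, 3, 224, 224)  # one 224x224 RGB image as a tracing input

torch.onnx.export(
    model,
    dummy,
    "model.onnx",
    input_names=["input"],
    output_names=["output"],
    opset_version=13,
)
# The resulting model.onnx can then be built into an .engine file,
# e.g. with TensorRT's trtexec: trtexec --onnx=model.onnx --saveEngine=model.engine
```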
The next release will support converting open-source ONNX Classification and UNet models into TAO-compatible models.