How to run an NVIDIA pretrained model directly on a T4 (or similar cards, not an edge device)?

The pretrained model could be any model in .tlt/.etlt format, such as FaceNet, FPENet, GazeNet, etc.

Can I deploy it using TensorRT directly?

Thanks!

Hi,

This looks like a TAO Toolkit related issue. We will move this post to the TAO Toolkit forum.

Thanks!

Yes, you can specify the .etlt model in the config file; see deepstream_tao_apps/apps/tao_others at master · NVIDIA-AI-IOT/deepstream_tao_apps · GitHub. DeepStream will actually generate a TensorRT engine from the .etlt file and then run inference with that engine.
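
As a minimal sketch, the nvinfer section of a DeepStream config for an .etlt model looks roughly like this. The model paths, encode key, input dimensions, and blob names below are assumptions and must match your actual model (the blob names follow the DetectNet_v2 convention that FaceNet uses):

```
[property]
gpu-id=0
# Assumed paths: point these at your downloaded .etlt model and label file
tlt-encoded-model=../../models/facenet/model.etlt
tlt-model-key=nvidia_tlt
labelfile-path=labels_facenet.txt
# DeepStream serializes the TensorRT engine it generates to this path on first run
model-engine-file=../../models/facenet/model.etlt_b1_gpu0_fp16.engine
batch-size=1
network-mode=2
num-detected-classes=1
# DetectNet_v2-style input/output blobs (assumed for FaceNet)
uff-input-blob-name=input_1
output-blob-names=output_bbox/BiasAdd;output_cov/Sigmoid
```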

For FaceNet in DeepStream, see /opt/nvidia/deepstream/deepstream-6.0/samples/configs/tao_pretrained_models/deepstream_app_source1_facedetectir.txt
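
You can run that sample config directly with deepstream-app. The first run takes a few minutes because DeepStream builds the TensorRT engine; the path below assumes a DeepStream 6.0 install:

```
cd /opt/nvidia/deepstream/deepstream-6.0/samples/configs/tao_pretrained_models
deepstream-app -c deepstream_app_source1_facedetectir.txt
```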

Also, FaceNet is actually based on the DetectNet_v2 network, so you can run inference with it as described in DetectNet_v2 — TAO Toolkit 3.21.11 documentation,
or deploy it on Triton: Integrating TAO CV Models with Triton Inference Server — TAO Toolkit 3.21.11 documentation and GitHub - NVIDIA-AI-IOT/tao-toolkit-triton-apps: Sample app code for deploying TAO Toolkit trained models to Triton.
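
And to answer the original question about using TensorRT directly: you can convert the .etlt file into a standalone TensorRT engine with tao-converter and then load that engine in any TensorRT application. A rough sketch, where the key, input dimensions, and output blob names are assumptions that must match your model:

```
tao-converter -k nvidia_tlt \
              -d 3,416,736 \
              -o output_bbox/BiasAdd,output_cov/Sigmoid \
              -t fp16 \
              -e facenet_fp16.engine \
              model.etlt
```

Note that the resulting .engine file is tied to the GPU and TensorRT version it was built with, so build it on the T4 where you plan to run inference.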
