The Python backend examples are all simple, and none of them shows model loading. If I want to deploy a PyTorch model, can I use the Python backend? If so, how do I load the model from a state dict?
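For reference, here is a minimal sketch of what such a `model.py` could look like. The `TritonPythonModel` class with `initialize`/`execute` is the interface the Python backend expects; everything else (the `SimpleNet` architecture, the `model.pt` filename, and the `INPUT0`/`OUTPUT0` tensor names) is an illustrative assumption and would need to match your own network and `config.pbtxt`:

```python
# model.py for a Triton Python backend model (sketch; SimpleNet and the
# file/tensor names are illustrative assumptions, not part of Triton's API)
import os

import torch
import torch.nn as nn


class SimpleNet(nn.Module):
    """Stand-in for your own network architecture.

    load_state_dict() only restores weights, so the model class itself
    must be defined (or imported) inside model.py.
    """

    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)

    def forward(self, x):
        return self.fc(x)


class TritonPythonModel:
    def initialize(self, args):
        # args["model_repository"] and args["model_version"] point at the
        # directory that holds this model.py; the state dict is assumed
        # to sit next to it as model.pt
        path = os.path.join(
            args["model_repository"], args["model_version"], "model.pt"
        )
        self.device = "cuda" if torch.cuda.is_available() else "cpu"
        self.model = SimpleNet()
        self.model.load_state_dict(torch.load(path, map_location=self.device))
        self.model.to(self.device).eval()

    def execute(self, requests):
        # triton_python_backend_utils is only importable inside the Triton
        # container, so import it lazily here
        import triton_python_backend_utils as pb_utils

        responses = []
        for request in requests:
            in_t = pb_utils.get_input_tensor_by_name(request, "INPUT0")
            x = torch.from_numpy(in_t.as_numpy()).to(self.device)
            with torch.no_grad():
                y = self.model(x)
            out_t = pb_utils.Tensor("OUTPUT0", y.cpu().numpy())
            responses.append(
                pb_utils.InferenceResponse(output_tensors=[out_t])
            )
        return responses
```

The key point is that `load_state_dict` restores only the weights, so `model.py` must construct the network class first and then load the `.pt` file saved from `model.state_dict()`.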