Is a TensorRT engine bound to the hardware environment it was built on? For example, can an engine generated on a T4 also be used on a 2080 Ti? Is the engine also bound to the software environment, i.e. can it only run inference in the same software environment where it was generated?
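The two factors the question raises can be sketched as a simple pre-deployment check. This is a hypothetical helper, not a TensorRT API: serialized TensorRT engines are generally tied to the GPU compute capability they were built for and to the TensorRT version used at build time, so comparing both between the build and deploy environments is a reasonable first sanity check. (Even with matching compute capability, NVIDIA does not guarantee portability across different GPU models; this is only an illustration of the constraints.)

```python
def engine_is_portable(build_env: dict, deploy_env: dict) -> bool:
    """Rough portability check for a serialized TensorRT engine.

    Each env dict holds:
      'sm'  - GPU compute capability, e.g. (7, 5)
      'trt' - TensorRT version tuple, e.g. (8, 6, 1)

    A serialized engine generally requires the same compute capability
    and the same TensorRT version when it is deserialized.
    """
    return (build_env["sm"] == deploy_env["sm"]
            and build_env["trt"] == deploy_env["trt"])

# T4 and RTX 2080 Ti are both Turing parts (compute capability 7.5),
# so the hardware side of the check matches when versions also match:
t4 = {"sm": (7, 5), "trt": (8, 6, 1)}
rtx2080ti = {"sm": (7, 5), "trt": (8, 6, 1)}
print(engine_is_portable(t4, rtx2080ti))
```

In practice the authoritative answer comes from simply trying to deserialize the engine on the target machine: TensorRT's runtime refuses to load an engine built for a different compute capability or TensorRT version.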
Related topics
Topic | Replies | Views | Activity
---|---|---|---
How can we build a tensorRT model just once and run on different GPUs? | 3 | 429 | May 5, 2020
TensorRT support for multiple GPUs - URGENT | 6 | 2276 | October 28, 2021
tensorrt-3 portability | 1 | 460 | November 29, 2017
Question regarding Tensorrt engine build vs inference environment (TensorRT version, Platform, etc) | 4 | 920 | October 21, 2021
Cross Operating System platform compatibility with the same GPU platform | 1 | 456 | April 11, 2022
.engine generated on device A can't be deployed to device B | 3 | 401 | May 17, 2023
Bug: Tensorrt Model not loading on same GPU on a different device (slight driver version difference) | 1 | 248 | April 30, 2024
tensorRT engine file can be used in different TX2 devices? | 3 | 1174 | September 19, 2021
Python serialized TensorRT engine output wrong data at TensorRT C++ runtime | 4 | 1012 | April 20, 2020
Build TensorRT on Cuda compute capability 7.5 and make it backward compatible with previous capabilities | 4 | 2030 | May 19, 2022