.engine generated on device A can't be deployed to device B


A TensorRT .engine file generated on one machine fails to load on a second machine whose only difference is the GPU.


TensorRT Version: 7.2.2-1
GPU Type: GeForce GTX 1650 Ti Mobile
Nvidia Driver Version: 525.85.12
CUDA Version: 11.1
CUDNN Version:
Operating System + Version: Linux 22.04
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):

Relevant Files

Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)

Steps To Reproduce

I generated a YOLOv4 .engine file on one system. However, the .engine does not work on a nearly identical setup where only the GPU differs (RTX 3080, 8 GB).
Do you have any suggestions as to why this happens?

Thanks in advance!

Engines aren’t supposed to work on different devices: a serialized engine is optimized for the exact GPU it was built on. You may be able to build the engine with hardware compatibility enabled (Developer Guide :: NVIDIA Deep Learning TensorRT Documentation), but this can hurt performance.
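For reference, a sketch of building a hardware-compatible engine with trtexec. Note that hardware compatibility was introduced in TensorRT 8.6 (the 7.2.2 release in this report does not support it), and the ampere+ level produces engines that run only on Ampere-or-newer GPUs. The file name yolov4.onnx is a placeholder for your exported model:

```shell
# Sketch, assuming TensorRT >= 8.6 and an ONNX export of the model.
# The resulting engine is portable across Ampere and newer GPUs,
# at some cost in performance compared with a device-specific build.
trtexec --onnx=yolov4.onnx \
        --saveEngine=yolov4_compat.engine \
        --hardwareCompatibilityLevel=ampere+
```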

I also found some additional information: Developer Guide :: NVIDIA Deep Learning TensorRT Documentation
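Otherwise, the usual approach is simply to rebuild the engine on each target machine from a portable model format, since serialized engines are tied to the GPU (and to the TensorRT/CUDA versions) they were built with. A sketch, assuming the YOLOv4 model was exported to ONNX as yolov4.onnx:

```shell
# Rebuild the engine on the deployment machine itself (e.g. the RTX 3080),
# so the optimization is done for that GPU. --fp16 is optional.
trtexec --onnx=yolov4.onnx \
        --saveEngine=yolov4_rtx3080.engine \
        --fp16
```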

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.