Can the TRT model on the server be directly transplanted to Jetson NX for use?

Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): Server: GeForce RTX 3070 Ti; Device: Jetson Xavier NX
• DeepStream Version: 6.0.0
• JetPack Version (valid for Jetson only): JetPack 4.6
• TensorRT Version: TensorRT-8.0.1.6
• NVIDIA GPU Driver Version (valid for GPU only): Server (GeForce RTX 3070 Ti): CUDA 11.5; Jetson NX (JetPack 4.6): CUDA 10.2.300
• Issue Type (questions, new requirements, bugs): question

Can the TRT model built on the server be directly used on the Jetson NX? Building the engine on the server is much faster than on the NX, so I want to copy the engine built on the server and use it on the NX.

No, it cannot: the TensorRT versions and the devices are different. A serialized TensorRT engine is specific to the exact TensorRT version and the GPU it was built on, so it has to be rebuilt on the target device.
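
Since the serialized engine is not portable, the usual workflow is to copy a portable intermediate format (for example an ONNX file) to the NX and build the engine there. Below is a minimal sketch using the TensorRT Python API, assuming a hypothetical `model.onnx` exported from the training framework (the workspace API shown follows TensorRT 8.0; newer versions use `set_memory_pool_limit()`):

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_engine(onnx_path, engine_path, workspace_gb=1):
    """Build and serialize a TensorRT engine on the target device."""
    builder = trt.Builder(TRT_LOGGER)
    # Explicit-batch networks are required for ONNX models.
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, TRT_LOGGER)

    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            raise RuntimeError("Failed to parse ONNX model")

    config = builder.create_builder_config()
    # TensorRT 8.0-era API for the builder workspace limit.
    config.max_workspace_size = workspace_gb << 30

    # Building on the NX itself ties the engine to its GPU and TRT version.
    serialized = builder.build_serialized_network(network, config)
    with open(engine_path, "wb") as f:
        f.write(serialized)

# Hypothetical file names for illustration.
build_engine("model.onnx", "model_nx.engine")
```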

Thank you for your attention. I have installed the same TensorRT version (8.0.1.6) on both the server and the device.
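
Note that even with matching TensorRT versions, the engine still encodes GPU-specific kernel choices (the RTX 3070 Ti is Ampere, compute capability 8.6, while the Xavier NX is Volta, 7.2), so deserializing the server's engine on the NX is expected to fail. A quick check, assuming a hypothetical `server.engine` file copied over from the server:

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.ERROR)
runtime = trt.Runtime(logger)

# "server.engine" is a hypothetical engine file built on the RTX 3070 Ti.
with open("server.engine", "rb") as f:
    engine = runtime.deserialize_cuda_engine(f.read())

# On a mismatched GPU architecture, deserialization returns None and
# TensorRT logs an error (e.g. a compute-capability mismatch).
if engine is None:
    print("Engine is not compatible with this device; rebuild it on the NX.")
```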

Let’s move it to the TensorRT forum to get better support. Thank you.
