"TensorRT Warning: Using Engine Plan File Across Devices — Seen Even When Engine Generated on Same Jetson"

Please provide complete information as applicable to your setup.

• Hardware Platform: Jetson
• DeepStream Version: 6.0

Hi team,

I am running a DeepStream pipeline on a Jetson device that includes inference and tracking (nvtracker). During execution, I encounter the following warning:

WARNING: [TRT]: Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.

I have generated the TensorRT engine file (.engine) directly on the same Jetson device where the inference is being executed — no transfer between different Jetson models or devices. However, the warning still appears during runtime.

  • Why is this TensorRT warning being shown even though the engine is created and used on the same device?
  • Should this warning be a cause for concern regarding stability or performance in my DeepStream pipeline?

Any guidance would be highly appreciated!

DeepStream leverages TensorRT to do inference, so this is a TensorRT warning log. Please refer to this topic for the explanation. Could you share the complete running log, including engine generation? I'm wondering about the context of the log.

Sir,
we are generating the TensorRT engine file and using that same engine file to run the application, yet we are still seeing the above warning!

Please refer to my last comment. Could you share the complete log? Thanks!

logs.txt (8.3 KB)
Hello Team,

I have attached the complete log for your reference. In our pipeline, we are pushing data to the appsrc element, performing inference, and attaching probes to each element to calculate processing time.

However, even though the TensorRT engine file is generated and used on the same device, we are still encountering the following warning:

WARNING: [TRT]: Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.

Could you please help clarify why this warning appears, even when there is no change in the device?

Thank you.

Thanks for sharing! There is no engine-generation log in logs.txt. How did you generate the engine, with trtexec or with DeepStream? Could you share a complete DeepStream log that includes engine generation? Thanks!

Engine_file_logs.txt (10.5 KB)

I have attached the logs generated during the engine file conversion process.

We are generating the TensorRT engine file with DeepStream, using the model's .cfg and .weights files along with the nvinfer element's configuration file.

Thank you.
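For reference, an nvinfer configuration of the shape described above (Darknet .cfg/.weights model, as in the objectDetector_Yolo sample) might look like the following sketch; all file names are placeholders:

```ini
# Sketch of an nvinfer config for a Darknet model; paths are hypothetical.
[property]
gpu-id=0
custom-network-config=model.cfg
model-file=model.weights
# nvinfer caches the built plan here and reloads it on subsequent runs.
model-engine-file=model_b1_gpu0_fp16.engine
network-mode=2
num-detected-classes=80
```

If model-engine-file points at an existing plan, nvinfer skips the build and only the loading log appears, which is why the two attached logs show different phases.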

Sorry for the late reply! The log logs.txt includes only loading the engine, and Engine_file_logs.txt includes only generating the engine. Could you share one complete log? For example, rename the existing engine file, then run the application so that a single log covers both generating and loading the engine. Thanks!

Apologies! The log file I shared contains the logs from converting the .cfg and .wts files to the .engine file.
Do you also need details of how I'm converting the .pt file to .cfg and .wts?

No, please refer to my last comment. You can rerun the application after renaming the engine. I need one complete log that includes generating the engine, loading the engine, and doing inference, not separate logs. Thanks!
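The rename-and-rerun step requested above can be sketched as follows; the engine name, config path, and use of deepstream-app are assumptions to adapt to your actual application:

```shell
# Engine and config file names below are placeholders; adjust to your setup.
ENGINE=model_b1_gpu0_fp16.engine

# Move the cached engine aside so nvinfer has to rebuild it on the next run.
if [ -f "$ENGINE" ]; then
    mv "$ENGINE" "$ENGINE.bak"
fi

# Rerun the pipeline and capture engine generation, engine loading, and
# inference in a single log file.
deepstream-app -c deepstream_app_config.txt 2>&1 | tee full_run.log
```

With the cached plan out of the way, full_run.log should contain the build, load, and inference phases in one place.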