Can a TLT engine be reused between two machines with an identical hardware configuration?

Please provide the following information when requesting support.

• Hardware (T4/V100/Xavier/Nano/etc): GeForce RTX 3090
• Network Type (Detectnet_v2/Faster_rcnn/Yolo_v4/LPRnet/Mask_rcnn/Classification/etc): Yolo_v4, Classification
• TLT Version (Please run “tlt info --verbose” and share “docker_tag” here): 3.0

From the TLT doc:

Machine-specific optimizations are done as part of the engine creation process, so a distinct engine should be generated for each environment and hardware configuration. If the inference environment’s TensorRT or CUDA libraries are updated (including minor version updates), or if a new model is generated, new engines need to be generated. Running an engine that was generated with a different version of TensorRT and CUDA is not supported and will cause unknown behavior that affects inference speed, accuracy, and stability, or it may fail to run altogether.
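To rule out a mismatch, I compared the software stacks on the two machines first. A minimal sketch of the checks, assuming an Ubuntu dGPU host (package names and paths may differ on other distributions):

```
# Run on both machines and diff the output.
nvidia-smi --query-gpu=name,driver_version --format=csv,noheader  # GPU model and driver
nvcc --version                                                    # CUDA toolkit
dpkg -l | grep -E 'libnvinfer|libcudnn'                           # TensorRT and cuDNN packages
deepstream-app --version-all                                      # DeepStream stack, if installed
```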

So it seems that a TLT engine can be reused between two machines with an identical hardware configuration (same dGPU, CPU, memory, etc.) and the same versions of the core libraries (CUDA, cuDNN, DeepStream, TensorRT, driver). Can you confirm that my understanding is correct?

Correct.
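Note that the portable artifact is the exported .etlt model, not the engine file. If the two stacks ever diverge, regenerate the engine on each target machine with tlt-converter. A sketch for a classification model follows; the key, file names, and the -d/-o values are illustrative, so check the TLT docs for the exact input dimensions and output nodes of your network:

```
# Illustrative values: substitute your own NGC key, input dims (-d),
# and output node (-o). Shown for a 3x224x224 classification model.
tlt-converter -k $NGC_KEY \
              -d 3,224,224 \
              -o predictions/Softmax \
              -t fp16 \
              -e classification.fp16.engine \
              classification.etlt
```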
