TensorRT

Hello, I’m currently running federated learning training on my Jetson device, with the code running in Docker. If I want to speed up training, does TensorRT have any preconditions, or what code do I need to make it speed up my training?

Hi,

TensorRT is an inference library.
It cannot be used for training, since it has no back-propagation support.

Thanks.

Do you have any suggestions for speeding up training? Thanks.

Hi,

Have you tried it on the Jetson device?
If yes, could you share the performance of the training job with us?

In general, you can monitor the device performance with tegrastats.
If it doesn’t reach 99% utilization, you can improve it by feeding more work to the GPU (e.g., increase the batch size).

$ sudo tegrastats
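
tegrastats prints one status line per interval, and the GPU load appears in the GR3D_FREQ field. As a rough sketch (the exact line format varies by JetPack version, and the sample line below is illustrative), the utilization percentage can be pulled out like this:

```python
import re
from typing import Optional

def gpu_utilization(tegrastats_line: str) -> Optional[int]:
    """Extract the GPU utilization (%) from the GR3D_FREQ field
    of a tegrastats output line. Returns None if the field is absent."""
    match = re.search(r"GR3D_FREQ (\d+)%", tegrastats_line)
    return int(match.group(1)) if match else None

# Illustrative sample line; real output differs by JetPack version.
sample = "RAM 2034/3964MB (lfb 2x1MB) CPU [15%@1224,10%@1224] GR3D_FREQ 42%@921"
print(gpu_utilization(sample))  # 42
```

If GR3D_FREQ stays well below 99% while training runs, the GPU is being starved, and a larger batch size (or more data-loading workers) should help.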

Thanks.

Sorry, I only saw this reply today. Our team is currently trying to use NVIDIA Jetson devices to complete federated learning training for anomaly detection in surveillance scenarios. If there is any progress, I will share it with you.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.