TRT conversion on Jetson AGX

While converting my TensorFlow model with TensorRT, the process gets killed.

Can anyone suggest a solution for this?

Hi,

Could you share the complete error log with us?

In general, a killed process is caused by running out of memory.
You can validate this by monitoring the device with sudo tegrastats while the conversion runs.
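For example, the RAM field in a tegrastats line looks like `RAM 4722/15823MB (...)`; a small parser to log it over time could look like this (a sketch assuming that output format, with a made-up sample line):

```python
import re

def parse_ram(line):
    """Extract (used_mb, total_mb) from a tegrastats output line.

    Assumes the usual field format, e.g. 'RAM 4722/15823MB (...)'.
    Returns None if the RAM field is missing.
    """
    m = re.search(r"RAM (\d+)/(\d+)MB", line)
    return (int(m.group(1)), int(m.group(2))) if m else None

# Example tegrastats-style line (values are illustrative only).
sample = "RAM 4722/15823MB (lfb 2171x4MB) SWAP 0/7911MB CPU [12%@1190]"
print(parse_ram(sample))  # -> (4722, 15823)
```

Piping `sudo tegrastats` through a loop that calls this on each line makes it easy to see whether used RAM climbs to the total right before the kill.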

However, the Xavier already has 16 GiB of memory, which should be enough for most models.
Do you use TF-TRT?
TF-TRT tends to use more memory since both frameworks need to be loaded.

Thanks.

Yes, I used a TF-TRT based model:
I converted the TensorFlow saved_model format to a TF-TRT model.
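For reference, a TF-TRT SavedModel conversion like this typically uses the TF 2.x converter API, roughly as below (a sketch; the directory names and precision/workspace values are placeholders, not from this thread, and it needs TensorFlow with GPU support on the device to run):

```python
# Sketch of a TF-TRT SavedModel conversion (TF 2.x API).
# Directory names are placeholders; precision and workspace are examples.
from tensorflow.python.compiler.tensorrt import trt_convert as trt

params = trt.TrtConversionParams(
    precision_mode=trt.TrtPrecisionMode.FP16,  # FP16 reduces memory use
    max_workspace_size_bytes=1 << 28,          # 256 MiB TensorRT workspace
)
converter = trt.TrtGraphConverterV2(
    input_saved_model_dir="saved_model_dir",
    conversion_params=params,
)
converter.convert()
converter.save("tftrt_saved_model_dir")
```

Lowering the workspace size and using FP16 can reduce peak memory during conversion, though the TensorFlow runtime itself still has to be resident.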

Also, when running sudo tegrastats, memory usage goes up to full. Do you want a screenshot of “sudo tegrastats”, to be specific?

Hi,

Sorry for the late update.
It seems that the error is caused by running out of memory.
(memory reaches full in tegrastats)

Is pure TensorRT an option for you?
If yes, it’s recommended to do so since it uses much less memory than TF-TRT.
(since TensorFlow tends to occupy lots of memory on Jetson devices)
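One common route to a pure TensorRT engine is to export the model to ONNX and build an engine with trtexec (a sketch; tf2onnx as the export tool and the file names are assumptions, not from this thread, and the build step must run on the Jetson):

```shell
# Export the TensorFlow SavedModel to ONNX (tf2onnx is one option).
python3 -m tf2onnx.convert --saved-model saved_model_dir --output model.onnx

# Build a TensorRT engine from the ONNX file on the Jetson.
# trtexec ships with JetPack under /usr/src/tensorrt/bin.
/usr/src/tensorrt/bin/trtexec --onnx=model.onnx --saveEngine=model.engine --fp16
```

Once the engine is built, inference runs through the TensorRT runtime alone, so TensorFlow never needs to be loaded on the device.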

Thanks.

@AastaLLL What does Pure TensorRT actually imply here? Can you share some related links maybe?

Hi,

Pure TensorRT means that the model runs with TensorRT directly, without TensorFlow integration.
You can find the TensorRT workflow below:
https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html#fit

Thanks.