Running Programs Separately on CPU and GPU

Hello there,

I want to develop applications on the Xavier NX. My application consists of two independent programs, each using a DNN. For speed, can I run the first program on the CPU and the other on the GPU?

Hi,

TensorRT doesn’t support CPU inference.
However, other third-party frameworks (e.g., TensorFlow, PyTorch, …) can run inference on the CPU.
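As a minimal sketch of this device-placement pattern in PyTorch (the tiny `Linear` layer here is only a hypothetical stand-in for your DNN; the same `.to(device)` call applies to any model loaded from a checkpoint):

```python
import torch

# Hypothetical stand-in model; substitute your own DNN (nn.Module).
model_cpu = torch.nn.Linear(8, 4)  # first program: stays on the CPU

# Second program: use the GPU when CUDA is available, fall back to CPU otherwise.
device_gpu = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model_gpu = torch.nn.Linear(8, 4).to(device_gpu)

x = torch.randn(1, 8)
with torch.no_grad():
    out_cpu = model_cpu(x)                  # executes on the CPU
    out_gpu = model_gpu(x.to(device_gpu))   # executes on the GPU if present
```

Note that inputs must live on the same device as the model, hence the `x.to(device_gpu)` before the second forward pass.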

That said, we recommend running inference for both models on the GPU.
This should give you better performance on Jetson.

Thanks.