We have some Python ML algorithms in the form of .pyc files that we have tested on the Jetson Nano. The ML algorithms make use of Python libraries such as NumPy, pandas, SciPy, scikit-learn, etc. We are using the Yocto build system to build a custom image. Currently, it seems like the algorithms run only on the CPU. We would like to use the GPU to speed up the execution of these ML algorithms.
I have some questions regarding the approaches we can take to achieve this:
Is it possible to run these Python ML algorithms as is, without any modification, in a way that lets them execute on both the CPU and GPU?
If the method in question 1 is not possible, what is a quick way, with minimal modifications to the Python code, to use the CPU+GPU and see performance benefits?
What is the ideal/recommended method for moving to CPU+GPU-based processing that will give us the most performance benefit?
I am new to Python, ML, and GPU-based processing. A few things I do know: TensorFlow allows distributing a workload between the CPU and GPU; PyCUDA allows GPU acceleration using CUDA; and TensorRT also allows using the GPU (for optimized inference).
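As an illustration of the "minimal modification" idea in question 2, here is a hedged sketch using CuPy, which is not mentioned above and is an assumption on my part: it is a near drop-in replacement for NumPy, so array-heavy code can often target the GPU by swapping a single import. The fallback keeps the snippet runnable on machines without CUDA.

```python
# Hedged sketch (CuPy is an assumption, not from the thread above):
# swap the array library at import time, leave the numeric code untouched.
try:
    import cupy as xp   # GPU arrays (CUDA-backed), if available
except ImportError:
    import numpy as xp  # CPU fallback: identical API for this snippet

a = xp.arange(10, dtype=xp.float32)  # toy input vector
b = (a * 2.0).sum()                  # elementwise multiply + reduction
print(float(b))                      # 90.0 on both the CPU and GPU paths
```

Whether this actually helps on a Nano depends on the workload: small arrays are usually faster on the CPU because of host-to-device transfer overhead.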
I would be really grateful if someone with knowledge of this could point me in the right direction and speed up my learning process.
Sorry, missed that. If you're using a Jetson Nano, we are not sure you can use RAPIDS.
We recommend posting your question on the Jetson forum to get better help.
You cannot use RAPIDS on a Jetson Nano. It is NX and AGX only at the moment, and those packages are older.
If you can use an NX or AGX and get RAPIDS installed, the transition from CPU to GPU ML that you described is generally pretty painless with RAPIDS. Let us know if this is still a need.
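To show why the transition is usually painless: cuML (part of RAPIDS) mirrors the scikit-learn estimator API, so porting is often just a changed import. The snippet below is a sketch under that assumption; the fallback keeps it runnable on CPU-only machines, and on a Nano (no RAPIDS support) only the scikit-learn branch applies.

```python
import numpy as np

# Hedged sketch of the sklearn -> cuML swap: cuML mirrors the
# scikit-learn estimator API, so only the import typically changes.
try:
    from cuml.cluster import KMeans     # GPU path: RAPIDS on NX/AGX
except ImportError:
    from sklearn.cluster import KMeans  # CPU path: plain scikit-learn

X = np.random.RandomState(0).rand(100, 2)  # toy data: 100 points in 2-D
model = KMeans(n_clusters=3, random_state=0, n_init=10).fit(X)
print(model.cluster_centers_.shape)        # (3, 2): one centroid per cluster
```

The fit/predict calls and fitted attributes (`cluster_centers_`, `labels_`) are the same in both libraries, which is what makes the port mostly mechanical.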