I’m trying to use the RidgeClassifier and MiniRocketMultivariate models on the Xavier, but my benchmarks show them running about 2.5× slower than on an Intel CPU. Are there any optimized libraries for these models?
Note: I found no reference to this on the DevTalk forums.
Could you share more details about how you run inference?
Did you run it on the GPU with TensorRT?
Please note that you can maximize the device performance with the following command:
$ sudo nvpmodel -m 0
$ sudo jetson_clocks
Thanks for the prompt reply.
I am running on MAXN, and I have applied the configuration you suggested, but it made no improvement in the execution time.
I’m not sure how TensorRT is relevant here, since these modules are not implemented in TensorFlow but imported from:
from sklearn.linear_model import RidgeClassifier
from sktime.transformations.panel.rocket import MiniRocketMultivariate
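For context, a minimal sketch of the kind of benchmark I’m timing. Since sktime may not be installed everywhere, the MiniRocket feature matrix is simulated here with random numbers (MiniRocket’s default output is 9,996 features per series); all sizes are illustrative, so only the RidgeClassifier step is actually exercised:

```python
import time

import numpy as np
from sklearn.linear_model import RidgeClassifier

# Simulate the feature matrix that MiniRocketMultivariate would produce
# (default ~9,996 convolution features per series); sizes are hypothetical.
rng = np.random.default_rng(0)
n_train, n_test, n_features = 200, 50, 9996

X_train = rng.normal(size=(n_train, n_features))
y_train = rng.integers(0, 2, size=n_train)
X_test = rng.normal(size=(n_test, n_features))

clf = RidgeClassifier(alpha=1.0)
clf.fit(X_train, y_train)

# Time only the classifier's predict step.
start = time.perf_counter()
pred = clf.predict(X_test)
elapsed = time.perf_counter() - start
print(f"predict: {elapsed * 1e3:.1f} ms for {n_test} samples")
```

In the real pipeline, `X_train`/`X_test` would come from `MiniRocketMultivariate().fit_transform(...)` on the raw panel data.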
I believe that on a PC these libraries use numba and TBB, so are there any optimized equivalents of these models (RidgeClassifier, MiniRocketMultivariate) for Jetson?
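One quick sanity check (a sketch, nothing Jetson-specific) is whether numba, which JIT-compiles the MiniRocket transform, and the optional TBB threading layer are installed at all in the device’s Python environment:

```python
import importlib.util

# Check whether numba (JIT backend for sktime's MiniRocket) and the
# optional tbb threading layer can be imported on this machine.
for mod in ("numba", "tbb"):
    spec = importlib.util.find_spec(mod)
    print(f"{mod}: {'installed' if spec else 'not installed'}")
```

If `tbb` is missing, numba silently falls back to another threading layer, which can change multi-core performance between machines.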
There should be some acceleration available for a TensorFlow classifier.
But please note that this acceleration is sometimes not available on Jetson.
On Jetson, it’s recommended to run a model with TensorRT instead.
This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.