Multioutput regression with Jetson Orin

I’m new to AI/ML and working on a hobby project. So far I’ve used Azure AutoML, but it’s already getting expensive even in the short term. Now that the Jetson Orin has been released, I’ve been considering getting one. I’ve looked at the documentation and tutorials, but most examples seem to be related to text, vision/image, and audio.

I need to train and use multioutput regression models for real-time inference. Is that possible with the Jetson Orin?

Hi,

Do you want to apply training on the Jetson device?

As Jetson is more suitable for inference, we recommend running the training job on a desktop GPU and then copying the model to Jetson for real-time inference.
To do so, you can find some examples in our TAO Toolkit.

Thanks.

I would like to use the Jetson device for training as well, if possible. The datasets for training aren’t that big: a few hundred megabytes containing 5-8 columns.

Thanks for the link. But even that is about vision, etc. I need to make multioutput regression predictions on real-time data (after having trained the model). Is that possible with a Jetson device, or is this a limitation of the framework used? Like I said, I’m still very new to AI/ML.

Hi,

Do you already have a model and framework in mind?

On Jetson, we also support several popular DL frameworks.
For example, you can set up PyTorch and TensorFlow with the links below and run training on Jetson directly.

PyTorch: https://pypi.jetson-ai-lab.dev/jp6/cu126
TensorFlow: Installing TensorFlow for Jetson Platform - NVIDIA Docs

Thanks.

I’m still in the process of collecting the training data. It looks like I’ll need at least 32 GB of RAM for training, so the Jetson would only be suitable for inference, which you already suggested.

Regarding models and frameworks, so far I’ve only done some research and run a couple of simple tests with preliminary data. Based on what I’ve read, my best options might be gradient boosting machines or some sort of neural network. These links are closest to what I’m trying to achieve:

What I’m trying to achieve in a nutshell:

  • Use past data to train the model. Based on my tests this would happen in the cloud, the Jetson is insufficient for the training part.
  • The data is in tabular format consisting of the following columns: timestamp (datetime), category1 (string), category2 (string), category3 (string), value1 (double), value2 (double).
  • The predictions should contain at least category1, category2, and value1. Possibly also the timestamp, which would somehow correlate with the probability of the prediction (the further in the future the prediction is, the lower its probability). At this point I’m unsure whether category3 and value2 are required in the prediction; probably not.
  • Feed real-time data to the model, from which it should make predictions, including a probability indicating how reliable each prediction is.
  • The predictions would have to be “instant”, i.e. very low latency. Inference should run on the past 10-60 seconds of data points arriving in real time (100-200 entries per second) and then predict the next n data points, one per category, depending on how many different categories there happen to be.
  • The model should use its own predictions and the real-time data to “learn from its own mistakes” (I haven’t figured out how to accomplish this; any pointers would be appreciated).
  • I could run this in the cloud or get a Jetson device to run it locally.

Any suggestions/comments regarding these requirements?

Hi,

If the dataset is large, it’s recommended to train in the cloud.

We don’t have much experience with scikit-learn-based or tabular-data models.
But the package should work on Jetson, so it should be possible to use Jetson for inference.
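As a rough sketch of that deployment flow (train elsewhere, copy the serialized model over, run predict on the device), assuming the model is saved with joblib; the tiny LinearRegression stand-in and the file name are made up for illustration:

```python
# Sketch: save/load round trip plus a timed predict call, approximating
# "train in the cloud, copy the model, infer on the Jetson".
import time
import joblib
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.multioutput import MultiOutputRegressor

# Stand-in for the real trained pipeline (would happen off-device).
X = np.random.rand(500, 6)
y = np.random.rand(500, 2)
joblib.dump(MultiOutputRegressor(LinearRegression()).fit(X, y),
            "model.joblib")  # hypothetical file name

# On the device: load the copied file and predict on one incoming window.
model = joblib.load("model.joblib")
start = time.perf_counter()
pred = model.predict(X[-1:])
latency_ms = (time.perf_counter() - start) * 1e3
print(pred.shape, f"{latency_ms:.2f} ms")
```

The per-call latency of a small scikit-learn model is typically well under a millisecond on CPU, which is why plain CPU inference on the Jetson may already satisfy the “instant” requirement; that is worth measuring on the real model rather than assumed.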

Thanks.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.