Using Xavier GPU for RL?

Hi there, I saw a similar question to mine from someone in this forum, so I thought it would be appropriate to ask it here as well.

Right now I am working on training RL trading agents, and I am seeking a cost-effective GPU, as my current computer is a laptop and doesn't have a GPU.

I want to buy my own GPU rather than renting in the cloud, but it seems like it would be best to just buy a whole new computer for the GPU so I can train multiple agents. Compared to an RTX, how is the Xavier? I understand the Xavier is its own computer, but could someone please help me figure out if I'm on the right track with my thought process of using these? I've seen that Tesla uses these as backups in their cars, so I figured it would be along the same lines for my RL agents. Am I wasting my time? Do I just need to get a gaming GPU, or will this work as well? Can I plug an extra GPU like an RTX into a Xavier?

Also, in theory, could you drive 3 screens with video with no lag? My computer lags on my trading desk monitors.

I can only guess at requirements, and you’ll have other questions I won’t be able to answer, but what follows will probably still be of interest to you.

If you want your GPU for faster video rendering, then yes, the Xavier is good at this. However, its GPU is more limited than a desktop model's. In the embedded world the Xavier is king, but in the desktop world it would be "ok, but not impressive".

A GPU can also be used for certain compute abilities, basically what people are calling "AI". Training an AI requires a much faster/larger GPU, such as on a high-end desktop, but executing a pre-trained AI model runs quite well on a Xavier. Any software designed to use the Xavier for compute would need to be set up specifically for the Xavier.
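For example, with a framework like PyTorch (assuming you install the Jetson-specific aarch64 build that NVIDIA provides, not the stock desktop package), the same check tells you whether a usable GPU is present:

```python
# Minimal sketch: check whether a CUDA-capable GPU is visible (PyTorch assumed).
# On a Jetson this only works with the aarch64 PyTorch wheel NVIDIA provides.
import torch

if torch.cuda.is_available():
    print("GPU found:", torch.cuda.get_device_name(0))
else:
    print("No usable CUDA GPU; compute would fall back to the CPU.")
```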

The architecture on a Xavier is ARMv8-A ("arm64/aarch64"). This differs from a desktop PC, and so programs built to run on a Linux PC will not work on a Jetson without a recompile (the exception being that shell scripts will work without modification).
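A quick way to see which architecture you are on, using only Python's standard library:

```python
# Sketch: report the CPU architecture the interpreter is running on.
# A desktop PC typically reports "x86_64"; a Xavier reports "aarch64".
import platform

arch = platform.machine()
print("Architecture:", arch)
if arch == "aarch64":
    print("ARM system (e.g., a Jetson): x86_64 binaries will not run here.")
```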

PCs connect to a GPU via PCIe hardware, which includes code to detect GPUs and query them for their capabilities. Jetsons integrate their GPU directly into the memory hub and lack the ability to respond to the PCI query methods.

The drivers for the desktop video cards will not run on a Jetson. The architecture is wrong.

Because the drivers on the Jetson are specific to the GPU being directly wired to the memory controller, these drivers cannot function with a PCIe GPU. Thus, you won’t be able to get a PC GPU to work on a Jetson.
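One way to see this in practice is through NVML, the library behind nvidia-smi, which enumerates GPUs over PCIe (a sketch, assuming the pynvml bindings):

```python
# Sketch: NVML enumerates GPUs through the PCIe driver stack (pip install pynvml).
# On a desktop PC this reports your card; on a Jetson it fails or reports
# zero devices, because the integrated GPU is not a PCIe device.
from pynvml import nvmlInit, nvmlDeviceGetCount, NVMLError

try:
    nvmlInit()
    print("PCIe GPUs visible to NVML:", nvmlDeviceGetCount())
except NVMLError as err:
    print("NVML unavailable (expected on a Jetson):", err)
```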

Any drivers you might see for arm64 (outside of the software installer for Jetsons) are not for Jetsons, but for some of the compute warehouse style systems. These will not work for PCIe GPUs on Jetsons.

Jetsons are able to display to both an HDMI port and a DisplayPort. I don’t know for certain, but I highly doubt a third monitor would be possible with default hardware.

A Jetson’s GPU cannot be simply plugged in to a laptop or PC the way an external GPU can be.

These days a lot of laptops can be purchased with a GPU.

For a highly capable and cost-effective GPU driving three monitors, you might consider one of the 1060, 2060, or 3060 series GPUs, although you might have trouble finding one in stock. Before picking any card, though, you should be able to give exact details of how the GPU is to be used and what the requirements are. If it is just a low-latency three-monitor display, then you could go by "gaming" standards, since gaming concentrates on this; but if some special compute ability were being used, there would be other requirements. From what you've said, there is no way to provide an exact answer.

By the sounds of it, I'm going to need a desktop GPU. What I'm trying to do is train several agents together using reinforcement learning and deep neural networks. I think the author recommended the RTX, but I just wanted to check whether a Xavier could be used in place of a desktop for this project. It seems all the code might have to be modified to suit the Xavier's architecture, so I guess I should stick with a regular GPU. Thank you kindly for your amazing help!

Not all AI-related software which runs on a desktop PC can run on a Xavier, but a lot of what you might expect not to run on a Xavier actually can. You could train on a Xavier, but you would be very disappointed in the training performance. If you train something on a PC and execute the model on the Xavier, then you will probably be very happy.
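A rough outline of that workflow, assuming PyTorch on both machines (AgentNet here is just a placeholder for whatever policy network you actually train):

```python
# Sketch of the train-on-PC, run-on-Jetson workflow (PyTorch assumed).
import torch
import torch.nn as nn

class AgentNet(nn.Module):
    """Placeholder policy network; substitute your real architecture."""
    def __init__(self):
        super().__init__()
        self.policy = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 4))

    def forward(self, obs):
        return self.policy(obs)

# --- On the desktop PC, after training finishes ---
agent = AgentNet()                                  # stands in for your trained model
torch.save(agent.state_dict(), "agent_weights.pt")  # weights only, portable across arches

# --- On the Xavier, for inference only ---
model = AgentNet()                                  # same architecture definition on both machines
model.load_state_dict(torch.load("agent_weights.pt", map_location="cpu"))
model.eval()                                        # inference mode, no training
with torch.no_grad():
    action = model(torch.randn(1, 32))              # dummy observation for illustration
```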

As great as the Xavier's abilities are within the embedded world, it cannot compete with a full desktop system for training.

During training on a desktop PC, where the GPU has its own dedicated RAM, you might find that available GPU RAM becomes more important than it is for something like gaming. I don't know what your budget is, but starting with the RTX 3000 series there is no longer a "Titan" variant. The RTX 3090 is in fact the "new Titan", and you can tell this is the case by its higher quantity of GPU RAM. If you could not get a 3090, then you would probably still be fairly happy with the 2000-series Titan, which also has plenty of RAM. If you need to save money, and don't have a complicated model to train which requires lots of RAM, then you'd probably be happy with a 1080/2080/3080.
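If you want to confirm how much dedicated RAM a card actually exposes, a quick check (PyTorch assumed):

```python
# Sketch: report total VRAM on each visible CUDA device (PyTorch assumed).
import torch

for i in range(torch.cuda.device_count()):
    props = torch.cuda.get_device_properties(i)
    print(f"{props.name}: {props.total_memory / 1024**3:.1f} GiB VRAM")
```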

And there are much more expensive solutions with more RAM and performance for training.

I was looking into the Teslas, like the K80 for example. Could you help me see the difference between the Titan series and the Teslas? I've seen some used K80s for a good price, about 350 Canadian, Amazon refurbished. What would be the difference for training between these two?

I couldn't give you a definitive answer. Teslas tend to be the high end for AI training, but there are old models and new models, and an old Tesla might not be as good as an RTX 3090. In general I would be wary of a used Tesla unless you know it wasn't used for bitcoin mining. You would then have to find some sort of review and see how it compares to something like an RTX 3090.

Among all of the choices, if you are training a large enough model, then there won't be any substitute for lots of video RAM. The amount of VRAM always goes up with GPUs intended for AI training, but it might be that something as simple as an RTX 3080 has as much RAM as you need for your situation. I have no way of knowing how you might estimate the amount of VRAM needed for a given situation. FYI, the "Titan" branding is sort of a crossover point between a desktop GPU and an AI training GPU, very useful for both situations.
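That said, here is one rough way people ballpark training VRAM from parameter count; the multipliers below are assumptions for fp32 training with an Adam-style optimizer, not a rule:

```python
# Very rough VRAM ballpark for training (assumptions, not a rule):
# fp32 weights (4 bytes each), gradients (x1), Adam optimizer state (x2),
# plus headroom for activations that depends heavily on batch size and
# network architecture.
def estimate_training_vram_gib(num_params, activation_overhead=2.0):
    bytes_per_param = 4 * (1 + 1 + 2)   # weights + gradients + two Adam moments
    total_bytes = num_params * bytes_per_param * (1 + activation_overhead)
    return total_bytes / 1024**3

# Example: a 50-million-parameter network
print(f"~{estimate_training_vram_gib(50e6):.1f} GiB")   # very approximate
```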

The branding for Titan has ended, and in the RTX 3000 series, the RTX 3090 is essentially what you could call the continuation of the “Titan” series, just without the name.

The Tesla series has always been for high-end AI and has never been used as a desktop card.

Added note: older hardware might not work with newer CUDA releases, so if you do pick an older card you should check whether the drivers make the features you need available.
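You can check what a given card reports before committing to a software stack; the CUDA "compute capability" is what determines which toolkit releases still support it (PyTorch assumed):

```python
# Sketch: query the CUDA compute capability of the installed GPU (PyTorch assumed).
# Newer CUDA toolkits drop older capabilities over time, so an old card
# may be limited to older drivers and toolkit releases.
import torch

if torch.cuda.is_available():
    major, minor = torch.cuda.get_device_capability(0)
    print(f"Compute capability: {major}.{minor}")
```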

Thank you kindly for sharing this with me