Updated Jetson AGX Xavier module available now

The updated Jetson AGX Xavier module with 32 GB memory is available now.

Per the announcement, this is Part Number 900-82888-0040-000, replacing the previous 16 GB version (PN: 900-82888-0000-000).

The PCN covering the increase in memory is available here.


What about ECC support for the 32 GB memory?
The datasheet only has information about CPU cache ECC support.

Four months and no answer.
Are you ignoring this? 😒


The module is identical except for an increase in the amount of memory. There is no DRAM ECC support added.

The ECC support you saw mentioned in the datasheet is not new and is specific to CPU caches. The Xavier SoC (including CPUs) and all other module components remain unchanged. Only the memory component has been updated, and that solely for an increase in the amount of memory.

I’ll jump in and provide some additional feedback, since we bought into the hype and purchased one of these units. All of the specs for the Xavier AGX listed on NVIDIA’s site are purely “functional parameters”: they look good on paper but do not hold up in real-world applications.

I won’t go into a whole lot of detail here, but anyone who has worked with the Xavier AGX to any degree will agree that NVIDIA has not provided proper support for the hardware, and none of the modern libraries for ML or AI vision work with it, even with software upgrades. Just follow this thread to learn why Jetson performs miserably on multiple benchmark tests, partly due to the inability to CUDA-compile the most prominent libraries available. Even with a CUDA-compiled build of OpenCV, for example, we have seen single-camera frame rates of only about 0.25 fps (no, the decimal is not a typo).

I do not get the impression that their team even fully understands the nuances of the hardware built into these units; it was just glued to the circuit board to “look nice” in advertisements. Meanwhile, you will find developers the world over who have not been able to make use of it. “ECC support of 32 GB memory” should be the least of your concerns. Worry more about why NVIDIA ships libraries with JetPack that are not CUDA-compiled, alongside CUDA itself. That should tell you everything you need to know about the readiness of these products for real-world applications.
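For anyone trying to reproduce or narrow down numbers like the 0.25 fps above, here is a minimal sketch (the camera index and frame count are my own assumptions, not part of the original benchmark) to confirm whether the installed OpenCV build actually has CUDA enabled and to time raw capture throughput:

```python
import time

import cv2

# Check whether this OpenCV build was compiled with CUDA support.
# (cv2.getBuildInformation() shows the full build flags if you want detail.)
try:
    count = cv2.cuda.getCudaEnabledDeviceCount()
    print("CUDA-enabled devices visible to OpenCV:", count)
except AttributeError:
    print("cv2.cuda module not present -- this build has no CUDA support")

# Rough single-camera frame-rate measurement.
# Camera index 0 is an assumption; adjust for your camera/pipeline.
cap = cv2.VideoCapture(0)
frames, start = 0, time.time()
while frames < 100:
    ok, _ = cap.read()
    if not ok:
        break
    frames += 1
elapsed = max(time.time() - start, 1e-6)
print(f"{frames} frames in {elapsed:.1f} s -> {frames / elapsed:.2f} fps")
cap.release()
```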

visionarymind111, sorry you have been having trouble with your JAX devkit. We do have popular ML/AI frameworks on Jetson (see Jetson Zoo), and if you have a specific problem or issue please feel free to open a support topic about it and our engineers will take a look.

When we flash, we get an error. I’m not sure what is causing it; please help check.
log_uart.txt (7.2 KB)

Tony_Li, please open a separate support topic to track your issue.

Would it make sense to use this updated AGX for deep learning model training rather than just inference?

It has 32 GB of fast memory and the processor itself seems quite capable. I would just add an NVMe drive to store the training data.

What I have in mind is to install the recent update of nvidia-docker and work with TensorFlow inside a container, if that works properly. Otherwise, I would just use a minimal Linux distribution with TensorFlow installed natively.
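Before committing to long runs, I would do a minimal sanity check (assuming a TensorFlow 2.x build for Jetson is already installed, e.g. the NVIDIA wheel or the l4t-tensorflow container) that TensorFlow can actually see the Xavier’s integrated GPU:

```python
import tensorflow as tf

print("TensorFlow version:", tf.__version__)

# List the GPUs TensorFlow can place ops on (TF 2.1+ API).
gpus = tf.config.list_physical_devices("GPU")
print("Visible GPUs:", gpus)

# Optional: grow GPU memory on demand instead of reserving it all up
# front -- on Xavier the CPU and GPU share the same 32 GB of memory.
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)
```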

Thank you.

Hi aberistain,

Yes, it should be fine to use the AGX Xavier for model training.
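As a rough sketch of what that looks like in practice (synthetic data and a toy Keras model, purely illustrative, assuming TensorFlow 2.x), a short run like this will exercise the GPU end to end; you can watch utilization with tegrastats in another terminal:

```python
import numpy as np
import tensorflow as tf

# Synthetic stand-in for real training data; in practice this would be
# streamed from the NVMe drive mentioned above (e.g. via tf.data).
x = np.random.rand(2048, 224, 224, 3).astype("float32")
y = np.random.randint(0, 10, size=(2048,))

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(224, 224, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# A couple of epochs is enough to confirm training runs on the GPU.
model.fit(x, y, batch_size=32, epochs=2)
```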