Jetson Xavier requirements for host PC---(DIGITS System Setup)?

Hi, everyone. I am using the Jetson Xavier developer kit and following the official tutorial, but I ran into some questions:

1. The tutorial says: “note: to setup DIGITS natively on your host PC, you should go to Natively setting up DIGITS on the Host (advanced)”. When I set up DIGITS on my host, the upgraded system software conflicts with NVIDIA’s driver. Does the host PC have to have an NVIDIA GPU?

2. The DIGITS tutorial says “we’ll use a host PC (or cloud instance) for training DNNs, alongside a Jetson for inference.” Can I train a DNN on the Jetson Xavier itself?

I hope to receive your responses.


1. Yes. You will need a desktop NVIDIA GPU.

2. In principle, yes, but DIGITS doesn’t support the Jetson environment.
You will need to use the TensorFlow/Caffe/PyTorch frameworks directly.


Hi AastaLLL, thanks for your reply. My host PC has no NVIDIA GPU, only an AMD/ATI one, so I cannot train my model with DIGITS. My questions:

1. If I train a model on my host PC, how do I run the trained model on the Jetson Xavier?

2. Is it difficult to run a model of mine (not produced by DIGITS) on the Jetson Xavier?

If you know how to run a trained model (not produced by DIGITS) on the Jetson Xavier, please tell me in detail. Thanks a lot.


DIGITS is just a wrapper around several DL frameworks, such as Caffe, TensorFlow, and PyTorch.
You can still train the model with each framework directly; they all provide a CPU mode.
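For example, here is a minimal sketch of CPU-only training with PyTorch, independent of DIGITS. The tiny linear model, the synthetic data, and the `model.pt` filename are placeholders for illustration; swap in your real DNN and dataset.

```python
# Minimal CPU-only training loop in PyTorch (no GPU required).
# The model and data below are toy placeholders, not a real workload.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy data: learn y = 2x (stand-in for a real training set).
x = torch.randn(256, 1)
y = 2.0 * x

model = nn.Linear(1, 1)                      # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

for epoch in range(100):                     # everything runs on the CPU
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()

# Save the weights so they can be copied to the Jetson later.
torch.save(model.state_dict(), "model.pt")
print(f"final loss: {loss.item():.6f}")
```

CPU training is much slower than GPU training, but for small models or fine-tuning it can be workable.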

To run a model on the Xavier, it’s recommended to use the TensorRT engine for acceleration.
You can find lots of TensorRT samples here:

The jetson_inference sample you shared in comment #1 can also help you run a Caffe model on the Xavier.


Thanks, AastaLLL. You recommended using the TensorRT engine for acceleration, but I am a little confused about it.
I know the Xavier includes the TensorRT engine module, so do I need to install some software to use TensorRT? Or, when I run my model (not from DIGITS), will the Xavier automatically use the TensorRT engine for acceleration?

I sincerely hope to receive your response.



TensorRT is a software library; it is not used automatically, and it can be installed via SDK Manager.
To use TensorRT, you will need to call its API to deserialize the engine and run inference.