Why do we need to use DIGITS on the host computer?

Hi everyone,

As I understand the workflow, we train a model on a host computer, then send the model to the Jetson, optimize it with TensorRT, and then run inference.

To do that, we just need a model trained in TensorFlow, Caffe, etc. I would like to know why NVIDIA recommends moving the model into DIGITS.



It’s optional.

DIGITS is a helper tool with a UI that lets users inspect their model and monitor the GPU more easily.
If you are familiar with Caffe, TensorFlow, etc., you can also train the model on your own.


Thank you, AastaLLL.

After searching for more information, I found that there are three ways to use TensorRT to optimize a PyTorch model:

  1. PyTorch model -> TensorFlow model -> TF-TRT
  2. PyTorch model -> Caffe model -> Caffe-TRT
  3. PyTorch model -> TensorRT model

Some reviewers report from experience that Caffe-TRT is faster than TF-TRT. Also, there is currently no Python API for working with TensorRT. Is that correct?


Method 3) should be optimal, since you won't need another framework as an intermediate.
The python API for Jetson will be available in our next release.

If you cannot wait for our next release, you can give method 2) a try.
TensorFlow is an op-based framework, which usually causes more issues when converting.



That is exactly what I am doing now.
I am trying to solve some problems when converting with TF-TRT.

Many thanks,

If you are ever interested in installing NVIDIA DIGITS on your system, check out this tutorial:


-Cuda Education