Can the TX2 work with mixed precision and different batches?


I am testing the TX2. I have also been reading the documentation.

Is it possible to use batch sizes larger than 1 with the TX2? And can it run faster with mixed precision, or is that only for the datacenter GPUs?


Do you want to use mixed precision for inference or for training?

Just inference, for both the mixed precision and the larger batch sizes.


Sorry for the late update.

It depends on which framework you use.
For TensorRT, only one precision is allowed per engine file.
However, you can build separate engines with different precisions and run inference on them at the same time.
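As a concrete sketch (assuming an ONNX model named `model.onnx` whose input tensor is called `input` and has a dynamic batch dimension; adjust the names and shapes for your network), you could build one FP16 engine and one FP32 engine with `trtexec` and then load both in the same process:

```shell
# Build an FP16 engine with a fixed batch size of 8
# (hypothetical model and input names)
trtexec --onnx=model.onnx --saveEngine=model_fp16.engine --fp16 \
        --shapes=input:8x3x224x224

# Build a separate FP32 engine from the same model (default precision)
trtexec --onnx=model.onnx --saveEngine=model_fp32.engine \
        --shapes=input:8x3x224x224
```

Each resulting `.engine` file is fixed to the precision it was built with, but nothing stops you from deserializing both engines and running them concurrently.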

For TensorFlow, the Jetson package is not much different from the datacenter version, so it should work there as well.
