Support for INT8 precision on Jetson Nano

Hi all,
I saw that in the benchmarks for the new version of DeepStream (v5), some models are tested on the Jetson Nano with INT8, but as far as we know the Nano doesn't support INT8. Does converting a model's precision to INT8 still help to speed it up?

Hi,

Nano doesn’t support INT8 precision.
So the available precision modes are FP32 and FP16.
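
For reference, you can query what the GPU supports at runtime through TensorRT's Python API. This is a minimal sketch, assuming the `tensorrt` package that ships with JetPack is installed:

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)

# On a Nano (Maxwell GPU) this is expected to report fast FP16 but
# not fast INT8; on a Xavier both should report True.
print("fast FP16:", builder.platform_has_fast_fp16)
print("fast INT8:", builder.platform_has_fast_int8)
```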

Thanks.

I know the Nano doesn't support INT8, but in the DeepStream link above, why did they test INT8 precision on the Jetson Nano?

Hi,

Thanks for your report, and sorry for missing your point yesterday.
I will check this with our internal team and get back to you with more information.

Thanks.

Hi,

Thanks for your report.

The table is updated now: https://news.developer.nvidia.com/new-software-enhancements-for-iva-iot/
Nano is measured with FP16.
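
If you want to reproduce the FP16 numbers, this is roughly how an FP16 engine is built from an ONNX model with TensorRT's Python API. A sketch only, not the exact benchmark script; the file paths are hypothetical and the `build_engine` call assumes the TensorRT 7.x API from that JetPack release:

```python
import tensorrt as trt

LOGGER = trt.Logger(trt.Logger.WARNING)

def build_fp16_engine(onnx_path, engine_path):
    builder = trt.Builder(LOGGER)
    # Explicit-batch network, as required for ONNX models.
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, LOGGER)
    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            raise RuntimeError(parser.get_error(0))

    config = builder.create_builder_config()
    if builder.platform_has_fast_fp16:
        config.set_flag(trt.BuilderFlag.FP16)  # request FP16 kernels

    engine = builder.build_engine(network, config)  # TensorRT 7.x API
    with open(engine_path, "wb") as f:
        f.write(engine.serialize())

build_fp16_engine("model.onnx", "model_fp16.engine")  # hypothetical paths
```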

Thanks.

Hello @AastaLLL & @LoveNvidia

So, is it the case that DeepStream supports INT8 models in general, just not on the Jetson Nano? This link shows that support for INT8 models & INT8 input data was added back in DS 2.0 itself.

Thanks!

@sparsh-b,
INT8 is only supported on the Jetson Xavier devices.
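
If you want one build script that works across Jetson boards, one option is to gate the precision flags on what the hardware reports. A sketch under that assumption; note a real INT8 build would additionally need a calibrator, omitted here:

```python
import tensorrt as trt

def pick_precision(builder, config):
    # Prefer INT8 where the GPU has fast INT8 support (e.g. Xavier),
    # otherwise fall back to FP16 (Nano), otherwise stay at FP32.
    if builder.platform_has_fast_int8:
        config.set_flag(trt.BuilderFlag.INT8)
        # NOTE: a real INT8 build also needs config.int8_calibrator
        # (or per-tensor dynamic ranges) to be set.
    elif builder.platform_has_fast_fp16:
        config.set_flag(trt.BuilderFlag.FP16)
```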

Thanks for the reply, @LoveNvidia!!