I have another question about NVIDIA. When I run deepstream-app -c source30_1080p_dec_infer-resnet_tiled_display_int8.txt, the average frame rate reaches 27 on an RTX 2060, while on an RTX 3070 it is only 25. This did not meet my expectations, because I believe the 3070 is definitely better than the 2060… why?
And when I run inference on 50 RTSP streams, does a T4 card perform better than the RTX 3070 and RTX 2060?
I think this topic should stay in the DeepStream category; that team will know more about performance numbers for this specific use case.
In general the 3070 should perform better on inference, since it has more and faster Tensor Cores. But of course it depends highly on the payload. And if video decoding is the bottleneck, the 20-series and 30-series NVDEC engines are quite comparable in performance.
The T4 will be about the same as a higher-end 20-series GPU; they use the same basic architecture (Turing). The T4 is more specialized for compute workloads.
There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.
- If testing performance, please set sync to 0 in [sinkX]; this means the pipeline plays as fast as possible. Please refer to DeepStream Reference Application - deepstream-app — DeepStream 6.2 Release documentation.
- Did you modify source30_1080p_dec_infer-resnet_tiled_display_int8.txt or other relevant configuration files? If yes, please share the diff.
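For reference, a minimal sketch of what the sink section of a deepstream-app config looks like with sync disabled for benchmarking — the exact field values here (type, other keys) are illustrative assumptions, not a copy of the shipped source30 config:

```ini
# Hypothetical [sink0] excerpt for throughput testing with deepstream-app.
[sink0]
enable=1
# type selects the sink kind (e.g. an on-screen EGL sink); see the
# deepstream-app reference documentation for the valid values.
type=2
# sync=0 disables clock synchronization so frames render as fast as
# possible; with sync=1 playback is throttled to the stream frame rate,
# which caps the reported average FPS and can mask GPU differences.
sync=0
```

With sync=1, both a 2060 and a 3070 can report similar averages simply because rendering is paced to the source clock, so set sync=0 before comparing cards.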
This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.