I measure the inference time of my network using the method in sample/sampleGoogleNet/sampleGoogleNet.cpp, changing TIMING_ITERATIONS to 1. I get frames from a video. However, the time fluctuates from frame to frame: sometimes it is 52ms for a frame, sometimes 17ms. Why is the timing so volatile?
Have you maximized the TX2 performance?
Yes, I have. I have also run the command nvpmodel -m 0.
Here are some possible reasons:
1. Remember to lock the GPU frequency to the maximum.
Please run these commands in order:
sudo nvpmodel -m 0
sudo ./jetson_clocks.sh
2. It may be related to I/O.
What is your input source? Is it a camera stream?
3. It’s recommended to use TIMING_ITERATIONS > 10 to get more accurate results.