I’m making a scientific application where I need as little time as possible between issuing OpenGL drawing commands and pixels lighting up on the screen. I measure this round-trip time with a computer running a real-time OS at 4 kHz, connected to a photodiode. The general process is: the RTOS sends a command to draw; the 2nd computer receives the command and uses OpenGL to draw simple 2D shapes on the screen; the RTOS then reads the photodiode to see how long the whole process took.
Historically the 2nd computer has been Windows-based, and now we are trying to use either a TX2 or a Nano. The Windows computer uses integrated Intel HD 630 graphics. On our reference system (a 60 Hz display with an HDMI connection, Windows 10 computer doing the drawing) this normally produces latencies on the order of 34 ms from RTOS command to pixels lighting up.
Using either the Nano or the TX2 in our reference system (Windows 10 computer removed), the time from RTOS command to pixels lighting up is 46 ms. Running the program with “jetson_clocks” actually made it worse, at about 51 ms. Applying all updates to the Nano and TX2 made no difference, and neither did using Linux GameMode.
The code that listens for drawing commands is written in Java (so it can run on Windows or Linux) and uses JOGL to draw to a full-screen window. The drawing is all OpenGL 1.1 compatible. For the testing scenario, 5 circles are drawn using a triangle fan with about 30 vertices, so the drawing is not particularly intense.
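In case the drawing style matters, the display path is structurally similar to the minimal JOGL sketch below. The class name, circle positions, radius, and the setSwapInterval call are illustrative assumptions, not the exact application code:

```java
import com.jogamp.opengl.GL2;
import com.jogamp.opengl.GLAutoDrawable;
import com.jogamp.opengl.GLEventListener;

// Illustrative sketch of the draw path, not the actual application code.
// Positions, radius, and vertex count are assumptions for the example.
public class CircleRenderer implements GLEventListener {

    private static final int SEGMENTS = 30;   // ~30 vertices per circle

    @Override
    public void init(GLAutoDrawable drawable) {
        GL2 gl = drawable.getGL().getGL2();
        gl.setSwapInterval(0); // ask for no vsync wait; the driver may ignore this
        gl.glClearColor(0f, 0f, 0f, 1f);
    }

    @Override
    public void display(GLAutoDrawable drawable) {
        GL2 gl = drawable.getGL().getGL2();
        gl.glClear(GL2.GL_COLOR_BUFFER_BIT);
        // Five simple 2D circles, each drawn as a triangle fan (OpenGL 1.1 style).
        for (int i = 0; i < 5; i++) {
            drawCircle(gl, -0.8f + 0.4f * i, 0f, 0.1f);
        }
    }

    private void drawCircle(GL2 gl, float cx, float cy, float r) {
        gl.glBegin(GL2.GL_TRIANGLE_FAN);
        gl.glVertex2f(cx, cy);                      // fan center
        for (int i = 0; i <= SEGMENTS; i++) {
            double a = 2.0 * Math.PI * i / SEGMENTS;
            gl.glVertex2f(cx + r * (float) Math.cos(a),
                          cy + r * (float) Math.sin(a));
        }
        gl.glEnd();
    }

    @Override
    public void reshape(GLAutoDrawable drawable, int x, int y, int w, int h) { }

    @Override
    public void dispose(GLAutoDrawable drawable) { }
}
```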
Does anyone have an idea where the extra 12 ms may be coming from on the Nano and TX2? Any ideas about how to reduce it?
Please let me know if there are any other details I should provide.