I am looking to access the I2C channel on an otherwise unconnected port of a Quadro RTX GPU. My aim is to do signal measurements for display latency, and I intend to use the DDC channel with custom data for this. The I2C example in NVAPI, however, only shows how to read out enumerated GPU ports in the struct. Is there any way to just communicate raw via a bare-metal port, or to enumerate ALL ports on the GPU (not only the "connected" ones)?
Thank you for contacting the NVIDIA Developer Forum. Our engineering team analyzed your query and has provided the following feedback:
Your primary question is: Is there any way to just communicate raw via a bare-metal port, or to enumerate ALL ports on the GPU (not only the "connected" ones)?
Note that the I2C APIs need a displayMask as an identifier in the struct.

Here’s the way you can enumerate ALL the available connectors (displayMasks) on a GPU:

- Use the API NvAPI_GPU_GetAllDisplayIds to enumerate all the displayIds.
- Loop through the enumerated displayIds and get the corresponding outputId (displayMask) using this API:
- You can further use this outputId/displayMask in the I2C APIs.
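The steps above can be sketched as follows. This is a minimal sketch, not a definitive implementation: it assumes the NVAPI SDK headers are available, and it assumes the unnamed conversion API in the second step is `NvAPI_SYS_GetGpuAndOutputIdFromDisplayId` (my reading of nvapi.h; verify against your SDK version).

```c
/* Sketch: enumerate all displayIds on every GPU and convert each to an
 * outputId (displayMask) usable in the I2C APIs.
 * Assumptions: NVAPI SDK present; NvAPI_SYS_GetGpuAndOutputIdFromDisplayId
 * is the conversion API meant in the steps above. */
#include <stdio.h>
#include "nvapi.h"

int main(void)
{
    NvPhysicalGpuHandle gpus[NVAPI_MAX_PHYSICAL_GPUS];
    NvU32 gpuCount = 0;

    if (NvAPI_Initialize() != NVAPI_OK ||
        NvAPI_EnumPhysicalGPUs(gpus, &gpuCount) != NVAPI_OK)
        return 1;

    for (NvU32 g = 0; g < gpuCount; g++) {
        /* First call with a NULL array to query how many displayIds exist. */
        NvU32 idCount = 0;
        if (NvAPI_GPU_GetAllDisplayIds(gpus[g], NULL, &idCount) != NVAPI_OK)
            continue;

        NV_GPU_DISPLAYIDS ids[32] = {{0}};   /* 32 is an arbitrary upper bound */
        if (idCount > 32) idCount = 32;
        for (NvU32 i = 0; i < idCount; i++)
            ids[i].version = NV_GPU_DISPLAYIDS_VER;
        if (NvAPI_GPU_GetAllDisplayIds(gpus[g], ids, &idCount) != NVAPI_OK)
            continue;

        /* Convert each displayId to an outputId (displayMask). */
        for (NvU32 i = 0; i < idCount; i++) {
            NvPhysicalGpuHandle hGpu;
            NvU32 outputId = 0;
            if (NvAPI_SYS_GetGpuAndOutputIdFromDisplayId(ids[i].displayId,
                                                         &hGpu,
                                                         &outputId) == NVAPI_OK)
                printf("GPU %u displayId 0x%08x -> outputId/displayMask 0x%08x "
                       "(isActive=%u, isConnected=%u)\n",
                       g, ids[i].displayId, outputId,
                       ids[i].isActive, ids[i].isConnected);
        }
    }
    NvAPI_Unload();
    return 0;
}
```

Note the two-call pattern on `NvAPI_GPU_GetAllDisplayIds`: the first call only reports the count, the second fills the array (each entry must have its `version` field set first). This code requires NVIDIA hardware and the NVAPI SDK to run, so treat it as a starting point rather than tested output.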
I hope we have answered your question.
Dear @agokhale,

Thanks for the reply. I am trying to implement this and have successfully built an array of displayMasks. NvAPI_I2CWrite works if I write data to a displayMask behind which a display is already connected. However, I cannot seem to write to a port where the displayId's isActive or isPhysicallyConnected flag is 0/false (which of those does I2CWrite check, and how can I circumvent that check?). If I write to such a displayMask, NvAPI returns NVAPI_ERROR = -1, i.e. a miscellaneous error. How can I write to a displayMask where the driver believes no device is connected? (There is something connected, it is just not a standard monitor, only the I2C/DDC lines.)
Any news on the question above?
Sorry for the delay in the update. I have checked with the development team and got the update that they need to coordinate with some other teams as well to investigate this.
They should have more information in hand in a couple of days. If the team needs to reproduce and debug internally, I will ask you to file a ticket later.
Thank you for your patience.
NVAPI Forum Moderator.
We are now trying to reproduce the issue internally so as to be able to track down the cause of the NVAPI_ERROR. Based on the investigation outcome, we will know whether your goal is achievable using NVAPI or not.
Please keep an eye out for a message from us in case we need additional information.
NVAPI Forum Moderator