[Bug Report] Black X11 screen and partial lockup after upgrading to 515.76 with dual RTX 3060

Same problem here with 515.76; I downgraded to 515.65.
3080 using HDMI, Fedora 35.

Also being triaged here:

We have filed bug 3817621 internally for tracking purposes.
We shall try to reproduce the issue locally and update further on it.

We were able to duplicate the issue locally and are currently debugging it.
We shall keep you updated.

Everyone affected: a possible fix has been posted; please check it:

The issue has been root-caused and a fix has been integrated into a future driver release.

Good news: 520.56.06 fixed this issue for me, and it works with the upcoming 6.0 kernel.

I just got the upgrade on Fedora; however, I have a 2080 Ti and I am now encountering the black screen bug.

My 1080p HDMI monitor is fine, but my 4K DisplayPort monitor is black and flashes. So it seems this bug, or something related, is not just on the 30xx series and is not solved.
nvidia-bug-report.log.gz (290.4 KB)

Edit: I’ve narrowed it down.

One resolution triggers it: 2560x1440; all other resolutions work. As of driver 520.56.06, that resolution produces only a black screen.
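
In case it helps anyone test this over SSH without touching the monitor's menu, here is a rough sketch that flips the output between modes with xrandr. The output name DP-0 and the fallback mode are only placeholders I picked for illustration; check xrandr --query for your actual output names and modes.

#!/usr/bin/env python3
# Rough sketch: switch an X output between modes with xrandr to
# reproduce (or recover from) the mode-specific black screen.
import os
import subprocess
import time

OUTPUT = "DP-0"          # placeholder: your DisplayPort output (see xrandr --query)
BAD_MODE = "2560x1440"   # the mode that produces the black screen for me
SAFE_MODE = "1920x1080"  # placeholder: any mode that works for you

def set_mode(mode):
    # DISPLAY must point at the running X server (and XAUTHORITY may
    # also be needed) when this is run from an SSH session.
    env = dict(os.environ, DISPLAY=os.environ.get("DISPLAY", ":0"))
    subprocess.run(["xrandr", "--output", OUTPUT, "--mode", mode],
                   env=env, check=True)

if __name__ == "__main__":
    set_mode(SAFE_MODE)   # recover to a known-good mode first
    time.sleep(5)
    set_mode(BAD_MODE)    # switch to the suspect mode to reproduce the bug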

I am experiencing what I believe is this very issue on an Ubuntu system with two RTX A4000 cards. It worked perfectly in the lab on a 4K display, but when we moved it to the production site and connected it to a NovaStar Taurus ( https://cdn.shopify.com/s/files/1/0099/2260/9218/files/Taurus_Series_Multimedia_Player_TB60_Specifications-V1.0.0.pdf?v=1631901791 ), we could see our application running without issue in the logs while the X server produced a blank screen. We tried xclock and a few others with the same result.
The system journal reported:
No matching GPU found
nvidia-modeset ERROR: GPU:1: Idling display engine timed out: 0x0000c67x:x:x:xxxx
failed to initialize RM client

The X server was hung, preventing the machine from restarting without a hard reset, and systemctl reported that the nvidia-powerd service was in a failed state.
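
If it helps anyone else, something like the following can collect these symptoms over SSH while X is hung. This is only a rough sketch, not exactly what we ran: it scans the current boot's journal for the nvidia-modeset and GPU errors quoted above and reports the nvidia-powerd unit state.

#!/usr/bin/env python3
# Rough sketch: gather the journal errors and nvidia-powerd state
# described above from an SSH session while the X server is hung.
import subprocess

def run(cmd):
    # Capture output without raising; systemctl exits non-zero
    # for a unit that is in the failed state.
    return subprocess.run(cmd, capture_output=True, text=True).stdout

if __name__ == "__main__":
    # Scan the current boot's journal for the errors quoted above.
    for line in run(["journalctl", "-b", "--no-pager"]).splitlines():
        if "nvidia-modeset" in line or "No matching GPU found" in line:
            print(line)
    # Prints "failed" when nvidia-powerd is in the failed state.
    print("nvidia-powerd state:",
          run(["systemctl", "is-failed", "nvidia-powerd"]).strip())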

The machine is currently being relocated back to the lab. After a backup, we will attempt the solution mentioned above.

Hi, same problem for me here with 515.76.