High idle power consumption on headless server without a monitor connected

I have a headless server with an RTX 3090. It is running Ubuntu 24.04 Server (no desktop environments). I work with it solely through SSH - no monitor is connected. I am running the latest recommended drivers (550.120) and persistence mode is enabled.

According to nvidia-smi, idle power is around 25 W. I assumed this was normal because of the high number of memory chips on the 3090 GPU.

However, today I found out that when I connect a monitor to the server (still no desktop environment, just a terminal), the power drops to only 13 W. It stays this way even after I disconnect the monitor. But when I restart the server, the power is back up to 25 W and I have to plug the monitor in again (or re-plug it if I did not disconnect it before the restart).

This is reproducible; I tried it multiple times and the power always goes down after the monitor is connected.
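
For reference, one way to watch the readings while plugging/unplugging is to poll nvidia-smi (updates every 5 seconds until interrupted):

nvidia-smi --query-gpu=power.draw,pstate --format=csv -l 5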

EDIT: I tried connecting an HDMI EDID emulator, which also decreased the power. It seems that the GPU just needs to detect that something new is connected once the OS starts.
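
A quick way to check what the kernel thinks is connected is to read the DRM connector status files (connector names vary by card and output):

for c in /sys/class/drm/card*-*/status; do echo "$c: $(cat "$c")"; done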

EDIT2: Added the nvidia-bug-report tool log.
nvidia-bug-report.log.gz (562.5 KB)


I also tried it with Debian 12, same issue.

I’ve faced exactly the same problem. When restarting Ubuntu 24.04 with an RTX 3090 (nvidia-driver-580) and no monitor connected, I was getting 30 W in P8 via nvidia-smi. Connected HDMI - immediate drop to 20 W in P8.

I’ve tried searching for info everywhere with no luck. I almost ordered a dummy HDMI plug, but an LLM suggested a workaround to simulate an EDID. And it worked!

Kernel-level fake EDID (no X needed)
This makes the DRM/KMS layer think your HDMI connector is “connected” and has a valid EDID, so nvidia-drm creates a fb console (fb0) even when nothing is plugged in.

Steps:
- With HDMI temporarily plugged in, find the connector name and save its EDID:
  - Find connector: ls /sys/class/drm | grep HDMI
    Example you’ll see something like: card0-HDMI-A-1
  - Save EDID from your real monitor (best for compatibility):
    sudo mkdir -p /lib/firmware/edid
    cat /sys/class/drm/card0-HDMI-A-1/edid | sudo tee /lib/firmware/edid/headless.bin > /dev/null
- Enable modesetting for NVIDIA (if not already):
  - Check: grep -q 'modeset=1' /etc/modprobe.d/nvidia-kms.conf || echo 'options nvidia-drm modeset=1' | sudo tee /etc/modprobe.d/nvidia-kms.conf
- Add kernel params to force that connector “on” with your EDID and a mode:
  - Edit /etc/default/grub and append to GRUB_CMDLINE_LINUX:
    nvidia-drm.modeset=1 drm.edid_firmware=HDMI-A-1:edid/headless.bin video=HDMI-A-1:1920x1080@60e
    Notes:
    - Replace HDMI-A-1 with your actual connector name.
    - 1920x1080@60e forces a 1080p60 mode (the trailing 'e' forces the connector to be treated as enabled). You can pick another mode that exists in your EDID.

My config:
# cat /etc/default/grub | grep nvidia
GRUB_CMDLINE_LINUX_DEFAULT="nvidia-drm.modeset=1 nvidia-drm.fbdev=1 drm.edid_firmware=HDMI-A-1:edid/headless.bin video=HDMI-A-1:1920x1080@60e"


- Make sure firmware is available early and update boot config:
  sudo update-initramfs -u -k all
  sudo update-grub
- Reboot with HDMI unplugged.
- Verify:
  - dmesg | grep -E 'EDID|HDMI-A-1|nvidia.*drmfb'
  - ls -l /dev/fb0 (should exist)
  - cat /sys/class/drm/card0-HDMI-A-1/status (should report connected)
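
In case it's useful, here is a rough sketch that rolls those checks into one script (it assumes the connector is card0-HDMI-A-1 - adjust to match your setup):

#!/bin/bash
# Quick sanity check that the fake-EDID setup took effect after a reboot
conn=/sys/class/drm/card0-HDMI-A-1

echo "connector status: $(cat $conn/status)"        # should say "connected"
[ -e /dev/fb0 ] && echo "fb0: present" || echo "fb0: missing"
dmesg | grep -E 'EDID|HDMI-A-1|nvidia.*drmfb' | tail -n 5
nvidia-smi --query-gpu=power.draw,pstate --format=csv,noheader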

Hope this helps someone in the future.

@mesouug what driver version are you using?

@rrameshbabu nvidia-driver-580-server

# nvidia-smi 
Sat Nov 15 23:43:31 2025       
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 580.95.05              Driver Version: 580.95.05      CUDA Version: 13.0     |
+-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce RTX 3090        Off |   00000000:03:00.0  On |                  N/A |
| 62%   44C    P8             21W /  370W |       3MiB /  24576MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI              PID   Type   Process name                        GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|  No running processes found                                                             |
+-----------------------------------------------------------------------------------------+


I have a Gigabyte RTX 3090 GAMING OC edition.
Another interesting fact I’ve discovered: when I start any GPU workload (ollama or llama.cpp), either in Docker or as a plain service, the GPU becomes stuck in the 30 W P8 state.

Checking other threads, I’ve found a workaround. Run the following commands via SSH (I’m using a headless server):

echo suspend > /proc/driver/nvidia/suspend && echo resume > /proc/driver/nvidia/suspend

Even when the app is loaded (tested with llama.cpp in Docker), the standby power draw drops again to 20 W (confirmed via both nvidia-smi and a watt meter at the power plug).

Here is a quick and dirty script that makes sure, every hour, that the GPU is in its lowest power state:

~# crontab -l
# m h  dom mon dow   command
0 * * * * /root/nvidia-idle-fix.sh 1>/dev/null 2>&1

~# cat /root/nvidia-idle-fix.sh
#!/bin/bash

# Current power draw in watts (integer part only)
p=$(nvidia-smi --query-gpu=power.draw --format=csv,noheader,nounits)
p=${p%.*}

# If the card is stuck in the ~30 W idle state, cycle suspend/resume
if (( p>=25 && p<=35 )); then
   echo suspend > /proc/driver/nvidia/suspend && \
   echo resume > /proc/driver/nvidia/suspend
fi
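
One possible refinement (untested sketch, building on the script above): also require 0% GPU utilization before cycling suspend/resume, so it never fires in the middle of an inference run:

#!/bin/bash
# Same idea as above, but only cycle suspend/resume while the GPU is both
# in the 25-35 W window and reporting 0% utilization.

p=$(nvidia-smi --query-gpu=power.draw --format=csv,noheader,nounits)
p=${p%.*}
util=$(nvidia-smi --query-gpu=utilization.gpu --format=csv,noheader,nounits)

if (( p>=25 && p<=35 )) && (( util==0 )); then
   echo suspend > /proc/driver/nvidia/suspend && \
   echo resume > /proc/driver/nvidia/suspend
fi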