GTX 1050 Mobile not working properly / somehow disabled in some way

I’m coming from this thread on the Fedora Forum (Fedora is my system), which I’m linking so I don’t have to repeat every troubleshooting step: remove (I added “remove” to the beginning because of the limit on links)

But in summary: I am running Fedora 36 on a laptop with hybrid graphics, a GTX 1050 and Intel UHD Graphics 630. The driver is loaded properly, but the dGPU appears to be disabled in some way, even though I don’t have any such option enabled in the BIOS. A direct consequence is that I get just as many (and sometimes even fewer) FPS in Minecraft as with my iGPU, which really shouldn’t be the case.
nvidia-bug-report.log.gz (251.1 KB)

Please try increasing the allocated memory in Minecraft’s settings. Also, using oracle-jre instead of open-jre might help.

The thing is, this issue isn’t present only in Minecraft; I also get bad performance in benchmarks and other games.

You’re using Wayland, and I don’t know the current state of render offloading with Xwayland. Please post the output of

sudo cat /sys/module/nvidia_drm/parameters/modeset
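For reference, that file prints `Y` when the driver’s DRM kernel mode setting is enabled and `N` otherwise. A defensive sketch that falls back to `unknown` when the `nvidia_drm` module isn’t loaded (the persistence hint in the comment assumes a grub-based Fedora setup):

```shell
# Read the nvidia-drm KMS setting; prints Y, N, or "unknown" if the
# nvidia_drm module is not loaded on this machine.
modeset=$(cat /sys/module/nvidia_drm/parameters/modeset 2>/dev/null || echo unknown)
echo "nvidia-drm modeset: $modeset"

# To enable KMS persistently, one common approach on Fedora is adding
# the kernel argument nvidia-drm.modeset=1, e.g.:
#   sudo grubby --update-kernel=ALL --args="nvidia-drm.modeset=1"
```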

Oh sorry, the nvidia-bug-report.log.gz file I attached was generated under Wayland 😅, so I attached one generated under Xorg: nvidia-bug-report.log.gz (320.0 KB). Anyway, when I was testing the performance it was always under Xorg, with my GTX 1050 Mobile set as the only GPU to be used.
Here’s the output:

➜  ~ sudo cat /sys/module/nvidia_drm/parameters/modeset

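For the record, under Xorg the usual way to force a single application onto the NVIDIA GPU is PRIME render offload via two documented environment variables (a sketch; the `glxinfo` check is optional and needs the mesa-utils package):

```shell
# Run one application on the dGPU via PRIME render offload (Xorg).
# These two variables are NVIDIA's documented offload switches.
export __NV_PRIME_RENDER_OFFLOAD=1
export __GLX_VENDOR_LIBRARY_NAME=nvidia

# Verify which GPU the GL context lands on; on a working offload setup
# the renderer string should mention the GTX 1050.
glxinfo 2>/dev/null | grep -i "opengl renderer" || echo "glxinfo not available"
```

Launching the game from the same shell afterwards should then render on the dGPU.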

I can’t really see an issue with the nvidia gpu or the setup; the only oddity is that gnome-shell is using a huge amount of video memory. Do you use any uncommon plugins?

Well, I use a lot of blur effects in the shell and in applications, but other than that, no. So you think this terrible performance is just the manufacturer’s fault? (No irony here, just a question.)

Difficult to say; a log is just a snapshot of current values. Please run something taxing on the gpu (like FurMark) and create a new nvidia-bug-report.log while it is running.

Ok, here it is, with the Unigine Superposition benchmark:
nvidia-bug-report.log.gz (671.3 KB)

Now it’s visible in nvidia-smi:

Performance State  : P3 
SW Power Cap  : Active

    Power Readings
        Power Management                  : N/A
        Power Draw                        : N/A
        Power Limit                       : N/A
        Default Power Limit               : N/A
        Enforced Power Limit              : N/A
        Min Power Limit                   : N/A
        Max Power Limit                   : N/A
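Those fields come from nvidia-smi’s detailed query; to reproduce just the power and performance sections (defensive fallback in case nvidia-smi isn’t present):

```shell
# Query only the POWER and PERFORMANCE sections of the detailed report.
out=$(nvidia-smi -q -d POWER,PERFORMANCE 2>/dev/null || echo "nvidia-smi not available")
echo "$out"
```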

Lenovo ideapad 330-15ICH
Since the gpu doesn’t have any power management features accessible via nvidia-smi, this seems to be controlled by system firmware. Please check whether you can use thermald, whether /sys/firmware/acpi/platform_profile_choices is available, and/or whether you can use power-profiles-daemon to change it.
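Concretely, those three checks can be scripted with fallbacks, since any of them may be missing on a given machine (`powerprofilesctl` is power-profiles-daemon’s CLI):

```shell
# 1. Does the firmware expose an ACPI platform profile?
choices=$(cat /sys/firmware/acpi/platform_profile_choices 2>/dev/null || echo none)
echo "platform_profile_choices: $choices"

# 2. Is thermald running?
systemctl is-active thermald 2>/dev/null || echo "thermald not active"

# 3. power-profiles-daemon: list available profiles.
powerprofilesctl list 2>/dev/null || echo "power-profiles-daemon not available"
# To switch (only works while the daemon is running):
#   powerprofilesctl set performance
```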

So I checked thermald and power-profiles-daemon, and those seem to be meant for controlling the CPU (though in thermald, for whatever reason, I cannot edit the config files, so I’m not 100% sure there), and /sys/firmware/acpi/platform_profile_choices does not exist on my laptop.

When supported by your notebook, thermald should switch the whole thermal profile when enabled, thus also allowing the nvidia gpu to use more power (in theory).

Could you please explain how I would do that? I tried thermald’s options, but none of them change the performance.

I don’t know either; please read this:
All patches mentioned are already in the kernel/thermald.

Sorry for not being active for such a long time. I looked into the GitHub page of the thermald fork, and it seems my laptop doesn’t support the adaptive policy mentioned there.

Then you’re out of luck, I guess, unless there are BIOS options regarding power/thermal management.

That’s the thing: there aren’t any options in the BIOS for thermal management. Well, I guess the myth that Linux supports all hardware is wrong. I suppose I might have to dual-boot with Windows now.