NVIDIA Optimus Drivers
After spending four days in a row on it, I was finally able to install and run the official NVIDIA driver on my HP Envy 15 laptop. Here are my specs:
CPU: Intel Core i7-4510U
GPU #1: Intel HD Graphics 4400
GPU #2: NVIDIA GeForce GTX 850M
My system:
root@linux:# uname -a
Linux linux 4.9.0-kali3-amd64 #1 SMP Debian 4.9.13-1kali3 (2017-03-13) x86_64 GNU/Linux
With the July release of the monthly Verde drivers, NVIDIA has released a new Optimus user interface which provides even more visibility into how Optimus is working.
If a system has NVIDIA Optimus, two video drivers are necessary: one for the Intel GPU and one for the NVIDIA GPU. It may become necessary to uninstall the drivers for troubleshooting. When you do, uninstall the NVIDIA driver first and then the Intel driver.
Recently I did a clean Windows install on a Dell 6430 laptop. The laptop has an NVIDIA dedicated GPU as well as integrated Intel HD graphics.
The best part is NVIDIA’s Optimus feature. This cool technology can save power (using Intel graphics) for desktop-type tasks and then switch automatically to the more powerful NVIDIA dedicated card when playing games and the like.
Sometimes the drivers can get a bit tricky, but I’ve had good success with the following:
- First, install the latest Intel HD Graphics drivers from Intel
- Then, install the latest NVIDIA drivers
After that, everything has been working perfectly 🙂
More info about NVIDIA Optimus drivers:
NVIDIA Optimus is a technology that allows an Intel integrated GPU and discrete NVIDIA GPU to be built into and accessed by a laptop.
Available methods
There are several methods available:
- #Use Intel graphics only - saves power, because the NVIDIA GPU will be completely powered off.
- #Use NVIDIA graphics only - gives more performance than Intel graphics, but drains more battery (which is not welcome for mobile devices). This utilizes the same underlying process as the optimus-manager and nvidia-xrun options; it should be used for troubleshooting and verifying general functionality before opting for one of the more automated approaches.
- Using both (use NVIDIA GPU when needed and keep it powered off to save power):
- #Using PRIME render offload - official method supported by NVIDIA.
- #Using optimus-manager - switches graphics with a single command (logout and login required to take effect). It achieves maximum performance out of NVIDIA GPU and switches it off if not in use.
- #Using nvidia-xrun - run separate X session on different TTY with NVIDIA graphics. It achieves maximum performance out of NVIDIA GPU and switches it off if not in use.
- #Using Bumblebee - provides Windows-like functionality by allowing you to run selected applications with NVIDIA graphics while using Intel graphics for everything else. Has significant performance issues.
- #Using nouveau - offers poorer performance (compared to the proprietary NVIDIA driver) and may cause issues with sleep and hibernate. Does not work with the latest NVIDIA GPUs.
Use Intel graphics only
If you only care to use a certain GPU without switching, check the options in your system's BIOS. There should be an option to disable one of the cards. Some laptops only allow disabling the discrete card, or vice versa, but it is worth checking if you plan to use just one of the cards.
If your BIOS does not allow disabling the NVIDIA graphics, you can disable it from the Linux kernel itself. See Hybrid graphics#Fully Power Down Discrete GPU.
Use CUDA without switching the rendering provider
You can use CUDA without switching rendering to the NVIDIA graphics. All you need to do is ensure that the NVIDIA card is powered on before starting a CUDA application; see Hybrid graphics#Fully Power Down Discrete GPU for details.
Now when you start a CUDA application, it will automatically load all the necessary kernel modules. Before turning off the NVIDIA card after using CUDA, the nvidia kernel modules have to be unloaded first:
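A minimal sketch of the unload step (run as root; if nvidia_drm or nvidia_modeset are also loaded, remove them before nvidia):

  # rmmod nvidia_uvm
  # rmmod nvidia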
Use NVIDIA graphics only
The proprietary NVIDIA driver can be configured to be the primary rendering provider. It has notable screen-tearing issues unless you enable PRIME synchronization by enabling NVIDIA#DRM kernel mode setting; see [1] for further information. It does allow use of the discrete GPU and has (as of January 2017) a marked edge in performance over the nouveau driver.
First, install the NVIDIA driver and xorg-xrandr. Then, configure /etc/X11/xorg.conf.d/10-nvidia-drm-outputclass.conf, the options of which will be combined with the package-provided /usr/share/X11/xorg.conf.d/10-nvidia-drm-outputclass.conf to provide compatibility with this setup.
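A sketch of such a file, assuming the modesetting driver handles the Intel GPU; the ModulePath entries are distribution-specific and may need adjusting or omitting:

  Section "OutputClass"
      Identifier "intel"
      MatchDriver "i915"
      Driver "modesetting"
  EndSection

  Section "OutputClass"
      Identifier "nvidia"
      MatchDriver "nvidia-drm"
      Driver "nvidia"
      Option "AllowEmptyInitialConfiguration"
      Option "PrimaryGPU" "yes"
      ModulePath "/usr/lib/nvidia/xorg"
      ModulePath "/usr/lib/xorg/modules"
  EndSection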
Next, add the following two lines to the beginning of your ~/.xinitrc:
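These are the lines that make X render on the NVIDIA GPU while outputting through the Intel-driven display; the provider name modesetting below assumes the modesetting Intel driver (with xf86-video-intel the provider may be named differently):

  xrandr --setprovideroutputsource modesetting NVIDIA-0
  xrandr --auto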
Now reboot to load the drivers, and X should start.
If your display dpi is not correct add the following line:
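For example, to force 96 DPI (adjust the value to match your panel):

  xrandr --dpi 96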
If you get a black screen when starting X, make sure that there are no ampersands after the two xrandr commands in ~/.xinitrc. If there are, the window manager can run before the xrandr commands finish executing, leading to a black screen.
Display managers
If you are using a display manager, you will need to create or edit a display setup script for your display manager instead of using ~/.xinitrc.
LightDM
For the LightDM display manager:
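A minimal setup script, here assumed to live at /etc/lightdm/display_setup.sh (any path readable by LightDM works); it runs the same two xrandr calls as the ~/.xinitrc example above:

  #!/bin/sh
  xrandr --setprovideroutputsource modesetting NVIDIA-0
  xrandr --auto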
Make the script executable:
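For example, using the path assumed above:

  # chmod +x /etc/lightdm/display_setup.sh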
Now configure LightDM to run the script by editing the [Seat:*] section in /etc/lightdm/lightdm.conf:
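In that section, point display-setup-script at the script created above (the path matches the example used here):

  [Seat:*]
  display-setup-script=/etc/lightdm/display_setup.sh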
Now reboot and your display manager should start.
SDDM
For the SDDM display manager (SDDM is the default DM for KDE):
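By default SDDM runs /usr/share/sddm/scripts/Xsetup at the start of each X session, so the same two xrandr calls can go there (sketch, assuming the modesetting provider name as above):

  xrandr --setprovideroutputsource modesetting NVIDIA-0
  xrandr --auto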
GDM
For the GDM display manager create two new .desktop files:
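A commonly used sketch places two identical files, one for the greeter and one for the user session, at /usr/share/gdm/greeter/autostart/optimus.desktop and /etc/xdg/autostart/optimus.desktop (paths may differ on other distributions), each containing:

  [Desktop Entry]
  Type=Application
  Name=Optimus
  Exec=sh -c "xrandr --setprovideroutputsource modesetting NVIDIA-0; xrandr --auto"
  NoDisplay=true
  X-GNOME-Autostart-Phase=DisplayServer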
Make sure that GDM uses X as its default backend.
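One way to do that, assuming GDM reads /etc/gdm/custom.conf on your distribution, is to disable the Wayland backend there:

  [daemon]
  WaylandEnable=false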
Checking 3D
You can check whether the NVIDIA graphics are being used by installing mesa-demos and running:
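For example:

  $ glxinfo | grep NVIDIA

If the NVIDIA card is rendering, the OpenGL vendor and renderer strings in the output should mention NVIDIA.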
Further Information
For more information, look at NVIDIA's official page on the topic [2].
Troubleshooting
Tearing/Broken VSync
This requires xorg-server 1.19 or higher, linux kernel 4.5 or higher, and nvidia 370.23 or higher. Then enable DRM kernel mode setting, which will in turn enable the PRIME synchronization and fix the tearing.
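A minimal sketch of one way to enable DRM kernel mode setting, via a modprobe configuration file (the file name is arbitrary; passing nvidia-drm.modeset=1 on the kernel command line works too):

  # /etc/modprobe.d/nvidia-drm-kms.conf
  options nvidia-drm modeset=1

If the nvidia modules are included in your initramfs, regenerate it afterwards so the option takes effect at boot.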
You can read the official forum thread for details.
It has been reported that linux kernel 5.4 breaks PRIME synchronization but this has since been fixed.
Failed to initialize the NVIDIA GPU at PCI:1:0:0 (GPU fallen off the bus / RmInitAdapter failed!)
Add rcutree.rcu_idle_gp_delay=1 to the kernel parameters. The original topic can be found in [3] and [4].
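If you boot through GRUB, one way to add such a parameter (the exact file and regeneration command vary by distribution) is:

  # in /etc/default/grub
  GRUB_CMDLINE_LINUX_DEFAULT="quiet rcutree.rcu_idle_gp_delay=1"

  # then regenerate the configuration, e.g.:
  # grub-mkconfig -o /boot/grub/grub.cfg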
Resolution, screen scan wrong. EDID errors in Xorg.log
This is due to the NVIDIA driver not detecting the EDID for the display. You need to manually specify the path to an EDID file or provide the same information in a similar way.
To provide the path to the EDID file, edit the Device section for the NVIDIA card in xorg.conf, adding these lines and changing the parts that need to reflect your own system:
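A sketch of what the resulting Device section might look like; the connector name CRT-0 and the card0-LVDS-1 path are examples and must match your hardware:

  Section "Device"
      Identifier  "nvidia"
      Driver      "nvidia"
      Option      "ConnectedMonitor" "CRT-0"
      Option      "CustomEDID" "CRT-0:/sys/class/drm/card0-LVDS-1/edid"
      Option      "IgnoreEDID" "false"
      Option      "UseEDID" "true"
  EndSection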
If Xorg won't start, try swapping out all references of CRT to DFB. card0 is the identifier for the Intel card to which the display is connected via LVDS; the EDID binary is in this directory. If the hardware arrangement is different, the value for CustomEDID might vary, but this has yet to be confirmed. In any case, the path will start with /sys/class/drm.
Alternatively, you can generate your EDID with tools like read-edid and point the driver to that file. Even modelines can be used, but then be sure to change 'UseEDID' and 'IgnoreEDID'.
Wrong resolution without EDID errors
Using nvidia-xconfig, incorrect information might be generated in xorg.conf, in particular wrong monitor refresh rates that restrict the possible resolutions. Try commenting out the HorizSync/VertRefresh lines. If this helps, you can probably also remove everything else not mentioned in this article.
Lockup issue (lspci hangs)
Symptoms: lspci hangs, system suspend fails, shutdown hangs, optirun hangs.
Applies to: newer laptops with a GTX 965M or similar when bbswitch (e.g. via Bumblebee) or nouveau is in use.
When the dGPU power resource is being turned on, the operation may fail and hang in ACPI code (kernel bug 156341).
When using nouveau, disabling runtime power management stops it from changing the power state, thus avoiding this issue. To disable runtime power management, add nouveau.runpm=0 to the kernel parameters.
For known model-specific workarounds, see this issue. In other cases you can try booting with acpi_osi='!Windows 2015' or acpi_osi=! acpi_osi='Windows 2009' added to your kernel parameters. (Consider reporting your laptop to that issue.)
No screens found on a laptop/NVIDIA Optimus
Check if $ lspci | grep VGA outputs something similar to:
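On a working Optimus machine you would expect two entries, one Intel and one NVIDIA; the lines below are illustrative only, not output from a specific system:

  00:02.0 VGA compatible controller: Intel Corporation Haswell-ULT Integrated Graphics Controller (rev 0b)
  01:00.0 3D controller: NVIDIA Corporation GM107M [GeForce GTX 850M] (rev a2)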
NVIDIA drivers have offered Optimus support since 319.12 Beta [5] with kernels 3.9 and above.
Another solution is to install the Intel driver to handle the screens, then if you want 3D software you should run them through Bumblebee to tell them to use the NVIDIA card.
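With Bumblebee configured, individual applications are started on the NVIDIA card by prefixing them with optirun, for example:

  $ optirun glxgears -info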
Use switchable graphics
Using PRIME render offload
This is the official NVIDIA method to support switchable graphics.
See PRIME#PRIME render offload for details.
Using nouveau
See PRIME for graphics switching and nouveau for open-source NVIDIA driver.
Using Bumblebee
See Bumblebee.
Using nvidia-xrun
See nvidia-xrun.
Using optimus-manager
See Optimus-manager upstream documentation. It covers both installation and configuration in Arch Linux systems.