
Message boards : Graphics cards (GPUs) : Guide: RTX (also GTX), increasing efficiency in Linux

ProDigit
Joined: 13 Nov 19
Posts: 6
Credit: 87,400,696
RAC: 0
Message 54573 - Posted: 4 May 2020 | 12:54:30 UTC
Last modified: 4 May 2020 | 12:56:58 UTC

I've done some research in the past, and the RTX 2060 through 2080 (including the 2060 Super and 2070 Super) can run 'fine' at 125W.
With a little overclock and the power capped to 125W, the 2060, 2060 Super, and 2070 (both open-ended and blower-style GPUs) run pretty much at stock speeds, but at 125W instead of 170 to 225W.

The 2070 Super and 2080 are best run between 129 and 135W on an open bench, and, depending on the cooling, around 145-155W inside a case.

The 2080 Ti takes too large a performance hit at those wattages; depending on the project you'll have to set it anywhere from 180W to 225W on an open bench, and add about 15% when running inside a case.


For multiple GPUs under a Debian-based Linux
(like all the Ubuntu variants, Debian, or Mint), you have to run the latest supported operating system, which is Ubuntu 18.04 LTS.
Later versions will give desktop errors.
Then install the Nvidia .run drivers from their website (geforce.com/drivers or nvidia.com/download):
- After downloading the driver, right-click the .run file and make it executable (or in a terminal run: chmod +x nvidia_driver.run)
- Log out of your GUI
- Switch to a virtual console (CTRL+ALT+F1; sometimes it's F2 or F8, depending on the OS)
- To stop the GUI, run:

sudo service lightdm stop
sudo init 3

Or replace lightdm with sddm, gdm, or gdm3, depending on which display manager your OS runs.
- Log back in at the console (the characters should look different or blocky; if not, you can always boot into GRUB recovery mode and start a shell from there, which works as well).

After that, go to your download folder (where the nvidia .run file is located), run the installer (sudo ./nvidia_driver.run), and then do:
sudo nvidia-xconfig --enable-all-gpus
sudo nvidia-xconfig --cool-bits=28
reboot
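
If you want to confirm the coolbits option actually landed in your X configuration, a quick sanity check (the config path may vary by distro):

grep -i coolbits /etc/X11/xorg.conf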


The above only needs to be done once after each new driver, kernel, or OS install.
The procedure below needs to be repeated after every boot, or written into a script:

You can then adjust the power limit in a Linux terminal by typing:
sudo nvidia-smi -i 0 -pl 130

Here '0' is your first GPU; this number can be 1, 2, or higher, depending on how many GPUs are installed,
and '130' is the desired wattage. Type a value that's too low or too high (like '10') and the system will give an error and show you the valid range (for most RTX GPUs it's 125W up to 170/225 or 280W).
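
You can also query the allowed range (and the current limit) up front rather than provoking the error; for GPU 0:

sudo nvidia-smi -i 0 -q -d POWER

The output lists the Min, Max, Default, and Enforced Power Limit, so you know which values -pl will accept.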


In the GUI start menu (under Preferences) you'll now be able to see NVIDIA X Server Settings, which you can use to adjust the fan curve, or to overclock your GPU and RAM (be careful: you can cause system instability or, worse, break the hardware).

Most of my RTX 2080 Tis have their sweet spot around +120MHz on the GPU and +1400MHz on the RAM.
Lower-end RTX GPUs differ: roughly +34MHz max on low-binned GPUs, +65MHz on single-fan 2060 designs, +100MHz on most RTX GPUs between the 2060 and 2080, and +200MHz on high-binned GPUs.
In any case, don't watch the overclock offset itself; watch the highest core boost frequency, which in most designs hovers around 1875MHz (1935MHz for the RTX 2060, 2010MHz for the ROG STRIX 2060).
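
To watch what the core is actually boosting to while you tune (one way to do it; this prints the index, core clock, power draw, and temperature every 5 seconds):

nvidia-smi --query-gpu=index,clocks.sm,power.draw,temperature.gpu --format=csv -l 5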

For single GPUs under a Debian-based Linux:
You can run any OS from 14.04 LTS to the latest 20.04, with either the repository .deb package or the Nvidia .run file.
Do:
sudo nvidia-xconfig --enable-all-gpus
sudo nvidia-xconfig --cool-bits=28
reboot


You can now adjust the GPU and VRAM overclocks as well as the GPU's fan curve. See above for details.

You can then adjust the power limit by typing:
sudo nvidia-smi -i 0 -pl 130

Here '0' is your first GPU; this number can be 1, 2, or higher, depending on how many GPUs are installed,
and '130' is the desired wattage. Type a value that's too low or too high (like '10') and the system will give an error and show you the valid range (for most RTX GPUs it's 125W up to 170/225 or 280W).
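
If you'd rather not retype this after every boot, a minimal script sketch (the path, GPU index, and wattage are examples; adjust them to your cards):

#!/bin/bash
# example /usr/local/bin/gpu-powercap.sh - reapply the power cap at boot
/usr/bin/nvidia-smi -pm 1          # persistence mode, so the setting sticks while no task is loaded
/usr/bin/nvidia-smi -i 0 -pl 130   # cap GPU 0 at 130W

Make it executable and run it from a root cron entry (@reboot /usr/local/bin/gpu-powercap.sh) or any other startup mechanism.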

Ian&Steve C.
Joined: 21 Feb 20
Posts: 1077
Credit: 40,231,533,983
RAC: 40
Message 54578 - Posted: 4 May 2020 | 15:38:42 UTC

If you're on Ubuntu or Mint (which is probably most people), it's a lot easier to use the PPA to install the nvidia drivers:

sudo add-apt-repository ppa:graphics-drivers/ppa
sudo apt update
sudo apt install nvidia-driver-440


then you don't have to deal with all the nonsense of stopping the display manager or blacklisting the nouveau drivers; the PPA does it for you.
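
A quick way to confirm which driver the PPA actually gave you after the install:

nvidia-smi --query-gpu=driver_version,name --format=csv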

rod4x4
Joined: 4 Aug 14
Posts: 266
Credit: 2,219,935,054
RAC: 0
Message 54580 - Posted: 4 May 2020 | 23:57:13 UTC - in response to Message 54573.
Last modified: 5 May 2020 | 0:08:28 UTC

The 2070 Super and 2080 are best run between 129 and 135W on an open bench, and, depending on the cooling, around 145-155W inside a case.


Further figures:
If we're discussing peak efficiency, that is, output vs. power input:
GTX 1060 3GB peak efficiency occurs at 64W
GTX 1660 SUPER peak efficiency occurs at 75W

These figures seem very low, but above these power limits any increase in power is not met by the same increase in output.

The following figures were measured on ACEMD3 ELISA tasks, which have a relatively consistent runtime.

The metric used to draw this conclusion was watt-hours (watts × hours, i.e. the energy consumed per task).

GTX1660 SUPER @ 70W (minimum)
completes task in 23800 seconds = 463 WH

GTX1660 SUPER @ 125W (maximum)
completes task in 17100 seconds = 594 WH
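
For anyone checking the math, watt-hours per task = watts × task seconds / 3600; a one-liner to verify the 70W figure:

awk 'BEGIN { printf "70W x 23800s / 3600 = %.0f WH\n", 70*23800/3600 }'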

Splitting the difference between min and max runtimes into quartiles (runtime evenly divided between min and max):
First quartile - 22250 seconds requires 74W (an increase of 4W over min) = 457 WH (lower than at minimum power)
Second quartile - 20450 seconds requires 83W (an increase of 9W over the first quartile) = 471 WH
Third quartile - 18850 seconds requires 98W (an increase of 15W over the second quartile) = 513 WH
From the third quartile to max there is a 27W jump.

So the extra power required at each step is 4W, then 9W, then 15W, then 27W... The increase in power is roughly exponential against the even steps in runtime.

This demonstrates that squeezing out the last drop of performance from the GPU requires a comparatively huge amount of power. For me, running this GPU over 98W is quite wasteful. (Running at lower power has the added benefits of a cooler and quieter GPU, plus lower power bills.)

The exponential increase in power was even steeper on the Pascal GPU (GTX 1060 3GB), so it seems Nvidia has "optimised" the performance curve on the Turing GPUs. Bring on Ampere!!

Ian&Steve C.
Joined: 21 Feb 20
Posts: 1077
Credit: 40,231,533,983
RAC: 40
Message 54611 - Posted: 7 May 2020 | 17:22:38 UTC
Last modified: 7 May 2020 | 17:26:39 UTC

I ran some tests on my RTX 2070s (that's plural, not s for Super) over the last several days.

I think I will settle on a 150W PL with a +125 core/+400 mem OC. This gives me a good balance of power draw and overall output; core clocks stay around 1900MHz, some more, some less (these are the low-end EVGA Black models). I'm sure I could increase efficiency by going deeper into power limiting, but I don't want to give up much more raw performance. I was able to match or even exceed my overall output at 165W/+75 core/+400 mem with 150W/+125 core/+400 mem, reducing system power draw by about 100 watts (7 GPUs) and GPU temps by about 5C across the board. Win-win.

I also found that, at least for GPUGRID, there's not much benefit to overclocking the memory beyond the stock P0 clocks (+400 on the RTX cards in Linux, +200 in Windows), and you can probably get away with even less. GPUGRID isn't very bottlenecked by memory speed; it depends more on GPU core clocks and PCIe bandwidth. GPU memory use and bus utilization are rather low on GPUGRID tasks. Overclocking the memory a lot will just eat into your power budget and reduce your core clocks, resulting in less overall output at a set power limit.
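
If you want to check this on your own tasks, nvidia-smi's device monitor will sample SM/memory-controller utilization and PCIe throughput once per second:

nvidia-smi dmon -s ut

(-s u selects the utilization columns, -s t the PCIe Rx/Tx throughput columns.)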

Aurum
Joined: 12 Jul 17
Posts: 401
Credit: 16,779,632,681
RAC: 1,582,099
Message 54643 - Posted: 10 May 2020 | 19:50:31 UTC

I'm trying to understand this but the terminology you guys are using differs from what's in the nvidia-smi documentation: http://developer.download.nvidia.com/compute/DCGM/docs/nvidia-smi-367.38.pdf

E.g., using sudo nvidia-smi -q reports 4 clock speeds for a 1080 Ti:
Graphics: 1911 MHz
SM : 1911 MHz
Memory : 5005 MHz
Video : 1620 MHz
In one place the clocks are referred to as GPU & RAM and in another as Core & Mem. Aren't they all a form of RAM?
To set the clocks:
-ac, --applications-clocks=MEM_CLOCK,GRAPHICS_CLOCK
Specifies maximum <memory,graphics> clocks as a pair (e.g. 2000,800) that defines GPU's speed while running applications on a GPU.
My guess is that GPU or Core=GRAPHICS_CLOCK and RAM or Mem=MEM_CLOCK.

I was wondering about the p0 option and will explore that.
What about the persistence mode? Any reason to implement that?

sudo nvidia-smi -pl 150 -pm -p0

Aurum
Joined: 12 Jul 17
Posts: 401
Credit: 16,779,632,681
RAC: 1,582,099
Message 54644 - Posted: 10 May 2020 | 21:34:59 UTC
Last modified: 10 May 2020 | 21:39:52 UTC

aurum@Rig-26:~$ sudo nvidia-smi -pm 1 ; sudo nvidia-smi -pl 150
Persistence mode Enabled for GPU 00000000:02:00.0.
All done.
Power limit for GPU 00000000:02:00.0 was set to 150.00 W from 150.00 W.
All done.

It changes the power limit fine, but I'm stuck in P2:
Performance State : P2
Power Limit : 150.00 W
Clocks
Graphics : 1645 MHz
SM : 1645 MHz
Memory : 5005 MHz
Video : 1392 MHz

I tried to set the performance state using a startup command:
sh -c '/usr/bin/nvidia-settings --load-config-only -a GPULogoBrightness=0 -a GpuPowerMizerMode=1'
This turns off the baby-blinky lights, but it's still stuck in P2.

aurum@Rig-26:~$ sudo nvidia-smi -pm 1 ; sudo nvidia-smi -pl 150 ; sudo nvidia-smi -ac 5405,2036
Persistence mode is already Enabled for GPU 00000000:02:00.0.
All done.
Power limit for GPU 00000000:02:00.0 was set to 150.00 W from 150.00 W.
All done.
Setting applications clocks is not supported for GPU 00000000:02:00.0.
Treating as warning and moving on.
All done.

How can I set the clocks or force P0???

Ian&Steve C.
Joined: 21 Feb 20
Posts: 1077
Credit: 40,231,533,983
RAC: 40
Message 54645 - Posted: 10 May 2020 | 21:37:59 UTC - in response to Message 54643.
Last modified: 10 May 2020 | 21:48:10 UTC

Sounds like you're running Linux? I can't see, since your systems are hidden. I do all my overclocking through the Nvidia X Server Settings application that's installed with the nvidia drivers on Linux, and when I've found stable settings, I drop it all into a script that I run at startup.

But 'GPU' and 'core' are synonymous. They are not the same as the memory clocks, which are 'RAM' or 'mem'.

~1900 MHz is pretty normal for a 1080 Ti crunching with no tweaks. You can usually bump that to close to 2000MHz with sufficient power. But if you power-limit the card, it will by default pull the clocks down. You can apply an overclock to the GPU/core with the power limit in place, and it will compensate and try to give you more clocks within the power budget.

Your memory clocks are running at the standard 'P2' state that you are stuck in under compute loads. You cannot get the card to run in P0; it's a limitation in the driver itself. The only thing you can do is overclock the memory in the P2 state to mimic what you would get in the P0 state. Your clocks now are 5000MHz, which is "10Gbps"; to get the clocks up to "11Gbps" (5500MHz) you need to overclock the memory.

On the command line you set clocks using an offset with nvidia-settings, not an absolute number with nvidia-smi.

for one of my RTX 2070s, the command looks like this:

nvidia-settings -a "[gpu:0]/GPUMemoryTransferRateOffset[4]=400" -a "[gpu:0]/GPUGraphicsClockOffset[4]=100"


but the GTX 10-series cards do not have 4 performance levels like the RTX cards do, so you would need a command like this:
nvidia-settings -a "[gpu:0]/GPUMemoryTransferRateOffset[3]=1000" -a "[gpu:0]/GPUGraphicsClockOffset[3]=100"


I set the mem offset to 1000 in that command (the offset is in transfer-rate MHz, double the memory clock nvidia-smi reports, so +1000 raises the reported clock by about 500MHz), which will bring your 1080 Ti to the same as P0 clocks. Play around with the graphics clock offset until you reach your desired clocks.

Aurum
Joined: 12 Jul 17
Posts: 401
Credit: 16,779,632,681
RAC: 1,582,099
Message 54646 - Posted: 10 May 2020 | 21:49:20 UTC
Last modified: 10 May 2020 | 21:55:11 UTC

So do I just set -pl 150 and let P2 pick the clocks???
In trying to understand this thread I couldn't tell if it was
-pl AND clocks
or
-pl OR clocks.

I've never overclocked my GPUs, but reducing power consumption is very appealing. It's starting to get hot here and I can't afford my power bill any more. I've already pulled the second GPU from all my computers and started selling them on fleaBay.

Ian&Steve C.
Joined: 21 Feb 20
Posts: 1077
Credit: 40,231,533,983
RAC: 40
Message 54647 - Posted: 10 May 2020 | 21:53:50 UTC - in response to Message 54646.
Last modified: 10 May 2020 | 21:54:54 UTC

PL AND clocks. PL alone will have the effect of reducing power AND losing performance. The point of overclocking on top of the power limit is to gain back the lost performance.

Here's the startup script I use for my 10x 2070 system. You should be able to figure out what's doing what. Just modify it for however many GPUs you have in a system, change the offsets to whatever suits you, and change the [4]s to [3]s for Pascal cards. It sets power limits, clocks, and fan speeds all in one script. You need to have applied the coolbits tweak to your xorg.conf file for these commands to work.

#!/bin/bash

/usr/bin/nvidia-smi -pm 1
/usr/bin/nvidia-smi -acp UNRESTRICTED

/usr/bin/nvidia-smi -i 0 -pl 160
/usr/bin/nvidia-smi -i 1 -pl 160
/usr/bin/nvidia-smi -i 2 -pl 160
/usr/bin/nvidia-smi -i 3 -pl 160
/usr/bin/nvidia-smi -i 4 -pl 160
/usr/bin/nvidia-smi -i 5 -pl 160
/usr/bin/nvidia-smi -i 6 -pl 160
/usr/bin/nvidia-smi -i 7 -pl 160
/usr/bin/nvidia-smi -i 8 -pl 160
/usr/bin/nvidia-smi -i 9 -pl 160

/usr/bin/nvidia-settings -a "[gpu:0]/GPUPowerMizerMode=1"
/usr/bin/nvidia-settings -a "[gpu:1]/GPUPowerMizerMode=1"
/usr/bin/nvidia-settings -a "[gpu:2]/GPUPowerMizerMode=1"
/usr/bin/nvidia-settings -a "[gpu:3]/GPUPowerMizerMode=1"
/usr/bin/nvidia-settings -a "[gpu:4]/GPUPowerMizerMode=1"
/usr/bin/nvidia-settings -a "[gpu:5]/GPUPowerMizerMode=1"
/usr/bin/nvidia-settings -a "[gpu:6]/GPUPowerMizerMode=1"
/usr/bin/nvidia-settings -a "[gpu:7]/GPUPowerMizerMode=1"
/usr/bin/nvidia-settings -a "[gpu:8]/GPUPowerMizerMode=1"
/usr/bin/nvidia-settings -a "[gpu:9]/GPUPowerMizerMode=1"

/usr/bin/nvidia-settings -a "[gpu:0]/GPUMemoryTransferRateOffset[4]=400" -a "[gpu:0]/GPUGraphicsClockOffset[4]=100"
/usr/bin/nvidia-settings -a "[gpu:1]/GPUMemoryTransferRateOffset[4]=400" -a "[gpu:1]/GPUGraphicsClockOffset[4]=100"
/usr/bin/nvidia-settings -a "[gpu:2]/GPUMemoryTransferRateOffset[4]=400" -a "[gpu:2]/GPUGraphicsClockOffset[4]=100"
/usr/bin/nvidia-settings -a "[gpu:3]/GPUMemoryTransferRateOffset[4]=400" -a "[gpu:3]/GPUGraphicsClockOffset[4]=100"
/usr/bin/nvidia-settings -a "[gpu:4]/GPUMemoryTransferRateOffset[4]=400" -a "[gpu:4]/GPUGraphicsClockOffset[4]=100"
/usr/bin/nvidia-settings -a "[gpu:5]/GPUMemoryTransferRateOffset[4]=400" -a "[gpu:5]/GPUGraphicsClockOffset[4]=100"
/usr/bin/nvidia-settings -a "[gpu:6]/GPUMemoryTransferRateOffset[4]=400" -a "[gpu:6]/GPUGraphicsClockOffset[4]=100"
/usr/bin/nvidia-settings -a "[gpu:7]/GPUMemoryTransferRateOffset[4]=400" -a "[gpu:7]/GPUGraphicsClockOffset[4]=100"
/usr/bin/nvidia-settings -a "[gpu:8]/GPUMemoryTransferRateOffset[4]=400" -a "[gpu:8]/GPUGraphicsClockOffset[4]=100"
/usr/bin/nvidia-settings -a "[gpu:9]/GPUMemoryTransferRateOffset[4]=400" -a "[gpu:9]/GPUGraphicsClockOffset[4]=100"

/usr/bin/nvidia-settings -a '[gpu:0]/GPUFanControlState=1' -a '[fan:0]/GPUTargetFanSpeed=75' -a '[fan:1]/GPUTargetFanSpeed=75'
/usr/bin/nvidia-settings -a '[gpu:1]/GPUFanControlState=1' -a '[fan:2]/GPUTargetFanSpeed=75' -a '[fan:3]/GPUTargetFanSpeed=75'
/usr/bin/nvidia-settings -a '[gpu:2]/GPUFanControlState=1' -a '[fan:4]/GPUTargetFanSpeed=75' -a '[fan:5]/GPUTargetFanSpeed=75'
/usr/bin/nvidia-settings -a '[gpu:3]/GPUFanControlState=1' -a '[fan:6]/GPUTargetFanSpeed=75' -a '[fan:7]/GPUTargetFanSpeed=75'
/usr/bin/nvidia-settings -a '[gpu:4]/GPUFanControlState=1' -a '[fan:8]/GPUTargetFanSpeed=75' -a '[fan:9]/GPUTargetFanSpeed=75'
/usr/bin/nvidia-settings -a '[gpu:5]/GPUFanControlState=1' -a '[fan:10]/GPUTargetFanSpeed=75' -a '[fan:11]/GPUTargetFanSpeed=75'
/usr/bin/nvidia-settings -a '[gpu:6]/GPUFanControlState=1' -a '[fan:12]/GPUTargetFanSpeed=75' -a '[fan:13]/GPUTargetFanSpeed=75'
/usr/bin/nvidia-settings -a '[gpu:7]/GPUFanControlState=1' -a '[fan:14]/GPUTargetFanSpeed=75' -a '[fan:15]/GPUTargetFanSpeed=75'
/usr/bin/nvidia-settings -a '[gpu:8]/GPUFanControlState=1' -a '[fan:16]/GPUTargetFanSpeed=75' -a '[fan:17]/GPUTargetFanSpeed=75'
/usr/bin/nvidia-settings -a '[gpu:9]/GPUFanControlState=1' -a '[fan:18]/GPUTargetFanSpeed=100' -a '[fan:19]/GPUTargetFanSpeed=100'
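
One note if you adapt this: the nvidia-settings lines need a running X session, so launch the script from inside your desktop session (e.g. a Startup Applications entry) or point it at the display explicitly, something like:

DISPLAY=:0 /path/to/your/script.sh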


Keith Myers
Joined: 13 Dec 17
Posts: 1354
Credit: 7,875,512,546
RAC: 7,898,770
Message 54648 - Posted: 10 May 2020 | 22:15:26 UTC - in response to Message 54643.

To set the clocks:
-ac, --applications-clocks=MEM_CLOCK,GRAPHICS_CLOCK

This method of setting clocks only works for Maxwell or older cards; it's deprecated for Pascal and later.

For clock control on the later models you need to use the nvidia-settings application.

You use the nvidia-smi application for power-limit setting.
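
Putting the two together for a single card (the index, wattage, and offset are just examples; use [4] for Turing and [3] for Pascal, as noted above):

sudo nvidia-smi -i 0 -pl 150
nvidia-settings -a "[gpu:0]/GPUGraphicsClockOffset[3]=100"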

robertmiles
Joined: 16 Apr 09
Posts: 503
Credit: 762,719,467
RAC: 129,456
Message 54649 - Posted: 11 May 2020 | 3:51:28 UTC - in response to Message 54643.
Last modified: 11 May 2020 | 3:52:03 UTC


In one place the clocks are referred to as GPU & RAM and in another as Core & Mem. Aren't they all a form of RAM?

Core (as in magnetic-core memory) is a form of memory that was heavily used around 50 years ago. I remember my first engineering boss telling me that the company's first computer had only 64 kilobytes of core memory. The main program we worked with needed more than that, so at first it had to use unusual steps such as pushing the operating system out of memory, and loading only sections of the program into memory at any one time.

RAM is random access memory, and mem is an abbreviation for memory.

A GPU is NOT a form of memory. It is something that uses a lot of memory, but does not contain much of it.

Aurum
Joined: 12 Jul 17
Posts: 401
Credit: 16,779,632,681
RAC: 1,582,099
Message 54655 - Posted: 11 May 2020 | 9:53:29 UTC - in response to Message 54648.

To set the clocks:
-ac, --applications-clocks=MEM_CLOCK,GRAPHICS_CLOCK

This method of setting clocks only works for Maxwell or older cards; it's deprecated for Pascal and later.

For clock control on the later models you need to use the nvidia-settings application.

You use the nvidia-smi application for power-limit setting.
Thanks Keith, The InterWeb Never Forgets bit me again :-)

I abandoned that approach earlier in favor of trying to get GPUPowerMizerMode=1 to force it into P0. But there's some magic trick to getting this coolbits thing to work that I have yet to figure out.

I'm digesting this man page hoping that I may understand what I'm doing:
http://manpages.ubuntu.com/manpages/bionic/man1/nvidia-settings.1.html

Ian&Steve C.
Joined: 21 Feb 20
Posts: 1077
Credit: 40,231,533,983
RAC: 40
Message 54658 - Posted: 11 May 2020 | 11:28:49 UTC - in response to Message 54655.

Did you read my previous post?

You can't force a high-end GTX 10-series card into P0. Only the professional-level Quadro and Tesla cards can, plus low-end cards like the 1050 and below. There was a way to do it on 700-series cards, and maybe 900-series cards in Windows, but not Linux, if I recall.

Keith Myers
Joined: 13 Dec 17
Posts: 1354
Credit: 7,875,512,546
RAC: 7,898,770
Message 54661 - Posted: 11 May 2020 | 16:30:42 UTC

The reason Maxwell and later are not directly overclockable this way is that Nvidia has never released signed firmware images for those cards.

They have for Kepler and earlier. (I had the breakdown of card families wrong earlier; I used to use the -ac method of overclocking on my GTX 670s.)

This is the original "coolbits" Phoronix.com post.

https://www.phoronix.com/scan.php?page=news_item&px=MTY1OTM

And this snippet explains why: https://www.phoronix.com/scan.php?page=article&item=nouveau-summer-2018&num=1

Re-Clocking
The biggest issue that plagues the Nouveau driver with modern NVIDIA hardware and really hurts its potential adoption is the lack of re-clocking support.

For GeForce GTX 600/700 "Kepler" graphics cards there is manual re-clocking support that has been stable for a while now in the kernel and requires writing "0f" (or the other desired performance state) to /sys/kernel/debug/dri/0/pstate (or similar path depending upon GPU index) to switch from the boot clock frequencies to the highest performance state for the GPU core and memory. At this stage, Nouveau doesn't have any dynamic/automatic re-clocking for their GPUs to switch frequencies based upon GPU utilization. The benchmarks in this article on Kepler hardware as well as Maxwell 1 show the re-clocked performance to their highest (0f) performance state, but unfortunately there isn't re-clocking for Maxwell (GTX 900 series) and newer.

The Maxwell GeForce GTX 900 and Pascal GeForce GTX 1000 series still do not have proper re-clocking support in the Nouveau driver due to the shift to signed firmware images. While NVIDIA released the necessary signed firmware images to Nouveau developers to support hardware acceleration on Maxwell/Pascal, right now it's still stuck to the boot clock frequencies that are very low compared to their advertised clock speeds. NVIDIA would need to release the PMU firmware and likely other documentation for the open-source driver to get to the stage of re-clocking offered on Kepler hardware. But there's been no indication of that happening so unless the Nouveau developers discover some black magic for getting re-clocking working and being able to ramp up the fan speeds too as part of that process, the situation isn't looking good at this moment in time.


There have been some hints from Nvidia that signed firmware images are coming eventually to the Nouveau driver for Pascal and later.

Aurum
Joined: 12 Jul 17
Posts: 401
Credit: 16,779,632,681
RAC: 1,582,099
Message 54667 - Posted: 11 May 2020 | 22:07:39 UTC - in response to Message 54658.
Last modified: 11 May 2020 | 22:08:56 UTC

Did you read my previous post? You can't force a high-end GTX 10-series card into P0. Only the professional-level Quadro and Tesla cards can, plus low-end cards like the 1050 and below. There was a way to do it on 700-series cards, and maybe 900-series cards in Windows, but not Linux, if I recall.
Yes, but it still reported being in P0 for a 1080 Ti; I just can't get it to work. Here's what I did:
sudo nvidia-xconfig -a --cool-bits=28 --allow-empty-initial-configuration
sudo xed /etc/X11/xorg.conf

Section "Device"
    Identifier     "Device0"
    Driver         "nvidia"
    VendorName     "NVIDIA Corporation"
    BoardName      "GeForce GTX 1080 Ti"
EndSection

Section "Screen"
    Identifier     "Screen0"
    Device         "Device0"
    Monitor        "Monitor0"
    DefaultDepth   24
    Option         "AllowEmptyInitialConfiguration" "True"
    Option         "Coolbits" "28"
    SubSection     "Display"
        Depth      24
    EndSubSection
EndSection

Then I made an executable script:
#!/bin/bash
/usr/bin/nvidia-smi -pm 1
/usr/bin/nvidia-smi -acp UNRESTRICTED
/usr/bin/nvidia-smi -i 0 -pl 160
/usr/bin/nvidia-settings -a "[gpu:0]/GPUPowerMizerMode=1"
/usr/bin/nvidia-settings -a "[gpu:0]/GPUMemoryTransferRateOffset[3]=400" -a "[gpu:0]/GPUGraphicsClockOffset[3]=100"
/usr/bin/nvidia-settings -a "[gpu:0]/GPUFanControlState=1" -a "[fan:0]/GPUTargetFanSpeed=75" -a "[fan:1]/GPUTargetFanSpeed=80"
With nvidia-smi -q reporting:

Attached GPUs : 1
GPU 00000000:02:00.0
    Product Name : GeForce GTX 1080 Ti
    Product Brand : GeForce
    Display Mode : Enabled
    Display Active : Enabled
    Persistence Mode : Disabled
    Fan Speed : 75 %
    Performance State : P2
    Power Readings
        Power Management : Supported
        Power Draw : 57.57 W
        Power Limit : 250.00 W
        Default Power Limit : 250.00 W
        Enforced Power Limit : 250.00 W
        Min Power Limit : 125.00 W
        Max Power Limit : 300.00 W
    Clocks
        Graphics : 1556 MHz
        SM : 1556 MHz
        Memory : 5200 MHz
        Video : 1316 MHz
    Applications Clocks
        Graphics : N/A
        Memory : N/A
    Default Applications Clocks
        Graphics : N/A
        Memory : N/A
    Max Clocks
        Graphics : 1987 MHz
        SM : 1987 MHz
        Memory : 5505 MHz
        Video : 1620 MHz
    Max Customer Boost Clocks
        Graphics : N/A
    Clock Policy
        Auto Boost : N/A
        Auto Boost Default : N/A
    Processes
        Process ID : 1447
        Type : G
        Name : /usr/lib/xorg/Xorg
        Used GPU Memory : 123 MiB

Retvari Zoltan
Joined: 20 Jan 09
Posts: 2353
Credit: 16,375,531,916
RAC: 5,811,976
Message 54668 - Posted: 11 May 2020 | 22:17:14 UTC - in response to Message 54667.

Did you read my previous post? You can't force a high-end GTX 10-series card into P0. Only the professional-level Quadro and Tesla cards can, plus low-end cards like the 1050 and below. There was a way to do it on 700-series cards, and maybe 900-series cards in Windows, but not Linux, if I recall.
Yes, but it still reported being in P0 for a 1080 Ti. I just can't get it to work.
<snip>

With nvidia-smi -q reporting:
Attached GPUs : 1 GPU 00000000:02:00.0 Product Name : GeForce GTX 1080 Ti Product Brand : GeForce Display Mode : Enabled Display Active : Enabled Persistence Mode : Disabled Fan Speed : 75 % Performance State : P2

Is it P0 or P2 then?

Aurum
Joined: 12 Jul 17
Posts: 401
Credit: 16,779,632,681
RAC: 1,582,099
Message 54669 - Posted: 11 May 2020 | 22:33:05 UTC

I fixed it; I had forgotten the sudo in my Startup Applications command:

sudo /home/aurum/BOINC_PL.sh


Persistence Mode : Enabled
Fan Speed : 79 %
Performance State : P2
Power Readings
    Power Management : Supported
    Power Draw : 57.76 W
    Power Limit : 160.00 W
    Default Power Limit : 250.00 W
    Enforced Power Limit : 160.00 W
    Min Power Limit : 125.00 W
    Max Power Limit : 300.00 W
Clocks
    Graphics : 1556 MHz
    SM : 1556 MHz
    Memory : 5200 MHz
    Video : 1316 MHz
Max Clocks
    Graphics : 1987 MHz
    SM : 1987 MHz
    Memory : 5505 MHz
    Video : 1620 MHz

I don't know how it got set to P0 this morning, but it was.

Retvari Zoltan
Joined: 20 Jan 09
Posts: 2353
Credit: 16,375,531,916
RAC: 5,811,976
Message 54670 - Posted: 11 May 2020 | 23:05:53 UTC - in response to Message 54669.

I don't know how it got set to P0 this morning but it was.
When the calculation starts, there's a spike (<1sec) of P0, then the card switches to P2.

Aurum
Joined: 12 Jul 17
Posts: 401
Credit: 16,779,632,681
RAC: 1,582,099
Message 54671 - Posted: 11 May 2020 | 23:16:41 UTC

That must be it, since I ran the nvidia-smi -q query immediately.
It seems to be working well. As GG WUs finish I'll propagate it to my other computers. If I really get a 36% reduction in power consumption, that will be amazing. I'm surprised this isn't common knowledge among BOINCers.

Keith Myers
Joined: 13 Dec 17
Posts: 1354
Credit: 7,875,512,546
RAC: 7,898,770
Message 54672 - Posted: 11 May 2020 | 23:58:48 UTC

Yes, every time the card unloads a compute task, the driver lets the card go back to the P0 power state. If you have applied core and memory overclocks to the lower P2 state to approximate the normal P0 state under a compute load, those overclocks get applied briefly to the P0 state and can sometimes crash the card if they are a bit much.

Over at Seti, our Linux developer Petri came up with the keepP2 utility, which runs in the background and keeps a small compute load on the card to prevent it from transitioning from the P2 state to the P0 state with overclocks applied.

keepP2.zip

That way it is always safe to run an overclock and never fear a lockup when the BOINC compute load disappears.

This made a big difference in the stability of Pascal cards, which have an enormous compute penalty compared to Turing. There was no way I could get away with my +2000MHz memory overclock when the compute load was removed and the card transitioned to the P0 state. Instant crash.

Also, before I forget: you can get an overclock applied to the P2 state in Windows by using the Nvidia Profile Inspector and setting the "Force P2" option in the Common settings.

https://www.guru3d.com/files-details/nvidia-profile-inspector-download.html

http://i.imgur.com/EhBH1e0.png

