Message boards : Graphics cards (GPUs) : Overclocking

MJH
Project administrator
Project developer
Project scientist
Message 23225 - Posted: 3 Feb 2012 | 10:34:25 UTC

Hi,

We're interested in hearing about your experiences overclocking cards, in particular the top-end *70s and *80s.
* Do you overclock?
* What settings do you use?
* Which clocks are most relevant to GPUGrid performance?
* Do you downclock certain clocks?
* How does it impact stability (failed WUs, system crashes)?
* What OS do you have?

Thanks!

MJH

JLConawayII
Message 23239 - Posted: 3 Feb 2012 | 22:01:51 UTC
Last modified: 3 Feb 2012 | 22:05:35 UTC

Not exactly a high end card, but here it is:

Zotac GTX 260
base clock 576 core/1242 shader/1000 memory
OC to 701 core/1512 shader/1100 memory
73°C max @ 55% fan

At these settings the card has excellent stability; I've had only one WU error, and I'm not convinced it was related to the overclock. To be honest, I'm not sure the memory OC is even necessary; I don't think this project uses enough memory bandwidth for it to be relevant. I believe this card can go higher, but I haven't tested it yet. This is under Windows 7 using MSI Afterburner.

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Message 23240 - Posted: 3 Feb 2012 | 22:36:55 UTC - in response to Message 23239.

- increase core & shader clock
- memory plays a minor role, but not so much that downclocking would be worth it
- OS does not really matter for OC
- stability: it depends ;)
clock too high and stability suffers; stay within the limits of what your card can do and your RAC will increase

And some more:
- keep temperatures down, fans as high as your noise tolerance permits
- if you really want to go for it, you can clock higher by increasing the GPU voltage, though this lowers the power efficiency of the GPU

MrS
____________
Scanning for our furry friends since Jan 2002

skgiven
Volunteer moderator
Volunteer tester
Message 23242 - Posted: 4 Feb 2012 | 0:21:24 UTC - in response to Message 23240.

My first Gigabyte GTX470 was a reference-design, 1st edition card. It could barely overclock, and due to its single-fan exhaust cooling it got very hot very quickly. From 607 to around 615MHz was about all that was safe. Some tasks could run reliably at 620MHz, but some would fail. At stock it was very reliable.

My second Gigabyte GTX470 is a 2nd edition and can OC a bit more (630MHz is safe for all tasks). It draws less current and power and has a better fan, but is still of reference design. The default fan speed is higher and more powerful, so it runs a bit cooler.

I now have a Zotac AMP GTX470, which is factory overclocked to 656MHz and uses a Twin Frozr fan system. When running a Nathan task at 99% GPU utilization, the fans can be run at around 66% and keep temperatures below 70°C. A Gianni with 88% GPU utilization keeps the card around 65°C at 60% fan speed (~1800rpm). It can comfortably OC to >680MHz (without upping the voltage), which is 12% more than reference.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

Retvari Zoltan
Message 23243 - Posted: 4 Feb 2012 | 0:45:26 UTC - in response to Message 23225.
Last modified: 4 Feb 2012 | 1:38:54 UTC

I'm using Windows XP x64 (and x86 too).
I've overclocked my GTX 480s to 800MHz@1050mV, my GTX 580 to 850MHz@1063mV, and my GTX 590 to 720MHz@925mV and 913mV (memory clocks at factory settings on all cards).
I've also changed the GPU coolers to Arctic Cooling Accelero Xtreme Plus, so the GPU temps are way below 70°C (except on my GTX 590, which still has the standard cooler plus a 12cm fan directly above the card; it goes up to 83°C sometimes).
An 80+ Gold (or even an 80+ Platinum) certified power supply is recommended for overclocking, with adequate headroom for the extra power needed. For efficiency and longevity, it's not recommended to load a PSU above 75% of its nominal wattage long term.
Remember: power consumption scales linearly with frequency but quadratically with voltage (so 10% more voltage causes 21% more power consumption, since 1.1 × 1.1 = 1.21), and these effects multiply.
At higher temperatures the power consumption of a chip rises even further.
At higher temperatures a chip tolerates less overclocking.
A 10°C (or 10K) rise in the chip's operating temperature halves its lifetime (actually its MTBF).
A higher overclock may work with workunits that utilize the GPU less, but tasks with high GPU utilization may fail at those settings.
To make overclocking worth the effort, workunits should not fail at all. It's easy to lose the 10-15% gain of overclocking to failing workunits, because workunit running times are long (even for the short ones).
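The voltage, frequency and temperature rules of thumb above can be put into numbers. A minimal sketch (the helper names are mine, not from any GPUGrid tool):

```python
# Rule-of-thumb model: dynamic GPU power scales linearly with frequency
# and quadratically with voltage, i.e. P ~ f * V^2.

def relative_power(f_scale: float, v_scale: float) -> float:
    """Power relative to stock for given frequency/voltage multipliers."""
    return f_scale * v_scale ** 2

# 10% more voltage at the same clock -> 21% more power (1.1^2 = 1.21)
print(f"{relative_power(1.0, 1.1):.2f}x power")   # 1.21x power

# 10% overclock that also needs 10% more voltage -> ~33% more power
print(f"{relative_power(1.1, 1.1):.2f}x power")   # 1.33x power

# Rule of thumb from the post: every 10 K temperature rise halves MTBF.
def mtbf_factor(delta_t_kelvin: float) -> float:
    """Expected lifetime multiplier for a given temperature rise."""
    return 0.5 ** (delta_t_kelvin / 10.0)

print(f"{mtbf_factor(20):.2f}x expected lifetime at +20 K")  # 0.25x
```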

skgiven
Volunteer moderator
Volunteer tester
Message 23244 - Posted: 4 Feb 2012 | 1:07:23 UTC - in response to Message 23243.
Last modified: 5 Feb 2012 | 14:36:40 UTC

As well as outright losses, and outages due to continuous task failures, there are 'recoverable' losses, which can be identified by reduced performance: longer run times at higher clocks.

Driver related downclocking is also more likely to occur at higher temperatures and clocks.
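One way to spot such recoverable losses is to compare the "# Time per step" line from each result's stderr output across clock settings. A hedged sketch (the helper and sample strings are illustrative; the line format matches the stderr output quoted later in this thread):

```python
# Compare ACEMD "Time per step" figures between results: if the card
# runs *slower* at a higher clock, driver downclocking or throttling
# is a likely culprit. The sample strings below are illustrative.
import re
from typing import Optional

STEP_RE = re.compile(r"# Time per step \(avg over \d+ steps\): ([\d.]+) ms")

def time_per_step(stderr_text: str) -> Optional[float]:
    """Extract the average time per step (ms) from a result's stderr."""
    m = STEP_RE.search(stderr_text)
    return float(m.group(1)) if m else None

baseline    = "# Time per step (avg over 750000 steps): 12.343 ms"
overclocked = "# Time per step (avg over 750000 steps): 13.100 ms"

b, o = time_per_step(baseline), time_per_step(overclocked)
if b and o and o > b:
    print(f"slower at higher clock: {o / b - 1:.1%}")  # throttling suspected
```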
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

Paul Raney
Message 25176 - Posted: 20 May 2012 | 13:32:56 UTC - in response to Message 23244.

All of my GTX cards are overclocked, with varying levels of success.

GTX 570
Voltage 1.1V
924MHz

GTX 570HD
Voltage 1.075V
881MHz

GTX 580
Voltage 1.138V
944MHz

GTX 580
Voltage 1.138V
953MHz

Now looking for a GTX 590!

____________
Thx - Paul

Note: Please don't use driver version 295 or 296! Recommended versions are 266 - 285.

TarHeal
Message 25206 - Posted: 23 May 2012 | 5:00:19 UTC - in response to Message 23244.

It would be very helpful if ACEMD could flag "recoverable" errors in the Event Log. Would that be feasible? When a work unit takes longer than initially estimated, how can we know whether to attribute that to recoverable errors rather than the many other variables that could be to blame? How reliable are those initial time estimates from GPUGrid?

How aware is the GPUGrid app of the hardware it is running on? If a card's settings are changed, by the user or by the driver, in the middle of a WU, could that information be recorded by BOINC?

On the subject of downclocking - if overclocking tends to be energy inefficient, does downclocking ever improve energy efficiency? That might be a worthwhile area of study, especially if it means your PCs generate less heat and your A/C doesn't have to run 24/7.

TarHeal
Message 25207 - Posted: 23 May 2012 | 6:18:52 UTC - in response to Message 23243.

A higher overclock may work with workunits that utilize the GPU less, but tasks with high GPU utilization may fail at those settings.
To make overclocking worth the effort, workunits should not fail at all. It's easy to lose the 10-15% gain of overclocking to failing workunits, because workunit running times are long (even for the short ones).


This is a great point. I've only recently figured out that overclocking to get a higher framerate for games is not the same as overclocking for science. Most video cards are engineered for a consumer market, not to run at 100% processor load 365 days a year. So if you do GPU computing, Nvidia Inspector is your friend. I hope that by closely monitoring GPU utilization and temps for a while I can do a better job of figuring out what my hardware is capable of in the worst-case scenario. If I'm gonna have any more errors and wasted hours, I want them to be Nvidia's fault. (Seriously guys, if you're gonna charge $1000 for a graphics card, you should provide drivers that aren't broken.)

Also, having RMA'd a card for a fan that died after only five months of heavy BOINC usage, I've learned to be very careful when buying low- to mid-range cards; most aren't built to last under such a workload.

TheFiend
Message 25208 - Posted: 23 May 2012 | 6:23:42 UTC - in response to Message 25206.


On the subject of downclocking - if overclocking tends to be energy inefficient, does downclocking ever improve energy efficiency? That might be a worthwhile area of study, especially if it means your PCs generate less heat and your A/C doesn't have to run 24/7.


You can improve energy efficiency and reduce heat output by undervolting your cards.

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Message 25213 - Posted: 23 May 2012 | 20:15:16 UTC

@Energy efficiency: a lot of research has already been done on this.

Pure underclocking does not increase power efficiency. If you consider only the GPU, and if there were no leakage, power efficiency would stay constant while underclocking: you lower power consumption and performance linearly. However, there's a certain amount of sub-threshold leakage which a chip always draws, regardless of whether its transistors are switching. This is significant. Underclocking only lowers the power used for switching the transistors, not this constant draw. And then there's the system around the GPU: CPU, mainboard, chipset, HDD, memory etc. These all consume a fixed amount of power no matter how far you underclock your GPU. The result: underclocking actually decreases system power efficiency, whereas overclocking increases it.

However, GPU power consumption scales approximately quadratically with voltage. That even includes leakage currents. Therefore the best way to improve power efficiency is to lower the voltage. The chip needs voltage to reach high frequencies, so the ultimate efficiency of the GPU itself is reached at the minimum voltage, together with the highest clock achievable at this setting (this is going to be lower than stock). The peak power efficiency for the entire system may be reached at higher voltage-clock combinations, though, as the fixed amount of power to drive the GPU is still required.

Another way to slightly increase efficiency is to lower the GPU temperature. This reduces power consumption, as a rough rule of thumb, by a few W for every 10 K difference (for contemporary large GPUs).
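The argument above can be illustrated with a toy model. The wattage figures below are assumptions chosen for illustration, not measurements:

```python
# Toy whole-system efficiency model: a fixed power floor (rest of the
# system plus leakage) plus GPU dynamic power that scales as f * V^2.
# Both wattage constants are assumed values, not measured data.

SYSTEM_IDLE_W = 100.0   # assumed fixed draw: CPU, board, leakage, ...
GPU_DYNAMIC_W = 200.0   # assumed GPU switching power at stock settings

def perf_per_watt(f_scale: float, v_scale: float) -> float:
    """Relative performance per watt; performance ~ clock."""
    gpu_power = GPU_DYNAMIC_W * f_scale * v_scale ** 2
    return f_scale / (SYSTEM_IDLE_W + gpu_power)

stock = perf_per_watt(1.0, 1.0)
print(f"underclock 20%: {perf_per_watt(0.8, 1.0) / stock:.2f}x efficiency")
print(f"overclock 10%:  {perf_per_watt(1.1, 1.0) / stock:.2f}x efficiency")
print(f"undervolt 10%:  {perf_per_watt(1.0, 0.9) / stock:.2f}x efficiency")
# Underclocking lands below 1x, overclocking (at constant voltage) and
# undervolting land above 1x, matching the reasoning above.
```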

BTW: as a German I find the thought of crunching with private PCs in rooms with A/C quite strange. I already pay enough for the power consumption of the PCs, I wouldn't want to pay extra just to remove that heat. But we don't usually use A/Cs at home anyway.

MrS
____________
Scanning for our furry friends since Jan 2002

TheFiend
Message 25217 - Posted: 24 May 2012 | 9:58:25 UTC - in response to Message 25213.



BTW: as a German I find the thought of crunching with private PCs in rooms with A/C quite strange. I already pay enough for the power consumption of the PCs, I wouldn't want to pay extra just to remove that heat. But we don't usually use A/Cs at home anyway.

MrS


Living in Germany you don't need A/C in your house, but a lot of crunchers live in countries where A/C is more common in residences.

Paul Raney
Message 25433 - Posted: 2 Jun 2012 | 5:50:28 UTC - in response to Message 25217.

EVGA GTX 590 @ 664MHz, standard memory clock. I will be pushing the memory clock a bit higher over the next few days.
____________
Thx - Paul

Note: Please don't use driver version 295 or 296! Recommended versions are 266 - 285.

Retvari Zoltan
Message 25434 - Posted: 2 Jun 2012 | 7:36:09 UTC - in response to Message 25433.

I will be pushing the memory clock a bit higher over the next few days.

Overclocking memory won't increase the performance of the GPUGrid client much.

skgiven
Volunteer moderator
Volunteer tester
Message 25435 - Posted: 2 Jun 2012 | 7:41:30 UTC - in response to Message 25433.
Last modified: 2 Jun 2012 | 8:00:50 UTC

As Zoltan said, there is probably no point in increasing your GPU's memory clock; for GPUGrid tasks you will gain little or nothing in performance, but you could overheat the card or cause task failures. Leave it as is, or increase the core (607MHz) and shaders very modestly. It's probably best not to touch the voltage unless you have to. Sometimes you can reduce the voltage slightly, increase the fan rate, and increase the clocks slightly (though this is perhaps less likely to work on a GTX590 than on a single-GPU card).
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

Paul Raney
Message 25442 - Posted: 2 Jun 2012 | 11:04:40 UTC - in response to Message 25435.

Thx for the insight on memory clocks and GPUGrid performance. I really did not want to spend hours today tuning memory clocks.
____________
Thx - Paul

Note: Please don't use driver version 295 or 296! Recommended versions are 266 - 285.

Paul Raney
Message 25554 - Posted: 7 Jun 2012 | 12:46:20 UTC - in response to Message 25442.

My GTX 580s are stable at 972MHz core clock and 2000MHz memory with 1.138V. I want to go to 1,000MHz but my initial attempt was a failure.

Has anyone used the new EVGA BIOS that unlocks the fans to 100% and raises the max voltage to 1.150? I think with 1.150V, I could get to 1GHz on my core clock.

What is the BIOS upgrade process? Has anyone had a BIOS upgrade fail? Does the card fall back to a default BIOS or is it an RMA issue?

Any help is appreciated.
____________
Thx - Paul

Note: Please don't use driver version 295 or 296! Recommended versions are 266 - 285.

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Message 25555 - Posted: 7 Jun 2012 | 15:37:19 UTC - in response to Message 25554.

1 GHz may work with the additional voltage, but it's going to be close either way.

Backup BIOS: as far as I know, only recent high-end AMD cards provide a dual BIOS. So you'd better not unplug anything during flashing..

MrS
____________
Scanning for our furry friends since Jan 2002

skgiven
Volunteer moderator
Volunteer tester
Message 25556 - Posted: 7 Jun 2012 | 16:34:32 UTC - in response to Message 25555.
Last modified: 7 Jun 2012 | 16:35:11 UTC

An 85% fan rate is quite high, and 972MHz definitely is: a 26% OC.
For such a pricey GPU, flashing just to attempt another 3% isn't worth it IMO, at least not until it's got a lot of miles under its belt.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

Paul Raney
Message 25557 - Posted: 7 Jun 2012 | 17:54:14 UTC - in response to Message 25556.

It isn't worth the 3% gain. Better to wait to buy the big Kepler and then flash them. These are eBay cards, so EVGA will likely not want to provide RMAs :-)

The new BIOS opens the fan up to 100% and provides a max voltage of 1.150V. Just enough to get to 1GHz!
____________
Thx - Paul

Note: Please don't use driver version 295 or 296! Recommended versions are 266 - 285.

Dave
Message 25558 - Posted: 7 Jun 2012 | 17:54:29 UTC

MSI N580GTX

This card came clocked at 800MHz out of the box a couple of weeks ago. It's much faster than my old MSI N460GTX Hawk. After running it awhile I bumped the GPU voltage up to 1.1V. There's more overhead, but I don't plan to go any further. I moved my GPU clock speed up in 5MHz increments until about 855MHz, and in 10MHz increments above that. I stopped at 900MHz, mostly because the GPU temp now hits around 70°C, which I find to be the high end of my personal comfort range; that leaves me 27°C of headroom. I can run any project I wish at 900MHz, even Genefer, which warns against overclocking. Other projects are SETI, PrimeGrid, Milkyway and Einstein. So I am very happy with my new 580.
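This stepping procedure can be sketched as a small helper. The `next_clock` function and its thresholds mirror the post above; reading the actual temperature is left to an external tool such as `nvidia-smi --query-gpu=temperature.gpu --format=csv,noheader`:

```python
# Hypothetical helper (not a real overclocking API): raise the clock in
# 5 MHz steps up to 855 MHz, then 10 MHz steps, and hold once the GPU
# temperature reaches your personal comfort limit.

def next_clock(current_mhz: int, temp_c: int,
               temp_limit_c: int = 70, fine_until_mhz: int = 855) -> int:
    """Return the next clock to try, or the current one if at the limit."""
    if temp_c >= temp_limit_c:
        return current_mhz                # hold: temperature limit reached
    step = 5 if current_mhz < fine_until_mhz else 10
    return current_mhz + step

print(next_clock(800, 60))   # 805 (fine 5 MHz steps below 855 MHz)
print(next_clock(860, 65))   # 870 (coarser 10 MHz steps above)
print(next_clock(890, 71))   # 890 (hold: at the temperature limit)
```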

I have a question about what is reported for some work units here. Here's what I found this morning: "# Number of cores" is being reported as 128, but the 580s have 512 cores in them. Is that an erroneous number, or is it the number of cores being used? I was running an "MJHARVEY" workunit at the time. Regards, dave


Stderr output

<core_client_version>7.0.25</core_client_version>
<![CDATA[
<stderr_txt>
# Using device 0
# There is 1 device supporting CUDA
# Device 0: "GeForce GTX 580"
# Clock rate: 1.80 GHz
# Total amount of global memory: 1610285056 bytes
# Number of multiprocessors: 16
# Number of cores: 128
MDIO: cannot open file "restart.coor"
# Time per step (avg over 750000 steps): 12.343 ms
# Approximate elapsed time for entire WU: 9257.604 s
called boinc_finish
</stderr_txt>
]]>

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Message 25561 - Posted: 7 Jun 2012 | 19:24:10 UTC - in response to Message 25558.

If you've reached the maximum temperature, and therefore clock speed, you're comfortable with, but have not yet reached the stability limit, you might push the chip a little further: lower the voltage until you find the edge of stability, then add some safety margin. I guess you're already close.. but if you can lower the voltage significantly, you'll save on electricity costs and lower temperatures, which in turn might allow you to choose a higher voltage (below 1.10V) and a slightly higher clock speed.

Never mind that number of cores; it means nothing. It's for information purposes only, and is still based on the first CUDA-capable GPUs, which used 8 shaders per multiprocessor. This number has gone up over time and has reached 192 on Kepler.
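A worked check of this (the cores-per-SM figures are NVIDIA's published values per compute capability):

```python
# The app multiplied 16 multiprocessors by the original 8 shaders per
# SM and printed 128, but a GTX 580 (compute capability 2.0) actually
# has 32 cores per SM, giving 512 cores.

CORES_PER_SM = {
    "1.x (Tesla)": 8,      # what the old reporting code assumed
    "2.0 (Fermi, GTX 580)": 32,
    "2.1 (Fermi)": 48,
    "3.x (Kepler)": 192,
}

sm_count = 16  # "# Number of multiprocessors: 16" from the stderr above
for arch, per_sm in CORES_PER_SM.items():
    print(f"{arch}: {sm_count} SMs x {per_sm} = {sm_count * per_sm} cores")
# The 2.0 row yields the GTX 580's real 512 cores; the 1.x row
# reproduces the misleading "128" in the log.
```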

MrS
____________
Scanning for our furry friends since Jan 2002

Paul Raney
Message 25597 - Posted: 9 Jun 2012 | 12:19:25 UTC - in response to Message 25561.

I flashed the BIOS on one of my 570s that was running really hot (~82°C). The fan will now go to 100%, and this keeps the card in the 70s. I also added a couple of fans to help push air to the card's fan, but it looks like a job for a new blower and some case modifications :-).

The BIOS flash is simple and takes about 2 min. to complete. If you have EVGA cards and they are running hot, use this new BIOS and open up the fans. The amount of cooling at 90% fan is far more than at 85%.

Keep Crunching!
____________
Thx - Paul

Note: Please don't use driver version 295 or 296! Recommended versions are 266 - 285.

Snow Crash
Message 25682 - Posted: 13 Jun 2012 | 7:19:44 UTC
Last modified: 13 Jun 2012 | 7:21:52 UTC

GTX 670 w/ 301.42
Core@ 1249, Memory@ 3206

Win7 x64, BOINC 7.0.25, i7-920 @4.317

This card has completed 7 WUs across 3 different types so I think it's good.

IBUCH_5_affTRYP, MJHARVEY_MJHXA1, PAOLA_3EKO

When I have some more time I'll push it a bit harder but for now it is running smooth and cool (fan at 75% temp@50c).
____________
Thanks - Steve

Snow Crash
Message 25713 - Posted: 14 Jun 2012 | 16:33:19 UTC - in response to Message 25682.

GTX 670 w/ 301.42
Core@ 1249, Memory@ 3206

Win7 x64, BOINC 7.0.25, i7-920 @4.317

This card has completed 7 WUs across 3 different types so I think it's good.

IBUCH_5_affTRYP, MJHARVEY_MJHXA1, PAOLA_3EKO

When I have some more time I'll push it a bit harder but for now it is running smooth and cool (fan at 75% temp@50c).


Interesting how 6X0 cards are different in that you cannot force a clock setting; you can only set what you would "like" it to be. This usually works just fine under load... and that's the big key... under load. GPUGrid tasks typically load the GPU enough that it figures it should run full throttle, but the load from the IBUCH_5_affTRYP WUs is so low that the GPU sees no need to run full throttle and slows down to core@1032 ... a 24% reduction ... whine whine whine ...
____________
Thanks - Steve

Snow Crash
Message 25877 - Posted: 26 Jun 2012 | 0:16:06 UTC

Reporting in that on my GTX670 (301.42), with an OC of core@1249 and memory@3206, BOINC 7.0.25, on a Win7 x64 box, I have been successfully running the new long WUs (CUDA 4.2) all day. Utilization is higher, power usage is up, and WUs run very fast = big points for me and lots of nice results for the project!

3 NATHAN_RPS average a bit under 4 hours, compared to my GTX480 @1556 shaders on the CUDA 3.1 app, which was closer to 8 hrs. While I realize some of the improvement is the hardware and some is the new app (great job GPUGrid), seeing a 2x improvement makes me a happy guy.

Just for a laugh I looked at the NATHAN 3.1 WUs on my GTX295, and those were almost 16 hrs :-)

So I guess I'm lucky that all my cards have plenty of stable OC headroom, but that's why I buy good solid cards :-)

In the future I won't go dual-GPU again (too hard to keep cool, fans must always be at 100%).
I probably won't go reference design either (not as tough as dual, but it can still be a challenge to cool).

What works for me is to research the card I am OCing to learn its general characteristics, and then figure out which WUs currently push our GPUs the most. Then I crank up the voltage until I can no longer keep temps at roughly 75°C or lower. Next comes upping the clocks as appropriate for the GPU model. Once I reach the OC limit, I start stepping down the voltage until I find the sweet spot for the card I am working with!
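This procedure can be sketched as a simple two-phase search. `is_stable` is a placeholder for "run a batch of the most demanding WUs at these settings and check that none fail and temps stay at or below 75°C"; it is not a real API:

```python
# Two-phase sweet-spot search: find the highest stable clock at maximum
# voltage, then step the voltage back down while that clock still holds.

def find_sweet_spot(clocks_mhz, voltages_mv, is_stable):
    """Return (highest stable clock, lowest voltage holding it), or None."""
    best_clock = None
    for clock in clocks_mhz:                     # phase 1: ascending clocks
        if is_stable(clock, max(voltages_mv)):
            best_clock = clock
        else:
            break
    if best_clock is None:
        return None
    best_v = max(voltages_mv)
    for v in sorted(voltages_mv, reverse=True):  # phase 2: descending volts
        if is_stable(best_clock, v):
            best_v = v
        else:
            break
    return best_clock, best_v

# Toy stability model, for illustration only: each extra mV of voltage
# buys 0.5 MHz of stable clock headroom above a 900 MHz baseline.
fake = lambda clock, mv: clock <= 900 + (mv - 1000) * 0.5
print(find_sweet_spot([850, 875, 900, 925, 950],
                      [1000, 1025, 1050, 1075], fake))  # (925, 1050)
```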
____________
Thanks - Steve
