
Message boards : Graphics cards (GPUs) : Anyone tried a GTX670 on GPUgrid?

Author Message
MarkJ
Volunteer moderator
Volunteer tester
Send message
Joined: 24 Dec 08
Posts: 738
Credit: 200,909,904
RAC: 0
Level
Leu
Message 25353 - Posted: 30 May 2012 | 12:12:34 UTC
Last modified: 30 May 2012 | 12:15:31 UTC

I am looking at getting a Palit GTX670 to replace a GTX570, more specifically their JetStream version. Has anyone tried a GTX670 on here? What are your crunch times like?

Overview
Memory: 2048MB / 256-bit GDDR5
Clock: Base Clock 1006MHz / Boost Clock 1084MHz / Memory 3054MHz (DDR 6108MHz)
HDMI / DVI x2 / DisplayPort

Link to it here
____________
BOINC blog

Profile Retvari Zoltan
Avatar
Send message
Joined: 20 Jan 09
Posts: 2343
Credit: 16,201,255,749
RAC: 0
Level
Trp
Message 25356 - Posted: 30 May 2012 | 13:19:50 UTC - in response to Message 25353.
Last modified: 30 May 2012 | 13:21:04 UTC

The GTX 670, GTX 680 and GTX 690 are not supported yet (by the CUDA 3.1 application). They will (hopefully) be supported in a couple of days by the CUDA 4.2 application, which is in beta testing right now.

5pot
Send message
Joined: 8 Mar 12
Posts: 411
Credit: 2,083,882,218
RAC: 0
Level
Phe
Message 25368 - Posted: 30 May 2012 | 20:02:36 UTC

On the first betas, my 670 was about 10% slower. We'll find out the real differences soon enough though. This was on W7.

MarkJ
Volunteer moderator
Volunteer tester
Send message
Joined: 24 Dec 08
Posts: 738
Credit: 200,909,904
RAC: 0
Level
Leu
Message 25377 - Posted: 31 May 2012 | 10:58:56 UTC - in response to Message 25368.

On the first betas, my 670 was about 10% slower. We'll find out the real differences soon enough though. This was on W7.


Did things improve with the cuda42 version?
____________
BOINC blog

5pot
Send message
Joined: 8 Mar 12
Posts: 411
Credit: 2,083,882,218
RAC: 0
Level
Phe
Message 25378 - Posted: 31 May 2012 | 12:53:38 UTC

I only ran the first beta set on the 670. I may have been playing D3 on the second round on that card...

I don't see how it would make a difference though. It's going to be slower than the 680 for sure, and -10% is pretty good considering the 680 is $100+ more. Not to mention still a pain to find.

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Message 25382 - Posted: 31 May 2012 | 18:38:39 UTC - in response to Message 25378.

Mark might have understood you as saying "GTX670 10% slower than his GTX570" rather than "GTX670 10% slower than the GTX680".

MrS
____________
Scanning for our furry friends since Jan 2002

5pot
Send message
Joined: 8 Mar 12
Posts: 411
Credit: 2,083,882,218
RAC: 0
Level
Phe
Message 25383 - Posted: 31 May 2012 | 18:41:58 UTC

Ah...

Sorry about that. Was typing quickly on my phone. Apologies.

Yes, my 670 was about 10% slower than my 680. This was clock for clock; I set both to the same speeds.

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Message 25405 - Posted: 31 May 2012 | 22:50:23 UTC - in response to Message 25383.
Last modified: 31 May 2012 | 22:59:22 UTC

MarkJ, I think that looks like a decent GPU. The GTX670 should prove to be a good replacement for a GTX570. Perhaps around 30% more work per day than the GTX570, and for less energy, so ballpark ~50% to 60% more efficient per Watt.

A GTX670 on W7 should roughly match a GTX580 on WinXP.
In terms of performance per Watt, however, the GTX670 far outstrips the GTX500 series cards.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

comfortw
Avatar
Send message
Joined: 28 Oct 08
Posts: 9
Credit: 1,740,304,089
RAC: 0
Level
His
Message 25408 - Posted: 1 Jun 2012 | 0:28:28 UTC

My GTX670 in Win 7 with the 301.42 driver.

GPUgrid -
ACEMD beta version 6.43(cuda42) - GPU load 83% (NVIDIA Inspector 1.9.6.5)

Primegrid -
Genefer (WR) 1.07 (cuda32_13) - GPU load 99% (NVIDIA Inspector 1.9.6.5)



Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Message 25423 - Posted: 1 Jun 2012 | 16:24:47 UTC - in response to Message 25408.

At GPUGrid, W7 and Vista suffer an 11%+ loss in performance compared to WinXP or Linux. When I tested Windows Server 2008 the loss was only ~3%.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

shdbcamping
Send message
Joined: 2 May 12
Posts: 22
Credit: 145,756,579
RAC: 0
Level
Cys
Message 25425 - Posted: 1 Jun 2012 | 16:43:04 UTC
Last modified: 1 Jun 2012 | 16:52:17 UTC

True enough. Application programming is the difference. All projects need to allocate their programming resources based on their assets and their ability to accommodate the new tech. Please don't let these limitations confuse donors into thinking that a project doesn't care in the "short run". :)
If it continues over a couple of months... that's different :)
They appreciate us until proven differently.
EDIT: We all have to remember that Vista and W7 are an entirely different OS background. All of the programming has to be adjusted for Vista and W7. It's not just the NV or AMD drivers. That's why NV has separate drivers for XP and previous versions versus Vista and W7. The latter are not based on NT tech ;)
Vista never took "massive hold" and neither has W7 yet. Programmers have to do whatever best delivers results.
JIMO, YMMV (just in my opinion, your mileage may vary) :)
sean

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Message 25447 - Posted: 2 Jun 2012 | 12:53:47 UTC - in response to Message 25425.

Vista and 7 sure enough are based on the NT code base. Win 7 still identifies itself as version 6.1, which refers to the old NT nomenclature. The difference you're talking about is the display driver model, which changed with Vista.

MrS
____________
Scanning for our furry friends since Jan 2002

MarkJ
Volunteer moderator
Volunteer tester
Send message
Joined: 24 Dec 08
Posts: 738
Credit: 200,909,904
RAC: 0
Level
Leu
Message 25728 - Posted: 16 Jun 2012 | 4:23:02 UTC
Last modified: 16 Jun 2012 | 4:24:55 UTC

Back to the original question...

I have ordered 2 of them. The supplier has a limit of 2 per customer anyway :-)

I see the "production" apps haven't switched to cuda40 or cuda42 yet, so I will have to leave the GTX570s in place until that happens. I need to sell the old cards to help pay for the new ones. Hopefully it will happen fairly soon.
____________
BOINC blog

Profile Retvari Zoltan
Avatar
Send message
Joined: 20 Jan 09
Posts: 2343
Credit: 16,201,255,749
RAC: 0
Level
Trp
Message 25729 - Posted: 16 Jun 2012 | 7:25:07 UTC - in response to Message 25728.

Back to the original question...

I see the "production" apps haven't switched to cuda40 or cuda42 yet, so I will have to leave the GTX570s in place until that happens.

The short queue already has the cuda4.2 application, and a couple of GTX 680s and GTX 670s are already crunching "production" tasks.

Ken Florian
Send message
Joined: 4 May 12
Posts: 56
Credit: 1,832,989,878
RAC: 0
Level
His
Message 25784 - Posted: 20 Jun 2012 | 2:08:43 UTC - in response to Message 25353.

What do GPUGrid pros make of this statement from Anandtech regarding the 690?

"Unfortunately for NVIDIA GK104 shows its colors here as a compute-weak GPU, and even with two of them we’re nowhere close to one 7970, let alone the monster that is two. If you’re looking at doing serious GPGPU compute work, you should be looking at Fermi, Tahiti, or the future Big Kepler."

http://www.anandtech.com/show/5805/nvidia-geforce-gtx-690-review-ultra-expensive-ultra-rare-ultra-fast/15

Removing power consumption from consideration, does this really suggest that a 590 is preferable to a 690 for GPUgrid? Asked differently, what can one extrapolate from that review/benchmark about a 690's performance here?

5pot
Send message
Joined: 8 Mar 12
Posts: 411
Credit: 2,083,882,218
RAC: 0
Level
Phe
Message 25785 - Posted: 20 Jun 2012 | 2:35:19 UTC

They're discussing double precision. This project is single precision.

MarkJ
Volunteer moderator
Volunteer tester
Send message
Joined: 24 Dec 08
Posts: 738
Credit: 200,909,904
RAC: 0
Level
Leu
Message 25790 - Posted: 20 Jun 2012 | 12:15:24 UTC

Well one of them is installed and off and running. Pretty pics can be found here

Fortunately it picked up a cuda42 work unit to start with, an IBUCH TRYP, but it only seems to be using about 48% load (peak 55%), so it's hardly stressing the card.
____________
BOINC blog

5pot
Send message
Joined: 8 Mar 12
Posts: 411
Credit: 2,083,882,218
RAC: 0
Level
Phe
Message 25791 - Posted: 20 Jun 2012 | 12:49:02 UTC

There are other IBUCH tasks around that get up to 96% load. The ones you have now are actually the slowest out of all the different WUs in the short queue.

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Message 25798 - Posted: 20 Jun 2012 | 18:23:21 UTC - in response to Message 25784.

Asked differently, what can one extrapolate from that review/benchamark about a 690's performance here?

Easy: multiply the throughput of a GTX680 by 2 and you're basically there. Average clock speeds will be slightly lower, but this approximation should be good enough.

Otherwise.. as 5Pot said: DP performance is irrelevant here :)

MrS
____________
Scanning for our furry friends since Jan 2002

mynis
Send message
Joined: 31 May 12
Posts: 8
Credit: 12,361,387
RAC: 0
Level
Pro
Message 25985 - Posted: 29 Jun 2012 | 5:03:49 UTC

I have an MSI factory-overclocked 670 and it gives me computation errors in Linux right off the bat with the 302.17 drivers. I'm thinking it has something to do with the overclock, since other people seem to be crunching just fine on Linux with the newest drivers and a 670. Either way, I've reverted back to 295.59 and it averages about four hours on a cuda42 long run, which I would assume is decent since the description says 8-12 hours on the fastest card.

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Message 26034 - Posted: 30 Jun 2012 | 11:33:35 UTC - in response to Message 25985.

Either way, I've reverted back to 295.59 and it averages about four hours on a cuda42 long run

... which tells us that the 302 driver was to blame, not the factory overclock. Bad for nVidia, good for you :)

MrS
____________
Scanning for our furry friends since Jan 2002

Profile Carlesa25
Avatar
Send message
Joined: 13 Nov 10
Posts: 328
Credit: 72,619,453
RAC: 0
Level
Thr
Message 26071 - Posted: 1 Jul 2012 | 12:00:22 UTC - in response to Message 26034.

Hi: I have also installed 302.17, and with my GTX295 there is no way to run CUDA 4.2 tasks. From what I read, it's a matter of reinstalling the old 295.59 as soon as the CUDA 3.1 tasks I have running are finished... We'll see.

Robert Gammon
Send message
Joined: 28 May 12
Posts: 63
Credit: 714,535,121
RAC: 0
Level
Lys
Message 26202 - Posted: 6 Jul 2012 | 23:25:08 UTC - in response to Message 26071.

I have a GTX480 and it runs GPUGrid fine and dandy under the Linux 302.17 driver.

So much so that I am approaching the top 100 GPUGrid clients based on RAC (actually no. 146 earlier today, up from 164 last night).

It runs CUDA31 and CUDA42 apps. Most CUDA31 long runs take about 9 hours. CUDA42 long runs take about 5 hours.

Mark Henderson
Send message
Joined: 21 Dec 08
Posts: 51
Credit: 26,320,167
RAC: 0
Level
Val
Message 26203 - Posted: 7 Jul 2012 | 0:03:00 UTC
Last modified: 7 Jul 2012 | 0:05:48 UTC

I had been running some PrimeGrid CUDA WUs until I found out that most CUDA work there is double precision. I didn't realize 680s were worse at DP than the 500s are. I guess SP projects are the way to go for the 600 series. I did get the 680 Signature 2 with 2 fans and it's very fast at SP work such as GPUGrid. I guess Milky Way would be slower also. It's kind of disappointing that Nvidia handicapped the 600s in DP. Maybe the GK110 will be better.

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Message 26234 - Posted: 8 Jul 2012 | 20:58:09 UTC - in response to Message 26203.

Yes, you can expect GK110 to be a DP monster.

MrS
____________
Scanning for our furry friends since Jan 2002

Mark Henderson
Send message
Joined: 21 Dec 08
Posts: 51
Credit: 26,320,167
RAC: 0
Level
Val
Message 26251 - Posted: 9 Jul 2012 | 18:16:56 UTC
Last modified: 9 Jul 2012 | 18:22:48 UTC

That's great. Do you know if most of the projects will eventually be DP, or is it only the mathematically leaning projects that will be this way?

My 570 is faster than my 680 by 15-20% on DP work.
GK110 is definitely on my list.
I just hope it's less than 600-700 dollars though. That hurts.
It may be released as Tesla or Quadro only; we will see I suppose.

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Message 26254 - Posted: 9 Jul 2012 | 20:28:35 UTC - in response to Message 26251.

Each project has different criteria. GPUGrid does not need FP64/DP.

The GTX 680 is much better for here. While the 570 is still good, it's not great at DP; it's just that the GTX680 is awful at it. If you want a top DP card get a high-end AMD/ATI card such as an HD7970. The GeForce cards are by and large better for SP.

A GeForce GK110 will be very pricey, and won't be available this year. If and when a 'GTX 685' does turn up, I would speculatively expect a performance increase of ~70 to 85% over the GTX680 for here. On DP projects the improvement over the GTX680 would obviously be massive. Big Kepler will arrive in the form of a Tesla K20 some time this year, probably Q4, but you won't be buying one!
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Message 26290 - Posted: 11 Jul 2012 | 16:09:32 UTC - in response to Message 26251.

do you know if most of the projects will eventually be DP

Never. And that's a good thing :)
The point being: it always takes more energy and hardware to do DP calculations. And it's not hard to design your DP hardware so that it can do 2 SP operations instead of 1 DP, so at best you can do DP at half the SP rate. That's why in performance-critical applications you should use SP whenever the precision is sufficient.
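
For the consumer cards discussed in this thread the gap is much wider than that theoretical 1/2. A small Python sketch using the commonly published peak-SP figures and DP:SP ratios for these chips (reference-spec assumptions, not GPUGrid measurements):

# Rough peak-DP estimate: peak SP GFLOPS times the architecture's DP:SP ratio.
# Ratios are the widely published ones for the consumer cards named here.
peak_sp = {                      # GFLOPS at reference clocks
    "GTX 580 (GF110)":  1581.1,
    "GTX 680 (GK104)":  3090.4,
    "HD 7970 (Tahiti)": 3788.8,
}
dp_ratio = {
    "GTX 580 (GF110)":  1 / 8,   # GeForce Fermi
    "GTX 680 (GK104)":  1 / 24,  # consumer Kepler
    "HD 7970 (Tahiti)": 1 / 4,
}
for card, sp in peak_sp.items():
    print(f"{card}: ~{sp * dp_ratio[card]:.0f} GFLOPS peak DP")
# ~198, ~129 and ~947 GFLOPS -- which is why Tahiti dominates DP benchmarks,
# while none of it matters for an SP project like GPUGrid.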

MrS
____________
Scanning for our furry friends since Jan 2002

Evil Penguin
Avatar
Send message
Joined: 15 Jan 10
Posts: 42
Credit: 18,255,462
RAC: 0
Level
Pro
Message 26294 - Posted: 11 Jul 2012 | 22:33:52 UTC

I think most of you ignored the fact that Anand's compute benchmark consisted mostly, if not entirely, of SP benchmarks.

It does seem like Kepler is a step backwards in terms of GPGPU.

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Message 26299 - Posted: 12 Jul 2012 | 9:20:23 UTC - in response to Message 26294.

Well, for here it's a good step forward. Obviously it's not for MW and some other projects. Pick a project and pick your cards for it. If you want to run POEM or MW get AMD cards. For here and Einstein get NVidia cards.
The bottom line is that these are gaming cards. They just happen to work well here because they support CUDA and GPUGrid doesn't need DP. Architecturally there are still issues with the design for crunching here, but performance is still better than the previous generation.
If you want a full-fat compute card from NVidia you will have to wait for the Tesla K20 (Q4), and then you'll have to be prepared to pay for it! It's unlikely that a GeForce variant of this will appear until next year.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Message 26304 - Posted: 12 Jul 2012 | 19:41:55 UTC - in response to Message 26294.

GP-GPU does not equal DP crunching. In fact, even with the CC 3.0 Keplers being a step backwards in DP performance, this doesn't really matter, since AMD is far superior in raw DP performance to any Fermi or earlier...

MrS
____________
Scanning for our furry friends since Jan 2002

Robert Gammon
Send message
Joined: 28 May 12
Posts: 63
Credit: 714,535,121
RAC: 0
Level
Lys
Message 26324 - Posted: 15 Jul 2012 | 22:18:44 UTC - in response to Message 25985.

4 hours is good.

Most of my CUDA42 workunits take about 5.5 hours on my GTX480

That is a bit shy of a 30% performance improvement for double the price.

Of course, power consumption drops by a similar amount. It's hard to factor the power consumption figure into the overall household utility expense in most areas of the USA (where time/demand pricing is not in force; time-of-day consumption may raise the utility cost by 100% or more compared to other times of the day).

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Message 26338 - Posted: 16 Jul 2012 | 20:50:46 UTC - in response to Message 26324.

It's hard to factor the power consumption figure into the overall household utility expense in most areas of the USA

That's true, but you don't have to. $1 of electricity cost is $1, no matter how much your other devices consume. All you really need to do is measure the power consumption at the wall with and without GPU-Grid running (or PC on/off) and multiply by the running time and your local electricity rate.
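
A worked example of that method in Python (the wall readings and the $0.12/kWh rate below are made-up inputs, not measurements from this thread):

# Cost of crunching = (wall power with GPU-Grid - wall power without) x time x rate.
watts_crunching = 420    # wall reading while crunching, W (hypothetical)
watts_idle      = 250    # wall reading with GPU-Grid suspended, W (hypothetical)
price_per_kwh   = 0.12   # local electricity rate, $/kWh (hypothetical)
hours_per_day   = 24     # card crunches around the clock

extra_kwh_per_day = (watts_crunching - watts_idle) / 1000 * hours_per_day
cost_per_day = extra_kwh_per_day * price_per_kwh
print(f"GPU-Grid adds ~${cost_per_day:.2f}/day, ~${cost_per_day * 365:.0f}/year")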

MrS
____________
Scanning for our furry friends since Jan 2002

Rantanplan
Send message
Joined: 22 Jul 11
Posts: 166
Credit: 138,629,987
RAC: 0
Level
Cys
Message 26382 - Posted: 19 Jul 2012 | 18:20:44 UTC - in response to Message 26338.
Last modified: 19 Jul 2012 | 18:21:24 UTC

Hello, is the following a good idea?

I want to overclock my GTX 670 (Asus).

Now I thought I could make it easy, like so:

Raise the voltage by 1mV per 1MHz. Is it a good idea,
or will I burn my chip away!?

Greets :)

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Message 26389 - Posted: 19 Jul 2012 | 22:24:54 UTC - in response to Message 26382.

Bad idea. First, increase the clock speed without a voltage increase and see how far you get (should be somewhere around 1050 - 1100 MHz, from what I've read). If you're comfortable with the temperature, power consumption and noise at this setting you can push further. At that point "1% more voltage for 1% higher clock" is a fair approximation, although the real function is at least quadratic, maybe even exponential.
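
To see why that last step is expensive, a quick Python sketch of the usual dynamic-power rule of thumb (power roughly proportional to frequency times voltage squared; the clocks and voltage are illustrative, not a measured GTX 670 curve):

# Dynamic power scales roughly with frequency x voltage squared.
base_clock_mhz = 1006
base_voltage_v = 1.175

for oc_percent in (0, 5, 10):
    f = base_clock_mhz * (1 + oc_percent / 100)
    v = base_voltage_v * (1 + oc_percent / 100)   # the 1%-volts-per-1%-clock rule
    rel_power = (f / base_clock_mhz) * (v / base_voltage_v) ** 2
    print(f"+{oc_percent}% OC: {f:.0f} MHz @ {v:.3f} V -> ~{rel_power:.2f}x power")
# +10% clock with +10% voltage already lands at ~1.33x power draw.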

MrS
____________
Scanning for our furry friends since Jan 2002

Rantanplan
Send message
Joined: 22 Jul 11
Posts: 166
Credit: 138,629,987
RAC: 0
Level
Cys
Message 26398 - Posted: 20 Jul 2012 | 14:16:00 UTC - in response to Message 26389.

Hm, I didn't do it. Increasing the "power target" does it all. It clocks itself; no thinking about overvolting. Thanks.

klepel
Send message
Joined: 23 Dec 09
Posts: 189
Credit: 4,670,856,793
RAC: 2,502,532
Level
Arg
Message 26399 - Posted: 20 Jul 2012 | 15:46:17 UTC - in response to Message 26398.

Hm, I didn't do it. Increasing the "power target" does it all. It clocks itself; no thinking about overvolting. Thanks.


What target have you set? So I might do it as well.

Snow Crash
Send message
Joined: 4 Apr 09
Posts: 450
Credit: 539,316,349
RAC: 0
Level
Lys
Message 26401 - Posted: 20 Jul 2012 | 16:24:54 UTC

On my GTX 670 it does not matter what I set the power target at; it always pulls 1175 mV. Knowing that I was not able to increase/decrease volts, I might as well OC as far as is stable... turns out that 1259 GPU and 3206 MEM is rock solid stable - 99 consecutive successful LONG WUs so far. Win7x64, BOINC 7.0.25.

Looking through some results for GTX 670s across GPUGrid for Win7x64 with NATHAN WUs: the OC I listed above takes about 9%-10% less time per WU and gets me into the ballpark of a stock GTX680!!!
____________
Thanks - Steve

Rantanplan
Send message
Joined: 22 Jul 11
Posts: 166
Credit: 138,629,987
RAC: 0
Level
Cys
Message 26402 - Posted: 20 Jul 2012 | 16:48:04 UTC - in response to Message 26401.

I did base overclocking (but too far); now at 915+90MHz, with the power target set in NVIDIA Inspector at 122%, but it won't raise further than 1175mV. Not always stable right now.

Profile Retvari Zoltan
Avatar
Send message
Joined: 20 Jan 09
Posts: 2343
Credit: 16,201,255,749
RAC: 0
Level
Trp
Message 26412 - Posted: 23 Jul 2012 | 8:40:13 UTC
Last modified: 23 Jul 2012 | 8:50:53 UTC

I've begun to upgrade my Fermi cards to Kepler cards, and I've made some measurements with my first partly upgraded system.
Its configuration is (at the moment):

ASUS P7P55 WS Supercomputer motherboard
Intel Core i7-870 @ 3855MHz (24*160MHz)
2*2GB DDR3 2000MHz RAM
320GB HDD
MSI GTX 480 @ 800MHz (1.075V) (with an Artctic Cooling Accelero Xtreme Plus)
Asus GTX-670 DC2 @ 1084MHz (1.137V) (factory overclocked)

A NATHAN_RPS_1120528 runs nearly 13% faster (15137 sec vs 17133 sec) on the GTX 670 @ 1084MHz than on the GTX 480 @ 800MHz.
I've also measured the power consumption at the wall outlet (230V AC).
When my PC is idle (no tasks running, but power management is disabled, so the CPU runs at full speed) it draws 178 Watts.
When a task is running on the GTX 480 @ 800MHz (99% GPU usage): 378 Watts
When one more task is running on the GTX 670 @ 1084MHz (99% GPU usage): 552 Watts
When 4 rosetta@home are running on the CPU: 625 Watts
So, the extra power consumption of the different parts when they are in use is:
GTX 480 @ 800MHz (99% GPU usage): 200 Watts
GTX 670 @ 1083MHz (99% GPU usage): 174 Watts
CPU 4 cores: 73 Watts
As you can see, the GTX 670 @ 1083MHz consumes 87% of the power of the GTX 480 @ 800MHz.
Hopefully I can do more measurements this week.
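
A quick cross-check of these numbers in Python, using only the runtimes and wall-power deltas quoted above:

# Performance per Watt: speedup divided by the fraction of power consumed.
t_480, t_670 = 17133, 15137   # NATHAN_RPS runtimes in seconds, from above
w_480, w_670 = 200, 174       # measured extra wall power per card, W

speedup    = t_480 / t_670    # ~1.13x faster
power_frac = w_670 / w_480    # ~0.87x the power
print(f"perf/W improvement: {speedup / power_frac:.2f}x")   # ~1.30x

Which is where the ~30% performance-per-Watt figure in the reply below comes from.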

Paul Raney
Send message
Joined: 26 Dec 10
Posts: 115
Credit: 416,576,946
RAC: 0
Level
Gln
Message 26414 - Posted: 23 Jul 2012 | 12:27:30 UTC - in response to Message 26412.

Retvari:

Thank you for the power measurements. This is very helpful. We can use this info to justify the purchase of new cards based on power savings.

I see a 670 in my future!
____________
Thx - Paul

Note: Please don't use driver version 295 or 296! Recommended versions are 266 - 285.

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Message 26418 - Posted: 23 Jul 2012 | 21:44:44 UTC - in response to Message 26414.

That's ~30% better performance per Watt.

Different task types may show different improvements (25%, 30%, 35%).

The GTX670, 680 and 690 cards are 14.47, 15.85 and 18.74 GFLOPS/W respectively. That doesn't necessarily reflect performance here, but a 680 might be more in line with a 480, and might take it over 40%. That said, the 580 would still be more competitive.
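
Those GFLOPS/W figures are just the peak SP GFLOPS divided by each card's board power; a one-line check in Python (the 170/195/300W TDPs are the published reference specs):

cards = {               # (peak SP GFLOPS, reference TDP in W)
    "GTX 670": (2460.0,  170),
    "GTX 680": (3090.4,  195),
    "GTX 690": (5621.76, 300),
}
for name, (gflops, tdp) in cards.items():
    print(f"{name}: {gflops / tdp:.2f} GFLOPS/W")
# 14.47, 15.85 and 18.74 -- as quoted above.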

The PCIE3 vs PCIE2 debate is still open.

Can you reduce that FOC?
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

Profile Retvari Zoltan
Avatar
Send message
Joined: 20 Jan 09
Posts: 2343
Credit: 16,201,255,749
RAC: 0
Level
Trp
Message 26420 - Posted: 23 Jul 2012 | 22:27:22 UTC - in response to Message 26418.
Last modified: 23 Jul 2012 | 22:28:50 UTC

...but a 680 might be more in line with a 480, and might take it over 40%. That said, the 580 would still be more competitive.

I'll check that tomorrow. I've just finished changing one of my GTX 590s to a GTX 680 (GV-N680OC-2GD).

The PCIE3 vs PCIE2 debate is still open.

I'm not planning to upgrade my motherboards in the near future, but maybe I can put one of my cards into a PC equipped with a PCIe3-capable motherboard.

Can you reduce that FOC?

Sure. To what frequency and voltage? (for the 670 and for the 680)
I'm also planning to measure the power consumption of the GTX 480 at stock speed and voltage.

Profile Retvari Zoltan
Avatar
Send message
Joined: 20 Jan 09
Posts: 2343
Credit: 16,201,255,749
RAC: 0
Level
Trp
Message 26421 - Posted: 23 Jul 2012 | 22:47:26 UTC
Last modified: 23 Jul 2012 | 22:49:44 UTC

The GFLOPS figures look very odd in the BOINC manager's log:

NVIDIA GPU 0: GeForce GTX 680 (driver version 30479, CUDA version 5000, compute capability 3.0, 2048MB, 582 GFLOPS peak)
NVIDIA GPU 1: GeForce GTX 590 (driver version 30479, CUDA version 5000, compute capability 2.0, 1536MB, 1244 GFLOPS peak)
NVIDIA GPU 2: GeForce GTX 590 (driver version 30479, CUDA version 5000, compute capability 2.0, 1536MB, 1244 GFLOPS peak)

NVIDIA GPU 0: GeForce GTX 480 (driver version 30479, CUDA version 5000, compute capability 2.0, 1536MB, 1538 GFLOPS peak)
NVIDIA GPU 1: GeForce GTX 670 (driver version 30479, CUDA version 5000, compute capability 3.0, 2048MB, 439 GFLOPS peak)

It looks like I'm downgrading my cards....

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Message 26422 - Posted: 24 Jul 2012 | 10:04:09 UTC - in response to Message 26421.

Ref GFLOPS:

    GTX 670 is 2460
    GTX 680 is 3090.4
    GTX 690 is 2*2810.88=5621.76



Your versions of BOINC are just not reading them correctly (due to architecture changes). A more recent client (7.0.28) should manage it, but it's just a reading.
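
Those reference figures follow directly from the Kepler spec sheet: 2 FLOPs per core per clock (one fused multiply-add) times the CUDA core count times the clock. A sketch in Python, assuming the published reference core counts and clocks:

# Peak SP GFLOPS = 2 FLOPs/clock (FMA) x CUDA cores x core clock (MHz) / 1000.
cards = {
    "GTX 670": (1344, 915),        # cores, reference clock in MHz
    "GTX 680": (1536, 1006),
    "GTX 690": (2 * 1536, 915),    # two GK104 GPUs on one board
}
for name, (cores, mhz) in cards.items():
    print(f"{name}: {2 * cores * mhz / 1000:.1f} GFLOPS peak SP")
# 2459.5, 3090.4 and 5621.8 -- matching the figures above.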
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

Profile Retvari Zoltan
Avatar
Send message
Joined: 20 Jan 09
Posts: 2343
Credit: 16,201,255,749
RAC: 0
Level
Trp
Message 26427 - Posted: 24 Jul 2012 | 23:17:24 UTC - in response to Message 26412.

I've tried to refine my measurements with my partly upgraded configuration.
It's very hard to calculate the power consumption of the different parts by measuring the overall power consumption at different workloads, because the parts heat each other, which causes extra power consumption in the previously measured parts. My previous measurements didn't take this effect into consideration, so the extra 174 Watts doesn't come only from the task running on the GTX 670.

no GPU tasks, no CPU tasks, no power management (idle) : 180W

no CPU tasks, no GPU task on the GTX 670
GTX 480 1025mV, 701MHz, 99%, 61°C: 344W (164W)
GTX 480 1050mV, 701MHz, 99%, 63°C: 355W (175W) (+11W)
GTX 480 1075mV, 701MHz, 99%, 64°C: 365W (185W) (+21W)
GTX 480 1075mV, 726MHz, 99%, 64°C: 370W (190W) (+26W)
GTX 480 1075mV, 749MHz, 99%, 65°C: 374W (194W) (+30W)
GTX 480 1075mV, 776MHz, 99%, 66°C: 378W (198W) (+34W)
GTX 480 1075mV, 797MHz, 99%, 66°C: 382W (202W) (+38W)

GTX 480 1075mV, 797MHz, 99%, 71°C: 395W (215W) (+51W)
GTX 670 1162mV, 981MHz, 97%, 66°C: 555W (160W)

+ 4 CPU cores: 620W

GTX 480 1075mV, 797MHz, 99%, 67°C: 383W (203W)
GTX 670 1137mV,1084MHz, 97%, 60°C: 552W (160W)

GTX 480 1075mV, 797MHz, 99%, 71°C: 397W (217W) (+53W)
GTX 670 1137mV,1084MHz, 97%, 65°C: 559W (162W)

GTX 480 1075mV, 797MHz, 99%, 71°C: 397W (217W)
GTX 670 1137mV,1084MHz, 0%, 44°C


GTX 480 1075mV, 797MHz, 0%, 44°C:
GTX 670 1137mV,1084MHz, 97%, 65°C: 341W (161W)

All in all, the GTX 670 is better than I calculated from my first measurements:
The GTX 670 @ 1083MHz (162W) consumes 75% of what the GTX 480 @ 800MHz does (217W).

These were my last measurements with this host.
I'm going to change the GTX 480 to a GTX 670 on this host right now, and I will measure the power consumption again after that.

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Message 26428 - Posted: 25 Jul 2012 | 0:24:44 UTC - in response to Message 26427.

Nice set of data.

By reducing the second card's heat radiation both cards should benefit somewhat, and you might even see the pair reach +40% performance per Watt over two GTX480's.

Out of curiosity, are your CPU heatsink fins/blades vertical or horizontal?
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

Profile Retvari Zoltan
Avatar
Send message
Joined: 20 Jan 09
Posts: 2343
Credit: 16,201,255,749
RAC: 0
Level
Trp
Message 26430 - Posted: 25 Jul 2012 | 1:15:11 UTC - in response to Message 26428.
Last modified: 25 Jul 2012 | 1:15:46 UTC

By reducing the second card's heat radiation both cards should benefit somewhat, and you might even see the pair reach +40% performance per Watt over two GTX480's.

I agree. But the gain is even bigger than I expected: my host consumes 520W now under full load (2 GPU tasks on the two GTX 670s, and 4 CPU tasks). It was 625W before. So now my host consumes 105W less than before my first measurement, and probably 210W less than with two GTX 480s @800MHz. I expected 217W-162W(=55W)+~10W gain. I have to double check it tomorrow (runtimes etc.).

Out of curiosity, are your CPU heatsink fins/blades vertical or horizontal?

My CPU heatsink is a Noctua NH-D14; its fins are vertical, and the axes of the fans are horizontal. My motherboard is vertically mounted, and the GPUs are under the CPU. The cool air comes from the side of the case, and the hot air from the CPU heatsink is exhausted through the back of the case.

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Message 26434 - Posted: 25 Jul 2012 | 9:10:31 UTC - in response to Message 26427.

It's very hard to calculate the power consumption of the different parts by measuring the overall power consumption at different workloads, because the parts heat each other, which causes extra power consumption in the previously measured parts. My previous measurements didn't take this effect into consideration, so the extra 174 Watts doesn't come only from the task running on the GTX 670.

When I read your first post I thought the same, but then decided "never mind, he already put so much work into these measurements..." ;)

And there's another factor: PSU efficiency is not constant over a large load range. Past 50% load, efficiency will probably drop a bit with increasing load.
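
In other words, the wall meter reads AC input while the components draw DC, so the same DC load maps to different wall readings depending on the PSU's efficiency at that point. A small Python sketch (the efficiency values are hypothetical, in the spirit of a typical curve, not this PSU's datasheet):

# Same DC load, different efficiency -> different wall reading.
def wall_watts(dc_load_w, efficiency):
    return dc_load_w / efficiency

curve = [(160, 0.90), (400, 0.92), (640, 0.89)]   # (DC load W, assumed efficiency)
for dc, eff in curve:
    print(f"{dc} W DC @ {eff:.0%} efficient -> {wall_watts(dc, eff):.0f} W at the wall")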

MrS
____________
Scanning for our furry friends since Jan 2002

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Message 26436 - Posted: 25 Jul 2012 | 12:03:01 UTC - in response to Message 26434.

Some of the newer PSU's hold steady from ~5% through to 85 or 90%, but not all. So an 850W PSU (for example) could have fairly linear power efficiency from ~40W right up to ~723W. Don't know the PSU in use though, so it might well have struggled for efficiency with the two GTX480's. If it did it's a big consideration, but that's the setup, and the new setup is still 200W better off, without replacing the PSU.

Looking at the PSU's power/efficiency curve would tell you the PSU efficiency at different power usages. Ambient or motherboard/hard drive temps might on the other hand indicate cooling issues (there might have been higher case temps caused by more power usage/lack of heat displacement). Or both.

I notice 97% GPU utilization for the GTX670 vs 99% with the GTX480. I would say this is down to dual-channel RAM and/or PCIE2, but 2% isn't much to worry about, if that's all it really is. I still expect that moving from PCIE2 to PCIE3 would make a difference even where you already see 99%, but it would only show up in the results' run times. With GTX680's the 2% might rise to 3%, and possibly to 4 or 5% with GTX690's. With triple- or quad-channel RAM this would probably disappear, but the runtimes with PCIE3 should better those on PCIE2, even if both show 99% GPU utilization.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

Chaos_AD
Send message
Joined: 18 Jul 12
Posts: 2
Credit: 15,245,690
RAC: 0
Level
Pro
Message 26437 - Posted: 25 Jul 2012 | 14:41:19 UTC

I would like to ask something, and I'm sorry if it has been answered in another thread. I am running both WCG and GPUgrid on my PC (3770k/M5G/GTX670). Is it normal that some projects have more GPU usage than others? Also, when I run WCG with all 4c/8t I see 73-74% GPU usage, but when I stop WCG I see 81-82% GPU usage. Why does this happen? Can I do something to run both WCG and at the same time utilize my GPU at 100%?

Old man
Send message
Joined: 24 Jan 09
Posts: 42
Credit: 16,676,387
RAC: 0
Level
Pro
Message 26439 - Posted: 25 Jul 2012 | 17:09:16 UTC - in response to Message 26437.

I would like to ask something, and I'm sorry if it has been answered in another thread. I am running both WCG and GPUgrid on my PC (3770k/M5G/GTX670). Is it normal that some projects have more GPU usage than others? Also, when I run WCG with all 4c/8t I see 73-74% GPU usage, but when I stop WCG I see 81-82% GPU usage. Why does this happen? Can I do something to run both WCG and at the same time utilize my GPU at 100%?


Hey. All gpugrid.net work packages need a CPU core to feed them. You have a quad-core processor with Hyper-Threading, so your processor shows up as an eight-core CPU in the operating system. Leave one core free so that the processor can support the graphics card in its calculations.

In BOINC Manager, go to CPU usage (preferences) and select that you want to use 87.5% of the processors, so one thread is always free for the graphics card.

Even so, you will very likely not get 100% GPU utilization.
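
The 87.5% figure is just (threads - 1) / threads; the same trivial calculation for other CPUs, sketched in Python:

# BOINC's "use at most X% of the processors" setting that frees exactly one thread.
for threads in (4, 8, 12):
    pct = (threads - 1) / threads * 100
    print(f"{threads} threads: use at most {pct:.1f}% of the processors")
# 4 -> 75.0%, 8 -> 87.5%, 12 -> 91.7%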

Profile Retvari Zoltan
Avatar
Send message
Joined: 20 Jan 09
Posts: 2343
Credit: 16,201,255,749
RAC: 0
Level
Trp
Message 26440 - Posted: 25 Jul 2012 | 17:09:18 UTC - in response to Message 26436.
Last modified: 25 Jul 2012 | 17:11:20 UTC

More power consumption and performance measurements are in progress :)
I have a PC with a Core i5-3570K on an Intel DH77EB motherboard for a couple of days. I'm not sure if the motherboard supports PCIe3.0 though.

5pot
Send message
Joined: 8 Mar 12
Posts: 411
Credit: 2,083,882,218
RAC: 0
Level
Phe
Message 26441 - Posted: 25 Jul 2012 | 18:11:36 UTC

My 3rd 680 will be in my hands in about 2 weeks. Since I already have 2 working at PCIe 3 on my X79, when I install the 3rd it will be PCIe 2 x8 (PCIe 1) until I apply the hack, which will make it PCIe 3 x8 (PCIe 2). I will test both to see how the times compare.

So in short in 2 weeks I will have results on a 680 across all PCIe bandwidth speeds.

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Message 26442 - Posted: 25 Jul 2012 | 18:16:03 UTC - in response to Message 26440.

About the Motherboard Intel says,
"One PCI Express 3.0 x 16 discrete graphics card connector"
"Support for PCI Express* 3.0 x16 add-in graphics card"
"PCI Express* 3.0 support requires select 3rd generation Intel® Core™ processors"

About the i5-3570K Intel says,
"3rd Generation Intel® Core™ i5 Processor"
"PCI Express Revision 3.0"

So I say maybe, just maybe :)

____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

Profile Retvari Zoltan
Avatar
Send message
Joined: 20 Jan 09
Posts: 2343
Credit: 16,201,255,749
RAC: 0
Level
Trp
Message 26443 - Posted: 25 Jul 2012 | 18:24:12 UTC - in response to Message 26436.

The power supply in my dual GTX-670 host is an Enermax MODU87+ 800W.

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Message 26445 - Posted: 25 Jul 2012 | 19:30:31 UTC - in response to Message 26443.

That PSU has a maximum efficiency at around 400W, so it's going to be about as efficient at 500W as it is at 300W. Going by its efficiency curve, the loss of efficiency with two GTX480's vs two GTX670's would be no more than 2%, <10W.


____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

Chaos_AD
Send message
Joined: 18 Jul 12
Posts: 2
Credit: 15,245,690
RAC: 0
Level
Pro
Message 26447 - Posted: 25 Jul 2012 | 20:34:37 UTC - in response to Message 26439.
Last modified: 25 Jul 2012 | 20:37:55 UTC

Hey. All gpugrid.net work packages need a cpu to feed it. You have a quad-core processor with hyperthreading supported so your processor is shown in eight core cpu in operating system. Leave a one core-free so that the processor can support the graphics cards in its calculations.

BOINC Manager, go to cpu usage (preferences) and select that you want to use 87.5% of the processors so one core is always free for the graphics card.

Very likely you will not get 100% GPU utilization.


At the moment I'm running WCG with 4c/8t at 100% and GPUgrid runs a task at 92% GPU load, which goes to 94-95% when I close WCG. So I guess GPU usage depends mainly on the project.

edit: changing the CPU usage from 100% to 87.5% didn't change the GPU load at all.

Profile Retvari Zoltan
Avatar
Send message
Joined: 20 Jan 09
Posts: 2343
Credit: 16,201,255,749
RAC: 0
Level
Trp
Message 26455 - Posted: 26 Jul 2012 | 1:04:22 UTC - in response to Message 26442.
Last modified: 26 Jul 2012 | 1:10:09 UTC

My experimental PCIe3.0 x16 host is up and running.
I've checked the speed of the PCIe bus with GPU-Z 0.6.3; according to this tool the GPU runs at PCIe3.0 x16.
It has the Gigabyte GV-N680OC-2GD in it (moved from my main cruncher PC).
We'll see how it performs against my old (PCIe2.0 x16) host.
There are no CPU tasks running on this host, because it has only a 400W power supply.
It was quite an adventure to install Windows XP x64 on this configuration.
I've checked the GPU with MSI Afterburner 2.2.3:
Usage: 97%
Voltage: 1.175V
Clock: 1137MHz
I had the same numbers with my old host.

Profile Retvari Zoltan
Avatar
Send message
Joined: 20 Jan 09
Posts: 2343
Credit: 16,201,255,749
RAC: 0
Level
Trp
Message 26545 - Posted: 4 Aug 2012 | 18:43:40 UTC - in response to Message 26418.

The PCIE3 vs PCIE2 debate is still open.

My experiment with my PCIe 3.0 host is over. Here are the results:

It has processed 5 kinds of workunits:

WU type                                 | # of WUs | shortest, s (h:mm:ss) | longest, s (h:mm:ss) | average, s (h:mm:ss)
NATHAN_RPS1120528                       |    16    | 13596.73 (3:46:36)    | 13729.89 (3:48:49)   | 13633.03 (3:47:13)
PAOLA_HGAbis                            |     8    | 17220.55 (4:47:00)    | 17661.73 (4:54:21)   | 17505.56 (4:51:45)
rundig1_run9-NOELIA_smd                 |     1    | 21593.69 (5:59:53)    |                      |
run5_replica43-NOELIA_sh2fragment_fixed |     1    | 25169.00 (6:59:29)    |                      |
run2_replica6-NOELIA_sh2fragment_fixed  |     1    | 35357.31 (9:49:17)    |                      |

BTW its power consumption was only 247 Watts with a GTX 680 and all 4 CPU cores crunching (3x rosetta + 1x GPUGrid).

For comparison, here are one of my old host's power consumption measurements:

Core i7-970 @4.1GHz (24*171MHz, 1.44v, 32nm, 6 HT cores)
ASRock X58 Deluxe motherboard
3x2GB OCZ 1600MHz DDR3 RAM
2 HDD
GPU1: Gigabyte GTX 480@800MHz, 1.088V (BIOS:1A)
GPU2: Asus ..... GTX 480@800MHz, 1.088V (BIOS:21)

Idle (No CPU tasks, No GPU tasks, No power management): 233W
CPU cores 32°C to 40°C
GPU1 idle 00%: 36°C - GPU2 idle - 00%: 39°C

1 GPU task running:
GPU1 idle 00%, 36°C - GPU2 in use 99%, 51°C: 430W
GPU1 idle 00%, 36°C - GPU2 in use 99%, 60°C: 434W
GPU1 idle 00%, 37°C - GPU2 in use 99%, 65°C: 438W
GPU1 idle 00%, 37°C - GPU2 in use 99%, 69°C: 442W

2 GPU tasks running:
GPU1 in use 99%, 47°C - GPU2 in use 99%, 69°C: 647W
GPU1 in use 99%, 53°C - GPU2 in use 99%, 71°C: 656W
GPU1 in use 99%, 60°C - GPU2 in use 99%, 72°C: 665W
GPU1 in use 99%, 62°C - GPU2 in use 99%, 74°C: 670W
GPU1 in use 99%, 63°C - GPU2 in use 99%, 76°C: 675W
GPU1 in use 99%, 66°C - GPU2 in use 99%, 79°C: 680W

2 GPU tasks and 6 CPU tasks running:
GPU1 in use 99%, 47°C - GPU2 in use 99%, 69°C: 756W
CPU cores 50°C to 66°C

I would like to compare the performance of PCIe2.0 (x16 and x8), as my main cruncher PC has 3 different cards right now (GTX 670 OC, GTX 680 OC, and a GTX 690), but the lack of info in the stderr output file about which GPU crunched a task makes it very hard.

Dylan
Send message
Joined: 16 Jul 12
Posts: 98
Credit: 386,043,752
RAC: 0
Level
Asp
Message 26601 - Posted: 13 Aug 2012 | 18:29:51 UTC

Might be a bit late to answer, but I have been using an EVGA GTX 670 SC (which is slightly overclocked) for about a month. I also use the 301.42 driver for the card.

Other specs:
Intel i7-3820 @ 4.3 GHz
16 GB of RAM
Windows 7 64-bit

I usually run GPUGRID during the day while idle and through the night. I run all the applications available and so far my times have been pretty fast.

For long-runs, it takes about 5-7 hours, depending on the task. The standard tasks run for about 2-5 hours also. The beta tasks take anywhere from 30 minutes to 2 hours or so.

Note that the times do range, and as of now I'm crunching many beta tasks, so I can't give very precise times for the other tasks.

As for heating, the card stays at about 67 degrees Celsius during long-runs and doesn't go higher. The card load is at about 90-95% during long-runs, also, and ranges around 50% during beta tasks.

The only problems I've had are driver crashes when stopping the beta tasks that have come up recently, which are the Noelia tasks.

Hope this helped.

Profile JStateson
Avatar
Send message
Joined: 31 Oct 08
Posts: 186
Credit: 3,362,427,550
RAC: 83,262
Level
Arg
Message 26689 - Posted: 25 Aug 2012 | 3:24:04 UTC - in response to Message 26601.

Might be a bit late to answer, but I have been using an EVGA GTX 670 SC (which is slightly overclocked) for about a month. I also use the 301.42 driver for the card.

Other specs:
Intel i7-3820 @ 4.3 GHz
16 GB of RAM
Windows 7 64-bit

I usually run GPUGRID during the day while idle and through the night. I run all the applications available and so far my times have been pretty fast.

For long-runs, it takes about 5-7 hours, depending on the task.


Dylan: When comparing your GTX670 long runs to my GTX570 long runs, clearly your 670 is slightly faster. However, your CPU times seem extremely large compared to mine. I am guessing that my Q9550 (4-core, non-hyperthreaded) has more cache available than your 8-thread i7-3820?

I wonder if there is going to be a big improvement with CUDA5? We are both running the CUDA42 GPUGrid app.

Dylan
Send message
Joined: 16 Jul 12
Posts: 98
Credit: 386,043,752
RAC: 0
Level
Asp
Message 26703 - Posted: 25 Aug 2012 | 20:26:26 UTC - in response to Message 26689.

I'm pretty sure my CPU times are larger because I didn't give a core to the GPU, and instead gave all 8 to another project. Up until now I never really thought about it, and will try to fix it as soon as I can. Thanks for the notice.

Snow Crash
Send message
Joined: 4 Apr 09
Posts: 450
Credit: 539,316,349
RAC: 0
Level
Lys
Message 26704 - Posted: 25 Aug 2012 | 22:32:13 UTC

6XX cards always use a full CPU thread, so if you set BOINC to use 1 less thread than your CPU has available, your CPU-only projects will process much better!
____________
Thanks - Steve
