Message boards : Graphics cards (GPUs) : Anyone tried a GTX670 on GPUgrid?
Author | Message |
---|---|
I am looking at getting a Palit GTX670 to replace a GTX570. More specifically their Jetstream version. Has anyone tried a GTX670 on here? What are your crunch times like? | |
ID: 25353 | Rating: 0 | rate: / Reply Quote | |
GTX 670, GTX 680, GTX 690 is not supported yet (by the CUDA 3.1 application). They will be supported (hopefully) in a couple of days by the CUDA 4.2 application, which is in beta testing right now. | |
ID: 25356 | Rating: 0 | rate: / Reply Quote | |
On the first betas, my 670 was about 10% slower. We'll find out the real differences soon enough though. This was on W7. | |
ID: 25368 | Rating: 0 | rate: / Reply Quote | |
On the first betas, my 670 was about 10% slower. We'll find out the real differences soon enough though. This was on W7. Did things improve with the cuda42 version? ____________ BOINC blog | |
ID: 25377 | Rating: 0 | rate: / Reply Quote | |
I only ran the first beta set on the 670. I may have been playing D3 on the second round on that card......... | |
ID: 25378 | Rating: 0 | rate: / Reply Quote | |
Mark might have understood you to mean "GTX670 10% slower than his GTX570" rather than "GTX670 10% slower than a GTX680". | |
ID: 25382 | Rating: 0 | rate: / Reply Quote | |
Ah...... | |
ID: 25383 | Rating: 0 | rate: / Reply Quote | |
MarkJ, I think that looks like a decent GPU. The GTX670 should prove to be a good replacement for a GTX570. Perhaps around 30% more work per day than the GTX570, and for less energy, so ballpark ~50% to 60% more efficient per Watt. | |
ID: 25405 | Rating: 0 | rate: / Reply Quote | |
My GTX670 in Win 7 with 301.42 driver. | |
ID: 25408 | Rating: 0 | rate: / Reply Quote | |
At GPUGrid, W7 and Vista suffer an 11%+ loss in performance compared to WinXP or Linux. When I tested a 2008 server the loss was only ~3%. | |
ID: 25423 | Rating: 0 | rate: / Reply Quote | |
True enough. Application programming is the difference. All projects need to define their resources for programming based on their assets and ability to accommodate the new tech. Please don't let these limitations confuse donors into thinking that a project doesn't care in the "short run". :) | |
ID: 25425 | Rating: 0 | rate: / Reply Quote | |
Vista and 7 are indeed based on the NT code base. Win 7 still identifies itself as version 6.1, which refers to the old NT nomenclature. The difference you're talking about is the display driver model, which changed with Vista. | |
ID: 25447 | Rating: 0 | rate: / Reply Quote | |
Back to the original question... | |
ID: 25728 | Rating: 0 | rate: / Reply Quote | |
Back to the original question... The short queue already has the cuda4.2 application, and a couple of GTX 680s and GTX 670s are already crunching "production" tasks. | |
ID: 25729 | Rating: 0 | rate: / Reply Quote | |
What do GPUGrid pros make of this statement from Anandtech regarding the 690? | |
ID: 25784 | Rating: 0 | rate: / Reply Quote | |
They're discussing double precision. This project is single precision. | |
ID: 25785 | Rating: 0 | rate: / Reply Quote | |
Well one of them is installed and off and running. Pretty pics can be found here | |
ID: 25790 | Rating: 0 | rate: / Reply Quote | |
There are other IBUCH tasks around that get up to 96. The ones you have now are actually the slowest out of all the different WUs in the short queue. | |
ID: 25791 | Rating: 0 | rate: / Reply Quote | |
Asked differently, what can one extrapolate from that review/benchmark about a 690's performance here? Easy: multiply the throughput of a GTX680 by 2 and you're basically there. Average clock speeds will be slightly lower, but this approximation should be good enough. Otherwise... as 5Pot said: DP performance is irrelevant here :) MrS ____________ Scanning for our furry friends since Jan 2002 | |
ID: 25798 | Rating: 0 | rate: / Reply Quote | |
I have a MSI factory overclocked 670 and it gives me computation errors in Linux right off the bat with the 302.17 drivers. I'm thinking it has something to do with the overclock since other people seem to be crunching just fine on Linux with the newest drivers and a 670. Either way, I've reverted back to 295.59 and it averages about four hours on a cuda42 long run, which I would assume is decent since it says 8-12 hours on fastest card in the description. | |
ID: 25985 | Rating: 0 | rate: / Reply Quote | |
Either way, I've reverted back to 295.59 and it averages about four hours on a cuda42 long run ... which tells us that the 302 driver was to blame, not the factory overclock. Bad for nVidia, good for you :) MrS ____________ Scanning for our furry friends since Jan 2002 | |
ID: 26034 | Rating: 0 | rate: / Reply Quote | |
Hi: I have also installed 302.17, and with my GTX295 there is no way to run CUDA 4.2 tasks. From what I read, it's a matter of reinstalling the old 295.59 as soon as the CUDA 3.1 tasks I have running finish... We'll see. | |
ID: 26071 | Rating: 0 | rate: / Reply Quote | |
I have a GTX480 and it runs GPUGrid Fine and Dandy under the Linux 302.17 driver. | |
ID: 26202 | Rating: 0 | rate: / Reply Quote | |
I had been running some PrimeGrid CUDA WUs until I found out that most CUDA work there is double precision. I didn't realize the 680s were worse at DP than the 500 series. I guess SP projects are the way to go for the 600 series. I did get the 680 Signature 2 with 2 fans and it's very fast at SP work such as GPUGrid. I guess Milky Way would be slower also. It's kind of disappointing Nvidia handicapped the 600 series in DP. Maybe the GK110 will be better. | |
ID: 26203 | Rating: 0 | rate: / Reply Quote | |
Yes, you can expect GK110 to be a DP monster. | |
ID: 26234 | Rating: 0 | rate: / Reply Quote | |
That's great. Do you know if most projects will eventually be DP, or is it only the mathematically leaning projects that will go this way? | |
ID: 26251 | Rating: 0 | rate: / Reply Quote | |
Each project has different criteria. GPUGrid does not need FP64/DP. | |
ID: 26254 | Rating: 0 | rate: / Reply Quote | |
do you know if most of the projects will eventually be DP Never. And that's a good thing :) The point being: it always takes more energy and hardware to do DP calculations. And it's not hard to design your DP hardware so that it can do 2 SP operations instead of 1 DP. So at best you can do DP at half the SP rate. That's why in performance-critical applications you should use SP whenever the precision is sufficient. MrS ____________ Scanning for our furry friends since Jan 2002 | |
ID: 26290 | Rating: 0 | rate: / Reply Quote | |
I think most of you ignored the fact that Anand's compute benchmark consisted of mostly if not all SP benchmarks. | |
ID: 26294 | Rating: 0 | rate: / Reply Quote | |
Well, for here it's a good step forward. Obviously it's not for MW and some other projects. Pick a project and pick your cards for it. If you want to run POEM or MW get AMD cards. For here and Einstein get NVidia cards. | |
ID: 26299 | Rating: 0 | rate: / Reply Quote | |
GP-GPU does not equal DP crunching. In fact, even with the CC 3.0 Keplers being a step backwards in DP performance, this doesn't really matter, since AMD is far superior in raw DP performance to any Fermi or earlier... | |
ID: 26304 | Rating: 0 | rate: / Reply Quote | |
4 hours is good. | |
ID: 26324 | Rating: 0 | rate: / Reply Quote | |
It's hard to factor the power consumption figure into the overall household utility expense in most areas of the USA That's true, but you don't have to do it. $1 of electricity costs $1, no matter how much your other devices consume. All you really need is to measure the power consumption at the wall with and without GPU-Grid (or PC on/off) and multiply the difference by the running time and your local cost per kWh. MrS ____________ Scanning for our furry friends since Jan 2002 | |
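That back-of-the-envelope calculation can be sketched as follows; the wattages, electricity price, and running time below are made-up example figures, not measurements from this thread:

```python
# Estimate what crunching adds to the electricity bill from two
# wall-meter readings. All figures are illustrative assumptions.
watts_off = 233         # wall power without GPUGrid running (example)
watts_crunching = 430   # wall power while GPUGrid is running (example)
price_per_kwh = 0.12    # local electricity price in $/kWh (example)
hours_per_month = 24 * 30

extra_kw = (watts_crunching - watts_off) / 1000.0
monthly_cost = extra_kw * hours_per_month * price_per_kwh
print(f"GPUGrid adds about ${monthly_cost:.2f} per month")
```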
ID: 26338 | Rating: 0 | rate: / Reply Quote | |
Hello, is the following a good idea: | |
ID: 26382 | Rating: 0 | rate: / Reply Quote | |
Bad idea. First: increase clock speed without voltage increase and see how far you get (should be somewhere around 1050 - 1100 MHz, from what I've read). If you're comfortable with the temperature, power consumption and noise at this setting you can push further. At that point "1% more voltage for 1% higher clock" is a fair approximation, although the real function is at least quadratic, maybe even exponential. | |
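The "1% more voltage for 1% higher clock" rule means power climbs quickly, since dynamic power scales roughly with frequency times voltage squared. A minimal sketch of that rule of thumb, with made-up baseline numbers (not measured values for any particular card):

```python
# Rule-of-thumb GPU power model: dynamic power ~ frequency * voltage^2.
# Baseline clock/voltage/power below are illustrative assumptions.
BASE_CLOCK_MHZ = 1006
BASE_VOLTAGE = 1.175
BASE_POWER_W = 170.0

def estimated_power(clock_mhz, voltage):
    """Scale the baseline power by the f * V^2 rule of thumb."""
    scale = (clock_mhz / BASE_CLOCK_MHZ) * (voltage / BASE_VOLTAGE) ** 2
    return BASE_POWER_W * scale

# A clock bump alone costs less than the same bump plus extra voltage:
print(round(estimated_power(1100, BASE_VOLTAGE), 1))  # higher clock only
print(round(estimated_power(1100, 1.212), 1))         # clock + ~3% voltage
```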
ID: 26389 | Rating: 0 | rate: / Reply Quote | |
Hm, I didn't do it that way. Increasing the "power target" does it all. It clocks itself, so no thinking about overvolting. Thanks. | |
ID: 26398 | Rating: 0 | rate: / Reply Quote | |
Hm, I didn't do it that way. Increasing the "power target" does it all. It clocks itself, so no thinking about overvolting. Thanks. What target have you set? I might do it as well. | |
ID: 26399 | Rating: 0 | rate: / Reply Quote | |
On my GTX 670 it does not matter what I set the power target at; it always pulls 1175 mV. Knowing that I was not able to increase / decrease volts, I might as well OC as far as is stable ... turns out that 1259 GPU and 3206 MEM is rock solid stable - 99 consecutive successful LONG WUs so far. Win7x64, BOINC 7.0.25. | |
ID: 26401 | Rating: 0 | rate: / Reply Quote | |
I did base overclocking (but too far); now at 915+90 MHz, and set the power target with Nvidia Inspector to 122%, but it won't go further than 1175 mV. Not always stable right now. | |
ID: 26402 | Rating: 0 | rate: / Reply Quote | |
I've begun to upgrade my Fermi cards to Kepler cards, and I've made some measurements with my first partly upgraded system. | |
ID: 26412 | Rating: 0 | rate: / Reply Quote | |
Retvari: | |
ID: 26414 | Rating: 0 | rate: / Reply Quote | |
That's ~30% better performance per Watt. | |
ID: 26418 | Rating: 0 | rate: / Reply Quote | |
...but a 680 might be more in line with a 480, and might take it over 40%. That said, the 580 would still be more competitive. I'll check that tomorrow. I've just finished changing one of my GTX 590s to a GTX 680 (GV-N680OC-2GD). The PCIE3 vs PCIE2 debate is still open. I'm not planning to upgrade my motherboards in the near future, but maybe I can put one of my cards into a PC equipped with a PCIe3-capable motherboard. Can you reduce that FOC? Sure. To what frequency and voltage? (for the 670 and for the 680) I'm also planning to measure the power consumption of the GTX 480 at stock speed and voltage. | |
ID: 26420 | Rating: 0 | rate: / Reply Quote | |
The GFlops looks very odd in the BOINC manager's log: | |
ID: 26421 | Rating: 0 | rate: / Reply Quote | |
Ref GFLOPS: GTX 680 is 3090.4; GTX 690 is 2*2810.88 = 5621.76 | |
ID: 26422 | Rating: 0 | rate: / Reply Quote | |
I've tried to refine my measurements with my partly upgraded configuration. | |
ID: 26427 | Rating: 0 | rate: / Reply Quote | |
Nice set of data. | |
ID: 26428 | Rating: 0 | rate: / Reply Quote | |
By reducing the second card's heat radiation both cards should benefit somewhat, and you might even see the pair reach +40% performance per Watt over two GTX480's. I agree. But the gain is even bigger than I expected: my host consumes 520W now under full load (2 GPU tasks on the two GTX 670s, and 4 CPU tasks). It was 625W before. So now my host consumes 105W less than before my first measurement, and probably 210W less than with two GTX 480s @800MHz. I expected a 217W-162W(=55W)+~10W gain. I have to double check it tomorrow (runtimes etc.). Out of curiosity, are your CPU heatsink fins/blades vertical or horizontal? My CPU heatsink is a Noctua NH-D14; its fins are vertical, and the axes of the fans are horizontal. My motherboard is vertically mounted, and the GPUs are under the CPU. The cool air comes from the side of the case, and the hot air from the CPU heatsink is exhausted through the back of the case. | |
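A quick sanity check on those wall figures (assuming, purely for illustration, equal task throughput before and after the card swap):

```python
# Wall readings quoted above; throughput is assumed unchanged across
# the swap purely for illustration, so perf/W tracks watts directly.
watts_before = 625   # full load with the previous cards
watts_after = 520    # full load with the two GTX 670s

savings_w = watts_before - watts_after
perf_per_watt_gain = watts_before / watts_after - 1
print(savings_w)                    # watts saved at the wall
print(f"{perf_per_watt_gain:.1%}")  # perf/W gain at equal throughput
```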
ID: 26430 | Rating: 0 | rate: / Reply Quote | |
It's very hard to calculate the power consumption of different parts by measuring the overall power consumption at different workloads, because the parts heat each other, which causes extra power consumption in the previously measured parts. My previous measurements didn't take this effect into consideration, so the extra 174 Watts doesn't come only from the task running on the GTX 670. When I read your first post I thought the same, but then decided "never mind, he already put so much work into these measurements..." ;) And there's another factor: PSU efficiency is not constant over a large load range. Past 50% load, efficiency will probably drop a bit with increasing load. MrS ____________ Scanning for our furry friends since Jan 2002 | |
ID: 26434 | Rating: 0 | rate: / Reply Quote | |
Some of the newer PSUs hold steady from ~5% through to 85 or 90% load, but not all. So an 850W PSU (for example) could have fairly linear efficiency from ~40W right up to ~723W. I don't know the PSU in use though, so it might well have struggled for efficiency with the two GTX480's. If it did, that's a big consideration, but that was the setup, and the new setup is still 200W better off, without replacing the PSU. | |
ID: 26436 | Rating: 0 | rate: / Reply Quote | |
I would like to ask something, and I'm sorry if it has been answered in another thread. I am running both WCG and GPUGrid on my PC (3770K/M5G/GTX670). Is it normal that some projects have more GPU usage than others? Also, when I run WCG with all 4c/8t I see 73-74% GPU usage, but when I stop WCG I see 81-82% GPU usage. Why does this happen? Can I do something to run both WCG and at the same time utilize my GPU at 100%? | |
ID: 26437 | Rating: 0 | rate: / Reply Quote | |
I would like to ask something, and I'm sorry if it has been answered in another thread. I am running both WCG and GPUGrid on my PC (3770K/M5G/GTX670). Is it normal that some projects have more GPU usage than others? Also, when I run WCG with all 4c/8t I see 73-74% GPU usage, but when I stop WCG I see 81-82% GPU usage. Why does this happen? Can I do something to run both WCG and at the same time utilize my GPU at 100%? Hey. All gpugrid.net work packages need a CPU core to feed them. You have a quad-core processor with hyperthreading, so it shows up as eight cores in the operating system. Leave one core free so the processor can support the graphics card in its calculations. In BOINC Manager, go to CPU usage (preferences) and select 87.5% of the processors, so one core is always free for the graphics card. Even so, you will very likely not get 100% GPU utilization. | |
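The 87.5% figure above is just (cores - 1) / cores; a tiny helper makes it easy to work out for any CPU (the function name is my own, not part of any BOINC API):

```python
# Compute the BOINC "use at most X% of the processors" setting that
# leaves a fixed number of logical cores free to feed the GPU(s).
def boinc_cpu_percent(logical_cores, cores_to_free=1):
    if cores_to_free >= logical_cores:
        raise ValueError("must leave at least one core for BOINC")
    return 100.0 * (logical_cores - cores_to_free) / logical_cores

print(boinc_cpu_percent(8))  # 4c/8t CPU such as the 3770K -> 87.5
print(boinc_cpu_percent(4))  # plain quad core -> 75.0
```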
ID: 26439 | Rating: 0 | rate: / Reply Quote | |
More power consumption and performance measurements are in progress :) | |
ID: 26440 | Rating: 0 | rate: / Reply Quote | |
My 3rd 680 will be in my hands in about 2 weeks. Since I already have 2 working at PCIe 3 on my x79 when I install the 3rd it will be PCIE 2 x8 (PCIe 1) until I apply the hack which will make it PCIe 3 x8 (PCIe 2). I will test both to see how the times compare. | |
ID: 26441 | Rating: 0 | rate: / Reply Quote | |
About the Motherboard Intel says, | |
ID: 26442 | Rating: 0 | rate: / Reply Quote | |
The power supply in my dual GTX-670 host is an Enermax MODU87+ 800W. | |
ID: 26443 | Rating: 0 | rate: / Reply Quote | |
That PSU has its maximum efficiency at around 400W, so it's going to be about as efficient at 500W as it is at 300W. Going by the graph, the loss of efficiency with two GTX480's vs two GTX670's would be no more than 2%, <10W: | |
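To put a number on what a small efficiency difference means at the wall (the load and efficiency figures here are assumed, illustrative values, not read off that PSU's actual curve):

```python
# Wall power drawn for a given DC load at a given PSU efficiency.
# Load and efficiency values below are illustrative assumptions.
def wall_power(dc_load_w, efficiency):
    return dc_load_w / efficiency

# Same ~500W DC load, efficiencies 2 percentage points apart:
delta = wall_power(500, 0.88) - wall_power(500, 0.90)
print(round(delta, 1))  # extra watts burned at the lower efficiency
```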
ID: 26445 | Rating: 0 | rate: / Reply Quote | |
Hey. All gpugrid.net work packages need a CPU core to feed them. You have a quad-core processor with hyperthreading, so it shows up as eight cores in the operating system. Leave one core free so the processor can support the graphics card in its calculations. At the moment I'm running WCG with 4c/8t at 100%, and GPUGrid runs a task at 92% GPU load, which goes to 94-95% when I close WCG. So I guess GPU usage depends mainly on the project. edit: changing the CPU usage from 100% to 87.5% didn't change the GPU load at all. | |
ID: 26447 | Rating: 0 | rate: / Reply Quote | |
My experimental PCIe3.0 x16 host is up and running. | |
ID: 26455 | Rating: 0 | rate: / Reply Quote | |
The PCIE3 vs PCIE2 debate is still open. My experiment with my PCIe 3.0 host is over. Here are the results. It has processed 5 kinds of workunits:

wu type | # of wus | shortest (h:mm:ss) | longest (h:mm:ss) | average (h:mm:ss) |
---|---|---|---|---|
NATHAN_RPS1120528 | 16 | 13596.73 (3:46:36) | 13729.89 (3:48:49) | 13633.03 (3:47:13) |
PAOLA_HGAbis | 8 | 17220.55 (4:47:00) | 17661.73 (4:54:21) | 17505.56 (4:51:45) |
rundig1_run9-NOELIA_smd | 1 | 21593.69 (5:59:53) | | |
run5_replica43-NOELIA_sh2fragment_fixed | 1 | 25169.00 (6:59:29) | | |
run2_replica6-NOELIA_sh2fragment_fixed | 1 | 35357.31 (9:49:17) | | |

BTW its power consumption was only 247 Watts with a GTX 680 and all 4 CPU cores crunching (3x Rosetta + 1x GPUGrid). For comparison, here are one of my old host's power consumption measurements:

Core i7-970 @4.1GHz (24*171MHz, 1.44V, 32nm, 6 HT cores)
ASRock X58 Deluxe motherboard
3x2GB OCZ 1600MHz DDR3 RAM
2 HDDs
GPU1: Gigabyte GTX 480@800MHz, 1.088V (BIOS:1A)
GPU2: Asus ..... GTX 480@800MHz, 1.088V (BIOS:21)

Idle (no CPU tasks, no GPU tasks, no power management): 233W, CPU cores 32°C to 40°C, GPU1 idle 0%: 36°C, GPU2 idle 0%: 39°C

1 GPU task running:
GPU1 idle 0%, 36°C - GPU2 in use 99%, 51°C: 430W
GPU1 idle 0%, 36°C - GPU2 in use 99%, 60°C: 434W
GPU1 idle 0%, 37°C - GPU2 in use 99%, 65°C: 438W
GPU1 idle 0%, 37°C - GPU2 in use 99%, 69°C: 442W

2 GPU tasks running:
GPU1 in use 99%, 47°C - GPU2 in use 99%, 69°C: 647W
GPU1 in use 99%, 53°C - GPU2 in use 99%, 71°C: 656W
GPU1 in use 99%, 60°C - GPU2 in use 99%, 72°C: 665W
GPU1 in use 99%, 62°C - GPU2 in use 99%, 74°C: 670W
GPU1 in use 99%, 63°C - GPU2 in use 99%, 76°C: 675W
GPU1 in use 99%, 66°C - GPU2 in use 99%, 79°C: 680W

2 GPU tasks and 6 CPU tasks running:
GPU1 in use 99%, 47°C - GPU2 in use 99%, 69°C: 756W, CPU cores 50°C to 66°C

I would like to compare the performance of PCIe2.0 (x16 and x8), as my main cruncher PC has 3 different cards right now (GTX 670 OC, GTX 680 OC, and a GTX 690), but the lack of info about the number of the GPU used for crunching in the stderr output file makes it very hard. | |
ID: 26545 | Rating: 0 | rate: / Reply Quote | |
Might be a bit late to answer, but I have been using an EVGA GTX 670 SC (which is slightly overclocked) for about a month. I also use the 301.42 driver for the card. | |
ID: 26601 | Rating: 0 | rate: / Reply Quote | |
Might be a bit late to answer, but I have been using an EVGA GTX 670 SC (which is slightly overclocked) for about a month. I also use the 301.42 driver for the card. Dylan: When comparing your GTX670 long runs to my GTX570 long runs, clearly your 670 is slightly faster. However, your CPU times seem extremely large compared to mine. I am guessing that my Q9550 (4-core, non-hyperthreaded) has more cache available than your 8-thread i7-3820? I wonder if there is going to be a big improvement with CUDA5? We are both running the CUDA42 GPUGrid app. | |
ID: 26689 | Rating: 0 | rate: / Reply Quote | |
I'm pretty sure my CPU times are larger because I didn't give a core to the GPU, and instead gave all 8 to another project. Up until now I never really thought about it, and will try to fix it as soon as I can. Thanks for the notice. | |
ID: 26703 | Rating: 0 | rate: / Reply Quote | |
6XX cards always use a full core, so if you set BOINC to use 1 less thread than your CPU has available, your other CPU-only projects will process much better! | |
ID: 26704 | Rating: 0 | rate: / Reply Quote | |