skgiven (Volunteer moderator, Volunteer tester):
http://www.anandtech.com/show/5911/nvidia-announces-retail-gt-640-ddr3,
by Ryan Smith.
I'm just saying they now exist; I'm not saying 'buy one'!
Hm, I just bought one, lol.
Now the question is: will it work well on the CUDA 4.0 and 4.2 apps?
Any results on that? Has it already been tested with the beta apps?
Is there anybody who can tell me about results?
Greetings
Should work in the same way as the bigger Kepler cards. Expect ~1/4 the performance of a full GK104 at similar clocks.
MrS
http://www.palit.biz/palit/vgapro.php?id=1894
OK, it's this card.
I can't calculate the GFLOPS for it; I have no comparison table.
Edit: OK, I found a comparison table on Wikipedia that says 12.24 GFLOPS per watt, which means 65 * 12.24 = ~800 GFLOPS. Not bad :)
> Hm, I just bought one, lol. Now the question is: will it work well on the CUDA 4.0 and 4.2 apps? Any results on that?
Well, at least you could ;)
Got that thing up and running?
Not yet; I'm still waiting for the order confirmation. It's Saturday, you know. :o/
> 12.24 GFLOPS per watt means 65 * 12.24 = ~800 GFLOPS, not bad :)
384 shaders * 2 operations per clock * 900 MHz = 691 GFLOPS. However, 2 ops per clock per shader is a theoretical maximum which can hardly be achieved in the real world. But we've been calculating nVidia FLOPS like this for a long time, so at least it's consistent for the sake of comparison.
MrS
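MrS's rule of thumb is easy to check. A minimal Python sketch of it (the helper name is made up; the shader count and clock are the GT 640 figures quoted above):

# Theoretical peak GFLOPS as calculated in this thread:
# shaders * 2 operations per clock * clock speed.
# 2 ops/clock is a theoretical maximum; real workloads achieve less.
def peak_gflops(shaders, clock_mhz, ops_per_clock=2):
    return shaders * ops_per_clock * clock_mhz / 1000.0

print(peak_gflops(384, 900))  # GT 640: 691.2 GFLOPS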
skgiven:
I expect the GT 640 should just about match a GTX 550 Ti in terms of runtime performance, but maybe the 4.2 app will be favourable to it. Obviously the 65W TDP is much more attractive than the 116W of the GTX 550 Ti.
At the end of 2009 the GT240 entered the scene. It was a bit of a sweet entry-level card: an early 40nm GPU requiring power only from the PCIE slot (69W TDP). If the GT 640 offers reasonable performance at <75W then it might be a good entry-level card too and allow more people to contribute. The recommended minimum PSU for such cards is only ~350W, and I would expect the GT640 to actually use about 35 to 40W.
Bad news for me: the GT 640 I ordered turned out to be a GT 620. A database error on the vendor's side.
Very good news! I thought I had ordered a stallion, they sent me a snail (GT 620), and in the end I got a pony (GT 640). :lol
A PAOLA 4.2 task is on its way to completion; it's at almost 5 hours now, with estimated completion at about 8 hours. Has anybody finished this one, or another, with a GT 640?
OK, an IBUCH task took about 6 hours.
skgiven:
Performance is roughly as expected. It would probably match the GTX 550 Ti if it could run the CUDA 3.1 app, but on the CUDA 4.2 app it's actually matching a GTX 460, thanks to the GTX 600 series' performance improvements over the GTX 400/500 series. Cool.
> ...on the CUDA 4.2 app it's actually matching a GTX 460 [...] Cool.
Well, REALLY cool!
Considering it's only drawing ~50 W at the wall, it's time to go green...
It must be about 65 W coming out of the wall, but it will draw less power than a 550 Ti (116 W). Keep in mind that when you overclock/overvolt, the power saving is surely lower ;) And so I did: I couldn't hold myself back and overclocked it. So the warranty is void :b Who cares!?
Why must it draw 65 W? Because TDP is stated as 65 W by nVidia? Well.. no. Cards usually draw less than TDP at GPU-Grid and crunching in general. Consider this: you're probably not even at 100% GPU utilization, let alone using the TMUs etc.
So it seems like such a card is good for ~44k RAC here.
Overclocking actually improves power efficiency here. Overvolting doesn't.. but I hope you're not pushing the card to 2x the power consumption ;)
MrS
> ...but I hope you're not pushing the card to 2x the power consumption ;)
I hope so too (looking at my electricity bill); for the rest of the month I can only run this card :D I'm almost at €50 for this month, and that's quite enough for me.
I have not tried to overvolt this card; I hope others will try it before me, please :D
skgiven:
Long-term overvolting is even less recommended on lesser cards.
These are fairly 'green' cards compared to previous generations, but the bigger cards probably do more per watt, especially if you look at the power consumption of the whole system. Their niche is low-end desktop systems with average PSUs.
As it's matching a GTX 460, it's doing the same work for half the electricity consumed by the GPU.
The ~44k is for normal tasks. This should improve for the longer tasks, especially if the card can return them inside 24h.
Well.. overvolting it still wouldn't cost much.
Stock load voltage seems to be 1.00 V. AMD is using 1.20 V stock on the same process, so this is the maximum I'd think about using. At 1.10 V power consumption would increase by a factor of 1.21 (1.10^2/1.00^2), pushing the card from an estimated 50 W to 60.5 W and maybe getting you another 100+ MHz core clock (-> ~10% performance increase).
Overall that's not too bad (considering you've got an entire PC up and running to feed the card), but 1.20 V may push the boundaries of PCIe power delivery: 1.20^2/1.00^2 = 1.44, i.e. a 44% increase in power consumption. There we'd be talking about 72 W for maybe another 50+ MHz - diminishing returns.
MrS
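The scaling rule MrS applies here, as a small Python sketch (dynamic power grows with the square of the voltage at a fixed clock; the 50 W baseline is his estimate from above, not a measured value):

# Dynamic power scales roughly with voltage squared at a fixed clock:
# P2 = P1 * (V2 / V1)^2. The 50 W baseline is an estimate, not a measurement.
def scaled_power(base_watts, base_volts, new_volts):
    return base_watts * (new_volts / base_volts) ** 2

for volts in (1.00, 1.10, 1.20):
    print(f"{volts:.2f} V -> {scaled_power(50.0, 1.00, volts):.1f} W")
# 1.00 V -> 50.0 W, 1.10 V -> 60.5 W, 1.20 V -> 72.0 W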
I think undervolting may be an option: maintain the clock rates while lowering the core voltage.
Edit: OK, that saved me 5 watts; I can't go lower because of a hardware restriction of the card. :/
I've checked all the data and think it was a mistake: I can't actually lower the core voltage, no software works correctly. I'm using EVGA Precision and Asus GPU Tweak.
Any guesses?
skgiven:
MSI Kombustor perhaps?
I think there are issues with overvolting, and possibly overclocking, the GF600 cards. The software will turn up eventually, if there isn't already an overclocking kit. I don't have a GF600 card so I haven't really looked into this.
Nope, same results. I'll have to wait until they update all the software. :-/
Tried Win7 64-bit; same results. :-/
skgiven:
Apparently the voltage auto-adjusts on reference cards, so you can't set it manually, but it should increase with the GPU clock.
This change makes OC'ing the GF600s more like OC'ing the i7-2600K and similar CPUs, which automatically adjust the voltage as required.
Hm, I've experienced that raising the clock further results in errors; there's no automatic voltage raising on this card :(
It must only happen in 3D mode, in games or benchmarks.
skgiven:
If it causes errors then don't increase the clocks.
It's possible that the reported voltage is just a reference value, read off the card's reference voltage or the driver, and not a real live reading.
Everything's fine now; I think the card needed some "burn-in" time. Now the voltage settings work, but with little benefit.
Which clock speeds do you reach?
Automatic voltage scaling is linked to turbo mode, which the GT640 doesn't support.
MrS
What will happen when the GT 640 starts to run a CUDA 3.1 app?
I'm asking because I thought about putting this card in between my two GTX 460s.
Any knowledge about this out there? Can the furry ones answer this question?
Oops! I did it :(
My two GTX 460s were excluded, so I removed the GT 640 again.
Now one GTX 460 is calculating its first non-beta CUDA 4.2 app!? Good or bad!?
Edit: Could someone please help? If all three cards would run together, that would save me about 100 watts per hour!!!
Great! You did well. Now I am crunching only 4.2 tasks. Yippee!
But I still can't overclock the GT 640 with any overclocking software; I have to wait until they update their software.
Running 3 cards on an Asus P5N-T Deluxe...
Can't clock the GT 640, but under Windows 7 I can use all 3 cards at the same time. That saves me 100 watts per hour. Great.
Side note: 100 watts per hour -> 100 W. Power is already divided by time.
MrS
> Side note: 100 watts per hour -> 100 W. Power is already divided by time.
OK, then I saved 0.1 kW/h. Greetings to the furry ones :)
Each hour you save 0.1 kWh :)
MrS
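The unit mix-up is easier to see in code. A tiny Python sketch (illustrative variable names): watts measure a rate, and multiplying by time gives energy in kWh:

# Watts measure power (a rate); energy = power * time.
power_saved_kw = 0.100   # saving 100 W, i.e. 0.1 kW
hours_running = 1.0
energy_saved_kwh = power_saved_kw * hours_running
print(f"{energy_saved_kwh:.1f} kWh saved per hour of running")  # 0.1 kWh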
Oooook, I've managed my overclocking problem so far: I used "NVIDIA Inspector" to clock the GT 640.
But I don't know if it's stable, or whether it will crash when it gets a CUDA 3.1 app again.
the "overclocking" doesnt work, in real there was no overclocking. |
klepel:
Does anyone have news about runtimes of long tasks with CUDA 4.2 on GT640 cards?
Rantanplan: I have seen that it takes about 55,000 seconds per work unit. However, one of your computers labelled as having a GT640 had quite a lot of task failures?
I'm trying to decide whether I should buy an additional GT640 (I have two slots) or a GTX 560 Ti 448, which a friend might bring me from the USA next week. The first would be my choice price-wise; it will also only get CUDA 4.2 and represents newer technology. The second still seems expensive to me, as it is an "older" card and might also consume much more energy, as I could verify with my current two cards from the same series.
> had quite a lot of task failures?
I had overclocked by 250 MHz; now it's only 200 MHz. I'm just experimenting.
klepel:
As I am not very good with computers: would the GT640 work for GPUGRID in a second slot, with a GTX 670 in the first? My motherboard has two slots, and the computer for the GT640 doesn't actually exist yet.
That will work, no problem, I think, but only as long as the GTX is not a triple-slot graphics card.
skgiven:
The GT640 should work in the second slot, though you might want to check your PCIE slot performance - if it's only PCIE x1 then, if it even works, it would be very poor. If both slots are PCIE x16 and stay x16 when the second slot is occupied, then that's optimal. If the first slot drops to x8 and the second is only x4, then you would probably still do more work, but would be losing a fair bit of performance through bandwidth limitations.
We really need to measure the performance loss when using PCIE3 vs PCIE2, and at various lane counts. It was done roughly on PCIE2 with the old app for previous generations of cards, but things have changed since then. Some apps perform much better with additional PCIE bandwidth, especially PCIE3, as it's twice as fast as PCIE2.
Have the running times changed? A PAOLA long-run WU now takes approximately 21 hours.
@klepel: the GTX 560 448-Core Edition is quite fast at GPU-Grid, way faster than the GT640. But it's also a totally different beast power-consumption-wise. Could your power supply, case cooling, ears and electricity bill stand such a card? IMO that would be the main question. The GT640 is completely tame in comparison.
MrS
skgiven:
A reference GTX 560 Ti 448 is almost twice as fast as a GT640 (~1.85x), but the 65W vs 210W TDP means the GTX 560 Ti 448 uses ~3.25 times the power of a GT 640.
If you had the slots/PCIE bandwidth you would be better off with two GT640s. That said, if I were thinking about two I might wait for a GTX 660 of some description.
Anyway, as MrS said, it's very much down to the PSU.
PS. In the UK a GTX 560 Ti 448 costs ~£190 while a GT640 costs ~£75, so for crunching here the GT640 wins hands down: ~37% better crunching performance per initial outlay, and ~75% cheaper to run.
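skgiven's comparison, sketched in Python. The prices, TDPs and the ~1.85x speed ratio are the figures from his post, so the derived ratios are only as good as those inputs:

# Price/performance from the figures in this thread.
# speed is relative to the GT640; prices in GBP; power as TDP in watts.
cards = {
    "GT640":         {"speed": 1.00, "tdp": 65,  "price": 75},
    "GTX560 Ti 448": {"speed": 1.85, "tdp": 210, "price": 190},
}
ref = cards["GT640"]
for name, c in cards.items():
    perf_per_gbp = c["speed"] / c["price"]
    print(f"{name}: {perf_per_gbp / (ref['speed'] / ref['price']):.2f}x "
          f"performance per pound, {c['tdp'] / c['speed']:.0f} W per unit of speed")
# GT640: 1.00x, 65 W; GTX560 Ti 448: 0.73x, 114 W
# i.e. the GT640 gives ~1/0.73 = ~37% more crunching per pound spent.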
klepel:
Thanks a lot for your advice! Considering also your comments in the thread "Lower level 600 release?", I think it will be best to wait for the release of a GTX 660 (Kepler), as I do not need this card urgently and I have a spare GT 8400:
1) As Rantanplan noted, a PAOLA WU would take around 21 hours to complete; with my lousy internet connection on this side of the world, I will miss the 24 h limit most of the time. (I will not start that argument again, but one of my cards sat idle yesterday because the two finished long WUs had not fully uploaded until I did it manually today.)
2) @ExtraTerrestrial Apes: The electricity bill is always a concern; that's why I would opt for a Kepler card. The card would run in a new computer; however, until it is built I would try it on my motherboard, which has two slots, and yes, there I only have 650 W, which might be upgraded to 800 W.
3) @skgiven: The two PCIE2 slots should stay PCIE x16 with a second card; as I mentioned, it is only temporary. And yes, I am very concerned about performance versus investment and power consumption, so I might well wait until autumn.
Waiting sounds about right for you then. And a quality 650 W PSU won't have any problems pushing a CPU, a GT640 and a GTX 680 :)
You could use your 8400 at POEM now. I tried it today with an 8400GS @ 1.0 GHz: it's doing an insane ~33k RAC over there!
MrS
> We really need to measure the performance loss when using PCIE3 vs PCIE2, and at various lane counts. [...] Some apps perform much better with additional PCIE bandwidth, especially PCIE3, as it's twice as fast as PCIE2.
Hello skgiven and ETA,
I put two GT 640s in my PCIE2 x16 slots and the memory controller load is at 70%.
Does that mean the system is not fully busy (~30% load free), or am I wrong and PCIE3 is required? GPU load is 96-98%.
Another strange thing. The first card in its slot:
GPU temp: 53.0 °C
Fan speed: 35%
VDDC: 0.9870 V
The second card:
GPU temp: 59.0 °C
Fan speed: 47%
VDDC: 1.0120 V
Both cards are running at the same speeds (clock and memory). :/
Greetings
http://www.rechenkraft.net
skgiven:
96-98% GPU utilization is high.
I'm sure PCIE3 will outperform PCIE2, but I don't know by how much.
It's normal that different cards require different voltages, and temperatures can vary with task type.
Hello fellow volunteers: I have a 1GB GT640 running at PCIE x1 (on my laptop, using the PCI Express slot).
I would like to contribute to benchmarking this card with this setup; what specs might you be interested in?
Preliminary specs (from the BOINC benchmarks):
- driver v301.42
- CUDA version 4.2
- compute capability 3.0
- 692 GFLOPS peak
Completed task 2UY5_45_9-PAOLA_2UY5-0-100-RND0091 in 27,250.00 seconds (20.5 h).
Best regards,
Jorge.
With the current PAOLA WUs and a GT 640 you cannot claim the 24-hour bonus; it took about 27 hours to complete.
http://www.gpugrid.net/workunit.php?wuid=3680231
Hype:
Hey guys,
I'm thinking about getting one of these cards for crunching.
As this thread is over a year old: can you still recommend this card?
There's even one from ASUS with only a 25W TDP. It has pretty slow memory, but that doesn't matter for GPUGrid anyway, does it?
If not the GT640, which would be a good card at about 50W TDP?
> If not the GT640, which would be a good card at about 50W TDP?
Why TDP and not real wattage?
Some data for the total system, an Intel i7 with a GTX 650 Ti (see the first test last year):
62 W - idle (EIST off, fixed voltage: idle/load 0.996 V) > CPU usage 0%
135 W - GPUGrid (GPU OC'ed +100 MHz on GPU clock and memory clock, standard voltage) > CPU usage 12%
152 W - GPUGrid + Einstein@iGPU (HD4000, OpenCL) > CPU usage 15%
176 W - GPUGrid + Einstein + Docking@CPU (Hyperthreading on 7 cores) > CPU usage 100%
160 W - Einstein paused for testing > CPU usage 100%
135 W - Docking + Einstein paused for testing > CPU usage 12%
62 W - BOINC paused for testing > CPU usage 0%
Depending on your power supply you may see better efficiency; mine is a simple 80plus. Roughly, you need ~70 watts for this OC'ed GTX 650 Ti. It runs at ~90 percent usage crunching short workunits (SANTI_MAR, cuda55) at 30% fan speed (1100 rpm) and reaches 59 °C in my case; the system runs quiet, but the fan of this GTX 650 Ti is powerful enough for smaller cases or cases with fewer fans.
Over the last 3 days, crunching short workunits for ~14 hours per day > 38k points RAC.
Otherwise, why retire your powerful GTX 570? Is it too noisy?
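The ~70 W figure above comes from differencing the wall readings. A small Python sketch of that estimate (note the delta also contains the extra CPU load and PSU losses, so the card alone draws somewhat less):

# Estimate the GPU's share by differencing whole-system wall readings.
idle_watts = 62.0        # system idle, from the measurements above
crunching_watts = 135.0  # system while crunching GPUGrid
gpu_delta = crunching_watts - idle_watts
print(f"GPUGrid adds roughly {gpu_delta:.0f} W at the wall")  # ~73 W
# The delta also includes the ~12% CPU load and PSU conversion losses,
# so the GTX 650 Ti itself draws a bit less than this.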
skgiven:
This project favors faster, mid-range to high-end cards.
The GT640 is an entry-level card. While it should still work, I wouldn't recommend a GT640 for crunching here (I never did; I just said it should work).
The GTX 650 Ti 2GB is probably the lowest GPU I would recommend.
Hype:
I'm using the GTX 570 in my "main" computer, which I only run when I'm at home.
At the moment I'm thinking about building a small 24/7 crunching machine.
The GTX 570 + i5 4670K draw about 300 watts, which is too much for me (24/7).
So I'm trying to find a card with a nice credit/wattage ratio to use in the other machine.
I thought about 100 watts 24/7... which will be difficult?
skgiven:
Your i5 has an 84W TDP and it's pricey. While it won't consume 84W, it might use around 60 or 70W crunching CPU projects. The other components would probably use over 30W. Even a GeForce GTX 650 Ti would use around 100W by itself, so you're not going to make 100W starting from an i5. Around 200W is doable, but that's still not a good setup.
It might be better to read and discuss this in the following thread:
http://www.gpugrid.net/forum_thread.php?id=3518&nowrap=true#33570
> I thought about 100 watts 24/7... which will be difficult?
0.1 kW * 24 h * 365 days = 876 kWh; at €0.25 per kWh (Germany) that's €219 per year, for energy alone.
If you crunch one workunit per day with the GTX 570, you still get points and contribute to the project, but you don't have to pay for new hardware, find a place for it, or spend time on service, backups and so on.
Plus, you stay more flexible when new (minimum) hardware requirements are defined; maybe today's low-end hardware will be pushed out of the game in a few months.
For my part, I suspended all my 24/7 crunchers and now concentrate on one single cruncher, which is a compromise between energy consumption (in the long run), loudness (low overclocking in summer, higher OC in winter) and performance (overclocking range, undervolting option).
Looking back over the last few years, there has been rapid development in hardware releases, programming and new standards (CUDA). Middle-class components are quickly demoted to low-end, and high-end to middle-class. But high-end cards consume too much energy, so you are forced to replace them when they become inefficient - not every generation, but in those with huge efficiency advantages.
Best wishes for your decision!
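The yearly cost formula from the post above, as a small Python sketch; the €0.25/kWh rate is the German price quoted, so adjust for your own tariff:

# Annual electricity cost of a constant load:
# kWh/year = kW * 24 h * 365 days; cost = kWh * price per kWh.
def annual_cost_eur(load_watts, eur_per_kwh=0.25):
    kwh_per_year = load_watts / 1000.0 * 24 * 365
    return kwh_per_year * eur_per_kwh

print(f"{annual_cost_eur(100):.0f} EUR/year")  # 100 W around the clock: 219 EUR/year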
Hype:
Yeah, it's difficult with our high energy prices... :/
Maybe I'll build a cheap CPU-only machine and use my main system to crunch a GPU WU every now and then.
I'll have to think about it.
Is the jump to 2GB on a 650 Ti a big deal? You're talking about $50 more once you factor in the lack of a rebate.
> Is the jump to 2GB on a 650 Ti a big deal?
Not much for the speed of the card, but there have been WUs on here that have taken over 1 GB, or close to 1 GB, of memory, which makes them either slow or impossible to run on a 1 GB card.
As SK said, the GT640 DDR3 was never recommended for GPU-Grid, for these reasons:
- It's memory-bandwidth starved. Mine runs GPU-Grid at 60 - 70% memory controller utilization, with the memory already OC'ed (not much room). The GTX 660 Ti is already limited by memory bandwidth and runs at ~40% utilization. At Einstein the GT640 even runs at 99% memory controller utilization!
- GPU-Grid needs results back fast, hence the bonus for returning WUs early. This makes small cards a bad choice over here.
- For a little more money you can get significantly faster cards.
The new GT640 GDDR5 fares a little better due to the extra bandwidth available, but compared to the GTX 650 and higher it's still pretty imbalanced.
And as others have already said: don't buy cards with 1 GB or less for GPU-Grid. That's not sufficient any more, even today.
Regarding the idea of a small cruncher to reduce energy cost: I don't think it's a particularly good one, for the following reasons (see the sketch after this list for the price/bandwidth numbers):
- Every computer has a certain idle power draw, whether you use it or not. For a modern "econo-box PC" that's 30 - 40 W, which you always have to pay just for the system to be there, no matter how fast or slow it is. If you're talking about 100 W overall power consumption, that's almost half of the power draw "wasted for nothing"! To summarize: the more of your power budget you actually use for crunching, the more efficient the system becomes.
- PSU efficiency peaks around 40 - 60% load. The smallest high-efficiency PSUs are the 400 W FSP Aurum (80+ Gold) and the 450 W Antec Earthwatts (80+ Platinum). But running these at 100 W load (25%) misses their sweet spot. It's not a catastrophe for the small Aurum (~4% loss), but any bigger PSU really starts to suffer, which reduces system efficiency.
- Starting from low-end GPUs there's a range where every x1% more purchase price gets you x2% more performance, with x2 larger than x1. The GT640 is borderline in this regard:
GT640 | 691 GFlops | 28.5 GB/s | 65€
GTX650Ti Boost | 1505 GFlops | 144.2 GB/s | 115€
The comparison doesn't look too bad on theoretical GFlops: the GTX650Ti Boost gets you 2.2 times the raw performance for 1.8 times the price. But the GT640 can't make proper use of its raw horsepower because it is bandwidth-limited, which the 5 times higher bandwidth of the GTX650Ti Boost hints at (although this card has a bit more bandwidth than it needs - the GTX 660 Ti has just as much, but wants to sustain 2460 GFlops from it). I'd estimate the overall performance difference between the GT640 and the GTX650Ti Boost in the range of 2.5 - 3.0.
- Faster GPUs will have better resale / reuse value.
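A minimal Python sketch of the price/bandwidth comparison in the last bullet, using the figures from the table above:

# Raw performance vs price vs memory bandwidth, from the table above.
gt640 = {"gflops": 691.0, "gbps": 28.5, "eur": 65.0}
gtx650ti_boost = {"gflops": 1505.0, "gbps": 144.2, "eur": 115.0}

print(f"raw perf:  {gtx650ti_boost['gflops'] / gt640['gflops']:.1f}x")  # ~2.2x
print(f"price:     {gtx650ti_boost['eur'] / gt640['eur']:.1f}x")        # ~1.8x
print(f"bandwidth: {gtx650ti_boost['gbps'] / gt640['gbps']:.1f}x")      # ~5.1x
# GFLOPS per GB/s - higher means more bandwidth-starved:
print(f"GT640: {gt640['gflops'] / gt640['gbps']:.0f}, "
      f"GTX650Ti Boost: {gtx650ti_boost['gflops'] / gtx650ti_boost['gbps']:.0f}")
# GT640: 24, GTX650Ti Boost: 10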
What I'd do in your case: pimp your primary rig a bit and you may be able to make it fit for 24/7 crunching, or at least a bit more than it's doing now (depending on how much you want to invest). By running the faster rig a bit more often you could get the same throughput without having to buy entirely new parts. A few points to consider:
- What PSU do you currently have? Exchanging it for an 80+ Gold model in the 400 - 500 W range (e.g. that 400 W FSP Aurum) could quickly pay for itself by reducing energy cost, depending on what you're currently using.
- Is your CPU heavily overclocked, since you're using a "K"? If so, energy efficiency obviously suffers, a lot. In that case consider taking it back a few steps. My i7 3770K is running at 4.10 GHz at 1.03 V - that's pretty efficient and, for me, OK to run 24/7. And I wouldn't feel a difference compared to e.g. 4.5 GHz at 1.2+ V anyway, except for the additional heat, noise and electricity bill.
If you don't OC: you could lower your CPU voltage significantly below stock (as I have at 4.1 GHz), which should save ~20 W over the stock configuration.
- You could adjust your GTX 570's clock speed and voltage down a bit. Fermi doesn't have much room for improvement here, though. Increasing fan speeds to lower temperatures could also gain you a few W - if the noise were OK (probably not).
- You could exchange your GPU for a medium-sized Kepler. Performance would stay about the same, while power consumption may be lowered by up to 100 W under load. Running 24/7 this would save you ~200 €/year, which means a GTX 660 Ti would pay for itself within one year. It can also be power-tuned well by reducing the power target, down to about 100 - 110 W for the card under load (stock: 130 W, its power target).
- If you're crunching on the CPU you could stop that altogether to get power consumption under control. This depends on your project choices, of course: for some, credits count (-> GPU), while for others WCG badges (CPU time) are holy.
MrS