
Message boards : Graphics cards (GPUs) : GPU Grid specific computer

Palamedes
Joined: 19 Mar 11
Posts: 30
Credit: 109,550,770
RAC: 0
Message 22990 - Posted: 18 Jan 2012 | 2:54:31 UTC

Hey guys..

So I want to build a computer that will be solely for GPUGrid computing.. That means I want to squeeze as many GPUs into the case as possible..

This is where I need help.. What motherboard should I get that can hold as many gpus as possible?

How many GPUs are we looking at here?

I'm trying to figure out what's feasible/reasonable... and I want to build this over time. I know I can get 460's for fairly reasonable prices off ebay so if I could get a whole pack of them in a case then that'd be pretty cool..

Thoughts?

TylerChris
Joined: 12 Feb 10
Posts: 11
Credit: 50,020,466
RAC: 0
Message 22992 - Posted: 18 Jan 2012 | 8:24:33 UTC - in response to Message 22990.

Hi.
Well, if you don't mind going down the AMD Bulldozer route, the new Sapphire Pure Black looks interesting; it will take six single-slot cards.
Looks pretty too :)

Retvari Zoltan
Joined: 20 Jan 09
Posts: 2343
Credit: 16,206,655,749
RAC: 261,147
Message 22997 - Posted: 18 Jan 2012 | 13:53:01 UTC - in response to Message 22990.

So I want to build a computer that will be solely for GPUGrid computing.. That means I want to squeeze as many GPUs into the case as possible..

A standard ATX case has only 7 slots at the rear, and motherboards have at most 7 PCIe slots. To maximize performance, the motherboard should have as many PCIe x16 slots as possible, because slower PCIe slots hinder GPU performance. Also, one CPU core per GPU is needed to maximize performance.

This is where I need help.. What motherboard should I get that can hold as many gpus as possible?

I would suggest the Asus P6T7 WS SuperComputer motherboard with a 6-core i7 CPU (i7-970, i7-980, i7-980x, i7-990x), as it has 7 PCIe 2.0 x16 slots. The Sapphire Pure Black has only two x16, two x8 and two x4 slots (all of them can physically take an x16 card, but the cards will run slower). When a socket 2011 motherboard comes out with 7 PCIe 3.0 x16 slots, it will supersede the P6T7 WS SC.

How many GPUs are we looking at here?

At most 7 GPUs can be put on a single motherboard, but it will need very good water cooling (possibly for 4 of them). It will also need two good 1200W power supplies (for longevity and efficiency reasons it is advisable not to load a power supply above 50-70% of its nominal wattage in the long term).
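To put rough numbers on that 50-70% rule, here is a quick Python sketch; the wattage figures are illustrative assumptions (swap in your own cards' measured draw), not GPUGrid measurements:

# Rough PSU sizing for a multi-GPU cruncher.
# All wattages below are assumptions for illustration only.
GPU_WATTS = 220      # assumed load draw of one high-end Fermi card
CPU_WATTS = 130      # assumed load draw of a 6-core i7
BASE_WATTS = 75      # board, RAM, drives, fans, pumps (assumed)
TARGET_LOAD = 0.6    # stay at ~50-70% of the PSU's nominal wattage

def psu_capacity_needed(n_gpus):
    """Total PSU rating (W) needed so the load stays near TARGET_LOAD."""
    draw = n_gpus * GPU_WATTS + CPU_WATTS + BASE_WATTS
    return draw / TARGET_LOAD

for n in (2, 4, 7):
    print(f"{n} GPUs -> ~{psu_capacity_needed(n):.0f} W of PSU capacity")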

I'm trying to figure out what's feasible/reasonable... and I want to build this over time. I know I can get 460's for fairly reasonable prices off ebay so if I could get a whole pack of them in a case then that'd be pretty cool..

For GPUGrid it's better to have a top-end CC2.0 card (GTX 570, 580, 590, 470, 480) than two CC2.1 cards (as only 2/3 of their shaders will be used by GPUGrid). I suggest you wait a couple of months, because NVidia is going to release a new series of GPUs (i.e. Kepler).

skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 23001 - Posted: 18 Jan 2012 | 19:07:55 UTC - in response to Message 22990.
Last modified: 18 Jan 2012 | 21:41:02 UTC

For GPUGrid, top cards tend to be double slot width, so well-spaced-out slots are desirable. A motherboard with 7 single-spaced slots can support no more double-width GPU's than a board with 4 double-spaced slots!

Express lane count is also very important; the more lanes a board has, the more cards it can 'fully' accommodate. We know that performance drops when fewer PCIE lanes are available to the GPU. Typically, the more slots are used, the lower the PCIE lane availability. Often boards move from one PCIE x16 lane to two PCIE x8 lanes, and then to one PCIE x8 plus one or two PCIE x4 slots, with the fourth slot often being PCIE x1. The performance of a high end GPU in a PCIE x4 slot is significantly reduced, and would be terrible in a PCIE x1 slot. I found that even a low end GPU's performance suffered significantly in a PCIE x1 slot.

Until recently the best motherboards have been LGA 1366, with boards for AMD CPU's being a cheaper alternative. Even now, i7-980X (and similar 32nm CPU) systems fare slightly better than the i7-3960X for CPU crunching (clock for clock). So until now the only benefits of these overly expensive systems have been the reduction in power, and knowing that they were somewhat future-proofed...

With AMD's latest GPU being fully PCIE3.0 compliant this has changed the picture somewhat. While we cannot use these here (yet), they offer potential. Within a few months (Feb to Apr) NVidia's GeForce 600 series will start turning up, and when they do the high end GPU's in this range will all support PCIE3.0.

Note that a PCIE3.0 X8 slot hosting a fully PCIE3.0 compliant GPU is as fast as a PCIE2.0 X16 slot. PCIE3.0 is also backward compatible with PCIE2, so existing GPU's work. IIRC 2011 boards only have 42 PCIE channels, but that is enough for one GTX 600 @ PCIE3.0 X16 and 3 GTX 600's running at PCIE3.0 X8.
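For anyone who wants to check that claim, the per-lane arithmetic is simple; a small sketch (the line-code figures are the standard PCIe encodings, the rest is just arithmetic):

# Usable bandwidth per PCIe lane after line-code overhead.
def lane_gb_per_s(transfer_gt_s, payload_bits, frame_bits):
    return transfer_gt_s * payload_bits / frame_bits / 8.0

pcie2_lane = lane_gb_per_s(5.0, 8, 10)      # PCIe 2.0: 5 GT/s, 8b/10b coding
pcie3_lane = lane_gb_per_s(8.0, 128, 130)   # PCIe 3.0: 8 GT/s, 128b/130b coding

print(f"PCIe 2.0 x16: {pcie2_lane * 16:.2f} GB/s")   # ~8.0 GB/s
print(f"PCIe 3.0 x8 : {pcie3_lane * 8:.2f} GB/s")    # ~7.9 GB/s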

I have no intention of using numerous GPU's in a single board, so I bought and presently use an i7-2600 in an MSI motherboard that has two PCIE3.0 slots. Obviously not an option if you want more than 2 GPU's. If and when both are used they drop to PCIE3.0 X8 (but again, as good as two PCIE2.0 X16).
The present card in that system is not benefiting from PCIE3.0 X16, unless it's 18.5% faster due to the overhead reduction (the new PCIE3.0 encoding) - I was just future-proofing, and at the time I didn't want to spend the cash speculatively on an immature 2011-based system.

So, for future-proofing and slightly lower running costs, you could opt for an LGA2011 board (if you have the cash to build such a rig). If not, then a less expensive 1366-based rig, with higher running costs; and if 1 or 2 high end GPU's is all you want, then something based on the SB LGA1155's would do.

Anyone thinking about the forthcoming GF600 series should consider a PCIE3.0 board, and if they want an LGA2011 they should check the PCIE3.0 slot specifications. The better boards seem to be one PCIE3 X16 + three PCIE3.0 X8.

Personally, I would not consider LGA2011 until Intel's 22nm cores turn up and fall significantly in price. $1000 for a system without any GPU's is too much spent on the wrong things.

The 'Sapphire Pure Black' for AMD CPU's is only PCIE2. Not sure if there even are any PCIE3 motherboards for AMD CPU's yet?

The Asus P6T7 can only run three GPU's at x16 or six at x8 on PCIE2.0 channels, and its slots are single-spaced, so you could only fit 4 big GPU's.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 23004 - Posted: 18 Jan 2012 | 19:55:56 UTC

That is an interesting question which has to be answered directly with another question: how extreme do you want to get? A couple points to consider:

- running costs are going to be very high
- it's going to be very loud
- forget about a case, if at all possible; it's not helping cooling
- with water cooling you can use boards with single slot spacing
- there are flexible PCIe riser cables, which let you use boards with single slot spacing and GPUs with their default coolers
- you can use GTX 590s to alleviate the "slot pressure"
- using 7+ GPUs (dual-GPU cards count double here) may require a custom BIOS from the manufacturer
- BOINC may have trouble recognizing many GPUs (see the cc_config sketch below)
- you may need 2 PSUs, preferably 80+ Platinum rather than Gold

So you see, there's no straight answer to "what's possible".. except "a lot" :D
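On the BOINC point above: by default the client may only use the "best" GPU it detects. A minimal sketch that writes a cc_config.xml enabling all GPUs; the <use_all_gpus> option is standard BOINC, but the data-directory path below is an assumption that differs per OS/install:

# Write a cc_config.xml that tells BOINC to use every detected GPU.
cc_config = """<cc_config>
  <options>
    <use_all_gpus>1</use_all_gpus>
  </options>
</cc_config>
"""

# Assumed Linux default data directory; adjust for your installation,
# then restart the BOINC client (or re-read the config files).
with open("/var/lib/boinc-client/cc_config.xml", "w") as f:
    f.write(cc_config)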

MrS
____________
Scanning for our furry friends since Jan 2002

skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 23006 - Posted: 18 Jan 2012 | 21:56:52 UTC - in response to Message 23004.

Anything over ~$2000 deserves the bespoke touch.

____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

Palamedes
Joined: 19 Mar 11
Posts: 30
Credit: 109,550,770
RAC: 0
Message 23014 - Posted: 19 Jan 2012 | 16:28:05 UTC - in response to Message 23006.

Wow guys this is great information, thank you very much.

It had occurred to me that the most slots coming out of an ATX case was 7, so I assumed that would be the max, but with modern cards being 2 slots wide, or even 3 in some cases, I didn't know how feasible it would be to try to get all 7 working, or whether 7 thin cards would be as good as one fat beast..

I didn't even consider the slot speeds/lanes.. Never even occurred to me..

I have a GTX 580 with a water block running in one of my current crunchers and it works nicely, and I figured that I might be able to get enough of those (or something similar) to actually get 7 running, because they will fit in a single slot at that point, but after the link Skgiven posted I'm now thinking that's not necessary..

The home brew external case looks like a fun project and exactly the kind of nerdporn I can get behind.. =) Completely nerdy and will likely cause my wife to go into fits and spasms but it looks fun to me!

The only question I would have then is with the PCIE risers; They look to be just ribbon cable that you plug into the board, then up to the card.. but they appear to only use 1/3 of the slot? How does that work?

skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 23017 - Posted: 19 Jan 2012 | 18:23:59 UTC - in response to Message 23014.

You can get full PCIE raisers.

You could also use every other slot on the board directly (so long as you reinforce the back of the motherboard).
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

jlhal
Joined: 1 Mar 10
Posts: 147
Credit: 1,077,535,540
RAC: 0
Message 23083 - Posted: 22 Jan 2012 | 17:03:02 UTC - in response to Message 23017.

Don't forget: the more GPUs, the more power.
I plan 2 GTX590 (this gives 4 GPUs) on an ASUS Sabertooth 990FX and an AMD FX-8150 -> I'll need a 1200W PSU!
____________
Lubuntu 16.04.1 LTS x64

Palamedes
Joined: 19 Mar 11
Posts: 30
Credit: 109,550,770
RAC: 0
Message 23084 - Posted: 22 Jan 2012 | 17:08:50 UTC

Or a second PSU.. There are nice little boards over at frozencpu that allow you to have one PSU that turns on the second one..

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 23085 - Posted: 22 Jan 2012 | 18:43:50 UTC - in response to Message 23083.
Last modified: 22 Jan 2012 | 18:44:12 UTC

I plan 2 GTX590


Just in case you didn't already know: wait for Kepler, which should be just 1-2 months away. Either it's way better than the current cards (and certainly more efficient), or it at least drives the prices of current cards down, and you should be able to get some good 2nd-hand cards from people who upgrade.

MrS
____________
Scanning for our furry friends since Jan 2002

jlhal
Joined: 1 Mar 10
Posts: 147
Credit: 1,077,535,540
RAC: 0
Message 23086 - Posted: 22 Jan 2012 | 19:05:34 UTC - in response to Message 23085.

I plan 2 GTX590


Just in case you didn't already know: wait for Kepler, which should be just 1-2 months away. Either it's way better than the current cards (and certainly more efficient), or it at least drives the prices of current cards down, and you should be able to get some good 2nd-hand cards from people who upgrade.

MrS

You are right, but as I just bought a GTX590 a few days ago, I want to wait, because this is expensive (more than 700€ for a Gigabyte).
For the moment I want to mount the 2 GTX460s I already have on the Sabertooth mobo I already purchased, along with the PSU, CPU, DDR and water cooling for the CPU...

____________
Lubuntu 16.04.1 LTS x64

Palamedes
Joined: 19 Mar 11
Posts: 30
Credit: 109,550,770
RAC: 0
Message 23098 - Posted: 23 Jan 2012 | 15:01:36 UTC - in response to Message 23017.

You can get full PCIE raisers.

You could also use every other slot on the board directly (so long as you reinforce the back of the motherboard).



What kind of "gotchas" are there with using risers like this?

And where can I find them? I have been browsing around and going to my local electronics shops and can't find ribbon risers like this..

skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 23104 - Posted: 23 Jan 2012 | 18:27:47 UTC - in response to Message 23098.

The obvious gotcha would be length - you can get several different lengths.

Try a search for 'PCIE riser', rather than PCIE raiser (uk en).
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

Retvari Zoltan
Joined: 20 Jan 09
Posts: 2343
Credit: 16,206,655,749
RAC: 261,147
Message 23110 - Posted: 23 Jan 2012 | 21:59:43 UTC - in response to Message 23084.

Or a second PSU.. There are nice little boards over at frozencpu that allow you to have one PSU that turns on the second one..

It's very easy to turn on the second PSU: just wire the green (power-on) wires of the two PSUs together (their grounds will be tied together by other cables, like the PCIe power cables). You have to choose your second PSU carefully: it has to regulate the 12V rail only, and it should use DC-DC converters to make 5V and 3.3V from the 12V. If it's not this type (i.e. it regulates the 5V and the 12V, and makes 3.3V with DC-DC converters), then it should have some load (for example an HDD) on the 5V rail.

Damaraland
Joined: 7 Nov 09
Posts: 152
Credit: 16,181,924
RAC: 0
Message 23257 - Posted: 5 Feb 2012 | 15:40:58 UTC

I did some research using Wikipedia, a cheap vendor and cpubenchmark, just in case it helps with the decision on which socket and CPU to buy.
CPU                              Socket   GHz    €        CPU Mark   CPU Mark/€   TDP
Intel Core i7-950                1366     3.2    317.00   6715       21           130 W
Intel Core i7-950                1366     3.06   286.00   6422       22           130 W
Intel Core i5-661                1156     3.33   242.95   3293       14            87 W
Intel Core i5-760                1156     2.8    193.67   4597       24            95 W
Intel Core i5-660                1156     3.33   184.95   3175       17            73 W
Intel Core i5-650                1156     3.2    172.94   3211       19            73 W
Intel Core i3-540                1156     3.06   139.15   2850       20            73 W
Intel Core 2 Duo E7600            775     3.06   129.00   2119       16             -
Intel Core i3-560                1156     3.33   121.95   3148       26            73 W
Intel Pentium Dual-Core G6950    1156     2.8    100.95   2039       20            73 W
Intel Pentium Dual-Core E6800     775     3.33    96.50   2373       25            65 W
Intel Core i3-550                1156     3.2     96.08   3100       32            73 W
Intel Core 2 Duo E7500            775     2.93    90.00   1988       22             -
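For what it's worth, the ranking by value can be automated; a tiny Python sketch using a few rows copied from the table above (the prices and marks are the table's, the code itself is only an illustration):

# Re-rank a few CPUs from the table by CPU Mark per euro and per watt.
cpus = [
    # (name, price_eur, cpu_mark, tdp_w) -- values copied from the table
    ("Core i7-950",   286.00, 6422, 130),
    ("Core i5-760",   193.67, 4597,  95),
    ("Core i3-550",    96.08, 3100,  73),
    ("Pentium G6950", 100.95, 2039,  73),
]

for name, price, mark, tdp in sorted(cpus, key=lambda c: c[2] / c[1], reverse=True):
    print(f"{name:<14} {mark / price:5.1f} marks/EUR  {mark / tdp:5.1f} marks/W")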

Damaraland
Joined: 7 Nov 09
Posts: 152
Credit: 16,181,924
RAC: 0
Message 23260 - Posted: 5 Feb 2012 | 20:26:33 UTC
Last modified: 5 Feb 2012 | 20:47:08 UTC

Thanks very much to Retvari Zoltan and Skgiven for all the info.

It took me a while to find a good motherboard comparison tool and to go through all the characteristics. Finally I found this on MSI's site.

The motherboard I chose is the Z68A-GD55. Quite cheap.
• 2 PCI Express gen3 x16 slots
• Seems well spaced for 2 GPUs
• Up to 32GB in four unbuffered DIMMs of 1.5 Volt DDR3 1066/1333/1600*/2133*(OC) DRAM
• Military Class II components. I suppose it will last longer if it can handle higher temperatures.
• Supports Intel® Sandy Bridge processors in LGA1155 package (i3/i5/i7)

Hope this helps.

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 23261 - Posted: 5 Feb 2012 | 20:38:44 UTC - in response to Message 23260.

That board seems like a solid choice. However, it's nothing special. The PCIe lanes come off the CPU, which means if you use 2 GPUs it's going to be 2 x 8x, as it is with all socket 1155 boards. And the current Sandy Bridge CPUs don't support PCIe 3, as far as I know. That's only going to be possible with Ivy Bridge (soon to come).

However, the bandwidth will be fine for GPU-Grid :)

MrS
____________
Scanning for our furry friends since Jan 2002

Retvari Zoltan
Joined: 20 Jan 09
Posts: 2343
Credit: 16,206,655,749
RAC: 261,147
Message 23262 - Posted: 5 Feb 2012 | 23:07:24 UTC - in response to Message 23260.

I'm not sure if you are aware that none of the CPUs on your list will fit in the motherboard you've chosen.
From the socket 1155 motherboard series I would suggest the ASUS Maximus IV Extreme-Z motherboard; it's not cheap, but it has 4 PCIe 2.0 x16 connectors, 2 of which can work at x16 speed at the same time.
But if price per performance is important, the best choice is the ASUS P7P55 WS Supercomputer motherboard. It's a socket 1156 motherboard, so you will need an older Core i7 (i5 or i3) CPU for it (from your list).

Damaraland
Joined: 7 Nov 09
Posts: 152
Credit: 16,181,924
RAC: 0
Message 23263 - Posted: 5 Feb 2012 | 23:34:22 UTC
Last modified: 5 Feb 2012 | 23:37:37 UTC

I'm planning to build two computers with that motherboard and an i5-2300, with 2 GPUs each.

From my experience I prefer to have 2 computers with the same configuration, just because when something breaks it's quite easy to find and fix. I spent too much time before on Linux drivers and old hardware with the computers I had.

On the other hand, prices normally rise exponentially, not only for CPUs but for the rest of the hardware too, so from my point of view it's better to have two lower-end computers than one super-high-end one.

To be able to handle >4 GPUs, the processor and memory have to be i7 or Xeon class. For that price I could build a third computer.

skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 23264 - Posted: 6 Feb 2012 | 1:54:23 UTC - in response to Message 23257.
Last modified: 6 Feb 2012 | 2:01:38 UTC

The choice of GPU(s) is the most important consideration.
While CPU performance is significant for GPUGrid, the more GPU's you have, the more important the motherboard becomes. There are many architectures to choose from today and performance does not change much across several of them (i7-9xx, i7-8xx, i7-2000, i7-3900). AMD have a sizable range too, but performance is generally lower. System RAM speed and amount tend not to make much of a difference for GPUGrid, so don't overly invest there. Ditto for the HDD.

The GPU (CC2.0/2.1), CPU and PSU are key to power and efficiency.
The SB processors offer excellent performance for their power consumption (stock ~65W crunching flat out). A mildly overclocked i7-9XX (to match the SB performance) is likely to use ~130W, which might push your PSU requirements up a notch. On the other hand a good 1366 board could support more than two GPU's at reasonable PCIE levels, and also offers triple channel RAM and more CPU threads.
In terms of crunching, it's been repeatedly demonstrated that the performance of the i7-3960X is no better than an i7-980's. So at present the only benefits of these overly expensive systems are reduced power requirements and support for PCIE3, which is supposedly limited to a few recent boards, and as yet not usable with any NVidia GPU...

PCIE 3 X8 is as fast as PCIE 2 X16 if you can use it, but PCIE3 won't be usable at GPUGrid until a Kepler turns up, or we start using AMD's most recent GPU's and then only on the expensive LGA2011 systems.

I bought a similar board to Damaraland's (Z68A-G45), partially because I like MSI/dislike others, but mostly because at the time MSI were the only company locally selling a 'potentially' PCIE3 future-proofed LGA1155 motherboard. I never expected to see PCIE3 performance with my GTX470; the impossible-to-obtain (theoretical) performance increase gained by moving from PCIE2 X16 to PCIE3 X16 would make for very little improvement in task runtimes anyway, probably ~1%. Even on a GTX580 it would be no more than 5%. The benefit would, however, be seen in supporting a much faster GPU.

It never even crossed my mind that the CPU would need to be PCIE3 compatible! I presumed PCIE3 compatibility lay with the motherboard and GPU. So it looks like PCIE3 motherboards are being sold when there are no existing CPU's that fit those boards to make them PCIE3 capable. Asus and Gigabyte are at this too; however, there is hope if you have a PCIE3 SB board: there will be 22nm LGA1155 CPU's that are PCIE3 compatible. If the next generation of GPU is, say, 60% faster than a GTX580, then using one PCIE3 x16 slot could be ~8% faster than PCIE2 x16. AnandTech demonstrated that the compute capability of an HD7970 is 9% higher using PCIE3 than when using PCIE2.
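The "~1%" and "~5%" figures above are consistent with a simple Amdahl-style estimate: only the fraction of a task that is actually waiting on the bus speeds up. A sketch (the bus-bound fractions are made-up illustrations, not measured GPUGrid numbers):

# Estimate overall task speedup when only the PCIe-bound fraction gets faster.
def task_speedup(bus_fraction, bus_speedup):
    return 1.0 / ((1.0 - bus_fraction) + bus_fraction / bus_speedup)

# PCIe 2.0 x16 -> PCIe 3.0 x16 is roughly a 2x bus speedup.
for frac in (0.02, 0.05, 0.10):
    gain = task_speedup(frac, 2.0) - 1.0
    print(f"{frac:.0%} of runtime on the bus -> ~{gain:.1%} faster tasks")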

Anyway PCIE3 is fully backward compatible so I lost nothing, and later this year I will have more options if I decide to upgrade my GPU, even if it means a new 22nm 1155 CPU too.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 23269 - Posted: 6 Feb 2012 | 17:11:50 UTC - in response to Message 23264.
Last modified: 6 Feb 2012 | 17:12:04 UTC

AnandTech demonstrated that the compute capability of an HD7970 is 9% higher using PCIE3 than when using PCIE2.

That's only true for this specific code. It really varies on an app-to-app basis: you could work only within the GPU cache, like MW does, stream huge amounts of data between system memory and the GPU, or require frequent communication between the two. In the latter cases a faster interface will help, but only then.

@Damaraland: building two value-computers for crunching with 2 GPUs each seems like a good idea to me. You avoid many hassles, can get away with smaller PSUs and I dare say a Celeron G530 would be enough to power 2 GPUs at GPU-Grid. You could always drop in an Ivy Bridge Quad later on. Just don't skimp on the GPUs, that's not worth it :)

MrS
____________
Scanning for our furry friends since Jan 2002

Damaraland
Joined: 7 Nov 09
Posts: 152
Credit: 16,181,924
RAC: 0
Message 23271 - Posted: 6 Feb 2012 | 17:33:49 UTC - in response to Message 23269.

You could always drop in an Ivy Bridge Quad later on. Just don't skimp on the GPUs, that's not worth it :)


Hmmm very, very interesting...

Ivy Bridge to launch on April 8

Prices of Ivy Bridge desktop CPUs


skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 23276 - Posted: 7 Feb 2012 | 0:15:22 UTC - in response to Message 23271.
Last modified: 7 Feb 2012 | 0:45:42 UTC

When crunching at stock, the voltage on my i7-2600 is 1.18V and on my i7-2600K it is 1.05V. The i7-2600K uses 65W crunching; 30W less than the TDP, or just under 70% of the TDP. If IB with its 77W TDP performs similarly, then it's likely to use around 54W when crunching. The clock of the top IB is 'disappointingly' the same as the SB's, 3.5GHz, and there are no more cores, but if it performs as expected, around 15% better clock for clock, then that would be the equivalent of an SB at 4GHz while only using 54W.

In terms of performance per Watt that's about 38% better than SB, and at the same price. Tick-tock!?! Yes, but for a crunching system with a top GPU the power saving would probably be <5% of the entire system's draw. Suddenly that 15% CPU boost doesn't look that special, and nor should it; GPU's are where the heavy work is done. To me it seems Intel doesn't want 1155 to do too well. Intel began with dual channel RAM rather than triple, and 5 years after introducing a quad core CPU we are still stuck with 4 cores for desktops. Why not a 4GHz IB, or at least 3.8GHz (without turbo)? It would have been inside 95W. Even a 3GHz 6-core CPU would have been ~95W.
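The 38% figure follows directly from those numbers; a one-liner sketch (the IB draw and the +15% are the post's assumptions, not measurements):

# Perf-per-watt estimate: Sandy Bridge (measured draw) vs Ivy Bridge (assumed).
sb_perf, sb_watts = 1.00, 65.0   # i7-2600K, ~65 W while crunching
ib_perf, ib_watts = 1.15, 54.0   # assumed +15% per clock, ~70% of 77 W TDP

gain = (ib_perf / ib_watts) / (sb_perf / sb_watts) - 1.0
print(f"Estimated IB perf/W advantage over SB: {gain:.0%}")   # ~38%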

So for GPU crunching the only real benefit of IB on LGA1155 is just PCIE3 support, assuming you have a PCIE3 capable motherboard. Even then this would only noticeably benefit those who purchase a high end PCIE3 capable GPU.
Anyway IB is due on 8th Apr (2months), and NVidia will probably release their big Kepler's around that time.

I would not want to be buying a system for crunching right now; too many not-so-great options, but if you must, there are still choices. Either get a cheap GPU such as a GTX470 or a good GPU like the GTX570. A good GPU will still be a good GPU in 3 months, and in 6 to 12 months you would still get a reasonable return for an SB and a GTX570.

The system choices are:
Something AMD to keep the cost down, or an i7-800 or i7-900 based system. All of these have reduced or no upgrade paths for the CPU and no PCIE3 capability. A bit short-sighted and heavy on the running costs, but possibly lighter on the up-front costs, especially with an AMD system.

A SB with a PCIE3 capable motherboard. OK if you will be happy to upgrade to IB and want to get a replacement GPU (or perhaps two, but no more) in the future.

Alternatively you could wait until next week and content yourself with a $285 LGA2011 Core i7 3820 for a year. This will allow you to upgrade/add a PCIE3 GPU whenever they turn up at a reasonable price, say six to nine months, and without having to upgrade the CPU. Should you want to upgrade the CPU then there are existing 3960X and 3930K 12thread 32nm processors, a potential 3980X, probably within the next 6months, and a 22nm IB will come along in the form of Ivy Bridge-E some time in the more distant future (Q4).
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

Retvari Zoltan
Joined: 20 Jan 09
Posts: 2343
Credit: 16,206,655,749
RAC: 261,147
Message 23278 - Posted: 7 Feb 2012 | 9:31:31 UTC - in response to Message 23276.

In terms of performance per Watt that's about 38% better than SB, and at the same price. Tick-tock!?! Yes, but for a crunching system with a top GPU the power saving would probably be <5% of the entire system's draw. Suddenly that 15% CPU boost doesn't look that special, and nor should it; GPU's are where the heavy work is done. To me it seems Intel doesn't want 1155 to do too well. Intel began with dual channel RAM rather than triple, and 5 years after introducing a quad core CPU we are still stuck with 4 cores for desktops. Why not a 4GHz IB, or at least 3.8GHz (without turbo)? It would have been inside 95W. Even a 3GHz 6-core CPU would have been ~95W.

You've answered your own question:

GPU's are where the heavy work is done.

Intel knows it too; that's why the second benefit of IB is its IGP, which is much better than SB's. But it's still no match for Fermi, Kepler, or the new AMD GCN architecture, so from a cruncher's point of view the only benefit that matters is its PCIe3 support.

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 23282 - Posted: 7 Feb 2012 | 12:36:09 UTC

Sure, for GPU-Grid the CPU performance doesn't matter. Any quad would be more than enough here to serve 2 GPUs. However, in BOINC land we sometimes run non-GPU stuff on our CPUs, which is where IB would be an excellent choice.

i7 800: outdated and you don't save much by going for S1156 compared to S1155.
i7 900: even more outdated and 30 W higher idle power consumption of the system. Bad choice (in the context of this thread).

@SK: expect ~2% more CPU performance per clock from IB compared to SB. 15% would be massive, I doubt even Haswell will be able to pull this off (except using new instructions).

Regarding higher clocks: Intel doesn't think it's necessary right now. And they're probably right about this..

Regarding more cores: Intel is happy to give these to you.. in a fancy socket 2011 dress with 4 memory channels. Going triple channel on the mainstream platform wouldn't have been cost effective.

MrS
____________
Scanning for our furry friends since Jan 2002

Retvari Zoltan
Joined: 20 Jan 09
Posts: 2343
Credit: 16,206,655,749
RAC: 261,147
Message 23284 - Posted: 7 Feb 2012 | 12:57:43 UTC - in response to Message 23282.

i7 800: outdated and you don't save much by going for S1156 compared to S1155.

You could save a lot if you could buy a used i7-8x0 series CPU. Also, the socket 1156 motherboards (even new ones) are cheaper. A cruncher MB does not require SATA3 or USB3.0; only PCIe3 will matter, once Kepler and IB arrive in April, so I would rather wait until then before buying anything.

i7 900: even more outdated and 30 W higher idle power consumption of the system. Bad choice (in the context of this thread).

Idle power of a cruncher PC? This argument made me LOL. :D

Regarding higher clocks: Intel doesn't think it's necessary right now. And they're probably right about this..

Agreed. If someone needs the extra speed, one could buy either a K or X series CPU.

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 23288 - Posted: 7 Feb 2012 | 13:53:44 UTC - in response to Message 23284.

Idle power of a cruncher PC? This argument made me LOL. :D

Sorry, I was actually thinking of something.. :p
If you measure idle power, the (modern) CPU is basically out of the equation. What's drawing power then is first and foremost the mainboard chipset, followed by RAM, drives etc.

So if a platform (with CPUs with similar excellent power saving features) draws 30 W more at idle, the same holds true under load. Example for a 100 W CPU, and simplifying a little:

S1155: idle 40 W, + 100 W CPU -> 140 W
S1366: idle 70 W, + 100 W CPU -> 170 W

MrS
____________
Scanning for our furry friends since Jan 2002

skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 23299 - Posted: 8 Feb 2012 | 1:35:22 UTC - in response to Message 23288.

The i7-800's are only 8-thread CPU's (plenty for most), whereas the i7-980 and similar are 32nm 12-thread processors. So for those of us who crunch CPU projects:
140 W / 8 = 17.5 W/thread and 170 W / 12 = 14.2 W/thread.
So if you want more threads it's 1366 or 2011. 1366 is cheaper but doesn't support PCIE3; 2011 is more expensive to purchase but cheaper to run, and 'some' boards support PCIE3. Let's hope April's arrivals are PCIE3 competent.
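The same watts-per-thread comparison as a sketch, reusing the simplified platform figures from the previous post (assumptions, not measurements):

# Watts per CPU thread for the two example platforms (simplified figures).
systems = {
    "S1155/S1156, 8 threads": (140.0, 8),    # ~40 W platform + 100 W CPU
    "S1366, 12 threads":      (170.0, 12),   # ~70 W platform + 100 W CPU
}

for name, (watts, threads) in systems.items():
    print(f"{name}: {watts / threads:.1f} W per thread")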
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

raTTan
Joined: 17 Mar 11
Posts: 7
Credit: 28,985,881
RAC: 0
Message 23437 - Posted: 13 Feb 2012 | 2:56:24 UTC

Just please remember that while we compute for cures, we are also contributing to disease via environmental degradation. I can understand using your normal computer to crunch in the off-time, but it's questionable whether buying multi-kilowatt machines specifically for this is worthwhile. Chances are much of your energy comes from non-green sources, and even if it doesn't, don't forget that a lot of pollution goes into the manufacturing of computer parts.

I'm sure it's fine if only a limited few are doing this, but I don't think it would be reasonable for everyone to have 1000-watt computers running 24/7. And please, if you compute in the summer (I don't), don't put it in an air-conditioned area, because that will effectively triple (which I think is a reasonable approximation) your energy consumption to compensate for it. On the flip side you could use it as your heater in the winter, which would effectively mean you are running it for free if you normally need a lot of heat in the area you have it :]

Retvari Zoltan
Joined: 20 Jan 09
Posts: 2343
Credit: 16,206,655,749
RAC: 261,147
Message 23451 - Posted: 13 Feb 2012 | 13:39:36 UTC - in response to Message 23437.

Just please remember that while we compute for cures, we are also contributing to disease via environmental degradation.

This is hypocritical reasoning.

I can understand using your normal computer to crunch in the off-time, but it's questionable whether buying multi-kilowatt machines specifically for this is worthwhile. Chances are much of your energy comes from non-green sources, and even if it doesn't, don't forget that a lot of pollution goes into the manufacturing of computer parts.

It's all the same for gaming computers, and they don't generate any scientific progress, just pollution (and amusement).
Oh, and don't forget the known and unknown multi-megawatt supercomputers, used for rendering movies, breaking codes, monitoring phone calls, simulating nuclear weapons etc.

I'm sure it's fine if only a limited few are doing this, but I don't think it would be reasonable for everyone to have 1000-watt computers running 24/7. And please, if you compute in the summer (I don't), don't put it in an air-conditioned area, because that will effectively triple (which I think is a reasonable approximation) your energy consumption to compensate for it.

Triple (200% more energy for cooling) is an overestimation. An air conditioner is a heat pump; it consumes only a fraction of the energy it transfers. 15-20% more energy for cooling is reasonable. The Sun heats the Earth's surface in the summer at approximately 1.5-2kW per square meter, so a 1kW computer is not much extra.
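To see why "triple" is far too pessimistic, here is the heat-pump arithmetic as a sketch; the COP values are typical assumptions for room air conditioners, not measurements of any particular unit:

# Extra electricity the AC needs to pump the PC's heat back outside.
pc_watts = 1000.0

for cop in (3.0, 4.0, 5.0):   # coefficient of performance (heat moved / power used)
    extra = pc_watts / cop
    print(f"COP {cop:.0f}: ~{extra:.0f} W extra, i.e. about {extra / pc_watts:.0%} on top")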

On the flip side you could use it as your heater in the winter, which would effectively mean you are running it for free if you normally need a lot of heat in the area you have it :]

This is self-justification. It's not free, because electricity costs and pollutes about double what regular heating methods do. It's worth it when the heating is a side effect of crunching.

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 23454 - Posted: 13 Feb 2012 | 20:27:40 UTC

Compute centers usually calculate with a 1:1 ratio between generated heat and power needed to cool it down. So I agree: running PCs "just for fun" with an AC is not the best idea in the world.. although I understand it's kind of normal in some southern US states.

Running in winter: in Europe and colder regions it's normal to use much more (money-)efficient heating than electricity. However, as far as I understand, it's rather normal in (maybe again southern?) US states to heat with electricity. In this case it's a 1:1 exchange, so "basically free".

MrS
____________
Scanning for our furry friends since Jan 2002

K1atOdessa
Joined: 25 Feb 08
Posts: 249
Credit: 386,970,941
RAC: 1,283,450
Message 23456 - Posted: 14 Feb 2012 | 4:11:41 UTC - in response to Message 23454.

Electric heat pumps are common in the southern US, where temperatures generally do not fall below freezing. Further north, natural gas is common (what I have in North Carolina), as well as standard heating oil. It's a mixed bag. Geothermal is available, but not widely used to date.

bigtuna
Volunteer moderator
Joined: 6 May 10
Posts: 80
Credit: 98,784,188
RAC: 0
Message 23592 - Posted: 21 Feb 2012 | 13:21:53 UTC - in response to Message 23456.

Been crunching a while and have had 2 different systems fail due to heat, so be careful if you try to cram too many hot things in a box.

Had a 6-core 1055T with 3x 9800GT cards crunching and the motherboard caught on fire!

Had 2x ATI HD 5770 cards crunching and the hot one (the top one that had restricted air flow) gave up the ghost. Separate those cards if you want them to last.

mikey
Joined: 2 Jan 09
Posts: 297
Credit: 6,133,081,625
RAC: 30,069,742
Message 23593 - Posted: 21 Feb 2012 | 13:56:25 UTC - in response to Message 23592.

Been crunching a while and have had 2 different systems fail due to heat, so be careful if you try to cram too many hot things in a box.

Had a 6-core 1055T with 3x 9800GT cards crunching and the motherboard caught on fire!

Had 2x ATI HD 5770 cards crunching and the hot one (the top one that had restricted air flow) gave up the ghost. Separate those cards if you want them to last.



Or use something like this:
http://www.netstor.com.tw/_03/03_02.php?ODI=
There are several different sizes available.

Damaraland
Joined: 7 Nov 09
Posts: 152
Credit: 16,181,924
RAC: 0
Message 23597 - Posted: 21 Feb 2012 | 15:10:48 UTC - in response to Message 23592.

Heat always reduces the life of electronic components.

Had 2x ATI HD 5770 cards crunching and the hot one (the top one that had restricted air flow) gave up the ghost. Separate those cards if you want them to last.

That's why I chose the MSI board I mentioned before. The two PCIe slots are well spaced, with plenty of room for air flow between 2x GTX 560s.


bigtuna
Volunteer moderator
Joined: 6 May 10
Posts: 80
Credit: 98,784,188
RAC: 0
Message 23603 - Posted: 22 Feb 2012 | 4:23:29 UTC - in response to Message 23593.

Or use something like this:
http://www.netstor.com.tw/_03/03_02.php?ODI=
There are several different sizes available.


That looks cool, but how does it work? It looks like all the bandwidth goes to a single PCIe 1x slot??

mikey
Joined: 2 Jan 09
Posts: 297
Credit: 6,133,081,625
RAC: 30,069,742
Message 23607 - Posted: 22 Feb 2012 | 15:54:02 UTC - in response to Message 23603.

Or use something like this:
http://www.netstor.com.tw/_03/03_02.php?ODI=
There are several different sizes available.


That looks cool, but how does it work? It looks like all the bandwidth goes to a single PCIe 1x slot??


I don't personally use one so can't answer that, but I am sure the company will.

Damaraland
Joined: 7 Nov 09
Posts: 152
Credit: 16,181,924
RAC: 0
Message 23609 - Posted: 22 Feb 2012 | 16:40:49 UTC - in response to Message 23607.

I don't personally use one so can't answer that, but I am sure the company will.

There's a pdf with specs on the link

bigtuna
Volunteer moderator
Joined: 6 May 10
Posts: 80
Credit: 98,784,188
RAC: 0
Message 23618 - Posted: 23 Feb 2012 | 3:24:17 UTC

I'm having a little trouble with the math but assuming I'm reading the specs correctly that Turbo Box will be a bottleneck. It looks like the Turbo Box runs from a 4x link that must be shared between video cards. The stated top speed is 20Gb/s (note the lower case "b" indicating bits).

skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 23624 - Posted: 23 Feb 2012 | 12:13:36 UTC - in response to Message 23618.

I would not recommend that for crunching with a top GPU, and I would not go with a laptop-based system anyway. It would cost less to build a desktop box than to go with that thing.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

bigtuna
Volunteer moderator
Joined: 6 May 10
Posts: 80
Credit: 98,784,188
RAC: 0
Message 23683 - Posted: 28 Feb 2012 | 6:27:28 UTC - in response to Message 23618.

I'm having a little trouble with the math but assuming I'm reading the specs correctly that Turbo Box will be a bottleneck. It looks like the Turbo Box runs from a 4x link that must be shared between video cards. The stated top speed is 20 Gb/s (note the lower case "b" indicating bits).
According to this:


http://en.wikipedia.org/wiki/List_of_device_bit_rates#Computer_buses

a PCIe 2.0 4x bus is good for 16 Gb/s which is the closest thing on the list. This is equivalent to PCIe 1.0 8x speed.

From my crunch boxes with multiple cards I can tell you that 4x vs 16x does make a difference, but not a huge difference. Cards in the 16x slot run perhaps 10% faster than they do in the 4x slot.

Now sharing a 4x slot would be even worse and a high end GPU would make it worse as well.

The laptop link is only 5 Gb/s so the situation is worse still on a laptop.
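A quick sketch of what that sharing means per card, using the link speeds quoted above (rounded figures, for illustration only):

# Bandwidth left per GPU when several cards share one upstream link.
def per_card_gb_s(link_gbit_s, n_cards):
    return link_gbit_s / 8.0 / n_cards

for link_name, gbits in (("PCIe 2.0 x4 uplink", 16.0), ("5 Gb/s laptop link", 5.0)):
    for n in (1, 2, 4):
        print(f"{link_name}, {n} card(s): ~{per_card_gb_s(gbits, n):.2f} GB/s each")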

Wdethomas
Joined: 6 Feb 10
Posts: 38
Credit: 274,204,838
RAC: 0
Message 41626 - Posted: 6 Aug 2015 | 19:50:33 UTC

I am using a SYS4027GR-TR server from Supermicro. It has an eight-GPU capacity. Running right now with six GPUs: four GTX 780 Ti, one GTX Titan and one GTX Titan X. The server has two six-core E5-2600 v2 CPUs. It has two 1000W PSUs plus two more PSUs for redundancy. All works fine. 64 dB is the noise level at two feet from the server. I will need to get some 90-degree PCIe 8-pin and 6-pin adapters because the lid will not close, as the power connectors on the GPUs are on top. Small problem, but it can be fixed.