
Message boards : Graphics cards (GPUs) : Top hosts exceed 30,000+ RAC

Author Message
J.D.
Send message
Joined: 2 Jan 09
Posts: 40
Credit: 16,762,688
RAC: 0
Level
Pro
Message 6046 - Posted: 27 Jan 2009 | 2:45:25 UTC
Last modified: 27 Jan 2009 | 2:47:15 UTC

I just noticed that now two machines exceed the 30,000 mark of recent average credit. (Top Hosts)

Anyone care to speculate when the first machine will exceed 40K and 50K of RAC?
:-)
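[Editor's note: for context on how quickly RAC can move, BOINC's recent average credit is an exponentially smoothed average that, to my understanding, decays with roughly a one-week half-life, so it approaches a host's steady daily credit rate asymptotically. A minimal Python sketch of that behaviour; the half-life and the credit figures are illustrative assumptions, not project data:]

```python
def rac_after(days, rac0, daily_credit, half_life_days=7.0):
    """Exponentially smoothed RAC after `days` of steady output.

    BOINC's RAC decays toward the host's daily credit rate; the
    one-week half-life used here is an assumption for illustration.
    """
    decay = 0.5 ** (days / half_life_days)
    return daily_credit + (rac0 - daily_credit) * decay

# Hypothetical host earning 60,000 credits/day, starting at RAC 30,000:
# step forward in tenths of a day until RAC crosses the 40K mark.
days = 0.0
while rac_after(days, 30_000, 60_000) < 40_000:
    days += 0.1
# `days` ends up around 4 days, which matches how fast the top
# hosts in this thread climbed from 30K to 40K.
```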

Matthew Lei
Avatar
Send message
Joined: 4 Dec 08
Posts: 7
Credit: 2,718,779
RAC: 0
Level
Ala
Message 6047 - Posted: 27 Jan 2009 | 4:48:58 UTC

Care to share the specs of your host?

J.D.
Send message
Joined: 2 Jan 09
Posts: 40
Credit: 16,762,688
RAC: 0
Level
Pro
Message 6051 - Posted: 27 Jan 2009 | 6:33:58 UTC - in response to Message 6047.
Last modified: 27 Jan 2009 | 6:35:26 UTC

It's a 64-bit Linux system with a total of 4 GT200-class CUDA devices, made possible by the 2-in-1 GeForce GTX 295. The Phenom 9550 CPU cores are not as impressive as those of a Core i7, but they always seem able to keep the GPUs satisfied. Actively running more than two CUDA devices required an upgrade from a 750 Watt to a 1000 Watt power supply, now a Zalman ZM1000-HP.

Meanwhile... anyone running an eight-GPU, quad GTX 295 rig yet? ;-)

rapt0r
Send message
Joined: 4 Sep 08
Posts: 16
Credit: 9,366,617
RAC: 0
Level
Ser
Message 6053 - Posted: 27 Jan 2009 | 9:55:41 UTC - in response to Message 6051.
Last modified: 27 Jan 2009 | 9:57:29 UTC

Considering the price of this AMD Phenom, the power it delivers is nonetheless impressive. And now you can upgrade to a Phenom II; show me an Intel system with that kind of platform compatibility.

Question: do you have 2 CPU sockets?

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Message 6071 - Posted: 27 Jan 2009 | 23:46:09 UTC - in response to Message 6053.

Sorry, but what are you talking about? This is about the GPUs.. you just need a board with 2 PCIe slots, 2 GTX 295s and preferably 4 CPU cores, though on Linux fewer may do for 4 GPUs.

MrS
____________
Scanning for our furry friends since Jan 2002

Profile Paul D. Buck
Send message
Joined: 9 Jun 08
Posts: 1050
Credit: 37,321,185
RAC: 0
Level
Val
Message 6073 - Posted: 28 Jan 2009 | 0:07:53 UTC - in response to Message 6071.

Sorry, but what are you talking about? This is about the GPUs.. you just need a board with 2 PCIe slots, 2 GTX 295s and preferably 4 CPU cores, though on Linux fewer may do for 4 GPUs.

MrS


With 6.62 you might not even need that much on Windows ... I am seeing less than 1% average CPU use with that application. At that rate you could even run all 4 GPU cores on a single-CPU system (if we ignore bus bandwidth and CPU I/O bandwidth issues) ...
____________

Profile UL1
Send message
Joined: 16 Sep 07
Posts: 56
Credit: 35,013,195
RAC: 0
Level
Val
Message 6100 - Posted: 28 Jan 2009 | 11:32:41 UTC - in response to Message 6046.

J.D. wrote:
Anyone care to speculate when the first machine will exceed 40K and 50K of RAC? :-)

" If " the rig will crunch without producing computation errors and doesn't freeze I'd expect the 40K to be reached around the weekend... ;) (Knock on wood)

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Message 6139 - Posted: 28 Jan 2009 | 21:37:20 UTC - in response to Message 6100.

Good luck mate! That would be almost half the output of my entire team.. :D

@Paul: yes, with 6.62 fewer cores may be perfectly fine. If they're not, I wouldn't look for a problem with bandwidth (because that's in the realm of nanoseconds) but rather the 1 ms scheduler interval. If a single CPU core is busy serving GPU 1 and right now GPUs 2, 3 and 4 also need *whatever*, then they'll have to wait until serving GPU 1 is done and the scheduler grants them a time slice. Thus I chose the careful term "preferably" ;)

MrS
____________
Scanning for our furry friends since Jan 2002

Profile Paul D. Buck
Send message
Joined: 9 Jun 08
Posts: 1050
Credit: 37,321,185
RAC: 0
Level
Val
Message 6142 - Posted: 29 Jan 2009 | 0:08:27 UTC - in response to Message 6139.

Good luck mate! That would be almost half the output of my entire team.. :D

@Paul: yes, with 6.62 fewer cores may be perfectly fine. If they're not, I wouldn't look for a problem with bandwidth (because that's in the realm of nanoseconds) but rather the 1 ms scheduler interval. If a single CPU core is busy serving GPU 1 and right now GPUs 2, 3 and 4 also need *whatever*, then they'll have to wait until serving GPU 1 is done and the scheduler grants them a time slice. Thus I chose the careful term "preferably" ;)

MrS


Um, well that is what I would class as CPU I/O bandwidth, because the CPU has only the one channel to service the interrupts ... a rose by any other name ...

But even a multi-core system still has potential bandwidth issues for the same reason, unless the motherboard has distinct and separate channels for each GPU to be serviced. Then we get into the same issue with the dual-GPU, and soon-to-come quad-GPU, cards, where there is one I/O channel per card and the two/four GPU cores contend for service at the same time.

This is an issue that has dogged PCs practically forever ... though the CPUs we use have more power than the mainframe CPUs of yore, the I/O simply is not really there ... they are getting there, slowly ... but some of those old systems were masters at I/O ...

In any case, in my opinion we are in violent agreement ...

J.D.
Send message
Joined: 2 Jan 09
Posts: 40
Credit: 16,762,688
RAC: 0
Level
Pro
Message 6213 - Posted: 30 Jan 2009 | 12:53:27 UTC - in response to Message 6100.

" If " the rig will crunch without producing computation errors and doesn't freeze I'd expect the 40K to be reached around the weekend... ;) (Knock on wood)


40K!
Even sooner than the weekend. :-)

Profile UL1
Send message
Joined: 16 Sep 07
Posts: 56
Credit: 35,013,195
RAC: 0
Level
Val
Message 6214 - Posted: 30 Jan 2009 | 15:24:10 UTC - in response to Message 6213.

I was pleasantly surprised too... especially because the rig had freezes and produced computation errors...
My next estimate would be 50K by next Wednesday... ;)

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Message 6257 - Posted: 31 Jan 2009 | 16:56:01 UTC - in response to Message 6142.

Um, well that is what I would class as CPU I/O bandwidth, because the CPU has only the one channel to service the interrupts ... a rose by any other name ...


I'm still not convinced.

Is there an interrupt at all? I don't know about the new method, but as far as I understand, the polling is not an interrupt; it's just a normal task switch, which the scheduler would have done anyway.

The way I see it: a single core executes only one thread at a time. Thus, when multiple GPUs need work, all except one are blocked.. no matter how much I/O bandwidth you give that CPU, it couldn't execute the other threads at the same time. If you have multiple CPUs (be they physical ones, more cores, or logical ones via multithreading), then each of them can process one thread at the same time and, with otherwise perfect software, lags / breaks could be avoided. What I need is the ability to execute several threads at once, not I/O bandwidth.

So.. I'm not sure if we're talking about the same thing ;)

MrS
____________
Scanning for our furry friends since Jan 2002
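[Editor's note: the blocking argument above can be sketched in a few lines of Python. One feeder thread serves each GPU in turn, so the last GPU's wait grows with the number of GPUs; one thread per GPU lets all requests be serviced concurrently. The thread counts and the 50 ms "service time" are made-up illustration values, not measurements from the application.]

```python
import threading
import time

def feed_gpus_single_thread(service_time, n_gpus):
    """One CPU thread services every GPU in turn: each request
    waits behind all earlier ones, so worst-case latency grows
    with the number of GPUs (the effect described above)."""
    waits = []
    start = time.monotonic()
    for gpu in range(n_gpus):
        waits.append(time.monotonic() - start)  # how long this GPU waited
        time.sleep(service_time)                # pretend to feed the GPU
    return waits

def feed_gpus_one_thread_each(service_time, n_gpus):
    """One thread per GPU: requests are picked up almost at once,
    so no GPU has to wait for another (scheduler permitting)."""
    waits = [0.0] * n_gpus
    start = time.monotonic()

    def serve(gpu):
        waits[gpu] = time.monotonic() - start   # wait before service begins
        time.sleep(service_time)

    threads = [threading.Thread(target=serve, args=(g,)) for g in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return waits

serial = feed_gpus_single_thread(0.05, 4)    # waits climb: ~0, 0.05, 0.10, 0.15 s
parallel = feed_gpus_one_thread_each(0.05, 4)  # all waits near zero
```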

Profile Edboard
Avatar
Send message
Joined: 24 Sep 08
Posts: 72
Credit: 12,410,275
RAC: 0
Level
Pro
Message 6307 - Posted: 2 Feb 2009 | 14:41:36 UTC

Four GPUs and 4 CPU cores means one GPUGRID WU per core and thus no WU in cache. I have a 2-core CPU and a GTX 295 (2 GPUs), and I cannot get the BOINC scheduler to feed them without my personal intervention (which, e.g., is impossible if I'm sleeping) (BOINC 6.6.3).

Phoneman1
Send message
Joined: 25 Nov 08
Posts: 51
Credit: 980,186
RAC: 0
Level
Gly
Message 6311 - Posted: 2 Feb 2009 | 15:38:25 UTC - in response to Message 6307.

Four GPUs and 4 CPU cores means one GPUGRID WU per core and thus no WU in cache. I have a 2-core CPU and a GTX 295 (2 GPUs), and I cannot get the BOINC scheduler to feed them without my personal intervention (which, e.g., is impossible if I'm sleeping) (BOINC 6.6.3).


As mentioned in another thread recently, 6.6.3 has a problem with uninitialized variables. Sooner or later, it won't get GPU work reliably.

Boinc version 6.5.0 seems to cause the least trouble; get it from here.

Phoneman1

Profile UL1
Send message
Joined: 16 Sep 07
Posts: 56
Credit: 35,013,195
RAC: 0
Level
Val
Message 6435 - Posted: 5 Feb 2009 | 14:09:54 UTC

Couldn't keep my promise to reach 50K just in time (yesterday)...
...and am wondering if anyone else had some "ghost WUs"...?

Explanation: during the last two days I had WUs that could only be seen by the BOINC manager, but not in the website's task list. So I had eight WUs to crunch while the task list showed only five or six as "in progress". That would have been acceptable... if these WUs had been listed after they finished and were submitted... but they seem to have vanished into the Lost-WU-Nirvana... an unnecessary loss of time and credits... kind of annoying...

Alain Maes
Send message
Joined: 8 Sep 08
Posts: 63
Credit: 1,650,875,008
RAC: 2,241,306
Level
His
Message 6436 - Posted: 5 Feb 2009 | 14:46:06 UTC - in response to Message 6435.

Yes, I thought something like that was happening too.

Further investigation showed me that these WUs were on the web page task list, but on page two or even three. So just try "next" at the top of the web page to see your next 20 WUs, and so on. That is where you will find them.

Kind regards

Alain

Profile UL1
Send message
Joined: 16 Sep 07
Posts: 56
Credit: 35,013,195
RAC: 0
Level
Val
Message 6439 - Posted: 5 Feb 2009 | 15:40:26 UTC

When I saw that there were fewer than the usual eight tasks "in progress" I checked the previous task pages, but with no success: I couldn't find any new ones. Also, after submitting these "ghosts", neither the 'avg. cr' nor the 'tot. cr' for this host changed...

Phoneman1
Send message
Joined: 25 Nov 08
Posts: 51
Credit: 980,186
RAC: 0
Level
Gly
Message 6440 - Posted: 5 Feb 2009 | 16:26:06 UTC - in response to Message 6439.
Last modified: 5 Feb 2009 | 16:26:40 UTC

UL1, your list of computers shows 4 x i7s, but 3 have not contacted the server this month. Those 3 also have a number of work units marked no reply. I wonder if the missing work units are to be found on these i7s?

Did you change your email details on this project or make some other change? If so, it might be worth merging those computers with the same name within this project.

Phoneman1

Profile UL1
Send message
Joined: 16 Sep 07
Posts: 56
Credit: 35,013,195
RAC: 0
Level
Val
Message 6449 - Posted: 5 Feb 2009 | 19:27:38 UTC - in response to Message 6440.

As you mentioned: these rigs haven't done anything for the project this month... but the days I was dealing with the 'ghosts' were late Monday and all of Tuesday...

And no: I didn't change anything... and these rigs will be back here as soon as they have cleared their cache over at SETI...

J.D.
Send message
Joined: 2 Jan 09
Posts: 40
Credit: 16,762,688
RAC: 0
Level
Pro
Message 6738 - Posted: 17 Feb 2009 | 23:53:05 UTC - in response to Message 6435.

Couldn't keep my promise to reach 50K just in time (yesterday)...


Woo! 50K!
Here too!
Okay, so my machine took 12 days longer, but still. :-)

Meanwhile, the stats haven't yet shown a machine over 60K...

pharrg
Send message
Joined: 12 Jan 09
Posts: 36
Credit: 1,075,543
RAC: 0
Level
Ala
Message 6780 - Posted: 19 Feb 2009 | 16:14:37 UTC

There are some motherboards out now that have more than 2 full PCI-e slots. The ASUS P6T6 WS Revolution, for example, can run 3 boards at full x16 speed. Many boards will run 3 or more cards, but few will run them all at full x16. This board, and I believe a few others from other companies, could run 3 GTX 295s at once at full speed, though I don't have the money to buy 3 of those right now.

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Message 6785 - Posted: 19 Feb 2009 | 18:49:04 UTC - in response to Message 6780.

x16 or x8 PCIe 2.0 is not critical for now. It is a bit critical, though, to run 6 GPUs (3 cards) in one box.. concerning PSU, cooling and noise.

MrS
____________
Scanning for our furry friends since Jan 2002
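[Editor's note: a rough sense of why x8 vs. x16 matters little here. PCIe 2.0 signals at 5 GT/s per lane with 8b/10b encoding, which works out to roughly 500 MB/s of payload per lane per direction; the back-of-envelope arithmetic below uses that figure:]

```python
# PCIe 2.0: ~500 MB/s of payload per lane, per direction
# (5 GT/s raw, minus 8b/10b encoding overhead).
PER_LANE_MB_S = 500

def slot_bandwidth_gb_s(lanes):
    """Approximate one-direction payload bandwidth of a PCIe 2.0 slot."""
    return lanes * PER_LANE_MB_S / 1000

x16 = slot_bandwidth_gb_s(16)  # ~8 GB/s
x8 = slot_bandwidth_gb_s(8)    # ~4 GB/s
```

Even the halved x8 figure dwarfs the occasional result uploads these workunits generate, which is consistent with the observation that slot width is "uncritical" for this project.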

Profile UL1
Send message
Joined: 16 Sep 07
Posts: 56
Credit: 35,013,195
RAC: 0
Level
Val
Message 6793 - Posted: 19 Feb 2009 | 22:02:21 UTC - in response to Message 6785.

pharrg wrote:
There are some motherboards out now that have more than 2 full PCI-e slots. The ASUS P6T6 WS Revolution, for example, can run 3 boards at full x16 speed. Many boards will run 3 or more cards, but few will run them all at full x16. This board, and I believe a few others from other companies, could run 3 GTX 295s at once at full speed, though I don't have the money to buy 3 of those right now.

Right at the moment I'm running 3 GTX 295s combined with an ASUS Rampage II Extreme, 3 9800GX2s are used together with an ASUS P6T6 WS... and I've got an ASRock X58 SuperComputer, capable of running 4 cards, waiting to be tested with GTXs. So there have been boards out there for a while now, waiting to be used for what they were designed for: heavy crunching... ;)

MrS wrote:
x16 or x8 PCIe 2.0 is not critical for now. It is a bit critical, though, to run 6 GPUs (3 cards) in one box.. concerning PSU, cooling and noise.

Both rigs use 850W PSUs from Antec to feed the GPUs and i7 CPUs, both oc'ed... without any problems... and I've got a 1250W Enermax PSU to be used if the four-card configuration works out. All rigs are air-cooled... and about the noise: well, that's something one has to accept to run such high-performance rigs... ;)

schizo1988
Send message
Joined: 16 Dec 08
Posts: 16
Credit: 10,644,256
RAC: 0
Level
Pro
Message 6794 - Posted: 19 Feb 2009 | 22:38:47 UTC - in response to Message 6793.

I was pleased to see that you are running 3 295s on a Rampage II Extreme, as I just ordered a new system yesterday with that same board and 2 295s but didn't know if it had room for a third one. My wallet, on the other hand, is not so pleased to learn this. I thought I was going to be able to put off an upgrade for quite a while, but now I find myself already trying to figure out how to afford a third one. Crunching is proving to be an expensive habit, but I think I am hooked at this point. If CPUs were coke, GPUs are most certainly crack.

Profile UL1
Send message
Joined: 16 Sep 07
Posts: 56
Credit: 35,013,195
RAC: 0
Level
Val
Message 6795 - Posted: 19 Feb 2009 | 22:49:32 UTC - in response to Message 6794.

You would need a case with 8 expansion slots to use three GTX 295s together with an R II Ex, e.g. a Lian Li V2010, Lian Li PC-P80 or a Lian Li V1000 Plus II (only 7 slots, but it will work)...

schizo1988
Send message
Joined: 16 Dec 08
Posts: 16
Credit: 10,644,256
RAC: 0
Level
Pro
Message 6803 - Posted: 20 Feb 2009 | 4:46:18 UTC - in response to Message 6795.


I was sorry to hear about needing 8 slots to install triple 295s, but you do mention a 7-slot case which can manage it, so I still have some hope, as my case is going to be a Cooler Master HAF 932, which has 7 slots. If not, I save a few bucks.
If I am not using my system for gaming and won't ever need SLI functionality, can I install a non-295 card in the extra slot, like a 280 or 285, or even my current 260?

Profile UL1
Send message
Joined: 16 Sep 07
Posts: 56
Credit: 35,013,195
RAC: 0
Level
Val
Message 6805 - Posted: 20 Feb 2009 | 6:25:46 UTC - in response to Message 6803.

Sorry, but I think it won't work in your case: the Lian Li I mentioned has the mobo installed upside down, so there's some space between the edge of the mobo and the top of the case, enough for the part of the GTX that overhangs...
The combo of 2 GTX 295s + another single-slot card should work...
