Message boards : Number crunching : Specs of the GPUGRID 4x GPU lab machine
The technical specifications for this machine are the following: | |
ID: 4110 | Rating: 0 | rate: / Reply Quote | |
Wanna have it! | |
ID: 4111 | Rating: 0 | rate: / Reply Quote | |
You sure that's an Antec Gamer Twelve Hundred Ultimate Gamer case??? I've been looking at them and in fact bought one this morning for $119 shipped, and it sure doesn't look like what's in the picture ... ??? | |
ID: 4112 | Rating: 0 | rate: / Reply Quote | |
It is not, you are right. Ours is more expensive and fits two 750W power supplies. One burnt out and we got the 1500W instead. | |
ID: 4113 | Rating: 0 | rate: / Reply Quote | |
:) | |
ID: 4114 | Rating: 0 | rate: / Reply Quote | |
4x NVIDIA GTX 280 Not having any machines with multiple cards, I wonder: does a 4-card machine produce 4x the credit of a single-card machine, or is there some trade-off involved? | |
ID: 4116 | Rating: 0 | rate: / Reply Quote | |
Four times, and maybe more from next year if we manage to use multiple cards at once. | |
ID: 4117 | Rating: 0 | rate: / Reply Quote | |
I've been looking at them and in fact bought 1 this morning for $119 Shipped and it sure doesn't look like whats in the picture ... ??? Doh.. the 4 GPUs are missing in your case! ;) MrS ____________ Scanning for our furry friends since Jan 2002 | |
ID: 4118 | Rating: 0 | rate: / Reply Quote | |
hahaha ... not for long, I ordered 4 280's as accessories ... ;) | |
ID: 4123 | Rating: 0 | rate: / Reply Quote | |
4x NVIDIA GTX 280 Hi, how long does a GTX 280 card need to finish one task? How many points per day can this card produce? I'm asking because I think something is wrong with my 9800GT: every task finishes in about 12 hours. That's too much, no? Thanks a lot for the feedback - sorry for my English :D P. | |
ID: 4219 | Rating: 0 | rate: / Reply Quote | |
how long need a one GT280 card to finish one task? Depending on the specific card, a GTX 280 will produce something on the order of 13,000 credits per day, or 4 WUs at about 6 hours each. Im asking, because i think, that something is wrong with my 9800GT. No, that looks about right for a 9800GT. | |
ID: 4221 | Rating: 0 | rate: / Reply Quote | |
thats nice performance.. | |
ID: 4222 | Rating: 0 | rate: / Reply Quote | |
With a 1500W PSU, are you consistently pulling 1.5 kW? | |
ID: 4315 | Rating: 0 | rate: / Reply Quote | |
Most likely not, but we have not measured it. On Linux you would be consuming just the power of the 4 GPUs, as the processor does almost nothing. I would say that each 280 should consume around 150W, so the total should be around 800-900W. Still quite a lot, even if GPUs are 10 times more power-efficient than CPUs for the same FLOPS. | |
ID: 4318 | Rating: 0 | rate: / Reply Quote | |
I'm not an electrical engineer, but I've seen my share of posts all over the web claiming that the higher the wattage of the PSU powering your case, the better efficiency you'll get. | |
ID: 4353 | Rating: 0 | rate: / Reply Quote | |
I'm not an electric engineer, but I've seen enough share of posts all over the web claiming that the higher watt powered PSUs powering up your case, the better efficiency you'll get. That would depend on how the PSU is built. A cheap one is usually built cheaply and is not as efficient. Some of the more expensive ones are more efficient, but more cost does not necessarily mean more efficiency. Another factor is heat: just as with CPUs, if the PSU is running cooler, it may be more efficient. A lot of what you hear may be true, but on a case-by-case basis. Older PSUs are rated at about 65% efficient. Some new PSUs have an '80 Plus' rating, but to get that they need to hit it at 20%, 50% and 100% load. Size in watts does not matter: a 1500W can have the same efficiency as a 250W if it is built so. | |
ID: 4360 | Rating: 0 | rate: / Reply Quote | |
Take a look here for some good information about power supplies. The bottom line is that PSUs usually have their best efficiency between 50 and 80% load, whereas below 20% it gets ugly. | |
ID: 4440 | Rating: 0 | rate: / Reply Quote | |
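To make the load-versus-efficiency point concrete, here is a small illustrative Python sketch. The numbers are my own stand-ins, not measurements: the efficiency curve is just shaped the way the posts above describe (poor below 20% load, best between 50% and 80%).

```python
# Illustrative only: a made-up efficiency curve shaped like the thread
# describes (poor below 20% load, best between 50% and 80%).
def efficiency(load_fraction):
    if load_fraction < 0.2:
        return 0.70          # light load: efficiency "gets ugly"
    if load_fraction < 0.5:
        return 0.82
    if load_fraction <= 0.8:
        return 0.85          # the 50-80% sweet spot
    return 0.82

def wall_draw(dc_load_watts, psu_rating_watts):
    """Watts pulled from the wall to deliver dc_load_watts of DC power."""
    return dc_load_watts / efficiency(dc_load_watts / psu_rating_watts)

# A ~900 W GPU-Grid load on a 1500 W PSU sits at 60% load, in the sweet spot:
print(round(wall_draw(900, 1500)))   # 1059 W at the wall
```

On this toy model, the lab machine's estimated 800-900 W draw lands the 1500W PSU right in its best operating range.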
So now that the GTX 295 is out, when are you upgrading this baby to 4 of them? | |
ID: 5386 | Rating: 0 | rate: / Reply Quote | |
So now that the GTX 295 is out. when are you upgrading this baby to 4 of them? I think they would wait for the GTX212. See this message thread. | |
ID: 5412 | Rating: 0 | rate: / Reply Quote | |
4 GTX 295 would be awesome.. if only for the sake of it :D | |
ID: 5445 | Rating: 0 | rate: / Reply Quote | |
We are trying to build one. We would like to know what is the real power consumption to see if we can cope with a 1500W PSU. | |
ID: 5458 | Rating: 0 | rate: / Reply Quote | |
Even if we assume the maximum TDP power draw of 300W for each card, the system would stay below 1.5 kW. I'd estimate power draw under GPU-Grid to fall between 200 and 250 W per card, so sustained draw should be fine. Problems may arise during peak draw and due to load distribution within the PSU. I'm not sure anyone could tell you reliably whether it will work.. without testing. | |
ID: 5462 | Rating: 0 | rate: / Reply Quote | |
4 GTX 295 would be awesome.. if only for the sake of it :D Is this correct? Does it really take a top of the line quad to keep them fed, whereas a Q6600 could not? | |
ID: 5468 | Rating: 0 | rate: / Reply Quote | |
4 GTX 295 would be awesome.. if only for the sake of it :D Who knows, maybe they want to run some real projects too ... In that case they want some real CPU power while they're playing with the GPU Grid thingie toys ... :) | |
ID: 5469 | Rating: 0 | rate: / Reply Quote | |
Is this correct? Does it really take a top of the line quad to keep them fed, whereas a Q6600 could not? It's not because it's a top-of-the-line CPU, it's because 4 GTX 295s have a total of 8 GPUs, so ideally you'd want 8 CPU cores to keep them busy. A system with the smallest i7 should still be cheaper and more power-efficient than a dual quad-core. Ahh, my bad: I was assuming the system would run Windows, which currently needs about 80% of one core for each GPU. The devs probably prefer Linux, where the CPU utilization is not a problem anyway. So forget the i7 comment! MrS ____________ Scanning for our furry friends since Jan 2002 | |
ID: 5470 | Rating: 0 | rate: / Reply Quote | |
Ahh, my bad. I was assuming the system would run windows, which currently needs about 80% of one core for each GPU. I run Windows Vista x64 right now, and the 6.55 app on BOINC 6.5.0 uses about 6-7% of my available CPU, or about 28% of one core. I bet you could run 4 of these with a Q6600 (with no other projects) and the bottleneck would not be the CPU. | |
ID: 5471 | Rating: 0 | rate: / Reply Quote | |
I just checked: currently I'm at 11-13% of my quad, whereas it used to be 15-20%. Anyway, if performance on the GPUs suffers even a little bit you're going to lose thousands of credits a day.. and bite yourself in the a** for not having more cores, or Linux, or a workaround for Windows ;) | |
ID: 5473 | Rating: 0 | rate: / Reply Quote | |
If you build a 4x GPU system, the chances of using your CPU for anything else are slim, and the i7 uses significantly more power than a C2Q (and DDR3 costs a lot more too). I just think it would be wise to save a couple hundred bucks and go C2Q. Maybe a Q9450 or Q9550 instead of a Q6600. | |
ID: 5475 | Rating: 0 | rate: / Reply Quote | |
I just don't like the looks of the i7's - they eat tons of power, and I've read lots of reports that they run really hot. I don't like the ridiculous price of the X58 boards, with tons of stuff I neither need nor want to pay for. But the processors themselves are great. Under load they don't use more power than an equally clocked Penryn quad but provide about 30% more raw number-crunching performance. And under medium load they use considerably less power than a Penryn, because they can switch individual cores off.

About running hot: I can imagine that the single die leads to higher temperatures at the same power consumption compared to a Penryn, where the heat is spread over 2 spatially separated dies. And I'd use proper cooling for any of my 24/7 CPUs anyway.

Edit: after reading your edit, maybe I should make my point more clear. The GPU crunches along until a step is finished. Once it's finished the CPU needs to do *something* ASAP, otherwise the GPU cannot continue. So if one CPU core feeds 2 GPUs there will be cases when both GPUs finish, but the CPU is still busy dealing with GPU 1 and cannot yet care about GPU 2. The total load of that core may be less than 100%, yet on average you'd lose performance on the GPUs. That's why I started my thinking with "1 core per GPU". Later I remembered that under Linux the situation is much less critical: if each GPU only needs about 1% of a core, I can imagine that a quad is good enough for 8 GPUs. MrS ____________ Scanning for our furry friends since Jan 2002 | |
ID: 5477 | Rating: 0 | rate: / Reply Quote | |
TDP for the GTX 295 is listed as 289 watts here: | |
ID: 5479 | Rating: 0 | rate: / Reply Quote | |
Oh, yeah.. that's what I meant by "300 W" :D | |
ID: 5481 | Rating: 0 | rate: / Reply Quote | |
I'm just curious how you can manage work for 4 GPUs. With a quad core, you can have just 4 WUs, and all of them will be being crunched (nothing in stock). When one WU is finished, one GPU will be idle. Even with <report_results_immediately>1, each of your GPUs will be idle for some time once every 6 hours - that means four times per 6 hours. It seems you are wasting at least a few hours of your GPUs' time every day. | |
ID: 5728 | Rating: 0 | rate: / Reply Quote | |
@AiDec | |
ID: 5734 | Rating: 0 | rate: / Reply Quote | |
Hope this helps ... Sorry, but I don't think so. He's asking how people are keeping 4 GPUs on a quad core fed, where you're limited to 4 WUs at a time. Buying an i7 could help, but that ruins the "good crunching power per investment" relation which makes us use the GPUs in the first place. MrS ____________ Scanning for our furry friends since Jan 2002 | |
ID: 5736 | Rating: 0 | rate: / Reply Quote | |
Hope this helps ... Hmm, I could have sworn that I had as many as 6 tasks locally ... my bad I guess ... I suppose if the project is limiting the downloads to a max of 4 total then I should wait some time before I run out and get a pair of 295s ... I suppose the problem will arise if you run more than GPU Grid on the machine. If you only run GPU Grid, then when a task is done, a 0.1-day queue or less should contact the scheduler and get more work ... I guess I am still missing something ... ____________ | |
ID: 5738 | Rating: 0 | rate: / Reply Quote | |
@Paul D. Buck Buying an i7 could help(...) It's not about what could help; it's about what happens now. Is the owner of this machine wasting a lot of his cards' time, or does he know some tricks? The question is whether there is any sense in having 4 GPUs (because I don't see any sense in having more than 2x280, since it's impossible to keep more than 2x280 100% busy). There are just a few questions that could tell us a lot about GPUGrid... I have been thinking about multiple GPUs in one machine for six months. I had a machine with 3x280 and I couldn't get 100% work for all GPUs. I've asked for 3x CPU tasks per computer, and I've asked for 2x CPU tasks per computer... and nothing happened. So I'm asking: what's the way to fill up 4 GPUs? ____________ | |
ID: 5739 | Rating: 0 | rate: / Reply Quote | |
@Paul D. Buck My bad ... I guess all that is left to me is to suggest water ... probably about 4 gallons worth ... :) I guess I am fortunate in that I only have small GPUs with only one core per card, so I don't see the walls ... I would guess the guy we have been working with on getting his 3 GTX 295s working is in for a disappointment too ... Sadly, there are only two projects that use the GPU on BOINC at the moment ... with GPU Grid being the best run to this point, with the stablest application. I probably won't go more nuts until Einstein@Home or some other project comes out with a GPU version of their application. If tax season does not hit me too hard I would like to build a machine again in April/May, and by then maybe these issues will be ironed out ... anyhow, sorry about the confusion ... ____________ | |
ID: 5740 | Rating: 0 | rate: / Reply Quote | |
I believe that the secret is that they don't run SETI (or any other project than this one). With no CPU-based projects, can't one effectively set up BOINC to use a '0+4' alignment by adjusting the CPU-use percentages, in the same way that others use a '3+1' in place of the default '4+1'? | |
ID: 5741 | Rating: 0 | rate: / Reply Quote | |
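One workaround worth mentioning for the "WUs limited to CPU-core count" problem discussed above (my suggestion; the thread doesn't confirm the project condones it) is the BOINC client's cc_config.xml `<ncpus>` override, which changes the CPU count the client reports to the scheduler:

```xml
<!-- cc_config.xml, placed in the BOINC data directory.
     <ncpus> overrides the detected CPU count, so a quad could report
     8 "cores" and be sent 8 WUs under the per-core limit.
     Use with care: it also affects how many CPU tasks BOINC runs. -->
<cc_config>
  <options>
    <ncpus>8</ncpus>
  </options>
</cc_config>
```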
I don't think they run BOINC on their lab machine... My guess is that they manually feed it with jobs... ;) | |
ID: 5742 | Rating: 0 | rate: / Reply Quote | |
Perhaps we can get ETA to prevail on the people that make decisions. We have at least one potential system out there that is going to have 6 cores that will be available for work. | |
ID: 5751 | Rating: 0 | rate: / Reply Quote | |
In my opinion there are two possibilities: | |
ID: 5754 | Rating: 0 | rate: / Reply Quote | |
We have at least one potential system out there that is going to have 6 cores that will be available for work. Take a look at this one: Triple GTX295 I do hope I can keep this configuration...and am not forced to split it up again because I can't feed it 24/7... | |
ID: 5756 | Rating: 0 | rate: / Reply Quote | |
Just to point it out clearly: currently GPU-Grid limits the number of concurrent WUs a PC can have to the number of cores, i.e. 4 on a "normal" quad and 8 on an i7. That's why you can have 6 WUs overall, Paul :) | |
ID: 5791 | Rating: 0 | rate: / Reply Quote | |
Just to point it out clearly: currently GPU-Grid limits the number of concurrent WUs a Pc can have to the number of cores, i.e. 4 on "normal" quad and 8 on an i7. That's why you can have 6 WUs overall, Paul :) Ok, I misunderstood ... or was not fully up to snuff on the details ... heck, I have only been here a couple weeks ... :) So, I can consider getting a triplex of 295's for the i7 machine ... cool ...

The real count should be the number of GPU cores plus one, I would think, rather than the number of CPU cores. But that is just me ...

As to the other comment, part of the problem is that the BOINC developers, like Dr. Anderson, don't seem inclined to listen to users that much. This has been a continual problem with the BOINC system in that the three groups don't really interact that well SYSTEM WIDE ... this is not a slam against GPU Grid or any other project specifically ... but, in general, the communication between BOINC developers, users (participants) and project staff is, ahem, poor at best ... THAT said, GPU Grid is at the moment one of the more responsive ... At one point in historical time Rosetta@Home was excellent ... six months later ... well ... it has never been the same ... Anyway, with the three groups isolated from each other and no real good structures to facilitate communication ... well ... real issues never get addressed properly ... ____________ | |
ID: 5793 | Rating: 0 | rate: / Reply Quote | |
Hi, | |
ID: 5796 | Rating: 0 | rate: / Reply Quote | |
The real count should be number of GPU cores plus one I would think rather than the number of CPU cores. But that is just me ... Edit: never mind this post, GDF's answer above says enough. Definitely, or even more if the server sees that results are returned very quickly. But BOINC has to know and report the number of GPUs reliably, which doesn't sound too hard but may not be the item of top priority. MrS ____________ Scanning for our furry friends since Jan 2002 | |
ID: 5797 | Rating: 0 | rate: / Reply Quote | |
Hi, Thanks for the answer ... almost as if we knew what we were doing ... :) ____________ | |
ID: 5802 | Rating: 0 | rate: / Reply Quote | |
We are trying to build one. We would like to know what is the real power consumption to see if we can cope with a 1500W PSU. http://www.extreme.outervision.com/psucalculator.jsp
- 9950 AMD Phenom X4 2.60GHz
- 2x DDR2 RAM
- 4x NVIDIA GTX 295
- 2x SATA HDD
- 1x CD/DVD
- 2x 92mm fans
- 4x 120mm fans
At 100% load, and adding 20% capacitor aging, that comes to 1429 watts. That's 110.9 amps on the 12-volt rail, though. AFAIK there isn't a PSU readily available that can produce that much current on the 12-volt rail. | |
ID: 6099 | Rating: 0 | rate: / Reply Quote | |
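As a quick sanity check on the calculator's figures, rail current is just the watts carried by that rail divided by its voltage. A short sketch (my own arithmetic; the 12 V share is inferred back from the quoted 110.9 A rather than read out of the calculator):

```python
# Back-of-envelope check of the calculator's figures (assumptions, not
# measurements): rail current = watts on that rail / rail voltage.
def rail_current(watts, volts=12.0):
    return watts / volts

implied_12v_watts = 110.9 * 12        # load implied by the quoted 110.9 A
print(round(implied_12v_watts))       # 1331 -> most of the 1429 W total
print(round(rail_current(implied_12v_watts), 1))  # 110.9 A
```

So the calculator is assuming nearly the entire 1429 W rides the 12 V rail, which is why the amperage figure looks so extreme.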
Some nice numbers, but sadly not useful. The GPUs won't run anywhere near their specified maximum power draw of ~300 W. This calculator has no idea how much power the cards will draw under 100% GPU-Grid load. And generally I've found that most calculators give vastly exaggerated numbers.. but I'd have to give this one the benefit of the doubt. | |
ID: 6138 | Rating: 0 | rate: / Reply Quote | |
Seems very tempting to go with dual 1500W power supplies.... Also, upgrading to a Phenom II X4 CPU would actually reduce power consumption a little. Or forget the whole thing and build a new Core i7 system with a dual-power-supply server chassis.... | |
ID: 6800 | Rating: 0 | rate: / Reply Quote | |
Dual 1.5 kW? If I look at the prices of these units I'm not tempted one little bit ;) And I'm sure you wouldn't need that much power, even for 4 GTX 295s. | |
ID: 6817 | Rating: 0 | rate: / Reply Quote | |
4 GTX 295 would be awesome.. if only for the sake of it :D Interesting discussion about needing 8 CPU cores to feed 8 GPUs. Leaving aside for a second the fact that *currently* GPUGRID won't download 8 WUs unless you have 8 cores, the question is whether, or more accurately by how much, having fewer than 8 CPU cores would slow down the GPUs. After thinking about it for a while, I don't think 8 CPU cores are required. Here's why.

The argument was made that if one core is feeding two GPUs, and both GPUs need new work at the same time, one will have to wait for the other to be serviced by the CPU. That is true. Let's call such an event a 'collision'. When a collision occurs, a GPU sits idle. That's bad. But it's not an accurate description of what's actually happening inside the computer. Let me explain.

In a computer with, say, a Q6600 and 4x GTX295 (8 GPUs), the above example simplifies the system from 4+8 to 1+2. While mathematically that arithmetic might be correct, it distorts (significantly) the metrics of how the system is going to perform. Assume that it takes 1/8 of a CPU core to service a GPU (which is about right on my Q6600+GTX280 running Vista). In a 1 CPU + 2 GPU system, with a purely random distribution of when the GPUs need new work, you would expect a collision to occur approximately 1/8 of the time. That's a significant performance hit. But let's look at what's happening on the real computer, which is 4+8, not 1+2.

Each of the 8 GPUGRID tasks is NOT assigned to a specific CPU core. There are lots (probably a hundred or so) of tasks running on the computer, and all of them get swapped into the register set of an individual CPU core when needed. When a task is pre-empted by another task, its register set is saved somewhere and another task takes over that core. Since BOINC tasks run at lower priority than anything else, they get pre-empted almost continuously, whenever the computer needs to do anything else, such as servicing interrupts.

As a result, the BOINC tasks should be hopping around between the four cores quite a lot. The important thing is that each GPU task is not running permanently on one of the four cores in the CPU; it's running on whichever core happens to be free at that instant. For a 1 core + 2 GPU system to have a collision, you merely need the second GPU to need new work while the other GPU is in the process of receiving new work. There's a 1/8 chance of this. But in the real computer, with 4 cores, for a collision to occur a GPU has to need new work while *five* of the other 7 GPUs are also requesting new work. What are the odds of that? (Someone correct me if my math is wrong; it's been decades since I studied probability.)

With 4 cores, up to 4 GPUs can request work at the same time with 0% probability of collision, because all 4 can be serviced at once. (Note that I'm simplifying this somewhat...) With the 5th GPU, what's the probability of a collision? For a collision to occur, all of the other GPUs would need to request new work at the same time. The odds of that happening are 1/(8^4), or approximately 0.025%. That's higher than the 0.00% rate with 4 GPUs, but certainly still an acceptable rate. With the 6th GPU, the probability rises: the chance of 4 of the other 5 GPUs needing servicing at the same time as the 6th GPU is (1+35)/(8^5), which works out to 36/32768, or about 0.11%. Still pretty reasonable. With the 7th GPU, the chance of 4 of the other 6 GPUs needing servicing at the same time is (1+42+5!*7^2)/(8^6). This evaluates to (1+42+120*49)/262144, or 5923/262144, or 2.26%. With the 8th GPU, the chance of 4 of the other 7 GPUs being busy at the same time is (1+49+7!*7^2+(6!/3!)*7^3)/(8^7), or (1+49+5040*49+120*343)/2097152, or (1+49+246960+41160)/2097152, or 288170/2097152, or 13.74%. So if you add up all the collision rates and average them out over all 8 GPUs, you end up with a grand total of about 2%.

Granted, there's a LOT of uncertainty and extrapolation in those calculations, but if correct, you would see only around a 2% degradation in performance by running on 4 cores instead of 8. FYI, the 1-in-8 CPU utilization factor is based on my experience with a 2.4GHz Q6600 running Windows Vista. I understand that under Ubuntu the CPU utilization is much lower; in that case the collision rate would drop dramatically, and 4 cores would be MORE than enough. Two (or even one) would probably suffice. I think I read somewhere about an upcoming change to GPUGRID to change the "1 WU per CPU core" rule to "1 WU per GPU". Assuming my calculations are valid, once that change is made there's really no reason to need 8 CPU cores to run 8 GPUs. Regards, Mike | |
ID: 7521 | Rating: 0 | rate: / Reply Quote | |
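Mike's estimate can be cross-checked with a tiny Monte Carlo sketch. This is my own toy model, not his exact combinatorics: at a random instant each GPU independently needs its 1/8-of-a-core service, and a requesting GPU "collides" when the other requesters already occupy all 4 cores.

```python
import random

def collision_rate(n_gpus=8, n_cores=4, service_frac=1/8, trials=200_000):
    """Estimate the probability that a requesting GPU finds no free core.

    Toy model: at a random instant, each of the other GPUs needs CPU
    service with probability service_frac; a collision happens when
    they already occupy all n_cores cores.
    """
    collisions = 0
    for _ in range(trials):
        others_busy = sum(random.random() < service_frac
                          for _ in range(n_gpus - 1))
        if others_busy >= n_cores:    # no free core left for this GPU
            collisions += 1
    return collisions / trials

random.seed(1)
print(f"{collision_rate():.3%}")   # well under 1% in this toy model
```

Under these assumptions the collision rate comes out below 1%, which at least supports the qualitative conclusion above: a quad core should feed 8 GPUs with only a small penalty.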
> "1 WU per CPU core" | |
ID: 7524 | Rating: 0 | rate: / Reply Quote | |
Hi Mike, | |
ID: 7535 | Rating: 0 | rate: / Reply Quote | |
After writing that long post, I was thinking about it some more, and realized that the impact would be even less than my calculations showed. | |
ID: 7538 | Rating: 0 | rate: / Reply Quote | |
Yes, I guess the application is set up for multiprocessing - that's the power of these cards, doing much at once. | |
ID: 7546 | Rating: 0 | rate: / Reply Quote | |
I don't think the power supply is going to be the only problem when trying to run 4 295's; space is going to be a problem too, I would imagine, as these things are big. What type of motherboard are you planning on using, and what type of case? These things generate lots of heat, so cooling is definitely going to be a concern as well, particularly since rather than venting the heat out the back of the card and outside the case, these cards vent out the side directly into your case. While the cost in the summer will be high, you will be able to save money in the winter, as you could heat a small home with 4 295's running at 100% 24/7. | |
ID: 7835 | Rating: 0 | rate: / Reply Quote | |
My GTX 295's push hot air outside the case and suck it from inside. Four I would have to see. I think my sound cables are starting to melt, being directly above the 295. The heat is extreme. | |
ID: 7994 | Rating: 0 | rate: / Reply Quote | |
I just had a thought on the WU cache issue: why not make it a user-selectable value in preferences, say a default of 2 per host at a time, up to whatever the user believes his/her box can handle? | |
ID: 8287 | Rating: 0 | rate: / Reply Quote | |
1. Your motherboard is a motherboard with an AMD chipset (https://www.megamobile.be/productinfo/103151/Moederborden/MSI_K9A2_Platinum_-4x_PCI_Express_x16,_AMD%C2%AE_CrossF/); that means it only has ATI's CrossFireX, so you use only the first gfx card lolzor | |
ID: 8306 | Rating: 0 | rate: / Reply Quote | |
1. your motherboard is an motherboard with an AMD Chipset(https://www.megamobile.be/productinfo/103151/Moederborden/MSI_K9A2_Platinum_-4x_PCI_Express_x16,_AMD%C2%AE_CrossF/) that means it only has ATI's crossfire X so you use only the first gfx card lolzor As for 1: for GPUGRID, SLI has to be disabled, so it also works on CrossFire boards. 4 PCI-e slots = 4 cards that will be used, not only the first one. Do you really think the GPUGRID team built a machine with 4 graphics cards and wouldn't know if only one card were being used? As for 2: see point 1. SLI has to be disabled, so it doesn't matter whether it is an SLI or CrossFire board, or whether it is triple-, quad- or octa-way. ;-) Conclusion: sorry, but you obviously don't know what you're talking about... Next time do a little more research before posting things like this. ;-) ____________ pixelicious.at - my little photoblog | |
ID: 8309 | Rating: 0 | rate: / Reply Quote | |
Totally off topic, but instead of pimping its hardware maybe GPUGRID could get itself some reliable hardware that is actually up five 9s and has workunits available occasionally. As it is, I am about to go back to Folding@home with my PS3 and say heck with the BOINC credits. I would rather the machine contribute to science than sit there as a paperweight. | |
ID: 8381 | Rating: 0 | rate: / Reply Quote | |
...instead of pimping its hardware maybe gpugrid could get itself some reliable hardware that is actually up five 9s and has workunits available occasionally. Five 9's? Their hardware wasn't at fault; they lost power over a holiday weekend when nobody was around.

Have you ever built a datacenter with that kind of reliability? Or rented space in a hosting facility that provides that kind of reliability? I have. It's horrendously expensive, but it's necessary for some applications such as financial services (can't have the stock markets going down every time there's a glitch, can we?). It's not just having reliable hardware. No hardware is perfect. So your software and network need to be designed to continue operating regardless of any single failure, which means having redundant everything -- multiple phone companies, UPS and generator power, geographically diverse locations for your redundant datacenters, etc.

As annoying as it is, outages such as this are merely an inconvenience, and building up a high-availability system to run a BOINC project would be an incredible *RECURRING* waste of money that could be better spent elsewhere. There's no reason why a system like this needs that kind of resiliency.

That being said, I don't mean to imply that their system couldn't be more resilient than it is today. But we don't have a lot of facts about what happened over the weekend, other than it being due to a power outage. If it was a short power outage and they don't have UPS, then, yes, we can fault them for not having UPS. But if it was an extended power outage, even a large and obscenely expensive UPS wouldn't be sufficient, and backup generator power would have been required to keep the system running. Cost aside, in some locations a generator isn't even an option. Mike ____________ Want to find one of the largest known primes? Try PrimeGrid. Or help cure disease at WCG. | |
ID: 8388 | Rating: 0 | rate: / Reply Quote | |
Totally off topic but instead of pimping its hardware maybe gpugrid could get itself some reliable hardware that is actually up five 9s and has workunits available occasionally. As it is I am about to go back to Folding@Home with my ps3 and say heck with the boinc credits. I would rather the machine contributes to science instead of sitting there as a paper weight. To add to what Michael G. said ... In many cases large organizations schedule power outages over holiday weekends because that impacts the fewest people and projects in the affected buildings. We saw this outage because power went out on the server systems we depend upon. That said, even Einstein, which has something along the lines you suggest, has been experiencing outages too. And they have systems distributed across multiple locations and they could not stay on the air ... and I am not sure they are out of the woods yet. But that is the point of BOINC, at least in theory: while GPU Grid was out we would have just worked for other projects. The problem is that GPU computing is new to the BOINC world, so there were not many places to go to get work. I went onto SaH Beta and did about 50-70 K worth of work for them ... and on one system, until I can saturate it back with GPU Grid work, I will feed it the last of those tasks ... | |
ID: 8414 | Rating: 0 | rate: / Reply Quote | |
I agree about costs and was being melodramatic. The reputation of this project on the internet, though, is that it does not reliably feed work units to specialty computing devices such as the PS3, as compared to Folding@home. Some of that, I am sure, is due to a large difference in funding and priorities. I am hoping this will change, as to be honest not only do I prefer BOINC credits, but I have the machine at a remote location I only visit several times a year and couldn't change it even if I wanted to for some time. I was smart enough to do backup work on yoyo, but again, let's be honest and say neither yoyo nor SETI is doing the kind of work that is as critical as GPUGRID and Folding@home. Another worthwhile BOINC project like Rosetta or Einstein needs to harness the power of the PS3s, and then this becomes a non-issue. | |
ID: 8433 | Rating: 0 | rate: / Reply Quote | |
I agree about costs and was being melodramatic. The reputation though of this project on the internet is that it does not reliably feed work units to speciality computing devices such as the ps3 as compared to folding@home. Some of that I am sure is due to a large difference in funding and priorities. I am hoping this will change as to be honest not only do I prefer BOINC credits but I have the machine at remote location I only visit several times a year and couldn't change it even if I wanted for some time. I was smart enough to do backup work on yoyo but again lets be honest and say neither yoyo or seti is doing the kind of work that is as critical as gpugrid and folding@home. Another worthwhile boinc project like rosetta or einstein needs to harness the power of the ps3s and then this becomes a non issue. Einstein is working on a new application for CUDA and possibly OpenCL, as is MW ... The Lattice Project just ran a short test of the Garli application on CUDA (with about 5-10% of the tasks hanging, as does the CPU application, which as I read the project's posts they find "acceptable" and I find abhorrent; I guess it depends on how much you dislike waste). There are a couple other projects in the wings, with rumors ... But these are still early days, and to be honest *I* think that progress is about as fast as one could expect. Sadly, this still leaves us lacking in choice for the moment. Of course, if you have an ATI card you have even less choice ... but ... soon enough we should see another project online ... Of course, if it HAD been Einstein, we still would have been SOL too, because they had an outage this weekend as well ... | |
ID: 8436 | Rating: 0 | rate: / Reply Quote | |
Nice discussion, but please do not totally forget the thread topic.. this is a sticky, after all. | |
ID: 8464 | Rating: 0 | rate: / Reply Quote | |
Awesome system; I can just imagine how much work that thing is doing, whew... | |
ID: 8806 | Rating: 0 | rate: / Reply Quote | |
att:cencirly Do You Mean That.... | |
ID: 10556 | Rating: 0 | rate: / Reply Quote | |
Would you mind using English sentences? That would help a lot in trying to understand what you want to say. | |
ID: 10562 | Rating: 0 | rate: / Reply Quote | |
The technical specifications for this machine are the following: Old bones, I know, but did you ever make any progress on running one task on 4 cards? Thanks. PS: I'm still using that same K9A2 Platinum motherboard. | |
ID: 16970 | Rating: 0 | rate: / Reply Quote | |
It is working locally, but not for GPUGRID yet. | |
ID: 16991 | Rating: 0 | rate: / Reply Quote | |
It might allow for some faster task turn around. | |
ID: 16994 | Rating: 0 | rate: / Reply Quote | |
Message boards : Number crunching : Specs of the GPUGRID 4x GPU lab machine