Message boards : Graphics cards (GPUs) : Shot through the heart by GPUGrid on ATI
I was planning out a new computer, a Maingear F131, with the express purpose of crunching GPUGrid. I already have a machine, a Maingear Shift Super Stock with Nvidia GTX 670s. For whatever reason, GPUGrid did not do well on this machine. I had to detach from the project.
ID: 28109
hi,
ID: 28113
It is still technically possible to do OpenCL but it does require a lot of work. Only justified if AMD really brings a top card in.

Hmm, judging from the performance at most other projects I would say AMD does have very fast cards. What's a 7970, then? It wouldn't be too hard to make a list of OpenCL projects where the 7970 is top dog.
ID: 28116
O.K., now I know the score on ATI.
ID: 28123
The GPU-Grid code is quite efficient and "compute dense", i.e. it taxes GPUs quite hard. The power consumption is significantly higher than when running SETI or Einstein. That's why GDF asked you about temperatures. The prime candidates for your problems are an overheating GPU or an insufficient PSU.
ID: 28128
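For anyone who wants to check the overheating/PSU theory directly, a small program can log each card's temperature and power draw while a WU runs. This is a minimal sketch against NVIDIA's NVML library, which ships with the driver (link with -lnvidia-ml); the calls are the standard NVML API, but not every card supports the power reading.

    #include <stdio.h>
    #include <nvml.h>  /* NVIDIA Management Library */

    int main(void)
    {
        unsigned int i, count, temp, power_mw;

        if (nvmlInit() != NVML_SUCCESS)
            return 1;
        nvmlDeviceGetCount(&count);

        for (i = 0; i < count; i++) {
            nvmlDevice_t dev;
            nvmlDeviceGetHandleByIndex(i, &dev);
            /* core temperature in degrees C */
            nvmlDeviceGetTemperature(dev, NVML_TEMPERATURE_GPU, &temp);
            /* board power draw in milliwatts; unsupported on some cards */
            if (nvmlDeviceGetPowerUsage(dev, &power_mw) == NVML_SUCCESS)
                printf("GPU %u: %u C, %.1f W\n", i, temp, power_mw / 1000.0);
            else
                printf("GPU %u: %u C, power reading not supported\n", i, temp);
        }
        nvmlShutdown();
        return 0;
    }

Run it in a loop (say, once a minute) while crunching; a card climbing past ~90 C, or a total draw near the PSU rating, points at exactly the problems described above.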
MrS
ID: 28132
If by "the machine kept failing" you mean that it was hanging, then it's probably an insufficient power supply.
ID: 28135
So, I guess what I would request is: what would be considered good Nvidia cards to put in a new machine, and what specifications should the power supply have?
ID: 28140
Did you have SLI enabled on your last rig with all the problems? If so, that may have caused all your problems; it must be disabled to crunch WU's on GPU Grid. I'm thinking you already knew that though.
ID: 28143
flashawk-
ID: 28145
You got GPU Grid to run on only one card at a time? Did you do the .xml file with
ID: 28146
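The .xml file usually meant in this context is BOINC's cc_config.xml in the BOINC data directory; its <use_all_gpus> option tells the client to use every detected GPU instead of only the most capable one. A minimal sketch (the option is standard BOINC, but whether it is what the poster had in mind is an assumption, since the post is cut off):

    <cc_config>
      <options>
        <use_all_gpus>1</use_all_gpus>
      </options>
    </cc_config>

The client needs a restart (or "Read config file" from the Advanced menu) to pick it up, and as discussed below it cannot wake a card that Windows has already put to sleep.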
So, this CPU is old before its time. There is work which is failing to finish, run times that are ridiculous. I could have the CPU changed out, but it's like a car: what might be next?

Not at all like a car. Cars have moving parts that suffer from wear due to friction; CPUs do not. There is likely absolutely nothing wrong with the rest of that computer, yet you'll scrap it. Abuse it, refuse to follow any advice given to you because "it's not my style", then scrap good kit. Sad beyond words. In fact words fail me. My i7 runs at 60C with the stock Intel fan and heat sink, no throttling. Why? Because I rub brains on my problems instead of money. Try it sometime. I can still get Win 7, but I do not know how much longer that will be possible. If the new GPUgrid apps about to come out are like the old ones then you'll get a 15% performance boost just by putting Linux on that rig. But...
ID: 28148
Whoa, there's really a lot here: debugging the old machine, choosing components for the new one, and that "old" i7 920.
ID: 28165
@Richard (mitrichr): BOINC doesn't insist on running on GPU 0, it's rather that it can only use the GPUs which are not set to sleep by Windows. And any non-SLI-ed GPU without a monitor attached and without the desktop extended to it is considered useless by Win and sent to sleep to save power. It's a shame it can't be woken up for GP-GPU work then. That's why you only crunched on 1 GPU with SLI disabled and the monitor disconnected. I've heard of crunchers putting a dongle on their video card and I've never really understood why. Maybe now I do? Is the purpose of the dongle to imitate a monitor, making Windows think the card has a monitor attached, which then prevents Windows from putting the card to sleep? Would that help mitrichr (Richard)? Does Linux put a card to sleep if it doesn't have a monitor attached?
ID: 28170
A 660 Watt power supply is too little for two nVidia high-end cards, as I found out myself.
ID: 28180
I agree with MrS, an i7-3770K with two GTX660Ti's would be a better option. The only reasons for getting an LGA 2011 CPU would be to have 12 threads or to support 4 GPU's.
ID: 28184
MrS-
ID: 28185
Regarding Dagorath, he is just a nasty harsh mean-spirited Canuck who should move to Florida. He does not even like hockey.

You just want me to come down there and teach y'all how to play hockey so you can eventually get a team together ;-) The pics you mentioned at Orbit@home... I'm working on them and will send you copies. As I mentioned, the whole works is built mostly from scrap so it doesn't look pretty ATM; it needs paint, which is sitting there in the corner exactly where I put it 2 months ago. I like painting even less than hockey but I'm making headway: for example, I bought sandpaper last week and put it right beside the paint. Looking at buying a brush soon.
ID: 28187
Regarding Dagorath, he is just a nasty harsh mean-spirited Canuck who should move to Florida.

Nice to see you two are getting along :D And you're right, the field of GPU crunching is still developing in a pretty dynamic way. Lots of opportunities and rewards, but also some homework to do (to do it right).

@SK: I think you mean "err on the safe side", as in "to err"?

MrS
____________
Scanning for our furry friends since Jan 2002
ID: 28202
After reading through all of this, it seems to me that there should be, somewhere on the web site, a statement of the minimum requirements to safely and successfully run WU's on this project: specifically cards, power supply, CPU, DRAM, maybe cooling.
ID: 28216
As I revealed, I have a very expensive and powerful computer which managed to work up 17 million credits, but which had to be rebuilt three times. That is a very costly situation, definitely to be avoided if possible.

A few of us tried to help you avoid an ugly situation but you wouldn't listen. Whatever losses you suffered are YOUR fault because you would not listen to common sense. If it was all the fault of this project or any other project then why are the rest of us not forced to rebuild our rigs too? Why has this happened only to you? Your rig is no different than several others here with respect to hardware specs. The reason yours cost so much to build and repair is because you don't know how or where to buy. That was all explained to you months ago but you wouldn't have any of it. You made your bed, now lie in it, and stop blaming it on others.
ID: 28221
hi, or better: is GPUgrid willing to break from the nVidia contract? But what do I know, just my 2 cents.

O.K., now I know the score on ATI.

You can add WCG HCC and Poem@home for nice support; the No. 1 really good crunching card there is the AMD HD79xx.
____________
ID: 28394
WCG/HCC will be out of work in less than four months, and I don't know of any new projects that will be using GPUs at all.
ID: 28395
hi,

A contract? You mean nVIDIA is paying GPUgrid to use only nVIDIA, thereby ignoring thousands of very capable AMD cards installed in machines that run BOINC? What is GPUgrid's motivation for agreeing to that contract... to minimize their production?
____________
BOINC <<--- credit whores, pedants, alien hunters
ID: 28396
If only. MJH.
ID: 28399
While I am not affiliated with GPU Grid, I think it has to do with the fact that the GPU Grid code is best suited to CUDA, which outperforms OpenCL out of the box; only after custom tailoring of settings does OpenCL become comparable to the CUDA setup.
ID: 28405
This project's CUDA code could probably still be ported to OpenCL, but the performance would be poorer for NVidia cards, and it still wouldn't work, or wouldn't work well, for AMD cards.
ID: 28412
Before anybody accuses me of having a contract with nVIDIA, be aware that I just took delivery of an AMD 7970; it's installed in host 154485@POEM.
ID: 28414
POEM; 7970 + Athlon X2 ?
ID: 28415
This notion that CUDA is better suited for complex data analysis and modeling than OpenCL is widely reported on the 'net. skgiven isn't just making that up because he has a contract with nVIDIA; it's generally accepted as fact. I've seen it reported in several different places and have never seen anybody dispute it.

I think a good way to put it is that CUDA is more mature than OpenCL. It's been around much longer; however, OpenCL is catching up. It's also true that since CUDA is NVidia's proprietary language they can tweak it to perform optimally on their particular hardware. The downside is that they have the considerable expense of having to support both CUDA and OpenCL. AMD on the other hand dropped their proprietary language and went for the open solution. Time will tell.
ID: 28442
My statement summarizes the current state of affairs, yours holds open future possibilities, so I think yours is a better way to put it, except, lol, many readers can't appreciate what maturity has to do with a programming platform. I see the possibility that OpenCL might one day be just as capable as CUDA, but I think that will be difficult to accomplish due to the fact that it's trying to work with 2 different machine architectures. As you say, time will tell.

Amazing things can happen when the right combination of talent, money and motivation is brought to bear on a problem. I could be wrong (I don't read the markets as well or as regularly as many do) but I think sales are brisk for AMD as well as nVIDIA, so the money is probably there, but it depends on how much of that the shareholders want to siphon off into their pockets and how much they want to plow back into development.
ID: 28443
My statement summarizes the current state of affairs, yours holds open future possibilities, so I think yours is a better way to put it, except, lol, many readers can't appreciate what maturity has to do with a programming platform. I see the possibility that OpenCL might one day be just as capable as CUDA, but I think that will be difficult to accomplish due to the fact that it's trying to work with 2 different machine architectures. As you say, time will tell.

OpenCL works with far more than just ATI/AMD and NVidia GPUs. From Wikipedia: "Open Computing Language (OpenCL) is a framework for writing programs that execute across heterogeneous platforms consisting of central processing units (CPUs), graphics processing units (GPUs), DSPs and other processors. OpenCL includes a language (based on C99) for writing kernels (functions that execute on OpenCL devices), plus application programming interfaces (APIs) that are used to define and then control the platforms. OpenCL provides parallel computing using task-based and data-based parallelism. OpenCL is an open standard maintained by the non-profit technology consortium Khronos Group. It has been adopted by Intel, Advanced Micro Devices, Nvidia, and ARM Holdings. For example, OpenCL can be used to give an application access to a graphics processing unit for non-graphical computing (see general-purpose computing on graphics processing units). Academic researchers have investigated automatically compiling OpenCL programs into application-specific processors running on FPGAs, and commercial FPGA vendors are developing tools to translate OpenCL to run on their FPGA devices."

Amazing things can happen when the right combination of talent, money and motivation is brought to bear on a problem. I could be wrong (I don't read the markets as well or as regularly as many do) but I think sales are brisk for AMD as well as nVIDIA, so the money is probably there, but it depends on how much of that the shareholders want to siphon off into their pockets and how much they want to plow back into development.

NVidia is profitable and has JUST started paying a dividend as of last quarter. AMD isn't making a profit, but hopes to in 2013. It's funny that so many kids pounded AMD for trying to rip them off after the 79xx GPUs were introduced, considering that AMD was losing money. AMD does not pay a dividend so the shareholders aren't getting anything. The stock price of AMD has not done well in recent years so the shareholders have been looking at considerable losses. It's too bad for us as competition drives technological advancement.

Regards/Beyond
ID: 28445
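To make the "heterogeneous platforms" point concrete, the sketch below uses the standard OpenCL host API to list every platform and device a machine exposes; on a box with both vendors' drivers installed it will print an nVidia platform and an AMD one, each with its devices (plain C against the Khronos headers, error handling mostly trimmed):

    #include <stdio.h>
    #include <CL/cl.h>

    int main(void)
    {
        cl_platform_id platforms[8];
        cl_device_id devices[16];
        cl_uint np = 0, nd, i, j;
        char buf[256];

        if (clGetPlatformIDs(8, platforms, &np) != CL_SUCCESS)
            return 1;

        for (i = 0; i < np; i++) {
            clGetPlatformInfo(platforms[i], CL_PLATFORM_NAME, sizeof(buf), buf, NULL);
            printf("Platform: %s\n", buf);

            /* CL_DEVICE_TYPE_ALL picks up CPUs, GPUs and accelerators alike */
            if (clGetDeviceIDs(platforms[i], CL_DEVICE_TYPE_ALL, 16, devices, &nd) != CL_SUCCESS)
                continue;
            for (j = 0; j < nd; j++) {
                clGetDeviceInfo(devices[j], CL_DEVICE_NAME, sizeof(buf), buf, NULL);
                printf("  Device: %s\n", buf);
            }
        }
        return 0;
    }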
...competition drives technological advancement.

While I don't doubt this statement, I think that better and better (GPU-based) supercomputers will be built whether or not there is competition in the (GP)GPU industry. Moreover, the GPU-based products made for gaming purposes become less and less important (i.e. profitable) along the way.
ID: 28446
That's a pretty impressive list of things people think OpenCL is good for. I won't disagree with them because my expertise on the subject is stretched even at this point. I guess my impression is that it's kind of like a Swiss army knife. They're pretty cool looking things and one thinks there is no job they can't do, but the fact is they really don't work that well. Or those screwdrivers that have 15 interchangeable bits stashed in the handle. They're compact and take up a lot less room than 15 screwdrivers, but if you look in a mechanic's tool chest you won't find one. They hate them and if you give them one they'll toss it in the trash bin. And so OpenCL maybe does a lot of different things, but does it do any of those things well? I honestly don't know; I don't work with it, I'm just an end user.
ID: 28447
...competition drives technological advancement.

If there is a demand for it someone will build a better supercomputer even if they have to use current tech. Seems to me one way to accomplish it would be to build a bigger box to house it, then jam more of the current generation of processors into it. Or does it not work that way?

Moreover, the GPU-based products made for gaming purposes become less and less important (i.e. profitable) along the way.

Why less profitable... market saturation?
____________
BOINC <<--- credit whores, pedants, alien hunters
ID: 28448
At present CUDA is faster and more powerful for more complex applications. Because of the success of Fermi, NVidia was contracted specifically to build supercomputer GPU's. They did so, and when they were built, that's where they all went, until recently; you can now buy GK110 Teslas. This contract helped NVidia financially; it meant they had enough money to develop both supercomputer GPU's and gaming GPU's, and thus compete on these two fronts. AMD don't have that luxury and are somewhat one-dimensional, being OpenCL only. Despite producing the first PCIE3 GPU and manufacturing CPU's, there are no 'native' PCIE3 AMD motherboards (just the odd bespoke exception from ASUS).

An example of the lack of OpenCL maturity is the over-reliance on PCIE bandwidth and system memory rates. This isn't such an issue with CUDA. This limitation wasn't overcome by AMD, and they failed to support their own financially viable division. So to use an AMD GPU at PCIE3 rates you need to buy an Intel CPU! What's worse is that Intel don't make PCIE GPU's and can thus limit and control the market for their own benefit. It's no surprise that they are doing this. 32 PCIE lanes simply means you can only have one GPU at PCIE3 x16, and the dual-channel RAM hurts discrete GPUs the most. While Haswell is supposed to support 40 PCIE lanes, you're still stuck with dual-channel RAM, and the L4 cache isn't there to support AMD's GPU's!
ID: 28449
If there is a demand for it someone will build a better supercomputer even if they have to use current tech. Seems to me one way to accomplish it would be to build a bigger box to house it, then jam more of the current generation of processors into it. Or does it not work that way?

It's possible to build a faster supercomputer in that way, but its running costs will be higher, therefore it might not be financially viable. To build a better supercomputer which fits within the physical limitations (power consumption, dimensions) of the previous one while being faster at the same time, the supplier must develop their technology.

Why less profitable... market saturation?

Basically because they are selling the same chip much cheaper for gaming than for supercomputers.
ID: 28451
...competition drives technological advancement.

With more than one competitor GPUs will obviously progress far faster than in a monopolistic scenario. We've seen technology stagnate more than once when competition was lacking. I could name a few if you like. Been building and upgrading PCs for a long, long time. Started with the Apple, then Zilog Z80 based CP/M machines and then the good old 8088 and 8086 CPUs...
ID: 28452
...competition drives technological advancement.

Never one to pass up the opportunity for a little brinksmanship, or perhaps just the opportunity to reminisce: my first one was a Heathkit a friend and I soldered together with an iron I used as a child to do woodburning, that and solder of the size plumbers use. No temperature control on the iron; I took it to an uncle who ground the tip down to a decent size on his bench grinder. We didn't have a clue in the beginning and I wish I could say we learned fast, but we didn't. It yielded an early version of the original Apple Wozniak and friends built in... ummm... whose garage was it... Jobs'? Built it, took 2 months to fix all the garbage soldering and debug it, but finally it worked.

After that, several different models of Radio Shack's 6809-based Color Computer, the first with 16K RAM and a cassette tape recorder for storage and the last with 1 MB RAM I built myself, plus an HD interface a friend designed and had built in a small board jobber in Toronto. He earned an article in PC mag for that; it was a right piece of work. That gave me 25 MB storage and was a huge step up from the 4 drive floppy array I had been using. It used the OS-9 operating system (Tandy's OS-9, not a Mac thing), not as nice as CP/M but multi-tasking and multi-user. Friends running 8088/86 systems were amazed. And it had a bus connector and parallel port we used and for which we built tons of gizmos, everything from home security systems to engine analyzers. All with no IRQ lines on the 6809, lol.

I passed on the 80286 because something told me there was something terribly wrong, though I had no idea what it was. Win 2.x and 3.1 were useless to me since my little CoCo NEVER crashed and did everything Win on a '286 could do, including run a FidoNet node, Maximus BBS plus BinkleyTerm. Then the bomb went off... Gates publicly declared the '286 was braindead, IBM called in their option on OS/2, the rooftop party in Redmond, OS/2 being the first stable multitasking GUI OS to run on a PC and it evolving fairly quickly into a 32 bit OS while Win did nothing but stay 16 bit and crash a lot. Ran OS/2 on a '386 I OC'd and built a water cooling system for, through the '486 years, then a brief dalliance with Win98 on my first Pentium which made me puke repeatedly after rock solid OS/2 and genuine 32 bitness, CP/M and OS-9, so on to Linux which I've never regretted for a minute.

Windows never really had any competition. IBM priced OS/2 right out of the market so it was never accepted, never reached critical mass, and IBM eventually canned it. Apple did the same with the Mac but somehow they clung on, perhaps for the simple reason they were able to convince the suckers their Macs were a cut above the PCs, the pitch they still use today. CP/M died, Commodore died, and Gates was the last man standing, no competition. And that is why Windows is such a piece of sh*t.

What other examples do you know of?
____________
BOINC <<--- credit whores, pedants, alien hunters
ID: 28456
There are exceptions to everything, including the statement "competition drives technological advancement".
ID: 28457
Retvari,
ID: 28458
Is that what's happened with the 64 bit ops on nVIDIA cards?

In short: no. The number of consumer GPUs is still a lot higher than Quadro, Tesla and supercomputers (basically custom Teslas) combined. They don't have that many defective chips. And with GPUs it's easy to just disable a defective SMX, instead of trying to disable certain features selectively.

In the Fermi generation they did cut DP performance from 1/2 the SP performance down to 1/8 on the consumer GF100 and GF110 chips (the flagships). However, this was purely to protect sales of the more expensive cards. With Kepler it's different: all "small" chips, up to GK104 (GTX680), physically have only 1/12 the SP performance in DP. This makes them smaller and cheaper to produce. Only the flagship GK110 has 1/3 the SP performance in DP. These are the only Teslas which matter for serious number crunching. We'll see if they'll cut this again for the GeForce Titan.

MrS
____________
Scanning for our furry friends since Jan 2002
ID: 28460
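To put rough numbers on those ratios: peak throughput is approximately shaders x 2 FLOP (one fused multiply-add) x clock, and the DP figure is the SP figure divided by the ratio above. A back-of-the-envelope sketch; the card specs are public figures quoted from memory and the DP ratios are the ones stated in the post, so treat the output as illustrative:

    #include <stdio.h>

    /* peak GFLOPS ~= shaders * 2 (FMA) * clock in GHz; DP = SP / ratio */
    static void peak(const char *name, int shaders, double ghz, int dp_ratio)
    {
        double sp = shaders * 2.0 * ghz;
        printf("%-10s SP ~%5.0f GFLOPS, DP ~%5.0f GFLOPS (1/%d rate)\n",
               name, sp, sp / dp_ratio, dp_ratio);
    }

    int main(void)
    {
        peak("GTX 680",    1536, 1.006, 12); /* GK104, ratio per the post above */
        peak("Tesla K20X", 2688, 0.732, 3);  /* GK110, 1/3 rate */
        return 0;
    }

For the K20X that works out to roughly 3.9 TFLOPS SP and 1.3 TFLOPS DP, which matches the published spec sheet.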
An example of the lack of OpenCL maturity is the over-reliance on PCIE bandwidth and system memory rates. This isn't such an issue with CUDA.

POEM@Home is special; the performance characteristic seen there is firstly a result of the specific code being run. What part of this can be attributed to OpenCL in general is not clear. E.g. look at Milkyway: they didn't lose any performance (and almost don't need CPU support) when transitioning from CAL to OpenCL on HD6000 and older cards. The reason: simple code. However, there was/is some function call / library being used which was optimized in CAL. It's still being used on the older cards (requires some fancy tricks). However, HD7000 GPUs can't use the optimized CAL routine and lose about 30% performance just due to this single function. And nothing has changed in this regard for about a year. That's what maturity and optimization mean for GP-GPU.

MrS
____________
Scanning for our furry friends since Jan 2002
ID: 28461
Is that what's happened with the 64 bit ops on nVIDIA cards?

The GTX580 and the 20x0 series Teslas used the same GF100/GF110 silicon. The products were differentiated, in part, by the amount of DP logic enabled. For the Kepler generation there are separate designs for the GTX680 and the Kx0 series silicon - GK104 and GK110 respectively. The former is distinguished by having a simpler SM design and only a few DP units. It will be interesting to see what features turn up in the GK110-using GeForce Titan. I expect DP performance will be dialled down, in part to maintain some differentiation against the Tesla K20c and also to allow reuse of partially defective dies. Anyhow, the GPUGRID application has minimal need for DP FP arithmetic and - furthermore - was developed in a period before GPUs had any DP capability at all.

MJH
ID: 28462
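For anyone curious which silicon they actually own, the CUDA runtime reports the relevant properties directly. A minimal sketch (standard CUDA runtime API, compile with nvcc):

    #include <stdio.h>
    #include <cuda_runtime.h>

    int main(void)
    {
        int n = 0, i;
        cudaGetDeviceCount(&n);
        for (i = 0; i < n; i++) {
            cudaDeviceProp p;
            cudaGetDeviceProperties(&p, i);
            /* compute capability 3.0 = GK104-class, 3.5 = GK110-class */
            printf("GPU %d: %s, compute capability %d.%d, %d multiprocessors\n",
                   i, p.name, p.major, p.minor, p.multiProcessorCount);
        }
        return 0;
    }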
Never one to pass up the opportunity for a little brinksmanship, or perhaps just the opportunity to reminisce: my first one was a Heathkit a friend and I soldered together with an iron I used as a child to do woodburning, that and solder of the size plumbers use. No temperature control on the iron; I took it to an uncle who ground the tip down to a decent size on his bench grinder. We didn't have a clue in the beginning and I wish I could say we learned fast, but we didn't. It yielded an early version of the original Apple Wozniak and friends built in... ummm... whose garage was it... Jobs'? Built it, took 2 months to fix all the garbage soldering and debug it, but finally it worked. After that, several different models of Radio Shack's 6809-based Color Computer, the first with 16K RAM and a cassette tape recorder for storage and the last with 1 MB RAM I built myself, plus an HD interface a friend designed and had built in a small board jobber in Toronto. He earned an article in PC mag for that; it was a right piece of work. That gave me 25 MB storage and was a huge step up from the 4 drive floppy array I had been using. It used the OS-9 operating system (Tandy's OS-9, not a Mac thing), not as nice as CP/M but multi-tasking and multi-user. Friends running 8088/86 systems were amazed. And it had a bus connector and parallel port we used and for which we built tons of gizmos, everything from home security systems to engine analyzers. All with no IRQ lines on the 6809, lol.

Wow, you're old too! Sounds like a nice piece of engineering you did there.

Win 2.x and 3.1 were useless to me since my little CoCo NEVER crashed and did everything Win on a '286 could do, including run a FidoNet node, Maximus BBS plus BinkleyTerm. Then the bomb went off... Gates publicly declared the '286 was braindead, IBM called in their option on OS/2, the rooftop party in Redmond, OS/2 being the first stable multitasking GUI OS to run on a PC and it evolving fairly quickly into a 32 bit OS while Win did nothing but stay 16 bit and crash a lot. Ran OS/2 on a '386 I OC'd and built a water cooling system for, through the '486 years, then a brief dalliance with Win98 on my first Pentium which made me puke repeatedly after rock solid OS/2 and genuine 32 bitness, CP/M and OS-9, so on to Linux which I've never regretted for a minute.

I ran a 4 line ProBoard BBS for years, even before Al Gore invented the internet. OS/2 based of course, as that was the x86 multitasking OS that was stable. I really did like OS/2.

Gates was the last man standing, no competition. And that is why Windows is such a piece of sh*t. What other examples do you know of?

A hardware example: x86 CPUs. Remember when Intel would release CPUs at 5 MHz speed bumps and charge a whopping $600 (think of that in today's dollars) for the latest greatest? The only thing that kept them honest at all was AMD, who due to licensing agreements was able to copy the 286/386/486 and bring out faster, cheaper versions of them all. Later Intel dropped some of the licensing and meanwhile brought out the very good P3. AMD then countered with the Athlon at about the same time Intel brought out the P4. The P4 was designed for high clock speeds to allow Intel to apply its old incremental "small speed increase ad nauseum" strategy. Unfortunately for Intel, the Athlon was a much better processor than the P4 and Intel had to scramble hard to try to make up the ground (not to mention a lot of dirty tactics and FUD). They did of course, but it took them years to do it. If AMD hadn't been there we'd probably still be using P4 based processors with small speed bumps every year. Competition drives technology...

Beyond
ID: 28463
Oh yah, I'm older than dirt. I remember back when dirt first came out; you know, dirt was clean back then.

Competition drives technology...

That was a good reminisce about Intel vs. AMD, thanks. Competition does drive technology and I wish I could follow up with another reminisce on precisely that topic, but I can't, so I'll tell a story about how competition drives people nuts. Recall I was running the CoCo (Color Computer) about the same time friends were running 8088/86 PCs. Mine ran at ~2.8 MHz, their PCs ran at... ummm... what was it... 8 MHz? 10 MHz? Anyway, the point is that the 6809 executes an instruction every 1/4 clock cycle (characteristic of all Motorola chips of that era, perhaps even modern ones) so my cheap little CoCo could pretty much keep up with their 8088/86 machines, which execute an instruction every cycle. (Yes, I'm over-simplifying here, as some instructions require more than 1 cycle or 1/4 cycle.)

Then one of those 5 MHz bump-ups came out and they all forked out the big money for the faster chip and called me to bring my CoCo over so they could show me who was boss dog. Little did they know I had stumbled upon and installed a Hitachi variant of the 6809 that ran at ~3.2 MHz as opposed to the stock Motorola chip at 2.8 MHz, and had soldered in a new oscillator to boost that up to ~3.5 MHz. Lol, their jaws dropped when we ran our crude little test suite, for my humble little CoCo still kept up with their expensive PCs. Then they started to argue amongst themselves and accuse their ringleader of feeding them BS about the effectiveness of their expensive upgrades to the faster CPU. Oh, they were going to write Intel and tell them off, and one said they had obviously gotten some bogus Chinese knockoffs and yada yada yada.

I didn't tell them about my upgrade for a week, just let them stew in their own juices because, you know, timing is everything. Then I told them and they settled down. They hated it but they settled down, hehehe, the calm before the storm, and I let that ride for a week. Then I told them *my* upgrade had cost me about $12, which was the truth, and then their jaws hit the floor again as their upgrades had cost... oh I forget exactly, but it was $200 for sure, maybe more. Back then $200 was a lot of money, so out came that unfinished letter to Intel and more yada yada yada and steam blowing. Yah, competition drives people nuts, lol.
____________
BOINC <<--- credit whores, pedants, alien hunters
ID: 28464
Is that what's happened with the 64 bit ops on nVIDIA cards?

Thanks. I snipped the details for brevity but rest assured they help me fill in the puzzle. Much appreciated.
____________
BOINC <<--- credit whores, pedants, alien hunters
ID: 28465
...competition drives technological advancement.

Once again: I don't doubt that. What I was trying to say is that nowadays the need for better supercomputers and the potential of the semiconductor industry drive progress more than competition does.

We've seen technology stagnate more than once when competition was lacking. I could name a few if you like. Been building and upgrading PCs for a long, long time. Started with the Apple, then Zilog Z80 based CP/M machines and then the good old 8088 and 8086 CPUs...

Your sentences insinuate that there has been stagnation in computing technology since the invention of the microprocessor because of lacking competition. I can't recall such times, even though I've been engaged in home and personal computing since 1983. As far as I can recall, there was bigger competition in the PC industry before the Intel Pentium processor came out. Just to name a few PC CPU manufacturers from that time: NEC, IBM, Cyrix, SGS-Thomson, and the others: Motorola, Zilog, MOS Technology, TI, VIA. Since then they have been 'expelled' from this market.

Nowadays the importance of the PC is decreasing, since computing has become more and more mobile, so the PC is the past (including gaming GPUs), smartphones and cloud computing (including supercomputers) are the present and the near future, maybe smartwatches are the future, and nobody knows what will be in 10 years.
ID: 28472
I think this will help you settle your PSU issue and
ID: 28485
For my hardware I get "Our recommended PSU Wattage: 431 W", which does not actually factor in the modest OC on the CPU and GPU. Crunching POEM the machine consumes 205 W, measured at the wall. Running GPU-Grid (quite taxing) and 7 Einsteins on the CPU I reach 250 W, give or take 10 W.
ID: 28500
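The gap between the calculator's 431 W recommendation and the ~250 W measured at the wall is the usual safety margin. A common rule of thumb is to sum the component TDPs, add a bit for the rest of the system, and size the PSU so it sits well below its rating under load. A hedged sketch of that arithmetic; the TDP figures and the 60% loading factor are illustrative assumptions, not a standard:

    #include <stdio.h>

    int main(void)
    {
        /* illustrative TDPs: i7 CPU ~95 W, two GTX 660 Ti class cards
           ~150 W each, ~75 W for board, RAM, drives and fans */
        double cpu = 95.0, gpu = 150.0, rest = 75.0;
        int n_gpus = 2;

        double draw = cpu + n_gpus * gpu + rest;
        double psu  = draw / 0.6;  /* keep the PSU near 60% load for headroom */

        printf("Estimated draw: %.0f W, suggested PSU: ~%.0f W\n", draw, psu);
        return 0;
    }

That spits out an estimate of 470 W drawn and a ~780 W supply, which is consistent with the earlier report that 660 W was too little for two high-end cards.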
I'm a former GPUGrid cruncher who just wanted to add my voice to those supporting OpenCL use here.
ID: 29398
I'm a former GPUGrid cruncher who just wanted to add my voice to those supporting OpenCL use here.

If you have an HD 7000 series card, then you are in luck on Folding@home. Their new Core_17 (currently in beta testing) does better on AMD than Nvidia, and improvements are still being made. Not only is the PPD better, but the CPU usage is comparably low, which is unusual for OpenCL. Note however that the higher-end cards do much better than the low-end cards, due to their quick-return bonus (QRB). You can wait for the release, or try out the betas by setting a flag in the FAHClient to get them. CUDA has long reigned supreme over there, and so it is quite a change. Note however that you have to log in to their forums to see the beta testing section, which is at the bottom of the page. And you will need to check the wiki to see how to set the beta flag. Given the amount of development required to get OpenCL to work properly (they have been at it for years), that will get you results far faster than waiting for GPUGrid to do it.
ID: 29400
I'm a former GPUGrid cruncher who just wanted to add my voice to those supporting OpenCL use here.

Hi, Jim1348: I cannot find any mention of a likely implementation date for the new Core_17 at Folding@home. Have I missed the implementation date, or has it not yet been given to us? Regards, John
ID: 29401
John,
ID: 29402
Last month Intel started supporting OpenCL/GL 1.2 on its 3rd and 4th generation CPU's, and their Xeon Phi. They even have a beta dev kit for Linux. It's my understanding, however, that if you are using a discrete GPU in a desktop you won't be able to use the integrated GPU's OpenCL functionality (it might be board/BIOS dependent), but it's available in most new laptops.
ID: 29625
To date NVidia is stuck on OpenCL 1.1, even though the GTX600 series cards are supposedly OpenCL 1.2 capable. They haven't bothered to support 1.2 with drivers. I was hoping that the arrival of the Titan would prompt NVidia to start supporting 1.2, but it hasn't happened so far. Perhaps the official HD 7990's will encourage NVidia to support OpenCL 1.2, given AMD's embarrassingly superior OpenCL compute capabilities.

Conversely, AMD has made great strides in OpenCL for their 7xxx series cards. On the WCG OpenCL app even the lowly HD 7770 is twice as fast as an HD 5850 that uses twice the power, so 4x greater efficiency. At the OpenCL Einstein, the GTX 660 is faster than the 660 Ti. In fact the 560 Ti is faster than the 660 Ti. Seems strange?

Edit: I won't even mention that at Einstein the Titan runs at only 75% of the speed of the HD 7970, which is well under 1/2 the price. Oops, mentioned it...
ID: 29627
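The driver gap described above is easy to verify from code, since both the platform and each device report a version string through the standard OpenCL API. A small sketch, same caveats as the enumeration example earlier:

    #include <stdio.h>
    #include <CL/cl.h>

    int main(void)
    {
        cl_platform_id plat[4];
        cl_device_id dev[8];
        cl_uint np = 0, nd, i, j;
        char ver[128];

        if (clGetPlatformIDs(4, plat, &np) != CL_SUCCESS)
            return 1;

        for (i = 0; i < np; i++) {
            /* e.g. "OpenCL 1.1 CUDA ..." on nVidia vs "OpenCL 1.2 AMD-APP ..." */
            clGetPlatformInfo(plat[i], CL_PLATFORM_VERSION, sizeof(ver), ver, NULL);
            printf("Platform version: %s\n", ver);

            if (clGetDeviceIDs(plat[i], CL_DEVICE_TYPE_GPU, 8, dev, &nd) != CL_SUCCESS)
                continue;
            for (j = 0; j < nd; j++) {
                clGetDeviceInfo(dev[j], CL_DEVICE_VERSION, sizeof(ver), ver, NULL);
                printf("  Device version: %s\n", ver);
            }
        }
        return 0;
    }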
Pls correct me if I'm wrong: AMD cards are stable, run more advanced software, are faster, use less energy and are cheaper. Not so long ago a mod posted here that the reason why GPUGRID does not use AMD is the programmers' time and a resource problem. OK, we need to accept that. But maybe it's time to think about alternatives that reflect the reality. The first nVidia 7xx cards are about to ship, and AMD's HD8xxx is under way. I think in a month or so we will see first results that make a choice easier.
ID: 30427
AMDs have been looked at several times in the past. The project wasn't able to use them for research, but it did offer an alternative funding use for these GPU's. Not everybody's cup of tea, but ultimately it was good for the project. I expect new code will be tested on new AMDs. If the project can use AMD GPU's it will, but if not it won't. They are the best people to determine what the project can and can't do, and if they need help, assistance or advice, they have access to it.
ID: 30432
Pls correct me if I'm wrong:

While I like AMD GPUs and especially the "new" GCN architecture, I think this goes too far. The AMDs generally provide a bit more raw horse power at a given price point, but not dramatically so. I also don't see a general power consumption advantage since nVidia introduced Kepler. And stable? Sure, given the right software task.

But the claim that they "run more advanced software" is pretty difficult to defend. They support higher OpenCL versions, sure. This is likely not an issue of hardware capability but rather of nVidia not allocating resources to OpenCL drivers. This won't change the fact that currently nVidias cannot run the "more advanced" OpenCL code... but I'm pretty sure that anything implemented there could just as well be written in CUDA. So can the GPUs run that code or not? It really depends on your definition of "that code".

So it really boils down to OpenCL versus CUDA. CUDA is clearly the more stable and much more advanced development platform. And nVidia's GPUs perform rather well using CUDA. Switch to OpenCL, however, and things don't look as rosy any more. There's still a lot of work on the table: drivers, SDKs, libraries etc. But they're doing their homework and progress is intense. It seems that regarding performance optimizations nVidia has fallen so far behind AMD here that it starts to hurt, and we're beginning to see the differences mentioned above. Some of that may be hardware (e.g. the super-scalar shaders are almost impossible to keep fully utilized), some of it "just" a lack of software optimization. In the latter case the difference is currently real... but for how long?

MrS
____________
Scanning for our furry friends since Jan 2002
ID: 30457
Latest discussions at Einstein made it necessary to recheck my view of things. The performance chart I was referring to shows the performance with CUDA32 apps. So I was comparing the latest OpenCL version against (outdated) CUDA, which does not reflect the actual reality. When Einstein switches over to CUDA 5x things might look very different and the high-score table will look different. Mea culpa. I apologize for that. Alex
ID: 31437
Latest discussions at Einstein made it necessary to recheck my view of things.

Alex, the reason for Einstein using CUDA32 (according to Bernd) is: "Our BRP application triggers a bug in CUDA versions 4 & 5 that has been reported to NVidia. The bug was confirmed but not fixed yet. Until it has, we're stuck with CUDA 3." http://einstein.phys.uwm.edu/forum_thread.php?id=10028&nowrap=true#123838 He goes on to say they reported the bug to NVidia 2.5 years ago. Still not fixed. Glad we didn't hold our breath...
ID: 31441
Yes, thanks for the link, I know this thread. But I did not know that there is a performance increase of about a factor of 2 (according to the discussion at Einstein) when using the newest hardware and CUDA 4x. I have no way to compare the performance of CUDA 4x WU's against AMD OpenCL. But with this (for me) new information I have a feeling of some kind of unfairness in comparing CUDA 3x against the newest AMD OpenCL. This is what I wanted to point out. As far as the development at Einstein is concerned: they do a great job there, and they have good reasons not to use CUDA 4x. And they have WU's for Intel GPUs and also for Android (at Albert).
ID: 31444
I have no way to compare the performance of CUDA 4x WU's against AMD OpenCL. But with this (for me) new information I have a feeling of some kind of unfairness in comparing CUDA 3x against the newest AMD OpenCL. This is what I wanted to point out.

All you can compare is what's there. If NV won't fix their bugs, that's their problem. You can certainly compare OpenCL on both platforms in various projects. Supposedly that's where GPU computing is going...
ID: 31445
GPUGrid has seen plenty of CUDA problems in the distant and recent past, and there are reasons why GPUGrid didn't move from CUDA 4.2 to CUDA 5 and has problems with the existing 4.2. There are different problems at different projects - some can adapt, some can't.
ID: 31448
I'm (im)patiently waiting for GPUgrid to get their app working so I can bring my 780 here. Currently it is at Einstein. While I love that project for what it is, the fact that they're effectively stuck on an older CUDA platform, and thus not getting the most they can out of the newer cards, makes me not want to crunch there.
ID: 31450
Even GCN is buggy, and anything but the most simple models is still very inefficient.

Anything complex enough to be called a processor is buggy. We need firmware and drivers to work around these issues, and to make sure they don't get too far in the way of actually using the chip.

Regarding GCN not being efficient for handling complex code: I'd say it's actually the other way around, in that nVidia took a step back with Kepler for complex and variable code (while gaining efficiency for regular and simpler cases). If things are normal, both architectures are about on par. But there are certainly cases where GCN completely destroys Kepler (e.g. Anand's compute benchmarks). This could be due to bad OpenCL drivers rather than hardware... but then Fermi shouldn't fare as well as it does in comparison, should it? And there are of course cases where nVidia running CUDA destroys AMD running OpenCL... software optimization is a huge part of GPU performance.

Anyway, the GPU-Grid team seems to have their hands full currently, with a reduced staff and many problems popping up. Not much room for further development when bug fixing is so urgent...

MrS
____________
Scanning for our furry friends since Jan 2002
ID: 31508
We ended up designing and building our own GPU chassis, so tired were we of having poor cooling.

Ooh, show us pictures! I want to build a solid (silent) cool (as in temperature, not epicness) case for multiple computers (to make a central house computer, much like central heating, since it will also heat the house), and need ideas.
ID: 31816
Here you go http://www.acellera.com/products/metrocubo/
ID: 31828
There is no way I would buy a Blue case
ID: 31833
They actually make them in any color :D We have a green and an orange one in the lab and I am still waiting for my pink one, hahah
ID: 31877
and I am still waiting for my pink one, hahah

Isn't that Noelia's workstation? ;)

MrS
____________
Scanning for our furry friends since Jan 2002
ID: 31884