
Message boards : Graphics cards (GPUs) : Which graphic card

TJ
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Message 30059 - Posted: 18 May 2013 | 11:14:49 UTC

Hello, these are questions for the graphics card specialists.

What does "Ti" stand for with the nVidia cards? Are Ti cards better/slower/faster than cards without this addition?

And secondly, let's assume that money is no issue (it is, unfortunately): what card would be best for an i7-940 with 12 GB RAM and a 770 W PSU?
(That machine currently has the GTX285, which is too slow for the current WUs.)

Thanks, your input is highly appreciated.

____________
Greetings from TJ

Profile skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 30061 - Posted: 18 May 2013 | 12:05:53 UTC - in response to Message 30059.
Last modified: 18 May 2013 | 12:44:53 UTC

I think Ti just means a "performance" version of a GPU, as opposed to an SE or Eco model for example. Cards with 'Ti' are faster/offer better performance, but not necessarily best value...

A typical GTX660Ti is 20 to 25% faster than a reference GTX660; however, there is a range of GTX660Ti's, probably from about 15% to 30% faster than a reference GTX660, and there might be some modest OC versions of the GTX660 too.

The top GPUs are:
GTX690, 680, 670, 660Ti, 660, 650Ti Boost, 650Ti, 650.

From http://www.gpugrid.net/forum_thread.php?id=3156&nowrap=true#29914:
GTX660Ti - 100% - £210
GTX660 - 88% - £153 (73% cost of a GTX660Ti) – 20.5% better performance/£
GTX650Ti Boost 79% - £138 (66%) – 19.6% better performance/£
GTX650Ti - 58% - £110 (52%) – 11.5% better performance/£

From the above prices the GTX660 offers the best performance/purchase price among mid-range GPUs; however, prices vary by region, and when a new GPU has just been released it tends to be more expensive (so perhaps the GTX650Ti Boost will fall in cost and improve its performance/cost ratio).
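As a rough sketch, the performance/£ gains above can be recomputed from the quoted performance and price figures (the small differences from the quoted percentages come from rounding in the performance numbers):

```python
# Relative performance (GTX660Ti = 100%) and UK prices in GBP, as quoted above.
cards = {
    "GTX660Ti":       (100, 210),
    "GTX660":         (88, 153),
    "GTX650Ti Boost": (79, 138),
    "GTX650Ti":       (58, 110),
}

baseline_perf, baseline_price = cards["GTX660Ti"]
baseline_ratio = baseline_perf / baseline_price  # performance per pound of the 660Ti

for name, (perf, price) in cards.items():
    gain = ((perf / price) / baseline_ratio - 1) * 100
    print(f"{name}: {price / baseline_price:.0%} of the 660Ti's price, "
          f"{gain:+.1f}% performance/£ vs the 660Ti")
```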

I haven't included the Titan because it's not yet capable of running WUs here (a new app is required). Its performance will probably eclipse the GTX680's and sit somewhere between a 680 and a 690 (a dual GPU card). The soon to be released GTX780 (also a CC3.5 'big Kepler', launch date May 23rd, 2013) should also surpass the GTX680, and the GTX770 (basically just a GTX680 with slightly higher clocks, CC3.0 I guess, launch date May 30th, 2013) should slightly outperform the GTX680 as well.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

TJ
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Message 30062 - Posted: 18 May 2013 | 12:43:33 UTC

Thanks skgiven for your answers.

The Titan is out of reach at more than 1500 euros.
But I have seen reasonable prices for the 660 and 650 here in the Netherlands; buying in Germany, that means... it is cheaper there.
____________
Greetings from TJ

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 30071 - Posted: 18 May 2013 | 16:56:24 UTC - in response to Message 30062.

I can see Titan there for ~900€. Not suggesting you buy one, but rather to use this nice price comparison portal :)

In your case I'd go with one reasonably high end GPU now (GTX660 or 660Ti) and decide later on whether to add another card. Try to sell both GTX285 (too inefficient for 24/7 crunching, but still OK for gamers on a budget).

MrS
____________
Scanning for our furry friends since Jan 2002

Profile Beyond
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Message 30073 - Posted: 18 May 2013 | 20:56:51 UTC - in response to Message 30061.

I haven't included the Titan because it's not yet capable or running WU's here (new app required). It's performance will probably eclipse the GTX680's, and sit somewhere between a 680 and a 690 (dual GPU). The soon to be released GTX780 (also a CC3.5 'big Kepler', launch date May 23rd, 2013) should also surpass the GTX680, and the GTX770 (basically just a GTX680 with slightly higher clocks, CC3.0 I guess, launch date May 30th, 2013) should also slightly outperform the GTX680.

For sure the Titan will have a poor performance/price ratio at anywhere near its current pricing. I made the mistake years ago of buying GPUs for this project because of glowing predictions by the staff, and then they performed so poorly I had to quit the project and use them elsewhere. Lesson: don't buy the new GPU without seeing hard, cold performance figures.

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 30078 - Posted: 18 May 2013 | 22:35:36 UTC - in response to Message 30073.

Lesson: don't buy the new GPU without seeing hard, cold performance figures.

I definitely agree. But out of curiosity: which card was that?

MrS
____________
Scanning for our furry friends since Jan 2002

Profile skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 30087 - Posted: 19 May 2013 | 13:34:53 UTC - in response to Message 30078.

My guess is the GTX460, but if we go back to CC1.1 cards, it was really hit and miss and I got stung several times.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

TJ
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Message 30103 - Posted: 20 May 2013 | 9:30:46 UTC - in response to Message 30071.

I can see Titan there for ~900€. Not suggesting you buy one, but rather to use this nice price comparison portal :)

Thanks ETA, that is a neat site. I even see some cases with lots of fans.
I will browse there for a while.

The Titan however will stay out of reach for me ;-)
____________
Greetings from TJ

TJ
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Message 30105 - Posted: 20 May 2013 | 10:19:12 UTC

One more question. I see a good offer in the Netherlands for an EVGA GTX650Ti SSC 2GB, so that will be Super Super Clocked. Is that the same as overclocking? If yes, can I change it via settings?

Personally I like EVGA, have good experiences with it and have had all kinds of brands.
As always thanks for the input.
____________
Greetings from TJ

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 30107 - Posted: 20 May 2013 | 11:24:55 UTC - in response to Message 30105.

It is clocked above the level nVidia suggests, but EVGA assures you it will work. They usually say they've binned the chips and only put the higher performing ones onto SSC models, whereas nVidia has to set its reference spec to something which even the worse chips will achieve with enough headroom.

Call it factory overclock or just a different choice of clock speed.

You could always reduce clocks via software, but I expect this won't be necessary until after a few years of 24/7 crunching (chips degrade slightly over time). At which point EVGA's lifetime warranty might come in handy.

MrS
____________
Scanning for our furry friends since Jan 2002

Profile Beyond
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Message 30110 - Posted: 20 May 2013 | 13:22:28 UTC - in response to Message 30105.
Last modified: 20 May 2013 | 13:39:11 UTC

One more question. I see a good offer in the Netherlands for an EVGA GTX650Ti SSC 2GB, so that will be Super Super Clocked. Is that the same as overclocking? If yes, can I change it via settings?

Is that the one clocked at 1071MHz? It should be a good card. My 3 MSI 650 Ti cards were factory OCed at 993MHz and they're all running fine at 1084MHz. I haven't had many EVGA GPUs, but the ones I've had have been very good. I still have an EVGA GTX 460 running and it's the best of the four 460 cards that I've owned. I'd recommend MSI Afterburner for controlling GPU settings and fans. It pretty much handles any brand of GPU (both NV and ATI/AMD), multiple varied GPUs, has fewer bugs than the other control apps, and is updated regularly.

Profile Beyond
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Message 30111 - Posted: 20 May 2013 | 13:34:05 UTC - in response to Message 30087.

My guess is the GTX460, but if we go back to CC1.1 cards, it was really hit and miss and I got stung several times.

SKG, you have a fine memory. Have you been devouring coconut oil? I bought 4 GTX 460 GPUs to run GPUGrid, as they were predicted to be the most power efficient and the best bang for the buck compared to the energy gulping 470 and 480. It turned out that the GPUGrid app only used 2/3 of the shaders, so the performance was poor. It took 2 years to fix that software bug, and that was 2 years I couldn't run GPUGrid. On the upside the 460s worked fine at all other projects like PrimeGrid, POEM, Einstein, WCG, SETI, Collatz, Dtrgn etc. Now we have a similar issue with the Titan. So I'd strongly recommend waiting to see hard, cold results before considering any purchases for any project, but especially here.

Jim1348
Joined: 28 Jul 12
Posts: 819
Credit: 1,591,285,971
RAC: 0
Message 30112 - Posted: 20 May 2013 | 13:34:58 UTC

The tests that the card makers use for binning the chips for overclocking are probably directed to graphics use, and not really distributed computing, which has much more stringent requirements and uses other parts of the GPU more heavily than are used for games. As the work units get harder and the temperature rises, you will probably see errors. Then, you will need to reduce the clock on the GPU chip yourself.

You will see all sorts of complaints about the Noelia, SDOERR, etc. WUs, which I normally don't have a problem with using non-overclocked cards.

Profile Beyond
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Message 30114 - Posted: 20 May 2013 | 13:51:30 UTC - in response to Message 30112.

The tests that the card makers use for binning the chips for overclocking are probably directed to graphics use, and not really distributed computing, which has much more stringent requirements and uses other parts of the GPU more heavily than are used for games. As the work units get harder and the temperature rises, you will probably see errors. Then, you will need to reduce the clock on the GPU chip yourself.

Not really agreeing with this, at least for the 650 Ti. It seems to have very modest standard clocking compared to other NV cards (SKG points out, for instance, that his 660 has very little OC headroom). My experience (a lot of it: years of GPU computing and 19 cards running at the moment on various projects) with factory OCed GPUs is that they DO generally run faster than non-factory-OCed models. As for heat with the 650 Ti, mine are all running GPUGrid OCed at 45-49C with quite low fan speeds. That's cooler than any of my other cards except for a tie with 3 HD 7790s running at Einstein.

Jim1348
Joined: 28 Jul 12
Posts: 819
Credit: 1,591,285,971
RAC: 0
Message 30117 - Posted: 20 May 2013 | 14:30:32 UTC - in response to Message 30114.
Last modified: 20 May 2013 | 14:39:01 UTC

The more overclocking, the greater the chance for problems. I was not referring to your specific 650 Ti, but a "super super clocked" card sounds like problems to me. And the overclocking does not need to produce high temperatures for the problems to appear; I have seen problems below 60 C. A high temperature just adds to the likelihood. The binning that the chip makers do (e.g., TSMC, though I don't know that they do for GPU chips) is far beyond the capabilities of the board makers, and exercises many more functions of the chip. That is because they have access to the chip before it is even packaged, and can afford the high cost of the test machines.

In fact, I expect that it is a misnomer to call what the card makers do "binning". It is really just qualification tests on random samples of a lot to ensure that the chips don't fail outright. That is better than nothing, but does not ensure trouble-free operation in distributed computing projects.

Profile Beyond
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Message 30119 - Posted: 20 May 2013 | 15:08:06 UTC - in response to Message 30117.

The more overclocking, the greater the chance for problems. ... a "super super clocked" card sounds like problems to me. ... A high temperature just adds to the likelihood. ... (e.g., TSMC, though I don't know that they do for GPU chips)

There's a lot of guessing and conjecture going on here. Any concrete experience? Evidence?

In fact, I expect that it is a misnomer to call what the card makers do "binning". It is really just qualification tests on random samples of a lot to ensure that the chips don't fail outright. That is better than nothing, but does not ensure trouble-free operation in distributed computing projects.

More conjecture. Do you know that EVGA and other board makers aren't using binning supplied by the foundry? Did you know that even chips from the same wafer can have very widely varying capabilities? My experience is that factory OCed cards work at their factory overclocks on DC projects, often significantly higher. My experience is also that DC projects are generally less demanding and do not stress GPUs as much as heavy gaming.

Jim1348
Joined: 28 Jul 12
Posts: 819
Credit: 1,591,285,971
RAC: 0
Message 30121 - Posted: 20 May 2013 | 15:32:25 UTC - in response to Message 30119.

More conjecture. Do you know that EVGA and other board makers aren't using binning supplied by the foundry? Did you know that even chips from the same wafer can have very widely varying capabilities? My experience is that factory OCed cards work at their factory overclocks on DC projects, often significantly higher. My experience is also that DC projects are generally less demanding and do not stress GPUs as much as heavy gaming.

You can look at what Asus says; they are the best at selecting their chips that I know of in the TOP program; they test chips at random in incoming lots. And yes, I know that chips tested before packaging where all the pads are available can be tested much more thoroughly.

You are speculating; cite a source that says that Nvidia supplies binned chips, and what card makers are using them (I would not be surprised if they test chips more thoroughly for use in supercomputers, but that is another matter). Of course some overclocked cards work well until they don't; check it out the next time you get a failed work unit. "Less demanding" probably means you are looking at temperature, but not at the different functions of the chips. For example, DC projects don't use the rasterizing units, which can produce more heat, but they will use various other functions.

Profile skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 30124 - Posted: 20 May 2013 | 17:52:36 UTC - in response to Message 30121.
Last modified: 20 May 2013 | 17:53:20 UTC

OC'ed cards tend to come with a higher price tag, but as well as the improved clocks you get better quality components than on reference models: typically their boards use better parts, the heatsink dissipates heat better, and the fans are bigger, cool better and make less noise. If an OC'ed GPU struggles with a WU type, you can downclock it a bit or up the voltage slightly, and it's still going to be more efficient than a reference model. If it breaks, RTM it; you are also likely to get a better warranty on such cards.

GPU makers pay a bit of a premium for the best chips, so they already know the performance, at least up to a point. I think they individually tune some GPU types after assembly, but their 'reference models' don't go through this additional testing layer (which is why you might get a good clocker or a dud).
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

Profile Beyond
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Message 30126 - Posted: 20 May 2013 | 17:58:38 UTC - in response to Message 30121.
Last modified: 20 May 2013 | 18:04:13 UTC

You are speculating; cite a source that says that Nvidia supplies binned chips, and what card makers are using them

Interesting reversal of "logic" or lack thereof. I was taking issue with your speculation. I said:

> Do you know that EVGA and other board makers aren't using binning supplied by the foundry?
> Did you know that even chips from the same wafer can have very widely varying capabilities?

So can you cite a source that says that Nvidia doesn't supply binned chips?

TJ
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Message 30128 - Posted: 20 May 2013 | 21:53:33 UTC - in response to Message 30110.

One more question. I see a good offer in the Netherlands for an EVGA GTX650Ti SSC 2GB, so that will be Super Super Clocked. Is that the same as overclocking? If yes, can I change it via settings?

Is that the one clocked at 1071MHz? It should be a good card. My 3 MSI 650 Ti cards were factory OCed at 993MHz and they're all running fine at 1084MHz. I haven't had many EVGA GPUs, but the ones I've had have been very good. I still have an EVGA GTX 460 running and it's the best of the four 460 cards that I've owned. I'd recommend MSI Afterburner for controlling GPU settings and fans. It pretty much handles any brand of GPU (both NV and ATI/AMD), multiple varied GPUs, has fewer bugs than the other control apps, and is updated regularly.

Yes that is the one. But I saw a GTX 660 for only a little more money. Same 192bit bus but more stream processors. So I am still a bit in doubt.
____________
Greetings from TJ

Jim1348
Joined: 28 Jul 12
Posts: 819
Credit: 1,591,285,971
RAC: 0
Message 30129 - Posted: 20 May 2013 | 22:06:26 UTC - in response to Message 30126.

You are speculating; cite a source that says that Nvidia supplies binned chips, and what card makers are using them

Interesting reversal of "logic" or lack thereof. I was taking issue with your speculation. I said:

> Do you know that EVGA and other board makers aren't using binning supplied by the foundry?
> Did you know that even chips from the same wafer can have very widely varying capabilities?

So can you cite a source that says that Nvidia doesn't supply binned chips?

That is another way of saying that you have no evidence to prove the assertion that the chips are binned. But you may believe it if you want to.

As for heatsinks, power supply components, etc., they often are larger for overclocked cards to handle the extra heat and current load, but that has nothing to do with error rates (except to make them worse if they weren't oversized). I think you need to look at your error rates, which is the relevant data, and stop speculating as to what happens in any given factory.

matlock
Joined: 12 Dec 11
Posts: 34
Credit: 86,423,547
RAC: 0
Message 30131 - Posted: 20 May 2013 | 22:53:32 UTC - in response to Message 30128.

Yes that is the one. But I saw a GTX 660 for only a little more money. Same 192bit bus but more stream processors. So I am still a bit in doubt.


The GTX 660 is great. The 650Ti has a 128-bit bus (not 192), less L2 cache, and fewer ROPs, in addition to the reduction of stream processors. The 650TiBoost is a 660 in the other regards, but has the same number of stream processors as the 650Ti.
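To put rough numbers on why the bus width matters: peak memory bandwidth is the bus width in bytes times the effective memory transfer rate. A small sketch, assuming the reference memory clocks of these cards (5.4 Gbps for the 650Ti, about 6.0 Gbps for the 660 and 650Ti Boost; check your own card's specs):

```python
def bandwidth_gb_s(bus_width_bits, effective_clock_gtps):
    """Peak memory bandwidth in GB/s: bytes per transfer * transfers per second."""
    return bus_width_bits / 8 * effective_clock_gtps

print(bandwidth_gb_s(128, 5.4))    # GTX650Ti: 86.4 GB/s
print(bandwidth_gb_s(192, 6.008))  # GTX660 / GTX650Ti Boost: ~144.2 GB/s
```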

Profile Beyond
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Message 30132 - Posted: 20 May 2013 | 23:35:09 UTC - in response to Message 30131.
Last modified: 20 May 2013 | 23:46:24 UTC

Yes that is the one. But I saw a GTX 660 for only a little more money. Same 192bit bus but more stream processors. So I am still a bit in doubt.

The GTX 660 is great. The 650Ti has a 128-bit bus (not 192), less L2 cache, and fewer ROPs, in addition to the reduction of stream processors. The 650TiBoost is a 660 in the other regards, but has the same number of stream processors as the 650Ti.

I agree, if the price is close, get the 660. Did you see skgiven's test and analysis of the 650 Ti, 650 Ti Boost, 660 and 660 Ti a few days ago? Read it, it's good information. There's also companion information in the later posts of the same thread.

Edit: Here's a link to the thread:

http://www.gpugrid.net/forum_thread.php?id=3156&nowrap=true#29914

Profile Beyond
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Message 30133 - Posted: 20 May 2013 | 23:38:33 UTC - in response to Message 30129.

You are speculating; cite a source that says that Nvidia supplies binned chips, and what card makers are using them

Interesting reversal of "logic" or lack thereof. I was taking issue with your speculation. I said:

> Do you know that EVGA and other board makers aren't using binning supplied by the foundry?
> Did you know that even chips from the same wafer can have very widely varying capabilities?

So can you cite a source that says that Nvidia doesn't supply binned chips?

That is another way of saying that you have no evidence to prove the assertion that the chips are binned. But you may believe it if you want to.

Do you really not understand how you turned this around, or are you just trying to be difficult? I hope you're trying to be difficult, it's easier to fix ;-)

matlock
Joined: 12 Dec 11
Posts: 34
Credit: 86,423,547
RAC: 0
Message 30135 - Posted: 21 May 2013 | 2:46:52 UTC - in response to Message 30133.

You are speculating; cite a source that says that Nvidia supplies binned chips, and what card makers are using them

Interesting reversal of "logic" or lack thereof. I was taking issue with your speculation. I said:

> Do you know that EVGA and other board makers aren't using binning supplied by the foundry?
> Did you know that even chips from the same wafer can have very widely varying capabilities?

So can you cite a source that says that Nvidia doesn't supply binned chips?

That is another way of saying that you have no evidence to prove the assertion that the chips are binned. But you may believe it if you want to.

Do you really not understand how you turned this around, or are you just trying to be difficult? I hope you're trying to be difficult, it's easier to fix ;-)


Be nice, guys. We are all nerds on the same team. Why not chalk it up to: "I feel comfortable having an overclocked GPU" and "I feel more comfortable having a reference clocked GPU". Debating slight performance differences of 650Ti models is really splitting hairs.

TJ
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Message 30146 - Posted: 21 May 2013 | 8:36:54 UTC - in response to Message 30132.
Last modified: 21 May 2013 | 8:37:27 UTC

Yes that is the one. But I saw a GTX 660 for only a little more money. Same 192bit bus but more stream processors. So I am still a bit in doubt.

The GTX 660 is great. The 650Ti has a 128-bit bus (not 192), less L2 cache, and fewer ROPs, in addition to the reduction of stream processors. The 650TiBoost is a 660 in the other regards, but has the same number of stream processors as the 650Ti.

I agree, if the price is close, get the 660. Did you see skgiven's test and analysis of the 650 Ti, 650 Ti Boost, 660 and 660 Ti a few days ago? Read it, it's good information. There's also companion information in the later posts of the same thread.

Edit: Here's a link to the thread:

http://www.gpugrid.net/forum_thread.php?id=3156&nowrap=true#29914

Yes I saw that. I read most of the technical information from skgiven; seeing his credits and RAC, he knows his stuff :) Also at other projects.
But I have read all the information from the other "specialists" as well, and I will go for the 660 from EVGA, not the SC or SSC, as they are too expensive for me at the moment.

@matlock: I guess I was mistaken; the 650Ti Boost has a 192-bit bus and the 650Ti a 128-bit one, right?
____________
Greetings from TJ

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 30154 - Posted: 21 May 2013 | 10:42:09 UTC

Jim, you're of course right that higher clock speeds increase the risk of calculation errors. But in a well designed chip and with proper error-detecting tests (not sure we have these for GPUs..) this is not a smooth transition, but rather almost a step function. For my GTX660Ti that's "1228 MHz still works at GPU-Grid, 1241 MHz won't". While I do get occasional errors here (I've been watching it closer lately), these are always WUs which also fail for everyone else. So I conclude that I'm safe at 50 MHz over the top clock speed the manufacturer chose for my (heavily) factory-OCed card, even for Noelia tasks.

I may have to adjust this down by 13 MHz in a year or two due to chip degradation.. I won't mind. And you may rightfully say "but that's just one example". To which I'd reply that the point is "there is a certain operating point for every card just before the unstable region starts [when clocking up]". That's where you're running most efficiently (unless tasks fail, of course). Find this point, then keep some safe distance (generally I'd recommend more like 26 MHz rather than 13 MHz, but I don't always practice what I preach..). If one follows this, it really doesn't matter what clock speed the manufacturer has set.
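That "step up until it fails, then back off" routine can be sketched as follows. Here set_gpu_clock and tasks_complete_ok are hypothetical stand-ins for your tuning tool and for watching a batch of results, and the 13/26 MHz figures are the Kepler clock bin and the safety margin mentioned above:

```python
STEP_MHZ = 13           # Kepler clocks move in 13 MHz bins
SAFETY_MARGIN_MHZ = 26  # back off two bins from the first failing clock

def find_stable_clock(start_mhz, set_gpu_clock, tasks_complete_ok):
    """Raise the clock one bin at a time until tasks fail, then settle
    a safe distance below the edge of the unstable region."""
    clock = start_mhz
    set_gpu_clock(clock)
    while tasks_complete_ok(clock):
        clock += STEP_MHZ
        set_gpu_clock(clock)
    stable = clock - STEP_MHZ - SAFETY_MARGIN_MHZ
    set_gpu_clock(stable)
    return stable
```

With the figures from the card above (1228 MHz works, 1241 MHz fails), starting at 1215 MHz this settles at 1202 MHz, i.e. two bins below the last known good clock.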

Well, and a factory OC'ed card was at least tested for higher frequencies in games. This includes the shaders.. and makes it more likely the card will also perform better in DC. Actually I found that in practice I could achieve higher clock speeds in DC than in games, depending on the project.

@Beyond: this "only uses 2/3 of its shaders" was not a software error. It's a design choice nVidia makes, which turns out to be more or less helpful depending on the code. All chips after GF100 and GF110 use this scheme (so even Titan).

MrS
____________
Scanning for our furry friends since Jan 2002

Profile Beyond
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Message 30157 - Posted: 21 May 2013 | 14:17:37 UTC - in response to Message 30154.

in a well designed chip and with proper error-detecting tests (not sure we have these for GPUs..) this is not a smooth transition, but rather almost a step function. For my GTX660Ti that's "1228 MHz still works at GPU-Grid, 1241 MHz won't".

I may have to adjust this down by 13 MHz in a year or two due to chip degradation..

This fits my experience also, and it's true across projects, not only here. As my GPUs get older, I usually have to slowly step down the OC speed to keep them running 100% trouble free.

@Beyond: this "only uses 2/3 of its shaders" was not a software error. It's a design choice nVidia makes, which turns out to be more or less helpful depending on the code. All chips after GF100 and GF110 use this scheme (so even Titan). MrS

But then explain why ACEMD properly uses all the shaders now. As I understand it ACEMD improperly detected the number of shaders in the GF106 (or maybe more precisely those settings were not included in ACEMD, or ACEMD was hardwired to always assume a particular shader ratio). Then a couple of years later ACEMD was finally updated (corrected) to properly detect and use all the shaders in the GF106 based GPUs. Sure sounds like a software problem to me. At least that was the way it was explained to me in a different thread. Someone correct me if I have the details wrong.

Profile skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 30162 - Posted: 21 May 2013 | 15:55:24 UTC - in response to Message 30157.

I think the super-scalar cards are now preferentially favoured by the application. When the top GPUs were the GTX480 to GTX590, it made sense to favour those Compute Capability 2.0 architectures for project optimization reasons. It now makes more sense to use an app that favours the CC3.0 GeForce 600 GPUs, which are all super-scalar. This just happens to make the CC2.1 cards (super-scalar GeForce 400 and 500 series GPUs) perform better than they did, and it also makes my old GPU comparison tables (made with the older apps) obsolete.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

Profile Beyond
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Message 30163 - Posted: 21 May 2013 | 16:48:51 UTC - in response to Message 30162.

I think the super-scalar cards are now preferentially favoured by the application. When the top GPUs were the GTX480 to GTX590, it made sense to favour those Compute Capability 2.0 architectures for project optimization reasons. It now makes more sense to use an app that favours the CC3.0 GeForce 600 GPUs, which are all super-scalar. This just happens to make the CC2.1 cards (super-scalar GeForce 400 and 500 series GPUs) perform better than they did, and it also makes my old GPU comparison tables (made with the older apps) obsolete.

Interesting. If this is the way it works it would seem to be a good idea to allow the sending of different optimized apps to different GPU types. That's the way other projects handle this kind of situation AFAIK.

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 30166 - Posted: 21 May 2013 | 17:10:29 UTC - in response to Message 30163.

That's just my take on the situation. Only the researchers could tell you if that was the case, or how close it is to the situation.

There are many potential issues with having multiple apps (queues, resources, management), but the biggest may be that any research has to be performed on the same app (or one that is essentially the same for the purpose of analysis) for the research to be presentable/acceptable.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

Profile Beyond
Avatar
Send message
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Level
Tyr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 30167 - Posted: 21 May 2013 | 18:52:13 UTC - in response to Message 30166.

That's just my take on the situation. Only the researchers could tell you if that was the case, or how close it is to the situation.

There are many potential issues with having multiple apps (queues, resources, management), but the biggest may be that any research has to be performed on the same app (or one that is essentially the same for the purpose of analysis) for the research to be presentable/acceptable.

Other projects seem to handle the same issues easily. What's so hard about setting up a different queue? What's so hard about having different apps optimized for different NVidia card classes? I'm just not seeing why it should be so difficult. Why not ask for help from some of the other projects, or if need be even hire one of them to set up the queues, etc?

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 30187 - Posted: 22 May 2013 | 7:44:13 UTC - in response to Message 30167.

I'm just not seeing why it should be so difficult. Why not ask for help from some of the other projects, or if need be even hire one of them to set up the queues, etc?

That is very difficult in a scientific setting. It has a lot to do with funding and research groups.
Working for another science group, even for hire, means that work for one's own group is put on hold.
Personally I think it is more important for the GPUGRID project to get the science right, as that will help cure some terrible diseases. If there is time left, a trainee or intern could work on several apps.
One good app is also easier to update and maintain than several, which carry the risk that something gets forgotten, etc.
____________
Greetings from TJ

Profile Beyond
Avatar
Send message
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Level
Tyr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 30203 - Posted: 22 May 2013 | 14:46:39 UTC - in response to Message 30187.

I'm just not seeing why it should be so difficult. Why not ask for help from some of the other projects, or if need be even hire one of them to set up the queues, etc?

Personally I think that it is more important for the GPUGRID project to get the science right, as that will help cure some terrible diseases.

You may be right. The other side of the coin though is that the work is going to get done much faster if more crunchers and GPUs are accommodated. That's assuming that the project needs the crunching capacity. Maybe it doesn't. How much work is lost and computing time wasted by apps that don't work correctly with many GPUs, and WUs that aren't formatted correctly? I think we can see by our experience that a lot is lost. Trade-offs? I'd say there are always trade-offs. Seriously though, setting up new queues should be a simple matter in BOINC. There's more than one forum / e-mail list where developers can get help from others who have already climbed the mountain. One can't be too proud to ask though...

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 30208 - Posted: 22 May 2013 | 16:04:48 UTC - in response to Message 30203.

I'm just not seeing why it should be so difficult. Why not ask for help from some of the other projects, or if need be even hire one of them to set up the queues, etc?

Personally I think that it is more important for the GPUGRID project to get the science right, as that will help cure some terrible diseases.

You may be right. The other side of the coin though is that the work is going to get done much faster if more crunchers and GPUs are accommodated. That's assuming that the project needs the crunching capacity. Maybe it doesn't. How much work is lost and computing time wasted by apps that don't work correctly with many GPUs, and WUs that aren't formatted correctly? I think we can see by our experience that a lot is lost. Trade-offs? I'd say there are always trade-offs. Seriously though, setting up new queues should be a simple matter in BOINC. There's more than one forum / e-mail list where developers can get help from others who have already climbed the mountain. One can't be too proud to ask though...

I agree with you Beyond, the more that is crunched the better it is for the project.
____________
Greetings from TJ

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 30209 - Posted: 22 May 2013 | 16:13:36 UTC - in response to Message 30203.
Last modified: 22 May 2013 | 16:17:50 UTC

I don't think the issue is with the creation of new queues, they have been added and deleted before. It would be more of a maintenance issue.
The project has an inherent need for longer WU's with more steps and larger detail. This makes shorter, more accommodating experiments, less useful. The solution there is to diversify, which is something it appears Gianni is trying to do. Then there is the app situation - GPUGrid is basically a one app project, with different WU types and lengths. From a scientific point of view this means results are comparable, and it allows the researchers to extend runs in order to get more detail.

BTW. I'm not arguing for or against more queues, apps, or better stability (I crunch too), I'm just trying to give my take on the situation.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 30225 - Posted: 22 May 2013 | 19:59:34 UTC

Let's take a look at the hard numbers. I've compared the average runtimes of 5 recent Noelia WUs on fast Linux hosts (GPUs are less likely to be overclocked there) and took the theoretical SP performance into account:

runtime in ks | theoretical performance in TFlops | TFlops*runtime
GTX 580: 39.49 | 1.58 | 62.4
GTX 570: 41.92 | 1.40 | 58.7
GTX 480: 46.00 | 1.34 | 61.6
GTX 660Ti: 38.07 | 2.46 | 93.7
GTX 560Ti: 67.71 | 1.26 | 85.3

The quantity "TFlops*runtime" may not be the most intuitive, but it makes sense if we want to compare architecture efficiencies. A low value signals a highly efficient architecture:
- for a given theoretical speed, the longer the WUs take, the less the hardware is actually being used
- for a given runtime, the more TFlops were needed to achieve it, the less efficient the hardware is

What we see here is still a clear ~50% advantage for the CC 2.0 cards - or, put the other way around, the superscalar GPUs reach only about 2/3 of their theoretical throughput. Just as it has been since the introduction of these cards! I don't have such hard numbers for the older apps at hand, but I suspect they'd be equivalent.

The solution to this apparent paradox is pretty simple, IMO: the Keplers have added so much theoretical performance and gained so much power efficiency (due to architecture as well as process node) that they're clearly superior to the Fermis. And there are no new non-superscalar GPUs. Hence we forgive the new cards that 2/3 penalty and consider them quite good (which they are, IMO). Hence the suspicion that "the client must have improved for the newer cards". In the light of these numbers, I think we can quickly stop the discussion about new queues and multiple apps :)
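The metric is easy to reproduce; a quick sketch with the figures from the table above:

```python
# Reproduce the efficiency metric from the table above:
# TFlops * runtime (in kiloseconds). Lower = the architecture turns
# more of its theoretical throughput into finished work.
cards = {
    # name: (runtime in ks, theoretical SP TFlops)
    "GTX 580": (39.49, 1.58),
    "GTX 570": (41.92, 1.40),
    "GTX 480": (46.00, 1.34),
    "GTX 660Ti": (38.07, 2.46),
    "GTX 560Ti": (67.71, 1.26),
}

for name, (runtime_ks, tflops) in cards.items():
    print(f"{name}: {tflops * runtime_ks:.1f} ks*TFlops")
```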

MrS
____________
Scanning for our furry friends since Jan 2002

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 30231 - Posted: 22 May 2013 | 21:40:00 UTC - in response to Message 30225.

If I have understood correctly, the lower the result of TFlops times runtime, the more efficient the card?
Then would the 660Ti perform worst?
____________
Greetings from TJ

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 30282 - Posted: 23 May 2013 | 23:41:05 UTC - in response to Message 30162.
Last modified: 23 May 2013 | 23:41:34 UTC

I think the super-scalar cards are now preferentially favoured by the application. When the top GPUs were the GTX480 to GTX590 it made sense to favour these Compute Capability 2.0 architectures, for project optimization reasons. It now makes more sense to use an app that favours the CC3.0 GeForce 600 GPUs, which are all super-scalar. This just happens to make the CC2.1 cards (superscalar GeForce 400 and 500 series GPUs) perform better than they did, and also makes my old GPU comparison tables (with the older apps) obsolete.

You disagree?

Going back a year to when the top GPU's were CC2.0, a GTX470 did 29% more work than a GTX560Ti.
Now, with newer apps, a GTX470 can only do 7% more work than a GTX560Ti.

With super-scalar cards you can never utilize all the shaders fully, but I think it's gone up from 66% to around 80%.
The theoretical GFLOPS are not a good indicator of performance here, otherwise the GTX650Ti would have been 16% faster than a GTX470 from the outset (as suggested by their GFLOPS), and we wouldn't have needed correction factors to accurately compare GPUs of different compute capability.
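For what it's worth, the ~80% figure can be back-calculated from the numbers in this thread. A rough sketch: the GTX560Ti's 1.26 TFlops comes from the table posted earlier, while the GTX470's ~1.09 TFlops is my assumed reference-clock figure (448 shaders x 1215 MHz x 2 ops), not something stated in this thread:

```python
# Back-calculating a shader-utilization estimate for the superscalar
# GTX560Ti relative to the CC2.0 GTX470.
gtx470_tflops = 1.09      # assumed reference-clock figure (not from this thread)
gtx560ti_tflops = 1.26    # from the table earlier in the thread

measured_ratio = 1.07                                # GTX470 does ~7% more work now
theoretical_ratio = gtx470_tflops / gtx560ti_tflops  # ~0.87 if both were fully utilized

# Relative utilization of the superscalar card, assuming the CC2.0
# card is fully utilized:
utilization = theoretical_ratio / measured_ratio
print(f"~{utilization:.0%}")                         # ~81%, close to the 80% above
```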
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

Profile Retvari Zoltan
Avatar
Send message
Joined: 20 Jan 09
Posts: 2346
Credit: 16,293,515,968
RAC: 6,990,482
Level
Trp
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 30287 - Posted: 24 May 2013 | 10:29:05 UTC - in response to Message 30282.
Last modified: 24 May 2013 | 10:31:36 UTC

Going back a year to when the top GPU's were CC2.0, a GTX470 did 29% more work than a GTX560Ti.
Now, with newer apps, a GTX470 can only do 7% more work than a GTX560Ti.

I think it's because the newer apps are made with CUDA4.2.

With super-scaler cards you can never utilize all the shaders fully, ...

It's true even for a non-super-scalar card :)
I would say that the non-super-scalar cards (still) have a significant advantage over the super-scalar cards in shader utilization.

... but I think its went up from 66% to around 80%.

This advantage is less than it was with the CUDA 3.1 apps (it was around 33%).
It's too bad from the cruncher's perspective that nVidia doesn't make non-super-scalar GPUs anymore, but (as a kind of compensation) the good news is that CUDA 4.2 can better utilize the super-scalar architecture than CUDA 3.1 could.

This discussion is difficult, because we're talking about the performance of a system consisting of many parts, all of which are continuously changing over time, and this change could alter (as it did in the past) their order of significance:
I/a. The GPU
I/b. The code running on the GPU
II. The computer (also a system consisting of many parts)
II/a. The operating system of the computer
II/b. The optimization of the BOINC client for the hardware it's running on
II/c. The hardware components of the computer (besides the GPU)

This is the actual order of significance.
Except for item I/b, the participants can optimize this system.
But item I/b is the foundation of this optimization:
I've changed my Core 2 Quad systems to Core i7 systems to achieve better overall performance by eliminating the PCIe bandwidth bottleneck which reduced the performance of the CUDA 3.1 client. The CUDA 4.2 client is better in regard to this issue too, so no such change (read it as: investment) is needed now from the participants. But you still have to spare a CPU thread per GPU (item II/b).

Profile Beyond
Avatar
Send message
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Level
Tyr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 30300 - Posted: 24 May 2013 | 15:38:43 UTC - in response to Message 30287.

This advantage is less than it was with the CUDA 3.1 apps (it was around 33%).
It's too bad from the cruncher's perspective that nVidia doesn't make non-super-scalar GPUs anymore, but (as a kind of compensation) the good news is that CUDA 4.2 can better utilize the super-scalar architecture than CUDA 3.1 could.

This discussion is difficult, because we're talking about the performance of a system consisting of many parts, all of which are continuously changing over time, and this change could alter (as it did in the past) their order of significance:
I/a. The GPU
I/b. The code running on the GPU
II. The computer (also a system consisting of many parts)
II/a. The operating system of the computer
II/b. The optimization of the BOINC client for the hardware it's running on
II/c. The hardware components of the computer (besides the GPU)

This is the actual order of significance.
Except for item I/b, the participants can optimize this system.
But item I/b is the foundation of this optimization:
I've changed my Core 2 Quad systems to Core i7 systems to achieve better overall performance by eliminating the PCIe bandwidth bottleneck which reduced the performance of the CUDA 3.1 client. The CUDA 4.2 client is better in regard to this issue too, so no such change (read it as: investment) is needed now from the participants. But you still have to spare a CPU thread per GPU (item II/b).

Interesting. I did notice the large performance gain for GF106 cards with the CUDA 4.2 app. I didn't realize that the new app also improved the situation with PCIe. Performance is not too degraded now even for cards in an X4 slot. Thanks.

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 30309 - Posted: 24 May 2013 | 18:24:02 UTC - in response to Message 30287.

But you still have to spare a CPU thread per GPU (item 5)


This all is good information Retvari thanks.

One more question: 'keep one CPU core free', does that mean two on an i7 when HT is switched on?

____________
Greetings from TJ

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 30318 - Posted: 24 May 2013 | 21:21:29 UTC - in response to Message 30282.

I think it's gone up from 66% to around 80%.

Then why would I still be calculating ~66% for a sample size of 3+2?

@TJ: I think he means logical cores.

MrS
____________
Scanning for our furry friends since Jan 2002

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 30322 - Posted: 25 May 2013 | 0:05:36 UTC - in response to Message 30318.

You included a GTX660Ti, which going by theoretical TFlops is identical to a GTX670. We know it's memory bottlenecked, as are other GTX600-series cards to varying extents - the memory controller load on my GTX660Ti's is 41%, on my GTX660 31%, and on the 650Ti Boost ~27%. Theoretical TFlops is not that useful for comparisons; even years ago we needed to use correction factors. Your sample is also skewed: even between your 580 and 570 the numbers don't add up. The theory says a 580 is 13% faster than a 570, but your sample has a 6% gap. There should be a 4% performance difference between a GTX480 and a GTX570; your sample has it at 9.7%. So the 660Ti isn't a 670, the 570 is probably OC'd, and your 580 is from this system,
Coprocessors [7] NVIDIA GeForce GTX 580 (1535MB).

Following the move to CUDA 4.2, I remember one of the researchers posting to say that the 66% shader utilization limit wasn't an issue any more.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 30327 - Posted: 25 May 2013 | 10:42:53 UTC
Last modified: 25 May 2013 | 10:46:56 UTC

Hello guys,

A quick question, I am installing my GTX660 currently.
Is it wise to update to the latest BOINC version 7.0.64?

I read on other projects that this version has problems with Linux.

I run winVista ultimate x64 on the I7 with the new GTX660.

Thanks as always for the help!
____________
Greetings from TJ

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 30333 - Posted: 25 May 2013 | 11:50:02 UTC - in response to Message 30327.
Last modified: 25 May 2013 | 11:50:43 UTC

I'm using 7.0.27 on Ubuntu 13.04, with 304.88 drivers. New-ish OS, but mature apps and drivers - Working fine.

I don't know what version of Boinc you are presently using and can't see your system, but generally speaking, if it works, leave it be. If you need 7.0.64/65 for some other project then have a go.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 30335 - Posted: 25 May 2013 | 12:29:12 UTC - in response to Message 30333.

I'm using 7.0.27 on Ubuntu 13.04, with 304.88 drivers. New-ish OS, but mature apps and drivers - Working fine.

I don't know what version of Boinc you are presently using and can't see your system, but generally speaking, if it works, leave it be. If you need 7.0.64/65 for some other project then have a go.

That is also my idea: "if it works, leave it alone".
I installed the new GTX660 and all the software from the CD.
BOINC has a message that an app will not work.
Probably due to the 305.xx version of the nVidia drivers.
I am updating these now to 314.xx to see if that works.
I will leave BOINC at 7.0.28, which is what I'm using on that PC.
____________
Greetings from TJ

Profile Beyond
Avatar
Send message
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Level
Tyr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 30336 - Posted: 25 May 2013 | 12:29:26 UTC - in response to Message 30327.
Last modified: 25 May 2013 | 12:30:00 UTC

Is it wise to update to the latest BOINC version 7.0.64?
I run winVista ultimate x64 on the I7 with the new GTX660.

Yes. There are a huge number of bugfixes in 7.0.64. I'm running it on 11 machines and while not yet perfect, it's better than anything previous. BTW it's the recommended stable version currently:

http://boinc.berkeley.edu/download_all.php

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 30341 - Posted: 25 May 2013 | 13:13:30 UTC
Last modified: 25 May 2013 | 13:14:43 UTC

More questions

I have now installed the GTX660; it is running some Milkyway WUs as a test, as they are fast (approx. 7 minutes).
BOINC 7.0.28, nVidia driver 314.22, CUDA version 5.0, compute capability 3.0.

Have installed EVGA Precision and set the GPU clock and fan speed to auto.
The GPU clock is running at 1110MHz, at times less. Should I set this lower?

Now everything in the BOINC tasks view is "flashing", and switching to other windows, e.g. the browser or Task Manager, sometimes gives a white screen for a second.
That was not the case with the GTX285. Things seem to go slower.
How can I change this?

I will update to the latest BOINC when the Rosetta WU's are ready.
____________
Greetings from TJ

Trotador
Send message
Joined: 25 Mar 12
Posts: 103
Credit: 13,920,977,393
RAC: 5,040,369
Level
Trp
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 30351 - Posted: 25 May 2013 | 14:29:15 UTC - in response to Message 30335.

I'm using 7.0.27 on Ubuntu 13.04, with 304.88 drivers. New-ish OS, but mature apps and drivers - Working fine.

That is also my idea "if it works leave it alone".
I Installed the new GTX660 and all the software from the CD.
BOINC has a message that an app will not work.
Probably due to the 305.xx version of the nVidia drivers.
I am updating these now to 314.xx and see if that works.
I will leave BOINC than on 7.0.28, that I'm using on that PC.


FYI, app_config.xml only works in 7.0.40 and higher; it is not related to the Nvidia driver.

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 30365 - Posted: 25 May 2013 | 18:49:09 UTC

I wish I had listened to skgiven!
I am updating BOINC to 7.0.64 and get an error message about a key, error 1402,
saying that I don't have rights to a key.
I don't know how to post the picture here; I have it on the PC as a JPG.

But now BOINC is not working any more. A new GTX660 and no way to use it.
Who can help me solve this, please?
____________
Greetings from TJ

Profile Beyond
Avatar
Send message
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Level
Tyr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 30367 - Posted: 25 May 2013 | 19:15:01 UTC - in response to Message 30365.
Last modified: 25 May 2013 | 19:17:21 UTC

I am updating BOINC to 7.0.64 and get error message about a key, error 1402
that I don't have right for a key.

Googled the error, seems you have a permissions problem. It has nothing to do with 7.0.64. Read this, follow the instructions:

http://boinc.berkeley.edu/dev/forum_thread.php?id=3248#20925

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 30371 - Posted: 25 May 2013 | 21:54:48 UTC - in response to Message 30367.

I am updating BOINC to 7.0.64 and get error message about a key, error 1402
that I don't have right for a key.

Googled the error, seems you have a permissions problem. It has nothing to do with 7.0.64. Read this, follow the instructions:

http://boinc.berkeley.edu/dev/forum_thread.php?id=3248#20925

Thanks Beyond that was the trick.

Well, all day busy with installing just a card. Booting my system takes 11 minutes?
Have run all sorts of diagnostics, but it seems to be something with PCAngel, the install-and-repair utility via which the system builder installed Windows. Not funny.
Anyhow, all is running now; a NOELIA on the new GTX660 seems to take 78 hours to complete? I'll see.
But the system is slow. I have now used 5 cores to crunch Rosetta and one (0.737) for GPUGRID, so 2 are left for other things, and it is still slow. Even browsing. I have all my project pages and task lists open in FF, and switching from one to the other and back is not always fast. The GPU load is around 50% and the GPU clock mostly at 1110MHz, but sometimes lower. Even while typing this, FF was not responding for a few seconds. At the moment I am not happy with my updates, and tired. I am going to sleep, but hope to read some suggestions from the experts out there tomorrow.

Thanks for the help and input.
____________
Greetings from TJ

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 30373 - Posted: 25 May 2013 | 23:48:00 UTC - in response to Message 30371.

A NOELIA WU should take 12 or 13h on a GTX660, 78h is way too long!
Are you going by the % complete and time taken, or the inaccurate remaining time?
50% GPU load is too low and suggests that the Noelia WU needs more CPU access, even though you have 2 free CPU threads (perhaps FF is using them, or some plug-in is eating up a thread). What's the CPU kernel usage like? Is there heavy disk I/O?
11min to boot isn't normal for a desktop system.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 30383 - Posted: 26 May 2013 | 8:56:19 UTC - in response to Message 30373.
Last modified: 26 May 2013 | 8:57:47 UTC

A NOELIA WU should take 12 or 13h on a GTX660, 78h is way too long!
Are you going by the % complete and time taken, or the inaccurate remaining time?
50% GPU load is too low and suggests that the Noelia WU needs more CPU access, even though you have 2 free CPU's (perhaps FF is using them and some plug in is eating up a thread). What's the CPU kernel usage like? Is there heavy disk I/O.
11min to boot isn't normal for a desktop system.

Thanks skgiven, but you have to help me with the CPU kernel usage. Is that what the CPU is using? It's around 88%. According to Process Explorer, I/O is very little, most of the time zero. Of the 12GB of memory, 4.5GB is used.
I don't know what is wrong with this system; it is my only non-Dell and has had issues from the start.
The NOELIA is 53% done in 11:20 hours, with an estimated 51:14 hours still remaining, but that is not right.
I now use a 550Ti in a Dell quad with Vista 32-bit, and it runs NOELIAs fine in approx. 40 hours with a steady GPU load of 91% (monitoring with EVGA NV-Z).

The 660 is in the i7 with Vista Ultimate 64-bit and has a varying GPU load: 5%, 43%, up to 63% as the highest I have seen. GPU clock mostly at 1110MHz.

One thing I found is that last night there was 28GB free on my C-drive and now only 7.84GB free. I don't know yet what happened??

Closing FF, and checking that it was indeed killed, kept the CPU usage at 88%.

There is not a lot of software installed on this system and use it for crunching and some browsing now and then mostly all BOINC project related.

After yesterday's updates to the GTX660, the newer 314.22 driver and the new BOINC 7.0.64, the system seems slower at building up graphics: blank screens for a second when switching windows and such. The remaining time in BOINC Manager is flickering all the time, which it isn't on my other 2 PCs currently crunching.

Any ideas what I can do?
Thanks
____________
Greetings from TJ

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 30402 - Posted: 26 May 2013 | 10:08:38 UTC - in response to Message 30322.

You included a GTX660Ti which going by the theoretical TFlops is identical to the GTX670. We know it's memory bottle-necked as are other cards to varying extents in the GTX600 series - the Memory controller of my GTX660Ti's is 41%, my GTX660 is 31% and the 650TiBoost was ~27%. The theoretical TFlops is not that useful for comparisons. Even years ago we needed to use correction factors. Your sample is also skewed, even between your 580 and 570 the numbers don't add up. The theory says a 580 is 13% faster than a 570, but your sample has a 6% gap. There should be a 4% performance difference between a GTX480 and a GTX570, your sample has it at 9.7%. So the 660Ti isn't a 670, the 570 is probably OC'ed, and your 580 is from this system,
Coprocessors [7] NVIDIA GeForce GTX 580 (1535MB).

Following the move to CUDA 4.2 I remembered one of the researchers posted to say that the 66% shader utilization limit wasn't an issue any more.


By using the theoretical GFlops I actually deduce the current correction factor (between the included GPUs). And of course I can't know the real clock speed, that's why I'm not concerned about 5% differences - which is fine if I'm looking for the 33% elephant in the room. So let's add some more numbers:

runtime in ks | theoretical performance in TFlops | TFlops*runtime
GTX 580: 39.49 | 1.58 | 62.4
GTX 570: 41.92 | 1.40 | 58.7
GTX 480: 46.00 | 1.34 | 61.6
GTX690: 30.22 | 2.81 | 84.9
GTX 660Ti: 38.07 | 2.46 | 93.7
GTX660: 42.21 | 1.88 | 79.3
GTX 560Ti: 67.71 | 1.26 | 85.3

The difference between the GTX690 and GTX660Ti might actually be attributed to the memory bus. The GTX660 shows exceptional performance (not unlike what we see in the real world - it is the new bang-for-the-buck king here), but this ~5% advantage might easily be explained by an OC.

Overall I still see 2 distinctly different classes: CC 2.0 cards around 60 ks*TFlops and superscalar cards around 90 ks*TFlops. However, if I exclude the GTX660Ti (due to memory constraints) and leave the GTX660 in (OC?), the average of these cards shifts downwards to 83 ks*TFlops. That's better than a pure 2/3 rule (in which case the average would have been 90), but still significantly less than what the non-superscalar cards achieve. This might well be the improvement you've been reading about, and it actually seems like a realistic number to me.
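The two classes separate cleanly when the metric is averaged per architecture, using the figures from the extended table above (this is the raw average over all four superscalar cards; as discussed above, the class average shifts depending on which cards are included or excluded):

```python
# Group the cards from the table above into architecture classes and
# compare the average ks*TFlops metric (lower = more efficient).
cc20 = {
    "GTX 580": (39.49, 1.58),
    "GTX 570": (41.92, 1.40),
    "GTX 480": (46.00, 1.34),
}
superscalar = {
    "GTX 690": (30.22, 2.81),
    "GTX 660Ti": (38.07, 2.46),
    "GTX 660": (42.21, 1.88),
    "GTX 560Ti": (67.71, 1.26),
}

def avg_metric(cards):
    """Average of runtime * TFlops over a class of cards."""
    vals = [r * t for r, t in cards.values()]
    return sum(vals) / len(vals)

print(f"CC 2.0 average:      {avg_metric(cc20):.1f} ks*TFlops")
print(f"superscalar average: {avg_metric(superscalar):.1f} ks*TFlops")
```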

MrS
____________
Scanning for our furry friends since Jan 2002

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 30407 - Posted: 26 May 2013 | 12:00:40 UTC - in response to Message 30383.
Last modified: 26 May 2013 | 12:08:00 UTC

A NOELIA WU should take 12 or 13h on a GTX660, 78h is way too long!
Are you going by the % complete and time taken, or the inaccurate remaining time?
50% GPU load is too low and suggests that the Noelia WU needs more CPU access, even though you have 2 free CPU's (perhaps FF is using them and some plug in is eating up a thread). What's the CPU kernel usage like? Is there heavy disk I/O.
11min to boot isn't normal for a desktop system.


Thanks skgiven, but you have to help me with the CPU kernel usage.
Task Manager, View, Show Kernel Times.

Is that what the CPU is using? It's around 88%
88% would be total CPU usage. Kernel usage is less (shown in red), and I just wanted to know if it was high or low (roughly). You are using 6 CPU cores, which is 75% of the CPU, and running a NOELIA WU (which should use around half a CPU thread), totalling 81%. The system is probably using ~3%, so something else is using ~4% of your overall CPU (32% of one thread). I suspect the NOELIA WU might not be using ~50% of a CPU thread, and something else is using more than 4%. Could you check this: BOINC Manager (Advanced view), Tasks, click on the NOELIA WU and then Properties (on the left). CPU time should be ~half of elapsed time.
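That accounting can be sketched in a few lines; the 8-logical-thread count and ~3% OS overhead are assumptions here, not measurements:

```python
# Back-of-envelope CPU accounting for the 8-thread i7 described above.
threads = 8                  # logical threads on the i7 (assumption)
rosetta = 6 / threads        # 6 Rosetta tasks -> 75%
noelia = 0.5 / threads       # NOELIA using ~half a thread -> ~6%
system = 0.03                # rough OS overhead (assumption)

accounted = rosetta + noelia + system
print(f"accounted for: ~{accounted:.0%}")         # ~84%
print(f"unexplained:   ~{0.88 - accounted:.0%}")  # ~4% of total CPU vs the observed 88%
```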

According to process explorer I/O is very little, most of the time zero. From the 12GB mem, 4.5GB is used.
I don't know what is wrong with this system it is my only not Dell and having issues from the start.
NOELIA is 53% ready in 11:20 hours, estimated still 51:14 hours, but that is not right.
Go by the 53% in 11.2h, rather than the estimated time remaining. That means the WU should take just over 21h to complete. You might see the estimated remaining time drop faster than the wall clock advances.
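A minimal sketch of that extrapolation, using the fraction done and elapsed hours given above:

```python
# Extrapolate total and remaining runtime from progress so far.
done = 0.53       # 53% complete
elapsed_h = 11.2  # hours elapsed

total_h = elapsed_h / done         # ~21.1 h, i.e. "just over 21h"
remaining_h = total_h - elapsed_h  # ~9.9 h left

print(f"total ~{total_h:.1f}h, remaining ~{remaining_h:.1f}h")
```

This assumes the task progresses at a constant rate, which is usually a better guide than BOINC's estimated remaining time.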

The 660 is in the i7 with vista ultimate 64 bit and has varying GPU load of 5%, 43% up to 63% as I have seen as highest. GPU clock mostly at 1110MHz.
Not sure what you mean by 5%? 43% to 63% means there is a bottleneck somewhere. Perhaps the CPU, PCIE freq, another app (system or more likely Rosetta), or something else...

One thing I found is that last night there was 28GB free on my C-drive and now only 7.84 free. Don't know what happened yet??
Rosetta - look at a tasks properties in Boinc.

Closing FF and checking if it was indeed killed, kept the CPU usage at 88%.
Rules out FF, but not add ons (Java, downloaders, scripts...)

After yesterday's update to the GTX660, newer driver 314.22 and new BOINC 7.0.64, the system seems slower at building up the graphics. Blank screens for a second when switching and such. The remaining time in BOINC Manager is flickering all the time, unlike on my other 2 PCs currently crunching.
A new driver might be in order, but I would start by suspending Rosetta work and see if the GPU usage rises. If that doesn't improve things, do a restart with Rosetta WU's still suspended (the mini WU's do checkpoint).

For crunching, we recommend that the HDD has 10GB free space, with no more than 80% used. Not sure what type you have, but if it's a standard hard disk drive (not a Solid State Drive) and you have little space on it, that could be a problem, especially if running Rosetta WU's.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 30409 - Posted: 26 May 2013 | 12:27:02 UTC - in response to Message 30402.
Last modified: 26 May 2013 | 15:42:12 UTC

MrS., I did some similar calculations and comparisons myself (different cards though). Some even on my own systems (so those were as accurate as possible), but others largely unknown. I think there are two things at play here; the move to the CUDA 4.2 app inherently favored the CC2.1 GPU's (and GK's) over the CC2.0 cards, and the 2/3 issue might have some sort of a fix; in some circumstances there is no 2/3rds limitation. This might actually be app specific. The performance of the different apps presently in use even varies somewhat between the GK's (memory bandwidth/shader cache related), and might vary by OS/WU type.

Below is my GTX660Ti on W7, and a GTX660Ti on Linux (18% difference):

W7
I71R3-NATHAN_KIDc22_2-2-8-RND7308_0 4478415 139265 25 May 2013 | 4:01:52 UTC 25 May 2013 | 16:18:23 UTC Completed and validated 43,598.53 43,475.95 167,550.00 Long runs (8-12 hours on fastest card) v6.18 (cuda42)
I51R2-NATHAN_KIDc22_2-1-8-RND1700_0 4475974 139265 22 May 2013 | 18:19:57 UTC 23 May 2013 | 6:44:44 UTC Completed and validated 43,750.94 43,617.13 167,550.00 Long runs (8-12 hours on fastest card) v6.18 (cuda42)

Linux
I3R9-NATHAN_KIDc22_2-3-8-RND8249_0 4479099 25 May 2013 | 22:07:28 UTC 26 May 2013 | 10:51:12 UTC Completed and validated 36,793.87 36,662.43 167,550.00 Long runs (8-12 hours on fastest card) v6.18 (cuda42)
I84R9-NATHAN_KIDc22_2-2-8-RND1848_0 4478823 25 May 2013 | 11:51:33 UTC 26 May 2013 | 0:37:30 UTC Completed and validated 36,812.21 36,674.85 167,550.00 Long runs (8-12 hours on fastest card) v6.18 (cuda42)
I58R2-NATHAN_KIDc22_2-3-8-RND0613_0 4478419 25 May 2013 | 1:40:19 UTC 25 May 2013 | 14:23:32 UTC Completed and validated 36,848.42 36,708.87 167,550.00 Long runs (8-12 hours on fastest card) v6.18 (cuda42)

We also know that CPU usage can have a big influence on GPU performance, as can some CPU WU types with some hardware, and we know that PCIE bandwidth has some small influence, but hasn't been fully tested on recent WU's. So, lots of unknowns to contend with. Anyway, these results are closer to what I observed.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 30412 - Posted: 26 May 2013 | 13:05:07 UTC - in response to Message 30407.
Last modified: 26 May 2013 | 13:07:49 UTC

A NOELIA WU should take 12 or 13h on a GTX660, 78h is way too long!
Are you going by the % complete and time taken, or the inaccurate remaining time?
50% GPU load is too low and suggests that the Noelia WU needs more CPU access, even though you have 2 free CPU's (perhaps FF is using them and some plug in is eating up a thread). What's the CPU kernel usage like? Is there heavy disk I/O.
11min to boot isn't normal for a desktop system.


Thanks skgiven, but you have to help me with the CPU kernel usage.
Task Manager, View, Show Kernel Times.

Is that what the CPU is using? It's around 88%
88% would be total CPU usage. Kernel usage is less (shown in red), and I just wanted to know if it was high or low (roughly). You are using 6 CPU cores which is 75% of the CPU and running a NOELIA WU (which Should use around half of a CPU), totaling 81%. The system is probably using ~3%, so something(s) else is using ~4% of your overall CPU (32% of a thread). I suspect the Noelia WU might not be using ~50% of a CPU core, and something else is using more than 4%. Could you check this; Boinc Manager (advanced view), tasks, click on the NOELIA WU and then Properties (to the left). CPU time should be ~half of elapsed time.


The kernel usage is indeed low for 6 cores and a little higher for 2 cores. Checking properties, the NOELIA WU is showing 15:08:53 elapsed time and 11:15:07 CPU time. The WU is using 0.737 CPUs + 1 nVidia GPU.


According to process explorer I/O is very little, most of the time zero. From the 12GB mem, 4.5GB is used.
I don't know what is wrong with this system it is my only not Dell and having issues from the start.
NOELIA is 53% ready in 11:20 hours, estimated still 51:14 hours, but that is not right.
Go by the 53% in 11.2h, rather than the estimated time remaining. That means the WU should take just over 21h to complete. You might see the estimated remaining time drop faster than the wall clock advances.


That is correct, BOINC estimates are rough indeed.

The 660 is in the i7 with vista ultimate 64 bit and has varying GPU load of 5%, 43% up to 63% as I have seen as highest. GPU clock mostly at 1110MHz.
Not sure what you mean by 5%? 43% to 63% means there is a bottleneck somewhere. Perhaps the CPU, PCIE freq, another app (system or more likely Rosetta), or something else...


I mean that the load of the GPU is varying between 5 and 63%, not a constant 91% like on my 550Ti in the other PC. How can I check the PCIE frequency?

One thing I found is that last night there was 28GB free on my C-drive and now only 7.84 free. Don't know what happened yet??
Rosetta - look at a tasks properties in Boinc.


Rosetta is indeed using some space but this is on the D-drive and there is room enough.

Closing FF and checking if it was indeed killed, kept the CPU usage at 88%.
Rules out FF, but not add ons (Java, downloaders, scripts...)


No downloaders active, other things I don't know.

After yesterday's update to the GTX660, newer driver 314.22 and new BOINC 7.0.64, the system seems slower at building up the graphics. Blank screens for a second when switching and such. The remaining time in BOINC Manager is flickering all the time, unlike on my other 2 PCs currently crunching.
A new driver might be in order, but I would start by suspending Rosetta work and see if the GPU usage rises. If that doesn't improve things, do a restart with Rosetta WU's still suspended (the mini WU's do checkpoint).


I have suspended them and indeed the GPU load increases, now varying between 50 and 66%, still low. Total CPU is now 28%.

For crunching, we recommend that the HDD has 10GB free space, with no more than 80% used. Not sure what type you have, but if it's a standard hard disk drive (not a Solid State Drive) and you have little space on it, that could be a problem, especially if running Rosetta WU's.


BOINC and the BOINC DATA are on the D-drive with 825GB free. So I guess that is not the problem.

Perhaps I need to do a complete new install of the OS on this PC, perhaps buying Win7? But that will take a lot of time, and if I can avoid it with your help that would be great. Thanks.
____________
Greetings from TJ

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 30413 - Posted: 26 May 2013 | 13:12:26 UTC

Is it perhaps an idea to update to the latest nVidia driver, 320.18?
Normally I don't use the latest version of things, but in this case my GTX660 is not performing as it should...

Thanks.
____________
Greetings from TJ

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 30417 - Posted: 26 May 2013 | 15:36:14 UTC - in response to Message 30413.
Last modified: 26 May 2013 | 15:47:11 UTC

You could use GPUZ to see what PCIE speed the GPU is operating at. The PCIE performance impact isn't known for each WU type.

In this case I would suggest a move to the latest WHQL driver (320.18). I've had no problems with its Beta 320.14, and on Windows there doesn't seem to be any major issues.

I would not bother upgrading from Vista to Win7. I suggest you disable Aero, if it's running, and set the Windows Performance Options to Adjust for Best Performance, if they are not at that (computer properties, system properties, under Performance click Settings, Adjust for best performance) [For me on W7 that frees up over 100MB GDDR]. Also do a restart, after suspending the CPU projects, to see if it helps. I use CCleaner to control startup programs, to some effect. Might also be worth considering, seeing as it's taking 11min to boot.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 30426 - Posted: 26 May 2013 | 20:01:31 UTC - in response to Message 30417.
Last modified: 26 May 2013 | 20:02:06 UTC

You could use GPUZ to see what PCIE speed the GPU is operating at. The PCIE performance impact isn't known for each WU type.

In this case I would suggest a move to the latest WHQL driver (320.18). I've had no problems with its Beta 320.14, and on Windows there doesn't seem to be any major issues.

I would not bother upgrading from Vista to Win7. I suggest you disable Aero, if it's running, and set the Windows Performance Options to Adjust for Best Performance, if they are not at that (computer properties, system properties, under Performance click Settings, Adjust for best performance) [For me on W7 that frees up over 100MB GDDR]. Also do a restart, after suspending the CPU projects, to see if it helps. I use CCleaner to control startup programs, to some effect. Might also be worth considering, seeing as it's taking 11min to boot.



Eventually the NOELIA finished in 20.9 hours, so twice as fast as on the GTX550Ti, but not as fast as expected on the GTX660. I am waiting for Rosie to finish and will then install the latest nVidia driver and reboot. And run only GPUGRID overnight to see what happens.

I use CCleaner as well.
Aero is on; I don't use it but cannot find where to turn it off.
Second, when I right-click on Computer and Properties I don't get the option "system properties", nor the rest to adjust for best performance. I did a search but found nothing yet. Will buy a book on Win Vista.
____________
Greetings from TJ

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 30428 - Posted: 26 May 2013 | 20:18:29 UTC

With no GPU task active, the system is very responsive: no blank pages, and the lines in the BOINC Manager tray are not flickering.
So it must be something to do with the high use of the graphics card, or some setting?
____________
Greetings from TJ

flashawk
Send message
Joined: 18 Jun 12
Posts: 297
Credit: 3,572,627,986
RAC: 0
Level
Arg
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwat
Message 30429 - Posted: 26 May 2013 | 20:29:38 UTC - in response to Message 30426.

Aero is on I don't use it but cannot find to turn it off.


Right click "My Computer", left click "Advanced", click "Advanced System Settings", in the popup box under Performance click Settings, click adjust for best performance radio button.

How big is your swap file?

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 30430 - Posted: 26 May 2013 | 21:13:18 UTC - in response to Message 30429.

Aero is on I don't use it but cannot find to turn it off.


Right click "My Computer", left click "Advanced", click "Advanced System Settings", in the popup box under Performance click Settings, click adjust for best performance radio button.

How big is you're swap file?

Thanks, I have Aero off and set for best performance.
The swap file size is 12578MB.
____________
Greetings from TJ

flashawk
Send message
Joined: 18 Jun 12
Posts: 297
Credit: 3,572,627,986
RAC: 0
Level
Arg
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwat
Message 30431 - Posted: 26 May 2013 | 21:48:29 UTC - in response to Message 30430.

The swap file size is 12578MB.


That's plenty big enough. I thought maybe it might have something to do with your lag problems, but it would seem not.

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 30433 - Posted: 26 May 2013 | 22:26:15 UTC - in response to Message 30431.

What did GPUZ say about the PCIE width?
What about other power options? Is anything set to reduce, especially the PCIE (Start, type Power, Power Options)?
Is Prefer Maximum Performance on, in NVidia Control Panel, Power Management Mode (Manage 3D settings).
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 30441 - Posted: 27 May 2013 | 10:42:48 UTC - in response to Message 30433.

What did GPUZ say about the PCIE width?
What about other power options? Is anything set to reduce, especially the PCIE (Start, type Power, Power Options)?
Is Prefer Maximum Performance on, in NVidia Control Panel, Power Management Mode (Manage 3D settings).

I cannot find the PCIE width.
I did a clean installation of the nVidia drivers last night but got the error message that the drivers did not install.
Then Windows wanted to install some security patches, which took more than 40 minutes, so I went to sleep.
This morning I powered it down, so a cold start, removed the nVidia driver (I thought) and installed the new one. Again an error that they have not been installed.
I give in; I have switched the system off. Normally I don't give in that easily, but this system is causing a lot of pain.
I don't know what to do other than smash it against the wall, but I don't do such things.
So perhaps next week a format C: and then a complete fresh install of the OS.

All the Dells I have, have no problems: less memory, less CPU capacity and speed, weaker OS versions, but no issues; all run Aero and respond while crunching with ATI cards or the GTX550Ti (that is my weakest system but flawless for 4 years 24/7).
____________
Greetings from TJ

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 30443 - Posted: 27 May 2013 | 13:35:58 UTC - in response to Message 30441.

GPUz now tells you the PCIE Bus Interface details. In the example below it's operating at PCIE2.0 @ X16 rates.

The displayed bus speeds change while the GPU is in use/not in use. So if your system isn't set for maximum performance the GPU could drop to PCIE1.0. Anyway, it's a useful feature for diagnosing issues. In W7 it's the NVidia Control Panel, Manage 3D settings (left), Power Management Mode (right) that impacts this, rather than Power Options, but they might still be important in some circumstances. Not sure if that's the case with Vista and W8; in some respects Vista's Power Options were more powerful.
When trying to install the NVidia driver, download it to your desktop (security rights), then right-click on it and select Run as Administrator. Also, do an advanced, clean installation.
If it doesn't install, and your system is primarily for crunching, back up any files you have and reinstall the OS - it will save you a lot of stress knowing that you are working from a clean OS install.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 30449 - Posted: 27 May 2013 | 17:33:39 UTC

I found the problem with the drivers not installing: it was a rights problem; I deleted the Administrator when I changed the rights to get BOINC installed.
I have also got the system set to best performance.
But more questions:
1. Now the system is on best performance I get see-through windows, meaning that I can see the background picture, or lines in windows disappear. I have to minimize them to the taskbar and back to get the lines back?

2. Only BOINC and EVGA NZ are running, and only one project: GPUGRID with a NATHAN. GPU load is still not high, around 50%. It has done 10% in 1 hour. Is this normal for a NATHAN on a GTX660?

3. Can someone tell me how I can insert a picture here like skgiven did in the thread below? I can show the EVGA screen so that you perhaps get an idea.

A NOELIA is running on the other system with a GTX550Ti at a constant 95-96% GPU load.

Thanks as always for the highly appreciated input and much-needed help.
____________
Greetings from TJ

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 30460 - Posted: 27 May 2013 | 20:58:49 UTC - in response to Message 30449.
Last modified: 27 May 2013 | 21:00:59 UTC

On the hardware front, what is the clock speed of your i7-940 (CPUZ) and what is the PCIE Bus Interface listed as (see GPUZ picture in previous post), for example, PCIE2.0x16 @X16 2.0.

12GB of RAM and only using 4GB. Shouldn't be any issue there, even if it was badly setup; I can get 99% GPU usage on a system with an old dual core using DDR2 (albeit XP).
770W PSU - If it was able to power a GTX285 it is sufficient (and then some).
The lack of primary HDD space might still be an issue, or indicative of some problem.

I found the problem with the drivers not installing: it was a rights problem; I deleted the Administrator when I changed the rights to get BOINC installed.
Installing the latest driver is some progress, but I'm not clear on the security changes you made. When you installed Boinc you didn't do it as a service did you? Is it set to run under your existing user account?

I have also got the system set to best performance.
That removes some of the WDDM overhead. Is NVidia presently set to max performance?
It would also be useful to be able to see your computer, and the tasks that it returns. I know you have already posted most of the info we have asked for but the first page of the logs is an easy way to post most of the info we need to see.

But more questions:
1. Now the system is on best performance I get see-through windows, meaning that I can see the background picture, or lines in windows disappear. I have to minimize them to the taskbar and back to get the lines back?
This is normal in this mode, but you can make individual changes to suit your preferences (user defined).

2. Only BOINC and EVGA NZ is running and only one project. GPUGRID with a NATHAN. GPU load still not high, around 50%. Has done 10% in 1 hour. Is this normal for a NATHAN on a GTX660?
It's good to run with nothing else, so we can eliminate other apps as being a source of interference. I still think 50% is probably too low. There are two types of WU from Nate; NATHAN_dhf35 and NATHAN_KID_c22. On my W7 system the NATHAN_KID_c22 takes about 15.5h on a GTX660 and the NATHAN_dhf35 takes ~6.1h.

3. Can someone tell me how I can insert a picture here like skgiven did in the thread below? I can show the EVGA screen so that you perhaps get an idea.
You would need to upload a screen grab of CPUz to an image host provider (imageshack, photobucket, tinypic, postimage...). When you have the CPUZ app open, you can press Alt+PrtScn to capture just the open window as an image (without the background). You then need to paste it into a photo app such as Paint and save it as a .jpg file. After that, upload it.

A NOELIA is running on this system with a GTX550Ti at constant 95-96 GPU load.

Thanks as always for the highly appreciated input and needful help.

If you don't get anywhere, you might want to try the GPU in your other system.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 30463 - Posted: 27 May 2013 | 22:25:31 UTC - in response to Message 30402.
Last modified: 27 May 2013 | 23:45:12 UTC

So let's add some more numbers:

runtime in ks | theoretical performance in TFlops | TFlops*runtime
GTX 580: 39.49 | 1.58 | 62.4
GTX 570: 41.92 | 1.40 | 58.7
GTX 480: 46.00 | 1.34 | 61.6
GTX690: 30.22 | 2.81 | 84.9
GTX 660Ti: 38.07 | 2.46 | 93.7
GTX660: 42.21 | 1.88 | 79.3
GTX 560Ti: 67.71 | 1.26 | 85.3

OK, GTX650TiBoost: 49.85 | 1.51 | 75.3
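The ks*TFlops figures in the table are just runtime multiplied by the card's theoretical peak; a few rows reproduced as a quick check (all values taken from the table above):

```python
# Reproduce the ks*TFlops efficiency metric from the table above:
# runtime in kiloseconds times theoretical peak in TFlops.
# Lower is better (less compute-time spent per task).
cards = {
    "GTX 580": (39.49, 1.58),
    "GTX 660Ti": (38.07, 2.46),
    "GTX 650Ti Boost": (49.85, 1.51),
}
metric = {name: round(ks * tf, 1) for name, (ks, tf) in cards.items()}
print(metric)
# {'GTX 580': 62.4, 'GTX 660Ti': 93.7, 'GTX 650Ti Boost': 75.3}
```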

255px13x4-NOELIA_klebe_run2-2-3-RND0242_0 4473004 24 May 2013 | 21:20:02 UTC 25 May 2013 | 12:13:07 UTC Completed and validated 49,848.80 21,179.21 127,800.00 Long runs (8-12 hours on fastest card) v6.18 (cuda42)

This GTX650Ti Boost has a middling factory OC, but it's in a PCIE2.0 system supported by a skt775 2.66GHz CPU and old dual-channel DDR3 (1066), and I rounded up its TFlops slightly.
A fairly brutal demonstration of the self-inflicted GF600 GDDR Bus problems!

Which graphic card
I have suspected this for a while. Now I'm convinced. For GPUGrid, the really sweet GF600 GPU is unfortunately an OEM card - the GeForce GTX 660 (OEM) 1152:96:32/256B 130W. Anyone got one? - Thought not!
Alas, we will have to settle for the GTX660, or wait until the GTX650Ti Boost prices equilibrate for the more mid-range crunchers. While I would obviously like to see a non-OEM version of the 660/256 GPU, and other mid to high end GPUs with a 256-bit bus, the GTX680 is bound to be struggling with its 256-bit limitation. To progress, the GF700 needs width!
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 30474 - Posted: 28 May 2013 | 9:01:26 UTC
Last modified: 28 May 2013 | 9:02:34 UTC

Skgiven, thanks for all your help; it is very useful to me.

I have made my PCs visible, so have a look.
I have also set the nVidia card to maximum performance, but that did not change a thing.
Something is making my system very slow (since the new GTX660, but that can't be it), and after booting there is 20GB more space on the C-drive. But after it has run a few minutes, approx. 18GB is "gone".
All I can think of is PCAngel; this seems to be a backup tool that registers all changes on the PC. I will remove it, but have to reboot, so will wait until Rosie and MilkyWay have finished. There was no new work here anyway.

There is also no change using FF, IE or neither; Rosetta on 5 cores or none, leaving all 8 for the GPU. Even now MilkyWay is only using 40-50% GPU.

Also best performance, or with Aero on, seems to make no difference in GPU and CPU load.
Also the times are flickering in the BOINC tray under both conditions.

If removing PCAngel has no result and I get no tips for setting up the graphics card, I will do a complete new installation of the OS over the weekend.

Will try to get some pictures posted later today.

Edit: this is the first result on the new GTX660:
6897606 4477493 136981 25 May 2013 | 21:28:26 UTC 26 May 2013 | 18:36:15 UTC Completed and validated 75,259.05 56,067.41 127,800.00 Long runs (8-12 hours on fastest card) v6.18 (cuda42)
____________
Greetings from TJ

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 30476 - Posted: 28 May 2013 | 10:45:37 UTC - in response to Message 30474.

PCAngel performs "real time backups"! Perhaps you can exclude everything Boinc, but I would still take it off. Such backup strategies are generally not needed unless it's a mission-critical server. I bet you don't create hundreds of files every day. The simplest backup strategy for most users is to back up files to an external hard drive once every month or so. Anything more than that is an unnecessary chore.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 30478 - Posted: 28 May 2013 | 11:00:44 UTC - in response to Message 30476.

PCAngel performs "real time backups"! Perhaps you can exclude everything Boinc, but I would still take it off. Such backup strategies are generally not needed, unless its a mission critical server. I bet you don't create hundreds of files every day. The most simple backup strategy for most users is to backup files to an external hard drive once every month or so. Anything more than that is an unnecessary chore.

I will remove it shortly; perhaps this is one reason for the long booting.

One thing I forgot to mention: I have installed BOINC on the D-drive with the data as a sub-directory, and did not tick any of the boxes in the installation process. I have always done it like this on all rigs. Not good?
____________
Greetings from TJ

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 30479 - Posted: 28 May 2013 | 11:36:20 UTC - in response to Message 30478.
Last modified: 28 May 2013 | 11:36:43 UTC

It might be the case that some cached files are continuously being backed up onto the primary drive or that PCAngel also backs up the D: drive, but I'm not familiar enough with that software to know exactly what it's doing. I think it's probably the culprit though.

The task you listed shows a Run time of 75,259sec and a CPU time of 56,067sec. That's almost 75% CPU time to run time. I think others are reporting less while running NOELIA_kleb tasks. I'm seeing ~41% on an [email protected] for a GTX660Ti (W7), 42% on a 650TiBoost with a 2.66GHz skt775 CPU (Linux) and similar on my GTX660. My 470 is ~8%, but that's a different architecture. Anyway, that high CPU usage would tie in with the idea that PCAngel is the issue.
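The ratio quoted here is simply CPU time divided by run time; a quick sketch using the times from the task listed above:

```python
# CPU-time to run-time ratio for the reported GPUGRID task.
run_time_s = 75259.05  # Run time from the task listing
cpu_time_s = 56067.41  # CPU time from the task listing

ratio_pct = 100.0 * cpu_time_s / run_time_s
print(f"{ratio_pct:.0f}%")  # prints 74% (i.e. almost 75%)
```

Comparing that figure against the ~41-42% others see on similar cards is what flags the excess CPU usage.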

If all the boxes are unchecked then that's fine, and if the last box is checked that's fine too. Just don't install it as a service on Vista, W7, W8, or Server 2008/2012 (otherwise you won't be able to crunch on the GPU).
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

Profile Beyond
Avatar
Send message
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Level
Tyr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 30482 - Posted: 28 May 2013 | 12:26:11 UTC - in response to Message 30478.

One thing I forgot to mention: I have installed BOINC on the D-drive with the data as a sub-directory, and did not tick any of the boxes in the installation process. I have always done it like this on all rigs. Not good?

I have always run BOINC from a separate drive (like D: for instance). It has never caused any problems, so you can eliminate that as an issue. I'd say it IS good to keep it off the system drive (usually C: in Windows).

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 30494 - Posted: 28 May 2013 | 17:54:32 UTC

PCAngel is removed; it has deleted the recovery partition as well.
The rig is booting 6 minutes faster, and the approx. 22GB is still free on the C-drive. The system is performing quite well. I now have a NATHAN and 5 Rosetta WU's, so I guessed there was plenty of CPU free. But not, so I have set Rosie to no new work.

I see the GPU load is still going from 4% to 48%, not what others report with a GTX660. So what can I set with the card to get it to run better?
If that is an easy fix then perhaps a new install of the OS is not needed, which would save a lot of time.
The rigs are visible and I will give the main information from BOINC.
Thanks again.

28/05/2013 14:25:47 | | No config file found - using defaults
28/05/2013 14:25:47 | | Starting BOINC client version 7.0.64 for windows_x86_64
28/05/2013 14:25:47 | | log flags: file_xfer, sched_ops, task
28/05/2013 14:25:47 | | Libraries: libcurl/7.25.0 OpenSSL/1.0.1 zlib/1.2.6
28/05/2013 14:25:47 | | Data directory: D:\Science\BOINC\DATA
28/05/2013 14:25:47 | | Running under account TJ
28/05/2013 14:25:47 | | Processor: 8 GenuineIntel Intel(R) Core(TM) i7 CPU 920 @ 2.67GHz [Family 6 Model 26 Stepping 5]
28/05/2013 14:25:47 | | Processor features: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss htt tm pni ssse3 cx16 sse4_1 sse4_2 popcnt syscall nx lm vmx tm2 pbe
28/05/2013 14:25:47 | | OS: Microsoft Windows Vista: Ultimate x64 Edition, Service Pack 2, (06.00.6002.00)
28/05/2013 14:25:47 | | Memory: 11.99 GB physical, 23.91 GB virtual
28/05/2013 14:25:47 | | Disk: 831.51 GB total, 814.11 GB free
28/05/2013 14:25:47 | | Local time is UTC +2 hours
28/05/2013 14:25:47 | | CUDA: NVIDIA GPU 0: GeForce GTX 660 (driver version 320.18, CUDA version 5.50, compute capability 3.0, 2048MB, 1889MB available, 2032 GFLOPS peak)
28/05/2013 14:25:47 | | OpenCL: NVIDIA GPU 0: GeForce GTX 660 (driver version 320.18, device version OpenCL 1.1 CUDA, 2048MB, 1889MB available, 2032 GFLOPS peak)
28/05/2013 14:25:47 | rosetta@home | URL http://boinc.bakerlab.org/rosetta/; Computer ID 1514311; resource share 100
28/05/2013 14:25:47 | fightmalaria@home | URL http://boinc.ucd.ie/fmah/; Computer ID 17196; resource share 100
28/05/2013 14:25:47 | Docking | URL http://docking.cis.udel.edu/; Computer ID 132258; resource share 100
28/05/2013 14:25:47 | Einstein@Home | URL http://einstein.phys.uwm.edu/; Computer ID 5987483; resource share 100
28/05/2013 14:25:47 | LHC@home 1.0 | URL http://lhcathomeclassic.cern.ch/sixtrack/; Computer ID 10010265; resource share 100
28/05/2013 14:25:47 | Milkyway@Home | URL http://milkyway.cs.rpi.edu/milkyway/; Computer ID 109183; resource share 100
28/05/2013 14:25:47 | GPUGRID | URL http://www.gpugrid.net/; Computer ID 136981; resource share 100
28/05/2013 14:25:47 | Docking | General prefs: from Docking (last modified 14-Apr-2013 15:45:55)
28/05/2013 14:25:47 | Docking | Computer location: work
28/05/2013 14:25:47 | Docking | General prefs: no separate prefs for work; using your defaults
28/05/2013 14:25:47 | | Reading preferences override file
28/05/2013 14:25:47 | | Preferences:
28/05/2013 14:25:47 | | max memory usage when active: 9822.61MB
28/05/2013 14:25:47 | | max memory usage when idle: 11050.44MB
28/05/2013 14:25:47 | | max disk usage: 300.00GB
28/05/2013 14:25:47 | | max CPUs used: 5
28/05/2013 14:25:47 | | suspend work if non-BOINC CPU load exceeds 25 %
28/05/2013 14:25:47 | | (to change preferences, visit a project web site or select Preferences in the Manager)
28/05/2013 14:25:47 | | Not using a proxy
28/05/2013 14:29:31 | GPUGRID | project resumed by user
28/05/2013 14:29:32 | GPUGRID | Starting task I75R5-NATHAN_KIDc22_2-3-8-RND8519_1 using acemdlong version 618 (cuda42) in slot 0
28/05/2013 14:29:36 | GPUGRID | work fetch resumed by user
28/05/2013 14:33:04 | rosetta@home | work fetch resumed by user
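As an aside, the "2032 GFLOPS peak" BOINC reports in the log above is a purely theoretical figure: shader count × clock × 2 floating-point ops per cycle (one fused multiply-add). A sketch, assuming the GTX 660's 960 CUDA cores at roughly 1058 MHz:

```python
def peak_gflops(shaders, clock_hz, ops_per_cycle=2):
    """Theoretical single-precision peak: shaders x clock x ops/cycle (1 FMA = 2 FLOPs)."""
    return shaders * clock_hz * ops_per_cycle / 1e9

# GTX 660 (reference figures assumed): 960 CUDA cores at ~1058 MHz
print(peak_gflops(960, 1058e6))  # ~2031 GFLOPS, matching BOINC's "2032 GFLOPS peak"
```

Real-world throughput is of course well below this peak; the number is only useful for comparing cards.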

____________
Greetings from TJ

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Message 30496 - Posted: 28 May 2013 | 18:30:55 UTC
Last modified: 28 May 2013 | 18:42:00 UTC

two screen dumps:




What does it mean when, in Windows Task Manager, the kernel time (red line) is almost the same as the CPU usage (green line)?
____________
Greetings from TJ

John
Send message
Joined: 15 Oct 11
Posts: 17
Credit: 81,085,378
RAC: 0
Level
Thr
Message 30501 - Posted: 28 May 2013 | 19:36:48 UTC - in response to Message 30496.

After reading the thread about low GPU usage, here is my two cents' worth. Please forgive me if this has already been covered or is not relevant.
After having the same issues with GPU levels, I found that setting the CPU Time and Number of Processors to 100% in BOINC Manager under Computer Preferences fixed it for me; both my GPUs now run at 85-90%.
Hope that helps.

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Message 30505 - Posted: 28 May 2013 | 20:49:57 UTC - in response to Message 30501.
Last modified: 28 May 2013 | 21:49:54 UTC

Before you reinstall the OS, I would suggest you try a few things.

Reset the project, and see if performance improves.
Do a Bios update (sometimes required to support newer cards).
Reinstall the drivers (as administrator), and then Boinc (as administrator).
Also try using the C: drive, just in case there is a disk problem. I know it's better to use a secondary drive (and I sometimes do this), but this could be down to some sort of disk corruption/bad sectors, cache, SATA cable issue. A 'Check Disk' with both boxes checked might fix that, but not always. A memory test might be in order too.

Having kernel usage almost as high as CPU usage indicates a problem.
What does CPUZ say about PCIE? - EVGA NV-Z just says PCIEx16, and doesn't say PCIE2.0.
What motherboard is it, and what's the BIOS version?
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Message 30513 - Posted: 28 May 2013 | 22:58:00 UTC

The GPU-Z screen:


Resetting the project will be tomorrow, when this Nathan is finished.
My D-drive is the same as the C-drive; there is one disk with 3 partitions, one of which, the recovery partition, is now gone.

CPU use is almost 100% (1 GPU WU and 5 Rosetta) in Task Manager, but not as high when checking with Process Explorer: around 80%.

Sometimes kernel time is almost the same as CPU use for some minutes, then less again.
The power supply has two cables for the GPU; the old GTX285 was using two, the GTX660 only one. I just picked one, but that should make no difference?

Even this typing has some hiccups, a bit slow at times. Strange.

The MOBO is an XFX but I have to search for the type. The BIOS version must also be looked up. These are difficult questions for me. Working with screwdrivers and such is no problem, but all the rest...

@John, I always have CPU Time at 100%. I have also tried Number of Processors at 100%, but that is not working either. Thanks anyway.

____________
Greetings from TJ

flashawk
Send message
Joined: 18 Jun 12
Posts: 297
Credit: 3,572,627,986
RAC: 0
Level
Arg
Message 30514 - Posted: 28 May 2013 | 23:06:13 UTC - in response to Message 30513.

That version of GPU-Z is really old; look at your GPU clock speed compared to the memory clock speed, they're reversed. You should get the latest version.

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Message 30515 - Posted: 28 May 2013 | 23:09:48 UTC - in response to Message 30513.
Last modified: 28 May 2013 | 23:12:29 UTC

The GPU-Z screen:

No issues there; it's PCIE2.0x16.
Resetting the project will be tomorrow was this Nathan is finished.
My D-drive is the same as the C-drive, there is one disk with 3 partitions, one gone now the recovery partition.
A partition is not the same as a separate drive! You should have said this up front. Do an OS reinstall: delete the partitions, format, and start again with one partition.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Message 30520 - Posted: 29 May 2013 | 8:39:19 UTC - in response to Message 30515.

A partition is Not the same as a separate drive! You should have said this up front. Do an OS reinstall, Delete the partitions, Format and start again with one partition.

I have 4 PCs with one disk, and they all have a C-partition with the OS and a D-partition, installed by Dell, with no issues. The kernel times are very low (1-6%) even when crunching on all the cores, and with CPU use at 100% kernel time stays low. These systems don't have CUDA-capable graphics cards though.
I have also read in several places that it is not wise to have only one partition on a disk drive, with the OS and all other stuff together. But that is what skgiven is suggesting.
If that will be the case and I need to re-install the OS, then I would prefer to use an SSD for the OS and the current disk for all the data (mainly BOINC).
And perhaps Win7 as well.
What do you think about this option?

But skgiven also mentioned that high kernel times indicate a problem. What can that problem be? If the PC is dead, then an SSD and all the effort would be a waste of time and money. Any way to check this please?
Thanks.

____________
Greetings from TJ

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Message 30525 - Posted: 29 May 2013 | 11:02:03 UTC - in response to Message 30520.
Last modified: 29 May 2013 | 11:08:04 UTC

I have 4 pc's with one disk and they all have a C-partition with the OS and a D-partition, installed by Dell and no issues. The kernel times are very low (1-6%) even when crunching all the cores, and CPU use at 100% kernel keeps low. These systems don't have CUDA capable graphics cards though.
I have also read at several places that it is not wise to have only one partition on a disk drive with the OS and all other stuff. But that is what skgiven is suggesting.

I'm suggesting you try this given your circumstances, and the fact that using a partition is not any faster, being the same drive, and may even be slower.

If this will be the case and I need to re-install the OS, then I would prefer to use an SSD for the OS and the current disk for all the data (mainly BOINC).

That would speed up booting, but you haven't yet determined if there is an issue with the existing drive, some program or Windows that is causing the performance issue, or if you need to update the Bios to better support these GPU's.

And perhaps Win7 as well.
What do you think about this option?

W7 isn't any faster. If you mostly use it for crunching, what's the point?

But skgiven also mentioned that high kernel times indicated a problem. What can that problem be.

Disk, OS, driver, some app... Some details might help, but frankly it takes a couple of hours to reinstall an OS from disk and that eliminates swathes of potential problems.
Did you check if your Bios has an upgrade that better supports the GPU?

If the PC is over than a SSD and all the effort would be a waist of time and money. Any way to check this please?
Thanks.

Don't know what you mean? Is it a SATA drive, or an older IDE drive? Did you do a disk check?
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Message 30526 - Posted: 29 May 2013 | 11:20:00 UTC - in response to Message 30525.

Yes, I did a disk check, no issues; the memory check was also okay.
Did both two days ago.

I mean with "the PC is over" that it is dead, something with the MOBO or other hardware than the disk drive or memory. Then an SSD has no use.

But I will try to install Vista again on an SSD; that will speed up booting, which is still taking more than 7 minutes now. An SSD is on its way.
Even with only a GPU task active (still the Nathan from yesterday), the CPU is using more than 50%. So there is an issue. I closed all other programs; typing this on another rig.
Is perhaps having 2 monitors attached to the GTX660 a problem? It wasn't with the GTX285.

How do I do a BIOS update? Is there any danger?

Thanks
____________
Greetings from TJ

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Message 30530 - Posted: 29 May 2013 | 14:25:03 UTC

I found out how to update the BIOS, not too much trouble.
Will do that later, when the NATHAN is finished.
If that doesn't help I'll power it down and wait for the SSD to re-install the OS and all important software.
____________
Greetings from TJ

Profile Beyond
Avatar
Send message
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Level
Tyr
Message 30532 - Posted: 29 May 2013 | 15:01:41 UTC - in response to Message 30526.

I mean with "the pc is over" that it is dead, something with the MOBO or other hardware than disk drive or memory. Then an SSD has no use.

If you have a standard case, replacing the MB is relatively easy and not expensive. First though you have to do your other steps to make sure it IS the MB that's the problem. I would guess not, but it's possible.

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Message 30535 - Posted: 29 May 2013 | 17:59:24 UTC

It is an XFX x58i MOBO and there are no BIOS updates available.
I also found that there are a lot of problems with it.

I will search for a refurbished Dell and install the GTX660 there. That seems the best way.
But I will try an OS re-install as well.
____________
Greetings from TJ

Profile Beyond
Avatar
Send message
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Level
Tyr
Message 30536 - Posted: 29 May 2013 | 19:05:25 UTC - in response to Message 30535.

I will search for a refurbished Dell and install the GTX660 there. That seems the best way.

Not a good idea.

But will try a OS re-install as well.

Good idea.

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Message 30542 - Posted: 29 May 2013 | 21:37:14 UTC - in response to Message 30536.

I will search for a refurbished Dell and install the GTX660 there. That seems the best way.

Not good idea.

Why not?
I have several Dells, all running flawlessly, one for 4 years 24/7, and I installed and uninstalled a lot of stuff there. Still doing great, only a low-wattage PSU.
____________
Greetings from TJ

Profile Beyond
Avatar
Send message
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Level
Tyr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 30545 - Posted: 30 May 2013 | 1:17:42 UTC - in response to Message 30542.

I will search for a refurbished Dell and install the GTX660 there. That seems the best way.

Not good idea.

Why not? I have several Dell all running flawlessly one for 4 years 24/7 and I installed and de-installed a lot stuff there. Still doing great only a low PSU.

Generally poor case cooling, poor PS, often crippled proprietary MB. You'd be better off (if the MB is bad) to replace your MB (and case if the cooling isn't sufficient). I repair brand name computers for people all the time. They cut a lot of corners compared to machines that are custom built. But do what makes you feel good.

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Message 30546 - Posted: 30 May 2013 | 9:45:30 UTC

I did a project reset, powered down and did a cold boot.
Allowing only short runs, to see what happens.
GPU load is still varying a lot.
And something has again taken 18GB of my C-partition.
And the kernel time is almost as high as CPU use. Nothing running other than GPUGRID.

After both have finished I will power it down and wait for the SSD to arrive.

Thanks for all the help and advice.
____________
Greetings from TJ

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Message 30547 - Posted: 30 May 2013 | 9:48:58 UTC - in response to Message 30546.
Last modified: 30 May 2013 | 9:49:45 UTC

At some point in between you said Milkyway was also only running at ~50% GPU utilization. This suggests to me it's either some weird software issue with your Vista install (I'd try Win 7 or 8 next, if you can) or indeed some mainboard strangeness. Since the rig had been running fine before with a smaller card there might be another option: switch that card back in and put the GTX660 into another one of your well-running systems.

Edit@SK: what's the clock speed of your GTX650TiBoost?

MrS
____________
Scanning for our furry friends since Jan 2002

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Message 30552 - Posted: 30 May 2013 | 10:09:19 UTC - in response to Message 30547.

1124MHz
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Message 30567 - Posted: 30 May 2013 | 22:39:55 UTC

Is there a way I can test whether the MOBO is malfunctioning?
And what else can I check when kernel times are so high?

I did check the disk and the memory, no issues.
But even when not crunching, with BOINC closed, it is still slow.
This box is almost 4 years old so I will not bother with a new MOBO. All parts have been running for 4 years except two fans, which I have already changed.
____________
Greetings from TJ

Vagelis Giannadakis
Send message
Joined: 5 May 13
Posts: 187
Credit: 349,254,454
RAC: 0
Level
Asp
Message 30577 - Posted: 31 May 2013 | 9:55:09 UTC - in response to Message 30567.

Your issues do not seem to me to be hardware-related, but rather software-related. Something in your setup keeps "stealing" CPU cycles and disk space.

I suggest you go through the programs that are installed and start uninstalling. I suspect you will find various pieces of junk in there...

If uninstalling the junk does not help, then the machine may have been infected by some malware, this is Windows after all and even careless browsing of a torrents site can get you infected with one of the gazillion pieces of malware out there, waiting to make your PC a zombie bot of some nasty mafia hacker!

As a last resort, just kill it. Wipe your drive and re-install. Then, try to be more careful what stuff you put on it.

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Message 30581 - Posted: 31 May 2013 | 10:41:56 UTC

Or, if you've got some spare drive lying around (could be a super old junk 30 GB drive), just do a test install onto that one. If everything works well afterwards, it is your installation. If not, it's probably the interaction with the mainboard.

MrS
____________
Scanning for our furry friends since Jan 2002

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Message 30588 - Posted: 31 May 2013 | 12:12:49 UTC

Could be some junk, I suppose it is. But I have good security software, so that is safe. It seems very likely a software issue. Adobe installed a lot of stuff, just as Java did. I found a bunch of old nVidia drivers and more. A format and totally new install is absolutely necessary and will happen as soon as the SSD arrives, or sooner if my other PC finishes the NATHAN WU first.

But I have done some thinking. I have set my other PC to no new work and will install the GTX660 there and see how that works.
Then I will set up the old rig from scratch and put the GTX550Ti in there (and perhaps the 285 for MilkyWay; it does great there, 11 minutes).

Now the question: can I connect 2 monitors to the 550Ti and leave the 285 without a monitor? Or can I connect one monitor to each card?
There are two 21 inch screens connected to it, very useful when I do some work from home via a VPN connection. So I would like that rig with 2 monitors.

As always thanks for the help.
____________
Greetings from TJ

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 30591 - Posted: 31 May 2013 | 15:44:21 UTC - in response to Message 30588.

Don't know the exact layout, but any GTX550 should have at least 2 digital outputs. Just take a look at the b*tt of the card ;)

A GTX285 for Milkyway? Yikes... my HD6970 did those WUs in less than a minute! The GTX285 runs DP at 1/8th the SP speed - better than current nVidias (1/24), but no match for (former) high-end AMDs at 1/5th or 1/4.
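Those ratios translate directly into double-precision throughput, which is what Milkyway cares about. A sketch, using commonly quoted peak figures (assumed here, not measured) for the two cards:

```python
def dp_gflops(sp_gflops, dp_ratio):
    """Double-precision peak from the single-precision peak and the DP:SP ratio."""
    return sp_gflops * dp_ratio

# GTX 285: ~1063 SP GFLOPS at 1/8 DP
print(round(dp_gflops(1063, 1 / 8)))   # ~133 DP GFLOPS
# GTX 660: ~2032 SP GFLOPS, but only 1/24 DP
print(round(dp_gflops(2032, 1 / 24)))  # ~85 DP GFLOPS - slower in DP despite being newer
```

This is why a much newer card can still lose to an old one (or to an AMD card at 1/4 or 1/5) on a DP-heavy project.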

MrS
____________
Scanning for our furry friends since Jan 2002

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Message 30592 - Posted: 31 May 2013 | 16:11:16 UTC - in response to Message 30591.
Last modified: 31 May 2013 | 16:12:50 UTC

Don't know the exact layout,. but any GTX550 should have at least 2 digital outputs. Just take a look at the b*tt of the card ;)

A GTX285 for Milkyway? Yikes... my HD6970 did those WUs in less than a minute! The GTX285 runs DP at 1/8th the SP speed - better than current nVidias (1/24), but no match for (former) high-end AMDs at 1/5th or 1/4.

MrS

I know; I have the card in my PC with 2 monitors on it, working fine.
But I mean: can I place a graphics card in a system with no monitor attached to it? Will it then crunch?
Okay, the GTX285 is no longer okay, I get the message now :).

What about a GTX580 or 560? I see some speedy results from them at times with wingmen.
It would be nice to have a second card besides the GTX550Ti, one which doesn't draw too many watts from the PSU (770 Watt), doesn't run too warm (heat), and isn't too expensive.
____________
Greetings from TJ

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Message 30593 - Posted: 31 May 2013 | 16:16:10 UTC - in response to Message 30592.

As far as I know under Win you'd either need to extend the desktop to the 2nd card, attach a monitor to it, or at least a 2nd cable to an existing monitor or use a VGA dummy. But I'm no multi-GPU expert, maybe there's a way around this by now?

GTX580 or 560 are still fine if you already have them (well, not at Milkyway), but I wouldn't get another one for crunching (not even relatively cheap used ones), since the Keplers are far more power efficient (i.e. they'll pay for themselves after some time). The GTX660 seems to be the sweet spot right now.

MrS
____________
Scanning for our furry friends since Jan 2002

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Message 30594 - Posted: 31 May 2013 | 17:12:31 UTC - in response to Message 30593.

All Windows drivers that include CUDA4.2 or newer support a second GPU, without the need to attach a monitor, dummy plug or omnicube. This was introduced well over a year ago. Unless there is some oddity with having 2 monitors supported by one GPU when the other GPU isn't supporting any monitor, I don't think there should be any issue, even with older GPU's.
The GTX780 and GTX770 arrivals resulted in many GPU prices dropping throughout the NVidia GF600 range. Mid range prices are good and might dip further. These cards are reasonably future proofed.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Message 30613 - Posted: 1 Jun 2013 | 16:09:30 UTC

Okay, my plan didn't work. The PSU is only 375 Watt and the GTX660 needs at least a 450 Watt PSU.
The SSD didn't arrive, but I made one partition as skgiven suggested and installed the OS (WinVista) as a new installation, formatting the disk.
Did not get Win7: skgiven suggested this makes no sense, though ETA would prefer Win7 or 8.
It took almost my entire Saturday to get it installed with all the updates and reboots. Booting goes faster, but that is about it.
Kernel times are the same as CPU usage and everything is still slow: IE often saying "not responding", driver installation for keyboard and mouse sometimes not responding, and such.
Does this mean the MOBO is kaput?

I will not buy a new one, but then it could be the memory, then a controller, then the PSU. Then I have to wait for money to buy a refurbished Dell, or buy parts and build me a new one.
The GTX550Ti will then be the only contribution from me to the project for a while.
____________
Greetings from TJ

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Message 30642 - Posted: 2 Jun 2013 | 21:41:19 UTC

As I mentioned in my previous message, the i7 was completely reinstalled with a new OS, but everything was working very slowly with the new GTX660 installed; even the browser going to NASA.gov took ages. FF downloading wasn't even possible.

Today I removed the GTX660 and put the old heater, the GTX285, back in. Now it is working like a speedboat. Everything opens directly on mouse click.

Could it be that the GTX660 is too new for the MOBO, or that jumpers are wrong?
I used the first slot, the one closest to the processor, so that should be okay.
____________
Greetings from TJ

Vagelis Giannadakis
Send message
Joined: 5 May 13
Posts: 187
Credit: 349,254,454
RAC: 0
Level
Asp
Message 30645 - Posted: 3 Jun 2013 | 9:11:59 UTC - in response to Message 30642.

I did a little searching about your motherboard. I couldn't find much information about it though - no official info, support site needs registration...

The fact that everything works as expected with your other card inevitably leads to a single diagnosis: the 660 is not fully compatible with your setup.

Now, "setup" is a broad term, meaning both your hardware and software:


    - Maybe the 660 stresses your PCIe 2.0 slot / bus too much and exposes some minor incompatibility / BIOS bug your motherboard may have.
    - Your motherboard supports triple SLI; maybe this causes some trouble.
    - Maybe you have to install additional, motherboard-specific drivers to your Vista installation to make it communicate correctly with your PCIe bus and the card.
    - Maybe the Nvidia drivers you're using aren't fully compatible or have a bug.



As you see, there are many "maybes".. Hardware problems are like that unfortunately, unless you have specialized tools, it's a hide-and-seek "game" you have to play. :(

There are a number of things you can do:

    1. Try your 660 with another system, this will validate the card.
    2. Experiment with your BIOS settings around GPUs, PCIe, SLI, etc. Reset to defaults. Disable any overclocking.
    3. Make sure all system hardware is detected in Windows and you have no question marks in the device manager. Use the latest drivers you can get.
    4. If you have a spare hard drive, setup another version of Windows (XP or 7), install ONLY basic drivers to get you going (probably just for your network card, ONLY if Windows doesn't recognize it by itself), fully update Windows, THEN install motherboard-specific drivers, THEN install the Nvidia driver.
    5. As a last resort, use your 660 with another system.



Come to think of it, you could repeat step 4 above for Vista as well.

I hope all this helps!
____________

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Message 30647 - Posted: 3 Jun 2013 | 10:25:57 UTC
Last modified: 3 Jun 2013 | 10:26:50 UTC

Thanks for your information Vagelis Giannadakis.

It does help a bit.
I searched the XFX site for the MOBO; several fora report problems with it and there is no new BIOS. The company seems to be "deaf" to comments. They don't make MOBOs any more, if I understand correctly.
I installed the drivers from the CD that came with the MOBO and there are no question marks.
After the new install of Vista, Windows installed more than 150 updates.
All works fine now, fast, and the kernel times are very low.

So either the GTX660 is faulty or, as you said, it is not good with my setup.
I guess the latter. I put it back in, did a cold boot, and the graphics experience was directly affected, with slower responses when opening windows, browser response and such.
I have opened all my cases and there is no system with a PSU of 450 Watt or more, which the GTX660 needs. Thus I cannot test it.
____________
Greetings from TJ

Vagelis Giannadakis
Send message
Joined: 5 May 13
Posts: 187
Credit: 349,254,454
RAC: 0
Level
Asp
Message 30648 - Posted: 3 Jun 2013 | 10:44:48 UTC - in response to Message 30647.

What about a friend then? You must have a friend or acquaintance with a system that can handle a 660 and test it for you, no? It's not that we're talking about some experimental, next-gen prototype GPU requiring an internal Thunderbolt port! :D

This way, you can test the card and make sure it is fully functional, before going ahead and dishing out cash to upgrade / buy stuff in attempts to make it work.
____________

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Message 30649 - Posted: 3 Jun 2013 | 11:06:36 UTC - in response to Message 30648.
Last modified: 3 Jun 2013 | 11:08:10 UTC

Just move the PSU as well as the GTX660 into the other system. That way you will be able to test if the GPU is faulty, and be able to give the PSU a clean.

Check for a motherboard chipset update. Make sure that the Bios isn't configured to use all PCIE slots if you only want to use one. Also try turning SLi off, if it's on. If two monitors are plugged in, remove one.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

Profile Beyond
Avatar
Send message
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Level
Tyr
Message 30650 - Posted: 3 Jun 2013 | 13:52:12 UTC - in response to Message 30647.

I have opened all my cases and there is no system with a PSU of 450 Watt or more what the GTX660 needs. Thus I can not test it.

This certainly isn't set in stone. A quality PS with lower ratings will easily run your GTX 660. Think you said you had a 375 watt PS in another box. Try it. I've run more powerful GPUs than a 660 on 350 watt power supplies with no problems, as have many others.

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Message 30651 - Posted: 3 Jun 2013 | 14:22:23 UTC - in response to Message 30650.

I have opened all my cases and there is no system with a PSU of 450 Watt or more what the GTX660 needs. Thus I can not test it.

This certainly isn't set in stone. A quality PS with lower ratings will easily run your GTX 660. Think you said you had a 375 watt PS in another box. Try it. I've run more powerful GPUs than a 660 on 350 watt power supplies with no problems, as have many others.

That is good information Beyond!
I was tempted to do so, but checked nVidia's website once more and there was the 450Watt quote.
I will set BOINC to no new work and will try this.
____________
Greetings from TJ

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 30657 - Posted: 4 Jun 2013 | 20:40:01 UTC - in response to Message 30651.

Yeah, those recommendations are pure BS or overly cautious, depending on your point of view. I think they assume something along these lines:

- a big GPU means he's got to be running a really big gas-guzzling CPU too
- there may be lots of HDDs, peripherals etc.
- he's probably got some dirt-cheap Chinese firecracker PSU which can't output even 300 W

In practice the GTX660 won't consume more than ~130 W, because that's the target power consumption for these cards. Maybe less, depending on application and GPU load.

MrS
____________
Scanning for our furry friends since Jan 2002

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 30719 - Posted: 7 Jun 2013 | 19:32:19 UTC

Hi guys, I would like to update you.

I put the new GTX660 in another PC and it is now doing a short run from Nathan.
GPU load is a steady 91%, the temperature is 82°C, and the GPU clock is 1084MHz.

It has done 10% in 20 minutes.
____________
Greetings from TJ

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 30722 - Posted: 7 Jun 2013 | 20:33:37 UTC - in response to Message 30719.
Last modified: 7 Jun 2013 | 20:34:13 UTC

Great, except for the temperature - download MSI Afterburner and use it to increase the fan speed so that it stays below 70°C ;)
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 30724 - Posted: 7 Jun 2013 | 22:10:05 UTC - in response to Message 30722.

Great, except for the temperature - download MSI Afterburner and use it to increase the fan speed so that it stays below 70°C ;)

I used EVGA Precision X to set the fan speed to auto. The temperature is now 70-71°C.
____________
Greetings from TJ

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 30741 - Posted: 9 Jun 2013 | 0:02:14 UTC

I set the "old heater" (GTX285) to work again, to increase my RAC ;-)
Only short runs, and then I can experiment with MSI Afterburner.
GPU load is a steady 95%; the estimate is 6 hours.
____________
Greetings from TJ

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31155 - Posted: 2 Jul 2013 | 14:30:49 UTC

Today my second EVGA GTX660 arrived. I installed it beneath the first one in the Alienware. It has no monitor connected and the SLI bridge is not mounted either.
I requested new work and only got one task. BOINC does indeed see only one GPU.
So here are the questions:
1. How can I get the card working? Do I need the SLI bridge, or to connect a monitor to it? Both monitors are now on the first card.
2. More worrying: the card is now running at 78°C with EVGA Precision regulating temperature/fan speed (like Afterburner), and I have set it to auto. How can this be?
Yesterday one GPU (the only one installed) ran at 69°C with the same EVGA software and settings. What has happened and how can I resolve this?

Thanks, I am hoping for a quick answer this time so that I can get both GPUs crunching.
____________
Greetings from TJ

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31156 - Posted: 2 Jul 2013 | 14:34:08 UTC - in response to Message 31155.
Last modified: 2 Jul 2013 | 14:38:11 UTC

1. http://www.gpugrid.net/forum_thread.php?id=3156&nowrap=true#31007 or read the FAQ's.

2. Configure a profile in Afterburner; Settings (bottom right corner), Fan tab, enable user defined software automatic fan control. You may have to click Auto and User define after doing this.

Quick enough for you?
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31157 - Posted: 2 Jul 2013 | 15:01:25 UTC - in response to Message 31156.

1. http://www.gpugrid.net/forum_thread.php?id=3156&nowrap=true#31007 or read the FAQ's.

2. Configure a profile in Afterburner; Settings (bottom right corner), Fan tab, enable user defined software automatic fan control. You may have to click Auto and User define after doing this.

Quick enough for you?

Yes very quick skgiven, thanks.

I have it placed in the boinc data directory. It was read as you can see:
7/2/2013 4:52:55 PM | | Re-reading cc_config.xml
7/2/2013 4:52:55 PM | | Not using a proxy
7/2/2013 4:52:55 PM | | Config: use all coprocessors
7/2/2013 4:52:55 PM | | log flags: file_xfer, sched_ops, task

But still no extra task.
I have indeed set EVGA Precision (it is the same as Afterburner, but from EVGA, with the same menus) to automatic fan control by software and all. In fact I did not change it. The only thing I did was add an extra GTX660, and now one runs hot. There is not a lot of space between the two cards though.

Do I need to restart the system? Will that kill the part of Nathan's WU that is already done?

Another question: when the two AMD 5870s were in the Alienware they were both recognised by BOINC and there was no need for a cc_config. How about that?

____________
Greetings from TJ

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31158 - Posted: 2 Jul 2013 | 15:13:42 UTC - in response to Message 31157.
Last modified: 2 Jul 2013 | 15:23:50 UTC

What's in the first page of your BOINC logs?

You do need to restart after you install the driver!
You may also want to reinstall EVGA Precision.

Last time I was messing around with installing cards on W7, which includes installing drivers, I disabled Boinc from starting with Windows. I booted, installed the drivers, rebooted, installed Afterburner, restarted and started Boinc again. I was running a GPUGrid WU, but had no issues doing it that way.

You really will need to define a fan curve. The auto-temperature control won't do it for you. It will run hot (78°C) and work will fail.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

Profile Retvari Zoltan
Avatar
Send message
Joined: 20 Jan 09
Posts: 2346
Credit: 16,293,515,968
RAC: 6,990,482
Level
Trp
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31159 - Posted: 2 Jul 2013 | 15:26:53 UTC - in response to Message 31157.

I have it placed in the boinc data directory. It was read as you can see:
7/2/2013 4:52:55 PM | | Re-reading cc_config.xml
7/2/2013 4:52:55 PM | | Not using a proxy
7/2/2013 4:52:55 PM | | Config: use all coprocessors
7/2/2013 4:52:55 PM | | log flags: file_xfer, sched_ops, task

But still no extra task.
I have indeed set EVGA Precision (it is the same as Afterburner, but from EVGA, with the same menus) to automatic fan control by software and all. In fact I did not change it. The only thing I did was add an extra GTX660, and now one runs hot. There is not a lot of space between the two cards though.

There should be at least one empty slot between the cards to keep their temperature down.

Do I need to restart the system?

You don't need to restart the system to activate changes in the cc_config.xml file; re-reading the configuration file, as you did, is enough.
However, if the card is not recognized by Windows itself, it may require a system restart. You should check in Device Manager. (Start -> type devmgmt.msc in the search field and press <enter>, click on the + sign beside the Display adapters category, and you should see two display adapters listed under it by their actual name ("NVIDIA GeForce GTX660"); "Standard VGA adapter" is not appropriate.)
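For reference, the option behind BOINC's "Config: use all coprocessors" log line is the <use_all_gpus> flag. A minimal cc_config.xml, placed in the BOINC data directory, looks like this (a sketch using the standard BOINC client configuration options):

```xml
<cc_config>
  <options>
    <!-- make the client use every GPU, not just the most capable one -->
    <use_all_gpus>1</use_all_gpus>
  </options>
</cc_config>
```

After saving it, re-read it via Advanced -> Read config file in the BOINC Manager. As noted above, that is enough for the client; Windows itself must already recognize the card.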

Will that kill the part of Nathan's WU already done?

No, it won't. (At least in most cases)

Another question: when the two AMD 5870s were in the Alienware they were both recognised by BOINC and there was no need for a cc_config. How about that?

It should be the same with NVidia cards as well; however, sometimes this cc_config modification is necessary.

Profile Beyond
Avatar
Send message
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Level
Tyr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31160 - Posted: 2 Jul 2013 | 15:42:25 UTC

Does Afterburner see both GPUs? If not, how old is the Alienware and what MB is in it?

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31161 - Posted: 2 Jul 2013 | 15:49:57 UTC - in response to Message 31159.

Thanks Zoltan, now I know a little more.
I powered the system down, plugged one monitor into the second card and started it up. Now two GPUGRID tasks are running.
Temperature remains an issue. There are only two slots. Actually three, but the last one is very short, only a few centimeters.
When I got the system there were two AMD 5870s in it, also with no space in between, and they ran MW and Einstein at around 88°C for 3 years, though not always 24/7; mostly 3 or 4 days in a row.
I have made a curve like skgiven said. It worked well with one card but not with two, due to the lack of space, as Zoltan mentioned.
Well, it is a very heavy and cramped case, so not a lot of space, but nice air flow chambers.
If I can't drop the temperature I shall see where else I can put these GTX660s. For a new system I have to save more money first. But I will buy a large case, that's for sure.
____________
Greetings from TJ

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31162 - Posted: 2 Jul 2013 | 15:51:39 UTC - in response to Message 31160.

Does Afterburner see both GPUs? If not, how old is the Alienware and what MB is in it?

Both cards are seen by the system, by EVGA NZ and EVGA Precision, and by GPU-Z.
But that was after I plugged a monitor into the second card. The Alienware is almost 3 years old.
____________
Greetings from TJ

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31164 - Posted: 2 Jul 2013 | 16:26:04 UTC - in response to Message 31162.

Either rebooting allowed the driver installation to complete and Boinc to see both cards, or the issue was with two monitors being hooked up to the one card.

Maybe some time you could hook both monitors back up to the one GPU and reboot to see if that was the issue? If you do let us know either way.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31165 - Posted: 2 Jul 2013 | 16:46:35 UTC

No drivers were updated, as exactly the same card was already running in the system.
I will let it finish things first, and will then try both monitors on one card again.
Secondly, I have also used the famous Afterburner here, and it seems that the maximum fan speed for the GTX660 is 74%. Can someone say if this is true?
One card is running at 70°C (okay), the other at 77°C (not okay).

I have now opened all my boxes with a quad or i7 (no need to check the Pentium D4 and Celeron, as those are very old systems, 8-15 years, so the cards won't work) and none has space for double-width cards.

I can try to see if the T7400 has the right PCIE connector in the right place, but I guess it has only one PCIE x16 slot. If I have a new PSU installed.

The last thing is what Beyond suggested: put it in my system with a 380W PSU (replacing the GTX550Ti). Or a new system.
____________
Greetings from TJ

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31169 - Posted: 2 Jul 2013 | 18:01:11 UTC - in response to Message 31165.

I'm not talking about a driver update, just installing it for the hardware. The driver still has to be registered against a new card before Boinc or any other app (EVGA Precision) will recognize it. Also, if you start Boinc and then install the driver, it won't see the GPU until you restart. Reading cc_config won't make it see the GPU either.

Re Afterburner. Yes, it's true.
I went back from the 320 to the 314 drivers thinking they were the issue, and reinstalled a slightly earlier version of Afterburner. I was then able to move one GPU past 74% fan speed (only at 76% now, but it can go up to 100%), but not the other GPU fan; it can't go past 74% (it's at 63% and 58°C, so it's not an issue for me anyway).
On a well cooled system with one GPU, 74% should be sufficient to cool the GPU, but 2 or 3 GPUs in the same system will still be a challenge, and extra case cooling may be required. I don't recall this being an issue before on Windows, or seeing it when I only had one GPU in a case, but I remember some versions of Linux would not allow the fan to go over ~85%.

At present I have a GTX660Ti and a GTX660 in one case. The gap between them isn't much, so I use a fan to blow onto them and also have an external rear fan to pull air out of the case. I replaced a good Antec CPU heatsink with an H60 to reduce internal heat buildup, especially on the back of the GTX660Ti (it also keeps the CPU a bit cooler).
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31170 - Posted: 2 Jul 2013 | 18:12:03 UTC - in response to Message 31169.
Last modified: 2 Jul 2013 | 18:12:19 UTC

I'm not talking about a driver update, just installing it for the hardware. The driver still has to be registered against a new card before Boinc or any other app (EVGA Precision) will recognize it. Also, if you start Boinc and then install the driver, it won't see the GPU until you restart. Reading cc_config won't make it see the GPU either.

Okay, in that case you were right (again :-) ). Then moving the monitor with the system off may easily have had nothing to do with it. I will test this with Milkyway (short runs) when the Nathans are finished.

There is no room in the Alienware for more coolers; in front is a big fan blowing towards the GPUs, but in the back is a fan with the radiator of the liquid cooling.

I will remove one GTX660 and try it in another system tomorrow. And then it is waiting for a new system, so my RAC will not build up soon.

But I have learned something again today, which is nice.
____________
Greetings from TJ

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31179 - Posted: 3 Jul 2013 | 11:44:50 UTC - in response to Message 31164.
Last modified: 3 Jul 2013 | 11:46:25 UTC

Maybe some time you could hook both monitors back up to the one GPU and reboot to see if that was the issue? If you do let us know either way.

As promised I tried this. I turned off the PC and connected both monitors to one card. There is no SLI connector fitted. I booted the system and BOINC still sees two GPUs.
To be sure, I am now crunching a little MilkyWay on both GPUs. One is at 63°C and the other at 57°C, both with 70-72% GPU load.
____________
Greetings from TJ

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31186 - Posted: 3 Jul 2013 | 16:25:28 UTC - in response to Message 31179.

Thanks TJ, that eliminates concerns over running two monitors from one GPU when you have two GPU's.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31187 - Posted: 3 Jul 2013 | 16:37:27 UTC - in response to Message 31186.

Thanks TJ, that eliminates concerns over running two monitors from one GPU when you have two GPU's.

You're very welcome. I got a lot of help here, so I'm glad I can contribute a tiny little bit.
____________
Greetings from TJ

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31188 - Posted: 3 Jul 2013 | 17:07:02 UTC

One very interesting thing (for me).
My other i7 with the new trial GTX660 was so unresponsive, continuously at 100% CPU load with the kernel times right at the top as well, that it was too much for me to watch. So I powered it down, opened the case and put the old solid GTX285 back in. Powered it up, and the WU that started on the GTX660 continued on the GTX285. Amazing!
Six Rosies are crunching happily on the CPU, and the kernel times are very low, almost invisible.
So I will leave this system running as it is until it dies. It's MARS, and visible, but it still shows a GTX660.
____________
Greetings from TJ

Profile Beyond
Avatar
Send message
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Level
Tyr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31189 - Posted: 3 Jul 2013 | 17:23:29 UTC - in response to Message 31188.

One very interesting thing (for me).
My other i7 with the new trial GTX660 was so unresponsive, continuously at 100% CPU load with the kernel times right at the top as well, that it was too much for me to watch. So I powered it down, opened the case and put the old solid GTX285 back in. Powered it up, and the WU that started on the GTX660 continued on the GTX285. Amazing!
Six Rosies are crunching happily on the CPU, and the kernel times are very low, almost invisible.
So I will leave this system running as it is until it dies. It's MARS, and visible, but it still shows a GTX660.

One reason is that the 285 is much slower than the 660 and needs far less CPU support (yet the 285 uses far more electricity) :-(
You need to reserve more CPU for the 660 to allow it to run optimally.

Profile Retvari Zoltan
Avatar
Send message
Joined: 20 Jan 09
Posts: 2346
Credit: 16,293,515,968
RAC: 6,990,482
Level
Trp
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31192 - Posted: 3 Jul 2013 | 17:59:37 UTC - in response to Message 31188.

One very interesting thing (for me).
My other i7 with the new trial GTX660 was so unresponsive, continuously at 100% CPU load with the kernel times right at the top as well.

When a GPUGrid workunit runs on a Kepler based GPU, it will use a full CPU thread (core, if the CPU is not hyperthreaded).

That was too much for me to watch. So I powered it down, opened the case and put the old solid GTX285 back in. Powered it up, and the WU that started on the GTX660 continued on the GTX285. Amazing!

That is how it should be :) I have made similar changes without losing a workunit.

Six Rosies are crunching happily on the CPU...

Rosetta@home is a tricky application. Sometimes it can use up to 600 MBytes of RAM, and read and write 100s of MBytes to the HDD (SSD) at startup. Besides that, I've found that it won't gain much RAC when more R@h workunits are running than the number of CPU cores (in your case 4). So when a GPUGrid workunit takes a full thread, there is no point in running R@h applications on hyperthreaded cores.

and the kernel times are very low, almost invisible.

The GPUGrid application does not need a full CPU core to feed pre-Kepler GPUs.

So I will leave this system running as it is until it dies. It's MARS, and visible, but it still shows a GTX660.

It will show the correct GPU after your BOINC client communicates with the GPUGrid server.

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31194 - Posted: 3 Jul 2013 | 18:42:02 UTC
Last modified: 3 Jul 2013 | 18:55:23 UTC

Thanks for the explanation Zoltan.
I always leave 2 cores free on an HT CPU (an i7 in my case), so the CPU has 6 and the GPU 2 (it uses 1).

Indeed there are strange things at Rosetta, but I really like the science they do. Perhaps I can let it run on the quad with no usable GPUs (CUDA 1.0).

So I have a GTX660 "left over". As I have had a screwdriver in hand all day anyway, I thought I'd put it in the T7400. (Its PSU has two 6-pins, and one cable also has an 8-pin, so a GTX690 would fit.) There is a PCIE 2.0 slot (in the middle of the MOBO, in the center of a fan). It works like a train, doing a Nathan at 63-65°C at 65% fan speed.
So it was not such a bad buy after all, though my girlfriend may not find this out, as I still want a new system with the two GTX660s in one box (yes, AMD).

One more thing about the T7400: its two Xeons have only a heat block (6 copper tubes and a lot of aluminum fins). One fan is placed behind the second CPU (seen from the front), mounted under a tilted hood, and blows air towards the heat blocks. The CPUs run at 50-60°C! So not bad from Dell after all.
____________
Greetings from TJ

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31206 - Posted: 4 Jul 2013 | 10:09:26 UTC
Last modified: 4 Jul 2013 | 10:10:17 UTC

I put the leftover GTX660 in my quad core Vista x86 24/7 system and, just as Beyond predicted, it runs fine with a PSU of only 380 Watt.
The temperature went quickly to 76°C with GPUGRID running and the fan speed at 74% (its maximum). I set the GPU power target to 90% and now it runs at 72-73°C with a GPU load of around 85%. So not too bad.
The time estimate for a Nathan LR is 30 hours. Two Einstein WUs run on the CPU, but I have set them to no new work to see if that changes anything. However, I think the estimate is wrong, as it does ~8% in an hour.

All in all not great, but it can run 24/7 until I have all the parts for a new system in a couple of months, into which I will put the two GTX660s now running in old systems.
____________
Greetings from TJ

Profile Beyond
Avatar
Send message
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Level
Tyr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31210 - Posted: 4 Jul 2013 | 11:44:41 UTC - in response to Message 31206.

The time estimate for a Nathan LR is 30 hours. Two Einstein WUs run on the CPU, but I have set them to no new work to see if that changes anything. However, I think the estimate is wrong, as it does ~8% in an hour.

The estimates don't mean much; it sounds like around 12.5 hours. Not bad considering it's throttled. Why will the fan only go to 74%? You could try setting "On multiprocessor systems use at most 90% of the processors" in BOINC to reserve another core and see if the GPU utilization goes up. Don't change the "Use at most 100% CPU time" setting though.
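For anyone who prefers files over the GUI, the same processor limit can also be set in a global_prefs_override.xml in the BOINC data directory. A minimal sketch (the element names are the standard BOINC preference fields; on a quad core, 90% frees one core for the GPU app):

```xml
<global_preferences>
  <!-- use at most 90% of the processors, reserving a core for the GPU task -->
  <max_ncpus_pct>90.0</max_ncpus_pct>
  <!-- leave CPU time throttling at 100%, per the advice above -->
  <cpu_usage_limit>100.0</cpu_usage_limit>
</global_preferences>
```

BOINC Manager's Advanced -> Read local prefs file picks it up without a restart.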

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31212 - Posted: 4 Jul 2013 | 12:18:52 UTC - in response to Message 31210.

The time estimate for a Nathan LR is 30 hours. Two Einstein WUs run on the CPU, but I have set them to no new work to see if that changes anything. However, I think the estimate is wrong, as it does ~8% in an hour.

The estimates don't mean much; it sounds like around 12.5 hours. Not bad considering it's throttled. Why will the fan only go to 74%? You could try setting "On multiprocessor systems use at most 90% of the processors" in BOINC to reserve another core and see if the GPU utilization goes up. Don't change the "Use at most 100% CPU time" setting though.

That seems to be the maximum for these cards. Skgiven has the same; he even downgraded the nVidia drivers. With the EVGA software and Afterburner, setting the fan curve manually to 100% from 50°C does not help. In both programs, Afterburner and EVGA, there are two yellow lines, and it seems that the fan speed can (and will) only operate between those lines.
The quad is set to use only 3 cores, so 1 for the GPU, 2 for the CPU, and one free.
Once Einstein is finished the GPU will have all 4 CPU cores to itself; we'll see what that does in a few hours.
____________
Greetings from TJ

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31214 - Posted: 4 Jul 2013 | 16:26:55 UTC - in response to Message 31206.

I put the leftover GTX660 in my quad core Vista x86 24/7 system and, just as Beyond predicted, it runs fine with a PSU of only 380 Watt.
The temperature went quickly to 76°C with GPUGRID running and the fan speed at 74% (its maximum). I set the GPU power target to 90% and now it runs at 72-73°C with a GPU load of around 85%. So not too bad.
The time estimate for a Nathan LR is 30 hours. Two Einstein WUs run on the CPU, but I have set them to no new work to see if that changes anything. However, I think the estimate is wrong, as it does ~8% in an hour.

All in all not great, but it can run 24/7 until I have all the parts for a new system in a couple of months, into which I will put the two GTX660s now running in old systems.

Seems not good. Two Nathans have failed with the ACMD message on the screen. There is now a third one in progress; if that fails I will update from the 314 to the 320 driver for a last test. Then I'll swap the cards and wait for a new system.

On the GTX285 one Santi SR failed, but looking at the tasks there were more of these errors today. So that seems to be coincidence.

The old T7400 is happily crunching on the GTX660. Pity only one card fits. If the 690s weren't so expensive...

____________
Greetings from TJ

klepel
Send message
Joined: 23 Dec 09
Posts: 189
Credit: 4,723,349,405
RAC: 1,551,237
Level
Arg
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31220 - Posted: 4 Jul 2013 | 19:50:22 UTC - in response to Message 31214.

Seems not good. Two Nathans have failed with the ACMD message on the screen.

Although I do not know what ACMD means, two Nathans failed in a row here as well, the last with a bluescreen. Both on my GTX570, which normally does not have major problems. So you are not alone, and it might not be your system.

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31222 - Posted: 4 Jul 2013 | 19:56:03 UTC - in response to Message 31220.

Seems not good. Two Nathans have failed with the ACMD message on the screen.

Although I do not know what ACMD means, two Nathans failed in a row here as well, the last with a bluescreen. Both on my GTX570, which normally does not have major problems. So you are not alone, and it might not be your system.

Thanks for your info, that is a relief (for my system).
I must have meant ACEMD.286; it's the program that shows in the task manager, there to feed the GPU I guess, and it pops up in the error message.
____________
Greetings from TJ

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31235 - Posted: 4 Jul 2013 | 22:04:32 UTC - in response to Message 31222.
Last modified: 4 Jul 2013 | 22:04:47 UTC

When you get a failure, restart the system. Then give it a few days before you go mad installing and reinstalling drivers.

I ran 10 WU's without a problem on a W7 system and then had 3 failures yesterday. I restarted, and no problems so far...
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31238 - Posted: 4 Jul 2013 | 22:44:41 UTC - in response to Message 31235.

I don't go mad; I am always calm and patient when I make things (I don't break them) ;-)
I have also run for weeks without error, but now that I am swapping cards from system to system, it makes you think when an error occurs.

But the GTX660 runs way too warm in the quad's case, so I will swap it back for the 550Ti.
____________
Greetings from TJ

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31244 - Posted: 5 Jul 2013 | 11:20:07 UTC

This morning I replaced the GTX660 with the GTX550Ti, as the 660 ran too warm in that case; it will go back into the Alienware as soon as the CPU cooler arrives.

However, now that the 550Ti is running in its previous system, where it ran for more than a year, the core clock is at 405MHz according to the EVGA and GPU-Z software.
I have dug out Afterburner to see if I can get it higher, but no luck.
How can I do this?

It is now running a Nathan (after two errors) at 97% GPU load at only 50°C. That is nice, but it is only 0.528% complete after 26 minutes; that can't be good.

Thanks as always for the input.
____________
Greetings from TJ

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31245 - Posted: 5 Jul 2013 | 11:28:33 UTC - in response to Message 31244.

It's downclocking.
Right click on your desktop, click NVidia control panel and set the GPU to Prefer maximum performance.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31246 - Posted: 5 Jul 2013 | 11:44:58 UTC - in response to Message 31245.
Last modified: 5 Jul 2013 | 11:46:55 UTC

It's downclocking.
Right click on your desktop, click NVidia control panel and set the GPU to Prefer maximum performance.

That was the first thing I did when the card was in the system and it booted the second time. You told me that earlier, and I am a good learner.
So that doesn't help; more ideas?

I dragged the GPU clock slider into the 1500s, hit Apply, and the system rebooted with an error on the GPUGRID WU; of course, my fault.

After I installed the card I did a driver update from 314 to 320.18; could that be the cause?
____________
Greetings from TJ

Profile skgiven
Message 31247 - Posted: 5 Jul 2013 | 11:51:01 UTC - in response to Message 31246.

It's better to do a clean install (advanced), rather than an upgrade, especially when changing cards.
I would uninstall, then install 314. I would also reinstall GPUZ and EVGA Precision afterwards.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

TJ
Message 31251 - Posted: 5 Jul 2013 | 12:53:39 UTC - in response to Message 31247.

It's better to do a clean install (advanced), rather than an upgrade, especially when changing cards.
I would uninstall, then install 314. I would also reinstall GPUZ and EVGA Precision afterwards.

I did the clean install, and indeed reinstalled Precision X as well after uninstalling it first, though not GPU-Z (I hardly use it).

If it keeps running without error I'll let it go, and downgrade to 314 later.
If an error appears I'll downgrade immediately.

____________
Greetings from TJ

Profile skgiven
Message 31252 - Posted: 5 Jul 2013 | 13:03:58 UTC - in response to Message 31251.

Are you running 4 CPU work units on that system? If so, reduce it to 3. Presumably you tried restarting?
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

TJ
Message 31253 - Posted: 5 Jul 2013 | 13:13:35 UTC - in response to Message 31252.
Last modified: 5 Jul 2013 | 13:14:18 UTC

Are you running 4 CPU work units on that system? If so, reduce it to 3. Presumably you tried restarting?

No, a maximum of 2 CPU tasks, but currently zero, to see if that changed anything; it didn't.
Yes, I restarted a few times after the ACEMD program crash.
____________
Greetings from TJ

Profile skgiven
Message 31254 - Posted: 5 Jul 2013 | 13:37:59 UTC - in response to Message 31253.

The GTX550Ti GPU should be ~900MHz when crunching. 400MHz means it's definitely downclocked, but what's causing it is the question.
I suggest you recheck the NVidia control panel settings and make sure Prefer Maximum Performance is still selected. If it is, then all I can suggest is that you do a full uninstall, restart and install a previous driver (304 up to 314). I've seen and read about a lot of issues with the 320.x drivers.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help
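The 405MHz vs ~900MHz comparison skgiven makes can be expressed as a trivial check. A sketch; the `is_downclocked` helper and its 80% threshold are hypothetical choices of mine, not anything from the thread:

```python
def is_downclocked(current_mhz, reference_mhz, tolerance=0.8):
    """True when the observed core clock sits well below the reference
    clock. The 80% threshold is an arbitrary, illustrative choice."""
    return current_mhz < tolerance * reference_mhz

print(is_downclocked(405, 900))  # True: stuck in a low power state
print(is_downclocked(880, 900))  # False: normal clock variation
```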

TJ
Message 31255 - Posted: 5 Jul 2013 | 13:47:22 UTC - in response to Message 31254.

The GTX550Ti GPU should be ~900MHz when crunching. 400MHz means it's definitely downclocked, but what's causing it is the question.
I suggest you recheck the NVidia control panel settings and make sure Prefer Maximum Performance is still selected. If it is, then all I can suggest is that you do a full uninstall, restart and install a previous driver (304 up to 314). I've seen and read about a lot of issues with the 320.x drivers.

Yes, thanks, I will do that skgiven. It is still set to maximum performance.
It has now done 2.358% in 02:03:04 hours, so it will take about 100 hours to complete.
I will update soon; one more lost WU can't hurt.
But the coolers have arrived, so first I'll get the Alienware crunching again.

One is an Intel original; it feels heavy, and they smeared 3 stripes of paste on it with the idea that it flows out evenly when running hot...?
But it will fit in the case. The one I'll try first is a Shuriken B, 64mm in height with a 100mm fan.

____________
Greetings from TJ

TJ
Message 31259 - Posted: 5 Jul 2013 | 18:54:07 UTC

I have installed driver 314.22 and set it to maximum performance. The machine has rebooted 3 times since, and once even powered down completely.
The GPU clock went to 974MHz, but all GPUGRID WUs failed within a few seconds afterwards. Four errors in a row; I am slowly going mad....

But to check things I set GPUGRID (GG) to no new work and went to Milkyway. Those tasks are now running at 98% GPU load and 65°C.

So is the driver wrong for GG, or are the WUs currently being distributed (Nathan's) failing?
I have never had this many errors, not even with the Noelia's that so many people complained about.
____________
Greetings from TJ

Vagelis Giannadakis
Send message
Joined: 5 May 13
Posts: 187
Credit: 349,254,454
RAC: 0
Level
Asp
Scientific publications
watwatwatwatwatwatwatwatwatwat
Message 31264 - Posted: 5 Jul 2013 | 20:16:16 UTC - in response to Message 31259.

What about temperatures? You said 65°C when running MilkyWay; what about when running GPUGRID?
____________

TJ
Message 31279 - Posted: 6 Jul 2013 | 12:19:39 UTC - in response to Message 31264.

What about temperatures? You said 65°C when running MilkyWay; what about when running GPUGRID?

I don't know, as the card won't crunch GG yet.
____________
Greetings from TJ

TJ
Message 31281 - Posted: 6 Jul 2013 | 12:24:55 UTC

While sitting in the sun, I thought about GPUs.

Suppose I build a system with one simple GPU, a GTS or so, that drives two monitors for watching BOINC and such.
Then I add a GTX690, which has 2 processors, so it does two GG WUs at the same time and fast (for this project), but I connect no monitor to this GPU, so it has nothing to do with graphics. Would this rig be more stable?

Secondly, the special cards, the Fermi Cxxx that were designed for GPU crunching with no monitor connector at all, would be great then, but expensive.
However, for some time now I have heard and read less about them; are they gone?
____________
Greetings from TJ

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31288 - Posted: 6 Jul 2013 | 14:47:14 UTC - in response to Message 31281.

Forget about Fermi now; they're just wasteful compared to Keplers. They're not so bad that you'd retire them immediately, but nobody should buy another one for crunching. And especially don't buy a Tesla for crunching! They're only worth their price if you make a living from crunching on them.

And I don't think decoupling the display from crunching would really help (I've been crunching on my single GPU for years). But if you want to do so nevertheless, I'd suggest getting a CPU with an integrated GPU, as these are basically free and the most power-efficient ones available.

Is that 550Ti still downclocking? I know I answered something today in some other thread..

MrS
____________
Scanning for our furry friends since Jan 2002

TJ
Message 31291 - Posted: 6 Jul 2013 | 15:22:26 UTC - in response to Message 31288.

Okay, thanks, I won't.
I suspected that a screen or no screen wouldn't matter much; I noticed that when watching streamed internet TV, GG keeps going.

Yes, the 550Ti did Einstein at 974MHz for about 1.5 hours, then went back to 405 on the same WU. I removed all EVGA software and rebooted. It ran at 974 but eventually dropped to 405 again. Afterburner shows a clock of 951, GPU-Z 405???

The idea was to let it finish one Einstein, then remove the driver, boot, install the latest driver, boot, install the EVGA software, boot, and then run a GG SR.

My final rig is a quad Kentsfield. I use it mainly for working from home; it has two 8600GTS cards and a monitor shared with the PC next to it, so I can use 3 screens, which is handy for my job. With big screens 2 would be okay, but I only have 19-inch ones.
Now I have removed one card and placed the GTX660 that sat in the Alienware in the second slot (it says so: secondary); one 8600GTS is in the primary slot with two monitors.
It is now doing an SR. The CPU is 34-45°C according to Core Temp, with no CPU WUs running (yet). The GTX660 is at 86% load and 73°C, but I guess the room temperature of 31.7°C has to do with that. This rig is called EARTH.
____________
Greetings from TJ

Profile skgiven
Message 31297 - Posted: 6 Jul 2013 | 20:41:47 UTC - in response to Message 31291.

I have my i7-3770K's integrated GPU driving the monitor. Alas, it's not very good, and I didn't notice any significant benefit WRT using the two (and previously three) NVidia GPUs for crunching, which was at least in part the purpose of getting that CPU and motherboard. It's really not worth it, unless you are going to utilize the iGPU on a project, in which case it's REALLY not worth it, as in you have wasted your money on a CPU instead of a real GPU.

31.7°C is not friendly.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

TJ
Message 31301 - Posted: 6 Jul 2013 | 20:55:34 UTC - in response to Message 31297.

31.7°C is not friendly.

I know; there's only one small window, and a fan blowing air out. When summer really hits, it can reach 35°C with only one PC active, and I need at least one running.

____________
Greetings from TJ

flashawk
Send message
Joined: 18 Jun 12
Posts: 297
Credit: 3,572,627,986
RAC: 0
Level
Arg
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwat
Message 31320 - Posted: 7 Jul 2013 | 6:08:28 UTC - in response to Message 31259.

The GPU clock went to 974MHz, but all GPUGRID WUs failed within a few seconds afterwards. Four errors in a row; I am slowly going mad....


Shouldn't that GPU be running at 900MHz? That could be the reason for the failures. When you uninstalled Precision X and Afterburner, did you remove the profiles too? If you don't, they can kick in again as soon as Windows starts; it happened to me after uninstalling Precision X and leaving the profiles behind.

TJ
Message 31321 - Posted: 7 Jul 2013 | 9:11:54 UTC - in response to Message 31320.

The GPU clock went to 974MHz, but all GPUGRID WUs failed within a few seconds afterwards. Four errors in a row; I am slowly going mad....


Shouldn't that GPU be running at 900MHz? That could be the reason for the failures. When you uninstalled Precision X and Afterburner, did you remove the profiles too? If you don't, they can kick in again as soon as Windows starts; it happened to me after uninstalling Precision X and leaving the profiles behind.

I don't know about the clock. It did 951MHz when I first installed it, and I still saw that a few days ago. It ran the Einstein WUs overnight. I have now set the clock to 900MHz and it is doing a Nathan LR at the moment.
What will happen when I reduce the memory clock? It set itself to 2178MHz. Skgiven advised setting it to 400 if the card wouldn't work, but as it does, I left the "standard" setting.
Indeed a good tip about the profiles, but on uninstall I got that message, so I removed the profiles as well.
____________
Greetings from TJ

flashawk
Message 31327 - Posted: 7 Jul 2013 | 14:02:38 UTC

I guess you can wait and see what happens now. Come to think of it, I was using both Afterburner and Precision X when that weird problem happened with profiles loading after the apps had been uninstalled. The memory runs at 4100MHz effective, so that should be 2050MHz to get it at stock speeds and stable with the GPU at 900MHz.
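The conversion flashawk describes is just a factor of two between the advertised data rate and the figure entered in the tool. A minimal sketch; which of the two numbers a given tool displays varies, so treat the direction of the conversion as an assumption:

```python
def gddr5_command_clock(effective_mhz):
    """Halve the advertised ('effective') GDDR5 data rate to get the
    clock figure a tool working with the lower number expects."""
    return effective_mhz / 2.0

print(gddr5_command_clock(4100))  # 2050.0
```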

TJ
Message 31336 - Posted: 7 Jul 2013 | 19:08:09 UTC

When I lower the voltage a little on the GTX550Ti, I see that the GPU clock stays the same (I brought it up to 918 slowly) but the temperature dropped from 73 to 69°C, and the GPU load stays the same (91%).
Is it okay to lower the voltage? What is its effect on crunching? How low can I bring the voltage?
Thanks.
____________
Greetings from TJ

Profile skgiven
Message 31337 - Posted: 7 Jul 2013 | 19:26:30 UTC - in response to Message 31336.

As long as that voltage works it's a good setting, since it saves you electricity and reduces heat output.
Sometimes reducing the voltage will cause the WU to fail, but as you have dropped the clocks it could well be fine. Leave it and see if the task completes. Don't mess about too much or you will learn nothing; if you keep changing settings you never really find anything out - it takes time to know if a setup is really good (sometimes days and many tasks). I think you should stick at 918MHz and not go over it. If tasks fail, drop back down to 900MHz, and if they still fail, increase the voltage again.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

TJ
Message 31338 - Posted: 7 Jul 2013 | 19:45:48 UTC - in response to Message 31337.

As long as that voltage works it's a good setting, since it saves you electricity and reduces heat output.
Sometimes reducing the voltage will cause the WU to fail, but as you have dropped the clocks it could well be fine. Leave it and see if the task completes. Don't mess about too much or you will learn nothing; if you keep changing settings you never really find anything out - it takes time to know if a setup is really good (sometimes days and many tasks). I think you should stick at 918MHz and not go over it. If tasks fail, drop back down to 900MHz, and if they still fail, increase the voltage again.

Thanks, I'll leave that GTX550Ti as it is; I'm just happy that it is working fine again at 69°C.
But on the other quad the 660 is running at 74°C, and I would like that a little lower. With Precision X or Afterburner it is impossible to lower (or raise) the core voltage (it's not selectable). Both are EVGA cards with the same Precision X version, but with different settings available. Is that something in the card itself? Can it be worked around somehow to lower that voltage a little?
____________
Greetings from TJ

Profile skgiven
Message 31339 - Posted: 7 Jul 2013 | 20:04:11 UTC - in response to Message 31338.

Thanks, I'll leave that GTX550Ti as it is; I'm just happy that it is working fine again at 69°C.
But on the other quad the 660 is running at 74°C, and I would like that a little lower. With Precision X or Afterburner it is impossible to lower (or raise) the core voltage (it's not selectable). Both are EVGA cards with the same Precision X version, but with different settings available. Is that something in the card itself? Can it be worked around somehow to lower that voltage a little?

I'm using MSI Afterburner 2.3.0 and I can reduce the Core Voltage (mV), the power limit (%), Core Clock (MHz) and Memory Clock (MHz). W7, 314.22.

Reducing any of these should result in the use of less power, though reducing the Voltage could result in failures (but you don't know until you try).
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

ExtraTerrestrial Apes
Message 31340 - Posted: 7 Jul 2013 | 20:30:30 UTC

On your GTX660, and in fact on all Keplers with boost, simply reduce the power target until the GPU reduces clock speed and voltage by itself. This way you're guaranteed to remain stable, become more energy efficient and lower the heat output, all at a relatively small performance loss.

Open GPU-Z while crunching, check the current GPU power use and then adjust the limit to something lower than that.

On top of that you could still increase clock speed via the offset, but this again eats into the OC margin and may be unstable, just like OC at full voltage.

MrS
____________
Scanning for our furry friends since Jan 2002
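MrS's point about why a lower power target costs relatively little performance follows from the usual first-order model of CMOS dynamic power, P ∝ V²·f. A rough sketch with hypothetical numbers; real GPUs also draw static (leakage) power, so the actual saving differs:

```python
def dynamic_power_ratio(v_new, v_old, f_new, f_old):
    """Simplified CMOS dynamic-power model, P ~ V^2 * f.
    Ignores static (leakage) power, so treat it as a rough guide only."""
    return (v_new / v_old) ** 2 * (f_new / f_old)

# Hypothetical example: drop voltage from 1.10 V to 1.05 V at a fixed 918 MHz
ratio = dynamic_power_ratio(1.05, 1.10, 918, 918)
print(f"~{(1 - ratio) * 100:.0f}% less dynamic power")  # ~9%
```

Because power scales with the square of the voltage but only linearly with the clock, shaving a little voltage buys a disproportionate saving, which fits TJ seeing 73°C drop to 69°C at an unchanged 918MHz.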

TJ
Message 31341 - Posted: 7 Jul 2013 | 20:45:15 UTC

Ah, I see. I asked because the other GTX660 is in the T7400 and I have not set or changed anything there: plugged it in, updated to the latest nVidia driver, set Precision X to automatic temperature control (without changing its curve) and let it run. It has been running at 1123MHz and 1.162V at a steady 66°C for almost 82 hours non-stop, with GPU load around 85%. Nice, I thought, I will try that with my other 660 as well, but that's another rig.
I thought my Alienware was awesome, but this old T7400 almost beats it.
____________
Greetings from TJ
