Message boards : Graphics cards (GPUs) : Merging of acemd and acemd2
We will soon be merging the two queues into a single one, so only the following will be left:
ID: 17124 | Rating: 0
So will there still be v6.03 and v6.72 WUs? If so, I think the merging is a bad idea. Some machines run v6.72 better and some (especially older cards) run v6.03 better. Merging the two types into one queue will make us do the babysit-and-abort shuffle again: much more work for us, less output for the project.
ID: 17125 | Rating: 0
v6.03 will disappear, as it is old by now. There is no reason it should work better than v6.72.
ID: 17128 | Rating: 0
We will use only one application in two versions: CUDA 2.2 (for old drivers) and CUDA 3.0 (for new drivers and Fermi). How new do the drivers have to be to use CUDA 3.0 (and not 3.1)?
ID: 17130 | Rating: 0
I don't actually know. It is BOINC that selects if the driver is CUDA 3 compatible.
ID: 17131 | Rating: 0
I don't actually know. It is BOINC that selects if the driver is CUDA 3 compatible. BOINC doesn't "select", BOINC "reports"; compatibility or otherwise is determined by the driver itself. I think for CUDA 3.0 you need at least driver 197.xx, and that's the point where the reduced available memory starts to kick in. Driver 190.xx / CUDA 2.2 seems a nice, stable combination for older cards, but have you ruled out CUDA 2.3 (or just decided it doesn't offer any improvement over 2.2 for GPUGrid)?
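Purely as an illustration (this is not BOINC's actual detection code, and the 3.0 threshold below is just the case being discussed), here is roughly how a program can ask the installed driver which CUDA level it supports, using the standard CUDA runtime calls:

```c
#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    int driver_ver = 0, runtime_ver = 0;

    /* Highest CUDA version the installed display driver supports,
       encoded as 1000*major + 10*minor (e.g. 3000 = CUDA 3.0, 2020 = CUDA 2.2). */
    cudaDriverGetVersion(&driver_ver);

    /* CUDA runtime version this program was built against. */
    cudaRuntimeGetVersion(&runtime_ver);

    printf("Driver supports up to CUDA %d.%d; runtime is CUDA %d.%d\n",
           driver_ver / 1000, (driver_ver % 100) / 10,
           runtime_ver / 1000, (runtime_ver % 100) / 10);

    /* Roughly the kind of decision the scheduler has to make: hand out the
       CUDA 3.0 build only if the driver reports at least 3.0 support. */
    if (driver_ver >= 3000)
        printf("This host could take the CUDA 3.0 application.\n");
    else
        printf("This host should stick with the CUDA 2.2 application.\n");

    return 0;
}
```

Built against the CUDA runtime, it prints the driver's maximum supported CUDA version in the same major/minor form the client shows in its startup messages.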
ID: 17133 | Rating: 0
I think for CUDA 3.0 you need at least driver 197.xx, and that's the point where the reduced available memory starts to kick in. Thanks for the info. I've been using v195.62 for a long time with great results. If the new plan slows down the v197.45 machines, I'll move them back to v195.62 again.
ID: 17135 | Rating: 0
I don't actually know. It is BOINC that selects if the driver is CUDA 3 compatible. There is no advantage in using CUDA 2.3, and CUDA 2.2 covers a wider range of driver installations. gdf
ID: 17137 | Rating: 0
v6.03 will disappear, as it is old by now. There is no reason it should work better than v6.72. Is v6.05 replacing both v6.03 and v6.72? Is it simply a renamed v6.72? Looks like it's exactly the same size...
ID: 17207 | Rating: 0
It's renamed.
ID: 17209 | Rating: 0
Since v6.03 was shut down I've had to move a 9600GSO and a GT 8800 to Collatz: too many errors with the v6.05 WUs. They ran the v6.03 WUs fine. A third GPU, another 9600GSO that ran well with v6.03, can't get work at all now, probably because it has 384MB of RAM; it has also moved to Collatz. That's three decent cards moved away from GPUGRID due to problems with the transition. There have also been problems getting work on the faster cards for the last day, and no work at all available for any app for the last few hours. Things aren't looking good :-(
ID: 17275 | Rating: 0
We will solve this problem in the next release. I'll keep you informed.
ID: 17277 | Rating: 0
Since v6.03 was shut down I've had to move a 9600GSO and a GT 8800 to Collatz: too many errors with the v6.05 WUs. They ran the v6.03 WUs fine. A third GPU, another 9600GSO that ran well with v6.03, can't get work at all now, probably because it has 384MB of RAM; it has also moved to Collatz. That's three decent cards moved away from GPUGRID due to problems with the transition. There have also been problems getting work on the faster cards for the last day, and no work at all available for any app for the last few hours. Things aren't looking good :-( I'm experiencing something similar when I'm trying to run SETI & Collatz. Do you have the same thing as I wrote about here? http://lunatics.kwsn.net/gpu-testing/ati-sse3-astropulse-app-openclbrook-beta-testing.msg27299.html#msg27299
ID: 17284 | Rating: 0
We will solve this problem in the next release. I'll keep you informed. Thanks, but which problem: the one where older cards produce many more errors, or the one that doesn't allow 384MB cards at all? How about bringing back v6.03 in a separate queue until it's fixed?
ID: 17294 | Rating: 0
I'm experiencing something similar when I'm trying to run SETI & Collatz. Link doesn't work: "An Error Has Occurred! The topic or board you are looking for appears to be either missing or off limits to you."
ID: 17295 | Rating: 0
Link doesn't work: It's in a beta-testing area, probably accessible to registered users only. He wrote: "I'm having a small problem with rev 420 and Collatz." I don't think it has anything to do with the app_info.xml file: it sounds more like BOINC long-term debt, aka 'ATI work fetch priority'.
ID: 17297 | Rating: 0
He wrote: This is not related to the problem above. What you're describing sounds like the GPU FIFO "feature" in BOINC. The only way around this WAS to use a VERY SMALL queue size. Try BOINC v6.10.56, though, as things seem to have improved (at least in my tests).
ID: 17299 | Rating: 0
I'm using 6.10.56 x64, but it's not working.
ID: 17301 | Rating: 0
Configure BOINC to keep 0.05 days of work units (for example):
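One way to set that (a sketch, assuming a 6.10.x client and that you would rather edit a file than use the web preferences; the same value can also be set through the normal computing preferences) is global_prefs_override.xml in the BOINC data directory:

```xml
<!-- global_prefs_override.xml, in the BOINC data directory -->
<global_preferences>
    <!-- Keep roughly 0.05 days (about 1.2 hours) of work queued -->
    <work_buf_min_days>0.05</work_buf_min_days>
    <!-- No extra buffer on top of that -->
    <work_buf_additional_days>0.0</work_buf_additional_days>
</global_preferences>
```

After saving it, "Read local prefs file" from the Manager's Advanced menu (or a client restart) should make it take effect.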
ID: 17303 | Rating: 0
Configure BOINC to keep 0.05 days of work units (for example): It's not good. I had to change it back to 0.00 to get some work.
ID: 17323 | Rating: 0
I don't think it has anything to do with the app_info.xml file: it sounds more like BOINC long-term debt, aka 'ATI work fetch priority'. A subject Richard and others have been trying, without much success, to get UCB to take seriously... There are two other ways to "sometimes" get BOINC back in battery: one is individual project resets to clear the debts, and the other is to use the flag in CC Config to clear the debts. Obviously, if you are going to use the project reset method, wait until you run dry next time and then reset the project; that will reset the debts for that single project. With multi-project interaction, the generalized debt reset is sometimes the only way to get BOINC to behave again. Note that, in my personal experience, you can start seeing the artifacts of the "debt crisis" in as little as a week, though most of the time it is tolerable for up to a month.
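For reference, the CC Config flag in question is, as far as I know, <zero_debts>; assuming your client supports it, a minimal cc_config.xml (in the BOINC data directory) would look something like this:

```xml
<!-- cc_config.xml, in the BOINC data directory -->
<cc_config>
    <options>
        <!-- Zero the long- and short-term debts for all projects when the
             client starts up. Remove the flag (or set it to 0) afterwards
             so it doesn't keep resetting them. -->
        <zero_debts>1</zero_debts>
    </options>
</cc_config>
```

Restart the client, or use "Read config file" from the Manager's Advanced menu, for it to take effect.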
ID: 17337 | Rating: 0