Message boards : Graphics cards (GPUs) : BOINC 6.10.17 released for all users
Another new one to test. - WINSETUP: Remove the 'SeDebugPrivilege' priv from the list of privs the installer sets for the BOINC client itself. This is likely to become the official version next week unless "showstopper" bugs are found. ____________ BOINC blog
ID: 13302
Now it's the "official" version. From the BOINC Alpha email list... We are pleased to announce that 6.10 is now ready for public use. ____________ BOINC blog
ID: 13321
I wonder if there is any chance to change the display style of credits? I mean, instead of 1234567, show 1,234,567? It's much easier to read :-)
ID: 13378
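As an aside, the formatting being requested here is a one-liner in most languages; a minimal Python sketch (the function name is just for illustration, this is not BOINC code):

```python
def format_credit(credit: float) -> str:
    """Render a credit total with thousands separators for display."""
    return f"{credit:,.0f}"

print(format_credit(1234567))  # 1,234,567
```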
Mark,
ID: 13380
Mark, benchmarks are a poor measure for any number of reasons, not the least of which you just demonstrated. Especially on today's systems, the benchmark is likely to run entirely out of cache, quite possibly entirely from L1 cache, no less. There have been complaints about this since the BOINC beta, and solutions have been proposed. There is even a new credit system being proposed by UCB, though it is not fully developed yet and has a number of holes, not the least of which is that it treats single-precision floating point as the be-all and end-all of the values to track, even though we have one project that is almost exclusively double precision and others that are largely integer based (meaning single-precision performance numbers are not relevant to the performance of that project's applications).
ID: 13384
Mark, there was a bit of debate about using real WUs as a basis for benchmarking, but then a number of the projects have different types of work. The benchmark is used primarily to work out how quick your machine is so it can estimate how long work will take. That estimate is then adjusted by the DCF (duration correction factor) value, which is kept per project on your PC. Unfortunately there is only one DCF value per project; it doesn't differentiate between CPU and GPU, or between different apps within a project (e.g. SETI has Multibeam and Astropulse, which take totally different times). In your case the DCF should have adjusted by now to give a reasonable estimate of how long the work will take. It does, however, have to complete some work units first before it can adjust. It's not my call to adjust the benchmark. Really, it doesn't matter what number it comes up with, because once the machine has done some work the DCF will have adjusted anyway. ____________ BOINC blog
ID: 13385
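The benchmark-plus-DCF mechanism described above can be sketched roughly as follows. This is a simplified illustration, not the client's actual code; the function names and the smoothing rule are assumptions:

```python
def estimate_runtime(fpops_est: float, whetstone_flops: float, dcf: float) -> float:
    """Benchmark-based runtime estimate, scaled by the per-project
    duration correction factor (DCF)."""
    return fpops_est / whetstone_flops * dcf

def update_dcf(dcf: float, estimated: float, actual: float) -> float:
    """Nudge the DCF toward the observed actual/estimated ratio after a
    task completes (illustrative smoothing, not BOINC's exact rule)."""
    ratio = actual / estimated
    return dcf + 0.1 * (ratio - dcf)

# A 1e12-flop task on a host benchmarking at 2 GFLOPS, DCF starting at 1.0:
est = estimate_runtime(1e12, 2e9, 1.0)    # 500 seconds estimated
dcf = update_dcf(1.0, est, actual=1000.0) # task ran long, so DCF rises
```

This is why the raw benchmark number matters less than it seems: after a few completed tasks, the DCF has absorbed whatever error the benchmark introduced.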
Thx guys for your answers :-) | |
ID: 13386
Thx guys for your answers :-) You are welcome for the answers... And Mark is right: idiots like me have been proposing for years that we use actual tasks for benchmarking, and he is also right that different projects have different tasks, which is why my proposals suggested using tasks from as many projects as possible. This is usually misinterpreted as suggesting we use projects like CPDN, so the idea can be ridiculed. The truth is that many projects have short tasks that could be used, and if we ran a suite of tasks on various machines we could characterize them much more properly... The checkpoint code only says how often BOINC will say it is OK to checkpoint; the application is the one that actually sets the rate. It asks BOINC if it is OK or not, then checkpoints if it is, assuming the BOINC API is used at all. In any case, it is up to the project's application to checkpoint. Not sure why you are losing so much ground... then again, why shut the system off in the first place? :)
ID: 13387
There is another... hmm... request. Is there any chance to make BOINC save WUs before closing? Because right now it loses 2-3% of each WU. Yep, I put "30 secs" in the web preferences, but still... This is where checkpointing is handled. The science app is written by each project, and it decides when it is going to checkpoint, if at all. MilkyWay GPU tasks don't, but they only take 60 seconds. It's a trade-off between checkpointing a lot, thus slowing down the machine, and checkpointing less often, with the need to redo some work when it restarts. On machines with many cores (quads or i7s) I usually set the number higher (90 secs on my i7s), and with fewer cores you'd probably have it around 60, which is the default. ____________ BOINC blog
ID: 13394
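The division of labor described above, where BOINC gates checkpoints by the "write to disk" preference and the science app decides whether to act, can be sketched like this. It is a simulation of the logic only, not the real BOINC API (whose C calls are `boinc_time_to_checkpoint()` and `boinc_checkpoint_completed()`); the class and timestamps are assumptions for illustration:

```python
class CheckpointGate:
    """Mimics the client side: answers 'OK to checkpoint' only when the
    disk-interval preference has elapsed since the last completed write."""

    def __init__(self, disk_interval_secs: float):
        self.disk_interval = disk_interval_secs
        self.last_write = 0.0

    def time_to_checkpoint(self, now: float) -> bool:
        return now - self.last_write >= self.disk_interval

    def checkpoint_completed(self, now: float) -> None:
        self.last_write = now

gate = CheckpointGate(disk_interval_secs=60)
print(gate.time_to_checkpoint(now=30))   # False: too soon, keep crunching
print(gate.time_to_checkpoint(now=60))   # True: the app may write its state
gate.checkpoint_completed(now=60)
print(gate.time_to_checkpoint(now=90))   # False again until 120s
```

Note that even when the gate says yes, an app that never calls it (like the 60-second MilkyWay GPU tasks mentioned above) simply never checkpoints.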
Just a little clarification on the "write to disk" parameter: it really means "use this as a multiplier per core", so setting 90 seconds on an i7 limits disk writing to once every 720 seconds. One of the keys here is that if you tell BOINC not to write to disk, it keeps stuffing intermediate results into memory; after that the OS takes over, and if you don't have enough memory it writes them to the swap file (which is, yes, you know it, on your disk). Then, when it is finally allowed to flush from memory to disk, it ends up reading from the swap file and writing to the real disk location. I mention this for crunchers who are running tight on memory, because setting this parameter too long could actually slow you down.
ID: 13395
Just a little clarification on the "write to disk" parameter: it really means "use this as a multiplier per core", so setting 90 seconds on an i7 limits disk writing to once every 720 seconds. One of the keys here is that if you tell BOINC not to write to disk, it keeps stuffing intermediate results into memory; after that the OS takes over, and if you don't have enough memory it writes them to the swap file (which is, yes, you know it, on your disk). I mention this for crunchers who are running tight on memory, because setting this parameter too long could actually slow you down. Depends on the BOINC version. From the 6.10.14 change log... - client: don't multiply checkpoint interval (i.e., "disk interval" pref) by # processors. ____________ BOINC blog
ID: 13404
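The behavior change being debated here amounts to a one-line difference. A sketch of the arithmetic, with a `multiply_by_cores` flag standing in for the pre/post-6.10.14 behavior (the function name is an assumption, not BOINC's):

```python
def effective_write_interval(pref_secs: float, n_cores: int,
                             multiply_by_cores: bool) -> float:
    """Seconds between disk writes for the 'write to disk at most
    every N seconds' preference."""
    return pref_secs * n_cores if multiply_by_cores else pref_secs

# Pre-6.10.14 behavior on an i7 (8 logical cores), as described above:
print(effective_write_interval(90, 8, multiply_by_cores=True))   # 720.0
# From 6.10.14 on, the preference is used as-is:
print(effective_write_interval(90, 8, multiply_by_cores=False))  # 90.0
```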
It's been superseded. See the separate message thread for BOINC 6.10.18 details.
ID: 13405
Depends on the BOINC version. From the 6.10.14 change log... Which is likely a move in the wrong direction, in that systems are getting "wider" all the time, and on wider systems, with so many tasks in flight, the disk activity goes through the roof; that was one of the reasons for the multiplier in the first place. If I read the rumors right, the i9 is around the corner and it will be at least a 12-CPU chip... add in even just a pair of GTX 295 cards and you are at 16 tasks running...
ID: 13420
Paul D. Buck,
ID: 13428
The idea of such a benchmark is to check how hardware changes affect BOINC. Sure, I can start any other benchmark to check, but the truth is that synthetic results are too far from "real life". Which has long been my assertion. We need to benchmark with actual tasks... For just as long, UCB has been rejecting this concept along with a host of others...
ID: 13493
The idea of such a benchmark is to check how hardware changes affect BOINC. Sure, I can start any other benchmark to check, but the truth is that synthetic results are too far from "real life". That's a real pity... in my understanding this is not right, and these benchmarks are meaningless at all... ____________
ID: 13496