Message boards : Graphics cards (GPUs) : huge result files
Those huge result files are causing trouble - probably the server bandwidth is too limited to cope with them.
ID: 9341
ID: 9342
Yes, a 25 MB upload is too big - and 25 is on the lean side; I've heard of 50+ MB.
ID: 9344
Yes, I just had my second extra-long WU (13-14 hrs on an overclocked GTX 260). Each one had a single file of 53 MB, plus the usual normal files. It took me 25 minutes to upload; however, a team mate has been uploading one of his for well over an hour now at 4 kB/s. Mark
ID: 9346
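As a rough sanity check on the numbers quoted above - this is just back-of-envelope arithmetic, and the ~36 kB/s rate is inferred from "53 MB in 25 minutes", not stated in the post:

```python
# Rough upload-time estimate for the file sizes mentioned in this thread.
# The 53 MB size and 4 kB/s rate come from the posts; the ~36 kB/s rate
# is inferred, and the function name is just for illustration.

def upload_minutes(size_mb: float, rate_kbps: float) -> float:
    """Minutes to upload size_mb megabytes at rate_kbps kilobytes/second."""
    return size_mb * 1024 / rate_kbps / 60

# 53 MB finishing in ~25 minutes implies roughly 36 kB/s:
print(round(upload_minutes(53, 36)))          # 25 (minutes)

# At the team mate's 4 kB/s, the same file takes several hours:
print(round(upload_minutes(53, 4) / 60, 1))   # 3.8 (hours)
```

So "well over an hour" at 4 kB/s is no surprise - that transfer still has hours to go.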
The WUs which accidentally have double the size also seem to have double the output file size. This doesn't change much for the user, though: within the same time you can have one 50 MB file or two 25 MB files.
ID: 9350
The WUs which accidentally have double the size also seem to have double the output file size. This doesn't change much for the user, though: within the same time you can have one 50 MB file or two 25 MB files. That's not true - I'm currently uploading a "normal" one, and the largest file alone shows 28.86 MB.
ID: 9351
If memory serves, several of my 13-hour tasks were nearly 60 MB in size.
ID: 9357
That's not true - I'm currently uploading a "normal" one, and the largest file alone shows 28.86 MB. Does it make the 50 MB uploads any worse when you have the choice between one 50 MB file or two 28.86 MB files?

@Paul: true, but it has been said several times that the extra-large WUs were accidents, not intended to be repeated. So I don't think it's worth the time to worry about them, as the overall transfer volume is still comparable.

Regarding a possible general complaint that a 25 - 30 MB upload is too much for a normal WU: I guess this is inherent to the project. Remember: GPU-Grid is here to work as a supercomputer with a capability not seen before. It's quite logical to apply such power to large systems, which could not be crunched before, so there will be many atoms, and the positions of all of them have to be used as starting points for the next iteration, i.e. they have to be sent back.

I don't know the actual data structure, but the uploads have been rather large from the beginning. It could also be that the newer files contain more debug information or more frequent result savepoints. I.e. ~800,000 time steps are crunched per WU, and it could be that previously all atom positions were saved every 100th iteration, whereas now it may be every 20th. But that's just speculation on my part.

MrS
____________
Scanning for our furry friends since Jan 2002
ID: 9536
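The savepoint speculation above is easy to quantify: if all atom positions are written out every Nth time step, cutting N from 100 to 20 over the same ~800,000 steps multiplies the output volume five-fold. A minimal sketch (the step count is from the post; the savepoint intervals and function name are assumptions for illustration):

```python
# How often full atom positions would be written out per WU, under the
# speculated savepoint intervals. Only the ~800,000-step figure comes
# from the thread; everything else is hypothetical.

def savepoints(n_steps: int, save_every: int) -> int:
    """Number of times the full set of atom positions is written out."""
    return n_steps // save_every

steps = 800_000
old = savepoints(steps, 100)   # every 100th iteration -> 8000 snapshots
new = savepoints(steps, 20)    # every 20th iteration  -> 40000 snapshots
print(old, new, new // old)    # 8000 40000 5
```

A five-fold jump in snapshot count would translate directly into a five-fold larger result file, which would be consistent with uploads growing from a few MB to the 25-30 MB range - but, as MrS says, that is speculation.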