
Message boards : Wish list : I wish Noelia wouldn't...

Betting Slip
Message 33632 - Posted: 27 Oct 2013 | 0:25:47 UTC
Last modified: 27 Oct 2013 | 0:26:21 UTC

...send out WUs that consume 1080 MB of video memory to 1024 MB cards. This makes them run very slowly, leaves the computer unresponsive, and sometimes means the WU is trashed after running for 30 hours.

Seems a very simple request to me.
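
A quick host-side way to see how close a task is getting to the card's limit (a sketch, assuming an NVIDIA card with nvidia-smi on the PATH; this is not part of BOINC or GPUGRID):

    # Report total vs. used video memory per NVIDIA GPU via nvidia-smi.
    import subprocess

    rows = subprocess.run(
        ["nvidia-smi", "--query-gpu=index,name,memory.total,memory.used",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout.strip().splitlines()

    for row in rows:
        idx, name, total_mb, used_mb = [field.strip() for field in row.split(",")]
        free_mb = int(total_mb) - int(used_mb)
        print(f"GPU {idx} ({name}): {used_mb}/{total_mb} MB used, {free_mb} MB free")
        if int(used_mb) >= 0.95 * int(total_mb):
            print("  -> close to the dedicated memory limit; the task may be "
                  "spilling into system RAM and running very slowly")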

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Message 33672 - Posted: 30 Oct 2013 | 12:15:40 UTC - in response to Message 33632.

As far as I understand, the problem is that minimum GPU memory requirements can only be set per project in BOINC, not per task. This makes sense for projects like SETI, which seldom change their work package size, and when they do, the change applies globally.

For GPU-Grid, on the other hand, the limit would ideally be set per WU. It could differ even within one batch, since different parameters are used, although I don't know whether e.g. system sizes actually vary within a batch.

Can the memory size requirement be set per sub-project? If so, the long-runs queue could be split into two (or more) queues for different memory sizes. That makes things a bit more complex, but it could be handled by naming the queues clearly.

MrS
____________
Scanning for our furry friends since Jan 2002
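
To make the per-WU idea concrete, a purely hypothetical dispatch filter might look like the sketch below (BOINC's scheduler does not offer this today; the Workunit/Host fields and the gpu_ram_needed_mb values are invented for illustration):

    # Hypothetical per-workunit memory filter -- illustration only, not BOINC code.
    from dataclasses import dataclass

    @dataclass
    class Workunit:
        name: str
        gpu_ram_needed_mb: int   # would have to be supplied per WU (or per batch)

    @dataclass
    class Host:
        name: str
        gpu_ram_mb: int          # memory of the smallest usable GPU, to be safe

    def dispatchable(wu: Workunit, host: Host, margin_mb: int = 64) -> bool:
        """Send a WU only if it fits in video memory with a little headroom."""
        return wu.gpu_ram_needed_mb + margin_mb <= host.gpu_ram_mb

    queue = [Workunit("NOELIA_big", 1080), Workunit("SHORT_small", 700)]
    host = Host("host with 1024 MB card", 1024)

    for wu in queue:
        verdict = "send" if dispatchable(wu, host) else "skip (would overflow VRAM)"
        print(f"{wu.name}: {verdict}")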

Richard Haselgrove
Message 33674 - Posted: 30 Oct 2013 | 12:42:19 UTC - in response to Message 33672.

As I said the last time this subject came up (Correct reporting of coprocessors (GPUs)), I'm pretty sure that memory size requirements can be set per plan_class, so a 'high memory' plan class and queue could be created.

But there might be a problem (with current versions of the BOINC client) for computers with mixed GPU cards (some above the memory threshold, some below), if users have configured their hosts to run all GPUs on this project.
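
The mixed-GPU concern can be illustrated with a toy sketch (an assumption-laden example: it supposes, as discussed in the thread linked above, that the client reports only its most capable GPU of each vendor to the scheduler, while a task may later run on the smaller card; the threshold value is invented):

    # Toy illustration of the mixed-GPU pitfall -- not BOINC code, values invented.
    gpus_on_host_mb = [2048, 1024]       # one card above a memory threshold, one below
    plan_class_min_gpu_ram_mb = 1280     # hypothetical "high memory" plan class limit

    # Assumption: only the most capable card is reported to the scheduler...
    reported_mb = max(gpus_on_host_mb)
    passes_check = reported_mb >= plan_class_min_gpu_ram_mb

    # ...but the client may start the task on whichever GPU is free.
    actually_runs_on_mb = min(gpus_on_host_mb)
    fits_where_it_runs = actually_runs_on_mb >= plan_class_min_gpu_ram_mb

    print(f"plan class check passes: {passes_check}")                     # True
    print(f"fits on the card it actually runs on: {fits_where_it_runs}")  # False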

Stefan
Project administrator
Project developer
Project tester
Project scientist
Message 33675 - Posted: 30 Oct 2013 | 12:51:07 UTC
Last modified: 30 Oct 2013 | 12:51:32 UTC

It is something we are planning to do eventually (putting requirements on WUs or queues).
However, since this batch of Noelia's will finish soon (according to her) and Matt is occupied, other things take priority.

But I liked the thread name, hehe :D It's like a wish list to Santa.

Betting Slip
Message 33778 - Posted: 5 Nov 2013 | 9:04:53 UTC - in response to Message 33675.

> Noelia's will finish soon (according to her).


Maybe, but Noelia's plan for video memory domination has stepped up a gear with NOELIA_1MG, which I see is consuming 1.3 GB.

> I liked the thread name, hehe :D It's like a wish list to Santa.


Thank you. Wouldn't it be nice if forum topic titles were not only descriptive but also made you want to click?
