Message boards : Graphics cards (GPUs) : Resource utilisation: still too immature

stanisls
Joined: 9 Feb 09
Posts: 2
Credit: 0
RAC: 0
Message 6532 - Posted: 10 Feb 2009 | 9:06:56 UTC
Last modified: 10 Feb 2009 | 9:07:07 UTC

Here is my conclusion after I've run GPUGrid for a few hours. I have previously run another, non-BOINC project, Folding@home GPU, for a couple of days as well.

For all GPU-based crunchers the problem is the lack of priority queues. A GPU cruncher will hog all available GPU time, leaving, for example, the desktop unresponsive (scrolling a Firefox window or dragging a window around becomes jerky). This was not a problem before, when only one application (desktop or game) accessed the GPU at any given time; it is a problem with CUDA. The NVIDIA driver MUST implement a way to prioritise tasks on the GPU, so that, for example, GPUGrid would run at the lowest priority and yield the GPU to a high-priority application (a game) or a real-time application (the desktop), the same way a modern OS does it on a conventional CPU.

BOINC-specifically, GPUGrid is actually wasting CPU time. I have a 4-core CPU, and normally all 4 cores are kept busy by various BOINC tasks. In comes GPUGrid: it is counted as a task, but because it runs on the GPU, one of my CPU cores is not utilised. I tried setting the preference to allow 5 simultaneous tasks on a multi-core CPU, but BOINC would still only start 4, and with GPUGrid barely touching the CPU, only about 75% of it is used.

MarkJ
Volunteer moderator
Volunteer tester
Joined: 24 Dec 08
Posts: 738
Credit: 200,909,904
RAC: 0
Message 6533 - Posted: 10 Feb 2009 | 9:25:44 UTC - in response to Message 6532.

BOINC-specifically, GPUGrid is actually wasting CPU time. I have a 4-core CPU, and normally all 4 cores are kept busy by various BOINC tasks. In comes GPUGrid: it is counted as a task, but because it runs on the GPU, one of my CPU cores is not utilised. I tried setting the preference to allow 5 simultaneous tasks on a multi-core CPU, but BOINC would still only start 4, and with GPUGrid barely touching the CPU, only about 75% of it is used.


It depends on the BOINC version you run. You can have 4+1 on a quad-core machine (assuming you have 1 GPU). Under 6.4.5 you can fudge it by overriding the number of CPUs; under 6.5.0 it works okay (but it won't shut down tasks on exit). The various 6.6.x versions will also run 4+1, but they have work-fetch issues which are still being sorted out.

The current (6.62) version of the GPUGrid app wants 0.13 of a core to feed the GPU, so it will happily co-exist with another science app running on the same core.
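
For the record, the override goes into cc_config.xml in the BOINC data directory; a minimal sketch, assuming the standard <ncpus> client option (restart the BOINC client after editing):

    <cc_config>
      <options>
        <ncpus>5</ncpus>
      </options>
    </cc_config>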
____________
BOINC blog

Paul D. Buck
Joined: 9 Jun 08
Posts: 1050
Credit: 37,321,185
RAC: 0
Message 6537 - Posted: 10 Feb 2009 | 15:49:15 UTC

One other note: memory is important. I have 1 GB or more on all my GPU cards and see little of the stuttering and the like on any system. Of course I do not do high-intensity gaming ... but even on the 9800 GT I have no hesitations on the system that I can point to the GPU as the cause ...

YMMV

stanisls
Joined: 9 Feb 09
Posts: 2
Credit: 0
RAC: 0
Message 6538 - Posted: 10 Feb 2009 | 17:15:05 UTC - in response to Message 6533.
Last modified: 10 Feb 2009 | 17:18:19 UTC

You can have 4+1 on a quad-core machine (assuming you have 1 GPU). Under 6.4.5 you can fudge it by overriding the number of CPUs, ...


I am running 6.4.5. Could you tell me how to override the number of CPUs? I've already set the preference "On multiprocessors, use at most 'Enforced by version 5.10 and earlier'" to 5, but I can't find the option to set the exact number of CPUs...

I have an NVIDIA GeForce 8600 GT overclocked from 475/475 to 585/555, with 512 MB of dedicated video memory, driver version 181.22. The system itself has 4 GB of RAM (Vista 64-bit).

Even though running GPUGrid is not an option while I'm using the machine, I might still run it during the nights. It seems I will make the deadlines (according to BoincView 1.4.2 estimates).

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 6546 - Posted: 10 Feb 2009 | 20:48:09 UTC

Just use 6.5.0 and running 4+1 shouldn't be a problem.

And what you're generally asking for will be delivered with DirectX 11, as far as I understand. It introduces compute shaders, which can run alongside whatever else is going on on the GPU.

Paul, it's not the memory (yet). With the first WUs, GPU-Grid plus an XP desktop consumed about 70 MB of video memory. Some of the current WUs are larger, but not by much (see the proportional increase in time per step for these). You're not seeing this drastic slowdown in screen response because your cards are too fast: your 9800 GT is a good 4 times faster than his card.

MrS
____________
Scanning for our furry friends since Jan 2002

Paul D. Buck
Joined: 9 Jun 08
Posts: 1050
Credit: 37,321,185
RAC: 0
Message 6549 - Posted: 10 Feb 2009 | 21:39:48 UTC - in response to Message 6546.

Just use 6.5.0 and running 4+1 shouldn't be a problem.

And what you're generally asking for will be delivered with DirectX 11, as far as I understand. It introduces compute shaders, which can run alongside whatever else is going on on the GPU.

Paul, it's not the memory (yet). With the first WUs, GPU-Grid plus an XP desktop consumed about 70 MB of video memory. Some of the current WUs are larger, but not by much (see the proportional increase in time per step for these). You're not seeing this drastic slowdown in screen response because your cards are too fast: your 9800 GT is a good 4 times faster than his card.

MrS


Thanks for the update ... If I knew what I was doing I would not be here ... :)

I am trying to learn as fast as I can ... I just can't keep all these different cards straight ... even with the lists ... sigh ...

Some day I will get it right ...

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 6551 - Posted: 10 Feb 2009 | 21:44:23 UTC - in response to Message 6549.

You're doing fine :)
And actually I find myself consulting this site rather often ... as soon as someone mentions a card slower than a 9600 GT, it's just not worth keeping them in mind!

MrS
____________
Scanning for our furry friends since Jan 2002

K1atOdessa
Joined: 25 Feb 08
Posts: 249
Credit: 395,402,681
RAC: 1,594,520
Message 6553 - Posted: 11 Feb 2009 | 1:18:06 UTC - in response to Message 6551.

as soon as someone mentions a card slower than a 9600 GT, it's just not worth keeping them in mind!


My twin 8800 GTs' little brother, a 9500 GT, is busting his butt crunching, even if it does take the poor guy about 40 hours to crunch one WU. :-) Don't forget about the little guys. LOL

dentaku
Joined: 10 Feb 09
Posts: 13
Credit: 0
RAC: 0
Message 6578 - Posted: 11 Feb 2009 | 22:47:00 UTC
Last modified: 11 Feb 2009 | 22:50:39 UTC

I also think that this "all or nothing" GPU usage is just too restrictive. I know my graphics card might not be "fast enough" for GPUGRID, but it is able to finish GPUGRID WUs in time (GeForce 9400 GT, 512 MB). The thing is, I can run 11 other BOINC projects (CPU) at 100% CPU usage in the background without any problems, but GPUGRID just takes too much from the GPU, resulting in sluggish and choppy desktop usage, which is no fun.

I often read that the SETI@home GPU version doesn't have this problem (I can't check, as I run Linux and there is no CUDA version of it for Linux out yet). So it doesn't seem to be a CUDA API problem.
____________
GPUGRID on 64 Bit Linux with GeForce 9400GT (512MB), AMD X2 4850e (2 × 2.5 GHz), 8 GB RAM

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 6582 - Posted: 11 Feb 2009 | 23:04:12 UTC - in response to Message 6578.

I often read that the SETI@home GPU version doesn't have this problem (I can't check, as I run Linux and there is no CUDA version of it for Linux out yet). So it doesn't seem to be a CUDA API problem.


I've explained this many times before, so I'll keep it short: currently the GPU is blocked as long as it executes general-purpose calculations. In SETI or Folding@home the calculations can be split into smaller parts, so you get screen refreshes in between; that's why you don't see the lag as badly. It's a limitation of DirectX 10, CUDA, the drivers, GPU-Grid ... everything together.
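
To picture it, here is a rough CUDA sketch (made-up kernel and sizes, not GPU-Grid's actual code): variant (a) runs everything in one launch and blocks the display for the whole run, variant (b) splits the same work into many short launches so the driver can squeeze screen refreshes into the gaps.

    #include <cuda_runtime.h>

    // Stand-in for one batch of MD integration steps on every "atom".
    __global__ void simulate_steps(float4 *pos, int steps)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        for (int s = 0; s < steps; ++s)
            pos[i].x += 1e-6f;            // placeholder for real per-step work
    }

    int main()
    {
        const int N_ATOMS = 65536;        // made-up problem size
        const int TOTAL_STEPS = 100000;
        float4 *d_pos;
        cudaMalloc(&d_pos, N_ATOMS * sizeof(float4));
        cudaMemset(d_pos, 0, N_ATOMS * sizeof(float4));

        // (a) One long launch: the GPU is busy until the kernel returns,
        //     so the desktop cannot redraw in the meantime.
        // simulate_steps<<<N_ATOMS / 256, 256>>>(d_pos, TOTAL_STEPS);

        // (b) Many short launches: the driver gets a gap between kernels
        //     in which it can service the display, at the cost of paying
        //     the launch overhead on every iteration.
        for (int done = 0; done < TOTAL_STEPS; done += 100) {
            simulate_steps<<<N_ATOMS / 256, 256>>>(d_pos, 100);
            cudaDeviceSynchronize();      // force a gap before the next chunk
        }

        cudaFree(d_pos);
        return 0;
    }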

MrS
____________
Scanning for our furry friends since Jan 2002

dentaku
Joined: 10 Feb 09
Posts: 13
Credit: 0
RAC: 0
Message 6584 - Posted: 11 Feb 2009 | 23:43:23 UTC - in response to Message 6582.

Okay, this means it's a problem with the GPUGRID WUs. I don't know the WUs' algorithms, but I'm a software developer, and I can't see why the WU processing couldn't be designed in a more resource-friendly way. I guess all BOINC projects have some calculations inside loops with countless iterations, so it should be no problem to split such calculations into smaller "chunks" ... just like SETI@home and Folding@home?
____________
GPUGRID on 64 Bit Linux with GeForce 9400GT (512MB), AMD X2 4850e (2 × 2.5 GHz), 8 GB RAM

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 6589 - Posted: 12 Feb 2009 | 9:16:49 UTC - in response to Message 6584.

Okay, this means it's a problem with the GPUGRID WUs.


Well, a property of the GPU-Grid WUs. They tried to split calculations even further, but performance was horrible.
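
For a rough feel, with made-up numbers: if each kernel launch costs on the order of 10 µs and a chunk small enough to keep the desktop responsive only holds about 50 µs of work, launch overhead alone adds roughly 20% on top of the useful work, before counting the data that has to be re-staged on the card for every chunk.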

MrS
____________
Scanning for our furry friends since Jan 2002
