Message boards : Graphics cards (GPUs) : Help with my GTX 980
So I just got a used Zotac AMP GTX 980 on eBay. I installed it this afternoon and it has been running for a few hours. Wow, it gets hot. Like 72C hot. The backplate is almost too hot to touch. I backed the TDP from 80% down to 70%. The GPU clock went from 1292 MHz to 1227 MHz. Temp is now 64C.
ID: 40808
> So I just got a used Zotac AMP GTX 980 on eBay. I installed it this afternoon and it has been running for a few hours. Wow, it gets hot. Like 72C hot. The backplate is almost too hot to touch. I backed the TDP from 80% down to 70%. The GPU clock went from 1292 MHz to 1227 MHz. Temp is now 64C.

Yes.

> Is that common?

Yes. These cards have to dissipate up to 165W, while the GTX 660 Ti has only a 150W TDP; and because the 660 Ti is superscalar, not all of its CUDA cores can be used by the GPUGrid client, so its real power draw is much lower than a GTX 980's.

> Should I remove the back plate?

Definitely not.

> Won't that help cool the card?

No, it won't.

> My 660 Ti didn't have a back plate.

That's because the GTX 660 Ti is a much shorter card, and a back plate is a high-end feature.

> I also noticed that the GPU load was at 80%. I cut back on 2 WCG WUs so my i7 won't hyperthread and the GPU load went up to 84%. I never thought my rig wouldn't be able to feed this beast. I have an i7 at 4GHz with 2100MHz RAM. Wow.

Blame it on the WDDM (Windows Display Driver Model) architecture of modern Windows OSes.

> Are there any other tweaks I should be doing to up the GPU load?

You could try setting the swan_sync environment variable to make the GPUGrid client use a full CPU core:
Start button -> type systempropertiesadvanced into the search box -> press Enter (or click the result) -> click the "Environment Variables" button near the bottom -> click the "New" button in the System variables section -> variable name: swan_sync -> variable value: 1 -> click OK three times -> exit BOINC Manager, stopping the science applications -> restart BOINC Manager.

> Thanks as always.

You're welcome.
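If you want to double-check that the variable is actually visible to newly started programs, here is a minimal Python sketch of my own (the check_swan_sync.py script is just an illustration, not part of BOINC or GPUGrid):

    # check_swan_sync.py - hypothetical helper, not part of BOINC or GPUGrid.
    # Run it from a console opened AFTER setting the variable; processes that
    # were already running keep their old environment.
    import os

    value = os.environ.get("SWAN_SYNC") or os.environ.get("swan_sync")
    if value is None:
        print("swan_sync is not set in this environment")
    else:
        print("swan_sync is set to:", value)

The same goes for BOINC itself: the science apps only see the new variable after BOINC Manager is restarted, which is why the restart step above matters.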
ID: 40810
Thank you for the quick reply. This is the first time I've had a card better than a 660 Ti or a $250-class card. I think I will save for another 980. Then I can run two 980s and the 750. Max this board out.
ID: 40811
Hi!

> variable name: swan_sync -> variable value: 1

Shouldn't the value be 0?
ID: 40885
Hi! Its value doesn't matter, only its presence. See this post. The "recommended" value is 1. See this post.
ID: 40886
Sorry to resurrect the thread. Anyway, I tried the above method on my GTX 970. It seems that I go from ~70% GPU usage (with 10~15% CPU) to ~75% GPU usage (with 30~33% CPU usage). With 2 tasks running at a time, I go from 80% to 90%. Is this about as much as it should be, or should it be going higher? Also, can you tell me a place where I can read about what's actually going on? I like learning how things work, and this swan method seems rather intriguing...
ID: 41175
> Anyway, I tried the above method on my GTX 970. It seems that I go from ~70% GPU usage (with 10~15% CPU) to ~75% GPU usage (with 30~33% CPU usage). With 2 tasks running at a time, I go from 80% to 90%.

On modern Windows OSes, that's about as much as you can get. You can gain a little more if you reduce the number of CPU tasks running.

> Also, can you tell me a place where I can read about what's actually going on?

1. The WDDM (Windows Display Driver Model) is to blame for the lower GPU usage.
2. The GPU usage shown by the monitoring utilities is not the same as the fraction of CUDA cores actually in use; the latter is better gauged by the card's power draw. Older CUDA apps may show higher GPU usage while the card's power draw is lower than when the GPUGrid app is running.

> I like learning how things work, and this swan method seems rather intriguing...

The swan method works in a very simple way: the app continuously polls the GPU to see if it has finished a piece of the calculation, instead of waiting for the GPU subsystem to notify it, so the latency of that notification is eliminated. The latency caused by the WDDM overhead cannot be eliminated; for that you would have to use an OS which doesn't have it (Windows XP or Linux).
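To make the polling idea concrete, here is a toy Python sketch of my own (an illustration of the general technique, not GPUGrid code) contrasting the two waiting strategies:

    # sync_wait_demo.py - toy illustration only (not GPUGrid code) of the two
    # ways an application can wait for the GPU to finish a chunk of work.
    import threading
    import time

    def wait_event_driven(done: threading.Event) -> None:
        # Sleep until the driver/OS signals completion. Cheap on the CPU,
        # but the wake-up path itself adds latency (the part that WDDM
        # overhead makes worse).
        done.wait()

    def wait_by_polling(done: threading.Event) -> None:
        # swan_sync-style busy wait: keep asking "are you done yet?".
        # Occupies a full CPU core, but the app notices completion almost
        # immediately, so the GPU gets its next chunk of work sooner.
        while not done.is_set():
            pass

    if __name__ == "__main__":
        done = threading.Event()
        threading.Timer(0.5, done.set).start()   # stand-in for a GPU kernel finishing after 0.5 s
        t0 = time.perf_counter()
        wait_by_polling(done)                    # swap in wait_event_driven(done) to compare
        print(f"noticed completion after {time.perf_counter() - t0:.4f} s")

The busy-wait version is why swan_sync makes the app occupy a full CPU core: that core spends its time asking the GPU whether it is done instead of sleeping until it is told.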
ID: 41180
Good info, Retvari!
ID: 41183
The WDDM issue appears to be exacerbated on bigger (more powerful) and newer NVIDIA GPUs. However, it may be that the apps cannot yet fully exploit the potential of the more recent GPU architectures (or at least not for some task types; we know performance on the latest NVIDIA generation of GPUs varies with task type). It would be interesting to see whether there is any gain from running 2 WUs on XP x64, Windows Server 2003 R2, or Linux. If there is, assumptions would need to be rewritten for the GTX 900 generation.
ID: 41187
More great points, skgiven!
ID: 41189