Message boards : Graphics cards (GPUs) : GTX 960
ID: 39360 | Rating: 0 | rate: / Reply Quote | |
My guess is that a GTX 960 (GM206) will turn up towards the end of January. | |
ID: 39370 | Rating: 0 | rate: / Reply Quote | |
Count another guess for a 192-bit memory bus with 3 GB GDDR5. | |
ID: 39375 | Rating: 0 | rate: / Reply Quote | |
A GM206 might be a 1024-core/8SMM part, or... the GTX960 could possibly be a 10SMM variant of the GTX970M (GM204/1280 CUDA/80 TMU/48 ROP/192-bit/3GB), or a GTX960Ti variant of the GTX980M (GM204/1536 CUDA/96 TMU/64 ROP/256-bit/4GB). | |
ID: 39379 | Rating: 0 | rate: / Reply Quote | |
I was going by this Zauba shipment,
https://www.zauba.com/import-GTX960-hs-code.html | |
ID: 39381 | Rating: 0 | rate: / Reply Quote | |
How would two GTX 960s perform against a GTX980? If the price, performance and energy efficiency can match (using just one 6/8-pin connector?) with some overclocking room, I'm in. | |
ID: 39382 | Rating: 0 | rate: / Reply Quote | |
Until we see the actual specifications we can only speculate. A 128bit bus or a boost cap could cripple it as an SP compute card, or it could come with a 256bit bus, 3 or 4GB RAM and boost really well. A dud or a lean mean crunching machine... | |
ID: 39383 | Rating: 0 | rate: / Reply Quote | |
I agree: that Zauba shipment from last fall was probably an engineering sample based on GM204, hence the 4 GB and 256-bit bus. It probably represents the performance nVidia was targeting for the GTX960 back then. The unusually low memory clock could well simulate highly clocked GDDR5 on a 192-bit bus. | |
ID: 39384 | Rating: 0 | rate: / Reply Quote | |
128bit bus confirmed for "GTX960" with decent clocks. Also- GTX960ti and GTX 965ti variants will be released soon after. | |
ID: 39393 | Rating: 0 | rate: / Reply Quote | |
128 bit is barely enough for GM107 - things are certainly getting interesting! It could also be misinformation from those shops, or speculation on their part. nVidia can implement 192-bit buses with 2 and 4 GB, as they did with the GTX660Ti. | |
ID: 39395 | Rating: 0 | rate: / Reply Quote | |
Today at CES, Aorus announced a forthcoming (2nd Q) laptop with dual GTX965M GPUs. It comes with a 15.6" 3840*2160 (4k) screen. Basically it's a £2K games console! | |
ID: 39403 | Rating: 0 | rate: / Reply Quote | |
Wow, they're cutting GM204 in half? That's a seriously expensive proposition (for nVidia). I suppose this 965M will transition to GM206 as quickly as possible. | |
ID: 39405 | Rating: 0 | rate: / Reply Quote | |
Yes, that would be half the cuda cores and half the bus of a 980, so I expect performance to be around half. The 760 had half the cores of a 780, but used the same bus width. | |
ID: 39406 | Rating: 0 | rate: / Reply Quote | |
Well, the GTX960 WILL be a 2GB, 128-bit, 1177-1240MHz reference-boost GPU with 3500MHz/7GHz GDDR5:

8-Jan-2015 | HS Code 84733030 | GTX 960 2GB DDR5 128BIT 1177-1240/7010 HDCP - ZT-90301-10M (PCI, PCB POPULATED, VGA CARD, COMPUTER ACCESS) | Origin: China | Port of Discharge: Delhi Air Cargo | Quantity: 160 NOS | Value: 2,290,154 INR | Per Unit: 14,313 INR
https://www.zauba.com/import-gtx960-hs-code.html

Looks like a ZOTAC card, so possibly not reference GPU clocks. No messing this time - 160 units sent! Cost: INR 14,313 = £150, $226, €191, excluding import duty & VAT. Performance should be ~half that of a GTX980, possibly slightly more if it boosts higher (assuming no boost lock). ____________ FAQ's HOW TO: - Opt out of Beta Tests - Ask for Help | |
ID: 39435 | Rating: 0 | rate: / Reply Quote | |
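As a minimal sketch, the per-unit conversion in the post above works out as follows. The exchange rates here are assumed January-2015 values, not figures from the manifest itself, and retail pricing would add import duty and VAT on top.

```python
# Converting the Zauba manifest's per-unit value into other currencies.
# The exchange rates are assumed January-2015 figures (INR per unit of
# each currency), not data from the manifest.

inr_per_unit = 14_313

assumed_rates = {"GBP": 95.4, "USD": 63.3, "EUR": 74.9}

for currency, inr_per_ccy in assumed_rates.items():
    # Round to whole units, matching the post's £150 / $226 / €191
    print(f"{currency}: {inr_per_unit / inr_per_ccy:,.0f}")
```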
With 1024 cores its performance, if it scales, might be around that of a GTX 660Ti or a GTX 670, but for a third less power (or better). GTX960 benchmarks (3DMark Fire Strike) have appeared. If they're not fake, the GTX960's [1024 CUDA/64? TMU/32 ROPs] performance is near the GTX770's. Certain boards will feature a single 6- or 8-pin connector for the possibly 90-125W GTX960. http://wccftech.com/nvidia-geforce-gtx-960-reference-overclocked-performance-revealed-performs-slightly-faster-radeon-r9-280/ | |
ID: 39477 | Rating: 0 | rate: / Reply Quote | |
An ASUS OC version has since appeared on a shipping manifest: | |
ID: 39499 | Rating: 0 | rate: / Reply Quote | |
STRIX GTX970/980 cards sell really well, and their fan control design is clever. EVGA's non-blower models also stop spinning below 60°C. My guess is that a reference-design version would be somewhere around 95W. I fully expect these to OC-boost to 1350MHz+ and then some; 1500MHz might well be true, but there we will have to wait and see. Yes - with Ti variants filling in the ~100-145W gap. Most GTX970/980 cards have a 125% power limit (Zotac is limited to 111% on their overclocked GPUs). If the GTX960 (full GM206?) is rated at 75-100W with a 125% power limit, 1500MHz is very possible. Full dies always overclock better than cut-downs. Example: the GK110 GTX780 can't reach GTX780Ti speeds, nor the GTX760 GTX770 speeds. Maxwell GTX980s have higher reference clocks than the GTX970. | |
ID: 39501 | Rating: 0 | rate: / Reply Quote | |
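The power-limit arithmetic in the post above can be sketched like this. The TDP values and limit percentages are the speculated/quoted figures from the post, not confirmed specifications.

```python
# How a board's power-limit slider bounds sustained board power.
# TDP and limit values below are speculation from the forum post.

def max_board_power_w(tdp_w, limit_percent):
    """Highest sustained draw before the card throttles back."""
    return tdp_w * limit_percent / 100.0

# Speculated full-GM206 GTX960 at a 100 W TDP:
print(max_board_power_w(100, 125))   # generic 125% limit -> 125.0 W
print(max_board_power_w(100, 111))   # Zotac's 111% cap   -> 111.0 W
```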
GTX 960 AMPI EDITION 2GB DDR5 128BIT 1266-1329/7010 HDCP - ZT-90303-10M (PCI,PCB POPULATED,VGA CARD,COMPUTER ACCESS) | |
ID: 39517 | Rating: 0 | rate: / Reply Quote | |
"Full dies always overclock better than a cut-down." Empirically this is correct, but with everything else being equal a full die would clock worse, simply because it produces more heat. And the cut-down version can have its slowest part disabled, which can yield more frequency headroom. The reason you're seeing the premium cards clock higher is that chips which clock better are more likely to be promoted to premium cards (if there are no defects). MrS ____________ Scanning for our furry friends since Jan 2002 | |
ID: 39545 | Rating: 0 | rate: / Reply Quote | |
ID: 39566 | Rating: 0 | rate: / Reply Quote | |
"Full dies always overclock better than a cut-down." It is a bit of a puzzle why the cut-down chips don't do better. I expect it is because they disable portions of the active circuitry that are not functioning correctly, but leave behind the clock lines. That just maintains the capacitive load without the means to drive it at full speed. | |
ID: 39567 | Rating: 0 | rate: / Reply Quote | |
http://www.tomshardware.com/reviews/nvidia-geforce-gtx-960,4038-8.html Includes PCIe/PEG measurements and power targets (120-160W) for most GTX960 boards. EVGA provides an 8-pin connector, the Gigabyte G1 has two 6-pin, and the remaining boards have one 6-pin. Looking at Guru3D's thermal shots, Galax has the coolest VRM and core temps. Newegg lists the GTX960 at $199-209. | |
ID: 39577 | Rating: 0 | rate: / Reply Quote | |
"It is a bit of a puzzle why the cut-down chips don't do better. I expect it is because they disable portions of the active circuitry that are not functioning correctly, but leave behind the clock lines. That just maintains the capacitive load without the means to drive it at full speed." Well, I'm fine with the explanation given in my last post. The point you're making here has one weak spot: we're talking about many millions of transistors. If the capacitance of fused-off transistors were to hamper the performance of transistors in a neighbouring SM, we'd have a huge capacitive problem. At usual transistor densities (as close as possible) the capacitive load would be prohibitively high [if this were true], and our entire approach to chip design, scaling and technology development would have to change. Luckily it's not that bad :) On topic: it will be interesting to see how the GTX960 actually performs here. 8/5, or 60% better than a GTX750Ti, is expected from the shader count, but the memory speed doesn't scale as well as the crunching power (only 30% higher bandwidth, due to clock speed). This difference doesn't sound dramatic, so we may well see performance in the range of 50-60% higher. MrS ____________ Scanning for our furry friends since Jan 2002 | |
ID: 39702 | Rating: 0 | rate: / Reply Quote | |
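The 50-60% estimate above can be sketched as a blend of compute and bandwidth scaling. The `bandwidth_weight` knob is a hypothetical assumption about how bandwidth-bound the workload is, not a measured figure, and the 5.4 GHz effective memory clock is the GTX750Ti's reference value.

```python
# Rough scaling estimate for GTX960 vs GTX750Ti, based on the figures in
# the post above: 1024 vs 640 CUDA cores (8/5 = 60% more compute) and
# 7.0 vs 5.4 GHz effective memory clock on the same 128-bit bus (~30%
# more bandwidth). The blend factor is a guess, not a measurement.

def blended_speedup(compute_ratio, bandwidth_ratio, bandwidth_weight=0.3):
    """Geometric blend: speedup = compute^(1-w) * bandwidth^w."""
    return (compute_ratio ** (1.0 - bandwidth_weight)
            * bandwidth_ratio ** bandwidth_weight)

compute = 1024 / 640    # 1.60x the shaders
bandwidth = 7.0 / 5.4   # ~1.30x the bandwidth

est = blended_speedup(compute, bandwidth)
print(f"compute ratio:    {compute:.2f}x")
print(f"bandwidth ratio:  {bandwidth:.2f}x")
print(f"blended estimate: {est:.2f}x")  # falls in the 1.5-1.6x range
```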
"The point you're making here has one weak spot: we're talking about many millions of transistors. If the capacitance of fused-off transistors were to hamper the performance of transistors in a neighbouring SM, we'd have a huge capacitive problem." No, I am referring to the "clock lines": the metallic conductors that carry the clock signals across the chip. They would not be so easy to disconnect when you do a chip repair; it is easier just to turn off transistors, and so the clock lines might be left in place. They would then be a large load on the clock drivers that are left operational. That is a bit of speculation, of course; it could have to do with various other portions of the circuitry, but it appears to be something basic. Nvidia would not want to lose performance on the cut-down chips if they didn't have to. | |
ID: 39705 | Rating: 0 | rate: / Reply Quote | |
Ah, now I get your point. Power consumption of the clock signal is indeed a serious issue in modern chips. But in recent years there has often been talk of better "clock gating" when new chips were presented. I always understood this as not delivering the clock signal to regions of the chip which are currently not in use, i.e. power-gated. If they can do this, they can also clock-gate deactivated SMMs. | |
ID: 39709 | Rating: 0 | rate: / Reply Quote | |
"I always understood this as not delivering the clock signal to regions of the chip which are currently not in use, i.e. power-gated. If they can do this, they can also clock-gate deactivated SMMs." Possibly so. But there are clock lines and then there are clock lines. Some are "local", which would be easier to turn off, and some are "global", which might not be. And some would be intermediate between the two. What you do for repair is probably different from what you do in normal operation, but beyond that is beyond the scope of this discussion, I am sure. | |
ID: 39710 | Rating: 0 | rate: / Reply Quote | |
My GTX 980 was getting too noisy for me, so I swapped it out for an EVGA GTX 960 SSC (mainly to test the ACX 2.0+ cooler). The money I've saved will probably go towards a Titan 2, or I might just stick with the 960 until Pascal comes out. | |
ID: 39827 | Rating: 0 | rate: / Reply Quote | |
Quickly looking at your results shows the GTX960 card being a bit faster than half the performance of your GTX980 - which is nice, since it's only got half the raw power. But saving only 45 W? Given the TDPs this number is plausible, but is that worth it? You could have simply lowered the power target on the GTX980 or lowered the fan speed, which would have made it boost less while staying at 80°C. | |
ID: 39943 | Rating: 0 | rate: / Reply Quote | |
Hi Extra, | |
ID: 39992 | Rating: 0 | rate: / Reply Quote | |
Thanks for the more precise numbers, Dave! | |
ID: 40006 | Rating: 0 | rate: / Reply Quote | |
The only valid way to measure power draw is to measure it. Use a power meter such as the very affordable Kill-a-watt or equivalent. The theories are fine but until you measure it you just don't know. | |
ID: 40039 | Rating: 0 | rate: / Reply Quote | |
Huh? I did measure power draw. | |
ID: 40047 | Rating: 0 | rate: / Reply Quote | |
I know, was referring to all the theory going on elsewhere. Sorry for the misunderstanding, I should have been more clear. Theorizing is fine to a point but actual measurement such as you did is the only real way to know. I've been fooled more than once by assuming things about power draw and then finding out I was all wet after measuring. | |
ID: 40051 | Rating: 0 | rate: / Reply Quote | |
You appear to be crunching Einstein on your CPU. Huh? I did measure power draw. ____________ FAQ's HOW TO: - Opt out of Beta Tests - Ask for Help | |
ID: 40056 | Rating: 0 | rate: / Reply Quote | |
"You appear to be crunching Einstein on your CPU." Well - universe, space. Look at the stars on a clear night ;) Funny you should ask, though, because in fact I stopped crunching E@H and switched my CPU to Rosetta@home. R@H seems to need all the CPU power it can get. | |
ID: 40126 | Rating: 0 | rate: / Reply Quote | |
"You appear to be crunching Einstein on your CPU." I think the subject of skgiven's question was not your motivation for crunching Einstein@Home, but the reason for the power drop you've measured. That reason is that the other projects' applications could not utilize the latest GPUs as much as GPUGrid does (regardless of any GPU utilization readings from different tools), partly because the other projects are using older CUDA versions. | |
ID: 40127 | Rating: 0 | rate: / Reply Quote | |
Oh, no, the power drop is real. Everything else stayed the same. All I did was swapping out the GTX 980 for a 960. The CPU utilization remained unchanged. | |
ID: 40133 | Rating: 0 | rate: / Reply Quote | |
The power drop from 320W to 275W is simply explained by the GPU TDP/usage drop: 165W to 120W is 45W. | |
ID: 41428 | Rating: 0 | rate: / Reply Quote | |
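The arithmetic above is trivial, but worth spelling out, since everything else in the host stayed the same between the two measurements:

```python
# Measured wall-power drop after swapping the GTX980 for a GTX960,
# compared with the reference TDP difference of the two cards. All
# other components stayed the same, so the deltas should line up.

wall_before_w = 320   # measured at the wall with the GTX980 installed
wall_after_w = 275    # measured at the wall with the GTX960 installed
tdp_gtx980_w = 165    # nVidia reference TDP
tdp_gtx960_w = 120    # nVidia reference TDP

measured_drop = wall_before_w - wall_after_w
tdp_drop = tdp_gtx980_w - tdp_gtx960_w

print(f"measured drop: {measured_drop} W")   # 45 W
print(f"TDP drop:      {tdp_drop} W")        # 45 W
```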
Every time I peek at my 960 it's doing a short run - is it just me? | |
ID: 41648 | Rating: 0 | rate: / Reply Quote | |
"Every time I peek at my 960 it's doing a short run - is it just me?" It's probably set in your preferences that you accept work only from the short queue (perhaps it's the default setting and you haven't changed it). | |
ID: 41650 | Rating: 0 | rate: / Reply Quote | |
OK, seems to have been fixed; now, oddly, my 750Ti is not getting anything lol | |
ID: 41898 | Rating: 0 | rate: / Reply Quote | |
It looks like you have it set up to only get short runs and that queue has gone dry - have you tried it on longs? | |
ID: 41900 | Rating: 0 | rate: / Reply Quote | |