
Message boards : Graphics cards (GPUs) : 780Ti vs. 770 vs. 750Ti

tomba
Joined: 21 Feb 09
Posts: 497
Credit: 700,690,702
RAC: 0
Message 41389 - Posted: 23 Jun 2015 | 16:03:09 UTC

I am doing an analysis of my three GPUs, comparing their credit delivery performance, one against the other.

1. All WUs completed within 24 hours. That's why the 750Ti does not appear in the Gerard numbers(!)
2. The Gerards are from 23 April, when I installed the 780Ti, and the Noelias are from 10 June.
3. Any difference in % improvement between the 'ETQunbound' and '467x' Noelias is marginal.
4. There are fewer 780Ti and 750Ti WUs than you might expect. But these two GPUs are on the same rig, and I have been sharing the Gerard loads between them so that all WUs complete inside 24 hours. None of these WUs are in the analysis.

My conclusions?

• The 780Ti is only around 33% better than the 770 for GPUGrid. Big surprise, given the price differential.
• 2x750Ti deliver more credits than one 780Ti, provided they can complete in under 24 hours; i.e., no recent Gerards! I've not done the sums, but the price difference is staggering.
• Perhaps the 'best bang for the buck' is an external device that will support many 750Ti GPUs, if Gerard can be persuaded to moderate his processing demands... Is there such a thing??


Jim1348
Joined: 28 Jul 12
Posts: 819
Credit: 1,591,285,971
RAC: 0
Message 41390 - Posted: 23 Jun 2015 | 17:19:14 UTC - in response to Message 41389.

• 2x750Ti deliver more credits than one 780Ti, provided they can complete in under 24 hours; i.e., no recent Gerards! I've not done the sums, but the price difference is staggering.
• Perhaps the 'best bang for the buck' is an external device that will support many 750Ti GPUs, if Gerard can be persuaded to moderate his processing demands... Is there such a thing??

It is quite possible for the GTX 750 Tis to complete the Gerards reliably in under 24 hours, but there are a few tricks involved.

The GTX 750 Tis are not just good, they are great for efficiency, which is especially welcome during the summer.

tomba
Joined: 21 Feb 09
Posts: 497
Credit: 700,690,702
RAC: 0
Message 41391 - Posted: 23 Jun 2015 | 17:52:25 UTC - in response to Message 41390.


It is quite possible for the GTX 750 Tis to complete the Gerards reliably in under 24 hours

Thanks for the response, Jim.

I see you are able to do that, but my problem is my Internet connection. I'm 3 km from the telephone exchange and everyone in the village is Netflix-ing! My best connection speed is 2 Mbps; often it's 0.5 Mbps. It can take three hours to upload a 90 MB Gerard result.

Not sure why GPUGrid penalises me for having a poor connection, but that's the way it is!!

tomba
Joined: 21 Feb 09
Posts: 497
Credit: 700,690,702
RAC: 0
Message 41392 - Posted: 23 Jun 2015 | 18:51:52 UTC - in response to Message 41391.

Not sure why GPUGrid penalises me for having a poor connection, but that's the way it is!!

Is it too much to ask that the project give credit based on WU processing time rather than sent/received time?

Retvari Zoltan
Joined: 20 Jan 09
Posts: 2343
Credit: 16,206,655,749
RAC: 261,147
Message 41393 - Posted: 23 Jun 2015 | 22:32:54 UTC - in response to Message 41392.
Last modified: 23 Jun 2015 | 22:34:58 UTC

Not sure why GPUGrid penalises me for having a poor connection, but that's the way it is!!

Is it too much to ask that the project give credit based on WU processing time rather than sent/received time?

From the project's point of view, the reason for a task's delayed return is irrelevant.
How could the project tell from the processing time alone that your host missed the bonus deadline because of a slow internet connection, and not because the GPU was offline or crunching something else?
Besides, if you know your internet connection is slow and you don't want to miss the bonus deadline, you should choose your GPU with both constraints in mind (e.g. if you can't get a faster internet connection, then you have to buy a faster GPU, a GTX960 for example).
From the project's side, a workable solution would be to assign the bonus to the host rather than to the workunit itself: if the host returned its previous workunit within 24 hours, the current workunit would gain the +50% bonus credit. In that case a host with one slow and one fast GPU, or with two almost-fast-enough GPUs, would gain the +50% bonus for all workunits.
But this method does not reflect the way this project works:
a given simulation consists of a series of workunits, each one continuing the work of the previous one, so from the project's point of view it is better if a workunit is returned as fast as possible, letting the whole simulation finish as fast as possible. The current bonus method therefore serves the project's goals better than your (or my) suggestion.
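
To make the comparison concrete, here is a minimal sketch of the two rules (illustrative function names only, not the project's actual code; the +50%/24h and +25%/48h figures are the bonus tiers discussed in this thread):

    from datetime import timedelta

    DAY = timedelta(hours=24)
    TWO_DAYS = timedelta(hours=48)

    def per_workunit_bonus(turnaround: timedelta) -> float:
        """Current rule: the bonus depends on this workunit's own return time."""
        if turnaround <= DAY:
            return 1.50   # +50% for returning within 24 hours
        if turnaround <= TWO_DAYS:
            return 1.25   # +25% for returning within 48 hours
        return 1.00

    def per_host_bonus(previous_turnaround: timedelta) -> float:
        """Proposed rule: the bonus depends on the host's previous workunit."""
        return 1.50 if previous_turnaround <= DAY else 1.00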

[CSF] Thomas H.V. DUPONT
Joined: 20 Jul 14
Posts: 732
Credit: 126,845,366
RAC: 190,805
Message 41394 - Posted: 24 Jun 2015 | 8:19:29 UTC - in response to Message 41389.

Thanks for this very interesting report, tomba, and thanks for sharing.
Really appreciated :)
____________
[CSF] Thomas H.V. Dupont
Founder of the team CRUNCHERS SANS FRONTIERES 2.0
www.crunchersansfrontieres

eXaPower
Joined: 25 Sep 13
Posts: 293
Credit: 1,897,601,978
RAC: 0
Message 41396 - Posted: 24 Jun 2015 | 16:35:53 UTC - in response to Message 41394.

Thanks for this very interesting report, tomba, and thanks for sharing.
Really appreciated :)

+1

tomba
Joined: 21 Feb 09
Posts: 497
Credit: 700,690,702
RAC: 0
Message 41397 - Posted: 24 Jun 2015 | 17:34:18 UTC - in response to Message 41394.

Thanks for this very interesting report, tomba, and thanks for sharing.
Really appreciated :)

My pleasure, Thomas!

Greetings from the woods, 3 km from La Garde Freinet. Its Netflixing inhabitants are killing my Internet service!!

tomba
Joined: 21 Feb 09
Posts: 497
Credit: 700,690,702
RAC: 0
Message 41398 - Posted: 24 Jun 2015 | 19:15:59 UTC - in response to Message 41389.

Perhaps the 'best bang for the buck' is an external device that will support many 750Ti GPUs. Is there such a thing??

No takers on this thought, but perhaps there are mobos that will take three (four?) double-width 750Ti GPUs??

skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 41402 - Posted: 25 Jun 2015 | 15:04:50 UTC - in response to Message 41389.
Last modified: 25 Jun 2015 | 15:53:47 UTC

I put a GTX750Ti into a Q6600 system (DDR3). At stock, and running 2 CPU tasks, it would take between 26 and 27h to complete one of Gerard's long tasks; the 750Ti's GPU utilization was around 90%.
I enabled SWAN_SYNC and rebooted, reduced the CPU usage to 50% (to still run one CPU task), overclocked to 1306/1320MHz (it bounces around), and the GPU utilization rose to 97%.
GPU-Z says it's in a PCIe 1.1 x16 slot using an x4 PCIe bus.
Going by the % progress, the task should now finish under 23h.
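
For anyone wanting to check that the variable took effect, a minimal sketch (assumptions: BOINC picks up the system environment after a reboot, and the variable name SWAN_SYNC is as used in this thread; whether the app wants the value 0 or 1 has varied between app versions, so check the FAQ):

    # Print what a freshly started process sees; if this shows "not set",
    # the variable was not exported to the system environment.
    import os
    print("SWAN_SYNC =", os.environ.get("SWAN_SYNC", "not set"))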

tomba
Joined: 21 Feb 09
Posts: 497
Credit: 700,690,702
RAC: 0
Message 41403 - Posted: 25 Jun 2015 | 17:57:07 UTC - in response to Message 41402.

Thanks for the reply, skgiven!

Enabled SWAN_SYNC and rebooted

Did that. Task Manager now tells me that my two acemd.847-65 tasks are running at 100%.

overclocked to 1306/1320MHz

Now I'm in trouble... Which sliders do I use to get to 1306/1320MHz ??

eXaPower
Joined: 25 Sep 13
Posts: 293
Credit: 1,897,601,978
RAC: 0
Message 41404 - Posted: 25 Jun 2015 | 18:21:27 UTC - in response to Message 41403.
Last modified: 25 Jun 2015 | 18:53:16 UTC

Which sliders do I use to get to 1306/1320MHz ??

"GPU clock offset" slider. Boost bins are in 13MHz intervals.

Begin by raising one bin at a time until you reach 1306 or 1320.

At your current 1.2V/1150MHz clock, +156MHz on the GPU clock offset slider comes to 1306MHz, a total of 12 boost bins; one more bin puts you at ~1319MHz. If you stay under 80C the boost clock should hold there. If not, the clocks will fluctuate a few bins; this is normal. Unless the EVGA program is misreading the voltage, 1.2V/1150MHz might not leave a lot of headroom for an overclock.
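
The bin arithmetic, as a quick sketch (assuming the 13MHz bin size above; BIN_MHZ and BASE_MHZ are just illustrative names):

    BIN_MHZ = 13      # one boost bin
    BASE_MHZ = 1150   # your current boost clock

    def clock_after(bins: int) -> int:
        """Boost clock after raising the GPU clock offset by `bins` bins."""
        return BASE_MHZ + bins * BIN_MHZ

    print(clock_after(12))  # 1306 MHz (+156 on the offset slider)
    print(clock_after(13))  # 1319 MHz, i.e. ~1320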

tomba
Joined: 21 Feb 09
Posts: 497
Credit: 700,690,702
RAC: 0
Message 41411 - Posted: 26 Jun 2015 | 16:41:36 UTC - in response to Message 41403.

Enabled SWAN_SYNC and rebooted.

Did that. Task Manager now tells me that my two acemd.847-65 tasks are running at 100%.

Looks like I gain 20 mins on a Noelia and 35 mins on a Gerard.

Worthwhile! Thank you.

tomba
Joined: 21 Feb 09
Posts: 497
Credit: 700,690,702
RAC: 0
Message 41413 - Posted: 26 Jun 2015 | 17:11:28 UTC - in response to Message 41404.

Which sliders do I use to get to 1306/1320MHz ??

"GPU clock offset" slider. Boost bins are in 13MHz intervals. Begin by raising one bin at a time until you reach 1306 or 1320.

Thanks for the response, eXaPower!

Did that. Pushed it up to 1276 for a temp around 70C and no additional fan noise, where the ambient is 25C:

[screenshot: GPU clock settings]

A bit puzzled why GPU-Z says I'm only at 1147...

[screenshot: GPU-Z main tab]

At your current 1.2V/1150MHz clock, +156MHz on the GPU clock offset slider comes to 1306MHz, a total of 12 boost bins; one more bin puts you at ~1319MHz. If you stay under 80C the boost clock should hold there. If not, the clocks will fluctuate a few bins; this is normal. Unless the EVGA program is misreading the voltage, 1.2V/1150MHz might not leave a lot of headroom for an overclock.

Not sure what you're saying here. Was I already at 1320??

skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 41415 - Posted: 26 Jun 2015 | 18:37:39 UTC - in response to Message 41413.

A bit puzzled why GPU-Z says I'm only at 1147...


1147MHz is the clock without boost. Click the GPU-Z Sensors Tab to see what it actually is.

tomba
Joined: 21 Feb 09
Posts: 497
Credit: 700,690,702
RAC: 0
Message 41417 - Posted: 27 Jun 2015 | 6:08:14 UTC - in response to Message 41415.

A bit puzzled why GPU-Z says I'm only at 1147...

1147MHz is the clock without boost. Click the GPU-Z Sensors Tab to see what it actually is.

Thanks skgiven! Yep - the sensors tab shows 1276. So much to learn...

tomba
Joined: 21 Feb 09
Posts: 497
Credit: 700,690,702
RAC: 0
Message 41419 - Posted: 27 Jun 2015 | 8:57:22 UTC - in response to Message 41402.

Enabled SWAN_SYNC

As previously reported, I did that, but...

This WU just finished. The top entry, for my 750Ti, says "SWAN Device 1". Lower down, for my 780Ti, it says "SWAN Device 0". The 750Ti is the one driving video.

Is this the way it is, or is there something else to do?


skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 41420 - Posted: 27 Jun 2015 | 11:33:27 UTC - in response to Message 41419.
Last modified: 27 Jun 2015 | 11:34:32 UTC

The tasks will report SWAN Device 0 or SWAN Device 1 irrespective of the SWAN_SYNC setting. AFAIK it affects both cards the same (after a reboot).

Noticed that your 750Ti temperature crept up to 82C:

# GPU 1 : 79C
# GPU 1 : 80C
# GPU 0 : 69C
# GPU 1 : 81C
# GPU 0 : 70C
# GPU 0 : 71C
# GPU 1 : 82C

Suggest you reduce your temp target a bit.

skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 41422 - Posted: 27 Jun 2015 | 13:18:56 UTC - in response to Message 41398.
Last modified: 27 Jun 2015 | 13:22:51 UTC

Perhaps the 'best bang for the buck' is an external device that will support many 750Ti GPUs. Is there such a thing??

No takers on this thought, but perhaps there are mobos that will take three (four?) double-width 750Ti GPUs??

There are such things, but they are expensive, there is a performance loss, and they are not necessary.
An MSI Z77A-G45 (and many similar boards) has 3 x16 PCIe slots, and you could additionally use up to 3 of the PCIe x1 slots (albeit at a performance loss of 15% or more depending on setup).

After a quick look at an overclocked GTX750Ti on XP, in theory 2 overclocked GTX750Ti’s on a quad core system optimized for GPUGrid could do 12.5% more work than 1 overclocked GTX970. The 750’s would also cost less to buy and less to run.

Two 750Ti’s would cost about £200 new while a 970’s would cost around £250 new.
Second hand £120 to £160 vs £210 to £240; roughly £140 vs £225.

Assuming a 60W system overhead:
The power usage of the 2 750Ti's would be 2*60W+60W=180W.
The power usage of the one 970 would be 145W+60W=205W.
That's 25W less power for 12.5% more work, or a system performance/Watt improvement of 28%.

Does it scale up to 4 750Ti's?

4 750Ti’s would cost about £400 new while the 970’s would cost around £500 new (~£300 vs £450 second hand).
The power usage of the 4 750Ti’s would be 4*60W+60W=300W.
The power usage of the 2 970’s would be 2*145W+60W=350W.
It’s 16.6% less power for 12.5% more work or a system performance/Watt improvement of 31%.

So 12.5% more work, £100 less to buy and 50W less power consumption.
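
The arithmetic above can be checked with a small sketch (the function name is illustrative; throughput is expressed in GTX970-equivalents, and the 60W system overhead is the assumption stated above):

    def perf_per_watt(throughput, gpu_watts, overhead=60.0):
        """System throughput per watt; throughput in GTX970-equivalents."""
        return throughput / (gpu_watts + overhead)

    two_750  = perf_per_watt(1.125, 2 * 60)    # 12.5% more work than one 970
    one_970  = perf_per_watt(1.000, 145)
    four_750 = perf_per_watt(2.250, 4 * 60)
    two_970  = perf_per_watt(2.000, 2 * 145)

    print(f"{two_750 / one_970 - 1:.0%}")    # ~28%
    print(f"{four_750 / two_970 - 1:.0%}")   # ~31%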

On the down side:

If you are hit by the WDDM overhead (Vista, W7, W8, W10, 2008Server and 2012Server…) then you may miss the 24h +50% bonus deadline for returning some tasks (or might just scrape under it). This shouldn't be a problem on XP or Linux with an overclocked GTX750Ti, but at reference clocks and/or on a non-optimized system you would still be looking at 26h+ for some tasks (so you do need to overclock these).
On WDDM systems the GTX970's can run 2 tasks at a time to increase overall credit, effectively negating the theoretical 12.5% improvement of the 750Ti's (I haven't checked whether it is 12.5% on a WDDM system).
I cannot increase the power usage of my GTX750Ti, unlike the 970's.
The 750Ti's might not hold their value as long, and IMO, being smaller cards, they would be more likely to fail.
If the size of tasks increases, there is a likelihood that the 750Ti will no longer return work in time for the full bonus. That said, it should still get the 25% bonus (for reporting inside 48h) and could run short WUs for some time.
While it's no more expensive to get a motherboard with 2 PCIe x16 slots, and some have 3 slots, very few have 4. While in theory you could use an x1 slot with a powered riser, the loss of bus width would reduce performance by more than the 12.5% gain. However, the 750Ti would still be cheaper to buy and run, and you might be able to tune it accordingly; it's also likely to suffer less than a bigger card raised from an x1 slot.

For comparison purposes:
The power usage of a single-750Ti system would be 60W+60W=120W.
As the throughput is only 56.25% of a 970's and the power usage is 58.53%, overall system performance per Watt is a bit less (4% less) than a 970's.
Similarly, in terms of system performance per Watt, a GTX980 is 10.8% better than a system with a single GTX750Ti.
If you compare one GTX980 against 3 GTX750Ti's, you would find that the 3 GTX750Ti's can do about 2.6% more work but use 180W compared to the 165W of the GTX980.
The GTX980 is therefore the better choice in terms of system performance/Watt (by 4%).
However, a new GTX980 still costs £400 while three GTX750Ti's cost £300, and you could pick up 3 second-hand 750Ti's for around £200.

Obviously you could buy a system that uses more or less power, which would change the picture a bit. But basically: if you are only going to get one card, get a bigger one; if you want maximum performance/Watt on the cheap for now, build a system with two, three or four GTX750Ti's on XP or Linux.

Retvari Zoltan
Joined: 20 Jan 09
Posts: 2343
Credit: 16,206,655,749
RAC: 261,147
Message 41424 - Posted: 27 Jun 2015 | 13:54:16 UTC - in response to Message 41422.
Last modified: 27 Jun 2015 | 13:57:29 UTC

On the down side:
...
The 750Ti’s might not hold their value as long and IMO being smaller would be more likely to fail.

I agree only with the first part of this statement.
Larger cards have a higher TDP, resulting in higher temperatures, which can shorten their lifespan.

If the size of tasks increases, there is a likelihood that the 750Ti will no longer return work in time for the full bonus. That said, it should still get the 25% bonus (for reporting inside 48h) and could run short WUs for some time.

That's why I don't recommend the GTX 750Ti for GPUGrid. This card and the GTX 750 are the smallest of the Maxwell series, and now that the GTX 960 is available it is a much better choice, taking all three aspects (speed, price and energy efficiency) into consideration.

tomba
Joined: 21 Feb 09
Posts: 497
Credit: 700,690,702
RAC: 0
Message 41425 - Posted: 27 Jun 2015 | 15:16:22 UTC - in response to Message 41420.

Noticed that your 750Ti temperature crept up to 82C:

# GPU 1 : 79C
# GPU 1 : 80C
# GPU 0 : 69C
# GPU 1 : 81C
# GPU 0 : 70C
# GPU 0 : 71C
# GPU 1 : 82C

Suggest you reduce your temp target a bit.

Thanks for that, eagle-eyed skgiven!

I reduced the target from 80C to 78C and have been running GPU-Z for several hours with GPU Temperature set to Average. The number is 76.9C.

skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 41426 - Posted: 27 Jun 2015 | 18:01:28 UTC - in response to Message 41424.
Last modified: 27 Jun 2015 | 18:04:52 UTC

On the down side:
...
The 750Ti’s might not hold their value as long and IMO being smaller would be more likely to fail.

I agree only with the first part of this statement.
Larger cards have a higher TDP, resulting in higher temperatures, which can shorten their lifespan.

I was speaking from experience with GT240's; the fans failed on most of the cards I went through.
My interpretation of the higher TDPs is that the cards need to be built better (able to actually dissipate 140 or 165W of heat). Not that 165W is a lot; a lot is 230W (770) or 250W (780). In my experience bigger cards can often run at higher temps because they are built better, and they don't fail tasks, but that doesn't mean they should be run hot. Having a higher TDP doesn't necessarily mean higher temps: GPU's with better cooling stay cooler no matter what the TDP is, and that's down to the manufacturers (with the exception of reference designs).

I found that my 650Ti Boost (134W) and 660 cards couldn't handle the heat too well, and these were closer to the 120W TDP of the 960.

If the size of tasks increases, there is a likelihood that the 750Ti will no longer return work in time for the full bonus. That said, it should still get the 25% bonus (for reporting inside 48h) and could run short WUs for some time.

That's why I don't recommend the GTX 750Ti for GPUGrid. This card and the GTX 750 are the smallest of the Maxwell series, and now that the GTX 960 is available it is a much better choice, taking all three aspects (speed, price and energy efficiency) into consideration.

The 960 has a couple of pluses (no issue returning in time, newer generation), but I don't have a GTX960 and have not seen any significant reports of one (actual power usage, temps, OC's, optimized runtimes/performance). It might be a better option for some, but I don't like the Watts it pulls for a 1024-shader GPU; a GTX970 is 25% more efficient and a GTX980 46% (GFlops/W SP), and they are quite pricey too. The cheapest new is £150.

The 750Ti is GM107, whereas the 960 is GM206 and the 970/980 are GM204. The GM206 is very much a medium-sized chip.

eXaPower
Joined: 25 Sep 13
Posts: 293
Credit: 1,897,601,978
RAC: 0
Message 41427 - Posted: 27 Jun 2015 | 18:17:07 UTC - in response to Message 41422.

I cannot increase the power usage of my GTX750Ti, unlike the 970’s.

A custom vBIOS could help (Maxwell BIOS Tweaker and NVFlash). I would always hit the power limit when clocks were around 1320MHz on GERARDs, so I flashed. A lot of 750-series cards are set to a 38.5W hard limit; 60W is 157% of the original BIOS power cap.

Second hand £120 to £160 vs £210 to £240; roughly £140 vs £225.

Secondhand custom-PCB 970's are commanding nearly 100% of their MSRP in the States, while typical resale 970's go for 80-90%.

Zoltan wrote:
That's why I don't recommend the GTX 750Ti for GPUGrid. This card and the GTX 750 are the smallest of the Maxwell series, and now that the GTX 960 is available it is a much better choice, taking all three aspects (speed, price and energy efficiency) into consideration.

I agree: a 960 can operate at 1.5GHz, while 1.4GHz is considered a great overclock for the 750's.

A caveat when comparing hosts with fast 750's: they deliver up to ~75% of a 960's output. The 4/5 SMM vs. 8 SMM scaling has yet to show a ~50% decrease in runtime. This might be due to the 960 having 1024 CUDA cores with 1024KB of L2 cache (a 1/1 core-to-cache-KB ratio), while a 750 has a 1/4 ratio (512 CUDA cores with 2048KB of L2). The 960's core-to-128bit-bandwidth ratio could also be an issue in itself. I've yet to see a 960 post an ACEMD runtime 50% better than the 750 series (please correct me if I'm mistaken); an app update could change this.

And thanks to skgiven for the informative posts.

On another note, GM107 owners with a custom vBIOS: don't always trust what the temp target/overclock setting says in a non-controlled environment.

Last night a powerful sea breeze developed, dropping ambient temps from 85F to 60F (my systems sit in a sun-blasted room with five windows). During a 20hr run my 750 overclocked itself past 1438MHz without my doing anything, producing a GERARD error. Before I realized this behaviour, a NOELIA_ETQ completed okay at 1425MHz with no simulation error messages; another GERARD WU down-clocked to 405MHz while at 1425MHz, with no sim error but a CUDA error at 1425.

When the GPU core dropped to 70C, the OC crept higher and higher on its own. The cool, dense 53F sea air has the 750 at 70C when the temp target is 80C, allowing the boost bins to go past the designated 1412MHz. Voltage is at the standard 1.2V; it can be raised by 0.025V, but without water cooling I won't risk it.

After a reset, I set a new overclock of 1359MHz with an 80C target (since it is really cool at the moment); the boost currently sits at 1346/1359MHz, which is completely stable for GERARDs and NOELIAs. My former NVIDIA Inspector overclock was 1412MHz (80C temp target) as a counter to the hot ambient, which would drop the clock a few boost bins; the 80C temp target is fighting heat (water cooling will fix this). During hot days the clocks mostly bounce around 1333-1372MHz (the lowest is 1293MHz/1.081V) if ambient is near 100F.

My GM107 hard limit seems to be 1412MHz for GERARDs and 1425MHz for NOELIAs. Anything above 1400MHz is considered golden for GM107. If temps are below target and a GPU isn't hitting its power limit, all Maxwells except GM107 might be able to run ACEMD stable at 1.5GHz. 1.5GHz on GM200 has yet to be proven at ACEMD, but once I get one that will be my goal, hopefully long-term stable. A user reported 1465MHz, the highest known GM200 ACEMD clock so far.

skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 41429 - Posted: 27 Jun 2015 | 22:47:27 UTC - in response to Message 41427.

1.5GHz sounds good, but with only 1024KB of L2 cache and a 128bit bus the 960 is badly bottlenecked, and the high frequency comes at the cost of power.

Lots of good posts and valid opinions in this thread.

robertmiles
Joined: 16 Apr 09
Posts: 503
Credit: 755,070,933
RAC: 197,388
Message 41430 - Posted: 28 Jun 2015 | 4:09:59 UTC - in response to Message 41427.
Last modified: 28 Jun 2015 | 4:12:04 UTC

I cannot increase the power usage of my GTX750Ti, unlike the 970’s.

A custom vBIOS could help (Maxwell BIOS Tweaker and NVFlash). I would always hit the power limit when clocks were around 1320MHz on GERARDs, so I flashed. A lot of 750-series cards are set to a 38.5W hard limit; 60W is 157% of the original BIOS power cap.

Where do I get a custom vBIOS or the software for building one? It might help with my problems trying to install a GTX 750 Ti in one of my computers: apparently the standard BIOS blocks all brands of GTX 750 Ti through some problem that prevents the boot procedure from paying any attention to the keyboard when a GTX 750 Ti is installed. I've already tried two different brands; both block the boot procedure at the point where keyboard input is needed to go any further.

I've already found and tried what appears to be the latest BIOS update for that computer model; no difference.

skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 41435 - Posted: 28 Jun 2015 | 8:59:00 UTC - in response to Message 41430.

Robert, the custom vBIOS referred to below is for the GPU not the motherboard, so it won't help you out. Your only solution is a different motherboard/system.

robertmiles
Joined: 16 Apr 09
Posts: 503
Credit: 755,070,933
RAC: 197,388
Message 41438 - Posted: 28 Jun 2015 | 14:24:33 UTC - in response to Message 41435.

Robert, the custom vBIOS referred to below is for the GPU not the motherboard, so it won't help you out. Your only solution is a different motherboard/system.


A different motherboard might be an option if I can get sufficient information to choose one that will fit into that case, run with the original power supply, and work the first time I try to boot it. I have not yet found sufficient information on how to match motherboards to cases and power supplies, though.

A new computer would require rather extreme measures, such as not running it at the same time as both of my current desktops, or even moving to a different building first.

tomba
Joined: 21 Feb 09
Posts: 497
Credit: 700,690,702
RAC: 0
Message 41439 - Posted: 28 Jun 2015 | 16:06:02 UTC - in response to Message 41422.

An MSI Z77A-G45 (and many similar boards) has 3 x16 PCIe slots, and you could additionally use up to 3 of the PCIe x1 slots (albeit at a performance loss of 15% or more depending on setup).

Yes - looks like it could support three double-width GPUs though I doubt there's room for more than one GPU on an x1 and that would have to be single-width.

Unfortunately, when I built my main rig 18 months ago I went for the ASUS Sabertooth 990FX R2.0 because it had four PCIe x16 slots. I found that only two double-width GPUs fit in it! Should have done my homework...

On another topic, today I got my first Gianni since February. It's gunna take 24+ hours on my OCed 750Ti. Looks like there's a trend for our goodly scientists to demand ever more processing power.

skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 41440 - Posted: 28 Jun 2015 | 19:05:54 UTC - in response to Message 41439.

I doubt there's room for more than one GPU on an x1 and that would have to be single-width.


Powered PCIe Risers :)

tomba
Joined: 21 Feb 09
Posts: 497
Credit: 700,690,702
RAC: 0
Message 41446 - Posted: 29 Jun 2015 | 14:05:18 UTC - in response to Message 41440.

I doubt there's room for more than one GPU on an x1 and that would have to be single-width.

Powered PCIe Risers :)

OK. Checked 'em out. Watched a couple of videos but they were illustrating the riser connections with the mobos on the work bench.

Where do the riser-attached GPUs go when the PC is closed up??

skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 41450 - Posted: 30 Jun 2015 | 18:09:06 UTC - in response to Message 41446.

When you start using risers it's gone way past basic closed case design. It's very much an open case/bench worktop design.

tomba
Joined: 21 Feb 09
Posts: 497
Credit: 700,690,702
RAC: 0
Message 41460 - Posted: 1 Jul 2015 | 15:51:27 UTC - in response to Message 41450.

When you start using risers it's gone way past basic closed case design. It's very much an open case/bench worktop design.

Ah! Got it. Thanks.

tomba
Joined: 21 Feb 09
Posts: 497
Credit: 700,690,702
RAC: 0
Message 41490 - Posted: 6 Jul 2015 | 10:54:24 UTC - in response to Message 41389.

On June 23 I reported this comparison of my three GPUs:

[image: credit comparison table]

On June 27 I made some changes:

Rig 1 - 780Ti & 750Ti:
Activated SWAN_SYNC and upped the 750Ti GPU clock from 1150 to 1275.

Rig 2 - 770:
Activated SWAN_SYNC.

Here are the numbers since those changes:

[image: updated comparison table]

Noelias:
10%+ improvement on the 750Ti, 15%+ on the 770 just with SWAN_SYNC, and only 4% on the 780Ti.
The 780Ti lost 12 percentage points against the 750Ti.
The 780Ti lost 13+ percentage points against the 770(!) How can this be, since the only change relative to these two GPUs was SWAN_SYNC on both...
Even with a faster 750Ti, the 770 gained 6 percentage points against it.

Gerards:
The 770 gained 8%+ from SWAN_SYNC, the 780Ti only 4%+.
The 780Ti lost 5 percentage points against the 770.

Summary:
OCing the 750Ti gives, for me, 10% more credits.
SWAN_SYNC is a worthwhile freebie, especially on my 770: 15%+ more credits on Noelias and 8%+ more on Gerards.

eXaPower
Joined: 25 Sep 13
Posts: 293
Credit: 1,897,601,978
RAC: 0
Message 41494 - Posted: 6 Jul 2015 | 16:43:35 UTC - in response to Message 41490.

Tomba - thanks again for sharing.

Here is a GERARD credit/power rating:
average credit/hr divided by the GPU's BTU/h.

(250W) 780Ti (853 BTU/h) = 23.6 credit/h per BTU/h (credit/hr = 20190)
(230W) 770 (784 BTU/h) = 19.3 credit/h per BTU/h (credit/hr = 15135)
The 780Ti has an 18.3% higher credit/h per BTU/h.

NOELIAs:

(60W) 750Ti (204 BTU/h) = 43 credit/h per BTU/h (credit/hr = 8849)

The 750Ti/770/780Ti combined NOELIA/GERARD credit for 8 days is 8,635,000 (= 1,079,375 RAC/day, or 44,973 RAC/hr).

The 750Ti/770/780Ti together = 44,208 BTU/day, or 1,842 BTU/hr.
All the GPUs together (540W) have a 24.4 credit per BTU rating.
The 770 is bringing down the overall average because it is power-hungry;
Maxwell's credit-per-BTU rating is >50% higher than the 770's.

The higher the credit/h per BTU/h, the more efficient the GPU.
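
For anyone checking the arithmetic, a sketch (the function name is illustrative; it uses 1W ≈ 3.412 BTU/h, and the credit/hr figures are the ones quoted above):

    W_TO_BTU_H = 3.412  # 1 watt ≈ 3.412 BTU/h (1 BTU ≈ 1055 J)

    def credit_per_btu(credit_per_hr, watts):
        """Credit per hour, per BTU/h of heat the card dissipates."""
        return credit_per_hr / (watts * W_TO_BTU_H)

    print(round(credit_per_btu(20190, 250), 1))  # 780Ti on GERARDs: ~23.7
    print(round(credit_per_btu(15135, 230), 1))  # 770 on GERARDs:   ~19.3
    print(round(credit_per_btu(8849, 60), 1))    # 750Ti on NOELIAs: ~43.2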

tomba
Joined: 21 Feb 09
Posts: 497
Credit: 700,690,702
RAC: 0
Message 41497 - Posted: 6 Jul 2015 | 19:23:23 UTC - in response to Message 41494.


Average credit/hr divided by the (GPU) BTU/h

Me no understand BTU/h...

eXaPower
Joined: 25 Sep 13
Posts: 293
Credit: 1,897,601,978
RAC: 0
Message 41498 - Posted: 6 Jul 2015 | 19:32:47 UTC - in response to Message 41497.
Last modified: 6 Jul 2015 | 19:46:40 UTC

The British thermal unit (BTU or Btu) is a traditional unit of energy equal to about 1055 joules.


The BTU is often used to express the conversion-efficiency of heat into electrical energy in power plants. Figures are quoted in terms of the quantity of heat in BTU required to generate 1 kW·h of electrical energy. A typical coal-fired power plant works at 10,500 BTU/kW·h, an efficiency of 32–33%.

https://en.wikipedia.org/wiki/British_thermal_unit
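
In practical terms the conversion is: 1 W = 1 J/s = 3600 J/h ≈ 3600/1055 ≈ 3.41 BTU/h. So a 250W card dissipates about 250 × 3.41 ≈ 853 BTU/h, which is where the 780Ti figure in the post above comes from.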

tomba
Joined: 21 Feb 09
Posts: 497
Credit: 700,690,702
RAC: 0
Message 41692 - Posted: 23 Aug 2015 | 15:08:23 UTC - in response to Message 41490.

At the bottom is my post of 6 July. Since then I have continued to monitor the performance of my three GPUs.

A feature of these past 2+ months has been the narrow range of credit delivery by GPU and by WU type (NOELIA & GERARD), illustrated by the standard deviation percentages below. So I am confident that, for me, the percentage comparisons are truly representative.

Final conclusions?

* the 780Ti has been a big disappointment: much less than 2x the 750Ti, and only 22% to 25% better than the 770.
* SWAN_SYNC does little for the 780Ti but much for the 770 and 750Ti.
* wish I had some more-recent GPUs to put through their paces!

[image: credit statistics by GPU and WU type]

On June 23 I reported this comparison of my three GPUs:

[image: credit comparison table]

On June 27 I made some changes:

Rig 1 - 780Ti & 750Ti:
Activated SWAN_SYNC and upped the 750Ti GPU clock from 1150 to 1275.

Rig 2 - 770:
Activated SWAN_SYNC.

Here are the numbers since those changes:

[image: updated comparison table]

Noelias:
10%+ improvement on the 750Ti, 15%+ on the 770 just with SWAN_SYNC, and only 4% on the 780Ti.
The 780Ti lost 12 percentage points against the 750Ti.
The 780Ti lost 13+ percentage points against the 770(!) How can this be, since the only change relative to these two GPUs was SWAN_SYNC on both...
Even with a faster 750Ti, the 770 gained 6 percentage points against it.

Gerards:
The 770 gained 8%+ from SWAN_SYNC, the 780Ti only 4%+.
The 780Ti lost 5 percentage points against the 770.

Summary:
OCing the 750Ti gives, for me, 10% more credits.
SWAN_SYNC is a worthwhile freebie, especially on my 770: 15%+ more credits on Noelias and 8%+ more on Gerards.

Jim1348
Joined: 28 Jul 12
Posts: 819
Credit: 1,591,285,971
RAC: 0
Message 41694 - Posted: 23 Aug 2015 | 15:56:08 UTC
Last modified: 23 Aug 2015 | 16:25:09 UTC

With four GTX 750 Tis set up as I indicated above, my RAC has now stabilized at 1,059,238.69, or 264,810 PPD per card. Each card draws 90% of TDP according to GPU-Z, or 54 watts per card. So each card yields 4,904 PPD/watt.
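
The per-watt figure follows directly (a quick check; the 60W TDP for the GTX 750 Ti is an assumption consistent with the discussion above):

    rac = 1_059_238.69        # total RAC across four GTX 750 Tis
    per_card = rac / 4        # ~264,810 PPD per card
    watts = 0.90 * 60         # GPU-Z reports 90% of a 60W TDP
    print(round(per_card / watts))  # ~4904 PPD per watt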

I know that is significantly above my GTX 960 when I tried it; I now use that card on POEM or Folding. But the 128-bit memory bus on the 960 (the same as on the GTX 750 Ti) seems to limit it here, and also on Einstein as I recall.

EDIT: If you look at the 960 on a value basis, it is a very good card, and it is still very energy-efficient compared to the others; nothing beats the Maxwells that I have found. I just happen to have the 750 Tis and need to find the best use for them in the projects I support.
