
Thread: MSI GeForce GTX 770 Lightning Tested in SLI

  1. #11

    Default

    Ah, atiboy, yeah, I know you of course. I don't think we've dealt before on carb. I piddle in puddles, hardware-wise.

    So $480-550, hey? Pretty much 680 prices then for a very marginal speed boost (the 680 was $499 MSRP at launch). There's only so much you can do without a die shrink, though. I can't say I'm overawed by 7xx, but I also don't think it's just a money-grab.

  2. #12
    Thread Killer MKII The Joker's Avatar
    Join Date
    Feb 2009
    Location
    The Hardware Section
    Posts
    10,600

    Default

    Quote Originally Posted by jasong View Post
    Ah, atiboy, yeah, I know you of course. I don't think we've dealt before on carb. I piddle in puddles, hardware-wise.

    So $480-550, hey? Pretty much 680 prices then for a very marginal speed boost (the 680 was $499 MSRP at launch). There's only so much you can do without a die shrink, though. I can't say I'm overawed by 7xx, but I also don't think it's just a money-grab.


    Kinda weird we have never done a deal, lol.
    You know what my biggest issue is with this card and the previous-gen Lightning? They advertise all these amazing overclocking features:

    • Unlocked BIOS
    • Digital PWM Controller
    • Enhanced Power Design (8+8-pin power connectors)
    • Xtreme Thermal Design
      ◦ Twin Frozr IV cooler with Dust Removal technology
      ◦ Propeller Blade technology
      ◦ Dual 10cm PWM fans
      ◦ SuperPipe Technology
      ◦ Nickel-plated Copper Base
      ◦ Dual Form-in-one Heatsinks
    • Military Class III Components
      ◦ CopperMOS
      ◦ Golden Super State Chokes
      ◦ High-Conductive Capacitors
      ◦ Dark Solid Capacitors
    • 3×3 OC Kits
      ◦ V-Check Points
      ◦ Triple Overvoltage
      ◦ Triple Temp Monitor

    But Nvidia still locks down the voltage...lol
    Eat - Sleep - Overclock - Repeat

  3. #13

    Default

    It doesn't just lock it down; the boost even kills your overclocking potential. They do seem to be going the Intel route of gradually phasing out user overclocking.

  4. #14

    Default

    Well, the scaling is pretty good, but the GTX680 will drop in price and it'll be the better buy for Nvidia fans. This will just overclock higher and use slightly less power. It's a nice tweak for Kepler, but ultimately this is pretty much what Nvidia planned in the first place.

    Quote Originally Posted by jasong View Post
    This is pretty much the same as AMD's move from 5xxx to 6xxx. They refined the product rather than focusing on big performance jumps. It ended up actually being confusing because people didn't know why a 6850/70 should perform worse than a 5850/70. It doesn't mean anything in terms of Nvidia suddenly becoming evil, though.
    The people who were familiar with the cards did know, actually. AMD jumped from VLIW5 to VLIW4, which was a major change in architecture. In terms of how the number of stream processors is determined, VLIW4 has fewer physical stream processors but higher overall throughput. The HD6870, with around 224 *real* stream processors, was able to keep up with the GTX460 (336 stream processors) and in ideal situations is actually quite a bit faster. The jump from VLIW5 to VLIW4 architectures was to reduce the number of idle components.
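    If the bookkeeping sounds strange, here's a rough back-of-the-envelope sketch of how the advertised counts map to physical units (the VLIW5 and VLIW4 flagships' reference specs, purely for illustration):

    Code:
    # Sketch: AMD's advertised "stream processor" count is the number of ALU lanes;
    # dividing by the VLIW issue width gives the physical units the scheduler has
    # to keep busy each cycle.
    cards = {
        "HD 5870 (VLIW5)": (1600, 5),  # (advertised SPs, lanes per physical unit)
        "HD 6970 (VLIW4)": (1536, 4),
    }
    for name, (advertised, width) in cards.items():
        units = advertised // width
        print(f"{name}: {advertised} advertised SPs -> {units} physical {width}-wide units")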

    The only reason VLIW5 (HD5000 series) was faster than VLIW4 (HD6000) was that, in situations where the game or benchmark could take advantage of the extra hardware, it cruised past easily. Today, my HD6870 is faster than the 1GB GTX460, is on par with the HD7850 and is about 5% slower than the HD5870, which was much more expensive. If they had soldiered on, the successor to the HD6870 would have delivered HD7950-class performance for the same price as the HD7850. But VLIW4 is expensive to scale up and would have just resulted in the same issue later on.

    The reason AMD bumped the model numbers down to new price levels was to accommodate future scaling and possible in-family performance tweaks for models like an HD6875 (or some crap name like that) - I very much doubt they had the full concept of GCN nailed down at the HD6000 launch, so they were covering their bases for the future. They still use the VLIW4 architecture for the APU family, which is why those chips are so damn efficient and capable of playing most games at medium settings at 720p - their throughput is phenomenal.

    Quote Originally Posted by The Joker View Post
    Launching a "new" range of cards that are barely faster than the previous gen cards while costing a lot more ($480-$550 apparently).
    Meh, doesn't bother me really. If they can tweak their designs and offer better performance-per-watt ratios while they wait to port it over to 20nm, I'm cool with that. At least we know now that the GTX770 will replace the GTX680 in the same price range, not a higher one.

    Quote Originally Posted by simon View Post
    But down the line, when more advanced tech comes out, do you think these cards will be able to keep up much? Or, for the mid-range cards, do they use a system that already squeezes everything out of them?
    What exactly are you asking here? Most of the cards available today, from about R1800 and up, will still be playing at 1080p two years from now, assuming the card has a 2GB frame buffer. I'd say the GTX780 will only begin to really show its age four years from now. The larger memory bus, 3GB of RAM and the huge amount of shader power still give it more headroom through overclocking and driver improvements.
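    To put a rough number on the memory-bus point, here's a quick peak-bandwidth calculation (reference-spec figures, just as a sketch):

    Code:
    # Peak memory bandwidth sketch: (bus width in bits / 8) bytes per transfer,
    # multiplied by the effective memory data rate in GT/s, gives GB/s.
    def bandwidth_gb_s(bus_width_bits, data_rate_gt_s):
        return bus_width_bits / 8 * data_rate_gt_s

    print(bandwidth_gb_s(256, 7))  # GTX770: 256-bit @ 7 GT/s -> 224.0 GB/s
    print(bandwidth_gb_s(384, 6))  # GTX780: 384-bit @ 6 GT/s -> 288.0 GB/s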

    Quote Originally Posted by The Joker View Post
    But Nvidia still locks down the voltage...lol
    But on the flip side, if they hadn't done this, they wouldn't have been able to bring back the frame metering hardware they added on the G80 series. Overvolting to ranges higher than the extra circuitry could handle could have been detrimental to their goal of solving the SLI stutters.
    Last edited by Wesley; 28-05-2013 at 08:54 PM.

  5. #15
    simon's Avatar
    Join Date
    Apr 2011
    Location
    Same place I was 2 minutes ago
    Posts
    2,786

    Default

    Quote Originally Posted by The Joker View Post
    Exactly the same reason I am sitting with a couple of monster machines, most notably a top-end AMD machine and a top-end Sandy Bridge-E machine. Just to run on the best. As you know, CPUs don't affect gaming all that much; it's only now that we're starting to see games making use of more than 2 cores, games like BF3, Crysis 3, Metro: Last Light and so on.

    A great example of this is Crysis 3. When the game launched, the i5 3570K, AMD FX8350 and i7 3770K were neck and neck while running a GTX680, all sitting on the exact same fps. After update 1.3 the 3570K and FX8350 are still pretty much deadlocked, but the i7 3770K gained a massive 15fps over those 2 CPUs.

    It's pretty much just to run the best hardware out there and to get the best possible results.
    There are people out there that believe an i3 3220 will actually bottleneck a GTX660Ti or a 7950; that's why you also rarely see benchmark reviews with mid-range or low-end CPUs.

    Edit: Maybe I am being a little mean towards Nvidia. I was actually looking for a 15-20% increase from these cards, but I should have known that a re-badge with new memory and slightly faster boost clocks wouldn't bring huge performance increases.
    But the difference in fps between a dual core and a quad core, in games making use of 4 or more cores, would be noticeable. So why don't they do benchmarks showing a top-end dual core and a top-end 4+ core, so that the different segments of buyers can see?
    contact me if you are in Cape Town and looking for PC Hardware at very competitive pricing. We also do web and graphic design.

  6. #16
    Thread Killer MKII The Joker's Avatar
    Join Date
    Feb 2009
    Location
    The Hardware Section
    Posts
    10,600

    Default

    Quote Originally Posted by Wesley View Post
    Well, the scaling is pretty good, but the GTX680 will drop in price and it'll be the better buy for Nvidia fans. [...] At least we know now that the GTX770 will replace the GTX680 in the same price range, not a higher one.
    It's so awesome to have a full-on tech know-it-all here now.
    Makes me feel so happy; there was a time when this part of the forum was pretty much dead, apart from a few regulars and an odd noob here and there. Now it's pumping.

    Back on track, though. I still feel that they are taking advantage of the uninformed, if that makes any sense, and yes, I know we live in an age where any and all info is just a click away. Maybe that's just me, lol.
    The local pricing on the 770s is gonna be pretty bad as well.

    I still maintain you're better off sticking to a 7970 or GTX680, and if it came down to a choice between those two I would have to recommend the 7970 over the GTX680.
    Eat - Sleep - Overclock - Repeat

  7. #17
    Thread Killer MKII The Joker's Avatar
    Join Date
    Feb 2009
    Location
    The Hardware Section
    Posts
    10,600

    Default

    Quote Originally Posted by simon View Post
    But the difference in fps between a dual core and a quad core, in games making use of 4 or more cores, would be noticeable. So why don't they do benchmarks showing a top-end dual core and a top-end 4+ core, so that the different segments of buyers can see?
    Every now and then you'll see Tom's Hardware do pretty much exactly what you're asking.
    The main issue with doing what you're asking is time and hardware availability.

    A lot of reviewers actually pay for the hardware, so yeah, they may end up getting samples of the GTX780 to review, but they still have to cough up for the rest of the rig, so imagine having to do that.
    They also have strict deadlines.
    Eat - Sleep - Overclock - Repeat

  8. #18

    Default

    Quote Originally Posted by simon View Post
    But the difference in fps between a dual core and a quad core, in games making use of 4 or more cores, would be noticeable. So why don't they do benchmarks showing a top-end dual core and a top-end 4+ core, so that the different segments of buyers can see?
    Take any CPU-intensive game benchmark of the Core i5-3330 or the i5-3470. The Core i3-3225 can reach the same high-end framerates thanks to Hyper-threading, but when the game becomes very taxing and requires more physical cores, your framerate just about drops in half and it acts like a regular dual-core. Dual-cores are becoming a rarity in today's world. AMD only makes two dual-core chips; the rest start from four cores and scale up.

    Intel pretty much bets its entire budget line on dual-cores with strong single-thread performance, but when the crap hits the fan the A8-5600K is still running smoothly, whereas the equally priced i3-3220 will hiccup a bit until it recovers in time for the next intensive scene/battle. That's why Tom's Hardware changed their entry-level recommendations from Intel's Celeron and Pentium chips to a three-year-old Athlon X4 and the Phenom II X4 945.

    Ideally, no-one should really settle for a dual-core without Hyper-threading these days; it's just not worth it. I can understand doing it for budget reasons, but games are really beginning to leave single-thread performance behind in favour of parallelism. That's also why you don't see many benchmarks on it. I mean, Tom's did an Intel performance test comparing chips from the last few generations, and the dual-cores, even the Core 2 E8400 (such a nice chip, too), do take an arrow to the knee in games like Crysis 3 and Far Cry 3.

    At least League of Legends is playable on older, low-end hardware.

    Quote Originally Posted by The Joker View Post
    It's so awesome to have a full-on tech know-it-all here now.
    Makes me feel so happy; there was a time when this part of the forum was pretty much dead, apart from a few regulars and an odd noob here and there. Now it's pumping.
    Heh, thanks. I know what a dead tech section is like; it's never a nice thing to watch.

    Quote Originally Posted by The Joker View Post
    I still maintain you're better off sticking to a 7970 or GTX680, and if it came down to a choice between those two I would have to recommend the 7970 over the GTX680.
    I'd recommend the HD7970 as well. Both the bundle and the performance are better than the GTX680 with its Metro: Last Light bundle. Nvidia doesn't really care because Kepler is earning them so much profit, funnily enough.

  9. #19
    simon's Avatar
    Join Date
    Apr 2011
    Location
    Same place I was 2 minutes ago
    Posts
    2,786

    Default

    Quote Originally Posted by The Joker View Post
    Every now and then you'll see Tom's Hardware do pretty much exactly what you're asking.
    The main issue with doing what you're asking is time and hardware availability.

    A lot of reviewers actually pay for the hardware, so yeah, they may end up getting samples of the GTX780 to review, but they still have to cough up for the rest of the rig, so imagine having to do that.
    They also have strict deadlines.
    But surely, if they buy the CPUs or whatever, they have those rigs available to reuse. Basically what I am getting at is that it's difficult to work out what sort of fps I will get with my rig for a certain game, or whether something will be a particular bottleneck (right now it's becoming my GPU). So I have to start saying: OK, that card gets that sort of fps with that type of rig; my rig's GPU is, say, 25% worse off; I don't play with shadows etc., so maybe I'd end up with roughly 70% of the fps they got with that rig on high details? But by comparing, say, a top-end i3 or a low-level i5 of the current generation, I can get a rough estimate of how my 2400 compares CPU-wise, with RAM obviously usually being overkill (the difference between 8 gig and 16 gig is barely noticeable and all), and a 5770 comparable to, say, a "7650" type thing.
    Hope you guys can understand this.
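    Something like this kind of rough estimate is what I mean (every number below is a made-up placeholder, not a real benchmark result):

    Code:
    # Hypothetical example: scale a reviewer's benchmark result down to my own rig.
    review_fps = 60.0        # fps the review rig got at high detail (placeholder)
    gpu_factor = 0.75        # my GPU is roughly 25% worse off (placeholder guess)
    settings_factor = 1.10   # shadows etc. turned down claws a bit of fps back (placeholder)
    estimated_fps = review_fps * gpu_factor * settings_factor
    print(f"Rough estimate for my rig: {estimated_fps:.0f} fps")  # ~50 fps with these guesses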

    Quote Originally Posted by Wesley View Post
    Take any CPU-intensive game benchmark of the Core i5-3330 or the i5-3470. The Core i3-3225 can reach the same high-end framerates thanks to Hyper-threading, but when the game becomes very taxing and requires more physical cores, your framerate just about drops in half and it acts like a regular dual-core. [...]
    contact me if you are in Cape Town and looking for PC Hardware at very competitive pricing. We also do web and graphic design.

  10. #20

    Default

    Apparently the card's retailing for $399. Reviews at http://www.hardocp.com/article/2013/...w#.UadUhJzsTwo and http://www.pcgamer.com/review/nvidia...tx-770-review/ . Really good value, it seems! Not sure what prices that would translate to locally?

    EDIT: According to the HardOCP review, the MSI card retails for $460.
    Last edited by Namdrater; 30-05-2013 at 03:39 PM.

