Well, the scaling is pretty good, but the GTX680 will drop in price and become the better buy for Nvidia fans. This one will just overclock higher and use slightly less power. It's a nice tweak for Kepler, but ultimately this is pretty much what Nvidia planned in the first place.
The people who were familiar with the cards did know, actually. AMD jumped from VLIW5 to VLIW4, which was a major change in architecture. In terms of how the number of stream processors is counted, VLIW4 has fewer physical stream processors but higher overall throughput. The HD6870, with around 224 *real* stream processors, was able to keep up with the GTX460 (336 stream processors), and in ideal situations is actually quite a bit faster. The jump from VLIW5 to VLIW4 architectures was made to reduce the number of idle components.
The only reason VLIW5 (the HD5000 series) was sometimes faster than VLIW4 (HD6000) was that, in situations where the game or benchmark could take advantage of the extra hardware, it cruised past easily. Today, my HD6870 is faster than the 1GB GTX460, is on par with the HD7850, and is about 5% slower than the HD5870, which was much more expensive. If AMD had soldiered on, the successor to the HD6870 would have delivered HD7950-class performance for the price of an HD7850. But VLIW4 is expensive to scale up and would have just run into the same problem later on.
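A quick back-of-the-envelope way to see the trade-off described above: effective shader throughput depends on how many of a unit's VLIW slots actually get filled, not just the raw slot count. The unit counts and utilization figures below are made-up illustrations, not real specs for any card.

```python
# Toy model: effective ops/clock = units * issue_width * slot_utilization.
# All numbers here are illustrative assumptions, not measured GPU data.

def effective_throughput(units: int, issue_width: int, utilization: float) -> float:
    """Ops per clock actually achieved when some VLIW slots sit idle."""
    return units * issue_width * utilization

# VLIW5: 5-wide units, but typical shader code rarely fills all 5 slots.
vliw5 = effective_throughput(units=320, issue_width=5, utilization=0.68)

# VLIW4: fewer slots per unit, but each slot stays busy more of the time.
vliw4 = effective_throughput(units=384, issue_width=4, utilization=0.80)

print(f"VLIW5 effective ops/clock: {vliw5:.0f}")
print(f"VLIW4 effective ops/clock: {vliw4:.0f}")
```

With these assumed numbers the narrower VLIW4 design comes out ahead overall, while a workload that pushed VLIW5 utilization toward 100% would flip the result - which is exactly the "ideal situations" caveat above.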
The reason AMD bumped the model numbers down to new price levels was to accommodate future scaling and possible in-family performance tweaks, like an HD6875 (or some crap name like that) - I very much doubt they had the full concept of GCN nailed down at the HD6000 launch, so they were covering their bases for the future. They still use the VLIW4 architecture in the APU family, which is why those chips are so damn efficient and capable of playing most games at medium settings at 720p - their throughput is phenomenal.
Meh, doesn't bother me really. If they can tweak their designs and offer better performance-per-watt while they wait to port the design over to 20nm, I'm cool with that. At least we now know that the GTX770 will replace the GTX680 in the same price range, not a higher one.
What exactly are you asking here? Most of the cards available today from R1800 and up will still be playing at 1080p two years from now, assuming the card has a 2GB frame buffer. I'd say the GTX780 will only really begin to show its age four years from now: the larger memory bus, 3GB of RAM and the huge amount of shader power still leave headroom through overclocking and driver improvements.
But on the flip side, if they hadn't done this, they wouldn't have been able to bring back the frame-metering hardware they added in the G80 series. Overvolting beyond what that extra circuitry could handle would have undermined their goal of solving the SLI stutters.




