Leak: NVIDIA GTX Titan Performance Decimates all Single GPU Cards!
The first slide shows the monster card as having an 837MHz base clock, 876MHz boost clock and 6GB of GDDR5 memory connected to a 384-bit memory bus, as well as a few other specs. The clocks are somewhat lower than previous rumours had suggested, and considerably lower than the GTX 680's 1006MHz core and 1502MHz memory clocks. This isn't surprising, however, since the GK110 die is much bigger, with around twice the number of transistors.
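As a rough sanity check on that 384-bit figure, peak memory bandwidth follows directly from the bus width and the effective memory clock. The slide doesn't state a memory clock, so the 6GHz effective GDDR5 figure below is an assumption, chosen to be in line with the GTX 680:

```python
# Peak memory bandwidth = (bus width in bytes) x effective memory clock.
BUS_WIDTH_BITS = 384   # from the leaked slide
MEM_CLOCK_GHZ = 6.0    # ASSUMED effective GDDR5 clock (not stated in the slide)

bandwidth_gbs = (BUS_WIDTH_BITS / 8) * MEM_CLOCK_GHZ
print(f"Peak bandwidth: {bandwidth_gbs:.0f} GB/s")  # 288 GB/s at these figures
```

At those assumed figures the card would have half again the GTX 680's 192 GB/s, which is the kind of jump a 384-bit bus exists to deliver.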
After a long wait, NVIDIA today finally unleashes their GeForce GTX Titan, based on the highly efficient and powerful Kepler architecture. With its GK110 core, the GeForce GTX Titan delivers amazing GPU performance and a massive 6 GB of memory for users running higher resolution 3D Stereoscopic and 3D Vision setups. Without a doubt, NVIDIA has reclaimed the title of the fastest GPU on the planet with the GeForce GTX Titan.
The second slide reveals that the card has 2688 CUDA Cores, 7.1 billion transistors and delivers 4,500 gigaflops of performance. That CUDA core count is the most interesting figure, since it reveals that this flagship GK110 GPU is still a cut-down version of the full chip used in the Tesla K20 HPC card. This is because 14 SMX units deliver 2688 CUDA Cores, while the full 15 SMX units would have delivered 2880 CUDA Cores, as I explained here:
I wouldn't be surprised if an uncut version of the GK110 with all 15 SMX units enabled is released in a few months to a year, when manufacturing yields improve.
A 6GB frame buffer is also an odd figure for a 512-bit bus (6GB divides naturally across a 384-bit bus's twelve 32-bit memory channels, but not across a 512-bit bus's sixteen), so something is wrong here. I'm inclined to believe that the 512-bit spec is a typo, especially as the GeForce version is likely to have only 14 SMX units rather than the full 15. This gives 2688 CUDA Cores for 14 SMX units, as opposed to the 2880 CUDA Cores for the 15 SMX units in the Tesla K20 HPC card, which is based on the full, uncut GK110 GPU. Of course, if the full GK110 had 16 SMX units, then we would see a massive 3072 CUDA Cores and a 512-bit memory bus. But we'll just have to keep dreaming about such a chip, since NVIDIA didn't design that version, no doubt due to transistor, power and heat budget constraints.
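The SMX arithmetic above is easy to check: Kepler packs 192 CUDA cores per SMX, and single-precision throughput is 2 FLOPs per core per clock (one fused multiply-add). A quick sketch using the leaked 837MHz base clock:

```python
CORES_PER_SMX = 192     # CUDA cores per Kepler SMX unit
BASE_CLOCK_GHZ = 0.837  # base clock from the leaked slide

for smx in (14, 15, 16):
    cores = smx * CORES_PER_SMX
    gflops = cores * 2 * BASE_CLOCK_GHZ  # 2 FLOPs per core per clock (FMA)
    print(f"{smx} SMX -> {cores} cores, {gflops:.0f} GFLOPS")
```

14 SMX lands exactly on the slide's 2688 cores and roughly 4,500 gigaflops, which suggests the quoted throughput figure is at the base clock rather than the boost clock.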
The third slide explains a bit about the cooling system fitted to the Titan, which looks to be of the same high quality magnesium alloy as that fitted to the GTX 690. WCCFtech explain this in some detail:
The cooling design NVIDIA used for the GeForce GTX Titan is of very high quality and is the same scheme carried over from the GeForce GTX 690. The outer shroud is made of magnesium alloy, within which lie two separate heatsink blocks. One is a vapor chamber block which sits on top of the GK110 core and consists of many aluminum fins running parallel to each other, while a smaller heatsink block is located at the far end of the PCB, beneath which lie the electrical components it cools. Air is provided by a single 90 mm PWM-controlled fan.
The fourth slide gives an idea of how the GPU Boost 2.0 clock speed system works, which basically equates to running the GPU at the highest clock possible while keeping it within its electrical and thermal specifications. This new version allows users to manually overvolt and overclock the GPU without any restrictions, unlike the Boost scheme used in the GTX 680.
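The boost logic can be thought of as a simple control loop: step the clock up while power draw and temperature remain under their targets, and back off toward the base clock otherwise. Here's a toy sketch of that idea only; the step size, power limit and temperature target below are invented for illustration and are not NVIDIA's actual values:

```python
def boost_step(clock_mhz, power_w, temp_c,
               power_limit_w=250, temp_target_c=80, step_mhz=13):
    """One toy GPU Boost-style adjustment: illustration, not NVIDIA's algorithm."""
    if power_w < power_limit_w and temp_c < temp_target_c:
        return clock_mhz + step_mhz            # headroom left: step the clock up
    return max(837, clock_mhz - step_mhz)      # over budget: back off toward base

clock = 837  # start at the leaked base clock
# Simulate a few sampling intervals with headroom, then a thermal excursion.
for power_w, temp_c in [(200, 70), (210, 74), (240, 79), (260, 85)]:
    clock = boost_step(clock, power_w, temp_c)
print(clock)  # 863: boosted up three steps, then backed off one
```

The difference in Boost 2.0, as the slide describes it, is that the user can raise those voltage and thermal limits manually rather than being held to fixed factory caps.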
The fifth slide shows the mind-blowing performance of a tri-SLI GTX Titan setup compared to a quad-SLI (dual card) GTX 690 setup. Note that the benchmark is purely comparative, with no actual framerates shown. The GTX 690 rig therefore has a performance of 1.0, since it's the reference being compared to. What I'd really like to see, however, is a performance face-off between a single Titan and a single GTX 690. With any luck the Titan will beat it, especially if overclocked, since the stock clock of the GK110 GPU is quite a bit lower than that of the GK104 used in the GTX 680 and GTX 690, which should leave more overclocking headroom.
Note that the Titan supports quad SLI, and benchmarks of such a setup will be very interesting to see indeed. In fact, it's quite possible that even the fastest overclocked CPU may not be able to feed it data quickly enough to show its full performance potential...
The sixth slide compares the acoustics of the Titan with the GTX 680, showing it to be a bit quieter, while delivering significantly higher performance in Crysis 3. Note that the GTX 680 is slightly louder in many reviews than the GTX 580 which preceded it, so it looks like the Titan will be comparable in noise performance to a GTX 580.
To my mind, noise is a far more important metric than it first appears: a high performance card that drives you batty with fan noise isn't worth having, unless you're prepared to wear earplugs. This is an area where NVIDIA consistently leads AMD, and AMD would do well to improve on it.
Finally, the seventh slide shows comparative benchmarks between the GTX Titan, GTX 680 and AMD's HD 7970 GHz Edition graphics cards. Once again, only comparative graphs are shown, without framerates, but the GTX Titan clearly wins this one as well. At 1920 x 1200 it delivers 35% more performance than a GTX 680 and 32% more than a HD 7970 GHz Edition. Comparisons with other games and benchmarks such as Far Cry 3 and 3DMark are also shown.
Not long now to wait for the official reveal!
Leaked benchmark courtesy of Egypthardware.
The NVIDIA GeForce GTX Titan will be paper launched very soon, with availability beginning next week at a reassuringly expensive $900 or so. With any luck, the GTX 690 will be significantly reduced, giving us bargain hunters something to look out for.
Finally, we now have a second article all about the Titan's new GPU Boost 2 and 80Hz adaptive Vsync features!