Friday, September 21, 2018

Nvidia GeForce RTX 2080 Ti: Gaming in 4K at 60 fps finally becomes reality

Nvidia GeForce RTX 2080 Ti review: raytracing in the line of sight. Twenty-seven months. That is the exceptional lifespan of Nvidia's GeForce GTX 10 series of graphics cards. Excellent at launch and still offering strong performance in games, they are now replaced by the GeForce RTX 20 series. Announced, naturally, as faster than their predecessors, these new models also bring the ability to accelerate hybrid rendering for ever more realistic games. The GeForce RTX 2080 Ti is the spearhead of this new line and is positioned as the ideal solution for playing in 4K at more than 60 frames per second.

Nvidia GeForce RTX 2080 Ti CHARACTERISTICS:
  • Chip: TU102
  • GPU frequency: 1,350 MHz
  • Memory: 11 GB GDDR6
  • Memory frequency: 1,750 MHz
  • Heatsink: dual-slot
  • Outputs: 3x DisplayPort 1.4, 1x HDMI 2.0b, 1x VirtualLink
  • Driver tested: 411.51

PRESENTATION
The GeForce RTX 2080 Ti and GeForce RTX 2080 are the first Nvidia graphics cards to offer hardware acceleration of raytracing.

Like all GeForce RTX 20 cards, the RTX 2080 Ti uses a "Turing" GPU. We have already covered this architecture at length in a dedicated article, so we will not revisit its novelties in detail here, only summarize the key points: Turing brings a new arrangement of the compute units, units dedicated to AI (Tensor Cores), units dedicated to accelerating raytraced rendering (RT Cores), and an optimization of the various cache levels.

A GPU etched at 12 nm, with frequencies down a notch
Graphic representation of the TU102 chip of the GeForce RTX 2080 Ti (left). On the right, the detail of an SM block. © Nvidia

The GeForce RTX 2080 Ti is based on a TU102 chip made up of 18.6 billion transistors, but not all of its compute blocks are enabled: of the 72 SM blocks present, only 68 are active on this model. Each SM contains 64 compute units, 4 texturing units, 8 deep learning units (Tensor Cores) and 1 block dedicated to raytracing (RT Core). In total, this yields a chip with no fewer than 4,352 compute units, 272 texture units, 544 Tensor Cores and 68 RT Cores, all accompanied by 88 render units. A nice evolution compared with the GeForce GTX 1080 Ti, which offered "only" 3,584 compute units, as shown in the table below.
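Before getting to the table, a quick sanity check: the totals follow directly from the per-SM counts multiplied by the number of active SMs. A minimal sketch in Python, using only the figures quoted above:

```python
# Per-SM resources of the TU102 as configured on the RTX 2080 Ti.
ACTIVE_SMS = 68        # 72 SMs on the die, 68 enabled on this model
CUDA_PER_SM = 64       # compute units per SM
TEX_PER_SM = 4         # texturing units per SM
TENSOR_PER_SM = 8      # Tensor Cores per SM
RT_PER_SM = 1          # RT Core per SM

print("Compute units:", ACTIVE_SMS * CUDA_PER_SM)     # 4352
print("Texture units:", ACTIVE_SMS * TEX_PER_SM)      # 272
print("Tensor Cores :", ACTIVE_SMS * TENSOR_PER_SM)   # 544
print("RT Cores     :", ACTIVE_SMS * RT_PER_SM)       # 68
```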

|  | GeForce RTX 2080 Ti | GeForce RTX 2080 | GeForce RTX 2070 | Titan Xp | GeForce GTX 1080 Ti | GeForce GTX 1080 | GeForce GTX 1070 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Architecture | Turing | Turing | Turing | Pascal | Pascal | Pascal | Pascal |
| GPU | TU102 | TU104 | TU106 | GP102 | GP102 | GP104 | GP104 |
| Process | 12 nm | 12 nm | 12 nm | 16 nm | 16 nm | 16 nm | 16 nm |
| Compute units | 4,352 | 2,944 | 2,304 | 3,840 | 3,584 | 2,560 | 1,920 |
| Tensor Cores | 544 | 368 | 288 | - | - | - | - |
| RT Cores | 68 | 46 | 36 | - | - | - | - |
| Texture units | 272 | 184 | 144 | 240 | 224 | 160 | 120 |
| Render units | 88 | 64 | 64 | 96 | 88 | 64 | 64 |
| Base frequency | 1,350 MHz | 1,515 MHz | 1,410 MHz | 1,405 MHz | 1,480 MHz | 1,607 MHz | 1,506 MHz |
| Boost frequency | 1,545–1,635 MHz | 1,710–1,800 MHz | 1,620–1,710 MHz | 1,582 MHz | 1,582 MHz | 1,733 MHz | 1,683 MHz |
| Memory | 11 GB GDDR6 | 8 GB GDDR6 | 8 GB GDDR6 | 12 GB GDDR5X | 11 GB GDDR5X | 8 GB GDDR5X | 8 GB GDDR5 |
| Memory frequency | 1,750 MHz | 1,750 MHz | 1,750 MHz | 1,426 MHz | 1,375 MHz | 1,250 MHz | 2,000 MHz |
| Bus | 352-bit | 256-bit | 256-bit | 384-bit | 352-bit | 256-bit | 256-bit |
| Bandwidth | 616 GB/s | 448 GB/s | 448 GB/s | 548 GB/s | 484 GB/s | 320 GB/s | 256 GB/s |
| TDP | 250–260 W | 215–225 W | 175–185 W | 250 W | 250 W | 180 W | 150 W |
| Launch price | €1,259 | €849 | €639 | €1,349 | €824 | €789 | €499 |

On paper, on the other hand, the operating frequency does not change much. It must be said that the transition from a 16 nm process to TSMC's 12 nm FFN process (a FinFET process tailored for Nvidia) is not a big leap. Contrary to what the change in nomenclature may suggest, 12 nm is better seen as an optimization of the 16 nm process. The expected gain in power consumption is therefore not as marked as when moving from 28 nm to 16 nm.

The base frequency is thus announced at 1,350 MHz, against 1,480 MHz on the 1080 Ti: a regression, then, on the minimum frequency guaranteed by the new chip. On the other hand, the average GPU Boost frequency rises, from 1,582 MHz to 1,635 MHz on the Founders Edition models (the model tested here). Be aware that the Founders Edition cards are no longer to be seen as reference models, but as factory-overclocked ones. Some Nvidia partners will thus be able to offer non-overclocked variants whose average GPU Boost frequency is set at 1,545 MHz, slightly below that of the 1080 Ti.

GPU Boost 4.0: a slight evolution for more control
Nvidia has nevertheless revised its frequency-adjustment system, moving GPU Boost to its fourth iteration. As with GPU Boost 3.0, GPU Boost 4.0 adjusts the operating frequency according to the applied load, a power-consumption threshold and the temperature of the chip. The change lies in temperature management: an intermediate temperature threshold now comes before the one at which the frequency falls back to its nominal value. According to Nvidia, this maintains a higher frequency than before when the graphics card's cooling system is efficient.
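To make the idea concrete, here is a minimal, purely illustrative sketch of such a stepped curve. The intermediate frequency and thresholds are invented for the example (Nvidia does not document the exact algorithm); only the boost, base and temperature-limit figures come from this review:

```python
# Illustrative sketch of a stepped boost curve in the spirit of GPU Boost 4.0:
# instead of dropping straight from boost clock to base clock at the
# temperature limit, an intermediate threshold holds an intermediate step.
# MID_MHZ and MID_TEMP_C are hypothetical; the rest matches the card tested.

BOOST_MHZ = 1635      # average GPU Boost frequency (Founders Edition)
MID_MHZ = 1545        # hypothetical intermediate step
BASE_MHZ = 1350       # nominal (base) frequency

MID_TEMP_C = 76       # hypothetical, user-adjustable intermediate threshold
MAX_TEMP_C = 87       # temperature limit observed on this card

def target_frequency(temp_c: float) -> int:
    """Return a frequency target for a given GPU temperature."""
    if temp_c < MID_TEMP_C:
        return BOOST_MHZ   # cool enough: hold the full boost clock
    if temp_c < MAX_TEMP_C:
        return MID_MHZ     # intermediate step instead of a hard drop
    return BASE_MHZ        # at the limit: fall back to the base clock

for t in (65, 80, 88):
    print(f"{t} °C -> {target_frequency(t)} MHz")
```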

A simplified overclocking tool that evolves only slightly
Evga's Precision X1 integrates the Nvidia Scanner function for easy overclocking of your graphics card.

Nvidia has also decided to let users change the values of these temperature thresholds. There is no in-house utility for this; it goes through a third-party utility such as Evga's Precision X1 or MSI's Afterburner, for example. This software can also integrate the Nvidia Scanner kit to offer simplified manual overclocking.

Similar to the automatic overclocking system introduced with the GeForce GTX 10 series, Nvidia Scanner still lets utilities act on the voltage level to determine the highest stable frequency. The new system is said to be more accurate, however, relying on a more realistic GPU load. The user, for their part, must first set the power and temperature limits (pushing the sliders to their maximum is fine) and play with the temperature thresholds we mentioned a little earlier. In short, it is largely automatic, but still not entirely.
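The principle can be summarized as a stability sweep over voltage/frequency points. The sketch below is a conceptual outline only, not Nvidia's implementation: the stability test is a simulated stand-in for the realistic GPU workload the real scanner runs at each point.

```python
# Conceptual outline of a voltage/frequency stability sweep, in the spirit
# of Nvidia Scanner. The stability test is simulated: the real tool runs a
# GPU workload at each point and checks its results.

def is_stable_under_load(freq_mhz: int, voltage_mv: int) -> bool:
    """Simulated stability test; assumes headroom grows with voltage."""
    hypothetical_limit = 1500 + 0.25 * voltage_mv   # invented model
    return freq_mhz <= hypothetical_limit

def scan_max_stable_frequency(voltage_points_mv, start_mhz=1350, step_mhz=15):
    """Raise the clock at each voltage point until the test fails and
    keep the highest frequency that stayed stable."""
    curve = {}
    for v in voltage_points_mv:
        freq = start_mhz
        while is_stable_under_load(freq + step_mhz, v):
            freq += step_mhz
        curve[v] = freq
    return curve   # voltage (mV) -> highest stable frequency (MHz)

print(scan_max_stable_frequency([800, 900, 1000]))
# -> {800: 1695, 900: 1725, 1000: 1740} with this simulated model
```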

The GeForce RTX 2080 Ti and RTX 2080 can operate in pairs, in SLI, via an NVLink bridge. Enough to play on an 8K TV, Nvidia assures.

GDDR6 makes its debut
Nvidia also takes the opportunity to inaugurate GDDR6. On its top-of-the-range 10-series models, the company used GDDR5X, keeping classic GDDR5 for mid-range and entry-level models. The GeForce RTX 2080 Ti, RTX 2080 and RTX 2070, for their part, compete with equal weapons at this level: all three enjoy this new type of memory, which both increases throughput and is more interesting energy-wise, with a claimed 20% gain in energy efficiency.

On the GeForce RTX 2080 Ti, Nvidia does not change the quantity, still offering 11 GB of memory on a 352-bit bus. The switch to a frequency of 1,750 MHz, however, yields a bandwidth of 616 GB/s, well above the 484 GB/s observed on the GTX 1080 Ti and on the Radeon RX Vega 64. AMD's card uses HBM2, which is much more expensive to produce and, above all, to integrate.
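The 616 GB/s figure follows directly from the numbers above: at 1,750 MHz, GDDR6 transfers 8 bits per pin per memory-clock cycle (14 Gbps effective), spread over a 352-bit bus. A quick check:

```python
# Memory bandwidth check for the RTX 2080 Ti, using the review's figures.
mem_clock_mhz = 1750        # GDDR6 memory frequency
transfers_per_clock = 8     # GDDR6: 8 bits per pin per memory-clock cycle
bus_width_bits = 352

data_rate_gbps = mem_clock_mhz * transfers_per_clock / 1000  # per pin
bandwidth_gb_s = data_rate_gbps * bus_width_bits / 8         # bits -> bytes

print(f"{data_rate_gbps:.0f} Gbps per pin -> {bandwidth_gb_s:.0f} GB/s")
# -> 14 Gbps per pin -> 616 GB/s
```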

A carefully designed power stage
Nvidia also assures that it paid close attention to the power supply of its card, which has a thermal envelope of 260 watts (two 8-pin PCIe power connectors are present), 10 watts more than the model it replaces. The power stage is composed of no fewer than 13 phases dedicated to supplying the GPU and 3 phases for the graphics memory. These are iMON DrMOS systems, able to activate only when necessary (fewer phases are engaged at idle, for example) to optimize energy efficiency. This is typically the type of power stage used on high-end graphics cards from Nvidia's partners. That did not prevent our test sample from emitting a strident sound from its coils (coil whine) in the menus of some games.

NOISE
The cooling system is new and relies on the use of two fans that cool a radiator that hugs the full width of the board.

With its RTX 20 series, Nvidia makes a big change to the cooling system. Gone is the blower-type ventilation found on the brand's models since the GeForce 6 (2004); in its place comes a system based on two axial fans. Once again, the manufacturer aligns with its partners, who use this type of cooler on their high-end versions, sometimes with three fans. From experience, we know this type of system is quieter than blowers while offering more efficient cooling. The disadvantage is that part of the hot air generated is pushed back inside the case rather than being completely expelled; proper case ventilation is therefore preferable. Note in passing that the finish of the card is rather admirable, with its grey metal shell and bevelled corners. Only the central part, made of plastic, would have benefited from a little more care.

At the back of the cards, a metal plate helps heat dissipation. The finish of the GeForce RTX 20 is also excellent.

To offer a convincing system, Nvidia also chose not to use copper heat pipes, but rather a vapor-chamber plate, applied directly to the GPU and spreading across the full width of the board so as to be in complete contact with the finned radiator. A metal plate then links the vapor chamber to the rest of the components (memory chips, power stages). At the back, the metal plate is not merely decorative, unlike those we usually encounter: it serves here as a heatsink. Two fans of 8.5 cm in diameter cool the whole assembly.
And it must be admitted that the result is convincing in games, where the ventilation is particularly discreet, at the edge of audibility one meter from a closed case. Nvidia's system is thus much quieter than any of the partner-brand systems (Asus, Zotac, Gigabyte, MSI) fitted to the GeForce GTX 1080 and 1080 Ti. On the cooling-efficiency side, the system holds the GPU at its temperature limit of 87 °C.

On the other hand, the ventilation remains at almost the same noise level at idle. Blame the fan above the power stage, which continues to spin rather quickly when the card is unloaded. A problem Nvidia hopes to fix with a driver update.

CONSUMPTION
Power is supplied through two PCIe cables: 2x 8-pin on the RTX 2080 Ti and 8-pin + 6-pin on the RTX 2080.

As we have seen, the switch to 12 nm is more a thorough optimization of the GTX 10's 16 nm process than a genuinely new one. In games, Nvidia's new card consumes more energy than the model it replaces (more compute units to power, in particular). We measured peaks at 296 watts, against 260 watts on the 1080 Ti. The average sits around 290 watts, quite close to what we observe on a Radeon RX Vega 64. Note also the 25 watts of power consumption at idle: a particularly high value (we usually expect between 5 and 10 watts) that Nvidia attributes to a bug. The firm says it is working on the subject to deliver a fix via its driver.


Absolute power consumption, analyzed on its own, does not mean much. It must be correlated with the level of performance achieved, and appreciated through a value called energy efficiency. And there, the ratio between the number of frames per second achieved in games and the power consumption shows the RTX 2080 Ti to be slightly better than the GTX 1080 Ti. Unsurprisingly, it delivers a much better result than AMD's RX Vega 64.
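To make the metric explicit: energy efficiency is simply frames per second divided by watts. As an illustration using the review's own averages (about 34% more fps in 4K, roughly 290 W against 260 W) and a deliberately arbitrary 50 fps baseline for the GTX 1080 Ti:

```python
# Energy efficiency = performance per watt. The 50 fps baseline is an
# arbitrary placeholder; the +34% and the wattages come from this review.
base_fps = 50.0
cards = {
    "GTX 1080 Ti": {"fps": base_fps, "watts": 260},
    "RTX 2080 Ti": {"fps": base_fps * 1.34, "watts": 290},  # +34% in 4K
}

for name, c in cards.items():
    print(f"{name}: {c['fps'] / c['watts']:.3f} fps per watt")
# -> GTX 1080 Ti: 0.192, RTX 2080 Ti: 0.231 fps per watt.
# The ratio between the two does not depend on the chosen baseline;
# the exact gap in practice varies with the game and the definition.
```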

PERFORMANCE IN GAMES
In games, after several minutes of non-stop testing, the card's real GPU frequency oscillates between 1,395 and 1,470 MHz, for an average of 1,455 MHz, about 50 MHz less than the GeForce GTX 1080 Ti.


Across our panel of games, the GeForce RTX 2080 Ti is on average 34% faster than the GeForce GTX 1080 Ti when games are run in 4K. The newcomer is also 25% ahead of the Titan Xp, the fastest model the Pascal architecture produced. AMD, in other words, pales with its Radeon RX Vega 64, which sits 79% behind on average.




Obviously, the gap narrows as the definition goes down. The RTX 2080 Ti thus maintains a lead over the GTX 1080 Ti of 24% in WQHD and 10% in Full HD, with the processor, a Core i7-8700K, limiting performance more and more. Compared with the Titan Xp, the lead is about 15% and 5% respectively.

IN THE END
The edge of the card lights up in green. Brightness and lighting effects can be adjusted via third-party applications like Evga Precision X1.

With the GeForce RTX 2080 Ti, Nvidia promises a graphics card capable of delivering more than 60 frames per second in 4K. In practice, it is true that many games can be run at more than 70 fps while the rest stay close to 60. A more than convincing result that we can only salute, all the more so as the RTX 2080 Ti is, at the time of writing, the only graphics card on the market capable of this. The various improvements brought by the Turing architecture are bearing fruit, since we reach an average gap of 25% at best compared with the best Pascal card, the Titan Xp. It is possible, however, that this gap will seem insufficient in the eyes of some players who have waited more than two years between the two generations of cards.

Because the problem for Nvidia is that the firm uses here a particularly large chip, most of which is already enabled. Put more clearly, Nvidia still has enough in reserve to offer a Titan-type "supermodel" exploiting all the chip's units, but the difference with the RTX 2080 Ti could not be huge. And make no mistake, the arrival of an even more massive chip can hardly be on the program, as the GPU used here is already enormous. In short, barring a switch to 10 nm, Turing seems to deploy almost all of its strength in the GeForce RTX 2080 Ti.

The dual-fan cooling is both efficient and quiet. Good case ventilation is nevertheless necessary.

As a result, the firm is banking on other areas to sustain this range, with the promise of better rendering thanks to raytracing and, to a lesser extent, DLSS (a system less demanding than conventional antialiasing). Nvidia goes even further, assuring us that it is the video-game industry that is pushing hybrid rendering, not Nvidia itself, which simply wants to offer the hardware to exploit it properly. We summarize the state of these two technologies in the box below.

There remains the question of price. Offered at €1,259, the GeForce RTX 2080 Ti is a particularly expensive graphics card, positioned in the price segment held until now by the Titan. A high price that Nvidia justifies by the premium cooling system and, above all, by the size and complexity of the chip used, which we will not dispute. As such, it offers better for cheaper than the Titan Xp. But we cannot fail to notice that it is at least 70% more expensive than the GeForce GTX 1080 Ti while offering only 34% additional performance. Raytracing and the implementation of units dedicated to deep learning (for DLSS in particular) therefore come at a high price; let us hope the promises on this front are kept. Players equipped with a 4K screen will, unfortunately, have no real alternative.

Hybrid rendering and DLSS: promises for the future
With Turing, Nvidia is not content with offering a more powerful series than the previous one. The company also relies on two other assets supposed to embellish games, starting with hybrid rendering, which combines classic rasterization and raytracing. The promise is simple: games with more realistic effects thanks to hardware acceleration of the raytraced part of the rendering. For now, we have only been able to try a demo made available by Nvidia and a preview of the next version of 3DMark, in addition to testing a few titles at Gamescom. The "beta" or "alpha" nature of these tests does not allow us to draw a definitive conclusion at this stage, but we will return to the subject in a dedicated article.

Nvidia's other idea is DLSS, its new system for reducing aliasing in games. As discussed in our article on the Turing architecture, this system is less resource-intensive than conventional antialiasing while promising the same rendering quality. Once again, we have not been able, at the time of writing, to test this new system thoroughly in real games. DLSS is due to arrive in dozens of games over the coming weeks; we will come back to it as well in a dedicated article.

Finally, Nvidia has also reworked part of the "engine" that manages HDR rendering. Found lacking in some cases on the GeForce GTX 10, it should be more effective here; understand that enabling HDR should have very little impact on performance. Nvidia thus catches up with AMD on this point. For lack of time, we were unable to carry out comprehensive tests on the subject. Same story: we will come back to it soon.


STRONG POINTS:

  • Gaming in 4K at 60 fps finally possible.
  • Energy efficiency.
  • Quiet operation.
  • The promise of hybrid rendering.
  • VirtualLink output.
  • Build quality.

WEAK POINTS:

  • Fans do not stop at idle.
  • Still little visibility on raytracing performance.

CONCLUSION:
Nvidia delivers here a graphics card perfectly suited to gaming in 4K at 60 frames per second or more. A first worth saluting, all the more so as the firm brazenly outdistances the best rival card, the Radeon RX Vega 64. Nvidia also aims for better-quality rendering via raytracing and has spared no effort to achieve it, producing a particularly complex chip that is expensive to manufacture, which is not without consequences for the card's price. Hybrid rendering, although pushed by the industry according to Nvidia, remains a fine promise about which we will need to be both cautious and patient, since the first games bringing this support are not expected before mid-October 2018 at the earliest.