NVIDIA GeForce RTX 3080 Founders Edition English Review – New performance levels at lower costs
The wait is over and the embargo for the NVIDIA GeForce RTX 3080 Founders Edition review has been lifted. Now we can publish the information we have been able to collect in just two days of non-stop testing. NVIDIA has initially announced three video cards using its new architecture, AMPERE, and the RTX 3080 is the first product to be available.
This release marks a milestone for our portal, as we have revamped our graphics card benchmarking system and updated our game list to accommodate more modern titles.
The hype for the new NVIDIA video cards is quite high, and as the title suggests, what surprises does the green company’s new flagship have in store for us? NVIDIA, by the way, has just made its agreement to acquire ARM official.
On this occasion, Luis Padilla (rush) has been responsible for the technical section, where he explains the new AMPERE architecture and the new features NVIDIA’s graphics cards bring. After that, we will continue with the benchmarks I have prepared, along with their respective final analysis.
The NVIDIA GeForce RTX 3080 Founders Edition has come out fresh from Jensen’s kitchen and has been sampled to us for this review.
NVIDIA GeForce RTX 3080 – AMPERE and RTX “RELOADED”
For the “green company”, the NVIDIA GeForce RTX 3080 represents a new stage in the consumer graphics card segment, where it has enjoyed a comfortable lead in the performance tier in recent years. For Jensen Huang’s company there were two major challenges in this generation.
The first was to ensure that the use of Ray Tracing technology was not prohibitive in terms of frames per second and performance for most players, while at the same time achieving visual benefits that could be considered worthwhile in games.
The second was to provide benefits to users beyond gaming, as well as business applications that are currently better served by products such as the A100 or Quadro series. But what is raytracing? To find out, you have to know a little about how graphics cards build the images you see, using both rasterization and ray tracing.
Rasterization is a technique that has been used for years to display three-dimensional objects on a two-dimensional screen while preserving the illusion of depth. Three-dimensional models are built from polygons, usually triangles, and the rasterizer projects those polygons onto the screen and converts them into pixels. Polygons usually intersect each other, and each one carries important information about the composition of the objects, such as their depth. This is why processing the polygons in every frame requires a lot of computing power, which is provided by the video cards.
Raytracing, on the other hand, works with the lighting and shadows of an object to determine its color level and other characteristics. This means that the graphics card must calculate all the light sources in a scene, how these light sources interact with each other, and how they interact with other objects.
This technique has been used in the film industry for many years, but its arrival on the video game consumer market has been thanks to the greater power of video cards such as the RTX series, since raytracing is very taxing in terms of hardware specifications, usually more than rasterization.
For this reason, NVIDIA is not only betting on an increase in “brute force” specifications; it is also refining RTX technology in the new Ampere architecture. In the words of NVIDIA’s CEO, this represents “the biggest generational jump” in its history, claiming that it is even greater than the jump in capabilities from Maxwell to Pascal. To verify this, we will have to look at the data provided by the company before moving on to the benchmarks.
NVIDIA AMPERE Series
| Graphics card | GeForce RTX 2080 Founders Edition | GeForce RTX 2080 Super Founders Edition | GeForce RTX 3080 10 GB Founders Edition |
|---|---|---|---|
| GPU codename | TU104 | TU104 | GA102 |
| GPU architecture | NVIDIA Turing | NVIDIA Turing | NVIDIA Ampere |
| CUDA cores / SM | 64 | 64 | 128 |
| CUDA cores / GPU | 2944 | 3072 | 8704 |
| Tensor Cores / SM | 8 (2nd Gen) | 8 (2nd Gen) | 4 (3rd Gen) |
| Tensor Cores / GPU | 368 | 384 | 272 |
| RT Cores | 46 (1st Gen) | 48 (1st Gen) | 68 (2nd Gen) |
| GPU Boost clock (MHz) | 1800 | 1815 | 1710 |
| Peak FP32 TFLOPS (non-Tensor) | 10.6 | 11.2 | 29.8 |
| Peak FP16 TFLOPS (non-Tensor) | 21.2 | 22.3 | 29.8 |
| Peak BF16 TFLOPS (non-Tensor) | NA | NA | 29.8 |
| Peak INT32 TOPS (non-Tensor) | 10.6 | 11.2 | 14.9 |
| Peak FP16 Tensor TFLOPS with FP16 accumulate | 84.8 | 89.2 | 119/238 |
| Peak FP16 Tensor TFLOPS with FP32 accumulate | 42.4 | 44.6 | 59.5/119 |
| Peak BF16 Tensor TFLOPS with FP32 accumulate | NA | NA | 59.5/119 |
| Peak TF32 Tensor TFLOPS | NA | NA | 29.8/59.5 |
| Peak INT8 Tensor TOPS | 169.6 | 178.4 | 238/476 |
| Peak INT4 Tensor TOPS | 339.1 | 356.8 | 476/952 |
| Frame buffer size and type | 8192 MB GDDR6 | 8192 MB GDDR6 | 10240 MB GDDR6X |
| Memory interface | 256-bit | 256-bit | 320-bit |
| Memory clock (data rate) | 14 Gbps | 15.5 Gbps | 19 Gbps |
| Memory bandwidth | 448 GB/sec | 496 GB/sec | 760 GB/sec |
| Pixel fill rate (Gigapixels/sec) | 115.2 | 116.2 | 164.2 |
| Texture units | 184 | 192 | 272 |
| Texel fill rate (Gigatexels/sec) | 331.2 | 348.5 | 465 |
| L2 cache size | 4096 KB | 4096 KB | 5120 KB |
| Register file size | 11776 KB | 12288 KB | 17408 KB |
| TGP (Total Graphics Power) | 225 W | 250 W | 320 W |
| Transistor count | 13.6 billion | 13.6 billion | 28.3 billion |
| Die size | 545 mm² | 545 mm² | 628.4 mm² |
| Manufacturing process | TSMC 12nm FFN (FinFET NVIDIA) | TSMC 12nm FFN (FinFET NVIDIA) | Samsung 8nm 8N NVIDIA Custom Process |
| Launch price | 799 USD | 699 USD | 699 USD |
AMPERE SM – diving deep into the architectural changes
Let’s start with the Streaming Multiprocessor (SM). The SM is the basic building block of the AMPERE graphics cards, and according to NVIDIA, the AMPERE SM has twice the computing power of the Turing SM.
FP32 (32-bit floating point) execution throughput doubles in Ampere: the SM adds a new datapath that handles FP32 alongside the existing FP32/INT32 path, for a total of 128 FP32 operations per clock cycle per SM.
In other words, Ampere performs 2 shader calculations per clock where Turing achieves 1. As a result, NVIDIA’s new generation offers 30 Shader-TFLOPS compared to 11 in the previous generation.
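As a sanity check on those TFLOPS figures, peak shader throughput follows directly from the spec table: CUDA cores × 2 FLOPs per clock (one fused multiply-add) × boost clock. A quick sketch:

```python
# Peak FP32 shader throughput from the spec table:
# each CUDA core can retire one FMA (2 FLOPs) per clock cycle.

def peak_fp32_tflops(cuda_cores: int, boost_clock_mhz: int) -> float:
    """Peak non-Tensor FP32 TFLOPS = cores * 2 ops/clock * clock."""
    return cuda_cores * 2 * boost_clock_mhz * 1e6 / 1e12

rtx_3080 = peak_fp32_tflops(8704, 1710)        # ~29.8 TFLOPS
rtx_2080_super = peak_fp32_tflops(3072, 1815)  # ~11.2 TFLOPS
```

These match the roughly 30 versus 11 Shader-TFLOPS figures quoted above.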
We also have third generation Tensor Cores. These offer 238 Tensor-TFLOPS in Ampere against 89 in Turing. These cores accelerate deep neural network (DNN) workloads for faster tensor processing, and their greater power makes DNN-based processing of raytraced images more efficient, particularly with DLSS technology, which we will look at later.
Each Streaming Multiprocessor contains four processing blocks, each with a dedicated FP32 datapath and a combined FP32/INT32 datapath. In addition, each SM has 4 third generation Tensor Cores, one second generation ray tracing (RT) core, 128 KB of L1 cache, and four texture mapping units (TMUs). The RT hardware in Ampere delivers 58 RT-TFLOPS, compared to 34 RT-TFLOPS in Turing.
An RTX 3080 contains two GPCs (Graphics Processing Clusters) of 10 SMs each and four GPCs of 12 SMs each. It also houses 5 MB of L2 cache, a GigaThread Engine that schedules blocks of threads, Raster Engines (which convert vectors to pixels), and 10 memory controllers that interface with the GDDR6X memory.
In sum, these improvements should allow, perhaps not twice the performance in every case, but very significant increases in performance when using raytracing and DLSS. The three cores, Shader, Tensor Core and RT Core, work in parallel to increase frames per second (FPS), and reduce latency when generating each frame in a raytraced image.
According to NVIDIA, in a game like Wolfenstein: Youngblood, a raytraced frame generated using shaders alone takes 51 ms. The same frame takes only 12 ms using the new shaders, RT cores and tensor cores together, or 20 ms if shaders and RT cores are used without the tensor cores. This is key to the improvements in Ampere, which make raytracing worthwhile.
Keep in mind that the Turing generation of GPUs had a remarkable performance for games without raytracing, but when this feature was activated there were very noticeable drops in FPS rates, which forced users to disable the feature in many cases. Now NVIDIA is looking to make raytracing worthwhile, but it will be the benchmarks that will tell if it was successful.
NVIDIA DLSS, more performance with lower resolutions
DLSS, or Deep Learning Super Sampling, is a technology that, according to NVIDIA, renders frames at a lower internal resolution and uses AI to reconstruct sharper, more detailed images at the target resolution, with higher FPS rates. It works by combining information from different frames to build the final image, delivering higher image quality and more FPS at the same time.
Ampere’s third-generation tensor core allows for higher performance when using DLSS technology than Turing, and it will be necessary to determine through benchmarks to what extent the use of DLSS helps in games that support it.
NVIDIA also claims that performance per watt is 1.9 times better in Ampere than in Turing. To achieve this, they optimized their design and manufacturing processes, as well as software and algorithms. However, the RTX 3080 has a TGP (Total Graphics Power, NVIDIA’s power consumption measurement) of 320 W, compared to 250 W for the RTX 2080 SUPER.
GDDR6X memory and emphasis on data speeds
The RTX 3080 Founders Edition has 10 GB of GDDR6X memory on a 320-bit bus. This type of memory is quite recent: it was first announced by Micron earlier this month, with the RTX 3090 and RTX 3080 being the first products to use it. Micron claims it achieves higher speeds and higher bandwidths than previous memory types.
According to Micron, GDDR6X memory uses four-level pulse amplitude modulation (PAM4) signaling. This makes it possible to significantly increase memory bandwidth, but at the cost of a lower signal-to-noise ratio (SNR), which requires more advanced error correction schemes, such as Forward Error Correction (FEC).
Micron states that each GDDR6X memory component delivers up to 84 GB/s of bandwidth, enabling total system bandwidths of more than 1 TB/s. The company also claims to have achieved this in a product that is easy to produce in volume, and with lower power per transaction (pJ/bit) than previous versions.
As for the RTX 3080, NVIDIA states that the data transmission speed is double that of conventional GDDR6 memory, a claim in line with what Micron expressed when announcing the product. This will benefit AI inference, raytracing in games, and video processing at 8K resolutions.
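Those bandwidth figures are easy to verify from the spec table: aggregate bandwidth is the per-pin data rate times the bus width, divided by eight bits per byte. A quick sketch:

```python
# Memory bandwidth = data rate (Gbps per pin) x bus width (bits) / 8.

def mem_bandwidth_gb_s(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Aggregate memory bandwidth in GB/s."""
    return data_rate_gbps * bus_width_bits / 8

rtx_3080 = mem_bandwidth_gb_s(19, 320)          # 760 GB/s (GDDR6X)
rtx_2080_super = mem_bandwidth_gb_s(15.5, 256)  # 496 GB/s (GDDR6)
```

Both results match the memory bandwidth rows of the spec table.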
New cooling design and PCB density
The RTX 3080’s cooling solution involves a new plate and cooler design. The Founders Edition uses a flow-through layout that lets air from the fans pass directly through the card, past the PCB. That PCB is also 50% smaller and has a higher component density. NVIDIA says this combination delivers the highest possible thermal performance with less noise.
The front fan, together with the fins that channel the airflow, pushes hot air out of the case, while the exposed fin stack at the other end of the card improves airflow. A hybrid vapor chamber with an integrated heat sink distributes the heat, and a second, flow-through fan at the back of the card returns air into the chassis.
The design is paired with a smaller, denser PCB and the new 12-pin power connector. The RTX 3080 has 18 power phases, which leave headroom for overclocking and, according to the company, achieve greater energy efficiency. Finally, NVIDIA claims the RTX 3080 will be up to three times quieter than the 2080 Founders Edition, with temperatures around 20 degrees lower. These are great promises that will have to be corroborated in the tests.
RTX IO – Reduced loading and file decompression times
In addition to the technology and architecture the RTX 3080 dedicates to graphics processing, NVIDIA will bring RTX IO, a technique that uses the GPU’s processing power to quickly decompress files such as large game assets. It can supplement the CPU in decompressing large files, improving performance and load times even on systems with PCIe Gen4 SSDs.
RTX IO will require developers to integrate their games with this technology, and work is underway with Microsoft to make it compatible with DirectStorage as well. It has no release date at this time.
NVIDIA Reflex, anti-latency optimizations
Another technology introduced with the Ampere GPUs is NVIDIA Reflex, which enables monitoring and reducing system latency in competitive gaming. The company says it applies optimizations to both the GPU and the game itself to shave milliseconds off response times, beyond factors such as the network connection. In esports every millisecond counts, so NVIDIA Reflex could, in theory, give users a competitive advantage. It will be the experience of professional players that determines the success of this technology.
RTX Broadcast, tools for streamers and content creators
NVIDIA RTX Broadcast, on the other hand, comes in handy to help streamers and creators of live content. One of its features was RTX Voice, which allowed noise cancellation in voice recordings and streaming, using AI processing techniques. It worked quite well on Turing cards with RTX, and now it will be taken to a new level.
In this iteration, RTX Broadcast will have tools for live video processing, capable of removing unwanted backgrounds and replacing them as if chroma-key equipment were being used, with RTX Greenscreen. Another tool is RTX AR, which can model a user’s face, track its movements and use it in augmented reality applications, such as controlling characters with facial gestures. There is also RTX Style Filters, which can add filters and other elements to a webcam image.
By using the new Tensor Cores and the Ampere cards’ optimizations for artificial intelligence processing, NVIDIA promises high quality effects and transformations on images and audio in real time for streamers and casters.
And for content creators and users of AI computing power, NVIDIA promises with NVIDIA Studio, enhanced hardware acceleration support for video processing in programs such as Adobe Premiere Pro, Redcine-X Pro, and DaVinci Resolve, with improvements in 8K video rendering and color correction.
Similar improvements are offered in Adobe Photoshop and Lightroom, for improved integration of effects, filters, and working with RAW images. Other fields that can benefit from NVIDIA STUDIO are 3D animation, 3D rendering with Blender and motion blur using hardware, architectural visualization and world design, and video transmission in conjunction with RTX Broadcast.
Unboxing and photos – NVIDIA GeForce RTX 3080 Founders Edition
Taking pictures is not our forte (we hope to improve in the future), but here is the unboxing of the NVIDIA GeForce RTX 3080 Founders Edition graphics card. The packaging is quite elegant and the card’s look is very industrial.
On a personal level, I had my doubts about its physical appearance, but having it in my hands, the elegant design of the new Founders Edition has charmed more than one of us.
Nevertheless, we placed more emphasis on the temperature and noise results, so we suggest you read the corresponding section.
If you purchase a Founders Edition, please note the following (12-pin connector)
We always suggest avoiding “daisy chain” or Y connections when powering your video cards: run a separate PCIe power cable to each connector on the card.
With the new Founders Edition, the famous 12-pin PCIe connector makes its first appearance, and as mentioned in the previous paragraph, NVIDIA suggests using a dedicated 8-pin PCIe cable for each connection (no daisy chaining).
Synthetic benchmarks and games (1080p, 1440p, 2160p)
As you saw in our new list of Gaming Benchmarks 2020 GPUs, we have updated from scratch both our list of games, as well as the hardware we will use from now on for video card reviews.
Please note that we do not have some reference models, such as the GeForce RTX 2080 Ti Founders Edition, and have had to use the GIGABYTE GeForce RTX 2080 Ti GAMING OC, which has on average 3% more performance than the Founders.
Luckily, we have the GeForce RTX 2080 SUPER and 2070 SUPER Founders Edition, and they will be appearing in today’s benchmarks. We will be adding more data next week, such as results from the GeForce RTX 2060 SUPER Founders Edition and a card representing the GeForce RTX 2060 (EVGA XC Ultra).
But where are the AMD Radeon NAVI RX 5000 on these benchmarks?
The answer is simple. We did review the AMD Radeon RX 5700 XT, but that card was only on loan (not sampled), so we no longer have it to benchmark. Note that our earlier RX 5700 XT data cannot be compared with this new data, but we are making efforts to borrow one again and complete our test bench.
It does not take a genius to know that the GeForce RTX 3080 competes at a much higher level than AMD’s flagship video card, the Radeon RX 5700 XT, so for the moment we can take a few days before analyzing them one against the other. The NVIDIA GeForce RTX 3070 will be more comparable against the RX 5700 XT.
A reminder about statistics…
Before detailing our configuration system, let’s refresh a bit what AVG FPS, 1% LOW and 0.1% LOW are.
AVG FPS: As the name implies, it is the average of the frames per second within a specific sequence. It is the most widely used measure, but it does not tell the whole story, since FPS drops exist.
1% LOW: Within the full set of frame data for a specific sequence (ordered from worst to best), the 1% LOW is the value representing the slowest 1% of frames. In the simplest terms, it reflects the FPS drops you actually feel during the sequence.
0.1% LOW: The same idea applied to the slowest 0.1% of frames, capturing the most severe stutters.
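For readers who want to reproduce these metrics, here is a minimal sketch of how AVG FPS and the percentile LOWs can be computed from a per-frame time capture. The frame-time list below is invented purely for illustration.

```python
def fps_metrics(frametimes_ms, pct=0.01):
    """Average FPS plus the average FPS of the slowest `pct` of frames."""
    avg_fps = 1000 * len(frametimes_ms) / sum(frametimes_ms)
    slowest = sorted(frametimes_ms, reverse=True)  # worst frames first
    n = max(1, int(len(slowest) * pct))
    low_fps = 1000 * n / sum(slowest[:n])          # FPS over the slowest pct
    return avg_fps, low_fps

# Hypothetical capture: mostly 60 FPS (16.7 ms) with a few dips to 30 FPS.
frames = [16.7] * 990 + [33.3] * 10
avg, low_1pct = fps_metrics(frames)        # ~59.3 avg, ~30.0 for 1% LOW
_, low_01pct = fps_metrics(frames, 0.001)  # 0.1% LOW uses the same idea
```

The averages hide the dips; the percentile lows expose them.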
Test bench (XanxoGaming GPU Benchmark 2.0)
Our renewed test bench uses the best gaming processor available today, the Intel Core i9 10900K at 5.3 GHz. We chose this processor because it generates the least bottleneck for the system, and we have manually overclocked it to 5.3 GHz on all cores, with a 5.0 GHz Uncore.
The focus is on achieving 100% performance of the video card, NVIDIA GeForce RTX 3080 Founders Edition.
As for the dilemma of PCI Express 3.0 versus 4.0: according to internal testing by the green company, it is not a determining factor, and the processor paired with the video card weighs far more on the results.
We will try to run controlled tests with the Ryzen 9 3900X to see whether PCI Express 4.0 has any performance impact on “rasterization” and “rasterization + Ray Tracing/DLSS”, but for now the consensus is no.
For future technologies, such as RTX IO, there could be some positive implications of having PCI Express 4.0 (for storage).
CPU: Intel i9 10900K @ 5.3GHz All Core // Uncore 5GHz
Motherboard: Z490 AORUS MASTER (F6b)
RAM: G.SKILL FlareX 3200 MHz CL14 4x8GB
GPU (the card under test): NVIDIA GeForce RTX 3080 Founders Edition
Operating System: Windows 10 Home Version 2004
Liquid Cooling system: Custom Water Loop (EK+Bitspower)
SSD: Crucial BX300 120GB + Silicon Power P34A80 1TB
Driver: NVIDIA Press Driver
Power supply: Seasonic Prime 1300W Platinum
3DMark synthetic tests (Firestrike Ultra, Timespy Extreme and Port Royal)
Our benchmarks include three synthetic load tests: Firestrike Ultra, Timespy Extreme and Port Royal. We always tell users that these synthetic workloads do not reflect real-world game performance, hence the “synthetic” label, but some people still take them as a reference.
Firestrike Ultra (DX11)
Timespy Extreme (DX12)
Port Royal (Ray Tracing)
Gaming – Rasterization
All tests are performed in the highest quality available, unless otherwise specified.
Assassin’s Creed: Origins (1080p, 1440p, 2160p)
Game engine: AnvilNext 2.0
Battlefield V DX11 (1080p, 1440p, 2160p)
Game engine: Frostbite 3
Borderlands 3 (1080p, 1440p, 2160p)
Game engine: Unreal Engine 4
Control (1080p, 1440p, 2160p)
Game engine: Northlight Engine
Death Stranding (1080p, 1440p, 2160p)
Game engine: Decima
DOOM Eternal (1080p, 1440p, 2160p)
Game engine: Id Tech 7
F1 2020 (1080p, 1440p, 2160p)
Game engine: EGO Engine 3.0
Final Fantasy XV
Game engine: Luminous Studio 1.5
Forza Horizon 4 DX12 (1080p, 1440p, 2160p)
Game engine: Forzatech
Metro Exodus (1080p, 1440p, 2160p)
Game engine: 4A Engine
Prey (1080p, 1440p, 2160p)
Game engine: CryEngine 4
Red Dead Redemption 2 (1080p, 1440p, 2160p)
Game engine: Rockstar Advanced Game Engine (RAGE)
Shadow of the Tomb Raider DX12 (1080p, 1440p, 2160p)
Game engine: Foundation
Shadow of War (1080p, 1440p, 2160p)
Game engine: LithTech Jupiter EX
Strange Brigade DX12 + Async (1080p, 1440p, 2160p)
Game engine: Asura
The Witcher 3 (1080p, 1440p, 2160p)
Game engine: REDengine 3
Gaming – Ray Tracing and Deep Learning Super Sampling (DLSS)
The games we will use to measure the performance of Ray Tracing and DLSS are somewhat limited, but we will be expanding the selection in the future, as the trend is a greater adoption of these technologies by developers.
Metro Exodus will be one of the titles tested; in Full HD it does not offer a DLSS option, although at 1440p and 2160p DLSS is available on the video cards we tested today.
On the other hand, the game that has so far best implemented the new technologies NVIDIA promoted with the launch of the TURING video cards is undoubtedly CONTROL.
CONTROL combines beautiful rasterized graphics with Ray Tracing elements and AI-based super sampling through inference, using NVIDIA DLSS.
Remember that Control has been updated and now uses DLSS 2.0.
Metro Exodus (Ray Tracing and DLSS)
Metro Exodus was one of the first titles to receive Ray Tracing after the release of Turing and the GeForce RTX 2000 series. It also received DLSS. Today we will not be analyzing the graphic quality of both options (RT and DLSS), but rather their impact on performance, as:
–Rasterization only (part of our game benchmarks package)
–Ray Tracing plus Deep Learning Super Sampling (DLSS)
As we mentioned previously, at 1080p there is no DLSS option for the cards we were able to test in the little time we had for this review, but there is Ray Tracing. 1440p and 2160p offer both options.
Productivity (Vray and Luxmark)
We received our sample on Monday, September 14 at noon and have had less than 48 hours to produce this review, so the productivity tests we can offer for launch are quite limited. Nevertheless, we will expand this list next week, with live testing on our Facebook channel and live testing of RTX Broadcast while we play.
From what little we have tested, we can conclude that the NVIDIA GeForce RTX 3080 is a monster for productivity, as long as the software takes advantage of hardware acceleration through CUDA, Ray Tracing and AI inference. ChaosGroup’s Vray Benchmark and Luxmark 4.0 (which uses OpenCL path tracing on the GPU) will be the first two tests showing how capable the GeForce family is becoming for professional workflows.
Vray NEXT – GPU Only
By the way, it is not a mistake that the GeForce RTX 2070 SUPER performs a little better than the RTX 2080 SUPER in this test. The difference between the RTX 2080 Ti and the new RTX 3080 in Vray is substantial.
Temperatures/sound levels/consumption and overclock
In this section we analyze the cooling solution of the NVIDIA GeForce RTX 3080 Founders Edition. NVIDIA has significantly changed the design for this release, so temperatures and noise levels will be of particular interest.
We will also be looking at the power consumption of the RTX 3080 compared to its predecessor, the GeForce RTX 2080 Ti.
The card arrived on Monday September 14th at noon, so we have had little time to test it. Therefore, we will be expanding the testing with overclocking later on.
For this test, we used HWInfo64 (the most current beta version) and took into consideration the maximum temperature reported by the sensor, in a torture test with Metro Exodus at a resolution of 2160p for half an hour without stopping.
For those who are curious, we did not use RT or DLSS for this graph, but we did run tests with both options enabled, and they had zero impact on temperatures, at least for this title.
The temperatures are the delta over the room temperature in degrees Celsius.
We measure the noise about 20 centimeters away from the video card with a decibel meter. We cannot control the ambient noise, but the measurements are usually made late at night and the room noise is always about 37-38 decibels.
Here is the comparison table.
We use a wall-mounted watt meter (for the moment) for this test, which measures the total consumption of the system. We ran the test in continuous loops and took the reading after about 15 minutes of running.
Remember, the consumption is measured at the wall, so the actual consumption (what the system draws) is lower, as there is efficiency loss in the conversion. We use a Seasonic Platinum unit (Prime 1300W), so its efficiency should be close to 92% (we use 220 V in Peru).
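As a hypothetical worked example (the 550 W reading below is invented, not one of our measurements), the real system draw can be estimated from the wall figure and the PSU efficiency:

```python
# Estimated real system draw = wall reading x PSU efficiency.
# 0.92 is the rough efficiency figure quoted above for the
# Seasonic Prime at 220 V; real efficiency varies with load.

def system_draw_w(wall_watts: float, efficiency: float = 0.92) -> float:
    """Approximate power actually delivered to the system."""
    return wall_watts * efficiency

draw = system_draw_w(550)  # a hypothetical 550 W wall reading -> ~506 W
```

The wall meter therefore overstates what the components themselves consume.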
For people who were “panicking” about needing 1200 W and similar supplies, we can assure them that NVIDIA’s suggestion of 750 W (from a good quality unit) will be more than enough for gaming.
Remember that we have a Core i9-10900K processor overclocked to 5.3 GHz with a quite high voltage and a custom loop.
Due to time constraints, we cannot yet show final overclocking results, BUT it seems the sample we received was not so lucky in the silicon lottery: we could only raise the Core Clock by +70/80 MHz, with a memory offset of +800/900 MHz in Afterburner.
We have been informed that another sample managed a +150 MHz Core Clock overclock, so results will depend quite a bit on the silicon lottery. We hope the top-of-the-range AIB models will come with better binning, although we have yet to test that personally.
Final analysis – Higher performance at a lower price and a universe (ecosystem) called RTX
It’s always good to doubt marketing, which is why independent reviews from media that can back their results with a methodology matter. That is also why “review outlets” should remain as independent as possible.
Having said this, we will analyze some things that people will probably “misinterpret” when reading reviews.
How to compare apples to apples – cost…
Although NVIDIA stated that its “FLAGSHIP” is the RTX 3080, there are two ways to compare it.
-Compare it with Turing’s flagship, the GeForce RTX 2080 Ti (1199 USD) or…
-Compare it with the GeForce RTX 2080 SUPER (699 USD)
We kept the second option because it makes the MOST SENSE, since the launch or reference price of the GeForce RTX 2080 SUPER, is 699 USD. The price of the new flagship at the moment is also 699 USD (GeForce RTX 3080).
The price of the GeForce RTX 2080 Ti started at 999 USD, with a much more realistic price of 1199 USD for the Founders Edition. The high-end models, which were recommended for purchase were above 1199 USD.
Another way to look at it, is that the cost to get a flagship video card from NVIDIA, has dropped from 1199 USD to 699 USD, a 41.70% reduction in price.
It is worth reminding you that these prices are from the United States, so, due to tariffs and taxes, the price reduction can be more than 41.70%.
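The arithmetic behind that 41.70% figure is straightforward:

```python
# Flagship launch price: RTX 2080 Ti Founders (1199 USD) vs RTX 3080 (699 USD).
old_price, new_price = 1199, 699
reduction_pct = (old_price - new_price) / old_price * 100
print(round(reduction_pct, 2))  # 41.7
```

In markets with import tariffs, the same calculation on local prices can yield an even larger percentage.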
In conclusion, we will put the performance differences with the video cards we have compared today, but the fair comparison of generation change, would be with the GeForce RTX 2080 or 2080 SUPER (for its price).
Performance differences in rasterization (RTX 3080 vs. 2080 Ti, 2080 SUPER and 2070 SUPER)
If you checked our performance tables at 1080p, you probably noticed something: in several games, despite the processor being overclocked to 5.3 GHz, we ran into CPU bottlenecks.
We will include the three resolutions, but the two that need to be taken into account are 1440p and 2160p.
NVIDIA GeForce RTX 3080 Founders Edition (699 USD) versus:

| Card | 1080p | 1440p | 2160p |
|---|---|---|---|
| GIGABYTE GeForce RTX 2080 Ti GAMING OC (1250 USD) | 14.16% | 22.98% | 28.46% |
| NVIDIA GeForce RTX 2080 SUPER (699 USD) | 31.32% | 48.16% | 60.70% |
| NVIDIA GeForce RTX 2070 SUPER (499 USD) | 49.61% | 70.38% | 86.25% |
The best scenario would have been to buy a 2080 Ti Founders Edition, but we don’t have one in stock. For your reference, there is approximately a 3% performance difference in favor of the GIGABYTE RTX 2080 Ti GAMING OC.
As mentioned, there is a bottleneck in our 1080p testing on some of the titles we tested and the difference between the GeForce RTX 3080 and RTX 2080 Ti is 14.16% in favor of the RTX 3080. At 1440p, the RTX 3080 has a performance of almost 23% more at average FPS and 28.46% at 2160p.
We could say that the GeForce RTX 3080 Founders Edition would have a 25% improvement in 1440p and 31% in 2160p compared directly to a 2080 Ti Founders.
Now the video card that we are most interested in comparing (apples to apples) is the GeForce RTX 3080 versus the 2080 SUPER.
The improvement in 1080p, 1440p and 2160p is 31.32%, 48.16% and 60.70% respectively (average FPS) using rasterization only.
For the first time, I think I can say that 4K UHD (2160p) gaming is going to be a reality even in AAA games using rasterization. While a 2080 Ti could exceed 60 average FPS in several demanding titles, its 1% LOW could drop below sixty, which made the 2160p experience less pleasant.
With this generational improvement at a lower cost, 2160p gaming in demanding titles (95% of them) should stay above 60 FPS, dips included.
Finally, the RTX 3080 has an improvement of 49.61%, 70.38% and 86.25% compared to the GeForce RTX 2070 SUPER (1080p, 1440p, 2160p).
These generational comparison figures are higher compared to the graphics cards that came out before the NVIDIA “SUPER” refresh, i.e. the regular RTX 2080 and RTX 2070.
What about Ray Tracing and DLSS?
We will summarize it in a few words…
The improvement of the RTX 3000 Ampere series using Ray Tracing combined with DLSS is above the improvements we see in rasterization alone. The increase in, and redesign of, the specialized cores for Ray Tracing (RT Cores) and AI-inference hardware acceleration for DLSS (Tensor Cores) play a strong role in AMPERE.
Metro Exodus + Control (1440p, 2160p – Sum of Average FPS)
Metro Exodus uses an older version of NVIDIA DLSS, so we preferred to treat Control as a separate case, since it enjoys several benefits of DLSS 2.0; but we included Metro Exodus because, despite its older RT and DLSS implementation, it is part of the RTX family of games.
NVIDIA GeForce RTX 3080 Founders Edition (699 USD) versus:

| Card | 1440p | 2160p |
|---|---|---|
| GIGABYTE GeForce RTX 2080 Ti GAMING OC (1250 USD) | 29.05% | 31.78% |
| NVIDIA GeForce RTX 2080 SUPER (699 USD) | 61.54% | 67.86% |
| NVIDIA GeForce RTX 2070 SUPER (499 USD) | 86.29% | 95.83% |
We will be adding more titles in the future (Doom Eternal with RTX and Cyberpunk 2077 are on the list), but comparing these two games, the improvement the RTX 3080 shows versus the RTX 2080 SUPER is higher than the rasterization-only average: 61.54% at 1440p and 67.86% at 2160p.
The rest of the comparisons like the RTX 2080 Ti are in the table above, but don’t forget also the historical price of each one of them, especially on its release date.
Control – the game with the best Ray Tracing and DLSS implementation so far
Control is one of the most artistically generous games of recent times, and its implementation of Ray Tracing and DLSS is one of the best showcases of the gamble NVIDIA took with AMPERE. This game uses NVIDIA DLSS 2.0; here is the percentage difference of the RTX 3080 Founders Edition versus the rest of the cards tested today.
|NVIDIA GeForce RTX 3080 Founders Edition (699 USD) versus|1440p|2160p|
|GIGABYTE GeForce RTX 2080 Ti GAMING OC (1250 USD)|35.48%|37.25%|
|NVIDIA GeForce RTX 2080 SUPER (699 USD)|68.00%|75.00%|
|NVIDIA GeForce RTX 2070 SUPER (499 USD)|90.91%|100.00%|
Once again, this generation shines where the RT Cores and Tensor Cores (Ray Tracing and AI inference) are concerned, even against the GeForce RTX 2080 Ti, with gains of approximately 35%.
The point of focus, the comparison with the GeForce RTX 2080 SUPER, shows a speed-up of 1.68x at 1440p and 1.75x at 2160p.
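Converting a percentage gain into a speed-up multiplier is simple arithmetic; a quick sketch using the Control percentages for the RTX 2080 SUPER comparison from the table above:

```python
# Convert the percentage improvements of the RTX 3080 over the RTX 2080 SUPER
# (Control, from the table above) into speed-up multipliers.
gains_percent = {"1440p": 68.00, "2160p": 75.00}

for resolution, pct in gains_percent.items():
    multiplier = 1 + pct / 100  # e.g. 68% faster -> 1.68x
    print(f"{resolution}: {multiplier:.2f}x")  # prints 1.68x and 1.75x
```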
Cost per FPS (Rasterization) – GeForce RTX 3080 offers a massive price reduction
Note: It is 3AM, I hope I have not made a mistake in the following calculations:
Cost per FPS - NVIDIA AMPERE RTX 3080 Founders Edition
|Graphics card|1080p|1440p|2160p|
|NVIDIA GeForce RTX 3080 Founders (699 USD)|$3.47|$4.15|$6.77|
|GIGABYTE GeForce RTX 2080 Ti GAMING OC (1250 USD)|$7.08|$9.14|$15.55|
|NVIDIA GeForce RTX 2080 SUPER (699 USD)|$4.56|$6.16|$10.88|
|NVIDIA GeForce RTX 2070 SUPER (499 USD)|$3.70|$5.05|$9.00|
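The cost-per-FPS figures above are simply each card's price divided by its average FPS at a given resolution; a minimal sketch of the calculation (the FPS value below is illustrative, not one of our measured results):

```python
# Cost per FPS = card price / average FPS at a given resolution.
def cost_per_fps(price_usd: float, avg_fps: float) -> float:
    return round(price_usd / avg_fps, 2)

# Illustrative example: a 699 USD card averaging 201 FPS at 1080p
print(cost_per_fps(699, 201))  # -> 3.48 (USD per frame)
```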
The chart speaks for itself. The GeForce RTX 2080 Ti was NVIDIA’s flagship card and the best of the best for the average consumer (Titan RTX aside). Even so, its price/performance efficiency was not very good, as is usually the case with top-of-the-line products. We expect the same from the GeForce RTX 3090, though it will certainly be a gamer’s delight and a powerful productivity tool.
With the launch of the NVIDIA GeForce RTX 3080 Founders Edition we are looking at a new level of performance, one that some will perhaps underestimate because they “expected more than RTX 2080 Ti performance”, but one that arrives with a substantial price cut.
I repeat: the cost-per-FPS table in USD tells this story.
We have not factored in Ray Tracing and DLSS, but you don’t need to be a NASA scientist to realize that their cost per FPS is even lower (that is, better), since the improvement from these two technologies is greater than in rasterization alone.
Productivity – What we’ve seen so far
Given the short time available, LuxMark has been the only path-tracing engine we have tested. The improvement in this particular workload is quite impressive: in certain scenes there is a two-fold gain compared to the GeForce RTX 2080 SUPER.
We will be testing other workflows in the coming days, but the gains in tools that use Ray Tracing (and probably DLSS) are substantial.
A quick note about consumption
We hope to eventually get the new tool NVIDIA has launched to measure power consumption in hardware (PCAT); for now we still use total system consumption, measured at the wall.
With the change to Samsung’s 8N manufacturing process (an 8nm node customized for NVIDIA), in our test the GeForce RTX 3080 consumes 2.45% more than the GeForce RTX 2080 Ti while delivering a 24% performance improvement in this title.
It is not the most accurate method of comparison, but it gives us an idea of the efficiency gains of the new manufacturing process.
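From those two wall numbers we can sketch a rough performance-per-watt estimate (remember this is total system draw, so it only approximates the GPU's own efficiency gain):

```python
# Rough efficiency estimate from our wall-power measurements:
# performance up 24%, total system power draw up 2.45%.
perf_gain = 1.24      # RTX 3080 performance relative to RTX 2080 Ti
power_gain = 1.0245   # relative total system power draw

efficiency_gain = perf_gain / power_gain  # ~1.21
print(f"~{(efficiency_gain - 1) * 100:.0f}% more FPS per watt")
```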
What does the launch of the NVIDIA GeForce RTX 3080 leave us with?
Let’s start by elaborating on the second idea in this section: a universe, or “ecosystem”, called RTX. With the launch of Turing, the bet NVIDIA made is now bearing fruit. This launch marks a new era in gaming, with a title as popular as Fortnite adopting RTX in its entirety (Ray Tracing and DLSS).
And not only that: it has created a standard that consoles are only now adopting. Fortnite RTX, the game we just mentioned, will be the trigger that brings Ray Tracing and DLSS to the masses.
And since Unreal is a widely used game engine, we hope games built on UNREAL will be able to adopt Ray Tracing and DLSS as graphics options more easily.
Also, as we mentioned at the beginning, this release is not only about improvements in rasterization, ray tracing performance and DLSS; it also brings new tools for gamers.
We do not have the necessary equipment for controlled testing of NVIDIA Reflex, but it is quite an interesting option and it is available with this release.
What we did have time to test was RTX Broadcast, and in general we liked it. We advise you to check out the article rush has prepared on RTX Broadcast (which will be expanded further).
As a spoiler, we can say that although there are some things to improve/criticize in RTX Broadcast, there are functions within the software that work very well (we tested it with a GeForce RTX 2060).
Yes, it is compatible with RTX 2000 and 3000 series.
RTX is much more than just Ray Tracing, and it ties strongly into the goal NVIDIA has set for itself: to accelerate the world with artificial intelligence and change a paradigm that has stood for generations, a gaming world built on rasterization alone. But…
What about AMD?
As we stated in the benchmarks section, the Radeon RX 5700 XT used in our earlier review was on loan. Although we do not currently have hardware to re-test the RX 5700 XT with our new test suite, the RX 5700 XT simply does not compete with the new GeForce RTX 3080.
We are still in the process of getting one for our database, but the release of this card, along with the upcoming GeForce RTX 3070, makes every mid/high-end RDNA1 card lose a lot of value.
Even though these two paragraphs will inflame the AMD fanboys, that is the reality. Beyond that, RDNA1’s year on the market has had many highlights, but also many bugs and driver problems that persist, on a smaller scale, even today.
On the other hand, this is not good news for anyone (meaning the end user), since we want NVIDIA to have competition, both to keep prices competitive and to keep innovation moving.
I would like to add in this section that there have been no significant changes to the NVIDIA encoder (NVENC) in this generation; it remains the same one used in Turing. Luckily, the Turing/Ampere NVENC is a very high-quality encoder with a small performance cost.
It raises a small personal red flag for me that perhaps there were no improvements because of the lack of competition from AMD in this area (AMD’s AMF is simply not recommended for hardware-accelerated H264 streaming compared to NVENC or x264). This is supposition, of course, not assertion.
AMD has a big battle ahead, because it is competing not only with improvements in rasterization, ray tracing and AI-inference hardware acceleration, but also with the army and ecosystem that revolve around NVIDIA RTX.
Software development, developer support and reinvented performance in professional workflows could be the Achilles’ heel of the RDNA2 graphics cards.
It is my fear (and NVIDIA’s luck, if I am right) that RDNA2 will end up competing against TURING and not AMPERE. In a few weeks AMD will announce more, but we will not know how it compares with AMPERE until who knows when…
A last critique (and the possible strategy from NVIDIA for this generation)
If I have any criticism of this release, it’s about RTX IO (which is why I didn’t mention it earlier). On paper the feature is more than interesting, but it is not available at launch. We will have to wait some time to see the importance and the changes this feature brings to GeForce RTX cards (yes, it is compatible with the 2000 series too).
On the one hand, I understand that it is an essential part of AMPERE and one of the reasons for moving up to PCI Express Gen 4.
The possible strategy for this generation (speculating a bit here) is that, if RDNA2 does not offer top-of-the-line competition at the GeForce RTX 3080 and 3090 level, NVIDIA will maximize profits with the new graphics cards. Mind you, that is not a bad thing, and I always remind you (the end users) that every company seeks its margins; the best answer for the end user is competition.
That’s why being a blind fanboy of one brand/company or another, is not good.
Interestingly, the GeForce RTX 3080 sits at a good price point, and there is a tremendous gap between its $699 USD and the $1499 USD of the GeForce RTX 3090 where NVIDIA could launch a product if competition appears.
Over time, NVIDIA could accumulate silicon that is not good enough for a GeForce RTX 3090 but better than a GeForce RTX 3080, and launch a series in between. Obviously, I don’t expect something like that in the near future…
Pros:
-New king of gaming on the market, surpassing the GeForce RTX 2080 Ti, NVIDIA’s former flagship.
-Gigantic price reduction compared to the old flagship RTX 2080 Ti (1199 USD versus 699 USD).
-Comparing apples to apples, the GeForce RTX 3080 (699 USD) offers nearly 50% improvement in rasterization at 1440p and 60.70% at 2160p versus the GeForce RTX 2080 (699 USD).
-The improvements in Ray Tracing and DLSS are tremendous.
-Up to twice the improvement in productivity applications where RT Cores and/or DLSS are used (we want to increase more tests).
-NVIDIA GeForce RTX 3080 Founders Edition noise is quite low using the factory fan curve.
-New level of performance for the price, “dynamiting” the previous NVIDIA flagship, the RTX 2080 Ti.
-Quite large RTX ecosystem/omniverse (RTX Broadcast, NVIDIA Reflex, RTX IO).
-Improved power consumption per FPS.
Cons:
-RTX IO not available at launch.
-Slightly elevated temperatures using the default fan curve (good news for AIB partners).
-Our unit didn’t overclock very well (although that could simply be the silicon lottery).
The only variable we have yet to test is the upcoming NVIDIA GeForce RTX 3090 Founders Edition. We don’t know if we will have a test sample by September 24th, but it will be the replacement for the TITAN class (now under the GeForce name).
The only thing we can say is that we have NVIDIA content for several more weeks.
The NVIDIA GeForce RTX 3080 Founders Edition receives our platinum award and our recommendation, as well as the performance award, the price/performance award (for setting a new benchmark) and the aesthetics award. The new industrial finish makes the new Founders look like a work of art.
I hope AMD will be competitive (that is always good), and if that doesn’t happen with RDNA2 at the top of the range, I hope NVIDIA and the right people within the company will continue to innovate and not rest on their laurels, because we end users want an NVIDIA that is always at the forefront of innovation.
If you liked this article, you help us a lot by hitting the like button and sharing. If you have any questions, you can follow us on our social networks or support us to continue creating content through Patreon: