iXperiential Tech Blog: GPUs and the NVIDIA RTX 3000 Series

iXperiential Media
4 min read · Sep 15, 2020

With NVIDIA’s recent release of the RTX 3000 series GPUs, performance has more than doubled compared to the previous-generation RTX 2000 series. For iXperiential this means half of our staff will be scrambling to upgrade their workstations, but it also means that clients can look forward to a higher level of performance and much richer, true-to-life graphics.

NVIDIA RTX 3000 Series GPUs

This article takes a deeper look at the technology behind NVIDIA’s powerful new line of graphics cards, but first, a bit of history:

When they were first introduced, shaders were programs executed on the graphics card (GPU) that modified the color of a pixel and/or the color and position of a vertex when rendering a 3D scene. They were used to create effects like bloom or depth of field. These days they are so integrated into the rendering pipeline that even the most basic UE4 material uses a truckload of shaders. Shaders can even be used for non-rendering tasks like physics. At any given time, an engine like Unreal or PlayCanvas has a ton of shaders running on the GPU, and the faster the GPU can compute them, the higher the framerate.

Those shaders are often made of complex mathematical operations involving floating-point numbers, like the X, Y, Z coordinates of a vertex. Hence the standard measure of GPU performance is the FLOPS, for Floating-point Operations Per Second: how many operations on floating-point numbers the chip can perform each second. The same standard is used to measure pretty much any type of computing unit, from supercomputers to Raspberry Pis.
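As a rough back-of-the-envelope sketch, a GPU’s theoretical peak FP32 figure is conventionally estimated as cores × clock × 2 (a fused multiply-add counts as two floating-point operations per cycle). The core count and boost clock below are NVIDIA’s published RTX 3080 specs, used purely as an illustration:

```python
def peak_tflops(cuda_cores: int, boost_clock_ghz: float) -> float:
    """Theoretical peak FP32 throughput in TeraFLOPS.

    cores * clock(GHz) gives billions of cycles-worth of lanes per second;
    * 2 counts a fused multiply-add as two floating-point operations;
    / 1000 converts GFLOPS to TFLOPS.
    """
    return cuda_cores * boost_clock_ghz * 2 / 1000

# RTX 3080: 8704 CUDA cores at a 1.71 GHz boost clock (published specs)
print(round(peak_tflops(8704, 1.71), 1))  # -> 29.8, matching the figure above
```

The same arithmetic with each card’s core count and clock reproduces the TeraFLOPS numbers in the list below.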

· The GeForce GTX 1080, introduced in 2016 at $599, output 8.9 TeraFLOPS (tera = one trillion).

· The GeForce RTX 2080, introduced in 2018 at $699, output 10 TeraFLOPS.

· The GeForce RTX 3080, just announced at $699, outputs 29.8 TeraFLOPS.

For the new generation of NVIDIA GPUs, that means roughly three times the theoretical performance for the same price. Early benchmarks are talking about a 70 to 80% performance boost in-game at 4K compared to the 2080. This level of performance is groundbreaking. The next tier below the 3080, the RTX 3070 (which will be sold at $499), should outpace the 2080 Ti sold at $1,199.

Rounding out the series, there is also an RTX 3090 sold at $1,499 that embeds 24 GB of VRAM and is bound to become the new standard for GPU rendering, replacing the 2080 Ti.

Earlier this summer we wrote an article about Unreal Engine 5; as mentioned there, the engine will use 3D models made of millions of polygons and 8K textures. This means that 3D models will take a lot of disk space… but nobody wants to download a 300 GB interactive experience. Instead, data will be compressed, most likely using an algorithm like LZMA or Kraken. The only issue is that GPUs don’t know how to use compressed data, so assets need to be decompressed on the fly before being sent to the GPU, which has two problems:

  1. It takes a lot of CPU time.
  2. It is inefficient: the CPU can only access system memory, which means assets are decompressed into system memory and then copied into video memory.
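The two-hop path above can be sketched as a toy model, with zlib standing in for LZMA/Kraken and plain Python objects standing in for the two memory regions (names are illustrative, not a real graphics API):

```python
import zlib

def load_asset_via_cpu(compressed: bytes) -> bytearray:
    """Toy model of the traditional asset-loading path.

    Hop 1: the CPU decompresses into system memory.
    Hop 2: the result is then copied again into video memory.
    Both steps burn CPU time and memory bandwidth.
    """
    system_memory = zlib.decompress(compressed)  # hop 1: CPU decompress
    video_memory = bytearray(system_memory)      # hop 2: redundant extra copy
    return video_memory

texture = b"texel data " * 1000
compressed = zlib.compress(texture)
assert bytes(load_asset_via_cpu(compressed)) == texture
```

RTX IO’s promise, described below, is to eliminate both hops by decompressing directly into video memory.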

The RTX 3000 series introduces RTX IO (for Input/Output), a revolution in computer graphics. Since modern SSDs and modern GPUs use the same bus, PCI Express, RTX 3000 cards can load compressed data directly from the SSD and decompress it straight into video memory, bypassing the CPU and system memory and leaving them available for better things. And the new GPUs can do it blazingly fast. Another bit of good news is that this technology will be cross-platform: Sony uses the same trick for the PS5 and Microsoft for the Xbox Series X. In fact, Microsoft calls it DirectStorage, and RTX IO is based on it. The bad news is that not only will you need an RTX 3000 card to use it, but games will also have to implement the technology. It is almost certain, though, that Epic Games will implement it in Unreal Engine 5.

Deep Learning Super Sampling (DLSS) is also getting some updates. The idea behind DLSS is to render your game at a lower resolution and use AI to upscale the result to a higher resolution in real time. With DLSS 1.0, a network had to be trained for each game, which was quite the process. With DLSS 2.0, all games share the same neural network, and the result is impressive: games can render internally at 1080p and be output at 4K with no notable difference from native 4K. This is a huge GPU saving! And now NVIDIA is introducing DLSS 2.1, which will work in VR. DLSS is compatible with both the RTX 2000 and 3000 series.
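The size of that “huge GPU saving” falls out of simple pixel arithmetic: 4K has exactly four times the pixels of 1080p, so rendering internally at 1080p means shading only a quarter of them and letting DLSS upscale the rest:

```python
def pixels(width: int, height: int) -> int:
    """Total pixel count of a resolution."""
    return width * height

# Fraction of pixels actually shaded when rendering internally
# at 1080p instead of native 4K (3840x2160)
ratio = pixels(1920, 1080) / pixels(3840, 2160)
print(ratio)  # -> 0.25, i.e. 75% fewer pixels to shade per frame
```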

This all may mean more to some readers than others, depending on their understanding of the processes and components that make up a GPU. But the days when a graphics card lived solely in the mind of a PC gamer obsessing over overclocking stats, GPU benchmarks, and teraflop counts are behind us. More recently, NVIDIA has become a bit of a household name, boasting one of the highest-performing technology stocks in the market. Their GPUs are used for a range of applications including artificial intelligence, 3D rendering, autonomous driving, cryptocurrency mining, and (still) powering high-end PC gaming rigs.


iXperiential Media

We are an elite team of creative innovators with over 25 years of experience delivering visual, interactive, and immersive experiences that go beyond.