The battle for realism: The evolution of realtime 3D graphics

Posted: September 2, 2021 at 2:29 pm

It'd be easy to get the impression that realtime 3D is really coming of age. For a long time, all computer graphics looked like, well, Tron at best, and often not nearly as slick.

Then pre-rendered 3D started looking good enough for things like spaceships, and now we're able to clamber almost (but not quite) all the way out of the uncanny valley in which CG humans have often lived. Realtime CG renderers have not been idle, of course, and ever since 2007's seminal Crysis, Nvidia and AMD, and the people who write the code, have been pushing hard to make video games look as good as movies.

And haven't they? We're now using code written to draw video games, things like Unreal Engine, to produce graphics for LED volumes that we're expecting to look photorealistic. OK, that doesn't often involve human beings, but recent releases such as Cyberpunk 2077 and Watch Dogs: Legion are both capable, if only occasionally, of producing scenes that provoke a fleeting impression of a real city being just the other side of the thin film transistors. Isn't this convergence? Can we expect to see a future in which the difference between pre-rendered and realtime 3D becomes irrelevant?

Well, maybe, but this isn't it. Until very recently, video games invariably used what was fundamentally a development of techniques going back decades. First, we started making 3D objects out of triangles. Why triangles? Because no matter where we put three points in space, they always describe a two-dimensional, flat area. At first, the triangles weren't even filled in: wireframe graphics you could see straight through. Next, we figured out things looked better if we filled them in solid. Then we started sticking bits of image data onto those triangles, so that they'd have some texture, and at about the same time started taking notice of where virtual light sources were in the scene to make the triangles brighter or darker.
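
To make the triangle point concrete, here's a minimal sketch in plain C++ (toy types invented for illustration; no real engine works quite this simply) of the filled-and-lit idea: because any three points are coplanar, a triangle always has one well-defined normal, and the angle between that normal and a light direction gives its brightness.

    #include <cmath>
    #include <cstdio>

    // A minimal vector type for this sketch.
    struct Vec3 { float x, y, z; };

    Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

    // The cross product of two triangle edges gives the face normal.
    // Any three points are coplanar, so this normal is always well
    // defined, which is exactly why the triangle is the primitive.
    Vec3 cross(Vec3 a, Vec3 b) {
        return {a.y * b.z - a.z * b.y,
                a.z * b.x - a.x * b.z,
                a.x * b.y - a.y * b.x};
    }

    Vec3 normalize(Vec3 v) {
        float len = std::sqrt(dot(v, v));
        return {v.x / len, v.y / len, v.z / len};
    }

    int main() {
        // One triangle, one directional light, as in early filled-and-lit
        // renderers: brightness is just normal-versus-light angle.
        Vec3 a{0, 0, 0}, b{1, 0, 0}, c{0, 1, 0};
        Vec3 lightDir = normalize({0.3f, 0.5f, 1.0f});

        Vec3 normal = normalize(cross(sub(b, a), sub(c, a)));
        float brightness = std::fmax(0.0f, dot(normal, lightDir)); // Lambert
        std::printf("brightness = %.2f\n", brightness);
        return 0;
    }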

That level of technology worked out okay through the 2000s, but there are some obvious things it didn't do, things that pre-rendered 3D could. Until recently, video games couldn't show accurate reflections; they could sometimes do something that sort of looked like a reflection of the environment in a chrome surface, but look closely, and it was just a fixed image that didn't represent changes in the environment that's being reflected. They also didn't show shadows, at least not without a lot of workarounds, and shadows of unmoving objects such as terrain and buildings were pre-rendered into images, then pasted onto the triangles to simulate light and shadow.
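
As a rough illustration of that old reflection trick: an environment-mapped "reflection" is really just a texture lookup driven by the reflected view direction. The sketch below uses made-up types rather than any real engine's API; the point is that the panorama was captured ahead of time, so anything that moves through the scene afterwards can never appear in it.

    #include <cmath>
    #include <cstdio>

    struct Vec3 { float x, y, z; };
    struct Vec2 { float u, v; };

    float dot3(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

    // Reflect a view direction about a surface normal: r = d - 2(d.n)n.
    Vec3 reflect(Vec3 d, Vec3 n) {
        float k = 2.0f * dot3(d, n);
        return {d.x - k * n.x, d.y - k * n.y, d.z - k * n.z};
    }

    // Convert the reflected direction to lat-long coordinates into a
    // pre-rendered panorama. The texture was captured once, offline:
    // nothing that moves through the scene will ever show up in it.
    Vec2 environmentUV(Vec3 r0) {
        float len = std::sqrt(dot3(r0, r0));
        Vec3 r{r0.x / len, r0.y / len, r0.z / len};
        float u = 0.5f + std::atan2(r.z, r.x) / 6.2831853f;
        float v = 0.5f - std::asin(r.y) / 3.1415927f;
        return {u, v};
    }

    int main() {
        Vec3 view{0.0f, -0.5f, 1.0f};   // looking slightly downwards
        Vec3 up{0.0f, 1.0f, 0.0f};      // a flat chrome surface facing up
        Vec2 uv = environmentUV(reflect(view, up));
        std::printf("sample the fixed panorama at (%.2f, %.2f)\n", uv.u, uv.v);
        return 0;
    }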

That's one good example of the sort of limitations realtime 3D engines often impose in order to achieve what they do. Ever wonder why you can't blow up walls in the average first-person shooter? Because, at least in part, the pre-rendered lighting stops being valid if you move a large shadow-casting object out of the way. Maya (or its renderer) renders that sort of thing fresh, every frame, or at least it can if it needs to.
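
To see why baked lighting is so brittle, here's a hypothetical sketch of what a lightmap lookup amounts to: the shadow is a number frozen into a texture when the level is built, so nothing that happens during play can change it.

    #include <cstdio>

    // A baked lightmap is just an image of precomputed light and shadow,
    // multiplied over the surface colour at draw time.
    struct Lightmap {
        int width, height;
        const float* texels;  // brightness values baked offline

        float sample(float u, float v) const {
            int x = static_cast<int>(u * (width - 1));
            int y = static_cast<int>(v * (height - 1));
            return texels[y * width + x];
        }
    };

    // The wall's shadow is already frozen into the texels. If gameplay
    // later removes the wall, this lookup still returns the old darkness:
    // that's why destructible geometry and baked lighting don't mix.
    float shadeBaked(float surfaceBrightness, const Lightmap& lm,
                     float u, float v) {
        return surfaceBrightness * lm.sample(u, v);
    }

    int main() {
        const float baked[4] = {1.0f, 0.2f, 0.2f, 1.0f};  // 2x2: a dark band
        Lightmap lm{2, 2, baked};
        std::printf("lit: %.2f  shadowed: %.2f\n",
                    shadeBaked(1.0f, lm, 0.0f, 0.0f),
                    shadeBaked(1.0f, lm, 1.0f, 0.0f));
        return 0;
    }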

Still, combine all those reasonable-looking but actually very inaccurate tricks together, evolve for ten or fifteen years, and we get Crysis. It should be very clear, though, that a lot of shortcuts are involved. That spectacular mountain range? A fixed background image; you can never go there. The fine detail on your sci-fi rifle? The rivets are clever types of image mapping, so don't look too close. That mountainside full of trees? Geometry instancing allows the system to use multiple copies of the same tree, scaled and rotated and scattered. Statue reflected in a pool of water? That's the time-honoured technique of taking the picture of the statue, flipping it upside down, and relying on the irregular surface of the water to hide the fnords.
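
Geometry instancing, at least, is simple enough to sketch. The code below is a hypothetical illustration rather than any engine's actual implementation: one tree mesh lives in memory once, and a cheap list of transforms turns it into a forest.

    #include <cstdio>
    #include <cstdlib>
    #include <vector>

    // Per-instance data: a single tree mesh is stored once, then reused
    // with a different position, rotation, and scale for every copy.
    struct Instance {
        float x, z;      // position on the terrain
        float rotation;  // spin about the vertical axis, in radians
        float scale;     // size variation to disguise the repetition
    };

    float randomUnit() { return static_cast<float>(std::rand()) / RAND_MAX; }

    // Scatter many instances across a square of terrain. A real engine
    // would upload this array to the GPU and issue one instanced draw
    // call: memory for a single tree, the appearance of a forest.
    std::vector<Instance> scatterTrees(int count, float terrainSize) {
        std::vector<Instance> forest;
        forest.reserve(count);
        for (int i = 0; i < count; ++i) {
            forest.push_back({randomUnit() * terrainSize,
                              randomUnit() * terrainSize,
                              randomUnit() * 6.28318f,
                              0.8f + randomUnit() * 0.4f});
        }
        return forest;
    }

    int main() {
        auto forest = scatterTrees(10000, 512.0f);
        std::printf("placed %zu trees from one mesh\n", forest.size());
        return 0;
    }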

Problems occur with that statue if, for instance, we can't see the top of the statue because we're looking down at the water, but we should be able to see the reflection of the top of the statue. We can't flip an image we haven't rendered. Basically, video games cheat. A lot. That's fine, when it works. The problem comes when all the workarounds become more work than doing it properly.
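
The flipped-statue trick itself is a single line of arithmetic, which is exactly why it's so cheap, and why its failure mode is about missing image data rather than missing maths. A hypothetical sketch:

    #include <cstdio>

    struct Vec3 { float x, y, z; };

    // Mirror a point across a horizontal water plane at a given height.
    // Rendering from a camera mirrored this way, or simply flipping the
    // finished image, is the classic cheap reflection.
    Vec3 mirrorAcrossWater(Vec3 p, float waterHeight) {
        return {p.x, 2.0f * waterHeight - p.y, p.z};
    }

    int main() {
        Vec3 statueTop{4.0f, 6.0f, 2.0f};
        Vec3 r = mirrorAcrossWater(statueTop, 1.0f);  // water at y = 1
        // The catch described above: if the top of the statue was never
        // drawn in the main view, an image flip has nothing to show here.
        std::printf("reflection appears at y = %.1f\n", r.y);
        return 0;
    }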

Define properly? Well, for a short while, since, say, the release of Nvidia's RTX series, games have been capable of cheating slightly less, through ray tracing. Situations like that reflecting pool become a little more reliable if we actually work out where the rays of light would go for real. It's easier conceptually, if not computationally, and lots of workarounds are still required. One workaround that should be recognisable to pre-rendered CG people is noise reduction, based on the fact that we can't sample every ray from every light for every pixel on the screen. We sample a random selection, which gives us a noisy image, and we apply noise reduction.
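
The sample-and-denoise pipeline can be sketched in a few lines. The visibility test below is a made-up stand-in for real ray tracing, and the box blur is far cruder than a production denoiser, but the shape is the same: a few random samples per pixel, then smooth away the resulting noise.

    #include <cstdio>
    #include <random>
    #include <vector>

    // Hypothetical stand-in for a visibility test: does a randomly chosen
    // ray towards the light reach this pixel's surface point?
    bool rayReachesLight(std::mt19937& rng) {
        std::bernoulli_distribution halfShadowed(0.5);  // toy scene
        return halfShadowed(rng);
    }

    // Monte Carlo shadow estimate: a handful of random rays per pixel
    // instead of every possible ray. Few samples means visible noise.
    float noisyShadow(std::mt19937& rng, int samples) {
        int hits = 0;
        for (int i = 0; i < samples; ++i)
            if (rayReachesLight(rng)) ++hits;
        return static_cast<float>(hits) / samples;
    }

    // The denoise step: a simple box blur over neighbouring pixels trades
    // noise for softness.
    std::vector<float> denoise(const std::vector<float>& img, int width) {
        std::vector<float> out(img.size());
        int height = static_cast<int>(img.size()) / width;
        for (int y = 0; y < height; ++y)
            for (int x = 0; x < width; ++x) {
                float sum = 0; int n = 0;
                for (int dy = -1; dy <= 1; ++dy)
                    for (int dx = -1; dx <= 1; ++dx) {
                        int nx = x + dx, ny = y + dy;
                        if (nx >= 0 && nx < width && ny >= 0 && ny < height) {
                            sum += img[ny * width + nx]; ++n;
                        }
                    }
                out[y * width + x] = sum / n;
            }
        return out;
    }

    int main() {
        std::mt19937 rng(42);
        std::vector<float> img(64 * 64);
        for (float& p : img) p = noisyShadow(rng, 4);  // 4 rays per pixel
        auto smooth = denoise(img, 64);
        std::printf("pixel 0: noisy %.2f -> denoised %.2f\n", img[0], smooth[0]);
        return 0;
    }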

And that is something realtime rendering now shares with conventional 3D graphics software. Between that and ray tracing, there's certainly growing commonality between realtime and non-realtime renders, so it's no surprise that things seem to be starting to converge. What we need to recognise is that the scope and scale of what realtime 3D can do is necessarily limited, for all of the reasons we've considered here. In 2021, we're some way from being able to create arbitrarily large and complex scenes and accurately simulate all of the lighting in realtime in the way we can in software like Cinema 4D, where it'll have some time to go away and think about things.

Still, it's hard to complain that some of the most respected television currently in production is using technology that was unequivocally built to bring PlayStation owners joy, and with great success. Without video games, we'd have no GPU processing, no realtime LED volumes, and, most importantly of all, no Cyberpunk 2077. I'll be busy for the next hour or so.
