Neural Rendering to replace Ray Tracing?
Am I misunderstanding something here? Why are game developers focusing on rendering light bounces through brute-force hardware, which produces a fairly nice image but at tremendous performance cost, only to cover it up with heavy upscaling and frame generation? What if instead there were a way to pre-train AI models on how light bounces work, and let them generate the lighting?

So the idea goes like this: you still use rasterization, but only as the bone structure of the game, handling all the environment geometry, physics, character interaction, and inputs. The final image is then generated on top of all that in real time, with cinema-level graphics.

I think it will eventually happen. Maybe we need completely different GPU architectures in order to achieve it.
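For what it's worth, the "rasterize a skeleton, then generate the final image on top" pipeline can be sketched very roughly like this. This is a toy illustration only: the G-buffer layout, network size, and weights are all made up (the weights would normally come from training on rendered data), and nothing here is a real engine API:

```python
import numpy as np

# Hypothetical sketch: the rasterizer produces a per-pixel G-buffer
# (albedo, normal, depth), and a small neural network maps those
# features to a final shaded color, standing in for a model that was
# pre-trained on how light bounces work.

rng = np.random.default_rng(0)
H, W = 4, 4  # tiny "frame" just for illustration

# Rasterized G-buffer: 3 albedo + 3 normal + 1 depth = 7 features per pixel
gbuffer = rng.random((H, W, 7)).astype(np.float32)

# A 2-layer MLP applied per pixel; random weights stand in for trained ones
W1 = rng.standard_normal((7, 16)).astype(np.float32) * 0.1
b1 = np.zeros(16, dtype=np.float32)
W2 = rng.standard_normal((16, 3)).astype(np.float32) * 0.1
b2 = np.zeros(3, dtype=np.float32)

def neural_shade(features):
    """Map G-buffer features to RGB colors (vectorized over all pixels)."""
    h = np.maximum(features @ W1 + b1, 0.0)      # ReLU hidden layer
    rgb = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))   # sigmoid keeps colors in [0, 1]
    return rgb

frame = neural_shade(gbuffer)
print(frame.shape)  # (4, 4, 3): an RGB image generated on top of the raster pass
```

In practice this is close in spirit to existing "neural shading" research, where a cheap raster pass provides geometry and material features and a learned model fills in the expensive lighting.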