Next generation consoles and ray-tracing thoughts

20 May 2019
This week you’re getting some thoughts on next generation consoles and ray tracing.
What we know
To start off, let’s go through the facts:
- DXR / RT cores were announced in March 2018 by nVidia and Microsoft. Since companies already had working demos at the announcement, development must have started long before (at least a year earlier).
- PS5 has some custom hardware: ‘The GPU, a custom variant of Radeon’s Navi family, will support ray tracing’. How much, we don’t know yet, but it’s still early.
This is where things start to get interesting.
The question is, were AMD involved in the DXR spec at all?
Normally Direct X features are developed with all the major players having some input, but it’s not clear if this was the case.
I can think of two scenarios here:
- AMD had no idea about DXR until Navi (their next GPU architecture) was mostly completed.
- AMD knew and Navi will have some form of acceleration hardware for DXR.
If it’s the first option then Sony wouldn’t have known raytracing was coming, and whatever the PS5 has will be a very basic implementation.
On the other hand, the second option suggests that AMD’s current lack of interest is just a marketing tactic to try to keep people waiting for Navi.
It’s fair to say the gaming community is still not sold on raytracing. On the other hand, every graphics developer I’ve spoken to is impressed with the RT cores and nVidia’s implementation.
While this is still first-generation hardware, nVidia have proven two things:
- There’s an efficient hardware acceleration path for ray tracing on the GPU. Anyone trying to argue there’s no benefit, or that a software implementation is just as good, doesn’t know what they are talking about. Even with the RT cores being a small percentage of the GPU they give a massive performance boost (between 2 and 6 times compared to a software approach).
- It is feasible for ray tracing to ‘mostly’ replace raster-based GPUs in the future.
Currently there isn’t the performance on nVidia’s cards to really push ray tracing use, but we have to start somewhere, and if they can swap raster units out for RT cores on future cards you can expect the performance to increase rapidly.
I've seen estimates that the RT cores are using less than 10% of the die space, and even with this they outperform every other card and implementation.
For anyone pointing at the CryEngine ray tracing demo: it’s not doing quite what you think it is. Reading the full post from Crytek, it doesn’t throw very many rays at all, which explains the good performance, and even then they admit it runs a lot faster on a card with RT cores.
Levels of ray-tracing support
When it comes to ‘supporting ray-tracing’ I can think of a few different levels:
1. Nothing: The hardware vendor ignores it completely and leaves it up to developers (this is the worst case in some ways, but given the availability of game engines it’s not so bad).
2. No hardware but driver support: The engines don’t need a full implementation of their own, and it also gives free improvements over time without requiring game updates.
3. Some hardware acceleration: AABB and/or ray-triangle intersection instructions. This could give good performance improvements with modest investment.
4. Full hardware DXR: This includes accelerating BVH building/updating, as well as AABB and ray intersection testing.
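To make level 3 concrete, here’s a rough sketch in C of the two tests such instructions would accelerate: the classic slab test for ray-AABB intersection, and the Möller–Trumbore algorithm for ray-triangle intersection. This is illustrative scalar code only, not anything AMD has announced; the point of hardware acceleration is running vast numbers of these tests in parallel while traversing a BVH.

```c
#include <math.h>
#include <stdbool.h>

/* Slab test: does a ray (origin o, direction d) hit the axis-aligned
 * box [bmin, bmax]? One interval test per axis; the ray hits the box
 * if the three intervals overlap. */
bool ray_aabb(const float o[3], const float d[3],
              const float bmin[3], const float bmax[3])
{
    float tmin = 0.0f, tmax = INFINITY;
    for (int i = 0; i < 3; i++) {
        float inv = 1.0f / d[i];          /* IEEE: d[i]==0 gives +/-inf */
        float t0 = (bmin[i] - o[i]) * inv;
        float t1 = (bmax[i] - o[i]) * inv;
        if (t0 > t1) { float tmp = t0; t0 = t1; t1 = tmp; }
        if (t0 > tmin) tmin = t0;
        if (t1 < tmax) tmax = t1;
        if (tmin > tmax) return false;    /* intervals don't overlap: miss */
    }
    return true;
}

static void cross3(const float a[3], const float b[3], float out[3]) {
    out[0] = a[1]*b[2] - a[2]*b[1];
    out[1] = a[2]*b[0] - a[0]*b[2];
    out[2] = a[0]*b[1] - a[1]*b[0];
}
static float dot3(const float a[3], const float b[3]) {
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
}

/* Moller-Trumbore: returns true and writes the hit distance *t if the
 * ray hits triangle (v0, v1, v2). Solves for barycentric coordinates
 * (u, v) and rejects the hit if they fall outside the triangle. */
bool ray_triangle(const float o[3], const float d[3],
                  const float v0[3], const float v1[3], const float v2[3],
                  float *t)
{
    float e1[3], e2[3], p[3], q[3], s[3];
    for (int i = 0; i < 3; i++) {
        e1[i] = v1[i] - v0[i];
        e2[i] = v2[i] - v0[i];
        s[i]  = o[i]  - v0[i];
    }
    cross3(d, e2, p);
    float det = dot3(e1, p);
    if (fabsf(det) < 1e-8f) return false; /* ray parallel to triangle */
    float inv_det = 1.0f / det;
    float u = dot3(s, p) * inv_det;
    if (u < 0.0f || u > 1.0f) return false;
    cross3(s, e1, q);
    float v = dot3(d, q) * inv_det;
    if (v < 0.0f || u + v > 1.0f) return false;
    *t = dot3(e2, q) * inv_det;
    return *t >= 0.0f;
}
```

A software tracer runs these inner loops on the shader cores, competing with shading work; dedicated intersection units free those cores up, which is where the 2-6x gap comes from.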
For the PS5 I’m expecting level 3: at least some hardware acceleration, with a Sony API for using it. Since they have complete control over both the API and the hardware, this would be the easiest route. Performance will be better than a pure software implementation, but a long way behind level 4.
Given Microsoft’s involvement with DXR I’m hopeful that it will have more than just a basic level 3 implementation, maybe close to level 4. If AMD weren’t involved in the DXR specification, then I’d expect Microsoft’s implementation to be a lot more advanced than Sony’s (given that Microsoft had much longer to work on it). Of course, it’s always possible that Microsoft didn’t think it was worth the die space this generation, but if the hardware is as similar as I think it will be, both Sony and Microsoft will be looking to add exclusive features to tempt people onto their console.
From the rumours, final-ish PS5 dev kits are already out there, which suggests the major hardware design is done. For a mid/late-2020 launch this seems a bit earlier than previous generations, but given that the machine is based on Navi it makes sense. As neither Microsoft nor Sony seems to be in a rush, getting things done early leaves more time for software, both OS and games, as well as possibly lower production costs.
AMD recently announced a ‘Next Horizon Gaming’ event at E3 on June 10th, where they’re going to “unveil the next generation of AMD gaming products”. In other words, by the end of E3 we’ll have a much better idea of what the future of ray tracing on consoles is going to be.
It looks like the next console gen should be a lot more interesting than the current one.
This generation Microsoft got stuck: they wanted 8GB of RAM, which has proven to be the right choice given the requirements of recent games, but it forced them to choose between DDR3 and GDDR5, and large GDDR5 chips weren’t guaranteed to be available for the launch. If they had gone with GDDR5 and the chips weren’t there, their design wouldn’t have worked, as they wanted 2-3GB for the OS/apps and 5-6GB for the game, and the only option would have been to delay the launch.
The alternative was to use DDR3, which was already available at the density they needed, but didn’t have the bandwidth. You can’t just swap out the memory controller partway through the design process, so they would have had to make the decision in early to mid-2013.
Sony decided that 4GB was enough (leaving around 3.5GB for the game) and then got lucky with 4Gb GDDR5 chips becoming available in early 2013, so they just swapped the 2Gb chips for 4Gb ones and had 8GB. Telling developers they’ve got double the memory to play with isn’t a problem; telling them they’ve got half of what they expected is.