
How much money are you willing to pay for a next generation console?

  • Up to $199: 33 votes (1.5%)
  • Up to $299: 48 votes (2.2%)
  • Up to $399: 318 votes (14.4%)
  • Up to $499: 1,060 votes (48.0%)
  • Up to $599: 449 votes (20.3%)
  • Up to $699: 100 votes (4.5%)
  • I will pay anything!: 202 votes (9.1%)
  • Total voters: 2,210

DukeBlueBall

Banned
Oct 27, 2017
9,059
Seattle, WA
I think next year's Navi will be more power efficient thanks to architectural and node improvements. It won't surprise me to see base clocks go up 10% across the board.
 

Deleted member 40133

User requested account closure
Banned
Feb 19, 2018
6,095
You used to speculate that Microsoft would use Vega. Call it next gen Vega. All we read here is speculation; chances are that everyone is wrong.

I said they could, that it was plausible, and I gave my reasoning. I've also said multiple times on this very forum, over a year ago, that I expect Microsoft to be stronger. To clarify: I still believe they will have the stronger console.
 

bear force one

Attempted to circumvent ban with alt account
Banned
Oct 26, 2017
4,305
Orlando
Some of y'all just need to be a little less transparent with the bullshit is all. We got a leak on Ariel/Gonzalo clocking at 2Ghz and another link to PS4/Pro and some of you immediately felt the need to mention Reiner and how he must be wrong and Scarlett is clearly going to be more powerful yada yada. There's definitely a problem with the thread if any sliver of information is going to start the fanboy drivel.
Amen. Transparent at this point.

We could do without the carnival of stupid every time a new positive PS4 factoid hits.
 

nelsonroyale

Banned
Oct 28, 2017
12,135
Zen 2 8C/16T with full L3 Cache = @70mm2
Navi 10 Full 40CU + 256-Bit Bus = @251mm2



Yes.
320-Bit Bus = 10~15mm2+? So, more bandwidth?
8CU = 40mm2+? So, more CUs?
But still less powerful?

Given that 2GHz is higher than Navi 10's clocks, and RT hardware is likely in, can we even be certain it is Navi 10 now? Plus, we now have performance figures, obviously, not just die dimensions.
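For what it's worth, simply summing the die-area figures quoted above (all rough estimates from the quote, not measured silicon) shows why a wider bus plus extra CUs lands near the bigger SoC size people have guessed for Scarlett:

```python
# Summing the rough die-area estimates quoted above (speculative figures
# from the thread, not measured numbers).

zen2_8c_mm2   = 70    # Zen 2 8C/16T with full L3 cache
navi10_mm2    = 251   # full 40CU Navi 10 + 256-bit bus
extra_bus_mm2 = 15    # going 256-bit -> 320-bit, upper end of the 10~15 guess
extra_cu_mm2  = 40    # +8 CUs

soc_mm2 = zen2_8c_mm2 + navi10_mm2 + extra_bus_mm2 + extra_cu_mm2
print(soc_mm2)  # 376 -- right around the ~380mm2+ people estimate for Scarlett
```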
 
Jan 17, 2019
964
No one is arguing that.
It's a speculation thread.
With Komachi's new info, we started the discussion about performance.
What's the problem?


Claiming that Reiner is clearly wrong (without providing any hard proof to beat his tweet) and claiming that Scarlett WILL BE more powerful for sure WASN'T speculation on your side. You stated both of those things as if they were facts.
 
Last edited:

modiz

Member
Oct 8, 2018
17,885
Ok, so it seems like nothing else has leaked since I went to sleep. Surprised no article has been written yet, but I guess that is hard to do when only Komachi knows where to access the AMD leak. No one here or on Beyond3D managed to find it, right?
Let's try making sense of it all and go through everything we know about Sony's console once again.
Warning: long post incoming!
First, in January, the known AMD leaker APISAK found Gonzalo and Ariel. Gonzalo is a gaming APU with an 8-core CPU at a 1.6 GHz base clock and a 3.2 GHz boost clock, and its GPU, Ariel, had a 1 GHz clock. It also listed an unknown cache size, implying it uses a different, likely cut-down, amount of cache. Gonzalo was marked as Engineering Sample 2, and its code fit the PlayStation convention; the 1.6 GHz base clock would also make sense for a PlayStation console because it is similar to the PS4 Pro's previous implementation of backwards compatibility. The Ariel iGPU has the ID 13E9, and to my understanding 13E0 to 13FF are all reserved for Navi 10 LITE.
Later, in early April, a Qualification Sample version of Gonzalo leaked, once again via APISAK, this time with a 1.8 GHz GPU clock and an updated Ariel ID of 13F8. That already sounded like a very high clock speed, but other than that it didn't really tell us much else.
A week later, the plans for the next-generation PlayStation were discussed by Mark Cerny with Wired; the proximity of the Gonzalo update to the article seemed suspicious indeed.
In the article the following things were confirmed: an 8-core Zen 2 CPU (Gonzalo also listed 8 cores), a Navi GPU with ray tracing, a specialized audio chip to enable more accurate in-world audio, backwards compatibility with the PS4 generation, and lastly a custom SSD that, according to Mark Cerny, has higher raw bandwidth than any SSD on PC. Cerny also said that not only is the read speed important (implying that by high bandwidth he was referring to reads), but so are the I/O mechanism and the software stack.
In May, AMD announced its 7nm Zen 2 CPUs; to no one's surprise, they are great CPUs for their prices. AMD also announced Navi, based on the new RDNA architecture: a 25% IPC boost over the previous generation, much better power efficiency, etc.
Also in May, our own user gofreak found a patent that details potential improvements to the SSD to lower its latency, improve its bandwidth, etc. You can read more in the thread they made for the patent; very interesting stuff:
www.resetera.com

PS5 - a patent dive into what might be the tech behind Sony's SSD customisations (technical!)

This will be one for people interested in some potentially more technical speculation. I posted in the next-gen speculation thread, but was encouraged to spin it off into its own thread. I did some patent diving to see if I could dig up any likely candidates for what Sony's SSD solution might...
In June, Navi was unveiled as the RX 5700 and RX 5700 XT, 36CU and 40CU GPUs, each with 8GB of GDDR6. These GPUs are based on the 40CU Navi 10 die. It makes sense, then, that Gonzalo is based on a similar die, because its GPU is based on the Navi 10 LITE die.
Navi launched, and its gaming performance has been pretty good, but not incredible. During the launch period, IPC tests were run on Navi, and it turns out Navi actually has higher or equal IPC compared to Nvidia's latest architecture, and 39% higher IPC than Polaris (the PS4 Pro and X1X GPU architecture).
A few weeks ago, a product called AMD Flute was found on UserBenchmark. It is still not clear what AMD Flute is, but it has the same CPU clocks as Gonzalo, and it lists not only 8 cores but 16 threads, plus a GPU ID of 13F9, i.e. a Navi 10 LITE. This means Flute is most likely the whole system that contains Gonzalo, or what we assumed to be the PlayStation 5 devkit. The CPU did show a lower score than expected, along with what seems to be a quarter of the cache of a normal Ryzen 3000 CPU, which is possibly the cause of the lower score. Flute has 16 chips of GDDR6, 1GB each, which seems to be a downclocked version of the 18Gbps memory; it would make sense if the final version uses 8 chips of 2GB instead.
And finally we get to yesterday, when the reliable AMD insider Komachi found out that AMD had accidentally uploaded a lot of data with public access. In there he found mentions of Oberon and Ariel, and he explained that he thinks they are the same. Oberon has 3 GPU clocks listed:
Gen 0 with an 800MHz clock, Gen 1 with a 911MHz clock, and Gen 2 with 2000MHz. This information is critical: Gen 0 and Gen 1 are obviously the PS4 and PS4 Pro GPU clocks, which means that without a doubt Oberon is the PS5, and it has a 2GHz clock. Why would it have those clocks? The reason is this: the PlayStation 5 seems to use a very similar backwards-compatibility solution to the PS4 Pro. It will have the same compute unit count of 36CU, so to be compatible with the base PS4 you could disable half the compute units and clock the GPU at 800MHz, or activate all the compute units and clock them at 911MHz for safe PS4 Pro compatibility. 36 compute units clocked at 2000MHz would give us 9.2TF; with 39% higher IPC than Polaris, that is the equivalent of a 12.8TF Polaris GPU, or 3x the PS4 Pro, or a little more than 2x the X1X's GPU performance. If this is true, then Sony has a small but fast GPU, which should help reduce cost, and it might even have a bit higher performance than a wider but lower-clocked GPU, since performance scales better with clocks than with CU count.
Then comes the question: why has the GPU clock increased? This is speculation territory, but 40CU at 1800MHz would give an equal number of TF, so maybe until they got the 36CU/2000MHz configuration working, they gave developers 40CU at 1800MHz. It should be mentioned that console manufacturers usually disable a few compute units for yield; but as has been established before, due to Navi's design, a 44CU die with 4 CUs disabled would waste a lot more space than a 40CU die with 4 disabled to reach 36CU. So it makes more sense to go that route.
Komachi has been saying that Oberon is Ariel, so if we connect everything, and Ariel, Gonzalo, Flute and Oberon really are all PS5-related code names, then we can make a close-to-final spec list:

Zen 2 CPU with 8 cores/16 threads clocked at 3.2GHz, with a quarter of the desktop Ryzen 3000 cache.
Navi GPU with 36 compute units clocked at 2GHz = 9.2TF, which roughly equals 2x X1X or 3x PS4 Pro in GPU performance.
16GB of GDDR6 with a bandwidth of around 530GB/s, I think.
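As a sanity check, the TF and bandwidth figures above are simple arithmetic (a rough sketch: 64 shaders per CU and 2 FMA ops per clock are the standard GCN/RDNA assumptions, and the 16.5 Gbps data rate is my own guess for hitting ~530GB/s, not anything leaked):

```python
# Back-of-envelope math for the figures above. All inputs are leaked /
# speculated numbers from this thread, not official specs.

def tflops(cus, clock_ghz, shaders_per_cu=64):
    # FP32 TFLOPS = CUs * shaders per CU * 2 ops/clock (FMA) * clock (GHz) / 1000
    return cus * shaders_per_cu * 2 * clock_ghz / 1000

def bandwidth_gbs(bus_bits, gbps_per_pin):
    # GB/s = bus width in bits / 8 * per-pin data rate in Gbps
    return bus_bits / 8 * gbps_per_pin

navi_tf = tflops(36, 2.0)              # 9.2 TF
polaris_equiv = navi_tf * 1.39         # +39% IPC vs Polaris -> ~12.8 TF
ps4_pro_tf = tflops(36, 0.911)         # ~4.2 TF
x1x_tf = 6.0

print(f"{navi_tf:.1f} TF Navi ~ {polaris_equiv:.1f} TF Polaris-equivalent")
print(f"~{polaris_equiv / ps4_pro_tf:.1f}x PS4 Pro, ~{polaris_equiv / x1x_tf:.1f}x X1X")

# A 256-bit bus with GDDR6 downclocked to ~16.5 Gbps (my assumption) lands
# near the ~530 GB/s figure; 18 Gbps chips at full speed would give 576 GB/s.
print(f"{bandwidth_gbs(256, 16.5):.0f} GB/s")   # 528 GB/s
```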

There are 2 question marks left:
Price
SSD size and bandwidth

Phew, that was a lot of typing, but I felt it was necessary at this point.
Please correct me on anything here or ask questions where needed.
 
Oct 25, 2017
3,595
Claiming Reiner is wrong (without providing any proof to beat his tweet) and claiming Scarlett WILL BE more powerful for sure ISN'T speculation on your side.

Did you read the IF in my post, or are you just ignoring what I wrote?
Until we see the final specs of both consoles, what he said is just speculation without proof.

Given that 2GHz is higher than Navi 10's clocks, and RT hardware is likely in, can we even be certain it is Navi 10 now? Plus, we now have performance figures, obviously, not just die dimensions.

There's no Navi 10 with RT Cores.
 
Feb 10, 2018
17,534
If Scarlett is weaker, there will be meltdowns. The power narrative is something Microsoft is actively courting.

I think Scarlett will be more powerful, but this PS5 sounds delicious.

It depends. If the PS5 is the same price and within 10% of Scarlett, I don't think people will be bothered.
Phil has said that they are making the best gaming-focused box they can, so as long as they do that, fans will be happy.
I don't think it's going to be so apples-to-apples though, with Lisa Su saying that Sony and MS both have their own different flavours of sauce.
Also, the narrative that "Xbox fans will melt down but PS5 fans won't" is rather hilarious. MS have been more vocal with their power claims, but between the Cerny interview, news articles and Internet chatter (Reiner etc.), I would say power expectations for the PS5 are just as high as Scarlett's.
 
Jan 17, 2019
964
Did you read the IF in my post, or are you just ignoring what I wrote?
Until we see the final specs of both consoles, what he said is just speculation without proof.



There's no Navi 10 with RT Cores.

Reiner's proof is the devs. You didn't talk with them; he did.
"If," you say? It depends on the context.

The Reiner/Colin rumour?
There's no evidence at all that Scarlett is weaker.
The rumour is probably wrong, based on early preview devkits.
If we go by the Scarlett video, with an SoC around 380mm2+... Scarlett will be more powerful for sure.



Because of the early preview devkits.
Like many have said:
the Reiner/Colin rumour is wrong.
Or half of the 380mm2+ SoC is ray tracing/L3 cache.
Or MS don't give a fuck and will release a 4TF Lockhart box and not the Anaconda.
lol
 

More Butter

Banned
Jun 12, 2018
1,890
If Scarlett is weaker, there will be meltdowns. The power narrative is something Microsoft is actively courting.

I think Scarlett will be more powerful, but this PS5 sounds delicious.
I think there might be an internal meltdown from a certain bird if PS5 is weaker...

Seriously though, did something new happen to instigate the console v console talk? As I read this thread I feel like I missed some new leak or something.
 

Silencerx98

Banned
Oct 25, 2017
1,289
Remind me again: if we're going with the assumption that Oberon/Ariel is related to Navi 10 Lite in some way, was Navi 10 Lite planned to be fabricated on 7nm+/EUV? Assuming the latest leaks are correct and the PS5 GPU is targeting 2 GHz, most seem to think 7nm+ is likely, since it would significantly reduce power consumption compared to 7nm.
 

bear force one

Attempted to circumvent ban with alt account
Banned
Oct 26, 2017
4,305
Orlando
Did you read the IF in my post, or are you just ignoring what I wrote?
Until we see the final specs of both consoles, what he said is just speculation without proof.

Except Reiner wasn't speculating. He was told in no uncertain terms. You can claim he's lying, but then the onus of proof is on you.
 

More Butter

Banned
Jun 12, 2018
1,890
Ok, so seems like nothing else leaked since I went to sleep. [...] Please correct me on anything here or ask questions where needed.
Thank you for this. Very informative.
 
Oct 25, 2017
3,595
Reiner's proof is the devs. You didn't talk with them; he did.
"If," you say? It depends on the context.

What devs? Where is the proof? Where are the official final specs?
Again, just rumours and speculation without proof.
He said the PS5 has more flops based on devs' info about target specs. You can believe it, I can believe it, everyone can believe it. Fine!
But that means he knows the final specs of both consoles 18 months before launch? If you believe that, fine.
So far Komachi's info is much more factual :)

Except Reiner wasn't speculating. He was told in no uncertain terms. You can claim he's lying, but then the onus of proof is on you.

I didn't say he's lying.
I said he can be wrong, based on the Komachi leaks and the speculation in this thread.
Like I said before, the info could be anything.
Until we have the final specs of both consoles, his info is just a rumour.
 

bear force one

Attempted to circumvent ban with alt account
Banned
Oct 26, 2017
4,305
Orlando
It's not a rumor. He was literally told by people who know.
This is why I called it a carnival of stupid.

Transparent.
 

Maverick-Swe

Member
Nov 26, 2017
327
Sweden
No matter the specs, TLOU2 is gonna look drop dead gorgeous on PS5, and so will Forza Horizon 5 on Scarlett. Both consoles will deliver the next-gen performance we've all been waiting for.
If Sony and MS are planning to roll out true ultra-high-end streaming in the near future, they won't really need better specs than this, apart from a good Internet connection on the user's side, which most of us already have anyway.

I'm happy these rumours are out so I can sit back and relax for the 6-10 months until the official specs are released. I'm keeping my expectations low, so everything else will be a nice surprise.
I'm sure none of us will be disappointed holiday 2020.
 
Oct 25, 2017
3,595
If he's wrong the devs all lied to him. Did they all lie? The thread was becoming a carnival because of you and a few others last page.

Because of me?
A lot of people are attacking me because I said Scarlett can be more powerful.
Why are you so offended?
Don't be mad when someone in a speculation thread says something you don't agree with.
Just ignore me and move on :)
 

bear force one

Attempted to circumvent ban with alt account
Banned
Oct 26, 2017
4,305
Orlando
Because of me?
A lot of people are attacking me because I said Scarlett can be more powerful.
Why are you so offended?
Don't be mad when someone in a speculation thread says something you don't agree with.
Just ignore me and move on :)
Oh again I'm not offended. I think you're silly.

Reiner has cred. You don't.
 

Deleted member 40133

User requested account closure
Banned
Feb 19, 2018
6,095
What devs? Where is the proof? Where are the official final specs?
Again, just rumours and speculation without proof.
He said the PS5 has more flops based on devs' info about target specs. You can believe it, I can believe it, everyone can believe it. Fine!
But that means he knows the final specs of both consoles 18 months before launch? If you believe that, fine.
So far Komachi's info is much more factual :)



I didn't say he's lying.
I said he can be wrong, based on the Komachi leaks and the speculation in this thread.
Like I said before, the info could be anything.
Until we have the final specs of both consoles, his info is just a rumour.

OH MY GOD. He is the executive editor of Game Informer. Every time he presents something as information, he is putting his reputation on the line; his job depends on his reputation as a reputable reporter. And who are these devs? No reporter/journalist would ever give up their sources; reporters have literally been dragged to court before giving anything up. This is just mind-boggling. People took Brad Sams and Jez Corden at their word, but an impartial reporter says something against the accepted narrative and people lose their minds.
 

MrKlaw

Member
Oct 25, 2017
33,155
We don't know how big the CUs will be yet. We are assuming 36CU means a small die, but RT may make the CUs larger. People looking at the Scarlett render and going "oh, it's a big die, therefore more CUs" are doing the same thing in reverse.
 

bear force one

Attempted to circumvent ban with alt account
Banned
Oct 26, 2017
4,305
Orlando
OH MY GOD. He is the executive editor of Game Informer. Every time he presents something as information, he is putting his reputation on the line; his job depends on his reputation as a reputable reporter. And who are these devs? No reporter/journalist would ever give up their sources; reporters have literally been dragged to court before giving anything up. This is just mind-boggling. People took Brad Sams and Jez Corden at their word, but an impartial reporter says something against the accepted narrative and people lose their minds.
Exactly. Oluasc isn't arguing in good faith here.
 
Oct 25, 2017
3,595
OH MY GOD. He is the executive editor of Game Informer. Every time he presents something as information, he is putting his reputation on the line; his job depends on his reputation as a reputable reporter. And who are these devs? No reporter/journalist would ever give up their sources; reporters have literally been dragged to court before giving anything up. This is just mind-boggling. People took Brad Sams and Jez Corden at their word, but an impartial reporter says something against the accepted narrative and people lose their minds.

Exactly. Oluasc isn't arguing in good faith here.



RUMOURS
From him, not from me.

But I'm fine.
Move on :)
 

SharpX68K

Member
Nov 10, 2017
10,529
Chicagoland
 

Silencerx98

Banned
Oct 25, 2017
1,289
I feel like maybe we could all take a step back from the latest Oberon leaks and appreciate what a good bit of console optimization can do. I will share some findings I made while researching real time global illumination in games for my thesis paper this past semester, and I hope you all appreciate it :)

So first, let me bring up that, with all due respect to David Cage, I don't think we should take his 4K/no RT vs. 1080p/RT comments at face value. For starters, he was likely giving a very rough estimation based on a brute force method, and things could definitely be more positive once the new consoles are out and programmers build rendering pipelines specifically around the strengths of the new hardware. Honestly, as impressive as Nvidia's RTX solution seems now, I feel improvements in performance could be made, albeit at a cost in visual fidelity and lighting accuracy. I admit that since I don't have access to the algorithms this is purely guesswork, but as anyone tinkering with PC settings knows, Nvidia-specific settings aren't end-all solutions, as they usually come with a very heavy performance hit; that is expected, since those features aren't targeting a specific set of hardware within a closed box. A good example from this generation is Ubisoft's custom SSBC ambient occlusion solution made for consoles, which delivered results far superior to traditional SSAO and occasionally close to HBAO+, with a performance hit smaller than SSAO's. One of the crucial ways it achieved this was that SSBC was rendered at half or even quarter resolution and then upscaled, unlike HBAO+, which renders at full resolution.

Now, on to the real time GI solution that impressed me the most this generation in terms of its visuals-to-performance ratio: Q-Games' very ambitious cascaded voxel cone tracing solution used in The Tomorrow Children. From the start, the tech for the game called for a dynamic lighting engine, because most of the environment was destructible and had to be updated at runtime; therefore, a real time GI solution was needed to account for changes in the scene. The team first looked into multiple approaches, including VPLs and, yes, real time ray tracing, but eventually settled on voxel cone tracing. As a brief summary of the technique: it shares some similarities with ray tracing, but the difference is that a ray intersects the scene at a point, whereas a cone intersects an area or a volume. This makes cone tracing far less demanding to render, because each cone covers a much larger region, but the nature of the estimate also changes: since you no longer obtain the exact value at a point but rather over an area, filtering is required, which produces an average value instead. The scene is then voxelized, which is to say the scene geometry is processed into 3D grids in world space and sparse octree structures are built. After voxelization is complete, direct lighting is injected, and cones are traced to calculate the intersections within the scene.

However, voxel cone tracing on its own would have proven far too demanding for PS4 hardware, so the team went on to create their own custom solution: cascaded voxel cone tracing. Some of the approaches taken include storing the voxels in a 3D texture instead of octrees and tracing each cone three times to obtain bounce lighting. Even with these measures in place, the technique was still too demanding to run on PS4 within a 33 ms frame target, so further optimizations were necessary. Only one of the six cascade levels is updated every frame, while the others follow in multiples of two starting from the second frame, as the developers realized the human eye is slow to register changes in indirect lighting. The final screen space cone traced results are then calculated at only 1/16th of the screen resolution (native being 1920 x 1080 on PS4), i.e. 1/4 of each dimension. The end result required only 6 ms of GPU time to render, ideal for the game's 33 ms / 30 FPS target.

To summarize my post: basically, I'm saying we should be hopeful that next gen developers will come up with custom solutions for real time ray tracing that are far cheaper than currently available methods. For instance, rays could be traced at 1/8th of native resolution and then upscaled or checkerboarded, and frankly it would be almost impossible to spot the difference, since we're talking about thousands of rays cast into the scene at runtime. We shouldn't immediately assume that enabling ray tracing in next gen games means we're stuck at 1080p again.
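To put The Tomorrow Children numbers in perspective, here is the frame-budget arithmetic implied by the figures above (just restating the post's reported numbers, nothing new):

```python
# Frame-budget arithmetic for the cascaded voxel cone tracing example above,
# using the figures reported in the post.

frame_budget_ms = 1000 / 30      # 30 FPS target -> ~33.3 ms per frame
gi_cost_ms = 6                   # reported GPU time for the GI pass
print(f"GI share of frame: {gi_cost_ms / frame_budget_ms:.0%}")   # 18%

# Tracing at 1/16th of screen area = 1/4 of each dimension at 1080p
full_w, full_h = 1920, 1080
gi_w, gi_h = full_w // 4, full_h // 4
print(f"GI buffer: {gi_w}x{gi_h}")   # GI buffer: 480x270
assert gi_w * gi_h * 16 == full_w * full_h
```

So the GI pass fits in under a fifth of the frame, which is what makes a full dynamic GI solution viable on PS4-class hardware.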
 

modiz

Member
Oct 8, 2018
17,885
I feel maybe we could all take a step back from the latest Oberon leaks and appreciate what a good bit of console optimization can do. I will share some findings I made while researching real time global illumination in games for my thesis paper this past semester and I hope you all appreciate it :)

So first let me bring up that will all due respect to David Cage, I don't think we should take his 4K/no RT and 1080p/RT comments at face value. For starters, he was likely giving a very rough estimation through a brute force method and things could definitely be more positive once the new consoles are out and programmers build the rendering pipeline specifically to the strengths of the new hardware. Honestly, as impressive as Nvidia's RTX solution seems now, I feel like improvements in performance could be made albeit at a cost of visual fidelity and lighting accuracy. I would like to admit since I don't have access to the algorithms, this is purely guesswork but as anyone tinkering with PC settings would know, Nvidia specific settings aren't the end all solutions as they usually come with a very heavy performance hit, which is expected since these features aren't targeting a specific set of hardware within a closed box. A good example is this generation with Ubisoft's custom SSBC ambient occlusion solution made for consoles, which delivered results far superior to traditional SSAO and occasionally close to HBAO+ with the performance hit less than SSAO. One of the crucial ways to achieve this was that SSBC was rendered at half or even quarter resolution then upscaled unlike HBAO+ which renders at full resolution.

Now, onto the real time GI solution that impressed me the most this generation from a visuals/performance ratio; we're looking at Q Games's very ambitious cascaded voxel cone tracing solution used in The Tomorrow Children. From the start, the tech for the game called for a dynamic lighting engine because most of the environment was destructible and had to be updated at runtime. Therefore, a real time GI solution was needed to account for changes in the scene. The team first looked into multiple approaches including VPL's and yes, real time ray tracing but eventually settled on voxel cone tracing. As a brief summary to the technique, it essentially shares some similarities to ray tracing, but the difference is that a ray will intersect at a point within a scene whereas a cone will intersect at an area or a volume. This makes cone tracing far less demanding to render because it covers a much larger region but the properties of estimations will also change. Since it is no longer the exact value of a point obtained but rather an area, filtering is required, which in turn produces an average value instead. The scene is then voxelized, which is to say the scene of geometry is processed as 3D grids within the world space and sparse octree structures are built. After voxelization is complete, direct lighting is injected into cones which are traced to calculate the intersections within a scene.

However, voxel cone tracing on its own would prove far too demanding for PS4 hardware, so the team went on to create their custom solution: cascaded voxel cone tracing. Some of the approaches taken include storing the voxels in a 3D texture instead of octrees and tracing the cone three times to obtain bounce lighting. Even with these measures in place, the technique was still too demanding to run on PS4 within a 33 ms frame budget, so further optimizations were necessary. Only one of the six cascade levels is updated every frame, while the others follow at intervals that double with each level, starting from the second frame; the developers realized the human eye is slow to register changes in indirect lighting. The final screen-space cone-traced results are then calculated at only 1/16th of the screen resolution (1920 x 1080 on PS4), i.e. 1/4 of each dimension. The end result required only 6 ms of GPU time to render, well within the game's 33 ms (30 FPS) target.
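The staggered cascade refresh can be sketched in a few lines. This is an assumption about the shape of the scheme (level k refreshing every 2^k frames, phase-shifted so refreshes don't pile onto one frame); the engine's exact phasing isn't public.

```python
def cascades_to_update(frame, num_levels=6):
    # Level 0 (innermost cascade) refreshes every frame; each coarser level
    # refreshes half as often (every 2**lvl frames), with a per-level phase
    # offset so the coarse refreshes don't all land on the same frame.
    updates = [0]
    for lvl in range(1, num_levels):
        period = 2 ** lvl
        if frame % period == lvl % period:
            updates.append(lvl)
    return updates
```

Over any 64-frame window this touches level 0 sixty-four times but level 5 only twice, so almost all of the per-frame voxelization cost goes to the cascade nearest the camera, where the eye would actually notice stale lighting.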

To summarize my post: basically, I'm saying we should be hopeful that next-gen developers will come up with custom solutions for real-time ray tracing that are far cheaper than currently available methods. For instance, the rays could be traced at 1/8th of native resolution and then upscaled or checkerboarded, and frankly it would be almost impossible to spot the difference since we're talking about thousands of rays cast into the scene at runtime. We shouldn't immediately feel down about the idea that enabling ray tracing in next-gen games means we're stuck at 1080p again.
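To put the 1/8th-resolution idea in numbers, here's a trivial back-of-the-envelope helper (my own illustration, not any renderer's actual ray budget):

```python
def rays_per_frame(width, height, rays_per_pixel=1, res_fraction=1.0):
    # res_fraction is the fraction of the native pixel count actually traced
    # (e.g. 1/8 before upscaling/checkerboarding back to native resolution).
    return int(width * height * rays_per_pixel * res_fraction)

native = rays_per_frame(3840, 2160)                     # full 4K, 1 ray/pixel
reduced = rays_per_frame(3840, 2160, res_fraction=1/8)  # traced at 1/8th
```

Tracing at 1/8th of the native pixel count cuts the ray count (and, roughly, the cost of the tracing pass) by 8x, which is exactly the kind of headroom that could keep a ray-traced game above 1080p.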
You don't need to guess that; we know for a fact that Nvidia's ray tracing is really expensive. Crytek has its own implementation of ray tracing running on a Vega 56:
 
What devs? Where is the proof? Where are the official final specs?
Again, just rumours/speculation without proof.
He said PS5 has more flops based on dev info about target specs. You can believe it, I can believe it, everyone can believe it. Fine!
That means he knows the final specs of both consoles 18 months before launch? If you believe that, fine.
So far, Komachi's info is much more factual :)



I didn't say he's lying.
I said he can be wrong, based on Komachi's leaks and the speculation in this thread.
Like I said before, the info could be anything.
Until the final specs of both consoles are out, his info is just a rumour.

Mentioning devs' names to jeopardize their jobs. Yeah, man. That's how it works. LOL
To call Reiner out, you must provide hard proof to beat the most reliable info, and that's his info. Got that?


Because of me?
A lot of people are attacking me because I said Scarlett can be more powerful.
Why are you so offended?
Don't be mad when someone on a speculation thread says something you don't agree with.
Just ignore me and move on :)

Yeah, spin, spin, spin me around. You clearly said WILL BE and that Reiner is wrong. Based on what? Oh, you saw the PS5's SoC size. Smaller = weaker.

The Reiner/Colin rumour?
There's no evidence at all that Scarlett is weaker.
The rumour is probably wrong, based on early preview devkits.
If we go by the Scarlett video, the SoC is around 380 mm²+... Scarlett will be more powerful for sure.

Because of the early preview devkits.
Like many have said,
the Reiner/Colin rumour is wrong.
 

Silencerx98

Banned
Oct 25, 2017
1,289
You don't need to guess that; we know for a fact that Nvidia's ray tracing is really expensive. Crytek has its own implementation of ray tracing running on a Vega 56:

Right, but what compromises did Crytek make to achieve this? What was the display resolution of the demo? Was this ray tracing technique applied only to reflections, shadows, global illumination, or all three? How feasible is this in an actual game? There's a lot to look forward to either way :)

Edit: Actually, watching the video for the first time now, it's clear the reflections are ray traced. Elements outside of screen space remain in the reflections, and the reflections themselves are of very high quality, without the artifacts you typically see in screen-space methods. However, I'm unsure whether the shadows are also ray traced, and there's no instance that shows real-time GI as far as I'm aware. Also, the title claims the video ran at 4K on a Vega 56? If true, that's very impressive, but yeah, I hope we get more details on Crytek's implementation.
 

Andromeda

Member
Oct 27, 2017
4,856
Remind me again: if we're going with the assumption that Oberon/Ariel is related to Navi 10 Lite in some way, was Navi 10 Lite planned to be fabricated on 7nm+/EUV? Assuming the latest leaks are correct and the PS5 GPU is targeting 2 GHz, most seem to think 7nm+ is likely, to significantly reduce power consumption versus 7nm.
7np (7nm Performance) makes the most sense. It's a good compromise between 7nm and 7nm+, with the advantage of being 100% compatible with 7nm.


7np : 7% performance increase or 10% power consumption reduction
7nm+: 10% performance increase or 15% power consumption reduction, 20% increase in density.

A 10% power consumption reduction instead of 15% is a good compromise for next gen. And this die could be easier to cool than a 7nm+ one, because it'd consume ~6% more power than on 7nm+ while being ~17% bigger.
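For what it's worth, here's the arithmetic behind those two deltas, taking the quoted node figures at face value (they're rumoured characteristics, not official TSMC numbers):

```python
# Relative to plain 7nm: 7np cuts power by 10%, 7nm+ by 15% (quoted figures).
power_7np = 1.00 * (1 - 0.10)   # 0.90
power_7npp = 1.00 * (1 - 0.15)  # 0.85
extra_power = power_7np / power_7npp - 1  # ~0.059 -> ~6% more power on 7np

# 7nm+ is quoted as ~20% denser, so the same chip on 7np is ~20% larger
# (1 / (1/1.20) = 1.20); whether you call that ~17% or ~20% depends on
# which node you take as the 100% baseline.
area_ratio = 1.20
```

The larger die spreading ~6% more power over ~20% more area is exactly why it could be easier to cool: power density per mm² actually goes down.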
 
Last edited:

modiz

Member
Oct 8, 2018
17,885
7np (7nm Performance) makes the most sense. It's a good compromise between 7nm and 7nm+, with the advantage of being 100% compatible with 7nm.


7np : 7% performance increase, 10% power consumption reduction
7nm+: 10% performance increase, 15% power consumption reduction, 20% increase in density.

A 10% power consumption reduction instead of 15% is a good compromise for next gen. And this die could be easier to cool than a 7nm+ one, because it'd consume ~6% more power than on 7nm+ while being ~17% bigger.
Is it a 7% performance increase AND a 10% power consumption reduction, or is it an "OR" situation?
 

Silencerx98

Banned
Oct 25, 2017
1,289
modiz So, doing some reading, the Crytek demo only featured real-time ray-traced reflections; there's no mention of shadows or global illumination. Still very impressive stuff nonetheless, but it shows how this was possible on a Vega 56 :P
 

chris 1515

Member
Oct 27, 2017
7,074
Barcelona Spain
You don't need to guess that; we know for a fact that Nvidia's ray tracing is really expensive. Crytek has its own implementation of ray tracing running on a Vega 56:


Ray tracing is expensive not just on Nvidia hardware but in real-time and offline rendering generally. This is why it took years to catch on: invented in 1979, it only became common in offline rendering around the end of the '90s and the beginning of the 2000s...


The ray tracing part is only a visibility algorithm between two points, but that query is slow...

RT will always be slower than rasterization. You will never reach the same resolution and/or the same framerate.

Edit: I'm talking about ray tracing, not voxel approximations.
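To illustrate the point that the ray tracing kernel is "only" a point-to-point visibility query, here's a minimal Python version with sphere occluders (my own toy example: a real tracer answers this same question by walking a BVH over millions of triangles, many millions of times per frame, which is where the cost explodes):

```python
import math

def visible(p, q, spheres):
    # Is the straight segment from p to q unobstructed by any of the
    # (center, radius) spheres? This is the shadow-ray / occlusion query
    # at the heart of every ray tracer.
    dx = [q[i] - p[i] for i in range(3)]
    seg_len = math.sqrt(sum(d * d for d in dx))
    d = [c / seg_len for c in dx]  # unit direction from p toward q
    for (cx, cy, cz), r in spheres:
        oc = [p[0] - cx, p[1] - cy, p[2] - cz]
        b = sum(oc[i] * d[i] for i in range(3))
        c = sum(v * v for v in oc) - r * r
        disc = b * b - c  # quadratic discriminant of the ray/sphere test
        if disc < 0:
            continue  # the ray's line misses this sphere entirely
        t = -b - math.sqrt(disc)  # distance to the nearest intersection
        if 1e-6 < t < seg_len:
            return False  # an occluder sits between the two points
    return True
```

The math per occluder is trivial; it's the sheer number of these queries (and the memory-incoherent scene traversal behind each one) that makes real-time RT so much slower than rasterization.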
 