I think next year's Navi will be more power efficient from architectural improvements and node improvements. It won't surprise me to see base clocks go up 10% across the board.
You used to speculate that Microsoft would use Vega. Call it next gen Vega. All we read here is speculation, chances are that everyone is wrong.
Some of y'all just need to be a little less transparent with the bullshit is all. We got a leak on Ariel/Gonzalo clocking at 2GHz and another link to PS4/Pro, and some of you immediately felt the need to mention Reiner and how he must be wrong and Scarlett is clearly going to be more powerful, yada yada. There's definitely a problem with the thread if any sliver of information is going to start the fanboy drivel.
Amen. Transparent at this point.
We could do without the carnival of stupid every time a new positive PS4 factoid hits.
Don't be that sensitive. Also, I think the ship has sailed on PS4 badmouthing. We're discussing next gen in here lol.
Zen 2 8C/16T with full L3 cache = ~70mm2
Navi 10 full 40CU + 256-bit bus = ~251mm2
Yes.
320-Bit Bus = 10~15mm2+? So, more bandwidth?
8CU = 40mm2+? So, more CUs?
But still less powerful?
Then why even bring Scarlett up if there is nothing new to discuss in which anything can be inferred?
No one is arguing that.
It's a speculation thread.
With Komachi's new info we started the discussion about performance.
What's the problem?
Claiming Reiner is wrong (without providing any proof to beat his tweet) and claiming Scarlett WILL BE more powerful for sure ISN'T speculation on your side.
Given 2GHz is higher than Navi 10's clock, plus RT hardware is likely in, can we even be certain it is Navi 10 now? Plus, we now have performance figures, not just die dimensions.
If Scarlet is weaker, there will be meltdowns. The power narrative is something Microsoft is actively courting.
I think Scarlet will be more powerful but this PS5 sounds delicious.
Sensitive. Read the last page. I spoke the truth. And obviously I mistyped PS4, so super funny. You've already been called out, so I'll just ignore any further comments directed my way.
Did you read the IF in my post, or are you just ignoring what I wrote?
Until we see the final specs from both consoles, what he said is just speculation without proof.
There's no Navi 10 with RT Cores.
Reiner/Colin rumour?
No evidence at all Scarlett is weaker.
Probably the rumour is wrong based on early preview devkits.
If we go by the Scarlett video, with the SOC around 380mm2+... Scarlett will be more powerful for sure.
Because of the early preview devkits.
Like many said.
Reiner/Colin rumour is wrong.
Or half of the 380mm2+ SOC is ray tracing/L3 cache.
Or MS doesn't give a fuck and will release a 4TF Lockhart box and not the Anaconda.
lol
I think there might be an internal meltdown from a certain bird if PS5 is weaker...
Thank you for this. Very informative.

Ok, so it seems like nothing else leaked since I went to sleep. Surprised no article has been made yet, but I guess that is hard to do when only Komachi knows where to access the AMD leak. No one here or on Beyond3D managed to find this, right?
Let's try making sense of it all and go through everything we know about Sony's console once again.
Warning: long post incoming!
First, in January, the known AMD leaker APISAK found Gonzalo and Ariel. Gonzalo is a gaming APU with an 8-core CPU clocked at 1.6 GHz base and 3.2 GHz boost, and its GPU, Ariel, had a 1GHz clock. It also had an unknown cache size, implying it uses a different amount of cache, likely cut down. Gonzalo was marked as Engineering Sample 2, and its code fits PlayStation production; the 1.6 GHz base clock would also make sense for a PlayStation console because it is similar to the previous implementation of backwards compatibility in the PS4 Pro. The Ariel iGPU has the ID 13E9, and to my understanding 13E0 to 13FF are all reserved for Navi 10 LITE.
Later, in early April, a Quality Sample version of Gonzalo leaked, once again via APISAK, this time with a 1.8GHz GPU clock and an updated Ariel ID of 13F8. That already sounded like a very high clock speed, but other than that it didn't really tell us much else.
A week later, the plans for the next generation PlayStation were discussed by Mark Cerny for Wired; the proximity of the Gonzalo update to the article seemed suspicious indeed.
In the article the following things were confirmed: an 8-core Zen 2 CPU (Gonzalo also listed 8 cores), a Navi GPU with ray tracing, a specialized audio chip to enable more accurate in-world audio, backwards compatibility with the PS4 generation, and lastly a custom SSD that, according to Mark Cerny, has higher raw bandwidth than any SSD on PC. Cerny also said that not only is the read speed important (implying that by high bandwidth he was referring to reads), but so are the IO mechanism and the software stack.
In May, AMD announced their 7nm Zen 2 CPUs; to no one's surprise, they are great CPUs for their prices. AMD also announced Navi, based on the new RDNA architecture: a 25% IPC boost over the previous generation, much more power efficient, etc.
Also in May, our own user gofreak found a patent that details potential improvements to the SSD to lower its latency, improve its bandwidth, etc. You can read more in the thread they made for this patent, very interesting stuff:
PS5 - a patent dive into what might be the tech behind Sony's SSD customisations (technical!) (www.resetera.com)
In June, AMD Navi was unveiled as the RX 5700 and RX 5700 XT: 36CU and 40CU GPUs, each with 8GB of GDDR6. These GPUs are based off the 40CU Navi 10 die. It makes sense then that Gonzalo is based off a similar die, because its GPU is based off the Navi 10 LITE die.
Navi launched, and its gaming performance was pretty good, but not incredible. During the launch period, IPC tests were done on Navi, and it turned out Navi actually has higher or equal IPC to NVidia's latest architecture, and 39% higher IPC than Polaris (the PS4 Pro and X1X GPU architecture).
A few weeks ago, a product called AMD Flute was found on UserBenchmark. It is still not clear what AMD Flute is, but it has the same CPU clocks as Gonzalo, and it lists not only 8 cores but 16 threads, plus a GPU ID of 13F9, i.e. a Navi 10 LITE. This is most likely the whole system that contains Gonzalo, or what we assumed to be the PlayStation 5 devkit. The CPU did show a lower score than expected, along with what seems to be a quarter of the cache of the normal Ryzen 3000 CPUs, which is possibly the cause of the lower score. Flute has 16 chips of GDDR6, each of 1GB, seemingly a downclocked version of the 18Gbps memory; it would make sense if the final version uses 8 chips of 2GB instead.
And finally we get to yesterday, when the reliable AMD insider Komachi found out that AMD had accidentally uploaded a lot of data with public access. In there he found mentions of Oberon and Ariel, and he explained he thinks they are the same. Oberon has 3 GPU clocks listed:
Gen 0 with an 800MHz clock, Gen 1 with a 911MHz clock, and Gen 2 with 2000MHz. This information is critical: Gen 0 and Gen 1 are obviously the PS4 and PS4 Pro GPU clocks, which means that without a doubt Oberon is the PS5, and it has a 2GHz clock. Why would it have those clocks? The PlayStation 5 seems to use a very similar backwards compatibility solution to the PS4 Pro. It will have the same compute unit count of 36CU, so for safe base PS4 compatibility you could disable half the compute units and clock the GPU at 800MHz, or activate all the compute units and clock at 911MHz for safe PS4 Pro compatibility. 36 compute units clocked at 2000MHz would give us 9.2TF; with 39% higher IPC than Polaris, that is equivalent to a 12.8TF Polaris GPU, or 3x the PS4 Pro, or a little more than 2x the X1X GPU performance. If this is true, then Sony has a small but fast GPU, which should help reduce cost, and it might even perform a bit better than a wider but lower-clocked GPU, since clocks scale better than CU count.
Then comes the question: why has the GPU clock increased? This is speculation territory, but 40CU at 1800MHz would have an equal amount of TF, so maybe until they got the 36CU/2000MHz configuration working, they gave developers 40CU at 1800MHz. It needs to be mentioned that console manufacturers usually disable a few compute units for yield, but as has been established before, due to Navi's design a 44CU die with 4 disabled would waste a lot more space than a 40CU die with 4 disabled to get 36CU. So it makes more sense to go that route.
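The clock/CU numbers above are easy to sanity-check with the standard peak-FP32 formula (CUs x 64 shaders per CU x 2 ops per clock x clock). A small sketch; the `tflops` helper is mine, and the configs are the rumoured ones from the leaks, not confirmed specs:

```python
def tflops(cus, clock_ghz, shaders_per_cu=64, ops_per_clock=2):
    """Peak FP32 TFLOPS = CUs * shaders/CU * ops/clock (2 for FMA) * clock (GHz)."""
    return cus * shaders_per_cu * ops_per_clock * clock_ghz / 1000

# Oberon's three "gens" line up neatly with known PlayStation configs:
print(tflops(18, 0.800))        # ~1.84 TF, matching the base PS4
print(tflops(36, 0.911))        # ~4.20 TF, matching the PS4 Pro
print(tflops(36, 2.000))        # ~9.22 TF, the rumoured PS5 target
print(tflops(40, 1.800))        # ~9.22 TF, the wider/slower devkit guess
print(tflops(36, 2.000) * 1.39) # ~12.8 "Polaris-equivalent" TF with the IPC uplift
```

Note how 40CU at 1800MHz and 36CU at 2000MHz give identical throughput, which is exactly why the devkit theory works.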
Komachi has been saying that Oberon is Ariel, so if we connect everything, and Ariel, Gonzalo, Flute and Oberon really are all PS5-related codenames, then we can make a close-to-final spec list:
Zen 2 CPU with 8 cores / 16 threads clocked at 3.2GHz, with a quarter of the desktop Ryzen 3000 CPU cache.
Navi GPU with 36 compute units clocked at 2GHz = 9.2 TF, which roughly equals 2x X1X or 3x PS4 Pro in GPU performance.
16GB GDDR6 with a bandwidth of around 530GB/s I think.
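On the bandwidth guess: peak GDDR6 bandwidth is just bus width times per-pin data rate. A quick sketch; the 16.6Gbps figure below is my own back-solve from the ~530GB/s number, not a leaked spec:

```python
def gddr6_bandwidth_gbs(bus_width_bits, data_rate_gbps):
    """Peak bandwidth (GB/s) = bus width in bytes * per-pin data rate in Gbps."""
    return bus_width_bits / 8 * data_rate_gbps

# 16 x 1GB chips in clamshell mode would still present a 256-bit bus:
print(gddr6_bandwidth_gbs(256, 18.0))  # 576.0 GB/s at the full 18Gbps rate
print(gddr6_bandwidth_gbs(256, 16.6))  # ~531 GB/s if downclocked, near the ~530GB/s guess
```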
There are 2 question marks left:
Price
SSD size and bandwidth
Phew that was a lot of typing, but I felt like this was necessary at this point.
Please correct me on anything here or ask questions where needed.
Reiner's proof is the devs. You didn't talk with them; he did.
"If," you say? Depends on the context.
Except Reiner wasn't speculating. He was told in no uncertain terms. You can claim he's lying but then onus of proof is on you.
It's not a rumor. He was literally told by people who know.
This is why I called it a carnival of stupid.
What if he's wrong?
You gonna call it a rumour, speculation or a carnival of stupid?
If he's wrong the devs all lied to him. Did they all lie? The thread was becoming a carnival because of you and a few others last page.
Oh again I'm not offended. I think you're silly.

Because of me?
A lot of people are attacking me because i said Scarlett can be more powerful.
Why are you so offended?
Don't be mad when someone on a speculation thread says something you don't agree with.
Just ignore me and move on :)
What devs? Where is the proof? Where are the official final specs?
Again, just rumours/speculation without proof.
He said PS5 has more flops based on devs' info about target specs. You can believe it, I can believe it, everyone can believe it. Fine!
This means he knows the final specs of both consoles 18 months before launch? If you believe that, fine.
So far Komachi's info is much more factual :)
I didn't say he's lying.
I said he can be wrong, based on the Komachi leaks and speculation from this thread.
Like I said before, the info can be anything.
Until we see the final specs of both consoles, his info is just a rumour.
He literally claimed it was a rumor. It's a rumor.
Transparent.
Exactly. Oluasc has no intention of good faith here.
OH MY GOD. He is the executive editor of Game Informer. Every time he presents something as information, he is putting his reputation on the line; his job is his reputation as a reputable reporter. And who are these devs? No reporter/journalist would ever give up their sources; reporters have literally been dragged to court before giving anything up. This is just mind-boggling. People took Brad Sams and Jez Corden at their word, but an impartial reporter says something against the accepted narrative and people lose their minds.
Why could he be wrong based on Komachi's leak?
I think this new info is just the beginning. It will escalate. We are witnessing the true start of the leak season.
Did the 380mm2 SOC size come from Komachi?

I said: if the Komachi leak is true, if the Scarlett SOC size is true, I doubt Scarlett is around 8TF with a 380mm2 SOC.
Not more not less.
You don't need to guess that; we know for a fact NVidia's ray tracing is really expensive, and Crytek has its own implementation of ray tracing running on a Vega 56.

I feel maybe we could all take a step back from the latest Oberon leaks and appreciate what a good bit of console optimization can do. I will share some findings I made while researching real time global illumination in games for my thesis paper this past semester, and I hope you all appreciate it :)
So first let me bring up that, with all due respect to David Cage, I don't think we should take his 4K/no-RT and 1080p/RT comments at face value. For starters, he was likely giving a very rough estimation through a brute-force method, and things could definitely be more positive once the new consoles are out and programmers build the rendering pipeline specifically around the strengths of the new hardware. Honestly, as impressive as Nvidia's RTX solution seems now, I feel like improvements in performance could be made, albeit at a cost of visual fidelity and lighting accuracy. I would like to admit that since I don't have access to the algorithms, this is purely guesswork, but as anyone tinkering with PC settings would know, Nvidia-specific settings aren't end-all solutions, as they usually come with a very heavy performance hit, which is expected since these features aren't targeting a specific set of hardware within a closed box. A good example from this generation is Ubisoft's custom SSBC ambient occlusion solution made for consoles, which delivered results far superior to traditional SSAO and occasionally close to HBAO+, with a performance hit smaller than SSAO's. One of the crucial ways to achieve this was that SSBC was rendered at half or even quarter resolution and then upscaled, unlike HBAO+ which renders at full resolution.
Now, onto the real time GI solution that impressed me the most this generation from a visuals/performance ratio: Q-Games's very ambitious cascaded voxel cone tracing solution used in The Tomorrow Children. From the start, the tech for the game called for a dynamic lighting engine, because most of the environment was destructible and had to be updated at runtime; therefore, a real time GI solution was needed to account for changes in the scene. The team first looked into multiple approaches, including VPLs and, yes, real time ray tracing, but eventually settled on voxel cone tracing. As a brief summary of the technique: it shares some similarities with ray tracing, but the difference is that a ray intersects the scene at a point whereas a cone intersects an area or a volume. This makes cone tracing far less demanding to render because it covers a much larger region, but the properties of the estimations also change. Since it is no longer the exact value of a point being obtained but rather an area, filtering is required, which in turn produces an average value instead. The scene is then voxelized, which is to say the scene geometry is processed as 3D grids within world space and sparse octree structures are built. After voxelization is complete, direct lighting is injected, and cones are traced to calculate the intersections within the scene.
However, voxel cone tracing on its own would prove far too demanding on PS4 hardware, so the team went on to create their custom solution: cascaded voxel cone tracing. Some of the approaches taken include storing the voxels in a 3D texture instead of octrees and tracing the cone three times to obtain bounce lighting. Even with these measures in place, the technique was still too demanding to run on PS4 with a 33 ms frame target, so further optimizations were necessary. Only one of the six cascade levels is updated every frame, while the others follow in multiples of two proceeding from the second frame, as the developers realized the human eye is slow to register changes in indirect lighting. The final screen space cone traced results are then calculated at only 1/16th of the screen resolution (1920 x 1080 on PS4), i.e. 1/4 of both dimensions. The end result required only 6 ms of GPU time to render, ideal for the game's target of 33 ms, or 30FPS.
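The scheduling idea above can be sketched in a few lines. Note that the "every 2^i frames" schedule is my reading of the "multiples of two" description, so treat this as an illustration rather than Q-Games's actual code:

```python
# Illustrative sketch of a cascaded refresh schedule and the
# reduced-resolution trace count described in the post.
# Assumption: cascade 0 refreshes every frame, cascade i every 2**i frames.

def cascades_to_update(frame, levels=6):
    """Return which of the `levels` cascades get refreshed on this frame."""
    return [i for i in range(levels) if frame % (2 ** i) == 0]

for frame in range(8):
    print(frame, cascades_to_update(frame))

# Tracing at 1/16th of a 1920x1080 frame (1/4 of each dimension):
full = 1920 * 1080
print(full // 16)  # 129600 cone-traced pixels instead of 2073600
```

The point is the amortization: most frames only refresh the innermost cascade or two, which is how the whole technique fits in a 6 ms GPU budget.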
To summarize my post: basically, I'm saying we should be hopeful that next gen developers will come up with custom solutions for real time ray tracing which would be far cheaper than currently available methods. For instance, the rays could be traced at 1/8th of native resolution then upscaled or checkerboarded, and frankly it would be almost impossible to spot the difference, since we're talking about thousands of rays cast into the scene at runtime. We shouldn't immediately feel down that enabling ray tracing in next gen games means we're stuck at 1080p again.
I'm a little worried given how loud my Pro can be... I really hope Sony steps up the refinement of the cooling solution.
7np (7nm Performance) makes the most sense. It's a good compromise between 7nm and 7nm+, with the advantage of being 100% compatible with 7nm.

Remind me again, if we're going with the assumption that Oberon/Ariel is related to Navi 10 Lite in some way, was Navi 10 Lite planned to be fabricated on 7nm+/EUV? Assuming the latest leaks are correct and the PS5 GPU is targeting 2 GHz, it seems most are thinking 7nm+ is likely to significantly reduce power consumption from 7nm.
Is it a 7% performance increase AND a 10% power consumption reduction, or is it an "OR" situation?
7np : 7% performance increase, 10% power consumption reduction
7nm+: 10% performance increase, 15% power consumption reduction, 20% increase in density.
A 10% power consumption reduction instead of 15% is a good compromise for next gen. And this die could be easier to cool than one on 7nm+, because it'll consume ~6% more than the 7nm+ die while being ~20% bigger (equivalently, the 7nm+ die would be ~17% smaller).
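A quick check of the arithmetic behind those percentages. All of the input numbers are the poster's figures, not official TSMC data, and the "~17%" in the post reads as the inverse view (the 7nm+ die being ~17% smaller rather than the 7np die ~17% bigger):

```python
# All percentages are from the post above, relative to a baseline 7nm die.
power_7np = 1 - 0.10    # 7np: 10% power reduction vs 7nm
power_7nmp = 1 - 0.15   # 7nm+: 15% power reduction vs 7nm
density_7nmp = 1.20     # 7nm+: 20% density increase over 7nm/7np

power_ratio = power_7np / power_7nmp  # ~1.06 -> 7np draws ~6% more power
area_ratio = density_7nmp             # same transistor count -> area scales with 1/density
print(f"7np vs 7nm+: {(power_ratio - 1) * 100:.1f}% more power, "
      f"{(area_ratio - 1) * 100:.0f}% larger die")
```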
With foundries, assume OR unless otherwise specified.