Next-generation console officially named "PlayStation 5," launching in the 2020 holiday season
Sony Interactive Entertainment (SIE) is pleased to announce that its next-generation console has been named "PlayStation 5" (PS5) and will launch in the 2020 holiday season.
www.sie.com
4x the CPU power
2x the GPU power
2x the RAM
A 4x faster hard drive
That's like 16x by my count, conservatively really.
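Tongue-in-cheek counting aside, per-component multipliers don't simply add up. A quick sketch of how the rumored figures above would actually summarize (the multipliers are the ones from the post, so treat this as back-of-the-envelope only):

```python
# Rumored per-component gains from the post (assumptions, not confirmed specs).
gains = {"cpu": 4.0, "gpu": 2.0, "ram": 2.0, "storage": 4.0}

# Naive sum, in the spirit of "by my count":
total_sum = sum(gains.values())          # 12.0

# Geometric mean, a common way to summarize mixed speedups:
product = 1.0
for g in gains.values():
    product *= g
geo_mean = product ** (1 / len(gains))   # ~2.83x overall

print(total_sum, round(geo_mean, 2))
```

Real-world speedup depends entirely on the workload mix, so even the geometric mean is just a rough summary.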
Keep in mind that "OS" space will also include the reserve for things like suspending multiple games at once.
4K minimum? Interesting move.
OS using 3GB of system RAM is hard for me to believe. 2GB seems appropriate, at the very most.
Keep in mind that "OS" space will also include the reserve for things like suspending multiple games at once.
For sure. I think next gen will come out the gate priced high and they'll count on mass production helping the new technology become more commonplace, driving costs down quickly. Sort of like Nintendo did with cartridges, or with low internal storage and counting on MicroSD card prices to drop.
The high cost is also why having a 1080p Lockhart model makes sense.
I don't know why anyone was expecting anything else though. To push games up to 4K, introduce an SSD, and still have enough headroom over the current generation was always going to require an expensive console. I mean, take the X1X, add a regular 1TB SSD and a new CPU and you're already looking at a $399 console. That's why I've been a proponent of having a 1080p model and a 4K model for a while.
Every previous console generation has been about a 10x increase in RAM on average.
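For reference, a rough sanity check of that claim against the commonly cited PlayStation main-memory totals (the PS3 figure counts both its 256MB pools):

```python
# Commonly cited total system memory per PlayStation generation, in MB.
ram_mb = {"PS1": 2, "PS2": 32, "PS3": 512, "PS4": 8192}

# Generation-over-generation multipliers:
gens = list(ram_mb.values())
ratios = [gens[i + 1] / gens[i] for i in range(len(gens) - 1)]
print(ratios)  # [16.0, 16.0, 16.0]
```

By that metric, 10x per generation is if anything conservative, which is why a 2x jump to 16GB looks small to some posters here.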
I have an idea: so there's 16 GB of RAM in Xbox Series X (Anaconda), right?
And 12 GB is rumored for Lockhart.
Microsoft should drop Lockhart and put its 12 GB back into Xbox Series X.
16 + 12 = 28 GB (24 GB for games, 4 GB for OS).
And done.
Of course, but by that same metric, only going for 2x feels small.
Was never gonna happen. 80 GB of RAM would be prohibitively expensive unless you want $2k consoles.
Technology doesn't work like that.
Every previous console generation has been about a 10x increase in RAM on average.
It would be hilarious if the PS5 and Xbox Series X end up literally being mirror images of each other spec-wise.
It will purely come down to who has the best games and online offerings/services.
The PS5 had better have full PS4 backwards compatibility with no smoke and mirrors or extra fees, at the very least.
Of course, but by that same metric, only going for 2x feels small.
What could even be done with 128GB of RAM... I mean, wouldn't that be overkill?
Isn't it rumored that the PS5 has an SSD capable of 20GB/s read speeds?
It means you consider 80GB of GDDR6 RAM to be a reasonable next-gen RAM amount. Interesting.
Every previous console generation has been about a 10x increase in RAM on average.
So what PC GPU are you lot guessing would be closest to this 12 TF card?
Lol wth... People really still believe that 24fps crap.
Is that the lowest memory bump for any gen ever? Doesn't the One X have 12GB?
120FPS sounds incredibly un-cinematic.
Huh? I haven't seen anything remotely like that rumoured. All that's been said is that it's faster than the PCIe 3 drives that were available when the first Wired article hit.
What makes you say that? If the 5700 XT is equivalent to a 2070 and it's roughly 10 TFLOPS, then a 12 TFLOPS GPU based on the same architecture would be equivalent to the RTX 2080, no?
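That comparison rests on the usual peak-FP32 arithmetic: shaders × clock × 2 ops per clock. A minimal sketch using commonly cited boost-clock specs (approximate figures, and note that raw TFLOPS don't transfer cleanly across architectures, which is the whole caveat in this thread):

```python
def fp32_tflops(shaders: int, clock_ghz: float) -> float:
    """Peak FP32 throughput: an FMA counts as 2 ops per shader per clock."""
    return shaders * clock_ghz * 2 / 1000

# Commonly cited boost-clock specs (treat as approximate):
print(round(fp32_tflops(2560, 1.905), 2))  # RX 5700 XT -> 9.75
print(round(fp32_tflops(2304, 1.620), 2))  # RTX 2070   -> 7.46
```

So the 5700 XT's ~9.75 peak TFLOPS delivering roughly RTX 2070 (~7.5 TF) performance is exactly why "12 TF RDNA ≈ RTX 2080" is plausible but not guaranteed.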
Yeah, honestly... given the SSD tech employed in both consoles will probably be very much the same, I would be willing to predict that it will be the least important thing to have an advantage in. Like, if MS for example has SSD tech that could theoretically load Spider-Man in 0.4 seconds while the PS5 does it in 0.8 seconds, that's like... nothing anyone will notice or care about outside of YouTube comparison videos and system-wars fuckery on forums. It's gonna come down to who has the best CPU/GPU solution and features on the chipset to really let devs take advantage of every last drop of potential coming from the XSX/PS5.
Huh? I haven't seen anything remotely like that rumoured. All that's been said is that it's faster than the PCIe 3 drives that were available when the first Wired article hit.
The 2080 Ti is a 16-17 TFLOPS Turing GPU. Assuming RDNA 2.0 has caught up with Turing TFLOPS, we are still 5 short of a 2080 Ti.
'tis a beast.
What I am really surprised by:
1. CPU speed. I was not expecting anything above 3.2GHz, given that power consumption, according to the Ryzen overclocking info shared in the old next-gen speculation thread, increased sharply past around 3GHz.
2. GPU TF count. I was expecting 12TF GCN. For them to pull off 12TF RDNA shows gains from second-generation production/fab efficiency. IIRC, it was going to be around 5% extra performance at the same frequency, so this is a real surprise.
3. RAM allocation. Given MS is most likely not holding back with their premium model, it may very well indicate the upper bound for RAM allocation, and the PS5 may see the same fate.
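For a sense of what 12TF RDNA implies, here is the same peak-FP32 arithmetic solved for clock speed at a few hypothetical CU counts (RDNA has 64 shaders per CU; the CU counts below are illustrative assumptions, not leaked specs):

```python
def clock_ghz_for_tflops(tflops: float, cus: int, shaders_per_cu: int = 64) -> float:
    # Peak FP32 = shaders * clock * 2 (FMA), solved for the clock.
    return tflops * 1000 / (cus * shaders_per_cu * 2)

# Hypothetical CU counts and the clock each would need for 12 TF:
for cus in (48, 52, 56):
    print(cus, "CUs ->", round(clock_ghz_for_tflops(12.0, cus), 2), "GHz")
```

The fewer the CUs, the higher the sustained clock required, which is where the power-consumption concern in point 1 comes back in.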
If true, that GPU is more than I expected, and I can't see it releasing for less than $499.
Devil's advocate:
1) Could it be no hyperthreading, to help hit higher clocks?
2) It says 12TF and RDNA. That doesn't necessarily mean 12 RDNA TF. Although it probably does.
It's two times the X and 8 times the OG Xbox One/S.
Is this new information? It doesn't seem to gel with the "4x increase in performance over Xbox One X" bit that was announced yesterday. The GPU here has 2x the TFLOPS of the One X.
If 13GB is only available to developers that's the ballgame isn't it?
I.e. what if PS5 has 12GB of RAM that is fully available to devs, plus 4GB of slower system RAM for the OS, and a faster SSD than XSX?
You're basically at the same amount of memory available (well, going from 12GB to 13GB on XSX is more, but probably not very noticeable).
Of course, MS could always make more memory available with future updates after making numerous optimizations as time goes on, which is probably likely...
Consoles don't usually have boost clocks.
Should we expect a constant 3.6GHz or a max core speed of 3.6GHz?
Maybe it will help keep the MSRP <$500?
Hardware-based ray tracing but using RDNA, not RDNA2? This is weird...
Ah, right.
If you had a significantly faster SSD (e.g. ReRAM), how effective would that be at giving devs the equivalent of a larger 'virtual' RAM pool? I mean, that's what the SSD is already doing - that 13GB will effectively be much higher due to how quickly you can swap things in and out.
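Back-of-the-envelope on that "virtual RAM" idea: what matters is how much data the SSD can swap in within a single frame. A sketch with assumed bandwidth figures (the 3.5 and 20 GB/s numbers are illustrative, not confirmed specs for either console):

```python
def mb_per_frame(bandwidth_gb_s: float, fps: int = 60) -> float:
    """Approximate MB streamable from the SSD within one frame at a given FPS."""
    return bandwidth_gb_s * 1024 / fps

# Assumed bandwidths: a typical PCIe 3 NVMe drive vs. a hypothetical faster one.
print(round(mb_per_frame(3.5), 1))   # ~59.7 MB per 60fps frame
print(round(mb_per_frame(20.0), 1))  # ~341.3 MB per 60fps frame
```

So a fast SSD doesn't add to RAM directly, but it shrinks how much of the pool has to sit resident "just in case," which is the sense in which 13GB effectively stretches further.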
They are using hilariously low base and boost clocks to get that 11.5-13.5 TFLOPS: 1350 and 1540 MHz. If the 2080 Ti is anything like my regular non-OC'd RTX 2080, it regularly spends time in the 1.9-2.0 GHz range; if anything, I distinctly remember seeing quite a few tests where the 2080 Ti runs at over 2.0 GHz. That's a 17 TFLOPS Turing GPU.
Base 2080 Ti is 11.5TF to 13TF with no OC? Whatever is in Wikipedia.
But if we're using the boost-clock 2080 Ti in people's PCs, 12TF RDNA would still be a tier under.
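The clock math in that post checks out under the standard peak-FP32 formula (cores × clock × 2), using the clock figures quoted above and the 2080 Ti's commonly cited CUDA core count:

```python
def fp32_tflops(cores: int, clock_ghz: float) -> float:
    # Peak FP32 = cores * clock * 2 (an FMA counts as two ops).
    return cores * clock_ghz * 2 / 1000

CORES_2080TI = 4352  # commonly cited CUDA core count for the 2080 Ti

# Clocks as quoted in the thread; the ~2.0 GHz figure is observed, not spec.
for label, ghz in [("base 1.35 GHz", 1.35), ("boost 1.54 GHz", 1.54), ("observed ~2.0 GHz", 2.0)]:
    print(label, "->", round(fp32_tflops(CORES_2080TI, ghz), 2), "TF")
```

That spread (roughly 11.75 TF at base, ~13.4 TF at rated boost, ~17.4 TF at observed clocks) is exactly why "is 12 TF a tier under the 2080 Ti?" depends on which clock you count.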