Next gen PS5 and next Xbox launch speculation - Secret sauces spicing 2019

When will the first 'next gen' console be revealed?

  • First half of 2019

    Votes: 593 15.6%
  • Second half of 2019 (let's say post-E3)

    Votes: 1,361 35.9%
  • First half of 2020

    Votes: 1,675 44.2%
  • 2021 :^)

    Votes: 161 4.2%

  • Total voters
    3,790
  • Poll closed.
Status
Not open for further replies.

Miasma

Member
Oct 31, 2017
126
PS4 was also a $399 machine. Looks like Sony might be going with a $499 beast.

Google Stadia servers are 10.7 TFLOPS. Jason Schreier said Sony and MS are aiming higher than that, so 12 TFLOPS is pretty much a certainty. If they add a nice cooling system and go to $499, we can do 14 TFLOPS on 7nm with no issues.
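For reference, here's the back-of-envelope math behind those TFLOPS figures, assuming a GCN/Navi-style layout of 64 shaders per CU and 2 FLOPs per shader per clock (the CU/clock combos below are illustrative, not leaks):

```python
# FP32 TFLOPS = CUs * 64 shaders * 2 FLOPs/clock * clock(MHz) / 1e6
def tflops(cus: int, mhz: float) -> float:
    return cus * 64 * 2 * mhz / 1_000_000

# Illustrative combos, not leaked specs:
for cus, mhz in [(56, 1200), (64, 1475), (64, 1700)]:
    print(f"{cus} CUs @ {mhz} MHz -> {tflops(cus, mhz):.2f} TFLOPS")
# 56 CUs @ 1200 MHz -> 8.60 TFLOPS
# 64 CUs @ 1475 MHz -> 12.08 TFLOPS
# 64 CUs @ 1700 MHz -> 13.93 TFLOPS
```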

RTX 2080 Ti is an overpriced monstrosity. Expect 1080 Ti levels of performance from next gen consoles.

The 1080 Ti is 11.3 TFLOPS though?
People here are usually referring to next-gen GPU performance in relation to AMD's cards, not Nvidia's. So expectations are between Vega 56 and 64 performance, not 2080 Tis.

A lot of people in this thread don't seem to think that, though. However, I do agree with what you have said regarding Vega 56/64 performance. I would put my money on the graphical component of the APU being something within the power of a Vega 64, but with the Navi architecture.
 

VX1

Member
Oct 28, 2017
4,020
Europe
28nm to 16/14nm was a move from planar to FinFET transistors, and this was the main reason most designs gained clocks on 16/14nm compared to same(ish) designs on 28nm. This won't occur again with the transition to 7nm, and thus nobody should expect significant clock improvements on 7nm parts compared to their 16nm predecessors - unless said parts are specifically re-engineered to reach significantly higher clocks, of course.
Oh really? I didn't know this...interesting.
 

anexanhume

Member
Oct 25, 2017
4,302
Oh really? I didn't know this...interesting.
That remains to be seen. We have a lot of reason to believe clocks won't move, according to statements from both ARM and AMD, and we know it has a lot to do with interconnect resistivity. However, the leaked clock speeds of Ryzen 3000 show a hefty bump to base clocks.

You could also get to higher clocks via architecture changes. Nvidia GPUs can often boost above 2GHz.
 

BradGrenz

Banned
Oct 27, 2017
900
Yeah, there is no reason to assume Navi lacks architectural optimizations aimed at achieving higher clock speeds. I think it's highly likely it has them.
 

anexanhume

Member
Oct 25, 2017
4,302
Yeah, there is no reason to assume Navi lacks architectural optimizations aimed at achieving higher clock speeds. I think it's highly likely it has them.
They need to prioritize perf/Watt. It likely follows that clocks can be higher due to lower power density, but it’s not necessarily guaranteed. Apple’s early A series processors were very low in frequency but perf/Watt leaders, for example. Now they clock similarly to others in the industry.
 

Papacheeks

Member
Oct 27, 2017
3,224
Watertown, NY
Navi is an architecture and Radeon 7 is a product. Your statement is confused.
No it is not.

Navi is still based on GCN, and from what the sources I TRUST say, it's geared much more toward efficiency than raw power. Radeon 7 is based on the Instinct cards, using the Vega chip as its basis - basically chips from Instinct cards that more than likely didn't meet certain criteria. Navi will be a line of GPUs; it's just the code name for that line, just like Polaris was for the RX 500 line.

It's going to have variations, and be much more efficient than Vega, with a new memory controller. I expect something better than the RX Vegas, possibly getting into 2080 territory performance-wise for certain tasks, but not competing on 1:1 performance. There's just no way, given what they have R&D-wise compared to Nvidia.

I expect a high-end version, but from what AMD has been open about, they consider the Radeon 7 to be their high end for now, with the Navi line filling the gap by replacing the RX 500 - RX Vega lineup at much better power/performance and price.

Even with anexanhume being much more knowledgeable than I am at this stuff, it's still just speculation. But from what we know about Radeon being short-handed, and having these roadmaps already in play, it's hard to believe they will be able to compete with a 2080 Ti so quickly using a more efficient GCN arch that, chip-wise, is an evolution of Polaris.

I mean, if it is more powerful than a Radeon 7 with better performance/watt, then gravy. It's just hard to swallow knowing what the Radeon group has had available resource-wise, and knowing this was all part of a roadmap from way back when Raja was with the company.
 

Pheonix

Member
Dec 14, 2018
1,227
St Kitts
So A) "should be able to double the CUs" in a purely physical sense of being capable of putting about twice as many transistors in a chip of the same size on 7nm as it was possible on 16nm -- sure. This doesn't say anything about the cost of said chip though which might be quite a bit higher than that of the chip on 16nm of the same size and costs are way more important than process capabilities - as you can probably guess from the fact that current gen consoles all have APUs with 200-400mm^2 die sizes while the same 16nm process (or family of) they are using allows building something like a 815mm^2 sized Nvidia GV100 GPU.

B) The capability of doubling the number of transistors wrt GCN GPUs is largely irrelevant at this point as GCN is unable to make use of these additional transistors due to hitting the power ceiling well before reaching the maximum die size on both 14nm (Vega 10 is consuming 300W at 482mm^2) and 7nm (Vega 20 is consuming 300W at 331mm^2). For them to make any use of what 7nm process provides in GPU complexity increases they have to make *huge* gains in power efficiency first.



I don't even know if I want to ask you what the difference is, or what "ported over to 7nm" means...
1) It's been said that 7nm, being a more complex process, will cost about 10-20% more than 14nm for the same sized die. Can't remember the details or where I saw that.

But regardless, everyone is shifting over to that node, and by everyone I mean mobile, which is the single largest customer of chips, period. Economies of scale, blah blah blah.
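A toy sketch of why that trade can still work out, using the 10-20% figure above plus an assumed (not quoted) ~2x usable transistor-density gain:

```python
# Toy cost-per-transistor model: a same-sized 7nm die costing ~1.2x a
# 14nm die (upper end of the 10-20% above) but packing ~2x the
# transistors (assumed density gain) is still cheaper per transistor.
die_cost_ratio = 1.2   # 7nm vs 14nm, same die area
density_ratio = 2.0    # assumed usable transistor-density gain
print(f"7nm cost per transistor vs 14nm: {die_cost_ratio / density_ratio:.2f}x")
# -> 0.60x
```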

2) Built from the ground up for 7nm means just that. Navi was always intended to be a 7nm chip, and there will be things that being built on 7nm allows them to do: certain structures get smaller, which lets them fit more of other things on a chip, and so on. Navi is also a more focused GPU than Vega, supposedly cutting out most of the bloat that Vega had.

3) Oh, and the clock thing... Vega 64 on 14nm clocked at a peak of ~1500MHz. Radeon 7 on 7nm clocks at a peak of ~1750MHz.
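Putting rough numbers on both points, using the die sizes and TDPs quoted above plus those peak clocks (a back-of-envelope sketch only):

```python
# Power density from the die sizes/TDPs quoted above, plus the
# 14nm -> 7nm Vega peak-clock bump from point 3.
vega10_w_mm2 = 300 / 482   # Vega 10, 14nm  -> ~0.62 W/mm^2
vega20_w_mm2 = 300 / 331   # Vega 20, 7nm   -> ~0.91 W/mm^2
print(f"Vega 10: {vega10_w_mm2:.2f} W/mm^2, Vega 20: {vega20_w_mm2:.2f} W/mm^2")
print(f"Peak clock uplift: {1750 / 1500 - 1:.1%}")   # ~16.7%
```

Same 300W budget on a much smaller die is the power-ceiling argument in one number: the heat gets denser, not cheaper.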
 
Last edited:

BradGrenz

Banned
Oct 27, 2017
900
They need to prioritize perf/Watt. It likely follows that clocks can be higher due to lower power density, but it’s not necessarily guaranteed. Apple’s early A series processors were very low in frequency but perf/Watt leaders, for example. Now they clock similarly to others in the industry.
Optimizing for clock speeds is one way to achieve better perf/watt. Nvidia chips run at higher clock speeds with better perf/watt.
 

anexanhume

Member
Oct 25, 2017
4,302
Optimizing for clock speeds is one way to achieve better perf/watt. Nvidia chips run at higher clock speeds with better perf/watt.
I don’t know what optimizing for clock speeds means in this context. A good arch is going to look better at any frequency unless you’re somehow designing a chip that is huge and has high leakage overhead. Even Turing manages to stay efficient with a huge die size. Perhaps AMD is not aggressive enough with DVFS, AVFS, clock and power gating.

You have to balance the deeper pipelining needed for higher clock speeds against IPC, so you're not constantly flushing the pipeline. Of course, that's a CPU paradigm, but the idea of wavefronts with NOPs applies here.

I think at the core they need to revamp their cell library (in addition to architecture), which the Zen engineers were hopefully able to assist with.
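That balance is just the classic identity perf = IPC × clock; a toy illustration with invented numbers (not estimates for any real chip):

```python
# perf ~ IPC * clock: a deeper pipeline buys clock but can cost IPC
# (flushes, bubbles). All numbers invented purely for illustration.
designs = {
    "shallow pipeline": {"ipc": 1.00, "ghz": 1.5},
    "deep pipeline":    {"ipc": 0.80, "ghz": 2.0},
}
for name, d in designs.items():
    print(f"{name}: relative perf = {d['ipc'] * d['ghz']:.2f}")
# shallow: 1.50, deep: 1.60 - deeper only wins if the IPC hit stays modest
```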
 
Last edited:

Eylos

Member
Oct 25, 2017
6,310
With the Kotaku article about Anthem, I think that Reddit leaker had a real source - 2 leaks out of 2 right. It looks like they wanted to delay the game because it was a mess, but EA didn't want to change the deadline. Also, regarding the Frostbite engine: I think some of that stuff might be right, but he could be making some of it up.

What the leaker said:
"- "Anthem is a mess on This gen Consoles, Going to get delayed again"
Maybe they improved on some aspects that decided to cancel the delay, I’ll confirm the Anthem situation with my source
Edit : Same answer as before it will get delayed, The thing is The Modified Frostbite engine on consoles isn’t performing how they want to, The gameplay you see on live-streams are all modified demos
Edit edit: EA wants it out before their earnings report in March. That's all that matters, not whether the game is ready or not."



Quoting the article:
"They had publicly committed to a fall 2018 ship date, but that had never been realistic. Publisher EA also wouldn’t let them delay the game any further than March 2019, the end of the company’s fiscal year. They were entering production so late, it seemed like it might be impossible to ship anything by early 2019, let alone a game that could live up to BioWare’s lofty standards."

"The explanation for this lag can be summed up in one word, a word that has plagued many of EA’s studios for years now, most notably BioWare and the now-defunct Visceral Games, a word that can still evoke a mocking smile or sad grimace from anyone who’s spent any time with it.
That word, of course, is Frostbite."
https://kotaku.com/how-biowares-anthem-went-wrong-1833731964
 
Last edited:
Oct 26, 2017
4,434
United Kingdom
No it is not.

Navi is based on GCN still, and from what the sources I TRUST say is it's much better efficiency than raw power. RADEON 7 is based on instinct cards using vega chip as it's basis but is basically chips from instinct cards that more than likely didn't meet certain criteria. NAVI will be a line of GPU's, and is just the code name, for that line. Just like Polaris was for the RX 500 line.

It's going to have variations, and be much more efficient than vega, with new memory controller. I expect something better than the RX VEGA's going into 2080 territory possibly performance wise for certain task's, but not compete on 1:1 perfomance. There's just no way in terms of what they have R&D wise compared to NVIDIA.

I expect a highend version, but from what AMD has been open about, they consider the Radeon 7 for now to be their high end. With Navi line filling in the gap for replacing the RX 500- RX VEGA line at much better power/performance and in price.

Even with anexanhume being much more knowledgeable than I am at this stuff, it's still just speculation. But from what we know in terms of Radeon being short handed, and having these already in play roadmap wise it's hard to believe that they will be able to compete with a 2080ti so quickly with a more efficient GCN arch, that chip wise is a evolution to Polaris.

I mean if it is more powerful than a Radeon 7 with better performance/watt. Then gravy, it's just hard to swallow knowing what Radeon group has had available for resources, and knowing this was all part of a roadmap way back when Raj was with the company.
What are you talking about? Navi IS a GPU microarchitecture, regardless of whether it’s based on GCN or not.

This whole diatribe is arguing against something I haven't even said. I've been discussing architectures, and regardless of faceless 'source' speculation, what we KNOW as fact is that Vega is less efficient and lower-performing as an architecture than it was originally intended to be.

A hypothetical Navi part of the same TDP, clock speed and die area as Radeon 7 will outperform Radeon 7. Why? Because it will be a much better resourced project within AMD and shouldn’t release with most of its major features disabled or not working as intended. Even if all Navi ends up being is a Vega with working NGG and primitive shaders, and with the geometry performance to take advantage of DSBR, it will comfortably outperform Radeon 7.

Radeon 7 is a product based on the Vega architecture. Navi is an architecture that will be implemented in a number of products. There is nothing stopping AMD making a Navi part that will outperform all released Vega products. A mid-range Navi part with 72-80 active CUs modestly clocked will outperform Radeon 7, even if Navi ends up being Vega but fixed and with a front end to feed more than 64 CUs (which is all but guaranteed).

If you really think a GPU better than that is going to make it into a console the following year then, well, good luck with that.
Who said anything about consoles?

Reading comprehension fail.
 
Last edited:

BreakAtmo

Member
Nov 12, 2017
3,445
I wonder if a next-gen-only Destiny 3 would go for 60fps. A cross-gen or enhanced BC version, sure, but for a next-gen-only game they might want to go bananas with something else.
 

TONX

Member
Oct 28, 2017
6
Ft. Lauderdale
Honestly all it needs for me to be happy is true 4K at 60fps. Graphics have finally gotten to the point where we are splitting hairs when it comes to comparisons.
 

DrKeo

Member
Mar 3, 2019
592
Israel
Honestly all it needs for me to be happy is true 4K at 60fps. Graphics have finally gotten to the point where we are splitting hairs when it comes to comparisons.
If next gen is around the 1080 Ti's performance, it could happen - a 1080 Ti runs Destiny 2 at 75fps in native 4K. Thing is, Bungie always went for graphics over resolution and performance, so I'm not sure it will happen.
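As a naive sanity check (assuming frame rate scales roughly linearly with GPU throughput, which real games only approximate):

```python
# If an ~11.3 TFLOPS 1080 Ti does 75fps at native 4K in Destiny 2,
# naive linear scaling puts 4K60 within reach of a weaker GPU.
tflops_1080ti, fps = 11.3, 75
print(f"~{tflops_1080ti * 60 / fps:.1f} TFLOPS for 4K60 under linear scaling")
# -> ~9.0 TFLOPS
```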
 

Thorrgal

Member
Oct 26, 2017
4,382
Yeah I'm back on the Fall 2020 train, there's no reason for Sony to fix what ain't broke and deviate from the strategy that helped them sell 100+ million consoles this gen
Agree. We may have to wait for months, maybe even a year, until the PS5 is announced - probably around February/March of 2020.

I think it's time to check this thread twice a week instead of twice a day, at least until E3... we've really gone in circles for months now, and nothing of substance has come out of GDC.
 

BreakAtmo

Member
Nov 12, 2017
3,445
Considering the fact that I'm not expecting a huge visual update to the renderer in Destiny 3, I would think Zen CPUs can make a 60fps target a very real possibility.
I agree that a visual upgrade probably won't be too major, but what if Bungie decides to, say, increase enemy numbers? Improve enemy AI? Have missions where huge ship battles are occurring in the sky that you can seamlessly interact with while fighting on the ground? Wouldn't these tax the CPU more? Or is it likely that the Zen 2 chip will be able to handle all this at 60fps anyway?
 

Kenzodielocke

Member
Oct 25, 2017
6,362
Did anyone listen to what Jeff said on the Bombcast? He said that the devs he talked to were frustrated because they don't know a whole lot about next gen/Stadia, and that affects their business/development plans.

Interesting.
 
Oct 26, 2017
4,434
United Kingdom
Honestly all it needs for me to be happy is true 4K at 60fps. Graphics have finally gotten to the point where we are splitting hairs when it comes to comparisons.
I don't need true 4k. CBR from a lower internal rendering res to get a 4k output buffer is more than good enough for me.

I mostly agree on game graphics though. If every game boasts at the minimum the pixel quality of RDR in terms of visual fidelity and graphical effects, I'll be more than happy.

Played the game last night on base PS4... it is just stunning.
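For scale, checkerboarding shades roughly half the pixels of the native target each frame (a simplification that ignores the reconstruction pass):

```python
# Native 4K vs a typical checkerboard internal pixel load.
native_px = 3840 * 2160        # 8,294,400 px per frame
cbr_px = native_px // 2        # CBR shades ~half of them each frame
print(f"native 4K: {native_px:,} px, CBR internal: ~{cbr_px:,} px")
```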

.... Thing is, Bungie always went for graphics over resolution and performance, so I'm not sure it will happen.
Judging from all the graphics complaints about Destiny 2 at launch, I don't think I could argue the above with a straight face.

I agree that a visual upgrade probably won't be too major, but what if Bungie decides to, say, increase enemy numbers? Improve enemy AI? Have missions where huge ship battles are occurring in the sky that you can seamlessly interact with while fighting on the ground? Wouldn't these tax the CPU more? Or is it likely that the Zen 2 chip will be able to handle all this at 60fps anyway?
I mean sure, but the decision on target framerate comes before that. If they wanna go 60fps, they'll design around that goal and rein in their gameplay design ambitions accordingly.

Equally, if their priority is 32-player PvPvE battles in a massive open world with shit flying everywhere, they'll focus on designing to that, probably running at 30fps while minimising input latency like they normally do.

Did anyone listen to what Jeff said on the Bombcast? He said that the devs he talked to were frustrated because they don't know a whole lot about next gen/Stadia, and that affects their business/development plans.

Interesting.
I don't think it affects anything much yet if fall 2020 is the target launch date for both consoles. That's still over 18 months away.

Some major 3rd parties will have small teams who know much more than even other devs working at the same studio. So comments like Jeff's aren't really useful indicators, because they'll be skewed by a heavy amount of selection bias - I highly doubt Jeff knows the devs currently working on next-gen hardware, and even if he did, they wouldn't talk to him precisely because of his position in the gaming press.

Contrary to popular internet belief, not all devs are over-excited fans itching to jeopardise their careers by leaking stuff online.
 
Last edited:

BreakAtmo

Member
Nov 12, 2017
3,445
I don't need true 4k. CBR from a lower internal rendering res to get a 4k output buffer is more than good enough for me.

I mostly agree on game graphics though. If every game boasts at the minimum the pixel quality of RDR in terms of visual fidelity and graphical effects, I'll be more than happy.

Played the game last night on base PS4... it is just stunning.
Agreed. Give me games with the pixel quality of RDR2 plus the resolution and framerate of DMC5 on the X.

I mean sure, but the decision on target framerate comes before that. If they wanna go 60fps, they'll design around that goal and rein in their gameplay design ambitions accordingly.

Equally, if their priority is 32-player PvPvE battles in a massive open world with shit flying everywhere, they'll focus on designing to that, probably running at 30fps while minimising input latency like they normally do.
So they never alter these things? I know it's not the same company, but I was thinking of how Naughty Dog initially promised 60fps for Uncharted 4 after realising how good it looked in TLOU Remastered, but they just couldn't get it to work reliably and eventually had to accept (a really good and hard-locked) 30fps.
 

anexanhume

Member
Oct 25, 2017
4,302
Did anyone listen to what Jeff said on the Bombcast? He said that the devs he talked to were frustrated because they don't know a whole lot about next gen/Stadia, and that affects their business/development plans.

Interesting.
That aligns with Benji talking about forcing hands later this year.

_____________________________________

TSMC has fully validated their design tools for 5nm (PDK, transistor models, etc.). They are in risk production already. It remains an insane long shot, but if some SKU were to be 2021...

It has true, meaningful gains over 7nm (non-EUV), like 1.8x density on mobile.

https://www.tsmc.com/tsmcdotcom/PRListingNewsAction.do?action=detail&newsid=THPGWQTHTH&language=E
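What 1.8x density would mean in the ideal case; the 360mm² starting die is a made-up example, and logic density gains never translate fully into whole-SoC area:

```python
# Idealized area shrink from the quoted 1.8x density gain.
die_7nm_mm2 = 360                     # hypothetical 7nm SoC, made up
die_5nm_mm2 = die_7nm_mm2 / 1.8       # logic-only ideal
print(f"{die_7nm_mm2}mm^2 at 7nm -> ~{die_5nm_mm2:.0f}mm^2 at 5nm")
# -> ~200mm^2 (real SoCs shrink less: SRAM and analog scale worse)
```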


___________________________

Apparently there was another Reddit “leak” for Anaconda dev kit (Dante)

DANTE ( XDK )

CPU: Custom AMD Zen 2 8C/16T @ 3.2GHz
GPU: Custom AMD Navi @ 1475MHz
MEMORY: 48GB GDDR6 @ 672GB/s
STORAGE: SSD 4TB NVMe @ 3GB/s
Specs are on the edge of believable, but clearly a $500 box.
 
Last edited:

Gamer17

Member
Oct 30, 2017
5,432
Sony is part of the BDA, and they have the TV department and Sony Pictures, so they have an interest in the UHD format. With the exception of the PlayStation 4, every console introduced something new in this regard, so it could be sold as something other than just a videogame machine:
- PlayStation 1 was a CD player.
- PlayStation 2 was a DVD player.
- PlayStation 3 was the Blu-ray player.
That aligns with Benji talking about forcing hands later this year.

_____________________________________

TSMC has fully validated their design tools for 5nm (PDK, transistor models, etc.). They are in risk production already. It remains an insane long shot, but if some SKU were to be 2021...

It has true, meaningful gains over 7nm (non-EUV), like 1.8x density on mobile.

https://www.tsmc.com/tsmcdotcom/PRListingNewsAction.do?action=detail&newsid=THPGWQTHTH&language=E


___________________________

Apparently there was another Reddit “leak” for Anaconda dev kit (Dante)



Specs are on the edge of believable, but clearly a $500 box.
Well then Anaconda probably has 24GB of GDDR6 memory, just like how the dev unit for the X had 24GB of GDDR5 and the production unit had 12GB.
 

Papacheeks

Member
Oct 27, 2017
3,224
Watertown, NY
What are you talking about? Navi IS a GPU microarchitecture, regardless of whether it’s based on GCN or not.

This whole diatribe is arguing against something I haven't even said. I've been discussing architectures, and regardless of faceless 'source' speculation, what we KNOW as fact is that Vega is less efficient and lower-performing as an architecture than it was originally intended to be.

A hypothetical Navi part of the same TDP, clock speed and die area as Radeon 7 will outperform Radeon 7. Why? Because it will be a much better resourced project within AMD and shouldn’t release with most of its major features disabled or not working as intended. Even if all Navi ends up being is a Vega with working NGG and primitive shaders, and with the geometry performance to take advantage of DSBR, it will comfortably outperform Radeon 7.

Radeon 7 is a product based on the Vega architecture. Navi is an architecture that will be implemented in a number of products. There is nothing stopping AMD making a Navi part that will outperform all released Vega products. A mid-range Navi part with 72-80 active CUs modestly clocked will outperform Radeon 7, even if Navi ends up being Vega but fixed and with a front end to feed more than 64 CUs (which is all but guaranteed).



Who said anything about consoles?

Reading comprehension fail.
Where are you getting your info from? Everything out there from leaks, up to Navi 10, shows them landing performance-wise around 1080/2070.

If there is a Navi 20, some recent leaks (which should be taken with a grain of salt) say it will beat a 2080 Ti, but I believe that is in specific workloads, like certain rendering techniques.

If there is a card with 72 CUs, jesus, the price won't be cheap for a consumer card. You said I was confused; I am not. I know Navi is an architecture, but I also know that the consumer cards based on it that have somewhat leaked are lining up to be around 1080/2070, with Navi 20, which will come later, possibly being their new render card like the Instinct, which they might also offer in a desktop consumer version.

Someone said Navi would beat a Radeon 7; I was strictly talking consumer video cards, not the entire arch that has other chips based on it being used in, let's say, consoles or professional render cards.

More than likely they have something on the high end in the works.

But I highly doubt they will launch it this year; mainstream GPUs will be their main focus, more like something to replace the RX 500 line and Vega 56/64 line.
 

Mitchman1411

Member
Jul 28, 2018
242
Oslo, Norway
It's a common misconception.

There is no such thing as a UHD drive. There are only Blu-ray drives.

The UHD format just requires a higher minimum Blu-ray disc read speed. But for movies that's like 32MB/s or so. Still significantly less than what we need for games just with regards to moving data from a disc to internal storage, if you want to be able to get into your game in anything less than 5 minutes.
Capacity has been increased with UHD too, and the capacity tiers with their minimum read speeds are 50GB at 82 Mbit/s, 66GB at 108 Mbit/s, and 100GB at 128 Mbit/s. Not sure where your 32 MB/s comes from; that would be 256 Mbit/s, which is not mandated anywhere I could find.
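To see why those disc speeds matter for games, here's the raw end-to-end read time for each UHD tier (a sketch; real installs also hit seek and decompression limits):

```python
# Raw time to read a full UHD Blu-ray at each tier's minimum speed.
tiers = [(50, 82), (66, 108), (100, 128)]   # (GB, Mbit/s) from the post
for gb, mbit in tiers:
    minutes = gb * 8000 / mbit / 60         # decimal GB -> Mbit, then rate
    print(f"{gb}GB @ {mbit} Mbit/s -> ~{minutes:.0f} min")
# 50GB -> ~81 min, 66GB -> ~81 min, 100GB -> ~104 min
```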
 

Andromeda

Member
Oct 27, 2017
1,834
Apparently there was another Reddit “leak” for Anaconda dev kit (Dante)

GPU: Custom AMD Navi @ 1475MHz
MEMORY: 48GB GDDR6 @ 672GB/s


Specs are on the edge of believable, but clearly a $500 box.
- That would mean 2x more memory and 2x faster bandwidth than XBX.
- Assuming Navi has 64 CUs with the same FLOPS per CU as GCN, that would mean almost exactly 12 TFLOPS (~12.08), again 2x the XBX GPU.
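Those ratios check out against the Xbox One X's published figures (6.0 TFLOPS, 326.4 GB/s, 12GB), if you assume the retail box keeps half the devkit's 48GB, as speculated earlier:

```python
# Dante leak vs Xbox One X retail specs (6.0 TF, 326.4 GB/s, 12GB).
dante = {"tflops": 12.08, "gb_s": 672, "ram_gb": 48 / 2}  # retail RAM assumed halved
xbx   = {"tflops": 6.0,   "gb_s": 326.4, "ram_gb": 12}
for k in dante:
    print(f"{k}: {dante[k] / xbx[k]:.2f}x the X")
# tflops: 2.01x, gb_s: 2.06x, ram_gb: 2.00x
```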
 

DrKeo

Member
Mar 3, 2019
592
Israel
Judging from all the graphics complaints about Destiny 2 at launch, I don't think I could argue the above with a straight face.
I said that they preferred better graphics over frame rate and resolution; I didn't say that their games were particularly good-looking. Bungie has always made bad decisions when allocating processing power.
 

msia2k75

Member
Nov 1, 2017
270
DANTE ( XDK )

CPU: Custom AMD Zen 2 8C/16T @ 3.2GHz
GPU: Custom AMD Navi @ 1475MHz
MEMORY: 48GB GDDR6 @ 672GB/s
STORAGE: SSD 4TB NVMe @ 3GB/s
The specs are believable considering the rumoured Anaconda 12TF...
1475MHz with 64 CUs would reach almost exactly 12TF.
Concerning the memory, the 48GB would be for the devkit... You would need to slash that by 2 for a consumer device, giving us 24GB of GDDR6. Doable.
The bandwidth would indicate a 384-bit memory bus clocked at 14Gbps. That's quite a bit.
Finally, the storage amount needs to be slashed by 4 to give us 1TB. Most likely, the speed would be lower too.
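Working the memory numbers backwards (the clamshell layout is an assumption about how a devkit reaches 48GB, not a leaked detail):

```python
# 672 GB/s on a 384-bit bus implies the per-pin data rate:
bus_bits = 384
pin_gbps = 672 * 8 / bus_bits        # = 14 Gbps, not 15
# 384-bit = 12 x 32-bit GDDR6 chips; 48GB / 12 positions = 4GB each,
# i.e. two 16Gb (2GB) chips per position in clamshell (assumed layout).
positions = bus_bits // 32
print(f"{pin_gbps:.0f} Gbps per pin, {48 / positions:.0f} GB per chip position")
```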
 

RevengeTaken

Banned
Aug 12, 2018
1,445
The specs are believable considering the rumoured Anaconda 12TF...
1475MHz with 64 CUs would reach almost exactly 12TF.
Concerning the memory, the 48GB would be for the devkit... You would need to slash that by 2 for a consumer device, giving us 24GB of GDDR6. Doable.
The bandwidth would indicate a 384-bit memory bus clocked at 14Gbps. That's quite a bit.
Finally, the storage amount needs to be slashed by 4 to give us 1TB. Most likely, the speed would be lower too.
I'm willing to bet the PS5 will be more powerful than Anaconda!
 

Pheonix

Member
Dec 14, 2018
1,227
St Kitts
Honestly all it needs for me to be happy is true 4K at 60fps. Graphics have finally gotten to the point where we are splitting hairs when it comes to comparisons.
You're asking for the one thing that is going to leave you unhappy.

If next gen is around the 1080 Ti's performance, it could happen - a 1080 Ti runs Destiny 2 at 75fps in native 4K. Thing is, Bungie always went for graphics over resolution and performance, so I'm not sure it will happen.
A common mistake a lot of people here seem to be making: you look at next gen as if it's going to be current gen with more power, as if they are making better hardware to run the games we have now at a higher resolution and higher frame rate.

That's not going to be the case. We will have native 4K games running at 30fps with "next gen assets" and "tech": improvements in lighting, AI, geometry complexity and/or interactivity, and a slew of other features that would cripple current gen hardware but be possible on next gen hardware.

And as usual, the majority of devs will opt for 30fps to allow for more eye candy in their games, and again as usual, only high-end PCs will be able to take those same games and run them at a higher frame rate.


___________________________

Apparently there was another Reddit “leak” for Anaconda dev kit (Dante)



Specs are on the edge of believable, but clearly a $500 box.
How does that "clearly" align with a $499 box though? That looks more to me like a dev kit.
 

Pheonix

Member
Dec 14, 2018
1,227
St Kitts
- That would mean 2x more memory and 2x faster bandwidth than XBX.
- Assuming Navi has 64 CUs with the same FLOPS per CU as GCN, that would mean almost exactly 12 TFLOPS (~12.08), again 2x the XBX GPU.
So an 8C/16T CPU, a 64CU GPU at ~1500MHz, 24GB of GDDR6 at around $7/GB...

Barring the ridiculous amount of storage, and it being NVMe, that doesn't sound like a $500 box to me. Sounds more like a $450-$480-to-make box that will no doubt get subsidized and sold for something like $399.

And I say this because I believe the $500 box will in truth be a $550-$580-to-make box that gets subsidized and sold for $499.

I don't think either of them is making a box that they will be selling at a profit.
 
Oct 27, 2017
2,552
Florida
- That would mean 2x more memory and 2x faster bandwidth than XBX.
- Assuming Navi has 64 CUs with the same FLOPS per CU as GCN, that would mean almost exactly 12 TFLOPS (~12.08), again 2x the XBX GPU.
I'd be very happy with that. Double the X in GPU power (12TF), plus architectural enhancements that improve rendering efficiency, paired with a modern Zen 2 CPU, 24GB of GDDR6 and an NVMe SSD. That is rock fucking solid all around.
 