
When will the first 'next gen' console be revealed?

  • First half of 2019 -- 593 votes (15.6%)
  • Second half of 2019 (let's say post E3) -- 1,361 votes (35.9%)
  • First half of 2020 -- 1,675 votes (44.2%)
  • 2021 :^) -- 161 votes (4.2%)

Total voters: 3,790
Poll closed.
Status: Not open for further replies.

anexanhume

Member
Oct 25, 2017
12,912
Maryland
Navi won't outperform a Radeon 7.
Which one? Navi 10? It will be damn close on performance if it doesn't beat it outright, especially on perf/Watt given the target TDP.

Notice how the Polaris parts have the same power efficiency as Vega 64? That tells you Vega's performance is actually quite a bit better than Polaris's, and if you ramped it down to similar power levels, it should beat Polaris more handily on perf/Watt. That's the exact territory Navi will live in, except with more arch benefits and the 7nm jump. You can already see the 7nm jump in Vega 7, but that's again pegged to the high-end TDP, which obscures the possible perf/Watt gains.

I keep hearing people say it'll be as powerful as an RTX 2070/GTX 1080, but in reality, for $400, it's a 1660 Ti at best, though that is a better performer than a GTX 1070 at lower wattage.

Nope. That's for $250 according to Jim (AdoredTV).
 

dgrdsv

Member
Oct 25, 2017
11,821
The Navi GPUs in PS5 and Anaconda are going to be 7nm. They should be able to double the CUs going from 16nm (mid-gen consoles) to 7nm while increasing the clock speeds, just as they were able to do when going from the 28nm launch consoles to the 16nm mid-gen refreshes.
So A) "should be able to double the CUs" in a purely physical sense of being capable of putting about twice as many transistors in a chip of the same size on 7nm as it was possible on 16nm -- sure. This doesn't say anything about the cost of said chip though which might be quite a bit higher than that of the chip on 16nm of the same size and costs are way more important than process capabilities - as you can probably guess from the fact that current gen consoles all have APUs with 200-400mm^2 die sizes while the same 16nm process (or family of) they are using allows building something like a 815mm^2 sized Nvidia GV100 GPU.

B) The capability of doubling the number of transistors wrt GCN GPUs is largely irrelevant at this point as GCN is unable to make use of these additional transistors due to hitting the power ceiling well before reaching the maximum die size on both 14nm (Vega 10 is consuming 300W at 482mm^2) and 7nm (Vega 20 is consuming 300W at 331mm^2). For them to make any use of what 7nm process provides in GPU complexity increases they have to make *huge* gains in power efficiency first.

C) 28nm to 16/14nm was a move from planar to FinFET transistors, and this was the main reason for most designs gaining clocks on 16/14 compared to same(ish) designs on 28nm. This won't occur again with the transition to 7nm and thus nobody should expect significant clock improvements on 7nm parts when compared to 16nm predecessors - unless said parts will be specifically re-engineered to reach significantly higher clocks of course.
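To put rough numbers on point B, here's a quick back-of-envelope sketch using the die sizes and board powers quoted above (my own arithmetic, nothing official):

```python
# Rough power density from the figures quoted in point B
# (power and die-size numbers are the ones cited in the post).
parts = {
    "Vega 10 (14nm)": {"power_w": 300, "die_mm2": 482},
    "Vega 20 (7nm)":  {"power_w": 300, "die_mm2": 331},
}

for name, p in parts.items():
    density = p["power_w"] / p["die_mm2"]  # watts per mm^2 of die area
    print(f"{name}: {density:.2f} W/mm^2")

# Prints ~0.62 W/mm^2 for Vega 10 and ~0.91 W/mm^2 for Vega 20: the same
# 300W squeezed into a much smaller die. That's the power-ceiling argument --
# doubling transistors on 7nm doesn't help unless perf/Watt improves first.
```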

It just has to be a GPU built from the ground up for 7nm as opposed to one that was simply ported over to 7nm.
I don't even know if I want to ask you what the difference is and what "ported over to 7nm" means...
 

Miasma

Banned
Oct 31, 2017
160
PS4 was also a $399 machine. Looks like Sony might be going with a $499 beast.

Google Stadia servers are 10.7 TFLOPS. Jason Schreier said Sony and MS are aiming higher than that, so 12 TFLOPS is pretty much a certainty. If they add a nice cooling system and go to $499, we can do 14 TFLOPS on 7nm with no issues.

RTX 2080 Ti is an overpriced monstrosity. Expect 1080 Ti levels of performance from next gen consoles.


1080 Ti is 11.3 TFLOPS though?
People here are usually referring to next-gen GPU performance in relation to AMD's cards, not Nvidia's. So expectations are between Vega 56 and 64 performance, not 2080 Tis.


A lot of people in this thread don't seem to think that though. However, I do agree with what you've said regarding Vega 56/64 performance; I'd put my money on the graphical component of the APU being something with the power of a Vega 64, but with the Navi architecture.
 

VX1

Member
Oct 28, 2017
7,000
Europe
28nm to 16/14nm was a move from planar to FinFET transistors, and this was the main reason for most designs gaining clocks on 16/14 compared to same(ish) designs on 28nm. This won't occur again with the transition to 7nm and thus nobody should expect significant clock improvements on 7nm parts when compared to 16nm predecessors - unless said parts will be specifically re-engineered to reach significantly higher clocks of course

Oh really? I didn't know this...interesting.
 

anexanhume

Member
Oct 25, 2017
12,912
Maryland
Oh really? I didn't know this...interesting.
That remains to be seen. We have a lot of reason to believe clocks won't move, according to statements from both ARM and AMD, and we know it has a lot to do with interconnect resistivity. However, the leaked clock speeds of Ryzen 3000 show a hefty bump to base clocks.

You could also get to higher clocks via architecture changes. Nvidia GPUs can often boost above 2GHz.
 

BradGrenz

Banned
Oct 27, 2017
1,507
Yeah, there is no reason to assume Navi lacks architectural optimizations aimed at achieving higher clock speeds. I think it's highly likely it has them.
 

anexanhume

Member
Oct 25, 2017
12,912
Maryland
Yeah, there is no reason to assume Navi lacks architectural optimizations aimed at achieving higher clock speeds. I think it's highly likely it has them.
They need to prioritize perf/Watt. It likely follows that clocks can be higher due to lower power density, but it's not necessarily guaranteed. Apple's early A series processors were very low in frequency but perf/Watt leaders, for example. Now they clock similarly to others in the industry.
 

Papacheeks

Banned
Oct 27, 2017
5,620
Watertown, NY
Navi is an architecture and Radeon 7 is a product. Your statement is confused.

No it is not.

Navi is still based on GCN, and from what the sources I TRUST say, it's geared much more toward efficiency than raw power. Radeon 7 is based on the Instinct cards, using the Vega chip as its basis -- basically Instinct chips that more than likely didn't meet certain criteria. Navi will be a line of GPUs, and is just the code name for that line, just like Polaris was for the RX 500 line.

It's going to have variations and be much more efficient than Vega, with a new memory controller. I expect something better than the RX Vegas, possibly getting into 2080 territory performance-wise for certain tasks, but not competing 1:1 on performance. There's just no way, given what they have R&D-wise compared to NVIDIA.

I expect a high-end version, but from what AMD has been open about, they consider the Radeon 7 to be their high end for now, with the Navi line filling the gap, replacing the RX 500 to RX Vega lineup at much better power/performance and price.

Even with anexanhume being much more knowledgeable than I am at this stuff, it's still just speculation. But from what we know about Radeon being short-handed, and having these roadmaps already in play, it's hard to believe they will be able to compete with a 2080 Ti so quickly with a more efficient GCN arch that, chip-wise, is an evolution of Polaris.

I mean, if it is more powerful than a Radeon 7 with better performance/Watt, then gravy; it's just hard to swallow knowing what resources the Radeon group has had available, and knowing this was all part of a roadmap from way back when Raja was with the company.
 

Pheonix

Banned
Dec 14, 2018
5,990
St Kitts
So A) "should be able to double the CUs" in a purely physical sense of being capable of putting about twice as many transistors in a chip of the same size on 7nm as it was possible on 16nm -- sure. This doesn't say anything about the cost of said chip though which might be quite a bit higher than that of the chip on 16nm of the same size and costs are way more important than process capabilities - as you can probably guess from the fact that current gen consoles all have APUs with 200-400mm^2 die sizes while the same 16nm process (or family of) they are using allows building something like a 815mm^2 sized Nvidia GV100 GPU.

B) The capability of doubling the number of transistors wrt GCN GPUs is largely irrelevant at this point as GCN is unable to make use of these additional transistors due to hitting the power ceiling well before reaching the maximum die size on both 14nm (Vega 10 is consuming 300W at 482mm^2) and 7nm (Vega 20 is consuming 300W at 331mm^2). For them to make any use of what 7nm process provides in GPU complexity increases they have to make *huge* gains in power efficiency first.



I don't even know if I want to ask you what the difference is and what "ported over to 7nm" means...
1) It's been said that 7nm, being more complex, will cost about 10-20% more than 14nm for the same-sized die. Can't remember the details or where I saw that.

But regardless, everyone is shifting over to that node, and by everyone I mean mobile, which is the single largest customer of chips, period. Economies of scale, bla bla bla.

2) Built from the ground up for 7nm means just that. Navi was always intended to be a 7nm chip, and there will be things that being built on 7nm allows them to do -- certain things that are smaller, which lets them put more of other things on the chip, and so on. Navi is also a more focused GPU than Vega, supposedly cutting out most of the bloat that Vega had.

3) Oh, and the clock thing... Vega 64 on 14nm clocked at a peak of ~1500MHz. Vega 7 on 7nm clocks at a peak of ~1750MHz.
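For what it's worth, the clock bump those two numbers imply is fairly modest (my arithmetic, using the peak clocks above):

```python
# Peak clock gain from 14nm Vega 64 to 7nm Vega 7, per the figures above.
vega64_mhz = 1500
vega7_mhz = 1750

gain = vega7_mhz / vega64_mhz - 1
print(f"~{gain:.0%} higher peak clock on 7nm")  # ~17%
```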
 

BradGrenz

Banned
Oct 27, 2017
1,507
They need to prioritize perf/Watt. It likely follows that clocks can be higher due to lower power density, but it's not necessarily guaranteed. Apple's early A series processors were very low in frequency but perf/Watt leaders, for example. Now they clock similarly to others in the industry.

Optimizing for clock speeds is one way to achieve better perf/watt. nVidia chips run at higher clockspeeds with better perf/watt.
 

anexanhume

Member
Oct 25, 2017
12,912
Maryland
Optimizing for clock speeds is one way to achieve better perf/watt. nVidia chips run at higher clockspeeds with better perf/watt.
I don't know what optimizing for clock speeds means in this context. A good arch is going to look better at any frequency unless you're somehow designing a chip that is huge and has high leakage overhead. Even Turing manages to stay efficient with a huge die size. Perhaps AMD is not aggressive enough with DVFS, AVFS, clock and power gating.

You have to balance the deeper pipelining needed to achieve higher clock speeds against IPC so you're not constantly flushing the pipeline. Of course, that's a CPU paradigm, but the idea of wavefronts with NOPs applies here.

I think at the core they need to revamp their cell library (in addition to architecture), which the Zen engineers were hopefully able to assist with.
 

Deleted member 721

User-requested account closure
Banned
Oct 25, 2017
10,416
With the Kotaku article about Anthem, I think that Reddit leaker had a real source -- 2 leaks out of 2 right. It looks like they wanted to delay the game because it was a mess, but EA didn't want to change the deadline. Also, about the Frostbite engine: I think some of that stuff might be right, but he could be making some of it up.

what the leaker said:
"- "Anthem is a mess on This gen Consoles, Going to get delayed again"
Maybe they improved on some aspects that decided to cancel the delay, I'll confirm the Anthem situation with my source
Edit : Same answer as before it will get delayed, The thing is The Modified Frostbite engine on consoles isn't performing how they want to, The gameplay you see on live-streams are all modified demos
Edit edit : Ea wants it out before their earning report in March. That's all that matters not if the game is ready or not""



quoting the article
"They had publicly committed to a fall 2018 ship date, but that had never been realistic. Publisher EA also wouldn't let them delay the game any further than March 2019, the end of the company's fiscal year. They were entering production so late, it seemed like it might be impossible to ship anything by early 2019, let alone a game that could live up to BioWare's lofty standards."

"The explanation for this lag can be summed up in one word, a word that has plagued many of EA's studios for years now, most notably BioWare and the now-defunct Visceral Games, a word that can still evoke a mocking smile or sad grimace from anyone who's spent any time with it.
That word, of course, is Frostbite."
https://kotaku.com/how-biowares-anthem-went-wrong-1833731964
 
Oct 26, 2017
6,151
United Kingdom
No it is not.

Navi is still based on GCN, and from what the sources I TRUST say, it's geared much more toward efficiency than raw power. Radeon 7 is based on the Instinct cards, using the Vega chip as its basis -- basically Instinct chips that more than likely didn't meet certain criteria. Navi will be a line of GPUs, and is just the code name for that line, just like Polaris was for the RX 500 line.

It's going to have variations and be much more efficient than Vega, with a new memory controller. I expect something better than the RX Vegas, possibly getting into 2080 territory performance-wise for certain tasks, but not competing 1:1 on performance. There's just no way, given what they have R&D-wise compared to NVIDIA.

I expect a high-end version, but from what AMD has been open about, they consider the Radeon 7 to be their high end for now, with the Navi line filling the gap, replacing the RX 500 to RX Vega lineup at much better power/performance and price.

Even with anexanhume being much more knowledgeable than I am at this stuff, it's still just speculation. But from what we know about Radeon being short-handed, and having these roadmaps already in play, it's hard to believe they will be able to compete with a 2080 Ti so quickly with a more efficient GCN arch that, chip-wise, is an evolution of Polaris.

I mean, if it is more powerful than a Radeon 7 with better performance/Watt, then gravy; it's just hard to swallow knowing what resources the Radeon group has had available, and knowing this was all part of a roadmap from way back when Raja was with the company.

What are you talking about? Navi IS a GPU microarchitecture, regardless of whether it's based on GCN or not.

This whole diatribe is arguing against something I haven't even said. I've been discussing architectures, and regardless of faceless 'source' speculation, what we KNOW as fact is that Vega is less efficient and lower-performing as an architecture than it was originally intended to be.

A hypothetical Navi part of the same TDP, clock speed and die area as Radeon 7 will outperform Radeon 7. Why? Because it will be a much better resourced project within AMD and shouldn't release with most of its major features disabled or not working as intended. Even if all Navi ends up being is a Vega with working NGG and primitive shaders, and with the geometry performance to take advantage of DSBR, it will comfortably outperform Radeon 7.

Radeon 7 is a product based on the Vega architecture. Navi is an architecture that will be implemented in a number of products. There is nothing stopping AMD making a Navi part that will outperform all released Vega products. A mid-range Navi part with 72-80 active CUs modestly clocked will outperform Radeon 7, even if Navi ends up being Vega but fixed and with a front end to feed more than 64 CUs (which is all but guaranteed).

If you really think a GPU better than that is going to make it into a console the following year then, well, good luck with that.

Who said anything about consoles?

Reading comprehension fail.
 

BreakAtmo

Member
Nov 12, 2017
12,806
Australia
I wonder if a next-gen-only Destiny 3 would go for 60fps. A cross-gen or enhanced BC version, sure, but for a true next-gen-only game they might want to go bananas with something else instead.
 

TONXY

Member
Oct 28, 2017
16
Ft. Lauderdale
Honestly all it needs for me to be happy is true 4K at 60fps. Graphics have finally gotten to the point where we are splitting hairs when it comes to comparisons.
 

DrKeo

Banned
Mar 3, 2019
2,600
Israel
Honestly all it needs for me to be happy is true 4K at 60fps. Graphics have finally gotten to the point where we are splitting hairs when it comes to comparisons.
If next gen is around the 1080 Ti's performance, it could happen; a 1080 Ti runs Destiny 2 at 75fps in native 4K. Thing is, Bungie always went for graphics over resolution and performance so I'm not sure it will happen.
 

Thorrgal

Member
Oct 26, 2017
12,277
Yeah I'm back on the Fall 2020 train, there's no reason for Sony to fix what ain't broke and deviate from the strategy that helped them sell 100+ million consoles this gen

Agree. We may have to wait months, or even a year, until the PS5 is announced, probably around February/March of 2020.

I think it's time to check this thread twice a week instead of twice a day, at least until E3... we've really gone in circles for months now, and nothing of substance has come out of GDC.
 

BreakAtmo

Member
Nov 12, 2017
12,806
Australia
Considering the fact that I'm not expecting a huge visual update to the renderer in Destiny 3, I would think Zen CPUs can make a 60fps target a very real possibility.

I agree that a visual upgrade probably won't be too major, but what about if Bungie decides to, say, increase enemy numbers? Improve enemy AI? Have missions where huge ship battles are occurring in the sky that you can seamlessly interact with while fighting on the ground? Wouldn't these tax the CPU more? Or is it likely that the Zen 2 chip will be able to handle all this at 60fps anyway?
 

Kenzodielocke

Member
Oct 25, 2017
12,835
Did anyone listen to what Jeff said on the Bombcast? He said that the devs he talked to were frustrated because they don't know a whole lot about next gen/Stadia, and that affects their business/development stuff.

Interesting.
 
Oct 26, 2017
6,151
United Kingdom
Honestly all it needs for me to be happy is true 4K at 60fps. Graphics have finally gotten to the point where we are splitting hairs when it comes to comparisons.

I don't need true 4k. CBR from a lower internal rendering res to get a 4k output buffer is more than good enough for me.

I mostly agree on game graphics though. If every game boasts at the minimum the pixel quality of RDR in terms of visual fidelity and graphical effects, I'll be more than happy.

Played the game last night on base PS4... it is just stunning.

.... Thing is, Bungie always went for graphics over resolution and performance so I'm not sure it will happen.

Judging from all the graphics complaints about Destiny 2 at launch, I don't think I could argue the above with a straight face.

I agree that a visual upgrade probably won't be too major, but what about if Bungie decides to, say, increase enemy numbers? Improve enemy AI? Have missions where huge ship battles are occurring in the sky that you can seamlessly interact with while fighting on the ground? Wouldn't these tax the CPU more? Or is it likely that the Zen 2 chip will be able to handle all this at 60fps anyway?

I mean sure, but the decision on target framerate comes before that. If they wanna go 60fps, they'll design around that goal and rein in their gameplay design ambitions accordingly.

Equally, if their priority is 32-player PvPvE battles in a massive open world with shit flying everywhere, they'll focus on designing to that, probably running at 30fps but minimising input latency like they normally do.

Did anyone listen to what Jeff said on the Bombcast? He said that the devs he talked to were frustrated because they don't know a whole lot about next gen/Stadia, and that affects their business/development stuff.

Interesting.

I don't think it affects anything much yet if fall 2020 is the target launch date for both consoles. That's still over 18 months away.

Some major 3rd parties will have small teams who know much more than even other devs working at the same studio. So comments like Jeff's aren't really useful indicators, because they'll be skewed by a heavy amount of selection bias -- I highly doubt Jeff knows the industry devs currently working on next-gen hardware, and even if he did, they wouldn't talk to him precisely because of his position in the gaming press.

Contrary to popular internet belief, not all devs are over-excited fans itching to jeopardise their careers by leaking stuff online.
 

BreakAtmo

Member
Nov 12, 2017
12,806
Australia
I don't need true 4k. CBR from a lower internal rendering res to get a 4k output buffer is more than good enough for me.

I mostly agree on game graphics though. If every game boasts at the minimum the pixel quality of RDR in terms of visual fidelity and graphical effects, I'll be more than happy.

Played the game last night on base PS4... it is just stunning.

Agreed. Give me games with the pixel quality of RDR2 plus the resolution and framerate of DMC5 on the X.

I mean sure, but the decision on target framerate comes before that. If they wanna go 60fps, they'll design around that goal and rein in their gameplay design ambitions accordingly.

Equally, if their priority is 32-player PvPvE battles in a massive open world with shit flying everywhere, they'll focus on designing to that, probably running at 30fps but minimising input latency like they normally do.

So they never alter these things? I know it's not the same company, but I was thinking of how Naughty Dog initially promised 60fps for Uncharted 4 when they realised how good it looked in TLOU Remastered, but they just couldn't get it to work reliably and eventually had to accept (a really good and hard-locked) 30fps.
 

anexanhume

Member
Oct 25, 2017
12,912
Maryland
Did anyone listen to what Jeff said on the Bombcast? He said that the devs he talked to were frustrated because they don't know a whole lot about next gen/Stadia, and that affects their business/development stuff.

Interesting.
That aligns with Benji talking about forcing hands later this year.

_____________________________________

TSMC has fully validated their design tools for 5nm (PDK, transistor models, etc.) They are in risk production already. It remains an insane long shot, but if some SKU were to be 2021...

It has true meaningful gains over 7nm (non-EUV) like 1.8x density on mobile.

https://www.tsmc.com/tsmcdotcom/PRListingNewsAction.do?action=detail&newsid=THPGWQTHTH&language=E


___________________________

Apparently there was another Reddit "leak" for Anaconda dev kit (Dante)

DANTE ( XDK )

CPU: Custom AMD Zen 2 8C/16T @ 3.2GHz
GPU: Custom AMD Navi @ 1475MHz
MEMORY: 48GB GDDR6 @ 672GB/s
STORAGE: SSD 4TB NVMe @ 3GB/s

Specs are on the edge of believable, but clearly a $500 box.
 

Gamer17

Banned
Oct 30, 2017
9,399
Sony is part of the BDA, and they have the TV department and Sony Pictures, so they have an interest in the UHD format. With the exception of the PlayStation 4, they all introduced something new in this regard so the console could be sold as something other than just a videogame.
- PlayStation 1 was a CD player.
- PlayStation 2 was a DVD player.
- PlayStation 3 was the Blu-ray player.
That aligns with Benji talking about forcing hands later this year.

_____________________________________

TSMC has fully validated their design tools for 5nm (PDK, transistor models, etc.) They are in risk production already. It remains an insane long shot, but if some SKU were to be 2021...

It has true meaningful gains over 7nm (non-EUV) like 1.8x density on mobile.

https://www.tsmc.com/tsmcdotcom/PRListingNewsAction.do?action=detail&newsid=THPGWQTHTH&language=E


___________________________

Apparently there was another Reddit "leak" for Anaconda dev kit (Dante)



Specs are on the edge of believable, but clearly a $500 box.
Well then Anaconda probably has 24GB of GDDR6 memory, just like how the dev unit for the X had 24GB of GDDR5 and the production unit had 12GB.
 

Papacheeks

Banned
Oct 27, 2017
5,620
Watertown, NY
What are you talking about? Navi IS a GPU microarchitecture, regardless of whether it's based on GCN or not.

This whole diatribe is arguing against something I haven't even said. I've been discussing architectures, and regardless of faceless 'source' speculation, what we KNOW as fact is that Vega is less efficient and lower-performing as an architecture than it was originally intended to be.

A hypothetical Navi part of the same TDP, clock speed and die area as Radeon 7 will outperform Radeon 7. Why? Because it will be a much better resourced project within AMD and shouldn't release with most of its major features disabled or not working as intended. Even if all Navi ends up being is a Vega with working NGG and primitive shaders, and with the geometry performance to take advantage of DSBR, it will comfortably outperform Radeon 7.

Radeon 7 is a product based on the Vega architecture. Navi is an architecture that will be implemented in a number of products. There is nothing stopping AMD making a Navi part that will outperform all released Vega products. A mid-range Navi part with 72-80 active CUs modestly clocked will outperform Radeon 7, even if Navi ends up being Vega but fixed and with a front end to feed more than 64 CUs (which is all but guaranteed).



Who said anything about consoles?

Reading comprehension fail.

Where are you getting your info from? Everything out there from leaks, up to Navi 10, has shown they land performance-wise around a 1080/2070.

If there is a Navi 20, some recent leaks (which should be taken with a grain of salt) say it will beat a 2080 Ti, but I believe that is in specific workloads like certain rendering techniques.

If there is a card with 72 CUs, jesus, the price won't be cheap for a consumer card. You said I was confused. I am not; I know Navi is an architecture, but I also know that the consumer cards based on it which have somewhat leaked are lining up to be around a 1080/2070, with Navi 20, which will come later, possibly being their new render card like the Instinct, and they might have a desktop consumer version of that.

Someone said Navi would beat a Radeon 7; I was strictly talking about consumer video cards, not the entire arch, which has other chips based on it being used in, let's say, consoles or professional render cards.

More than likely they have something on the high end in the works.

But I highly doubt they will launch it this year, and mainstream GPUs will be what they focus on -- more like something to replace the RX 500 and Vega 56/64 lines.
 

Mitchman1411

Member
Jul 28, 2018
635
Oslo, Norway
It's a common misconception.

There is no such thing as a UHD drive. There are only Blu-ray drives.

The UHD format just requires a higher minimum Blu-ray disc read speed. But for movies that's like 32MB/s or so. Still significantly less than what we need for games, just with regards to moving data from a disc to internal storage, if you want to be able to get into your game in anything less than 5 minutes.

Capacity has been increased with UHD too, and the capacities with minimum read speeds are 50GB at 82 Mbit/s, 66GB at 108 Mbit/s, and 100GB at 128 Mbit/s. Not sure where your 32 MB/s comes from; that would be 256 Mbit/s, which is not mandated anywhere I could find.
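For illustration, here's what those minimum rates would mean for pulling a game install off a disc, assuming (as a simplification) they were the sustained read speeds:

```python
# Disc-to-storage copy time at the minimum read speeds quoted above.
speeds_mbit = {"50GB disc": 82, "66GB disc": 108, "100GB disc": 128}
install_gb = 50  # hypothetical install size, purely for illustration

for disc, mbit in speeds_mbit.items():
    mb_per_s = mbit / 8                        # Mbit/s -> MB/s
    minutes = install_gb * 1000 / mb_per_s / 60
    print(f"{disc}: {mb_per_s:.1f} MB/s -> ~{minutes:.0f} min for {install_gb}GB")

# Even the fastest rate (~16 MB/s) needs roughly 50 minutes to move 50GB,
# which is the point above about disc read speed vs. getting into your game
# in under 5 minutes.
```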
 

Andromeda

Member
Oct 27, 2017
4,841
Apparently there was another Reddit "leak" for Anaconda dev kit (Dante)

GPU: Custom AMD Navi @ 1475MHz
MEMORY: 48GB GDDR6 @ 672GB/s


Specs are on the edge of believable, but clearly a $500 box.

- That would mean 2x more memory and 2x higher bandwidth than the XBX.
- Assuming Navi has 64 CUs with the same 64 shaders per CU as GCN, that would mean almost exactly 12 TFLOPS (~12.08), again 2x more than the XBX GPU (see the quick arithmetic below).
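The arithmetic behind that figure, assuming Navi keeps GCN's 64 shaders per CU (which is speculation at this point):

```python
# Theoretical FP32 throughput = CUs x shaders per CU x 2 FLOPs (FMA) x clock.
cus = 64
shaders_per_cu = 64   # GCN layout, assumed to carry over to Navi
clock_ghz = 1.475     # 1475 MHz from the leaked Dante spec

tflops = cus * shaders_per_cu * 2 * clock_ghz / 1000
print(f"{tflops:.2f} TFLOPS")  # ~12.08, roughly 2x the Xbox One X's 6.0 TF
```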
 

DrKeo

Banned
Mar 3, 2019
2,600
Israel
Judging from all the graphics complaints about Destiny 2 at launch, I don't think I could argue the above with a straight face.
I said that they preferred better graphics over frame rate and resolution; I didn't say that their games were particularly good-looking. Bungie has always made bad decisions about allocating processing power.
 

msia2k75

Member
Nov 1, 2017
601
DANTE ( XDK )

CPU: Custom AMD Zen 2 8C/16T @ 3.2GHz
GPU: Custom AMD Navi @ 1475MHz
MEMORY: 48GB GDDR6 @ 672GB/s
STORAGE: SSD 4TB NVMe @ 3GB/s

The specs are believable considering the rumoured Anaconda 12TF...
1475MHz with 64 CUs would reach almost exactly 12TF.
Concerning the memory, the 48GB would be for the devkit... You would need to slash that by 2 for a consumer device, giving us 24GB of GDDR6. Doable.
The bandwidth would indicate a 384-bit memory bus clocked at 14Gbps. That's quite a bit.
Finally, the storage amount needs to be slashed by 4 to give us 1TB. Most likely, the speed would be lower too.
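A quick check on where those memory numbers land (my arithmetic; the bus width is inferred from the rumour, not confirmed):

```python
# Per-pin data rate implied by 672 GB/s on an assumed 384-bit bus.
bandwidth_gb_s = 672
bus_width_bits = 384
bytes_per_transfer = bus_width_bits // 8   # 48 bytes moved per transfer

gbps_per_pin = bandwidth_gb_s / bytes_per_transfer
print(f"{gbps_per_pin:.0f} Gbps per pin")  # 14 Gbps, a standard GDDR6 speed grade
```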
 

RevengeTaken

Banned
Aug 12, 2018
1,711
The specs are believable considering the rumoured Anaconda 12TF...
1475MHz with 64 CUs would reach almost exactly 12TF.
Concerning the memory, the 48GB would be for the devkit... You would need to slash that by 2 for a consumer device, giving us 24GB of GDDR6. Doable.
The bandwidth would indicate a 384-bit memory bus clocked at 14Gbps. That's quite a bit.
Finally, the storage amount needs to be slashed by 4 to give us 1TB. Most likely, the speed would be lower too.
I'm willing to bet the PS5 will be more powerful than Anaconda!
 