When will the first 'next gen' console be revealed?

  • First half of 2019: 593 votes (15.6%)
  • Second half of 2019 (let's say post-E3): 1,361 votes (35.9%)
  • First half of 2020: 1,675 votes (44.2%)
  • 2021 :^): 161 votes (4.2%)
  • Total voters: 3,790
  • Poll closed.
Status
Not open for further replies.

M3rcy

Member
Oct 27, 2017
702
Yes, OK.
But Radeon 7 has 60 CUs, so it won't be the same as the MI60.

So it's 60 × 64 × 2 = 7680
7680 × 1800 = 13,824,000 ÷ 1,000,000
= 13.24 TFLOPS?

I didn't say it was the same, but the MI60 page lists all of the specs for that part, so you can see that the method I was using to calculate TFLOPS is valid: plug that part's values into the formula and you get AMD's official figure. If you went to any other AMD GPU product page and plugged the appropriate values in, you would also get the correct result. CUs × 128 (combining 64 SPs per CU and 2 FLOPs per clock per SP) × clock in MHz ÷ 1,000,000 will always give you the correct TFLOPS for an AMD GCN GPU.

And you have a typo. 13.82, not 13.24. You skipped the "8" in 13"8"24000/1M.
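The formula above can be sanity-checked in a few lines (a sketch; the 1800 MHz values are the peak boost clocks quoted for these parts, and "GCN" here means any 64-SP-per-CU part):

```python
def gcn_tflops(cus: int, clock_mhz: float) -> float:
    """Single-precision TFLOPS for a GCN GPU.

    Each CU has 64 stream processors, and each SP does 2 FLOPs
    per clock (fused multiply-add), hence the factor of 128.
    """
    return cus * 128 * clock_mhz / 1_000_000

# MI60: 64 CUs at an 1800 MHz peak clock
print(round(gcn_tflops(64, 1800), 2))   # 14.75
# Radeon VII: 60 CUs at ~1800 MHz
print(round(gcn_tflops(60, 1800), 2))   # 13.82
```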
 
Last edited:
Feb 10, 2018
17,534
I didn't say it was the same, but the MI60 page lists all of the specs for that part, so you can see that the method I was using to calculate TFLOPS is valid: plug that part's values into the formula and you get AMD's official figure.

And you have a typo. 13.82, not 13.24. You skipped the "8" in 13"8"24000/1M.

Yes. Thanks.
 

Locuza

Member
Mar 6, 2018
380
Double the ROPs, and over double the memory bandwidth. Both were suggested as holding Vega back on Vega 64. With over 1TB/s, it can read its entire memory about 60 times a second (credit Tim Sweeney for that observation).
I have serious doubts about the 128 ROPs claim from Anandtech.
The diagrams from AMD indicate only 64 ROPs for Vega 20, and it makes little to no sense to include 128 ROPs if your pixel frontend can only deliver 64 pixels per cycle.
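That argument can be illustrated with a toy fillrate calculation (the 1800 MHz clock is just an assumed round number, and this ignores bandwidth and every other limit):

```python
def pixel_fillrate_gpix(rops: int, frontend_pix_per_clk: int, clock_mhz: float) -> float:
    """Effective pixel fillrate in Gpixels/s: the ROPs can only retire
    what the pixel frontend delivers each cycle, so the smaller of the
    two numbers is the real limit."""
    return min(rops, frontend_pix_per_clk) * clock_mhz / 1000

# With a 64-pixel/clock frontend at 1800 MHz,
# 64 ROPs and 128 ROPs give the same effective fillrate:
print(pixel_fillrate_gpix(64, 64, 1800))    # 115.2
print(pixel_fillrate_gpix(128, 64, 1800))   # 115.2
```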
 

gofreak

Member
Oct 26, 2017
7,734
Something of a side note wrt hardware production and costs, but worth keeping an eye on this situation...

https://asia.nikkei.com/Business/Companies/Sony-to-reconsider-China-production-if-tariffs-rise

A Sony executive warned Tuesday that the Japanese technology group could be forced to move some production out of China if the U.S. raises tariffs against the trading partner.

If tariffs rise to 25% from 10% as threatened, "of course we will consider manufacturing in other countries and regions," Senior Executive Vice President Ichiro Takagi told reporters at the CES consumer electronics expo here.

On the mainland, Sony makes video cameras, among other products, that could be exposed to Sino-U.S. frictions. Takagi said the company is "closely monitoring events" and is poised to avoid risks, as any corporation would be.
 

Intersect

Banned
Nov 5, 2017
451
Radeon 7 with 4 stacks of HBM2, 16GB at 1TB/s bandwidth. That's driving a lot of the cost there.

Also note the pricing is very strategic. They mentioned 2080 as the competitor in their performance graphs, and that's exactly where they priced it. This is also 60 CUs instead of 64. Very deliberate.

The 25% performance boost for the same power is also exactly what they've been saying all along.

Also interesting that 3rd-gen Ryzen is an 8C/16T chiplet with an IO die. Next-gen consoles could use off-the-shelf CPUs with a custom GPU and memory controller integrated inside.
I agree, the timing would support chiplets if the consoles were waiting on AMD 7nm chiplets. I keep hearing AMD is targeting Navi at the mid-range PC market, which is also the console target, if only for TDP (power). Lest anyone get concerned, mid-range has been steadily creeping up in performance, which was one reason for the PS4 Pro release.

So we see AMD releasing small low-power APUs at 12nm but no 7nm until Q2 2019. Given that tapeout/design/optimization at 7nm is expensive, will they use chiplet designs to increase volume and amortize cost, or, as in the past, skip higher-power APUs in favor of discrete CPUs and GPUs? Larger, higher-performance products should also be at 7nm. Larger APUs may require chiplet designs for yield reasons, but where is the break-even point?

Console volume eliminates most of the above concerns except for yield, so at what size would AMD and Sony go with chiplets? Is having the CPU and IO die available off the shelf a deciding factor?
 

gofreak

Member
Oct 26, 2017
7,734
Some more (very) detailed patent applications for the 'Knuckles' style controller Sony filed for have appeared. There are a few different applications, filed at various points going back to the end of 2016.

http://www.freepatentsonline.com/20190009172.pdf

http://www.freepatentsonline.com/EP3425481A1.pdf < one with some more software related stuff added


They're basically very similar to the Knuckles controllers in functionality. The first application describes:

  • An analog stick with an individual vibration motor
  • A trigger, with:
    - a light sensor to detect the position of the index finger
    - an individual vibration motor
    - a 'reaction force' generator that can generate a force opposing the depression of the trigger
  • A pressure sensor on the grip to detect pressure exerted on the body by the middle finger, with a light sensor to detect the middle finger's position
  • Two light sensors on the lower part of the grip to detect the position of the ring finger and little finger
  • A light sensor on the front portion to detect thumb position
  • The light sensors are infrared, made up of infrared emitters and detectors

Everything is designed to be left/right symmetrical, to work with left or right hands. The band and battery are swappable to either side.

Of course as always, if this was commercialised it wouldn't necessarily include everything mentioned here, or may include other things not mentioned here... one wonders about cost also.
 
Last edited:

klik

Banned
Apr 4, 2018
873
Any possibility Sony will use 2 AMD chiplets for the CPU in the PS5?
For example, 2×8 cores in one package, 16 real cores in total?
 
Oct 26, 2017
6,151
United Kingdom
I agree, the timing would support chiplets if the consoles were waiting on AMD 7nm chiplets. I keep hearing AMD is targeting Navi at the mid-range PC market, which is also the console target, if only for TDP (power). Lest anyone get concerned, mid-range has been steadily creeping up in performance, which was one reason for the PS4 Pro release.

So we see AMD releasing small low-power APUs at 12nm but no 7nm until Q2 2019. Given that tapeout/design/optimization at 7nm is expensive, will they use chiplet designs to increase volume and amortize cost, or, as in the past, skip higher-power APUs in favor of discrete CPUs and GPUs? Larger, higher-performance products should also be at 7nm. Larger APUs may require chiplet designs for yield reasons, but where is the break-even point?

Console volume eliminates most of the above concerns except for yield, so at what size would AMD and Sony go with chiplets? Is having the CPU and IO die available off the shelf a deciding factor?

Standardized CPU chiplets' benefits are a good reason; however, I also wonder whether a separate IO die could provide a design that all but eliminates the issues with bus contention between CPU and GPU, and could be re-used across both Sony's and MS's next-gen console "APUs", and perhaps Apple MacBooks too?

Could it also open up some interesting opportunities for the inclusion of custom fixed-function accelerators and co-processors on additional chiplets, and/or even ESRAM/eDRAM low-latency scratchpad chiplets within the same package...?
 

Rikimaru

Member
Nov 2, 2017
851
I wonder if they could make a 5 chiplet design: CPU, CPU-Logic and DRAM controller, GPU and 2 HBM stacks :D
 

anexanhume

Member
Oct 25, 2017
12,913
Maryland
I have serious doubts about the 128 ROPs claim from Anandtech.
The diagrams from AMD indicate only 64 ROPs for Vega 20, and it makes little to no sense to include 128 ROPs if your pixel frontend can only deliver 64 pixels per cycle.

TechPowerUp and Ars Technica make the same claims. Maybe they all got it from Anandtech.

https://arstechnica.com/gadgets/2019/01/amd-announces-the-699-radeon-vii-7nm-vega-coming-february/

I wonder if they could make a 5 chiplet design: CPU, CPU-Logic and DRAM controller, GPU and 2 HBM stacks :D

GPU and memory controller will be on one die. It doesn't make sense to try and push 500GB/s over IF links just to save a little die area when you'll pay for it in power.
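The power argument can be put in rough numbers: energy per bit times bit rate gives link power. The ~2 pJ/bit figure below is an assumption for an on-package link, not a published Infinity Fabric number:

```python
def link_power_watts(bandwidth_gb_s: float, pj_per_bit: float) -> float:
    """Power drawn by an interconnect moving `bandwidth_gb_s` gigabytes
    per second at an energy cost of `pj_per_bit` picojoules per bit."""
    bits_per_second = bandwidth_gb_s * 1e9 * 8
    return bits_per_second * pj_per_bit * 1e-12

# 500 GB/s over an on-package link at an assumed ~2 pJ/bit:
print(link_power_watts(500, 2.0))   # 8.0
```

Several watts of the console's power budget just to move framebuffer traffic between chiplets is the kind of cost the post is pointing at.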
 
Last edited:

msia2k75

Member
Nov 1, 2017
601
So an estimated 300W TDP for Radeon 7, an estimated 13.8 TFLOPS, and it competes with a 2080 with no RTX. What a fucking joke. AMD Vega is a very disappointing architecture.

Best pray the Navi architecture improves perf per watt; otherwise I'm still hoping for 10 TFLOPS minimum for PS5, anything more will be icing on the cake.

I agree, we have to hope Navi makes a huge leap in this regard or else don't expect much more than 10TF, and even then it could be a tall order...
 
Oct 27, 2017
7,136
Somewhere South
Could it also open up some interesting opportunities for the inclusion of custom fixed function accelerators and co-processors on additonal chiplets, and/or even ESRAM/eDRAM low latency scratchpad chiplets within the same package...?

Dat Cell co-processor :D But jokes aside, chiplet design opens up a lot of interesting possibilities, indeed, especially because it modularizes several aspects of the hardware - for instance, instead of trying to pack low-precision, high-throughput computation in the GPU, you can design a chiplet just for that, provided the interconnect gives you enough piping to work with.

You'll probably be duplicating stuff and likely wasting more energy than necessary, and those are completely valid worries when designing stuff for a console, but it's pretty interesting regardless.
 

klik

Banned
Apr 4, 2018
873
I really hope Navi is gonna be as big an upgrade for GPUs as Ryzen was for CPUs.

We have the Radeon VII, which is basically a Vega 64 with performance between the GTX 1080 and 1080 Ti, almost 3-YEAR-OLD cards, and it costs $699 and delivers the same power consumption. And best of all, that's even on a 7nm process node.
 

anexanhume

Member
Oct 25, 2017
12,913
Maryland
I agree, we have to hope Navi makes a huge leap in this regard or else don't expect much more than 10TF, and even then it could be a tall order...
We didn't learn anything yesterday that we didn't already know about 7nm Vega except some game benchmarks and the possibility it has 128 ROPs.

We need to see 7nm targeted at 150W to get an idea of how Navi will perform.
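For context, a crude perf-per-watt comparison; the TFLOPS and board-power numbers below are approximate public figures, not official efficiency claims:

```python
def tflops_per_watt(tflops: float, board_power_w: float) -> float:
    """Crude efficiency metric: peak FP32 throughput per watt of board power."""
    return tflops / board_power_w

# Vega 64: ~12.7 TFLOPS at ~295 W; Radeon VII: ~13.8 TFLOPS at ~300 W
print(round(tflops_per_watt(12.7, 295), 3))  # 0.043
print(round(tflops_per_watt(13.8, 300), 3))  # 0.046
```

At the same ~300 W board power, 7nm Vega barely moves the needle, which is why a 150W part would be more informative.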

 

VX1

Member
Oct 28, 2017
7,000
Europe
I really hope Navi is gonna be as big an upgrade for GPUs as Ryzen was for CPUs.

We have the Radeon VII, which is basically a Vega 64 with performance between the GTX 1080 and 1080 Ti, almost 3-YEAR-OLD cards, and it costs $699 and delivers the same power consumption. And best of all, that's even on a 7nm process node.

It is obvious now that AMD put most of its limited resources into the CPU division and Zen. After the Bulldozer fiasco they finally have a very good, competitive CPU again. By all accounts 7nm Zen 2 will be a great CPU and the biggest improvement in next-gen consoles. That is great news for us after 6 years with the crappy Jaguar.

On the other side, people really shouldn't expect much from Navi. AMD simply doesn't have the resources to compete with Nvidia. Polaris was meh, Vega a big power-hungry disappointment, and I can only hope 7nm Navi is power-efficient and competitive enough to give us ~10TF in a console APU at a good ($399) price.
 

Mister X

Banned
Dec 5, 2017
2,081
It is obvious now that AMD put most of its limited resources into the CPU division and Zen. After the Bulldozer fiasco they finally have a very good, competitive CPU again. By all accounts 7nm Zen 2 will be a great CPU and the biggest improvement in next-gen consoles. That is great news for us after 6 years with the crappy Jaguar.

On the other side, people really shouldn't expect much from Navi. AMD simply doesn't have the resources to compete with Nvidia. Polaris was meh, Vega a big power-hungry disappointment, and I can only hope 7nm Navi is power-efficient and competitive enough to give us ~10TF in a console APU at a good ($399) price.
Were you impressed with what AMD showed this CES?
 
Feb 10, 2018
17,534
It is obvious now that AMD put most of its limited resources into the CPU division and Zen. After the Bulldozer fiasco they finally have a very good, competitive CPU again. By all accounts 7nm Zen 2 will be a great CPU and the biggest improvement in next-gen consoles. That is great news for us after 6 years with the crappy Jaguar.

On the other side, people really shouldn't expect much from Navi. AMD simply doesn't have the resources to compete with Nvidia. Polaris was meh, Vega a big power-hungry disappointment, and I can only hope 7nm Navi is power-efficient and competitive enough to give us ~10TF in a console APU at a good ($399) price.

I agree.
At $399 I would expect Vega 64-like performance, a little better because of faster memory.
 

anexanhume

Member
Oct 25, 2017
12,913
Maryland
It is obvious now that AMD put most of its limited resources into the CPU division and Zen. After the Bulldozer fiasco they finally have a very good, competitive CPU again. By all accounts 7nm Zen 2 will be a great CPU and the biggest improvement in next-gen consoles. That is great news for us after 6 years with the crappy Jaguar.

On the other side, people really shouldn't expect much from Navi. AMD simply doesn't have the resources to compete with Nvidia. Polaris was meh, Vega a big power-hungry disappointment, and I can only hope 7nm Navi is power-efficient and competitive enough to give us ~10TF in a console APU at a good ($399) price.
AMD deployed Zen engineers to the RTG team to apply their efficiency learnings to graphics. Navi would be the first chance for that to show up.
 

KOHIPEET

Member
Oct 29, 2017
1,416
Which one would make more sense for gaming: an 8-core CPU at somewhat lower clocks, or a 6-core CPU clocked higher? (Also, since these CPUs are increasingly becoming overkill for OS-related workloads, wouldn't it be a better option to include, for example, a 2-4 core low-power mobile CPU to handle everything OS-related?)
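One crude way to frame that trade-off (assuming, unrealistically, perfect scaling across cores; real games have serial portions that favor the higher clock, and the 3.0/3.5 GHz figures are made-up examples):

```python
def relative_throughput(cores: int, clock_ghz: float) -> float:
    """Idealised aggregate throughput (cores x clock), assuming the
    workload scales perfectly across all cores -- a big assumption
    for game code."""
    return cores * clock_ghz

eight_core = relative_throughput(8, 3.0)  # 24.0
six_core = relative_throughput(6, 3.5)    # 21.0
print(eight_core, six_core)
```

Under that naive model the 8-core wins even at lower clocks, but the less parallel the engine, the more the comparison tilts toward the faster 6-core.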
 

Papacheeks

Banned
Oct 27, 2017
5,620
Watertown, NY
So an estimated 300W TDP for Radeon 7, an estimated 13.8 TFLOPS, and it competes with a 2080 with no RTX. What a fucking joke. AMD Vega is a very disappointing architecture.

Best pray the Navi architecture improves perf per watt; otherwise I'm still hoping for 10 TFLOPS minimum for PS5, anything more will be icing on the cake.

But why pay for RTX when nothing meaningful has used it? Ray tracing isn't something brand new, but because there is hardware designed specifically for it in Nvidia's cards, everyone thinks it's practical, when so far the games in production don't use it outside of shiny sub-surface reflections that don't look natural. All Nvidia even showed at CES for the RTX 2060 was a demo of a dude in armor being reflective. Not impressive.

Maybe when compared to how it is used in pre-rendered scenes in tools like Maya. There are tons of games out there in development, and so far not one has made a big case for ray tracing. Even when the next iteration of RTX comes out there won't be a lot of devs using it. On top of that, consoles won't even have it?

So I ask, what do you currently benefit from it? And if you looked into the Radeon 7, you would see that its double-precision capability, which is a big deal, is something you normally see in a $2,000-$4,000 card.
 

ShaDowDaNca

Member
Nov 1, 2017
1,645

This is "2-4 gigs o ram max" all over again.

Next gen you will get 24 gigarams of the finest sort, AT LEAST, at minimum, at either $399 or $499.

My specs, which for the most part haven't changed, remain the same:
32 gigarams
14 jigaflops
3.2 gigahertzing 16 Ryzen cores
1TB SSD, actually the only new addition, SSD prices in the last few months collapsed to the point you can get a 1TB drive for almost 2 digit figures. It will become the standard next gen.
I can see a weird RAM amount like 20 or 24GB.
SSD isn't happening, at least in the base hardware.
I say 12 TF for the PS5 and 14 for the XB V X2.
 

VX1

Member
Oct 28, 2017
7,000
Europe
Were you impressed with what AMD showed this CES?

Impressed with what...? No word about Navi was telling.
They obviously have problems with GPUs. There was some rumor on Twitter yesterday that Navi was delayed because of some problem, I forget what exactly. Also, maybe Navi couldn't deliver what Sony expected for next gen, I dunno. We'll find out later this year.

In general, if you have high expectations of an AMD GPU, prepare to be disappointed.
 

Carn

Member
Oct 27, 2017
11,911
The Netherlands
But why pay for RTX when nothing meaningful has used it? Ray tracing isn't something brand new, but because there is hardware designed specifically for it in Nvidia's cards, everyone thinks it's practical, when so far the games in production don't use it outside of shiny sub-surface reflections that don't look natural. All Nvidia even showed at CES for the RTX 2060 was a demo of a dude in armor being reflective. Not impressive.

Maybe when compared to how it is used in pre-rendered scenes in tools like Maya. There are tons of games out there in development, and so far not one has made a big case for ray tracing. Even when the next iteration of RTX comes out there won't be a lot of devs using it. On top of that, consoles won't even have it?

So I ask, what do you currently benefit from it? And if you looked into the Radeon 7, you would see that its double-precision capability, which is a big deal, is something you normally see in a $2,000-$4,000 card.

Adopting new technology always faces this question; the fact that it might not have many use cases at the moment is no argument to ignore it. The current gen was also designed for GPGPU stuff that wasn't that "popular" before, but has seen some nice implementations now (also because the current console CPUs are relatively weak). Also, stuff like VRS (which is an Nvidia Turing feature at the moment) is very helpful for foveated VR, for example.
 

Papacheeks

Banned
Oct 27, 2017
5,620
Watertown, NY
Ray tracing will be in the next Xbox, I have no doubts about it.

How, if they are going with AMD? Unless they are going with Nvidia for the GPU? Which is highly unlikely. Navi would need to have something similar to the tensor-core structure in its architecture.

And from what we've heard, nothing suggests that for Navi, which is on a new node, but super efficient.
 

Intersect

Banned
Nov 5, 2017
451
TechPowerUp and Ars Technica make the same claims. Maybe they all got it from Anandtech.

https://arstechnica.com/gadgets/2019/01/amd-announces-the-699-radeon-vii-7nm-vega-coming-february/



GPU and memory controller will be on one die. It doesn't make sense to try and push 500GB/s over IF links just to save a little die area when you'll pay for it in power.
IF can use any bus, including the memory bus. CPUs generally need less bandwidth, just minimal latency; GPUs require a high-bandwidth bus. In CPU chiplet designs the IO die contains the memory controller, and each CPU chiplet connects to the IO chiplet.


Maybe; remember it's likely to support AMD's VM, which means the GDDR6 memory controller will have an ARM TrustZone processor, an AES-128 accelerator, flash, and separate DDR memory connected to it, in addition to connections to the GDDR6 memory bus, GPU, and CPU. I'd guess the IO chiplet has all that in it, plus some other type of secondary memory bus/move/cache controller like AMD's HBCC in the GPU. GPU chiplets should NOT have the video buffer or IO in them if they are designed to have two or more GPU chiplets on an MCM.

In any case, we don't know enough about AMD's VM or HBCC to even make an educated guess. A very unreliable guess would have a very small dedicated RAM pool in the IO die for the trusted boot, then a minimum of 4 GB of external DDR4 in addition to 8 GB of GDDR6. In design it's going to be closer to a PC, with the HBCC eliminating most of the bottlenecks that made the APU so much more efficient than a PC. As much expensive VRAM (GDDR6) is not needed with an HBCC and 4+ GB of DDR4.

The HBCC manages prefetching and caching from DDR4 to GDDR6.
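Purely as an illustration of that idea (a toy model, not AMD's actual HBCC design), a two-tier memory setup can be sketched as a small fast pool caching pages from a larger slow one:

```python
from collections import OrderedDict

class ToyHBCC:
    """Toy model of a high-bandwidth cache controller: a small fast pool
    (think GDDR6) caching pages from a larger slow pool (think DDR4),
    with LRU eviction. Purely illustrative."""

    def __init__(self, fast_capacity_pages: int):
        self.capacity = fast_capacity_pages
        self.fast = OrderedDict()   # page -> True, ordered by recency
        self.misses = 0

    def access(self, page: int) -> str:
        if page in self.fast:
            self.fast.move_to_end(page)     # refresh recency on a hit
            return "hit"
        self.misses += 1
        if len(self.fast) >= self.capacity:
            self.fast.popitem(last=False)   # evict least recently used
        self.fast[page] = True              # migrate page from slow pool
        return "miss"

hbcc = ToyHBCC(fast_capacity_pages=2)
print([hbcc.access(p) for p in [1, 2, 1, 3, 2]])  # ['miss', 'miss', 'hit', 'miss', 'miss']
```

The real controller would also prefetch and work at hardware page granularity; the point is just that the fast pool only needs to hold the working set, not the whole address space.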
 
Last edited:

Zedelima

▲ Legend ▲
Member
Oct 25, 2017
7,716
I dunno, but I kinda expect a heavily modified RX 580, and not a new Navi.
And a Zen 2 CPU. Or is this already debunked?
 

anexanhume

Member
Oct 25, 2017
12,913
Maryland
IF can use any bus, including the memory bus. CPUs generally need less bandwidth, just minimal latency; GPUs require a high-bandwidth bus. In CPU chiplet designs the IO die contains the memory controller, and each CPU chiplet connects to the IO chiplet.


Maybe; remember it's likely to support AMD's VM, which means the GDDR6 memory controller will have an ARM TrustZone processor, an AES-128 accelerator, flash, and separate DDR memory connected to it, in addition to connections to the GDDR6 memory bus, GPU, and CPU. I'd guess the IO chiplet has all that in it, plus some other type of secondary memory bus/move/cache controller like AMD's HBCC in the GPU. GPU chiplets should NOT have the video buffer or IO in them if they are designed to have two or more GPU chiplets on an MCM.

In any case, we don't know enough about AMD's VM or HBCC to even make an educated guess. A very unreliable guess would have a very small dedicated RAM pool in the IO die for the trusted boot, then a minimum of 4 GB of external DDR4 in addition to 8 GB of GDDR6. In design it's going to be closer to a PC, with the HBCC eliminating most of the bottlenecks that made the APU so much more efficient than a PC. As much expensive VRAM (GDDR6) is not needed with an HBCC and 4+ GB of DDR4.
We already know Navi is monolithic.

https://www.pcgamesn.com/amd-navi-monolithic-gpu-design
 

zedox

Member
Oct 28, 2017
5,215
How, if they are going with AMD? Unless they are going with Nvidia for the GPU? Which is highly unlikely. Navi would need to have something similar to the tensor-core structure in its architecture.

And from what we've heard, nothing suggests that for Navi, which is on a new node, but super efficient.
You don't need to do things like Nvidia (which is something a lot of people need to understand...), and they (AMD) are coming out with ray tracing cards later this year (which they said they are working with partners on... MS being one)... and MS came out with DXR, not for just one partner. Too many signs for it not to be supported in the next console.
 

Intersect

Banned
Nov 5, 2017
451
Then we have either:
1) A pure PC design with a Navi GPU and Zen CPU. HBCC makes this almost as efficient as an APU/SoC.
2) A heavily modified Navi design, not an off-the-shelf Navi, as Zen CPU chiplets would need an IF connection to the GPU. If it's heavily modified and chiplet-based, it could borrow a design from the next generation, with the IO separate.
3) Or it's a Sony semi-custom monolithic SoC.

Remember, the rumors have Microsoft using their design for cloud servers, and those should NOT have video buffers or IO in them either. The AMD Exascale chiplet paper (page 2) has 3D memory on each GPU and shared memory for the CPUs, with IO on a separate chip.
 
Last edited:

anexanhume

Member
Oct 25, 2017
12,913
Maryland
Then we have a heavily modified Navi design, not an off-the-shelf Navi, as Zen CPU chiplets would need an IF connection to the GPU. If it's heavily modified and chiplet-based, it could borrow a design from the next generation with the IO separate, that or it's a Sony semi-custom monolithic SoC.
The GPUs have been IF-capable since Vega. The GPU chiplet would share bandwidth that way, similar to how existing APUs share bandwidth with Vega graphics.



Don't let us down, Navi.
 

Deleted member 5764

User requested account closure
Banned
Oct 25, 2017
6,574
I just caught wind of Phil coming on stage for that AMD conference yesterday. Xbox is 100% going to announce their next-gen console(s) this year aren't they? It may just be a spec sheet like with Project Scorpio, but I think it's happening for sure.
 

AegonSnake

Banned
Oct 25, 2017
9,566
So reading through the first few posts, it seems we are looking at 12-14 TFLOPS GPUs max.

We know MS doesn't want to give Sony the upper hand this time around, so they will attempt to beat Sony in the TFLOPS race, but if Navi only allows them 14 max, will people really care about a 1-2 TFLOPS difference? It will let MS market their console as the most powerful on the market, but how much would that really matter? This gen the difference was around 500 GFLOPS, roughly 40%; 2 TFLOPS on top of 12 would be about a 17% improvement: somewhat better post-processing effects and some extra pixels.

Is there a possibility of MS going all out and including a second chip to get their console up to 20 TFLOPS? Or adding a fancy cooling solution to push the clocks to 16-18 TFLOPS?

I am resigned to a 12 TFLOPS console, but I really hope either Sony or MS has some kind of trick up their sleeve and gives us a badass $600 18-20 TFLOPS monster.
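The percentages in that post can be checked quickly; the launch figures used below are the commonly cited 1.84 TFLOPS (PS4) and 1.31 TFLOPS (Xbox One):

```python
def relative_gap_pct(a: float, b: float) -> float:
    """Percentage advantage of the stronger part over the weaker one."""
    hi, lo = max(a, b), min(a, b)
    return (hi / lo - 1) * 100

# This gen at launch (PS4 1.84 TF vs Xbox One 1.31 TF):
print(round(relative_gap_pct(1.84, 1.31), 1))   # 40.5
# A hypothetical 14 TF vs 12 TF next gen:
print(round(relative_gap_pct(14, 12), 1))       # 16.7
```

So the same 2 TFLOPS gap shrinks a lot in relative terms as the baseline rises, which is the post's point.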
 

Whatislove

Member
Jan 2, 2019
905
I just caught wind of Phil coming on stage for that AMD conference yesterday. Xbox is 100% going to announce their next-gen console(s) this year aren't they? It may just be a spec sheet like with Project Scorpio, but I think it's happening for sure.
I still think both Microsoft and Sony will announce at some point this year for a late spring 2020 release.
 

Papacheeks

Banned
Oct 27, 2017
5,620
Watertown, NY
You don't need to do things like Nvidia (which is something a lot of people need to understand...), and they (AMD) are coming out with ray tracing cards later this year (which they said they are working with partners on... MS being one)... and MS came out with DXR, not for just one partner. Too many signs for it not to be supported in the next console.

This conversation, to me, says otherwise about it being in Navi:

"We are deep in development," she told reporters, "and that development is concurrent between hardware and software. That's key."

For Su, this comes back to AMD's overall strategy of delivering features only when they make a real impact for consumers. "The consumer doesn't see a lot of benefit today […]" Su said. "I think by the time we talk more about ray tracing, the customer is going to see [the benefit.]"

Her response gently pokes at Nvidia's launch of ray tracing which, though technically impressive, hasn't enjoyed widespread support. Battlefield V, the showcase game, remains the only title in which U.S. gamers can enjoy their RTX card's new hotness.

Doesn't sound like Navi will have it. Navi cards already have engineering samples, to my knowledge. They are probably making a lot of driver changes to explore it, but I feel the actual chips have been designed and are now in the fab process for a late-2019 release. Fingers point to late summer/fall.

I honestly think they are talking about when developers start supporting it more in their engines and require the hardware to utilize it. The same thing happened with Nvidia PhysX: still only a select few titles actually use it, and not many in meaningful ways.

Which is what we are seeing currently for ray tracing.
 

Deleted member 5764

User requested account closure
Banned
Oct 25, 2017
6,574
I still think both Microsoft and Sony will announce at some point this year for a late spring 2020 release.

I like your way of thinking! I'm thinking there's a 50-50 chance that Sony announces this year for a spring 2020 release. Microsoft always felt like fall to me, but I certainly wouldn't complain about a spring release. That just feels like a better time to buy hardware for me personally. The holidays involve too much gift buying for others.
 
Feb 1, 2018
5,240
Europe
I just caught wind of Phil coming on stage for that AMD conference yesterday. Xbox is 100% going to announce their next-gen console(s) this year aren't they? It may just be a spec sheet like with Project Scorpio, but I think it's happening for sure.

Yeah 100% sure Phil will drop a teaser at E3, similar to X1X, not much more than that I guess. X1Next will probably arrive end 2020, so one more E3 to do a full reveal.

Something like "20TF console gaming!" /walks off
(BTW that 20TF is just a crazy number to make my point funnier) :)
 

SeanMN

Member
Oct 28, 2017
2,185
This conversation, to me, says otherwise about it being in Navi:

"Her response gently pokes at Nvidia's launch of ray tracing which, though technically impressive, hasn't enjoyed widespread support. Battlefield V, the showcase game, remains the only title in which U.S. gamers can enjoy their RTX card's new hotness."

I agree with zedox in that I expect RT support in the new Xbox. I also don't think Navi will feature RT.

MS has been working on the software side of RT via DirectX since early 2017 (per their claims at GDC 2018). I speculate that they likely have a good idea of the hardware requirements to implement this and would be able to work with AMD on bespoke hardware customizations on top of existing AMD graphics tech, ready for a fall 2020 console.

Anecdotally, every Xbox that has launched has been feature-complete with the then-current version of DirectX (please correct me if I'm inaccurate on this), so it's reasonable to expect the same for the future Xbox.
 

AegonSnake

Banned
Oct 25, 2017
9,566
Yeah 100% sure Phil will drop a teaser at E3, similar to X1X, not much more than that I guess. X1Next will probably arrive end 2020, so one more E3 to do a full reveal.

Something like "20TF console gaming!" /walks off
(BTW that 20TF is just a crazy number to make my point funnier) :)
No, I am with you on that. They were pretty much caught off guard by the Pro dev kits, but decided to undercut Sony's announcement with a quick 6 TFLOPS number they knew they could hit a year and a half later. It pretty much made the Pro obsolete before it was even announced.

I think they will attempt to do the same this year at E3, and with Sony not even present, they will get a lot of free press and marketing.
 

zedox

Member
Oct 28, 2017
5,215
I agree with zedox in that I expect RT support in the new Xbox. I also don't think Navi will feature RT.

MS has been working on the software side of RT via DirectX since early 2017 (per their claims at GDC 2018). I speculate that they likely have a good idea of the hardware requirements to implement this and would be able to work with AMD on bespoke hardware customizations on top of existing AMD graphics tech, ready for a fall 2020 console.

Anecdotally, every Xbox that has launched has been feature-complete with the then-current version of DirectX (please correct me if I'm inaccurate on this), so it's reasonable to expect the same for the future Xbox.
This is exactly my line of thinking. Also, AMD is most likely talking about the environment in which more developers would be inclined to do ray tracing... What would that be? Consoles. Sony could be doing it as well, but I'm more in favor of MS doing it because of DXR. If you look at it from a straight PC perspective in the cards, then no, but it's about the customization of the hardware... along with AI stuff MS can do on the GPU, most likely similar to DLSS from Nvidia.
 
Feb 1, 2018
5,240
Europe
No, I am with you on that. They were pretty much caught off guard by the Pro dev kits, but decided to undercut Sony's announcement with a quick 6 TFLOPS number they knew they could hit a year and a half later. It pretty much made the Pro obsolete before it was even announced.

I think they will attempt to do the same this year at E3, and with Sony not even present, they will get a lot of free press and marketing.

I would be very impressed if we get a 20TF console ; ) Let's hope MS does this then \o/
 

Outrun

Member
Oct 30, 2017
5,782
You don't need to do things like Nvidia (which is something a lot of people need to understand...), and they (AMD) are coming out with ray tracing cards later this year (which they said they are working with partners on... MS being one)... and MS came out with DXR, not for just one partner. Too many signs for it not to be supported in the next console.

Yeah, I think that it is a definite possibility. We could see ray tracing become the new "1080p Full HD" marketing tool for the next generation.
 

jroc74

Member
Oct 27, 2017
28,992
You don't need to do things like Nvidia (which is something a lot of people need to understand...), and they (AMD) are coming out with ray tracing cards later this year (which they said they are working with partners on... MS being one)... and MS came out with DXR, not for just one partner. Too many signs for it not to be supported in the next console.
I wouldn't be surprised either.

Look at the One X and its FreeSync/VRR support. If not at the start of the gen, then in their next-gen refresh.
 