
1-D_FE

Member
Oct 27, 2017
8,261
Most modern (AAA) games have some pretty amazing support for GCN h/w thanks to consoles, but if you look at engines which are properly optimized for all h/w on the market (like UE4), then you will see what I've already said above - GCN needs about twice as much energy for the same performance as current NV tech, and this is solely a h/w design problem. This problem is also likely to become a catastrophe for AMD in 2018 after the launch of NV's next-gen gaming architecture, which will probably improve perf/watt even further if Volta is any indication.

The thing that disappoints me most with modern AMD is that even when there are these clear-cut deficiencies, it basically takes a decade for AMD to even try to address them. It just doubles and triples and quadruples down.

Welp... looks like the landscape is a-changing. Intel is going directly after Nvidia. They actually have the R&D budget and experience to make competitive chips, though (drivers will still be a huge issue, however). This move is actually scary!

They tried going after Nvidia with Larrabee too. Apparently it isn't very easy (even with all the money). Feels more like Intel getting scared about Nvidia beginning to encroach on their territory, and Intel hoping they can evolve to try and stop it.
 

tuxfool

Member
Oct 25, 2017
5,858
They tried going after Nvidia with Larrabee too. Apparently it isn't very easy (even with all the money). Feels more like Intel getting scared about Nvidia beginning to encroach on their territory, and Intel hoping they can evolve to try and stop it.
Larrabee failed because it was an exotic approach to GPU design. Intel's current efforts have much more commonality with Nvidia and AMD designs (and everyone else's).

The issue with Intel is that they're designing pure iGPUs. As such, they're not scaling those designs, nor are they developing drivers to fully optimize their architectures in the same way Nvidia or AMD do.

So it is just a question of whether they want to compete and put investment toward those efforts.
 

dgrdsv

Member
Oct 25, 2017
11,885
The thing that disappoints me most with modern AMD is that even when there are these clear-cut deficiencies, it basically takes a decade for AMD to even try to address them. It just doubles and triples and quadruples down.
Again, it comes down to a lack of resources for fast enough GPU architecture iteration. This is basically an accepted truth in the industry now, and I wouldn't be surprised to learn one day that Koduri jumped ship because he simply could not do much at RTG with their current development budgets.

datschge no, I really don't, as I see no point in talking any further to someone who thinks that AMD doesn't have any h/w issues with their current GPUs.
 

datschge

Member
Oct 25, 2017
623
dgrdsv that's perfectly fine with me, since in your constant antagonism you don't even notice anymore where we agree and where we don't. Correcting you on what I supposedly said is too tiring.
 
AMD are making a mistake in agreeing to this deal. Intel are one of the most untrustworthy companies in existence; I wouldn't be surprised if this is purely for corporate espionage.
Oh wow, so you were doing this shtick even earlier. A lot of unwarranted and unsourced statements right there. Corporate espionage, hah. Yeah, I am sure AMD should listen to a single programmer telling them that this is a terrible deal they are making...

... even when they built a complete division for semi-custom hardware for exactly this kind of purpose, targeting the high-end laptop, workstation and HTPC ranges with this one.

But as a programmer, I guess you know better.

4C/8T i7, 24 CU GPU

The GPU is pretty massive in the officially released images as well, as it's about four times bigger than the iGPU on the Intel Core die in the same images.
1536 shaders, between 3.0 and 3.3 TF. Should be reasonable enough for 1080p gaming on Ultra settings, no? It is a big jump compared to the Iris Pro series, which was more of a match for the GeForce GTX 750 Ti.
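
For what it's worth, a quick back-of-the-envelope check of that 3.0-3.3 TF range (the ~1.0-1.075 GHz clock range is just my assumption, not something from the leak):

# Rough GCN throughput estimate: shaders x 2 FLOPs (FMA) per clock x clock speed.
shaders = 24 * 64                      # 24 CUs x 64 shaders per CU = 1536
for clock_ghz in (1.0, 1.075):         # assumed clock range, not confirmed
    tflops = shaders * 2 * clock_ghz / 1000
    print(f"{clock_ghz:.3f} GHz -> {tflops:.2f} TFLOPS")
# Prints ~3.07 TF at 1.0 GHz and ~3.30 TF at 1.075 GHz, which lines up with the estimate.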

One more photo of the dev board:
What amazes me is actually how rather large it is compared to everything else. Granted, it's still small enough that I could see someone like Zotac making a nice Zbox out of it; I just didn't think it would be this size.

dgrdsv that's perfectly fine with me, since in your constant antagonism you don't even notice anymore where we agree and where we don't. Correcting you on what I supposedly said is too tiring.
It has reminded me all too much of that Other Side user Kezen. His replies were perfectly valid, but man, how thinly veiled his bragging and condescension towards one company were.

That being said, Doctor Rus knows his shit, so it's good to have him here :)
 

dgrdsv

Member
Oct 25, 2017
11,885
Rumor: Hades Canyon NUC with AMD Graphics Spotted

core-radeon-leak.png


Well, not bad for what is essentially a NUC. It won't light any fires on the market though, since you can get the same performance in a small form factor right now with Max-Q and such.
 

Primus

Member
Oct 25, 2017
3,840
intel-chips-1-1.jpg


Two lines of integrated Vega graphics: the GL series, which is aimed at laptops (lower clock speed and only 20 compute units), and the GH series, which is the desktop variant (higher clock speed and 26 compute units).

That Hades Canyon NUC shown off in the video above is a BEAST, and the GH variant is VR-ready. Engadget has the rest of the slides, with the GH showing slight gains over a 6GB Nvidia GTX 1060.
 

finalflame

Product Management
Banned
Oct 27, 2017
8,538
I'd be in for a NUC with performance slightly above GTX 1060 levels. Do we have any idea on price/availability?
 

JonnyDBrit

God and Anime
Member
Oct 25, 2017
11,027
I'd be in for a NUC with performance slightly above GTX 1060 levels. Do we have any idea on price/availability?

The sheer size-to-performance ratio just kinda blows my mind with this. That thing looks about as big as a Wii U, and might just blow the pants off the Xbox One X. I say might because we don't necessarily know how the performance will actually translate, and of course it's practically twice the price when you factor in RAM and all, but... damn.
 

Deleted member 17491

User requested account closure
Banned
Oct 27, 2017
1,099
The sheer size-to-performance ratio just kinda blows my mind with this. That thing looks about as big as a Wii U, and might just blow the pants off the Xbox One X. I say might because we don't necessarily know how the performance will actually translate, and of course it's practically twice the price when you factor in RAM and all, but... damn.
If Intel's 1.2 liter volume claim is correct, it's actually smaller than a Wii U, whose volume comes in at ~2.1 liters.

Edit: Found the dimensions: https://newsroom.intel.com/wp-content/uploads/sites/11/2018/01/intel-nuc-spec-sheet.pdf

Intel NUC: 221 x 142 x 39 mm (1.2 L)
Wii U: 172 x 46 x 269 mm (2.1 L)
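
For anyone who wants to double-check the maths, the litre figures fall straight out of those spec-sheet dimensions (treating both boxes as plain rectangular volumes):

# Volume in litres from dimensions in mm (1 L = 1,000,000 mm^3).
def litres(a_mm, b_mm, c_mm):
    return a_mm * b_mm * c_mm / 1_000_000

print(f"Intel NUC: {litres(221, 142, 39):.2f} L")   # ~1.22 L
print(f"Wii U:     {litres(172, 46, 269):.2f} L")   # ~2.13 L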
 

JonnyDBrit

God and Anime
Member
Oct 25, 2017
11,027
If Intel's 1.2 liter volume claim is correct, it's actually smaller than a Wii U, whose volume comes in at ~2.1 liters.

That just makes it even more ridiculous!

More seriously, I am very interested in what this could mean for gaming laptops in the future. I hope they can downscale it into more affordable variants.
 
Oct 26, 2017
6,151
United Kingdom
The sheer size-to-performance ratio just kinda blows my mind with this. That thing looks about as big as a Wii U, and might just blow the pants off the Xbox One X. I say might because we don't necessarily know how the performance will actually translate, and of course it's practically twice the price when you factor in RAM and all, but... damn.

"Up to 3.7 TFlops" for the top end part, compared with the XB1X's 6 Tflops... yeah, no.
 

RogerL

Member
Oct 30, 2017
606
The sheer size-to-performance ratio just kinda blows my mind with this. That thing looks about as big as a Wii U, and might just blow the pants off the Xbox One X. I say might because we don't necessarily know how the performance will actually translate, and of course it's practically twice the price when you factor in RAM and all, but... damn.

Xbox One X still comes out on top; this is PlayStation 4 Pro level of GPU performance.
This design might answer the question - will a better CPU than Jaguar help if we only get half the number of real cores?
 

JonnyDBrit

God and Anime
Member
Oct 25, 2017
11,027
"Up to 3.7 TFlops" for the top end part, compared with the XB1X's 6 Tflops... yeah, no.

Thank you for the clarification, and I'll admit that statement was pre-emptively hyperbolic. However...

Xbox One X still comes out on top; this is PlayStation 4 Pro level of GPU performance.
This design might answer the question - will a better CPU than Jaguar help if we only get half the number of real cores?

I was factoring this in as well. The Xbox One X, depending on the circumstances, is bottlenecked by its CPU.
 
Oct 26, 2017
6,151
United Kingdom
Xbox One X still comes out on top; this is PlayStation 4 Pro level of GPU performance.
This design might answer the question - will a better CPU than Jaguar help if we only get half the number of real cores?

4.2 TFlops is still > up to 3.7 TFlops.

Still in the same ballpark though, so yeah, you're right.

On the CPU side, assuming these cores boast SMT, they'll push just as many threads (if not more) at a significantly higher clock speed.

The CPUs here will run rings around the Jaguars in XB1X/Pro/XB1S/PS4.
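
Very roughly, the thread maths works out like this (the clock figures are my assumptions for illustration, not quoted specs, and this ignores the large per-clock IPC gap, which favours the Intel cores even further):

# Crude threads-x-clock comparison between the quoted 4C/8T i7 and the console Jaguars.
intel_threads  = 4 * 2    # 4 cores with SMT, as quoted in the thread
jaguar_threads = 8 * 1    # 8 Jaguar cores, no SMT
intel_clock_ghz  = 3.1    # assumed base clock for the i7 part
jaguar_clock_ghz = 2.3    # assumed XB1X Jaguar clock (Pro/PS4/XB1S run lower)

print(f"{intel_threads * intel_clock_ghz:.1f}")     # ~24.8 thread-GHz
print(f"{jaguar_threads * jaguar_clock_ghz:.1f}")   # ~18.4 thread-GHz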
 
Oct 27, 2017
3,894
ATL
I'd be in for a NUC with performance slightly above GTX 1060 levels. Do we have any idea on price/availability?

Wasn't that test comparing the highest-end Vega chip variant (26 CUs) to a Max-Q 1060? Isn't the Max-Q variant of the 1060 lower-performing than the full 1060 variant?

Anyway, I'm honestly curious as to where this CPU+GPU design fits into a typical Intel laptop portfolio. The wattage concerns and overall price really make it hard to see which part of the market would benefit the most from this. Are NUCs even worthwhile? If the highest-end variant is $999 without including RAM and an SSD, you will probably get better bang for your buck building an SFF PC.

Intel ultimately wants a competent APU, but I'm guessing that's years away.

Edit: https://arstechnica.com/gadgets/201...intel-cpu-amd-gpu-nvidia-beating-performance/

It looks like Ars Technica has a good breakdown of where this chip fits into the market. It also looks like Intel have been cheating a little bit in the performance comparisons: they are comparing a 15W CPU and a low-power variant of an existing chip against a CPU + GPU combo that can pull more than 100W. I wonder what the numbers would look like if the Intel G series was limited to the wattage/thermal constraints of a typical 15" laptop?
 
Oct 26, 2017
6,151
United Kingdom
Wasn't that test comparing the highest-end Vega chip variant (26 CUs) to a Max-Q 1060? Isn't the Max-Q variant of the 1060 lower-performing than the full 1060 variant?

Anyway, I'm honestly curious as to where this CPU+GPU design fits into a typical Intel laptop portfolio. The wattage concerns and overall price really make it hard to see which part of the market would benefit the most from this. Are NUCs even worthwhile? If the highest-end variant is $999 without including RAM and an SSD, you will probably get better bang for your buck building an SFF PC.

Intel ultimately wants a competent APU, but I'm guessing that's years away.

Edit: https://arstechnica.com/gadgets/201...intel-cpu-amd-gpu-nvidia-beating-performance/

It looks like Ars Technica has a good breakdown of where this chip fits into the market. It also looks like Intel have been cheating a little bit in the performance comparisons: they are comparing a 15W CPU and a low-power variant of an existing chip against a CPU + GPU combo that can pull more than 100W. I wonder what the numbers would look like if the Intel G series was limited to the wattage/thermal constraints of a typical 15" laptop?

I'm wondering whether a large part of the high price is due to the HBM2 and the requisite interposer.

HBM3, or even Samsung's low-cost HBM with an organic interposer, would likely bring the cost of a future iteration down considerably.

Still, Intel love dem high margins.