
Charsace

Chicken Chaser
Banned
Nov 22, 2017
2,845
RTX is actually a huge upgrade from the 1000 series. Try any game with ray tracing effects and you will see a big performance gap.
 

Deleted member 7948

User requested account closure
Banned
Oct 25, 2017
1,285
It really isn't, and it's measured in comparisons between GPUs. Price for performance.

Spending over $100 for a few extra frames is poor value.
It isn't.

For me, HDMI 2.1 VRR support and OpenGL performance matter, so an AMD card is poor value for me.

Sure, I could get more frames in some games with an RX 5700, but do I really want to deal with screen tearing and judder in 2019? No.
 

Lakeside

Member
Oct 25, 2017
9,209
I think it's pretty obvious that Nvidia should have launched a GTX 2080 Ti (or a 1680 or whatever number they wanted to put on it) alongside the RTX one. They were probably afraid that it'd stunt ray tracing, but with the consoles next year, I doubt it, and ray tracing is going to take off no matter what. I think most of us who bought an RTX card knew we were spending extra not so much for performance but for the promise of new rendering techniques. Those who aren't interested and want a card that doesn't have that RT silicon are just going to buy an AMD card and will be more than happy with it I'm sure, so yeah, Nvidia left a gap in the market there and AMD went for it.

Sounds like AMD have some form of hardware RT acceleration coming soon, since both the PS5 and Scarlett seem to have it, even if it isn't in their desktop parts yet.

I expect prices will settle down a bit with the next generation of Nvidia and AMD cards; there'll be more competition again, and Nvidia won't be the only ones in the RT game. Their tech will also have matured and should be cheaper to produce.

For now, the RTX cards are for people like me and Dark1x and Dictator who are as interested in bleeding edge graphical techniques as we are in high framerates (you can be interested in both!), and all that extra silicon (and all the hardware and software development costs that went into the product) pushed up the prices way more than just increasing clock speeds and cuda core counts.

In two or three years, RT acceleration is going to be a must have in any mid to high end GPU, but we aren't there yet.

With the RTX and tensor hardware using such a small percentage of the die, it wouldn't be a significant benefit. Few people are going to pay a significant amount of extra money just for ray tracing before software is available. Essentially what you are saying is that RTX shouldn't have happened at all. This stuff has to start somewhere.
 
Oct 27, 2017
5,761
You can really tell who's been PC gaming for a long time vs those who have not.

The Turing cards are a significant upgrade in so many ways but, like GeForce 3, this is just the beginning.


You're not wrong, but just like the GeForce 3 cards, I feel like waiting for the next generation is a good idea. I want the card that is to ray tracing what the Radeon 9700 Pro was to shaders. Until then my 980 Ti will do.
 

plagiarize

Eating crackers
Moderator
Oct 25, 2017
27,501
Cape Cod, MA
With the RTX and tensor hardware using such a small percentage of the die, it wouldn't be a significant benefit. Few people are going to pay a significant amount of extra money just for ray tracing before software is available. Essentially what you are saying is that RTX shouldn't have happened at all. This stuff has to start somewhere.
Well, they're competing against non-RT parts that offer better dollar-to-framerate metrics (but are missing those RT features) anyway. Nvidia only put out a GTX based on this chipset at the low end. They could have done one at the high end and priced it competitively against AMD's cards.

Personally, I'm not sure just how viable RT is on the lower RTX cards. It's certainly able to hit decent performance and IQ on the 2080 Ti, but below that I'm not necessarily convinced. Those of us who bought the RTX 2080 Ti would have bought it anyway. Perhaps you'd have lost 2080 and 2070 buyers, though. But as I mentioned, they're competing with non-RT parts already, and people who aren't yet convinced RT is worth paying more money for are going to look outside RTX either way.

I don't think that means RTX doesn't happen. Ray tracing is very much the near future of rendering. It is and was inevitable. Fully rasterized games only offer so much, and only go so far. As we get hardware that can handle RT, it's going to take off. Control with RTX is clearly closer to what Remedy wanted to do than without it. It's an astoundingly good-looking game that hints at the type of dynamism we might see in larger environments as we move into the next generation.

As for people who think Nvidia might have killed RT by overpricing these cards... it's not going to matter in the long run what they were priced. Nvidia are still going to be around in a few years, and ray tracing will become commonplace before you know it, no matter which vendors are making GPUs come the middle of the next decade.
 

Duxxy3

Member
Oct 27, 2017
21,662
USA
You're not wrong, but just like the GeForce 3 cards, I feel like waiting for the next generation is a good idea. I want the card that is to ray tracing what the Radeon 9700 Pro was to shaders. Until then my 980 Ti will do.

As someone with a 980 Ti in one system and an RTX 2070 in another, you'll be just fine with your 980 Ti.
 

Lakeside

Member
Oct 25, 2017
9,209
Well, they're competing against non-RT parts that offer better dollar-to-framerate metrics (but are missing those RT features) anyway. Nvidia only put out a GTX based on this chipset at the low end. They could have done one at the high end and priced it competitively against AMD's cards.

Personally, I'm not sure just how viable RT is on the lower RTX cards. It's certainly able to hit decent performance and IQ on the 2080 Ti, but below that I'm not necessarily convinced. Those of us who bought the RTX 2080 Ti would have bought it anyway. Perhaps you'd have lost 2080 and 2070 buyers, though. But as I mentioned, they're competing with non-RT parts already, and people who aren't yet convinced RT is worth paying more money for are going to look outside RTX either way.

I don't think that means RTX doesn't happen. Ray tracing is very much the near future of rendering. It is and was inevitable. Fully rasterized games only offer so much, and only go so far. As we get hardware that can handle RT, it's going to take off. Control with RTX is clearly closer to what Remedy wanted to do than without it. It's an astoundingly good-looking game that hints at the type of dynamism we might see in larger environments as we move into the next generation.

As for people who think Nvidia might have killed RT by overpricing these cards... it's not going to matter in the long run what they were priced. Nvidia are still going to be around in a few years, and ray tracing will become commonplace before you know it, no matter which vendors are making GPUs come the middle of the next decade.

Removing RTX would free up 8-10% of the die space, and performance isn't going to scale linearly. You would wind up with a card so close to the regular 2080 Ti that it'd be within overclocking range, but lack ray tracing.

I'm not sure there's a place for that at the high end in terms of product segmentation, plus it would undercut ray tracing development as has been mentioned a number of times. This is the future of rendering. Nvidia can't dump all the R&D burden onto a few cards that nobody buys.

And it would still be a MASSIVE die.
 

plagiarize

Eating crackers
Moderator
Oct 25, 2017
27,501
Cape Cod, MA
Removing RTX would free up 8-10% of the die space, and performance isn't going to scale linearly. You would wind up with a card so close to the regular 2080 Ti that it'd be within overclocking range, but lack ray tracing.

I'm not sure there's a place for that at the high end in terms of product segmentation, plus it would undercut ray tracing development as has been mentioned a number of times. This is the future of rendering. Nvidia can't dump all the R&D burden onto a few cards that nobody buys.

And it would still be a MASSIVE die.
Well my whole thought process was that the card would be the same as the 2080 Ti in terms of rasterization performance, but offered at a cheaper price.

I understand where you're coming from; I'm just pointing out that the AMD cards exist no matter what Nvidia do. That cheaper card that offers equivalent rasterization performance exists now, and all I'm saying is that if Nvidia had put one out, they'd get more of the money that's currently going to AMD, and ray tracing isn't going to fail either way.
 

ShinUltramanJ

Member
Oct 27, 2017
12,949
It isn't.

For me, HDMI 2.1 VRR support and OpenGL performance matter, so an AMD card is poor value for me.

Sure, I could get more frames in some games with an RX 5700, but do I really want to deal with screen tearing and judder in 2019? No.

HDMI 2.1 VRR support is coming to AMD cards as well.

Pretty sure nobody is even offering VRR right now?

Between my Freesync monitor and Enhanced Sync I'm not dealing with screen tearing or judder, so I don't know what that remark was for?
 

Deleted member 7948

User requested account closure
Banned
Oct 25, 2017
1,285
HDMI 2.1 VRR support is coming to AMD cards as well.
Sadly, I can't use future features right now. There isn't even an ETA for it.

Between my Freesync monitor and Enhanced Sync I'm not dealing with screen tearing or judder, so I don't know what that remark was for?
The remark was for this:
"But Nvidia is the bigger brand name that a lot of folks are just going to keep flocking to, regardless of how much they get bent over."

I just gave you two reasons why I prefer an Nvidia card.

Can I use freesync with my LG C9 and an AMD card? No, so it's useless to me. It has nothing to do with "getting bent over".
 

ShinUltramanJ

Member
Oct 27, 2017
12,949
Sadly, I can't use future features right now. There isn't even an ETA for it.


The remark was for this:
"But Nvidia is the bigger brand name that a lot of folks are just going to keep flocking to, regardless of how much they get bent over."

I just gave you two reasons why I prefer an Nvidia card.

Can I use freesync with my LG C9 and an AMD card? No, so it's useless to me. It has nothing to do with "getting bent over".

Sorry, but I don't follow?

You prefer an Nvidia card for VRR, which you admitted is a future feature?

What does the Nvidia card offer for your TV that makes the AMD card useless?

I was referring to price/performance. It's a thing whether you want to admit it or not.
 

Deleted member 7948

User requested account closure
Banned
Oct 25, 2017
1,285
I was referring to price/performance. It's a thing whether you want to admit it or not.
I have no problem admitting it, it's a fact. The problem is this part here:

It really isn't, and it's measured in comparisons between GPUs. Price for performance.

Spending over $100 for a few extra frames is poor value.
It's just plain wrong. Price-performance ratio is a part of value, but not all of it.

You prefer an Nvidia card for VRR, which you admitted is a future feature?

What does the Nvidia card offer for your TV that makes the AMD card useless?
For AMD, it isn't even close to being released. For Nvidia, it is available right now.

If I had bought an AMD card, I'd have two options:
1 - play with traditional v-sync and waste the extra power from the card.
2 - play with judder and/or tearing.

And you are telling me that's the best value, lol.
 

ohitsluca

Member
Oct 29, 2017
730
GTX 980 Ti → RTX 2080 was a pretty big upgrade for me at 1440p/144 Hz. I think the reason the upgrades don't feel quite as big is that people are still gaming at 1080p. The new cards really benefit higher resolutions more, as seen in benchmarks.
 

Deleted member 12790

User requested account closure
Banned
Oct 27, 2017
24,537
Well put.

I'd say raytracing at that point was both too early and too late. Too early because of the ridiculous price and meager performance you get out of it, and too late because at this point developers have honestly mastered the art of tricking us with lighting/shadows/reflections that on the whole look fantastic without being really accurate or "real."

No, no we have not, not by a country mile. And, more to the point, employing the tricks currently used to fake lighting/shadows/reflections consumes a very significant amount of development time. Hence a huge reason RT is the future: it will drastically reduce the workload for developers while achieving superior results.
 

Deleted member 35204

User requested account closure
Banned
Dec 3, 2017
2,406
Wasting more than half of the die size for ray tracing crap instead of raster performance is the stupidest thing they could do
 

Fawz

Member
Oct 28, 2017
3,654
Montreal
I expect 2020's new line of cards to be a pretty significant improvement in performance, plus the really nice added bonus of HDMI 2.1 support.

However, without proper competition I fully expect them to overprice the cards and try to pull another trick with the naming convention to mask that.
 

Deleted member 35204

User requested account closure
Banned
Dec 3, 2017
2,406
Again, where are people getting these crazy estimations of wasted die size?

There is a ton of misinformation being handed out on this front.
At the unveiling Nvidia showed a die where half of it was only RT cores, with more space wasted on tensor cores. Regardless of how accurate that was, just look at how enormous the Turing die is while still having more or less the same number of stream processors/TMUs/ROPs.
 

J2d

Member
Oct 26, 2017
1,140
As a PC noob I was dead set on upgrading when the next cards are out, but seeing a 2080 Ti struggling with RDR2 makes me worried about how they will be able to deal with next-gen games. Would suck to have to wait for the 4xxx series...
 

Lakeside

Member
Oct 25, 2017
9,209
At the unveiling Nvidia showed a die where half of it was only RT cores, with more space wasted on tensor cores. Regardless of how accurate that was, just look at how enormous the Turing die is while still having more or less the same number of stream processors/TMUs/ROPs.

There was no useful data in the overview they initially provided, and a lot of the early misinformation is what people have hung onto. It turns out that most of the additional space you're describing comes from cache and an expanded instruction set.

Several months ago someone got their hands on actual images of TU106 and TU116 dies. They were reverse "engineered" to discern exactly how much of the die space was RTX and tensor: 8-10%. I saw it in multiple places months back, but at the moment I can only find an analysis on Reddit that has gone uncontested.

Not all of the die space is used, so the percentage of the used space is larger; my math comes out to ~18% for TU106, but it's possible I'm missing something. I've seen others claim 22%.

Source:

 
Nov 2, 2017
2,275
At the unveiling Nvidia showed a die where half of it was only RT cores, with more space wasted on tensor cores. Regardless of how accurate that was, just look at how enormous the Turing die is while still having more or less the same number of stream processors/TMUs/ROPs.
Nope. https://www.techpowerup.com/254452/...ses-tpc-area-by-22-compared-to-non-rtx-turing

It seems that most of the area increase compared to the Pascal architecture actually comes more from the increased performance (and size) of caches and larger instruction sets on Turing than from RTX functionality.

So even without any raytracing hardware Turing would be much bigger than Pascal.
 

tokkun

Member
Oct 27, 2017
5,392
Can you explain where they get 22% given the numbers?

The 22% number is a fraction of the die space used by the cores.
The 10% number is a fraction of the total die space.

The whole-die number is a lot smaller because a significant amount of the die is devoted to caches and interconnect.
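
As a rough sanity check, here is that relationship in a few lines of Python. It only reuses the percentages quoted in this thread (no new measurements), so treat it as a back-of-the-envelope sketch:

# Why the same RT/tensor silicon reads as ~18-22% of the core (TPC) area
# but only ~8-10% of the whole die.
rt_share_of_tpc = 0.18   # RT + tensor as a fraction of TPC area (~18% reported)
rt_share_of_die = 0.10   # RT + tensor as a fraction of total die area (8-10% reported)

# If both figures describe the same silicon, the TPCs can only make up about
# rt_share_of_die / rt_share_of_tpc of the die:
implied_tpc_share = rt_share_of_die / rt_share_of_tpc
print(f"Implied TPC share of the die: {implied_tpc_share:.0%}")   # ~56%

# In other words, roughly half the die is caches, memory controllers and
# interconnect, which is why the die-wide percentage is so much smaller
# than the per-TPC percentage.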
 

Deleted member 35204

User requested account closure
Banned
Dec 3, 2017
2,406
There was no useful data in the overview they initially provided, and a lot of the early misinformation is what people have hung onto. It turns out that most of the additional space you're describing comes from cache and an expanded instruction set.

Several months ago someone got their hands on actual images of TU106 and TU116 dies. They were reverse "engineered" to discern exactly how much of the die space was RTX and tensor: 8-10%. I saw it in multiple places months back, but at the moment I can only find an analysis on Reddit that has gone uncontested.

Not all of the die space is used, so the percentage of the used space is larger; my math comes out to ~18% for TU106, but it's possible I'm missing something. I've seen others claim 22%.

Source:

Nope. https://www.techpowerup.com/254452/...ses-tpc-area-by-22-compared-to-non-rtx-turing



So even without any raytracing hardware Turing would be much bigger than Pascal.
OK, my bad, but still... that cache doesn't seem to do much, and more instruction sets... well, let's just say I, and I think many others, would have appreciated much more either a cheaper video card without the RT baggage or a full-fat die of rasterizing performance.
But Nvidia gotta Nvidia, so we have to hope for AMD to deliver (lol, they are on a more advanced node and still can't match Nvidia's power or performance per watt) or for Intel to break through immediately.
 

Lakeside

Member
Oct 25, 2017
9,209
OK, my bad, but still... that cache doesn't seem to do much, and more instruction sets... well, let's just say I, and I think many others, would have appreciated much more either a cheaper video card without the RT baggage or a full-fat die of rasterizing performance.

You wanted nothing changed, just a bigger Pascal die. Ok, got it.

Regardless, your comments are now getting into criticism of the Pascal -> Turing architecture changes without the knowledge to have that discussion. This isn't a criticism of your knowledge, as I am in that same boat; I just know enough to avoid that particular conversation.
 

low-G

Member
Oct 25, 2017
8,144
Well put.

I'd say raytracing at that point was both too early and too late. Too early because of the ridiculous price and meager performance you get out of it, and too late because at this point developers have honestly mastered the art of tricking us with lighting/shadows/reflections that on the whole look fantastic without being really accurate or "real."

Except people go gaga whenever they see raytracing they can actually play (Minecraft PTGI videos). And they get sour grapes when it costs $1000.

The difference is clear. Everyone sees it. Raytracing is absolutely critical and inevitable, and when more people can afford it they'll stop acting like children.
 

Deleted member 29195

User requested account closure
Banned
Nov 1, 2017
402
NVIDIA shouldn't be pricing their cards like they are. Though the reason there is not a huge difference is counterintuitive. NV ain't trying to scam you on the chip. It's as good as it can be.

The issue is that we've hit the zone where engineering gets pretty tough. We're talking Quantum Effects, quantum entanglement, electrons doing weird as shit things. This is engineering at scales we've never engineered before. It's going to slow down, and it's going to slow down massively. That's why NV is introducing ray tracing. They're hoping it's something drastic they can improve year over year. Because the rest of the chip? That's not happening anymore. Period. Physics says no.
 

ILikeFeet

DF Deet Master
Banned
Oct 25, 2017
61,987
NVIDIA shouldn't be pricing their cards like they are. Though the reason there is not a huge difference is counterintuitive. NV ain't trying to scam you on the chip. It's as good as it can be.

The issue is that we've hit the zone where engineering gets pretty tough. We're talking Quantum Effects, quantum entanglement, electrons doing weird as shit things. This is engineering at scales we've never engineered before. It's going to slow down, and it's going to slow down massively. That's why NV is introducing ray tracing. They're hoping it's something drastic they can improve year over year. Because the rest of the chip? That's not happening anymore. Period. Physics says no.
Nvidia is still on 12nm (16nm). They're jumping to 7nm next year.
 

Deleted member 29195

User requested account closure
Banned
Nov 1, 2017
402
Nvidia is still on 12nm (16nm). They're jumping to 7nm next year.
That is precisely my point. NV is still at 12nm because the jumps are getting harder and harder to make. And even after making the jump, the returns keep diminishing. And once we jump to 7nm, that's it for a while. The returns we used to get every year are gone. Back then, folks were moving to new process nodes constantly, and that's why each upgrade delivered so much performance.
 

Inugami

Member
Oct 25, 2017
14,995
More upsetting is that there are no value upgrades to be made... My 580 is just a slightly overclocked 480... and there is nothing in the $200-300 range that is worth it. The 2060 Super is almost there, but it's still quite a bit pricey for about a 50% improvement over the 580 (with RT mostly not being worth it as a feature, since it's got so few tensor cores).
 

Lakeside

Member
Oct 25, 2017
9,209
There's a direct link to the source one post above mine.

Right, I had posted the same data. It turns out I was just looking at it differently.

The percent of the TPC that is RTX is ~18%. That's what I was stuck on.

The percent increase in TPC size due to RTX is ~22%.

Of course both are larger than the whole-die figure, but in the end it's just different ways of viewing the same thing. To me it makes more sense to consider what percentage of the TPC and of the die are used for RTX (ray tracing and tensor).
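
For anyone else who got stuck on the same point, the two figures reconcile in a few lines of Python (again, only using the numbers quoted above, so this is just arithmetic, not new data):

# "+22% TPC area from RTX" and "RTX is ~18% of the TPC" are the same
# measurement expressed against different baselines.
tpc_growth = 0.22                         # RTX-enabled TPC is 1.22x a non-RTX TPC
rtx_share = tpc_growth / (1 + tpc_growth)
print(f"RTX share of the enlarged TPC: {rtx_share:.1%}")   # ~18.0%

# 22% is measured against the smaller non-RTX TPC; 18% is measured against
# the larger RTX-enabled TPC. Same silicon, different denominator.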
 

Ra

Rap Genius
Moderator
Oct 27, 2017
12,196
Dark Space
They didn't ruin anything. They are going to drag you guys along to the future with the rest of us, and ray tracing is that future. Many of us have been waiting on it for decades, and the 20 series was the first step. It had to start somewhere, and now even the next consoles are fully committed, which just makes it that much more of a reality.

People can complain about the performance gains, they can whine about the focus on RT cores, but it had to happen sooner or later, and Nvidia was the only one in a secure enough position to take the leap.

In 5-10 years, when ray tracing is the standard, all of these complaint posts about it are going to age like milk. They are small-minded and short-sighted. Big deal, they went one gen without the massive gains everyone is used to, and btw, they also introduced consumer-level tech that will change the way games are rendered forever.

Well, here's the thing.

1) RT is the future of rendering. It has to start somewhere. It was never going to be a convenient time but, personally, I think the end of a console generation is THE BEST time for it as most games aren't so demanding as to require that much more power than a 1080ti can deliver. There are a few exceptions but we're good right now - next-gen will bring new, more demanding games where larger leaps are more necessary.

2) This ties into my comment - PC hardware in the last decade has become all about performance boosts but little else. It's just about increasing what we already have. That also means that newcomers to the world of PC just sort of feel 'that's how it should be' when, in reality, it wasn't like this in the past. Each new paradigm shift in graphics requires sacrifice. As I said, programmable shaders were a key feature of GeForce 3 and it was critical to the development of graphics - but the GF3 wasn't a super fast card and few games launched with those features during its heyday. It needed to happen, however. Same deal with hardware T&L on GeForce 256 and many many other cards from different companies. Important features were added that were critical to the future of graphics but you didn't always get massive performance boosts. That's what is happening here.

Basically, what I'm saying is that you need to look beyond simple performance boosts and consider the big picture. Is it expensive? Yes but the answer to that is - wait. This was an important leap and it needed to happen. This was the absolute best time for it too.

This is why the knee-jerk reactions from people do bother me - that attitude hurts the advance of graphics for a boost that we don't REALLY need at this exact moment. I understand why people want that typical upgrade cycle, but this is better for the future of graphics and next-generation games.

So, if it's too expensive and you're disappointed - again, just wait. Pick up a used 1080 Ti if you haven't already and be happy - it can run all games just fine at high resolutions.
Freaking THANK YOU. Nvidia literally took one for the team, yet the RTX 20 generation is incessantly parroted as a move made due to lack of competition. Can you believe this madness? "Lazy Nvidia" would've said screw the RT noise and just released okay cards at not-so-good prices. Instead they sank crazy R&D into RT, ate an ocean of bad PR because they released when there was no software support, and, oh yeah, shifted the entire freaking industry for the foreseeable future. Now we have AMD, Sony, and Microsoft on board, meaning ray tracing is completely legitimized and here to stay, likely forever.

But Nvidia gets no props for this because we didn't get to upgrade this year...

The gamer's inability to think past its nose irritates the fuck out of me in cases like this.
 
Dec 15, 2017
1,590
Well, if anything… AMD is to blame here. Nvidia is the market leader, and people go and willingly choose their graphics cards. The marketplace has voted with its wallet, and NVIDIA has won so far. It's AMD's job to put pressure on NVIDIA by releasing new graphics cards at a faster pace. I want to upgrade my GTX 760, but only if I can get a card for 200 USD that gives me at least GTX 1080 performance.
Six and a half years after the GTX 760's release (a mid-range card, by the way), the lowest-end gaming card you can get is an RX 560 for around 120 USD, a card that just trades blows with the 760.

I am really looking forward to the RX 5500. Why the interest in a low-end card? We need the new low end to be at least at RX 590 level so that tech at the high end follows suit. Polaris tech is more than 3 years old by now, and those GPUs still sell like hotcakes. I understand that we had the mining craze, and also, let's be honest: most gamers do not NEED more graphics performance than an RX 570 to play modern games at 1080p. Pop one of those bad boys into an old PC and you can game just fine, at least until the new consoles are released.

Last but not least, I would like NVIDIA and AMD to find a workaround for dual-GPU setups. This gen was just terrible at that. I loved seeing those enthusiast rigs with 2-way and 3-way SLI configs driving multi-monitor setups at high framerates. As modern (NVIDIA) GPUs have low power consumption, 2-way SLI could be a viable upgrade path if they solve the inherent issues with the technology.
 

Kieli

Self-requested ban
Banned
Oct 28, 2017
3,736
Freaking THANK YOU. Nvidia literally took one for the team, yet the RTX 20 generation is incessantly parroted as a move made due to lack of competition. Can you believe this madness? "Lazy Nvidia" would've said screw the RT noise and just released okay cards at not-so-good prices. Instead they sank crazy R&D into RT, ate an ocean of bad PR because they released when there was no software support, and, oh yeah, shifted the entire freaking industry for the foreseeable future. Now we have AMD, Sony, and Microsoft on board, meaning ray tracing is completely legitimized and here to stay, likely forever.

But Nvidia gets no props for this because we didn't get to upgrade this year...

The gamer's inability to think past its nose irritates the fuck out of me in cases like this.

They released the 1660 Ti. Why not offer a full spectrum of non-RTX cards as a complement and let consumers decide whether they value ray tracing or not? At some point, it has to happen. That can even be now. But there are ways to make the transition less bumpy.
 

Deleted member 12790

User requested account closure
Banned
Oct 27, 2017
24,537
They released the 1660 Ti. Why not offer a full spectrum of non-RTX cards as a complement and let consumers decide whether they value ray tracing or not? At some point, it has to happen. That can even be now. But there are ways to make the transition less bumpy.

What you describe is a good way to stall adoption. This is a well-understood principle of the technology adoption curve, and it is actually very bad for developers. When you start offering less advanced, cheaper technical solutions during the infancy of a new technology's life, it halts its adoption. Adoption happening over time isn't an accident or a foregone conclusion. It's actually a cycle - developers need an audience with hardware to start pumping out content, and audiences need content to buy the hardware. If nobody has the hardware, developers won't pump out the content to support it, as this is a rendering process very far removed from conventional rasterization.

It's expensive right now because it's new and only enthusiasts will pick it up, with the expectation that that audience will grow. Offering those same enthusiasts a non-RT option stalls the adoption.
 

Deleted member 35204

User requested account closure
Banned
Dec 3, 2017
2,406
You wanted nothing changed, just a bigger Pascal die. Ok, got it.

Regardless, your comments are now getting into criticism of the Pascal -> Turing architecture changes without the knowledge to have that discussion. This isn't a criticism of your knowledge, as I am in that same boat; I just know enough to avoid that particular conversation.
Don't put words into my mouth and don't go around assuming things.
I said that, given how Turing turned out on the bigger chips, I would have preferred a bigger Pascal chip at the same price (or a Turing without the baggage); that doesn't mean I want nothing changed.
Second: more cache is always good - lower latency from keeping more data in on-die memory rather than going off-chip - but we haven't seen that many improvements from it so far. Last but not least, the wider instruction set is good if you do compute and ML stuff, aka more tensor/RT baggage when you're talking pure "traditional" gaming performance, because they just added native low-precision integer pipes rather than reserving that space for more FP32.
 

karnage10

Member
Oct 27, 2017
5,498
Portugal
Maybe I'm in the minority, but I really enjoy what ray tracing brings to the table. That said, for me (I have a GTX 1070) the new GPUs don't yet offer enough performance to warrant a buy. I'd really enjoy ray tracing in strategy games like Total War and Battlefleet Gothic.
 

Lakeside

Member
Oct 25, 2017
9,209
Don't put words into my mouth and don't go around assuming things.
I said that, given how Turing turned out on the bigger chips, I would have preferred a bigger Pascal chip at the same price (or a Turing without the baggage); that doesn't mean I want nothing changed.
Second: more cache is always good - lower latency from keeping more data in on-die memory rather than going off-chip - but we haven't seen that many improvements from it so far. Last but not least, the wider instruction set is good if you do compute and ML stuff, aka more tensor/RT baggage when you're talking pure "traditional" gaming performance, because they just added native low-precision integer pipes rather than reserving that space for more FP32.

How am I putting words in your mouth when I simply restated your position? You want a big Pascal.

Regarding the second part, you had no clue how much of the die was "baggage" yet you do feel knowledgeable enough to critique the evolution of their architecture.
 

1-D_FE

Member
Oct 27, 2017
8,246
The Nvidia Turing GPUs already support VRR via HDMI, and even provide 4K/120 Hz via HDMI now too.

At what color bit depth? HDMI 2.0 doesn't have the necessary bandwidth for this. If the above statement is factually correct, how are people doing this? And how much of a hit does IQ take?

All of these cards are kind of crappy investments for this very reason. If you want full 4K/120 Hz with VRR, we need to wait for cards that ship with full HDMI 2.1 silicon.
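
For reference, here's the rough bandwidth math behind that question in Python. It ignores blanking overhead (real links need a bit more) and assumes the usual effective data rates of ~14.4 Gbit/s for HDMI 2.0 and roughly 42 Gbit/s for HDMI 2.1:

def raw_gbps(width, height, refresh_hz, bits_per_pixel):
    # Uncompressed pixel data rate in Gbit/s, ignoring blanking intervals.
    return width * height * refresh_hz * bits_per_pixel / 1e9

HDMI_2_0_GBPS = 14.4   # ~18 Gbit/s signalling minus 8b/10b encoding overhead

for label, bpp in [("8-bit RGB 4:4:4", 24), ("10-bit RGB 4:4:4", 30), ("8-bit 4:2:0", 12)]:
    needed = raw_gbps(3840, 2160, 120, bpp)
    verdict = "fits" if needed <= HDMI_2_0_GBPS else "exceeds"
    print(f"{label}: {needed:.1f} Gbit/s ({verdict} HDMI 2.0)")

# 8-bit RGB 4:4:4  ~23.9 Gbit/s -> exceeds HDMI 2.0
# 10-bit RGB 4:4:4 ~29.9 Gbit/s -> exceeds HDMI 2.0
# 8-bit 4:2:0      ~11.9 Gbit/s -> fits, which is why 4K/120 over HDMI 2.0
#                                  generally means chroma subsampling until
#                                  full HDMI 2.1 silicon arrives.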