"It isn't."
It really isn't, and it's measured in comparisons between GPUs. Price for performance.
Spending over $100 for a few extra frames is poor value.
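If anyone wants to put that "price for performance" framing into actual numbers, here's a quick back-of-the-envelope sketch. Every figure in it is made up purely for illustration (hypothetical card names, prices and framerates); swap in whatever street prices and benchmark averages you trust:

```python
# Back-of-the-envelope price/performance sketch. All numbers are made up
# for illustration; plug in real prices and real benchmark averages.
cards = {
    "hypothetical_card_a": {"price_usd": 400, "avg_fps": 90},
    "hypothetical_card_b": {"price_usd": 520, "avg_fps": 97},
}

# Dollars per average frame for each card.
for name, c in cards.items():
    print(f"{name}: {c['price_usd'] / c['avg_fps']:.2f} $/frame")

# Marginal cost of the extra frames when stepping up from card A to card B.
a, b = cards["hypothetical_card_a"], cards["hypothetical_card_b"]
extra_frames = b["avg_fps"] - a["avg_fps"]
extra_cost = b["price_usd"] - a["price_usd"]
print(f"Step up: {extra_cost} USD buys {extra_frames} extra fps, "
      f"i.e. {extra_cost / extra_frames:.2f} $ per extra frame")
```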
I think it's pretty obvious that Nvidia should have launched a GTX 2080 Ti (or a 1680 or whatever number they wanted to put on it) alongside the RTX one. They were probably afraid that it'd stunt ray tracing, but with the consoles next year, I doubt it, and ray tracing is going to take off no matter what. I think most of us who bought an RTX card knew we were spending extra not so much for performance but for the promise of new rendering techniques. Those who aren't interested and want a card that doesn't have that RT silicon are just going to buy an AMD card and will be more than happy with it I'm sure, so yeah, Nvidia left a gap in the market there and AMD went for it.
Sounds like AMD have some form of hardware RT acceleration coming soon, since both the PS5 and Scarlett seem to have it even if it isn't in their desktop parts yet.
I expect prices will settle down a bit with the next generation of Nvidia and AMD cards, as there'll be more competition again and Nvidia won't be the only ones in the RT game. Their tech will also have matured and should be cheaper to produce.
For now, the RTX cards are for people like me and Dark1x and Dictator who are as interested in bleeding edge graphical techniques as we are in high framerates (you can be interested in both!), and all that extra silicon (and all the hardware and software development costs that went into the product) pushed up the prices way more than just increasing clock speeds and CUDA core counts.
In two or three years, RT acceleration is going to be a must have in any mid to high end GPU, but we aren't there yet.
You can really tell who's been PC gaming for a long time vs those who have not.
The Turing cards are a significant upgrade in so many ways but, like GeForce 3, this is just the beginning.
"Well, they're competing against non RT parts that offer better dollar to framerate metrics (but missing those RT features) anyway. Nvidia only put out a GTX based on this chipset at the low end. They could have done one at the high end and priced it competitively against AMDs cards."
With the RTX and tensor hardware using such a small percentage of the die, it wouldn't be a significant benefit. Few people are going to pay a significant amount of extra money just for ray tracing before software is available. Essentially what you are saying is that RTX shouldn't have happened at all. This stuff has to start somewhere.
You're not wrong, but just like the GeForce 3 cards, I feel like waiting for the next generation is a good idea. I want the card that is to ray tracing what the Radeon 9700 Pro was to shaders. Until then my 980 Ti will do.
Well, they're competing against non RT parts that offer better dollar to framerate metrics (but missing those RT features) anyway. Nvidia only put out a GTX based on this chipset at the low end. They could have done one at the high end and priced it competitively against AMDs cards.
Personally, I'm not sure just how viable RT is on the lower RTX cards. It's certainly able to hit decent performance and IQ on the 2080 Ti, but below that I'm not necessarily convinced. Those of us who bought the RTX 2080 Ti would have bought it anyway. Perhaps you'd have lost 2080 and 2070 buyers, though. But as I mentioned, they're competing with non RT parts already, and people who aren't yet convinced RT is worth paying more money for are going to look outside RTX either way.
I don't think that means RTX doesn't happen. Ray tracing is very much the near future of rendering. It is and was inevitable. Fully rasterized games only offer so much, and only go so far. As we get hardware that can handle RT, it's going to take off. Control with RTX is clearly closer to what Remedy wanted to do than without it. It's an astoundingly good looking game that hints at the type of dynamism we might see in larger environments as we move into the next generation.
The same goes for people who think Nvidia might have killed RT by overpricing these cards: it's not going to matter in the long run what they were priced at. Nvidia are still going to be around in a few years, and ray tracing will become commonplace before you know it, no matter which vendors are making GPUs come the middle of the next decade.
"Well my whole thought process was that the card would be the same as the 2080 Ti in terms of rasterization performance, but offered at a cheaper price."
Removing RTX would free up 8-10% die space, and performance isn't going to scale linearly. You would wind up with a card so close to the regular 2080 Ti that it'd be within overclock range, but lack ray tracing.
I'm not sure there's a place for that at the high end in terms of product segmentation, plus it would undercut ray tracing development as has been mentioned a number of times. This is the future of rendering. Nvidia can't dump all the R&D burden onto a few cards that nobody buys.
And it would still be a MASSIVE die.
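To make the "performance isn't going to scale linearly" point a bit more concrete, here's a rough sketch of the trade-off being described. The ~9% freed die area comes from the 8-10% figure quoted above; the 0.7 scaling exponent is purely an assumption for illustration, not a measured value:

```python
# Rough sketch: what do you gain by spending the RT/tensor area on more SMs?
# The 8-10% die-area figure is from the thread; the scaling exponent is an
# assumption (shader counts rarely translate 1:1 into frames per second).
freed_area_fraction = 0.09   # assume ~9% of the die freed by dropping RT/tensor
scaling_exponent = 0.7       # assumed sublinear scaling of fps with shader count

extra_shaders = 1.0 + freed_area_fraction           # ~9% more SMs in the same area
perf_gain = extra_shaders ** scaling_exponent - 1.0
print(f"Estimated uplift: {perf_gain * 100:.1f}%")
# With these assumptions you land around 6%, which is roughly overclocking
# territory on a stock 2080 Ti, hence "within overclock range, but lacking RT".
```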
It isn't.
For me, HDMI 2.1 VRR support and OpenGL performance matter, so an AMD card has poor value for me.
Sure, I could get more frames in some games with an RX 5700, but do I really want to deal with screen tearing and judder in 2019? No.
Sadly, I can't use future features right now. There isn't even an ETA for it.
The remark was for this:
"But Nvidia is the bigger brand name that a lot of folks are just going to keep flocking to, regardless of how much they get bent over."
I just gave you two reasons why I prefer an Nvidia card.
Can I use freesync with my LG C9 and an AMD card? No, so it's useless to me. It has nothing to do with "getting bent over".
"I was referring to price/performance. It's a thing whether you want to admit it or not."
I have no problem admitting it, it's a fact. The problem is this part here:
"It really isn't, and it's measured in comparisons between GPUs. Price for performance. Spending over $100 for a few extra frames is poor value."
It's just plain wrong. Price-performance ratio is a part of value, but not all of it.
"You prefer an Nvidia Card for VRR, which you admitted is a future feature?"
For AMD, it isn't even close to being released. For Nvidia, it is available right now.
What's useless about the AMD card that the Nvidia Card is offering for your TV?
You prefer an Nvidia Card for VRR, which you admitted is a future feature?
Well put.
I'd say raytracing at this point is both too early and too late. Too early because of the ridiculous price and meager performance you get out of it, and too late because at this point developers have honestly mastered the art of tricking us with lighting/shadows/reflections that on the whole look fantastic without being really accurate or "real."
Wasting more than half of the die size for ray tracing crap instead of raster performance is the stupidest thing they could do
"The Nvidia Turing GPUs already support VRR via HDMI, and even provide 4k/120hz via HDMI now too."
HDMI 2.1 VRR support is coming to AMD cards as well.
Pretty sure nobody is even offering VRR right now?
Between my Freesync monitor and Enhanced Sync I'm not dealing with screen tearing or judder, so I don't know what that remark was for?
"I'm sticking with my 980ti for a few more years until I have good reason to upgrade. Still runs games amazingly well @ 1440p."
Same here, want to upgrade, but the cost of doing it isn't justified with the current lineup of cards.
"Wasting more than half of the die size for ray tracing crap instead of raster performance is the stupidest thing they could do"
RT and Tensor cores don't take 50% of the die.
"At the unveiling Nvidia showed a die where half of it was only RT cores and more space wasted for tensor cores; regardless of how accurate it was, just look at how enormous the die size for Turing is while still having more or less the same number of stream processors/TMUs/ROPs."
Again, where are people getting these crazy estimations of wasted die size?
There is a ton of misinformation being handed out on this front.
The Nvidia Turing GPUs already support VRR via HDMI, and even provide 4k/120hz via HDMI now too.
At the unveiling Nvidia showed a die where half of it was only RT cores and more space wasted for tensor cores; regardless of how accurate it was, just look at how enormous the die size for Turing is while still having more or less the same number of stream processors/TMUs/ROPs.
"At the unveiling Nvidia showed a die where half of it was only RT cores and more space wasted for tensor cores; regardless of how accurate it was, just look at how enormous the die size for Turing is while still having more or less the same number of stream processors/TMUs/ROPs."
Nope. https://www.techpowerup.com/254452/...ses-tpc-area-by-22-compared-to-non-rtx-turing
It seems that most of the area increase compared to the Pascal architecture actually comes from increased performance (and size) of caches and larger instruction sets on Turing than from RTX functionality.
So even without any raytracing hardware Turing would be much bigger than Pascal.
There was no useful data in the overview they initially provided, and a lot of the early misinformation is what people have hung onto. It turns out that most of the additional space you're describing comes from cache and an increased instruction set.
Eventually someone was able to get their hands on an actual image of the TU106 and TU116 dies several months ago. It was reverse "engineered" to discern exactly how much of the die space was RTX and tensor: 8-10%. I saw it in multiple places months back, but at the moment can only find an analysis that is uncontested on Reddit.
Not all of the die space is used, so the percentage of the used space is larger; my math comes out to ~18% for TU106, but it's possible I'm missing something. I've seen others claim 22%.
Source:
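For anyone trying to reconcile the 8-10%, ~18% and 22% numbers floating around in this thread, here's the arithmetic. The 22% TPC growth is the figure from the linked TechPowerUp piece; the share of the die occupied by TPCs is an assumption for illustration only:

```python
# How much of the whole die do the RT/tensor additions take if each TPC grew ~22%?
# The 22% figure is from the linked TechPowerUp piece; the share of the die that
# TPCs occupy is an assumption here (the rest is memory controllers, cache, I/O...).
tpc_growth = 0.22          # RTX TPC is ~22% larger than a non-RTX Turing TPC
tpc_share_of_die = 0.50    # assumed: TPCs take about half of the total die area

rt_tensor_share = tpc_share_of_die * (tpc_growth / (1.0 + tpc_growth))
print(f"RT + tensor share of the whole die: {rt_tensor_share * 100:.1f}%")
# With these assumptions you land around 9%, in line with the 8-10% estimate above;
# counting only the "used" logic area pushes the percentage up toward the ~18-22% claims.
```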
"That's awesome. I didn't know that Nvidia already had it, until this thread. So does that mean you don't need G-Sync anymore?"
Yeah, HDMI VRR output on Turing removes the need for G-Sync on the C9. It does the same thing. But the C9 is receiving a G-Sync update so it can also support pre-Turing cards.
Ok, my bad, but still... that cache doesn't seem to do much, and more instruction sets... well, let's just say I, and I think many others, would have appreciated much more either a cheaper video card without the RT baggage or a full fat die of rasterizing performance.
"NVIDIA shouldn't be pricing their cards like they are. Though the reason there is not a huge difference is counterintuitive. NV ain't trying to scam you on the chip. It's as good as it can be. The issue is that we've hit the zone where engineering gets pretty tough. We're talking quantum effects, quantum entanglement, electrons doing weird as shit things. This is engineering at scales we've never engineered before. It's going to slow down, and it's going to slow down massively. That's why NV is introducing ray tracing. They're hoping it's something drastic they can improve year over year. Because the rest of the chip? That's not happening anymore. Period. Physics says no."
Nvidia is still on 12nm (16nm). They're jumping to 7nm next year.
"Nvidia is still on 12nm (16nm). They're jumping to 7nm next year."
That is precisely my point. NV is still at 12nm because jumps are getting harder and harder to make. And even after making the jump, there are further and further diminishing returns. And once we jump to 7nm, that's it for a while. The returns we used to get every year are gone. Back then, folks were upgrading their process size constantly, and that's why each upgrade brought so much extra performance.
There's a direct link to the source one post above mine.
They didn't ruin anything. They are going to drag you guys along to the future with the rest of us, and ray tracing is that future. Many of us have been waiting for it for decades, and the 20 series was the first step. It had to start somewhere, and now even the next consoles are fully committed, which just makes it that much more of a reality.
People can complain about the performance gains, they can whine about the focus on RT cores, but it had to happen sooner or later, and Nvidia was the only one in a secure enough position to take the leap.
In 5-10 years, when ray tracing is the standard, all of these complaint posts about it are going to age like milk. They are small minded and short-sighted. Big deal, they went one gen without the massive gains everyone is used to, and btw, they also introduced consumer level tech that will change the way games are rendered forever.
Well, here's the thing.
1) RT is the future of rendering. It has to start somewhere. It was never going to be a convenient time but, personally, I think the end of a console generation is THE BEST time for it as most games aren't so demanding as to require that much more power than a 1080ti can deliver. There are a few exceptions but we're good right now - next-gen will bring new, more demanding games where larger leaps are more necessary.
2) This ties into my comment - PC hardware in the last decade has become all about performance boosts but little else. It's just about increasing what we already have. That also means that newcomers to the world of PC just sort of feel 'that's how it should be' when, in reality, it wasn't like this in the past. Each new paradigm shift in graphics requires sacrifice. As I said, programmable shaders were a key feature of GeForce 3 and it was critical to the development of graphics - but the GF3 wasn't a super fast card and few games launched with those features during its heyday. It needed to happen, however. Same deal with hardware T&L on GeForce 256 and many many other cards from different companies. Important features were added that were critical to the future of graphics but you didn't always get massive performance boosts. That's what is happening here.
Basically, what I'm saying is that you need to look beyond simple performance boosts and consider the big picture. Is it expensive? Yes but the answer to that is - wait. This was an important leap and it needed to happen. This was the absolute best time for it too.
This is why the knee-jerk reactions from people do bother me - that attitude hurts the advance of graphics for a boost that we don't REALLY need at this exact moment. I understand why people want that typical upgrade cycle, but this is better for the future of graphics and next-generation games.
So, if it's too expensive and you're disappointed - again, just wait. Pick up a used 1080ti if you haven't already and be happy - it can run all games just fine at high resolutions.
Freaking THANK YOU. Nvidia literally took one for the team, and the reaction to the RTX 20 generation is incessantly parroted as a move done due to lack of competition. Can you believe this madness? "Lazy Nvidia" would've said screw the RT noise and just released okay cards at not-great prices. Instead they sunk crazy R&D into RT, ate an ocean of bad PR because they released when there was no software support, oh yeah, and shifted the entire freaking industry for the foreseeable future. Now we have AMD, Sony, and Microsoft on board, meaning ray tracing is completely legitimized and here to stay, likely forever.
But Nvidia gets no props for this because we didn't get to upgrade this year...
The gamer's inability to think past its nose irritates the fuck out of me in cases like this.
They released the 1660 Ti. Why not offer a full spectrum of non-RTX cards as a complement and let consumers decide whether they value ray tracing or not? At some point, it has to happen. That can even be now. But there are ways to make the transition less bumpy.
"Don't put words into my mouth and don't go around assuming things."
You wanted nothing changed, just a bigger Pascal die. Ok, got it.
Regardless, your comments are now getting into criticism of the Pascal -> Turing architecture changes without the knowledge to have the discussion. This isn't a criticism of your knowledge, as I am in the same boat. I just know enough to avoid that particular conversation.
"Wasting more than half of the die size for ray tracing crap instead of raster performance is the stupidest thing they could do"
No, it was the right thing to do.
Don't put words into my mouth and don't go around assuming things.
I said that given the results of how Turing turned out on the bigger chips, I would have preferred a bigger Pascal chip at the same price (or a Turing without the baggage); that doesn't mean I wanted nothing changed.
Second: more cache is always good, since it reduces latency by keeping more data in on-die memory rather than sending it off-chip, but we have not seen that many improvements from it so far. Last but not least, the wider instruction set is good if you do compute and ML stuff, a.k.a. more Tensor/RT baggage when we're talking pure "traditional" gaming performance, because they just added native low-precision integer pipes rather than reserving that space for more FP32.