
JahIthBer

Member
Jan 27, 2018
10,376
I just got a 2080S a week ago for $750.

Here's hoping that the 3080 starts at $1000-$1200 and isn't that much better; maybe it'll just be a beefed-up 2080 Ti.
The 3080 Ti probably won't even be that price this time; it sold poorly everywhere except Germany and places using it for AI, unlike the other Ti's, which sold like hotcakes.
 

Deleted member 2229

User requested account closure
Member
Oct 25, 2017
6,740
They are using the same chips as the gaming cards, just with more VRAM usually. Every x80 Ti has a Titan counterpart that is similar in performance.


Same price as 3080Ti means that the markup will be similar.

Again: between the option of an expensive Titan card coming first with a cheaper Ti later, and both of them arriving at the same time at a similar price, the first option is for smart buyers and the second is for those who buy product names like "xxxTi" instead of looking at specs and performance.
Fair
 

Haint

Banned
Oct 14, 2018
1,361
I just got a 2080S a week ago for $750.

Here's hoping that the 3080 starts at $1000-$1200 and isn't that much better; maybe it'll just be a beefed-up 2080 Ti.

It will be '$699' with a real-world +15-25% delta over a 2080 Ti. This is a node-shrink generation with what are likely to be legitimately competitive AMD cards. They can't get away with gouging or renaming/reshuffling the performance stack again. The 2080 Ti will be a two-year-old card at time of release; a Super equivalent for $1000+ is a ridiculous musing.
 

Kieli

Self-requested ban
Banned
Oct 28, 2017
3,736
20GB is a fucking enormous quantity of headroom for VRAM, even considering next-generation asset/texture standards raising the load on top of 4K rendering and a healthy dose of AA. We're bound by current-generation titles, and the vast majority of modern games don't hit 10GB of VRAM at 4K. Control is, I think, currently the most demanding: at 4K with all settings maxed out and RT enabled, you're looking at almost 9GB of VRAM. To top 10GB you need to go above 4K towards 8K, which naturally milks VRAM for all it can. But that calls into question the probability of anybody actually playing at 8K with maxed-out RT and settings, which in next-generation games will be so fucking demanding and performance-intensive that I'm not convinced these cards can accommodate it.

Next gen is arriving and devs will have access to 16GB of unified ram to play with. Expect min and max reqs to go up.
 
Nov 8, 2017
13,096
Next gen is arriving and devs will have access to 16GB of unified ram to play with. Expect min and max reqs to go up.

Yeah but that's like having 11-12GB VRAM now. It's not a realistic requirement for games, and 20GB would be overkill for outperforming consoles that have 16GB shared with the OS.

If there's any truth to this rumour at all (which I'm not convinced there is), I would have to assume the more conservative values of 10GB for the 3080 and 8GB for the 3070. If it turns out that the consoles are higher than this figure, then these values would seem more reasonable.
 

Deleted member 13560

User requested account closure
Banned
Oct 27, 2017
3,087
I'm thinking of skipping Ampere and waiting on Hopper so I can do a complete system rebuild utilizing DDR5, which should be consumer-ready by the time Nvidia's MCM GPU is released. I have a feeling we'll see 8800 GTX levels of performance increase. I've never bought a new GPU right before a console release; I've always gotten the first new architecture after a new console launch, and I've done that for the last 2 console generations. I'm gonna need to talk myself down when the 3000 series is released. It's very tempting.

With MCM, I also think it will be the nail in the coffin for consumer mGPU.

Speaking of older GPUs... I was looking at the MSRPs for the 8800 GTX and 8800 GTX Ultra, and in today's money they would be about $750 and $1030 after inflation. Flagship cards have always destroyed bank accounts. I couldn't afford those cards back in the day since I was in my early 20s and broke.
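
As a sanity check on those numbers, here's a rough sketch of the inflation math (the launch MSRPs and CPI values below are approximate, commonly cited figures, not exact records):

```python
# Rough inflation sketch for the 8800 GTX / 8800 GTX Ultra MSRPs mentioned above.
# Launch prices and CPI values are approximate assumptions, not exact records.

def adjust_for_inflation(price_usd, cpi_then, cpi_now):
    """Scale a historical price by the ratio of CPI index values."""
    return price_usd * cpi_now / cpi_then

CPI_2020 = 258.8  # approximate US CPI-U annual average for 2020

cards = {
    "8800 GTX ($599, 2006)": (599, 201.6),        # ~2006 CPI-U average
    "8800 GTX Ultra ($829, 2007)": (829, 207.3),  # ~2007 CPI-U average
}

for name, (price, cpi_then) in cards.items():
    today = adjust_for_inflation(price, cpi_then, CPI_2020)
    print(f"{name} -> ~${today:.0f} in 2020 dollars")

# Lands around $770 and $1035, i.e. the same ballpark as the $750/$1030 above.
```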
 

EatChildren

Wonder from Down Under
Member
Oct 27, 2017
7,029
Next gen is arriving and devs will have access to 16GB of unified ram to play with. Expect min and max reqs to go up.

Of course, as with any generation. 20GB is enormous though. Wouldn't complain if they do end up in this ballpark.

20GB should be enough to supersample 4K down to 1440p or even 1080p, right?

Modern cards have enough VRAM to render 4K. 20GB even with a new generation of games should be more than sufficient.
 

Isee

Avenger
Oct 25, 2017
6,235
I think it's impossible to estimate how much VRAM we'll need in the future without knowing how much shared memory the new consoles are going to have.
A 24GB PS5 would probably mean ~22GB(?) shared between the CPU and GPU, which could become a problem for 8GB to theoretical 10GB cards, especially if you want to run even higher shadow resolutions, more detail, or above-console texture settings.
That said, I really don't think we'll see 20GB on gaming cards anytime soon. Maybe a 3080 Ti with 16GB? But I don't expect that card to be below €1,400.
 

Deleted member 11276

Account closed at user request
Banned
Oct 27, 2017
3,223
I think it's impossible to estimate how much VRAM we'll need in the future without knowing how much shared memory the new consoles are going to have.
A 24GB PS5 would probably mean ~22GB(?) shared between the CPU and GPU, which could become a problem for 8GB to theoretical 10GB cards, especially if you want to run even higher shadow resolutions, more detail, or above-console texture settings.
That said, I really don't think we'll see 20GB on gaming cards anytime soon. Maybe a 3080 Ti with 16GB? But I don't expect that card to be below €1,400.
Wasn't 16 GB total, with 13 GB usable, the latest and most reliable information? That would mean roughly 5 GB of VRAM if the consoles use 8 GB as system RAM.
 

laxu

Member
Nov 26, 2017
2,782
You guys think it's just unlucky that they massively underdelivered on RTX performance with the 20XX series, with no way to improve it considerably in future cards?

4352 CUDA cores, 544 tensor cores and a measly 68 RT cores on the 2080 Ti, with only 46 RT cores on the 2080. What, are they 10x the size or that much more expensive to make?

If the 3080 doesn't have at the very minimum over 100/150, closer to 200 RT cores, I'm going to re-evaluate what sort of system I'm going to build.

The 20xx series are huge chips, which makes them expensive and also means that they have limits to what they can cram onto them. It's not an accident that the Titan RTX is not significantly faster for gaming than the 2080 Ti. For a first gen raytracing product Nvidia also had no reason to go all in on RT cores since even now only a handful of games offer support. When the majority of games shift to use raytracing more then we will start seeing GPUs that will increasingly have more hardware towards accelerating those processes.

Just throwing in more RT cores isn't necessarily going to give us raytracing with a low performance hit. As I understand it, the problem isn't that the BVH tree traversal acceleration (which the RT cores do) is too slow but the shading that follows. So having more efficient CUDA cores with some extra functionality to handle raytraced data faster might be the solution rather than just doubling the amount of RT cores.

At the same time developers and API vendors are working on techniques to reduce the rendering load and do raytracing more efficiently.
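
To put some toy numbers on that, here's a minimal sketch of the frame-time argument (the traversal/shading split below is an illustrative assumption, not a measurement of any real GPU or game):

```python
# Toy frame-time model for "more RT cores only helps up to a point".
# The 30/70 split between BVH traversal and shading is an illustrative
# assumption, not a measurement of any real GPU or game.

def frame_time_ms(traversal_ms, shading_ms, rt_speedup):
    """Speed up only the BVH traversal / intersection part by rt_speedup."""
    return traversal_ms / rt_speedup + shading_ms

traversal, shading = 5.0, 11.7  # ~30% / ~70% of a 16.7 ms (60 fps) frame

for speedup in (1, 2, 4, 100):
    t = frame_time_ms(traversal, shading, speedup)
    print(f"{speedup:>3}x RT throughput -> {t:.1f} ms/frame ({1000 / t:.0f} fps)")

# Even a huge RT-core speedup only removes the traversal slice;
# the shading that follows still caps the frame rate near ~85 fps here.
```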
 

Banzai

The Fallen
Oct 28, 2017
2,585
I was actually planning to set a big amount of money aside for this, but reading this, the price must be enormous... yeah, probably getting an xx70 after all.
You know, if this is even true.
 

laxu

Member
Nov 26, 2017
2,782
As someone who doesn't really understand any of this stuff on a technical level, does this mean that:

1. Ray tracing on the new consoles will be extremely limiting to everything else (frame rate, settings, etc.)?
2. The price of a 20GB VRAM graphics card isn't as daunting as it sounds (not that the consoles will have 20GB, but they will certainly need a somewhat large amount if ray tracing is that VRAM-intensive)?
3. The consoles will cost way more than the current thinking?

I don't suspect it's as clean as any of the answers here, but it seems like something would have to give.

Anyone can feel free to just say "you don't know what you're talking about" given that's likely the case, LOL.

Either way, I'm anxious to upgrade my 1070, this all sounds exciting (and expensive!).

1. Raytracing will have a large performance hit, so I expect we won't see it in the same capacity on consoles as we do on PC. I expect it will instead be used in more controlled situations like realtime cutscenes, interior sections, etc., wherever it will make a significant visual impact.
2. Depends largely on what GDDR6 or whatever costs on the market. 20 GB seems excessive and we are unlikely to see that on anything but the Titan parts. You have to remember that consoles have to cram into the same memory pool everything that is handled by system RAM on PCs, which means less is left for games to utilize.
3. Probably not. More than last gen, probably, especially if they have several tiers of device.
 
Oct 25, 2017
2,932
Pre-DLSS, FFXV with all of the settings turned on/max chewed through 8GB cards and made 11GB cards sweat. Now imagine the whole game updated to support RT/PT like the Back Stage demo. 10GB would probably be a minimum recommended spec, with 16GB+ being the ideal.

It's funny because you could choose how much total memory you wanted to use for the game before "flushing", tweaking the TRAM setting in the .ini file for the VRAM and using the Special K mod for system RAM.
 

Riflen

Member
Nov 13, 2017
107
I meant choose speed and no 'eye candy', aka ray tracing, for a better price.
And I disagree with you completely. I'm sure a $699 "3080 Ti" from AMD with no RTX would sell like hotcakes. Many people don't care about ray tracing; they just want to play the latest games and are happy with baked lighting. And devs will continue to support baked lighting for a very long time.

Nowhere did I say that such a card wouldn't sell. You're not reading what I write. It doesn't matter whether there is a market; it would be a big mistake for Nvidia to do it, so it won't happen.
If this were 2017 and we were speculating about the Turing release, you'd have a great point. But Nvidia has already hitched their wagon to RTX and they cannot about-face. Ampere or whatever comes next will be plenty fast at rasterisation, and you seem to be under the impression that ray tracing hardware support is the sole reason why prices are where they are. This isn't the case.
 
Nov 28, 2017
1,356
Fair to assume that BIG Navi (N23) will sit in between a 3070 and 3080, and the full/uncut GA102 chip will be 20-25% faster?
 

dgrdsv

Member
Oct 25, 2017
11,846
Of course, as with any generation. 20GB is enormous though. Wouldn't complain if they do end up in this ballpark.
The 20GB figure is there simply because that's what you can get on a 320-bit bus. Same goes for the 16GB one.
I think it's possible that this time around NV will allow the AIBs to produce custom models with twice the VRAM, so a 3070 with 16GB and a 3080 with 20GB will be available, but the prices will likely be considerably higher than those of the reference 8/10GB models.
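
For reference, a quick sketch of the bus-width arithmetic behind those figures (assuming one GDDR6 chip per 32-bit channel and the common 1GB/2GB chip densities):

```python
# Rough sketch: VRAM capacities implied by a GDDR6 bus width.
# Assumes one memory device per 32-bit channel and the common 1GB/2GB densities.

def vram_options_gb(bus_width_bits, densities_gb=(1, 2)):
    chips = bus_width_bits // 32  # GDDR6 devices are 32 bits wide
    return {density: chips * density for density in densities_gb}

for bus in (256, 320):
    print(f"{bus}-bit bus -> {vram_options_gb(bus)}")

# 256-bit -> 8 chips  -> {1: 8, 2: 16}   (the rumoured 3070 options)
# 320-bit -> 10 chips -> {1: 10, 2: 20}  (the rumoured 3080 options)
```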

Fair to assume that BIG Navi (N23) will sit in between a 3070 and 3080, and the full/uncut GA102 chip will be 20-25% faster?
We can't assume anything on their comparative performance just yet.
 

Tovarisc

Member
Oct 25, 2017
24,401
FIN
Next gen is arriving and devs will have access to 16GB of unified ram to play with. Expect min and max reqs to go up.

16 GB of unified memory isn't that much when the OS takes 2-3 GB just for itself and you then divide the rest between general memory and VRAM; it starts to look like a ~6/6 split all of a sudden.
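
A back-of-the-envelope version of that split (the OS reservation and the even CPU/GPU ratio are assumptions for illustration, not confirmed console specs):

```python
# Back-of-the-envelope split of a 16 GB unified console memory pool.
# The OS reservation and the even CPU/GPU split are illustrative assumptions.

total_gb = 16
os_reserved_gb = 2.5                # somewhere in the quoted 2-3 GB range
available_gb = total_gb - os_reserved_gb

# If a game divides the remainder roughly evenly between general (CPU-side)
# data and graphics data, the "VRAM-like" portion ends up around:
gpu_share_gb = available_gb / 2
print(f"~{gpu_share_gb:.1f} GB for graphics out of {total_gb} GB total")
# ~6.8 GB, i.e. the "~6/6 split" described above.
```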
 

Rice Eater

Member
Oct 26, 2017
2,814
Even with the news of all this, I'm still planning on getting a 2060 or 5600 XT soon (maybe I'll push for a 5700, maybe not). It's likely the highest-end cards will be introduced first, with the 3070 being the cheapest but still costing at least $500. Secondly, I was cheap and bought a 1050 Ti back in late 2016 and I can't stand how weak it is anymore; I'm ready to upgrade now, even if I only stick with the new card for a year lol.

If I could go back in time I'd tell myself to at least put $40 more on the RX 470; then I think I could have held out until the 3060 or whatever the 1660 Ti successor is called.
 

Isee

Avenger
Oct 25, 2017
6,235
Wasn't 16 GB total, with 13 GB usable, the latest and most reliable information? That would mean roughly 5 GB of VRAM if the consoles use 8 GB as system RAM.

16GB sounds realistic currently, IMO. Maybe they'll add slower DDR3/4 just to hold the OS, like they did with the PS4 Pro; 8-10GB for video memory could happen that way.
Maybe they'll add a bit more shared RAM "last second" like they did with the PS4... We'll see.
 

JahIthBer

Member
Jan 27, 2018
10,376
Even with the news of all this, I'm still planning on getting a 2060 or 5600 XT soon (maybe I'll push for a 5700, maybe not). It's likely the highest-end cards will be introduced first, with the 3070 being the cheapest but still costing at least $500. Secondly, I was cheap and bought a 1050 Ti back in late 2016 and I can't stand how weak it is anymore; I'm ready to upgrade now, even if I only stick with the new card for a year lol.

If I could go back in time I'd tell myself to at least put $40 more on the RX 470; then I think I could have held out until the 3060 or whatever the 1660 Ti successor is called.
A 2060 for $300 is the best bet for a "budget" card. VRS will be super important next gen, and the current AMD GPUs are going to age poorly fast because of it.
 
Nov 28, 2017
1,356
Sounds quite reasonable to me.

We can't assume anything on their comparative performance just yet.

I get it, but they're absolutely going to show their Ampere Quadro lineup at GTC 2020, even if this gets postponed to Siggraph or Gamescom (like the 20** RTX SKU reveal).

I was just basing the numbers on those "Big Navi 17% faster than 2080 Ti" rumors from earlier, and just assumed the full/uncut GA102 core will be 40% faster than the TU102. That chip will power their high-end Quadro lineup and the $2500 (or probably even more this time) consumer variant with Deep Learning performance cut down, an Ampere Titan RTX.

Honestly, if Big Navi isn't at least on par with the RTX 3080 while being $50 cheaper, all those internal nicknames of "Nvidia Killer" are going to look really dumb in retrospect.

I would be very (pleasantly) surprised if the fastest Radeon consumer GPU releasing in 2020 is on par with a 3080 or even trades blows with it. Even a Radeon VII situation would be surprising this time. Not because of AMD renaming their upcoming workstation Navi products for the discrete graphics segment (they did that with the MI50 because they needed to show a presence in high-end discrete graphics and clear MI50 volume), since Lisa Su has already confirmed we'll be seeing them get back to the high end, but because the leap of Ampere over Turing will be massive. A little more than Maxwell to Pascal.
 

mario_O

Member
Nov 15, 2017
2,755
Nowhere did I say that such a card wouldn't sell. You're not reading what I write. It doesn't matter whether there is a market; it would be a big mistake for Nvidia to do it, so it won't happen.
If this were 2017 and we were speculating about the Turing release, you'd have a great point. But Nvidia has already hitched their wagon to RTX and they cannot about-face. Ampere or whatever comes next will be plenty fast at rasterisation, and you seem to be under the impression that ray tracing hardware support is the sole reason why prices are where they are. This isn't the case.

It would be a big mistake to sell a ton of cards? Getting rid of all the extra silicon wouldn't reduce the price significantly?
I guess we'll agree to disagree.
If Nvidia continues with these crazy prices they'll end up destroying the high-end PC market. I'm sure the 2080 Ti is the worst-selling 'Ti' of all time, by far. And, my guess, Turing cards in general.
Many people will abandon PC and get a next-gen console. They look quite beefy this time and will have a reasonable price, unlike Nvidia's cards.
I think another generation of crazy prices, to force people into ray tracing, will be the real mistake.
Nvidia can keep selling RTX cards to the people that are willing to pay for them. But don't force it on everybody.
For many people it's still not worth it: going back to 1080p or playing at 30 fps on a $1,000 card. They would rather wait a couple of generations for the tech to evolve. And developers are going to continue to support baked lighting for many years anyway.
I think a high-end GPU at a reasonable price ($699 for a Ti) with no RTX would be very healthy for the PC market.
 

Rice Eater

Member
Oct 26, 2017
2,814
A 2060 for $300 is the best bet for a "budget" card. VRS will be super important next gen, and the current AMD GPUs are going to age poorly fast because of it.

Thanks for the insight. I was leaning more towards AMD because they provide better bang for the buck according to all the benchmarks I've been looking at recently. But I constantly hear about how the RTX cards come with extra features like DLSS, and now you mention VRS and its importance, so I guess I'll be sticking with team green.

The 2060 should be a good card for at least 2 years. By then I may be able to put together a mid-range PC that should beat either next-gen console.
 

Yogi

Banned
Nov 10, 2019
1,806
The 20xx series are huge chips, which makes them expensive and also means that they have limits to what they can cram onto them. It's not an accident that the Titan RTX is not significantly faster for gaming than the 2080 Ti. For a first gen raytracing product Nvidia also had no reason to go all in on RT cores since even now only a handful of games offer support. When the majority of games shift to use raytracing more then we will start seeing GPUs that will increasingly have more hardware towards accelerating those processes.

Just throwing in more RT cores isn't necessarily going to give us raytracing with a low performance hit. As I understand it, the problem isn't that the BVH tree traversal acceleration (which the RT cores do) is too slow but the shading that follows. So having more efficient CUDA cores with some extra functionality to handle raytraced data faster might be the solution rather than just doubling the amount of RT cores.

At the same time developers and API vendors are working on techniques to reduce the rendering load and do raytracing more efficiently.

I don't buy a £1000 GPU just for this year's RTX implementation; I want it future-proofed for at least a few years, and it seems to be all about RTX going forward. Are you telling me that the 3080 isn't going to have way more RT cores? How can that not be a good thing? Having more cores means more paths and bounces can be calculated, and the better the effect would look. I want to believe you that it wouldn't be faster because the shading is the bottleneck, but 46 and 68 RT cores sound PUNY!

Either way, performance needs to go up considerably now that we know RTX is what's coming.
 
Last edited:

Tovarisc

Member
Oct 25, 2017
24,401
FIN
It would be a big mistake to sell a ton of cards? Getting rid of all the extra silicon wouldn't reduce the price significantly?
I guess we'll agree to disagree.
If Nvidia continues with these crazy prices they'll end up destroying the high-end PC market. I'm sure the 2080 Ti is the worst-selling 'Ti' of all time, by far. And, my guess, Turing cards in general.
Many people will abandon PC and get a next-gen console. They look quite beefy this time and will have a reasonable price, unlike Nvidia's cards.
I think another generation of crazy prices, to force people into ray tracing, will be the real mistake.
Nvidia can keep selling RTX cards to the people that are willing to pay for them. But don't force it on everybody.
For many people it's still not worth it: going back to 1080p or playing at 30 fps on a $1,000 card. They would rather wait a couple of generations for the tech to evolve. And developers are going to continue to support baked lighting for many years anyway.
I think a high-end GPU at a reasonable price ($699 for a Ti) with no RTX would be very healthy for the PC market.

Don't worry, just because the 1st gen of RTX didn't sell in insane numbers doesn't mean it started the death of the PC as a gaming platform.

Or are you hoping it did?
 

TSM

Member
Oct 27, 2017
5,821
I don't see Nvidia putting anywhere near that much ram on even the higher end cards. More ram is one of the carrots they use to move people off the cards they already own. The only way that much ram would make sense is if they have some crazy new AI features that chew up vram like it's nothing.
 

BeI

Member
Dec 9, 2017
5,974
I don't see Nvidia putting anywhere near that much ram on even the higher end cards. More ram is one of the carrots they use to move people off the cards they already own. The only way that much ram would make sense is if they have some crazy new AI features that chew up vram like it's nothing.

Could just be future-proofing. Higher resolution textures that are drawn further could probably increase how much VRAM you need quite quickly.
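
As a rough illustration of how quickly that can add up (idealized numbers that ignore streaming systems, padding and engine-specific packing):

```python
# Rough illustration of how texture resolution scales VRAM use.
# Idealized numbers: ignores streaming, padding and engine-specific packing.

def texture_mib(width, height, bytes_per_texel, with_mips=True):
    base = width * height * bytes_per_texel
    total = base * 4 / 3 if with_mips else base  # a full mip chain adds ~33%
    return total / 2**20

for res in (2048, 4096, 8192):
    uncompressed = texture_mib(res, res, 4)  # RGBA8: 4 bytes per texel
    compressed = texture_mib(res, res, 1)    # BC7 block compression: ~1 byte per texel
    print(f"{res}x{res}: ~{uncompressed:.0f} MiB raw, ~{compressed:.0f} MiB BC7")

# Every doubling of texture resolution roughly quadruples memory use,
# which is why higher-res assets drawn at distance eat VRAM so quickly.
```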
 

TSM

Member
Oct 27, 2017
5,821
Could just be future-proofing. Higher resolution textures that are drawn further could probably increase how much VRAM you need quite quickly.

That's my point. They don't really have any incentive to future-proof their cards. They want customers to buy the next product, and more ram is one of the incentives they give users to upgrade.
 

EatChildren

Wonder from Down Under
Member
Oct 27, 2017
7,029
The 20GB figure is there simply because that's what you can get on a 320-bit bus. Same goes for the 16GB one.
I think it's possible that this time around NV will allow the AIBs to produce custom models with twice the VRAM, so a 3070 with 16GB and a 3080 with 20GB will be available, but the prices will likely be considerably higher than those of the reference 8/10GB models.

This makes a lot of sense. Thanks.
 

laxu

Member
Nov 26, 2017
2,782
I don't buy a £1000 GPU just for this year's RTX implementation; I want it future-proofed for at least a few years, and it seems to be all about RTX going forward. Are you telling me that the 3080 isn't going to have way more RT cores? How can that not be a good thing? Having more cores means more paths and bounces can be calculated, and the better the effect would look. I want to believe you that it wouldn't be faster, but 46 and 68 RT cores sound PUNY! Can't only be 68 paths to calculate or even close to that...

Either way, performance needs to go up considerably now that we know RTX is what's coming.

None of us have any idea how Nvidia will improve raytracing performance for next gen. What I'm saying is that simply increasing the number of RT cores won't necessarily result in massive performance improvements for raytracing, because it's the shading part after BVH traversal and ray intersection checking that is expensive in terms of processing time.

I do agree that we need better performance for it but you can't simplify it to "more RT cores = 100 fps with raytracing". That will only work to a certain point before making the BVH traversal and intersection checking faster yields no further benefits if it is waiting for the rest of the pipeline. If you then add more rays to cast then you again run into the same issue that the shading will take more horsepower. That's how I understand it.

Likewise you can't compare RT cores between generations. For all we know Nvidia may have significantly improved their performance for 30xx series so maybe 60 RT cores on 3080 is better than 68 RT cores on 2080 Ti.

So to improve raytracing performance you need to improve several aspects on both software and hardware level. Software is handled by API vendors (MS, Vulkan group etc), Nvidia and game developers. Hardware side is up to Nvidia and we will probably see what they have done on that front at GTC in March.
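
A toy illustration of the "core counts aren't comparable" point (all per-core rates and clocks below are invented for the example; Nvidia doesn't publish directly comparable figures):

```python
# Toy comparison of raw "RT core count" vs effective ray throughput.
# Per-core rates and clocks here are invented for illustration only;
# Nvidia does not publish directly comparable figures.

def rays_per_second(rt_cores, rays_per_core_per_clock, clock_ghz):
    return rt_cores * rays_per_core_per_clock * clock_ghz * 1e9

old_gen = rays_per_second(rt_cores=68, rays_per_core_per_clock=1.0, clock_ghz=1.5)
new_gen = rays_per_second(rt_cores=60, rays_per_core_per_clock=1.5, clock_ghz=1.7)

print(f"old: {old_gen / 1e9:.0f} Gigarays/s, new: {new_gen / 1e9:.0f} Gigarays/s")

# Fewer "RT cores" can still mean more throughput if each core does more work
# per clock, which is why raw counts aren't comparable across generations.
```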
 

TheOne

Alt Account
Banned
May 25, 2019
947
If the 3080 Ti can do 5K/60fps in FFXV, I'll be happy.

Have to upgrade regardless, 1080ti is getting long in the tooth. :(

It's fascinating to me how my 1080 Ti is still rocking anything I throw at it. I can still max out everything and get 60fps as long as I scale back the resolution. Many believe that the next step below 4K is 1440p and that it's too much of a downgrade, but this couldn't be further from the truth. Custom resolutions do wonders. If 4K60 is a no-go, you can always try 2088p, 2016p, 1944p, 1872p, 1800p, 1728p, 1652p, 1620p, etc. There are so many games that I'm running at these custom resolutions. Monster Hunter World, for example, struggles in the 40s fps at 4K, but drop the resolution down to 1872p and boom, I get that locked 60fps. From where I sit, it's basically indistinguishable. If I were willing to live with 60fps with some small hiccups/drops here and there, I could even do 1944p or 2016p.
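
For anyone curious, the pixel-count math behind those in-between resolutions looks roughly like this (assuming 16:9 and comparing against native 4K; the fps comment at the end assumes performance scales roughly with pixel count, which is only a rough approximation):

```python
# Pixel counts of common custom 16:9 resolutions, relative to native 4K
# (3840x2160). Widths are derived from the 16:9 aspect ratio.

FOUR_K_PIXELS = 3840 * 2160

for height in (2160, 2088, 2016, 1944, 1872, 1800, 1728, 1620, 1440):
    width = round(height * 16 / 9)
    share = width * height / FOUR_K_PIXELS
    print(f"{height}p ({width}x{height}): {share:.0%} of 4K's pixels")

# 1872p carries roughly 75% of 4K's pixel load, which (assuming performance
# scales roughly with pixel count) lines up with going from ~45 fps to ~60 fps.
```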

Then there are the frivolous ultra settings that honestly aren't much better than some high or very high settings, which also have the benefit of not taxing resources by an additional 20+%. Seriously, volumetric lighting, ambient occlusion, shadows and the like can be scaled back a notch without making any apparent visual difference. I once made it a point of pride to run everything maxed out, but not anymore.

So live on, 1080Ti, you amazing beast :)
 

Yogi

Banned
Nov 10, 2019
1,806
None of us have any idea how Nvidia will improve raytracing performance for next gen. What I'm saying is that simply increasing the number of RT cores won't necessarily result in massive performance improvements for raytracing, because it's the shading part after BVH traversal and ray intersection checking that is expensive in terms of processing time.

I do agree that we need better performance for it but you can't simplify it to "more RT cores = 100 fps with raytracing". That will only work to a certain point before making the BVH traversal and intersection checking faster yields no further benefits if it is waiting for the rest of the pipeline. If you then add more rays to cast then you again run into the same issue that the shading will take more horsepower. That's how I understand it.

Likewise you can't compare RT cores between generations. For all we know Nvidia may have significantly improved their performance for 30xx series so maybe 60 RT cores on 3080 is better than 68 RT cores on 2080 Ti.

So to improve raytracing performance you need to improve several aspects on both software and hardware level. Software is handled by API vendors (MS, Vulkan group etc), Nvidia and game developers. Hardware side is up to Nvidia and we will probably see what they have done on that front at GTC in March.

I understand that it isn't a guarantee and we can't know for sure, but we need way, way higher performance, and if they've already hit a wall then maybe they shouldn't have bothered.

I still want way more RT cores. I don't care if the CEO of Nvidia told me otherwise, 68 sounds puny for light rays. Give me the cores, I'll turn down the other settings. And give me like 4-10x the cores, you can keep the tensor ones.

Raytracing isn't a new concept in itself. Nvidia bought Mental Images, the developers of Mental Ray, 13 years ago, back in 2007! I don't think they're going to suddenly improve the calculations for it by the landslide that we need. I'm hoping to god it's the cores and they were just being tight with them to get guaranteed upgrades going into next gen. Raytracing might be all we have to show for next gen for years.
 
Last edited:

Riflen

Member
Nov 13, 2017
107
GPU designs take years. Nvidia are likely to have at least two teams currently working in parallel on future designs. There is no future-proofing, as the next design is likely 2.5 years away. Ampere, or whatever it's named, will be designed for whatever software developers have been working on since 2018 or so.
Developers have been making their games on Turing Quadro GPUs with up to 48GB of VRAM for the last couple of years. Nvidia has a very good idea of how much VRAM this upcoming GPU will need to be suitable for the next 2-3 years of releases.
 
Oct 27, 2017
490
1. Raytracing will have a large performance hit, so I expect we won't see it in the same capacity on consoles as we do on PC. I expect it will instead be used in more controlled situations like realtime cutscenes, interior sections, etc., wherever it will make a significant visual impact.
2. Depends largely on what GDDR6 or whatever costs on the market. 20 GB seems excessive and we are unlikely to see that on anything but the Titan parts. You have to remember that consoles have to cram into the same memory pool everything that is handled by system RAM on PCs, which means less is left for games to utilize.
3. Probably not. More than last gen, probably, especially if they have several tiers of device.

Thanks!
 

dgrdsv

Member
Oct 25, 2017
11,846
I get it, but they're absolutely going to show their Ampere Quadro lineup at GTC 2020, even if this gets postponed to Siggraph or Gamescom (like the 20** RTX SKU reveal).
They won't show anything this far from launch; they need to sell their current products. GTC is the most likely event for an HPC GA100 announcement, with wide availability likely at the end of 2020. New GeForces and Quadros will be there only if they're going on sale within a month or so of that day.

I was just basing the numbers on those "Big Navi 17% faster than 2080Ti rumors" earlier, and just assumed the full/uncut GA102 core will be 40% faster than the TU102, which will power their high end Quadro lineup, and the $2500 (or probably even more this time) consumer variant with Deep Learning performance cut down, Titan RTX Ampere.
We don't know anything solid about either big Navi or any Ampere GPU right now.
This "17% faster" rumor is based on just one result in an obscure benchmark, with clocks and drivers being in an unknown state.
We also know nothing about architectural changes which Ampere will have over Turing and thus can't really predict its performance even from SP numbers.

If Nvidia continues with these crazy prices they'll end up destroying the high-end PC market.
Because without Nvidia we'd have had so many high-end options to choose from!
Also note that "high end" is almost solely defined by pricing. When you say that high prices will destroy the high-end market, you show that you don't really understand how market segmentation works.

Many people will abandon PC and get a next-gen console. They look quite beefy this time and will have a reasonable price, unlike Nvidia.
Yes, because Nvidia doesn't sell anything at $400 or $500. Oh, wait.