XDevil666

Banned
Oct 27, 2017
2,985
Aside from the cloud/games/controller etc., do we know anything about the OS itself? Does it have friends lists, achievements, group voice chat, etc.?
 

Cyanity

Member
Oct 25, 2017
9,345
Page 17 and these takes are still coming in hot.

From the original reporting on the topic:


Setting aside whether that explanation makes technical sense, or whether it is likely to work, does any part of it involve sending electrons faster than the speed of light?

I'm saying that no matter what they do with AI, it's not going to change the fact that streaming introduces latency. We'll see how well their systems manage to "predict" movement, but it might end up feeling weird to those of us used to ultra low-latency gaming.

edit - don't even get me started on the fact that the controller is wireless on top of the inevitable streaming delay
 

riotous

Member
Oct 25, 2017
11,421
Seattle
You have to better define what you mean by "matter." I can say right now that there will be people who use and enjoy this service, so it will matter to them, right? If you are saying that it won't matter to a lot of people, or to enough people to make the service successful, you can of course say that. I, on the other hand, will wait to see how well the service works for most people.

I described why I don't think it will matter; I mean that I don't think it will be economically feasible for Google to throw multiple Stadia instances at each gamer to resolve latency. I'm not dismissing it as technically infeasible, I'm dismissing it as economically infeasible.

The way multiple GPU rendering will work, it's not like developers will be able to choose to use 5 Stadia blades per user on a single player game, played by 3 million people. It will work more by taking advantage of the GPUs that are under the same roof by making them work together on multiplayer games, or by using one or more GPUs to run simulations that will then be shared with potentially hundreds of thousands of users playing a single player game.

I mean sure.. you are talking about Crackdown 3-like cloud tech.. Google has certainly at least vaguely alluded to the more standard approach of multi-GPU rendering per user, and that's what I'm talking about here. It's what many users in many Stadia threads have claimed Google will do; they aren't talking about offloading simulation data the vast majority of the time.

There is a lot of information out there explaining how all of this will work and you have to do the leg work before stating that the service will not work, based on information that is not true.

You know this crap actually stops relatively informed people like me from posting in these threads?

I've been reading about these techs for years, been interested for them for years.. have read white papers.. I'm a software dev / cloud architect too...

So enjoy babbling about a product you can't use endlessly and being condescending to everyone who tries to post about it, I'm out.
 

Alucardx23

Member
Nov 8, 2017
4,719
You are misinterpreting at least part of what I said. The numbers available do not suggest that input latency is currently on the decline when the target frame rate is reached, at least outside of the competitive shooter genre, which has made input latency a priority for a decade.

The bigger problem is the variability of it. Games are all over the map depending on factors like the engine and game complexity. It's not like Google can rely on titles consistently hitting good input lag metrics before their service is layered on top.

Bethesda gets it. Throwing more hardware at the problem will not be nearly as effective as addressing latency at the engine level.

Again, it's difficult to predict how popular streaming will be in the upcoming years, but consider that at the start of next gen, devs will potentially have many more SKUs to manage than in years prior, especially those aiming for cross-gen. We'll just have to wait and see where Stadia and streaming in general fit in the order of priorities.

Sorry if I'm misinterpreting you. What do you mean exactly by "when the target frame rate is reached"? Gears 3 is a game that runs at 30fps on Xbox 360, compared to Gears 4, which also runs at 30fps on Xbox One, and still Gears 3 has more input lag. If you are saying that other genres like racing, 3rd person, RPGs, etc. have been getting worse on input lag, please share your numbers on that.

I don't know why the fact that input lag varies per game is a big problem. Not all games are affected the same way by input latency, and not all games need to have the same input latency in order for Stadia to work. The point is that developers will invest more of their time in reducing input lag as cloud gaming services get more popular. Do you agree with that or not?

At the same time that developers continue to lower the input lag of their games, Google will continue to make their codecs more efficient and faster, build more servers and edge nodes that will allow nearby users to have a lower ping, and invest in machine learning code used to predict which buttons the user will probably press. All of this while televisions and monitors continue to have lower and lower latency. Do you agree with that or not?
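The chain of improvements listed in that last paragraph is easier to see as a simple latency budget. A rough sketch with purely illustrative numbers (none of these are Stadia measurements):

```python
# Purely illustrative end-to-end latency budget for one cloud-streamed frame.
# Every number here is an assumption for the sake of the argument.
budget_ms = {
    "game simulation + render": 33.3,  # one frame at 30fps
    "encode": 5.0,
    "network round trip": 20.0,
    "decode": 3.0,
    "display (TV/monitor)": 10.0,
}

total = sum(budget_ms.values())
print(f"end-to-end: ~{total:.1f} ms")
# Improving any single stage (faster codecs, closer edge nodes,
# lower-latency displays) lowers the total, which is the argument above.
```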
 

exodus

Member
Oct 25, 2017
9,965
Question: wouldn't the video feed sent to me have some sort of compression on it, just like YouTube, Twitch, cable, etc.?

I haven't been following this at all, really, but wouldn't a raw 4K/60fps video feed consume absolutely massive amounts of bandwidth? But if there's compression... doesn't that kind of defeat the point of switching for a more detailed picture than you're able to accomplish on a console or PC? Personally, the compression on the picture I get from cable TV makes it look terrible, and certain games going through YouTube's compression, even at 1080p/60fps, still look way worse than playing on console.

If it's streaming at ~50-80 Mbps, then I'd expect the quality to be nearly identical to raw. A lot of it is going to depend on the bitrate and encoding used, though. We'll have to wait and see.
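For what it's worth, the arithmetic behind that comparison can be sketched. A back-of-the-envelope calculation (assumes 24 bits per pixel, i.e. 8-bit RGB, and ignores chroma subsampling and container overhead):

```python
# Back-of-the-envelope bandwidth math for an uncompressed 4K/60fps feed.
# Assumes 24 bits per pixel (8-bit RGB); real video pipelines typically use
# 4:2:0 chroma subsampling, which would roughly halve the raw figure.
width, height, fps, bits_per_pixel = 3840, 2160, 60, 24

raw_bps = width * height * fps * bits_per_pixel
raw_mbps = raw_bps / 1_000_000
print(f"raw 4K/60: {raw_mbps:,.0f} Mbps")  # ~11,944 Mbps, i.e. ~12 Gbps

stream_mbps = 50  # low end of the ~50-80 Mbps range discussed above
ratio = raw_mbps / stream_mbps
print(f"compression ratio needed: ~{ratio:.0f}:1")  # ~239:1
```

So even at the high end of that bitrate range, the encoder has to throw away well over 99% of the raw signal, which is why encoding quality matters so much.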
 

PKrockin

Member
Oct 25, 2017
5,260
If it's streaming at ~50-80 Mbps, then I'd expect the quality to be nearly identical to raw. A lot of it is going to depend on the bitrate and encoding used, though. We'll have to wait and see.
Well dang, I don't think I'd be able to do that, then, since that alone is pushing right up against my bandwidth limit.
 

exodus

Member
Oct 25, 2017
9,965
Given the speed at which it needs to encode, I can't imagine it's going to be high quality multi-pass encoding.

Yeah so I'm still a bit skeptical on that front. Destiny 2 on GeForce Now exhibits very visible artifacting on the moon since it's such a dark environment. I'm hoping the results are good here. Steam in-home streaming is pretty much perfect and I don't notice any artifacting when my connection is good (at least at 1080p), so hopefully this is similar.
 
Last edited:

PKrockin

Member
Oct 25, 2017
5,260
They said if you can watch YouTube videos in 4K you can play Stadia in 4K

can you watch 4K YouTube videos?
I've never seen a video give me an option for 4K quality. lol. Guess I'll have to check whenever I make it back home. I'm on the road and past my mobile hotspot bandwidth limit so I'm dealing with 600kbps.
 

Alucardx23

Member
Nov 8, 2017
4,719
I described why I don't think it will matter; I mean that I don't think it will be economically feasible for Google to throw multiple Stadia instances at each gamer to resolve latency.

It all depends on how it will work. I'm really looking forward to learning more about it.

I mean sure.. you are talking about Crackdown 3-like cloud tech.. Google has certainly at least vaguely alluded to the more standard approach of multi-GPU rendering per user, and that's what I'm talking about here. It's what many users in many Stadia threads have claimed Google will do; they aren't talking about offloading simulation data the vast majority of the time.

Crackdown 3 was not exactly the same, because the GPUs of the Xbox Ones logged into the level would not be working together to share the simulation between them. Think about how a current Battlefield V match works on PS4: 64 PS4s, all running the same code/simulations, without the processing power of any of them affecting the rest of the PS4s logged into the same level. Do you understand how much redundant processing is happening here? As an example, if the level has a waterfall, it has to be simulated 64 times, once on each independent PS4. In a server environment you suddenly have 64 PS4 GPUs under the same roof, and each one is connected with several orders of magnitude more bandwidth than the regular internet connection out there. As a developer you can then say "let me take 2 out of the 64 GPUs that I have available at the start of a match to render a higher quality waterfall and then share that with the rest of the players that are logged into the same level." Do you understand the difference here? This is why you get developers like the one below saying the following:

"I think that the more interesting question is how stuff like Google Stadia will change things. It gives developers something different. In the data center, these machines are connected to each other, and so you could start thinking of doing things like elastic rendering, like make a couple of servers together to do physics simulations that may not be possible on current local hardware. I think you'll see a lot of evolution in this direction."

"When you have an almost uncapped amount of computation sitting in a data centre that you can use to support your game design and ambition – whether it's in vastly superior multiplayer, whether it's in distributed physics, or massive simulation – there are things we can do inside a data center that you could never do inside a discreet, standalone device."


You know this crap actually stops relatively informed people like me from posting in these threads?

I've been reading about these techs for years, been interested for them for years.. have read white papers.. I'm a software dev / cloud architect too...

So enjoy babbling about a product you can't use endlessly and being condescending to everyone who tries to post about it, I'm out.

It shouldn't, and I wouldn't be offended if someone took the time to make the same observation about me. Sorry if it came off as condescending. We can have a better conversation when it is based as much as possible on the information that is out there. I have seen a LOT of dishonest people out there who are more worried about saving face than having a real discussion where everybody can learn. It is really frustrating when you realize that you have been wasting hours talking with someone like that, and I usually try to identify right away if someone is that type of person. I can talk for hours with anyone who is willing to learn and share their information.
 

Alucardx23

Member
Nov 8, 2017
4,719
Well dang, I don't think I'd be able to do that, then, since that alone is pushing right up against my bandwidth limit.

Here you go.

(image attachment)
 

riotous

Member
Oct 25, 2017
11,421
Seattle
Alucardx23 said:
It shouldn't and I wouldn't be offended if someone takes their time to make the same observation on me. Sorry if it came out as condescending. We can have a better conversation when it is based as much as possible on the information that is out there.

Here's what your wannabe-mod bold thing said:

There is a lot of information out there explaining how all of this will work and you have to do the leg work before stating that the service will not work, based on information that is not true.

Thing is..my post did not say the service will not work..lol.. You pretty clearly didn't even read my whole post or something then pull this "READ MORE BRO" crap. Nothing in my post was untrue either, get over yourself.
 
Last edited:

Alucardx23

Member
Nov 8, 2017
4,719
Here's what your wannabe-mod bold thing said:

Thing is..my post did not say the service will not work..lol.. You pretty clearly didn't even read my whole post or something then pull this "READ MORE BRO" crap. Nothing in my post was untrue either, get over yourself.

"I don't think it will be economically feasible for Google to throw multiple stadia instances at each gamer to resolve latency."

If you want I can use exactly the term you used to see if you want to continue playing word games.

There is a lot of information out there explaining how all of this will work and you have to do the leg work before stating that the service will not be economically feasible, based on information that is not true.

"To top it off they've already claimed game devs can allegedly choose to target using multiple stacked CPU/GPU combos for their games. So what happens when you combine all of that? A game already chooses to use 2x "Stadias" per user, then you throw in something that doubles or triples that with lag mitigation."

Based on what I just explained to you, will you continue to say that what Google is planning is to allow developers to use multiple Stadia blades per user, and that this is not economically feasible? Does that sound right or correct to you?
 

riotous

Member
Oct 25, 2017
11,421
Seattle
"let me take 2 out of the 64 GPUs that I have available at the start of a match to render a higher quality waterfall and then share that with the rest of the players that are logged into the same level." Do you understand the difference here? This is why you get developers like the one below saying the following:

Dude.. while I think offloading simulation data in a cloud streaming environment is really cool.. I really think you are not quite getting it here (which is hilarious considering your condescending remarks.)

They won't be borrowing 2 of the GPUs of 64 players logged in lol.. the "server" that dev is referring to would be a server dedicated to the physics simulation, that Stadia players would be connecting to. Those Stadia players' GPUs would all be doing the same thing and not sharing each other's resources lol, they'd be using the same shared offloaded simulation server.

And that's exactly what Crackdown was trying to do; big difference being the actual clients (each individual game renderer) aren't local. While I do agree these possibilities are cool.. your interpretation is outright bizarre lol
 

riotous

Member
Oct 25, 2017
11,421
Seattle
Will you continue to say that what Google is planning is to allow developers to use multiple Stadia blades per user, and that this is not economically feasible? Does that sound right or correct to you?

Yes, Google alluded to the idea of developers being able to use multiple Stadia instances per user... very early on, in their tech talks.

But just nevermind dude.. completely over you either way based on your ridiculous interpretation of how cloud based simulation offloading would work lol
 

Alucardx23

Member
Nov 8, 2017
4,719
Dude.. while I think offloading simulation data in a cloud streaming environment is really cool.. I really think you are not quite getting it here (which is hilarious considering your condescending remarks.)

They won't be borrowing 2 of the GPUs of 64 players logged in lol.. the "server" that dev is referring to would be a server dedicated to the physics simulation, that Stadia players would be connecting to. Those Stadia players' GPUs would all be doing the same thing and not sharing each other's resources lol, they'd be using the same shared offloaded simulation server.

And that's exactly what Crackdown was trying to do; big difference being the actual clients (each individual game renderer) aren't local. While I do agree these possibilities are cool.. your interpretation is outright bizarre lol

I gave you two examples, one where the simulation is offloaded to a separate dedicated server and one where the simulation is distributed between server blades logged into the same level. Will you continue to lie about what I said?

Did you take a minute out of your time to read this?
"I think that the more interesting question is how stuff like Google Stadia will change things. It gives developers something different. In the data center, these machines are connected to each other, and so you could start thinking of doing things like elastic rendering, like make a couple of servers together to do physics simulations that may not be possible on current local hardware. I think you'll see a lot of evolution in this direction."

"When you have an almost uncapped amount of computation sitting in a data centre that you can use to support your game design and ambition – whether it's in vastly superior multiplayer, whether it's in distributed physics, or massive simulation – there are things we can do inside a data center that you could never do inside a discreet, standalone device."


Again, Crackdown DID NOT use the processing power from each Xbox One logged into the server. The destruction simulation was running on a dedicated server and it was then shared with the users logged on the same level.
 

Alucardx23

Member
Nov 8, 2017
4,719
Yes, Google alluded to the idea of developers being able to use multiple Stadia instances per user... very early on, in their tech talks.

But just nevermind dude.. completely over you either way based on your ridiculous interpretation of how cloud based simulation offloading would work lol

Share your link for this please.
 

riotous

Member
Oct 25, 2017
11,421
Seattle
I gave you two examples, one where the simulation is offloaded to a separate dedicated server and one where the simulation is distributed between server blades logged into the same level. Will you continue to lie about what I said?

Did you take a minute out of your time to read this?
"I think that the more interesting question is how stuff like Google Stadia will change things. It gives developers something different. In the data center, these machines are connected to each other, and so you could start thinking of doing things like elastic rendering, like make a couple of servers together to do physics simulations that may not be possible on current local hardware. I think you'll see a lot of evolution in this direction."

Yes, I read that, and I believe wholeheartedly that you are misinterpreting the developer here. The machines are connected in a datacenter with loads of other machines.. including ones they can offload stuff to.

You can't borrow GPU power from 2 player's Stadia instances.. like WTF? lol Why do you think those 2 players would suddenly have extra GPU power to offload a simulation to? You are misinterpreting quotes because you are out of your element here dude.

Again, Crackdown DID NOT use the processing power from each Xbox One logged into the server. The destruction simulation was running on a dedicated server and it was then shared with the users logged on the same level.

Of course Crackdown didn't do this.. I never claimed it did. That would be nonsensical.
 

Alucardx23

Member
Nov 8, 2017
4,719
Yes, I read that, and I believe wholeheartedly that you are misinterpreting the developer here. The machines are connected in a datacenter with loads of other machines.. including ones they can offload stuff to.

You can't borrow GPU power from 2 player's Stadia instances.. like WTF? lol

Of course Crackdown didn't do this.. I never claimed it did. That would be nonsensical.

You are exactly the type of person I described. What a waste of time.

"Google has released the following data for Stadia. It's a curious mixture of data points, combining the kind of minutiae rarely released on some components along with notable omissions elsewhere, such as the amount of cores/threads available for developers on the CPU. Regardless, it paints a picture of a highly capable system, clearly more powerful than both the base and enhanced consoles of the moment.

  • Custom 2.7GHz hyper-threaded x86 CPU with AVX2 SIMD and 9.5MB L2+L3 cache
  • Custom AMD GPU with HBM2 memory and 56 compute units, capable of 10.7 teraflops
  • 16GB of RAM with up to 484GB/s of performance
  • SSD cloud storage
Google says that this hardware can be stacked, that CPU and GPU compute is 'elastic', so multiple instances of this hardware can be used to create more ambitious games."

 

riotous

Member
Oct 25, 2017
11,421
Seattle
Dude.. what in the world are you even talking about now?

Like this is mindblowing... in fact what you just posted is what you called me a liar over..

Put me on ignore or something, dude, you are too much.
 

riotous

Member
Oct 25, 2017
11,421
Seattle
Share your link for this please.
Should I link you to your last post? LOL..

This is absolutely bizarre.. you just proved me right in your post where you are trying to call me an idiot.

But I get it now.. you quite literally do not understand any of the tech being discussed here and are just cobbling disparate things together with no ability to relate it all together. And yet you dominate the hell out of these threads.. absolutely mindblowing lol
 

Alucardx23

Member
Nov 8, 2017
4,719
Should I link you to your last post? LOL..

This is absolutely bizarre.. you just proved me right in your post where you are trying to call me an idiot.

I didn't call you an idiot. You're doing a good job of that by yourself. Now share the link where Google said that they will allow devs to use multiple Stadia instances per user.
 

riotous

Member
Oct 25, 2017
11,421
Seattle
Quote me when you want me to see your post. I'm asking because I haven't seen Google say anywhere that they would allow devs to use multiple Stadia instances per user.

It's what the words IMPLY.. if you know the tech at least.

Elastic compute.. stacking.. = classic multi-GPU compute, where you are adding hardware to a rendering pipeline for one instance of a game. That's what those words IMPLY... that they are talking about on a per-user basis. When I create a server in the cloud that has elastic compute, I am saying that if I need more power for that ONE TASK.. that ONE SERVER.. that I can devote more power to it.

Distributed computing / offloading simulation data = new concept for gaming where shared hardware does work for multiple instances of the same game. That is not the same as stacking/elastic. I would not use the term "elastic compute" or "hardware stacking" to describe distributed compute.. Distributed computing is where multiple pieces of hardware (virtualized or not), all doing a DIFFERENT TASK are used together.. Each of those distributed tasks might use elastic computing or stacked hardware but that's separate from the distribution of work.

Google has mentioned both of these things... stacking of their hardware..and use of the datacenters for distributed computing.
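The distinction being drawn here can be made concrete with a toy sketch (the classes are hypothetical and do not reflect any real Stadia API):

```python
# Elastic compute / stacking: ONE game instance is given more resources.
class GameInstance:
    def __init__(self, gpus=1):
        self.gpus = gpus

    def scale_up(self, extra_gpus):
        # The same single instance grows; nothing is shared with other users.
        self.gpus += extra_gpus

# Distributed computing: DIFFERENT tasks on separate hardware cooperate,
# e.g. one dedicated physics server feeding many per-player renderers.
class PhysicsServer:
    def step(self):
        return "simulation state"

class Renderer:
    def __init__(self, physics):
        self.physics = physics  # a shared task, not shared GPU power

    def frame(self):
        return f"frame rendered from {self.physics.step()}"

game = GameInstance()
game.scale_up(2)  # elastic: one user's instance now has 3 GPUs

physics = PhysicsServer()
players = [Renderer(physics) for _ in range(64)]  # distributed: 64 renderers, 1 shared simulation
```

Note that the two are orthogonal: the physics server itself could also be scaled up elastically without changing how the renderers connect to it.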
 

Alucardx23

Member
Nov 8, 2017
4,719
It's what the words IMPLY.. if you know the tech at least.

Elastic compute.. stacking.. = classic multi-GPU compute, where you are adding hardware to a rendering pipeline for one instance of a game. That's what those words IMPLY... that they are talking about on a per-user basis. When I create a server in the cloud that has elastic compute, I am saying that if I need more power for that ONE TASK.. that ONE SERVER.. that I can devote more power to it.

Distributed computing / offloading simulation data = new concept for gaming where shared hardware does work for multiple instances of the same game. That is not the same as stacking/elastic. I would not use the term "elastic compute" or "hardware stacking" to describe distributed compute.. Distributed computing is where multiple pieces of hardware (virtualized or not), all doing a DIFFERENT TASK are used together.. Each of those distributed tasks might use elastic computing or stacked hardware but that's separate from the distribution of work.

Google has mentioned both of these things... stacking of their hardware..and use of the datacenters for distributed computing.

Everyone who takes the time to read your post will see how dishonest you are being here. In no way, shape, or form has Google said that a developer can design a game and use multiple Stadia blades per user. Google defined the configuration below per user, because that is exactly the target a developer has to aim at when designing a game. It is something completely different when you take multiple instances like the one below and have them work together on a multiplayer game. This is not the same as saying that a developer can design a game where each user will get the equivalent power of 2 or 3 configurations like the one below.
  • Custom 2.7GHz hyper-threaded x86 CPU with AVX2 SIMD and 9.5MB L2+L3 cache
  • Custom AMD GPU with HBM2 memory and 56 compute units, capable of 10.7 teraflops
  • 16GB of RAM with up to 484GB/s of performance
  • SSD cloud storage
There is a BIG reason why you are here answering my post without any quotes. So please don't waste my time again.
 

riotous

Member
Oct 25, 2017
11,421
Seattle
This is not the same as saying that a developer can design a game where each user will get an equivalent power from 2 or 3 configurations like the one below.

That is literally what stacking means dude... it's right in the post you wrote where you quoted this in giant bold letters:

Google says that this hardware can be stacked, that CPU and GPU compute is 'elastic', so multiple instances of this hardware can be used to create more ambitious games."

There is a BIG reason why you are here answering my post without any quotes.

Yeah that reason being...

You already quoted it.

That was never even my main point though... lol.. I don't actually believe Google is going to let devs decide their game needs stacked hardware willy nilly either. Tons of people do though.... you see it all over these threads.
 

PieOMy

Member
Nov 15, 2018
627
Boston
Everyone who takes the time to read your post will see how dishonest you are being here. In no way, shape, or form has Google said that a developer can design a game and use multiple Stadia blades per user.

He is being honest. Google has said the developer can use multiple instances per user. "Google says that this hardware can be stacked, that CPU and GPU compute is 'elastic', so multiple instances of this hardware can be used to create more ambitious games" https://www.eurogamer.net/articles/digitalfoundry-2019-google-stadia-spec-and-analysis
 

Alucardx23

Member
Nov 8, 2017
4,719
That is literally what stacking means dude... it's right in the post you wrote where you quoted this in giant bold letters:

This does not mean PER USER. I'm not denying in any way that developers will be able to take multiple instances and have them work together. That has been my point all this time, but it is something completely different to take that and say that developers will be able to design a game where each player has the power of 2 or 3 instances.
 

Alucardx23

Member
Nov 8, 2017
4,719
He is being honest. Google has said the developer can use multiple instances per user. "Google says that this hardware can be stacked, that CPU and GPU compute is 'elastic', so multiple instances of this hardware can be used to create more ambitious games" https://www.eurogamer.net/articles/digitalfoundry-2019-google-stadia-spec-and-analysis

No, he's not. That is not the same as saying that a developer can design a game where EACH PLAYER has 3 instances dedicated to him. Are you honestly saying that Stadia users will have the equivalent of two or more of the configuration below? If that is the case, why not promote Stadia as each user having 20 TF (teraflops) or 30 TF of power instead of 10.7 TF per user?
  • Custom 2.7GHz hyper-threaded x86 CPU with AVX2 SIMD and 9.5MB L2+L3 cache
  • Custom AMD GPU with HBM2 memory and 56 compute units, capable of 10.7 teraflops
  • 16GB of RAM with up to 484GB/s of performance
  • SSD cloud storage
What that quote means is that a developer can put all this hardware to work together. This is not a PER USER thing.

"I think that the more interesting question is how stuff like Google Stadia will change things. It gives developers something different. In the data center, these machines are connected to each other, and so you could start thinking of doing things like elastic rendering, like make a couple of servers together to do physics simulations that may not be possible on current local hardware. I think you'll see a lot of evolution in this direction."

"When you have an almost uncapped amount of computation sitting in a data centre that you can use to support your game design and ambition – whether it's in vastly superior multiplayer, whether it's in distributed physics, or massive simulation – there are things we can do inside a data center that you could never do inside a discreet, standalone device."

 

riotous

Member
Oct 25, 2017
11,421
Seattle
This does not mean PER USER. I'm not denying in any way that developers will be able to take multiple instances and have them work together. That has been my point all this time, but it is something completely different to take that and say that developers will be able to design a game where each player has the power of 2 or 3 instances.
And I could say the same thing to you.. nothing in that quote says they are talking about offloading game simulation data either.

Most people think Google was implying.. well.. what people mean by default when they talk about elastic computing and hardware stacking. Not the much more specific offloading of rendering that you think they are implying. That's just not what the term elastic computing is used for... which is what I explained to you.

Either way it's beside the damn point; I don't actually think Google is going to use tech that requires multiple instances of Stadia per user.. which was my point.. even ignoring the idea that a game itself could do that, how in the world else do you think this predictive negative latency shit would work? Or other latency mitigation like massively increasing framerate?
 

Alucardx23

Member
Nov 8, 2017
4,719
And I could say the same thing to you.. nothing in that quote says they are talking about offloading game simulation data either.

Most people think Google was implying.. well.. what people mean by default when they talk about elastic computing and hardware stacking. Not the much more specific offloading of rendering that you think they are implying.

Either way it's beside the damn point; I don't actually think Google is going to use tech that requires multiple instances of Stadia per user.. which was my point.. even ignoring the idea that a game itself could do that, how in the world else do you think this predictive negative latency shit would work? Or other latency mitigation like massively increasing framerate?

On the contrary. I have direct information from Stadia where they are making it very clear that the configuration per user is the one below. I also have a direct quote from a developer stating that this is all about making the hardware that is already connected work together, not about dedicating 2 or 3 instances per user. I guess that Stadia just forgot to say that instead of 10.7 TF per user it was really 20 TF or 30 TF, right? You should call them and let them know about their mistake.
  • Custom 2.7GHz hyper-threaded x86 CPU with AVX2 SIMD and 9.5MB L2+L3 cache
  • Custom AMD GPU with HBM2 memory and 56 compute units, capable of 10.7 teraflops
  • 16GB of RAM with up to 484GB/s of performance
  • SSD cloud storage
 

PieOMy

Member
Nov 15, 2018
627
Boston
That is not the same as saying that a developer can design a game where EACH PLAYER has 3 instances dedicated to him. Are you honestly saying that Stadia users will have the equivalent of two or more like the configuration below?

That is what they are saying. The developer can decide to have their game run on 1 or more instances of stadia. If you want to dispute or discredit EuroGamer and their sources then so be it.

Overall it does not matter because the user is completely abstracted away from what is happening in the cloud. A game could be running on 36 potatoes for all we know.
 

Alucardx23

Member
Nov 8, 2017
4,719
That is what they are saying. The developer can decide to have their game run on 1 or more instances of stadia. If you want to dispute or discredit EuroGamer and their sources then so be it.

Overall it does not matter because the user is completely abstracted away from what is happening in the cloud. A game could be running on 36 potatoes for all we know.

No, that is not what they are saying. What a developer can do is take advantage of the fact that these machines are connected to each other and have them work together. They can for example take two instances to simulate a waterfall and then share that simulation with every user playing the level where that waterfall appears. They can also take a Call of Duty multiplayer match and divide the simulation among some or all of the players that are on the same level. Read the quote from the developer below again. I'm just giving some examples here. Developers will get more creative with time.

"I think that the more interesting question is how stuff like Google Stadia will change things. It gives developers something different. In the data center, these machines are connected to each other, and so you could start thinking of doing things like elastic rendering, like make a couple of servers together to do physics simulations that may not be possible on current local hardware. I think you'll see a lot of evolution in this direction."

"When you have an almost uncapped amount of computation sitting in a data centre that you can use to support your game design and ambition – whether it's in vastly superior multiplayer, whether it's in distributed physics, or massive simulation – there are things we can do inside a data center that you could never do inside a discrete, standalone device."
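The pattern that developer quote describes — run one expensive simulation once across co-located servers, then share the result with every player in the level — can be sketched in a few lines. This is a hypothetical illustration only; the function names and the trivial particle model are my own, not any Stadia API:

```python
# Sketch of "elastic" simulation: the waterfall is simulated ONCE,
# split across N workers (stand-ins for co-located server instances),
# and the merged result is broadcast to every player in the level
# instead of each player's instance redoing the work.

from concurrent.futures import ThreadPoolExecutor

GRAVITY = -9.8
DT = 1.0 / 60.0  # one 60 Hz simulation tick

def step_chunk(chunk):
    """One worker advances its slice of the particles by one tick."""
    return [(y + vy * DT, vy + GRAVITY * DT) for y, vy in chunk]

def simulate_tick(particles, num_servers=2):
    """Split the particle list across workers and merge the results."""
    size = -(-len(particles) // num_servers)  # ceiling division
    chunks = [particles[i:i + size] for i in range(0, len(particles), size)]
    with ThreadPoolExecutor(max_workers=num_servers) as pool:
        results = list(pool.map(step_chunk, chunks))
    return [p for chunk in results for p in chunk]

def broadcast(state, player_ids):
    """Every player gets the same shared state; the per-player cost
    is bandwidth, not extra GPU/CPU time."""
    return {pid: state for pid in player_ids}

particles = [(10.0, 0.0)] * 1000            # 1000 particles at height 10
state = simulate_tick(particles)            # two workers share the tick
shared = broadcast(state, ["p1", "p2", "p3"])
```

Note the economics of the sketch: the simulation cost is fixed no matter how many players watch the waterfall, which is the opposite of dedicating extra blades to each individual user.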

Please also answer my question.

Are you honestly saying that Stadia users will have the equivalent of two or more of the configuration below? If that is the case, why not promote Stadia as each user having 20TF or 30TF (teraflops) of power instead of 10.7TF per user?
  • Custom 2.7GHz hyper-threaded x86 CPU with AVX2 SIMD and 9.5MB L2+L3 cache
  • Custom AMD GPU with HBM2 memory and 56 compute units, capable of 10.7 teraflops
  • 16GB of RAM with up to 484GB/s of performance
  • SSD cloud storage
 

Morbius

Banned
Oct 25, 2017
1,008
I can't believe people in this thread are really at each other's throats over something that's going to be dead in two years' time.

With Google's track record, and what most home internet is actually capable of, I just can't see it working. I can barely load certain things on Wi-Fi in my house and I'm right on top of the router.

Also, the whole claim that they can make games better on Stadia is a bunch of BS, I think. I don't care about a data center in Cali or some theoretical computing power they want to mention. Games have parity across all consoles, and companies have marketing deals with Microsoft and Sony, so I DOUBT they would make a game better on Stadia than on those other machines.
 

Murdy Plops

Banned
Dec 21, 2018
572
I can't believe people in this thread are really at each other's throats over something that's going to be dead in two years' time.

With Google's track record, and what most home internet is actually capable of, I just can't see it working. I can barely load certain things on Wi-Fi in my house and I'm right on top of the router.

Also, the whole claim that they can make games better on Stadia is a bunch of BS, I think. I don't care about a data center in Cali or some theoretical computing power they want to mention. Games have parity across all consoles, and companies have marketing deals with Microsoft and Sony, so I DOUBT they would make a game better on Stadia than on those other machines.

I've been cloud gaming for years and years. It does work. Sadly, it just requires something I don't think you have: decent internet. You don't have to rain on other folks' parade though; on-prem solutions aren't going away!
 

Stalker

The Fallen
Oct 25, 2017
6,787
*under optimal (fake) conditions.

Also I'd rather not rely on the Internet for my entire gaming life.
 

riotous

Member
Oct 25, 2017
11,421
Seattle
On the contrary. I have direct information from Stadia where they are making it very clear that the configuration per user is the one below. I also have a direct quote from a developer stating that this is all about making the hardware that is already connected work together, not about dedicating 2 or 3 instances per user. I guess Stadia just forgot to say that instead of 10.7TF per user it was really 20TF or 30TF, right? You should call them and let them know about their mistake.
  • Custom 2.7GHz hyper-threaded x86 CPU with AVX2 SIMD and 9.5MB L2+L3 cache
  • Custom AMD GPU with HBM2 memory and 56 compute units, capable of 10.7 teraflops
  • 16GB of RAM with up to 484GB/s of performance
  • SSD cloud storage
They literally showed those specs on screen after making vague statements about unlimited power being available, and described them as "one Stadia instance."

News outlets, gamers, etc. took that presentation and ran with it.. assuming it meant developers could in fact use 20TF+ per user.

I also don't believe Google will allow that. Maybe their vague statements were referring to stuff like offloading simulation data.. that's not how Eurogamer or anyone else took those vague statements though. And later, more specific statements about offloading simulation data don't change what the Eurogamer quote means. You would not describe offloading execution to a server as "elastic compute" if you knew WTF you were talking about.

But man am I over talking to you.. calling me a liar repeatedly.. lol
 

PieOMy

Member
Nov 15, 2018
627
Boston
What a developer can do is take advantage of the fact that these machines are connected to each other and have them work together. They can for example take two instances to simulate a waterfall and then share that simulation with every user playing the level where that waterfall appears

The example you've given has a significantly higher degree of complexity than dedicating multiple instances to a single user as needed. You can't have your complex example without also having the simpler approach; you can't have one without the other.
 

Alucardx23

Member
Nov 8, 2017
4,719
They literally showed those specs on screen after making vague statements about unlimited power being available, and described them as "one Stadia instance."

News outlets, gamers, etc. took that presentation and ran with it.. assuming it meant developers could in fact use 20TF+ per user.

I also don't believe Google will allow that.

But man am I over talking to you.. calling me a liar repeatedly.. lol

What they meant with the power was making this hardware work together. You have yet to find me a single quote where Stadia says that developers will be able to take multiple Stadia blades and assign them to individual users. I don't care about any news source that didn't understand what Google said. Google announced the specs per user and clearly said a 10.7TF GPU, not that individual users would have 20TF or 30TF of power available. That is just crazy to say. Honestly, don't waste my time any more unless you plan on answering with a direct quote from Google stating what you claim they said.
 

Alucardx23

Member
Nov 8, 2017
4,719
The example you've given has a significantly higher degree of complexity than dedicating multiple instances to a single user as needed. You can't have your complex example without also having the simpler approach; you can't have one without the other.

What the fuck are you saying here? I gave you a direct quote from a developer stating what he is able to do and how it works; are you just going to say he's lying? What I need from you is a quote from Google stating that they got confused when they announced a 10.7TF GPU per user, and that what they really meant is that every user can get 2 or 3 GPUs for a total of 21.4TF or 32.1TF of power. YAY, EVERYONE GETS 3 GPUs..... I mean, come on!