I described why I don't think it will matter; I mean that I don't think it will be economically feasible for Google to throw multiple Stadia instances at each gamer to resolve latency.
It all depends on how it will work. I'm really looking forward to learning more about it.
I mean, sure... you're talking about Crackdown 3-style cloud tech. Google has certainly at least vaguely alluded to the more standard approach of multi-GPU rendering per user; that's what I'm talking about here. It's what many users in many Stadia threads have claimed Google will do, and they aren't talking about offloading simulation data the vast majority of the time.
Crackdown 3 was not exactly the same, because the GPUs of the Xbox Ones logged into the level were not working together to share the simulation between them. Think about how a current Battlefield V match works on PS4: 64 PS4s, all running the same code and simulations, with none of them contributing processing power to any of the other PS4s logged into the same level. Do you understand how much redundant processing is happening here? As an example, if the level has a waterfall, it has to be simulated 64 times, once on each independent PS4.

In a server environment you suddenly have 64 PS4-class GPUs under the same roof, and each one is connected to the others with several orders of magnitude more bandwidth than a typical home internet connection. As a developer you can then say, "let me take 2 of the 64 GPUs I have available at the start of a match to render a higher-quality waterfall, and then share that with the rest of the players logged into the same level." Do you understand the difference here? This is why you get developers like the one below saying the following:
"I think that the more interesting question is how stuff like Google Stadia will change things. It gives developers something different.
In the data center, these machines are connected to each other, and so you could start thinking of doing things like elastic rendering, like make a couple of servers together to do physics simulations that may not be possible on current local hardware. I think you'll see a lot of evolution in this direction."
"When you have an almost uncapped amount of computation sitting in a data centre that you can use to support your game design and ambition – whether it's in vastly superior multiplayer, whether it's in distributed physics, or massive simulation – there are things we can do inside a data center that you could never do inside a discrete, standalone device."
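To make the redundancy point concrete, here's a toy sketch in Python. Everything in it is made up for illustration (the player count, the "steps per frame" cost, the function names); it has nothing to do with Stadia's actual API. It just counts how much simulation work gets done per frame when every console simulates the same waterfall independently, versus when one node in the datacenter simulates it once and shares the result:

```python
# Hypothetical cost model: redundant per-console simulation vs. simulate-once-and-share.
# All numbers and names are illustrative assumptions, not real Stadia figures.

NUM_PLAYERS = 64
STEPS_PER_FRAME = 1000  # assumed particle-update steps for one frame of the waterfall


def simulate_waterfall():
    """Pretend to run one frame of the waterfall simulation; return its cost in steps."""
    return STEPS_PER_FRAME


def traditional_match_cost():
    """Traditional model: every console redundantly simulates the same waterfall."""
    return sum(simulate_waterfall() for _ in range(NUM_PLAYERS))


def datacenter_match_cost():
    """Datacenter model: one GPU simulates; the other 63 instances reuse the result.

    Broadcasting the result over a datacenter link is assumed to be cheap
    relative to recomputing it, so network cost is ignored in this toy model.
    """
    return simulate_waterfall()  # paid once, shared by all players


print(traditional_match_cost())  # 64000 steps of redundant work per frame
print(datacenter_match_cost())   # 1000 steps, shared by all 64 players
```

Obviously real distributed physics is far messier (latency, synchronization, partial results), but the 64x redundancy in the naive model is the gap those developer quotes are pointing at.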
You know, this crap actually stops relatively well-informed people like me from posting in these threads.
I've been reading about these technologies for years and have been interested in them for years... I've read white papers... I'm a software dev / cloud architect too...
So enjoy endlessly babbling about a product you can't use and being condescending to everyone who tries to post about it. I'm out.
It shouldn't, and I wouldn't be offended if someone took the time to make the same observation about me. Sorry if it came across as condescending. We can have a better conversation when it is based as much as possible on the information that is out there. I have seen a LOT of dishonest people out there who are more worried about saving face than having a real discussion where everybody can learn. It is really frustrating to realize you have been wasting hours talking with someone like that, so I usually try to identify right away whether someone is that type of person. I can talk for hours with anyone who is willing to learn and share their information.