For GDDR6 to hit its peak bandwidth, it comes down to the perfect distribution of the data on it (amongst other things). Theoretically, if there are four 2GB chips (8GB) and eight 1GB chips (8GB), giving a total of 16GB while allowing you a 384-bit bus, you could ensure that the OS only ever sits in the RAM chips that have 2GB. But to do that effectively, the OS reserve will have to take all of those extra gigs. So of your 8GB of 2GB chips, you must reserve 4GB of it for your OS and use the remaining 1GB on each of those 4 chips as you would the other 8GB worth of 1GB chips.
Now imagine if you tried putting in eight 2GB chips and four 1GB chips for a total of 20GB. Would you then reserve 8GB for your OS?
But do dynamic objects affect indirect lighting, or only receive indirect lighting data from static objects?
Nope. I used LPV and I can confirm that dynamic objects react to moving lights and GI, and even cast colour bleeding. It's just that its radius is not great and it has a lot of leaking.
I'm not seeing the issue....?
You would clearly only include the number of 2GB chips for the RAM allocation you want, presupposing this is the method you want to use for allocating the OS reserve.
It does mean that your OS allocation is limited to a smaller effective bus size, e.g.:
10x chips total: 4x 2GB + 6x 1GB
The game sees 10GB via a 320-bit bus
The OS sees 4GB via a 128-bit bus
But since the OS doesn't need nearly as much bandwidth, with 14Gb/s chips that's still 224GB/s which is plenty for the OS.
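As a sanity check on those numbers: GDDR6 bandwidth is just (bus width in bits / 8) x per-pin data rate, and each GDDR6 chip contributes a 32-bit channel. A minimal sketch using the 14Gb/s chips and pool sizes from the example above (function name is just for illustration):

```python
# Back-of-envelope GDDR6 bandwidth: each chip contributes a 32-bit channel.
BITS_PER_CHIP = 32
DATA_RATE_GBPS = 14  # Gb/s per pin, as in the example above

def bandwidth_gb_s(num_chips, data_rate=DATA_RATE_GBPS):
    """Peak bandwidth in GB/s for a pool spread across `num_chips` chips."""
    bus_bits = num_chips * BITS_PER_CHIP
    return bus_bits * data_rate / 8  # bits -> bytes

print(bandwidth_gb_s(10))  # game pool, 320-bit bus -> 560.0 GB/s
print(bandwidth_gb_s(4))   # OS pool, 128-bit bus  -> 224.0 GB/s
```

Which is where the 224GB/s figure for the OS pool comes from.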
I will be shocked if we get anything that good-looking on next gen consoles, it just looks too good imo.
I see, I was mistaken then. Thanks for the clarification.
Yes, they do, but the results aren't always accurate; sometimes the result is a big bugged smudge rather than a defined pattern of colour bleeding or shading, which is not sexy to look at tbh.
Nothing as defined as VXGI or any voxel-based solution even with a low radius, let alone RT.
Edit: Here is a very old example of mine from when I used to toy with LPV. I used to change the colours of lights and blocks and move them around to see how the dynamic GI reacted, with colour bleeding changing, colours mixing, and how indirect shadows work too. You can't see me moving those blocks in these pics though. :p
Just out of curiosity, based on the specs we know, is it possible to make the XMB not so much of a sluggish mess when clicking the home button during a game? I'm really hoping an SSD & Zen 2 change that, but I don't know.
Personally, I've only used Unity so far; even my current game project is built on Unity, heh. Although, I may have to learn how to use Unreal Engine 4 during my internship, I suppose. Unity, by comparison, has a more limited real-time GI solution that isn't even actually fully real-time. You have to set objects as lightmap static first, then bake the lighting, and dynamic objects will only receive indirect lighting data through light probes. UE4's LPV sounds a lot more interesting.
Doesn't look particularly unbelievable considering you can still see texture pop-in in that and the other video posted.
When you see what the Decima engine can do on hardware as limited as the PS4, and if the PS5 specs are what we speculate, it will be surpassed as early as the mid-gen titles.
Sony just bought Insomniac games. That must have been expensive.
We need to downgrade PS5 specs to account for this purchase due to budgetary reasons.
Which means we need to remove some more CUs because thermals.
32 CUs at 2.0 GHz = 8.1 TFLOPs confirmed.
Spider-Man PS5 will look like this:
But wasn't some Xbox insider claiming that MS would buy some big studio that worked with Sony, and implying it was Insomniac? XD
What happened to his claims? Which studio would MS buy? That is, if the claim was true to start with. He also had many false specs for Scarlett (the fake Lockhart), and an ultra-weak PS5.
So maybe all his claims were false from the start.
I thought that studio turned out to be Double Fine.
Two big, big, big, big studios are being acquired by Sony.
The announcement will be made near the official PlayStation 5 presentation.
Bungie
you're on fire today
This is mostly art and texture work; you can get pretty close to this IQ even on current-gen consoles (geometry complexity will take the biggest hit, I think), but producing a whole game with this level of detail just isn't economically feasible.
I have a feeling the second one is Remedy.
No idea who you're talking about and not defending him/her, but Lockhart clearly was real at some point. Hell, Phil Spencer just mentioned "consoles" again in that GameSpot interview a few days ago, but no one seemed to notice.
World detail of The Order 1886 + miles-long draw distance + fully dynamic world destruction + ray-traced reflections.
It didn't look as polished as those nVidia/Unreal/Unity demos, but at the same time it was beyond current games, and I could imagine it running on an Xbox One X or a Pro.
So yeah, not like a current game, but not like a GDC demo either. You would have to see it yourself to understand because I can't quite articulate it properly because I can't compare it to anything else.
Man, I read this shit and literally just bust out laughing. How much is 1886 on sale for now? If you haven't played it, I'd play it for the graphics and gunplay alone. That graphics fidelity in an open-world game + dynamic destruction?!
😂😂😂 next gen litty af 🔥
Ok.. let me elaborate on the issue.
The whole OS reserve thing is an oversimplification and is actually wrong. The problem is this: say you have a total of 12 chips making a 384-bit bus and 768GB/s, and these chips are all the same size. For you to use all 384 bits of your bus and get all 768GB/s of bandwidth, you have to be reading that data from all 12 chips.
Now let's mix and match the RAM sizes, first with 16GB of total RAM. So that's 4x 2GB chips + 8x 1GB chips. And you want to reserve 4GB for your OS; fine, everything works, because as far as your apps are concerned there are 12x 1GB chips in the system. The bottleneck here, though, falls on your OS, as it will only ever have a max of 256GB/s of bandwidth available to it, since it's technically only on 4 chips and as such only has a 128-bit bus.
Now what happens when you want more RAM? Say you want the system to have 18GB of RAM, so you are looking at 6x 2GB chips and 6x 1GB chips. For the above system to still work, you will have to reserve 6GB of RAM for your OS. Because if you reserve only 4GB (1GB spread out across 4 chips), you will be left with 2x 2GB chips and 6x 1GB chips, and the extra 2GB on those two chips will be useless to you.
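The rule being described above can be written down: for the game pool to span every chip evenly, each chip can only contribute as much as the smallest chip, so the OS reserve has to soak up everything above that. A quick sketch (the helper name is just for illustration):

```python
def split_pools(chip_sizes_gb):
    """Given per-chip capacities in GB, return (game_pool, os_reserve)
    such that the game pool uses an equal slice of every chip."""
    smallest = min(chip_sizes_gb)
    game_pool = smallest * len(chip_sizes_gb)    # uniform slice across all chips
    os_reserve = sum(chip_sizes_gb) - game_pool  # leftover capacity on the big chips
    return game_pool, os_reserve

# 16GB case: 4x 2GB + 8x 1GB -> 12GB for games, 4GB OS reserve
print(split_pools([2]*4 + [1]*8))  # (12, 4)
# 18GB case: 6x 2GB + 6x 1GB -> 12GB for games, 6GB OS reserve
print(split_pools([2]*6 + [1]*6))  # (12, 6)
```

Which matches the two cases above: adding the extra 2GB chips buys the game nothing, it only inflates the OS reserve.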
Not next gen consoles but this is fucking insane. 1.2 TRILLION transistor wafer-sized chip (yes, the WHOLE 300mm wafer). Will be interesting to see the ripple effect here for the industry as defect tolerance becomes the norm for this approach.
I'm not smart enough to understand the technical terms and lingo in this thread outside of a base level...can't catch up on a lot of what was said either.
Does the latest round of rumors/speculation have the console specs around the same, or are we still saying the PS5 will be more powerful?
Doesn't matter too much, just wanna catch up and know where it stands now lol (I know nothing is official)
The PS5 was always powerful, just people here lost so much hope and started to feel gloomy. :p
They got their hopes back recently.
they are both pretty much the same.
No 40-50% increase in power like last gen. Both are on the same tech, so no Cell-like disaster.
They are both going to be more powerful than anything AMD has out right now. Ray tracing, SSD, fantastic CPU. It's going to be amazing.
Kojima Productions is my guess.
Tbh, I really doubt anything could come close to the accuracy of PBR and materials of The Order. The photoscanning of The Order 1886 with microscopic details and the natural look of all materials and fabrics is leagues beyond anything else available when it comes to PBR.
UE4 uses the metalness PBR workflow, which gives quite a good look on metals but really fails to capture the life-like look of other, more organic fabrics; hence everything stays shiny, even non-reflective surfaces, which makes the whole scene look awkward and not natural at all.
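For context on what the metalness workflow means here: the shader derives the specular reflectance (F0) by blending between a fixed dielectric value and the albedo colour, driven by the metallic map. A minimal sketch of that blend (the ~0.04 dielectric constant is the common convention in this workflow; this is illustrative, not UE4's actual shader code):

```python
def metalness_f0(albedo, metallic, dielectric_f0=0.04):
    """Per-channel specular reflectance (F0) under the metalness workflow:
    lerp between a fixed dielectric base reflectance and the albedo."""
    return [dielectric_f0 * (1 - metallic) + a * metallic for a in albedo]

print(metalness_f0([0.8, 0.2, 0.2], metallic=0.0))  # dielectric: [0.04, 0.04, 0.04]
print(metalness_f0([0.8, 0.2, 0.2], metallic=1.0))  # pure metal: F0 = albedo
```

This is also why the fabric complaint holds: every non-metal gets the same fixed base reflectance regardless of material, so organic fabrics can end up reading as uniformly shiny unless the workflow itself is modified.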
So looking like The Order using UE4 is kind of a far stretch unless the engine itself is heavily modified, with the PBR workflow customized with new physical coefficients, a new lighting system and interactions, etc., which would take so much time that I can't imagine any third party making such an effort for just a multiplatform game.
It's gotta be Remedy. It would be super weird for an independent dev to celebrate being bought by a big publisher otherwise.