Yes, it's just impractical because of dead address space relative to the number of address bits.
Impractical might not be quite the right sentiment. Unusual, though? Absolutely. To put another spin on this, what you get is effectively a 320-bit data bus for the first 10GB of the address space, while the width of the rest of the address space is dictated by how many 2GB chips you use: one 2GB chip gives you a 32-bit data bus, two give you 64 bits, and so on. So you get 10GB of fast GDDR6 plus however much additional GDDR6 the larger chips provide, at a speed that varies with how many of them you fit. The memory controller and caching mechanism also get correspondingly more complex.
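To make the arithmetic concrete, here's a small sketch of the bus-width math. The per-pin data rate (14 Gbps) and the particular 6×2GB + 4×1GB chip mix are my own illustrative assumptions, not anything stated above; the only fixed facts are the 32-bit interface per GDDR6 chip and the 320-bit total bus:

```python
# Bus-width arithmetic for a hypothetical mixed-density GDDR6 layout.
# Assumed: 14 Gbps per-pin data rate, six 2GB chips plus four 1GB chips.

PIN_RATE_GBPS = 14   # assumed per-pin data rate (illustrative)
CHANNEL_BITS = 32    # each GDDR6 chip presents a 32-bit interface

def bandwidth_gbs(num_chips: int) -> float:
    """Peak bandwidth across num_chips 32-bit GDDR6 channels, in GB/s."""
    return num_chips * CHANNEL_BITS * PIN_RATE_GBPS / 8

large_chips = 6  # hypothetical 2GB parts
small_chips = 4  # hypothetical 1GB parts

# The first 1GB of every chip interleaves across all ten channels,
# so that region sees the full 320-bit bus.
fast_region_gb = large_chips + small_chips
fast_bw = bandwidth_gbs(large_chips + small_chips)

# The second 1GB exists only on the larger chips, so that region is
# striped across fewer channels and runs correspondingly slower.
slow_region_gb = large_chips
slow_bw = bandwidth_gbs(large_chips)

print(f"{fast_region_gb} GB @ {fast_bw:.0f} GB/s")  # 10 GB @ 560 GB/s
print(f"{slow_region_gb} GB @ {slow_bw:.0f} GB/s")  # 6 GB @ 336 GB/s
```

Swapping the chip mix changes both the size and the bandwidth of the slow region, which is why the split isn't a free knob to turn late in a design.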
I can't think of a case where I've seen the approach used. It isn't a simple hedge against a late-breaking decision about the amount of RAM, because different choices produce different available bandwidths. It could be annoying to develop for unless the point was to have the CPU primarily access the slower, narrower portion while the GPU is effectively limited to the fast, wide 10GB subset.