"21st century 2D chip and architecture". Focusing strictly on 2D tech and games, they don't want 2D development to be "hard" like on PS4 or Xbox, due to those being 3D-focused machines.
This bit sounds like it's coming from someone who doesn't know how video game graphics hardware works -- either old "2D" hardware or modern 3D hardware. It sounds like someone with a tenuous understanding that there is indeed something different about 2D and 3D graphics hardware, but who doesn't know what. So let me break it down:
Old "2D" graphics technologies as we know them were birthed from Texas Instruments and their character-generation chips. In the early days of computing and video game graphics, RAM was the limiting factor for everything. Often, the price of RAM was so high that you literally did not have enough to store information about every pixel on the screen. Color information for pixels takes up space. To solve the problem of not having enough RAM to store a full screen's worth of colorful pixels, old consoles would turn to one of the most basic forms of compression:
Vector quantization
To put it simply, vector quantization is the process of splitting a big chunk of data that repeats a lot of itself into a smaller index of "chunks" of data, so that to reconstruct the original large amount of data, you only need to store references to those "chunks" in RAM. The easiest way to understand how this works is with a color palette, like people might be familiar with on the NES. Imagine we are working with a system where each pixel of color takes 4 bytes of RAM (this is called 32-bit color). To describe an entire screen's worth of graphics, say 320x240 pixels, we'd need 320 x 240 x 4 bytes = 307,200 bytes (about 300 KB). But suppose that instead of storing each individual pixel as 4 bytes, we split the screen up into 8x8 cells, where each cell is just 64 1-byte pointers into a table that contains a list of 4-byte colors, like so:
Say our palette of colors is 16 colors long, and each color is 4 bytes big, meaning the palette is 64 bytes total, and each 8x8 tile is 64 bytes big as well. We can then reuse tiles across the screen, which reduces its effective resolution by 8 in each direction, meaning our screen is now represented as 40x30 8x8 tiles. If each tile on screen is itself a 1-byte reference into a list of, say, 40 distinct tiles, that works out to a tile palette that is 2,560 bytes big and a screen map that is 1,200 bytes big. Thus, using this quantization, we can define a full screen of graphics using only 3,824 bytes (2,560 bytes for the tiles, 1,200 bytes for the screen map, 64 bytes for the palette).
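To make the arithmetic concrete, here's a quick sketch (just Python doing the same sums as above -- the constants are the ones from the example, not anything from real hardware):

```python
# Compare RAM needed for a 320x240 screen in direct 32-bit color
# versus the tile/palette scheme described above.
WIDTH, HEIGHT = 320, 240
BYTES_PER_PIXEL = 4            # 32-bit color

direct = WIDTH * HEIGHT * BYTES_PER_PIXEL          # every pixel stored

TILE = 8                        # 8x8 cells
NUM_COLORS = 16                 # palette entries
NUM_TILES = 40                  # distinct tiles we allow ourselves

palette = NUM_COLORS * BYTES_PER_PIXEL             # 16 colors x 4 bytes = 64
tiles = NUM_TILES * TILE * TILE * 1                # each tile: 64 one-byte palette indices
screen_map = (WIDTH // TILE) * (HEIGHT // TILE)    # 40x30 one-byte tile references

quantized = palette + tiles + screen_map

print(direct)      # 307200
print(quantized)   # 3824
```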
You can see how vector quantization dramatically reduces the amount of RAM needed to draw a full screen of graphics: 307,200 bytes compared to 3,824 bytes. So this was by far the most widely used method of drawing full-screen graphics, like levels or backgrounds, in old games. Texas Instruments originally used this method to display full pages of text easily on computer monitors, but eventually those text characters became background tiles. Systems like the ColecoVision or MSX used this type of graphics exclusively in their video modes.
The problem with vector quantization is that you lose fidelity. You can't place a single pixel on screen, only a 64-pixel tile, and that 64-pixel tile had to snap to an 8x8 grid. This meant drawing anything that didn't align to an 8x8 grid was basically impossible. To alleviate this,
Sprites were invented. Sprites represented a different concept for displaying graphics. Rather than quantizing the screen into regions, sprites used what is called
DIRECT COLOR MAPPING. This is, as in the 32-bit color example earlier, where each "piece" of data that makes up the sprite represents a single pixel, anywhere on the screen. Sprites don't have to align to 8x8 grids; they can go anywhere. The downside, of course, is that directly mapping each pixel takes a ton of space in memory. So the size of sprites was limited: maybe an area of memory would be set aside with enough space for, say, a 128x128 sprite that could be drawn anywhere on screen. Early systems had few hardware sprites -- never enough to actually cover the entire screen (and thus allow any pixel on the screen to be drawn in any color).
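A minimal sketch of the idea, in Python: the framebuffer is a flat array of color words, and a sprite is just a directly-mapped block of pixels copied to an arbitrary position. (The `draw_sprite` helper and the transparent-color convention are illustrative inventions, not any real console's API.)

```python
# Sketch: a sprite is a directly-mapped block of pixels that can be
# composited at ANY (x, y), unlike tiles, which snap to an 8x8 grid.
WIDTH, HEIGHT = 320, 240
framebuffer = [0] * (WIDTH * HEIGHT)   # one 32-bit color word per pixel

def draw_sprite(fb, sprite, sw, sh, x, y, transparent=0):
    """Copy an sw-by-sh block of pixels to an arbitrary screen position."""
    for row in range(sh):
        for col in range(sw):
            color = sprite[row * sw + col]
            if color == transparent:       # skip "see-through" pixels
                continue
            px, py = x + col, y + row
            if 0 <= px < WIDTH and 0 <= py < HEIGHT:  # clip at screen edges
                fb[py * WIDTH + px] = color

# A 2x2 sprite placed at (13, 7) -- note: not aligned to any 8x8 grid.
sprite = [0xFF0000FF, 0xFF00FF00, 0xFFFF0000, 0xFFFFFFFF]
draw_sprite(framebuffer, sprite, 2, 2, 13, 7)
print(hex(framebuffer[7 * WIDTH + 13]))   # 0xff0000ff
```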
So old 2D consoles struck a happy balance: backgrounds used vector-quantized tiles to draw big scenes with a little bit of RAM, while things like characters on screen were drawn using sprites, which took lots of RAM but were free of those restrictions. As console generations went on, the number of sprites available increased. Additionally, consoles began utilizing special drawing hardware set up to move tiles' worth of data faster using dedicated circuitry. Thus, we arrive at the benefits of this type of 2D hardware -- fast, with a small memory footprint (and because the less memory things take, the faster they operate, that meant yet another form of speed increase).
That brings us to the days of the Sega Saturn vs. the Sony PlayStation. The PlayStation represented a paradigm shift in computer graphics. It was the first really popular mass-consumer hardware that could do direct color mapping to the entire screen. The PlayStation had juuuust enough RAM to allow programmers to draw to the entire screen as though it were a big canvas. The screen would be represented in memory as something called a framebuffer, which is essentially a texture in memory where each pixel can be directly changed. If I want to draw a dot on the screen, I can access the pixel representing that dot directly in memory. This was huge, and extremely important to the 3D drawing hardware of the PlayStation.
(This is a look into the actual framebuffer of a PlayStation; the two images on the left are each a frame of the screen. This is called double buffering: the PSX spends time calculating one frame while showing the other, so that the work it does isn't seen being drawn on screen in real time.)
To understand why this is important to the PlayStation, you need to understand how the Saturn worked. The Saturn essentially had two video chips inside. One was a "2D" chip that worked like older 2D hardware: it would draw vector-quantized backgrounds using tiles in very speedy ways, then draw sprites over them. What changed is that the Saturn, for a 2D machine, had an insane amount of RAM -- enough to fill the entire screen head to toe in sprites. Due to the way it drew sprites, that still didn't mean it could easily access a single pixel on the screen, but in roundabout ways it could draw to every pixel on screen. The other processor in the Saturn was, basically, a geometry processor. It would take square tiles and stretch and skew them into a small buffer in memory. This processor would do 3D math on a bunch of tiles and shape them into a 3D model, which would then be turned into a 2D sprite in memory and passed to the other video processor to be manipulated. By combining the two, you could essentially create 3D scenes.
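The "stretch and skew a square tile onto four corners" idea can be sketched like this -- each source texel is forward-mapped onto an arbitrary quad by bilinearly interpolating the corner positions. This is only the concept; the Saturn's real hardware drew its distorted sprites quite differently, and every name here is made up for illustration.

```python
# Sketch: taking a square sprite/tile and "stretching" it onto four
# arbitrary corner points (a distorted quad), the core trick the text
# describes. Forward mapping like this leaves holes at larger sizes;
# it's just to show the idea.
WIDTH, HEIGHT = 32, 32
framebuffer = [0] * (WIDTH * HEIGHT)

def lerp(a, b, t):
    return a + (b - a) * t

def draw_distorted(fb, tile, tw, th, corners):
    """corners = [top-left, top-right, bottom-left, bottom-right] as (x, y)."""
    (ax, ay), (bx, by), (cx, cy), (dx, dy) = corners
    for v in range(th):
        for u in range(tw):
            s = u / (tw - 1) if tw > 1 else 0.0
            t = v / (th - 1) if th > 1 else 0.0
            # interpolate along the top and bottom edges, then between them
            x = lerp(lerp(ax, bx, s), lerp(cx, dx, s), t)
            y = lerp(lerp(ay, by, s), lerp(cy, dy, s), t)
            fb[int(round(y)) * WIDTH + int(round(x))] = tile[v * tw + u]

tile = [1, 2, 3, 4]  # a 2x2 "texture"
draw_distorted(framebuffer, tile, 2, 2, [(0, 0), (10, 2), (2, 10), (14, 14)])
print(framebuffer[0])                # 1  (texel 0 landed on corner (0, 0))
print(framebuffer[14 * WIDTH + 14])  # 4  (texel 3 landed on corner (14, 14))
```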
The PlayStation didn't have to work in this complex manner. It had a single processor that could calculate the 3D math for transformations of triangle polygons. Basically, it could take a 3D triangle shape, do math on it, and figure out which pixels to draw on the screen to represent the result. Being allowed to directly draw onto the framebuffer and manipulate pixels this way meant it could save a step and "directly draw" the 3D scene it was calculating. This made the PlayStation super speedy at 3D scenes, especially compared to the Saturn. In fact, the PSX was really, really speedy at pushing pixels -- it could directly access pixels on screen just as fast as the Saturn's 2D hardware could draw sprites.
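The "figure out which pixels represent the triangle and write them straight into the framebuffer" step can be sketched in software with edge functions. This is a textbook rasterization technique, not the PSX GPU's actual circuitry; assume counter-clockwise winding and a flat color.

```python
# Sketch: after the 3D math reduces a polygon to a 2D triangle, the
# rasterizer just decides, per pixel, "inside or outside?" and writes
# the result directly into the framebuffer -- no tiles or sprites involved.
WIDTH, HEIGHT = 64, 64
framebuffer = [0] * (WIDTH * HEIGHT)

def edge(ax, ay, bx, by, px, py):
    """Signed-area test: which side of edge A->B is point P on?"""
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def fill_triangle(fb, v0, v1, v2, color):
    xs = [v0[0], v1[0], v2[0]]
    ys = [v0[1], v1[1], v2[1]]
    for y in range(min(ys), max(ys) + 1):          # scan the bounding box
        for x in range(min(xs), max(xs) + 1):
            w0 = edge(*v1, *v2, x, y)
            w1 = edge(*v2, *v0, x, y)
            w2 = edge(*v0, *v1, x, y)
            # inside if all three edge tests agree (counter-clockwise winding)
            if w0 >= 0 and w1 >= 0 and w2 >= 0:
                fb[y * WIDTH + x] = color

fill_triangle(framebuffer, (10, 10), (50, 10), (10, 50), 7)
print(framebuffer[12 * WIDTH + 12])  # 7  (inside the triangle)
print(framebuffer[60 * WIDTH + 60])  # 0  (outside, untouched)
```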
Now, once you reach this level of image manipulation, where you can draw to every single pixel on screen at once, conceptually there is nothing 2D hardware can do that 3D hardware cannot. This "3D" hardware is basically just a super fast blitter. It loses all concepts of things like tiles or sprites, because it doesn't need to define those areas in memory. Its framebuffer is all of that combined. If "3D" hardware wants to draw a scene built from 8x8, 64-pixel "tile palette" entries, it can do that by just directly manipulating the pixels on the screen (and creating objects in memory to represent those graphics structures). "3D" hardware is basically a blank canvas.
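That emulation is almost trivial to sketch: the tilemap, tile set, and palette become ordinary data structures, and "drawing" them is just decoding each index and writing the pixel into the framebuffer. (Tiny 2x2 tiles here to keep it readable; the structure is what matters.)

```python
# Sketch: emulating a tile background on framebuffer hardware -- the
# tilemap/tiles/palette are just data, and rendering is a plain blit loop.
TILE = 2                                   # tiny 2x2 tiles for readability
MAP_W, MAP_H = 2, 2                        # 2x2 tiles -> a 4x4 pixel "screen"
WIDTH, HEIGHT = MAP_W * TILE, MAP_H * TILE
framebuffer = [0] * (WIDTH * HEIGHT)

palette = [0x000000, 0xFFFFFF]             # palette index -> 24-bit color
tiles = [
    [0, 0, 0, 0],                          # tile 0: solid black
    [1, 1, 1, 1],                          # tile 1: solid white
]
tilemap = [0, 1,
           1, 0]                           # checkerboard of tile references

for ty in range(MAP_H):
    for tx in range(MAP_W):
        tile = tiles[tilemap[ty * MAP_W + tx]]
        for py in range(TILE):
            for px in range(TILE):
                color = palette[tile[py * TILE + px]]
                framebuffer[(ty * TILE + py) * WIDTH + (tx * TILE + px)] = color

print(framebuffer[0])   # 0         (top-left pixel: tile 0, black)
print(framebuffer[3])   # 16777215  (top-right pixel: tile 1, white)
```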
Now, you might be saying: if that's true, then why did people hype up the Saturn's 2D capabilities, and why is it considered a better "2D" machine than the PlayStation? Well, that's because this was 1995. Remember, all this "2D" hardware is merely a form of data compression, a way to get more out of your limited amount of RAM. In 1995, the amounts of RAM in the PlayStation and Saturn were so small that the gains from vector quantization still meant something significant. The PlayStation, in terms of video hardware, can do anything the Saturn can do, but doing it might take 10 times more RAM -- and the PlayStation doesn't have that much RAM. Thus, when you needed to make some sort of 2D game with lots of animation and colors and other things that take up RAM, the "2D hardware" of the Saturn shined through and compressed everything on screen so that its small amount of RAM could seem like an enormous amount. The PlayStation had nothing like that inside.
That was 1995. This is 2018. Today, when we talk about video RAM, we're talking gigabytes worth of data. To mimic 2D hardware using modern 3D hardware, you might use 10 times more RAM... but 10 times a handful of megabytes is still absolutely nothing compared to modern amounts of RAM. Thus, whatever compression advantages 2D hardware held 30 years ago literally don't matter anymore. Further, it's way, way easier to work with a framebuffer than it is to work with tilemaps. 2D hardware wasn't easier to work with. Not at all. It was merely more memory-efficient.
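The scale argument in rough numbers (the 8 GB card is just an illustrative figure, not a claim about any particular console):

```python
# One uncompressed 1080p frame vs. a modern amount of video RAM.
WIDTH, HEIGHT, BPP = 1920, 1080, 4           # 32-bit color
framebuffer_bytes = WIDTH * HEIGHT * BPP     # one full frame, no compression

vram_bytes = 8 * 1024**3                     # e.g. an 8 GB card (illustrative)

print(framebuffer_bytes)                      # 8294400  (~8 MB)
print(round(vram_bytes / framebuffer_bytes))  # 1036 full frames fit in VRAM
```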
These "specs" sound like they were designed by laymen. It reads like a forum poster who doesn't understand the terms they're using. This is an embarrassing bullet point that actually shows how poor a grasp they have of the technology.
TL;DR: Bragging about "2D hardware" belies a lack of knowledge about how "2D hardware" works, and ironically achieves the exact opposite of what they were hoping for. There is honestly no such thing as "better 2D hardware than Xbox One/PS4/Switch" because the way they draw graphics already eliminates any need for "2D hardware." The way we draw graphics is literally the apex of what any "2D hardware" was trying to achieve.