93143 wrote:
What I mean is that the N64 (for debatable reasons) failed to convince a certain subset of gamers that it was even more powerful than the PSX at all.

As stated in my previous post, I think it is mostly down to the fact that there was more crossover between the most popular games on PS1 and those with the best visuals than there was on N64. Most of the N64's best-looking games were released later, in a period when sales of the console had drastically slowed down.
93143 wrote:
Maybe there's fanboyism mixed in there, as there tends to be with Mega Drive audio vs. SNES audio, but as in that case there's something to be said for it.

Well, there are potentially three times more PS1 fans than N64 fans. Previously I alluded to poor N64 emulation being another factor. I think emulation had a parallel influence on recent online Mega Drive vs. SNES audio debates. Putting aside whether one is actually 'objectively' better than the other at sound, for a long time (and continuing to some extent) the emulation of the Mega Drive's FM audio was generally poor and (unduly) gave the console a bad name. Conversely, SNES audio emulation tended to be high quality and even sounded better than the real console, which you could argue gave its audio too good a reputation. I think there is some similarity with PS1 and N64, where emulators would commonly flatter the PS1 visually and degrade the N64's graphics.
93143 wrote:
The fact that some people think the PS2 was massively more powerful than the GameCube is much more bewildering. Maybe they're just going on published performance numbers (which were extremely apples-to-oranges that gen)...

I actually would have to go into the camp that says the two consoles were very similar in overall power. Texturing was of course far better on Gamecube, and unlike on PS2 you weren't forced to produce ridiculously strange-sized (and often blurry) framebuffers. But at the same time, the Gamecube's vertex hardware was extremely inflexible, so if you wanted to do any interesting vertex shading with dynamic elements, you were completely stuffed. The CPU couldn't save you there either, because it implemented paired singles rather than full SIMD, so its vertex performance was relatively low. The main memory should also have been larger and faster, to at least match PS2's main RAM, but it wasn't. And the Gamecube's auxiliary memory had such low bandwidth (81 MB/s) that it's better described as a rewritable N64 cartridge on steroids than as actual RAM. Additionally, even with the Gamecube's relatively good pipeline efficiency, there was no way for it (or the Xbox, for that matter) to challenge the PS2's monstrously high destination blending speed.
With Gamecube, ArtX didn't set out to create a PS2 performance beater, but to produce a 'cheaper PS2' with a much gentler development curve through smart design. I would say they succeeded very soundly. And while the Gamecube's vertex shader was inflexible, it actually had similar "peak" vertex power to the Emotion Engine when the right situation presented itself. ArtX tailored Flipper to Factor 5's requirements (after Nintendo, they were the main contributors to ArtX's 'how can we improve on RCP' developer forum), so the vertex shader's capabilities were a great fit for the Rogue Squadron sequels. PS2 was extremely difficult to get good performance out of (there was a notorious Sony statistic presented around 2003 claiming the average PS2 game was only utilizing around 5% of one of the two vector units), but the top games do more 'interesting' things on the vertex end than the top Gamecube games, though framebuffer image quality always remained fairly poor on PS2. (It wasn't always perfect on Gamecube either: if you wanted destination alpha, you were forced into a 6bpc framebuffer instead of 8bpc.) TEV on Gamecube was often underutilized and underrated: despite being a "fixed" pixel shader like the N64's color combiner, it actually had more flexible texture combine stages than the Xbox's GeForce 3. It's also a misconception for some people to say the PS2 had no 'pixel shading'. While the PS2 didn't have a pixel shading unit, its multipass-oriented design made creating pixel-shader-like effects with its blender a practical possibility.
The comparison between the two consoles is very multi-faceted, and I've only scratched the surface here at most.
93143 wrote:
The SNES wasn't all that much more powerful than the NES. Twice the CPU clock speed, twice the word length (but an 8-bit bus that nerfed it a bit, not to mention slow ROM and RAM), similar resolution and video architecture but with higher-quality pixels (kinda like the N64 vs. the PSX) and, uh... six times the video memory bandwidth, with the PPU beefed up to handle it. Yeah, that could be important...

The SNES PPU is a massive leap over the NES PPU. It's the console's biggest asset, and arguably represents a fairly significant improvement over the Mega Drive VDP (though it still has some NES-like oddities in the sprite engine which hold it back somewhat).
93143 wrote:
But where would you get TEXEL1? It has to change per-pixel, but not based on the transform you used to get TEXEL0. Can you get the RDP to load one of the constant colour registers from TMEM every other cycle? It would work fine going the long way, by pre-rendering the additive graphic and reloading it as a texture so you can just step through the pixels one by one (and maybe you could speed that up by pre-rendering and reloading in 8bpp indexed, if the colour profile is sufficiently one-dimensional or if you don't mind nearest neighbour), but I don't see how you get this in one step unless the additive graphic is to be rendered untransformed (like in a 2D sidescroller or something).

I'm not sure what you mean by 'pre-rendering' the additive graphic or constant color registers in TMEM (TMEM has registers?), but yes, one of the biggest issues would be ensuring that the 2D framebuffer coordinates used to create TEXEL1 match the rasterized 2D coordinates of the primitive the RDP is processing. The CPU would first have to generate the appropriate coordinate range, and then use this information to load TMEM. The RSP can write the results of its matrix transforms to its data cache or to main memory, so at least the CPU wouldn't have to do any transforms itself.
Mipmapping has nothing to do with it. RDP supports multitexturing without LOD.
93143 wrote:
Wait a minute. The rasterizer generates an RGBA pixel value in addition to the texture coordinates and LOD level. Where does that come from? Is it just the vertex shader value?

RGBA will come from the RSP.
93143 wrote:
CC0: COMBN = max(0, min(255, (1-0)*TEXEL0 + TEXEL1))
CC1: PIX   = max(0, min(255, (0-1)*TEXEL1 + COMBN))
BL0: BLEND = PIX + MEM
BL1: MEM   = BLEND

Well I stand corrected. Your CC1 is quite a clever way of pre-clamping the pixels. In my mind I had incorrectly imagined that any value underflow would immediately be clamped (either intentionally, or because the internal register doesn't permit signed values), but of course the more likely behavior is that it all works properly, with no clamping occurring until the end.
However, clever as that is, I think devoting one stage of the color combiner and one stage of the blender just to additive blending is unnecessary. If you look at how the hardware additive mode in the blender works, it has these particular blender flags turned on:
1) enable color/cvg read/modify/write memory access
2) don’t overwrite memory cvg (i.e. does not disturb the anti-aliasing of silhouette edges)
3) force blend enable
4) ZMODE opaque surface (i.e. updates z-buffer).
We know these flags are safe to use with additive blending, because SGI chose them for it. If you were to create a blending mode that used these flags but without the official additive mode being enabled (just a normal blend formula, which won't do very much here), it should be perfectly compatible with anti-aliasing and the z-buffer in a single blender stage. However, note that SGI treats additive-blended surfaces as though they are opaque.