All times are UTC - 7 hours

PostPosted: Fri Apr 06, 2018 10:30 am 

Joined: Thu Mar 29, 2018 8:14 am
Posts: 10
93143 wrote:
What I mean is that the N64 (for debatable reasons) failed to convince a certain subset of gamers that it was even more powerful than the PSX at all.

As stated in my previous post, I think it is mostly down to the fact there was more crossover between the most popular games on PS1 and those with the best visuals than there was on N64. Most of N64's best looking games were released in a later period where sales for the console had drastically slowed down.

93143 wrote:
Maybe there's fanboyism mixed in there, as there tends to be with Mega Drive audio vs. SNES audio, but as in that case there's something to be said for it.

Well, there are potentially three times more PS1 fans than N64 fans. Previously I alluded to poor N64 emulation being another factor. I think emulation had a parallel influence on recent online Mega Drive vs SNES audio debates. Putting aside whether one is actually 'objectively' better than the other at sound, for a long time (and continuing to some extent) the FM emulation of Mega Drive's audio was generally poor and (unduly) gave the console a bad name. Conversely, SNES audio emulation tended to be high quality and even sounded better than the real console, which you could argue gave its audio too good of a reputation. I think there is some similarity with PS1 and N64, where emulators would commonly boost PS1 visually and degrade N64's graphics.

93143 wrote:
The fact that some people think the PS2 was massively more powerful than the GameCube is much more bewildering. Maybe they're just going on published performance numbers (which were extremely apples-to-oranges that gen)...

I'd actually have to join the camp that says the two consoles were very similar in overall power. Texturing was of course far better on Gamecube, and unlike on PS2 you didn't have to produce strangely sized (and often blurry) framebuffers. But at the same time, the Gamecube's vertex hardware was extremely inflexible, so if you wanted to do any interesting vertex shading with dynamic elements, you were completely stuffed. The CPU couldn't save you there either, because it implemented paired singles rather than full SIMD, so its vertex performance was relatively low. The main memory should also have been larger and faster to at least match PS2's main RAM, but it wasn't. And Gamecube's auxiliary memory had such low bandwidth (81 MB/s) that it's better described as a rewritable N64 cartridge on steroids than as actual RAM. Additionally, even with the Gamecube's relatively good pipeline efficiency, there was no way for it (or the Xbox, for that matter) to challenge the PS2's monstrously high destination blending speed.
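To put that 81 MB/s figure in perspective, the arithmetic is simple (a rough sketch that assumes the full rate is achievable, which is optimistic):

```c
#include <assert.h>

/* How long it takes to stream n bytes out of Gamecube auxiliary RAM
   at its ~81 MB/s bandwidth. Assumes the full rate is achievable,
   which is optimistic. */
static double aram_copy_ms(long bytes)
{
    return bytes / (81.0 * 1024 * 1024) * 1000.0;
}
```

Pulling a single 1 MB asset across takes about 12 ms, most of a 60 fps frame, which is why it behaves more like a cartridge you stream from than RAM you work in.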

With Gamecube, ArtX didn't set out to create a PS2 performance beater, but to produce a 'cheaper PS2' with a much gentler development curve through smart design. I would say they succeeded very soundly. And while the Gamecube's vertex shader was inflexible, it actually had similar "peak" vertex power to the Emotion Engine when the right situation presented itself. ArtX tailored Flipper to Factor 5's requirements (after Nintendo, they were the main contributors to ArtX's 'how can we improve on RCP' developer forum), so the vertex shader's capabilities were a great fit for the Rogue Squadron sequels. The PS2 was extremely difficult to get good performance out of (there was a notorious Sony statistic presented around 2003 where the average PS2 game was only utilizing around 5% of one of the two vector units), but its top games do more 'interesting' things on the vertex end than the top Gamecube games, though framebuffer image quality always remained fairly bad on PS2 (it wasn't always perfect on Gamecube either - if you wanted destination alpha, you were forced into 6bpc framebuffers instead of 8bpc). TEV on Gamecube was often underutilized and underrated. Despite being a "fixed" pixel shader like the N64's color combiner, it actually had more flexible texture combine stages than the Xbox's GeForce 3. It's also a misconception for people to say the PS2 had no 'pixel shading'. While the PS2 didn't have a pixel shading unit, its multipass-based design made creating pixel-shading-like effects with the blender a practical possibility.

The comparison between the two consoles is very multi-faceted, and at most I've only scratched the surface here.

93143 wrote:
The SNES wasn't all that much more powerful than the NES. Twice the CPU clock speed, twice the word length (but an 8-bit bus that nerfed it a bit, not to mention slow ROM and RAM), similar resolution and video architecture but with higher-quality pixels (kinda like the N64 vs. the PSX) and, uh... six times the video memory bandwidth, with the PPU beefed up to handle it. Yeah, that could be important...

The SNES PPU is a massive leap over the NES PPU. It's the console's biggest asset, and arguably represents a fairly significant improvement over the Mega Drive VDP (though it still has some NES-like oddities in the sprite engine which hold it back somewhat).

93143 wrote:
But where would you get TEXEL1? It has to change per-pixel, but not based on the transform you used to get TEXEL0. Can you get the RDP to load one of the constant colour registers from TMEM every other cycle?
It would work fine going the long way, by pre-rendering the additive graphic and reloading it as a texture so you can just step through the pixels one by one (and maybe you could speed that up by pre-rendering and reloading in 8bpp indexed, if the colour profile is sufficiently one-dimensional or if you don't mind nearest neighbour), but I don't see how you get this in one step unless the additive graphic is to be rendered untransformed (like in a 2D sidescroller or something).

I'm not sure what you mean by 'pre-rendering' the additive graphic or constant color registers in TMEM (TMEM has registers?), but yes, one of the biggest issues would be ensuring that the 2D framebuffer coordinates used to create TEXEL1 match the rasterized 2D coordinates of the primitive which RDP is processing. The CPU would first have to generate the appropriate coordinate range, and then use this information to load TMEM. RSP can write out the result of its matrices to its data cache or main memory, so at least the CPU wouldn't have to do any transforms itself.

Mipmapping has nothing to do with it. RDP supports multitexturing without LOD.

93143 wrote:
Wait a minute. The rasterizer generates an RGBA pixel value in addition to the texture coordinates and LOD level. Where does that come from? Is it just the vertex shader value?

RGBA will come from RSP.

93143 wrote:
Code:
CC0: COMBN = max(0,min(255,(1-0)*TEXEL0+TEXEL1))
CC1: PIX = max(0,min(255,(0-1)*TEXEL1+COMBN))
BL0: BLEND = PIX + MEM
BL1: MEM = BLEND


Well I stand corrected. Your CC1 is quite a clever way of pre-clamping the pixels. In my mind I had incorrectly imagined that any value underflow would immediately be clamped (either intentionally or the internal register not permitting signed values), but of course the more likely behavior is that it will work properly with no clamping occurring until the end.
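To convince myself, I sketched the arithmetic of your pipeline in C (not RDP-accurate, just the math, and I'm assuming MEM holds the same value that was loaded as TEXEL1):

```c
#include <assert.h>

/* Sketch of the two-cycle combine above. Assumes MEM (the framebuffer
   pixel) holds the same value as TEXEL1. The combiner clamps per
   cycle; the blender's PIX + MEM add does not clamp, hence CC1's
   pre-clamp. Not cycle-accurate RDP behavior, just the arithmetic. */
static int clamp255(int v) { return v < 0 ? 0 : (v > 255 ? 255 : v); }

static int additive_pixel(int texel0, int texel1)
{
    int combn = clamp255((1 - 0) * texel0 + texel1); /* CC0 */
    int pix   = clamp255((0 - 1) * texel1 + combn);  /* CC1: pre-clamp */
    return pix + texel1;                             /* BL0: PIX + MEM */
}
```

For every input pair this comes out to min(TEXEL0 + MEM, 255), i.e. a saturating add, and the blender's unclamped add never overflows.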

However, despite your ideas I think devoting one stage of the color combiner and one stage of the blender just to additive blending is unnecessary. If you look at how the hardware additive mode in the blender works, it has these particular blender flags turned on:
1) enable color/cvg read/modify/write memory access
2) don’t overwrite memory cvg (i.e. does not disturb the anti-aliasing of silhouette edges)
3) force blend enable
4) ZMODE opaque surface (i.e. updates z-buffer).

We know these flags are safe to use with additive blending, because SGI chose them for it. If you were to create a blending mode that used these flags but without the official additive mode being enabled (just normal blend mode, which won't do very much here), it should be perfectly compatible with anti-aliasing and the z-buffer in a single blender stage. However, note that SGI treats additive blended surfaces as if they were opaque.


PostPosted: Fri Apr 06, 2018 10:52 am 

Joined: Thu Mar 29, 2018 8:14 am
Posts: 10
calima wrote:
Realityengine, do you know what was the optimal way to deal with large textures? Was Rare's way documented anywhere?

I can certainly imagine preprocessing to split models and textures to smaller ones, but I don't think that's the best way.

Good memory bandwidth management. I believe that was exactly Rare's method. Their games would constantly "defragment" the position of textures loaded in RAM. As main RAM suffered from a large random access penalty, this meant maintaining the position of textures in RAM that would facilitate, as much as possible, linear access or (less preferably but acceptably) banked access.

EDIT: Other good techniques were using low color depth textures (since they consumed less texture memory) and then adding color through vertex shading, and repeatedly tiling a texture (so as not to stretch it out too much) followed by using multitexturing to blend another texture a few times arbitrarily over the tiled texture to make the pattern look less repetitive and more detailed.

You needed to split models and textures into smaller pieces, since each primitive could only have 4 KB of texture data mapped to it at most. I imagine this kind of thing could have caused bigger headaches for artists than programmers. Switching in new texture data in the middle of a single primitive being rendered was not officially supported.
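The budgeting involved is simple enough to sketch (this ignores the RDP's line-size padding and the split-bank layout used for 32-bit texels, so treat it as an upper bound):

```c
#include <assert.h>

/* Rough TMEM fit check: does a w x h texture at the given bit depth
   fit in the 4 KB texture memory? Ignores line padding and the
   split-bank layout for 32-bit texels, so it's only an upper bound. */
static int fits_in_tmem(int w, int h, int bits_per_texel)
{
    return w * h * bits_per_texel / 8 <= 4096;
}
```

So a 16-bit texture tops out around 64x32 texels, which goes a long way toward explaining the typical look of N64 games.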


PostPosted: Fri Apr 06, 2018 3:27 pm 

Joined: Fri Jul 04, 2014 9:31 pm
Posts: 926
realityengine wrote:
93143 wrote:
What I mean is that the N64 (for debatable reasons) failed to convince a certain subset of gamers that it was even more powerful than the PSX at all.
As stated in my previous post, I think it is mostly down to the fact there was more crossover between the most popular games on PS1 and those with the best visuals than there was on N64. Most of N64's best looking games were released in a later period where sales for the console had drastically slowed down.

I think we're on the same page here.

Quote:
Putting aside whether one is actually 'objectively' better than the other at sound, for a long time (and continuing to some extent) the FM emulation of Mega Drive's audio was generally poor and (unduly) gave the console a bad name. Conversely, SNES audio emulation tended to be high quality and even sounded better than the real console, which you could argue gave its audio too good of a reputation.

Inaccurate emulation could make the SNES sound worse too. F-Zero sounded horrible in ZSNES because the engine noise was poorly emulated. Also, non-damping interpolation schemes sometimes brought out noise that the real system muffled, or threw off the intended mix balance, though I agree that in general they tended to be an improvement (you could compensate for the muffling with proper sample prefiltering, but I don't think most developers did).

One complicating factor is that the best-known games didn't always have the best audio, and there was unexplored potential in both systems. Comparing Sonic the Hedgehog with Super Mario World doesn't tell you much. Comparing Time Trax with Green Lantern gives you a better sense of the possible, but those still don't stretch either system to its limits. I'm not sure about the Mega Drive, but I'd say the S-SMP has still not been fully exploited even in the chiptune scene.

Quote:
I think there is some similarity with PS1 and N64, where emulators would commonly boost PS1 visually and degrade N64's graphics.

I'd consider that likely. A lot of people either only know emulation or have gotten used to it, and don't even realize or remember that PSX games had such bad polygons. I myself was away from my N64 for an extended period, and was surprised when I got back and realized that there were lighting effects in F-Zero X not properly represented in PJ64 - I knew how it looked originally, I knew the emulation was glitchy, and the emulator still trained me to think the game looked worse than it did.

Quote:
The comparison between the two consoles is very multi-faceted and I would have only scratched the surface at most.

Looks like it was even more apples-to-oranges than I thought...

Quote:
though it still has some NES-like oddities in the sprite engine which hold it back somewhat.

Tell me about it. I'm trying to port an advanced shmup to the SNES, and I can't even let myself daydream about being able to specify sprite sizes per-axis per-sprite like on the Mega Drive. I did get around the 16 KB limit, but the technique I used isn't easy to generalize...

Quote:
I'm not sure what you mean by 'pre-rendering' the additive graphic

Draw the additive graphic in a blank secondary framebuffer with the appropriate transforms and interpolation, and maybe vertex shading/mipmapping/fogging if desired, so it appears as it would onscreen. Then reload the result as a texture. This way you can just step through the pixels in both the additive graphic and the framebuffer texture, without worrying about the transforms being different between them. (It may be reasonable to pre-render whole objects rather than triangles, so that you're dealing with longer horizontal runs when reloading TMEM.)

(Can you use transforms and interpolation and such when drawing in 8bpp?)

Or is it actually possible to combine a transformed additive texture with an untransformed framebuffer texture in a single pass? I still don't have a clear idea of how flexible the rasterizer is. With a simple zoom you could just set the texture size appropriately, but anything more complex seems like it wouldn't work...

Quote:
or constant color registers in TMEM (TMEM has registers?)

No, the color combiner has registers that can be used as sources for its operations. I assume you have to load them with the RSP, but I don't know for a fact that the RDP can't load them from TMEM automatically, so I asked.

Quote:
2) don’t overwrite memory cvg (i.e. does not disturb the anti-aliasing of silhouette edges)

Oh. Well, that solves that problem.

Quote:
4) ZMODE opaque surface (i.e. updates z-buffer).

This doesn't seem like such a big deal if you handle transparencies properly (draw them last, and/or use Z ordering). It shouldn't be necessary to use this setting; you'd get glitchy results either way if you did something dumb enough to need it. Not sure why I was worried...

realityengine wrote:
Switching in new texture data in the middle of a single primitive being rendered was not officially supported.

I believe that's what people claim Rare did. Not for all of their games, but supposedly some later ones. I can't find a reference and I may be wrong.


PostPosted: Sat Apr 07, 2018 1:30 am 

Joined: Tue Oct 06, 2015 10:16 am
Posts: 748
realityengine wrote:
Switching in new texture data in the middle of a single primitive being rendered was not officially supported.
This is exactly what I was curious about. Did anybody manage to do it, and if so, how.


PostPosted: Sat Apr 07, 2018 10:14 pm 

Joined: Mon Sep 15, 2014 4:35 pm
Posts: 3339
Location: Nacogdoches, Texas
93143 wrote:
I believe that's what people claim Rare did. Not for all of their games, but supposedly some later ones. I can't find a reference and I may be wrong.

How would this even work? Seems like you'd have to make a lot of assumptions for how the polygon is being drawn onscreen.

realityengine wrote:
As stated in my previous post, I think it is mostly down to the fact there was more crossover between the most popular games on PS1 and those with the best visuals than there was on N64. Most of N64's best looking games were released in a later period where sales for the console had drastically slowed down.

Yeah, I'm not sure how you can say the PS1's visuals hold up to the N64's after seeing things like Star Wars Episode I: Racer, Conker's Bad Fur Day, Indiana Jones and the Infernal Machine, and World Driver Championship...

realityengine wrote:
and unlike on PS2 you didn't have to produce ridiculously strange sized (and often blurry) framebuffers

What now? I was always under the impression many PS2 games were blurrier because the PS2 didn't have enough graphics processing power to fill a 640x480 framebuffer, not because of some other limitation.

realityengine wrote:
Also the main memory should have been larger and faster to at least match PS2's main RAM, but it didn't. At the same time, Gamecube's auxiliary memory had such low bandwidth (81 MB/s) it's better to describe it as like a rewritable N64 cartridge on steroids as opposed to actual RAM.

Isn't this pretty much designed for audio, where the very low bandwidth wouldn't be an issue? I can't imagine the GameCube was really at a RAM disadvantage to the PS2 in most games. Nintendo did seem to have a RAM problem (even proportional to the rest of the hardware) for whatever reason, though: you've got the NES with its puny 2KB of RAM; the SNES with its slow-ass main RAM, its not-so-stellar 64KB of audio RAM, and sprites only having access to 16KB of RAM (which isn't really a problem with the RAM itself, but whatever); and the N64 with its apparently slow and, although larger than the competition's, still small amount of RAM. Even go all the way to the Wii U, which had 2GB of RAM but the OS and auxiliary processes took almost half of it, if I'm not mistaken.

realityengine wrote:
With Gamecube, ArtX didn't set out to create a PS2 performance beater

Well, that's what they did. Resident Evil 4 is a famous example of the GameCube looking very noticeably better than the PS2; my impression is this game actually stopped most of the debate. Regardless of the insane polygons-per-second numbers given by Sony, games on the GameCube often looked better than those on the PS2, and if not, they looked identical, which can probably be attributed to a lack of effort on the developer's part anyway, given the huge difference in sales between the PS2 and the GameCube/Xbox.


PostPosted: Sun Apr 08, 2018 2:48 am 

Joined: Thu Mar 29, 2018 8:14 am
Posts: 10
93143 wrote:
Looks like it was even more apples-to-oranges than I thought...

The only consoles of that generation with reasonably comparable graphics hardware are the Gamecube and Xbox (and Xbox soundly wins, aside from the occasional pixel-shading situation where TEV is better, and the sometimes faster blending on the eSRAM). PS2's Graphic Synthesizer is still unique to this day because of its focus on massively high overdraw (particularly useful for alpha blending, where you can't avoid overdraw), and the Dreamcast's PowerVR2 has a mostly unique (on a home device, at least) focus on massively low overdraw. I think this particular contrast added to the PS2 hype back in the day, when the console was touted as an absolute monster that would make even the Dreamcast look last-gen. When the Dreamcast's VRAM bandwidth is 0.8 GB/s and the PS2's is 48 GB/s, to the layman the Dreamcast looks extremely weak. Of course, because of the overdraw design difference, the PS2 has to read/write VRAM all of the time while the Dreamcast only has to do it infrequently, so the VRAM comparison is stripped of almost all meaning in context.
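To put rough numbers on the overdraw point (illustrative figures only: a 640x448 screen at 60 Hz, with bytes_per_access standing in for color-plus-Z traffic per fragment; these are my assumptions, not measurements):

```c
#include <assert.h>

/* Illustrative framebuffer traffic in GB/s for a 640x448 @ 60 Hz
   screen. overdraw is the average number of fragments landing on each
   pixel per frame; bytes_per_access covers color + Z per fragment.
   Guesswork figures for the sake of the comparison, not measurements. */
static double traffic_gbs(double overdraw, int bytes_per_access)
{
    double pixels_per_sec = 640.0 * 448.0 * 60.0;
    return pixels_per_sec * overdraw * bytes_per_access / 1e9;
}
```

A tile-based deferred renderer that touches each pixel roughly once (overdraw 1, ~8 bytes written) needs around 0.14 GB/s, comfortably inside the Dreamcast's 0.8 GB/s, while an immediate-mode renderer's requirement scales linearly with overdraw and read-modify-write, which is what the GS's 48 GB/s is actually buying.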

93143 wrote:
Draw the additive graphic in a blank secondary framebuffer with the appropriate transforms and interpolation

It's a 'neater' way of doing things, but to me it sounds like a mostly unnecessary waste of the console's processing and memory time.

93143 wrote:
(Can you use transforms and interpolation and such when drawing in 8bpp?)

Are you trying to ask here if RDP supports 8bpp output? I'm fairly sure the answer is no.

93143 wrote:
Or is it actually possible to combine a transformed additive texture with an untransformed framebuffer texture in a single pass?

I still think the only way to do one-pass additive blending with textures is just using the hardware additive blender.

93143 wrote:
the color combiner has registers, that can be used as sources for its operations

I can't see them being specifically useful here (though they could be useful for the additional color combiner pass). If you wanted to do additive blending without a texture against the framebuffer, it would be better to just use the vertex shade rather than the registers.

calima wrote:
This is exactly what I was curious about. Did anybody manage to do it, and if so, how.

It's not really a practical possibility, because you can only sync the RDP pipeline per-primitive. There's an enormous, virtually unavoidable risk that swapping the TMEM data in the middle of a primitive will either result in garbage texels being loaded into the pipeline (from an incomplete texture transfer) or simply the timing of the swap-over point being wrong (meaning the point on the primitive surface where the old texture stops and the new texture starts won't be where expected).

Espozo wrote:
What now? I was always under the impression many PS2 games were blurrier because the PS2 didn't have enough graphics processing power to fill a 640x480 framebuffer, not because of some other limitation.

No, the problem was that due to the Graphic Synthesizer's reliance on extremely high drawing speed, maximizing performance meant creating as many back buffers in VRAM as possible (main RAM was invisible to the Graphic Synthesizer). Unfortunately, with the VRAM being only 4 MB, that didn't leave a lot of room (plus the texture cache had to share that space). That left developers with a range of choices, none of them good. For the framebuffers to all fit, they either had to decrease the framebuffer size (causing jaggies and/or blur), decrease color depth (causing banding), or shrink the texture cache (usually resulting in lower texture resolution and/or memory thrashing). Remember also that the PS2 has no hardware texture compression except CLUT (if you can call that compression), putting more pressure on that limited space, though 'software' (i.e. vector-unit-driven) techniques were developed later in the console's life.
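The arithmetic behind those choices is brutal. As a back-of-envelope (assuming a 640x448 target, double-buffered, with 32-bit color and a 32-bit z-buffer; the real GS allocates VRAM in pages, so these are approximations):

```c
#include <assert.h>

/* Back-of-envelope GS VRAM budget. The real GS allocates VRAM in
   pages, so these figures are approximations. */
static long buffer_bytes(int w, int h, int bytes_per_pixel)
{
    return (long)w * h * bytes_per_pixel;
}
```

Front + back + Z at 640x448 in 32-bit costs 3 x 1,146,880 = 3,440,640 bytes, leaving barely 736 KB of the 4 MB for textures; drop to 512x448 at 16-bit and the same three buffers cost about 1.31 MB, which is exactly the kind of trade those odd buffer sizes were making.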

Also, because the Graphic Synthesizer's anti-aliasing unit didn't work (a broken design in silicon), developers had to come up with buffer tricks to smooth out jaggies. One way was to have different-sized front and back buffers and try to 'supersample' the output. Of course, while this had some success in removing jaggies, it also created a lot of blur, since it wasn't proper supersampling.

There are more complicated factors at play, but I can tell you that while the PS2 had a lot of problems, pixel fill speed was the least of them. At just filling any given resolution with pixels, it was much faster than the other consoles of its generation (it did have 4 times more pixel pipelines than Xbox and Gamecube, after all).

EDIT: Gamecube only stored one back buffer in VRAM (eSRAM). The front buffers (or any other buffers) all had to be copied to main RAM. While this meant the Gamecube didn't generally have size limitations, it did put a significant damper on memory bandwidth. It also meant that with MSAA on, the back buffer had to be smaller, which is probably why MSAA was rarely used. A neat bit of trivia: Flipper is capable of z-buffer compression, but only when MSAA is enabled (whereas Xbox's NV2A does it all the time).

Espozo wrote:
Isn't this pretty much designed for audio, where the very low bandwidth wouldn't be an issue? I can't imagine the GameCube was really at a ram disadvantage from the PS2 with most games.

Sure, for audio the speed is not a big problem (PS2 also has audio RAM, which is much smaller but actually faster), but the fact is that Gamecube's main RAM is only 24 MB while the PS2's is 32 MB. It was a constant annoyance for Gamecube developers to have to swap things from slow auxiliary RAM into the much faster main RAM. I guess you could argue that the bandwidth difference between the main RAM on the two consoles was not that significant anyway, because in practice the PS2's RDRAM, with its high latency, would have a lower effective bandwidth, while the Gamecube's 1T-SRAM would get close to its peak.

Curiously, the Gamecube received a significant downclocking prior to its release. Flipper originally ran at 200 MHz (later 162 MHz) and main RAM had 3.2 GB/s of bandwidth (same as PS2; later downgraded to 2.6 GB/s). I think the Gekko CPU was made faster, though (it probably needed changing anyway due to the different system bus multiplier).

Espozo wrote:
Resident Evil 4 is a famous example of the GameCube looking very noticeably better than the PS2; it was my impression this game actually stopped most of the debate.

I wouldn't put too much stock in this comparison. Resident Evil 4 was a Gamecube exclusive for almost all of its development lifespan. Given the major differences in graphics hardware architecture between the two consoles, I would say that any game that was not developed as multiplatform from the start could not be properly ported across the two in a way that would maximize their power (at least, not without a lot of extra development time).

Espozo wrote:
which can probably be attributed to a lack of effort on the developer anyway seeing the huge difference in sales between the PS2 and the GameCube/Xbox.

Sure, but the PS2 was also way harder to develop for than the Gamecube, so it kind of balanced out. IMO the real reason the graphics between the two (excluding the worst efforts on each) looked fairly equal overall is because their hardware power was pretty evenly matched despite different strengths.

In my experience checking out multiplatform versions between the two consoles, I generally noticed that the Gamecube versions almost always had higher framebuffer resolution and better texture quality, but the PS2 versions broadly had better vertex-related things, like higher quality reflections and nicer quality lighting. Just an observation of mine, which I think lines up with a reasonably informed view on their hardware capabilities.


PostPosted: Sun Apr 08, 2018 8:39 am 

Joined: Mon Sep 15, 2014 4:35 pm
Posts: 3339
Location: Nacogdoches, Texas
realityengine wrote:
I wouldn't put too much stock in this comparison. Resident Evil 4 was a Gamecube exclusive for almost all of its development lifespan. Given the major differences in graphics hardware architecture between the two consoles, I would say that any game that was not developed as multiplatform from the start could not be properly ported across the two in a way that would maximize their power (at least, not without a lot of extra development time).

Well, here's a game that was developed for the PS2 and released on the GameCube and Xbox later. Granted, the only real difference is the framebuffer size, but I'd figure that if a game were being made with the PS2 in mind, they'd scale up the load to where the system would be running at about full capacity even with the smaller framebuffer, unless it doesn't quite work like that...

Attachment:
Burnout 1.png

realityengine wrote:
Curiously, the Gamecube received a significant downclocking prior to its release. Flipper originally ran at 200 MHz (later 162 MHz) and main RAM had 3.2 GB/s of bandwidth (same as PS2; later downgraded to 2.6 GB/s). I think the Gekko CPU was made faster, though (it probably needed changing anyway due to the different system bus multiplier).

Yeah, I've heard they sped up the CPU too, from something like the low 400s MHz to 486 MHz. Weird that they'd slow everything else down; I guess it was running into heat-related issues?

realityengine wrote:
but the PS2 versions broadly had better vertex-related things, like higher quality reflections and nicer quality lighting

Have any examples? I've never actually noticed this. Multiplatform games on the GameCube have pretty much always been greater or equal to the PS2 from what I've seen. I do cite a lack of effort though because many Xbox ports I've seen look no better than the GameCube versions. I've heard that the CPU's between both are pretty much evenly matched, but that the Xbox has an advantage with the GPU.


PostPosted: Sun Apr 08, 2018 9:35 am 

Joined: Thu Mar 29, 2018 8:14 am
Posts: 10
Espozo wrote:
Well, here's a game that was developed for the PS2 and released on the GameCube and Xbox later. Granted, the only real difference is the framebuffer size, but I'd figure that if a game were being made with the PS2 in mind, they'd scale up the load to where the system would be running at about full capacity even with the smaller framebuffer, unless it doesn't quite work like that...

I don't think that's quite the best example. Burnout was clearly developed as a multiplatform game (I don't really see how they could turn around both Gamecube and Xbox ports in about 5 months otherwise). Granted, it'll always be easier to port from PS2 to Gamecube/Xbox than the other way around, if for no other reason than the PS2's esoteric hardware. The size of the framebuffer would have had little impact on performance given that, as I've said, the PS2 was not really fill-rate bound at all (and if supersampling, the actual rendering buffer might not even be smaller). Those early PS2 games would likely be more bound by how much performance they could wring out of the twin vector units (most developers in 2002 were only able to use 5% of VU0) and by dealing with the PS2's relatively low-performance MIPS core. Add to that extra-bad image quality (including textures), from not yet having worked out the best approaches to dealing with the limited VRAM space.

Espozo wrote:
Yeah, I've heard they sped up the CPU too from something like low 400's Mhz to 486Mhz. Weird that they'd slow everything else down; I guess it was running into heat related issues?

It may also have been yields. Flipper was a pretty big chip, so parts of it may not have handled the higher speed correctly on too many production samples.

EDIT: N64's RCP was also downclocked before launch (66 MHz to 62.5 MHz). So was the CPU (100 MHz to 94 MHz).

Espozo wrote:
Have any examples? I've never actually noticed this. Multiplatform games on the GameCube have pretty much always been greater or equal to the PS2 from what I've seen. I do cite a lack of effort though because many Xbox ports I've seen look no better than the GameCube versions. I've heard that the CPU's between both are pretty much evenly matched, but that the Xbox has an advantage with the GPU.

I don't really want to start a PS2 vs Gamecube holy war. Plus, it's really easy to cherrypick particular games to push a particular view; I prefer a more detached perspective. But I can give you one example: https://www.youtube.com/watch?v=ORbVdTBeUOU. Notice how the PS2's framebuffer, as usual, is low resolution, but it has a much more sophisticated lighting and reflection model than the Gamecube version? Well, it might be a bit hard to tell unless you look closely.

Regarding Gekko (a PPC750 derivative) vs the Pentium III in the Xbox, I think the Pentium III has a fair edge in general performance. In old PC vs Mac benchmarks, a 500 MHz PPC750 would generally lose by a small margin to a Pentium III at 733 MHz (which might be down to the PPC750's fairly slow cache). Though when dealing with really bad code, I can see Gekko's larger L2 cache (256 KB vs 128 KB) being a bit more resilient. As for SIMD, I think there's little question that Gekko is significantly slower. Whatever per-cycle advantage the PPC750 has over the PIII would be wiped out by the former only having paired singles and the latter having SSE (even though the PIII's SSE sucks). So the Xbox's PIII would win the SIMD contest on clock speed alone (and probably by about that margin in percentage terms).
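As a toy illustration of that SIMD gap (pure issue-rate arithmetic: it ignores memory, pipelining, and the fact that the PIII internally cracks 4-wide SSE ops into two 2-wide halves, so it flatters the PIII somewhat):

```c
#include <assert.h>

/* Toy throughput model: one vector instruction per cycle, 'lanes'
   floats per instruction. Ignores memory, pipelining, and the PIII's
   internal 2-wide SSE execution, so this flatters the PIII a bit. */
static double microseconds(long n_floats, int lanes, double mhz)
{
    return ((double)n_floats / lanes) / mhz; /* MHz = cycles per us */
}
```

For a million floats, paired singles at 486 MHz need about 1.03 ms of issue time versus about 0.34 ms for 4-wide SSE at 733 MHz, roughly a 3x gap on paper.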


PostPosted: Sun Apr 08, 2018 10:08 am 

Joined: Mon Sep 15, 2014 4:35 pm
Posts: 3339
Location: Nacogdoches, Texas
realityengine wrote:
(I don't really see how they could turn around both Gamecube and Xbox ports in about 5 months otherwise)

I didn't think 5 months was particularly short to port over a game, especially one as barebones as this one. I doubt they programmed the game in MIPS assembly (if that's even humanly possible...)

realityengine wrote:
Well it might be a bit hard to tell unless you look closely.

It's not; the GameCube version looks considerably worse, almost as if on purpose (for starters, why is it so much brighter?). Although, you could argue that about Resident Evil 4 as well. The difference in hair quality sticks out like a sore thumb.


PostPosted: Sun Apr 08, 2018 10:32 am 

Joined: Thu Mar 29, 2018 8:14 am
Posts: 10
Espozo wrote:
I didn't think 5 months was particularly short to port over a game, especially one as barebones as this one. I doubt they programmed the game in MIPS assembly (if that's even humanly possible...)

It's less time than that, because the game still has to go through QA (including by the licensor) and then manufacturing (a fairly long wait for the Gamecube mini-discs). It also seems this wasn't actually the first RenderWare-engine game on Gamecube; THPS3 was already using it. IIRC THPS3 had pretty bad performance on Gamecube, so they might have spent the Burnout porting time optimizing the engine for the console. In any case, I think the fact it ran on RenderWare proves the technology wasn't meant for only one platform.

Espozo wrote:
It's not; the GameCube version looks considerably worse, almost as if on purpose (for starters, why is it so much brighter?). Although, you could argue that about Resident Evil 4 as well. The difference in hair quality sticks out like a sore thumb.

Might have been a (bad) attempt to hide the downgraded lighting. As for Resident Evil 4, if the hair is that bad it sounds like they botched the alpha (which should otherwise be a strength of the PS2). Any port that uses prerecorded footage from the original version for cutscenes is, in my mind, a red flag for a port that didn't have enough time and/or money behind it. South Park for PS1 had the same thing going on.


PostPosted: Sun Apr 08, 2018 11:01 am 

Joined: Tue Oct 06, 2015 10:16 am
Posts: 748
How about using the vertex unit as a geometry processor then, outputting triangles that do not exist in the source model? Would such an approach have any advantage over just having a preprocessed model?


PostPosted: Sun Apr 08, 2018 12:30 pm 

Joined: Thu Mar 29, 2018 8:14 am
Posts: 10
calima wrote:
How about using the vertex unit as a geometry processor then, outputting triangles that do not exist in the source model? Would such an approach have any advantage over just having a preprocessed model?

In terms of getting around that texturing issue you previously mentioned, probably not. But to create interesting surfaces for extra detail? Tessellation has value.

https://ultra64.ca/files/other/Game-Developer-Magazine/GDM_November_1999_Putting_Curved_Surfaces_to_Work_on_the_Nintendo_64.pdf


PostPosted: Sun Apr 08, 2018 1:15 pm 

Joined: Mon Sep 15, 2014 4:35 pm
Posts: 3339
Location: Nacogdoches, Texas
realityengine wrote:
As for Resident Evil 4, if the hair is that bad it sounds like they botched the alpha (which should otherwise be a strength of the PS2).

I'm not familiar enough with 3D hardware to tell you if that's the problem. The scene isn't exactly the same so it isn't a completely fair comparison, but look:

Image


PostPosted: Sun Apr 08, 2018 4:28 pm 

Joined: Fri Jul 04, 2014 9:31 pm
Posts: 926
realityengine wrote:
93143 wrote:
Draw the additive graphic in a blank secondary framebuffer with the appropriate transforms and interpolation
It's a 'neater' way of doing things, but to me it sounds like a mostly unnecessary waste of the console's processing and memory time.

Okay, but 'neater' than what? If my assumptions are right, I'm not sure there is any other way to do clamped additive blending on the N64 in the general case (other than attempting a reverse transform on the framebuffer, which I admit this method is neater than...).

Whether using clamped additive blending is a good idea in a specific instance can only be determined once a broadly optimal method has been established and profiled (and I doubt we'll even get that far). If there's a faster way than what I've described, I want to hear about it.

I think the problem can be subdivided into (at least) five plausible scenarios, only one of which requires the full treatment:

1. Full-Screen One-Step Blending
The SNES does this. All you have to do is fill two framebuffers, a main screen and a subscreen, taking care to set alpha for main screen pixels to indicate whether or not to blend with the subscreen. Then just run both screen buffers through the RDP and do the alpha-keyed blend in the color combiner. Should be reasonably efficient and predictable, since you're dealing with a constant amount of contiguous data. Still takes something like a quarter of a frame for 320x240 16-bit at 60 fps, but you only have to do it once.

Maybe it's also possible to combine the buffers using Z instead of alpha, but I haven't thought about that. Maybe it's not.
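For clarity, a plain-C software model of the per-pixel operation this scheme would configure the color combiner to perform (this is not RDP command code, and the 8-bit channels are an assumption for illustration):

```c
#include <stdint.h>

/* Hypothetical per-pixel model of the alpha-keyed, clamped additive
 * blend of a main screen pixel with the corresponding subscreen pixel.
 * On real hardware this would run per channel inside the RDP's color
 * combiner, not on the CPU. */
static inline uint8_t blend_channel(uint8_t main_c, uint8_t sub_c, int blend)
{
    if (!blend)
        return main_c;            /* alpha key says: pass main through */
    unsigned sum = (unsigned)main_c + sub_c;
    return sum > 255u ? 255u : (uint8_t)sum;  /* clamped additive blend */
}
```

For example, blending 200 + 100 clamps to 255 instead of wrapping, which is exactly the behaviour the N64's normal framebuffer blender can't guarantee.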

2. 2D Sprite Blending
Some faux-retro games do this. Each sprite can be additively blended, and they can stack. To pull this off, you'd just take the sprite texture and the chunk of framebuffer you want to paste it over, load them into TMEM, and do the blend in the color combiner. Since both textures are 1:1, no transform mismatch issues can arise.

3. Scaled Rectangular Primitive Blending
Should be similar to sprite blending, except that now you have to figure out how big your primitive will be on screen and load that amount of framebuffer (presumably you'd do this in stages, if you couldn't fit everything in TMEM all at once). Since both transforms are rectilinear, with no rotation, skew, or perspective, it should be possible (I think) to reliably blend an arbitrarily-scaled additive texture with a 1:1 framebuffer texture simply by specifying appropriate sizes for the texture tiles. Anything you could accomplish with fog in a secondary buffer could be done with a constant colour (or at most vertex shading) in CC0, with the additive blend done in CC1. (Using fog with additive blending in the primary buffer is a bad idea because it will not have the desired effect.)

Open questions:
- can you use entirely arbitrary texture sizes, so that the texture coordinates can be matched at any scale? (I guess yes - I think I read somewhere that you can)
- can you select trilinear for one texture and nearest neighbour for the other, so that the primitive size need not be an integer? (I guess no)

4. Untextured 3D or Non-Rectilinear Blending
Use vertex shading, or constant colours if desired, and blend that with the framebuffer texture. Can execute in one-cycle mode. Might not play well with the Z-buffer, but then transparency in general doesn't play well with the Z-buffer; just use the standard workarounds.

5. Textured 3D or Non-Rectilinear Blending
The general case. This is what I was attempting to find a solution to with my pre-rendering scheme. If it is not possible to have the rasterizer provide two different sets of texture coordinates based on different transforms on alternate cycles, I think the most efficient way to handle this case is to pre-render the additive object (or primitive, if doing a whole object at once doesn't gain enough efficiency to make up for processing a bunch of blank pixels around the edges) and load the result as a texture; once this is done, the remaining procedure is equivalent to case 2 above. This is somewhat similar to how environment mapping was done on the N64 (render a scene, use it as a texture).

If it is possible to alternate transforms in two-cycle mode, you don't need to pre-render, and this case becomes basically equivalent to case 3 above.

Open question:
- can the rasterizer handle two different transforms on alternate cycles in two-cycle mode? (I guess no)

...

Alternatively, as I mentioned before, you could render in 32-bit mode using 16-bit colours (or really any less-precise colour space), which gives you a bit of headroom for just using the blender. Still not perfectly safe, but better than nothing. You'd have to be able to brighten the colours substantially with the color combiner to properly convert the framebuffer back to 16-bit; I think that's probably possible...
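To make the headroom arithmetic concrete, here's a hypothetical worked example in C (the 5-bit channel width and the expand-on-display scale are illustrative assumptions, not a known RDP configuration):

```c
#include <stdint.h>

/* Sketch: 5-bit channel values (0-31) stored in 8-bit channels leave
 * headroom up to 255, so several full-intensity unclamped additive
 * blends can accumulate before a channel would wrap. */
#define CH_MAX_5BIT 31u

static unsigned additive_layers_before_overflow(void)
{
    unsigned v = 0, layers = 0;
    while (v + CH_MAX_5BIT <= 255u) {  /* each unclamped blender add */
        v += CH_MAX_5BIT;
        layers++;
    }
    return layers;  /* 8 layers fit, since 8 * 31 = 248 <= 255 */
}

/* Converting back for display: brighten in the color combiner,
 * expanding the 0-31 range back out to 0-255. */
static uint8_t to_display_8bit(uint8_t stored5)
{
    unsigned v = stored5 * 255u / 31u;
    return v > 255u ? 255u : (uint8_t)v;
}
```

Under these assumptions you get roughly eight stacked full-brightness additive layers of margin, at the cost of doubling framebuffer size and needing the brighten pass before display.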

Quote:
93143 wrote:
(Can you use transforms and interpolation and such when drawing in 8bpp?)
Are you trying to ask here if RDP supports 8bpp output? I'm fairly sure the answer is no.

No. I know the VI can't display 8bpp (at least, I don't see a flag for it), but the description of the blender in the manual describes three "color image formats", these being 32-bit RGBA, 16-bit RGBA, and 8-bit. So it seems that the blender can write to an 8-bit framebuffer, even if it's not allowed to be the final display buffer.

What I don't know is under what circumstances 8bpp is a valid blender target - it would be neat if you could do a full render, but I have a feeling it's only valid for copy mode or something like that.

Quote:
93143 wrote:
Or is it actually possible to combine a transformed additive texture with an untransformed framebuffer texture in a single pass?
I still think the only way to do one-pass additive blending with textures is just using the hardware additive blender.

So that's a no? There's only one transform that can be set in the rasterizer regardless of mode, and you have to change it manually with the RSP?

Quote:
93143 wrote:
the color combiner has registers, that can be used as sources for its operations
I can't see them being specifically useful here (though they could be useful for the additional color combiner pass).

I was thinking maybe you could get it to automatically load a colour from TMEM into one of those registers every other cycle, just incrementing the address each time. This would avoid the problem with the transforms being different between the framebuffer and the additive texture. But there was never really a reason to suspect this was possible, and I suppose it was too much to hope that you could do texturing, however primitive, with anything other than the texture unit...


PostPosted: Mon Apr 09, 2018 1:09 am 

Joined: Tue Oct 06, 2015 10:16 am
Posts: 748
realityengine wrote:
calima wrote:
How about using the vertex unit as a geometry processor then, outputting triangles that do not exist in the source model? Would such an approach have any advantage over just having a preprocessed model?

In terms of getting around that texturing issue you previously mentioned, probably not. But to create interesting surfaces for extra detail? Tessellation has value.

https://ultra64.ca/files/other/Game-Developer-Magazine/GDM_November_1999_Putting_Curved_Surfaces_to_Work_on_the_Nintendo_64.pdf

Yeah, I've read that. My question was more about whether triangles passed from the RSP to the RDP go through RAM or a cache - is it faster to stream existing triangles from RAM or to generate new ones on-chip?

