Final Fight 1 uses 8x8 and 16x16 sprites, while 2 and 3 use 16x16 and 32x32 (during gameplay at least; I don't know if they use 64x64 elsewhere).
Which SNES games use a higher resolution than 256x240?
I already know about Romancing SaGa 3, Secret of Mana, Seiken Densetsu 3 and Rudra no Hiho, which use a horizontal resolution of 512 for some of their parts (menus, and text boxes in the latter two).
However, I don't know if any other games use a horizontal resolution of 512, or if any game uses interlacing to get a pseudo-vertical resolution of 480.
Why use this instead of the real transparency that is also supported by the SNES? The only advantage I can see is that it simplifies the sprite layering; I can't think of any others. Besides, the result is likely to look worse than real transparency.
It lets the game blend three things at once, namely the backdrop, the first layer, and the second layer, as seen in the tidbits shown when Jurassic Park is paused.

Bregalad wrote:
Mmh, so this is the infamous "pseudo-512H" mode in action?
Why use this instead of the real transparency that is also supported by the SNES?
I think the only game that actually uses 512x448 is RPM Racing.
And I don't understand what Jurassic Park is trying to do. It seems it enables the pseudo-512H mode, but with the same data for both main screen and sub-screen, so this would have no effect?
Pausing the screen just makes it darker. I guess this is done by darkening the palette as a whole, and I don't think the pseudo-512H mode has anything to do with that, especially considering it's also enabled when the game is not paused.
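For illustration, "darkening the palette as a whole" could be done roughly like this. This is only a Python sketch of the idea; the function name and the 50% factor are assumptions, not taken from Jurassic Park's actual code (which might just as well use the SNES master-brightness setting instead of rewriting CGRAM):

```python
# Hypothetical sketch: darken every palette entry by a fixed factor.
# Channels are assumed to be 5-bit (0-31), as in SNES CGRAM.

def darken_palette(palette, factor=0.5):
    """Scale each RGB channel of every palette entry by `factor`."""
    return [tuple(int(c * factor) for c in entry) for entry in palette]

print(darken_palette([(31, 16, 8)]))  # -> [(15, 8, 4)]
```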
I definitely don't think you can ever blend 3 things at once on the SNES. Both real transparency and pseudo-512H mode work by blending the main screen and the sub-screen, so in both cases you can only blend 2 layers at once. The only way to blend more is to simulate it with a trick that doesn't use hardware transparency.
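The two-input limit can be sketched numerically. This is a toy Python model, not hardware code; it assumes 5-bit RGB channels and the add-and-halve variant of colour math, and simply shows that only one main-screen pixel and one sub-screen pixel ever meet:

```python
# Toy model of SNES "add and divide by 2" colour math: exactly two inputs,
# the main-screen pixel and the sub-screen pixel, blend per dot.

def blend_average(main_px, sub_px):
    """Average two 5-bit-per-channel RGB pixels, channel by channel."""
    return tuple((m + s) // 2 for m, s in zip(main_px, sub_px))

# A red main-screen pixel over a blue sub-screen pixel:
print(blend_average((31, 0, 0), (0, 0, 31)))  # -> (15, 0, 15)
```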
Also it's quite funny: the last 3 generations of consoles before HDTVs became common all allow interlacing, but the SNES hardly ever saw it used, the PS1 saw about half of its games using it, and the PS2 saw most of its games using it. Isn't this a bit funny? I guess the only reason for these choices is VRAM usage, as VRAM quantity in the consoles went up.
EDIT: The only use of pseudo-512H mode I can see is that it allows mixing a (pseudo) transparency effect and real high-resolution graphics on the same scanline (and on the same frame, without using HDMA/IRQ tricks). For example, you could have a text box that doesn't take up the whole screen with high-resolution text in it (done by deliberately interleaving pixels between main and sub-screen using 2 different BGs), and a pseudo-transparency effect on the playfield outside the text box (using the same 2 BGs).
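The interleaving idea above can be modelled in a few lines. This is purely illustrative Python; `pseudo_512_line` is a made-up helper, not anything the hardware provides — it just shows how a 256-wide main screen and a 256-wide sub screen can alternate to fake a 512-wide line:

```python
# Rough model of pseudo-512H: the main screen supplies the even output
# columns and the sub screen supplies the odd ones, doubling the
# perceived horizontal resolution "by software".

def pseudo_512_line(main_line, sub_line):
    """Interleave two 256-pixel lines into one 512-pixel line."""
    out = []
    for m, s in zip(main_line, sub_line):
        out.extend((m, s))
    return out

print(pseudo_512_line(["M0", "M1"], ["S0", "S1"]))  # -> ['M0', 'S0', 'M1', 'S1']
```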
Did you try waiting a minute with it paused?

Bregalad wrote:
And I don't understand what Jurassic Park is trying to do. Seems it enables the pseudo-512H mode, but with the same data for both main screen and subscreen, so this would have no effect?
Pausing the screen just makes it darker
VRAM was a large part of why effectively all Super NES and Genesis games and most Nintendo 64 games were low definition. Fill rate was another, as filling 448 lines of pixels takes twice as long as filling 224 lines of pixels, unless your engine is already rock-solid 60 fps like that of Tobal No. 1 and Ehrgeiz.

Bregalad wrote:
Also it's quite funny, the last 3 generations of consoles before HDTVs became common all allow interlacing, but the SNES hardly ever saw it used, the PS1 saw about half of its games using it, and the PS2 saw most of its games using it. Isn't this a bit funny? I guess the only reason for these choices is VRAM usage, as VRAM quantity in the consoles went up.
The other reason is that interlacing looks terrible for anything rendering at >30 fps, since only the even or odd lines of the rendered frame are updated, either in the display's image buffer (modern displays that sample frames for analogue input) or in image persistence as far as the eye can see (CRT displays, anything that implements interlacing the way it was designed). Even then I'd argue it looks terrible, as on a CRT it'll be flickery, and on a modern display it's hard to disable the image processing that makes interlaced content look... shitty? Not really sure how to word it.

Bregalad wrote:
Also it's quite funny, the last 3 generations of consoles before HDTVs became common all allow interlacing, but the SNES hardly ever saw it used, the PS1 saw about half of its games using it, and the PS2 saw most of its games using it. Isn't this a bit funny? I guess the only reason for these choices is VRAM usage, as VRAM quantity in the consoles went up.
When the PS2 came out, LCD screens were still a novelty/niche, but they became very common toward the end of the console's life. Yet more PS2 games use interlacing than anything else.
Interlacing looks bad on PAL because it flickers too slowly (25 Hz), but on NTSC I think it should look better (30 Hz), and most developers only cared about NTSC during the development of their games.
The fact that some cheap modern TVs do not de-interlace properly has nothing to do with the fact that old games didn't use it. There exist methods to de-interlace properly, but they are complicated and computationally expensive (some may also be patented). Mostly, you have to detect movement on the screen and act differently depending on whether there is movement (update all pixels at 50/60 Hz) or little/no movement (update odd/even rows separately as the corresponding fields arrive). I did work on this during my studies, so I sort of know what I'm talking about (although I don't know what is implemented in modern LCD TVs, I bet you'd see all sorts of de-interlacing techniques depending on the brand, from pure cheap crap to top-quality algorithms).
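A toy sketch of such motion-adaptive de-interlacing, per pixel of the missing field: if the co-located pixel barely changed since the previous frame, "weave" it in from the other field; otherwise "bob" by interpolating vertically. This is Python for illustration only; the grayscale values, threshold, and weave/bob split are assumptions, not any specific TV's algorithm:

```python
# Motion-adaptive de-interlacing sketch (grayscale pixels, 0-255 assumed).
# above/below: pixels from the current field's neighbouring lines.
# other_field_px: the co-located pixel from the other field.
# prev_frame_px: the same position in the previous frame, for motion detection.

def deinterlace_pixel(above, below, other_field_px, prev_frame_px, threshold=8):
    if abs(other_field_px - prev_frame_px) <= threshold:
        return other_field_px          # little/no motion: weave (keep full detail)
    return (above + below) // 2        # motion: bob (interpolate, avoid combing)

print(deinterlace_pixel(10, 20, 100, 100))  # -> 100 (static area, weave)
print(deinterlace_pixel(10, 20, 200, 0))    # -> 15 (moving area, bob)
```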
Yeah, some instructions appear on the screen, but I don't see how this makes it "blend 3 things together"; it's just another layer on top of the normal playfield layers, no transparency at all.

Did you try waiting a minute with it paused?
PS: Am I correct in assuming the following video modes "exist" (i.e. make sense) on the SNES:
Code:

*** Progressive video "modes" ***
256x240, 256x224   (modes 0-4, 7) => the most common one, everything is normal resolution
512x240, 512x224   (modes 5, 6)   => high H-res BGs at the cost of having no inter-BG transparency, sprites are normal res
p512x240, p512x224 (modes 0-4, 7) => using pseudo-512 H-res BGs, resolution is increased "by software"

*** Interlaced video "modes" ***
256x480, 256x448   (modes 0-4, 7) => BG V-res is increased "by software" by changing the tilemap every frame; tiles should be interleaved by hand or by HDMA V-scroll updates every line. Sprites are normal res
256x480, 256x448   (modes 0-4, 7) => same as above, but sprites have increased V-res as well
256x480, 256x448   (modes 0-4, 7) => BGs not affected, but sprites have increased V-res (normal H-res); vertical BG scrolling could look jerky
512x480, 512x448   (modes 5, 6)   => all done by hardware, sprites are normal res
512x480, 512x448   (modes 5, 6)   => all done by hardware, sprites are high V-res but still normal H-res
p512x480, p512x448 (modes 0-4, 7) => BGs have pseudo-512 H-res, vertical resolution increased in software, sprites are normal res
p512x480, p512x448 (modes 0-4, 7) => same as above, but sprites have increased V-res as well
p512x480, p512x448 (modes 0-4, 7) => BGs have pseudo-512 H-res and normal V-res, but sprites have increased V-res (normal H-res); vertical BG scrolling could look jerky
In fact I think I sort of understood: "smoothing" the screen is only one of the possible applications. If you enable colour averaging (add and halve) as normal transparency, and the main and sub screens are the same, this will not affect main-screen pixels (as the average of 2 identical values is the value itself), and sub-screen pixels will be averaged with the neighbouring main-screen pixel, resulting in a smoothing of the screen.
If, instead, a colour-constant transparency is used, it will work like normal, and if main and sub-screen are not the same, it results in what Kirby's game does, i.e. a "software" high resolution.
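Both claims can be checked with a few lines of arithmetic. A Python sketch assuming 5-bit RGB tuples: averaging a pixel with an identical sub-screen pixel is a no-op, while averaging with a fixed colour (black here) darkens it by 50%:

```python
# Numeric check: add-and-halve colour math, channel by channel.

def average(a, b):
    return tuple((x + y) // 2 for x, y in zip(a, b))

px = (20, 10, 30)
print(average(px, px))         # -> (20, 10, 30): identical screens, no change
print(average(px, (0, 0, 0)))  # -> (10, 5, 15): constant black halves brightness
```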
I should definitely test that on hardware and improve my SNES transparency FAQ to explain this stuff.
edit: Nevermind, there's not enough RAM anyway. Maybe draw the shifted sprites to a small "sprite buffer", and then write the tiles directly to a pattern table buffer, without any full screen buffer.
Hi, my name is Josh and I've been playing video games since 1983, when I was 7 years old. That was the year I got my Atari 5200 (actually I played arcade games before that). I arrived here as the result of a Google search: I was searching for information about the resolution of the NES and the SNES. I had assumed that the NES must have a lower resolution than the SNES, because SNES games appear to be more finely detailed. I found out that I was wrong. So my question is: why do the 16-bit systems' graphics appear more detailed than the 8-bit systems'?

JimDaBim wrote:
I've got a question about the resolution of the NES vs. the resolution of the Super Nintendo:
The NES is 256 x 240 while a standard NTSC TV typically just shows 256 x 224 of the pixels. But from a pure data point of view, it's 256 x 240 and you can make the additional rows visible in an emulator.
The Super Nintendo is 256 x 224 and even in an emulator, you see just that, not 256 x 240.
So, does that mean that the Super Nintendo has a lower resolution than the NES?
And if a TV cuts off some of the rows from the NES image, wouldn't that mean that the TV cuts off rows from the Super Nintendo image as well? And since the Super Nintendo has only 224 pixel rows natively, wouldn't that mean that the Super Nintendo image as shown by an NTSC TV screen is even less than 224 rows, while the same TV screen does show 224 lines of the NES's 240 lines?
Thank you and have a good day.