https://www.reddit.com/r/emulation/comm ... 71_beta_1/
https://www.resetera.com/threads/bsnes- ... es.111715/
https://www.pcgamer.com/this-snes-emula ... ng-results
I implemented hires mode 7 in bsnes, where I render at double resolution, which is the same thing ZSNES and Snes9X used to do. It didn't make a large difference, so I just filed it as a novelty option and moved on. u/DerKoun on Reddit came along and upped it to 16x resolution for jaw-dropping results:
Frankly, I'm stunned we didn't realize this level of detail was possible for 20+ years.
So, I intend to offer this in bsnes, but I'm curious if Snes9X and/or Mesen-S are going to be interested or not.
I know Snes9X recently removed the feature, so ... maybe it's too soon?
But if others want to implement this, I think we should coordinate on what we name all of the various options and how they work, mathematically, so we can provide a consistent user experience instead of a mass of confusion. I won't hold back anyone's progress: if someone comes up with something cool, I'll implement it as well.
* supersampling: if we scale the resulting image back down, it makes the pixel resolution match better without causing too great a disparity between sprite pixel sizes and the background detail. Example: https://i.imgur.com/DTBoOTN.gif
* prescaling: we can perform filtering like HQ2x onto the mode 7 background first to smooth the jaggedness of the output, but that probably won't matter unless we go crazy and output at 4K mode.
* interpolation: we can interpolate on the output pixels to smooth things out, and also introduce more than 256 colors from the source image in doing so.
* maximum scaling factor: do we stop at 1024x1024, or go up even higher?
* perspective correction: affine texture mapping (eg what mode 7 actually is) has a really obvious problem with distortion. We could offer an option to correct for this. Example: https://en.wikipedia.org/wiki/Texture_m ... apping.jpg
If no one's interested in coming up with some standard terms and features for this, that's okay too, and I'll just experiment on my own for now.
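To make the "interpolation" option in the list above concrete, here is a minimal sketch of bilinear blending between the four nearest mode 7 texels. This is my own illustration, not code from any emulator; the `texel` callback is a hypothetical stand-in for the tilemap fetch. Because the blend can land between palette entries, the output gains more than the 256 colors present in the source image, as the bullet describes.

```python
# Hypothetical sketch of the "interpolation" option: bilinear blending of
# the four nearest mode 7 texels, shown here on plain grey values.
def bilinear(texel, tx, ty):
    """texel(ix, iy) -> grey value; tx/ty are fractional texture coords."""
    ix, iy = int(tx), int(ty)
    fx, fy = tx - ix, ty - iy
    # blend horizontally along the top and bottom texel rows ...
    top = texel(ix, iy) * (1 - fx) + texel(ix + 1, iy) * fx
    bot = texel(ix, iy + 1) * (1 - fx) + texel(ix + 1, iy + 1) * fx
    # ... then vertically between the two rows
    return top * (1 - fy) + bot * fy
```

On a linear test pattern this reproduces the exact in-between value, which is the smoothing (and color-expanding) effect described above.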
byuu wrote:* perspective correction: affine texture mapping (eg what mode 7 actually is) has a really obvious problem with distortion. We could offer an option to correct for this.

Maybe I'm missing something, but unless you're just talking about corrections for the extra in-between lines you're adding, I don't see how this makes sense.
1) It's only affine if you don't change the matrix line by line. Affine distortion isn't really a thing in perspective Mode 7; all you get is the distortion from people taking shortcuts when computing the coefficients, or using unrealistic viewport parameters, or correcting for fisheye (Mode 7 perspective is essentially a raycaster, and you need to take similar measures). There's the constant-distance effect on single scanlines, but that's only a problem if you're doing a tilted camera, which I'm pretty sure no one did.
F-Zero with affine perspective distortion would look like this: [image]

2) How would you get the information? The actual 3D coordinate/camera/viewport data is gone, if it ever existed; all you have are the affine matrix coefficients, and there's no way to back out enough information from those to implement any sort of perspective improvements. You'd need game-specific hacks, and even that might not be feasible in some cases.
I wasn't aware it actually got rid of the distortion just by setting the coordinates through HDMA. Very interesting.
Being perfectly honest, the underlying math of mode 7 never made the most sense to me. I've written affine texture mapping in C code where it's nice and simple. Thankfully anomie figured out the underlying math, so it was pretty straightforward to just implement it. The few mode 7 demos I've written were very simple affairs that didn't really do HDMA updates.
> How would you get the information?
My thought was that you'd analyze the mode 7 coordinates at each scanline to extract the "vertexes" trying to be displayed. It would obviously not work in all games that do silly things. But any that just treat it as "rotozoom the mode 7 background around onscreen as one big whole piece" (eg as if the square tilemap is just a quad to be rasterized from 3D->2D) should theoretically be quite doable.
I figured that was why games like Terranigma and Hyper Zone were breaking in HD, because the games are being silly and repeating the screen mirrored on the top and bottom using HDMA.
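The vertex-extraction idea above could be prototyped roughly as follows. This is purely speculative illustration on my part: for each scanline, the mode 7 coefficients define the texture-space segment that scanline samples, so the endpoints of the first and last scanlines of a block are candidate "vertexes" of the quad the game is trying to rasterize. All names here are hypothetical, and this only makes sense for games that treat the tilemap as one big rotozoomed quad.

```python
# Speculative sketch: recover candidate quad "vertexes" from per-scanline
# mode 7 parameters, using TX = PA*x + PB*y + PX and TY = PC*x + PD*y + PY.
def scanline_segment(width, PA, PB, PC, PD, PX, PY, y):
    """Texture-space endpoints of scanline y (at x = 0 and x = width-1)."""
    x0 = (PB * y + PX, PD * y + PY)
    x1 = (PA * (width - 1) + x0[0], PC * (width - 1) + x0[1])
    return x0, x1

def quad_corners(width, params_per_line):
    """params_per_line: list of (PA, PB, PC, PD, PX, PY), one per scanline.
    Returns the four texture-space corners implied by the first/last line."""
    top = scanline_segment(width, *params_per_line[0], 0)
    bottom = scanline_segment(width, *params_per_line[-1],
                              len(params_per_line) - 1)
    return [top[0], top[1], bottom[0], bottom[1]]
```

Games doing "silly things" per scanline (mirroring, wrapping) would produce corners that don't form a sensible quad, which is exactly where this heuristic would have to bail out.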
In terms of resolution, going much higher than 1024x1024 may get pretty rough in terms of CPU usage (e.g. HD Packs in Mesen have no scale limit, but trying to run a 10x HD pack (2560x2400) can't even hit 60fps, iirc, despite all of it being done in its own thread).
https://www.youtube.com/watch?v=3FVN_Ze7bzw

byuu wrote:Being perfectly honest, the underlying math of mode 7 never made the most sense to me.
...easy, isn't it? /s
AlexFromRussia wrote:The math at the base of Mode 7 is very simple.
It is based on six numbers - let's call them PA, PB, PX, PC, PD and PY.
The video chip walks through the pixels of each scanline, iterating two current coordinates, TX and TY.
TX starts at PX and is incremented by PA with every pixel and by PB with every scanline.
TY starts at PY and is incremented by PC with every pixel and by PD with every scanline.
So the formula for an arbitrary pixel is:
TX = PA * x + PB * y + PX
TY = PC * x + PD * y + PY
But a 16-bit video chip cannot do several multiplications for every pixel drawn, so it just increments the current values of TX and TY - only 2 additions per pixel.
For every pixel of the screen, TX and TY are used as "texture coordinates" into the mode 7 tile background layer. That is "texture fetching" in modern terms. But a 16-bit video chip can't do texture filtering, so it just takes the nearest pixel from the layer, which is why the image is grainy.
However, this formula (known as an "affine transform") can do scaling and rotation, but not the perspective projection needed for 3D-like planes.
So the second trick is to change the P-parameters on every scanline, modifying the magnification of the plane to imitate a perspective projection. This can be done with a special mode of the system's DMA controller, which can feed several bytes of data to the video chip's ports on every scanline automatically, without using the CPU.
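The incremental evaluation described in that quote can be sketched directly. This is my own illustration, not real PPU code; `tilemap_fetch` is a hypothetical stand-in for the nearest-neighbour tilemap lookup, and fixed-point wrapping details are omitted.

```python
# Sketch of the mode 7 per-pixel loop: two additions per pixel instead of
# two multiplications, exactly as described in the quoted explanation.
def render_mode7(tilemap_fetch, width, height, PA, PB, PC, PD, PX, PY):
    """tilemap_fetch(tx, ty) -> palette index (nearest neighbour)."""
    frame = []
    for y in range(height):
        # start-of-line values follow TX = PA*x + PB*y + PX at x = 0
        tx = PB * y + PX
        ty = PD * y + PY
        line = []
        for x in range(width):
            line.append(tilemap_fetch(int(tx), int(ty)))
            tx += PA   # one addition per pixel ...
            ty += PC   # ... per coordinate
        frame.append(line)
    return frame
```

Per-scanline HDMA perspective tricks amount to swapping in new PA/PB/PC/PD/PX/PY values between iterations of the outer loop.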
Super Famicom ("2/1/3" SNS-CPU-GPM-02) → SCART → OSSC → StarTech USB3HDCAP → AmaRecTV 3.10
So the big trick here is that this is taking advantage of my multi-threaded scanline renderer. Around H=512 of each frame, I capture the entire PPU I/O register state plus CGRAM to a line buffer. Each line holds all of this state. There's some extra logic to detect when games force blank to get more VRAM transfer headroom as well, and it will split off batches as needed. I use this data to then parallelize the rendering of the frame, which doesn't yield a huge speedup on its own, but it helps.
The HD mode 7 trick relies on the fact that, 99.99999% of the time, the line render functions are called in one giant batch per screen. Basically, no game is going to force blank in the middle of the screen; that's ridiculous. It then scans the adjacent line buffers to inspect their mode 7 parameters and performs vertical interpolation. This can look pretty good, but it's still not really that sharp, due to rounding errors when SNES games try to do perspective correction of their own.
So the 3D perspective option instead scans to find the first and last scanline of each mode 7 block (you could imagine a mode 7->1->7 change for a text box on the screen, for instance), and then presumes the game is trying to draw a 3D quad, and will interpolate between the first and last vertical mode 7 A/B/C/D parameters, which looks stunning, but breaks when games get silly and do even cooler distortions with HDMA like repeating the screen in Hyper Zone, or showing whatever the heck at the top in Terranigma.
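As a rough sketch of that per-block interpolation (my own simplification, not bsnes source): take the A/B/C/D values of the first and last scanline of a mode 7 block and linearly interpolate between them, overriding whatever rounded values the game computed for the lines in between.

```python
# Hypothetical sketch of the "3D perspective" option described above:
# linearly interpolate A/B/C/D between the first and last scanline of one
# contiguous mode 7 block, discarding the noisy per-line values.
def perspective_interpolate(lines):
    """lines: list of dicts with keys 'a','b','c','d', one per scanline."""
    n = len(lines)
    if n < 2:
        return [dict(l) for l in lines]
    first, last = lines[0], lines[-1]
    out = []
    for i in range(n):
        t = i / (n - 1)        # 0.0 at the first line, 1.0 at the last
        out.append({k: first[k] * (1 - t) + last[k] * t
                    for k in ('a', 'b', 'c', 'd')})
    return out
```

This is also why it breaks on HDMA tricks: if the game's per-line values don't actually lie between the first and last line, the interpolation silently replaces the intended distortion.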
I believe that someone smarter than me should be able to analyze the A/B/C/D values of each line, and detect algorithmically when they change too drastically to break off the 3D perspective correction option.
That and a supersampling option would make this really something special.
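One possible shape for that "changes too drastically" detector, offered purely as a speculative heuristic: track the line-to-line delta of a coefficient and flag a break wherever the delta itself jumps far out of line with its neighbour, which is what HDMA stunts like Hyper Zone's mirrored repeat would cause. The threshold is an arbitrary assumption.

```python
# Hypothetical heuristic: find scanlines where a per-line mode 7
# coefficient stops changing smoothly, so perspective correction can be
# broken off there instead of interpolating across the discontinuity.
def find_breaks(values, tolerance=4.0):
    """values: one coefficient (e.g. M7A) per scanline.
    Returns indices where the delta jumps by more than `tolerance` times
    the previous delta."""
    breaks = []
    prev_delta = None
    for i in range(1, len(values)):
        delta = values[i] - values[i - 1]
        if prev_delta is not None:
            if abs(delta - prev_delta) > tolerance * (abs(prev_delta) + 1e-9):
                breaks.append(i)
        prev_delta = delta
    return breaks
```

Each segment between breaks could then get its own first/last-line interpolation, falling back to plain per-line rendering for segments that are too short.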
So the bad news ... implementing this into existing emulators is going to be very difficult. You'll have to implement a scanline buffering system as I did (you need to know scanline 239's M7A/B/C/D values before you can start drawing scanline 1), and honestly if you do that you might as well parallelize it.
The obvious result of this is that you can basically forget about supporting this with a pixel-accurate raster PPU core like higan and Mesen-S use. Unless of course Sour *really* wants to show off and he manages to build some kind of state delta system for the entire frame, heheh.
As a result, Snes9X seems the most likely candidate to get this support one day in the future, by replacing their PPU core with a new one. Mesen-S will be up to whether Sour wants to maintain two separate PPU cores like bsnes is doing. And you can pretty much forget about seeing this on the Super Nt.
But then again, that weird RA gimmick where they idle for 75% of each frame to try and poll gamepads closer to the video rendering will definitely not like anything to do with this, and neither will runahead. So, this may just be a novelty for screenshots and casual gaming.
The good news is that if you're gonna scale up mode 7 to true 4K (81x the pixels of the SNES), you're gonna be very grateful to have a multi-threaded PPU core for that. I can just barely exceed 60fps at 4K on a Ryzen 5 2600. Realistically, 4x scale (16x the pixels) gets you 95% of the benefits, so that's probably good enough and will only rule out running this on very low-powered devices like ARM cores.
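For anyone checking the arithmetic above: pixel cost grows with the square of the scale factor, assuming a 256x240 base frame (so 9x scale gives 2160 output lines, i.e. 4K height).

```python
# Pixel-count arithmetic for the scale factors mentioned above,
# assuming a 256x240 base frame.
BASE_W, BASE_H = 256, 240

def pixel_ratio(scale):
    """How many times more pixels an integer-scaled frame contains."""
    return (BASE_W * scale * BASE_H * scale) / (BASE_W * BASE_H)
```

So "4x scale (16x the pixels)" and "4K (81x the pixels)" correspond to scale factors of 4 and 9 respectively.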
Now I have to decide whether to keep or scrap my older 512x240 hires mode 7 renderer. Seems pretty quaint by comparison, but it does fit into the existing SNES renderer way better and isn't quite as extreme. Some people may prefer it ...
As for "when can I use this", I know you're not fond of such solutions, but a game whitelist would work fine.
That's what I initially thought was being done: some sort of low-pass on the A/B/C/D values before rendering each 2 scanlines as a textured quad. I'll have to figure out how to detect out-of-line A/B/C/D values and smooth those that are in line, implementing it as a tech demo outside of a Super NES emulator but using the same principle.
- Dump of CGRAM, tiles, and map
- Log of writes to all matrix registers during a frame
High resolution, then filtering down to the actual pixel size (called supersampling here), is basically applying a comb filter before downsampling to avoid aliasing, which is the way proper low-resolution rendering should be done. The SPC700 does this correctly for sound, but somehow the PPU isn't able to keep up for the image and uses the cheapest "nearest neighbour" algorithm when rendering mode 7 images.
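The supersampling being described reduces, in its simplest form, to a box-average decimator: render at N times the resolution, then average each NxN block back down to one output pixel. A minimal sketch (my own, on plain grey values):

```python
# Sketch of supersampled downscaling: average each scale x scale block of
# the high-resolution render into one output pixel. Averaging before
# decimation is the filtering step that plain nearest-neighbour skips.
def downsample(hires, scale):
    """hires: 2D list of grey values; both dimensions divisible by scale."""
    h, w = len(hires), len(hires[0])
    out = []
    for y in range(0, h, scale):
        row = []
        for x in range(0, w, scale):
            total = sum(hires[y + dy][x + dx]
                        for dy in range(scale) for dx in range(scale))
            row.append(total / (scale * scale))
        out.append(row)
    return out
```

A real implementation would do this per colour channel and probably on the GPU, but the principle is just this average.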
Also, neither bsnes_hm7_b2.exe nor bsnes_hm7_b1.exe works on my computer. It just says "error in executing program !" without any further information. My Windows install is in French, so I bet it's the program itself displaying that.
tepples wrote:But to get started on actually making the heuristics for splitting a mode 7 playfield into polygons, I'll first need some representative test data for both nice scenes and problematic ones (like HyperZone, Terranigma, and Super Castlevania IV)

And perhaps these.
[1:46 AM] Quantam: What is the reason bsnes' HD mode 7 took so long to implement? Hasn't this kind of thing been implemented in PSX and other emulators for many years?
[4:04 AM] koitsu: i think that might be a question better posed on the forum for byuu
[4:05 AM] koitsu: i suspect the answer is "maybe nobody cared"
[4:05 AM] koitsu: a large percentage of bsnes/higan's user base is about "true accuracy", which that enhancement certainly is not