Kismet wrote:[...]What I'd like to see is a SNES-FPGA kit that has a switch between HDMI-pixel-perfect scale (up to 8K) and Accurate (puts out 240p, use with external capture or upscaler) and can use original carts and controllers. A complete kit would need to emulate all the known chips if it doesn't have the cartridge slot, which means it may require a larger FPGA.
I would love to include such an option. But there are two problems as far as I can tell:
1) The FPGA I'm using (an Altera Cyclone II) won't be capable of driving an HDMI signal — at least I have not found any information suggesting it can. However, I could drive a VGA signal with it, which I would assume is enough to get a nice picture. VGA should also be fairly common on TVs and monitors and shouldn't die any time soon (at least I hope so).
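As a rough sanity check of what "driving a VGA signal" entails, here is the arithmetic for the standard 640x480@60Hz mode such an FPGA would have to generate (timing values are the common industry-standard ones, not from the post; the nominal standard pixel clock is 25.175 MHz, which the round-number math approximates):

```python
# Standard 640x480@60Hz VGA timing: visible area plus front porch,
# sync pulse, and back porch, horizontally and vertically.
H_VISIBLE, H_FRONT, H_SYNC, H_BACK = 640, 16, 96, 48
V_VISIBLE, V_FRONT, V_SYNC, V_BACK = 480, 10, 2, 33

h_total = H_VISIBLE + H_FRONT + H_SYNC + H_BACK  # pixels per scanline
v_total = V_VISIBLE + V_FRONT + V_SYNC + V_BACK  # scanlines per frame

refresh_hz = 60
pixel_clock = h_total * v_total * refresh_hz  # clock the FPGA must supply

print(h_total, v_total, pixel_clock)  # 800 525 25200000 (~25.2 MHz)
```

A Cyclone II PLL can synthesize a clock in that neighborhood easily, which is part of why VGA output is so much more approachable than HDMI on low-end parts.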
2) The PPU does not have a digital output for composed pixels, so in order to capture the video signal I either have to somehow sample the analog signals from PPU2, process them (upscale or whatever else), then output the analog VGA signal. Or I need an FPGA PPU implementation that mimics the PPU's behavior (which is obviously the better solution, but would require work that is not really on my TODO list).
Or maybe I am missing some way to get at the pixel data?
1) You can drive HDMI directly with an FPGA; the catch is that most FPGA dev boards use a separate chip to do HDMI, VGA and composite video, and thus incur latency and timing issues with the PPU. Thanks, NTSC.
2) From a development perspective you'd have to emulate PPU1 and PPU2 as they exist in hardware, and then drive a pixel doubler and HDMI output where the S-ENC/S-MIX normally sits.

If you are developing something to target 10-year-old computer monitors and televisions, then yes, VGA might be fine, but that is not the direction anything is going. If you buy a 4K monitor, there is only DisplayPort, HDMI 2.0/MHL or USB-C (which includes power for MHL, and HDMI 2.0 or DisplayPort alternate modes). 4K monitors can still do VGA resolution at 640x480, but that invokes the monitor's scaler, which adds latency in pretty much every case. 4K televisions are worse in this regard, as they've been deleting S-Video, VGA and YPbPr component connections, and "smart TV" features add latency that previous models' overlays didn't have. The video scalers in TVs assume you're connecting something like a VHS deck, where you can simply stretch the video — it doesn't make the picture any worse than it was originally, and the latency is irrelevant. When you connect a video game to a TV, unless it explicitly has a game mode for that input, the image typically gets stretched in ways that smear the pixels.
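The "pixel doubler" stage mentioned above is conceptually simple: every source pixel of the 240p frame becomes a 2x2 block, so 256x240 becomes 512x480. A minimal sketch in Python (real hardware would do this per-scanline with a line buffer rather than on whole frames, and `double_pixels` is just an illustrative name):

```python
def double_pixels(frame):
    """Double a frame in both dimensions.

    frame: list of rows, each row a list of pixel values.
    Returns a new frame twice as wide and twice as tall.
    """
    out = []
    for row in frame:
        doubled_row = [p for p in row for _ in (0, 1)]  # repeat each pixel horizontally
        out.append(doubled_row)
        out.append(list(doubled_row))  # repeat the whole scanline vertically
    return out

# A 2x2 test frame becomes 4x4, each pixel a 2x2 block:
print(double_pixels([[1, 2], [3, 4]]))
# [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```

Because it's pure duplication with no filtering, this step adds essentially no latency in hardware — which is exactly why doing it at the last possible stage preserves the pixel-perfect look.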
Like I get it: on one side people would prefer an accurate pixel-perfect emulation, which requires emulating the PPU in a way that multiplies the pixels at the last possible stage. On the other hand, doing so requires a more complex FPGA. See the difference between the RetroAVS (Xilinx Spartan 6) and the Analogue Nt mini (Altera Cyclone V), where the former is $150 and the latter is $450. I'd assume a devkit system would be more like the latter, with all the options.
That said, the main issue with developing things for 16-bit systems is that you can't just develop on a software emulator and then expect it to work on real hardware or re-engineered hardware. It's not enough to say "it works"; it has to work on every version. So it's very likely any development solution ends up being two FPGAs: one that emulates the base system, and one that emulates the cartridge and any additional coprocessors, because the coprocessors are not always present (and if the option to use the SA1 were always available, it would likely get used, but that then limits the game to people who have a cartridge with it). If someone wants to release their homebrew SNES game, right now there's no guarantee that it works on anything but the emulator it was developed against. So it's very likely that a fully functional devkit ends up costing $1000, which puts it out of price range for just about everyone.
I'd like to see such a thing, even if it's expensive, but at some point it has to be evaluated for practicality. Would it be cheaper to use four cheap FPGAs than one expensive FPGA? Who knows until someone tries.
In case people haven't seen it:
The SFC FPGA here uses 28,417 of 114,480 logic elements for the entire SFC. Of those, 13,476 logic cells are for the APU alone (11,400 DSP, 1,707 SPC), and 11,853 for the rest of the CPU/PPU (1,790 DMA, 2,160 MPU, 7,060 PPU1, 392 PPU2).
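Running the quick arithmetic on those figures (all numbers from the post; the part not itemized is presumably glue logic and everything else):

```python
# Logic-element budget quoted for the SFC core on a 114,480-LE FPGA.
TOTAL_LE = 114_480
USED_LE = 28_417  # entire SFC core

utilization = USED_LE / TOTAL_LE
print(f"{utilization:.1%}")  # roughly 24.8% of the device

apu = 13_476   # of which 11,400 DSP + 1,707 SPC
rest = 11_853  # 1,790 DMA + 2,160 MPU + 7,060 PPU1 + 392 PPU2

print(USED_LE - apu - rest)  # 3088 LEs not itemized in the post
```

So the whole SFC fits in about a quarter of that device, which gives a feel for how much headroom (or how much smaller an FPGA) a single-system design could get away with, before adding cartridge coprocessors.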
I've seen at least two FPGA projects online that have done HDMI with the FPGA alone, but because HDMI requires high-speed transmitters, they can't combine cheap FPGAs with high resolution.
I come from the net. Through systems, peoples and cities to this place.