NTSC color palette emulation, with the square wave modulator

Discuss emulation of the Nintendo Entertainment System and Famicom.

kyuusaku
Posts: 1665
Joined: Mon Sep 27, 2004 2:13 pm

Post by kyuusaku »

James wrote:What is the proper method of converting YIQ to sRGB? Some variation of the RGB conversion matrix in Bisqwit's code?
http://www.sjbrown.co.uk/2004/05/14/gam ... rendering/


I think this thread is going off the deep end a bit. I thought I was anal, but seriously, you guys... :lol: Modeling this level of "accuracy" is borderline perverse when the only available method is black-boxing one set of values, from one individual, from one PPU, taken with one scope.

All of this accuracy zeal is about preservation, right? Perhaps it would be better to wait and see what the DAC designer was actually striving for, and model THAT as an ideal circuit. Wouldn't that be more definitive than the unideal real thing? Don't forget that emulators are themselves ideal components. Otherwise, what's next? Should emulators include a netlist and SPICE to create an RGB palette? PCB trace modeling for interference ("jailbar") simulation in NTSC emulation?

I actually built a pseudo-NES color generator in hardware using a 3-bit linear DAC (not a current-source DAC like the real thing). I rounded KH's values (a lot), and the only non-parasitic filtering was an AC-coupling capacitor... and it looked... identical, despite the grievous inaccuracy.


Also, something to perhaps consider with this crazy simulation: TVs have AGC, which apparently usually goes by the colorburst level. I presume they adjust the gain so that the colorburst is 40 IRE peak-to-peak.
tepples
Posts: 22708
Joined: Sun Sep 19, 2004 11:12 pm
Location: NE Indiana, USA (NTSC)

Post by tepples »

kyuusaku wrote:All of this accuracy zeal is about preservation right? Perhaps it would be better to wait to see what the DAC designer was actually striving for and model THAT as an ideal circuit. Wouldn't that be more definitive than the unideal real thing?
Not if the goal is to model what things will look like on the unideal real thing so that your graphics won't surprise you on the unideal real thing by looking like jagged, muddled trash. It took several iterations before the block dithering in LJ65 could get close enough to the ideal colors of the seven game pieces. But for this, I found Nestopia 1.40's NTSC emulation to be accurate enough vs. both CRT and LCD TVs.
PCB trace modeling for interference ("jailbar") simulation in NTSC emulation?
I wouldn't bother. Most NES consoles are frontloaders, on which jailbars aren't noticeable.
Also something to perhaps consider with this crazy simulation is that TV have AGC which apparently usually go by the colorburst level.
I seem to remember that the manual for the NES recommended disabling AGC, AFC, or something.
kyuusaku
Posts: 1665
Joined: Mon Sep 27, 2004 2:13 pm

Post by kyuusaku »

tepples wrote:Not if the goal is to model what things will look like on the unideal real thing so that your graphics won't surprise you on the unideal real thing by looking like jagged, muddled trash.
But this thread is about finding the definitive RGB palette, which is not jagged, muddled trash. It should be a perfect* balance between reality and ideal.

*subjective
It took several iterations before the block dithering in LJ65 could get close enough to the ideal colors of the seven game pieces. But for this, I found Nestopia 1.40's NTSC emulation to be accurate enough vs. both CRT and LCD TVs.
It's my guess that modeling the ideal DAC circuit will be accurate enough, and will turn out to be more logical than real-life parameters, which change from silicon to silicon, temperature to temperature, load to load, etc. You can't please everyone by default, but you can start at the "ideal".
I wouldn't bother. Most NES consoles are frontloaders, on which jailbars aren't noticeable.
I wouldn't bother either, but for many people that's probably an authentic experience that they may want to relive. Maybe it sounds silly but I won't pretend to know what the people want.
I seem to remember that the manual for the NES recommended disabling AGC, AFC, or something.
I don't believe this is a user-configurable setting on most sets. Perhaps in the service menu? But most sets in use then probably didn't have menus either; I know I played on a TV with five or so knobs that only accepted input via 300-ohm twin-lead. All original Famicom owners received a 300-ohm RF switch in the box, so most were probably playing on similar menuless sets, and of course via RF input, which requires AGC for OTA reception. Seeing how a 300-ohm-era TV's only mode of input was RF, I don't know why it would have an option to disable the AGC. Even if that were the case and we called AGC-less decoding Nintendo-ordained, wouldn't a Virtual Console palette trump all else?
Dwedit
Posts: 4924
Joined: Fri Nov 19, 2004 7:35 pm

Post by Dwedit »

Then you find the perfect palette, and the user is like "This is too dark, screw this, I'm using the Nesticle palette".
Here come the fortune cookies! Here come the fortune cookies! They're wearing paper hats!
HardWareMan
Posts: 209
Joined: Mon Jan 01, 2007 11:12 am

Post by HardWareMan »

Let's return to the topic. What do you think about the fact that the subcarrier is actually triangular, and not square or sine? I received the requested test today, and I'll try to record what I promised.
ReaperSMS
Posts: 174
Joined: Sun Sep 19, 2004 11:07 pm

Post by ReaperSMS »

Seems to make sense for a slew-rate-limited square wave. What the TV's decoder sees is the output after the AV circuitry, not the RF. What does it look like at the AV jack (if your unit has one)?
HardWareMan
Posts: 209
Joined: Mon Jan 01, 2007 11:12 am

Post by HardWareMan »

I have a working monitor that supports PAL/SECAM (sorry, no NTSC). I can show the signal directly from the regenerating filter of the analog decoder (pin 1), which is built on the TDA3510 (sorry, I can't find an English datasheet). Or I can even give you the resulting R-Y and B-Y signals (from pins 11 and 10); then you can apply your own YUV-to-RGB matrix. But I keep pointing out that the triangular shape of the subcarrier is obtained directly at the output of the PPU (pin 21).
I sold my TV with the decoder built on the TDA4555, which supports NTSC too. Maybe I can get a TDA4555 and build my own decoder; I must check the store in town.
Bisqwit
Posts: 249
Joined: Fri Oct 14, 2011 1:09 am

Post by Bisqwit »

Presumably, the wave should be smoothed across pixels, not only within each pixel. Which brings us to the color bleeding artifacts.

I.e. instead of,

Code: Select all

000000000000111111111111222222222222   x coordinate
aaaaaaaaaaaabbbbbbbbbbbbbbbbbbbbbbbb   pixel type
______¯¯¯¯¯¯____¯¯¯¯¯¯______¯¯¯¯¯¯__   signal
, you get something like this:

Code: Select all

000000000000111111111111222222222222
aaaaaaaaaaaabbbbbbbbbbbbbbbbbbbbbbbb
______/¯¯¯¯¯`-__/¯¯¯¯¯`-____/¯¯¯¯¯`-
Where the waveforms for the two successive "b" pixels are different because one is preceded by a signal high and the other is not.
This obviously cannot be modelled with a palette file. There seems to be merit to scanline rendering after all.
Basically, your pixel converter should produce 12 signal levels per pixel, and the scanline renderer should go across the signal levels, smooth them a bit (e.g. every value becomes a weighted average of the last 10), and then convert them into YIQ and sRGB in 12-sample units.

EDIT: I tested this. Each pixel has 12 samples as the signal level, as usual. However, the signal levels are faded with the formula oldsignal × (1−M) + newsignal × M, where M might be e.g. 0.7 for 70% signal clarity. Before fading, the signal is translated to the −0.5 to 0.5 range; after fading, back to the 0 to 1 range.
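
As a minimal sketch of that fading step (my own reimplementation, not the exact code; the function name and the std::vector interface are mine):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Sketch of the fade described above: each sample is translated to the
// -0.5..0.5 range, run through oldsignal*(1-M) + newsignal*M, and
// translated back to 0..1. M = 1.0 means no bleeding; M = 0.7 means
// 70% signal clarity. (Name and interface are illustrative only.)
std::vector<float> fade_signal(const std::vector<float>& signal, float M)
{
    std::vector<float> out;
    out.reserve(signal.size());
    float state = 0.f;               // filter state, centered on zero
    for(float s : signal)
    {
        state = state * (1.f - M) + (s - 0.5f) * M;
        out.push_back(state + 0.5f); // back to the 0..1 range
    }
    return out;
}
```

At M = 1 the signal passes through unchanged; lower M drags each sample toward its predecessors, which is exactly what produces the inter-pixel bleeding.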

Left: 10%, middle: 30%, right: 70%
Image Image Image
Left: 100% (no color bleeding), middle: 120%, right: 70% and 150% mixed together in 70%-30% proportion.
Image Image Image

I tested a number of different fade coefficients. At >100%, it produces overshoot spikes at signal edges. At low quality levels, the saturation suffers. However, you can indeed observe color artifacts wherever the color changes horizontally.
It is important to note that this is not a palette hack. Each pixel is interpreted from the raw transformed scanline signal. A palette cannot model this effect.

Here's one in which I applied it at the subpixel level. Each pixel is offset by 3 signal samples from the previous one. The signal is the 70/150 mix explained above and below.
Image
(Ahem, am I just reinventing the same as what Blargg did earlier?)

Here is a signal dump of scanline 10 of each image.
Image

EDIT: Source code here. Apologies about the wonky indentation levels; I removed some code not relevant for documentation, and did not fix the indentation after the fact:

Code: Select all

unsigned Xscale = 4, Yscale = 3; // will render at 256*4 by 240*3

struct cache
{
    float levels[12];
} yiqmap[240][256] = { {{{}}} };

void PutPixel(unsigned px, unsigned py, unsigned pixel)
{
    // The input value is a NES color index (with de-emphasis bits).
    auto& r = yiqmap[py][px];
    // Decode the color index
    int color = (pixel & 0x0F), level = color < 0xE ? (pixel >> 4) & 3 : 1;

    // Voltage levels, relative to synch voltage
    static const float black = .518f, white = 1.962f, attenuation = .746f,
      levels[8] = { .350f, .518f, .962f, 1.550f,    // Signal low
                   1.094f, 1.506f, 1.962f, 1.962f }; // Signal high

    // Calculate the luma and chroma by emulating the relevant circuits:
    auto wave = [](int p, int color) { return (color + 8 + p) % 12 < 6; };
    for(int p = 0; p < 12; ++p) // 12 clock cycles per pixel.
    {
        // NES NTSC modulator (square wave between two voltage levels):
        float spot = levels[level + 4 * (color <= 12 * wave(p, color))];
        // De-emphasis bits attenuate a part of the signal:
        if(((pixel & 0x40)  && wave(p, 12))
        || ((pixel & 0x80)  && wave(p,  4))
        || ((pixel & 0x100) && wave(p,  8))) spot *= attenuation;
        // Normalize:
        float v = (spot - black) / (white - black) / 12.f;
        r.levels[p] = v;
    }
}

#define c(v) std::cos(3.141592653 * (v) / 6) * 1.5
static const float cos[12] =
    { c(0),c(1),c(2),c(3),c(4),c(5),c(6),c(7),c(8),c(9),c(10),c(11) };
static const float sin[12] =
    { c(9),c(10),c(11),c(0),c(1),c(2),c(3),c(4),c(5),c(6),c(7),c(8) };
#undef c

void FlushScanline(unsigned py)
{
    u32* pix = (u32*) s->pixels; // SDL surface

    float level07 = 0.f, level15 = 0.f, cache[256*12];
    for(unsigned o = 0, px = 0; px < 256; ++px)
        for(int p = 0; p < 12; ++p)
        {
            level07 = level07 * 0.3 + 0.7 * (yiqmap[py][px].levels[p] - 0.5f);
            level15 = level15 * -.5 + 1.5 * (yiqmap[py][px].levels[p] - 0.5f);
            cache[o++] = 0.5f + (level07 * 0.7 + level15 * 0.3);
        }
    for(unsigned px = 0; px < 256; ++px)
        for(int r = 0; r < int(Xscale); ++r)
        {
            float yiq[3] = { 0.f, 0.f, 0.f };
            for(int x = px*12 + ((r + 1 - int(Xscale)) * 12 / int(Xscale)),
                    p = 0; p < 12; ++p, ++x)
            {
                if(x < 0 || x >= 256*12) continue;
                float v = cache[x];
                // Simulate ideal TV NTSC decoder:
                yiq[0] += v;
                yiq[1] += v * cos[x % 12] * 1.5;
                yiq[2] += v * sin[x % 12] * 1.5;
            }
            float gamma = 1.8f;

            // Convert YIQ into RGB according to FCC sanctioned matrix.
            auto gammafix = [=](float f) { return f < 0.f ? 0.f : std::pow(f, 2.2f / gamma); };
            auto clamp = [](int v) { return v < 0 ? 0 : v > 255 ? 255 : v; };
            unsigned rgb = 0x10000*clamp(255 * gammafix(yiq[0] +  0.946882f*yiq[1] +  0.623557f*yiq[2]))
                         + 0x00100*clamp(255 * gammafix(yiq[0] + -0.274788f*yiq[1] + -0.635691f*yiq[2]))
                         + 0x00001*clamp(255 * gammafix(yiq[0] + -1.108545f*yiq[1] +  1.709007f*yiq[2]));

            for(int p = 0; p < int(Yscale); ++p)
                pix[(py*Yscale + p) * (256*Xscale) + px*Xscale + r] = rgb;
        }
    //SDL_UpdateRect(s, 0, py*3, 256*3, 3);
    if(py == 239) SDL_Flip(s);
}
P.S. With this code, the super black color is actually meaningful! There is a theoretical difference in the next pixel depending on whether the previous pixel was black or super black. A very slight difference, but a difference nonetheless.
Here is a test image. Horizontally, all even pixels are either 0D or 1D; odd pixels are everything from 00..3F. The 0D/1D selection toggles every 4 scanlines. Emphasis bits change every 30 pixels. Firefox's color picker extension reveals that there indeed are differences every 4 pixels, but they are very small.
Image
Could someone verify this effect on the NES?

One more note. The combination of square wave and RC filtering is quantized to 12 samples in this generator. Or more accurately, this assumes that the TV samples the video signal exactly 12 times per pixel; in reality, it samples it close to an infinite number of times per second. I am not a master of integral mathematics, so I won't even try to model it more accurately than that. Chances are that the differences are still completely negligible... If you care, simply read the same square-wave level multiple times (e.g. 48 samples per pixel instead of 12), adjust the blur factors appropriately, and change the /12 divider accordingly (e.g. to /48). It will of course be slower.
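
That oversampling idea could be sketched like so (my own illustration; the function name is made up, and the flat 0/1 wave levels are a simplification of the voltage table used in PutPixel above):

```cpp
#include <cassert>
#include <vector>

// Read the same square-wave level 'factor' times per chroma phase, giving
// 12*factor samples per pixel, and change the /12 normalizer to
// /(12*factor) to match. With flat 0/1 levels, the per-pixel sum (the
// luma) stays the same no matter how far you oversample.
std::vector<float> sample_pixel(int color, int factor)
{
    auto wave = [](int p, int c) { return (c + 8 + p) % 12 < 6; };
    std::vector<float> out;
    for(int p = 0; p < 12; ++p)
    {
        float v = wave(p, color) ? 1.f : 0.f; // square wave level
        for(int k = 0; k < factor; ++k)
            out.push_back(v / (12.f * factor)); // adjusted divider
    }
    return out;
}
```

Only the blur step benefits from the extra samples; the demodulated sums are identical for a constant-level wave, which is why the differences should indeed be negligible.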
Last edited by Bisqwit on Fri Oct 21, 2011 3:50 am, edited 2 times in total.
HardWareMan
Posts: 209
Joined: Mon Jan 01, 2007 11:12 am

Post by HardWareMan »

To be clear, let me explain something. We know there are 12 phases, which are generated continuously. There is the pixel renderer, which selects a color index (or number) for each pixel. So, obviously, the pixel's color index works as a selector of the desired phase (or of a static level for the B/W colors), and acts as a time gate for the chosen signal during one pixel period. For example: we have two phases, PH1 and PH2, and a pixel-index gate signal (where 0 selects PH1, 1 a static level, and 2 PH2). The resulting signal will be something like this:

Code: Select all

       -       -       -
      - -     - -     - - 
PH1  -   -   -   -   -   -
    -     - -     - -     -
           -       -       -

     -       -       -
    - -     - -     - - 
PH2    -   -   -   -   -   -
        - -     - -     - -
         -       -       -


PIX 000000001111111122222222


       -             -
      - -   --------- -
VID  -   -             -   -
    -     -             - -
           -             -
Don't forget that the luma level of a colored pixel will be the average value of the subcarrier amplitude. In the PAL PPU, the total duration of the frame is a multiple of the main frequency, so the subcarrier dots are present but do not move. On the other hand, the duration of an NTSC frame is not a multiple of the NTSC master clock, and this makes the subcarrier dots crawl. Familiar term? :3 Am I clear?
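
The gating described above could be sketched like this (my own illustration; the phases are idealized square waves here rather than the triangular shape discussed earlier, and all names are made up):

```cpp
#include <cassert>

// The pixel's color index acts as a multiplexer: it gates one of the
// free-running phases (or a flat level for the B/W colors) onto the
// video line for the duration of that pixel. 't' is the global phase
// counter, which keeps running regardless of which pixel is on screen.
float video_sample(int pixel_index, int t)
{
    auto phase = [](int offset, int t) { return (offset + t) % 12 < 6 ? 1.f : 0.f; };
    switch(pixel_index)
    {
        case 0:  return phase(0, t); // PH1
        case 1:  return 0.5f;        // static (grey) level
        case 2:  return phase(2, t); // PH2, shifted relative to PH1
        default: return 0.f;
    }
}
```

Because the phases never stop, a pixel switching back to the same index later in the line picks the wave up mid-cycle, exactly as in the VID trace above.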
Bisqwit
Posts: 249
Joined: Fri Oct 14, 2011 1:09 am

Post by Bisqwit »

HardWareMan wrote:On the other hand, the duration of an NTSC frame is not a multiple of the NTSC master clock, and this makes the subcarrier dots crawl. Familiar term?
Oh, I see. So it looks like this? (+4 offset on each successive scanline, +6 offset on consecutive frame, modulo 12, loop length 2 frames)
Image
Or like this? (+4 offset on each successive scanline, +4 offset on each consecutive frame, modulo 12, loop length 3 frames)
Image
(Both images rendered in 3072x720, resized to 960x720 with imagemagick's -filter box filter.)
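
The two offset schemes can be written out explicitly (my own sketch; `phase_offset` is a made-up name):

```cpp
#include <cassert>

// Chroma phase offset (in twelfths of the subcarrier period) at the
// start of a given scanline of a given frame. The first variant
// (+4/line, +6/frame) loops after 2 frames; the second (+4/line,
// +4/frame) loops after 3 frames.
int phase_offset(int scanline, int frame, int per_frame)
{
    return (scanline * 4 + frame * per_frame) % 12;
}
```

The loop length falls straight out of the arithmetic: the pattern repeats once the accumulated per-frame offset reaches a multiple of 12.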
HardWareMan
Posts: 209
Joined: Mon Jan 01, 2007 11:12 am

Post by HardWareMan »

Bisqwit wrote:
HardWareMan wrote:On the other hand, the duration of an NTSC frame is not a multiple of the NTSC master clock, and this makes the subcarrier dots crawl. Familiar term?
Oh, I see. So it looks like this?
The second picture. But the subcarrier braid is bigger, because the pixel size and the subcarrier period are comparable. Also, the pixels are stretched a bit.
Bisqwit
Posts: 249
Joined: Fri Oct 14, 2011 1:09 am

Post by Bisqwit »

Okay. Here is a version in which the lines are merged together.

Left: Rendered at 256x720, rescaled to 256x240 with box filter.
Middle: Rendered at 3072x720, rescaled to 256x240 with box filter.
Right: Rendered at 3072x720, line offset -6 (image moved half pixel to the right), rescaled to 256x240 with box filter.
Image Image Image
The point of this comparison is to see whether there is merit to calculating the YIQ values for each of the 12 subpixel positions separately if the end result is going to be 256 pixels wide anyway.
HardWareMan
Posts: 209
Joined: Mon Jan 01, 2007 11:12 am

Post by HardWareMan »

Do not forget that the TV smooths the picture only along the scanline. Vertically, the picture remains sharp; each scanline stays very distinct.
tepples
Posts: 22708
Joined: Sun Sep 19, 2004 11:12 pm
Location: NE Indiana, USA (NTSC)

Post by tepples »

kyuusaku wrote:But this thread is about finding the definitive RGB palette
Which was ripped from PlayChoice PPUs and the GameCube and GBA versions of acNES long ago.
kyuusaku wrote:I don't believe this is a user-configurable setting on most sets.
It was for late 1970s sets. From the Control Deck manual:
Nintendo wrote:If your TV has an automatic fine tuning control (AFC), turn it off. (Use the manual fine tune dial to adjust the picture after inserting the game pak as described below.)
As I said, I might have misremembered the details.
kyuusaku wrote:and of course via RF input which requires the AGC for OTA reception.
I thought gain control for OTA was keyed to the vertical sync signal, not the colorburst, made possible by negative modulation. Otherwise, how would AGC have worked in the black-and-white era?
kyuusaku wrote:wouldn't a Virtual Console palette trump all else?
Agreed, at least for determining ideal flat colors, even if not for games like Blaster Master that depend on artifacts.
HardWareMan wrote:Do not forget that the TV smooths out the picture only in the scanline.
Some TVs are known to smooth chroma vertically.
Bisqwit
Posts: 249
Joined: Fri Oct 14, 2011 1:09 am

Post by Bisqwit »

HardWareMan wrote:Do not forget that the TV smooths the picture only along the scanline. Vertically, the picture remains sharp; each scanline stays very distinct.
Do you mean that there are distinctly only 240 scanlines (minus portions rendered outside the visible screen)? Hmm, makes sense.
So the dot crawl happens on each of the 240 scanlines and not three times within each scanline.
It would look like this, then:
Image
Ugh. That's ugly.

Left: Box-scaled from 3072x240 to 256x240; Middle: Rendered directly to 256x240 with half-pixel offset. Right: Without half-pixel offset.
Image Image Image