All times are UTC - 7 hours
Post new topic This topic is locked, you cannot edit posts or make further replies.  [ 127 posts ]  Go to page Previous  1 ... 4, 5, 6, 7, 8, 9  Next
Author Message
PostPosted: Sun Jul 31, 2016 3:05 am 
Offline

Joined: Mon Jul 01, 2013 11:25 am
Posts: 228
byuu wrote:
Yeah, I think that could really work well for the Genesis, being based off a single oscillator.

In many ways, I am starting to feel MAME's pain with supporting more and more systems. It's extremely rewarding intellectually, but my pride and strive for perfection really take a beating.


Yeah, of course, a single-clock-based system makes things much easier :) In fact I was supporting the Sega CD as well (and the 32X, but the latter uses the Megadrive clock), and I cheated by using an approximated value (795 cycles per scanline, which is really close to reality) so I could rely on the same unique global counter for it too.

Quote:
Anyway ... I would strongly recommend you look into binary min-heap arrays for this. Here's my implementation for reference: http://hastebin.com/raw/qurokahane


I'm not used to templatized C++ (the syntax is... well, not very readable :p) but I got it :)
My event structure was similar, with "cycle" and "callback" fields; I added an "id" so I could relocate a given event if for any reason it had to be postponed or removed (I also have an extra "param" to hold parameters for the callback, if any).
Your binary min-heap array is essentially a binary tree storing events ordered by their cycle counter.

Quote:
If you use this as a priority queue, it's pretty miraculous. The idea is, any time you know something is going to happen in N cycles, where N can be any number of cycles you want ... you can add it to the queue in logarithmic time. And whenever an event triggers, you can remove it in logarithmic time too. But the real magic that makes it so great ... as time passes, you can advance the queue by N cycles and trigger callback events in constant(!!) time ... which boils down to one compare.


Yeah, I expected that from the tree structure, and yours is specially optimized down to the minimal requirements. It's always important to choose the storage structure cleverly depending on what you plan to do most with it (insertions, removals, or just retrievals) =)
These days I do a lot of Java development in my day job, and the standard APIs are well designed to handle all these kinds of data structures:
https://commons.apache.org/proper/commo ... eList.html

I have to admit that back then (the sources date from 2005) I was doing almost everything in C (I wanted to port it to the Dreamcast), and I chose a simple array where I stored the index of the first upcoming event, with other minor optimizations to make some operations fast, but the structure itself was a plain array. Still, that array only held events for a single scanline, so it was never that big ;)

Quote:
So instead of having an add_cpu_cycles(uint N) loop that has to test if we need to fire an IRQ, an NMI, a DMA event, run the ALU, or do a bunch of other things like that ... you can test every single possible event with just one compare.


Yeah, because the idea is that you push events into the queue first, then just check the global counter against the first event's counter, which is exactly what I was doing.
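For reference, the scheme described above (a min-heap keyed on the absolute cycle at which each event fires, checked against a single global counter) can be sketched in C++ roughly like this. This is a hypothetical minimal version for illustration, not byuu's actual implementation; all names are made up:

```cpp
#include <cassert>
#include <cstdint>
#include <functional>
#include <queue>
#include <vector>

// One pending event: the absolute cycle it fires at, plus what to do then.
struct Event {
    uint64_t cycle;                  // absolute cycle the event fires at
    std::function<void()> callback;  // action to run when it fires
    bool operator>(const Event& o) const { return cycle > o.cycle; }
};

class Scheduler {
    // Min-heap ordered by cycle: the soonest event is always on top.
    std::priority_queue<Event, std::vector<Event>, std::greater<Event>> queue;
    uint64_t counter = 0;            // single global cycle counter
public:
    // O(log n): schedule an event N cycles from now.
    void schedule(uint64_t cyclesFromNow, std::function<void()> cb) {
        queue.push({counter + cyclesFromNow, std::move(cb)});
    }
    // Advance time; firing nothing costs one compare against the heap top,
    // and each event that does fire is removed in O(log n).
    void advance(uint64_t cycles) {
        counter += cycles;
        while (!queue.empty() && queue.top().cycle <= counter) {
            auto cb = queue.top().callback;
            queue.pop();
            cb();
        }
    }
};
```

The appeal for an emulator is exactly what the quote says: inserts and removals are logarithmic, while the common case of "nothing fires this step" is a single compare.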

Quote:
There may be better data structures than binary min-heap for this, but I loved the simplicity of it. It's very rare that I'm able to implement algorithms when described by mathematicians.


When you want a sorted list with fast insert/remove and iteration, a tree is the structure to go with; there is no real better alternative. After that, it all depends on the tree implementation itself :)

Quote:
Anyway, a Gens reboot sounds pretty awesome! Gens was always my favorite Genesis emulator (sorry Steve, but I don't use closed source stuff) ... would be fun to talk shop with you sometime in the future after I learn a lot more :D


Haha, thanks, but unfortunately that was a (very) old project (2005) that I never completed, from lack of motivation and because I turned more and more to Megadrive programming :p I could eventually release the sources, but I don't think they're that interesting now that we have emulators like BlastEm or Exodus.

Quote:
As for the Saturn, that's my ultimate dream console to emulate. But short of a 100-fold increase in processing power before I reach 40, I'm not going to attempt it. It would require too many accuracy sacrifices and nothing kills my enjoyment of emu coding more than that =(


The Saturn is a challenging system ;) As you said, I think we can forget about 100% accuracy for it, as it would require too much effort and CPU power.
In fact I worked a bit on Saturn emulation back then. I joined Sthief, who had just released the first version of SSE (a very old and discontinued Saturn emulator). I ported the emulator to Windows (it was DOS-based), then I wrote a software VDP1 implementation (which was really needed, as the OpenGL one was quite broken accuracy-wise) and also the first SCSP sound core, while fixing tons of bugs. I believe SSE was the first Saturn emulator to actually produce real SCSP sound (the one you can hear in the BIOS logo) and not only CDDA playback. Too bad we never released that version... it's probably sitting somewhere on my hard drive :p

Edit: I found an old binary sitting on my hard drive:
https://dl.dropboxusercontent.com/u/933 ... n/wsse.zip

Sometimes you get as far as the CD player panel, but most of the time it crashes when you launch it X'D
It worked better back then (I guess newer Windows versions don't help) :p


Last edited by Stef on Sun Jul 31, 2016 12:08 pm, edited 1 time in total.

PostPosted: Sun Jul 31, 2016 3:39 am 
Offline

Joined: Mon Mar 30, 2015 10:14 am
Posts: 177
Quote:
Still, the PCE has very little RAM (8 KB) compared to the Sega Genesis because of that, and HuCard capacity was limited compared, again, to Genesis ROM.

The PCE was made to compete with 8-bit machines, not 16-bit ones, so 8 KB was enough; and don't forget that the PCE can access its VRAM at any time, so more RAM would be useless in most cases.
The Z80 RAM in the MD was 100/120/150/200 ns (across the various revisions), which is strange for a very costly memory.
The PCE's RAM/ROM speed is 140 ns; for the MD it's 150. And like I always said (and Tom too), Nintendo should have made a custom 65816 rather than using a vanilla one, mainly to relax the RAM/ROM speed needed and bring it to the PCE/MD level of 150/160 ns.
I think the stock 65816 was easy to put in the SNES; the low speed let the big N reduce costs easily, because they were sure the CPU could easily be supplemented by a much more powerful one via the cartridge at low cost.
Were SNES ROMs expensive? Yes, because third-party developers were forced to buy the cartridges from Nintendo and were not allowed to make their own.

150 ns RAM/ROM was expensive for Sega too, and that didn't discourage Sega from using 100 ns RAM for its Z80 (purely useless, even for the 68k).

When Hudson made its chipset, it didn't know who was going to build the machine; 140 ns memory was already available, and the chipset was designed around memory that was easily feasible for all the main manufacturers on the market. It wasn't manufactured by NEC, by the way, but by Epson.


Last edited by TOUKO on Mon Aug 01, 2016 11:05 am, edited 6 times in total.

PostPosted: Sun Jul 31, 2016 7:30 am 
Offline

Joined: Thu Aug 28, 2008 1:17 am
Posts: 591
Stef wrote:
I can't convince you that I'm right, but you will never make me change my mind about it either, and honestly I think none of us really care.

That's because your whole view stems from a 68k-centric perspective. Until you adopt a more holistic perspective, you'll always have that bias and those preconceptions. I like you, Stef, and you create some great stuff, but you in particular are known for your extreme 68k and Z80 bias.

I don't think the 65x is a great design. I think it makes a very poor general-purpose processor (i.e. not good across a wide range of uses). It lacks features the 6809 has (its only close comparison), which makes it feel a bit cheap in design, and its constant bus hogging limits the scope of system designs it fits and its appeal. Extending it into a 16-bit design, none of this was addressed, and it had limited application because of it; it was a simple and cheap upgrade. By comparison, the 68k was a much better design (linear PC, real/usable relative branch addressing, no bus hogging, fuller ISA, hardware macro instructions (32-bit ones; two passes over the ALU)); it was a forward-thinking model that easily extended itself to changes and upgrades down the line. Its range of application is also far greater (which made it a perfect processor for computer system designs).

What you call a bias on my part, I call a realistic understanding. I recognize that the 68k is better in so many respects, but context is key. Context is a HUGE part of this comparison. And this... this is where you fail to see that the 68k and the 65x models are brought much closer in capability, specifically because of the limited role game logic demands of these processors. Are there exceptions where game logic is more expansive in design and requires more processing power? Sure. But the norm brings these processors closer together, and that's what I think you completely fail to see.




Quote:
Really, it is so disappointing to read that from you... still thinking with that absurd logic.
You're still comparing the CPUs on their per-clock merits... Who cares what a CPU can do at a given speed if you can't clock it above 3 MHz while others can go up to 10 MHz?

It's not absurd at all; it speaks to the intrinsic characteristics of the processor (namely, efficiency in relation to clock cycles). You do this as well, except in other areas that show the 68k in a good light. Take off the blinders... man.



Quote:
Do you realize that the SNES itself is the proof that you're wrong?


And do you realize there are a myriad of reasons why the SNES is designed the way it is? It's plainly obvious that additional hardware support on the cartridge was part of the design scope of the system. From their perspective, having successfully extended the Famicom's life span with extra hardware, it made sense to go with a much cheaper processor and add hardware resources as the bar for software development began to push the boundaries. Apparently Nintendo planned ahead, given both the superior audio and color design of the system at the time, above and beyond the status quo. From Nintendo's perspective, it made sense to go with a cheaper processor to keep the base price down (consider the cost of the SMP and S-PPU designs). I think it's a good indication that Nintendo saw the processing requirements increasing beyond whatever the initial design would provide, especially given the capability of the support hardware (video/audio). Whether they executed this integration seamlessly or not is another matter entirely.

_________________
__________________________
http://pcedev.wordpress.com


PostPosted: Sun Jul 31, 2016 7:53 am 
Offline

Joined: Thu Aug 12, 2010 3:43 am
Posts: 1589
tepples wrote:
Is there an equivalent decomposition for signed multiplication?

Just apply the sign rules: the result is positive if both operands have the same sign, and negative if their signs differ. So you can make both operands positive for the calculation and apply the correct sign after the fact.

Stef wrote:
So yes the 65816 is definitely as 65C02 with 16 bits registers and 16 bits ALU .

You realize that the width of ALU operations is the usual determinant of a CPU's "bittage", right? (Yes, I know the Z80's ALU works in 4-bit halves, but in practice it always does the two halves back to back.)

Seriously, you're just looking for excuses to say the 65816 is just a 6502 with 16-bit registers and built-in bank switching. That's like saying the 68020 is just a 68000 with a 32-bit bus.

(EDIT: typo)

koitsu wrote:
Cool, so now that we've determined this thread serves absolutely zero purpose because it's filled with nothing but opinions, can it be locked given its uselessness? The previous post vs. the initial post should be all that's needed to justify that.

There's like only one person in the entire forum who disagrees on the 65816 being 16-bit, and I haven't seen anybody anywhere else disagree on that either =P


Also, really, if we insist on arguing whether the 65816 was powerful enough or not... let's not forget the NES, running a 6502 at less than 2 MHz, was consistently doing platformers at 60 FPS. So, uh, yeah, there goes the whole argument for the entire generation.


Last edited by Sik on Sun Jul 31, 2016 8:53 am, edited 1 time in total.

PostPosted: Sun Jul 31, 2016 8:04 am 
Offline

Joined: Mon Mar 30, 2015 10:14 am
Posts: 177
Quote:
(yes, I know the Z80's ALU works on 4-bit halves, but it always tries to do two halves in practice)

Maybe; is that why it's not very fast even for operations between registers?

Quote:
Seriously, you're just looking for excuses to stay the 65816 is just a 6502 with 16-bit registers and built-in bank switching. That's like saying the 68020 is just a 68000 with a 32-bit bus.

Yes, and this is why I created this thread... :?


PostPosted: Sun Jul 31, 2016 8:52 am 
Offline

Joined: Thu Aug 12, 2010 3:43 am
Posts: 1589
I just realized I typo'd "say" as "stay". Whoops ^_^;

TOUKO wrote:
May be, that's why he's not very fast even with operations between registers ??

I think it has ugly stuff going on with memory accesses too: every extra byte access in the instruction results in 3 extra cycles (even if it's just fetching). A faster ALU isn't going to help here.

EDIT: all accesses add 3 cycles, not just the bytes in the instruction itself. Still, the point stands. The 68000 suffers from something similar (every extra word access adds 4 cycles due to how bus accesses are handled).


PostPosted: Sun Jul 31, 2016 9:13 am 
Offline

Joined: Mon Mar 30, 2015 10:14 am
Posts: 177
This 4-bit ALU is a very curious thing, really.


PostPosted: Sun Jul 31, 2016 10:14 am 
Offline

Joined: Sun Sep 19, 2004 11:12 pm
Posts: 19113
Location: NE Indiana, USA (NTSC)
Sik wrote:
tepples wrote:
Is there an equivalent decomposition for signed multiplication?

Just apply signs rule (result is positive if both are same sign, negative if both are different sign, then you can just make both operands positive for the calculation and apply the sign after the fact).

I thought you couldn't "just make both operands positive" if trying to use the mode 7 multiplier to multiply signed by signed.

Quote:
koitsu wrote:
Cool, so now that we've determined this thread serves absolutely zero purpose because it's filled with nothing but opinions, can it be locked given its uselesness? The previous post vs. the initial post should be all that's needed to justify that.

There's like only one person in the entire forum who disagrees on the 65816 being 16-bit, and I haven't seen anybody anywhere else disagree on that either =P

Also really if we insist on arguing whether the 65816 was powerful enough or not

And that's why I'm not locking it just yet. Though the previous question is answered (the 65816 has a 16-bit ALU and 16-bit ISA), I find the derail about overall performance interesting, and splits are discouraged under new policy.

What we've proved so far: Data bandwidth is a wash, as is overall data processing rate where large multiplies and divides aren't expected. The segmented architecture and dearth of registers make high-level languages less efficient on 65816. 16x16 multiplies and 32/16 divides are slow in both cases but still faster on 68000, but the atomicity of DIVS/DIVU and lack of HDMA in the surrounding memory controller cause latency that makes them less useful in an engine relying heavily on hblank IRQ.


PostPosted: Sun Jul 31, 2016 10:41 am 
Offline

Joined: Thu Aug 12, 2010 3:43 am
Posts: 1589
tepples wrote:
I thought you couldn't "just make both operands positive" if trying to use the mode 7 multiplier to multiply signed by signed.

No, it's more like a wrapper on the multiplication algorithm:

  1. Keep track of sign
  2. Make both operands positive
  3. Do multiplication as if it was unsigned
  4. Apply intended sign to the result

Although, wait: do the mode 7 registers allow doing the multiplication as if it were unsigned? Because if not, that is going to be the problem. I suppose if you can live with wasting a couple of bits (i.e. 14-bit instead of 16-bit) you could shift the values so the sign bit is always clear. It would presumably still be faster than doing the whole multiplication in software.
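The four-step wrapper above can be sketched like this. This is a hypothetical illustration (the function name is made up, and it assumes you have an unsigned multiplier available), not actual mode 7 code:

```cpp
#include <cassert>
#include <cstdint>

// Signed 16x16 multiply built on top of an unsigned multiplier,
// following the four steps: track sign, make operands positive,
// multiply unsigned, then apply the intended sign.
int32_t signedMulViaUnsigned(int16_t a, int16_t b) {
    // 1. Keep track of the sign: negative iff exactly one operand is negative.
    bool negative = (a < 0) != (b < 0);
    // 2. Make both operands positive (widen first so -32768 negates safely).
    uint32_t ua = static_cast<uint32_t>(a < 0 ? -static_cast<int32_t>(a) : a);
    uint32_t ub = static_cast<uint32_t>(b < 0 ? -static_cast<int32_t>(b) : b);
    // 3. Do the multiplication as if it were unsigned.
    uint32_t product = ua * ub;
    // 4. Apply the intended sign to the result.
    return negative ? -static_cast<int32_t>(product)
                    :  static_cast<int32_t>(product);
}
```

For example, signedMulViaUnsigned(-3, 7) yields -21 while only ever feeding non-negative values to the multiply itself.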

tepples wrote:
16x16 multiplies and 32/16 divides are slow in both cases but still faster on 68000, but the atomicity of DIVS/DIVU and lack of HDMA in the surrounding memory controller cause latency that makes them less useful in an engine relying heavily on hblank IRQ.

Yep.

Note that with hblank IRQs it still depends. Some writes only take effect on the next line no matter what (e.g. changing the vertical scroll while not in scroll-per-two-cell mode); for those it's most likely not an issue, since the exact timing doesn't matter (as long as the write lands on the correct line, you're fine). The problem is the writes you want done as early as possible, like palette changes.


PostPosted: Sun Jul 31, 2016 11:54 am 
Offline

Joined: Sun Apr 13, 2008 11:12 am
Posts: 6289
Location: Seattle
Sik wrote:
Although wait, do the mode 7 registers allow doing multiplication like it was unsigned? Because if not, that is going to be your problem. I suppose if you can do away with wasting a couple of bits (i.e. 14-bit instead of 16-bit) you could try pushing the higher half so the sign bit is always clear. I suppose it's still faster than doing the whole multiplication entirely in software.
It should still be possible to decompose the arithmetic even though it's s8×s16→s24. You might have to do some prep beforehand instead of only fixing up afterward, as with the unsigned-to-signed conversion.


PostPosted: Sun Jul 31, 2016 12:53 pm 
Offline

Joined: Mon Jul 01, 2013 11:25 am
Posts: 228
Quote:
...The pce's RAM/ROM speed is 140 ns, for Md it's 150...


Where did you see that? The first MD ROMs were much slower than that...

Quote:
That's because your whole view stems from a 68k centric perspective. Until you adapt a more holistic perspective, you'll always have that bias and preconceptions. I like you stef, and you create some great stuff, but you in particular are known for your extreme 68k and z80 bias.


I don't think I have a 68k-centric view... I know the 68k is special by itself, and I see it more as a 16/32-bit hybrid than a pure 16-bit CPU. And you make a mistake by assuming I'm totally biased toward the 68k and Z80 (just because the MD uses both). Yeah, I do like the 68k, but honestly I think almost everyone who has programmed it appreciates it, as it's a really comfortable CPU to develop for... On the other hand, I really dislike the Z80; I think it's quite difficult to use that CPU efficiently. For me the Z80 is not a really good 8-bit CPU, and I far prefer the GBZ80 (the Game Boy's customized Z80), which is somehow a "fixed" version of the Z80 (simpler, more usable). I also like the 6809, a very nice and powerful 8-bit CPU. From other CPU eras I also like ARM Thumb and the SHx CPUs (really efficient and nice designs), but OK, that's off topic; I just mean to say I'm not a 100% biased 68000 fan... The 6502-series CPUs hold only one point of interest for me: their price... at the cost of poor efficiency and painful programming.

Quote:
You realize the width of ALU operations is the usual determinator when it comes to determining the "bittage" of a CPU, right? (yes, I know the Z80's ALU works on 4-bit halves, but it always tries to do two halves in practice)


But the ALU size is almost just a design choice, as in the Z80... Internally the 68000 has three 16-bit ALUs, so can it do three times as much as the 65816? MMX used 64-bit ALUs and SSE 128-bit ones, so were those CPUs 64-bit and 128-bit? The ALU is part of the whole, but what really matters is the data processing capacity (which is directly linked to the memory capacity).

Quote:
Seriously, you're just looking for excuses to say the 65816 is just a 6502 with 16-bit registers and built-in bank switching. That's like saying the 68020 is just a 68000 with a 32-bit bus.


The 68020 brings the full 32-bit I/O logic to the 68000, which is exactly what the 65816 *does not* do relative to the 6502, and that is a pretty big difference... But honestly, again, I don't care; it's just my opinion, and I don't want to convince anyone.

Kisses :p


Last edited by Stef on Sun Jul 31, 2016 2:47 pm, edited 1 time in total.

PostPosted: Sun Jul 31, 2016 1:23 pm 
Offline

Joined: Mon Mar 30, 2015 10:14 am
Posts: 177
Quote:
Where did you see that? The first MD ROMs were much slower than that...

If you want to use DMA (ROM → anywhere you can), you have no choice: the ROM must be as fast as the WRAM, so 150 ns.
It's entirely dumb to have to copy first into WRAM and only then into VRAM (with DMA) because of a slow ROM, when you know the DMA could do it much, much faster.

Quote:
but at the cost of their poor efficiency

If that isn't biased!!! The 65xxx not efficient?? That's news to me!!


PostPosted: Sun Jul 31, 2016 2:03 pm 
Offline
User avatar

Joined: Fri Nov 12, 2004 2:49 pm
Posts: 7232
Location: Chexbres, VD, Switzerland
tomaitheous wrote:
Hardly the bare minimum. Just for clarification, I'm pretty sure he meant the CPU handling writes to the DSP registers, not generating the waveforms itself. But that said, I've done 4 channels, frequency-scaled with volume control, on the PC Engine at 35% CPU usage, all in software (and that's a 7.16 MHz 8-bit 65x). A simple set of 24-bit fixed-point auto-increment registers/pointers in hardware on the cart would cut that down to 8% CPU usage. No need for a 16 MHz ARM chip.

Oh, I didn't understand that you meant it that way. Yeah, I guess the CPU tied to the S-DSP directly, without the CPU part of the SPC700 chip, would have made some sense, but then there'd be a major RAM problem. One option is for all RAM accesses to be interleaved between the S-DSP and the CPU, which means sacrificing a huge part of the performance just to get sound. The other option is a dedicated sound RAM, accessible through DMA. That would end up being pretty much the PS1 chip, which is basically 3x the SNES DSP with 512 KB of RAM, directly accessible by the CPU.

The main problem is that games would then mostly update their sound engines at the slow rate of 60/50 Hz, out of laziness, instead of at the faster rates they currently do, which allows for more precision and effects in the sound.

I know squat about the PCE or its hardware, but it sounds like what you did is a real achievement, if you actually managed to render sound in software on the 6502 alone (even if that 6502 is overclocked and has extra instructions). The reasons GBA games have poor sound rendering are multiple, and not only down to CPU constraints. However, some games use tricks such as pre-rendering instruments at 12 tones in order to make sound rendering much simpler/faster. My Final Fantasy sound restoration hacks use pre-filtered samples to compensate for the lack of an anti-aliasing filter in the sound engine. In other words: low CPU usage, good sound quality, decent ROM usage; pick (at most) two.
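The kind of software mixing discussed in this exchange (per-channel fixed-point auto-increment pointers, frequency scaling, volume control) can be sketched roughly as follows. The 8.16 fixed-point format, names, and ranges here are illustrative assumptions, not tomaitheous's actual PCE code:

```cpp
#include <cstdint>
#include <vector>

// One voice: a fixed-point read position into its sample data that
// advances by a pitch-dependent step every output sample.
struct Channel {
    const int8_t* sample;   // signed 8-bit PCM source data
    uint32_t length;        // length in source samples
    uint32_t pos = 0;       // 8.16 fixed-point position into the sample
    uint32_t step;          // 8.16 fixed-point increment (pitch)
    int volume;             // small linear volume factor
};

// Mix `count` output samples from all channels into a 16-bit buffer.
void mix(std::vector<Channel>& channels, int16_t* out, int count) {
    for (int i = 0; i < count; ++i) {
        int acc = 0;
        for (auto& ch : channels) {
            uint32_t idx = ch.pos >> 16;       // integer part of position
            if (idx >= ch.length) continue;    // channel finished playing
            acc += ch.sample[idx] * ch.volume; // fetch and scale by volume
            ch.pos += ch.step;                 // advance by the pitch step
        }
        out[i] = static_cast<int16_t>(acc);    // no clipping in this sketch
    }
}
```

A real mixer would also clip the accumulator and handle looping samples; the hardware assist tomaitheous describes is essentially these position/step registers implemented on the cart so the CPU no longer does the per-sample bookkeeping.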


PostPosted: Mon Aug 01, 2016 8:01 am 
Offline
User avatar

Joined: Mon Sep 15, 2014 4:35 pm
Posts: 3074
Location: Nacogdoches, Texas
Bregalad wrote:
However, some games use tricks such as pre-rendering instruments at 12 tones in order to make sound rendering much simpler/faster.

I think I remember reading that this was done on the Neo Geo, because I don't think its audio hardware can alter PCM samples in any way, except maybe for volume.

Bregalad wrote:
The other option is to have a dedicated sound-RAM, accessible through DMA.

Okay, so it's basically getting rid of the SPC700 but keeping the RAM. Actually, wait a minute: wouldn't the CPU have pretty much no time to write to the RAM, because the DSP would constantly be reading from it? But then the SPC700 couldn't write to it either... :lol: When does the DSP fetch from RAM anyway? It can't hold a whole frame's worth, otherwise there'd be no way for the SPC700 to update it. But yeah, the SPC700 just seems to make it difficult to update audio RAM, while doing processing that wouldn't even make the 65816 break a sweat. I also imagine that if the SPC700 were gone, there'd be more money for other things, like the slow RAM.

Bregalad wrote:
Low CPU usage, good sound quality, decent ROM usage, pick (at most) two.

Bye bye, ROM usage! :lol: Unless you're truly trying to make a GBA game (or a game on any other system) exactly like it was done back then (and why would you want to, when there are hundreds of existing ones?), ROM usage isn't even an issue, at least not to me. I'd like to see how these systems would fare if ROM size weren't the constraint it was back in the day; there are no pre-existing games like that. Of course, you have to show some restraint. I almost kept a copy of the level tilemap flipped sideways so I could DMA the rows and not spend any CPU time building a buffer, but I realized that was kind of ridiculous... :lol:

Anyway, I imagine that because the GBA's CPU is such a beast and the games really aren't any more complicated than the SNES's, you'd spend less than a quarter of the CPU time on actual game logic and the rest on sound. I'm almost certain that developers didn't program the GBA as efficiently as the SNES was programmed back then, because they had much more wiggle room (and I'm sure the fact that it was 2001 and not 1991 had something to do with it).


PostPosted: Mon Aug 01, 2016 8:43 am 
Offline

Joined: Sun Sep 19, 2004 11:12 pm
Posts: 19113
Location: NE Indiana, USA (NTSC)
Espozo wrote:
Actually, wait a minute, wouldn't the CPU have pretty much no time to write to the ram because the DSP would constantly be reading from it? Wait, then the SPC700 couldn't write to it either... :lol: When does the DSP pull from ram anyway?

The audio RAM runs at 3.07 MHz: two access slots for the DSP and one for the SMP. In your proposed change, it'd be two for the DSP and one for an interface similar to that of VRAM or the Apple IIGS's audio RAM.

Quote:
Unless you're truly trying to make a GBA game (or a game on any other system) like it was back then (which why would you want to, there's hundreds of already existing ones) then ROM usage isn't even an issue, or at least to me.

GBA ROM is limited to 32 MiB for the first player and 256 KiB for players 2-4. There's only one cart I know of that uses a mapper to address more: Shrek and Shark Tale.

Quote:
I imagine because the GBA's CPU is such a beast and the games really aren't any more complicated than the SNES's, that you'd spend less than a quarter of the CPU time actually on game logic and the rest on sound.

A soft mixer at the typical rate (18 kHz) might take about 15% of the CPU. The GSM Full Rate compressed audio decoder used in Luminesweeper takes 60%. But then Doom doesn't use a mixer at all (PSG music, hardcoded samples) because it's spending almost all its CPU time on rendering a pseudo-3D view in software.

Quote:
I'm almost certain that how developers programed for the GBA wasn't as efficient as it was back then on the SNES because they have much more wiggle room

So is Martin "nocash" Korth. Search for "HLL" in GBATEK.

