It is currently Sat Jun 23, 2018 6:35 am

All times are UTC - 7 hours





Post new topic Reply to topic  [ 83 posts ]  Go to page 1, 2, 3, 4, 5, 6  Next
PostPosted: Sat Apr 21, 2018 3:55 pm 

Joined: Wed May 19, 2010 6:12 pm
Posts: 2696
I saw a video on YouTube from Computerphile, where one of the guys who invented the ARM CPU said that back in 1986 their CPU was cheaper than the 68000 and the 80386 because of the RISC architecture. If that's the case, what took it so long to catch on? Was ARM unavailable in Japan at the time? Did the 32-bit data bus require too much external circuitry?


PostPosted: Sat Apr 21, 2018 4:00 pm 

Joined: Sun Sep 19, 2004 9:28 pm
Posts: 3389
Location: Mountain View, CA
Re: why it took so long to catch on: an instruction set not compatible with x86 (i.e. couldn't run most existing software during that time period) sure makes this difficult. For another example, see the PowerPC.


PostPosted: Sat Apr 21, 2018 4:17 pm 

Joined: Sun Apr 13, 2008 11:12 am
Posts: 7227
Location: Seattle
In 1986 the x86 first-mover advantage wasn't insurmountably huge yet. But the 68k's might have been.

Performance-for-cost-wise, the Acorn Archimedes was neither drastically better (similar MIPS/MHz and MHz) nor cheaper than 68k- or x86-based machines. (Fair comparison: Amiga 1000 (1985; 256K RAM) for $1300 and Amiga 2000 (1987; 512K RAM) for $1500, vs. Archimedes 305 (1987; 512K RAM) for £800 ≈ $1300.)

Whenever I've sat down and actually run the numbers on real-world performance ... RISC architectures have always been a good deal more marketing woo than actually drastically better. Even the much-vaunted DEC Alpha seems to have really only been impressive because it was easier to make their design run at higher total power dissipation than other architectures.


PostPosted: Sat Apr 21, 2018 4:42 pm 

Joined: Wed May 19, 2010 6:12 pm
Posts: 2696
ARM has single-cycle memory access. I don't see how the 68000 could be close at all, unless the Archimedes had slow memory.


PostPosted: Sat Apr 21, 2018 4:56 pm 

Joined: Sun Apr 13, 2008 11:12 am
Posts: 7227
Location: Seattle
RISC architectures consume craptons more memory for instruction encoding than CISC ones. This has always been their Achilles' heel. (It wasn't until Thumb and MIPS16 that they actually addressed this crippling design flaw.)

(edit: the RISC central thesis is "what if each instruction did less? Then we should be able to execute more of them." It turns out that when you make each instruction do less and you make all the instructions the same, larger size, you end up limited by memory bandwidth.)

Synthetic comparison: ARMv2 is 0.5 MIPS/MHz; 68k is 0.2 MIPS/MHz. More-nearly-real-world comparison: ARMv2 is about 300 Dhrystones/MHz while 68k is about 250 Dhrystones/MHz. (edit2: fixed shared error in order of magnitude)
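A back-of-the-envelope Python sketch of the bandwidth arithmetic behind this; the 68k's average instruction size here is an assumed ~3 bytes (it has variable-length encodings), not a measured figure:

```python
# Instruction bytes a CPU must fetch per clock cycle = (MIPS/MHz) * avg size.
# ARMv2 uses fixed 4-byte instructions; the 68k average is an assumption.

def fetch_bytes_per_cycle(mips_per_mhz, avg_instr_bytes):
    """Average instruction-fetch bytes needed per clock cycle."""
    return mips_per_mhz * avg_instr_bytes

arm = fetch_bytes_per_cycle(0.5, 4)    # ARMv2: 0.5 MIPS/MHz, 4 bytes/instr
m68k = fetch_bytes_per_cycle(0.2, 3)   # 68k: 0.2 MIPS/MHz, ~3 bytes/instr

print(arm, round(m68k, 2))  # 2.0 0.6 -> ARM wants over 3x the fetch bandwidth
```

So even at the same clock, the faster RISC part leans on memory several times harder per cycle, which is exactly the bandwidth limit described above.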


PostPosted: Sat Apr 21, 2018 8:51 pm 

Joined: Sun Mar 27, 2011 10:49 am
Posts: 252
Location: Seattle
The original ARM design was genius, but ARM probably wouldn't have been a choice for any Japanese systems in the mid to late 80s. Acorn was a relatively small British firm, and ARM chips initially were built primarily for their own computer. From Wikipedia, it's not clear to me that they started widely licensing them out until the early 90s - and at that point, other chips were more than catching up to ARM's offerings on many fronts. And remember, hardware is usually in development for several years before release.

The common narrative is that Nintendo chose the 65816 for the SNES in the hopes of continuity with the NES: either for backwards compatibility, or at least for the sake of familiarity and code reuse for existing developers. And Sega chose the 68k for the MD because they'd been using it in arcade boards since the mid-80s, before ARM was a thing; it was also a widely available and kind of beloved chip used in pretty much all top-of-the-line desktops and workstations at the time, so developers had a good chance of being familiar with it.

Accepting the (debatable) premise that RISC ideas were what gave ARM the edge, well...pretty much every console manufacturer did start using RISC chips around the early to mid-90s, and really didn't stop until the most recent console generation. It's just that by the time that became an option there were better options on the market than ARM.

The 3DO (1993) ran on an ARM chip.

MIPS chips are RISC chips and probably the purest examples of them, being directly descended from the Stanford researchers who pioneered the RISC idea in the first place. Nintendo partnered with SGI (the world leader in the 3D space at the time) for the N64, and all of SGI's workstations ran on MIPS, so no surprise that the N64 did too.

Sony also chose MIPS for the PlayStation. I can only speculate as to why - maybe to keep the hardware in line with the SGI workstations many devs probably were using, maybe because at that point in time it's where they got the best bang for their buck.

Sega went with Hitachi's SuperH architecture for the 32X and Saturn. As I understand it, SuperH is also RISC-y, but it has higher code density than ARM (before Thumb) and MIPS, so maybe that choice was about making better use of the memory bandwidth they had. Or maybe it was because Sega preferred to work with, and could get better deals from, a fellow Japanese company like Hitachi. They continued using SuperH for the Dreamcast as well.
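The code-density point can be sketched numerically. The widths below are the fixed encoding sizes for each ISA; the instruction count is purely hypothetical, and in reality different ISAs need different instruction counts for the same program:

```python
# Code size for a program of n instructions under fixed-width encodings.
ENCODING_BYTES = {
    "ARM (pre-Thumb)": 4,  # 32-bit instructions
    "MIPS": 4,             # 32-bit instructions
    "SuperH": 2,           # 16-bit instructions -> roughly half the bytes
}

n = 10_000  # hypothetical instruction count for some program
for isa, width in ENCODING_BYTES.items():
    print(f"{isa}: {n * width // 1024} KiB")
```

Halving the instruction width roughly halves the fetch traffic for the same hypothetical program (about 19 KiB vs. 39 KiB here), which is the bandwidth saving being described.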


PostPosted: Sat Apr 21, 2018 9:08 pm 

Joined: Mon Jan 03, 2005 10:36 am
Posts: 3078
Location: Tampere, Finland
adam_smasher wrote:
Sony also chose MIPS for the PlayStation. I can only speculate as to why - maybe to keep the hardware in line with the SGI workstations many devs probably were using, maybe because at that point in time it's where they got the best bang for their buck.

They had at least some prior experience with MIPS: Sony NEWS. It's hard to say how much this influenced the decision.

_________________
Download STREEMERZ for NES from fauxgame.com! — Some other stuff I've done: fo.aspekt.fi


PostPosted: Sat Apr 21, 2018 11:21 pm 

Joined: Tue Feb 07, 2017 2:03 am
Posts: 449
Because ARM was this tiny little British company from the people that made the BBC Micro, and nobody had heard of them. Raise your hand if you're not a Pom and you know what an Acorn computer is... exactly. Back when they made it, it wasn't really that fast nor that powerful. Second, if I wanted to make and sell a computer I'd need tools: assemblers, compilers, books, documentation, etc. The 68K has them in spades just sitting around. Motorola have a road map: there's the 68010, the FPU, the 68020, and the 68030 is expected in quarter X of year Y. x86 has it in spades, the 6502 has it in spades. I also want to make sure that when I order 2 million chips I will get 2 million chips. Motorola will look at an order of 2 million and say "next Wednesday"; Acorn, next summer ;) Third, I want to make sure the company will be alive next year so I can still get their chips. There are a lot of factors when choosing a CPU, and bang per clock is actually not a big one.

I also think we are now in a post-RISC world. I mean, MIPS is dead, SPARC is a memory, ARM now has NEON and Java VM bytecode instructions so it's long past being "RISC", PPC is dead, and scalar supercomputers are a distant memory ;)

It will be interesting to see what Apple do with their new ARM Macs... hopefully they just implode. But if the rumors of 64 cores are correct, I imagine they might need to cut them down a bit. However, losing SIMD is going to slay their video encoding times...


PostPosted: Sun Apr 22, 2018 12:29 am 

Joined: Sun Apr 13, 2008 11:12 am
Posts: 7227
Location: Seattle
Oziphantom wrote:
MIPS is dead
I'd say "in terminal condition in the hospital" but not fully dead yet. Who knows, maybe they can manage to carve out a niche away from the bully that is ARM.

Quote:
SPARC [...] PPC
POWER is bizarrely not dead, since IBM just released new POWER9 chips using 14nm FinFETs.
Even more bizarre, PPC is also not so dead—you can still buy them from NXP at unreasonably high prices.
SuperH, Alpha, and PA-RISC all seem pretty dead. SuperH at least has a modern BSD-licensed softcore for it, but that's not very interesting.

Oh, right, I completely forgot about Itanium! The reason that MIPS elected to roll over and die (sigh). Officially being taken off life support.

As new things go, there's Mill and RISC-V, but ... the former looks too weird to predict anything about, and the latter seems to be deliberately making some bad decisions (partial counterargument)

Quote:
ARM now has NEON, and Java VM bytecode instructions, so it's long past being "RISC".
It seems like what we've basically learned over the past forty years of ISA design is not that "CISC" or "RISC" was right, but that our assumptions about what the useful primitives are were flawed.


PostPosted: Sun Apr 22, 2018 2:36 am 

Joined: Tue Oct 06, 2015 10:16 am
Posts: 748
POWER is alive as long as IBM wants it so. They're definitely moving in the right direction, just lacking lower cost options - I'd be running a Raptor right now if the cost was more reasonable.

On the PS1, ability to get a decent compiler was part of the decision. They hired Cygnus to do a GCC port, them being the prime team available for such things back then. There's a lot of interesting history there.


PostPosted: Sun Apr 22, 2018 2:43 pm 

Joined: Sun Mar 27, 2011 10:49 am
Posts: 252
Location: Seattle
When the RISC ideas were first floated, they were really good ones. CPUs were blowing a huge portion of their die space on underused instructions, decoding logic, and microcode, and there just weren't enough transistors left for pipelining. Plus memory access speeds were roughly on par with CPU speed, so memory bandwidth wasn't such a massive bottleneck. There was a reason that everyone except for Intel abandoned CISC architectures in the early 90s: PowerPC and MIPS chips were crushing everyone else on speed and efficiency.

Modern manufacturing processes mean that there are now way more transistors available to CPU designers, and the heavier decoding costs for CISC are negligible. Combine that with Intel's heroic, best-in-the-business efforts to pipeline, reorder, and branch-predict the hell out of the execution stream, and more compact code representations that make better use of memory bandwidth, and suddenly RISC turned out to be a bit of a dead end.


PostPosted: Sun Apr 22, 2018 3:31 pm 

Joined: Sun Apr 13, 2008 11:12 am
Posts: 7227
Location: Seattle
adam_smasher wrote:
Plus memory access speeds were roughly on par with CPU speed, so memory bandwidth wasn't such a massive bottleneck.
Each RISC instruction does less on average than a CISC instruction. RISC can run faster, but they also have to run faster to get comparable performance. And most of the tricks that can be applied to improve RISC performance work without too much variation in CISC contexts too.

There weren't all that many "internal processing cycles" after the original 8086, 80286, and 68k—it was either unimportant for performance and didn't matter that it was dispatched to microcode, or it was low-hanging fruit for optimization.

It's not just Intel's x86 efforts for comparison here; Dhrystone results across different CPUs in any given year are usually fairly comparable, regardless of ISA. As much as it sounded at the time like RISC should have been a huge improvement, it just doesn't seem to have worked out that way.
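A quick worked example of the "has to run faster" point, reusing the rough Dhrystones/MHz figures quoted earlier in the thread:

```python
# Using ~300 Dhrystones/MHz for ARMv2 and ~250 for the 68k (rough figures
# from earlier in the thread), find the 68k clock matching an 8 MHz ARMv2.
arm_dps_per_mhz, m68k_dps_per_mhz = 300, 250

arm_score = 8 * arm_dps_per_mhz               # 8 MHz ARMv2 -> 2400 Dhrystones/s
required_68k_mhz = arm_score / m68k_dps_per_mhz

print(required_68k_mhz)  # 9.6 -- only a ~20% clock advantage needed to match
```

On these per-MHz numbers the gap is modest, which is why the "RISC is drastically faster" framing doesn't hold up once you normalize for clock speed.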


PostPosted: Sun Apr 22, 2018 6:28 pm 

Joined: Sun Jan 22, 2012 12:03 pm
Posts: 6348
Location: Canada
Oziphantom wrote:
Raise your hand if you're not a Pom and you know what an Acorn computer is...

This question is slightly broken, because I think you kinda have to be a Pom to know what "Pom" means? (I had to look it up.) I do know about Acorn computers, but I also read a lot of British computer mags when I was a kid, plus you're talking to a crowd that already has an interest in at least one old computer.

It's pretty interesting to me that both PS4 and XBone switched to x86 for this generation. Even on the PS3 and XBox 360 they made the tandem choice of PowerPC (...though beyond that choice the two architectures differed quite significantly). Seems like whatever the economic pressures that led to that decision for both Microsoft and Sony had them thinking the same way, at least for these last two.


PostPosted: Sun Apr 22, 2018 11:06 pm 

Joined: Tue Feb 07, 2017 2:03 am
Posts: 449
I was under the impression you live in the Commonwealth? Am I mistaken?

The fact that this is a forum of computer nerds with a taste for the exotic, and yet still only so many have heard of the Acorn Archimedes, further proves the point of how obscure it is.

The main power of RISC is the lower TDP. So when IBM made the Cell processor, they had power and heat limits forced upon them by the form factor of a console, so they used a PowerPC core to get the power down. The 360 having PPC let them get three cores (six hardware threads).
I think the switch to x86 was because PC gaming came on strong at the end of the PS3/360 life cycle and it became a power race once more. They didn't have to compete against each other; they had to compete against Steam sales that gave you the same game for $10 and with better graphics. For the lay customer it was "Well yeah, but that's PPC, that's what Macs had back in the day (ironically, even they switched to x86 and are now going back to RISC), and we all know Macs suck at gaming; it's not Intel, it's not the same, my 6-core Intel beats your 6-core PPC", vs. now "No, it's x86 vs. x86"... the details of Ivy, Sandy, Haswell, et cetera are lost on the average customer.


PostPosted: Mon Apr 23, 2018 12:52 am 

Joined: Mon Mar 30, 2015 10:14 am
Posts: 270
psycopathicteen wrote:
I saw a video on YouTube from Computerphile, where one of the guys who invented the ARM CPU said that back in 1986 their CPU was cheaper than the 68000 and the 80386 because of the RISC architecture. If that's the case, what took it so long to catch on? Was ARM unavailable in Japan at the time? Did the 32-bit data bus require too much external circuitry?

It's because of the RAM requirement: it needs single-cycle-access RAM, which is why the Acorn Archimedes was so expensive at the time, with 512KB/1MB (and even 2MB) of 140ns SRAM in stock machines (8MHz).
You can consider the CPU cheaper than a 68k, but unfortunately not the whole machine, because of the cost of the RAM.
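A sanity check on that memory-speed claim; this is just the naive reciprocal of the access time, ignoring any overlapping of sequential accesses the Archimedes memory system may have done:

```python
# Max random-access rate supported by 140 ns memory, vs. the 8 MHz ARM2
# clock that "single-cycle memory access" would demand.
access_time_ns = 140
max_access_rate_mhz = 1000 / access_time_ns  # 1000 ns per us -> rate in MHz

print(round(max_access_rate_mhz, 2))  # 7.14 -- just shy of the 8 MHz clock
```

So 140ns parts only just keep up with an 8MHz core doing an access every cycle, which is why the memory bill dominated the machine's cost.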

