There's no point in continuing to argue with you while you're:
1- Using manuals pertaining to newer versions of ARM than what was contemporary
2- Using instructions that didn't exist on those contemporary versions of ARM
3- Using the naive slow way of doing things on 68k
If you want to find out why the relative performance differs, you must find the corresponding disassemblies. Have you even looked at what the Dhrystone metric is measuring?
Was ARM originally cheaper than 68000?
- TmEE
- Posts: 960
- Joined: Wed Feb 13, 2008 9:10 am
- Location: Norway (50 and 60Hz compatible :P)
- Contact:
Re: Was ARM originally cheaper than 68000?
I sense substantial rectal discomfort again...
One can have fast instructions, but as reality has shown, you do lose a significant amount of performance simply from needing a whole lot more of those instructions to do a variety of tasks. Those benchmarks help to judge such things: a comparison of getting some sort of task done, rather than of the most basic steps needed to perform it.
- rainwarrior
- Posts: 8731
- Joined: Sun Jan 22, 2012 12:03 pm
- Location: Canada
- Contact:
Re: Was ARM originally cheaper than 68000?
I'm kinda curious what the end goal is with the argument. Hypothetically let's say there was a way to definitively prove that ARM or 68k was e.g. 20% better power/cost ratio than the other, in a given year, in a given country, whatever. If you had such an answer, what would you wish to do with it? I feel like we passed the point of "I'm just curious" several pages ago, and the motives at this point are a big mystery to me?
If you're looking to learn how to write efficient code for an old ARM or 68k setup there's probably a much more direct way to learn about it than arguing about which is better. Same deal if you're trying to build something new with old parts, etc.
If you want to decide for yourself whether some decision made by Sega or whomever 30 years ago was objectively right or wrong, I mean, at best that's a "curiosity" question, but even if you think you could definitively compare the CPUs quantitatively, for the actual decision being made there's enough other important economic factors to make such a comparison almost meaningless. Business relationships, factory location, scale of production, logistics of supply, this stuff is way more than enough to skew the actual practical cost of making this choice well beyond whatever the raw computing power difference is worth. We're having such a hard time being quantitative about it now with total hindsight, and it would have been much harder to compare at the time. Other factors were much more important to the decision; the best a CPU maker could demonstrate would just be a reasonably competitive amount of power.
...and if you're not actually being specific about these things and want to argue about very vague matters like RISC vs CISC, I don't see what kind of comparison you could possibly hope to make. The difference between the architectures is interesting to talk about, but posing it as an argument about which is quantitatively better? It's been really weird to spectate this.
Kinda the bottom line is just that they're both practical CPU architecture types that have both remained competitive, which is why neither has died off. In the same respect they've both adapted over time as well to remain competitive, which is part of why the definitions for these architecture types are increasingly vague. Part of remaining competitive is about other things besides cost and computing power, too. There are a lot of other practical factors with a CPU, but also even running a business takes a lot more than making a good product, and it's all relevant to the answer to the question of why to use one or the other.
-
- Posts: 3140
- Joined: Wed May 19, 2010 6:12 pm
Re: Was ARM originally cheaper than 68000?
lidnariq wrote:There's no point in continuing to argue with you while you're:
1- Using manuals pertaining to newer versions of ARM than what was contemporary
2- Using instructions that didn't exist on those contemporary versions of ARM
3- Using the naive slow way of doing things on 68k
If you want to find out why the relative performance differs, you must find the corresponding disassemblies. Have you even looked at what the Dhrystone metric is measuring?
Well then every document for the ARM2 has to be wrong, because every one I can find says it can post-increment by a constant.
How is my example a naive approach? You kept repeating how "some 68000 instructions require 2 ARM instructions and that's a big deal", so I showed you an example of a complex instruction the 68000 has, and the ARM is STILL twice as fast. Now you're saying that complex instructions on the 68000 don't matter because they're slow anyway and that good programmers would make better use of simpler instructions, which ironically was one of the reasons why RISC architecture was invented in the first place.
Re: Was ARM originally cheaper than 68000?
I think in some cases an instruction like this can give the edge to the 68K, but I still wonder why you're trying to compare these two CPUs. The ARM is a true 32-bit CPU released in 1985 with a 32-bit wide bus, while the 68000 is a 1979 CPU with "only" a 16-bit wide bus (and that is a strong difference).
Code:
ADD Dn,<ea>
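The argument above can be made concrete with a rough sketch. A 68000 `ADD` with a post-incrementing memory operand does in one instruction what ARM2 needs two for, yet the ARM pair can still finish in fewer cycles at the same clock. The specific cycle counts below are my own approximations from period documentation, not figures from this thread; treat them as assumptions.

```python
# Sketch: one 68000 instruction vs. the two-instruction ARM2 equivalent.
# Cycle counts are approximate assumptions, not measurements.

M68000_SEQ = ["ADD.W (A0)+,D0"]   # read word, add, post-increment A0
ARM2_SEQ = [
    "LDR R1,[R0],#4",             # load word, post-increment R0 by constant 4
    "ADD R2,R2,R1",               # add the loaded value
]

M68000_CYCLES = 8   # ~8 cycles for ADD.W (An)+,Dn (assumption)
ARM2_CYCLES = 4     # ~3 for LDR plus ~1 for ADD (assumption)

def adds_per_second(cycles_per_add: int, mhz: float) -> float:
    """Throughput at a given clock, assuming no wait states."""
    return mhz * 1_000_000 / cycles_per_add

# At the same 8 MHz clock, fewer cycles per add wins even though
# the ARM needs twice the instruction count for this operation.
print(adds_per_second(M68000_CYCLES, 8))  # 68000 @ 8 MHz
print(adds_per_second(ARM2_CYCLES, 8))    # ARM2  @ 8 MHz
```

Under these assumed timings the two-instruction ARM sequence still completes roughly twice as many adds per second, which is the point being argued; with real code the answer depends on the actual instruction mix.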
-
- Posts: 3140
- Joined: Wed May 19, 2010 6:12 pm
Re: Was ARM originally cheaper than 68000?
The reason why I'm comparing the two now is that I doubt the Dhrystone benchmark is accurate. The fact that the memory bus is 2x as wide and 4x as fast at the same MHz, but only shows up as 20% faster on the Dhrystone test, makes me suspect they didn't use a good enough compiler for the ARM. I know "RISC" means that it sometimes has to use more instructions, but I doubt that ARM ever needs 4 times as many instructions as the 68000.
It might've been an edge case where something needed exactly 15 general purpose registers, and having just 13 general purpose registers brought the ARM down.
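For what it's worth, a bandwidth advantage shrinking to ~20% overall isn't necessarily evidence of a bad compiler: only the memory-bound fraction of a benchmark benefits from a faster bus. A quick Amdahl-style sketch shows this; the 25% memory-bound fraction used here is purely an illustrative assumption, not a measured property of Dhrystone.

```python
# Back-of-envelope sketch: why a 4x faster bus can show up as only ~20%
# on a whole-program benchmark. The memory-bound fraction is an assumption.

def overall_speedup(mem_fraction: float, mem_speedup: float) -> float:
    """Amdahl-style speedup when only the memory-bound fraction
    of the runtime benefits from the faster bus."""
    return 1 / ((1 - mem_fraction) + mem_fraction / mem_speedup)

# If ~25% of the benchmark's time is bus-limited, a 4x faster bus
# yields only about a 23% overall improvement:
print(round(overall_speedup(0.25, 4.0), 2))  # ≈ 1.23
```

So the ~20% Dhrystone gap is at least consistent with a workload that spends most of its time in register-to-register work rather than waiting on the bus.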
-
- Posts: 271
- Joined: Sun Mar 27, 2011 10:49 am
- Location: Victoria, BC
Re: Was ARM originally cheaper than 68000?
Are you questioning the accuracy of the Dhrystone benchmark or the particular compilers used when generating the particular Dhrystone results you're measuring?
If the former: no one's under any illusion that the Dhrystone benchmark is measuring anything besides performance on the Dhrystone benchmark. It's meant to be roughly representative of an "average" program, which might not correspond to the sort of programs you're writing.
If the latter: I'd be surprised if compilers for the two different architectures differed in quality all that much, but anything's possible. It's arguable that part of what makes Dhrystone useful is measuring the quality of compiled code using contemporary compilers, too - theoretical max performance is far less important to most people in most cases than the sort of performance they'll actually tend to get with real software compiled with real compilers. But as many people in this thread have pointed out already, the only way for you to get any answers that'd satisfy you about this is to look at the generated output yourself.
In any case: the question of why you care is still kinda open.
-
- Posts: 3140
- Joined: Wed May 19, 2010 6:12 pm
Re: Was ARM originally cheaper than 68000?
In any case: the question of why you care is still kinda open.
I don't actually care that much. I just had to explain because people were misunderstanding what I was saying.