PostPosted: Tue Feb 12, 2019 4:15 pm 
Joined: Wed May 19, 2010 6:12 pm
Posts: 2892
Take an existing CPU instruction set and a big chunk of code written by human programmers, and have a computer analyze how frequently each instruction is used. If it detects repeated groups of instructions, it adds new instructions for those groups to the instruction set, and either removes infrequent instructions or moves them behind a prefix byte.
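
A minimal sketch of that frequency-analysis step (in Python, with a made-up instruction trace; a real pass would work on disassembly of a large code corpus):

Code:
from collections import Counter

# Hypothetical instruction trace; a real pass would read disassembler
# output from a large corpus of human-written code.
trace = ["LDA", "STA", "LDA", "STA", "INX", "BNE", "LDA", "STA", "INX", "BNE"]

singles = Counter(trace)                 # how often each instruction occurs
pairs = Counter(zip(trace, trace[1:]))   # how often each adjacent pair occurs

# The hottest pairs are candidates for new fused instructions.
for (a, b), n in pairs.most_common(3):
    print(f"fused-op candidate {a}+{b}: {n} occurrences")

# The rarest singles are candidates for removal or a prefix-byte escape.
for op, n in sorted(singles.items(), key=lambda kv: kv[1])[:3]:
    print(f"prefix-byte candidate: {op} ({n} uses)")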


PostPosted: Tue Feb 12, 2019 4:54 pm 
Joined: Sun Mar 19, 2006 9:44 pm
Posts: 1005
Location: Japan
That's an interesting question. Computerphile videos are all into machine learning, and I guess this fits the bill -- have a neural network iterate over many different instruction [types] and choose the best n for the job.

_________________
http://www.chrismcovell.com


PostPosted: Tue Feb 12, 2019 5:01 pm 
Joined: Fri May 08, 2015 7:17 pm
Posts: 2564
Location: DIGDUG
This would be the part of the Terminator movie when the machine becomes smart enough to wipe 99% of humanity off the planet.

_________________
nesdoug.com -- blog/tutorial on programming for the NES


PostPosted: Tue Feb 12, 2019 5:45 pm 
Joined: Thu Mar 31, 2016 11:15 am
Posts: 529
Doubt you'd gain much performance just by tweaking the instruction set. See, for example, x86 (a terrible ISA) having better perf than Itanium. And fewer instructions (RISC) tend to be better than more.

Computers are clearly the solution to chip design, but there are so many factors that it's a really tough problem.


PostPosted: Tue Feb 12, 2019 6:23 pm 
Joined: Sun Apr 13, 2008 11:12 am
Posts: 8564
Location: Seattle
Regarding the thread starter: there are people who make a point of studying "optimal" ISAs. You can look for discussions related to the invention of the RISC-V architecture to find people working out how to do this. There's also the "Mill" architecture, which is doing some really wacky things ... so wacky it's hard to evaluate. (But the inventor's lecture series on YouTube on how the Mill architecture works does present some fascinating ideas.)

pubby wrote:
And fewer instructions (RISC) tend to be better than more.
Seemed to be.

RISC is the right way to get "any computer" as cheaply as possible, but the past twenty years have shown that simpler and more orthogonal instruction sets are actually not very useful for performance. The silicon cost of superscalar execution eats a large amount of die space either way.

Ideally, you'd have an ISA with an infinite number of registers, complete orthogonality in every instruction, and every instruction fitting in 0 bits. Something has to give in a real-world design; orthogonality is often the first victim. Fancy-seeming instructions (like bit test and bit set), despite being redundant with other instructions, are too critical for real-world application performance not to include as first-class operations. The performance of signal processing instructions (such as multiply-and-accumulate, or various SIMD things) is also paramount. Before long you discover you've built a weird CISC ISA that sure isn't RISC by anything but the most generous definitions of the term; it's just that its set of first-class instructions is very different from what the x86 or 68k had.


PostPosted: Wed Feb 13, 2019 1:41 am 
Joined: Tue Oct 06, 2015 10:16 am
Posts: 967
There is a paper on Huffman-compressing instructions, then executing them with real-time decoding, without ever decompressing the whole image. IIRC it was on ARM, and it achieved decent performance with code size about halved. You should find it on arXiv.
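
Not from the paper (which would need a hardware real-time decoder), just a minimal sketch of the compression side, over a hypothetical opcode frequency table: build a Huffman code so common instructions get short bit patterns.

Code:
import heapq
from itertools import count

# Hypothetical opcode frequency table.
freqs = {"LDA": 40, "STA": 35, "INX": 15, "BNE": 8, "BRK": 2}

# Heap entries are (weight, tiebreaker, tree); the tiebreaker keeps
# tuples comparable when weights are equal.
tie = count()
heap = [(w, next(tie), op) for op, w in freqs.items()]
heapq.heapify(heap)
while len(heap) > 1:
    w1, _, a = heapq.heappop(heap)
    w2, _, b = heapq.heappop(heap)
    heapq.heappush(heap, (w1 + w2, next(tie), (a, b)))

def codes(tree, prefix=""):
    if isinstance(tree, str):                    # leaf: an opcode
        yield tree, prefix or "0"
    else:                                        # internal node: (left, right)
        yield from codes(tree[0], prefix + "0")
        yield from codes(tree[1], prefix + "1")

# Frequent opcodes come out with the shortest codes.
for op, bits in sorted(codes(heap[0][2]), key=lambda c: len(c[1])):
    print(op, bits)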


PostPosted: Wed Feb 13, 2019 2:14 am 
Joined: Fri Nov 12, 2004 2:49 pm
Posts: 7741
Location: Chexbres, VD, Switzerland
calima wrote:
There is a paper on Huffman-compressing instructions, then executing them with real-time decoding, without ever decompressing the whole image. IIRC it was on ARM, and it achieved decent performance with code size about halved. You should find it on arXiv.

Is there any advantage over the THUMB instruction set, which also halves code size (but typically hurts performance)?

As for the original question, it's complex, but I don't think that's the case - a computer could be useful in designing instruction sets for other computers at design time, but not so much for modifying its own instruction set - that would require an FPGA that reprograms itself at runtime, which is technically possible but ridiculously complex for little gain. What modern CPUs really do, however, is transcode x86 instructions into some internal instruction set in real time before executing them, because supposedly that's more performant than running x86 instructions directly (something that was given up somewhere around the Pentium 4). I could be wrong or have misunderstood something.
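
A toy illustration of that transcoding step, with made-up instruction and micro-op names: one memory-destination CISC instruction gets cracked into simple load/compute/store micro-ops before execution.

Code:
# Made-up instruction and micro-op names throughout; real decoders
# work on binary encodings, not tuples of strings.
def crack(insn):
    op, dst, src = insn
    if op == "ADD" and dst.startswith("["):     # memory-destination add
        addr = dst.strip("[]")
        return [("LOAD", "tmp0", addr),         # tmp0 <- mem[addr]
                ("ADD", "tmp0", src),           # tmp0 <- tmp0 + src
                ("STORE", addr, "tmp0")]        # mem[addr] <- tmp0
    return [insn]                               # already simple: pass through

# "ADD [0x1000], eax" cracks into three micro-ops:
for uop in crack(("ADD", "[0x1000]", "eax")):
    print(uop)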

In any case, the beauty and simplicity of the 6502 is missed on modern CPUs :)


PostPosted: Wed Feb 13, 2019 10:54 am 
Joined: Wed May 19, 2010 6:12 pm
Posts: 2892
Bregalad wrote:
As for the original question, it's complex, but I don't think that's the case - a computer could be useful in designing instruction sets for other computers at design time, but not so much for modifying its own instruction set


I meant "its own ISA" as in "its own invention".

