It is currently Tue Apr 24, 2018 3:56 am

All times are UTC - 7 hours





PostPosted: Wed Dec 23, 2015 4:47 pm 

Joined: Sat Jul 12, 2014 3:04 pm
Posts: 958
I still love the idea of a quantum bogosort: "Shuffle your list using quantum randomness. If the list is not in the desired order, destroy the universe."


PostPosted: Wed Dec 23, 2015 8:43 pm 

Joined: Mon Sep 15, 2014 4:35 pm
Posts: 3246
Location: Nacogdoches, Texas
Well, I remember the Floating Point Unit getting mentioned somewhere in this thread, and since it's pretty much useless to worry about quantum computing at this point if you're not a scientist (although many people do for whatever reason), I want to ask something about that instead.

Now, according to this: https://en.wikipedia.org/wiki/Floating-point_unit, it seems like just computer hardware that does math. The thing is, it seems this is already integrated into just about every single processor ever, unless there's a processor that can only move numbers, which would be pretty much useless. Could you call the multiplication/division units in the SNES "Floating Point Units"? I just don't see how this is a new concept.


PostPosted: Wed Dec 23, 2015 8:55 pm 
Formerly 65024U

Joined: Sat Mar 27, 2010 12:57 pm
Posts: 2259
It's not new, but at one point in time the FPU was the equivalent of the qubit in terms of how much it could speed up processing.


PostPosted: Wed Dec 23, 2015 9:29 pm 

Joined: Mon Sep 15, 2014 4:35 pm
Posts: 3246
Location: Nacogdoches, Texas
Well, actually, I do have something I was thinking about when it comes to qubits. I know it's a common misconception that we're using 128+ bit processors instead of 32- or 64-bit ones, just because the PlayStation used a 32-bit processor, the N64 used a 64-bit one, and we've come a long way since then. We have, but we're still using the same word size, because there's little reason to move numbers larger than 64 bits in one go. The thing everyone has been saying about qubits is that there's so much checking of "1"s and "0"s to determine what would be a 1 or a 0 on a traditional computer (the whole 75% and 20% thing) that it would actually be slower than a traditional computer unless you add up a bunch of qubits, since the amount of data you can have grows exponentially with each qubit you add. The thing is, only 6 qubits is apparently equal to 64 bits, which is what we've been using for over a decade with little need to change. I'm not getting this at all, am I...


PostPosted: Wed Dec 23, 2015 10:15 pm 
Formerly 65024U

Joined: Sat Mar 27, 2010 12:57 pm
Posts: 2259
Now take that 64-bit PC and brute-force a 2048-bit encryption key. I'll see you next millennium. A quantum computer would need far fewer qubits and could do the task nearly instantaneously. Understand it yet? Just like the FPU: floating point done on the CPU in software is slow as molasses, and doing it in FPU hardware was a big leap that made it not slow. Same idea, just a different area of computation. Without FPUs as good as they are now, GPUs wouldn't exist. Processors have FPUs, but the main application they shine in is 3D graphics, doing billions of calculations, all floating point, every second.


PostPosted: Thu Dec 24, 2015 12:15 am 

Joined: Mon Sep 15, 2014 4:35 pm
Posts: 3246
Location: Nacogdoches, Texas
3gengames wrote:
Now take that 64-bit PC and brute-force a 2048-bit encryption key. I'll see you next millennium. A quantum computer would need far fewer qubits and could do the task nearly instantaneously. Understand it yet?

So you're saying we'd see quantum computers (assuming we even do) with more than 6 qubits?

3gengames wrote:
Floating point on the CPU is slow as molasses.

I almost don't even see how it's possible... What instructions would you even have left? Would you still have AND, OR, and EOR? You couldn't even have things like CMP because it involves subtraction.


PostPosted: Thu Dec 24, 2015 12:30 am 
Formerly 65024U

Joined: Sat Mar 27, 2010 12:57 pm
Posts: 2259
Yes, you'd need enough qubits to do the computation you wanted.


And floating point math is the same as normal math: you add, subtract, multiply, divide, etc., just with much more complex numbers, which the FPU does for you. Please just read up on floating point numbers in programming. I'm using it as an example because it's the closest comparable leap in computation technology to quantum computing, to make it easy, but I don't think you even know what floating point numbers are. :P


PostPosted: Thu Dec 24, 2015 12:57 am 

Joined: Mon Sep 15, 2014 4:35 pm
Posts: 3246
Location: Nacogdoches, Texas
3gengames wrote:
Yes, you'd need enough qubits to do the computation you wanted.

I just can't see how working with numbers this large would ever be practical.

3gengames wrote:
And floating point math is the same as normal math: you add, subtract, multiply, divide, etc., just with much more complex numbers, which the FPU does for you. Please just read up on floating point numbers in programming. I'm using it as an example because it's the closest comparable leap in computation technology to quantum computing, to make it easy, but I don't think you even know what floating point numbers are. :P

Well, when I looked up "Floating Point Unit", I was led to believe that it handled addition, subtraction, multiplication, and division for all types of numbers, but then I looked up just plain "Floating Point", and again according to Wikipedia, it seems like "floating point numbers" are just numbers with a whole part and a decimal part, and the "floating point" is the decimal point. It says that it's kind of like scientific notation, and I'm under the impression that you have 2 numbers to make a floating point number:

A significand, like 1.265 (stored as 1265)

And an exponent, like 10^2 (I'm not sure how this would be expressed, due to the fact that it can be negative)

To create 126.5
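Something like this, if I'm understanding it right (a toy decimal sketch; real hardware does this in base 2 per IEEE 754, and the function name is made up):

```python
# Toy decimal version of the "two numbers" idea: a significand plus an
# exponent together make one floating point value. Real FPUs use base 2.

def make_float(significand, exponent):
    # Divide for negative exponents so the decimal cases stay exact here.
    if exponent >= 0:
        return significand * 10 ** exponent
    return significand / 10 ** (-exponent)

print(make_float(1265, -1))   # 126.5  -- the point "floats" one place left
print(make_float(1265, 2))    # 126500 -- same digits, different exponent
```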


PostPosted: Thu Dec 24, 2015 4:56 am 

Joined: Sun Sep 30, 2012 3:44 am
Posts: 83
https://en.wikipedia.org/wiki/IEEE_floating_point

Edit: Sorry, the aforementioned article might be interesting also, but I meant to post this: https://en.wikipedia.org/wiki/Floating_point


PostPosted: Thu Dec 24, 2015 9:00 am 

Joined: Sun Sep 19, 2004 11:12 pm
Posts: 19947
Location: NE Indiana, USA (NTSC)
Espozo wrote:
Now, according to this: https://en.wikipedia.org/wiki/Floating-point_unit, it seems just like computer hardware that does math. The thing is, it seems that this is already integrated in just about every single processor ever

The difference between the math done in an FPU and the math done in the CPU is that in an FPU, the "exponents" determine how much shifting is done to the numbers when they are added or subtracted. This means you can represent numbers in the quintillions and numbers in the one-quintillionths with the same data type.
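You can see that split directly from Python, for what it's worth; math.frexp pulls out the base-2 significand and exponent pair (a quick sketch):

```python
import math

# frexp splits a double x into (m, e) with x == m * 2**e and 0.5 <= |m| < 1:
# essentially the significand/exponent pair the FPU shifts against.
m, e = math.frexp(126.5)
print(m, e)                  # 0.98828125 7
print(m * 2 ** e == 126.5)   # True

# One 64-bit format reaches the quintillions and the one-quintillionths,
# purely by varying the exponent:
print(math.frexp(1e18)[1])   # 60
print(math.frexp(1e-18)[1])  # -59
```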

Prior to the i486DX and MC68040, the FPU was a separate coprocessor, model number 8087, 80287, 80387, 68881, or 68882.

Espozo wrote:
I know it's a common misconception that we aren't using 32 or 64 bit processors and that we're using 128 + bit processors

Misconception my rectum. SSE instructions, introduced with the Pentium III, work on 128-bit vectors. So do the AltiVec instructions in the PowerPC G4. AVX instructions, introduced with the Intel Core Sandy Bridge and AMD Bulldozer processors, work on 256-bit vectors.

Atari defines the "bits" of a platform as the width of the data bus to RAM or VRAM. By this definition, the NES and SMS are 8-bit, the TG16 and Super NES are 16-bit because the VDC/S-PPU data bus is 16-bit, the Genesis is 16-bit because the CPU data bus is 16-bit, and the Jaguar is 64-bit because the GPU data bus is 64-bit. The GBA is 16-bit, the 8088 is 8-bit, the 286 and 386SX are 16-bit, the 386DX and 486 are 32-bit, and everything from the Pentium through the first Athlon 64 are 64-bit. But bus width can be deceiving, as Rambus RDRAM and AMD HyperTransport use narrower buses at double or higher data rate. The N64 is 8-bit by this measure.


PostPosted: Sat Dec 26, 2015 7:29 pm 

Joined: Thu Aug 12, 2010 3:43 am
Posts: 1589
OK, just to make it clear: the FPU handles "floating point numbers", which are numbers that can have fractional parts. The CPU handles integers, which are much faster to process and can take up less space if they're small (and they're the only ones on which you can perform bitwise operations like AND, XOR, or bit shifting).

In other words: integers (CPU) vs non-integers (FPU).

Espozo wrote:
It says that it's kind of like scientific notation

That's just how floating point numbers are stored internally (usually!), so the number of digits after (or before) the point isn't fixed. You get less precision with larger numbers and an absurd amount of precision with smaller numbers. This tends to work well enough for most math stuff. But it's probably something you shouldn't worry much about if you aren't a programmer =P
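You can actually watch the precision slide around; math.ulp (Python 3.9+) reports the gap between a double and the next representable one:

```python
import math

# ulp(x) = spacing between x and the next representable double up.
print(math.ulp(1.0))    # 2.220446049250313e-16 (i.e. 2**-52)
print(math.ulp(1e18))   # 128.0 -- near 1e18, doubles are 128 apart

# Consequence: adding less than half that gap changes nothing at all.
print(1e18 + 1.0 == 1e18)   # True
```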


PostPosted: Tue Jun 07, 2016 1:10 pm 

Joined: Thu Jul 23, 2015 7:54 pm
Posts: 152
Location: USA
rainwarrior wrote:
One is to create algorithms that are well suited to quantum computation. Like I think the general idea is that reading all the output states is difficult, but it helps if you have a lot of input bits and only need to read some of the output. There's a quantum integer factorization algorithm, which supposedly would invalidate RSA encryption if it could be run on a suitable quantum computer (no such computer exists yet).

This has been interesting me for a while now. Do you think prime factorization will ever be doable in better than exponential time on a traditional computer?

I came up with a really simple factorization algorithm, which I later found out was just called Trial Division, but I don't understand why it runs in exponential time in the worst case (semiprimes).

For example, 100. The smallest prime factor is 2, so the number is now 50, half of what it originally was. 50 / 2 is 25, and 25 / 5 = 5. Done. But now take a number like 4,611,686,112,916,668,371, the product of the prime numbers 2147483647 and 2147483693. Not an RSA number by any means, but it took about 22 seconds to factor. Obviously the time is exponential, but I really don't understand why. Without nitty-gritty optimization details, the number of checks would only need to be roughly the sum of all the factors, right? I read somewhere that the time grows exponentially with how many digits the number has, but not with how big the number actually is (i.e. 2^41 doesn't take exponentially more time to factor than 2^40). I know I'm swimming in a pretty deep pool right now.
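In case it helps, here's roughly the algorithm I mean, with a check counter bolted on (names made up). Worst case it does about √N checks, and √N is 10^(digits/2), which I guess is where "exponential in the digits" comes from:

```python
def trial_division(n):
    """Factor n, counting how many divisibility checks it takes."""
    factors = []
    checks = 0
    d = 2
    while d * d <= n:
        checks += 1
        if n % d == 0:
            factors.append(d)
            n //= d        # keep dividing out the same d
        else:
            d += 1
    if n > 1:
        factors.append(n)  # whatever is left over 1 is prime
    return factors, checks

print(trial_division(100))    # ([2, 2, 5, 5], 6)
print(trial_division(10403))  # ([101, 103], 100) -- ~sqrt(10403) checks
```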

My other question is, how are they "doing things" with quantum computers? I understand it's still only for really specialized things (as of April of this year, they factored 200,099 using Shor's algorithm). So are quantum processors a thing? Or are they just using physics or QM to make logic happen? What would an instruction set look like for a quantum computer?


PostPosted: Tue Jun 07, 2016 1:29 pm 

Joined: Sun Sep 19, 2004 11:12 pm
Posts: 19947
Location: NE Indiana, USA (NTSC)
It's "exponential" in the length of the input, which is the logarithm of the number being factored. For example, factoring a 20-digit semiprime into two 10- to 11-digit primes takes on the order of 10^10 trial divisions.


[Corrected math error]


PostPosted: Tue Jun 07, 2016 1:33 pm 

Joined: Sun Jan 22, 2012 12:03 pm
Posts: 6221
Location: Canada
Sogona wrote:
I don't understand why this runs in exponential time

I think the problem is that the difficulty grows exponentially with the size of your key. Adding a single bit doubles* your expected execution time for factoring it. Your solution is outpaced by the problem.

* Not entirely accurate, but you get the point.
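To put a rough number on that asterisk for plain trial division (where the work goes as √N):

```python
import math

# Worst-case trial division does ~sqrt(N) checks; for a b-bit key that's
# sqrt(2**b), so each extra bit multiplies the work by sqrt(2), not 2.
work = lambda bits: math.sqrt(2.0 ** bits)

print(work(41) / work(40))   # 1.414... (sqrt(2) per extra bit)
print(work(60) / work(40))   # 1024.0 -- 20 more bits, ~1000x the work
```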


PostPosted: Tue Jun 07, 2016 2:40 pm 

Joined: Thu Aug 12, 2010 3:43 am
Posts: 1589
Sogona wrote:
prime numbers 2147483647 and 2147483693

Today I learned that INT32_MAX is prime o_o


Top
 Profile  
 