Anonymous ID: 000000 Dec. 19, 2017, 9:22 a.m. No.125234   >>5270 >>5275 >>5329

>>125085

too much noise about quantum computers.

 

Quantum computing is no different from a parallel GPU processor solving an equation.

 

For example, if you wanted to solve the equation x*10=400 for the value x, there are many ways to do it. One way is to take 400/10, which equals 40 - this requires an actual algorithm. The easiest way is called "plug and chug" or "trial and error": you just plug in values for x until you hit the correct answer. You could plug in the values 1 to 100 and see which one satisfies the equation (in this case, only one answer will be valid). Quantum computers use this kind of trial-and-error method. All answers are returned at once; however, the observer can quickly separate the correct answer from the incorrect ones by simple deduction after the hard work is done. This is exactly the same kind of thing you do on a GPU when you are crunching numbers or trying to crack a password. A sketch of the idea is below.
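
A minimal Python sketch of that plug-and-chug search (the function name and the 1-to-100 range are just illustrative, not from any real codebase):

# Brute-force ("plug and chug") search: try every candidate x
# until one satisfies the equation x * 10 == 400.
def brute_force_solve(target=400, coefficient=10, candidates=range(1, 101)):
    # Collect every candidate that passes the check; here only
    # one value (x = 40) will qualify.
    return [x for x in candidates if x * coefficient == target]

print(brute_force_solve())  # [40]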

 

The difference between a GPU and a quantum computer is that a quantum computer does 2^QUBITS operations per query (due to quantum state superposition, which is not voodoo magic, just basic quantum physics), whereas a GPU can only do 1^GPU operations per query.
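
A toy classical simulation of that superposition counting, assuming numpy is available (on real hardware you cannot simply read every amplitude back out; this only illustrates how 2^n basis states get touched in one pass):

import numpy as np

# n qubits span 2**n basis states, and one "query" acts on all of
# them at once in this state-vector picture.
n_qubits = 7                      # 2**7 = 128 basis states, covers x in 0..100
dim = 2 ** n_qubits

# Uniform superposition: equal amplitude on every basis state |x>.
state = np.full(dim, 1 / np.sqrt(dim))

# Oracle-style check applied across the whole state vector in one
# vectorized pass: flip the sign of amplitudes where x*10 == 400.
x = np.arange(dim)
state[x * 10 == 400] *= -1        # only |40> gets marked
print(np.nonzero(state < 0)[0])   # [40]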

 

1^any number = 1, so GPUs have to have high clock rates to churn through problems many times

2^512 qubits = a huge number, so one pass can solve a large number of problems. After the pass is complete, physics prevents asking the question again for a long time due to decoherence. So in the long run, the capabilities of a quantum computer and a supercomputer are almost the same, because the supercomputer does not need to take breaks.
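
For scale, plain Python can print that number out (no assumptions here beyond the 512 from the post):

# 2**512 in full: a 155-digit integer. One "pass" in the argument
# above would touch that many basis states at once.
n = 2 ** 512
print(n)
print(len(str(n)), "digits")  # 155 digits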

 

Over time, quantum computers will outperform a classical supercomputer.

 

long story short - stop talking about quantum computers and dimensional magic - you will sound like an uninformed shill.

Anonymous ID: 000000 Dec. 19, 2017, 9:35 a.m. No.125299

>>125275

right, I think I got that backwards; it should have been GPUcores^1, which equals GPUcores, whereas the qubit side is still correct at 2^QUBITS, which is a large number for parallelism. Thank you for the correction.
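
A quick sanity check of the corrected arithmetic in Python (the core count is a made-up round number, not any real card's spec):

# Corrected comparison: GPU parallelism scales linearly with core
# count (GPUcores**1 == GPUcores), while a quantum register's state
# space scales exponentially with qubit count (2**QUBITS).
gpu_cores = 10_000               # hypothetical core count
qubits = 512
print(gpu_cores ** 1)            # 10000: linear in hardware
print(len(str(2 ** qubits)))     # 155: digits in 2**512, exponential in qubits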