P != NP by quantum mechanics



pendell
2014-04-09, 09:40 AM
A new hypothesis, according to this article (https://medium.com/the-physics-arxiv-blog/7ef5eea6fd7a).

For the uninitiated, P and NP are concepts from computational complexity theory -- P is the set of all problems that computer algorithms can solve efficiently, while NP is the set of problems whose solutions are easy to *verify* but which may be intractable to solve. One such example would be the travelling salesman (http://en.wikipedia.org/wiki/Traveling_salesman_problem) problem.
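(A minimal sketch of that verify-versus-solve gap, in Python; my own illustration, not from the article, and the distance matrix is made up. Checking a proposed tour against a length budget is cheap, while finding the best tour by exhaustive search takes factorial time.)

```python
# Hypothetical illustration: verifying a TSP tour is polynomial-time,
# while exhaustive search over all tours grows factorially with n.
from itertools import permutations

def tour_length(tour, dist):
    # Sum of edge lengths around the closed tour: O(n), easy.
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def verify(tour, dist, budget):
    # Polynomial-time check of a claimed solution: valid tour, length within budget.
    return sorted(tour) == list(range(len(dist))) and tour_length(tour, dist) <= budget

def brute_force_tsp(dist):
    # Exhaustive search over (n-1)! tours: hopeless for large n.
    n = len(dist)
    best = min(permutations(range(1, n)), key=lambda p: tour_length((0,) + p, dist))
    return (0,) + best

dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 8],
        [10, 4, 8, 0]]
best = brute_force_tsp(dist)
print(best, tour_length(best, dist), verify(list(best), dist, budget=25))  # (0, 1, 3, 2) 23 True
```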

Anyway, the article's author suggests that the reason we can't model macroscopic objects according to quantum physics, as opposed to Newtonian mechanics, is that Schrödinger's equation is in the NP class. And if P != NP, this means that it's not going to EVER be practical to model objects on the quantum level, regardless of technological advances.

It seems to hold up, but it's based on the assumption that P != NP.

Which would have some interesting implications. Wikipedia discusses the issue (http://en.wikipedia.org/wiki/P_versus_NP_problem). Oh wait, so does XKCD (http://forums.xkcd.com/viewtopic.php?f=12&t=38007). For one thing, it would mean that certain types of cryptography would remain robust, regardless of advances in technology. It would also mean that a lot of other problems -- such as proving mathematical theorems -- will remain intractable, not easily solvable, for a long time.

Disappointing, if true. But there does appear to be a nexus between quantum physics and the computational science used to model those interactions.

Still, it's only an argument from example. The fact that Schrödinger's equation is NP-hard does not necessarily prove there is no P method of solving it -- only that we haven't found it yet. Thus, an additional NP-class problem in the set does not, I think, prove conclusively that P != NP. It still seems likely that that is the case, though.

Respectfully,

Brian P.

NichG
2014-04-09, 07:27 PM
Rather than thinking of what our computers can currently do, think about what the underlying physics of the universe being NP says about the available computational power given a system of fixed size. If the universe is NP at an underlying level, that means that you're building computers out of pieces that can do NP calculations in P time. This is the whole draw of quantum computing.

However, in practice, this doesn't work as cleanly as one might think. For example, because of decoherence, the effort to retain the full gamut of quantum mechanical effects may well scale as NP with the size of the fully-entangled quantum mechanical system you want to create. This is because to create a matrix of entangled quantum mechanical states, you have to protect it from decoherence, and the system seems to be exponentially sensitive to external noise as the number of mutually entangled states grows. I don't think it's yet known whether this is just a limitation of the methods we're using to build qubits right now, or if it's something generic.

This is probably an example of a fragile formal limit, by which I mean something in which the worst-case or most extreme case is NP, but the set of such cases is actually a small subset of the problem space (e.g. there is one point or region at which the problem is truly NP, and every other point is P with a prefactor that diverges as you approach the NP point). There's some work (in particular I'm thinking of stuff by Florent Krzakala and Lenka Zdeborova, but I haven't done a thorough literature search) that shows that NP problems such as N-SAT can be mapped onto glassy systems in physics, and that there is a critical point associated with the divergence to NP corresponding to one of the glass transition temperatures, such that the amortized time to find the solution diverges differently with N at the critical point versus away from the critical point.

If this is the case in general, then things get very muddy. One could say that we don't need quantum mechanics to study most macroscopic systems because most macroscopic systems are operating in a regime in which their fluctuations cause decoherence. This would be associated with a particular scale, which means we'd have a scale separation we could use to perform coarse-graining, which lets us capture the microscopic QM effects in terms of how they modify and determine the classical properties of the coarse-grained objects. For example, we need QM to calculate the color of a material, but once we've done that calculation we can use a macroscopic (classical or relativistic) model for the interaction between that material and light if we want to do e.g. radiative transport calculations. We don't need to do the entire thing as a quantum field theory.

But if we were to go to macroscopic systems near that critical point, suddenly QM becomes relevant at all scales and the problem of modelling them becomes truly NP. It also means that if we want to build a quantum computer, the difficulty is tuning the system towards that critical point so that we preserve as much of the NP underlying behavior as possible.

This does mean that finding out if it's possible to get a system which undergoes self-organized criticality with respect to some P to NP transition would be really valuable. That way, instead of having to exert exponential effort to make a large entangled system, we would just have to find a system whose internal dynamics tend to entangle more and more of it in a natural fashion. I'm not sure such a thing exists, though, or how to look for it.

reaverb
2014-04-09, 08:13 PM
The fact that Schrödinger's equation is NP-hard does not necessarily prove there is no P method of solving it
Actually, assuming that P != NP (which basically any computer scientist will tell you is almost certainly true), it does prove there is no P method of solving it. As in, that's the entire reason the NP-hard classification exists.

Also, that article seems to imply that quantum mechanics being NP-hard creates a "barrier" between classical and quantum physics, which also doesn't make any sense. NP-hardness does not mean "there is a limit to the size the quantum system can be," as the article confidently states, but that as quantum systems become bigger they become harder to model, until it's effectively impossible.

In addition, the use of Avogadro's number as the definition of a "Macroscopic system" is pretty ridiculous. Avogadro's number is a value used in chemistry so chemists don't have to say a gram of Hydrogen has around 600000000000000000000000 atoms, and doesn't actually have a physical meaning beyond that.

Lateral
2014-04-09, 11:27 PM
In addition, the use of Avogadro's number as the definition of a "Macroscopic system" is pretty ridiculous. Avogadro's number is a value used in chemistry so chemists don't have to say a gram of Hydrogen has around 600000000000000000000000 atoms, and doesn't actually have a physical meaning beyond that.
Well, no, but it's a perfectly reasonable constant to use as a conversion between a few atoms and a macroscopic number of atoms. It certainly doesn't have to be, but what else would you use?

ChristianSt
2014-04-10, 02:59 AM
While the underlying topic is highly interesting, the article features some really dubious/wrong statements.


For a simple system, the equation can be solved by an ordinary computer in a reasonable time, so it falls into class of computational problems known as NP.

This doesn't make much sense. NP is the class of problems whose solutions can be verified in polynomial time; what the article describes (solvable by an ordinary computer in reasonable time) is the definition of P, which is contained in NP.


So the statement that NP-hard problems can also be solved quickly and efficiently is the famous P=NP equation.

They even feature a graphic that shows how NP, NP-hard and NP-complete relate. Even if P=NP holds, it doesn't mean that NP-hard problems can be solved quickly: NP-hard contains problems at least as hard as everything in NP, including problems (such as the halting problem) that lie outside NP entirely.


I haven't read the original paper, and it certainly could be correct, but this article seems rather dubious. Without checking that paper, I wouldn't really believe anything it says.

NichG
2014-04-10, 03:13 AM
As an interesting aside, the map between classical Newtonian physics and the Schrödinger equation is an example of something where you can take a wide class of non-linear problems with finite numbers of degrees of freedom and turn them into linear problems with a number of degrees of freedom exponential in the number of degrees of freedom of the non-linear problem. If you, for example, take some equation of motion in the variables 'x' and 'y' derived from an energy function H(x,y), then the behavior of the expectation value of the vector (x,y) over the corresponding wave function psi(x,y) looks like the behavior of the classical degrees of freedom when you take the limit of h-bar -> 0. The original equation of motion for (x,y) could be some ghastly nonlinear thing with two discrete degrees of freedom, and it's replaced with a linear PDE over a 2D continuous space (which you could in principle discretize to solve approximately to a bounded error). So the solution of the PDE has a cost which is exponential in the number of discrete degrees of freedom in the original ODEs. So you have a conversion from a P problem to an NP problem.
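For reference, the statement that expectation values follow the classical equations of motion as h-bar -> 0 is usually packaged as Ehrenfest's theorem (standard textbook form, not spelled out in the post):

```latex
\frac{d}{dt}\langle x\rangle = \frac{\langle p\rangle}{m},
\qquad
\frac{d}{dt}\langle p\rangle = -\Big\langle \frac{\partial V}{\partial x}\Big\rangle
\;\approx\; -\frac{\partial V(\langle x\rangle)}{\partial x}
\quad\text{(for a narrowly peaked } \psi\text{)}.
```

The approximation in the last step (replacing the average of the force by the force at the average position) is exactly where the nonlinear classical equations of motion re-emerge from the linear Schrödinger equation.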

This is kind of interesting since, in principle, if you could do the reverse - take linear problems with a continuous set of degrees of freedom and find a non-linear problem with a finite number of degrees of freedom and a finite number of terms that captures the dynamics of the appropriate moment integrals - then that's a form of complexity reduction that looks a lot like NP -> P in character. The issue of course is that while this works for things governed by time-reversibility and time translation symmetry (e.g. their dynamics are set by a Hamiltonian), it's not something you could do generically.

Chronos
2014-04-14, 07:50 AM
Anyway, the article's author suggests that the reason we can't model macroscopic objects according to quantum physics, as opposed to Newtonian mechanics, is that Schrödinger's equation is in the NP class.
Let's stop right there. We can model macroscopic objects according to quantum physics. When we do so, we get results so close to what we get classically as to be indistinguishable. If this were not the case, nobody would ever have taken quantum mechanics seriously, because it would be easily observed to be wrong.

Soras Teva Gee
2014-04-14, 08:43 AM
Okay fair bit of reading (ugh complex math = brain hurt) I wanna see if I have a certain loose summary of the points on this correct:

1) P != NP (unless resolved) suggests that there are certain classes of problems that math cannot provide a method to solve, except trying every possible combination manually. From the Travelling Salesman, you can find the shortest route to travel between multiple points, but you have to find it each and every time by brute forcing it. And that therefore some problems can never be practically solved because you run out of time and the complexity is too enormous.

2) Arkady Bolotin's paper is proposing that quantum mechanics is one such problem... ergo there will never reasonably be a method to practically deal with quantum effects above a certain complexity. We have no short cuts and don't have enough computing power to go the long way around.

Am I getting that without horrible errors?

NichG
2014-04-14, 08:51 AM
Okay fair bit of reading (ugh complex math = brain hurt) I wanna see if I have a certain loose summary of the points on this correct:

1) P != NP (unless resolved) suggests that there are certain classes of problems that math cannot provide a method to solve, except trying every possible combination manually. From the Travelling Salesman, you can find the shortest route to travel between multiple points, but you have to find it each and every time by brute forcing it. And that therefore some problems can never be practically solved because you run out of time and the complexity is too enormous.


Not really. You can certainly do better than brute force on NP problems, and there are lots of algorithms to do so. In fact, most NP problems have large sets of inputs for which you can discover the solution in polynomial time using a clever algorithm. The P != NP thing is a statement that there are two distinct classes - problems where the worst case can be solved in an amount of time that is polynomial in the problem size, and problems where the worst case cannot be reduced to something polynomial in the problem size. If P = NP, then there is in general always a way to solve any NP problem in polynomial time even in the worst possible case.
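To make "better than brute force, but still not polynomial" concrete, here is a sketch (mine, not from the thread) of the classic Held-Karp dynamic program for the travelling salesman problem: it runs in O(n^2 * 2^n) time instead of the O(n!) of exhaustive search, an enormous improvement that is nevertheless still exponential.

```python
# Held-Karp dynamic programming for TSP: O(n^2 * 2^n) time, far better than
# O(n!) brute force, but still exponential in the number of cities.
from itertools import combinations

def held_karp(dist):
    n = len(dist)
    # C[(S, j)]: cheapest path starting at city 0, visiting exactly the cities
    # in frozenset S (0 excluded), and ending at city j.
    C = {(frozenset([j]), j): dist[0][j] for j in range(1, n)}
    for size in range(2, n):
        for subset in combinations(range(1, n), size):
            S = frozenset(subset)
            for j in S:
                C[(S, j)] = min(C[(S - {j}, k)] + dist[k][j] for k in S if k != j)
    full = frozenset(range(1, n))
    return min(C[(full, j)] + dist[j][0] for j in range(1, n))

dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 8],
        [10, 4, 8, 0]]
print(held_karp(dist))  # 23: the shortest closed tour for this toy instance
```

For worst-case inputs nobody knows how to do fundamentally better than exponential here; a polynomial-time algorithm for an NP-hard problem like this would settle P vs NP.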



2) Arkady Bolotin's paper is proposing that quantum mechanics is one such problem... ergo there will never reasonably be a method to practically deal with quantum effects above a certain complexity. We have no short cuts and don't have enough computing power to go the long way around.

Am I getting that without horrible errors?

I think this is an accurate statement of what has been said, but I don't think it's an accurate statement of even our current level of knowledge, for a variety of reasons (not the least of which is that this was the whole point of 'quantum computing': to leverage the inherent NP nature of quantum mechanics to build a computer that can do NP problems in polynomial 'wall-clock' time).

TuggyNE
2014-04-16, 05:34 AM
Not really. You can certainly do better than brute force on NP problems, and there are lots of algorithms to do so.

A very good thing too, otherwise a job I held down for a year would simply never have existed. (Writing APIs for a hosted TSP solver/automated fleet routing system.)

Fortuna
2014-04-16, 05:47 AM
Not really. You can certainly do better than brute force on NP problems, and there are lots of algorithms to do so. In fact, most NP problems have large sets of inputs for which you can discover the solution in polynomial time using a clever algorithm. The P != NP thing is a statement that there are two distinct classes - problems where the worst case can be solved in an amount of time that is polynomial in the problem size, and problems where the worst case cannot be reduced to something polynomial in the problem size. If P = NP, then there is in general always a way to solve any NP problem in polynomial time even in the worst possible case.

Pardon my ignorance, but what precisely is meant by a 'problem' in this instance? Given a sufficiently broad definition of 'problem', one could just straightforwardly demand output exponential in the problem size, say - for example, 'Print out your name 2^n times, where n is given' is certainly a task, but I infer that it's not a problem.

ChristianSt
2014-04-16, 06:57 AM
Pardon my ignorance, but what precisely is meant by a 'problem' in this instance? Given a sufficiently broad definition of 'problem', one could just straightforwardly demand output exponential in the problem size, say - for example, 'Print out your name 2^n times, where n is given' is certainly a task, but I infer that it's not a problem.

You can basically rewrite it as a problem suitable for complexity theory if you want. The only thing you need is either to "hardcode" a name into your question or to have the name as additional input.

But in any case that problem is outside of NP (and therefore outside of P), since it isn't possible to verify the output list in polynomial time. (Since you deliberately made it impossible by requiring an exponentially long output.)



A "problem" is basically any function from an input space to an output space. (often the same space)

Complexity classes try to group such problems/functions into different groups with certain properties (how fast they can be solved, how much space they take up, etc...).
And one interesting aspect is how different complexity classes relate to each other. (Is one a subgroup of the other? Are they equal?)

If a problem belongs to P it can be solved in polynomial time in all cases.
If a problem belongs to NP it can be verified in polynomial time in all cases.

From that it also directly follows that P is inside NP.

But neither P nor NP really says anything about average runtime. Some instances of NP-complete problems can be solved pretty easily. But there are instances which are problematic to solve fast (either because it is impossible and P != NP, or because no one has figured out how to solve them fast).

This is a really tricky business for cryptography: normally it relies on some trapdoor/one-way function (basically a function f(x)=y where f(x) is easy to compute and the inverse f^-1(y)=x is hard to compute). Most of these involve hard problems, some of them NP-complete problems. But nevertheless some of them are not safe, because they only present easy instances of the hard problem.
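A toy sketch of that easy-forward/hard-inverse asymmetry (my own illustration, not from the post; the discrete logarithm used here is believed hard but is not known to be NP-complete, and the parameters are absurdly small compared to real cryptography):

```python
# Toy one-way function: modular exponentiation is cheap to compute, while
# inverting it (the discrete logarithm) is done here only by brute force.
p, g = 2_147_483_647, 5   # small demo modulus (2^31 - 1) and base, illustration only

def f(x):
    # Easy direction: g^x mod p in O(log x) multiplications.
    return pow(g, x, p)

def invert(y):
    # Hard direction (for this naive approach): try every exponent, O(p) worst case.
    for x in range(p):
        if pow(g, x, p) == y:
            return x

secret = 123_456
y = f(secret)          # instant, even for huge exponents
# invert(y)            # would take on the order of p steps; already impractical here
print(y)
```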



[Fast/easy is basically misleading. It is better to use "polynomial time" instead, since a really large polynomial isn't really practical for fast computation.]


EDIT: Another important point when talking about P/NP and numerical input: The used input size (="n") is normally the length of the input, see Pseudo-polynomial time (http://en.wikipedia.org/wiki/Pseudo-polynomial_time)
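A concrete textbook example of the pseudo-polynomial distinction (my own sketch, not from the post): the standard subset-sum dynamic program runs in O(n * target) time, which is polynomial in the numeric value of the target but exponential in the number of bits needed to write that target down.

```python
# Subset-sum by dynamic programming: O(n * target) time and space. Since
# 'target' can be exponential in the length of its binary encoding, this is
# pseudo-polynomial rather than polynomial in the input size.
def subset_sum(values, target):
    reachable = [True] + [False] * target      # reachable[s]: some subset sums to s
    for v in values:
        for s in range(target, v - 1, -1):     # go downward so each value is used at most once
            reachable[s] = reachable[s] or reachable[s - v]
    return reachable[target]

print(subset_sum([3, 34, 4, 12, 5, 2], 9))    # True  (4 + 5)
print(subset_sum([3, 34, 4, 12, 5, 2], 30))   # False
```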

Frozen_Feet
2014-04-16, 09:56 AM
Also, that article seems to imply that quantum mechanics being NP-hard creates a "barrier" between classical and quantum physics, which also doesn't make any sense. NP-hardness does not mean "there is a limit to the size the quantum system can be," as the article confidently states, but that as quantum systems become bigger they become harder to model, until it's effectively impossible.


Have you ever heard of the hypothesis that the universe, like your computer screen, has a certain minimum "resolution"? Basically, like a single pixel on your screen, there's a size of particle below which no new information can be gained; no matter how much you zoom into that pixel, nothing new can be seen.

This idea has been used as an argument that the universe is, essentially, like a computer simulation. And if the universe is like a computer simulation, it should follow from Arkady Bolotin's observation that no macroscopic quantum effects can be observed within the lifetime of the universe. They fall outside the resolution of the universe, and thus do not exist, because they cannot be simulated.

I hope this makes sense to someone else than me...

pendell
2014-04-16, 03:01 PM
Okay fair bit of reading (ugh complex math = brain hurt) I wanna see if I have a certain loose summary of the points on this correct:

1) P != NP (unless resolved) suggests that there are certain classes of problems that math cannot provide a method to solve, except trying every possible combination manually. From the Travelling Salesman, you can find the shortest route to travel between multiple points, but you have to find it each and every time by brute forcing it. And that therefore some problems can never be practically solved because you run out of time and the complexity is too enormous.


Close, but not quite. NP is not the class of problems that cannot be solved at all. It's the class of problems that (as far as we know) can't be solved in a reasonable time in the worst case. That's the difference between an intractable problem and an impossible one.

Imagine it like this: you've got a sorting machine, and the time it takes to sort stuff into bins is directly proportional to the number of widgets that you've thrown in. So it sorts 2 objects in 2 seconds, 60 objects in 60 seconds, and so on. That's "polynomial time", because the time it takes to do the operation is bounded by a polynomial in the number of widgets.

That's true even if it's not a one-to-one correspondence.

Non-polynomial is when this breaks down and there is no polynomial bound on the time based on the number of widgets - the running time can grow exponentially (or worse) as the number of widgets increases.

It might be done in three seconds. Or three days. Or three thousand years. Or ten days before the heat death of the universe. The point is, once you push the START button, you either have no idea how long it's going to run, or you know it is going to run for a LONG time.

Doesn't mean that it won't be solved or can't be. It's just that it's probably going to take a very, very long time to finish.
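The widget picture is easiest to feel numerically; a throwaway sketch (mine, with an arbitrary machine speed) comparing a polynomial cost with an exponential one:

```python
# Rough illustration of polynomial vs. exponential scaling: at a million
# operations per second, n^2 stays trivial while 2^n becomes hopeless fast.
OPS_PER_SECOND = 1_000_000   # arbitrary, purely for illustration

for n in (10, 30, 60, 90):
    poly_seconds = n**2 / OPS_PER_SECOND
    expo_seconds = 2**n / OPS_PER_SECOND
    print(f"n={n:>2}: n^2 -> {poly_seconds:.6f} s,  2^n -> {expo_seconds:.3e} s")
# At n=60 the exponential case already needs about 1e12 seconds (tens of
# thousands of years); at n=90 it is around 4e13 years.
```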




2) Arkady Bolotin's paper is proposing that quantum mechanics is one such problem... ergo there will never reasonably be a method to practically deal with quantum effects above a certain complexity. We have no short cuts and don't have enough computing power to go the long way around.


Correct.

Respectfully,

Brian P.

Radar
2014-04-16, 03:28 PM
Have you ever heard of the hypothesis that the universe, like your computer screen, has a certain minimum "resolution"? Basically, like a single pixel on your screen, there's a size of particle below which no new information can be gained; no matter how much you zoom into that pixel, nothing new can be seen.

This idea has been used as an argument that the universe is, essentially, like a computer simulation. And if the universe is like a computer simulation, it should follow from Arkady Bolotin's observation that no macroscopic quantum effects can be observed within the lifetime of the universe. They fall outside the resolution of the universe, and thus do not exist, because they cannot be simulated.

I hope this makes sense to someone else than me...
Well, such estimations have to take into account the difference between continuous and discrete quantum systems. I would have to look into it, but if the universe is discrete, then not only the computational power but also the complexity of quantum problems diminishes accordingly.

There is also this very neat holographic principle, which states that the maximum information stored in a given volume is restricted by the surface area of that volume. This means that the bigger the region we deal with, the lower the information density can be, and thus we might have enough computational power to deal with it all.

That being said, it is late and I'm tired, so I can only hope not to mix things up.

reaverb
2014-04-20, 08:01 PM
[macroscopic quantum effects] fall outside the resolution of the universe, and thus do not exist, because they cannot be simulated.
Anything discrete can be simulated, even NP-hard problems, given an arbitrary amount of time and resources. Since the universe under this theory is a virtual creation of these simulations, the amount of time and resources used to simulate the universe is unnoticeable from within the universe.

Frozen_Feet
2014-04-20, 09:22 PM
Yes, but provided the simulation known as the universe has non-arbitrary time and resources, then it would make sense for the simulation to have a cut-off point, leading to the described phenomenon: after the size of an object reaches some threshold, quantum effects cease to be observed.

Radar
2014-04-21, 06:41 AM
A few more things to consider:
1. If the whole universe is a simulation run on hardware with finite computational capabilities, then the universe needs to be finite in size as well, which is quite unlikely considering current measurements of the expansion rate.
2. Even if the universe were finite, it is seriously large, so any potential artificial cut-off on quantum effects would probably be way beyond our scale. Considering how rare such large-scale phenomena are estimated to be anyway, it wouldn't be much of a computational problem in the grand scheme of things.

On a slightly different note, we frankly do observe macroscopic quantum effects:
1. Ferromagnetism comes from purely quantum interactions between electrons in large chunks of matter. Granted, the effect can be derived with just the nearest-neighbour interactions, so it's a bit of a mixed case.
2. Superfluidity and Bose-Einstein condensation were obtained with considerably large samples, to the point where one could clearly observe the fountain effect (http://www.youtube.com/watch?v=kCJ24176enM) for helium.
3. In the attempt to observe gravitational waves directly in laser interferometers, scientists had to circumvent the uncertainty principle when they measured the distance between mirrors of considerable size. A short description of the problem can be found here (http://relativity.livingreviews.org/open?pubNo=lrr-2009-2&page=articlesu17.html).

NichG
2014-04-21, 07:30 PM
Why is there any assumption that the universe has to be a simulation? Or for that matter, a simulation running on hardware that has a computational structure equivalent to the hardware we currently use to do computer simulations? There's a weird bias in that sort of reasoning, which is that somehow a classical computational paradigm is 'more realistic' and any sort of underlying quantum mechanics must be faked by the emulator, which is kind of ridiculous.

Computational complexities depend on the computational paradigm too - consider, for example, the case where available parallelism is much greater than the problem size; on a massively parallel machine, some O(N) problems become O(1) when measured in terms of 'parallel cycles', since each of the N things can be cleanly separated, whereas others become O(log(N)) due to the cost of aggregating the results.

Anyhow, as far as macroscopic quantum effects go, the superfluid/superconductivity cases are interesting in that they're examples of massive simplification happening to the underlying high-dimensional quantum theory. In a superfluid, you have N objects that all exist in the same quantum state in a way which is basically fully separable, and N goes to (basically) infinity. So rather than having an infinite-dimensional PDE, you can model the superfluid with a simple 3D PDE with a field for the ground-state occupancy (the superfluid wavefunction), a field that sums over all excited-state occupancies (the 'normal fluid'), and a vector potential. So despite it being macroscopically quantum mechanical, it's computationally easy because the physics of it involves collapse into a single state.
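For reference, the usual equation for that ground-state (condensate) field is the Gross-Pitaevskii equation (a standard textbook form, not something written out in the post), for the superfluid wavefunction psi(r, t):

```latex
i\hbar\,\frac{\partial \psi(\mathbf{r},t)}{\partial t}
  = \left( -\frac{\hbar^2}{2m}\nabla^2 + V(\mathbf{r}) + g\,\lvert\psi(\mathbf{r},t)\rvert^2 \right)\psi(\mathbf{r},t),
```

where a single interaction constant g stands in for all the microscopic pair interactions; the two-fluid picture then adds the thermal 'normal fluid' field on top of this.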

One could conclude that the reason you can see macroscopic quantum effects in a superfluid is precisely because it's a phase transition that collapses the fully heterogeneous infinite-dimensional system onto a finite-dimensional system. I don't think I fully buy that as a general rule, but it's certainly one reason why the particular macroscopic effect in that case is so obviously quantum mechanical.

Radar
2014-04-22, 03:56 AM
Why is there any assumption that the universe has to be a simulation? Or for that matter, a simulation running on hardware that has a computational structure equivalent to the hardware we currently use to do computer simulations? There's a weird bias in that sort of reasoning, which is that somehow a classical computational paradigm is 'more realistic' and any sort of underlying quantum mechanics must be faked by the emulator, which is kind of ridiculous.
I guess it is just the way we (as in people) always tend to view the world through the things we are most familiar with. When Newton published his mechanics, various intellectuals started to see the universe as a complicated and very precise clockwork. When all the great revolutions in physics of the early 20th century happened, the paradigm shifted again. Relativity got such publicity not only because of its great scientific significance, but also because it dramatically changed our general view of the world. For the last few decades more and more people have worked with computers, so we are again starting to shift our view of the world, which is also reflected in physics, since we treat information as a physical quantity to a degree. There are even quite tautological statements, such as that the universe is a big quantum computer that calculates itself.


Computational complexities depend on the computational paradigm too - consider, for example, the case where available parallelism is much greater than the problem size; on a massively parallel machine, some O(N) problems become O(1) when measured in terms of 'parallel cycles', since each of the N things can be cleanly separated, whereas others become O(log(N)) due to the cost of aggregating the results.
True, but this trades complexity in time for complexity in memory and computational power used.

Murska
2014-04-22, 04:53 AM
The entire universe is a quantum effect. There is no point at which suddenly quantum mechanics stop applying and things start working by some other set of physics. No observed evidence for such a point, at least, and believe me people have been looking.

Radar
2014-04-22, 05:13 AM
The entire universe is a quantum effect. There is no point at which suddenly quantum mechanics stop applying and things start working by some other set of physics. No observed evidence for such a point, at least, and believe me people have been looking.
That is true, but the statistical behaviour of large systems makes it seem that things revert to classical physics at larger scales. Thus, people speculate.

As a side note, I just remembered another good example of large-scale quantum systems: electric conductors, semi-conductors and insulators are all described by band theory, which is inherently reliant on Fermi statistics and atomic orbital interference. It's even better than superfluids, because you can't simplify the system in any way.

And, as is always the case, there is an XKCD comic (http://xkcd.com/505/) on the subject.

Murska
2014-04-22, 05:37 AM
Mm. It feels like such a basic mistake to think about something like 'reverting to classical mechanics'. Classical mechanics is a special case of quantum mechanics, something less fundamental and less real. It is also less simple, in a way.

Frozen_Feet
2014-04-22, 08:26 AM
Why is there any assumption that the universe has to be a simulation?

It's a cosmological hypothesis, one out of many, based on Planck time and the idea that the universe has a set resolution. I mentioned it because within those premises the proposition in the article makes the most sense. Barring evidence for the hypothesis, it doesn't invalidate any of the other cosmological hypotheses you could make.

NichG
2014-04-22, 08:42 AM
True, but this trades complexity in time for complexity in memory and computational power used.

That's making the implicit assumption that the infinitely parallel system is built out of serial components. If the underlying building blocks of your computational system or universe are such that this isn't the case, then you aren't actually making this trade, because there isn't an underlying serial/classical computer for which making use of the infinite parallelism actually takes up more nodes.

Let's take a neuron, for example. Having a neuron with a fan-out of 10 or a fan-out of 1000 doesn't really have a significantly different resource cost, compared to the cost of having two neurons instead of one. This is markedly different from the rules which underlie electronic chip design, where large fan-out is difficult to perform the layouts for. This means that a neuron has an extra computational operation it can perform 'cheaply' that would be an O(N) process for a serial computer - namely, it can sum all of its inputs as an O(1) step. Adding together N numbers is O(N) for a serial computer, and it's even O(log(N)) for an infinitely parallel machine, so the particularities of neuron architecture mean that you get different intrinsic computational complexities due to the special circumstances.
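A sketch of that O(N)-versus-O(log N) aggregation cost (hypothetical code of mine; it simply counts 'rounds' in which all pairwise additions are assumed to happen simultaneously on an idealized parallel machine):

```python
# Tree reduction: if disjoint pairs of partial sums can be added simultaneously,
# summing N numbers takes ceil(log2(N)) parallel rounds instead of N-1 serial steps.
def parallel_sum_rounds(values):
    values = list(values)
    rounds = 0
    while len(values) > 1:
        paired = [values[i] + values[i + 1] for i in range(0, len(values) - 1, 2)]
        if len(values) % 2:            # odd element left over this round
            paired.append(values[-1])
        values = paired
        rounds += 1
    return values[0], rounds

total, rounds = parallel_sum_rounds(range(1, 1025))
print(total, rounds)   # 524800 in 10 rounds, versus 1023 additions done one at a time
```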

Now, if you try to push this too far on a neuron, eventually you would end up seeing that this scaling wouldn't hold, since you can't - for example - fit 10^23 synapses on a single neuron, but that's in essence because the computational abilities of a neuron are bounded by the particular limits of the 'computer' that 'emulates' it (namely, the physical universe and classical mechanics) - but because of separation of scales, that limit exists far above the sorts of problems that the brain needs to solve, so within that problem domain the brain is fundamentally a more efficient computational device than a serial computer.

(As an aside, that 'parallel reduction is O(1)' property is probably fundamental to the physical universe, even in the classical limit. The reason is that if you look at a particular 'box' in space and have a series of 'events' at different times and places that emit photons, the photons flowing into that box at a given point in time are the result of an integration over the entirety of time up to that point. So the older the universe gets, the larger the 'N' it can integrate as an O(1) computational step.)

If you built a computer out of modules that could solve a travelling salesman problem of size N in O(1) time, then basically that would let you get around a lot of the practical complexity bounds of certain classes of NP problems, because your underlying building blocks have a different structure than what is assumed for the theory of computation (e.g. Turing-tape sorts of things).


That is true, but the statistical behaviour of large systems makes it seem that things revert to classical physics at larger scales. Thus, people speculate.


This can probably be mathematically generalized by looking at what happens when you take large systems and then indiscriminately throw out knowledge of some large set of degrees of freedom (basically, you're going to the statistical limit where you only know moments of the distribution and not particular atom positions/etc).

There's another case in which this sort of statisticalization happens and has a significant effect - time reversibility. If you, for example, do a computer simulation with deterministic Hamiltonian physics, then the underlying time reversibility of physics means that you can take any state and propagate it forward or backward in time to get to any other state on the chain of dynamics. Furthermore, the difference is simply a change of signs on velocities and the delta-time direction. So the physical trajectory by which a teacup falls and shatters into a million pieces is just as microscopically valid as a trajectory by which the teacup's shards re-unite into an intact teacup.

However, once you lose information (any information, really - floating point error is enough to see this) about the details of the state, it becomes impossible to do this - because you lose the same amount of information whether you run the simulation forward or backward. That means that if you go forward and then backward, you've actually lost twice as much information as just going forward, rather than zero. So the information loss process doesn't obey the underlying time reversibility of physics, which means that statistical effects allow you to have things like 'friction' and other macroscopic dissipative mechanisms. The thing is, these are all 'illusory' in the sense that the underlying dynamics aren't losing any information, but the person who is trying to simulate/model/observe them is incapable of actually retaining all of the information.
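A small numerical sketch of that point (mine, not NichG's): a velocity-Verlet integrator for a frictionless oscillator retraces its steps almost perfectly if you flip the velocity and integrate forward again, while adding a dissipative term leaves the 'reversed' run nowhere near the initial state.

```python
# Forward-then-reversed integration of x'' = -x - c*x'. With c = 0 the scheme is
# time reversible up to floating point error; with c > 0 dissipation breaks it.
def integrate(x, v, c, dt=1e-3, steps=10_000):
    for _ in range(steps):
        a = -x - c * v
        x_new = x + v * dt + 0.5 * a * dt * dt
        a_new = -x_new - c * v          # crude treatment of the velocity-dependent term
        v = v + 0.5 * (a + a_new) * dt
        x = x_new
    return x, v

for c in (0.0, 0.5):
    xf, vf = integrate(1.0, 0.0, c)     # run forward from (x, v) = (1, 0)
    xb, vb = integrate(xf, -vf, c)      # flip the velocity and run forward again
    print(f"c={c}: distance from initial state = {abs(xb - 1.0) + abs(vb):.2e}")
```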

Decoherence is, in principle, a similar effect - entanglement works by the fact that knowledge of the conserved sum of a set of variables allows you to infer things about variables you cannot directly observe, but if those variables couple to external random variables then the random environmental fluctuations can 'explain away' mismatches between the actual sum and what you think it is, which means that your power of inference is quickly destroyed by the uncertainties. It's very much like losing track of the time-reversal trajectory and 'forgetting' that the system was once a teacup before all the state variables got jumbled up together.



As a side note, I just remembered another good example of large-scale quantum systems: electric conductors, semi-conductors and insulators are all described by band theory, which is inherently reliant on Fermi statistics and atomic orbital interference. It's even better than superfluids, because you can't simplify the system in any way.

And, as is always the case, there is an XKCD comic (http://xkcd.com/505/) on the subject.

Band theory is also a dimensional reduction - when you construct the band theory of solids, you assume that the wavefunctions of electrons in the solid are separable. Basically, it's the assumption that the components are linear in the current (even if they might be non-linear in the voltage), because the only way the electrons interact with each other is by filling up the available (static) states, which are set by the crystal symmetries of the underlying atoms. We're very lucky that the electron mass is so much smaller than the proton/neutron mass, because it enables this kind of simplification where the crystal sites themselves do not have to be taken to be quantum mechanical objects. If we had crystals where the fluctuations carrying transport currents were of the same mass as the components creating the lattice, things would get much more complex. I suppose the place to look for that would be quantum mechanical effects on the phonon bandstructure (transport of heat/sound, rather than electricity), since that is primarily driven by the crystal centers... but I don't know a physical material/situation where that is experimentally known to be significant - shock waves in relativistic impact maybe?
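That separability assumption is usually packaged as Bloch's theorem (standard textbook statement, added here only for reference): each electron is treated as an independent particle in the periodic crystal potential,

```latex
\psi_{n\mathbf{k}}(\mathbf{r}) = e^{i\mathbf{k}\cdot\mathbf{r}}\, u_{n\mathbf{k}}(\mathbf{r}),
\qquad u_{n\mathbf{k}}(\mathbf{r} + \mathbf{R}) = u_{n\mathbf{k}}(\mathbf{r})\ \text{for lattice vectors } \mathbf{R},
```

so the many-electron problem reduces to filling independent single-particle levels E_n(k) according to Fermi statistics, rather than solving for an entangled many-body wavefunction.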

Another example of where you might get significant many-electron effects would be in molecules with networks of single/double bond loops/chains. It turns out that single/double bond chains have what are called 'resonance structures' in chemistry, where there are multiple equivalent configurations for the bonds when you temporarily stick an extra electron in and therefore add an extra bond that doesn't fit nicely into the structure. That extra electron can respond to electromagnetic forces down the entire length of the chain, which is why dyes have colors in the visible and why bleach removes stains - the long single/double chain acts like an antenna for wavelengths as long as the chain, so a long enough chain gets into the visible range; bleach replaces some of those double bonds with chlorinated single bonds, which basically snaps the antenna into smaller pieces that absorb in the UV.

So if you had a very well-chosen structure of interlocking loops, you could potentially create a system where multi-electron entanglement is naturally very important - because adding an electron somewhere in the network has a very non-local effect on which antennae can still function and which ones get broken in order to satisfy the right number of bonds on each atom. That said, this is speculation. My guess is that even that system would tend to become statistically classical eventually at some particular temperature, because the larger the molecule gets the more places for thermal fluctuations to interfere with the precise network you've created.

Max™
2014-04-22, 04:52 PM
If this is true, then Charles Stross (http://www.antipope.org/charlie/blog-static/fiction/toast/toast.html#antibodies) and I will sleep better at night.

Radar
2014-04-23, 03:15 AM
That's making the implicit assumption that the infinitely parallel system is built out of serial components. (stuff of importance)
If you built a computer out of modules that could solve a travelling salesman problem of size N in O(1) time, then basically that would let you get around a lot of the practical complexity bounds of certain classes of NP problems, because your underlying building blocks have a different structure than what is assumed for the theory of computation (e.g. Turing-tape sorts of things).
I never gave it that much thought, really. I only vaguely remember that quantum systems can be used to solve problems that are susceptible to parallel reduction, but the whole idea of constructing computers based on something other than the Turing paradigm (or whatever you call it) kind of eluded me. Thank you for the interesting new information. :smallsmile:


This can probably be mathematically generalized by looking at what happens when you take large systems and then indiscriminantly throw out knowledge of some large set of degrees of freedom (basically, you're going to the statistical limit where you only know moments of the distribution and not particular atom positions/etc).
This is exactly where jokes about physicists come from: in order to explain any given phenomenon you first chop off everything you possibly can and then work out the details one by one. Sometimes it's the statistical limit, other times it's uniform density of matter, or approximation of real objects with point masses or charges.


There's another case in which this sort of statisticalization happens and has a significant effect - time reversibility. (...) It's very much like losing track of the time-reversal trajectory and 'forgetting' that the system was once a teacup before all the state variables got jumbled up together.
Time reversibility is a tricky subject - especially in the breaking-of-a-cup example. When you simulate the breaking, you get a broken cup, but it will never be the same one you would get from an experiment (or another simulation, for that matter). We don't care, as long as the fragments are roughly the same size and their spread more or less matches reality. There are many arrangements of cup fragments which look pretty much the same to us, so we don't perceive inaccuracies or divergences in our model. When we try to go back, we face the problem of obtaining a very specific and rare state (an unbroken cup), which is distinctively different from all the others, and thus we clearly see all the faults of our simulation.

The problem of information loss and time reversibility is related to the stability of the model used. In general, if there is any form of damping in the underlying PDEs, then the inverse problem will be unstable and thus difficult to deal with. One of the main reasons for the bad resolution of early X-ray tomographs was the difficulty of inverting the Radon transform without generating too much noise.


Band-theory is also a dimensional reduction - when you construct the band theory of solids, you assume that the wavefunctions of electrons in the solid are separable. (...) I suppose the place to look for that would be quantum mechanical effects on the phonon bandstructure (transport of heat/sound, rather than electricity), since that is primarily driven by the crystal centers... but I don't know a physical material/situation where that is experimentally known to be significant - shock waves in relativistic impact maybe?
I would rather call it a forced diagonalisation than an outright dimensional reduction, but the distinction might be irrelevant. The thing is, to simulate the real thing with all the shot noise, heating due to electrical resistance and other tiny details, you can't just slap the same quantum state on every electron like in the superfluid case. In other words, reality can't use the same approximations we do in order to grasp the general concept.

As for the phonons, they behave as massless particles (as far as I remember), so I don't think they are a good candidate and relativistic impacts would tear any material apart, so there would be no more phonons to look for.


Another example of where you might get significant many-electron effects would be in molecules with networks of single/double bond loops/chains. (...) My guess is that even that system would tend to become statistically classical eventually at some particular temperature, because the larger the molecule gets the more places for thermal fluctuations to interfere with the precise network you've created.
This is a very neat and interesting phenomenon. :smallsmile:

NichG
2014-04-23, 05:29 AM
This is exactly where jokes about physicists come from: in order to explain any given phenomenon you first chop off everything you possibly can and then work out the details one by one. Sometimes it's the statistical limit, other times it's uniform density of matter, or approximation of real objects with point masses or charges.

Without remorse! Any measurement instrument does some sort of coarse-graining anyhow. You can't see every atom in a glass of water individually, so a theory that is sensitive to where each atom is is not a very useful theory.



Time reversibility is a tricky subject - especially in the breaking-of-a-cup example. When you simulate the breaking, you get a broken cup, but it will never be the same one you would get from an experiment (or another simulation, for that matter). We don't care, as long as the fragments are roughly the same size and their spread more or less matches reality. There are many arrangements of cup fragments which look pretty much the same to us, so we don't perceive inaccuracies or divergences in our model. When we try to go back, we face the problem of obtaining a very specific and rare state (an unbroken cup), which is distinctively different from all the others, and thus we clearly see all the faults of our simulation.

The problem of information loss and time reversibility is related to the stability of the model used. In general, if there is any form of damping in the underlying PDEs, then the inverse problem will be unstable and thus difficult to deal with. One of the main reasons for the bad resolution of early X-ray tomographs was the difficulty of inverting the Radon transform without generating too much noise.

Even in a 'perfect simulation', any information loss (e.g. by measuring a physical system using imperfect measurement devices and then attempting to simulate the time reversal) will break the time reversal symmetry. The case of damping is actually much, much worse - having any non-Hamiltonian terms in the dynamics basically means that even at steady state, you're constantly producing entropy (e.g. losing information). This is because Hamiltonian dynamics preserve phase space volume (so if you propagate some ball of states on the x,v plane/hypercube forward or backward in time, the volume of that ball is preserved). With dissipation, the phase space volume shrinks as you move forward in time; to put it another way, for a given set of N states, there are more than N states which could have produced it, which means that you can't uniquely go backwards and figure out what state you came from. So the information loss is baked in by the dissipative process, above and beyond issues of numerical or measurement error.
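The phase-space statement is Liouville's theorem; a damped oscillator makes the contrast explicit (a standard worked example, not from the post):

```latex
\dot{x} = v,\qquad \dot{v} = -\omega^2 x - \gamma v
\quad\Longrightarrow\quad
\nabla\cdot\mathbf{f} \;=\; \frac{\partial \dot{x}}{\partial x} + \frac{\partial \dot{v}}{\partial v} \;=\; -\gamma ,
```

so for gamma = 0 (pure Hamiltonian dynamics) the flow is divergence-free and phase-space volume is exactly preserved, while for gamma > 0 an initial volume contracts like e^{-gamma t} and many distinct past states get squeezed toward the same coarse-grained present state.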



I would rather call it a forced diagonalisation than an outright dimensional reduction, but the distinction might be irrelevant. The thing is, to simulate the real thing with all the shot noise, heating due to electrical resistance and other tiny details, you can't just slap the same quantum state on every electron like in the superfluid case. In other words, reality can't use the same approximations we do in order to grasp the general concept.


You can get shot noise and resistance heating without going to a fully QM entangled system. Shot noise obeys a Boltzmann distribution, which means it's determined by ergodicity - e.g. state counting - not by detailed dynamics. Electrical resistance heating is a little more complex since it is non-equilibrium, but iirc the calculation doesn't require more than a 1d (continuum) description of the density of states as a function of energy. I think that can even get you stuff like the Hall Effect, but it's been awhile.



As for the phonons, they behave as massless particles (as far as I remember), so I don't think they are a good candidate and relativistic impacts would tear any material apart, so there would be no more phonons to look for.


Possibly; my thought was that relativistic impact would be required for the wavelengths of the nuclei to be large enough to actually be comparable to the wavelengths of the phonons they carry and have enough energy to sit in the non-linear regime. In the non-linear regime, I would expect phonons to pick up a mass associated with plastic deformation or something, but I'm not sure.

Radar
2014-04-23, 08:02 AM
Without remorse! Any measurement instrument does some sort of coarse-graining anyhow. You can't see every atom in a glass of water individually, so a theory that is sensitive to where each atom is is not a very useful theory.
And yet, no one likes to work in a frictionless vacuum (https://xkcd.com/669/) after all. :smalltongue:

That being said, I fully agree on the necessity of it all.


Even in a 'perfect simulation', any information loss (e.g. by measuring a physical system using imperfect measurement devices and then attempting to simulate the time reversal) will break the time reversal symmetry.
Almost, but not quite. Information loss works against us in both directions, so there is still time-reversibility symmetry in Hamiltonian systems in the sense that the inaccuracies accumulate at the same rate regardless of the direction of our simulation - it's just that we don't always see the problem when we simulate things forward in time. Dissipation is what makes the situation asymmetrical.


You can get shot noise and resistance heating without going to a fully QM entangled system. Shot noise obeys a Boltzmann distribution, which means it's determined by ergodicity - e.g. state counting - not by detailed dynamics. Electrical resistance heating is a little more complex since it is non-equilibrium, but iirc the calculation doesn't require more than a 1d (continuum) description of the density of states as a function of energy. I think that can even get you stuff like the Hall Effect, but it's been awhile.
Frankly, all things we are able to simulate or otherwise describe are susceptible to approximations, but in those specific cases nature has to go the long way to give us proper results.


Possibly; my thought was that relativistic impact would be required for the wavelengths of the nuclei to be large enough to actually be comparable to the wavelengths of the phonons they carry and have enough energy to sit in the non-linear regime. In the non-linear regime, I would expect phonons to pick up a mass associated with plastic deformation or something, but I'm not sure.
If you look for massive, free particles in solids, then you should deal with crystal or magnetic defects. The problem is, the former are too massive to exhibit any measurable quantum qualities, and the latter are of too low energy in those rare cases when the quantum nature is more prominent. Yet another problem is that the underlying non-linear field equations are non-integrable more often than not. And the integrable cases only work in one dimension (plus time). It makes the full quantisation next to impossible and the semiclassical limit most often gives you only corrections to the energy of those defects.

NichG
2014-04-23, 10:42 AM
Almost, but not quite. Information loss works against us in both directions, so there is still time-reversibility symmetry in Hamiltonian systems in the sense that the inaccuracies accumulate at the same rate regardless of the direction of our simulation - it's just that we don't always see the problem when we simulate things forward in time. Dissipation is what makes the situation asymmetrical.


Incomplete information is, in essence, a form of effective dissipation, because it means that there is some portion of the energy of the system that might have been extractable if you had retained full information by e.g. doing a Maxwell's Demon sort of process, but because of the equivocation between sets of states you cannot create the synchronization between the energy-extracting process and the underlying microstates in order to get a net energy out - you end up losing as much due to errors as you gain when you got it right. So effectively, due to information that has been lost, something that would have been considered a coherent, ordered source of energy ends up being 'heat' and can only be extracted via very coarse thermodynamic means.

Of course that way of putting it is very anthropocentric. But you can think of a 'device' or 'process' as having or lacking information based on the sorts of things it can respond to sensitively, versus the sorts of things it is incapable of distinguishing. This basically portions the energy available differently into 'heat' versus 'not-heat', which means that the effective temperature is different for heat engines comprised of one or the other device. The more energy has to be apportioned to heat in how the device interacts with the system, the lower the Carnot efficiency of that heat engine is going to be, so the less energy the device can access.

In a simulation, the 'devices' are each degree of freedom that you're simulating, and the 'heat' is any energy contained in the degrees of freedom you're throwing out/coarse-graining away. Which means that you can have any sort of energy transfer you want from the coherent/fully described degrees of freedom to the statistical degrees of freedom, but the reverse process is bounded by the statistical description of those missing degrees of freedom. So effectively the system leaks energy into a thermodynamic residual 'bucket' which is hard to get stuff back out from, which has the same mathematical form as dissipation.



Frankly, all things we are able to simulate or otherwise describe are susceptible to approximations, but in those specific cases nature has to go the long way to give us proper results.

If you look for massive, free particles in solids, then you should deal with crystal or magnetic defects. The problem is, the former are too massive to exhibit any measurable quantum qualities, and the latter are of too low energy in those rare cases when the quantum nature is more prominent. Yet another problem is that the underlying non-linear field equations are non-integrable more often than not. And the integrable cases only work in one dimension (plus time). It makes the full quantisation next to impossible and the semiclassical limit most often gives you only corrections to the energy of those defects.

Well we expected things to get hard anyhow once we actually found a system that had macroscopic quantum entanglement, due to that whole infinite-dimensional problem thing. Intuitively, something being integrable would suggest that we're actually not looking in the right places, because you only get things that are nice and analytically tractable when they're highly symmetric. And if they're highly symmetric there's good odds that there will be some sort of wavefunction independence in a particular basis, or a clean separation of scales, or something else that basically allows for an O(N) style of description.

So... quantum mechanics of gravitons scattering off of each other, maybe? iirc, whenever gravitons scatter at an angle, you get singularities in the metric (though I guess they might just be coordinate singularities?), so at least there's no worry you get some sort of simple, separable case where the wave functions of each graviton can be easily factored.