Results 1 to 30 of 30
Thread: P != NP by quantum mechanics

20140409, 09:40 AM (ISO 8601)
 Join Date
 Dec 2006
 Location
 Raleigh NC
 Gender
P != NP by quantum mechanics
A new hypothesis, according to this article.
For the uninitiated, P and NP are computer science ideas: P is the set of all problems that can be solved efficiently by computer algorithms, while NP is the set of problems whose solutions are easily *verifiable*, even though finding a solution may be intractable. One such example would be the travelling salesman problem.
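To make the distinction concrete, here's a rough sketch (my own toy example, not from the article): checking a proposed salesman tour against a budget takes polynomial time, while exhaustively searching all tours grows factorially.
[code]
from itertools import permutations

def tour_length(tour, dist):
    # Total length of a closed tour; dist[i][j] is the distance from city i to city j.
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def verify_tour(tour, dist, budget):
    # Polynomial-time check: does this proposed tour visit every city within the budget?
    return sorted(tour) == list(range(len(dist))) and tour_length(tour, dist) <= budget

def brute_force_best(dist):
    # Exhaustive search over (n-1)! tours -- hopeless once n gets large.
    return min(((tour_length((0,) + p, dist), (0,) + p)
                for p in permutations(range(1, len(dist)))), key=lambda t: t[0])

dist = [[0, 2, 9, 10],
        [1, 0, 6, 4],
        [15, 7, 0, 8],
        [6, 3, 12, 0]]
print(verify_tour((0, 1, 3, 2), dist, budget=35))  # fast even for huge n
print(brute_force_best(dist))                      # blows up combinatorially as n grows
[/code]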
Anyway, the article's author suggests that the reason we can't model macroscopic objects according to quantum physics, as opposed to Newtonian physics, is that Schrödinger's equation is in the NP class. And if P != NP, this means that it's not going to EVER be practical to model macroscopic objects on the quantum level, regardless of technological advances.
It seems to hold up, but it's based on the assumption that P != NP.
Which would have some interesting implications. Wikipedia discusses the issue. Oh, wait, so does XKCD. For one thing, it would mean that certain types of cryptography would remain robust, regardless of advances in technology. It would also mean that a lot of other problems, such as proving mathematical theorems, will remain intractable, not easily solvable, for a long time.
Disappointing, if true. But there does appear to be a nexus between quantum physics and computational complexity, which can be used to model those interactions.
Still, it's only an argument from example. The fact that Schrödinger's equation is NP-hard does not necessarily prove there is no P method of solving it, only that we haven't found one yet. Thus, an additional NP-class problem in the set does not, I think, prove conclusively that P != NP. It still seems likely that that is the case, though.
Respectfully,
Brian P."Every lie we tell incurs a debt to the truth. Sooner or later, that debt is paid."
Valery Legasov in Chernobyl

20140409, 07:27 PM (ISO 8601)
 Join Date
 Dec 2010
Re: P != NP by quantum mechanics
Rather than thinking of what our computers can currently do, think about what the underlying physics of the universe being NP says about the available computational power given a system of fixed size. If the universe is NP at an underlying level, that means that you're building computers out of pieces that can do NP calculations in P time. This is the whole draw of quantum computing.
However, in practice this doesn't work as cleanly as one might think. For example, because of decoherence, the effort to retain the full gamut of quantum mechanical effects may well scale as NP with the size of the fully-entangled quantum mechanical system you want to create. This is because to create a matrix of entangled quantum mechanical states you have to protect it from decoherence, and the system seems to be exponentially sensitive to external noise as the number of mutually entangled states grows. I don't think it's yet known whether this is just a limitation of the methods we're using to build qubits right now, or if it's something generic.
This is probably an example of a fragile formal limit, by which I mean something in which the worst-case or most extreme case is NP, but such that the set of such cases is actually a small subset of the problem space (e.g. there is one point or a region at which the problem is truly NP, and every other point is P with a prefactor that diverges as you approach the NP point). There's some work (in particular I'm thinking of stuff by Florent Krzakala and Lenka Zdeborova, but I haven't done a thorough literature search) showing that NP problems such as K-SAT can be mapped onto glassy systems in physics, and that there is a critical point associated with the divergence to NP corresponding to one of the glass transition temperatures, such that the amortized time to find the solution diverges differently with N at the critical point versus away from it.
If this is the case in general, then things get very muddy. One could say that we don't need quantum mechanics to study most macroscopic systems because most macroscopic systems are operating in a regime in which their fluctuations cause decoherence. This would be associated with a particular scale, which means we'd have a scale separation we could use to perform coarse-graining, which lets us capture the microscopic QM effects in terms of how they modify and determine the classical properties of the coarse-grained objects. For example, we need QM to calculate the color of a material, but once we've done that calculation we can use a macroscopic (classical or relativistic) model for the interaction between that material and light if we want to do e.g. radiative transport calculations. We don't need to do the entire thing as a quantum field theory.
But if we were to go to macroscopic systems near that critical point, suddenly QM becomes relevant at all scales and the problem of modelling them becomes truly NP. It also means that if we want to build a quantum computer, the difficulty is tuning the system towards that critical point so that we preserve as much of the NP underlying behavior as possible.
This does mean that finding out if it's possible to get a system which undergoes self-organized criticality with respect to some P-to-NP transition would be really valuable. That way, instead of having to exert exponential effort to make a large entangled system, we would just have to find a system whose internal dynamics tend to entangle more and more of it in a natural fashion. I'm not sure such a thing exists, though, or how to look for it.

20140409, 08:13 PM (ISO 8601)
 Join Date
 Jan 2014
Re: P != NP by quantum mechanics
Actually, assuming that P != NP (which basically any computer scientist will tell you is almost certainly true), it does prove there is no P method of solving it. That's the entire reason the NP-hard classification exists.
Also, that article seems to imply that quantum mechanics being NP-hard creates a "barrier" between classical and quantum physics, which doesn't make any sense either. NP-hardness does not mean "there is a limit to the size the quantum system can be," as the article confidently states, but that as quantum systems become bigger it becomes harder to model them, until it's effectively impossible.
In addition, the use of Avogadro's number as the definition of a "macroscopic system" is pretty ridiculous. Avogadro's number is a value used in chemistry so chemists don't have to say that a gram of hydrogen has around 6×10^23 atoms, and it doesn't actually have a physical meaning beyond that.

20140409, 11:27 PM (ISO 8601)
 Join Date
 Apr 2010
 Location
 Hell's Heart
Re: P != NP by quantum mechanics

20140410, 02:59 AM (ISO 8601)
 Join Date
 Feb 2012
 Location
 Germany
Re: P != NP by quantum mechanics
While the underlying topic is highly interesting, the article features some really dubious/wrong statements.
"For a simple system, the equation can be solved by an ordinary computer in a reasonable time, so it falls into class of computational problems known as NP."
"So the statement that NP-hard problems can also be solved quickly and efficiently is the famous P=NP equation."
I haven't read the original paper, and it certainly could be correct, but this article seems rather dubious. Without checking that paper I wouldn't really believe anything it says.
Problems with [table]?
All you want to know about [table]!
The Order of the Stick
Kickstarter Reward Collection
Last updated: 2016/08/09, containing:
9 Crayon Drawings  21 Stick its  47 Signature Doodles
Custom Avatar made by the Giant.
Thanks!

20140410, 03:13 AM (ISO 8601)
 Join Date
 Dec 2010
Re: P != NP by quantum mechanics
An interesting aside: the map between classical Newtonian physics and the Schrödinger equation is an example of something where you can take a wide class of nonlinear problems with finite numbers of degrees of freedom and turn them into linear problems with a number of degrees of freedom exponential in the number of degrees of freedom of the nonlinear problem. If you, for example, take some equation of motion in the variables 'x' and 'y' derived from an energy function H(x,y), then the behavior of the expectation value of the vector (x,y) over the corresponding wave function psi(x,y) looks like the behavior of the classical degrees of freedom when you take the limit hbar -> 0. The original equation of motion for (x,y) could be some ghastly nonlinear thing with two discrete degrees of freedom, and it's replaced with a linear PDE over a 2D continuous space (which you could in principle discretize to solve approximately to a bounded error). So the solution of the PDE has a cost which is exponential in the number of discrete degrees of freedom in the original ODEs. So you have a conversion from a P problem to an NP problem.
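To put a rough number on that blow-up (my own sketch, with an assumed grid resolution), compare the state you have to store for the classical ODEs versus the discretized wavefunction:
[code]
M = 100  # assumed number of grid points per degree of freedom

def classical_state_size(n_dof):
    # Classical mechanics: one position and one momentum per degree of freedom.
    return 2 * n_dof

def quantum_state_size(n_dof, grid=M):
    # Discretized wavefunction psi(x1, ..., xn): grid**n_dof complex amplitudes.
    return grid ** n_dof

for n in (1, 2, 3, 6, 10):
    print(n, classical_state_size(n), quantum_state_size(n))
# The classical column grows linearly; the quantum column grows as 100**n,
# which is the exponential cost described above.
[/code]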
This is kind of interesting since in principle if you could do the reverse - take linear problems with a continuous set of degrees of freedom and find a nonlinear problem with a finite number of degrees of freedom and a finite number of terms that captures the dynamics of the appropriate moment integrals - then that's a form of complexity reduction that looks a lot like NP -> P in character. The issue of course is that while this works for things governed by time-reversibility and time translation symmetry (e.g. their dynamics are set by a Hamiltonian), it's not something you could do generically.

20140414, 07:50 AM (ISO 8601)
 Join Date
 May 2007
 Location
 The Land of Cleves
 Gender
Re: P != NP by quantum mechanics
Anyway, the article's author suggests that the reason we can't model macroscopic objects according to quantum physics, as opposed to Newtonian physics, is that Schrödinger's equation is in the NP class.
Time travels in divers paces with divers persons.
—As You Like It, III:ii:328
Chronos's Unalliterative Skillmonkey Guide
Current Homebrew: 5th edition psionics

20140414, 08:43 AM (ISO 8601)
 Join Date
 Aug 2009
Re: P != NP by quantum mechanics
Okay fair bit of reading (ugh complex math = brain hurt) I wanna see if I have a certain loose summary of the points on this correct:
1) P != NP (unless resolved) suggests that there are certain classes of problems that math cannot provide a method to solve, except trying every possible combination manually. Take the Travelling Salesman: you can find the shortest route to travel between multiple points, but you have to find it each and every time by brute-forcing it. And therefore some problems can never be practically solved, because you run out of time and the complexity is too enormous.
2) Arkady Bolotin's paper is proposing that quantum mechanics is one such problem... ergo there will never reasonably be a method to practically deal with quantum effects above a certain complexity. We have no short cuts and don't have enough computing power to go the long way around.
Am I getting that without horrible errors?

20140414, 08:51 AM (ISO 8601)
 Join Date
 Dec 2010
Re: P != NP by quantum mechanics
Not really. You can certainly do better than brute force on NP problems, and there are lots of algorithms to do so. In fact, most NP problems have large sets of inputs for which you can discover the solution in polynomial time using a clever algorithm. The P != NP thing is a statement that there are two distinct classes: problems where the worst case can be solved in an amount of time that is polynomial in the problem size, and problems where the worst case cannot be reduced to something polynomial in the problem size. If P = NP, then there is in general always a way to solve any NP problem in polynomial time even in the worst possible case.
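For a concrete (toy, my own) illustration of "better than brute force": subset sum is NP-complete, but a simple backtracking search with pruning dispatches many instances quickly; only the nasty worst cases stay exponential.
[code]
def subset_sum(nums, target):
    # Does some subset of nums (positive integers) add up to target?
    nums = sorted(nums, reverse=True)
    def search(i, remaining):
        if remaining == 0:
            return True
        if i == len(nums) or remaining < 0 or sum(nums[i:]) < remaining:
            return False  # prune: this branch can't possibly reach the target
        # Either take nums[i] or skip it.
        return search(i + 1, remaining - nums[i]) or search(i + 1, remaining)
    return search(0, target)

print(subset_sum([34, 4, 12, 5, 2], 9))    # True (4 + 5), found without trying all 2**5 subsets
print(subset_sum([34, 4, 12, 5, 2], 58))   # False, rejected immediately by the pruning check
[/code]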

20140416, 05:34 AM (ISO 8601)
 Join Date
 Jun 2011
 Gender
Re: P != NP by quantum mechanics
Projects: Homebrew, Gentlemen's Agreement, DMPCs, Forbidden Knowledge safety, and Top Ten Worst. Also, Quotes and RACSD are good.
Anyone knows blue is for sarcas'ing in · "Take 10 SAN damage from Dark Orchid" · Use of gray may indicate nitpicking · Green is sincerity

20140416, 05:47 AM (ISO 8601)
 Join Date
 Mar 2009
 Location
 Long Shiny Cloudland
 Gender
Re: P != NP by quantum mechanics
Pardon my ignorance, but what precisely is meant by a 'problem' in this instance? Given a sufficiently broad definition of 'problem', one could just straightforwardly demand output exponential in the problem size, say  for example, 'Print out your name 2^n times, where n is given' is certainly a task, but I infer that it's not a problem.
If I creep into your house in the dead of night and strangle you while you sleep, you probably messed up your grammar.
I'm always extremely careful to hedge myself against absolute statements.

20140416, 06:57 AM (ISO 8601)
 Join Date
 Feb 2012
 Location
 Germany
Re: P != NP by quantum mechanics
You can basically rewrite it as a problem suitable for complexity theory if you want. The only thing you need is either to "hardcode" a name into your question or have the name as additional input.
But in any case that problem is outside of NP (and therefore outside of P), since it isn't possible to verify the output list in polynomial time. (You deliberately made that impossible by requiring an exponentially long output.)
A "problem" is basically any function from an input space to an output space. (often the same space)
Complexity classes try to group such problems/function into different groups with certain properties (how fast can they be solved, how much space they take up, etc...).
And one interesting aspect how different complexity classes related to each other. (Is one a subgroup of the other? Are they equal?)
If a problem belongs to P it can be solved in polynomial time in all cases.
If a problem belongs to NP, a proposed solution can be verified in polynomial time in all cases.
From that it also directly follows that P is inside NP.
But neither P nor NP really says anything about average runtime. Some instances of NP-complete problems can be solved pretty easily. But there are instances which are problematic to solve fast (either because it is impossible and P != NP, or because no one has figured out how to solve them fast).
This is a really tricky business for cryptography: normally they use some trapdoor function (basically a function f(x)=y where f(x) is easy to compute but the inverse f^(-1)(y)=x is hard to compute). Most of these involve hard problems, some of them NP-complete problems. But nevertheless some of them are not safe, because they only present easy instances of the hard problem.
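A toy sketch of that easy-forward/hard-inverse idea (my own numbers; real cryptosystems use far larger parameters and much more care):
[code]
p, g = 2_147_483_647, 5  # assumed toy parameters: a prime modulus and a base

def forward(x):
    # Easy direction: modular exponentiation, polynomial in the bit length of x.
    return pow(g, x, p)

def invert_by_search(y):
    # Hard direction here: brute-force discrete logarithm, time grows with p.
    for x in range(p):
        if pow(g, x, p) == y:
            return x

secret = 123_456
y = forward(secret)       # effectively instant
# invert_by_search(y)     # would grind through up to ~2 billion candidates
[/code]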
[Fast/easy is basically misleading. It is better to use "polynomial time" instead, since a really large polynomial isn't really practical for fast computation either.]
EDIT: Another important point when talking about P/NP and numerical input: the used input size (="n") is normally the length of the input, see pseudo-polynomial time.
Last edited by ChristianSt; 20140416 at 09:15 AM.
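A quick sketch of that pseudo-polynomial point (my own example): the classic dynamic-programming solution to subset sum runs in time proportional to the target value, which is polynomial in the value but exponential in the number of digits used to write it down.
[code]
def subset_sum_dp(nums, target):
    # O(len(nums) * target) time: polynomial in the *value* of target,
    # but target can be exponential in the *length* of its written-down input.
    reachable = {0}
    for n in nums:
        reachable |= {s + n for s in reachable if s + n <= target}
    return target in reachable

print(subset_sum_dp([34, 4, 12, 5, 2], 9))  # True, and fast: the target value is small
# With a 30-digit target the same "polynomial" table would need ~10**30 entries.
[/code]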
Problems with [table]?
All you want to know about [table]!
The Order of the Stick
Kickstarter Reward Collection
Last updated: 2016/08/09, containing:
9 Crayon Drawings  21 Stick its  47 Signature Doodles
Custom Avatar made by the Giant.
Thanks!

20140416, 09:56 AM (ISO 8601)
 Join Date
 Sep 2009
 Gender
Re: P != NP by quantum mechanics
Have you ever heard of the hypothesis that the universe, like your computer screen, has a certain minimum "resolution"? Basically, like a single pixel on your screen, there's a smallest scale beyond which no new information can be gained; no matter how much you zoom into that pixel, nothing new can be seen.
This idea has been used as an argument that the universe is, essentially, like a computer simulation. And if the universe is like a computer simulation, it should follow from Arkady Bolotin's observation that no macroscopic quantum effects can be observed within the lifetime of the universe. They fall outside the resolution of the universe, and thus do not exist, because they cannot be simulated.
I hope this makes sense to someone other than me...
"It's the fate of all things under the sky,
to grow old and wither and die."

20140416, 03:01 PM (ISO 8601)
 Join Date
 Dec 2006
 Location
 Raleigh NC
 Gender
Re: P != NP by quantum mechanics
Close, but not quite. NP is not the class of problems that cannot be solved at all. It's the class of problems that can't be solved in a reasonable time. That's the difference between an intractable problem and an impossible one.
Imagine it like this: you've got a sorting machine, and the time it takes to sort stuff into bins is directly proportional to the number of widgets that you've thrown in. So it sorts 2 objects in 2 seconds, 60 objects in 60 seconds, and so on. That's "polynomial time", because the time it takes to do the operation can be calculated from the number of widgets.
That's true even if it's not a one-to-one correspondence.
Non-polynomial is when this breaks down and there is no polynomial bound on the time based on the number of widgets.
It might be in three seconds. Or three days. Or three thousand years in the future. Or ten days before the heat death of the universe. The point is, once you push the START button you either have no idea how long it's going to run, or it is going to run for a LONG time.
Doesn't mean that it won't be solved or can't be. It's just that it's probably going to take a very, very long time to finish.
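To put some numbers on the widget machine (my own illustration, assuming one second per step):
[code]
for n in (10, 20, 40, 60):
    linear = n            # the widget-sorter: n seconds for n widgets
    quadratic = n ** 2    # still polynomial: slower, but you can predict it
    exponential = 2 ** n  # non-polynomial blow-up
    print(f"n={n:>2}  linear={linear}s  quadratic={quadratic}s  exponential={exponential}s")
# 2**60 seconds is roughly 37 billion years -- "heat death of the universe" territory.
[/code]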
Respectfully,
Brian P.
Last edited by pendell; 20140416 at 03:10 PM.
"Every lie we tell incurs a debt to the truth. Sooner or later, that debt is paid."
Valery Legasov in Chernobyl

20140416, 03:28 PM (ISO 8601)
 Join Date
 Jan 2007
Re: P != NP by quantum mechanics
Well, such estimations have to take into account the difference between continuous and discrete quantum systems. I would have to look into it, but if the universe is discrete, then not only the computational power but also the complexity of quantum problems diminishes accordingly.
There is also this very neat holographic principle, which states that the maximum information stored in a given volume is restricted by the surface area of that volume. This means that the bigger the region we deal with, the lower the information density can be, and thus we might have enough computational power to deal with it all.
That being said, it is late and I'm tired, so I can only hope not to mix things up.
In a war it doesn't matter who's right, only who's left.

20140420, 08:01 PM (ISO 8601)
 Join Date
 Jan 2014
Re: P != NP by quantum mechanics
Anything discrete can be simulated, even NP-hard problems, given an arbitrary amount of time and resources. Since the universe under this theory is a virtual creation of such a simulation, the amount of time and resources used to simulate the universe is unnoticeable from within the universe.

20140420, 09:22 PM (ISO 8601)
 Join Date
 Sep 2009
 Gender
Re: P != NP by quantum mechanics
Yes, but provided the simulation known as the universe has non-arbitrary time and resources, then it would make sense for the simulation to have a cutoff point, leading to the described phenomenon: after the size of an object reaches some threshold, quantum effects cease to be observed.
"It's the fate of all things under the sky,
to grow old and wither and die."

20140421, 06:41 AM (ISO 8601)
 Join Date
 Jan 2007
Re: P != NP by quantum mechanics
A few more things to consider:
1. If the whole universe is a simulation run on hardware with finite computational capabilities, then the universe needs to be finite in size as well, which is quite unlikely considering current measurements of the expansion rate.
2. Even if the universe were finite, it is seriously large, so any potential artificial cutoff on quantum effects would probably be way beyond our scale. Considering how rare such large-scale phenomena are estimated to be anyway, it wouldn't be much of a computational problem in the grand scheme of things.
On a slightly different note, we frankly do observe macroscopic quantum effects:
1. Ferromagnetism comes from purely quantum interactions between electrons in large chunks of matter. Granted, the effect can be derived with just nearest-neighbour interactions, so it's a bit of a mixed case.
2. Superfluidity and Bose-Einstein condensation were obtained with considerably large samples, to the point where one could clearly observe the fountain effect for helium.
3. In the attempt to observe gravitational waves directly in laser interferometers, scientists had to circumvent the uncertainty principle when they measured the distance between mirrors of considerable size. A short description of the problem can be found here.
In a war it doesn't matter who's right, only who's left.

20140421, 07:30 PM (ISO 8601)
 Join Date
 Dec 2010
Re: P != NP by quantum mechanics
Why is there any assumption that the universe has to be a simulation? Or for that matter, a simulation running on hardware that has the same computational structure as the hardware we currently use to do computer simulations? There's a weird bias in that sort of reasoning, which is that somehow a classical computational paradigm is 'more realistic' and any sort of underlying quantum mechanics must be faked by the emulator, which is kind of ridiculous.
Computational complexities depend on the computational paradigm too. You can consider, for example, the case where available parallelism is much greater than problem size; on a massively parallel machine, some O(N) problems become O(1) when measured in terms of 'parallel cycles', since each of the N things can be cleanly separated, whereas others become O(log(N)) due to the cost of aggregating the results.
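A small sketch of the parallel-cycles idea (my own toy model, assuming an idealized machine where any number of additions can happen in the same cycle): summing N numbers costs N-1 serial steps but only about log2(N) parallel cycles.
[code]
import math

def parallel_cycles_to_sum(values):
    # Count "cycles" if every adjacent pair can be added simultaneously in one cycle.
    cycles = 0
    while len(values) > 1:
        paired = [values[i] + values[i + 1] for i in range(0, len(values) - 1, 2)]
        if len(values) % 2:
            paired.append(values[-1])  # odd element carries over untouched
        values = paired
        cycles += 1
    return cycles

data = list(range(1024))
print(parallel_cycles_to_sum(data), math.ceil(math.log2(len(data))))  # both 10
[/code]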
Anyhow, as far as macroscopic quantum effects go, the superfluid/superconductivity cases are interesting in that they're examples of massive simplification happening to the underlying high-dimensional quantum theory. In a superfluid, you have N objects that all exist in the same quantum state in a way which is basically fully separable, and N goes to (basically) infinity. So rather than having an infinite-dimensional PDE, you can model the superfluid with a simple 3D PDE with a field for the ground state occupancy (the superfluid wavefunction), a field that sums over all excited state occupancies (the 'normal fluid'), and a vector potential. So despite it being macroscopically quantum mechanical, it's computationally easy, because the physics of it involves collapse into a single state.
One could conclude that the reason you can see macroscopic quantum effects in a superfluid is precisely because it's a phase transition that collapses the fully heterogeneous infinite-dimensional system onto a finite-dimensional system. I don't think I fully buy that as a general rule, but it's certainly one reason why the particular macroscopic effect in that case is so obviously quantum mechanical.
Last edited by NichG; 20140421 at 07:31 PM.

20140422, 03:56 AM (ISO 8601)
 Join Date
 Jan 2007
Re: P != NP by quantum mechanics
I guess it is just the way we (as in people) always tend to view the world through the things we are most familiar with. When Newton published his mechanics, various intellectuals started to see the universe as a complicated and very precise clockwork. When all the great revolutions in physics of the early 20th century happened, the paradigm shifted again. Relativity got such publicity not only because of its great scientific significance, but also because it dramatically changed our general view of the world. For the last few decades more and more people work with computers, so we are again starting to shift our view of the world, which is also reflected in physics, since we treat information as a physical quantity to a degree. There are even quite tautological statements that the universe is a big quantum computer that calculates itself.
True, but this trades complexity in time for complexity in memory and computational power used.
In a war it doesn't matter who's right, only who's left.

20140422, 04:53 AM (ISO 8601)
 Join Date
 Jul 2007
 Location
 Whose eye is that eye?
 Gender
Re: P != NP by quantum mechanics
The entire universe is a quantum effect. There is no point at which suddenly quantum mechanics stop applying and things start working by some other set of physics. No observed evidence for such a point, at least, and believe me people have been looking.

20140422, 05:13 AM (ISO 8601)
 Join Date
 Jan 2007
Re: P != NP by quantum mechanics
That is true, but the statistical behaviour of large systems makes it seem that things revert to classical physics at larger scales. Thus, people speculate.
As a side note, I just remembered another good example of large-scale quantum systems: electric conductors, semiconductors and insulators are all described by band theory, which is inherently reliant on Fermi statistics and atomic orbital interference. It's even better than superfluids, because you can't simplify the system in any way.
And as is always the case, there is an XKCD comic on the subject.
In a war it doesn't matter who's right, only who's left.

20140422, 05:37 AM (ISO 8601)
 Join Date
 Jul 2007
 Location
 Whose eye is that eye?
 Gender
Re: P != NP by quantum mechanics
Mm. It feels like such a basic mistake to think about something like 'reverting to classical mechanics'. Classical mechanics is a special case of quantum mechanics, something less fundamental and less real. It is also less simple, in a way.

20140422, 08:26 AM (ISO 8601)
 Join Date
 Sep 2009
 Gender
Re: P != NP by quantum mechanics
It's a cosmological hypothesis, one out of many, based on Planck time and the idea that the universe has a set resolution. I mentioned it because within those premises the proposition in the article makes the most sense. Barring evidence for the hypothesis, it doesn't invalidate any of the other cosmological hypotheses you could make.
"It's the fate of all things under the sky,
to grow old and wither and die."

20140422, 08:42 AM (ISO 8601)
 Join Date
 Dec 2010
Re: P != NP by quantum mechanics
That's making the implicit assumption that the infinitely parallel system is built out of serial components. If the underlying building blocks of your computational system or universe are such that this isn't the case, then you aren't actually making this trade, because there isn't an underlying serial/classical computer for which making use of the infinite parallelism is actually taking up more nodes.
Let's take a neuron, for example. Having a neuron with a fan-out of 10 or a fan-out of 1000 doesn't really have a significantly different resource cost, compared to the cost of having two neurons instead of one. This is markedly different from the rules which underlie electronic chip design, where large fan-out is difficult to perform the layouts for. This means that a neuron has an extra computational operation it can perform 'cheaply' that would be an O(N) process for a serial computer, namely, it can sum all of its inputs as an O(1) step. Adding together N numbers is O(N) for a serial computer, and it's even O(log(N)) for an infinitely parallel machine, so the particularities of neuron architecture mean that you get different intrinsic computational complexities due to the special circumstances.
Now, if you try to push this too far on a neuron, eventually you would end up seeing that this scaling wouldn't hold, since you can't, for example, fit 10^23 synapses on a single neuron. But that's in essence because the computational abilities of a neuron are bounded by the particular limits of the 'computer' that 'emulates' it (namely, the physical universe and classical mechanics). Because of separation of scales, that limit exists far above the sorts of problems that the brain needs to solve, so within that problem domain the brain is fundamentally a more efficient computational device than a serial computer.
(As an aside, that 'parallel reduction is O(1)' property is probably fundamental to the physical universe, even in the classical limit. The reason is that if you look at a particular 'box' in space and have a series of 'events' at different times and places that emit photons, the photons flowing into that box at a given point in time are the result of an integration over the entirety of time up to that point. So the older the universe gets, the larger 'N' it can integrate as an O(1) computational step.)
If you built a computer out of modules that could solve a travelling salesman problem of size N in O(1) time, then basically that would let you get around a lot of the practical complexity bounds of certain classes of NP problems, because your underlying building blocks have a different structure than what is assumed in the theory of computation (e.g. Turing-tape sorts of things).
This can probably be mathematically generalized by looking at what happens when you take large systems and then indiscriminately throw out knowledge of some large set of degrees of freedom (basically, going to the statistical limit where you only know moments of the distribution and not particular atom positions/etc).
There's another case in which this sort of statisticalization happens and has a significant effect: time reversibility. If you, for example, do a computer simulation with deterministic Hamiltonian physics, then the underlying time reversibility of physics means that you can take any state and propagate it forward or backward in time to get to any other state on the chain of dynamics. Furthermore, the difference is simply a change of signs on the velocities and the time-step direction. So the physical trajectory by which a teacup falls and shatters into a million pieces is just as microscopically valid as a trajectory by which the teacup's shards reunite into an intact teacup.
However, once you lose information (any information really; floating point error is enough to see this) about the details of the state, then it becomes impossible to do this, because you lose the same amount of information whether you run the simulation forward or backward. That means if you go forward and then backward, you've actually lost twice as much information as just going forward, rather than zero. So the information loss process doesn't obey the underlying time reversibility of physics, which means that statistical effects allow you to have things like 'friction' and other macroscopic dissipative mechanisms. The thing is, these are all 'illusory' in the sense that the underlying dynamics aren't losing any information, but the person who is trying to simulate/model/observe them is incapable of actually retaining all of the information.
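Here's a toy sketch of that (my own example): a frictionless oscillator integrated forward and then backward returns to its starting state up to floating-point noise, but deliberately rounding the state each step (losing information) breaks the round trip.
[code]
def verlet(x, v, dt, steps, digits=None):
    # Velocity-Verlet for a unit-mass spring (a = -x); time-reversible in exact arithmetic.
    a = -x
    for _ in range(steps):
        x = x + v * dt + 0.5 * a * dt * dt
        a_new = -x
        v = v + 0.5 * (a + a_new) * dt
        a = a_new
        if digits is not None:
            x, v = round(x, digits), round(v, digits)  # deliberate information loss
            a = -x
    return x, v

x0, v0 = 1.0, 0.0
xf, vf = verlet(x0, v0, 0.01, 1000)            # forward...
xb, vb = verlet(xf, vf, -0.01, 1000)           # ...and back with dt -> -dt
print(abs(xb - x0), abs(vb - v0))              # essentially zero: just float noise

xf, vf = verlet(x0, v0, 0.01, 1000, digits=3)  # same, but keep only 3 digits of state
xb, vb = verlet(xf, vf, -0.01, 1000, digits=3)
print(abs(xb - x0), abs(vb - v0))              # noticeably larger: the lost digits don't come back
[/code]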
Decoherence is, in principle, a similar effect. Entanglement works by the fact that knowledge of the conserved sum of a set of variables allows you to infer things about variables you cannot directly observe, but if those variables couple to external random variables then the random environmental fluctuations can 'explain away' mismatches between the actual sum and what you think it is, which means that your power of inference is quickly destroyed by the uncertainties. It's very much like losing track of the time-reversal trajectory and 'forgetting' that the system was once a teacup before all the state variables got jumbled up together.
As a side note, I just remembered another good example of large-scale quantum systems: electric conductors, semiconductors and insulators are all described by band theory, which is inherently reliant on Fermi statistics and atomic orbital interference. It's even better than superfluids, because you can't simplify the system in any way.
And as is always the case, there is an XKCD comic on the subject.
Another example of where you might get significant many-electron effects would be in molecules with networks of single/double bond loops/chains. It turns out that single/double bond chains have what are called 'resonance structures' in chemistry, where there are multiple equivalent configurations for the bonds when you temporarily stick an extra electron in and therefore add an extra bond that doesn't fit nicely into the structure. That extra electron can respond to electromagnetic forces down the entire length of the chain, which is why dyes have colors in the visible and why bleach removes stains: the long single/double chain acts like an antenna for wavelengths that scale with the chain length, so a long enough chain gets into the visible range; bleach replaces some of those double bonds with chlorinated single bonds, which basically snaps the antenna into smaller pieces that absorb in the UV.
So if you had a very well-chosen structure of interlocking loops, you could potentially create a system where multi-electron entanglement is naturally very important, because adding an electron somewhere in the network has very nonlocal effects on which antennae can still function and which ones get broken in order to satisfy the right number of bonds on each atom. That said, this is speculation. My guess is that even that system would tend to become statistically classical above some particular temperature, because the larger the molecule gets, the more places there are for thermal fluctuations to interfere with the precise network you've created.
Last edited by NichG; 20140422 at 08:43 AM.

20140422, 04:52 PM (ISO 8601)
 Join Date
 Feb 2014
 Location
 Looking for the Xeelee
 Gender
Re: P != NP by quantum mechanics
If this is true, then Charles Stross and I will sleep better at night.
Engraved here is a rendition of an image of the Dwarf Fortress learning curve. All craftsdwarfship is of the highest quality. It depicts an obsidian overhang which menaces with spikes of obsidian and tears. Carved on the overhang is an image of Toady One and the players. The players are curled up in a fetal position. Toady One is laughing. The players are burning.
ᴛʜɪs ɪs ɴᴏᴛ ᴀ sɪɢɴᴀᴛᴜʀᴇ.

20140423, 03:15 AM (ISO 8601)
 Join Date
 Jan 2007
Re: P != NP by quantum mechanics
I never gave it that much thought, really. I only vaguely remember that quantum systems can be used to solve problems that are susceptible to parallel reduction, but the whole idea of constructing computers based on something other than the Turing paradigm (or whatever you call it) had kind of eluded me. Thank you for the interesting new information.
This is exactly where jokes about physicists come from: in order to explain any given phenomenon you first chop off everything you possibly can and then work out the details one by one. Sometimes it's the statistical limit, other times it's uniform density of matter, or approximation of real objects with point masses or charges.
Time reversibility is a tricky subject, especially in the breaking-of-a-cup example. When you simulate the breaking, you get a broken cup, but it will never be the same one you would get from experiment (or another simulation for that matter). We don't care, as long as the fragments are roughly the same size and their spread more or less matches reality. There are many arrangements of cup fragments which look pretty much the same to us, so we don't perceive inaccuracies or divergences in our model. When we try to go back, we face the problem of obtaining a very specific and rare state (an unbroken cup), which is distinctly different from all the others, thus we clearly see all the faults of our simulation.
The problem of information loss and time reversibility is related to the stability of the model used. In general, if there is any form of damping in the underlying PDEs, then the inverse problem will be unstable and thus difficult to deal with. One of the main reasons for the bad resolution of early X-ray tomographs was the difficulty of inverting the Radon transform without generating too much noise.
I would rather call it a forced diagonalisation than outright dimensional reduction, but the distinction might be irrelevant. The thing is, to simulate the real thing with all the shot noise, heating due to electrical resistance and other tiny details, you can't just slap the same quantum state on every electron like in the superfluid case. In other words, reality can't use the same approximations we do in order to grasp the general concept.
As for phonons, they behave as massless particles (as far as I remember), so I don't think they are a good candidate, and relativistic impacts would tear any material apart, so there would be no more phonons to look for.
This is a very neat and interesting phenomenon.
In a war it doesn't matter who's right, only who's left.

20140423, 05:29 AM (ISO 8601)
 Join Date
 Dec 2010
Re: P != NP by quantum mechanics
Without remorse! Any measurement instrument does some sort of coarse-graining anyhow. You can't see every atom in a glass of water individually, so a theory that is sensitive to where each atom is is not a very useful theory.

20140423, 08:02 AM (ISO 8601)
 Join Date
 Jan 2007
Re: P != NP by quantum mechanics
And yet, no one likes to work in a frictionless vacuum, after all.
That being said, I fully agree on the necessity of it all.
Almost, but not quite. Information loss works against us in both directions, so there is still time-reversal symmetry in Hamiltonian systems in the sense that the inaccuracies accumulate at the same rate regardless of the direction of our simulation; it's just that we don't always see the problem when we simulate things forward in time. Dissipation is what makes the situation asymmetrical.
Frankly, all things we are able to simulate or otherwise describe are susceptible to approximations, but in those specific cases nature has to go the long way to give us proper results.
If you look for massive, free particles in solids, then you should deal with crystal or magnetic defects. The problem is, the former are too massive to exhibit any measurable quantum qualities and the latter are of too low energy in those rare cases when the quantum nature is more prominent. Yet another problem is that the underlying nonlinear field equations are non-integrable more often than not, and the integrable cases only work in one dimension (plus time). That makes full quantisation next to impossible, and the semiclassical limit most often gives you only corrections to the energy of those defects.
In a war it doesn't matter who's right, only who's left.

20140423, 10:42 AM (ISO 8601)
 Join Date
 Dec 2010
Re: P != NP by quantum mechanics
Incomplete information is, in essence, a form of effective dissipation, because it means that there is some portion of the energy of the system that might have been extractable if you had retained full information (by e.g. doing a Maxwell's Demon sort of process), but because of the equivocation between sets of states you cannot create the synchronization between the energy-extracting process and the underlying microstates needed to get a net energy out; you end up losing as much due to errors as you gain when you get it right. So effectively, due to information that has been lost, something that would have been considered a coherent, ordered source of energy ends up being 'heat' and can only be extracted via very coarse thermodynamic means.
Of course that way of putting it is very anthropocentric. But you can think of a 'device' or 'process' as having or lacking information based on the sorts of things it can respond to sensitively, versus the sorts of things it is incapable of distinguishing. This basically apportions the available energy differently into 'heat' versus 'not-heat', which means that the effective temperature is different for heat engines comprised of one or the other device. The more energy has to be apportioned to heat in how the device interacts with the system, the lower the Carnot efficiency of that heat engine is going to be, so the less energy the device can access.
In a simulation, the 'devices' are each degree of freedom that you're simulating, and the 'heat' is any energy contained in the degrees of freedom you're throwing out/coarse-graining away. Which means that you can have any sort of energy transfer you want from the coherent/fully described degrees of freedom to the statistical degrees of freedom, but the reverse process is bounded by the statistical description of those missing degrees of freedom. So effectively the system leaks energy into a thermodynamic residual 'bucket' which is hard to get stuff back out of, which has the same mathematical form as dissipation.
Frankly, all things we are able to simulate or otherwise describe are susceptible to approximations, but in those specific cases nature has to go the long way to give us proper results.
If you look for massive, free particles in solids, then you should deal with crystal or magnetic defects. The problem is, the former are too massive to exhibit any measurable quantum qualities and the latter are of too low energy in those rare cases when the quantum nature is more prominent. Yet another problem is that the underlying nonlinear field equations are non-integrable more often than not, and the integrable cases only work in one dimension (plus time). That makes full quantisation next to impossible, and the semiclassical limit most often gives you only corrections to the energy of those defects.
So... quantum mechanics of gravitons scattering off of each other, maybe? IIRC, whenever gravitons scatter at an angle you get singularities in the metric (though I guess they might just be coordinate singularities?), so at least there's no worry that you get some sort of simple, separable case where the wave functions of each graviton can be easily factored.