
Haha - I think maybe only the directors of The Adam Smith Institute have read the original. Fortunately, after one of them did, he wrote this: http://www.adamsmith.org/sites/default/files/resources/conde...


On this topic, I almost always concur with these people: http://www.adamsmith.org/blog/tag/inequality/


In these kinds of reality-probing quantum experiments, any problem of experimental design that affects the validity of the findings is referred to as a loophole, as though some awkward legal wrangling were going on, because the experiments were originally conceived to determine whether the then-controversial Bell inequalities hold. The inequalities test Bell's theorem, which states that any hidden variables (things not yet observed that have a causal influence on the experimental outcome) must be non-local if they are to be consistent with the predictions of quantum mechanics. Non-local here means 'spooky action at a distance'.
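
To make the inequality concrete, here's a minimal sketch in Python (my own illustration; the angles and the toy hidden-variable model below aren't from any particular experiment) of the CHSH form of Bell's inequality: any local hidden-variable model obeys |S| <= 2, while quantum mechanics predicts |S| up to 2*sqrt(2).

    # CHSH sketch: local hidden-variable theories obey |S| <= 2;
    # quantum mechanics reaches 2*sqrt(2). Angles and the toy local
    # model are illustrative assumptions, not from the experiment.
    import numpy as np

    def E_quantum(a, b):
        # Correlation of spin measurements at angles a, b on a singlet state.
        return -np.cos(a - b)

    def E_local_hv(a, b, n=100_000):
        # Toy local model: a shared random angle lam fully determines
        # each outcome. Deterministic and local by construction.
        rng = np.random.default_rng(0)
        lam = rng.uniform(0, 2 * np.pi, n)
        A = np.sign(np.cos(a - lam))
        B = -np.sign(np.cos(b - lam))
        return np.mean(A * B)

    def chsh(E, a, a2, b, b2):
        return E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

    # Angles that maximise the quantum violation.
    a, a2, b, b2 = 0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
    print("quantum S  =", chsh(E_quantum, a, a2, b, b2))   # -2*sqrt(2) ~ -2.83
    print("local HV S =", chsh(E_local_hv, a, a2, b, b2))  # ~ -2.0, at the bound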

Showing the inequalities to be violated (shown incorrect by experiment) was originally controversial because Einstein and Bohr had differing notions of what quantum mechanical theory implied about reality. They engaged in a lengthy, open discussion about it which was never resolved. Einstein believed in local realism, in which there is no spooky action at a distance and properties like position and momentum exist even when not being measured. Bohr, on the other hand, insisted that there simply isn't an underlying reality, and that only when measurements are made do properties like position and momentum condense out of the quantum mechanical reality. So the significance of the experiment lies in what it tells us about the underlying nature of reality: by closing another loophole, we get closer to knowing what's what.

[The rest here is historical context.]

The familiar refrain, "God does not play dice," is almost always taken out of context - in its original setting, Einstein was also objecting to the kind of telepathy the theory seemed to require - the non-local aspect of quantum mechanics. Einstein said in 1954 that 'it is not possible to get rid of the statistical character of the present quantum theory by merely adding something to the latter, without changing the fundamental concepts about the whole structure'. He was saying he had lost conviction that a hidden variable theory could replace quantum mechanics.

Bohr's view, like Einstein's later view, is more in line with modern thinking. By testing the inequalities experimentally, a team led by Aspect in 1981-82 showed that locality and objective realism cannot both hold, leaving a non-local reality possible. In 2006, a group tested Leggett's inequality and showed it to be violated, which refined experimentally what the nature of reality can be, though it showed only that realism and a certain type of non-locality are incompatible, without ruling out all possible non-local models (Nature, April 2007). Aspect remarked that, philosophically, the 'conclusion one draws is more a question of taste than logic'.


OK - but what's the difference with previous experiments? Is it that they did it with a single photon? Or is it because they managed to do it from two remote laboratories?


It may be that the combination is new; I don't know the exact state of the field, but: this experiment uses a single photon, so they don't have to sample multiple times and run a statistical analysis on that part. If they did, that might open the efficiency loophole. The communication loophole isn't opened either, as the labs are sufficiently distant and the measurement windows short enough, but that's been done before.

As far as I can tell, the disjoint measurement loophole doesn't apply here either, as it opens when correlations are drawn from multiple samples; here there's only one. I'm not sufficiently expert to tell whether the rotational invariance loophole, or any others, are closed here. Can anyone shed some light on this?


That would be most useful indeed. I've re-read the paper and still can't pinpoint the main difference from previous experiments, or why this is a significant achievement...

Any QM expert around here who could help us?


I wouldn't really call myself an expert so take this with an appropriate quantity of NaCl, but AFAICT yes, what is new here is an experimental violation of the Bell inequalities with a "single particle" rather than an EPR pair.

Note that the reason I put "single particle" in scare quotes is that there really is no difference between a "single particle" and an EPR pair. Both are single (non-separable) quantum systems. The only difference is that the "single particle" is in a state that constrains it to deliver its energy at a single location whereas the "EPR pair" can split its energy between two locations. So a "single particle" is really just a special case of an EPR pair, which is in turn a special case of an EPR N-tuple.
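
If it helps to see that numerically, here's a minimal toy model (my own, truncating each mode at one photon): a single photon behind a 50/50 beamsplitter is in the two-mode state (|1,0> + |0,1>)/sqrt(2), whose Schmidt rank is 2, i.e. it's entangled between the modes just as an EPR pair is.

    # Sketch (illustrative): a single photon split across two modes is
    # an entangled state of those modes. Basis |n_A, n_B>, n in {0, 1}.
    import numpy as np

    psi = np.zeros((2, 2))
    psi[1, 0] = 1 / np.sqrt(2)  # photon in Alice's output mode
    psi[0, 1] = 1 / np.sqrt(2)  # photon in Bob's output mode

    # Schmidt rank = number of non-zero singular values of the coefficient
    # matrix: rank 1 means a product state, rank > 1 means entanglement.
    sv = np.linalg.svd(psi, compute_uv=False)
    print("Schmidt coefficients:", sv[sv > 1e-12])  # two equal values -> entangled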


I'm competent in QM but not a quantum optics expert. In particular homodyne detection is new-ish to me. There is a bit of a description of it here that might be useful: http://relativity.livingreviews.org/Articles/lrr-2012-5/arti... That said, this is my take, which is mostly me trying to wrap my head around the problem, so take it all with a grain of salt.

The idea is that Alice mixes the (weak) signal photon stream in her lab with a (strong) "local oscillator" of the same frequency (that is the "homo" in "homodyne") and uses the interference between them to perform measurements on the signal without doing photon counting on it, which, when combined with Bob's measurements on the other part of the signal photon wavefunction, can demonstrate non-local effects. It is important, as always in "spooky-action-at-a-distance" experiments, to emphasize that nothing Bob sees can be used to infer what Alice measures or vice versa: there is no possibility of faster-than-light communication, and it is only when the measurements are combined after the fact that the non-locality becomes manifest.
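
Here's a rough classical sketch of the balanced homodyne idea (my own toy model; it ignores the quantum noise, which is where the real subtlety lives): subtracting the two detector currents cancels the DC terms and leaves a beat term proportional to the signal quadrature selected by the LO phase.

    # Balanced homodyne sketch (classical toy model, quantum noise ignored).
    # Mixing a weak signal with a strong same-frequency local oscillator (LO)
    # on a 50/50 beamsplitter and subtracting the detector currents leaves
    # a term ~ A * B * cos(phi - theta): the LO phase picks the quadrature.
    import numpy as np

    t = np.linspace(0, 100, 100_000, endpoint=False)  # 100 optical periods
    omega = 2 * np.pi                                 # normalized frequency
    A, phi = 0.01, 0.3                                # weak signal
    B = 1.0                                           # strong LO

    def difference_current(theta):
        E_sig = A * np.cos(omega * t + phi)
        E_lo = B * np.cos(omega * t + theta)
        # Beamsplitter outputs; detectors see time-averaged intensity.
        I_plus = np.mean(((E_sig + E_lo) / np.sqrt(2)) ** 2)
        I_minus = np.mean(((E_sig - E_lo) / np.sqrt(2)) ** 2)
        return I_plus - I_minus                       # = A * B * cos(phi - theta)

    for theta in (0.0, np.pi / 2):
        print(f"LO phase {theta:.2f} -> {difference_current(theta):+.5f}")
    # theta = 0 reads ~A*cos(phi); theta = pi/2 reads ~A*sin(phi).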

Homodyne measurement seems to be the key thing that makes measurements on single photons possible, and this may be one of those cases where the notion of "collapse" breaks down in favour of "entanglement": the part of the signal wavefunction in Alice's detector doesn't collapse, it just gets entangled with the local oscillator, and because everything is still coherent her results can still be combined with results from the wave function components in Bob's lab. Entanglement with a heat bath emulates collapse; entanglement with a coherent local oscillator does not. [I'm still agnostic on the claim "entanglement solves the measurement problem" because I don't think it properly answers the question "why is there a classical world at all?", but that may be just me.]
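
To make the "entanglement emulates collapse" point concrete, here's a minimal sketch (my own illustration, with a single-qubit "environment" standing in for the heat bath): once an unobserved environment has recorded the system's state, tracing it out wipes the off-diagonal interference terms from the system's density matrix, which is operationally what collapse looks like.

    # Sketch (illustrative): entanglement with an unobserved environment
    # emulates collapse of a superposition.
    import numpy as np

    ket0 = np.array([1.0, 0.0])
    ket1 = np.array([0.0, 1.0])

    # Isolated qubit in superposition: full off-diagonal coherence.
    psi_sys = (ket0 + ket1) / np.sqrt(2)
    print(np.outer(psi_sys, psi_sys))              # off-diagonals = 0.5

    # Environment records the state: (|0>|e0> + |1>|e1>)/sqrt(2), e0 _|_ e1.
    psi = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)
    rho = np.outer(psi, psi).reshape(2, 2, 2, 2)   # indices (s, e, s', e')
    rho_sys = np.trace(rho, axis1=1, axis2=3)      # trace out the environment
    print(rho_sys)                                 # diag(0.5, 0.5): coherence gone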

There are a number of loopholes in previous experiments that this closes. I'm pretty sure it closes all detection efficiency loopholes, and there is a subtle critique of Aspect's experiments regarding the timing of the two-photon cascade that this makes irrelevant. There is a small (and in my view fairly implausible) literature on timing and photon-pair-identification issues that goes after two-photon experiments, and this work is not subject to any of these criticisms. I'm not sure how Joy Christian's work on Clifford algebras would be applied to this experiment either, although I expect they will have something to say about it.


I often find myself wondering if the major problem with any given statistical analysis is whether Bayesian inference ought to have been used at all.


Haven't we already learnt that the world reacts badly to 'being saved'? I cite religious battles.

But seriously - this article forgets that the way 'nerds' measure what they call success is key; by measuring, they get a much better handle on it. In that sense, if you try to save the world like a nerd, you'll at least know whether it's working.


Uneasy lies the head that wears a crown.


Don't worry be happy.


The core of the apple is rarely consumed.


So only the core of the apple is the true apple?

If you want another example, take a bread roll. If you eat a bread roll, is it still a bread roll or does it become part of 'I'?


It's a mass of particles the whole time. It's a bread roll when it plays the causal role, or set of such roles, which we term a bread roll. It's "I" when it plays the causal role we term "I".


Sure, but then the question becomes... Where does "I" start and end? For example, the air in your lungs, is that part of "I"? If it is, where does it stop being part of "I"?


I like it when the nanny state is occasionally thwarted by its toddler citizens' wants: the New Routemasters in London are a great example. Rather than the agonising 10-minute wait while the bus covers the last 20 metres to the stop, you can assess the road yourself and hop off (and hop on if you've just missed it).

http://en.wikipedia.org/wiki/New_Routemaster

Boris Johnson, the London mayor, re-introduced these. I look forward to his PMship after Cameron, mainly because of the points he makes about freedom in this piece, using the Routemaster as a metaphor (from paragraph 7 on).

http://www.telegraph.co.uk/motoring/10067598/Hop-on-and-off-...


> you can assess the road yourself and hop off (and on if you just miss it)

Unless it's evening, or there's a staff shortage, or it's being run on one of several routes which now never have a second staff member and hence always shut the door between stops.

Such a shame.

