• 0 Posts
  • 6 Comments
Joined 8 months ago
Cake day: November 30th, 2024

  • I think a lot of proponents of objective collapse would have a bone to pick with that, haha, although it’s really just semantics. They are proposing extra dynamics that we don’t understand and can’t yet measure.

    Any actual physicist would agree that objective collapse has to modify the dynamics; it’s unavoidable once you write down an objective collapse model and actually look at the mathematics. No one in the physics community would dispute that GRW or the Diósi–Penrose model technically makes different predictions, though, and in fact the people who have proposed these models often view this as a positive thing, since it makes them testable rather than just philosophy.

    How the two theories deviate depends on your specific objective collapse model, because they place their thresholds in different places. GRW is based on a stochastic process whose collapse probability accumulates over time rather than on a sharp threshold, but you should still see statistical deviations from quantum mechanics if you can keep a coherent quantum state going for long enough. The DP model ties the threshold to gravity, which I don’t understand well enough to explain, but I think the rough idea is that if you have sufficient mass/energy in a particular locality it will cause a “collapse,” so if you can conduct an experiment where that mass/energy threshold is met, traditional quantum theory would predict the system could still be coherent whereas the DP model would reject that, and so you’d inherently end up with deviations in the predictions.
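
    For a rough feel of where the GRW deviation would show up, here is a minimal sketch of my own (toy numbers, assuming the standard GRW localization rate of about λ ≈ 10⁻¹⁶ s⁻¹ per nucleon and a superposition wider than the localization length): the probability that an N-nucleon superposition survives a time t with no spontaneous collapse falls off as exp(−Nλt), whereas standard quantum mechanics predicts no intrinsic suppression at all.

    ```python
    import math

    # Toy GRW estimate: probability that an N-nucleon spatial superposition
    # (separated by more than the GRW localization length) survives time t
    # with no spontaneous localization event. Standard QM predicts 1.0.
    LAMBDA = 1e-16  # GRW localization rate per nucleon, in 1/s (standard choice)

    def grw_survival(n_nucleons: float, seconds: float) -> float:
        return math.exp(-n_nucleons * LAMBDA * seconds)

    # single nucleon, large molecule (~1e9 nucleons), ~milligram-scale object (~1e18)
    for n in (1, 1e9, 1e18):
        for t in (1e-3, 1.0, 3600.0):  # millisecond, second, hour
            print(f"N={n:.0e}, t={t:.0e} s: "
                  f"GRW coherence survival ~ {grw_survival(n, t):.3e} (QM: 1.0)")
    ```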

    What’s the definition of interact here?

    An interaction is a local event where two systems become correlated with one another as a result of the event.

    “The physical process during which O measures the quantity q of the system S implies a physical interaction between O and S. In the process of this interaction, the state of O changes…A quantum description of the state of a system S exists only if some system O (considered as an observer) is actually ‘describing’ S, or, more precisely, has interacted with S…It is possible to compare different views, but the process of comparison is always a physical interaction, and all physical interactions are quantum mechanical in nature.”

    The term “observer” is used very broadly in RQM and can apply to even a single particle. It is whatever physical system you are choosing as the basis of a coordinate system to describe other systems in relation to.

    Does it have an arbitrary cutoff like in objective collapse?

    It has a cutoff, but not an arbitrary one. The cutoff is relative to whatever system participates in an interaction. If a system is in a superposition of states and you interact with it, then from your perspective there is a cutoff, because the system now has definite, real values in relation to you. But it does not necessarily have definite, real values in relation to some other isolated system that didn’t interact at all.

    You can make a non-separable state as big as you want.

    Only in relation to things not participating in the interaction. The moment something enters into participation, the states become separable. Two entangled particles are nonseparable up until you interact with them. Although, even for the two entangled particles, from their “perspectives” on each other, they are separable. They are only nonseparable from the perspective of you, who have not interacted with them yet. If you do interact with them, an additional observer who has not interacted with you or the two particles may still describe all three of you in a nonseparable entangled state, up until they interact with it themselves.
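
    To make that bookkeeping concrete, here is a small numpy sketch of my own (a toy model, not something out of Rovelli’s papers): a qubit S in superposition, a second system F that interacts with it (modeled as a CNOT-type correlating interaction), and an outside observer who hasn’t interacted with either. Relative to F, each branch of the interaction assigns S a definite value; relative to the outsider, S and F together are one nonseparable entangled state until the outsider interacts too.

    ```python
    import numpy as np

    # Basis states and a simple correlating interaction (CNOT): the second
    # system F records the qubit S's basis value, i.e. they become correlated.
    ket0 = np.array([1.0, 0.0])
    ket1 = np.array([0.0, 1.0])
    plus = (ket0 + ket1) / np.sqrt(2)          # S starts in a superposition

    CNOT = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]], dtype=float)

    joint = CNOT @ np.kron(plus, ket0)         # S interacts with F (F starts in |0>)
    print("Joint S+F state:", joint)           # (|00> + |11>)/sqrt(2): nonseparable

    # Relative to the outside observer (who hasn't interacted), S alone is just
    # a mixed reduced state -- they cannot assign it a definite value.
    rho = np.outer(joint, joint)
    rho_S = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)
    print("S reduced state (outsider's description):\n", rho_S)

    # Relative to F, each branch of the interaction has a definite S value:
    # conditioning on F having recorded 0 (or 1) leaves S definitely in 0 (or 1).
    for f, ketF in enumerate((ket0, ket1)):
        branch = np.kron(np.eye(2), np.outer(ketF, ketF)) @ joint
        branch = branch / np.linalg.norm(branch)
        print(f"Branch where F recorded {f}: S+F state =", branch)
    ```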

    This is also the first I’ve heard anything about time-symmetric interpretations. That sounds pretty fascinating. Does it not have experimenter “free will”, or do they sidestep the no-go theorems some other way?

    It violates the “free will” assumption because there is no physical possibility of setting up an experiment where the measurement settings cannot potentially influence the system, if you take both the time-forwards and time-reverse evolution seriously. We tend to think that because we place the measurement device after the initial preparation, and because causality only flows in a single time direction, it’s possible for the initial preparation to affect the measurement device but impossible for the measurement device to affect the initial preparation. But this reasoning doesn’t hold if you drop the postulate of the arrow of time, because in the time-reverse, the measurement interaction is the first interaction in the causal chain and the initial preparation is the second.

    Indeed, every single Bell test, if you look at its time-reverse, is unambiguously local and easy to explain classically, because all the final measurements are brought to a single locality, so in the time-reverse, all the information needed to explain the experiment begins in a single locality and evolves towards the initial preparation. Bell tests only appear nonlocal in the time-forwards evolution, and if you discount the time-reverse as having any sort of physical reality, you are forced to conclude that either the system is nonlocal or a real state for the particles independent of observation cannot exist. But if you drop the postulate of the arrow of time, this conclusion no longer follows, although you do end up with genuine retrocausality (as opposed to superdeterminism, which only gives you pseudo-retrocausality), so it’s not like it gives you a classical system.

    So saying we stick with objective collapse or multiple worlds, what I mean is, could you define a non-Lipschitz continuous potential well (for example) that leads to multiple solutions to a wave equation given the same boundary?

    I don’t know, but that is a very interesting question. If you figure it out, I would be interested in the answer.


  • Many of the interpretations of quantum mechanics are nondeterministic.

    1. Relational quantum mechanics interprets particles as taking on discrete states at random whenever they interact with another particle, but only in relation to what they interact with and not in relation to anything else. That means particles don’t have absolute properties: if you measure a particle’s spin to be +1/2, this is not an absolute property, but a property that exists only relative to you/your measuring device. Each interaction leads to particles taking on definite states randomly according to the statistics predicted by quantum theory, but only in relation to things participating in those interactions.

    2. Time-symmetric interpretations explain violations of Bell inequalities by rejecting a fundamental arrow of time. Without it, there’s no reason to evolve the state vector in a single time direction, so they adopt the Two-State Vector Formalism, which evolves it in both directions simultaneously. When you do this, you find it places enough constraints on the particles to give you deterministic values called weak values, but these weak values are not what you directly measure. What you directly measure are the “strong” values. You can interpret it such that every time two particles interact, they take on “strong” values randomly according to a rule called the Aharonov-Bergmann-Lebowitz rule (written out just after this list). This makes time-symmetric interpretations local realist but not local deterministic, as they can explain violations of Bell inequalities through local information stored in the particles, but that local information still only statistically determines what you observe.

    3. Objective collapse models are not really interpretations but new models, because they can’t universally reproduce the mathematics of quantum theory; still, some serious physicists have explored them as possibilities, and they are also fundamentally random. You assume that particles literally spread out as waves until some threshold is met, then they collapse down randomly into classical particles. The reason this can’t reproduce the mathematics of quantum theory is that it implies quantum effects cannot be scaled beyond whatever that threshold is, but no such threshold exists in traditional quantum mechanics, so such a theory must necessarily deviate from quantum predictions at that threshold. However, it is very hard to scale quantum effects to large sizes, so if you place the threshold high enough, you can’t practically distinguish it from traditional quantum mechanics.
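
    For item 2, the Aharonov-Bergmann-Lebowitz rule in its standard textbook form, as I understand it: with |ψ⟩ the pre-selected (forward-evolving) state, |φ⟩ the post-selected (backward-evolving) state, and Π_j the projector onto outcome a_j of an intermediate measurement, the probability of getting a_j is

    ```latex
    P(a_j \mid \psi, \phi) \;=\; \frac{\left|\langle \phi | \Pi_j | \psi \rangle\right|^{2}}{\sum_k \left|\langle \phi | \Pi_k | \psi \rangle\right|^{2}}
    ```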



  • The specific article mentions QSDC, which doesn’t actually exchange a key at all; QKD does exchange a key, but both operate on similar concepts. To measure something requires physically interacting with it, an interaction has to be specified by an operator in QM, and the rules for constructing physically valid operators don’t allow you to construct one that is non-perturbing, so if you measure the qubits in transit you inevitably perturb them in a way that can later be detected.
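
    As a toy illustration of that detectability (a BB84-style simulation of my own, not the QSDC protocol the article describes): if an eavesdropper measures and resends qubits that were prepared in randomly chosen bases, roughly a quarter of the rounds where sender and receiver happened to use the same basis come out wrong, which otherwise never happens, so comparing a sample of those rounds exposes the eavesdropping.

    ```python
    import random

    def measure(bit, prep_basis, meas_basis):
        """Measure a qubit prepared as `bit` in `prep_basis` using `meas_basis`.
        Same basis -> deterministic outcome; different basis -> 50/50 coin flip."""
        return bit if prep_basis == meas_basis else random.randint(0, 1)

    def run(n=100_000, eavesdrop=False):
        errors = sifted = 0
        for _ in range(n):
            bit, a_basis = random.randint(0, 1), random.choice("ZX")
            b_basis = random.choice("ZX")
            if eavesdrop:                       # intercept-resend attack
                e_basis = random.choice("ZX")
                bit_in_flight = measure(bit, a_basis, e_basis)
                received = measure(bit_in_flight, e_basis, b_basis)
            else:
                received = measure(bit, a_basis, b_basis)
            if a_basis == b_basis:              # keep only matching-basis rounds
                sifted += 1
                errors += (received != bit)
        return errors / sifted

    print("error rate, no eavesdropper:  ", run(eavesdrop=False))  # ~0.00
    print("error rate, with eavesdropper:", run(eavesdrop=True))   # ~0.25
    ```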

    But, again, we are talking about “in transit,” that is to say, between nodes. If you and I are doing QKD, and are nodes A and B, we would exchange the qubits over a wire between A and B, and anyone who sniffs the packets in transit would perturb them in a detectable way. But if someone snipped the wire and set up an X and a Y node in the middle, they could make X pretend to be you and Y pretend to be me, so I would exchange a key with X and you would exchange a key with Y, and the key exchange would have occurred over nodes A-X and B-Y and not over A-B.

    The middle-man would then have two keys: they would decrypt the messages sent over A-X with one and re-encrypt them using the second key to transmit over B-Y, and vice versa. Messages sent from A to B would still arrive at B and messages sent from B to A would still arrive at A, but A wouldn’t know the key they established was with X and not B, and B wouldn’t know the key they established was with Y and not A. From their perspectives it would appear as if everything is working normally.
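
    Here is a toy sketch of that relay (nothing to do with the actual QKD machinery; the XOR “cipher” and key strings are made-up stand-ins for whatever symmetric cipher the QKD-established keys would actually drive): the man-in-the-middle holds one key per leg and simply re-encrypts in the middle, so both endpoints see valid traffic end to end while the plaintext sits exposed at the relay.

    ```python
    import itertools

    def xor_cipher(data: bytes, key: bytes) -> bytes:
        """Toy symmetric cipher (XOR keystream), purely for illustration."""
        return bytes(b ^ k for b, k in zip(data, itertools.cycle(key)))

    key_AX = b"key-established-between-A-and-X"   # A thinks this was agreed with B
    key_BY = b"key-established-between-B-and-Y"   # B thinks this was agreed with A

    def mitm_relay(ciphertext_from_A: bytes) -> bytes:
        plaintext = xor_cipher(ciphertext_from_A, key_AX)  # readable at the middle
        print("man-in-the-middle reads:", plaintext)
        return xor_cipher(plaintext, key_BY)               # re-encrypted for B

    msg_from_A = xor_cipher(b"meet at dawn", key_AX)       # A encrypts "for B"
    msg_for_B = mitm_relay(msg_from_A)
    print("B decrypts:", xor_cipher(msg_for_B, key_BY))    # B sees the message fine
    ```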

    You have to have some sort of authentication of the nodes in any security infrastructure. That’s what public key infrastructure is for. A man-in-the-middle attack is basically a form of impersonation, and you can’t fight impersonation with encryption or key distribution algorithms; it’s just a totally different kind of problem. You authenticate people’s identities with signatures. Similarly, on the internet, you authenticate nodes on a network with digital signatures. Anyone can make up a keypair and signature on the spot, so you have to check a provided signature against a public key vouched for by a trusted third party, and that vouching is the job of certificate authorities. That’s what public key infrastructure is, and it’s one of the major backbones of the internet.
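
    A bare-bones sketch of what that signature check buys you, using the Ed25519 primitives from the third-party `cryptography` package (the key names and message contents are made up for illustration; a real PKI additionally chains the public key to a certificate signed by a CA):

    ```python
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # B's long-term identity key. In a real deployment A learns B's *public*
    # key through a CA-signed certificate, not by taking the wire's word for it.
    b_identity = Ed25519PrivateKey.generate()
    b_public = b_identity.public_key()

    key_exchange_msg = b"B's half of the key exchange (public value, nonce, ...)"
    signature = b_identity.sign(key_exchange_msg)

    # A verifies the signed exchange message against B's known public key.
    try:
        b_public.verify(signature, key_exchange_msg)
        print("message really came from B")
    except InvalidSignature:
        print("reject: possible man-in-the-middle")

    # An impersonating node X can generate its own keypair and signature,
    # but it cannot produce one that verifies under B's public key.
    x_impostor = Ed25519PrivateKey.generate()
    forged = x_impostor.sign(key_exchange_msg)
    try:
        b_public.verify(forged, key_exchange_msg)
    except InvalidSignature:
        print("forged signature rejected")
    ```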


  • Both QKD and QSDC are vulnerable to man-in-the-middle attacks. They don’t allow eavesdroppers, but that is not the same thing. An eavesdropper simply sniffs the packets of information transmitting between two nodes. A man-in-the-middle attack sets up two nodes in a network, let’s call them X and Y, and then if A and B want to communicate, X pretends to be B and Y pretends to be A, so A ends up talking to X and B ends up talking to Y while each thinks they are talking to the other.

    You then perform QKD or QSDC twice: once between nodes A and X and once between nodes B and Y. Both runs are valid implementations of the protocol, since B would expect the data to become readable at Y because they falsely think Y is A, and A would expect the data to become readable at X because they falsely think X is B. This, however, allows the data to pass in a completely readable form between nodes X and Y, where the man-in-the-middle can read it.

    It is sort of like if I took your computer and then pretended to be you. It doesn’t matter how good the encryption algorithm is: if everyone thinks I am you, they will send me information meant for you in a way that they intend to be readable when I receive it. A man-in-the-middle attack doesn’t really exploit a flaw in the algorithm itself, but a flaw in who the algorithm is intended for / directed at. Even classical algorithms have the same problem; you can defeat Diffie-Hellman with a man-in-the-middle attack as well.
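
    Since Diffie-Hellman came up, here is a toy numeric sketch of exactly that attack (deliberately small, insecure parameters so it runs instantly; real DH uses standardized groups). The attacker’s node X swaps in its own public value toward A, and node Y does the same toward B, so the attacker ends up sharing one secret with each victim while A and B each believe they completed a normal exchange with the other.

    ```python
    import secrets

    # Toy Diffie-Hellman with a known Mersenne prime modulus, illustration only.
    # Node X impersonates B toward A, node Y impersonates A toward B,
    # matching the A-X / B-Y setup described above.
    p = 2**127 - 1
    g = 3

    def dh_keypair():
        priv = secrets.randbelow(p - 2) + 2
        return priv, pow(g, priv, p)

    a_priv, a_pub = dh_keypair()   # A
    b_priv, b_pub = dh_keypair()   # B
    x_priv, x_pub = dh_keypair()   # attacker's node X (talks to A as "B")
    y_priv, y_pub = dh_keypair()   # attacker's node Y (talks to B as "A")

    # The attacker swaps the public values in transit:
    # A receives x_pub believing it is B's, B receives y_pub believing it is A's.
    a_secret = pow(x_pub, a_priv, p)
    b_secret = pow(y_pub, b_priv, p)
    attacker_secret_with_a = pow(a_pub, x_priv, p)
    attacker_secret_with_b = pow(b_pub, y_priv, p)

    print(a_secret == attacker_secret_with_a)  # True: A-X leg readable by the attacker
    print(b_secret == attacker_secret_with_b)  # True: B-Y leg readable by the attacker
    print(a_secret == b_secret)                # False (overwhelmingly): no direct A-B key
    ```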

    You can only solve it through public key infrastructure. My biggest issue with the “quantum internet” is that I’ve seen very little in the way of scalable quantum PKI. The only algorithm I’ve seen is fundamentally not scalable because the public keys are all consumable. If the intention really is to replace the whole internet, that’s kind of a big requirement. But if the intention is just small-scale secure communication like for internal government stuff, that’s not as big of an issue.


  • I don’t think AI safety is such a big problem that it means we gotta stop building AI or we’ll destroy the world or something, but I do agree there should be things like regulations, oversight, and some specialized people making sure AI is being developed in a safe way, just to help mitigate problems that could possibly come up. There is a mentality that AI will never be as smart as humans, so any time people suggest some sort of policies for AI safety, it gets dismissed as unreasonable for overhyping how good AI is, since supposedly it won’t get to a point of being dangerous for a long time. But if we keep this mentality indefinitely, then eventually, when it does become dangerous, we’d have no roadblocks in place, and it might actually become a problem. I do think completely unregulated AI developed without any oversight or guardrails could in the future lead to bad consequences, but I also don’t think that is something that can’t be mitigated with oversight. I don’t believe, for example, that an AGI will somehow “break free” and take over the world if it is ever developed. If it is “freed” in a way that starts doing harm, it would be because someone allowed that.