Causality, Free Will, Prognostication, and Fixed Points

John C. Wright explains why time travel is annoying: because it seems to be incompatible with causality and free will. He describes some ways around that particular problem, but the solutions create additional problems.

Prognostication (seeing the future) is different from time travel and can be made compatible with free will and causality. Basically, the idea is this: knowledge of the future affects how the free-willed prognosticator acts, and thus causes changes in the future, which can potentially invalidate what the prognosticator knows about the future.

This problem only prevents knowing future facts that you then change; it does not prevent knowing future facts that you don't change. Obviously, you can know future facts that you are incapable of changing, so that is not a problem. To a certain extent, you can also know future facts that you are capable of changing but do not try to change for some reason or other, but there is a limit to this ability: if you know the future of some event that you can affect, then you might inadvertently affect it even if you do not try to.

But this also is not a problem if your knowledge of the future events causes you to act in such a way as to bring those events about. In other words, there can be a sort of feedback from the future to your actions, which is not a problem so long as the things that your future knowledge makes you do are the things that lead to that future.

There is an analogous situation in function theory called a fixed point. Don't stop reading! I'm going to keep this simple! Imagine you have a function, f, that you are going to apply over and over, starting from 0. One example is f(x)->x^2+2 (that is, "x squared plus 2"). The series you get looks like this:

f(0) = 2, f(2) = 6, f(6) = 38, …

You can express the results as a series:

0, 2, 6, 38, …
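
For the curious, here is a minimal Python sketch of that iteration (the helper name iterate is just mine for this post, nothing standard):

def iterate(f, start, steps):
    """Return the orbit start, f(start), f(f(start)), ... for the given number of steps."""
    values = [start]
    for _ in range(steps):
        values.append(f(values[-1]))
    return values

# The "x squared plus 2" example, starting from 0.
print(iterate(lambda x: x**2 + 2, 0, 3))  # prints [0, 2, 6, 38]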

Compare this to the situation of someone who knows the future. Let's pick an artificial scenario where we can apply the series above to the future: our prognosticator knows that his future number of friends will be 0, but this knowledge causes him to change his behavior in some way, and the new behavior changes his future number of friends to 2. Well, then what he really would have known about the future was that his number of friends would be 2, but knowing this would change his actions again, so now he will act in a way that will produce 6 friends. Well, then he must have known that his number of friends would be 6, and this changes his behavior again… You can see that this reasoning will go on forever and will never converge to a solution: a place where what he knows about the future can lead to that future.
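
Incidentally, for this particular function you can check directly that no self-consistent answer exists: a number that the feedback would leave alone would have to satisfy x^2 + 2 = x, that is, x^2 - x + 2 = 0, and that quadratic has discriminant 1 - 8 = -7, which is negative, so there is no real solution. For this function there is simply nothing the prognosticator could know that would survive his knowing it.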

The problem is the function that we chose. Instead of that one, we could have chosen a function with a fixed point. A fixed point is a point, x, where f(x)=x. For example, consider the function f(x)->x^2-2 (that is, "x squared minus 2"). The results look like this:

f(0) = -2, f(-2) = 2, f(2) = 2, f(2) = 2, …

The generated series is:

0, -2, 2, 2, 2, …
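
In code, the same sort of sketch can just keep iterating until the value stops changing (again, illustrative Python; iterate_until_fixed is my own name for it):

def iterate_until_fixed(f, start, max_steps=10):
    """Iterate f from start until the value stops changing, or give up after max_steps."""
    x = start
    for _ in range(max_steps):
        next_x = f(x)
        if next_x == x:      # f(x) == x, so x is a fixed point
            return x
        x = next_x
    return None              # never settled within the step budget

print(iterate_until_fixed(lambda x: x**2 - 2, 0))  # prints 2
print(iterate_until_fixed(lambda x: x**2 + 2, 0))  # prints None: "x squared plus 2" never settles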

If we go through the same reasoning above, we get this scenario: if our prognosticator knows that he will have 0 friends, then he will change his behavior to end up with -2 friends (I don’t know, two enemies?), but knowing this he will change his behavior to end up with 2 friends, and knowing this he will behave in a way that produces 2 friends, so he can know that he will end up with two friends and actually end up with two friends. In this case, the possible futures converge to a solution.

So prognostication can work like this: you can know things that are out of your control, and you can know things that are in your control, so long as the effects of causality and your future knowledge together converge to a solution.

A remaining issue is that a single function can have multiple fixed points. What if there are a dozen different futures that our prognosticator could bring about, and if he knew any one of them was the future, he would act to bring it about? Which one future will he predict? By hypothesis, he only knows one future, not a selection of futures that he can choose from. Maybe there is something corresponding to a lowest-energy solution, or maybe the result is truly random.
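
As it happens, even the toy function illustrates this: solving x^2 - 2 = x gives x^2 - x - 2 = 0, which factors as (x - 2)(x + 1) = 0, so "x squared minus 2" actually has two fixed points, 2 and -1; the iteration starting from 0 just happens to land on 2. Here is a sketch, in the same illustrative Python, for finding the fixed points of any quadratic f(x) -> ax^2 + bx + c:

import math

def quadratic_fixed_points(a, b, c):
    """Real fixed points of f(x) = a*x^2 + b*x + c, i.e. real solutions of f(x) = x."""
    # f(x) = x is the same as a*x^2 + (b - 1)*x + c = 0
    disc = (b - 1) ** 2 - 4 * a * c
    if disc < 0:
        return []  # no real fixed point at all
    root = math.sqrt(disc)
    return sorted({((1 - b) + root) / (2 * a), ((1 - b) - root) / (2 * a)})

print(quadratic_fixed_points(1, 0, 2))   # prints []: "x squared plus 2" has none
print(quadratic_fixed_points(1, 0, -2))  # prints [-1.0, 2.0]: "x squared minus 2" has two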