Natural Turing Machines. The final issue is the choice of universal Turing machine (UTM) to be used as the reference machine. The problem is that there is still subjectivity involved in this choice, since what is simple on one Turing machine may not be simple on another. More formally, it can be shown that for any string $x$, however complex as measured against the UTM $U$, there is another UTM $U'$ relative to which $x$ has Kolmogorov complexity $1$. This result seems to undermine the entire concept of a universal simplicity measure, but it is more of a philosophical nuisance that only arises in specifically designed pathological examples. The Turing machine $U'$ would have to be absurdly biased towards the string $x$, which would require prior knowledge of $x$. The analogy here is hard-coding some arbitrarily long, complex number into the hardware of a computer system, which is clearly not a natural design. To deal with this case we make the soft assumption that the reference machine is natural in the sense that no such specific biases exist. Unfortunately there is no rigorous definition of natural, but it is possible to argue for a reasonable and intuitive definition in this context.

Hutter and Rathmanner 2011, Section 5.9 “Andrey Kolmogorov”
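The pathological construction the excerpt alludes to can be sketched in a few lines of Python. This is a toy, not a real UTM: the function names and the simple prefix-coding scheme are my own illustrative choices. The point is only that a machine with a target string wired into its "hardware" assigns that string complexity $1$, while an unbiased base machine does not.

```python
# Toy illustration of the "biased reference machine" construction.
# A real UTM interprets arbitrary programs; here we only model the bias:
# machine U' reserves the one-bit program "1" for a hard-coded target
# string x, and simulates the base machine U on everything else.

from itertools import product

def make_biased_machine(base_machine, x):
    """Return a machine U' with the string x wired into its 'hardware'."""
    def biased(program):
        if program == "1":
            return x                          # one-bit program outputs x
        if program.startswith("0"):
            return base_machine(program[1:])  # simulate U at one bit of overhead
        raise ValueError("invalid program")
    return biased

def K(machine, x, max_len=12):
    """Brute-force Kolmogorov complexity: length of the shortest binary
    program (up to max_len bits) that makes `machine` output x."""
    for n in range(1, max_len + 1):
        for bits in product("01", repeat=n):
            try:
                if machine("".join(bits)) == x:
                    return n
            except ValueError:
                continue
    return None

# A stand-in 'base machine': programs of the form "1" + bits print those bits.
def base(program):
    if program.startswith("1"):
        return program[1:]
    raise ValueError("invalid program")

x = "0110100110010110"  # a string the base machine can only print literally
biased = make_biased_machine(base, x)
print(K(base, x, max_len=len(x) + 1))  # len(x) + 1 bits on the base machine
print(K(biased, x))                    # a single bit on the biased machine
```

Note that `biased` can still compute everything `base` can, at the cost of one extra bit per program, which mirrors the additive constant of the invariance theorem; the bias towards $x$ is the only difference.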

In section 2.4 we saw that Solomonoff’s prior is invariant under both reparametrization and regrouping, up to a multiplicative constant. But there is another form of language dependence, namely the choice of a universal Turing machine.

There are three principal responses to the threat of language dependence. First, one could accept it flat out, and admit that no language is better than any other. Second, one could admit that there is language dependence but argue that some languages are better than others. Third, one could deny language dependence, and try to show that there isn’t any.

For a defender of Solomonoff’s prior, I believe the second option is the most promising. If you accept language dependence flat out, why introduce universal Turing machines, incomputable functions, and other needlessly complicated things? And the third option is not available: there isn’t any way of getting around the fact that Solomonoff’s prior depends on the choice of universal Turing machine. Thus, we shall somehow try to limit the blow of the language dependence that is inherent to the framework. Williamson (2010) defends the use of a particular language by saying that an agent’s language gives her some information about the world she lives in. In the present framework, a similar response could go as follows. First, we identify binary strings with propositions or sensory observations in the way outlined in the previous section. Second, we pick a UTM so that the terms that exist in a particular agent’s language get low Kolmogorov complexity.

If the above proposal is unconvincing, the damage may be limited somewhat by the following result. Let $K_U(x)$ be the Kolmogorov complexity of $x$ relative to universal Turing machine $U$, and let $K_T(x)$ be the Kolmogorov complexity of $x$ relative to Turing machine $T$ (which needn’t be universal). We have that $K_U(x) \leq K_T(x) + C_{TU}$. That is: the difference in Kolmogorov complexity relative to $U$ and relative to $T$ is bounded by a constant $C_{TU}$ that depends only on these Turing machines, and not on $x$. (See Li and Vitanyi (1997, p. 104) for a proof.) This is somewhat reassuring. It means that no other Turing machine can outperform $U$ infinitely often by more than a fixed constant. But we want to achieve more than that. If one picks a UTM that is biased enough to start with, strings that intuitively seem complex will get a very low Kolmogorov complexity. As we have seen, for any string $x$ it is always possible to find a UTM $T$ such that $K_T(x) = 1$. If $K_T(x) = 1$, the corresponding Solomonoff prior $M_T(x)$ will be at least $0.5$. So for any binary string, it is always possible to find a UTM such that we assign that string prior probability greater than or equal to $0.5$. Thus some way of discriminating between universal Turing machines is called for.

Vallinder 2012, Section 4.1 “Language dependence”
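The step in the excerpt from $K_T(x) = 1$ to $M_T(x) \geq 0.5$ is worth spelling out. Taking the standard definition of Solomonoff's prior as a sum over (prefix-free) programs $p$ whose output begins with $x$, the shortest such program alone already contributes $2^{-K_T(x)}$ to the sum:

$$M_T(x) \;=\; \sum_{p \,:\, T(p) = x*} 2^{-|p|} \;\geq\; 2^{-K_T(x)} \;=\; 2^{-1} \;=\; 0.5.$$

(Here $x*$ denotes any output beginning with $x$; this is the usual formulation, which I am assuming matches the one Vallinder uses in his section 2.4.)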

Added to diary 15 January 2018