"Suppose I run a computer program. What does it output? You don’t know the code, so it could do basically anything. You’re missing key information to resolve the question. However, even if you did know the source code, you might still be ignorant about what it would do. You have all the necessary information per se, and a perfect reasoner could solve it instantly, but it might take an unrealistic amount of effort for you to interpret it correctly.
The former kind of uncertainty is empirical. You have to look at the world and make observations about the source code of the program, how my computer interprets the code, etc. Other examples of empirical uncertainty: not knowing what the weather is, not knowing what time it is, not knowing the name of your friend, etc.
The latter kind of uncertainty is logical. Even after you’ve looked at the program and seen the source code, you still might not know what the source code will output. For instance, suppose you saw that the program printed the 173,498th digit of π. You know what the program will do, but you don’t know the results of that process. Other examples of logical uncertainty: not knowing if 19483 is prime, not knowing whether 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 is even, not knowing if 1/1/2000 was a Monday, etc. The bottleneck in these cases isn’t missing data, but rather missing computation - you haven’t yet exerted the required energy to figure it out, and it might not always be worth it with the tools at your disposal."
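To make the distinction concrete, here is a minimal Python sketch (my own illustration, not from the quoted post). The source code is fully visible, so there is no empirical uncertainty left to resolve, yet what it prints is still a matter of computation you haven't done yet.

```python
# Toy illustration of logical uncertainty: reading this code gives you
# all the relevant information, but the output still costs computation.

def is_prime(n: int) -> bool:
    """Check primality by trial division up to sqrt(n)."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

if __name__ == "__main__":
    # Is 19483 prime? The answer is fully determined by this code,
    # but you probably don't know it until the loop finishes.
    print(is_prime(19483))

    # Is a sum of sixteen 1s even? Trivial once computed, unknown until then.
    print(sum([1] * 16) % 2 == 0)
```

Running the script resolves the logical uncertainty without any new observation of the world; the only thing that changed is that the computation got done.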
Let us call the process of “properly” managing logical uncertainty “logical induction,” and reasoners that employ logical induction “logical inductors.”
https://www.lesswrong.com/posts/y5GftLezdozEHdXkL/an-intuitive-guide-to-garrabrant-induction