Topological Fixed Point Exercises

Thanks! I find this approach more intuitive than the proof of Sperner's lemma that I found on Wikipedia. Along with nshepperd's comment, it also inspired me to work out an interesting extension that requires only minor modifications to your proof:

*d*-spheres are orientable manifolds, hence so is a decomposition of a *d*-sphere into a complex *K* of *d*-simplices. So we may arbitrarily choose one of the two possible orientations for *K* (e.g. by choosing a particular simplex *P* in *K*, ordering its vertices from 1 to *d* + 1, and declaring it to be the prototypi... (read more)
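A concrete way to see the bookkeeping behind "choosing an orientation" for *K*: two vertex orderings of the same simplex induce the same orientation exactly when they differ by an even permutation. A minimal Python sketch of that parity check (the function names are mine, not from the proof):

```python
def permutation_parity(perm):
    """Return +1 for an even permutation of (0, ..., n-1), -1 for an odd one,
    by counting the transpositions needed to sort it."""
    perm = list(perm)
    parity = 1
    for i in range(len(perm)):
        while perm[i] != i:
            j = perm[i]
            perm[i], perm[j] = perm[j], perm[i]
            parity = -parity
    return parity

def same_orientation(order_a, order_b):
    """True iff two vertex orderings of the same simplex induce the same
    orientation, i.e. differ by an even permutation."""
    index = {v: i for i, v in enumerate(order_a)}
    perm = tuple(index[v] for v in order_b)
    return permutation_parity(perm) == 1
```

For example, a cyclic shift of three vertices is even, so `same_orientation((1, 2, 3), (2, 3, 1))` holds, while swapping two vertices reverses orientation.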

Awesome! I was hoping that there would be a way to do this!

Embedded World-Models

Thanks, this is a very clear framework for understanding your objection. Here's the first counterargument that comes to mind: Minimax search is a theoretically optimal algorithm for playing chess, but is too computationally costly to implement in practice. One could therefore argue that all that matters is computationally feasible heuristics, and modeling an ideal chess player as executing a minimax search adds nothing to our knowledge of chess. OTOH, doing a minimax search of the game tree for some bounded number of moves, then applying a simple boar... (read more)

> OTOH, doing a minimax search of the game tree for some bounded number of moves, then applying a simple board-evaluation heuristic at the leaf nodes, is a pretty decent algorithm in practice.
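The truncated-search idea above can be sketched in a few lines. This is a generic depth-limited minimax; the five callbacks (`moves`, `apply_move`, `evaluate`, `is_terminal`) are hypothetical stand-ins for a concrete game's rules, not anything from the discussion:

```python
def minimax(state, depth, maximizing, moves, apply_move, evaluate, is_terminal):
    """Depth-limited minimax: search `depth` plies ahead, then fall back on
    the heuristic `evaluate` at the frontier nodes."""
    if depth == 0 or is_terminal(state):
        return evaluate(state)
    values = (
        minimax(apply_move(state, m), depth - 1, not maximizing,
                moves, apply_move, evaluate, is_terminal)
        for m in moves(state)
    )
    return max(values) if maximizing else min(values)
```

The point of the example is exactly the one in the quote: the same idealized optimum (full-tree minimax) yields a family of practical approximations parameterized by `depth` and the quality of `evaluate`.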

I've written previously about this kind of argument -- see here (scroll down to the non-blockquoted text). tl;dr we can often describe the same optimum in multiple ways, with each way giving us a different series that approximates the optimum in the limit. Whether any one series does well or poorly when truncated to N terms can't be explained by saying "... (read more)

Realism about rationality

I didn't mean to suggest that the possibility of hypercomputers should be taken seriously as a physical hypothesis, or at least, any more seriously than time machines, perpetual motion machines, faster-than-light travel, etc. And I think it's similarly irrelevant to the study of intelligence, machine or human. But in my thought experiment, the way I imagined it working was that, whenever the device's universal-Turing-machine emulator halted, you could then examine its internal state as thoroughly as you liked, to make sure everything was consistent... (read more)

Nearly everything you said here was already addressed in my previous comment. Perhaps I didn't explain myself clearly?

I wrote before that "I wonder how you would tell whether it is the hypercomputer you imagine it to be, versus the realization of the same hypercomputer in some non-standard model of ZFC?"

So, the realization of a particular hypercomputer in a non-standard model of ZFC would pass all of your tests. You could examine its internal state or its output any way you like (i.e. ask any question that can be formulated in the language of ZFC) and everything you see would be consistent with ZFC. The number of steps for a machine that shouldn't halt would be a non-standard number, so it would not fit in any finite storage. You could examine some finite subset of its digits (either from the end or from the beginning), for example, but that would not tell you the number is non-standard. For any question of the form "is n larger than some known number n0?" the answer would always be "yes".

Once again, there is a difference of principle. I wrote before that: "...given an uncomputable function h and a system under test f, there is no sequence of computable tests that will allow you to form some credence about the hypothesis f=h s.t. this credence will converge to 1 when the hypothesis is true and 0 when the hypothesis is false. (This can be made an actual theorem.) This is different from the situation with normal computers (i.e. computable h) when you can devise such a sequence of tests."

So, with normal computers you can become increasingly certain your hypothesis regarding the computer is true (even if you never become literally 100% certain, except in the limit), whereas with a hypercomputer you cannot.

Yes, I already wrote that: "Although you can in principle have a class of uncomputable hypotheses s.t. you can asymptotically verify f is in the class, for example the class of all functions h s.t. it is consistent with ZFC that h is the halting function."

Realism about rationality

> This can’t be right ... Turing machines are assumed to be able to operate for unbounded time, using unbounded memory, without breaking down or making errors. Even finite automata can have any number of states and operate on inputs of unbounded size. By your logic, human minds shouldn’t be modeling physical systems using such automata, since they exceed the capabilities of our brains.

It’s not that hard to imagine hypothetical experimental evidence that would make it reasonable to believe that hypercomputers could exist. For example, suppose someone demonstr... (read more)

It is true that a human brain is more precisely described as a finite automaton than a Turing machine. And if we take finite lifespan into account, then it's not even a finite automaton. However, these abstractions are useful models since they become accurate in certain asymptotic limits that are sufficiently useful to describe reality. On the other hand, I doubt that there is a useful approximation in which the brain is a hypercomputer (except maybe some weak forms of hypercomputation like non-uniform computation / circuit complexity).

Moreover, one should distinguish between different senses in which we can be "modeling" something. The first sense is the core, unconscious ability of the brain to generate models, and in particular that which we experience as intuition. This ability can (IMO) be thought of as some kind of machine learning algorithm, and I doubt that hypercomputation is relevant there in any way. The second sense is the "modeling" we do by manipulating linguistic (symbolic) constructs in our conscious mind. These constructs might be formulas in some mathematical theory, including formulas that represent claims about uncomputable objects. However, these symbolic manipulations are just another computable process, and it is only the results of these manipulations that we use to generate predictions and/or test models, since this is the only access we have to those uncomputable objects.

Regarding your hypothetical device, I wonder how you would tell whether it is the hypercomputer you imagine it to be, versus the realization of the same hypercomputer in some non-standard model of ZFC? (In particular, the latter could tell you that some Turing machine halts when it "really" doesn't, because in the model it halts after some non-standard number of computing steps.) More generally, given an uncomputable function h and a system under test f, there is no sequence of computable tests that will allow you to form some credence about the hypothesis f=h s.t. this credence will converge to 1 when the hypothesis is true and 0 when the hypothesis is false.

Generalized to *n* dimensions in my reply to Adele Lopez's solution to #9 (without any unnecessary calculus :)