Jiro

Comments

Troll Bridge

How can you (in general) conclude something by examining the source code of an agent, without potentially running into the Halting Problem?

[AN #56] Should ML researchers stop running experiments before making hypotheses?

It seems to me that if you expect the results of your experiment to be useful in, and to generalize to, other situations, then it has to be possible to replicate them. Or to put it another way: if the principle you discovered is useful for more than running the same program with a different seed, shouldn't it be possible to test it by some means other than running the same program with a different seed?

[AN #56] Should ML researchers stop running experiments before making hypotheses?

Instead of preregistering all experiments, maybe researchers could run experiments and observe results, formulate a theory, and then preregister an experiment that would test the theory—but in this case I would expect that researchers end up “preregistering” experiments that are very similar to the experiments that generated the theory, such that the results are very likely to come out in support of the theory.

Why would you expect this? Unless you are suggesting that the researchers lie and claim to have rerun the experiment when they didn't, doing a similar experiment again is just called "replication". If the initial result was caused by p-hacking, then the similar experiment won't support the theory. That is exactly why we do replication.
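The point can be made concrete with a small simulation (a sketch of my own, not anything from the post): under a true null hypothesis, searching over many runs for a "significant" result almost always finds one, but a single fresh replication of that result behaves like any other null test and usually fails to reach significance.

```python
from math import erfc, sqrt

import numpy as np

def p_value(sample):
    # Two-sided z-test of "mean == 0" for a sample with known unit variance.
    z = sample.mean() * sqrt(len(sample))
    return erfc(abs(z) / sqrt(2))

rng = np.random.default_rng(0)
n = 30

# "p-hacking": test 50 null datasets and report only the best p-value.
hacked_p = min(p_value(rng.standard_normal(n)) for _ in range(50))

# Replication: one fresh dataset for the single "significant" finding.
replication_p = p_value(rng.standard_normal(n))

print(f"p-hacked result: p = {hacked_p:.4f}")
print(f"replication:     p = {replication_p:.4f}")
```

Since each p-value is uniform under the null, the minimum of 50 of them falls below 0.05 with probability about 1 − 0.95⁵⁰ ≈ 92%, while the replication crosses that threshold only about 5% of the time.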

Also, I notice the term "p-hacking" appears nowhere in your post.