Interesting case study making the rounds in ML social media, supporting the thesis that rationality techniques are useful for doing actual ML research. Several implicit references, and an explicit reference to CFAR at the bottom.

3 comments

This is great! I didn't click this at first, since I didn't really know what to expect, but I found this quite useful both from a rationality and from a "learning ML" perspective.

Indeed! I haven't done any machine learning before (although I have a fair amount of general programming experience), but it was still really interesting to read about.

Different fields of engineering vary in how empirical it's possible to be. Experiments are very cheap for software engineers, but very expensive for civil engineers. In civil engineering, every time a "bug" occurs and a structure falls down, there's a good chance it's going in a textbook. But the vast majority of software bugs do not make it into textbooks.

As you move from the frontend (where debugging is easy) to the backend (where debugging is hard), the conservative mindset common in other fields of engineering starts to pay off more and more in software. It becomes more sensible to think things through carefully and try to get everything right on the first try.

Suppose you're in a worst-case backend programming scenario, where a bug in a program that is L lines long requires O(L) time to find. If we assume the number of bugs is proportional to the size of your program, then debugging means fixing O(L) bugs at O(L) time each, which works out to O(L²) time overall. Writing code that doesn't have any bugs to begin with is more like O(L) or O(L log L) in my experience. But it requires what feels like an unreasonable amount of concentration and patience until you get the hang of it.
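To make the scaling concrete, here's a minimal back-of-the-envelope sketch in Python, assuming the claims above (bug count proportional to L, O(L) search time per bug); the constants are arbitrary and only the growth rates matter:

```python
import math

# Back-of-the-envelope comparison of the scaling claims above.
# Assumptions (from the comment, not measurements): each bug takes time
# proportional to L to find, and the number of bugs grows proportionally
# with L, so debug-after-the-fact effort scales like L**2, while careful
# up-front writing scales like L or L*log(L). Constant factors are ignored.

for L in (1_000, 10_000, 100_000):       # program sizes, in lines of code
    debug_after = L * L                   # O(L^2): hunt down every bug later
    careful = L                           # O(L): get it right on the first try
    careful_slow = L * math.log2(L)       # O(L log L): a more pessimistic variant
    print(f"L={L:>7,}  debug~{debug_after:.1e}  "
          f"careful~{careful:.1e}  careful(log)~{careful_slow:.1e}")
```

At 100,000 lines the gap is roughly 10^10 versus 10^5 or 10^6 units of effort, which is the whole argument for getting it right the first time.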