When thinking about AI alignment, I've been operating under the influence of an idea I call philosophical conservatism. I'm in the process of writing up the specific stances I take and why I take them, since I believe alignment research would be better served if others did the same, but in the meantime I'd like to request comments on the general line of thinking. I've formatted the outline of the idea and my reasons for it as numbered statements so you can easily comment on each one independently.