Software engineer and repeat startup founder; best known for Writely (aka Google Docs). Now president of the Golden Gate Institute for AI (https://www.goldengateinstitute.org/), where I'm working to foster constructive expert conversations about open questions in AI and AI policy, and posting at https://secondthoughts.ai/ and https://x.com/snewmanpv.
It's still a data point showing that OpenAI chose to do a large training run, though, right? Even if they're not currently planning to make sustained use of the resulting model in deployment. (Also, my shaky understanding is that GPT-5 is expected to be released in the coming months, and that it may be a distilled and post-trained derivative of GPT-4.5, which would make GPT-5 downstream of a large-compute-budget training process?)