"there's a very large correlation between 'not being scary' and 'being commercially viable', so I expect a lot of pressure for non-scary systems"
I agree that there is an optimization pressure here, but I don't think it robustly targets "don't create misaligned superintelligence". Rather, it targets "customers and regulators not being scared", which is very different from "don't make things customers and regulators should be scared of".
My impression is that companies are very short-sighted, optimizing for quarterly and yearly results even when doing so hurts the company's performance in five years, and even when it has negative effects on society. I also think many (most?) companies view regulations not as signals for how they should behave but more like board game rules: if they can change or evade the rules to profit, they will.
I'll also point out that it would probably be in many customers' best interest to be pissed off. Sycophantic products make more money than ones that force people to confront ways they are acting against their own values. I think that's a pretty big problem.
But the thing I'm most worried about is companies succeeding at making solid services and products that work with high reliability without actually solving the alignment problem. At that point it becomes even harder to convince people there is a problem at all, as those companies further insulate themselves from anyone who disagrees with their hyper-niche worldview.