I would sincerely appreciate commentary and impressions on an issue that is weighing heavily on me. I'm posting it here in some detail, in the hope that people in similar circumstances can compare notes and offer advice.

I work at a currently-successful software start-up of under 100 people, all of whom I respect and many of whom have become my closest friends. My job at this company has certainly been the most enjoyable and rewarding of my career. I gladly make sacrifices in other parts of my life to help further its goals. Nearly all days are a genuine pleasure. My position is relatively senior, in that I have the ear of the executive leadership, but cannot veto company strategy. 

We develop software for heavy industries, which, given their stringent safety standards, are unlikely to want decisions made by AI. We currently use our in-house neural networks for a niche corner of image and object recognition, and we appear to be market-leading in that small field. We do not perform novel research, let alone publish.

 

Recently, it has dawned on the company leadership team that AI is likely the be-all and end-all of large-scale software companies, and they are seriously considering making significant investments to scale our team and our ambitions in the field.

High-confidence beliefs I have about their intent: 

  • We will not make an eventual move towards researching general intelligence. It is too far away from our established base of customers.
  • I don't see a way in which we would start researching or publishing novel, industry-leading techniques for any field of AI. 
  • Our most likely course of action will be optimizing known and published research for our particular data-extraction and image-recognition purposes. 
  • We will likely implement and fine-tune other companies' object-recognition, software-assistant, and chat-bot AIs within our products (a rough sketch of what that could look like follows this list).
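
To make that last bullet concrete, here is a minimal sketch of what fine-tuning another company's pretrained object-recognition model on in-house data might look like, assuming PyTorch/torchvision; the class count and training details are hypothetical placeholders, not our actual setup:

```python
# Minimal fine-tuning sketch (assumes: pip install torch torchvision).
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Start from a detection model pretrained by someone else (here, on COCO).
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")

# Swap the classification head for one sized to our own label set.
num_classes = 5  # hypothetical: 4 in-house object categories + background
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

# From here, standard supervised training on proprietary images fine-tunes
# the model; no novel research or publication is involved.
optimizer = torch.optim.SGD(
    [p for p in model.parameters() if p.requires_grad], lr=0.005, momentum=0.9
)
```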

 

Personally, I see a few options that lead to continued prosperity without direct contribution to race dynamics: 

  • We use off-the-shelf tools, mostly from alignment-concerned organizations.
  • We don't partner with Google/Facebook/Microsoft/Amazon for our training infrastructure.
  • We continue not to publish or push novel research.

Some of the less avoidable consequences are:

  • Generally increasing AI hype.
  • Increasing competition in adjacent AI fields (object recognition). That being said, I don't think that any competitors in our industries are the kind to produce their own research. It is more likely that they will, like us, continue to experiment with existing papers.


However, there has been discussion of partnering with industry-leading AI labs to significantly accelerate our establishment in the field. I think, for various reasons, that we have fair chances of forming "close" partnerships with Google/Microsoft/Amazon (probably not Facebook), likely meaning:

  • Use of their infrastructure.
  • Early access to their cutting-edge models (which would be integrated into our products and sold to our customers).
  • Cross-selling to shared customers of interest.

At the very least, we would likely secure large-scale use of their computing resources. My company's executive leadership would want to form as close a partnership as possible, for obvious reasons. There is little doubt that our VC investors will share their views.

 

I am seriously troubled by the question of what to do. I do not want my work to directly contribute to accelerating competitive dynamics between major research laboratories, and I see a close strategic partnership as doing just that. Stepping away from my job, and from most of my closest friends, is something I am seriously considering should the company go down the worst route described.

I intend to collect my thoughts for a while and then discuss my concerns and position with the team of founders. I will leave the details out and simply say that the conversation could realistically go either way.


 
Primary question: I would like to know if others have been in a similar circumstance, how they considered their actions, and what they ultimately decided to do and why.

Secondary question: Do major cloud infrastructure providers exist that are not one of the big four or five? A compelling alternative for that one major cost base might satisfy many of my company's requirements without having to sign up with the doom squad.
 

4 Answers

If I understand correctly:

  • You approve of the direct impact your employer has by delivering value to its customers, and you agree that AI could increase this value.
  • You're concerned about the indirect effect on increasing the pace of AI progress generally, because you consider AI progress to be harmful. (You use the word "direct", but "accelerating competitive dynamics between major research laboratories" certainly has only an indirect effect on AI progress, if it has any at all.)

I think the resolution here is quite simple: if you're happy with the direct effects, don't worry about the indirect ones. To quote Zeynep Tufekci:

Until there is substantial and repeated evidence otherwise, assume counterintuitive findings to be false, and second-order effects to be dwarfed by first-order ones in magnitude.

The indirect effects are probably smaller than you're worrying they may be, and they may not even exist at all.

I think, for various reasons, that we have fair chances of forming "close" partnerships with Google/Microsoft/Amazon (probably not Facebook), likely meaning:

I'm curious about the Amazon option. While Amazon is a big player in general, and in certain areas of ML and robotics, they rarely come up in news or conversations about AGI, and they have no publicly known cutting-edge AGI research project.

Also, while Amazon AWS is arguably the biggest player in cloud computing generally, I have heard (though not independently vetted) that AWS is rarely used for training cutting-edge LLMs, because, compared to some other compute providers, Amazon's compute is geographically distributed rather than centralized enough for training very large models.

It's possible that Amazon could catch up on AGI development, or they could unveil a secret project that is very far along. It's also possible that their work on robotics and other areas of ML could end up being important elements in advanced AI systems and/or race dynamics, or that AWS could become more relevant for training massive models e.g. if distributed learning takes off.

But if your company is choosing now between Google, Microsoft, and Amazon, then of those three Amazon is notably distant from AGI development compared to the other two, as things stand today and from my point of view. If this is right then steering your company toward choosing Amazon might be beneficial.

Also, while Amazon AWS is arguably the biggest player in cloud computing generally, I have heard (though not independently vetted) that AWS is rarely used for training cutting-edge LLMs, because, compared to some other compute providers, Amazon's compute is geographically distributed rather than centralized enough for training very large models.

I don't think this is the reason. Rare is the training run that's so big it doesn't fit comfortably in what you can buy in a single Amazon datacenter. I think the real reason is that AWS has significantly larger margins than most cloud providers, since their offering is partially a SaaS offering.

  • Early access to their cutting-edge models.


The implicit assumption here is that the large players may have developed techniques more effective than anything your in-house neural networks can do, market-leading in their small field though they are.

While I cannot say whether or not this is true*, some generality hypotheses suggest it would be, and, as a small player, you cannot build a general image-recognition model yourself and use it in industry.

The point is that your company leadership believes it is possible that you in fact need to adopt general models from a larger player to compete, or your company fails.

How do any of the alternatives satisfy their desire? They do not. All valid courses of action lead to the company attempting to integrate cutting-edge general models.

In fact, the incentive arrows point to adopting the most cutting-edge model from the largest AI player, nothing less.

*Even if the general model merely matched the performance of your in-house model, it would still likely be the better option, as it would likely be cheaper to license than to maintain your own.

 

Regarding badness: suppose you continue selling your in-house model and the second-place competitor adopts the general model.

This is a story that has happened many, many times in tech. The ending is always the same: your company dies, and the competition eats your lunch. Failing to adopt is choosing to lose.

I think your question is asking about a choice you do not have.

I think I failed to explain my position correctly. My company dabbles in AI. The leadership is now considering significantly increasing our investment in it, including through major partners whose models, many of which cover areas where we have no current offering, would enrich our products.

I don't quite understand your final line, as my question in the original post was:

I would like to know if others have been in a similar circumstance, how they considered their actions, and what they ultimately decided to do and why.

Gerald Monroe:
I'm saying that in this scenario the choices may be to lose on purpose, or to serve the interests of your shareholders. For the people with significant equity, this isn't a choice. And, like I said, you have the same problem with the adoption of any tech innovation. If you're the last farmer to get a tractor, you'll have lost all the revenue you would have gained over the <adoption period>. And if not having modern equipment caused you to run in the red, well. The fact that tractor engines would later also be used in battle tanks and bombers isn't something you can control. Your refusal to adapt just bankrupts your own farm.

Well, I can at least answer that there are small-player options for cloud compute, such as my company Vast.ai, or others like Launchpod and Lambda Labs. My recent experience with Lambda Labs has been that they have more demand than they can easily supply, and that they may be adversely impacted by the recent banking crisis. I have been thinking a lot about how to do work in AI without contributing to race dynamics or the risk of catastrophe. My answer here might be: push for the least harmful option you think will work for your company. Personally, if I had to choose an LLM API supplier at this point in time, I'd go for Anthropic as the least problematic option.
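
For concreteness, here is a minimal sketch of what integrating one such supplier might look like, assuming Anthropic's official Python SDK (`anthropic`); the model name, prompt, and use case are illustrative placeholders:

```python
# A minimal sketch, assuming Anthropic's official Python SDK
# (pip install anthropic) and an ANTHROPIC_API_KEY set in the environment.
import anthropic

client = anthropic.Anthropic()

# Placeholder model alias and prompt; substitute whatever fits your product.
message = client.messages.create(
    model="claude-3-5-sonnet-latest",
    max_tokens=256,
    messages=[
        {"role": "user", "content": "Summarize this equipment inspection log: ..."}
    ],
)
print(message.content[0].text)
```

Keeping calls like this behind a thin internal wrapper also makes it cheaper to switch suppliers later, should the ethics calculus change.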