Stability AI are the people behind Stable Diffusion.
Currently only the 7B version is available, but they are training versions up to 65B, with a 175B model planned. The 7B version already seems quite capable.
Good for capabilities, but quite bad for alignment and race dynamics. What do you think?

Up to a certain size, LLMs are going to be a commodity, and academic/amateur/open-source versions will be available. Currently that scale is around 7B-20B parameters, and it will likely increase soon. However, GPT-4 supposedly cost >$100m to create, of which I've seen estimates that the raw foundation-model training cost was O($40m) [which admittedly will decrease with Moore's Law for GPUs], and there is also a significant cost for filtering the training data, doing instruction-following and safety RLHF, and so forth. It's not clear to me why any organization able to get its hands on that much money, plus the expertise necessary to spend it well, would open-source the result, at least while the result remains near-cutting-edge (leaks are of course possible, as Meta already managed to prove). So I suspect models as capable as GPT-4 will not get open-sourced any time soon (where things are moving fast enough that 'soon' means 'this year, or maybe next'). But there are quite a lot of companies/governments that can devote >$100m to an important problem, so the current situation, where only 3 or 4 companies have LLMs with capabilities comparable to GPT-4, isn't likely to last very long.
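
As a rough illustration of where numbers like that come from, here is a back-of-envelope sketch. Every constant in it (parameter count, token count, GPU throughput, utilization, $/GPU-hour) is an assumption I'm supplying for illustration, not a figure from this thread:

```python
# Back-of-envelope pre-training cost. Every constant here is an
# illustrative assumption, not a reported GPT-4 (or other) figure.

def training_cost(params, tokens, peak_flops=312e12, mfu=0.35,
                  dollars_per_gpu_hour=1.50):
    """Cost of a dense-transformer run via the standard ~6*N*D FLOPs rule."""
    total_flops = 6 * params * tokens            # compute needed for training
    gpu_seconds = total_flops / (peak_flops * mfu)
    gpu_hours = gpu_seconds / 3600
    return gpu_hours, gpu_hours * dollars_per_gpu_hour

# Hypothetical GPT-3-scale run (175B params, 300B tokens) on A100-class GPUs:
hours, dollars = training_cost(175e9, 300e9)
print(f"~{hours:,.0f} GPU-hours, ~${dollars:,.0f}")  # ~800,000 GPU-hours, ~$1.2m
```

Scale the assumed params × tokens up toward the current frontier and the same arithmetic quickly lands in the tens of millions of dollars, which is why the entry price matters.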

As for alignment, it's very unlikely that all sources for commodity LLMs will do an excellent job of persuading them not to tell you how to hotwire cars, not to roleplay AI waifus, and not to simulate 4chan, and some will just release a foundation model with no such training. So we can expect 'unaligned' open-source LLMs. However, none of those are remotely close to civilization-ending problems. The question then is whether Language Model Cognitive Architectures (LMCAs) along the lines of AutoGPT can be made to run effectively on a suitably fine-tuned LLM below GPT-4's scale and can still increase their capabilities to an AGI level, or whether that (if possible at all) requires an LLM of GPT-4 scale or larger. AutoGPT isn't currently that capable when run with GPT-3.5, but generally, if a foundation LLM shows some signs of a capability at a task, suitable fine-tuning can usually greatly increase the reliability with which it performs that task (a sketch of what such fine-tuning might look like follows below).
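
For concreteness, here is a minimal sketch of one common form of such fine-tuning: LoRA instruction tuning of an open checkpoint with the Hugging Face stack. The model name, prompt format, and hyperparameters are all assumptions for illustration, not anything prescribed in this thread:

```python
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "stabilityai/stablelm-base-alpha-7b"  # assumed checkpoint; any causal LM works
tok = AutoTokenizer.from_pretrained(model_name)
tok.pad_token = tok.pad_token or tok.eos_token  # many causal-LM tokenizers lack a pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

# LoRA: train small low-rank adapters instead of all 7B weights.
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM"))

ds = load_dataset("databricks/databricks-dolly-15k", split="train")

def to_features(ex):
    # Fold each instruction/response pair into one training string.
    text = (f"### Instruction:\n{ex['instruction']}\n"
            f"### Response:\n{ex['response']}")
    return tok(text, truncation=True, max_length=512)

ds = ds.map(to_features, remove_columns=ds.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments("stablelm-dolly-lora",
                           per_device_train_batch_size=2,
                           num_train_epochs=1,
                           learning_rate=1e-4),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
)
trainer.train()
```

The point is how cheap this step is relative to pre-training: a run like this fits on a single GPU, which is exactly why shaky capabilities in open foundation models tend not to stay shaky.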

Unlike LLaMA, it's under CC BY-SA 4.0, a copyleft license that allows commercial use. Together with databricks-dolly-15k, which was released under CC BY-SA 3.0 last week, we are about to get an open-source LLM with instruct tuning that follows the compute-optimal (Chinchilla) dataset/model ratio (a 1.5T-token dataset for a 65B model) at the scale of the largest LLaMA. In its LLaMA/Alpaca incarnation, this made the news only 1.5-2 months ago, with merely noncommercial availability.
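
For reference, the Chinchilla heuristic is roughly 20 training tokens per parameter; a quick sanity check of the quoted ratio (the 20:1 rule is the published heuristic, the rest is my arithmetic):

```python
# Chinchilla heuristic: compute-optimal training uses ~20 tokens per parameter.
TOKENS_PER_PARAM = 20

for params in (7e9, 65e9):
    tokens = params * TOKENS_PER_PARAM
    print(f"{params/1e9:.0f}B params -> ~{tokens/1e12:.2f}T tokens")
# 7B  -> ~0.14T tokens
# 65B -> ~1.30T tokens, so a ~1.5T-token dataset is roughly Chinchilla-optimal
```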

It's not totally clear if/how they'll publish the pre-training dataset, though. And their currently released supervised instruct-model variants use instruct datasets that make them noncommercial (CC BY-NC-SA 4.0).

It's not yet the same kind of thing as ChatGPT or text-davinci-003: there's no RLHF. But the announcement says "we will be kicking off our crowd-sourced RLHF program".
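
For context, RLHF in the InstructGPT style optimizes a KL-regularized objective (whether Stability's crowd-sourced program will follow this exact recipe is an assumption on my part):

$$\max_{\pi_\theta}\;\mathbb{E}_{x\sim D,\;y\sim\pi_\theta(\cdot\mid x)}\big[r_\phi(x,y)\big]\;-\;\beta\,\mathrm{KL}\big(\pi_\theta(\cdot\mid x)\,\|\,\pi_{\mathrm{ref}}(\cdot\mid x)\big)$$

where $r_\phi$ is a reward model trained on human preference comparisons, $\pi_{\mathrm{ref}}$ is the supervised instruct model, and $\beta$ keeps the tuned policy from drifting too far from it. The crowd-sourcing is what supplies the preference comparisons that train $r_\phi$.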

Preliminary benchmarks have shown poor results. It seems that the dataset quality is much worse than what LLaMA had, or maybe there is some other issue.

Yet more evidence that top-notch LLMs are not just data + compute; they require some black magic.

Generally, I am not sure it's bad for safety in the notkilleveryoneism sense: such things prevent agent overhang and make current (non-lethal) problems more visible.

Hard to say whether it's net good or net bad; there are too many factors, and the impact of each is unclear.