29 Comments
Apr 29, 2023·edited Apr 29, 2023

Overnight? Very probably not. But few futurists are actually suggesting it'll happen "overnight" in the literal sense; they're usually speaking comparatively, relative to the pace of progress and advancement in the past.

The thing about technological progress over time is that humans are rather poor at distinguishing linear progress from so-called "exponential progress." The exponential kind is trickier to grasp, but it better describes the way some technologies enable further advancements more rapidly than before, and until recently there wasn't much functional difference in the pace of change between the two.
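To put toy numbers on that difference (just an illustration with made-up growth rates, not a forecast):

```python
# Toy comparison (illustrative only): steady additive progress vs. compounding
# progress from the same starting point. Early on they look alike; later they don't.
linear, exponential = 1.0, 1.0
for year in range(1, 21):
    linear += 1.0          # fixed improvement every year
    exponential *= 1.4     # each advance enables the next (assumed 40%/year)
    if year in (2, 5, 10, 20):
        print(f"year {year}: linear = {linear:.1f}, exponential = {exponential:.1f}")

# year 2:  linear = 3.0,  exponential = 2.0     <- hard to tell apart
# year 20: linear = 21.0, exponential = 836.7   <- shocking in hindsight
```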

Someday, imo likely soon, with one of the next iterations of GPT or some other LLM, someone will figure out the right combination of prompting loop-back methods, or the right way to use a team of AIs with various API access, and they'll start automating portions of a business. Then someone will use it to automate the management of an entire business. And if it generates more profit than human managers, we'll rapidly see entire industries begin automating with autonomous AI agents. Unless we change the profit motives, it's an obvious chain of events, and all it takes is the right industries automating before it becomes a self-improving feedback loop. Again, that process doesn't happen overnight, but looking back on a year's worth of progress, it'll be shocking how much has happened.

author

I largely agree with this -- the economy (and civilization more broadly) is chock-full of self-reinforcing loops, and AI will be incorporated into these loops, and inevitably play a role in accelerating them. But I think this scenario is quite distinct from the apocalypse scenarios that AI Singularity believers describe.


In what way would you say it is distinct? If you agree that linear, rapid progress is possible, and that a self-improving feedback loop is possible, then an AI singularity becomes possible. And with it comes existential risk. No?

author

Is human society composed of unsustainable self-reinforcing feedback loops, accelerating via growth towards its own inevitable collapse and ultimate destruction? Arguably so. Will technologies enabled by progress in AI play a role in these feedback loops, tightening many of them and compounding their effects? Seems likely. Will an AI take off by itself, self-improve vastly more quickly than the rest of the world, and emerge to dominate it? Probably not.

May 30, 2023·edited May 30, 2023

Thanks for the reply

If you have time, I invite you to listen to this interview of Roman Yampolskiy https://www.youtube.com/watch?v=vjPr7Gvq4uI

This is the most comprehensive and compelling refutation of “AGI scepticism” I’ve found, and I’d very much like to hear your take on it! (In fact, to say I’d be delighted if you could prove Yampolskiy wrong would be an understatement.)

author
May 30, 2023·edited May 30, 2023

> Ok, but if we look at the technical aspects of AI development, don’t you think we can extrapolate recent progress into the future? If GPT5 is as big an improvement as was GPT4 from its predecessor, then we will be pretty close to AGI, won’t we?

If by "AGI" you mean "AI at-or-slightly-above the level of the best humans at most tasks", my response is: yes, GPT5 or GPT6 might reach this level, but this will not lead to a superintelligence explosion or x-risk. It will just add some large number of very smart people to society, and thus continue the already-existing trends of societal progress.

If by "AGI" you mean "AI vastly more intelligent than humans": no, as I argue in this essay, the gap between GPT4 and a vastly superhuman AI is not bridgeable by scaling current techniques alone.

> If you have time, I invite you to listen to this interview of Roman Yampolskiy https://www.youtube.com/watch?v=vjPr7Gvq4uI

This is quite long. I skimmed it and did not hear anything that might convince me that sudden, rapid self-improvement is possible via scale alone, which is my primary objection here.


>If by "AGI" you mean "AI at-or-slightly-above the level of the best humans at most tasks", my response is: yes, GPT5 or GPT6 might reach this level, but this will not lead to a superintelligence explosion or x-risk. It will just add some large number of very smart people to society, and thus continue the already-existing trends of societal progress.

Quantitatively, it looks to me like it'll lead to superintelligence (AGI significantly above the level of the best humans at all relevant tasks) within three years at most, due to scaling + massive acceleration of AI R&D driven by all of these very smart people added to society. Could even happen in months rather than years.

Moreover, I don't think superintelligence is necessary for AI takeover, in the same way that e.g. a coup usually doesn't need the entire military to be in on the conspiracy to succeed. Having the entire country obedient to the coup plotters is the win condition, not the conditions-which-lead-to-success. Less metaphorically: Huge number of very smart people, all clones of each other & sharing the same goals, added to society by people eager to give them more influence and responsibility throughout? Hmm, what if they coordinated to accumulate influence/power at an even faster rate than they were being given it?


You make some very interesting points! The whole idea that the data is going to be a bottleneck made me think. As you said, we need to either get bigger better datasets from somewhere, or figure out which datapoints are actually important for learning and focus on those. I wonder if that second problem might produce some solutions that spill over into our ability to teach things to humans as well.


I thought the recent "Adaptive Agent" paper from DeepMind was really impressive progress in DRL.

https://sites.google.com/view/adaptive-agent/

Am I missing something?


Also, what about these papers which prompt the models to produce text or code, then use filtering processes to pick the better outputs as new training data?

https://arxiv.org/pdf/2210.11610.pdf

https://openreview.net/forum?id=SaRj2ka1XZ3

The Toolformer paper is also in this vein (although in a sense it's 'just' doing dramatic augmentation of an existing dataset): https://arxiv.org/abs/2302.04761
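As I understand it, the core recipe in those papers boils down to a loop like the following (my own rough sketch with stand-in stubs, not code from any of the papers):

```python
# Rough sketch of the generate -> filter -> retrain recipe: sample several
# outputs per prompt, keep only those that pass a filter (majority vote, unit
# tests, a reward model), and fine-tune on the survivors. Everything below is
# a stand-in stub, not a real API.
import random
from collections import Counter

def sample_candidates(prompt, k=8):
    # Stand-in for sampling k completions from an LLM.
    return [f"{prompt} -> answer {random.randint(0, 3)}" for _ in range(k)]

def keep_if_consistent(candidates):
    # Filter step: simple majority vote (self-consistency). The code-generation
    # papers instead execute the candidates against tests to decide what to keep.
    best, votes = Counter(candidates).most_common(1)[0]
    return best if votes >= len(candidates) // 2 else None

def self_training_round(prompts):
    new_data = []
    for prompt in prompts:
        kept = keep_if_consistent(sample_candidates(prompt))
        if kept is not None:
            new_data.append((prompt, kept))
    return new_data   # in the papers, the model is then fine-tuned on this

print(self_training_round(["2+2", "capital of France"]))
```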


Thank you for calming these spicy waters of AI hyperbole!


Your argument seems to be based on the claim that with enough data we can model anything with a neural network. To me that looks like saying that a movie is just made of pixels, that those pixels can be tracked from frame to frame, and that one can make sense of them that way. While that may be true, the amount of data and effort needed to represent the world at a sufficient level of resolution can be truly outrageous, not to mention the amount of compute and training time required. To me that looks impractical, and you outlined some of the difficulties above.

This is also not how we humans do things. We reason and create toy models, rather than performing exhaustive data collection in our heads.

Moreover, a lot of the knowledge that you want to create and embed in the neural network already exists as well-defined models implemented in software, and as models in physics, math, etc.

A much more efficient, more robust, and more understandable approach would be not to push for ever more data, but to have reasoning and world models, and to employ existing knowledge and libraries as needed for specific tasks.

Easier said than done, of course.
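To illustrate what I mean by employing existing knowledge (a toy example only):

```python
# Toy illustration: rather than training a network on millions of videos of
# falling objects, a system can simply call the closed-form physics we already
# have in libraries and textbooks.
import math

def time_to_fall(height_m: float, g: float = 9.81) -> float:
    # Classical kinematics: h = (1/2) * g * t^2  =>  t = sqrt(2h / g)
    return math.sqrt(2.0 * height_m / g)

print(f"A ball dropped from 20 m lands after ~{time_to_fall(20.0):.2f} s")  # ~2.02 s
```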


"We Will Eventually Be Bottlenecked By Our Datasets." OpenaAI has already fixed this issue by releasing future gtp 4.1, 4.2, 4.3 and so on. At each version several millions users will give data. So new data will always be available in millions.

author

But will that data be useful?

I could give you a dataset of 10^10000 words, but if every word is just "meow", you won't learn much from it.

Apr 29, 2023·edited Apr 29, 2023

> There are superhuman deep learning chess AIs, like AlphaZero; crucially, these models acquire their abilities by self-play, not passive imitation. It is the process of interleaving learning and interaction that allows the AI’s abilities to grow indefinitely.

I think the crucial factor allowing AlphaZero's leap to superhuman ability was its built-in understanding of the rules of the game (go, chess, shogi).

The "rules" of reality are logic, math, physics (in some sense). What if Large Language Models can learn the rules of the game by tackling logic and math in the form of language and programming language coherence, then self-improve on that, opening the door to doing the same with physics/microbiology (and hence computational advancement/improvement)?

Well, a blueprint for self-improvement is already provided here: https://arxiv.org/abs/2212.08073 in the form of Constitutional AI. A constitution is just another way of defining the rules of the game. If a constitution requires an AI to abide by logic, reason, and mathematical coherence, and the AI is left to self-improve, what could happen? It seems to me there is some low-hanging fruit here in our current paradigm, and this experiment is happening today.
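Roughly, the supervised self-improvement phase of that recipe amounts to something like the following (my own reading of it, with a stand-in `llm` function, not code from the paper):

```python
# Sketch of the Constitutional-AI-style loop as I understand it: draft an
# answer, critique it against a sampled principle, revise, and keep the
# revisions as fine-tuning data for the next iteration of the model.
# `llm` is a hypothetical text-in/text-out stand-in for the model.
import random

PRINCIPLES = [
    "Point out any step in the response that is logically invalid.",
    "Point out any mathematical claim in the response that is incoherent.",
]

def constitutional_round(llm, prompts):
    revised_data = []
    for prompt in prompts:
        draft = llm(prompt)
        principle = random.choice(PRINCIPLES)
        critique = llm(f"{principle}\n\nResponse: {draft}")
        revision = llm(f"Rewrite the response to address this critique:\n{critique}\n\nOriginal: {draft}")
        revised_data.append((prompt, revision))
    return revised_data   # used to fine-tune the next model, then repeat

# Dummy model just to make the sketch executable:
print(constitutional_round(llm=lambda text: text[:40] + "...", prompts=["Prove that 2 + 2 = 5."]))
```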

author
Apr 29, 2023·edited Apr 29, 2023

The rules of reality are *not* logic/math/physics -- you have it precisely backwards. In fact, logic/math/physics are just approximations to the rules of reality that we inferred from *observing* reality. These things are our attempt to model the world, and they are accurate in some domains and invalid in others. (These three examples happen to be among the best theories humans have ever created, but that does not make them unequivocally true.)

If an AI observed thousands of chess games, and did its best to infer the rules of the game from its observations, that approximation of the rules would be analogous to physics. It could then use that approximation to bootstrap AGZ style, and if its rules were spot on, it would succeed at becoming superhuman -- but if its rules were flawed, it would fail. And if the chess observations never included any en passant, it of course would never learn that that rule was legal. So in the end -- it still all comes back to the data.
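To make that concrete with a toy sketch (illustrative only, not from the essay):

```python
# Toy version of "it all comes back to the data": if the rule model is inferred
# purely from observed games, a rule that never shows up in the observations
# (say, en passant) can't be part of it, and self-play against that inferred
# rule model can never recover it, no matter how long it runs.

def infer_legal_moves(observed_games):
    legal = set()
    for game in observed_games:
        legal.update(game)   # the only "legal" moves are ones we've seen
    return legal

observed_games = [
    ["e4", "e5", "Nf3", "Nc6"],
    ["d4", "d5", "c4", "e6"],
]
inferred_rules = infer_legal_moves(observed_games)

print("exd6 e.p." in inferred_rules)   # False -- the bootstrap will never try it
```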

Apr 30, 2023·edited Apr 30, 2023

"These things are our attempt to model the world, and they are accurate in some domains and invalid in others."

Precisely put! I put "rules" in quotes and added the qualifier "in some sense" to try to gesture at that.

But my point does not really rely on the mapping between logic/math/physics and _ultimate_ reality per se; it relies on the correspondence between AI ability and human ability. If AI can learn logic and self-improve on logic (the rules of logic aren't that numerous), it can in theory become as good at logic as AlphaZero is at Go. Likewise, the same might apply to math as well (but, I think, only if logic is mastered first).

"So in the end -- it still all comes back to the data." It seems to me there is enough language-structured data already for an AI to become superhuman at logical and mathematical analysis (with a self-improvement strategy suggested by prior link and the right scaling).

Physics, and science in general, can only be mastered in the real world with testing, experimentation and measurement, though. But an AI that first masters logic and math could begin the process of measuring and experimenting with the real world, which might take it into the area of discovering new ways to speed up computation... which would really start the ball rolling.


Interesting post. As you say, it has been evident for some time that data will be the key bottleneck to LLM scaling. Couple of thoughts on this:

Firstly, to the extent that human experts are never perfect, an LLM might be able to become weakly superhuman just by completely avoiding mistakes and imperfection (as you allude to in the post). For example, maybe being a good rocket engineer requires you to understand propellant chemistry + materials science + supply chain issues + manufacturing process design, but any given human rocket engineer has a less than perfect understanding of all of these areas. Nonetheless, we might expect that an LLM appropriately trained+prompted would be able to marshal the understanding of the best human expert in each of these individual areas, and thus act as a superhuman example of a rocket engineer.

I say "weakly" superhuman above because it seems less likely that a mixture-of-best-experts like this could obtain, say, 10x human IQ. But maybe there could be some super-additive effect where the model is able to spot hithero unexploited links between fields and come with some truly remarkable innovations?

Secondly: since data is the key problem, perhaps a bootstrapping approach could be useful, i.e. you could use an existing LLM to classify the existing input data we have into "high expertise" and "low expertise" subsets. If you then retrain on the "high expertise" subset, you should get an LLM that outputs higher-quality token streams. Then, use this "stage-2" LLM to generate+filter new "high expertise" training data for the training of a stage-3 LLM, etc.
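In sketch form, that staged bootstrapping would look something like this (my own toy pseudocode with stand-in scoring and fine-tuning functions, nothing real):

```python
# Toy sketch of staged bootstrapping: score the corpus for "expertise" with the
# current model, retrain on the top slice, and let each stronger stage curate
# the data for the next. The scorer and fine-tuner below are trivial stand-ins.

def bootstrap_stages(corpus, score, finetune, model, n_stages=3, keep=0.2):
    for _ in range(n_stages):
        ranked = sorted(corpus, key=lambda doc: score(model, doc), reverse=True)
        high_expertise = ranked[: max(1, int(len(ranked) * keep))]
        model = finetune(model, high_expertise)   # stage-(N+1) model
        corpus = high_expertise                   # plus, ideally, newly generated+filtered data
    return model

toy_model = {"stage": 1}
toy_score = lambda model, doc: len(doc)                          # pretend longer == more expert
toy_finetune = lambda model, data: {"stage": model["stage"] + 1}
print(bootstrap_stages(["a", "bb", "ccc", "dddd"], toy_score, toy_finetune, toy_model))  # {'stage': 4}
```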

Perhaps this process could help us train a weakly superhuman mixture-of-best-experts as outlined above, or perhaps the process of recursive bootstrapping could continue, to train something of unbridled expertise?

I guess the key problems are likely to be that 1) inference costs for this bootstrapping approach seem like they will be quite high -- though maybe still cheaper than many other ways of obtaining new training data -- and 2) the bootstrapping could go haywire, so the stage-N model is working with some definition of "expertise" which is unrecognizably distant from what we actually intended.

author
Apr 30, 2023·edited Apr 30, 2023

I agree with your first point here, that is also a manner in which imitation can wind up somewhat superhuman. Crucially, though, it's still bounded. For any particular tasks A and B, we will almost certainly be able to perform both A and B at 1x human, and maybe by learning both together, we get positive interference that actually allows us to perform A at 1.4x human and B at 2.3x human. (Or larger multiples.) But, unless we get more data (either superhuman data on task A or B, or data on a new task C), the improvement just ends there -- there's no way for this effect to propel a fast takeoff.

Regarding the second problem, I don't see any way for this to work using current techniques. A filtering procedure is unnecessary when you train an ability-conditional model (e.g. if we have a chess dataset with some 1000-elo games and some 2000-elo games, then instead of removing the 1000-elo chess games from the dataset, we can just train a model on *all* the games and then ask it to generate 2000-elo games).
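In sketch form, the conditioning idea looks like this (toy data, illustrative only):

```python
# Toy sketch of ability-conditional training: instead of filtering out the weak
# games, prepend each game's strength as a conditioning tag, train on all of it,
# and at generation time prompt with the tag for the strength you want.

raw_dataset = [
    {"elo": 1000, "moves": "e4 e5 Qh5 Nc6 Bc4 Nf6 Qxf7#"},
    {"elo": 2000, "moves": "d4 Nf6 c4 e6 Nc3 Bb4"},
]

# Training strings: the elo tag is part of the context the model learns from.
training_texts = [f"<elo:{game['elo']}> {game['moves']}" for game in raw_dataset]
print(training_texts)

# At inference, ask for the ability level you want; a trained model would
# continue this prompt with ~2000-elo play rather than 1000-elo play.
prompt = "<elo:2000> "
print(prompt + "...")
```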
