AI Predictions
Some very reasonable predictions so that I can say "I told you so!" in a few years.

I have relatively low credence in AI Doom scenarios, but also in predictions of imminent transformative AI (TAI), such as the recent AI 2027 forecast. I’m much more in agreement with accounts such as this one, this one or this one. I’m currently seeing lots of motivated reasoning[1] and a tendency for new events to only move forecasts nearer, without corresponding updates away when other anticipated events fail to happen.
Relatedly, I’ve noticed a bit of a deficit in theorizing about plausible near-term AI predictions. Everyone wants to come up with P(doom)s or speculate on any number of other far-future possibilities. So let’s see how far we can get on basic assumptions and simple conditionals.
It’s looking more and more like we might get some form of AGI relatively soon, maybe even within 10 years. So the first prediction concerns what we should expect to be the first truly noticeable innovation.
AI spoken language generation as good as or better than speaking to a friend on the phone. I’m 30% confident in this.[2]
Why this? What about artistic capabilities? Or what about AIs becoming capable remote workers or being especially reliable tools for helping with mundane office tasks?
The first one isn’t a revolution. Art is simply not an important enough industry and, as we are seeing now, it is held back by nonsense notions of human superiority. Art is also particularly hostage to perception: even otherwise great AI music has to compete in a crowded industry where success is driven largely by status.
Economic impact has its own quandaries. For one thing, I expect there to be considerable lag time, maybe up to five years, before AI which is technically capable of taking over many tasks is actually widely used to do so. Consider: the first live Skype video, mouse, computer monitor, Excel sheet and collaborative Google Doc were all created and presented in a single demonstration (Douglas Engelbart’s “Mother of All Demos”) in 1968!
But I’m also skeptical that we will immediately get to full remote-worker status. There are numerous, albeit trivial, inconveniences that will slow adoption down considerably. Prompting AI for everything is annoying; much work can be done in the time it takes to do so. AIs would have to get better at absorbing prompt data, which would require longer memories, video feeds, and automatic access to emails and other data. I just don’t see the human-AI feedback loop improving so rapidly.
But assuming we do get this breakthrough sooner, why think that it would be “revolutionary”? Well, simple. We already have existence proof of people finding solace in talking to AI about their problems, most notably with Replika. And we already have existence proof of people enjoying chatting over the phone without requiring facial cues. The current problems are fairly trivial. It needs to not sound like it is reading an essay, instead adopting spoken-word conventions and mannerisms; that merely requires the right training data. It also needs to be participatory in conversations.
I don’t think this requires amazing new leaps in intelligence, only that it is sufficiently lucid. Most conversation just functions as self-affirmation, not some sort of complex dialectic. If there really is a “loneliness epidemic,” it seems plausible that conversation is under-supplied in society. Someone willing to listen to you rant and rave anytime, on any subject, who is unfailingly understanding and helpful? It’ll be everywhere, very quickly.[3]
If we suddenly see widespread adoption of AI and a use case relevant to economic growth, the Chinese state will immediately double, maybe quadruple, its AI R&D funding. I’m 80% confident in this.
This seems obvious to me; I don’t really see the need for an argument. But it does bear a warning: it seems at least plausible, depending on the gap between AGI and explosive economic growth, that we’ll then see China catch up to the US on AI development. I think the US stock market will also respond significantly, though probably with some degree of diminishing marginal returns.
A major US or global recession, or a war (an actual war, not a mere conflict) with China over Taiwan, would result in an AI slowdown if it occurs before AI reaches economically important capabilities. I’m 65% confident in this.
Personally I’m sold on AI. I think that it is the last invention humanity will ever make. But that doesn’t mean that we won’t have another AI winter before major breakthroughs come, or at least a decline in the rate of progress. If we do get major breakthroughs or AI companies otherwise somehow start to turn profits, that will probably justify continued investment. Metaculus currently has a tech bubble popping by 2026 at 21%, a US recession before 2027 at 95%, and a US-China war before 2035 at 15%.
Constant boosterism of AI makes it feel like this couldn’t possibly happen (a perspective the leading AI companies are happy to foster), but it can. There’s a lot of money going into compute, and more is needed; but it’s not obvious to me how much more. Too much, and funds could easily dry up before reaching TAI.
As AI advances, public sentiment about human exceptionalism will fade. I’m 60% confident in this.
This is vague, so it’s hard to determine the best resolution criterion. But I do think this will occur in some capacity, so I’m trying anyway. What I mean by “human exceptionalism” does not entail beliefs about sentience or moral patienthood, but rather beliefs about our relative impressiveness. Human genius and ingenuity will become less compelling. The current arbiters of human exceptionalism will become less and less remarkable. Creativity, underwhelming. Mathematical ability, poor. Common sense, lacking. Photo-realistic art was once the practical goal; then photography was invented. Literacy was once a prime signal of intelligence; then public education brought near-universal literacy. Singers once won acclaim for vocal mastery; then recording studios, remastering and eventually auto-tune came along.[4]
I predict that once we start having useful AI agents, a lot of the current disbelief and incredulity about the far-future outcomes Longtermists talk about will start to fade. Turning speculation into extrapolation should engender a much more earnest attitude toward the possibility of super-intelligence. I think we are now seeing a bit of a cultural lag period, where older generations and industry incumbents don’t quite grasp the scale and nature of the change.
A steady state of disbelief in the sentience of AI will remain unless there is a significant attempt at embodiment of AI, whether through robotics or digital avatars. I predict that embodiment is a necessary condition, but likely not a sufficient one, for moral circle expansion. I’m ~90% confident in this.
I think there is a considerable risk of moral circles amongst the general population not expanding enough to encompass all relevant digital minds, maybe not ever expanding enough. But if and insofar as they do expand, it will be traceable to the introduction of anthropomorphic characteristics. I think there may even be a very stark “before and after” that we can identify, conditional on it happening at all. But I would be very surprised if there were any widespread acceptance, genuine enough that it actually entails major institutions acting differently in costly ways, without this condition being met.
My justification is an analogy to our current incorporation of non-human animals into our moral circles: mammals, especially cute ones with big eyes. Even though farm animals are domesticated, and often intelligent as well, we find it relatively easy to look away from the reality of what we do to them. AIs are going to need to look cute too; otherwise we’ll think of them as no different from tools, just like any other computer software. Sympathetic AI NPCs are likely a big stepping stone.[5]
I think there is something to be said about consistency and predictability too.
People will have a hard time affixing judgements to AIs which completely change with different prompts and lack any long-term memory. We just aren’t made to consider moral status as an incremental quality that can be rapidly turned on or off.
Similarly, AIs will have to be complex enough to elude predictability. It’s the seemingly mechanistic behavior of insects that most leads us to disregard their worth. So I worry a bit that the way models behave when prompted to do things that violate their safety parameters may harm our serious consideration of them as having the sort of autonomous individuality that we identify with personhood. There’s a subtle difference between looking like a particularly morally scrupulous person who doesn’t like nudity or violence and looking like a mere mechanistic process outputting predetermined responses.
If AI-related economic growth of at least 1.8 percentage points per year above the average yearly growth rate comes within the first three years of the Trump Administration, it will be followed by a second Republican administration. I’m 80% confident in this.
I really have no idea what I’m talking about here; this is a very made-up lower bound. But the average growth rate over the last 10 years has been less than 2 percent, so this would mean something in the neighborhood of 3.5 to 3.8 percent growth. I find it hard to imagine such a sudden uptick not coinciding with a very positive public mood about the economy and, by extension, the Trump administration. Trump would have to do something really, really bad to spoil such a win, such as inciting a war.[6] Of course, an AI-related catastrophe of some sort might do that as well, but I put little weight on that outcome at such an early stage. Regardless, the political establishment has largely overlooked this possibility, which strikes me as a considerable oversight. To his credit, at least Ezra Klein seems to be on to this.
[1] Naively, someone may think that motivated reasoning is only something done by people who want an outcome, meaning that people concerned about doom and catastrophe can’t possibly be susceptible. This is a mistake. Everyone desires vindication. No one wants to be the guy who was so cautious that he failed to be praised for his genius. This drives people to favor extreme outcomes, because extreme views are much more attention-grabbing, and a chance to be seen as right feels a lot better than being wrong feels bad.
[2] I consider it the most plausible candidate to be the “first big thing,” but there are countless other possibilities, ranging from coding and simple agents doing remote-work tasks to propaganda, social-media applications of agents like Grok, and many more that I haven’t thought of.
[3] The main hold-up for this seems like it will be memory. But I don’t think it’s a major issue; if each individual “conversation agent” has the owner’s basic personal data and an idiosyncratic personality of its own, it would feel pretty real. While I haven’t actually tested it myself, Replika seems to have already accomplished some of this. As a use case for AI, it strikes me that comparatively marginal technical breakthroughs could improve it massively. Meanwhile, most economically useful tasks are a lot more prone to failure from minor errors which, in conversation, are hardly noticeable.
[4] What seems like a counterpoint to this is the continued high status of and praise for top athletes even though steroids exist, not to mention technology like motorcycles and planes. Similarly, humans still avidly follow chess champions like Magnus Carlsen despite the existence of chess engines, and watch people stream video games which AI can now learn to master quite easily. My defense is that I think these are recognized to be cases of particularly exceptional humans, not of the exceptionalness of humanity. No one is still saying “AI will never replace humans at __!” about cases like these.
[5] Weirdly, some theories of consciousness suggest that embodiment is actually an important feature. So maybe common-sense intuitions are onto something.
[6] Watch people completely forget that he tanked the economy the previous year when the AI boom hits…


I agree with your general outlook and most of your predictions. On #5, there’s currently a lot of variation in how much we consider the welfare of animals, with dogs beloved and pigs neglected. I expect the same for AIs: probably some that interact a lot with humans will be likeable and will have their welfare considered, while those that are less relatable to humans will have their welfare ignored. I’m a bit less confident than you are on #6, just because a world where we’re already seeing that much growth from AI is probably changing in a lot of other ways as well, making me more agnostic.