Part of me wonders whether the fact that EVERYONE is saying there’s a huge AI bubble means there’s actually a chance there isn’t one. I was around for the dot-com bubble and the housing bubble behind the GFC, and (almost) no one saw those coming. Also, China is pretty much doing everything right these days, and they seem to believe in AI too.
And I’m not against AI in any and all forms. I think there’s huge potential in translating foreign languages, for instance: it would be cool if people could talk in their native languages and have AI translate everything seamlessly, which I understand is something AI can largely do already. There are also some simple functions it could help with in my job, like searching through and summarizing things from our massive list of policies. But even thinking about the absolute best-case scenarios for AI, I can’t see how the current expectations are even close to what the reality can be.
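To be concrete about the policy-search thing, here’s roughly what I mean: embed the policies, find the ones closest to a question, and hand those to an LLM to summarize. A minimal sketch of the retrieval half, assuming the sentence-transformers library; the policy texts and query are made up, and the summarization step is left as a comment:

```python
# Semantic search over a pile of policy documents (the retrieval half).
import numpy as np
from sentence_transformers import SentenceTransformer

policies = [
    "Travel expenses must be approved by a manager before booking.",
    "Remote work requires a signed agreement, renewed annually.",
    "All customer data must be encrypted at rest and in transit.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose embedder
doc_vecs = model.encode(policies, normalize_embeddings=True)

def search(query: str, top_k: int = 2):
    """Return the policies most semantically similar to the query."""
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = doc_vecs @ q  # cosine similarity, since vectors are normalized
    best = np.argsort(scores)[::-1][:top_k]
    return [(float(scores[i]), policies[i]) for i in best]

print(search("Can I work from home?"))
# The retrieved snippets would then go to an LLM with a "summarize these" prompt.
```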
So what is the argument that we’re not in a bubble, even if we don’t actually believe it?
I do think there’s a massive bubble, but let me play devil’s advocate for a moment.
The first argument for a bubble is that generative AI seems to have plateaued somewhat, with newer LLMs not being particularly better than previous ones. Counterpoint: a couple of years ago I remember seeing some YT video about a generative AI making psychedelic faces. The idea was to take a neural net trained to detect faces and run it on random noise, nudging the noise until the net generated faces instead of detecting them. It was interesting but totally useless; now these things can produce full-motion video of whole scenes, some of which look realistic enough to fool anyone. I would never have guessed this, so what do I know? Maybe there’s still room, some clever idea that makes all this way better? Maybe you can make a hybrid system somehow, something that combines an LLM with an expert system or whatever, and it’ll be way less stupid.
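For anyone curious, the trick in that video is usually called activation maximization: freeze a trained classifier and do gradient ascent on the *input pixels* so the net’s score for some class goes up. A minimal sketch in PyTorch; I’m using an off-the-shelf ImageNet classifier and an arbitrary class rather than an actual face detector, and skipping the regularization tricks that make the results look good:

```python
import torch
from torchvision.models import resnet18, ResNet18_Weights

# Frozen, pretrained classifier -- we only optimize the input.
model = resnet18(weights=ResNet18_Weights.DEFAULT).eval()
for p in model.parameters():
    p.requires_grad_(False)

img = torch.randn(1, 3, 224, 224, requires_grad=True)  # start from pure noise
target_class = 1  # arbitrary ImageNet class, standing in for "face"
opt = torch.optim.Adam([img], lr=0.05)

for step in range(200):
    opt.zero_grad()
    score = model(img)[0, target_class]
    (-score).backward()  # gradient *ascent* on the class score
    opt.step()

# `img` now drifts toward whatever the net "thinks" that class looks like --
# the psychedelic effect. Real demos add jitter/blur penalties for nicer images.
```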
Efficiency: All this generative AI uses lots of power and hardware, and it’s not cost-effective. Well, DeepSeek already proved there’s room for optimization, essentially by compressing the model’s memory footprint and hand-writing low-level GPU instructions. It’s very likely this can be made much more efficient in both memory and computation (and therefore power and hardware), and that’s just basic software optimization. You can also optimize the hardware, and there’s always a chance actual algorithmic shortcuts get found. This stuff could become a lot cheaper to run. Or, alternatively, you could build even bigger models, though that looks like diminishing returns, but who knows. Will this make it profitable? IDK.
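One concrete lever, to show the kind of headroom I mean: weight precision. A sketch of naive post-training quantization in Python; the 70B model size is just an illustrative number, and real schemes (per-channel scales, int4 packing, etc.) are more involved:

```python
import numpy as np

n_params = 70e9  # illustrative model size
for name, bytes_per_weight in [("fp32", 4), ("fp16", 2), ("int8", 1), ("int4", 0.5)]:
    print(f"{name}: ~{n_params * bytes_per_weight / 1e9:.0f} GB just for weights")

# Naive symmetric int8 quantization of one weight matrix:
w = np.random.randn(4096, 4096).astype(np.float32)
scale = np.abs(w).max() / 127.0
w_q = np.round(w / scale).astype(np.int8)   # store this plus the one scale float
w_deq = w_q.astype(np.float32) * scale      # reconstruct at compute time
print("max abs error:", np.abs(w - w_deq).max())
```

Since memory bandwidth is usually the bottleneck at inference time, halving the bytes per weight roughly halves the hardware needed to serve the same model, which is the whole point.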
Use cases: Apart from passable translations and cringey but functional-enough thumbnails and illustrations, I haven’t seen particularly useful applications of generative AI. But that doesn’t mean they don’t exist.
Not all machine learning is “generative AI”: The older discriminative neural nets, like the ones that detect various kinds of objects in images, can be used to improve autonomous drones, for example. Some of the generative-AI research and (hardware) development might, as a side effect, improve this as well. That certainly seems worth investing in, since it might decide who rules the world. Wouldn’t want to fall behind on the military tech.
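This is the kind of non-generative workhorse I mean: an off-the-shelf object detector you could run on a drone’s camera feed. A sketch assuming torchvision; the input here is a random tensor, so it will detect nothing, but in practice you’d feed real frames:

```python
import torch
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn,
    FasterRCNN_ResNet50_FPN_Weights,
)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()

frame = torch.rand(3, 480, 640)  # stand-in for one camera frame, values in [0, 1]
with torch.no_grad():
    out = model([frame])[0]  # dict with 'boxes', 'labels', 'scores'

for box, label, score in zip(out["boxes"], out["labels"], out["scores"]):
    if score > 0.8:  # nothing will clear this bar on random noise
        print(weights.meta["categories"][int(label)], box.tolist())
```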
Intuitively, it seems like a big influx of DoD (or MIC more broadly) contracts could be profitable enough to justify a lot of these investments. But I’m pretty lost on the scales involved: how much money would that realistically bring in, and is it actually enough given the massive scale of investment in all this? Does this pencil out, or is the bubble too big for even that to save it?
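Trying to make my own question concrete, here’s the back-of-envelope version. The capex and budget figures are rough, commonly cited ballparks, and the AI share of defense spending is a pure guess; swap in your own numbers:

```python
annual_ai_capex = 300e9   # ballpark: big-tech AI capex per year
dod_budget = 850e9        # ballpark: total annual US defense budget
ai_share_of_dod = 0.02    # GUESS: fraction of that going to AI contracts

ai_defense_revenue = dod_budget * ai_share_of_dod
print(f"AI defense revenue: ${ai_defense_revenue / 1e9:.0f}B/yr")
print(f"...covering {ai_defense_revenue / annual_ai_capex:.0%} of annual AI capex")
```

Under these numbers, defense contracts cover maybe 6% of annual capex, and even tripling the guessed share leaves a 5x-plus gap. So on my guesses it doesn’t pencil out on its own, but the guesses are doing all the work here.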