The fact that EVERYONE is saying there’s a huge AI bubble makes part of me wonder if there’s actually a possibility there isn’t one. Because I was around for the dot-com and GFC bubbles, and no one (not really) saw those coming. Also, China is pretty much doing everything right these days, and they seem to believe in AI too.
And I’m not against AI in any and all forms. Like, I think there’s huge potential in translating foreign languages. I think it would be cool if people could talk in their native languages but have AI translate things perfectly - which I understand is something AI can do. There are some simple functions it could serve in my job (like searching through and summarizing things from our very massive list of policies). But even thinking about the absolute best case scenarios for AI, I can’t see how the current expectations are even close to what the reality can be.
So what is the argument that we’re not in a bubble, even if we don’t actually believe it?
not what you asked, but we still have websites after the dot com bubble. we still have crypto after the bubble. we will still have ai after the bubble. the fact that ai will continue to be a tool doesn’t mean it isn’t a bubble.
to me the fact that 99% of websites are shoehorning in ai is proof that we’re in a bubble.
I was at a manufacturing technology conference recently and one of the fucking CAM system vendors is trying to shoehorn a LLM chatbot into their flagship product. They’ve invested a substantial amount of marketing resources into this if they are out hyping it at trade shows (and even more money pissed away on “consulting” and development, surely). No interactive demo though. Just a handful of slides showing how you can prompt an LLM and it spits out a Python script using their APIs.
It could be worse. They could just have an LLM trying to write G-Code directly with no constraints at all, but either way this is a thing that absolutely nobody in the industry is asking for. This is not how industrial design works. This is not how mechanical engineering works. This is not how contract manufacturing works. This is not how product lifecycles are managed. This is not how designs are specified and communicated between firms. Nobody is trying to fucking vibe code their multi-million dollar precision industrial robots. These things can take over a year to procure, install, and validate. One mistake is all it takes to start a fire or launch a projectile through the operator (or just fuck up months’ worth of bespoke fixtures and tooling on top of an $80,000 repair bill from the machine vendor). Just unknowable amounts of money being set on fire here for a manufacturing product which will never be applied to the actual production of commodities. Which will never make money either for the end user or even the vendor.
It is like they are trying to create a product with the sort of novelty a hobbyist with a 3D printer (and no money) might appreciate, but are targeting companies with millions of dollars of fixed capital with complex contractual obligations and stringent requirements for process and document control, where the liabilities for quality control and potential recalls are a matter of life and death for the firm.
Oh look, a leftist that actually understands how labor gets shit done instead of calling everything a bullshit email job
Use this knowledge wisely! There’s potential for huge profits by investing against the capitalists. And taking money from capitalists is objectively praxis
It’s easy to get caught up in complexities. But the difference between bubbles and booms is just about the balance of time + capital going into a thing vs value that comes out.
Now I have seen some marginal value come out of AI. I personally, reluctantly, use it because I’m forced to if I want to stay competitive in my tech job.
I don’t see value from it that is anywhere close to the level of investment that’s gone in - basically nobody does. For all the memes, it really is just a Markov chain with more dimensions. The whole question of bubble vs. boom is whether AI will be improved to a much more useful state (e.g. “AGI”), or whether we’ve hit an effective dead end with little but marginal improvements left until a real revolutionary step comes in 10+ years.
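The “Markov chain with more dimensions” jab can be made concrete: a bigram Markov model is the degenerate case of next-token prediction, conditioning on just one previous word instead of a long learned context. A minimal sketch (the toy corpus and function names are made up for illustration):

```python
import random
from collections import defaultdict

def train_bigram(corpus):
    """Count which words follow each word (a 1st-order Markov chain)."""
    table = defaultdict(list)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        table[prev].append(nxt)
    return table

def generate(table, start, length, seed=0):
    """Repeatedly sample a next word given only the current word.
    An LLM does the same next-token sampling, but with a learned
    distribution conditioned on a much longer context window."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        choices = table.get(out[-1])
        if not choices:
            break
        out.append(rng.choice(choices))
    return " ".join(out)

corpus = "the model predicts the next word and the next word follows the model"
table = train_bigram(corpus)
print(generate(table, "the", 5))
```

The point of the comparison isn’t that the two are equally capable, just that both are samplers over conditional next-token distributions; the “more dimensions” is doing all the work.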
I don’t personally see AI reaching a fundamentally more useful state anytime soon; I think we’re at the end of this S curve. It’s an interesting discovery with limited useful applications, but as things stand it’s just a “fairly good” guess machine. Given that nobody seems to be making any fundamental improvements anymore, nobody can clearly state what the next step is, we’re just jamming on extra tools and APIs, and 90% of people’s belief in it comes from overhyped marketing, most entities will rightfully give up on it / heavily reduce their usage in a few years.
With that being said, it’s quickly becoming a market that’s too big to fail. I actually think the biggest chance of it NOT being a bubble is governments desperately and artificially propping up the industry.
Everyone saw the dot com bubble coming. Admittedly I was fairly young then, but I remember major financial networks were saying it was a bubble a year before. It was just the techbro crowd talking about a permanent boom.
See this comment chain from @sodium_nitride@hexbear.net and @xiaohongshu@hexbear.net. I think I have also mischaracterized the AI boom as a bubble. Maybe a better way to frame it is as “AI Keynesianism,” you have a bunch of investors (and the government) flush with cash who are putting it into something that they view as potentially profitable at some time in the future, and perhaps more importantly, are forming the backbone of a massive surveillance network. This isn’t like the subprime loan crisis or the NFT bubble where the assets in question were literally fake and backed by nothing.
The problem of course is that this is so far proving to be a tremendous waste of capital as far as your average person is concerned, and it is ensuring that in all areas outside of surveillance/data-tech (or “cloud capital,” as Yanis Varoufakis calls it), the US has less and less of a “real” economy. So when some other crisis emerges, all these data centers and AI CapEx are going to be useless in addressing the needs of the population.
To make it clear, I think AI is a bubble, and I believe it is mostly an unproductive sector (there are studies that have pointed out that companies don’t actually become more productive after implementing AI into their business, though there are some uses that cannot be denied).
However, I am not knowledgeable enough about AI to tell what’s going to happen eventually. As you pointed out, Varoufakis’s cloud capitalism could be an interesting starting point, and I have been reading his work of late to educate myself on that. It is true that there is an entire surveillance industry that co-emerges with the AI bubble, perhaps the “real” part of the AI economy itself.
subprime loan crisis
The problem with the subprime crisis was that it was a bubble fueled by bank lending - which means that the money had to be repaid somehow. So when people could no longer afford to service their debt, the entire industry collapsed.
And even then, the Obama administration made it a point to punish the 9 million American homeowning families (mostly low income black and Hispanic families) while the Wall Street firms that crashed the system got away with impunity.
The situation in the US is quite different now. Since Covid, the US government has created trillions and trillions of dollars out of thin air and spent them into existence, which ultimately prevented an economic depression in the US during Covid. Then the rate hikes after the Ukraine war generated even more interest payments in the trillions as the Biden administration struggled to fight inflation (which was mostly caused by the oil shortage due to the war, supply chain disruption during the pandemic, and monopolists raising their prices, btw, rather than the government having “printed too much money”). Most of that money, of course, went to the top 10%.
Unlike bank lending, government-spent money does not need to be repaid - it can only be taken out of circulation through taxation. As such, there is a large mass of dollars in circulation right now, and despite the Fed attempting to shrink its balance sheet, there is still $6.6T on the asset side compared to the $4T before Covid. That money has to go somewhere, and a lot of it went into the stock market.
Meanwhile, right wing economists thought “money printing” was madness - that’s why they all predicted double digit inflation and a 100% certain recession back in 2023. It didn’t happen because even if most of that money went to rich people, when the volume is in the trillions, even the tiny leakages that trickle down are still enough to keep the system out of recession. This is simply the power of deficit spending, when you don’t care about balancing the budget.
Of course, Trump is a true wildcard and a lot of his actions can be seen as madness that don’t make sense. We can only tell what the consequences will be when enough time has passed.
I think the whole thing is a bubble, but it is a bubble of the whole US economy that is manifesting as a bubble in AI because that’s the hype new industry.
Also, the bubble won’t “pop” in exactly the way that people think it will. It’s a complex system with many powerful vested interests involved. Even the GFC from 2007-2009 involved many interlinked crises that played out over years.
We might already be living in what future historians will call “the fuckening of 2025” or something.
I wouldn’t say the subprime crisis was backed by nothing, as it was ultimately backed by housing construction. The thing is, it didn’t actually result in a housing oversupply, for a number of construction-industry-specific reasons.
Similarly, I don’t think the data center construction boom will result in as much useful infrastructure as people think. And certainly the software side is pretty useless for the intel-gathering tasks they want it to perform.
have AI translate things perfectly - which I understand is something AI can do.
Neural machine translation is what Google has used since 2016, and there’s only so much that more training material and adjustments to the algorithms can do against the fundamental limits of the technology. Neural machine translation is of course incredibly helpful and I’m glad it exists, and it certainly has improved over the past decade, but I also think that “AI” is just a marketing buzzword making this old and established technology look brand new and revolutionary.
So my own personal greatest hopes for “AI” are facial and vocal deepfakes.
It is also worth noting that this type of language model can run on spare CPU cycles on a limited PC (LibreTranslate), vs. the shit they’re doing with these LLMs that require more electricity to operate than New York City.
There aren’t really good arguments that we’re not. The language is so muddied by hype (what is AI? Is a support vector machine AI? A neural network? A Bayesian spam filter? Are we talking about LLMs and diffusion algorithms only?) that it’s hard to know what people are even talking about.
Given that most of the hype is specifically around stacks based on LLMs, which have so far, under serious scientific study, failed to make any real money for anyone, it’s highly unlikely that selling GPUs to run these things is worth more than, like, all medicine production.
The industry has spent other people’s money at a literally unprecedented rate, for tools which so far haven’t been demonstrated as useful for more than roleplaying toys.
It’s quite possible that when the dust settles more careful design will discover some use, but they’re far too expensive to remain just toys, and most people in the field of machine learning consider them a dead end technology.
generative AI has eaten all the legitimate machine learning things and made a bunch of absurd promises that are obviously undeliverable to anyone smarter than an MBA. there’s also the wild circlejerk of investment between nvidia, open ai, and some other companies. taken together that’s what people are talking about with “AI bubble”
Yeah, there’s still a lot of quiet non-generative AI things still humming along in the background and being useful in their way, but those aren’t the types of AI that let CEOs dream of cutting 90 percent of their labor costs, so they aren’t talked about much
Given that most of the hype is specifically around stacks based on LLMs, which have so far, under serious scientific study, failed to make any real money for anyone, it’s highly unlikely that selling GPUs to run these things is worth more than, like, all medicine production.
what is AI?
AI is when Guile can single frame Flash Kick me out of the air and it’s fucking bullshit because I need to charge that shit before I can use it.
LLMs are specifically quite bad at summarising btw, and machine translation using them is unreliable and artless.
Whatever they have on 小红书 works pretty well imho. Maybe not for translating great writing or legal documents but good enough for a fluent conversation.
I remember that manuals translated from Chinese used to be a running gag because the translations were god awful.
There are better translation approaches that don’t fabricate things. But machine translation in general is slowly killing actual translation as a profession and thus vandalising artistic translation
Most solid answer I’ve seen that we’re not in a bubble:
spoiler
“nuh-uhh!!”
“No we’re not.”
-Economist paid by AI company
So what is the argument that we’re not in a bubble, even if we don’t actually believe it?
Sam Altman says that he’s invented God and it’s super scary and cool. Just because he won’t produce proof of this and keeps churning out bangers like “what if you the consumer paid us to give you targeted ads from our sponsors, as like a service?” you’re not gonna say he’s lying are you? That would probably make him sad, and he would have to dry his tears and blow his nose with a stack of hundred dollar bills he could be using as toilet paper instead.
I don’t think it’s going to pop into some huge financial crash, but it will deflate eventually. Probably when there’s some new shiny tech to invest in. Then all the attempts to cram AI into everything will pass, but there will still be slop generators, and there will still be AI companies like Anthropic entrenched in the economy.
Keep in mind that the real players in this field are using the investments to justify building real computer hardware infrastructure that will be very relevant after the AI bubble deflates.
The old world is dying and the new God is gestating
that companies building datacenters don’t invent new money to do it, they have nuts amounts of free cash flow. the stock market could crash (crash as in correcting overvaluation) without job losses, like in 2019.
datacenters’ footprint on jobs is minimal, they employ like 500 people in a giant piece of shit warehouse. rn they also employ construction workers / induce electricity demand, but that’s whatever.
it could only propagate into the real economy if they decide to raise their own prices (or via electricity prices), increasing the bills of all the b2b saas types, and induce an inflation spiral right as the fed lowers rates.
e.g. a painless exit is like: nvidia collapses 3-5x in valuation, new datacenters stop being built, google still prints money, just less of it, cause amortization is a motherfucker.
Closer to 50 people than 500
i’m including supporting workers for those as well, i think 10x is the going number, totally
but for real, even those giant projects are still 20-50 people? i would assume they’ll have to have plumbers/hvac/other guys doing stuff constantly, cause gpu farms are not like storage sites.
Actually now that you point out the higher needs of an AI datacenter it may be different
it could only propagate into real economy
You’ve got to remember that if the holy line goes down, all companies must tithe a blood sacrifice to restore its sacred vigor and ensure it rises once more. It’s a team effort, you know, the only form of solidarity that’s legal in hellworld.
i mean kinda but not really?
they are not doing stocks for loans schemes, thus banks are not exposed (piddly 10s of billions of openai/anthropic is whatever for something like goldman), the construction industry eating shit anyways in home market. Like 2008 was cataclysmic cause both giant industry imploded, people lose homes, thus stopped buying durable treats, so it was shit on all fronts. Here like 50k people lose their jobs and jenson becomes sad.
a stock market crash could somehow wipe out the savings of all of genz in robinhood, and freeze consumption that way, but that’s solvable with endless qe as well, as it’s not the real economy. for the real economy to crash, the top quintile/decile have to stop consuming, for some reason.
I guess boosters will tell you: 1. exponential increases are just around the corner, so just invest a bit longer to be ready for them; 2. even if the consumer market grows sceptical, there are governments still willing to pay for it, so we just need to keep them convinced it’s the future and draw on those contracts.
I do think there’s a massive bubble but let me play devil’s advocate for a moment.
First argument for a bubble is that generative AI seems to have plateaued somewhat, with the newer LLMs not being particularly better than previous ones. Counterpoint: A couple of years ago I remember seeing some YT video about a generative AI making psychedelic faces. The idea was to take a neural net trained to detect faces and run it on random noise to predict (generate) faces instead of detecting them. It was interesting but totally useless, but now these things can produce full motion video of whole scenes, some of which look realistic enough to fool anyone. I would have never guessed this, so what do I know? Maybe there’s still room? A clever idea that makes all this way better? Maybe you can make a hybrid system somehow, something that combines an LLM with an expert system or whatever, and it’ll be way less stupid.
Efficiency: All this generative AI uses lots of power and hardware, and it’s not cost-effective. Well, DeepSeek, by essentially compressing the data and writing low-level GPU instructions, already proved that there’s room for optimization. It’s very likely that this can be made way more efficient both in terms of memory and computation (and therefore power and hardware). And that’s just basic software optimization. You can also do hardware optimization, and there’s always a chance actual algorithmic shortcuts are found. This stuff could become a lot cheaper to run. Or alternatively you could do even bigger models, though that does look like diminishing returns, but who knows. Will this make it profitable? IDK.
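One concrete form the “compression” point takes is weight quantization: storing parameters in 8 bits instead of 32 cuts memory roughly 4x at a small accuracy cost. A stdlib-only sketch of symmetric int8 quantization (the weight values are made up, and real systems use per-channel scales and calibration on top of this):

```python
from array import array

def quantize_int8(weights):
    """Map floats to int8 using one shared scale (symmetric quantization)."""
    scale = max(abs(w) for w in weights) / 127
    # Clamp to the int8 range to guard against float rounding at the edges.
    q = array('b', (max(-127, min(127, round(w / scale))) for w in weights))
    return q, scale

def dequantize(q, scale):
    """Recover approximate floats; error is at most scale/2 per weight."""
    return [x * scale for x in q]

weights = [0.42, -1.27, 0.03, 0.9001, -0.5]
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)

# int8 storage is 1 byte per weight vs 4 for float32 (~4x smaller).
print(q.itemsize, max(abs(a - b) for a, b in zip(weights, approx)))
```

This is only the memory half of the story; the speedups come from hardware that executes int8 math faster than float32, which is part of why inference cost keeps falling.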
Use cases: Apart from passable translations and cringey but functional-enough thumbnails and illustrations, I haven’t seen particularly useful applications of generative AI. But that doesn’t mean they don’t exist.
Not all machine learning is “generative AI”: The old classification neural nets, like the ones that detect various kinds of objects in images, can be used to improve autonomous drones for example. Some of the generative AI research and (hardware) development might as a side effect improve this as well. This certainly seems worth investing in as it might decide who rules the world. Wouldn’t want to fall behind on the military tech.
Intuitively, it seems like a big influx of DoD (or MIC) contracts could be profitable enough to justify a lot of these investments. But I’m pretty lost on the scales involved here: how much money would that realistically provide, and is it actually enough given the massive scale of investment in all this? Does this pencil out, or is the bubble too big for even that to save it?