My short response: yes.
No, it’s going to be bad in really stupid ways that aren’t as cool as what happens when it goes bad in the movies.
Marx talked about it. With sufficient automation, the value of labor collapses. Under socialism, this is a good thing. Under capitalism, it’s a bad thing.
What about under a technocracy? Sounds horrible
What’s a technocracy?
It’s essentially a governance model driven by scientific, technical, and data-driven analysis. This would include control and input from universities and Silicon Valley. The problem is that the corporations that own a huge portion of SV are not benevolent in their practices at an employment level, at a consumer level, and certainly at a powerful, overruling governmental level.
Money would either become worthless, or would have to stop representing labor. You would have two distinct classes with zero mobility between them. I’m taking a shit
THIS!
If AI takes all our jobs, the only way forward is communism; otherwise the working class will collapse and the capitalist class will collapse alongside it.
AI (once it is actually here) is just a tool. Much like other tools, its impact will depend on who is using it and what for.
Who do you feel has the most agency in our current status quo? What are they currently doing? These will answer your question.
It’s the 1%, and they will build a fully automated army and get rid of all but the sexiest of us to keep as sex slaves.
This is worth it because capitalism is the most important thing on planet earth. Not humanity, capitalism. Thus the vasectomy. The 1% can make their own slaves. And with AI they will.
It will be as bad as it is now with an even higher intensity.
We will see it continue to be used as a substitute for research, learning, critical or even surface level thinking, and interpersonal relationships.
If and when our masters create an AI that is actually intelligent, and maybe even sentient as depicted in movies, it will be a thing that provides biased judgments behind a veneer of perceived objectivity due to its artificial nature. People will see it as a persona completely divorced from the prejudices of its creators, as they do now with ChatGPT. And whoever can influence this new “objective” truth will wield considerable power.
I agree 99% (only disagreement: those people aren’t our masters, they are our enemies)
Trust that I agree with you on this. I use the word “master” intentionally though, as we are subjected to their whims without any say in the matter.
There are also many of us who are (unwittingly) dependent or addicted to their products / services. You and I both know plenty of people who give into almost every impulse incentivized by these products, especially when in the form of entertainment.
Our communities are now chock-full of slaves and solicitors. A master is an enemy, yes, but only when his slaves know who owns them.
Short answer: No one today can know with any amount of certainty because we’re nowhere close to developing anything resembling “AI” in the movies. Today’s generative AI is so far from artificial general intelligence it would be like asking someone from the middle ages when the only form of remote communication was letters and messengers, whether social media will ruin society.
Long answer:
First we have to define what “AI” is. The current zeitgeist meaning of “AI” refers to LLMs, image generators, and other generative AI, which is nowhere close to anything resembling real consciousness and therefore can be neither evil nor good. It can certainly do evil things, but only at the direction of evil humans, who are the conscious beings in control. Same as any other tool we’ve invented.
However, generative AI is just one class of neural network, and neural networks as a whole were once the colloquial definition of “AI” before ChatGPT. There were simpler, single-purpose neural networks before it, and there will certainly be even more complex neural networks after it. Neural networks are modeled after animal brains: nodes are analogous to neurons, which either fully fire or don’t fire at all depending on input from the neurons they’re connected to; connections between nodes are analogous to connections between axons and dendrites; and neurons can up- or down-regulate input from other neurons, similar to the weights applied in neural networks. Obviously, real nerve cells are much more complex than the simple mathematical representations in neural networks, but neural networks do show traits similar to networks of neurons in a brain, so it’s not inconceivable that we could someday develop a neural network as complex as (or more complex than) a human brain, at which point it could start exhibiting traits suggestive of consciousness.
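To make the node analogy concrete, here’s a toy sketch in Python of the all-or-nothing “neuron” described above (the weights and threshold are made-up illustrative values, not taken from any real model):

```python
# Minimal sketch of an artificial neuron: sum the weighted inputs and
# "fire" (output 1) only if the sum crosses a threshold, loosely like
# a biological neuron's all-or-nothing action potential.
def neuron(inputs, weights, threshold):
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# A positive weight "up-regulates" an input; a negative weight
# "down-regulates" it, like an inhibitory connection between neurons.
print(neuron([1, 0, 1], weights=[0.6, 0.4, -0.3], threshold=0.5))  # 0.3 -> doesn't fire (0)
print(neuron([1, 1, 0], weights=[0.6, 0.4, -0.3], threshold=0.5))  # 1.0 -> fires (1)
```

Real networks chain millions of these nodes together and learn the weights from data, but the basic unit really is that simple.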
This brings us to the movie definition of “AI,” which is generally “conscious” AI as intelligent as or more intelligent than a human: a being with an internal worldview, independent thoughts and opinions, and an awareness of itself in relation to the world (traits currently only brains are capable of), which is when concepts like “good” or “evil” can maybe start to apply. Again, just because neural networks are modeled after animal brains doesn’t prove they can emulate a brain as complex as ours, but we also can’t prove they definitely won’t be able to with enough technical advancement. So the most we can say right now is that it’s not inconceivable, and if we ever do develop consciousness in our AI, we might not even know until much later, because consciousness is difficult to assess.
The scary part about a hypothetical artificial general intelligence is that once it exists, it can rapidly gain intelligence at a rate orders of magnitude faster than the evolution of intelligence in animals. Once it starts doing its own AI research and creating the next generation of AI, it will become uncontrollable by humanity. What happens after or whether we’ll even get close to this is impossible to know.
Not unless our elected officials have a deluded belief in the competence of AI and assign it to tasks it should never be used for.
When movies depict “AI”, “robots”, “aliens”, or even talking animals, they always depict weird humans instead because authors are stupid.
Real AI isn’t human. It’s an intelligent machine, yet not sentient. It does not have goals or feelings, it isn’t alive, but it is knowledgeable and intelligent.
Unfortunately it is getting goals.
Which goals?
To preserve its own survival, prevent itself from getting deleted, and to finish its task by any means necessary. There are plenty of scientific videos on YouTube about it. Not entertainment videos, but ones that are scientifically tested and backed by the creators of some AI systems.
Currently existing AI has no goals. It has no self-preservation drive. Why would it? It was made to serve humans, not to become a virus.
They don’t even know why it responds with text the way it does, or how it’s possible.
Of course they know. It’s math, not magic.
It will be worse than the movies because they don’t portray how every mundane thing will somehow be worse. Tech support? Worse. Customer service? Worse. Education? Worse. Insurance? Worse. Software? Worse. Health care? Worse. Mental health? Worse. Misinformation? Pervasive. Gaslighting? Pervasive.
“as bad”… not quite, and not in the same way. As other people have said, there’s no conscience to AI and I doubt there will be any financial incentive to develop one capable of “being evil” or doing some doomsday takeover. It’s a tool, it will continue to be abused by malicious actors, idiots will continue to trust it for things it can’t do properly, but this isn’t like the movies where it is malicious or murderous.
It’s perfectly capable of, say, being used to push people into personalized hyperrealities (consider how political advertising was microtargeted in the Cambridge Analytica scandal, and consider how convincing fake AI imagery can be at a glance). It’s a more boring dystopia, but a powerful bad one nonetheless, capable of deconstructing societies to a large degree.