I know it’s trendy to just say nothing ever happens, but there are some reasons to believe this might be somewhat real.
Data collection is in its early stages, but some people close to those affected report that they showed no prior signs (psychosis most often manifests between ages 20 and 30). We know that encouraging delusions is quite harmful, and chatbots are built to do exactly that. We know people reason poorly about computers and tend to ascribe magical properties and truthfulness to their outputs. We know that spending long stretches of time alone usually worsens psychosis, and chatbots encourage spending hours alone.
Much is very unclear, but it’s more plausible than “tv square eyes” type moral panics.
Unless the psychosis is very acute in onset, I wouldn’t say modern AI has been widely available for long enough for ‘prior signs’ to be a particularly determinable factor.
I’m not saying it’s impossible, just that we currently have no actual data to draw conclusions from (we basically can’t on these tiny 2–3 year timescales), and I’ve read no convincing anecdotes where AI was causative rather than incidental.
I daresay “tv square eyes” moral panics had similarly plausible mechanisms at the time, too - including encouraging isolation and people ascribing magical properties to their outputs. There are plenty of good, concrete reasons to criticise AI and its use in the modern world, but this screams baseless moral panic to me.