It’s kind of a stretch to call it ChatGPT related, isn’t it? Sounds like pretty typical mania for a nerd.
I had a college friend with severe bipolar disorder. During his episodes he sounded a lot like this. He even spent a lot of time playing with those 2010s pre-LLM chatbots thinking they were learning from him. I wouldn’t call his episode a “Cleverbot-related mental health crisis”.
Yep. None of these publicised ‘AI-related mental health crises’ ever seem to actually show AI being a significant contributor, rather than just an incidental focus.
I know it’s trendy to just say nothing ever happens, but there are some reasons to believe this might be somewhat real.
Data collection is in its early stages, but some people close to those affected report that they showed no prior signs (psychosis often first manifests between ages 20 and 30). We know that encouraging delusions is quite harmful, and chatbots are built to do exactly that. We know people reason poorly about computers and tend to ascribe magical properties and truthfulness to their outputs. We know that spending long stretches alone usually worsens psychosis, and chatbots encourage spending hours alone.
Much is very unclear, but it’s more plausible than “tv square eyes” type moral panics.
Unless the psychosis has a very acute onset, I wouldn’t say modern AI has been widely available for long enough for ‘prior signs’ to be a reliably determinable factor.
I’m not saying it’s impossible, I’m just saying we currently have no actual data to draw conclusions from (we basically can’t on these tiny 2-3 year timescales), and I’ve read no convincing anecdotes where AI was causative rather than incidental.
I daresay the “TV gives you square eyes” moral panics had similarly plausible mechanisms at the time, too - including encouraging isolation and people ascribing magical properties to the screen’s outputs. There are plenty of good, concrete reasons to criticise AI and its use in the modern world, but this does scream baseless moral panic to me.
Reminds me of those articles about QAnon and the people affected by it.