• Evilphd666 [he/him, comrade/them]@hexbear.net · ↑2 · edited · 2 hours ago

    Someone did a jargon-to-layman thingy. Can't copy it all as the page just keeps snapping back to the top, but it will help make more sense of this. To a layman, which I am when it comes to this, it comes off as crazy. However, understanding the Technical Jargon Overload makes it seem far more sensical.

    The issue is, he isn’t naming the system or naming the suspects / bad actors manipulating the system to fuck over IRL people and events, out of fear. Maybe it’s him, maybe it’s not, but he feels culpability in whatever this is as a major investor, and Frankenstein’s monster is now let loose.

    https://xcancel.com/LilithDatura/status/1945607105639321688#m

    Geoff Lewis is the Founder & Managing Partner of Bedrock Capital.

    https://bedrockcap.com/geoff-lewis Archive

    Also closely associated with Peter Thiel.

    https://en.everybodywiki.com/Geoff_Lewis_(businessman)

    Maybe he should divest and confess, naming names etc. Why doesn’t he? I’m sure he has enough to fuck off forever. His firm has ruined lives: an LRP nationwide dragnet, crypto, OpenAI / Grok, AI Human Resources bullshit, the genocidal fascist “Defense” industry. He caters to fascists and wonders why his investments and fellow investors do fascist shit?

    Should have thought about that before you sold your soul, bastard. Name names. Stop beating around the bush.

  • CyborgMarx [any, any]@hexbear.net · ↑5 · 5 hours ago

    Elites bricking their brains with glorified Ask Jeeves simulators is hilarious and I hope it continues

    Also prefigures the likelihood that if actual General Intelligence ever did exist, it probably would take over the world since the capitalist class are this fuckin brain-dead

  • axont [she/her, comrade/them]@hexbear.net · ↑13 · edited · 8 hours ago

    Is there a non-ableist way of saying this? I feel like anyone driven to a fracture in reality specifically because of AI chatbots is a fucking idiot. Like not in a disability way, I mean they’re a complete fucking fool who has limited experience with the world outside of the confines of their own ass.

    I don’t know if I’m just being ableist, but it’s all I can think of. The computer isn’t talking to you; it’s a Speak & Spell. Imagine a person treating a Furby like it’s alive or like it has any insight whatsoever. Imagine someone with one of those spinny talking toys that tells you what sounds the farm animals make, and they think it makes them an expert agricultural scientist. Like you’d have to be a dipshit, right?

    • TankieTanuki [he/him]@hexbear.net · ↑2 · edited · 1 hour ago

      Disagree. I have loved ones with schizophrenia, and this hits close to home.

      Maybe this isn’t a mental health thing, but I don’t think (all of) the concern is disingenuous.

    • fox [comrade/them]@hexbear.net · ↑12 · 7 hours ago

      It is ableist. People falling into AI-induced delusion are already on a thin edge well before the machine that agrees their paranoia is justified comes into play. LLMs are hazardous to those using them as therapists at the best of times, because the things give the impression of being human while never, ever disagreeing or pushing back.

      • Damarcusart [he/him, comrade/them]@hexbear.net · ↑3 · 3 hours ago

        The difference between a regular person struggling and a multi-millionaire like this guy is that this guy has literally any and all resources at his disposal to get help. I find it hard to have sympathy for people struggling through entirely self-inflicted misery, especially when they are responsible for inflicting that same misery on thousands of others.

        • fox [comrade/them]@hexbear.net · ↑3 · 2 hours ago

          Don’t have to sympathize to realize it’s ableist. Being vulnerable to psychosis is a medical condition and being wealthy doesn’t make you immune to it.

          • Damarcusart [he/him, comrade/them]@hexbear.net · ↑1 · 1 hour ago

            That’s true; maybe I have less sympathy because I’ve been diagnosed with psychosis myself and have a bit of a “bootstraps mentality” with regards to it. Maybe I’m being too harsh on myself with that, but it is kind of like… I don’t like the idea some people perpetuate that it can be “ableist” to judge someone struggling with mental illness who has done nothing to improve their situation. I don’t think it is ableism to not want people to wallow in self-inflicted misery. These are just my thoughts; I’m not saying you’re implying that or think like that, and I’m not trying to be aggressive or abrasive, so sorry if it is coming across that way. That really isn’t my intention.

      • axont [she/her, comrade/them]@hexbear.net · ↑4 · edited · 7 hours ago

        Yeah you’re probably right. We already have a mental health crisis and the AI is just a piece of it. I can’t imagine a healthy person believing the LLM has anything meaningful to say unless they have no idea what an LLM is.

        • TreadOnMe [none/use name]@hexbear.net · ↑4 · edited · 5 hours ago

          I will go out on a limb here as someone who was diagnosed with ‘unclassified impulse control issues’ (I didn’t really know how to keep my mouth shut and my emotions in check), but who is now just considered ‘abnormal, but we trust you to medicate yourself properly’, which is a weird place to be in. While there may be an element of ableism in there, a large part of it comes from the fact that these people are on the very low end of the spectrum of their anxiety disorders and yet have found a way, through self-medication, to trigger and intensify it in themselves.

          If they were higher on the spectrum with it, it likely would have triggered earlier, and they would be more self-conscious from having had to deal with it when they were younger. This is, of course, assuming they had access to mental health care at all. The fact of the matter is that they are absolutely correct to be paranoid. We are being passively observed, usually illegally, and our data is then used to feed us content and products all the time, tapping into our greatest insecurities and FOMO to do so. Most anxiety-ridden people I know are anxiety-ridden because they have full understanding of this at all times, and it absolutely paralyzes them; the most common case I have personally witnessed was someone having a literal mental breakdown over choice and calorie anxiety at a fast food menu. Which they are correct to be anxious over, because too much of that stuff is definitely bad for you.

          For myself, massive anxiety hits whenever I enter a big city, because it suddenly dawns on me that there are hundreds of thousands, if not millions, of people living there, most of whom will never be aware of me nor I of them.

          In this way, this kind of LLM-induced anxiety is both stupid in its creation and ableist to meet without sympathy, despite the stupidity of it. TL;DR: It can be both.

  • insurgentrat [she/her, it/its]@hexbear.net · ↑24 · 9 hours ago

    I wouldn’t have predicted that statistical language prediction would let you make passably convincing digital conversation partners.

    I definitely wouldn’t have predicted that doing this would make them sort of ok at a wide variety of text manipulation tasks including structuring freeform text into forms and so on.

    Not in a million years would I have even speculated that using them would be some sort of infohazard that drives people mad. Like what in tarnation? Is this even real life? Are we truly surrounded by so many people with such a poor grip on reality that a few hours with a Mad Libs yes-man machine is enough to tear the doors of perception off their hinges? These machines seem to do what even macrodoses of LSD can’t.

  • LanyrdSkynrd [comrade/them, any]@hexbear.net · ↑13 ↓2 · 9 hours ago

    It’s kind of a stretch to call it ChatGPT-related, isn’t it? Sounds like pretty typical mania for a nerd.

    I had a college friend with severe bipolar disorder. During his episodes he sounded a lot like this. He even spent a lot of time playing with those 2010s pre-LLM chatbots thinking they were learning from him. I wouldn’t call his episode a “Cleverbot-related mental health crisis”.

    • CarbonScored [any]@hexbear.net · ↑11 ↓2 · 9 hours ago

      Yep. None of these publicised ‘AI-related mental health crises’ ever seem to actually show AI being a significant contributor, rather than just an incidental focus.

      • insurgentrat [she/her, it/its]@hexbear.net · ↑11 · 8 hours ago

        I know it’s trendy to go “nothing ever happens”, but there are some reasons to believe this might be somewhat real.

        Data collection is in its early stages, but some people close to those affected report that they showed no prior signs (psychosis often manifests between ages 20 and 30). We know that encouraging delusions is quite bad, and chatbots are built to do exactly this. We know people reason about computers really badly and have a tendency to ascribe magical properties and truthfulness to their outputs. We know that spending a bunch of time alone is usually bad for psychosis, and chatbots encourage spending hours alone.

        Much is very unclear, but it’s more plausible than “tv square eyes” type moral panics.

  • PKMKII [none/use name]@hexbear.net · ↑39 · 12 hours ago

    Social media users were quick to note that ChatGPT’s answer to Lewis’ queries takes a strikingly similar form to SCP Foundation articles, a Wikipedia-style database of fictional horror stories created by users online.

    Dude got deluded into thinking horror copypasta is real because it got filtered through the advanced auto-suggest chatbot. This is such a bizarre time for media propaganda analysis, because usually propaganda is deliberately chosen and filtered by an actor with agency. AI has created propaganda emerging out of ghosts in the machine — brainwashing as a side effect of technology infrastructure.