• Bolshechick [she/her]@hexbear.net
    1 day ago

    Honestly I’m not sure.

    Rationalists think that the soon-to-come AI god will be a great thing if its values are aligned with ours, and a very bad thing if its values are unaligned with ours. Of course, the problem is that there isn’t an imminent AI god, and LLMs don’t have values at all (in the same sense that we do).

    I guess you could go with “poorly trained”, but I think talking about training AIs and “training data” is also misleading, despite being commonly used.

    Maybe just “badly made”?

    • cecinestpasunbot@lemmy.ml
      13 hours ago

      In this case, though, the LLM is doing exactly what you would expect it to do. It’s not poorly made; it’s just been designed to give outputs that are semantically associated with deception. That, unsurprisingly, means it will generate outputs similar to science fiction about deceptive AI.

    • hexaglycogen [they/them, he/him]@hexbear.net
      21 hours ago

      From my understanding, misalignment is just shorthand for a gap between the action that was intended and the action that was taken, and that seems like a perfectly serviceable word to have. I don’t think “poorly trained” captures stuff like goal mis-specification well (i.e., asking it to clean my house and it washes my laptop and folds my dishes), and it feels a bit too broad. Misalignment refers specifically to when the AI seems to be “trying” to do something it’s just not supposed to be doing, not just to it doing something badly.

      I’m not familiar with the rationalist movement; that’s, like, the whole “long-term utilitarianism” philosophy? I feel that misalignment is a neutral enough term and don’t really think it makes sense to try to avoid using it, but I’m not super involved in the AI sphere.

      • Le_Wokisme [they/them, undecided]@hexbear.net
        14 hours ago

        rationalism is fine when it’s 50 dorks deciding malaria nets are the best use of the money they want to give to charity, blogging about basic shit like “the map is not the territory”, and a few other things that amount to better-than-average critical thinking in a society dominated by fucken end-times christian freaks.

        but they amplified the right-libertarian and chauvinist parts of the ideologies they started out with, and now the lives of (brown, poor) people today don’t matter because of trillions of hypothetical future people. shit makes antinatalism seem reasonable by comparison.