

In this case, though, the LLM is doing exactly what you'd expect it to do. It isn't poorly made; it's just been designed to produce outputs that are semantically associated with deception. Unsurprisingly, that means it will generate text resembling science fiction about deceptive AI.
Yes. I swear rationalist nonsense is only taken seriously because its proponents get to hide behind the absurd amounts of money tech companies are dumping into PR. People don't understand the technology, so they don't know to question all the used-car salesmen who call themselves tech entrepreneurs.