GarbageShoot [he/him]

  • 0 Posts
  • 5 Comments
Joined 3 years ago
Cake day: August 18th, 2022

  • The usual refutation of this is that the LLM is not “telling” you anything; it is producing a string of characters that, according to its training on its data set, looks like a plausible response to the given prompt. This is like using standard conditioning methods to teach a gorilla to make gestures corresponding to “please kill me now” in sign language, then citing that as justification for killing it. Neither is “communicating” with the symbols in the sense of the semantic meanings humans have assigned to them, because the contexts in which they have observed and used those symbols are utterly divorced from those arbitrary meanings.
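
A toy sketch of the mechanism described above, assuming nothing beyond the standard picture of next-token prediction (real LLMs use transformers over subword tokens, not bigram counts, but the point is the same): a model trained only on co-occurrence statistics can emit fluent-looking continuations with no access to what any of the symbols mean.

```python
import random

def train_bigrams(corpus):
    """Count which token follows which in the training text."""
    counts = {}
    tokens = corpus.split()
    for a, b in zip(tokens, tokens[1:]):
        counts.setdefault(a, []).append(b)
    return counts

def generate(counts, prompt, length, seed=0):
    """Continue the prompt by sampling statistically plausible next tokens.

    The sampler knows nothing about meaning: it only knows which tokens
    tended to follow which in training.
    """
    rng = random.Random(seed)
    out = prompt.split()
    for _ in range(length):
        choices = counts.get(out[-1])
        if not choices:  # token never seen in training: nothing plausible to emit
            break
        out.append(rng.choice(choices))
    return " ".join(out)

# Hypothetical tiny training corpus for illustration.
corpus = "the cat sat on the mat and the dog sat on the rug"
model = train_bigrams(corpus)
print(generate(model, "the", 6))
```

The output reads like grammatical English drawn from the corpus, yet the "model" is just a frequency table; fluency of the output is evidence of pattern-matching, not of comprehension.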