• Electricd@lemmybefree.net
      1 day ago

      It’s neither. It’s a design flaw: they’re not designed to handle this type of situation correctly.

      You’re out there spreading misinformation by saying they’re a manipulation tool. No, they were never invented for that.

      • melroy@kbin.melroy.org
        1 day ago

        An LLM is just next-word prediction. The AI doesn’t know whether its output is correct or incorrect, fact or lie.

        So no, I’m not spreading misinformation. The only thing that might be spreading misinformation here is the AI.
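        The next-word-prediction point can be made concrete with a toy sketch (an illustration only: real LLMs are neural networks over tokens, not word-frequency tables, but the principle is the same — the model emits whatever is statistically likely, with no notion of true or false):

```python
from collections import Counter, defaultdict

# Toy "language model": for each word, count which word follows it in the
# training text, then always predict the most frequent successor.
# The model tracks frequency, not truth.
corpus = "the sky is blue the sky is green the sky is blue".split()

successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def predict_next(word):
    # Return the statistically most likely next word -- right or wrong.
    return successors[word].most_common(1)[0][0]

print(predict_next("is"))  # "blue", because it appeared more often than "green"
```

        If the training text had said "green" more often, the model would happily predict that instead — it has no mechanism for checking which continuation is a fact.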