• @njm1314@lemmy.world
    7 points · 3 months ago

    Why can’t you be? Why is it okay that it gives you Holocaust-denying talking points? Isn’t that a problem in and of itself? At the very least shouldn’t it contain notations about why it’s wrong?

    • PonyOfWar
      23 points · 3 months ago

      At the very least shouldn’t it contain notations about why it’s wrong?

      I mean, it might. In both screenshots it’s clearly visible that parts of the text are cut off. Why should we trust Twitter neo-Nazis?

      • @njm1314@lemmy.world
        3 points · 3 months ago

        You’re suggesting notes are at the end of the cut-off sections but not at the end of the ones we can see? Cuz there should be notes on the ones we can see. Unless you’re suggesting points one, two, four, and five are correct…

        • PonyOfWar
          6 points · 3 months ago

          So let’s assume the AI actually does have safety checks and will not display Holocaust denial arguments without pointing out why they’re wrong. Maybe initially it will put notes directly after the arguments. But no problem! Just tell it to list the denialist lies first and the clarifications after. Take some screenshots of just the first paragraphs and boom, you have screenshots showing the AI denying the Holocaust.

          My point is that it’s easy to manipulate AI output in a variety of ways to make it show whatever you want. That’s not even taking into consideration the possibility of just editing the HTML, which can be done in seconds. Once again, why should we trust a Nazi?
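
          For what it’s worth, the HTML edit really is a seconds-long job. Here’s a minimal sketch you can paste into the browser devtools console (the ".message" selector is hypothetical; every chat UI uses its own class names):

          ```typescript
          // Grab whatever stands in for the chat bubbles on the page;
          // ".message" is a placeholder selector, not any real site's markup.
          const bubbles = document.querySelectorAll<HTMLElement>(".message");
          const last = bubbles[bubbles.length - 1];
          if (last) {
            // Overwrite the visible text; a screenshot taken now shows
            // whatever the editor typed, with no trace in the image.
            last.textContent = "Any text the screenshotter wants you to read.";
          }
          ```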

          • auraithx
            2 points · 3 months ago

            All frontier models have safety checks that mean they won’t display these arguments regardless of the prompt.
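
            To give a concrete idea of what such a check can look like: here’s a minimal sketch that screens a reply with a hosted moderation endpoint before rendering it. It assumes the official OpenAI Node SDK; the safeDisplay wrapper is hypothetical, and vendors don’t publish the exact safety stack they run internally.

            ```typescript
            import OpenAI from "openai";

            const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

            // Screen a model reply with the moderation endpoint before showing it;
            // flagged content is swapped for a refusal notice.
            async function safeDisplay(reply: string): Promise<string> {
              const mod = await client.moderations.create({ input: reply });
              return mod.results[0].flagged
                ? "[withheld: flagged by the moderation filter]"
                : reply;
            }
            ```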

    • @Oni_eyes@sh.itjust.works
      9 points · 3 months ago (edited)

      It’s not self-aware or capable of morality, so if you tailor a question just right it won’t include the moral framing around it or corrections about the points. Pretty sure we saw a similar thing when people asked it specifically tailored questions on how to commit certain crimes “as a thought experiment” or how to create certain weapons/banned substances “for a fictional story”. It’s strictly a tool, and it comes with the same failings around use, much like firearms.