Key Facts:

  • The AI system uses ten categories of social emotions to identify violations of social norms (a rough sketch of this idea follows this list).

  • The system has been tested on two large datasets of short texts, validating its models.

  • This preliminary work, funded by DARPA, is seen as a significant step in improving cross-cultural language understanding and situational awareness.
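
To make the first bullet concrete, here is a minimal sketch of how short texts might be scored against a fixed set of social-emotion categories. It uses an off-the-shelf zero-shot classifier from Hugging Face; the model choice and the ten labels below are illustrative assumptions, not the taxonomy the DARPA-funded system actually uses.

```python
# Illustrative sketch only: score a short text against a set of
# social-emotion categories with a zero-shot classifier.
# The label set is a placeholder, not the system's real taxonomy.
from transformers import pipeline

EMOTION_LABELS = [
    "shame", "guilt", "pride", "embarrassment", "anger",
    "contempt", "disgust", "gratitude", "admiration", "envy",
]

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

def score_social_emotions(text: str) -> dict[str, float]:
    """Return a label -> score mapping for one short text."""
    result = classifier(text, candidate_labels=EMOTION_LABELS,
                        multi_label=True)
    return dict(zip(result["labels"], result["scores"]))

if __name__ == "__main__":
    scores = score_social_emotions(
        "He cut in line and shouted at the cashier.")
    # High scores on emotions such as anger or contempt would be
    # read as a signal that a social norm was violated.
    for label, score in sorted(scores.items(), key=lambda kv: -kv[1]):
        print(f"{label}: {score:.2f}")
```

Zero-shot classification keeps the sketch self-contained; a production system would presumably fine-tune on labeled norm-violation data rather than rely on generic entailment scores.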

  • CheeseNoodle@lemmy.world · 11 months ago

    Still dangerous: an authority could subtly shift those boundaries in order to slowly push your behaviour in a desired direction.

    • betterdeadthanreddit@lemmy.world · 11 months ago

      Definitely a hazard. My ideal solution is something that could be built and evaluated in a way that allows me to know that it does what it’s supposed to do and nothing else. From there, I’d want to run it on my own hardware in an environment under my control. The idea is to add enough layers of protection that it’d be easier and less expensive for that authority to change my behavior by hiring goons to beat me with a wrench. At least then I’ll have a fairly unambiguous signal that it’s happening, but getting to that point would take a significant investment of effort, time and money.