• i_am_not_a_robot@discuss.tchncs.de
    link
    fedilink
    arrow-up
    2
    ·
    3 months ago

    The article isn’t that clear, but the attacker cannot get Slack AI to leak private data via prompt injection directly. Instead, they tell it that the answer to a question is a fake error containing a link which contains the private data, and then when a user that can access the private data asks that question they get the fake error and clicking the link (or automatic unfurling?) causes the private data to be sent to the attacker.