• Greg Clarke@lemmy.ca · 1 year ago

    This isn’t a Large Language Model, it’s an image generation model. And given that these models simply reproduce human biases and stereotypes, doesn’t it follow that humans should also be kept far away from decision-making processes?

    The problem isn’t the tool, it’s the lack of auditable accountability. We should have auditable accountability in all of our important decision-making systems, whether it’s a biased machine or a biased human making the decision.

    This was a shitty implementation of a tool.