• Avantir@futurology.today · 17 days ago

    On the topic of alignment, I think you’re talking about alignment with human values, and you’re right that it’s impossible. For that matter, humans aren’t aligned with human values. But you might be able to make an AI aligned to a well defined goal, in the same sort of way your car is aligned to moving forwards when you press the gas pedal, assuming it isn’t broken. Then it becomes a matter of us quibbling about what that goal should be, and about making it well defined, which we currently suck at. As a simple example, imagine we use an AGI to build a video game. I don’t see a fundamental reason we couldn’t align AGIs to building good video games that people enjoy. Granted, even in that case I’m not convinced alignment is possible; I’m just arguing that it might be.

    On the topic of life as the goal, I agree. Life by default involves a lot of suffering, which is not great. I also think there’s a question of whether sentient and/or intelligent life is more valuable than non-sentient life.

    • CanadaPlus@lemmy.sdf.org · 15 days ago

      I’d say having some kind of goal is definitional to AGI, so in a broad sense of “alignment” that would include “paperclip optimisers”, sure, it’s bound to be possible. Natural GI exists, after all.

      Speculatively, if you allow it to do controversial things some of the time, my guess is that there is a way to align it that the average person would agree with most of the time. The trouble is just getting everyone to accept the existence of the edge cases.

      Most versions of utilitarianism give acceptable answers, for example, but there’s the infamous implication that we might consider killing people for their organs. Similarly, deontological rules like “don’t kill people” run into problems in a world where retaliation is usually the only way to stop someone else’s violence. We’re just asking a lot when we want a set of rules that gives perfect options in an imperfect world.