• WhatAmLemmy@lemmy.world · 8 months ago

    Give Moore’s law several more cycles, and maybe we’ll have enough computing power to make drop-in replacement humans.

    There seems to be a misunderstanding of how LLMs and statistical modelling work. Neither can solve its accuracy problem: they operate on probability distributions and only find correlations in ones and zeros. LLMs generate the probability distribution internally, without supervision (a “black box”). They’re only as “smart” as the human-generated input data, and will always produce false positives and false negatives. This is unavoidable. There is simply no critical thought or intelligence whatsoever, only mimicry.

    I’m not saying LLMs won’t shake up employment, find their niche, and make many jobs redundant, or that critical advances toward general AI won’t occur. I’m just saying that LLMs can’t replace human decision making or control, and trying to make them do so is a disaster waiting to happen. The best they can do is speed up certain tasks, but a human will always be needed to determine whether the results make (real-world) sense.

    • Drewelite@lemmynsfw.com · 8 months ago

      Feels like a bit of a loop back there. “It can only ever be as smart as human output. So we’ll always need humans.” To… what? Create equivalent mistakes? Maybe LLMs in their current form won’t be the drop-in replacement, but they’re a critical milestone and a sign of what’s around the corner. So these concerns are still relevant.

      • knightly the Sneptaur@pawb.social · 8 months ago

        Feels like a bit of a loop back there. “It can only ever be as smart as human output. So we’ll always need humans.” To… What? Create equivalent mistakes?

        You should have finished reading the comment:

        a human will always be needed to determine if the results make (real world) sense.

        Maybe LLMs in their current form won’t be the drop-in replacement, but it’s a critical milestone and a sign of what’s around the corner.

        You’re right, but not in the way you think.

        It’s only a matter of time before these companies start trying to simulate human brains. We need state recognition of legal personhood for digital humans /before/ corporations start torturing them for profit.