Which of the following sounds more reasonable?

  • I shouldn’t have to pay for the content that I use to tune my LLM model and algorithm.

  • We shouldn’t have to pay for the content we use to train and teach an AI.

By calling it AI, the corporations can advocate for a position that's blatantly pro-corporate and anti-writer/artist, and trick people into supporting it under the guise of technological progress.

  • Trainguyrom@reddthat.com · 1 year ago

    nothing new going on

    Uhhhh, the available models are improving by leaps and bounds every month, and there's quite a bit of tangible advancement happening every week. Even more critically, models that can run on a single computer are very quickly catching up to ones that just a year or two ago required a significant fraction of a hyperscaler's datacenter to operate.

    Unless you mean to say that the current insane pace of advancement is all built off of decades of research, and that a lot of the recent specific advancements happen to be fairly small innovations on top of previous research, infused with a crapload of cash and hype (far more than most researchers could ever dream of).

    • FancyGUI@lemmy.fancywhale.ca · 1 year ago

      all built off of decades of research, and ... fairly small innovations on top of previous research, infused with a crapload of cash and hype

      That’s exactly what I mean! The research projects I was working on 5-7 years ago had already created LLMs like these that were as impressive as GPT. I don’t mean that what’s going on now isn’t impressive; I just mean that there’s nothing fundamentally new. That’s all. It’s similar to the previous AI hype wave around machine learning models, when Google was pushing deep learning. I really just want to point that out.

      EDIT: Typo