• Absaroka@lemmy.world · 3 days ago

    It is powered by the open-source DeepSeek-V3 model, which its researchers claim was developed for less than $6m - significantly less than the billions spent by rivals.

    It’ll be interesting to see if this model was so cheap because the Chinese skipped years of development and got a jump start by stealing tech from other AI companies.

    • cyd@lemmy.world · 3 days ago

      Deepseek put out a highly detailed paper explaining how they optimized their model training, released the model itself, released their reinforcement learning code, put permissive open source licenses on everything… and people wonder if they got there by stealing stuff, because Chinese. Sheesh.

    • 🦄🦄🦄@feddit.org · 3 days ago

      Even if that were true, it’s fair game. After all, the OpenAI models etc. are entirely based on stolen content as well.

    • just_another_person@lemmy.world · 3 days ago (edited)

      It cost so little because all the previous open-source work was already done, and a lot of the research work had already been knocked out. Building models isn’t the time-consuming process it used to be; it’s the training, testing, retraining loop that’s expensive.

      If you’re just building a model that is focused on specific things, like coding, math, and logic, then you don’t need large swathes of content from the internet; you can just train it on already-solved, freely available information. If you want to piss away money on an LLM that also knows how many celebrities each celebrity has diddled, well, that costs a lot more to make.

    • Glasgow@lemmy.ml · 3 days ago

      From someone in the field:

      It lowered training costs by quite a bit. To learn from preference data (what’s termed alignment with human values), we used a very large reward model as a proxy for human feedback.

      They completely got rid of this, and with it the need for very large clusters.

      This has serious implications for spending, though. Big companies that would have had to train foundation models because they couldn’t directly use Meta’s Llama can now just use DeepSeek and move directly to the human/customer alignment phase, which was already significantly cheaper than pretraining (the first phase of foundation model training). With their new algorithm, even that later stage does not need huge compute.

      So they definitely got rid of a big chunk of compute by not relying on what is called a “reward” model.

      GRPO: group relative policy optimization (see the sketch below).

      Hugging Face is trying to replicate their results:

      https://github.com/huggingface/open-r1
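
      To make the GRPO point concrete, here is a minimal, illustrative sketch of the group-relative advantage idea in Python: instead of scoring rollouts against a separate learned reward/value model, a group of answers is sampled for each prompt and each answer’s advantage is its reward normalized against its own group. The policy_sample and rule_based_reward functions are hypothetical stand-ins, not DeepSeek’s actual code.

      # Illustrative sketch of GRPO's group-relative advantage computation.
      # No separate learned reward/value model: rewards are compared within a
      # group of answers sampled for the same prompt, and the normalized scores
      # become the advantages for a PPO-style policy update.
      from statistics import mean, pstdev
      import random

      def group_relative_advantages(rewards, eps=1e-6):
          """Normalize rewards within one group: (r - mean) / (std + eps)."""
          mu, sigma = mean(rewards), pstdev(rewards)
          return [(r - mu) / (sigma + eps) for r in rewards]

      def grpo_step(prompt, policy_sample, rule_based_reward, group_size=8):
          # 1. Sample a group of candidate answers for the same prompt.
          answers = [policy_sample(prompt) for _ in range(group_size)]
          # 2. Score them with a cheap reward signal (e.g. "is the answer right?").
          rewards = [rule_based_reward(prompt, a) for a in answers]
          # 3. The group itself is the baseline: answers better than their
          #    siblings get positive advantage, worse ones negative.
          advantages = group_relative_advantages(rewards)
          # 4. These (answer, advantage) pairs would feed the clipped policy update.
          return list(zip(answers, advantages))

      # Toy usage with stand-in functions:
      fake_policy = lambda p: random.choice(["4", "5", "4", "22"])
      fake_reward = lambda p, a: 1.0 if a == "4" else 0.0
      for ans, adv in grpo_step("What is 2 + 2?", fake_policy, fake_reward):
          print(ans, round(adv, 3))

      The cost saving described above comes from step 3: the baseline comes from group statistics rather than from another very large model that would itself have to be trained and served on a big cluster.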

      • CanadaPlus@lemmy.sdf.org · 3 days ago (edited)

        Unfortunately, that’s not very clear without more context. What kind of reward model are they talking about?

        This is potentially a 1000x difference in required resources (billions versus the claimed ~$6m), assuming you believe DeepSeek’s quoted figure for spending, so it would have to be an extraordinary change.