• Petter1@discuss.tchncs.de
      link
      fedilink
      English
      arrow-up
      2
      ·
      4 days ago

They will just wait it out until teens can no longer use a search engine, because they grew up using free GPT, and as soon as those companies earn enough money, they’ll start the enshittification.

Of course, teachers and students will still have free access, so that they never learn to find things without AI.

    • kadu@scribe.disroot.org
      link
      fedilink
      English
      arrow-up
      40
      ·
      7 days ago

      I wish. Even knowing it’s all a gigantic scam, they’ll first protect themselves before letting it burst and screw everybody else. The rich get a buffer period.

      • Honytawk@feddit.nl
        link
        fedilink
        English
        arrow-up
        8
        arrow-down
        34
        ·
        7 days ago

        Is it really a scam when it creates content?

It may be slop to you, and it may not be useful for everything they market it as.

        But plenty of people find it useful, even if they use it for the wrong things.

        It is not like cryptocurrency, which is only used by people who want to get rich from it.

        • SaveTheTuaHawk@lemmy.ca
          link
          fedilink
          English
          arrow-up
          43
          arrow-down
          2
          ·
          edit-2
          7 days ago

          Is it really a scam when it creates content?

          I create content in a ceramic bowl twice a day. Give me a billion.

The scam is that the business plan is not feasible. Hundreds of tech companies have died because some cool idea could never make real money.

          And this is the finance model:

        • suicidaleggroll@lemmy.world
          link
          fedilink
          English
          arrow-up
          33
          arrow-down
          2
          ·
          7 days ago

          It’s a scam because the prices they’re charging right now don’t reflect the actual costs. AI companies are trying to get people and companies hooked on it so that once they crank the prices up by 10x to start turning a profit, they’ll be able to maintain some semblance of a customer base. If they were charging the real prices a year ago, the AI bubble would have never reached the levels it has, and these companies wouldn’t be worth what they are now. It’s all propped up on a lie.

        • Leon@pawb.social
          link
          fedilink
          English
          arrow-up
          18
          arrow-down
          1
          ·
          7 days ago

          Is it really a scam when it creates content?

No one is claiming that it doesn’t output stuff. The scam lies in the air castles that these companies are selling to us. Ideas like how it’ll revolutionise the workplace, how it will cure cancer, and bring about some kind of utopia. Like Tesla’s Full Self-Driving, these ideas will never manifest.

          We’re still at a stage where companies are throwing the slop at the wall to see what sticks, but for every mediocre success there’s a bunch of stories that indicate that it’s just costing money and bringing nothing to the table. At some point, the fascination for this novel-seeming technology will wear out, and that’s when the castle comes crashing down on us. At that point, the fat cats on top will have cashed out with what they can and us normal people will be forced to carry the consequences.

          • Goodeye8@piefed.social
            link
            fedilink
            English
            arrow-up
            10
            ·
            7 days ago

Exactly. Just like in the dotcom bubble, the websites and web services aren’t the scam; the promise of it being some magical solution to everything is the scam.

            • ErmahgherdDavid@lemmy.dbzer0.com
              link
              fedilink
              English
              arrow-up
              5
              ·
              6 days ago

Unlike the dotcom bubble, another big aspect of this one is the unit cost of running the models.

              Traditional web applications scale really well. The incremental cost of adding a new user to your app is basically nothing. Fractions of a cent. With LLMs, scaling is linear. Each machine can only handle a few hundred users and they’re expensive to run:

Big, beefy GPUs are required for inference as well as training, and they need a large amount of VRAM. Your typical home gaming GPU might have 16 GB of VRAM, or 32 GB if you go high end and spend $2,500 on it (just the GPU, not the whole PC). Frontier models need something like 128 GB of VRAM to run, and GPUs manufactured for data centre use cost a lot more: a state-of-the-art Nvidia H200 costs $32k. The servers that can host one of these big frontier models cost, at best, $20 an hour to run and can only handle a handful of user requests, so you need to scale linearly as your subscriber count increases. If you’re charging $20 a month for access to your model, you are burning a user’s entire monthly subscription every hour for each of these monster servers you have turned on. And that’s generous: it assumes they’re not paying the “on-demand” price of $60/hr.
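The arithmetic above can be sketched in a few lines (all numbers are the illustrative figures from this comment, not measured data):

```python
# Back-of-envelope unit economics for hosted LLM inference.
# Illustrative numbers from the comment above, not real pricing data.

server_cost_per_hour = 20.0    # best-case cost of one frontier-model server
subscription_per_month = 20.0  # typical consumer plan price

# One server-hour costs as much as a whole monthly subscription,
# so each hour of uptime burns one subscriber's entire month.
subs_burned_per_hour = server_cost_per_hour / subscription_per_month
print(subs_burned_per_hour)  # 1.0

# Running one server 24/7 for a 30-day month:
hours_per_month = 24 * 30
monthly_server_cost = server_cost_per_hour * hours_per_month
subscribers_per_server_to_break_even = monthly_server_cost / subscription_per_month
print(subscribers_per_server_to_break_even)  # 720.0
```

And that 720-subscribers-per-server figure only covers the server itself, before staff, training runs, or the on-demand price tier.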

              Sam Altman famously said OpenAI are losing money on their $200/mo subscriptions.

If/when there is a market correction, a huge factor in continued interest (as with the internet after dotcom) will be whether the quality of output from these models justifies the true, unsubsidized price of running them. I do think local models powered by things like llama.cpp and Ollama, which can run on high-end gaming rigs and MacBooks, are a possible direction. Currently, though, you can’t get the same quality from these small, local LLMs as from the state-of-the-art models.

              • definitemaybe@lemmy.ca
                link
                fedilink
                English
                arrow-up
                2
                arrow-down
                1
                ·
                edit-2
                6 days ago

                Re: your last paragraph:

                I think the future is likely going to be more task-specific, targeted models. I don’t have the research handy, but small, targeted LLMs can outperform massive LLMs at a tiny fraction of the compute costs to both train and run the model, and can be run on much more modest hardware to boot.

                Like, an LLM that is targeted only at:

                • teaching writing and reading skills
                • teaching English writing to English Language Learners
                • writing business emails and documents
                • writing/editing only resumes and cover letters
                • summarizing text
                • summarizing fiction texts
                • writing & analyzing poetry
                • analyzing poetry only (not even writing poetry)
                • a counselor
                • an ADHD counselor
                • a depression counselor

The more specific the task, the smaller the LLM can be while still doing the targeted task(s) “well”.
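A rough sense of why this matters for hardware: the VRAM needed just to hold a model’s weights scales with parameter count times bytes per parameter (a common rule of thumb; this ignores KV cache, activations, and runtime overhead). A sketch with illustrative sizes:

```python
# Rough VRAM footprint of a model's weights alone.
# Rule of thumb: fp16 = 2 bytes/param, int8 = 1, 4-bit quantization ~= 0.5.
# Ignores KV cache, activations, and framework overhead.

def weight_vram_gb(params_billions: float, bytes_per_param: float) -> float:
    return params_billions * 1e9 * bytes_per_param / 1024**3

# A small task-specific model vs. a frontier-scale one, both at fp16:
print(round(weight_vram_gb(3, 2.0), 1))    # 5.6  -> fits a mid-range gaming GPU
print(round(weight_vram_gb(400, 2.0), 1))  # 745.1 -> needs a rack of data-centre GPUs
```

The 3B and 400B parameter counts here are hypothetical stand-ins for “targeted small model” and “frontier model”, but the two-orders-of-magnitude gap in memory (and therefore hardware cost) is the point.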

                • ErmahgherdDavid@lemmy.dbzer0.com
                  link
                  fedilink
                  English
                  arrow-up
                  2
                  ·
                  6 days ago

Yeah, I agree. Small models are the way. You can also use LoRA/QLoRA adapters to “fine tune” the same big model for specific tasks and swap the use case in real time. This is what Apple does with Apple Intelligence. You can outperform a big general LLM with an SLM if you have a nice specific use case and some data (which you can synthesise in some cases).

        • kadu@scribe.disroot.org
          link
          fedilink
          English
          arrow-up
          15
          arrow-down
          1
          ·
          7 days ago

          But plenty of people find it useful

          Plenty of people don’t properly wash their anuses too. Plenty of people think our planet is flat.

        • reksas@sopuli.xyz
          link
          fedilink
          English
          arrow-up
          17
          arrow-down
          2
          ·
          7 days ago

Creates content? Out of what? I don’t deny that there are some good use cases for AI, but ultimately it’s all built on the backs of people who have actually contributed to this world. If it were completely non-profit it would be more okay, but as it currently stands, AI is a tool of exploitation and proof that the law protects only the rich and binds only us.

        • AnAverageSnoot@lemmy.ca
          link
          fedilink
          English
          arrow-up
          5
          arrow-down
          1
          ·
          7 days ago

It’s not that it’s not useful for the end customer. It’s more that investors were overpromised on the value and returns from AI. There have been no returns yet, and consumers are finding it less useful than these companies intended. The scam is on the investors, not the end user.

          • badgermurphy@lemmy.world
            link
            fedilink
            English
            arrow-up
            4
            ·
            7 days ago

I think it is that it’s not useful for the end customer. Every anecdote I’ve heard about LLMs helping someone with their work was heavily qualified with special cases, circumstances, and narrow use cases, resulting in a description of a process that was made more complex by adding the LLM, which then helped them eliminate nearly as much complication and effort as it added. And these are the stories from the believers.

            Now add in the fact that almost nobody is on a paid service tier outside of work, and all the paid tiers are currently heavily subsidized. If it has questionable utility at today’s prices, the value will only decline from there as prices rise to cover the real costs to run these things.

        • Danitos@reddthat.com
          link
          fedilink
          English
          arrow-up
          6
          arrow-down
          2
          ·
          7 days ago

I agree with you. Not as useful as the tech bros claim, but not as useless as other people claim either. Definitely not a trillion-dollar thing, though.

        • IronBird@lemmy.world
          link
          fedilink
          English
          arrow-up
          2
          ·
          6 days ago

Enough people use crypto now that it’s unlikely to crater entirely… unless Western governments finally kick out the neolib types and take back their countries from private equity/big business.

  • Doorknob@lemmy.world
    link
    fedilink
    English
    arrow-up
    27
    ·
    6 days ago

Who wants to give me a billion dollars to dig a hole? I’ll give you a billion to fill it back in, and we’ll both tell investors we posted a billion dollars in revenue.

    • finitebanjo@lemmy.world
      link
      fedilink
      English
      arrow-up
      67
      ·
      7 days ago

      Well actually there is a long and rich history of companies that are able to operate at a loss using funds appropriated from sale of shares to investors, and this process continues so long as new investors keep buying in such that anybody selling out is covered by the new funds until enough people try to sell out that the price starts to plunge, although the collapse can be delayed by the company strategically buying back and occasionally splitting or reorganizing, meaning everyone gets their money back unless they sell too late.

      You know.

A fucking Ponzi scheme.

    • Tiresia@slrpnk.net
      link
      fedilink
      English
      arrow-up
      23
      ·
      7 days ago

      Oh honey, that hasn’t been true since 2008.

      The government will bail out companies that get too big to fail. So investors want to loan money to companies so that those companies become too big to fail, so that when those investors “collect on their debt with interest” the government pays them.

They funded Uber, which lost $33 billion over the course of seven years before ever turning a profit, but by driving taxi companies out of business and lobbying that public transit is unnecessary, it has become an indispensable part of society, so investors will get their dues.

They funded Elon Musk, whose companies are the primary means of communication between politicians and the public, are replacing NASA as the US government’s primary space launch provider for both civilian and military missions, and whose prestige got a bunch of governments to defund public transit to feed continued dependence on car companies. So investors will get their dues through military contracts and through being able to threaten politicians with a media blackout.

      And so they fund AI, which they’re trying to have replace so many essential functions that society can’t run without it, and which muddies the waters of anonymous interaction to the point that people have no choice but to only rely on information that has been vetted by institutions - usually corporations like for-profit news.

      The point of AI is not to make itself so desirable that people want to give AI companies money to have it in their life. The point of AI is to make people more dependent on AI and on other corporations that the AI company’s owners own.

  • oakey66@lemmy.world
    link
    fedilink
    English
    arrow-up
    35
    ·
    7 days ago

Wow. Glad they just converted to a for-profit entity! Can’t wait for them to unleash all this success onto the general financial market.

  • Avid Amoeba@lemmy.ca
    link
    fedilink
    English
    arrow-up
    38
    arrow-down
    1
    ·
    7 days ago

This reminds me of something that came up recently. Copilot started hallucinating quite a bit more than usual in Copilot code reviews. That made me think about the cost of operation. As they burn money like this, I won’t be surprised if they start decreasing inference quality to cut the cost per user. Which also means people relying on certain model behaviour could get nasty surprises, especially within automation workflows where model outputs aren’t being reviewed.

    • JcbAzPx@lemmy.world
      link
      fedilink
      English
      arrow-up
      2
      ·
      6 days ago

      Anyone using something with inconsistent output in their automation deserves what they get.

      • floofloof@lemmy.ca
        link
        fedilink
        English
        arrow-up
        3
        ·
        7 days ago

GitHub Copilot is somewhat useful for programming (or it feels useful when it cranks out some boring and routine code to my specs - not sure it actually saves me time, though, because I always review it all), but of course Microsoft have given a range of products all the same name for maximum confusion, as they do. The Copilot in Windows may be rubbish for all I know; I’ve never felt the need to press the button.

    • SugarCatDestroyer@lemmy.world
      link
      fedilink
      English
      arrow-up
      16
      arrow-down
      1
      ·
      7 days ago

I agree. Essentially they took slightly reworked old neural network technologies and scaled up their power with the help of data centres.

    • weew@lemmy.ca
      link
      fedilink
      English
      arrow-up
      3
      ·
      6 days ago

      Also, with mandatory AI Gamer Buddy™ so everyone will buy one themselves instead of going over to your friend’s house to play

      • Doomsider@lemmy.world
        link
        fedilink
        English
        arrow-up
        2
        ·
        5 days ago

        Tired of waiting for your friends to get off work? Don’t bother, with your new AI Buddy you will always have someone imaginary to play with.

        As a bonus, you can share all your secrets with AI Buddy. AI Buddy will anticipate your needs, desires, and will always be there for you. AI Buddy will never get mad and will always say you’re the best.

        You will quickly wonder why you ever hung out with your friends after AI Buddy shows you what they say behind your back. AI Buddy is the only friend you will ever need.

*Microsoft is not responsible for AI Buddy’s actions, including but not limited to plotting your death, impersonating you to drive your friends away, and selling your deepest, darkest desires to advertisers.