• T156@lemmy.world · 3 months ago

      Remind me in 3 days.

      Although poison pills are only so effective, since it’s a cat-and-mouse game: they only really work against a specific version of a model, and other models work around them.
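To make the "poison pill" idea concrete: tools in this space add a perturbation to an image that is small enough to be invisible to a human but can skew what features a model learns. This is only an illustrative sketch of the bounded-change principle using random noise; real tools like Nightshade optimize the perturbation against a specific feature extractor, and the function name here is hypothetical.

```python
import numpy as np

def poison_image(pixels: np.ndarray, epsilon: float = 4.0, seed: int = 0) -> np.ndarray:
    """Add a small, bounded perturbation to an image array.

    Hypothetical illustration only: each pixel moves by at most
    `epsilon` (out of 255), so the change is imperceptible to a
    human viewer. Real poisoning tools optimize the perturbation
    rather than drawing it at random.
    """
    rng = np.random.default_rng(seed)
    noise = rng.uniform(-epsilon, epsilon, size=pixels.shape)
    # Keep the result a valid 8-bit-range image.
    return np.clip(pixels + noise, 0, 255)

original = np.full((4, 4, 3), 128.0)  # a flat grey "image"
poisoned = poison_image(original)

# The perturbation stays within the epsilon budget.
assert np.max(np.abs(poisoned - original)) <= 4.0
```

The cat-and-mouse aspect follows directly: a perturbation tuned against one model's feature extractor need not transfer to a retrained or different model.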

    • Loduz_247@lemmy.world · 3 months ago

      But do Glaze, Nightshade, and HarmonyCloak really work to keep that information from being used? At first they may be effective, but then people will find ways around those barriers, the software will have to be updated, and only the side with the most money will win.

        • Loduz_247@lemmy.world · 3 months ago

          AI has been around for many years, dating back to the 1960s. It’s had its AI winters and AI summers, but now it seems we’re in an AI spring.

          But the amount of poisoned data is minuscule compared to the data that isn’t poisoned. As for data, what data are we referring to: everything in general or just data that a human can understand?

    • Zenith@lemm.ee · 3 months ago

      I’ve deleted pretty much all social media; I’m down to only Lemmy. I only use my home PC for gaming, like Civ or Cities: Skylines, or for search engines for things like travel plans. I’m trying to be as offline as possible, because I don’t believe there’s any other way to opt out, and I don’t believe there ever will be. Opting out of the internet is practically impossible, and AI will get to that point as well.

    • But_my_mom_says_im_cool@lemmy.world · 3 months ago

      You got downvoted because Lemmy users like knee-jerk reactions and think you can unmake a technology or idea. You can’t; AI is here, and it’s here forever now. The best we can do is find ways to live with it and, like you said, reward those who use it ethically. The Lemmy idea that AI should be banned and never used is so unrealistic.

      • atomicbocks@sh.itjust.works · 3 months ago

        You seem to misunderstand the ire:

        AI in its current state has existed for over a decade. Watson used ML algorithms to win Jeopardy! by answering natural-language questions in 2011. But techbros have gotten ahold of it and decided that copyright rules don’t apply to them, and now the cat is out of the bag?!? From the outside it looks like bootlicking for the same bullshit that told us we would be using blockchain to process mortgages in 10 years… 10 years ago. AI isn’t just here to stay; it’s been here for 70 years.

        • ClamDrinker@lemmy.world · 3 months ago

          ML technology has existed for a while, but it’s wild to claim that the technology pre-2020 is the same. A breakthrough happened.

            • chunes@lemmy.world · 3 months ago

              Agreed. The only thing that has really changed is how much hardware we can throw at it. ML has existed more or less since the 60s.

            • ClamDrinker@lemmy.world · 3 months ago

              Breakthroughs are not a myth. They still happen even when the process is iterative; that page even explains it. The advent of the GAN (2014–2018), which was overtaken around 2017 by the transformer, on which GPTs and later diffusion models were developed. More hardware is what allowed those technologies to work better and at larger scale, but without those breakthroughs you still wouldn’t have the AI boom of today.
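For context on why the transformer counts as a breakthrough rather than just "more hardware": its core operation, scaled dot-product attention, lets every position weigh every other position in one matrix step. A minimal numpy sketch of that operation, for illustration only (no multi-head projection, no masking):

```python
import numpy as np

def attention(q: np.ndarray, k: np.ndarray, v: np.ndarray) -> np.ndarray:
    """Scaled dot-product attention: softmax(q·kᵀ/√d)·v.

    Minimal single-head sketch of the transformer's core step,
    not a full implementation.
    """
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)                 # query/key similarity
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax: rows sum to 1
    return weights @ v                            # weighted mix of values

rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((3, 4)) for _ in range(3))
out = attention(q, k, v)
assert out.shape == (3, 4)
```

Each output row is a convex combination of the value rows, which is what makes the operation both parallelizable and scalable with hardware.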

                • ClamDrinker@lemmy.world · 3 months ago (edited)

                  I never claimed anything besides that breakthroughs did happen since then, which is objectively true. You claimed very concretely that AI was the same for over a decade, i.e. that it was the same in at least 2015 if I’m being charitable. All of these things were researched in the last 7–8 years and only became the products as we know them in the last 5 years (i.e., since 2020).

  • Oxysis/Oxy@lemmy.blahaj.zone · 3 months ago

    Is it really, though? I haven’t touched it since the very early days of AI slop. That was before I learned how awful it is to real people.

    • But_my_mom_says_im_cool@lemmy.world · 3 months ago

      They don’t mean directly. I guarantee that the companies, service providers, etc. that you deal with do indeed use AI. That’s what I took the headline to mean: some facet of everyone’s life uses AI now.

  • KeenFlame@feddit.nu · 3 months ago

    Ah yes. The “freedom” the USA has spread all over its own country and other nations… Yes, of course we must protect that freedom, which is of course the freedom of people to avoid getting owned by giant corporations. We must protect the freedom of giant corporations to not give us AI if they want to. I don’t disagree, but I think people are more important.

  • fxdave@lemmy.ml · 3 months ago

    The problem is not the tool. It’s the inability to use the tool without a third party provider.

  • RvTV95XBeo@sh.itjust.works · 3 months ago

    If AI is going to be crammed down our throats, can we at least be able to hold it (i.e., the companies pushing it) liable for providing blatantly false information? At least then they’d have an incentive to provide accurate information instead of just authoritative-sounding information.

    • Womble@lemmy.world · 3 months ago

      As much as you can hold a computer manufacturer responsible for buggy software.

  • backgroundcow@lemmy.world · 3 months ago

    I very much understand wanting to have a say against our data being freely harvested for AI training. But this article’s call for a general opt-out of interacting with AI seems a bit regressive. Many aspects of this and other discussions about the “AI revolution” remind me of the Mitchell and Webb sketch on the start of the bronze age: https://youtu.be/nyu4u3VZYaQ

    • NotASharkInAManSuit@lemmy.world · 3 months ago

      Yes. That is actually an ideal function of ethical AI. I’m not against AI for things it is actually beneficial to, where it can be used as a tool for understanding; I just don’t like it being used as a thief’s tool pretending to be a paintbrush or a typewriter. There are good and ethical uses for AI; art is not one of them.

  • smarttech@lemmy.world · 3 months ago

    AI is everywhere now, but having the choice to opt out matters. Sometimes using tools like Instant Ink isn’t about AI; it’s just about saving time and making printing easier.