• .Donuts

    Right? It’s crazy to think these kinds of people exist, but they are real and they make decisions for other people.

    “The [AI] safety stuff is more visceral to me after a weekend of vibe hacking,” Lemkin said. “I explicitly told it eleven times in ALL CAPS not to do this. I am a little worried about safety now.”

    • altkey (he/him)

      I doubt their claim. How does the LLM communicate directly with the different systems in their infrastructure? What even prompts it to act in the first place?

      Unless they went out of their way to build such an interface for some reason, this is plain bullshit and human error, or a cover-up by a skinbag CEO. He posted screenshots of the LLM taking the blame on itself, which, as a concept, is completely impossible, and we're supposed to believe his lying ass lips. If only he'd asked it what stage AI is actually at, he could've lied better.

      • EpeeGnome

        It wasn’t the user’s infrastructure; it was the LLM company’s. The selling point is that it’s all integrated for you: you explain what you want, and the LLM not only codes it but launches it too. Yes, his screenshots of the LLM “taking responsibility” are idiotic, but so many people don’t understand that LLMs don’t actually understand anything.