• @ObtuseDoorFrame@lemmy.zip
    7
    2 months ago

    I would think it would only apply to AI-generated images, but I suppose it would depend on the community. In this comm in particular, where all the posts are images, it shouldn’t be too tricky to define. As the technology advances it might eventually be impossible to spot them, though…

    • Cethin
      1
      2 months ago

      LLM-generated content in general: images, comments, etc.

        • Cethin
          -2
          2 months ago

          No. LLMs are still what generates images.

          • Lemminary
            2
            2 months ago

            Large Language Models generate human-like text. They operate on words broken up as tokens and predict the next one in a sequence. Image Diffusion models take a static image of noise and iteratively denoise it into a stable image.

            The confusion comes from services like OpenAI that take your prompt, dress it up all fancy, and then feed it to a diffusion model.
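
            Rough sketch of the difference in Python, if that helps. Everything here is a toy stand-in (the “models” are fake one-liners), just to show the shape of each loop, not how any real system is implemented:

                import numpy as np

                # Toy stand-ins so the loops below actually run; real models are
                # neural networks, not these one-liners.
                def toy_next_token(tokens):
                    return (sum(tokens) + 1) % 1000   # pretend next-token "prediction"

                def toy_denoiser(image, step, conditioning):
                    return image * 0.1                # pretend noise estimate

                def llm_generate(prompt_tokens, predict_next=toy_next_token, steps=20):
                    # LLM-style loop: predict the next token, append it, repeat.
                    tokens = list(prompt_tokens)
                    for _ in range(steps):
                        tokens.append(predict_next(tokens))
                    return tokens

                def diffusion_generate(conditioning=None, denoiser=toy_denoiser, steps=20):
                    # Diffusion-style loop: start from pure noise, iteratively denoise.
                    image = np.random.randn(64, 64, 3)
                    for step in reversed(range(steps)):
                        image = image - denoiser(image, step, conditioning)
                    return image

                print(llm_generate([5, 7, 11])[:6])   # a growing token sequence
                print(diffusion_generate().shape)     # a (64, 64, 3) "image"

            The chat services just glue these together: the LLM writes or polishes the prompt, then hands it off to the second loop.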

          • @Honytawk@feddit.nl
            1
            2 months ago

            You can’t use LLMs to generate images.

            That is a completely different beast with its own training set.

            Just because both are built with machine learning doesn’t mean they are the same.

        • Cethin
          -1
          2 months ago

          Nope. LLMs are still what’s used for image generation. They aren’t AI though, so no.

              • @LwL@lemmy.world
                1
                2 months ago

                Holy confidently incorrect

                LLMs aren’t generating the images. When you’re “using an LLM for image generation”, what’s actually happening is the LLM talking to an image generation model and then handing you the resulting image.

                Ironically, there’s a hint of truth in it though, because for text-to-image generation the model does need to map words into a vector space to understand the prompt, which is also what LLMs do. (And I don’t know enough to say whether the image generation offered through LLMs just has the LLM provide the vectors directly to the image gen model rather than passing along a text prompt.)

                You could also consider the whole thing as one entity in which case it’s just more generalized generative AI that contains both an LLM and an image gen model.
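
                If it helps, here’s roughly what that front-end pattern looks like as code. Every name in it is made up for illustration; it’s not any real service’s API:

                    # Made-up sketch of the “LLM talks to an image model” pattern.

                    def llm_rewrite_prompt(user_request):
                        # The LLM's actual job here: turn a chat message into a detailed prompt.
                        return "highly detailed digital painting of " + user_request + ", dramatic lighting"

                    def text_encoder(prompt):
                        # Text-to-image models map the prompt into a vector space (embeddings).
                        return [float(ord(c) % 7) for c in prompt]   # stand-in for a real encoder

                    def image_gen_model(embedding):
                        # Stand-in for the separate diffusion model that actually draws.
                        return {"pixels": "...", "conditioned_on_dims": len(embedding)}

                    def chat_image_request(user_request):
                        prompt = llm_rewrite_prompt(user_request)   # 1. LLM writes the prompt
                        embedding = text_encoder(prompt)            # 2. prompt -> vectors
                        return image_gen_model(embedding)           # 3. a different model draws

                    print(chat_image_request("a cat surfing"))

                Whether step 2 happens inside the LLM or inside the image model is exactly the part I’m not sure about.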

              • @Honytawk@feddit.nl
                1
                2 months ago

                You can generate images without ever using any text, by uploading and combining images to create new things.

                No LLM will be used in that context.
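
                Toy example of what I mean, with random arrays standing in for uploaded photos; there’s no prompt, no tokens, and no LLM anywhere in it:

                    import numpy as np

                    def combine_images(images, weights):
                        # Blend several source images into a new one; purely pixel math.
                        weights = np.asarray(weights, dtype=float)
                        weights = weights / weights.sum()
                        return sum(w * img for w, img in zip(weights, images))

                    photo_a = np.random.rand(64, 64, 3)   # stand-ins for two uploaded images
                    photo_b = np.random.rand(64, 64, 3)

                    new_image = combine_images([photo_a, photo_b], [0.7, 0.3])
                    print(new_image.shape)                # (64, 64, 3)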