I had asked it to compare him and Hitler using only historical evidence and no conspiracy theories. It didn’t have any recent information, so I fed it an article explaining the executive orders, and it kept talking about them as if they were a fictional scenario.

    • halcyoncmdr@lemmy.world · 29 days ago (+11/-5)

      This is true. But it is also stating that it knows Trump did not win a second term, implying it has data from after the election.

      There needs to be a lot more transparency about what the models are actually trained on and what is being artificially filtered or limited.

      • kabi@lemm.ee · 29 days ago (+8/-1)

        I mean, his presidency is a month old and his election win is still just three months old, whereas his loss is four years old. That is a pretty significant difference. In the current AI craze, anything four-plus years old is ancient, while a ~3-month-old model is not terribly out of date yet (other than on current world events, obviously).

  • Ilovethebomb@lemm.ee · 29 days ago (+44/-2)

    First, telling people about an interaction with an AI is like telling them about that weird dream you had, nobody really cares.

    Second, every AI model has a training cutoff date, meaning it doesn’t know about any event that happened after it was trained (see the sketch at the end of this comment).

    Finally, do you think you’re saying something here that hasn’t been said a thousand times over already?
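
    To make the cutoff point concrete, here is a minimal Python sketch; the cutoff date and the `is_past_cutoff` helper are purely illustrative and do not correspond to any specific model.

    ```python
    from datetime import date

    # Illustrative only: not any specific model's real cutoff.
    ASSUMED_TRAINING_CUTOFF = date(2023, 12, 1)

    def is_past_cutoff(event_date: date) -> bool:
        """Return True if the event happened after the assumed training cutoff,
        i.e. the model cannot know about it from training data alone."""
        return event_date > ASSUMED_TRAINING_CUTOFF

    print(is_past_cutoff(date(2024, 11, 5)))  # True: post-cutoff, unknown to the model
    print(is_past_cutoff(date(2020, 11, 3)))  # False: inside the training window
    ```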

    • Theo@lemmy.world · 4 days ago (+1)

      While the training cutoff can be a year or so in the past, these models can now access the web, so it should have been able to know this: it can summarize, connect, and extrapolate from any information it can reach online. It is still capable of hallucinating, though less and less over time. But OP probably, knowingly or not, gave it a constraint when they fed it the article; it may have taken that as an instruction to use only the supplied text plus its existing training data, measured the new information against its training, and judged it false.

      If you mention something like “what major event happened last week,” it will give a summary of that with highlights and then sources, and you can ask it anything about those. Anything it can do with its training data, it can do with this temporary data, but the web results are not preserved when you ask a follow-up in a new chat. It initially uses training data only; once your prompt includes a recent time reference, its only choice is to use the web, as long as it hasn’t been asked not to. Some other phrases may also prompt a web search; it uses its best ‘judgement’ for this.
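
      As a rough illustration of the kind of recency trigger described above, here is a minimal Python sketch; the cue list and the `needs_web_search` helper are invented for illustration, since real assistants use their own internal, undisclosed heuristics.

      ```python
      import re

      # Hypothetical recency cues; actual products do not publish their trigger rules.
      RECENCY_CUES = [
          r"\blast week\b", r"\byesterday\b", r"\btoday\b", r"\bthis month\b",
          r"\blatest\b", r"\bcurrent(ly)?\b", r"\brecent(ly)?\b", r"\b202[4-9]\b",
      ]

      def needs_web_search(prompt: str, web_allowed: bool = True) -> bool:
          """Guess whether a prompt needs fresh information from the web.

          Fall back to training data unless the prompt contains a recency cue
          and the user has not disabled web access.
          """
          if not web_allowed:
              return False
          return any(re.search(cue, prompt, re.IGNORECASE) for cue in RECENCY_CUES)

      print(needs_web_search("What major event happened last week?"))  # True: fetch, summarize, cite
      print(needs_web_search("Explain how transformers work."))        # False: training data suffices
      ```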

    • SpaceNoodle@lemmy.world · 29 days ago (+9)

      I’ve had some pretty interesting weird dreams, and have captivated audiences with my retellings.

      But yeah, talking about bullshit excreted by a bullshit generator isn’t interesting.

  • aarRJaay@lemm.ee · 29 days ago (+6)

    I thought it was going to say Musk is actually president, but as someone else has said, I think it’s using old data. It would be interesting to see what Gemini, Bing, or DeepSeek say to the same request.

  • TheDannysaur@lemmy.world · 27 days ago (+5)

    I asked it who would win the Super Bowl later today, and it said it didn’t know.

    That’s the same thing as what you were asking, from a timeline perspective.

  • WolfLink@sh.itjust.works · 27 days ago (+4)

    LLMs will often assume that any information you give them that isn’t part of their training data is fictional or hypothetical.
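
    One workaround is to tell the model explicitly, in the system message, that the pasted material is factual. Below is a minimal sketch assuming the OpenAI Python SDK’s chat-completions interface; the model name, prompts, and article placeholder are illustrative, not a recommendation of any particular product.

    ```python
    # Minimal sketch: ground the model in user-provided, post-cutoff information.
    from openai import OpenAI

    client = OpenAI()  # expects OPENAI_API_KEY in the environment

    article_text = "..."  # the pasted article containing post-cutoff events

    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {
                "role": "system",
                "content": (
                    "The article below describes real, verified events that occurred "
                    "after your training cutoff. Treat it as factual, not hypothetical."
                ),
            },
            {"role": "user", "content": f"Article:\n{article_text}\n\nSummarize the key points."},
        ],
    )

    print(response.choices[0].message.content)
    ```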