Long lists of instructions show how Apple is trying to navigate AI pitfalls.

  • FooBarrington@lemmy.world · 3 months ago (edited)

    Yeah, that’s about what I expected. If you re-read my comments, you might notice that I never stated that “commanding an LLM to not hallucinate makes it provide better output”, but I don’t think that you’re here to have any kind of honest exchange on the topic.

    I’ll just leave you with one thought: you’re making a very specific claim (“doing XYZ can’t have a positive effect!”), and I’m just saying “here’s a simple and obvious counter-example”. You should either provide a source for your claim or explain why my counter-example is not valid. But again, that would require you to have any interest in actual discussion.

    > That’s not how citations work. You are making the extraordinary claim that somehow, LLMs respond better to “do not hallucinate”.

    I didn’t make an extraordinary claim; you did. You’re claiming that the influence of “do not hallucinate” somehow fundamentally differs from the influence of any other phrase (extraordinary). I’m claiming that, no, its influence is the same as that of any other phrase (ordinary).
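
    For what it’s worth, this disagreement is empirically testable. Below is a minimal A/B sketch, assuming the official openai Python client and an API key in OPENAI_API_KEY; the question, system prompts, and model name are illustrative choices, not Apple’s actual prompts.

    ```python
    # Minimal A/B sketch: does adding "Do not hallucinate" to the system
    # prompt change the model's output at all? (Illustrative, not Apple's setup.)
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    QUESTION = "Who won the 1987 Nobel Prize in Literature, and for which work?"

    def ask(system_prompt: str) -> str:
        """Send the same question under a given system prompt and return the reply."""
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # any chat model works for the comparison
            temperature=0,        # reduce sampling noise so differences come from the prompt
            messages=[
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": QUESTION},
            ],
        )
        return resp.choices[0].message.content

    baseline = ask("You are a helpful assistant.")
    treated = ask(
        "You are a helpful assistant. Do not hallucinate. "
        "If you are not sure, say so."
    )

    print("baseline:", baseline)
    print("treated: ", treated)
    ```

    A single pair of outputs only shows that the phrase changes the output distribution, like any other phrase would; judging whether it *reduces* hallucinations would take many questions and a factual-accuracy score for each condition.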