CodyIT@programming.dev to Programmer Humor@programming.dev · 6 months ago
the beautiful code [image] · 210 comments · +2.12K/−11
ItsMeForRealNow@lemmy.world · 6 months ago · +43/−1
This has been my experience as well. It keeps emphasizing “beauty” and keeps missing “correctness”.
Match!!@pawb.social · 6 months ago · +40/−1
LLMs are systems that output human-readable natural-language answers, not true answers.
gravitas_deficiency@sh.itjust.works · 6 months ago · +3/−1
And a good part of the time, the answers have a… subtly loose relationship with the truth.
zurohki@aussie.zone · 6 months ago · +12/−3
It generates an answer that looks correct. Actual correctness is accidental. That’s how you wind up with documents citing references that don’t exist; it just knows what references look like.
Korhaka@sopuli.xyz · 6 months ago · +5
So it’s 50% better than my code?
ItsMeForRealNow@lemmy.world · 6 months ago · +1
If the code cannot uphold correctness, it is 0% better than your code.