

I hear you. Agreed.
Have you tried running your own local LLM? Abliterated ones (safety theatre removed) can produce some startling results. As a bonus, newer ablit methods seem to increase reasoning ability, because the LLM doesn’t have one foot on the brake and the other on the accelerator.
I noticed that a fair bit in maths reasoning using Qwen 3-4B HIVEMIND. A normal LLM will tie itself in knots trying to give you the perfect answer. An ablit one will give you the workable answer and say “I know what you were after, but here’s the best IRL approximation.”
Bijan did an entertaining review of Qwen 3-8 Josefied that explains the basic idea.



If it were just autocomplete in the dismissive sense, white noise as input should make it derail into white noise as output. Instead it tries to make sense of it. Why? Because it learned strong language priors from us, and it leans on those when the prompt is meaningless.
“Not human understanding” ≠ “no reasoning-like computation.”
People making the “fancy autocomplete” argument are pulling the laziest possible move: not human, therefore nothing interesting is happening. I disagree with that.
It doesn’t “understand” like we do, and it’s not infallible, but calling it “fancy autocomplete” is like calling a jet engine a “fancy candle.”
Same category of thing, wildly different behavior.