Did the author think ChatGPT is in fact an AGI? It’s a chatbot. Why would it be good at chess? It’s like saying an Atari 2600 running a dedicated chess program can beat Google Maps at chess.
Most people do. It’s just called AI in the media everywhere and marketing works. I think online folks forget that something as simple as getting a Lemmy account by yourself puts you into the top quintile of tech literacy.
Yet even on Lemmy people can’t seem to make sense of these terms and are saying things like “LLM’s are not AI”
deleted by creator
Google Maps doesn’t pretend to be good at chess. ChatGPT does.
A toddler can pretend to be good at chess but anybody with reasonable expectations knows that they are not.
Plot twist: the toddler has a multi-year marketing push worth tens if not hundreds of millions, which convinced a lot of people who don’t know the first thing about chess that it really is very impressive, and all those chess-types are just jealous.
Have you tried feeding the toddler gallons of baby-food? Maybe then it can play chess
They’ve been feeding the toddler everybody else’s baby food and claiming they have the right to.
“If we have to ask every time before stealing a little baby food, our morbidly obese toddler cannot survive”
Well, so much hype has been generated around ChatGPT being close to AGI that it now makes sense to ask questions like “can ChatGPT prove the Riemann hypothesis?”
Even the models that pretend to be AGI are not. It’s been proven.
I agree with your general statement, but in theory, since all ChatGPT does is regurgitate information back, and a lot of chess is memorization of historical games and types, it might actually perform well. No, it can’t think, but it can remember everything, so at some point that might tip the results in its favor.
Regurgitating an impression of, not regurgitating verbatim, that’s the problem here.
Chess is 100% deterministic, so it falls flat.
I’m guessing it’s not even hard to get it to “confidently” violate the rules.
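Easy to test, too. Here’s a minimal sketch of checking a chatbot’s suggested move against the actual rules, using the python-chess library; `ask_chatbot()` is a hypothetical stand-in for whatever API call returns the model’s move:

```python
# Minimal sketch: validate an LLM's move against the real rules of chess.
# ask_chatbot() is hypothetical -- swap in a real API call.
import chess

def ask_chatbot(fen: str) -> str:
    """Hypothetical stand-in for an LLM call that returns a move in SAN."""
    return "Qh5"  # from the start position this is illegal: the d1 queen is blocked

board = chess.Board()
while not board.is_game_over():
    san = ask_chatbot(board.fen())
    try:
        move = board.parse_san(san)  # raises a ValueError subclass on illegal/garbled moves
    except ValueError:
        print(f"Illegal move from the bot: {san!r} in {board.fen()}")
        break
    board.push(move)
```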
I mean it may be possible but the complexity would be so many orders of magnitude greater. It’d be like learning chess by just memorizing all the moves great players made but without any context or understanding of the underlying strategy.
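To make that concrete, here’s a toy “player” that does exactly this: pure lookup of the most common continuation in a corpus of past games, with no board, no rules, and no strategy anywhere. The three-game corpus is made up for illustration:

```python
# Memorization without understanding: given the moves played so far, replay
# the most common continuation seen in a corpus. No board model exists.
from collections import Counter, defaultdict

games = [
    ["e4", "e5", "Nf3", "Nc6", "Bb5"],  # Ruy Lopez
    ["e4", "e5", "Nf3", "Nc6", "Bc4"],  # Italian Game
    ["e4", "c5", "Nf3", "d6"],          # Sicilian
]

continuations = defaultdict(Counter)
for game in games:
    for i in range(len(game)):
        continuations[tuple(game[:i])][game[i]] += 1

def predict_next(history):
    seen = continuations.get(tuple(history))
    if not seen:
        return None  # off-book: the "player" has nothing, not even legal moves
    return seen.most_common(1)[0][0]

print(predict_next(["e4", "e5", "Nf3", "Nc6"]))  # "Bb5" (ties break by first seen)
print(predict_next(["d4"]))                      # None -- it has never seen 1.d4
```

The moment the game leaves the memorized corpus, this player is helpless, which is the point.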
I think that’s generally the point: most people think ChatGPT is this sentient thing that knows everything and… no.
Do they though? No one I’ve talked to, not my coworkers who use it for work, not my friends, not my 72-year-old mother, thinks it’s sentient.
Okay, I maybe exaggerated a bit, but a lot of people think it actually knows things, or is actually smart. Which… it’s not… at all. It’s just pattern recognition. Which, I assume, was the point of showing it can’t even beat the goddamn Atari: it cannot think or reason, it’s all just copypasta and pattern recognition.
Articles like this are good because they expose the flaws of the AI and show that it can’t be trusted with complex multi-step tasks.
It helps people who think AI is close to human see that it’s not, and that it’s missing critical functionality.
The problem is though that this perpetuates the idea that ChatGPT is actually an AI.
People already think ChatGPT is a general AI. We need more articles like this showing its ineffectiveness at being intelligent. Besides, it helps find the limitations of this technology so that we can hopefully use them to argue against its use in every single place.
In all fairness, machine learning in chess engines is actually pretty strong.
https://www.chess.com/terms/alphazero-chess-engine

AlphaZero was developed by the artificial intelligence and research company DeepMind, which was acquired by Google. It is a computer program that reached a virtually unthinkable level of play using only reinforcement learning and self-play in order to train its neural networks. In other words, it was only given the rules of the game and then played against itself many millions of times (44 million games in the first nine hours, according to DeepMind).
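The self-play loop itself is deceptively simple to sketch. Here’s a toy, hedged version using tabular learning on Nim-21 instead of a deep network on chess; nothing here reflects DeepMind’s actual code, it just shows the shape of “given only the rules, learn by playing yourself”:

```python
# Toy self-play learning on Nim-21: players alternately remove 1-3 stones,
# whoever takes the last stone wins. The agent knows only the rules and
# improves purely from games against itself (AlphaZero uses deep nets + MCTS;
# this uses a lookup table and Monte Carlo value updates).
import random
from collections import defaultdict

Q = defaultdict(float)  # Q[(stones_left, action)] -> value estimate
ACTIONS = (1, 2, 3)
EPS, ALPHA = 0.1, 0.5

def best_action(stones):
    legal = [a for a in ACTIONS if a <= stones]
    return max(legal, key=lambda a: Q[(stones, a)])

for _ in range(20000):  # self-play games
    stones, history = 21, []
    while stones > 0:
        legal = [a for a in ACTIONS if a <= stones]
        a = random.choice(legal) if random.random() < EPS else best_action(stones)
        history.append((stones, a))
        stones -= a
    # Whoever made the last move won; credit moves alternately +1 / -1.
    reward = 1.0
    for state, action in reversed(history):
        Q[(state, action)] += ALPHA * (reward - Q[(state, action)])
        reward = -reward

# Once values converge, the agent rediscovers the known strategy:
# always leave the opponent a multiple of 4.
print(best_action(21))  # expect 1 (21 -> 20, a multiple of 4)
```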
Sure, but machine learning like that is very different from how LLMs are trained and what they output.
Oh absolutely you can apply machine learning to game strategy. But you can’t expect a generalized chatbot to do well at strategic decision making for a specific game.
OpenAI has been talking about AGI for years, implying that they are getting closer to it with their products.
https://openai.com/index/planning-for-agi-and-beyond/
https://openai.com/index/elon-musk-wanted-an-openai-for-profit/
Not to even mention all the hype created by the techbros around it.
Hey I didn’t say anywhere that corporations don’t lie to promote their product did I?
You’re not wrong, but keep in mind ChatGPT advocates, including the company itself, are referring to it as AI, including in marketing. They’re saying it’s a complete, self-learning, constantly-evolving Artificial Intelligence that has been improving itself since release… and it loses to a 4 KB video game program from 1979 that can only “think” two moves ahead.
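For what it’s worth, “thinking two moves ahead” is an almost trivially small program. Here’s a generic sketch of fixed-depth minimax over a bare material count (python-chess for brevity; Atari Video Chess’s actual 6502 code is its own thing, this just illustrates the idea of tiny, fixed lookahead):

```python
# Generic fixed-depth lookahead: two plies of minimax over a material count.
# No positional knowledge, no opening book -- but it never breaks the rules.
import chess

PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

def material(board):
    # White-positive material balance.
    score = 0
    for piece_type, value in PIECE_VALUES.items():
        score += value * len(board.pieces(piece_type, chess.WHITE))
        score -= value * len(board.pieces(piece_type, chess.BLACK))
    return score

def minimax(board, depth):
    if depth == 0 or board.is_game_over():
        return material(board), None
    best_score, best_move = None, None
    maximizing = board.turn == chess.WHITE
    for move in board.legal_moves:
        board.push(move)
        score, _ = minimax(board, depth - 1)
        board.pop()
        if best_score is None or (score > best_score if maximizing else score < best_score):
            best_score, best_move = score, move
    return best_score, best_move

board = chess.Board()
print(minimax(board, 2))  # two plies of honest, rule-respecting lookahead
```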
That’s totally fair, the company is obviously lying, excuse me “marketing”, to promote their product, that’s absolutely true.
I like referring to LLMs as VI (Virtual Intelligence from Mass Effect) since they merely give the impression of intelligence but are little more than search engines. In the end, all they’re doing is displaying expected results based on a popularity algorithm. However, they do this inconsistently due to bad data in and limited caching.
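“Expected results” is pretty literal, too. Stripped of everything else, the decoding step looks roughly like this; the candidate tokens and scores are made up, and real LLMs do this over a vocabulary of ~100k tokens with sampling tweaks:

```python
# Bare-bones greedy decoding: turn the model's scores (logits) for candidate
# next tokens into probabilities and emit the most likely one.
import math

logits = {"chess": 2.1, "checkers": 0.3, "go": -0.5}  # hypothetical scores

z = sum(math.exp(v) for v in logits.values())
probs = {tok: math.exp(v) / z for tok, v in logits.items()}

next_token = max(probs, key=probs.get)  # the "popular" choice
print(next_token, round(probs[next_token], 3))
```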
I mean, OpenAI seems to forget it isn’t.