That’s misophobia; misophonia is when you don’t like how soy paste sounds.
Brilliant, very meta, love it
It’s no Borat in Skyrim, that’s for sure.
Do you honestly believe that if trump regains power they’re going to nail him on state charges? We’ll be lucky to ever have elections again, let alone have him face consequences for his crimes. If he wins it’s gonna be full blown fascism
I like the lighting and composition but it looks a little fried, how hard did you sharpen?
Wh-what are you doing, step-brother?
Do you have like a humiliation kink or something?
It’s not that wild, is there anything more republican than voting against your own best interests?
What’s fucked up is that if you die here you die for real
How long will we continue to get news stories whenever a minor entity leaves X (formerly Twitter)?
If he were smarter and/or not a walking ego then yeah, that would have been the move. Though if he were smart he probably wouldn’t be in this mess.
It’s not. He never wanted to buy Twitter; he just wanted to pump and dump the stock, but because he is stupid and the plan was obvious, they sued him to make him honor the deal.
So if he just turned around and shut the company down, it would give the SEC legal grounds to argue that his intention all along was market manipulation.
My understanding is that the SEC would have fucked him if he just shut it down, because it would indicate that he never intended to buy it in the first place and instead was just trying to manipulate the stock market (which is definitely what he was doing).
They don’t reason, they’re stochastic parrots. Their internal mechanisms are well understood; no idea where you got the notion that the folks building these don’t know how they work. It can be hard to predict/understand how an LLM generated a given output because of the huge training corpus and the statistical nature of neural nets in general.
LLMs work the same as any other net, just with massive sample sets. They have no reasoning capabilities of any kind. We are naturally inclined to ascribe humanlike thought processes to them because they produce human-sounding outputs.
If you would like the perspective of real scientists instead of a “tech-bro” like me, I’d recommend Emily Bender and Timnit Gebru: experts without a vested interest in the massively overblown hype about what LLMs are actually capable of.
I work on chatbots for a big tech company. Every team is trying to use GenAI for everything. 90% of the stuff they try won’t work. I have to explain that LLMs can’t actually think at least three times a week. The hype train was too strong. Even calling it AI feels misleading.
That said, there are some genuinely great applications for LLMs that I’ve enjoyed looking into.
If you’re gonna link to That Scene from Spec Ops you gotta include a “Seriously Gnarly Shit Ahead” content warning or something.
That may be true for warehouse employees, but the corporate offices are a toxic mess of shitty culture and dated ideas. I’ve never seen a tech department bleed so much underpaid talent to Amazon.
When I quit because they tried to force me back into the office mid-pandemic (August 2020) I had multiple offers for fully remote positions with twice the salary within a few weeks.
But yeah, if you are a cashier at a warehouse or whatever I hear it’s a solid gig.
This makes me wanna play Thea again