Former RIF user from reddit, new to lemmy.
I’d assume that’s either due to bias in the training set, or poor design choices. The former is already a big problem in facial recognition, and can’t really be fixed unless we update datasets. With the latter, this could be using things like visible light for classification, where the contrast between target and background won’t necessarily be the same for all skin tones and times of day. Cars aren’t limited by DNA to only grow a specific type of eye, and you can still create training data from things like infrared or LIDAR. In either case though, it goes to show how important it is to test for bias in datasets and deal with it before actually deploying anything…
I’ve used Linux on my private laptop for the past few years, never had any major issues. Work desktop is running Ubuntu, no major problems except for the odd bit of poorly maintained software (niche science things, so that’s not really a Linux issue). Laptop breaks, I get a Windows 11 laptop from work…and I’ve had so many problems. Updates keep breaking everything, and I’ve had to do a factory reset more than once since the recovery after those updates also always failed. Wish I had my good old Linux laptop back :(
Surely that depends on where in Asia you’re looking at as well? On average, the number of languages people speak is quite different between, say, India and Japan. Or Switzerland vs Romania in Europe.
I do understand the curiosity though, just seeing what malware is trying to do can be quite interesting. Maybe someone should tell that person about VMs though lol
It does, but it’s important to note that the theoretical basis for much of the rapid progress we’re seeing now (e.g. machine learning) has actually existed for quite a long time. Training very large models wasn’t feasible at the time they were theorised, but the basis for them did exist.
When it comes to brains, we don’t even have a good understanding of how multisensory integration works yet, let alone how we could, even in theory, implant multisensory impressions like ads. It’s much easier with things like movement disorders or paralysis because our understanding of those phenomena is much more advanced. Plus, we’re only really dealing with one modality there: movement.
Deep brain stimulation for psychiatric conditions does exist, but it’s poorly understood, to the point where there isn’t even really a consensus on where to place the stimulating electrodes for the best effects. At least that’s what a colleague who worked on DBS described a while ago, and I doubt it’s changed dramatically in a year.
Brain-computer/machine interfaces are really interesting when treating conditions like paralysis or Parkinson’s disease, and to a certain extent severe psychiatric conditions if you count deep brain stimulation for e.g. severe OCD. I don’t think we’ll be anywhere near sending detailed multisensory content like ads into people’s brains for a long time though. That’s so far outside the scope of what brain stimulation can do right now, it’s really just scifi.
Disappointing but not surprising. The world is full of racial bias, and people don’t do a good job at all of addressing this in their training data. If bias is what you’re showing the model, that’s exactly what it’ll learn, too.
Good shout with the ice cubes! I think he likes them! Obligatory dog tax included, that’s him being curious about ice cubes