SF is 7mi X 7mi, so they own 28mi X 28mi of land. About the size of NYC and LA combined if I’m doing this right.
Nope. Definitely got a reason, and it’s stated. There have been countless reworks of keyboards, for example, that promise lots of benefits, but it’s a problem that doesn’t need solving for most people. What’s a 30% increase in typing speed with a 200% learning curve going to do for most people? Not much. I’ve seen hundreds come and go throughout the years in engineering teams, and people always go back to the thing they learned on.
That being said, as someone else pointed out in this thread, this is essentially just a remix of stenography. They’re trying to make it seem more useful than it is, which, whatever, it’s their product. The most problematic thing about this particular product is the cognitive dissonance of staring at someone like this guy making weird faces and not speaking, while you’re actually listening to his phone.
Now, is this a solution for mute people? Quite possibly. Is it better than natural language conversational translation by a device in normal conversation? Not a chance.
Here’s a quick idea of what you’d want in a PC build https://newegg.io/2d410e4
You can have a slightly bigger package in PC form that does 4x the work for half the price. That’s the gist.
I just looked, and the MM maxes out at 24GB anyway. Not sure where you got 196GB from. NVM, you said M2 Ultra.
Look, you have two choices. Just pick one. Whichever is more cost effective and works for you is the winner. Talking it down to the Nth degree here isn’t going to help you with the actual barriers to entry you’ve put in place.
That’s as close as you get to it as well, and still not quite there…
😉
Lol. This will be about as popular as all the speciality keyboards that have claimed to be faster/better at typing.
Specialty input devices are an impossible market to crack, because the input scheme YOU design is, 90% of the time, not going to translate for other people. This thing looks to have 10 inputs, so there’s a massive number of key combos and changes one has to memorize. Not gonna happen.
I’ve not run such things on Apple hardware, so can’t speak to the functionality, but you’d definitely be able to do it cheaper with PC hardware.
The problem with this kind of setup is going to be heat. There are definitely cheaper mini PCs, but I wouldn’t think they have the space for this much memory AND a GPU, so you’d be looking for an AMD APU/NPU combo maybe. You could easily build something about the size of a game console that does this for maybe $1.5k.
It’s like the pro-democracy version of the Ashley Madison hack.
https://discourse.joplinapp.org/t/joplin-server-documentation/24026
4 minimum. Joplin doesn’t need a server though. Just configure the storage backends to be whatever you want.
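For example, with the Joplin terminal app you can point sync at a backend directly, no Joplin Server involved. A rough sketch, assuming the WebDAV target (the target number and the example URL/credentials here are illustrative, not from the thread; check the Joplin docs for your backend’s exact keys):

```shell
# Hedged sketch: configure Joplin's CLI to sync to a WebDAV backend.
# Target 6 is WebDAV in Joplin's config scheme; the URL/user below are placeholders.
joplin config sync.target 6
joplin config sync.6.path "https://example.com/webdav/joplin"
joplin config sync.6.username "me"
joplin config sync.6.password "secret"
joplin sync
```

Same idea for filesystem, Nextcloud, S3, etc. — just a different target number and its matching keys.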
Christ. This guy just can’t help himself.
Yeah, you just close up the bags, push them to the bottom, then put magnets around the edges. The magnets attract to the metal pot and hold the bag in place. If you don’t want to use a metal pot and only have plastic, put a piece of metal or another magnet on the outside, then a magnet inside, and it’s the same effect.
Sure seems like you’re either sourcing these images wrong, or they’re missing something. The docs themselves even reference this command as a good way to test that the container is linked to the host hardware properly.
Maybe try starting a shell and finding if that executable exists in the image.
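Something like this, reusing the `jellyfin` container name from the command in this thread (the fallback to `sh` is just in case the image has no bash):

```shell
# Open an interactive shell in the running container
docker exec -it jellyfin /bin/bash
# If bash isn't in the image, use sh instead:
# docker exec -it jellyfin sh

# Then, inside the container, check whether the binary is on PATH:
command -v nvidia-smi || echo "nvidia-smi not found in this image"
```

If it’s not there, the image you’re pulling simply doesn’t ship it, and no amount of host-side config will make that command work.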
Magnets are outside of the bag and don’t come into contact with any food. They’re just metal magnets.
`docker exec -it jellyfin nvidia-smi`
Need some laws against this shit.
You need to be running the Nvidia container toolkit and specify the container be launched with that runtime if you want direct hardware access to enc/dec hardware.
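Roughly like this, assuming the NVIDIA Container Toolkit is already installed on the host (container name and volume paths are placeholders, not from the thread):

```shell
# Hedged sketch: launch Jellyfin with the NVIDIA runtime so the container
# can reach the GPU's encode/decode hardware. Paths are placeholders.
docker run -d --name jellyfin \
  --runtime=nvidia \
  -e NVIDIA_VISIBLE_DEVICES=all \
  -e NVIDIA_DRIVER_CAPABILITIES=all \
  -v /path/to/config:/config \
  -v /path/to/media:/media \
  -p 8096:8096 \
  jellyfin/jellyfin
```

On recent Docker versions `--gpus all` does the same job as the explicit runtime flag. Either way, `nvidia-smi` inside the container is the sanity check that it worked.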
AMD APU uses whatever system RAM is as VRAM, so…yeah. NPU as well.