You could try FreeFileSync. I use it for pretty much your exact use case, though my music library is much smaller and changes less often, so I haven’t tinkered with its automation. Manual sync works like a dream.
Try clicking the sign in button, then navigating back to the video without actually signing in. Seems to work every time I’ve tried it so far.
You’re entirely correct, but in theory they can give it a pretty good go, it just requires a lot more computation, developer time, and non-LLM data structures than these companies are willing to spend money on. For any single query, they’d have to get dozens if not hundreds of separate responses from additional LLM instances spun up on the side, many of which would be customized for specific subjects, as well as specialty engines such as Wolfram Alpha for anything directly requiring math.
LLMs in such a system would be used only as modules in a handcrafted algorithm, modules that do exactly what they’re good at in a way that is useful. For example, if you pass a specific context to an LLM with the right format of instructions and then ask it a yes-or-no question, even very small and lightweight models often give the same answer a human would. In this way, human-readable text can be converted into binary switches for an algorithmic state machine with thousands of branches of pre-written logic.
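As a toy sketch of that idea (everything here is hypothetical; `ask_llm` is a stub standing in for whatever small-model inference call a real system would make), the yes/no-switch routing might look like:

```python
# Toy sketch: an LLM's yes/no answer used as a boolean switch inside
# hand-written branching logic. `ask_llm` is a hypothetical stub; a real
# system would prompt a small model with strict "answer only YES or NO"
# instructions and parse the reply.

def ask_llm(context: str, question: str) -> bool:
    # Stub standing in for a real model call.
    canned = {"Is this a math question?": "2+2" in context}
    return canned.get(question, False)

def route_query(query: str) -> str:
    # Pre-written logic; the LLM only flips the switches.
    if ask_llm(query, "Is this a math question?"):
        return "wolfram_alpha"   # hand off to a specialty math engine
    return "general_llm"         # fall back to a general model

print(route_query("What is 2+2?"))       # -> wolfram_alpha
print(route_query("Tell me about cats"))  # -> general_llm
```

The point is that the model never generates the final answer here; it only converts fuzzy human text into a boolean the surrounding algorithm can branch on.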
Not only would this probably use an even more insane amount of electricity than the current approach of “build a huge LLM and let it handle everything directly”, it would take much longer to generate responses to novel queries.
Yep. In fact, Amazon devices can connect to other Amazon devices over their Sidewalk meshnet and get the wifi password that way. I’m never getting anything from Amazon more complicated than a screwdriver.
Up in the Hardware Information section of hyfetch, on the left.
Webtoon is still shitty in other ways. When they adapt a property, they want it their way, regardless of the author’s original vision. I’ve seen several stories that originated on Royal Road get Webtoon adaptations, and the adaptations always seem to change or cut important parts of the story: characters are made to look stupid, or entire sets of characters are replaced outright. Then, when something they cut inevitably turns out to be critically important to where the author was taking things, the story is forced to diverge substantially. They turn great stories into middling slop every single time.
Tweet not found, not even when I change the URL to go directly to Twitter. Was it deleted?
Not them, but I do! https://youtu.be/s1fxZ-VWs2U
Granted.
An infinite quantity of ice cold water now exists within the exact dimensions of your water bottle.
Water has mass.
Try KittyToy (itch.io).
I don’t listen to many podcasts, but those two are pretty great.
https://mediabiasfactcheck.com/orinoco-tribune-bias-and-credibility/
Overall, we rate Orinoco Tribune as extreme left biased and questionable due to its consistent promotion of anti-imperialist, socialist, and Chavista viewpoints. We rate them low factually due to their strong ideological stance, selective sourcing, the promotion of propaganda, and conspiracy theories related to the West.
The Orinoco Tribune has a clear left-leaning bias. It consistently supports anti-imperialist and Chavista perspectives (those who supported Hugo Chavez). The publication critiques U.S. policies and mainstream media narratives about countries opposing U.S. influence. Articles frequently defend the Venezuelan government and criticize opposition movements and foreign intervention.
Articles and headlines often contain emotionally charged language opposed to the so-called far-right of Venezuela, like this one: “Far Right Plots to Sabotage Venezuela’s Electrical System in Attempt to Disrupt the Electoral Process.” The story is translated from another source and lacks hyperlinked sourcing to support its claims.
Maybe don’t consider a pro-Maduro propaganda rag as a legitimate source for a conflict he’s directly involved in.
Maduro is a man who ordered his country to block Signal and social media, and who arrests, imprisons, and bans his political opposition. He has also expressed strong support for Russia’s invasion of Ukraine. Meanwhile, the citizens of his country have been starving for years under what is literally known as The Maduro Diet, and the middle class has vanished. He long ago forfeited his right to the benefit of the doubt. He is a despot who has now repeatedly falsified election results after mismanaging the country for years, and he calls his opposition fascists while being a fascist himself. That the people overwhelmingly want him gone is not some hegemonic plot by the evil West; it’s the natural consequence of his actions.
Router-level VPN is going to be more difficult to configure and cause more problems than just having it on all your devices. There are some games where online play just refuses to work if connecting through a VPN. Some mobile apps are the same. When a website blocks your currently selected server, and the usual solution is switching to another server, that’s going to be more difficult and more tedious when it’s configured at the router level. In addition, if you do something like using a self-hosted VPN in order to connect remotely to a media server on your home network, that becomes more difficult if your home router is on a different VPN.
If you’re trying to keep local devices in the building from phoning home and being tracked, a PiHole or router-level firewall might be a better solution. I think if you’re running a pfsense or opnsense router and are a dab hand with VLANs then maybe you could get what you’re looking for with router-level VPN, but it’s a huge hassle otherwise. Just put Mullvad on your computers and phones and call it a day.
Unfortunately I can’t even test Llama 3.1 in Alpaca because it refuses to download, showing some error message with the important bits cut off.
That said, the Alpaca download interface seems much more robust, allowing me to select a model and then select any version of it for download, not just apparently picking whatever version it thinks I should use. That’s an improvement for sure. On GPT4All I basically have to download the model manually if I want one that’s not the default, and when I do that there’s a decent chance it doesn’t run on GPU.
However, GPT4All allows me to plainly see how I can edit the system prompt and many other parameters the model is run with, and even configure multiple sets of parameters for the same model. That allows me to effectively pre-configure a model in much more creative ways, such as programming it to be a specific character with a specific background and mindset. I can get the Mistral model from earlier to act like anything from a very curt and emotionally neutral virtual intelligence named Jarvis to a grumpy fantasy monster whose behavior is transcribed by a narrator. GPT4All can even present an API endpoint to localhost for other programs to use.
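For anyone curious what using that local endpoint looks like: GPT4All’s API server (disabled by default; it has to be switched on in settings) speaks the OpenAI chat-completions format. This is a sketch only, and the port, URL, and model name below are assumptions from my setup — check your own settings:

```python
import json

# Sketch of a request to GPT4All's local OpenAI-compatible API server.
# The port (4891) and model name are assumptions; verify in your settings.
BASE_URL = "http://localhost:4891/v1/chat/completions"

payload = {
    "model": "Nous Hermes 2 Mistral DPO",  # whatever model you have loaded
    "messages": [
        {"role": "system", "content": "You are Jarvis, a curt, emotionally neutral virtual intelligence."},
        {"role": "user", "content": "Status report."},
    ],
    "max_tokens": 128,
}

# With the server enabled, a plain HTTP POST does the rest, e.g.:
#   requests.post(BASE_URL, json=payload).json()
print(json.dumps(payload, indent=2))
```

Since it mimics the OpenAI API, most tools that accept a custom base URL can point at it directly.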
Alpaca seems to have some degree of model customization, but I can’t tell how well it compares, probably because I’m not familiar with using ollama and I don’t feel like tinkering with it since it doesn’t want to use my GPU. The one thing I can see that’s better in it is the use of multiple models at the same time; right now GPT4All will unload one model before it loads another.
I have a fairly substantial 16 GB AMD GPU, and when I load Llama 3.1 8B Instruct 128k (Q4_0), it gives me about 12 tokens per second. That’s fast enough for me, but only 50% faster than CPU (which I test by loading mlabonne’s abliterated Q4_K_M version, which runs on CPU in GPT4All, though I have no idea whether that’s actually meant to be comparable in performance).
Then I load in Nous Hermes 2 Mistral 7B DPO (also Q4_0) and it blazes through at 50+ tokens per second. So I don’t really know what’s going on there. Seems like performance varies a lot from model to model, but I don’t know enough to speculate why. I can’t even try Gemma2 models, GPT4All just crashes with them. I should probably test Alpaca to see if these perform any different there…
I actually found GPT4All while looking into Kompute (Vulkan Compute), and it led me to question why anyone would bother with ROCm or OpenCL at all.
At least their username is accurate!
Web ads are a security risk that even the FBI has acknowledged, so your friends should be aware that having uBlock Origin installed is nearly as important as having virus protection.
Well, not free per se; DVDs and Blu-rays and the computer in my closet I use to host Jellyfin for the rest of the home do cost money… But they sure as hell can’t jack up the price after the fact. Quite the contrary: the hardware needed for it is getting cheaper over time. I can also use it even when the internet is down.
The dreaded onosecond happens to the best of us.