I did nothing and I’m all out of ideas!

  • 0 Posts
  • 41 Comments
Joined 1 year ago
Cake day: June 11th, 2023


  • Did she tell you why she was doing it or did she only give you some platitudes and thanks?

    Because a lot of people are not used to being talked to, and even if they are all rage inside the car, they get scared pretty fast when someone fully geared up on a motorbike knocks on their window - and tbh I think that’s pretty understandable.

    Still, good on you. I usually just let them go and take a different road - when feasible. There’s no need to waste a good riding day on some knucklehead.


  • Mechanize@feddit.it to Linux@lemmy.ml · Proton VPN · edited · 3 months ago

    Considering you are not using the Flatpak anymore it is, indeed, strange. The only reasons I can think of are: your network manager is routing your traffic through the wrong network interface (if you go to an IP-checking site like ipinfo, do you see your own IP or the VPN’s? There’s a quick sketch of that check at the end of this comment), or you have WebRTC enabled and the broadcaster is getting your real IP through that.

    The first case can get pretty complicated, but it is probably either an error during the installation of the VPN app, or you set up multiple network managers and it gets confused about which one to configure. You should also enable the Advanced Kill Switch in the configuration.

    For the second case you could try adding something like the Disable WebRTC add-on for Firefox and checking if that fixes it. Remember to enable it for Private Windows too.

    The last thing I can think of is that you allowed the broadcaster to get your real geolocation (in Firefox it should be a small icon on the left of the address bar), or that you are leaking some kind of information somewhere: there are a bunch of sites that check for IP leaks, but I don’t know if that goes too deep for you.
    If you want to check anyway, the first two results from DDG are browserleaks and ipleak. Mullvad offered one too, but it is currently down.

    EDIT: if you enable the Advanced Kill Switch and the app is working correctly, the internet will not work while you are not connected to a VPN server, or until you disable the switch again, so pay attention to that.
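
    As promised, a minimal sketch of the IP check from the first paragraph (plain Python, nothing VPN-specific assumed; ipinfo is the site mentioned above, and any IP-echo service works the same way):

    ```python
    # Print the IP address the outside world sees for this machine.
    import json
    import urllib.request

    def public_ip() -> str:
        # ipinfo.io/json answers with JSON like {"ip": "203.0.113.7", ...}
        with urllib.request.urlopen("https://ipinfo.io/json", timeout=10) as resp:
            return json.load(resp)["ip"]

    if __name__ == "__main__":
        print("Apparent public IP:", public_ip())
    ```

    Run it once with the VPN connected and once without: if both runs print the same address, your traffic is not going through the tunnel, and the routing/interface side is the thing to dig into.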


  • Because, as pointed out on the page, Servo is being developed as a(n embeddable) Rendering Engine, not as a full-blown end-user Browser.
    Its alternatives are not Chrome, Safari or Firefox, but WebKit, Blink and Gecko.

    There’s an example GUI called Servoshell, but it is more of a testing ground and an example of how to embed the engine in an app than a serious alternative to anything currently on the market.

    This kind of work is already difficult and daunting. Adding a full GUI on top would make it completely impossible at the size and financial backing Servo currently has.

    Big words aside, it just means that Servo wants to be only one of the parts that compose a real browser: the one that takes HTML, JavaScript and WASM and translates them into the things you see on your monitor. All the user-facing functionality is left to the devs of the app that embeds it - there’s a rough sketch of that split below.
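
    To make the split concrete, here is a purely illustrative sketch - `RenderingEngine` and its methods are hypothetical names, not Servo’s actual embedding API:

    ```python
    # Hypothetical names throughout - this is NOT Servo's real API, just the idea.

    class RenderingEngine:
        """The engine's job: HTML/JS/WASM in, pixels out."""
        def load_url(self, url: str) -> None: ...
        def render_frame(self) -> bytes: ...  # pixels for the host app to display

    class MyBrowser:
        """The embedder's job: everything the user actually touches."""
        def __init__(self) -> None:
            self.engine = RenderingEngine()
            self.tabs: list[str] = []  # tabs, history, bookmarks, settings, UI...

        def open_tab(self, url: str) -> None:
            self.tabs.append(url)
            self.engine.load_url(url)  # delegate the hard part to the engine
    ```

    Everything in `MyBrowser` is what Servoshell only stubs out, and what a real product would have to build and maintain on its own.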


  • I feel there’s some kind of miscommunication going on here.

    Probably I’m not understanding what you are putting forward, but to be clear: they are not doing this because they want to. They are doing it because the DMA forces them to.
    It’s true that they were allegedly already working on some kind of interoperability layer. For years now. But no evidence of it being more than lip service to avoid being regulated has ever surfaced - as far as I know.

    Which would have been in line with your “Do Nothing”.


  • “as an unwilling Whatsapp user the ability to migrate without having to convince all my social circles to do anything but check a checkbox sounds like a huge step forward.”

    That’s the point. I feel it will not be a “simple checkbox”, and they will make it the most obnoxious process they can, using the Best Dark Patterns the industry has to offer.

    The general public is already not interested in the alternatives or in the concept of interoperability - wanting something that Just Works™ - so putting even the smallest step (and some scary text!) in front of them will make the percentage of willing people even lower.
    And that’s not all. As Threema’s spokesperson portrays it in the article, it is pretty clear that Meta will just try to make maintaining the communication layer as cumbersome as they can - both technically and bureaucratically.
    They are explicitly the ones keeping the reins of the standard, the features, the security model, the exchanged data, and of who gets approved, how, and when.

    So, on one side, if they make it hard and scary enough to tank the usage rate, they will have the excuse that there aren’t enough people using it to justify prioritizing fixes or new features; on the other side, if maintaining the interoperability is difficult and time-consuming enough, the people and businesses behind the alternatives and wrappers will have no incentive to keep it up for the long haul. As we can already see in the article.

    Is it better than nothing? Sure, probably. Will it be slow-cooking, easy to break, easy to get excluded from, the bare minimum to comply with the letter but not the spirit of the law? I feel that’s a pretty good bet to make.

    Let’s be clear: I will be extremely happy if all the red flags and warning bells I saw in the article end up being figments of my imagination. But yes, I’m very pessimistic - maybe even too much - when I see this kind of corporate speech and these keywords.


  • “One of the core requirements here, and this is really important, is for users for this to be opt-in,” says Brouwer. “I can choose whether or not I want to participate in being open to exchanging messages with third parties. This is important, because it could be a big source of spam and scams.”

    Let me translate this for you: "We will make users jump through the most cumbersome, frustrating and inefficient hoops we can think of to enable interoperability. And making it default to off will mean people using other apps will need to find other channels to ask for it to be enabled on our users’ end, making it worthless.

    And don’t forget: we will put up a bunch of scary warnings, and only allow going all in, with no middle ground or granularity!"

    Great stuff, thank you. I can’t wait.

    “We don’t believe interop chats and WhatsApp chats can evolve at the same pace,” he says, claiming it is “harder to evolve an open network” compared to a closed one.

    Ah, so they are going for Apple’s approach with iMessage and Android SMS. Cool, cool.

    I hope my corporate-to-common translator is broken, because this just sounds bad.


  • “Yeah. GDPR should have been implemented as a mandatory part of HTML or even HTTP that interacts with a builtin browser feature.”

    Well, it kind of is. The Do Not Track header has recently seen a court win in Germany (source):

    It turned out that the judge agreed with vzbv, ruling that the social media giant is no longer allowed to warn users it doesn’t respect DNT signals. That’s because, under GDPR, the right to opt out of web tracking and data collection can also be exercised using automated procedures.

    And it is basically the same in California too (source) - a sketch of honoring both signals follows the quote:

    GPC is a valid do-not-sell-my-personal-information signal according to the California Consumer Privacy Act (CCPA), which stipulates that websites are legally required to respect a signal sent by users who want to opt-out of having their personal data sold.
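
    Both arrive as plain HTTP request headers (`DNT: 1` for Do Not Track, `Sec-GPC: 1` for Global Privacy Control), so checking them needs nothing beyond Python’s standard library:

    ```python
    # Read the DNT and GPC opt-out signals from incoming request headers.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Both signals are ordinary headers set by the browser.
            dnt = self.headers.get("DNT") == "1"      # Do Not Track
            gpc = self.headers.get("Sec-GPC") == "1"  # Global Privacy Control
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(f"opted out of tracking: {dnt or gpc}\n".encode())

    if __name__ == "__main__":
        HTTPServer(("127.0.0.1", 8000), Handler).serve_forever()
    ```

    The hard part was never technical - it is making the legal system treat that one header as a binding opt-out, which is exactly what these rulings are about.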


  • Any foundation model is trained on a subset of Common Crawl.

    All the data in there is, arguably, copyrighted by one individual or another. There is no equivalent dataset, open or closed source.

    Every single post, page, blog and site has a copyright holder. In the last year big companies have started changing their TOS so that they can use, relicense and generally sell the data you host on their services as their own for AI training, so potentially some small parts of Common Crawl will be licensable in bulk - or obtained directly from the source.

    This still leaves out the majority of the data directly or indirectly used today, even if you were willing to pay, because it is unfeasible to track down and contract every single rights holder.

    On the other side there has been work on using less, but more heavily curated, data, which could potentially produce good small, domain-specific models. Still, they will not be like the ones we currently have, and the open source community will not have access to the same amount and quality of data.

    It’s an interesting problem, and I’m personally really curious to see where it leads. If you want to poke at the raw data yourself, there’s a small sketch below.
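
    For scale: Common Crawl publishes its archives as WARC files anyone can download. A minimal sketch of walking one, assuming the third-party warcio package and a locally downloaded archive (the filename is a placeholder):

    ```python
    # Iterate the pages captured in one Common Crawl WARC archive.
    from warcio.archiveiterator import ArchiveIterator

    with open("example.warc.gz", "rb") as stream:  # placeholder filename
        for record in ArchiveIterator(stream):
            if record.rec_type == "response":  # an actual crawled page
                url = record.rec_headers.get_header("WARC-Target-URI")
                body = record.content_stream().read()
                print(url, len(body), "bytes")
    ```

    Every one of those URLs is a potential rights holder - which is the whole licensing problem in a nutshell.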