• 1 Post
  • 21 Comments
Joined 1 year ago
Cake day: June 17th, 2023






  • The image needs to have already been downloaded the moment the client even fetches it, or you can use the image to track if a particular user is online/has read the message.

    Oh wow… That’s an excellent point. And even if the client downloads it the moment it fetches the message, that would still be enough to help determine when somebody is using Lemmy. I don’t think advertisers would have a reason to do that¹, but I wouldn’t put it past a malicious individual to use it to create a schedule of when somebody else is active.

    ¹ It’s probably easier for them to host their own instance and track the timestamp of when somebody likes/dislikes comments and posts since that data is shared through federation.
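
    To make that concern concrete, here’s a minimal sketch of the kind of tracking being described (purely hypothetical; the server, file name, and URLs are made up, and this isn’t anything Lemmy or Sync actually does). Each recipient gets a unique image URL, so the first request for it reveals when they read the message:

    ```python
    # Hypothetical tracking-pixel server: each recipient gets a unique image URL,
    # so the first request for it reveals when (and from which IP) they read it.
    from datetime import datetime, timezone

    from flask import Flask, request, send_file

    app = Flask(__name__)

    @app.route("/img/<token>.png")
    def tracked_image(token):
        # The token identifies which message/recipient this image was embedded in.
        timestamp = datetime.now(timezone.utc).isoformat()
        print(f"[{timestamp}] token={token} fetched by {request.remote_addr} "
              f"UA={request.headers.get('User-Agent', '?')}")
        # Serve an ordinary-looking image; the logging above is the real payload.
        return send_file("innocuous.png", mimetype="image/png")

    if __name__ == "__main__":
        app.run(port=8080)
    ```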

    This needs to be implemented in the backend. Images already get downloaded to and served from the server’s pict-rs store in some instances, so there’s code to handle this problem already.

    That would be ideal, I agree. This comment on the GitHub issue explains why some instances would want the ability to disable it, though. If it does eventually get implemented, having Sync as a fallback for instances where media proxying is disabled would be a major benefit for us Sync users.

    A small side note: that comment also points out the risk of a media proxy downloading illegal media. I don’t necessarily think lj would need to worry about it in the same way, though. From my understanding, the risk is that an instance would download the media immediately after receiving a local or federated post pointing to it. An on-demand proxy would (hopefully) not run the same risk, since it would require action (or really bad timing) on the part of a user.
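
    For comparison, here’s a rough sketch of the kind of on-demand proxy being discussed, assuming a small Flask service with made-up route and parameter names (this isn’t Lemmy’s or Sync’s actual API). Nothing gets fetched from the third-party host until a client explicitly asks the proxy for it:

    ```python
    # Hypothetical on-demand media proxy: the upstream image is only fetched when
    # a client requests it through this endpoint, so no eager mirroring happens.
    from urllib.parse import urlparse

    import requests
    from flask import Flask, Response, abort, request

    app = Flask(__name__)

    @app.route("/proxy")
    def proxy_image():
        url = request.args.get("url", "")
        if urlparse(url).scheme not in ("http", "https"):
            abort(400, "Only http(s) URLs can be proxied")

        # The client's IP and user agent never reach the third-party host;
        # the proxy makes the request on their behalf.
        upstream = requests.get(url, timeout=10, stream=True)
        if upstream.status_code != 200:
            abort(502, "Upstream fetch failed")

        content_type = upstream.headers.get("Content-Type", "")
        if not content_type.startswith("image/"):
            abort(415, "Refusing to proxy non-image content")

        return Response(upstream.iter_content(chunk_size=64 * 1024),
                        content_type=content_type)

    if __name__ == "__main__":
        app.run(port=8080)
    ```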

    On the other hand, such a system would also pose a privacy problem: suppose someone foolishly believes Lemmy’s messaging feature is secure and sends a message with personal pictures (nudes, medical documents, whatever). Copying that data around to other servers probably isn’t what you want.

    Fair, but it’s a bit of a moot point. Sending the message between instances is already copying that data around, and even if it’s between two users of a single instance, it’s not end-to-end encrypted. Instance admins can see absolutely everything their users do.

    Orbot can do per-app VPNs for free if you’re willing to take the latency hit.

    Interesting! I wasn’t aware that there were any Android VPNs capable of doing per-app tunneling.



  • For spoofing the user agent, I still think that some level of obscurity could help. The IP address is the most important part, but when an internet connection is shared by multiple people, knowing the type and version of a device helps disambiguate between people behind that IP (for example, a house with an Android user and an iPhone user). I wouldn’t say not having the feature is a deal breaker, but I feel like any step towards making it harder to serve targeted ads is a good one.

    Fair point on just using a regular VPN, but I’m hoping for something a bit more granular. It’s not that all traffic would need to be proxied, though. If I use some specific Lemmy instance or click on an image/link, that was my choice to trust those websites. The concern here is that simply scrolling past an embedded image will make a request to some third-party website that I don’t trust.
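
    As a rough illustration of the “granular” part (the host name, the local SOCKS proxy, and the generic user agent below are all assumptions for the sketch, not anything Sync exposes today): only requests to hosts outside the trusted instance get routed through the proxy and given a generic user agent.

    ```python
    # Sketch: route only third-party media requests through a local SOCKS proxy
    # (e.g. one provided by Orbot) and send a generic user agent, while requests
    # to an already-trusted instance go out directly. Requires requests[socks].
    from urllib.parse import urlparse

    import requests

    TRUSTED_HOSTS = {"example-instance.social"}      # hypothetical trusted instance
    PROXIES = {"http": "socks5h://127.0.0.1:9050",   # hypothetical local SOCKS proxy
               "https": "socks5h://127.0.0.1:9050"}
    GENERIC_UA = "Mozilla/5.0 (Android 14; Mobile)"  # blend in instead of a unique UA

    def fetch(url: str) -> bytes:
        host = urlparse(url).hostname or ""
        use_proxy = host not in TRUSTED_HOSTS
        response = requests.get(
            url,
            headers={"User-Agent": GENERIC_UA},
            proxies=PROXIES if use_proxy else None,
            timeout=10,
        )
        response.raise_for_status()
        return response.content
    ```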



  • In other posts, I’ve tried to point out how some of the articles and comments around WEI are more speculative than factual and received downvotes and accusations of boot-licking for it. Welcome to the club, I guess.

    The speculation isn’t baseless, but I’m concerned about the lack of accurate information about WEI in its current form. If the majority of people believe WEI is immediately capable of enforcing web page integrity, share that incorrect fact around, and incite others, it’s going to create a very good excuse for dismissing all dissenting feedback about WEI as FUD. The first post linking to the GitHub repository brought in so many pissed-off and uninformed people that the authors of the proposal actually locked the repo issues, preventing anyone else from voicing their concerns or providing examples of how implementing the specification could have unintended or negative consequences.

    Furthermore, by highlighting only the DRM and anti-adblock aspects of WEI, the coverage fails to give proper attention to many of the other valid concerns, like:

    • Discrimination against older hardware/software that doesn’t support system-level environment integrity enforcement (e.g. Secure Boot)
    • The ability for WEI to be used to discriminate between browsers and provide poor (or no) service to browsers not created by specific corporations
    • The possibility of WEI being used to force the use of browsers provided by hostile vendors
    • The ability for it to be used to lock out self-built or forked browsers
    • The potential for a lack of diversity among attesters, allowing a cartel of attesters to refuse validation for browsers they dislike

    I very well could be wrong, but I think our (the public’s) opinions would have held more weight if they were presented in a rational, informed, and objective manner. Talking to software engineers as people generally goes down better than treating them like emotionless cogs in the corporate machine, you know?






  • Having thought about it for a bit, I can see how this proposal could be abused by authoritarian governments.

    Suppose a government—say, Wadiya—mandated that all websites allowed on the Wadiyan Internet must ensure that visitors are using a browser from a list of verified browsers. That list is provided by the Wadiyan government and includes: Wadiya On-Line, Wadiya Explorer, and WadiyaScape Navigator. All three of those browsers are developed in cooperation with the Wadiyan government.

    Each of those browsers also happens to send a list of visited URLs to a Wadiyan government agency, and routinely scans the hard drive for material deemed “anti-social.”

    Because the attestations are cryptographically verified, citizens would not be able to fake the browser environment. They couldn’t just download Firefox and install an extension to pretend to be Wadiya Explorer; they would actually have to install the spyware browser to be able to browse websites available on the Wadiyan Internet.
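
    For anyone curious why the cryptography part matters, here’s a toy sketch using a generic Ed25519-signed token (this isn’t WEI’s actual wire format). Unless the client holds the attester’s private key, it can’t produce a token the server will accept, so pretending to be an approved browser doesn’t work:

    ```python
    # Minimal signed-attestation check (generic Ed25519, not WEI's real format):
    # only whoever holds the attester's private key can produce a token the
    # server accepts, so a client can't simply claim to be an approved browser.
    import json

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # The attester's key pair. In reality only the public half ships to servers.
    attester_key = Ed25519PrivateKey.generate()
    attester_public = attester_key.public_key()

    def issue_token(browser: str) -> tuple[bytes, bytes]:
        """What an attester does: sign a claim about the client's environment."""
        payload = json.dumps({"browser": browser, "integrity": "verified"}).encode()
        return payload, attester_key.sign(payload)

    def server_accepts(payload: bytes, signature: bytes) -> bool:
        """What a website does: reject anything not signed by the trusted attester."""
        try:
            attester_public.verify(signature, payload)
        except InvalidSignature:
            return False
        return json.loads(payload).get("integrity") == "verified"

    # A genuine token verifies; a tampered claim (e.g. Firefox pretending to be
    # "Wadiya Explorer") does not, because the signature no longer matches.
    payload, sig = issue_token("Wadiya Explorer")
    assert server_accepts(payload, sig)
    forged = payload.replace(b"Wadiya Explorer", b"Firefox 115")
    assert not server_accepts(forged, sig)
    ```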


  • From what I can tell, that’s basically what this is trying to do. Some company can sign a source image, then other companies can sign the changes made to the image. You can see that the image was created by so-and-so and then manipulated by so-and-other-so, and if you trust them both, you can trust the authenticity of the image.

    It’s basically git commit signing for images, but with the exclusionary characteristics of certificate signing (for their proposed trust model, at least; it could be used more like PGP, too).


  • eth0p@iusearchlinux.fyi to Technology@lemmy.world: *Permanently Deleted*

    I skimmed through some of the specifications, and it appears to be voluntary. In a way, it’s similar to signing git commits: you create an image and choose to give provenance to (sign) it. If someone else edits the image, they can choose to keep the record going by signing the change with their identity. Different images can also be combined, and that would be noted down and signed as well.

    So, suppose I see some image that claims to be an advertisement for “the world’s cheapest car,” a literal rectangle of sheet metal and wooden wheels. I could then inspect the image to try and figure out if that’s a legitimate product by BestCars, Ltd or if someone was trolling/memeing. It turns out that the image was signed by LegitimateAdCompany, Inc and combined signed assets from BestCars, Ltd and StockPhotos, LLC. Seeing that all of those are legitimate businesses, that the chain of provenance isn’t broken, and that BestCars is known to work with LegitimateAdCompany, I can be fairly confident that it’s not a meme photo.
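
    To make the “git commit signing for images” analogy concrete, here’s a toy provenance chain built from Ed25519 signatures (a heavy simplification; the real C2PA manifests and trust model are more involved). Each party signs the current image hash together with the previous signature, so any unsigned edit breaks the chain:

    ```python
    # Toy provenance chain: each step signs (previous signature + current image
    # hash), so the history can be verified link by link. Not the real C2PA format.
    import hashlib

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    def sign_step(signer_key, image_bytes: bytes, prev_signature: bytes) -> bytes:
        digest = hashlib.sha256(image_bytes).digest()
        return signer_key.sign(prev_signature + digest)

    def verify_chain(images, signatures, public_keys) -> bool:
        """Check every (image, signature, signer) link in order."""
        prev_sig = b""
        for image_bytes, sig, pub in zip(images, signatures, public_keys):
            digest = hashlib.sha256(image_bytes).digest()
            try:
                pub.verify(sig, prev_sig + digest)
            except InvalidSignature:
                return False
            prev_sig = sig
        return True

    # StockPhotos signs the original; LegitimateAdCompany signs its edited version.
    stock_key, ad_key = Ed25519PrivateKey.generate(), Ed25519PrivateKey.generate()
    original, edited = b"raw photo bytes", b"photo bytes with ad text composited"

    sig1 = sign_step(stock_key, original, b"")
    sig2 = sign_step(ad_key, edited, sig1)

    assert verify_chain([original, edited], [sig1, sig2],
                        [stock_key.public_key(), ad_key.public_key()])
    ```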

    Now, with that being said…

    It doesn’t preclude scummy camera or phone manufacturers from generating identities unique to their customers and/or hardware and signing photos without the user’s consent. Thankfully, at least, it seems like you can just strip away all the provenance data by copy-pasting the raw pixel data into a new image using a program that doesn’t support it (Paint?).

    All bets are off if you publish or upload the photo first, though—a perceptual hash lookup could just link the image back to the original one that does contain provenance data.
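
    As a quick sketch of that last point, using the Pillow and imagehash libraries (the file names are made up): a perceptual hash of a stripped, re-saved copy stays close to the hash of the published original, so the two can still be linked.

    ```python
    # Perceptual hashes compare visual content rather than bytes, so stripping
    # metadata or re-encoding barely changes the hash. (pip install Pillow ImageHash)
    import imagehash
    from PIL import Image

    # Hypothetical files: the published original and a metadata-stripped re-save.
    original = imagehash.phash(Image.open("published_with_provenance.jpg"))
    stripped = imagehash.phash(Image.open("copy_pasted_into_paint.png"))

    # A small Hamming distance means it is almost certainly the same picture.
    if original - stripped <= 8:
        print("Likely the same image; the stripped copy can be linked back to it.")
    ```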


  • Back when I was in school, we had typing classes. I’m not sure if that’s because I’m younger than you and they assumed we had basic computer literacy, or older than you and they assumed we couldn’t type at all. In either case, we used Macs.

    It wasn’t until university that we even had an option to use Linux on school computers, and that’s only because they have a big CS program. They’re also heavily locked-down Ubuntu instances that re-image the drive on boot, so it’s not like we could tinker much or learn how to install anything.

    Unfortunately—at least in North America—you really have to go out of your way to learn how to do things in Linux. That’s just something most people don’t have the time for, and there’s not much incentive driving people to switch.


    A small side note: I’m pretty thankful for Valve and the Steam Deck. I feel like it’s been doing a pretty good job teaching people how to approach Linux.

    By going for a polished, console-like experience with game mode by default, it shows people that Linux isn’t a big, scary mish-mash of terminal windows and obscure FOSS programs without a consistent design language. And by also making it possible to enter a desktop environment and plug in a keyboard and mouse, it lets people explore a more conventional Linux graphical environment if they’re comfortable trying that.




  • eth0p@iusearchlinux.fyi to Technology@lemmy.world: *Permanently Deleted*

    “Impede the replacement of” and “compatible battery” have a lot of room for interpretation. I hope they’re defined explicitly somewhere, or else we’re going to find implementations that effectively restrict non-OEM batteries while still adhering to the letter of the law.

    For example, any battery lacking a cryptographically-verified “certification” handshake could be subject to safety restrictions such as:

    • Limited maximum amperage draw, achieved by under-clocking the SoC and sleeping performance cores.

    • Lower thermal limits while charging the device, meaning fast charging may be limited or preemptively disabled to ensure that the battery does not exceed an upper threshold of you-might-want-to-put-it-in-the-fridge degrees.

    • Disabling wireless charging capabilities, just in case magnetic induction affects the uncertified battery full of unknown and officially-untested components.

    • A pop-up warning the user every time the device is plugged into or unplugged from a charger.

    All of that would technically meet the condition insofar as it’s neither impeding the physical replacement nor rendering the device inoperable, but it would still effectively make the phone useless unless you pay for a (possibly overpriced) OEM part.

    If they explicitly defined “Impede the replacement of” as “prevent replacement of, or significantly alter the user experience as a result of replacing,” and “compatible battery” as “electrically-compatible battery,” all those cases would be covered.

    Might be a bit of a cynical take, but I don’t have too much faith in the spirit of the law being adhered to when profits are part of the equation.