Currently studying CS and some other stuff. Best known for previously being top 50 (OCE) in LoL, expert RoN modder, and creator of RoN:EE’s community patch (CBP). He/him.

(header photo by Brian Maffitt)

  • 149 Posts
  • 221 Comments
Joined 2 years ago
Cake day: June 17th, 2023

  • the atrocious webp format

    I continue to be confused by the level of widespread hate WebP still gets. It’s old enough to be widely (albeit not universally) supported in software like web browsers, but new enough to provide similar-or-better (usually better) lossless compression than PNG (21,578 bytes for the original image) and typically better lossy compression than JPEG at comparable perceived quality, especially for the kinds of images typically shared on the internet (rather than, say, images saved directly from a DSLR camera). It’s why servers bother to re-encode JPEG images to WebP for delivery - they wouldn’t waste the compute time to re-compress if it weren’t generally worth doing (there’s a rough sketch of that kind of conversion at the end of this comment).

    I could understand it if this were, say, 10-15 years ago, when the format was still not super widely supported - but that’s basically where we are with JPEG XL and AVIF support right now too. If one of those two had exactly the level of support that WebP does right now then yes, of course we should probably use one of them instead - but we’re not there yet. Until we are, WebP often has the best compromise between compatibility and compression efficiency as far as image formats go, and that’s why a lot of sites do this re-compression thing using WebP. I gave some examples using digital art (one of the things I was compressing a lot at the time) a year ago in a related discussion: https://lemmy.world/post/6665251/4462007

    A news website local to me recently-ish started delivering AVIF-compressed (or probably re-compressed) images the same way a lot of sites currently do it for WebP, because my browser supports AVIF - so at least we’re starting to see a token amount of uptake of the next-gen formats in the wild.
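
    Since this thread is about WebP specifically, here’s a very rough sketch of what that kind of server-side re-compression can look like using Python’s Pillow library - the filenames and quality value are made up purely for illustration, and real sites use all sorts of different tooling for this:

    from PIL import Image  # Pillow

    # Hypothetical input file, purely for illustration.
    img = Image.open("artwork.png")

    # Lossless WebP - typically smaller than the equivalent PNG.
    img.save("artwork.webp", "WEBP", lossless=True)

    # Lossy WebP - usually beats JPEG at comparable perceived quality for the
    # kinds of images commonly shared online (screenshots, digital art, etc.).
    img.save("artwork_lossy.webp", "WEBP", quality=80)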

  • A more onion-y title would be something like “Conservative commentator quotes Marx, calls for mass protests and strikes”.

    The actual title is more just !ironicorsurprisingnews than !nottheonion material imo


    Edit: You’ve editorialized the title?

    Posts must be:

    1. Links to news stories from…
    2. …credible sources, with…
    3. …their original headlines, that…
    4. …would make people who see the headline think, “That has got to be a story from The Onion, America’s Finest News Source.”

    Unless it was changed post-publication, the original is

    Conservative NYT Columnist David Brooks Calls for ‘National Civic Uprising’ to Defeat Trumpism – Complete With ‘Mass Rallies, Strikes’

    Imo that’s actually more onion-y than the changed title

  • You’re making assumptions about how they work based on your intuition - luckily we don’t need to do much guesswork about how the sorts are actually implemented because we can just look at the code to check:

    CREATE FUNCTION r.scaled_rank (score numeric, published timestamp with time zone, interactions_month numeric)
        RETURNS double precision
        LANGUAGE sql
        IMMUTABLE PARALLEL SAFE
        -- Add 2 to avoid divide by zero errors
        -- Default for score = 1, active users = 1, and now, is (0.1728 / log(2 + 1)) = 0.3621
        -- There may need to be a scale factor multiplied to interactions_month, to make
        -- the log curve less pronounced. This can be tuned in the future.
        RETURN (
            r.hot_rank (score, published) / log(2 + interactions_month)
    );
    

    And since it relies on the hot_rank function:

    CREATE FUNCTION r.hot_rank (score numeric, published timestamp with time zone)
        RETURNS double precision
        LANGUAGE sql
        IMMUTABLE PARALLEL SAFE RETURN
        -- after a week, it will default to 0.
        CASE WHEN (now() - published) > '0 days'
            AND (now() - published) < '7 days' THEN
            -- Use greatest(2,score), so that the hot_rank will be positive and not ignored.
            log (greatest (2, score + 2)) / power (((EXTRACT(EPOCH FROM (now() - published)) / 3600) + 2), 1.8)
        ELSE
            -- if the post is from the future, set hot score to 0. otherwise you can game the post to
            -- always be on top even with only 1 vote by setting it to the future
            0.0
        END;
    

    So if there are no further changes made elsewhere in the code (which may not be true!), it appears that hot applies no extra negative weighting once a post’s score drops below 0, because it clamps score + 2 to a minimum of 2 in its calculation. If that’s correct, the posts you’re pointing out are essentially being ranked as if their voting score was 0, which I hope helps to explain things (there’s a rough Python version of the maths at the end of this comment).


    edit: while I was looking for the function, someone else beat me to it - and it looks like the hot_rank function I posted may not be the current version, but hopefully you get the idea regardless!
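
    For anyone who’d rather not read SQL, here’s my rough Python translation of the two functions above (treat it as illustrative rather than authoritative, especially given the caveat in the edit - note that Postgres’ log() is base 10):

    import math
    from datetime import datetime, timedelta, timezone

    def hot_rank(score: float, published: datetime, now: datetime) -> float:
        # Posts from the future, or older than a week, get a hot score of 0.
        age = now - published
        if not (timedelta(0) < age < timedelta(days=7)):
            return 0.0
        hours = age.total_seconds() / 3600
        # greatest(2, score + 2): anything below a score of 0 is clamped, so a
        # heavily downvoted post is ranked the same as one sitting at 0.
        return math.log10(max(2, score + 2)) / ((hours + 2) ** 1.8)

    def scaled_rank(score: float, published: datetime, interactions_month: float, now: datetime) -> float:
        # The +2 avoids a divide-by-zero for communities with no recent activity.
        return hot_rank(score, published, now) / math.log10(2 + interactions_month)

    # e.g. a post downvoted to -50 gets the same hot score as one sitting at 0:
    now = datetime.now(timezone.utc)
    published = now - timedelta(hours=3)
    assert hot_rank(-50, published, now) == hot_rank(0, published, now)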

  • After many years of selectively evaluating and purchasing bundles as my main source of new games, I’ve come to wonder if it would’ve been better to just buy individual games at whatever price was available when I actually wanted to play them - the rate at which I get through games is far lower than the rate at which games show up in “good” bundles. In the end I’m not even sure I’ve saved money (because of how many games have been bought but are as yet unplayed), and it does take more time to evaluate whether something’s a good deal or not.

    The upside is way more potential variety of games to pull from in my library, but if I only play at most like 1-2 dozen new games a year then I’m not sure that counts for much 🫠