• 99 Posts
  • 1.75K Comments
Joined 2 years ago
Cake day: June 15th, 2023


  • The quoted image does not say so; they do not say the native packaging from your distribution is borderline unusable. That judgement was added by YOU. The devs just state that the package on Archlinux is not officially supported, without making a judgement (at least in the quoted image).

    As for the Fedora issue, that is a completely different thing. That one is also a Flatpak, so it’s not the package format itself that is the issue. Fedora packaged the application as a Flatpak their own way and presented it as the official product. That is a completely different issue! It has nothing to do with Archlinux packaging it in their own native format. Archlinux never presented it as the official package either, and it does not look like the official Flatpak version.

    So where do the developers say that anything that is not their official Flatpak package is “borderline unusable”?


  • And then there is software like OBS, which is known for being borderline unusable when not using the only officially supported way to use it on Linux outside of Ubuntu – which is Flatpak.

    But why is that? I mean, just because it is packaged by someone else does not mean it’s unusable. So it’s not the package format’s issue, but your distribution packaging it wrong, right? I installed the Flatpak version, because the developers recommended it to me. I’m not sure why the Archlinux package should be unusable (and I don’t want to mess around with it, because I don’t know what part is unusable).


  • Those mystical average people would probably stay on Windows if they don’t care or cannot learn the basics of other systems. It’s really not hard to explain and understand, even for the “average person”, that there is a universal source for applications and that there are packages designed and managed by your operating system. I think it’s important for people to learn the basics, and we should teach them, not dumb things down like on Windows. Soon people won’t be able to feed themselves anymore…


  • Flatpaks have their own set of issues. One thing is that Flatpak applications do not integrate as easily and seamlessly as a native package. Either too many rights are given, or you need to know what rights are needed and how to set them up (see the example at the end of this comment). Theming can be an issue, because the Flatpak ecosystem uses its own libraries instead of your current distribution’s theme and desktop environment.

    But on the other hand, they actually have a permission system and are a little bit sandboxed compared to normal applications. Packages are often distributed quickly, are up to date directly from the developers, and usually are not installed with root rights.

    I’m pretty much a CLI guy as well and prefer native packages (Arch based, plus the AUR). But I also use Flatpaks for various reasons, alongside AppImages.
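
    For reference, checking and adjusting those rights from the CLI looks like this (com.obsproject.Studio is OBS’s Flatpak ID, just used as an example here):

    # Show the permissions an installed app currently has
    flatpak info --show-permissions com.obsproject.Studio

    # Grant read/write access to the Downloads folder, for this user only
    flatpak override --user --filesystem=xdg-download com.obsproject.Studio

    # Undo all per-app overrides again if something breaks
    flatpak override --user --reset com.obsproject.Studio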




  • Beyond raw horsepower, 7-Zip quietly tightens its handling of several legacy formats. Support for ZIP, CPIO, and FAT archives has been refined, smoothing edge-case extractions that previously required third-party tools.

    Over the years there were a few .zip archives that 7z could not handle for whatever reason. For those cases I had to use another application, but I don’t know the cause. And my bad for not keeping copies of these files for future testing.


  • You mean alignment of arguments or multiline strings, for example? If they are not on their own line, then it does not matter to me. If they start on their own line, then mixing spaces and tabs isn’t a good idea in my opinion. For function calls that are a bit more complex, with multiple arguments, I put each argument on its own line. They are indented, and therefore the indentation level comes into play. If they are on the same line, I never align them, and if I did, it would be with spaces. In general (dots stand for whitespace):

    function() {
    ....var = 1
    ....another_var = 2
    ....indented(arg, arg2, arg3)
    ....indented(arg, 
    .............arg2, 
    .............arg3)
    }
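
    The same rule in real Python, as a minimal runnable sketch (the names are placeholders): indentation sets the level, and any alignment uses spaces only.

    def indented(a, b, c):
        return a + b + c

    def example():
        var = 1
        another_var = 2
        indented(var, another_var, 3)  # same line: no alignment
        return indented(var,           # one argument per line,
                        another_var,   # aligned with spaces under
                        3)             # the opening parenthesis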
    


  • That’s not entirely true, because even if you buy a strong PC, you have to make choices depending on the game. It’s just that the fps and settings we are talking about have a higher floor. For example, on PC people can enable ray tracing, which tanks the fps a lot. Do you go for 120 fps, or 60, or maybe even lower fps with higher fidelity and ray tracing?

    So the question to answer is still the same; it’s just that on PC we have a few more individual choices to make.

    Edit (added): Most people don’t have the strongest PC anyway. Look at the Steam hardware survey: most have common graphics cards like the 4060, for example. Or look at handheld PCs and laptops with fixed hardware. And as said, even at the high end with lots of money, people need to make cuts in fidelity or performance, just at a higher level in that case. So your question applies to PC as well.


  • I think this question also applies to PC. Why? Because we are limited too. I try to reach 120 fps and consider it performance mode, dialing back quality settings and enabling upscaling to reach that. If not, 90 fps is also pretty good. For certain games, 60 fps feels like what you describe for 30, but that does not apply to all games. There are single-player RPGs played with a gamepad that I would even consider playing at 30 fps if there is no other option. The problem is, games are not designed to be played at such low fps, as the input latency increases.

    I’ll compare this to the Switch, playing Zelda (emulated with Yuzu). Breath of the Wild on the original Switch is designed to be played at 30 fps. Playing it on my PC like that felt like a slideshow, but one can get used to it. If I didn’t have the 60 fps patch, it would still be fine at 30. The next game in the series, Tears of the Kingdom, was not stable at 60, so I was “forced” to play at 30. And after some time playing, it felt pretty good and not upsetting like in the first few minutes.

    What I mean by that is: performance mode if possible, and I would sacrifice quality for it. But not too much, because at some point the image just looks really bad.



  • But this can lead to over-engineering simple stuff, which makes the code harder to read and maintain, and more error prone. Especially if you don’t need all the other features of the class. Worse, if you define a class, you also tend to add more stuff you don’t use, just in case it might be useful.

    A simple variable name is sometimes the better solution. But it depends on the situation, of course. Sometimes a new class makes things clearer, as it abstracts some complexity away. Yeah, we need to find a balance, and that is different for every program (a small sketch follows below).
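
    A tiny Python sketch of that trade-off (hypothetical names, just to illustrate):

    # Over-engineered: a whole class for one value that nothing else needs
    class RetryPolicy:
        def __init__(self, max_retries=3):
            self.max_retries = max_retries

    # Often the better solution: a simple, well-named variable
    max_retries = 3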


  • My rule of thumb is: use short names if the context makes them clear, but do not make names too long and complicated (especially with Python :D). For me, having unique names is also important, so I don’t get confused. Too-similar names are bad, especially if they all start alike, such as “path_aaa” and “path_bbb”; then the eye can’t distinguish them quickly and clearly. And searching (and maybe replacing) without an IDE is easier with unique and descriptive names.

    Sometimes it’s better to come up with a new name instead of adding a modifier and making the name longer. This could be in a for loop, where inner loops edit variables and create a variation of them. Instead of appending something like “_modified”, try to find out what the modification is: change “date” to “now” instead of “date_current”, for example (see the sketch below).
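
    A small Python sketch of that naming idea (hypothetical names):

    from datetime import datetime, timedelta

    date = datetime(2023, 6, 15)
    # Name what the value IS instead of appending a modifier:
    now = datetime.now()                  # rather than date_current
    deadline = date + timedelta(days=30)  # rather than date_modified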