I’m trying to make Minesweeper using Rust and Bevy, but it feels like my code is bloated (a lot of for loops, segments that seem to repeat themselves, etc.).

When I look at other people’s code, they are using functions that I don’t really understand (map, zip, etc.) that seem to make their code faster and cleaner.

I know that I should look up the functions that I don’t understand, but I was wondering where you would learn stuff like that in the first place. I want to learn how to find functions that would be useful for optimizing my code.

  • CameronDev@programming.dev · 18 points · 5 months ago

    Experience is the best teacher. Keep writing code, revisit old code and rewrite it.

    Also, it’s worth knowing when not to optimise. Code you can read is code you can maintain, and some optimisations are not as readable.

    Learn how to use a profiler. It’s a bit of an art form, but learning to interpret the results will help you find slow code sections. It’ll also help you determine whether your optimisations are actually worthwhile. Measure first, optimise second.

    • hactar42@lemmy.world · 7 points · 5 months ago

      Also, it’s worth knowing when not to optimise. Code you can read is code you can maintain, and some optimisations are not as readable.

      This is one of my biggest pet peeves. So many people treat the number of lines as a sign of how well something is programmed, but all they really do is chain a bunch of stuff together in a way that makes it harder to debug.

  • Akrenion@programming.dev · 12 points · 5 months ago

    Map, filter, reduce: those are the big three for data (there’s a small sketch at the end of this comment). More important than those, however, is mindset and patience with yourself. Writing code that works is the first and most impressive step. Optimizations are fun to think about, but unless your computations are sluggish and repeat a lot of unnecessary steps, they are rarely a priority.

    Build something and shelve it. Trust me, in but a few months you will look back in bewilderment and realize how much you’ve grown.
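    As a concrete, made-up Minesweeper-ish example of those three in Rust (none of this is from the OP’s code), here is counting flagged neighbours of a cell without an explicit for loop:

      // Hypothetical sketch: the 8 neighbour offsets of a cell.
      const OFFSETS: [(i32, i32); 8] = [
          (-1, -1), (-1, 0), (-1, 1),
          ( 0, -1),          ( 0, 1),
          ( 1, -1), ( 1, 0), ( 1, 1),
      ];

      /// `flags[y][x]` marks whether the cell at (x, y) is flagged.
      fn flagged_neighbours(flags: &[Vec<bool>], x: i32, y: i32) -> usize {
          OFFSETS
              .iter()
              .map(|&(dx, dy)| (x + dx, y + dy))        // map: offset -> neighbour coordinate
              .filter(|&(nx, ny)| {
                  nx >= 0
                      && ny >= 0
                      && (ny as usize) < flags.len()
                      && (nx as usize) < flags[ny as usize].len()
              })                                        // filter: keep in-bounds cells
              .filter(|&(nx, ny)| flags[ny as usize][nx as usize]) // filter: keep flagged cells
              .count()                                  // reduce: count what survives
      }

    count() at the end is just a specialised fold/reduce; you could write .fold(0, |acc, _| acc + 1) and get the same result.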

    • grue@lemmy.world · 7 points · 5 months ago

      but unless your computations are sluggish and repeat a lot of unnecessary steps

      In other words, the kind of optimizing that’s worth it is choosing a better algorithm to reduce its big-O complexity class.
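      As a made-up illustration (not from anyone’s actual code): checking a list of mine coordinates for duplicates by comparing every pair is O(n²), while sorting first and only comparing neighbours is O(n log n):

        // Hypothetical example: detecting duplicate mine positions.

        /// O(n^2): compare every pair of positions.
        fn has_duplicates_naive(mines: &[(u32, u32)]) -> bool {
            for i in 0..mines.len() {
                for j in (i + 1)..mines.len() {
                    if mines[i] == mines[j] {
                        return true;
                    }
                }
            }
            false
        }

        /// O(n log n): after sorting, a duplicate must sit next to its twin.
        fn has_duplicates_sorted(mines: &[(u32, u32)]) -> bool {
            let mut sorted = mines.to_vec();
            sorted.sort_unstable();
            sorted.windows(2).any(|pair| pair[0] == pair[1])
        }

      Same result, but the second version scales much better as the board grows.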

      • Akrenion@programming.dev · 4 points · 5 months ago

        I was thinking about caches and evaluating what calculations I want to do.

        I fixed a project for someone simulating a machine. That took them almost 9 minutes. Simply replacing the part where they initialised a solver and used it to find a root of a quadratic function with a single call to that initialiser got it down to a minute.

        You should have seen their faces when we put the quadratic formula in and it took 28 seconds.
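        The closed form is just the standard quadratic formula; a rough Rust sketch of the idea (not the actual project code, and assuming a ≠ 0) would be:

          /// Real roots of a*x^2 + b*x + c = 0 in closed form,
          /// instead of spinning up a generic numeric root-finder on every call.
          fn quadratic_roots(a: f64, b: f64, c: f64) -> Option<(f64, f64)> {
              // x = (-b ± sqrt(b² - 4ac)) / 2a; None when there are no real roots.
              let discriminant = b * b - 4.0 * a * c;
              if discriminant < 0.0 {
                  return None;
              }
              let sqrt_d = discriminant.sqrt();
              Some(((-b - sqrt_d) / (2.0 * a), (-b + sqrt_d) / (2.0 * a)))
          }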

      • Baldur Nil@programming.dev · 3 points · 5 months ago

        The mental model I have about performance is that improvements at a higher level of abstraction usually beat improvements at a lower level.

        So in that sense, well-architected software with proper caching, multithreading where it matters, etc. will beat badly architected software (e.g. one that brute-forces everything). Then, that being equal, good algorithms and solutions beat bad ones. Only then do faster runtimes make much of a difference, and at the bottom, things like more efficient processor architectures and more efficient compilers beat slower ones.

        A good example is Lemmy itself, which as far as I know was made in Rust to be super fast, but at the beginning it was being DDoSed quite easily because of the way the database was designed and lots of queries were very slow. Once they fixed that, Lemmy became actually usable.

  • brisk@aussie.zone · 10 points · 5 months ago

    The functions you’ve called out are higher-order functions, regularly associated with the functional programming paradigm. “In the first place” for a lot of people would be a functional programming course at a university.

    For your specific case, Rust (like a few other languages) implements these through iterator programming. There’s a section in the Rust book that might help.

    Apart from academia you learn from experience, including a healthy amount of reading other people’s code, just like you did to find out about these functions in the first place!
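    As a tiny sketch of what that iterator style looks like (the function and data here are invented for illustration, not from the OP’s project): pairing each cell’s mine count with its revealed flag using zip, instead of indexing two Vecs in a manual for loop:

      /// Collect the mine counts of all revealed cells.
      fn revealed_mine_counts(counts: &[u8], revealed: &[bool]) -> Vec<u8> {
          counts
              .iter()
              .copied()
              .zip(revealed.iter().copied())   // walk both slices in lockstep
              .filter(|&(_, shown)| shown)     // keep only revealed cells
              .map(|(count, _)| count)         // drop the flag, keep the count
              .collect()                       // gather the results into a new Vec
      }

    The chain stays lazy until collect() builds the new Vec, and each step reads as a single, named transformation.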

    • pivot_root@lemmy.world · 4 points · 5 months ago

      Also, with Rust, if you’re optimizing for performance and not legibility, it’s actually pretty important to not prematurely optimize. Its type system gives the compiler a lot of information to work with, and it does a really good job finding places where code can be optimized.

      What might make sense in some languages, such as trying to cache small strings instead of recomputing them, can actually hurt performance by preventing the compiler from using certain optimizations.

  • cflewis@programming.dev · 8 points · 5 months ago

    This is probably going to get downvoted into oblivion but: try writing some Haskell for a while. Learn You A Haskell is a good place to do it, just bail out when you get to monads.

    When I was taught programming at university, we did one assignment in Java, then the next one was the exact same assignment but in Haskell. The idea was not to bias us towards imperative vs functional programming. I don’t think it worked – I would guess almost everyone preferred Java – but over my career I’ve learned how much Haskell has offered me for writing imperative code for my day job. I think you will get what you are looking for by trying some Haskell for a while.

    • ericjmorey@programming.dev · 2 points · 5 months ago

      just bail out when you get to monads.

      Isn’t writing Haskell nearly equivalent to writing monads? How could they start without using monads?

  • grue@lemmy.world · 5 points · 5 months ago

    What you’re asking about (how to learn clever techniques to e.g. turn your naive O(n²) algorithm into an O(n log n) one) is the kind of stuff taught in an algorithms / theory course as part of a CS degree. Here are some of the first search results for open content courses I found that look like they cover the right topics:

    https://ocw.mit.edu/courses/18-404j-theory-of-computation-fall-2020/pages/syllabus/

    http://openclassroom.stanford.edu/MainFolder/CoursePage.php?course=IntroToAlgorithms

    You also might want to consider a combinatorics / discrete math course.

  • 4wd@programming.dev · 4 points · 5 months ago

    When I started learning programming, I was like “tf is a map function?” and I always forgot about it. Then I tried the functional programming language Erlang and understood all these functions very well. But there is a downside: now most for-loops in C++ look terrible to me :)

  • eran_morad@lemmy.world · 4 points · 5 months ago

    I recently wrote something that really stretched my understanding of lexical scoping and functional programming. In the end, I think the only way is to RTFM and just keep coding.

  • Vent@lemm.ee · 3 points · 5 months ago

    I learned that type of stuff in college, so I can’t personally recommend any online sources. However, I can tell you that what you’re looking for falls under “Data Structures and Algorithms”. IIRC my degree required 3 classes with that name. Lots of sorting algorithms in that field since they make great case studies.

    You learn the various data structures and algorithms available, their strengths and weaknesses, how they work, when to use them, etc…

    You also learn how to measure performance, like Big-O notation, the bane of many a CS student’s existence.

  • pooberbee (any)@lemmy.ml · 3 points · 5 months ago

    Some of this is probably just getting to know your tools. Learn the language, look at others’ code, and interrogate what they did and why. The higher-order functions (a scary-sounding term, but they’re not actually scary) you mentioned are useful; go learn them and use them.

    Some of it might also be (I haven’t seen your code) getting a better understanding of the problem you’re trying to solve. Figure out what all the separate pieces you need are, and then break those pieces into their own pieces, and so on, until you’ve got simple, self-contained chunks of functionality that you can give simple names to. Some of those might be functions, or you may find that they’re simple enough that they don’t need to be. Refactor and think about how to make the problem simpler. I think a lot of it is just staring at your work and dreaming of ways to make it simpler and easier to read.

    If you really want to optimize for performance, that can come later once you really have a feel for it.