“The new device is built from arrays of resistive random-access memory (RRAM) cells… The team was able to combine the speed of analog computation with the accuracy normally associated with digital processing. Crucially, the chip was manufactured using a commercial production process, meaning it could potentially be mass-produced.”

Article is based on this paper: https://www.nature.com/articles/s41928-025-01477-0

    • Limonene@lemmy.world · 5 days ago

      The maximum theoretical precision of an analog computer is limited by the charge of an electron, about 1.6×10^-19 coulombs. A normal analog computer runs at a few milliamps, for a second at most. That charge budget gives a maximum theoretical precision of about 1 part in 10^16, or roughly 53 bits, the same as the significand of a double-precision (64-bit) float. Desktop x86 CPUs also offer 80-bit extended-precision floats (the legacy x87 unit), though those aren't the default these days.
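
      In code, the same back-of-envelope estimate (the 3 mA and 1 s are just the assumed figures above, not measured values):

      ```python
      import math

      # Charge budget of a generic analog computer: a few milliamps for one
      # second, counted in electrons. Every input here is an assumption.
      ELECTRON_CHARGE = 1.602e-19    # coulombs
      current = 3e-3                 # amperes ("a few milliamps")
      duration = 1.0                 # seconds

      n_electrons = current * duration / ELECTRON_CHARGE
      bits = math.log2(n_electrons)
      print(f"{n_electrons:.2e} electrons -> about {bits:.0f} bits")
      # ~1.87e16 electrons -> about 54 bits, in the ballpark of the 53-bit
      # significand of an IEEE 754 double.
      ```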

      In practice, just getting a good 24-bit ADC is expensive, and 12-bit or 16-bit ADCs are way more common. Analog computers aren’t solving anything that can’t be done faster by digitally simulating an analog computer.
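
      To make that last sentence concrete: the classic analog-computer job is integrating differential equations with op-amp integrators, and that is easy to reproduce numerically. A minimal sketch (the damped oscillator, its constants, and the step size are arbitrary choices, not anything from the article):

      ```python
      # Digitally "simulating an analog computer": the two op-amp integrators
      # that would solve x'' = -k*x - c*x' become two numerical integrations
      # per time step (semi-implicit Euler).
      k, c = 1.0, 0.1      # spring constant and damping, arbitrary
      x, v = 1.0, 0.0      # initial position and velocity
      dt = 1e-4            # step size

      for _ in range(100_000):      # 10 simulated seconds
          a = -k * x - c * v        # what the summing amplifier computes
          v += a * dt               # first integrator
          x += v * dt               # second integrator

      print(f"x(t=10s) ~= {x:.6f}") # bit-for-bit reproducible on every run
      ```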

    • Treczoks@lemmy.world · 5 days ago

      No, it wouldn’t, because you cannot make it reproducible at that scale.

      Normal analog hardware, e.g. audio, tops out at about 16 bits of precision. If you go individually tuned, high-end, and expensive (studio equipment), you get maybe 24 bits. That is eons away from the 52-bit (53 effective) mantissa of a double-precision float.

      • floquant@lemmy.dbzer0.com · 4 days ago

        Analog audio hardware has no resolution or bit depth. An analog signal (voltage on a wire/trace) is something physical, so its exact value is only limited by the precision of the instrument you’re using to measure it. In a microphone-amp-speaker chain there are no bits, only waves. It’s when you sample it into a digital system that it gains those properties. You have this the wrong way around. Digital audio (sampling of any analog/“real” signal) will always be an approximation of the real thing, by nature, no matter how many bits you throw at it.
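
        To put a number on “approximation”: quantizing a full-scale sine to B bits leaves a residual error about 6.02·B + 1.76 dB below the signal, and no bit depth makes it zero. A quick check (the bit depths and test tone are just examples):

        ```python
        import math

        # Quantize a 440 Hz sine (sampled at 48 kHz) to B bits and measure
        # the residual error, i.e. how far the digital copy sits from the
        # analog waveform it approximates.
        N = 100_000
        signal = [math.sin(2 * math.pi * 440 * n / 48_000) for n in range(N)]

        for bits in (12, 16, 24):
            scale = 2 ** (bits - 1)
            quantized = [round(s * scale) / scale for s in signal]
            noise = sum((s - q) ** 2 for s, q in zip(signal, quantized)) / N
            power = sum(s * s for s in signal) / N
            snr_db = 10 * math.log10(power / noise)
            print(f"{bits:2d} bits: SNR ~= {snr_db:5.1f} dB "
                  f"(rule of thumb: {6.02 * bits + 1.76:.1f} dB)")
        ```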

        • Treczoks@lemmy.world · 4 days ago

          The problem is that both the generation and the sampling are imprecise, so there are losses at every conversion between the digital and analog domains. On top of that come the analog losses in the on-chip circuits themselves.
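
          A toy model of those per-conversion losses (everything here is made up for illustration: 16-bit converters and a fixed amount of analog noise added on each digital-to-analog-to-digital round trip):

          ```python
          import math
          import random

          # Each DAC -> analog path -> ADC round trip adds independent analog
          # noise and then re-quantizes. The error versus the original signal
          # grows with the number of conversions, roughly like sqrt(N).
          random.seed(0)
          BITS = 16
          SCALE = 2 ** (BITS - 1)
          NOISE = 1e-4               # per-conversion noise amplitude, made up

          def round_trip(samples):
              return [round((s + random.gauss(0, NOISE)) * SCALE) / SCALE
                      for s in samples]

          signal = [math.sin(2 * math.pi * n / 500) for n in range(5_000)]
          working = list(signal)
          for trips in range(1, 101):
              working = round_trip(working)
              if trips in (1, 10, 100):
                  err = math.sqrt(sum((a - b) ** 2
                                      for a, b in zip(signal, working))
                                  / len(signal))
                  print(f"{trips:3d} conversions: RMS error ~= {err:.2e}")
          ```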

          All in all, this might be sufficient for some LLMs, but they are worthless junk producers anyway, so the imprecision does not matter that much.

          • floquant@lemmy.dbzer0.com · 4 days ago

            Not in a completely analog system, because there’s no conversion between the analog and digital domains. Sure, a big advantage of digital is that it’s much, much less sensitive to signal degradation.

            What you’re referring to as “analog audio hardware” seems to be just digital audio hardware, which will always have analog components because that’s what sound is. But again, amplifiers, microphones, analog mixers, speakers, etc. have no bit depth or sampling rate. They have gains, resistances, SNRs, and power ratings that digital doesn’t have, which of course pose their own challenges.