I also reached out to them on Twitter but they directed me to this form. I followed up with them on Twitter with what happened in this screenshot but they are now ignoring me.

  • gravitas_deficiency@sh.itjust.works · +109/-3 · edited · 7 months ago

    That is 100% a bot, and whoever made the bot just stuck in a custom regex to match “user@sld.tld” instead of using a standardized domain validation lib that actually handles cases like yours correctly.

    Edit: the bots are redirecting you to bots. This is not a bug. This is by design.
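A minimal sketch in Python of the failure mode described above. Both patterns are hypothetical guesses at what the bot might do, not its actual implementation: the naive one only accepts a single label before the TLD, so a multi-label domain like `sh.itjust.works` gets rejected.

```python
import re

# Naive pattern the bot plausibly uses: exactly one domain label
# before the TLD, so "user@sh.itjust.works" fails.
NAIVE = re.compile(r"^[\w.+-]+@[\w-]+\.[A-Za-z]{2,}$")

# A more forgiving pattern allowing any number of dot-separated labels.
# (A real validator should follow RFC 5321/5322, or just send a
# confirmation mail instead of pattern-matching at all.)
PERMISSIVE = re.compile(r"^[\w.+-]+@(?:[\w-]+\.)+[A-Za-z]{2,}$")

addr = "someone@sh.itjust.works"
print(bool(NAIVE.match(addr)))       # False
print(bool(PERMISSIVE.match(addr)))  # True
```

Both patterns accept `someone@example.com`; only the permissive one accepts the perfectly valid federated-instance address.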

    • Syndic@feddit.de · +13 · 7 months ago

      This is not a bug. This is by design.

      I’d say it’s a bug in the design, as it clearly fails to accept a perfectly valid email address.

      • TheGreenGolem@lemm.ee · +11/-3 · 7 months ago

        They meant that the company is intentionally trying NOT to help the customer, in the hope that they just give up at some point. (That’s why they redirect to bots and not to an actual human.)

        • Trainguyrom@reddthat.com · +4 · 7 months ago

          I’ve encountered plenty of poor souls in equally poor countries, getting paid a pittance, who seem entirely like bots.

        • Deiv@lemmy.ca · +4/-1 · 7 months ago

          Lol, why would that be true? They want to help, they just have a shitty bot

        • TheAndrewBrown@lemm.ee · +1 · edited · 7 months ago

          It’d be a lot easier to not make a bot at all if that were the case. They aren’t intentionally refusing to help; they’re intentionally spending as few resources as possible on helping while still doing enough to satisfy most customers. It’s shitty, but it’s not malicious like you guys are implying.

        • PlutoParty@programming.dev · +2/-1 · 7 months ago

          Most companies try to gain and retain customers. You’re suggesting that at Chipotle, they sat down and decided to actively not help theirs?

    • rottingleaf@lemmy.zip · +4 · 7 months ago

      Well, writing “operator” or “human” or “transfer” or “what the @#$” or something equally irritated may help.

    • tory@lemmy.world · +2 · 7 months ago

      But using a standardized library would be 3PP and require a lot of paperwork for some reason.

    • doctorcrimson@lemmy.today · +2/-10 · 7 months ago

      It might even be worse than that, imagine if they let one of those learning algorithms handle their customer service.

      • Echo Dot@feddit.uk · +4/-1 · edited · 7 months ago

        There are loads of companies that do. In this case it would be better, because it would actually understand what constitutes an email address rather than running some standard script with no comprehension of what it’s doing.

        The difference between AI and automated script responses is AI is actually thinking at some level.

        • doctorcrimson@lemmy.today · +1/-1 · 7 months ago

          I think AI generally tries to bullshit more often than it participates in what the user wants to accomplish. It would be like speaking with a customer support agent who doesn’t actually work for the company, is a pathological liar, and has a vested interest in making you give up as fast as possible.

          • Echo Dot@feddit.uk · +1/-1 · 7 months ago

            That’s not what AI is though.

            An AI is pretty good at doing whatever it’s programmed to do; you just have to check that the thing it’s programmed to do is actually the thing you want it to do. Things like ChatGPT are general-purpose AIs and exist essentially more as a product demonstration than as an actual industry implementation.

            When companies use AI they use their own version, trained on their own data sets.

            • doctorcrimson@lemmy.today · +1/-1 · 7 months ago

              If you program your learning algorithm to “solve” customer problems in the shortest amount of time possible with the least amount of concessions possible, it will act exactly as I just described. The company would have to be run by buffoons to give the phone machines the ability to change user account information or have the ability to issue refunds, so the end result is that they can only answer simple questions until the person on the other end gives up.
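A toy sketch of the incentive problem described above, with entirely hypothetical numbers: if the objective only penalizes handling time and concession cost, and nothing rewards actually solving the problem, the “best” policy is the one that stonewalls.

```python
# Each action: (handling_minutes, concession_cost, problem_solved)
# -- made-up values for illustration only.
ACTIONS = {
    "issue_refund":      (10, 25.0, True),
    "escalate_to_human": (30,  5.0, True),
    "deflect_with_faq":  ( 2,  0.0, False),
}

def reward(minutes, cost, solved):
    # Note: `solved` is deliberately ignored -- the objective
    # only minimizes time and concessions, as described above.
    return -minutes - cost

# The optimizer picks the action that maximizes reward:
best = max(ACTIONS, key=lambda a: reward(*ACTIONS[a]))
print(best)  # deflect_with_faq
```

Under this objective, deflecting (reward -2) beats both the refund (-35) and the escalation (-35), even though it leaves the customer’s problem unsolved.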

              • Echo Dot@feddit.uk · +1/-1 · edited · 7 months ago

                That is not how AI works.

                It’s not programmed at all; it’s a developed network that evolves in the same way the human brain evolves. Saying it will try to solve the problem in the shortest possible time is like saying that human agents will try to solve the problem in the shortest possible time. It’s a recursive argument.

                You have rather proved my original point which is that everyone talking about AI doesn’t know what they’re talking about.

                You might say “oh but an artificial intelligence could never possibly match the intelligence of humans” but why would that be the case? There’s nothing magical or special about human intelligence.

                • doctorcrimson@lemmy.today · +1/-1 · edited · 7 months ago

                  Wow, you really went off on an irrelevant tirade there. There is a defined accuracy when you set up the learning algorithm; there is an end-goal result that you define, with which the program chooses and eliminates “choices” for a given generation. You program it; it doesn’t magically appear from a witch’s cauldron or a wish from a genie.

                  And also, we’re not talking about actual intelligence and sentience here; we’re talking about AI as in modern learning algorithms, as I explicitly stated at the start of this thread, before you used the term AI for the first time. Idk why you’re comparing it to human-level intelligence when it’s barely passable as a poor and easily abused mimicry.

                  With your repetitive, nonsensical, baseless logic, I think you would pass for one of those glorified chatbots.