• 0xC0ncord an hour ago

    >Scott Hennessey, the owner of the New South Wales-based Australian Tours and Cruises, which operates Tasmania Tours, told the Australian Broadcasting Corporation (ABC) earlier this month that “our AI has messed up completely.”

    To me this is the real takeaway for a lot of these uses of AI. You can put in practically zero effort and get a product. Then, when that product flops or even actively screws over your customers, just blame the AI!

    No one is admitting it but AI is one of the easiest ways to shift blame. Companies have been doing this ever since they went digital. Ever heard of "a glitch in the system"? Well, now with AI you can have as many of those as you want, STILL never accept responsibility, and if you look to your left and right, everyone is doing it, and no one is paying the price.

    • ehnto 36 minutes ago

      I somewhat disagree, because at the end of the day he still has to take responsibility for the fuckup and that will matter in terms of dollars and reputation. I think this is also why a lot of roles just won't speed up that much, the bottleneck will be verification of outputs because it is still the human's job on the line.

      An on-the-nose example: if your CEO asked you for a report and you delivered fake data, do you think he would be satisfied with the excuse that the AI got it wrong? Customers are going to feel the same way. AI or human, you (the company, the employee) messed up.

    • pjc50 an hour ago

      New variant on "I followed my satnav blindly and now I'm stuck in the river", except less reliable.

      It is however fraud on the part of the travel company to advertise something that doesn't exist. Another form of externalized cost of AI.

      • buran77 an hour ago

        > It is however fraud on the part of the travel company to advertise something that doesn't exist

        Just here to point out that from a legal perspective, fraud is deliberate deception.

        In this case a tourist agency outsourced the creation of their marketing material to a company who used AI to produce it, with hallucinations. From the article it doesn't look like either of the two companies advertised the details knowing they're wrong, or had the intent to deceive.

        Posting wrong details on a blog out of carelessness, without deliberate ill intent, is no more fraud than using a wrong definition of fraud is fraud.

        • f33d5173 2 minutes ago

          There has to be a clause for "willful disregard for the truth", no? Having your lying machine come up with plausible lies for you and publishing them without verification is no better than coming up with the lies yourself. What really protects them from fraud accusations is that these blog posts were just content marketing, they weren't making money off of them directly.

          • tantalor 15 minutes ago

            The standard is to add disclaimers like "AI responses may include mistakes." The chatbot they used to generate that text would have mentioned that.

            To omit that disclaimer, the author needs to take responsibility for fact checking anything they post.

            Skipping that step, or leaving out the disclaimer, is not carelessness, it is willful misrepresentation.

          • Lerc an hour ago

            Seems closer to fraud on the part of the marketing company they outsourced to.

            I doubt they commissioned articles on things that don't exist. If you use AI to perform a task that someone has asked you to do, it should be your responsibility to ensure that it has actually done that thing properly.

            • alpinisme an hour ago

              The consequences for wrong AI output need to be a lot higher if we want to limit slop. Of course there's space for LLMs and their hallucinations to contribute meaningful things, but we need at least a screaming all-caps disclaimer on content that looks like it could be human-generated but wasn't (and absent that disclaimer, or if it was insufficiently prominent, false statements should be treated as deliberate fraud).

            • voidUpdate 27 minutes ago

              How often do you have to update your page on "what's in a town" to "compete with the big boys"? Seems like you could just google what's in the town, or visit if you really want to make sure, rather than just asking your favourite LLM "What's there to do in Weldborough"?

              • doodpants 17 minutes ago

                “our AI has messed up completely.”

                No, it worked as designed. Generative AI simply creates content of the type that you specify, but has no concept of truth or facts.

                • metalman 4 minutes ago

                  has anyone checked to see if the AI included time coordinates as well? it might be that the AI is misunderstanding our temporal limitations, and if prompted correctly will provide a handy portal to when there will, in fact, be hot springs at the location suggested.

                  • nephihaha 3 hours ago

                    Weldborough seems to have done well out of it either way.

                    • re-thc an hour ago

                      Australia has drop bears anyhow. Do they exist?

                      Seems par for the course.