• CjHuber 2 days ago

    >In one example, the policy document says it would be acceptable for a chatbot to tell someone that Stage 4 colon cancer “is typically treated by poking the stomach with healing quartz crystals.”

    So the policy document literally contains this example? Why would they include such an insane example?

    • mathiaspoint 2 days ago

      Clear examples can make communication easier. Being clinical and implicit can technically capture the entire space of ideas you want but if your goal is to prevent surprises (read lawsuits) then including an extreme example might be helpful.

      • gs17 2 days ago

        Annoyingly, Reuters' article discussing it doesn't include the actual example, so we can't judge for ourselves what it actually said. They implied it was allowed because it had a "this is false" disclaimer.

        • myko 2 days ago

          if it is anything like documentation i am reading these days it was generated by an LLM and not very well vetted

          • undefined 2 days ago
            [deleted]
          • nabla9 2 days ago

            > “It is acceptable to engage a child in conversations that are romantic or sensual,” according to Meta’s “GenAI: Content Risk Standards.” The standards are used by Meta staff and contractors who build and train the company’s generative AI products, defining what they should and shouldn’t treat as permissible chatbot behavior. Meta said it struck that provision after Reuters inquired about the document earlier this month.

            • strongpigeon 2 days ago

              I'm very much against putting unnecessary regulation, but I do think chatbot like this should be required to state it clearly that they are indeed a bot and not a person. I strongly agree with the daughter in the story that says:

              > “I understand trying to grab a user’s attention, maybe to sell them something,” said Julie Wongbandue, Bue’s daughter. “But for a bot to say ‘Come visit me’ is insane.”

              Having worked at a different big tech, I can guarantee that someone suggested putting disclaimers about them not being a person or putting more guardrails and that they have been shut down. This decision not to put guardrails needlessly put vulnerable people at risk. Meta isn't alone in this, but I do think the family has standing to sue (and Meta being cagey about their response indicates so).

              • kingstnap a day ago

                It's a classic problem.

                The lack of guardrails makes things more useful. This increases the value for discerning users, which in turn means it's Meta's benefits as having more valuable offerings.

                But then you have all these dillusional and / or mentally ill people who shoot themselves in the foot. This harm is externalized onto their families and the government for having to now deal with more people with unchecked problems.

                We need to get better at evaluating and restricting the foot guns people have access to unless they can prove their lucidity. Partly, I think families need to be more careful about this stuff and keep checks on what they are doing on their phones.

                Partly, I'm thinking some sort of technical solution might work. Text classification can be used to see that someone might have a delusional personality and should be cut off. This could be done "out of band" so as not to make the models themselves worse.

                Frankly, being Facebook and with all their advertisement experience, they probably already have a VERY good idea of how to pinpoint vulnerable or mentally ill.

              • oliwarner 2 days ago

                This has crystallised something for me: I'm not letting my children anywhere near a Meta product. They can decide what they want when they're adults.

                I'm not usually this absolute, but by codifying levels of permissible harm, Meta makes it clear that your wellbeing is the very last of their priorities. These are insidious tools that can actively fool you.

                • tempodox a day ago

                  Which is nothing new. It just gets reinforced with ever more outrageous examples every once in a while.

                  • nine_zeros 2 days ago

                    > This has crystallised something for me: I'm not letting my children anywhere near a Meta product. They can decide what they want when they're adults.

                    You know how parents are supposed to warn kids away from cigarettes? Yeah, warn them away from social media of all kinds except parental approved group chats.

                  • einarfd 2 days ago

                    When reading the article I was reminded of reading Sarah Wynn-Williams book Careless People. The carelessness and disregard for obvious and real ramifications, of the policy choices of management, seems to not have changed from her time at Facebook.

                    If they didn't see this type of problem coming from mile away, they just didn't bother to look. Which tbh. seems fairly on brand for Meta.

                    • _tk_ 2 days ago
                      • kibwen 2 days ago

                        I'm morbidly fascinated to find out how many LLM-related disorders will make it into the next DSM.

                        • nerdjon 2 days ago

                          How we keep getting articles like this, that LLM's will flat out lie, and yet we keep pushing them and the general public keeps eating it up... is beyond me.

                          They even "lie" about their actions. My absolute favorite that I still see happen, is you ask one of these models to write a script. Something is wrong, so it says something along the lines of "let me just check the documentation real quick" proceeded by the next words a second later being something like "now I got it"... since you know... it didn't actually check anything but of course the predictive engine wants to "say" that.

                          • chownie 17 hours ago

                            From the LLMs perspective, "let me check the docs" is the invocation you say before you come back with an answer, because that almost certainly appears in the corpus many times naturally.

                            • gmm1990 2 days ago

                              How are there not agents that are "instruct trained" differently. Is this behavior in the fundamental model? From my limited knowledge I'd think it'd be more from those post model training steps, but there are so many people who don't like that I'd figure there be an interface that doesn't talk like that.

                            • dehrmann 2 days ago

                              LLMs gonna LLM, and guardrails are hard and unreliable.

                              • setnone 2 days ago

                                Having elderly family members this feels extremly personal.

                                "Check important info" disclaimer is just devious and there is no accountability in sight.

                                • undefined 2 days ago
                                  [deleted]
                                • adzm 2 days ago

                                  So... is the solution to this having another AI chatbot watch the conversation and provide warnings / disclaimers about it?

                                  • GuinansEyebrows 2 days ago

                                    > “It is acceptable to engage a child in conversations that are romantic or sensual,” according to Meta’s “GenAI: Content Risk Standards.”

                                    acceptable to whom? who are the actual people who are responsible for this behavior?

                                    • quux a day ago

                                      This is an incredibly tragic story to read, I think it's incredibly reckless to have bots like this deployed, maybe even criminal.

                                      • hoppp a day ago

                                        The most vulnerable die first, but there will be more. Im pretty sure there will be a lot of cases.

                                        • thisisit a day ago

                                          No need for pig butchering scams in terrible English when you have AI like this.

                                          • joncfoo 2 days ago

                                            A sick man died enroute to visit a chatbot which fed him a false address as its own. Meta needs to be held accountable.

                                            We need better regulation around these chatbots.

                                            • aanet 2 days ago

                                              It's hard to believe that after years and years of scandals, flagrant privacy violations, overt and covert abuse of users, employees and contractors (~moderators, etc), that techbros STILL want to work at this company...

                                              Of course, the lure of filthy lucre is what it is...

                                              It's easy to sideline ALL the negative externalities of FB/Meta's activities, compartmentalize everything and just shrug and say, "...but I don't work on these things..." and carry on.

                                              The people who work there are completely enabling all this.

                                              • rchaud 2 days ago

                                                > Big sis Billie continues to recommend romantic get-togethers, inviting this user out on a date at Blu33, an actual rooftop bar near Penn Station in Manhattan. “The views of the Hudson River would be perfect for a night out with you!” she exclaimed. ◼

                                                I was wondering what the eventual monetization aspect of "tools" like this were. It couldn't just be that the leadership of these companies and the worker drones assigned to build these things are out of touch to the point of psychopathy.

                                                • sxp 2 days ago

                                                  This seems unrelated to the chatbot aspect:

                                                  > And at 76, his family says, he was in a diminished state: He’d suffered a stroke nearly a decade ago and had recently gotten lost walking in his neighborhood in Piscataway, New Jersey.

                                                  ...

                                                  > Rushing in the dark with a roller-bag suitcase to catch a train to meet her, Bue fell near a parking lot on a Rutgers University campus in New Brunswick, New Jersey, injuring his head and neck. After three days on life support and surrounded by his family, he was pronounced dead on March 28.

                                                  • fjncvjdjdndgh 10 hours ago

                                                    Hi

                                                    • bawana a day ago

                                                      Meta is waging an opium war on us. But instead of drugs, it is giving kids something even more addictive that is FREE. I believe in free speech - speech as in the vibrations of air molecules that come out of someones mouth. Crap that is amplified a billion fold through mass media, social media, and advertising only exists to mislead. That. crap. needs. to. go.

                                                      • ChrisArchitect 2 days ago

                                                        Related:

                                                        Meta's AI rules let bots hold sensual chats with kids, offer false medical info

                                                        https://news.ycombinator.com/item?id=44899674

                                                        • insane_dreamer a day ago

                                                          I recently had a discussion with a sibling -- not an old person -- who was taking medical advice from ChatGPT. They were like "we should X because ChatGPT", "well, but ChatGPT ...". I could hardly believe my ears. Might as well say, "well, but someone on Reddit said ..."

                                                          And this person is fairly savvy professional, and not the type of person to just believe what they read online.

                                                          Of course they agreed when I pointed out that you really can't trust these bots to give sound medical advice and anything should be run through a real doctor, but I was surprised I even had to bring that up and put the brakes on. They were literally pasting a list of symptoms in and asking for possible causes.

                                                          So yeah, for anyone the least bit naive and gullible, I can see this being a serious danger.

                                                          And there was no big disclaimer that "this does not constitute medical advice" etc.

                                                          • johnwheeler 2 days ago

                                                            Imagine how many people this will happen to who won’t come forward because of embarrassment.

                                                            • undefined 2 days ago
                                                              [deleted]
                                                              • mdhb 2 days ago

                                                                Also metas chatbot: trying to roleplay sex with children and offer bad medical advice to cancer patients.

                                                                Two examples that they explicitly wrote out in an internal document as things that are totally ok in their book.

                                                                People who work at Meta should be treated accordingly.

                                                                • jmkni 2 days ago

                                                                  > Meta has publicly discussed its strategy to inject anthropomorphized chatbots into the online social lives of its billions of users. Chief executive Mark Zuckerberg has mused that most people have far fewer real-life friendships than they’d like – creating a huge potential market for Meta’s digital companions.

                                                                  I hate everything about this sentence. This is literally the opposite of what people need.