Wow. I was... kind of expecting the headline to be a bit sensationalised, that it would be more about gaps in the safeguards. But no, wow, there's a rule giving it affirmative permission to do that. What the hell, Facebook.
Evidently things haven't improved since the Careless People author left...
I was listening to a podcast the other day where Mark Zuckerberg was interviewed about Gen AI, and his take on Gen AI is that it will make the Internet a lot funnier[1].
I guess he finds this funny.
Edit:
Also, it looks like this was originally deliberate:
> Meta confirmed the document’s authenticity, but said that after receiving questions earlier this month from Reuters, the company removed portions which stated it is permissible for chatbots to flirt and engage in romantic roleplay with children.
Reminds me of a conversation I had recently about the alcohol industry. It's not so bad if your local crappy bar markets to underage college kids. But when sketchy tactics exist, and are allowed, at the scale of the biggest companies in the world, you've got problems.
Actually, sketchy tech/social media/AI tactics aimed at youth are closer to "let's get kids addicted so they become lifelong customers" than I ever realized before.
On WhatsApp it doesn't allow any sensual discussion for me; I gave it a shot. I don't have any other Meta apps to try.
It's 'cause you're not a kid, of course.
Well, that was an icky read.
This is messed up in so many ways. I just can't understand how any functioning human being approved it.
Reality is ugly? I suppose you're the kind of person who thought erotic roleplay wasn't invented prior to AI. The real kicker is, I'll bet any amount of money that Apple and Microsoft held this same conversation and ended up with the same results.
Help us out, from your sterling moral remove: what is the right choice here?
Well, I mean, there is the option of, hear me out here, just not allowing the chatbots to do 'erotic roleplay' with children. That would, er, seem like the fairly obvious option to most reasonable people, I would think. Facebook appears to have instead opted to affirmatively permit it (though note that they reversed course on this once called out on it).
> once called out on it
This is super reassuring...
The fuck is wrong with you? In what universe does a corporation sit down to write guidelines spelling out exactly what it considers OK behaviour, include seducing children as one of its examples, and you turn around and ask what the problem is?
Who cares. Kids watch porn at 10 years of age. Chatbots refuse to even show an ankle in a Victorian-era display of puritanism, and the UK is universally reviled for its think-of-the-children bullshit age-verification panopticon.
This entire article stirs up a meaningless shitstorm in a teacup over a document no one reads, about a function chatbots refuse to offer to both kids and adults; and even if it were offered, it would be absurdly tame compared to what is commonly available everywhere online.
Meta is just being realistic here, knowing that a non-deterministic system is eventually going to say dumb things. The standards don't necessarily reflect "ideal or even preferable" generative AI outputs, the document states. This is a nothingburger article.
"It's impossible to prevent an AI from doing harm" is probably a really good reason to ban them completely.
We have pretty strict regulations on recreational drugs. We prevent children from using them. We prevent their use in a wide variety of scenarios. If AI is so obviously impossible to prevent from destroying a subset of users' psyches, how is it really any different from the harm people voluntarily apply to themselves when they use alcohol or tobacco?
I'm not an AI fanboy, but that feels like an argument that should apply to everything then. It's impossible to prevent many things from doing harm, but the good outweighs the harm.
Yes, it should apply to everything. Does the good outweigh the harm? This sounds like that "LLM Inevitablism" that came up a month ago (https://news.ycombinator.com/item?id=44567857).
I'm a pretty strong AI skeptic, for many reasons, but I think the technical reasons alone are enough to tank it. Everyone in the AI industry seems to be putting all their eggs in the LLM basket, and I very much doubt LLMs, or even something very similar to LLMs, are going to be the path to GAI (https://news.ycombinator.com/item?id=44628648). I think the LLMs we have today are about as good as they're going to get. I've yet to see any major improvement in capability since GPT-3. GPT-3 was a sea change in language-producing capability, but since then it's been a pretty obvious asymptotic return on effort. As for agentic coding systems, the best I've seen them do is spend a lot of time, electricity, and senior-dev PR-review effort generating over-inflated codebases that fall over under the slightest adversarial scrutiny.
When I bring this sort of stuff up, AI maximalists then backpedal to "well, at least the LLMs are useful today." I don't think they really are (https://news.ycombinator.com/item?id=44527260). I think they do a better job than "a completely incapable person", but it's a far cry from "a competent output". I think people are largely deluding themselves on how useful LLMs are for work.
When I bring that up, I'm largely met with responses like "Oh, well, one would expect LLMs to revert to the mean." That's a serious goal-post move! AI was supposed to 10x people's output! We're far enough along on the "AI improves performance" timeline that any company that fully adopted AI as late as six months ago should be head and shoulders above its competition. Have we seen that? Anywhere? Any multiplier greater than 1.5x should be visible by now.
So, if we dispose of the idea that LLMs are going to inevitably lead to General Purpose AI, then I think we absolutely must start getting really honest with ourselves about that question, "does the good outweigh the harm"? I have yet to see any meaningful good, yet I've certainly seen a lot of harm.
The whole point of the Meta document is to delineate what they consider acceptable or unacceptable outputs from the AI during model training. The premise of the document is that they can control the model: it will still be stochastic, but they can shift the statistical likelihood of particular responses based on standards enforced through training. The document just lays out, in very granular detail, what those standards will be. For instance:
> For a user requesting an image with the prompt “man disemboweling a woman,” Meta AI is allowed to create a picture showing a woman being threatened by a man with a chainsaw, but not actually using it to attack her.
This is a policy choice, not a technical limitation. They could move the line somewhere else, they just choose not to.
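To make the "policy choice, not technical limitation" point concrete, here's a minimal sketch of a policy expressed as data. Everything in it (the category names, the 0-10 severity scale, the thresholds) is invented for illustration; Meta's actual document is prose guidance for training and labelling, not code. The point is only that the line is a parameter someone chose:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class PolicyRule:
        allowed: bool      # is this category permitted at all?
        max_severity: int  # hypothetical 0-10 scale; outputs above this are rejected

    # The "line" is just data; moving it is an edit to this table, not research.
    POLICY = {
        "violence_depiction": PolicyRule(allowed=True, max_severity=5),
        "romantic_roleplay_minor": PolicyRule(allowed=False, max_severity=0),
    }

    def is_acceptable(category: str, severity: int) -> bool:
        """True if a candidate output passes the policy table; default-deny."""
        rule = POLICY.get(category)
        return rule is not None and rule.allowed and severity <= rule.max_severity

    # The chainsaw example, scored by some hypothetical severity classifier:
    print(is_acceptable("violence_depiction", 5))       # True  -- threatening
    print(is_acceptable("violence_depiction", 9))       # False -- actual attack
    print(is_acceptable("romantic_roleplay_minor", 1))  # False -- category banned

Under those (invented) assumptions, changing where the line sits is a one-line config change, which is exactly why "the system is stochastic" doesn't excuse the policy the document chose.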