Anybody who stays at OpenAI is signing on to build machines that will be used to kill innocent people and control people who think that's a bad idea.
That's fine. But they shouldn't be lecturing anyone about "principles" or moral superiority while being paid or holding RSUs, since that would make them completely dishonest themselves.
It just shows that they did poor research on the company before joining (Meta is just as bad), that they're in on the grift (they joined OpenAI only after ChatGPT took off), and that this employee doesn't believe what they're saying.
I’m worried that China will build said killing machines and that we’ll be unprepared.
I'm worried that China will build said killing machines only because they see that we are and feel the need to be prepared.
Game theory in action
This.
Everyone will do this, because everyone will believe that everyone will do this.
Even worse, there really is no guarantee that the great powers will create the best terminators. Everyone talks about China and the US, and we should. At the same time, we should all keep in mind that nations from India and Indonesia to North and South Korea will not simply be sitting on their hands while the US and China forge ahead.
A future where $4 million American or Chinese terminators are easily overwhelmed by thousands upon thousands of $5 Indian autonomous devices is not at all outside the realm of possibility.
That's what makes it all so concerning. We can kind of see where it leads in terms of enhanced capability potential for non-state actors, but we can't really see a way to avoid that future.
I got scared when I saw China's synchronized drone swarms at the Beijing Olympics, which I believe was the point.
I got scared when I saw Trump attacking other countries with no plan in mind, just vibe warring while at the same time attacking allies and helping Putin.
Unless you're living in Taiwan, I don't think you have a lot to prepare for.
The myth of American moral superiority has been dead for a while. Why would China be any more evil than the US, which has waged far more colonialist wars and killed far more foreigners in recent times (look at the news today for inspiration)?
I don’t see any contradiction with what the OP said, though. You don’t have to be morally superior to still be concerned about a country’s forces killing you.
Uighur concentration camps? Falun Gong organ harvesting?
ICE is building a bunch of concentration camps as we speak.
Vietnam War, Iraq War, Afghanistan War, Iran war, Gaza war, allowing Iraq to get and use chemical weapons on Iran, forced regime change in South America (then and now). Get real, it's not equivalent in any way.
“I’m afraid my neighbor would kill my son, therefore, I’ll kill my son myself”
While you are worried about China, the US has done a genocide and started a new war.
And the U.S./Thiel/Musk are trying to start an AI-powered nuclear war next: https://wikipedia.org/wiki/Golden_Dome_(missile_defense_syst...
China is currently a more morally virtuous country than the US.
Believe me, China hasn't shown its true face yet, but it will. Just wait.
And while we are waiting, there are another few wars to be fought.
Maybe the true face of China so far is that it hasn't shown its true face. While the true face of the US is what it has shown again and again.
They welded shut the doors on Uyghur Muslims and left a bunch of donated food stacked outside their homes in one giant pile that they couldn't get to. It either rotted away or was eaten by animals.
https://xcancel.com/kalinowski007/status/2030320074121478618 to see replies.
“I don’t think we should spy on Americans and I don’t think we should kill people without human oversight but I still have respect for the guy willing to do that”. Please, make it make sense.
I have a hard time with this separation of “principle” from “people”. Isn’t it people who have principles?
Easier to remain in the industry if you are shittalking principles instead of people.
Yep, it really softens your actions, which in this case seem like a big step. So if you respect the people, why didn't you stay? Or if you disagree this strongly with their actions, how can you still respect them?
I get that there's nuance, but this feels like they want to make a big ethical stand without burning any bridges. You can have one of those.
There are people I've worked with who I'll never work with again. There are others I'd be willing to work with if they got their act together.
"If you disagree this strongly with their actions, how can you still respect them?" is a decent description of the latter.
"It's not X, it's Y" is a common ChatGPT trope used to give a sense of depth to a statement, but the specific contrast is generally murky, like it is here. This tweet was either written by ChatGPT or heavily influenced by ChatGPT's style.
There are no "principles" in big tech and I call bullshit on this tweet and their reasoning.
OpenAI already had military contracts while this employee was at the company and there was no open letter last year about that.
Prior to that, they were at Meta and joined OpenAI after ChatGPT took off.
If they thought that AGI was about "principles," then not only were they naive, but it leads me to believe that they were only there for the RSUs, just like during their time at Meta.
Why is it so hard to be honest and just say you were there for the money, fame, and RSUs, and not for so-called "AGI"?
Respect for standing up
In Germany it even made it into the general news: https://www.spiegel.de/wirtschaft/unternehmen/openai-manager...
So it wouldn't even be worth an HN submission. Well, I think it can still qualify under the exception for exceptional news.
Whatever happened to that all-powerful nonprofit that would ensure OpenAI does right? Something tells me they just cashed in and are running a corrupt shell at this point.
Always surprised when these "smart people" don't see these things coming from several years away. It's honestly hard for me to believe.
Going to work for these big SV corps is and always has been directly in service of US empire, that's literally what built the valley in the first place.
Haha, that's what I thought, but my thought was that I can't believe Sam Altman didn't see a serious backlash coming: Anthropic rejected a contract, saying "the only two things we won't do are mass surveillance and autonomous killer drones," and within six hours Sam was all over it.
It's easier to defer principled decision making to the future while you can rake in the cash in the meantime.
Yeah, I think this is pretty much it, tbh.
Autocomplete > Automurk
To save a click
> I resigned from OpenAI. I care deeply about the Robotics team and the work we built together. This wasn’t an easy call. AI has an important role in national security. But surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got. This was about principle, not people. I have deep respect for Sam and the team, and I’m proud of what we built together.
With open borders, how do you differentiate Americans from aliens? How do you differentiate criminals from innocents?
I don't see the relevance of the question; the US does not have open borders. And if you're suggesting AI is somehow magically able to tell a citizen from a non-citizen, then your understanding of AI is woeful.
It's trivially easy to find our criminals. After all, we made one our president.
If it wasn’t for the space before the question mark, I’d have assumed you were a bot.
I am not. But that's irrelevant.
Good for Caitlin. Sam Altman is awful. He literally admitted on Twitter that they rushed their military contract to get it done. Are you kidding me? You rushed your military contract?
Any employee who stays, especially given the financial cushion they have, is complicit. Shame on all of them.
But here's the sad truth: most of the knowledge workers at OpenAI won't be of any value soon, because of the very tool they're building.
You can't just blame everyone at OpenAI.
Everyone has their own unique situation.
If you don't want to upset your stomach, don't make the mistake of reading the replies. What a cesspool of humanity X is.
Their justification rings hollow when they continue to use X.
Doesn't seem to be an equivalency there.
There isn't, just inserting politics into a discussion on principles.
Leaving a job is easy. Social media on the other hand...
That Twitter post was clearly written by AI, along with instructions for the AI to avoid the "tells" and other tropes common to AI.
There's absolutely nothing wrong with something written with AI. Just pointing it out.
We're nearing or at an inflection point where people like this are dependent on it.
But was it written with an OpenAI AI?
Must've been.
And you say this based on what?
The way everyone else can tell. My instincts. AI has a flavor.
if that's the case, ai failed to remove the negative parallel construction (my current top ai smell aside from slanted inverted commas). what signs are there of this being ai asked not to sound like ai?
Right, that's the sign. AI often fails to do what it's told, so that's the sign that it was asked not to sound like an AI. I told AI to do this for my current post as well.
ok. do you see any more concrete signs? to me it smells like openai output with newlines removed. but aside from smell (and the negative parallel construction), one could argue that this may be the output of a human who has been influenced by the prose of ai.