• yapyap 3 hours ago

    “ Excessive moderation is a barrier to open and robust debate, ultimately undermining the diversity of perspectives that make meaningful discourse possible. Supressing dissenting opinions will lead to an echo chamber effect. Would you like to join me an upcoming campaign to restore Europe? Deus vult!”

    ah social media, some people are truly as dumb as rocks

    • ranger_danger 3 hours ago

      Not to mention all the people extremely confused over what "CSAM" is, seemingly without the ability to google it.

      • astrange 3 minutes ago

        I think your life is better off if you don't know what that means, so feel free not to look it up.

    • voat 2 hours ago

      I'm interested to see how Bluesky ends up handling bad actors in the long term. Will they have the resources to keep up? Or will it become polluted like the other large platforms?

      Also, if part of their business model will be based on selling algorithmic feeds, won't that mean more spam is actually good for their bottom line, since they'll sell more algorithmic feeds that counter the spam?

      • paxys 2 hours ago

        The AT Protocol already accounts for this. There will eventually be community-built content labelers and classifiers that you can subscribe to in order to rank and moderate your own feed however you want.
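
        For illustration, client-side ranking against subscribed classifiers could look something like the sketch below. The data shapes and scores here are assumptions for the example, not the actual AT Protocol lexicon.

          # Toy feed ranking using scores from classifiers the user subscribes to.
          posts = [
              {"author": "alice", "text": "hello", "spam_score": 0.0},
              {"author": "eve", "text": "buy now!!!", "spam_score": 0.9},
          ]
          classifiers = [
              lambda p: -5.0 * p["spam_score"],                  # downrank likely spam
              lambda p: 1.0 if p["author"] == "alice" else 0.0,  # boost followed authors
          ]
          ranked = sorted(posts, key=lambda p: sum(c(p) for c in classifiers), reverse=True)

        Which classifiers feed the score, and how they're weighted, would be entirely the user's choice.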

        • luckylion 2 hours ago

          I understand the moderators working for the big social networks have a terrible job and often see the worst the internet has to offer.

          Who is going to do that job as a volunteer? Or is that expected to be solved by technology? It's hard to imagine volunteers reliably achieving what Google, Facebook, etc. could not.

          • VancouverMan an hour ago

            Some people seem to get immense satisfaction and pleasure out of censoring other people online.

            It's something I've seen time and time again, in a wide variety of discussion forums, for decades now.

            Such people will happily do it for free, and they're willing to dedicate many hours per day to it, too.

            I don't understand their motivation(s), but perhaps it simply gives them a sense of power, control, or influence that they otherwise don't have in their lives outside of the Internet.

            • kiba 43 minutes ago

              Moderation. It's a thankless job. I suppose blocking spam counts as censorship.

              • wmf an hour ago

                Those people should never be allowed to moderate anything for obvious reasons.

                • darkerside 19 minutes ago

                  Praying he doesn't take this the wrong way, but perhaps /u/dang would be so kind as to weigh in? I don't equate what he does on a daily basis to censoring, but I'm certain it constitutes a part of the job (after all, this is the Internet, and I'm sure there's all manner of trash making an appearance on occasion). Furthermore, I would posit that there's a bit of overlap between censorship and moderation -- even excellent moderation -- although I welcome any nuance I'm missing on this topic.

                  Moreover, while I hope he is compensated well enough, I imagine this was initially, if not any longer, a job that demanded effort disproportionate to the monetary reward. What would keep someone interested in such a job and naturally driven to perform it well?

                  Coming from a place of curiosity, meaning no offense, and happy to let this comment slip quietly away to a back room to sit alone if that's what it merits.

                  • astrange 6 minutes ago

                    > Furthermore, I would posit that there's a bit of overlap between censorship and moderation -- even excellent moderation -- although I welcome any nuance I'm missing on this topic.

                    You aren't missing anything. Many people have oppositional defiant disorder and have never used an unmoderated forum; such forums are completely unusable because they're full of spam.

                • dartos 2 hours ago

                  The internet has been run on volunteer moderators for a long long long long long time.

                  • bdangubic 2 hours ago

                    you really think Google/Facebook/… can’t do it reliably? :-)

                    • moogly 2 minutes ago

                      As an example, Facebook has a sordid history of leaving actual snuff movies up for days.

                  • Waterluvian 2 hours ago

                    I have a feeling that this is going to create a weird thing of some magnitude where accounts end up on popular blacklists for poor reasons and have no recourse.

                    I’m concerned that in time it might develop into zealous communities of samethink where you have to mind any slightly dissenting opinion or you’ll get blacklisted.

                      I think what I'm thinking about is essentially that judges cannot be replaced by community opinion. (Not that Twitter's moderation was any better.)

                    • swatcoder an hour ago

                      There's ultimately no getting around that kind of segmentation. You can't make everybody read what you want them to read.

                      If you don't let people control what they encounter, whether by signing up for aggressively moderated communities or subscribing to automated curators or just manually black/white-listing as they see fit, they'll find themselves dissatisfied with all the noise and move on.

                      Unmoderated social media is not a solution to "zealous communities" and "samethink" -- through self-selection, it just becomes a haven for whatever zealotry or samethink happens to organically dominate it.

                      • jacoblambda an hour ago

                        > I have a feeling that this is going to create a weird thing of some magnitude where accounts end up on popular blacklists for poor reasons and have no recourse.

                          This has actually already happened to some extent with moderation lists, and the solution has generally been to call out list managers who abuse their authority and block them (since you normally can't add someone who has blocked you to your lists).

                        • nirav72 40 minutes ago

                          > that in time it might develop into zealous communities of samethink

                            Yeah, this is Reddit in a nutshell. Anyone who pointed out that Harris was going to lose the election for x reasons was ridiculed. Among other things.

                          • astrange a minute ago

                              Don't read too much into the mere fact that something happened. That was about the narrowest possible loss, and it doesn't validate most of the reasons it could've happened.

                      • sojournerc 38 minutes ago

                        Relevant username. Voat definitely fell victim to bad actors.

                        Are you a creator/founder?

                      • hipadev23 2 hours ago

                          Due to how easy it is to set up accounts and post on Bluesky, it's likely many of the same operatives behind the propaganda and bot armies on Twitter are now pushing the same vitriolic content and triggering these reports. If they can negatively impact Bluesky at a critical moment, it'll reduce the flow of new users, who will quickly surmise "oh, this is just like Twitter".

                        • citizenkeen 2 hours ago

                          This underestimates the effect of Bluesky’s culture of “block and move on”. There are curated block lists you can subscribe to. Individual communities do a pretty good job of shutting down toxicity they don’t want to engage with.

                          • agoodusername63 15 minutes ago

                              It shares the same problem that Twitter had years ago, back when it supported API blocklists.

                              Everyone you block is at the whim of the blocklist owner, and it didn't take long for some of those owners to go off the rails and use their lists as tools for their own unrelated personal crusades.

                              Bluesky is already starting to experience this, judging by a few lists I've seen going around.

                          • ks2048 41 minutes ago

                              You're right that they need to handle the bot problem well to really succeed.

                              But it won't be "just like Twitter" unless the "Discover" tab ("For You" on X) is filled with the billionaire owner's non-stop hyper-partisan political posts.

                            • ranger_danger 2 hours ago

                                How would they make it harder to create bots, or reduce their number, without sacrificing privacy (e.g. SMS or ID verification)?

                              I think if you can realistically solve that you'd be a millionaire already.

                              • hipadev23 2 hours ago

                                  I don't think you realistically can. I'd instead approach it by limiting the reach of new accounts until they've proven to be good actors.

                                  Or switch it back to invite-only: there's a massive userbase now, and if you invite a problematic account, it becomes a problem for your account too. Operate on a vouch system.
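
                                  A toy sketch of the vouch idea, where banning an account damages the standing of whoever invited it (all names and numbers here are made up, not any real Bluesky mechanism):

                                    # Hypothetical invite/vouch bookkeeping.
                                    inviter_of: dict[str, str] = {}   # invitee -> inviter
                                    standing: dict[str, int] = {}     # account -> vouch standing

                                    def invite(inviter: str, invitee: str) -> bool:
                                        if standing.get(inviter, 0) < 0:
                                            return False              # bad standing: invite privileges revoked
                                        inviter_of[invitee] = inviter
                                        return True

                                    def ban(account: str) -> None:
                                        # Walk up the invite chain, penalizing with decaying severity.
                                        penalty, cur = 3, account
                                        while cur in inviter_of and penalty > 0:
                                            cur = inviter_of[cur]
                                            standing[cur] = standing.get(cur, 0) - penalty
                                            penalty -= 1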

                                • fullstackchris 2 hours ago

                                  > good actors

                                    Aha... don't be naïve... what is the definition of "good" in 2024? Take the US population, for example... 50% will say your intentions are "good", the other half will not!

                                  • bdjsiqoocwk an hour ago

                                    So maybe think with your own head instead of just taking the average of everyone else's opinion.

                                    • rizky05 2 hours ago

                                        This is still better than the existing system.

                                  • jacoblambda 40 minutes ago

                                      Moderation lists and labellers honestly already get you most of the way there. Labellers are very effective at flagging spam/botted content, and accounts that continuously show up on labellers as spam/bot sources get referred to moderation lists dedicated to specific types of spam and bot content.

                                      So you can already start by using a labeller and just hiding that content behind a warning (kind of like the NSFW wall), hiding it entirely, or attaching a visual tag to it, based on your preferences. Then, to filter out more consistent perpetrators, you can rely on mute/block lists.
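
                                      As a sketch, the per-label preferences described above boil down to a tiny dispatch table (the label names and post shapes are assumptions for the example):

                                        # Toy per-label handling: hide, warn-wall, or visibly tag a post.
                                        PREFS = {"spam": "hide", "nsfw": "warn", "satire": "tag"}

                                        def render(post: dict) -> str | None:
                                            actions = {PREFS.get(label, "show") for label in post.get("labels", [])}
                                            if "hide" in actions:
                                                return None                                # drop entirely
                                            if "warn" in actions:
                                                return "[hidden behind warning] " + post["text"]
                                            if "tag" in actions:
                                                return "[labelled] " + post["text"]
                                            return post["text"]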

                                    • jabroni_salad 2 hours ago

                                        This IMO is why the group chat is the best social network. Anything with more than 20 people doesn't go on my phone. Sorry, marketers.

                                      • DrillShopper 2 hours ago

                                        That problem is unsolvable

                                        • calebh 2 hours ago

                                          What about using TPM modules? I've been researching these modules lately, primarily for use in online video games. From my understanding, you can use TPMs to effectively ban players (TPM ban) based on their hardware. This would mean every time an account is banned, the bad actor would have to switch to a different TPM. Since a TPM costs real money, this places a limit on the scalability of a bad actor.

                                          • DrillShopper 2 hours ago

                                              Cool, if you can require them for every possible interaction on a platform. But even that violates privacy if there's one universal value tying it all together (the identifier of the specific TPM).

                                              It's just the phone number/email issue, but tied to hardware. If you think these things won't leak and allow bad actors to tie your accounts together across services, then I have some lovely real estate in Florida you may be interested in.

                                              It also appears that resetting an fTPM works around this, since it fully resets the TPM. Even if it didn't, people buying used CPUs could find that they're banned from games they've never even played or installed on their system before.

                                            • nicce 2 hours ago

                                                > It also appears that resetting an fTPM works around this, since it fully resets the TPM. Even if it didn't, people buying used CPUs could find that they're banned from games they've never even played or installed on their system before.

                                                It depends on how the TPM is used in practice. The initial manufacturer key (the Endorsement Key) is hardcoded and unextractable. All the long-lived keys are derived from it and can be verified using the public part of the EK. Usually the EK (or a cert created from it) is used directly for remote attestation.

                                                More here, for example: https://learn.microsoft.com/en-us/windows-server/identity/ad...
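
                                                For example, a service could derive a stable ban identifier from the EK certificate's fingerprint. A minimal sketch, assuming the cert was already extracted (e.g. with tpm2-tools) and its chain validated server-side:

                                                  from cryptography import x509
                                                  from cryptography.hazmat.primitives import hashes

                                                  def ek_ban_id(ek_cert_pem: bytes) -> str:
                                                      cert = x509.load_pem_x509_certificate(ek_cert_pem)
                                                      # The fingerprint is stable for the TPM's lifetime, which is what
                                                      # makes it effective for bans and terrible for privacy: the same
                                                      # value links one device across every service that ever sees it.
                                                      return cert.fingerprint(hashes.SHA256()).hex()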

                                            • nicce 2 hours ago

                                              > What about using TPM modules? I've been researching these modules lately, primarily for use in online video games. From my understanding, you can use TPMs to effectively ban players (TPM ban) based on their hardware. This would mean every time an account is banned, the bad actor would have to switch to a different TPM. Since a TPM costs real money, this places a limit on the scalability of a bad actor.

                                                It is even worse for privacy than a phone number. You can never change it, and you can be linked across different services, soon automatically if Google goes forward with its plans.

                                          • ben_w 2 hours ago

                                            > I think if you can realistically solve that you'd be a millionaire already.

                                            Please.

                                            If I knew how to do that, or even how to reduce bots even with SMS verification etc., I'd be a multi-billionaire at least.

                                              Making a Twitter clone is relatively easy; making a community with a good vibe that's actually worth spending time in is the one problem none of the clones have solved, and it's why none of them stand out to normal users.

                                            • majorchord an hour ago

                                                One idea I had (feel free to steal it for your own use) was a one-time crypto payment to create an account. Of course you can't prevent bots from doing that, but if the price is right then I think it might greatly limit the number of bots on the platform, as well as the number of low-quality accounts.

                                              But you don't know what you don't know, so I might be missing something that makes this pointless.
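
                                                A minimal sketch of the gate, with a hypothetical helper standing in for the on-chain payment check (nothing here is a real API):

                                                  SIGNUP_FEE_USD = 1.00  # made-up price point

                                                  def verify_payment(tx_id: str, minimum_usd: float) -> bool:
                                                      raise NotImplementedError  # hypothetical on-chain payment check

                                                  def create_account(username: str, payment_tx_id: str) -> bool:
                                                      if not verify_payment(payment_tx_id, minimum_usd=SIGNUP_FEE_USD):
                                                          return False  # no confirmed, unspent payment: no account
                                                      # ...mark the tx as spent and run the normal signup path...
                                                      return True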

                                      • 4ntiq an hour ago

                                        Something tells me 4chan will survive the birth and death of many social media platforms. There are lessons to be learned, but everyone keeps repeating the same mistakes.

                                        • dyauspitr an hour ago

                                          Yeah, but what's the point? Most people don't want to spend a lot of time there.

                                          • newsclues 42 minutes ago

                                            The small group that does has had a major influence on culture at large.

                                        • dang an hour ago

                                          Related ongoing thread:

                                          Bluesky is currently gaining more than 1M users a day - https://news.ycombinator.com/item?id=42159713 - Nov 2024 (154 comments)

                                          Also recent and related:

                                          The Bluesky Bubble: This is a relapse, not a fix - https://news.ycombinator.com/item?id=42156907 - Nov 2024 (48 comments)

                                          Consuming the Bluesky firehose for less than $2.50/mo - https://news.ycombinator.com/item?id=42152362 - Nov 2024 (58 comments)

                                          Maybe Bluesky has "won" - https://news.ycombinator.com/item?id=42150278 - Nov 2024 (743 comments)

                                          Watch Bluesky's explosive user growth in real time - https://news.ycombinator.com/item?id=42147497 - Nov 2024 (11 comments)

                                          How to migrate from X to Bluesky without losing your followers - https://news.ycombinator.com/item?id=42147430 - Nov 2024 (50 comments)

                                          1M people have joined Bluesky in the last day - https://news.ycombinator.com/item?id=42144340 - Nov 2024 (124 comments)

                                          Ask HN: Bluesky is #1 in the U.S. App Store. Is this a first for open source? - https://news.ycombinator.com/item?id=42129768 - Nov 2024 (44 comments)

                                          Ask HN: Will Bluesky become more popular than Twitter? - https://news.ycombinator.com/item?id=42129171 - Nov 2024 (13 comments)

                                          Visualizing 13M Bluesky users - https://news.ycombinator.com/item?id=42118180 - Nov 2024 (236 comments)

                                          Bluesky adds 700k new users in a week - https://news.ycombinator.com/item?id=42112432 - Nov 2024 (168 comments)

                                          How to self-host all of Bluesky except the AppView (for now) - https://news.ycombinator.com/item?id=42086596 - Nov 2024 (79 comments)

                                          Bluesky's AT Protocol: Pros and Cons for Developers - https://news.ycombinator.com/item?id=42080326 - Nov 2024 (60 comments)

                                          Bluesky Is Not Decentralized - https://news.ycombinator.com/item?id=41952994 - Oct 2024 (194 comments)

                                          Bluesky Reaches 10M Accounts - https://news.ycombinator.com/item?id=41550053 - Sept 2024 (115 comments)

                                          • egypturnash 2 hours ago

                                            I wonder how they're planning to pay for people to deal with these reports.

                                          • bigbones 2 hours ago

                                            I think the centralized nature of moderation needs fixing, rather than moderation itself. Real-world moderation doesn't work by having a central censor; it involves like-minded people identifying into a group and having their access to conversation enabled by that identification. When the conversation no longer suits the group, the person is no longer welcome. I think a technical model of this could be made to work.

                                            I looked semi-seriously at doing a Twitter clone around the time Bluesky was first announced, and to solve this I'd considered something like GitHub achievement badges (e.g. organization membership), except that instead of a static set, badges could be created by anyone, and trust relationships could exist between them. For example, a programming language community might have existing organs that wish to maintain a membership badge - the community's existing CoC would then govern how that badge is applied to a user, thus extending the community's existing expectations for conduct out to the platform.

                                            Since within the tech community these expectations are relatively aligned, trust relationships between different badges would be quite straightforward to imagine (e.g. Python and Rust community standards are very similar). Outside tech, similar things might be seen in certain areas of politics, religion, or local cultural spheres. Issues and drama regarding cross-community alignment would naturally be confined to the neighbouring badges of a potential trust relationship, not the platform as a whole.

                                            I like the idea of badge membership and badge trust being the means by which visibility on the platform is achieved. There need not be any big centralized standards for participation; each user would effectively be allowed to pick their own poison and starting point for building out their own visibility into the universe of content. Where issues occur (an abusive user carrying a highly visible badge, or the maintainer of such a badge turning sour or suddenly giving up on its reputation, or similar), a centralized function could still exist to step in and potentially take over, at least in the interim, but the need for this should (at least in theory) be greatly diminished.

                                            A web of trust over a potentially large number of user-governed groupings has some fun technical problems to solve, especially around making it efficient enough for interactive use. And from a usability perspective, onboarding a brand-new account that starts with no badges would be a challenge of its own.

                                            Running on little sleep but thought it was worth trying to sketch this idea out on a relevant thread.
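
                                            To make the visibility idea concrete, a toy sketch (all data structures assumed; nothing here is a real AT Protocol or Bluesky API): an author is visible to a viewer if any badge within a few trust hops of the viewer's badge lists the author as a member.

                                              from collections import deque

                                              badge_members = {
                                                  "python-community": {"alice", "bob"},
                                                  "rust-community": {"carol"},
                                              }
                                              badge_trust = {  # directed trust edges, set by badge maintainers
                                                  "python-community": {"rust-community"},
                                                  "rust-community": {"python-community"},
                                              }

                                              def visible_to(viewer_badge: str, author: str, max_hops: int = 2) -> bool:
                                                  seen, queue = {viewer_badge}, deque([(viewer_badge, 0)])
                                                  while queue:
                                                      badge, hops = queue.popleft()
                                                      if author in badge_members.get(badge, set()):
                                                          return True
                                                      if hops < max_hops:
                                                          for trusted in badge_trust.get(badge, set()) - seen:
                                                              seen.add(trusted)
                                                              queue.append((trusted, hops + 1))
                                                  return False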

                                            • akira2501 an hour ago

                                              > involves like-minded people identifying into a group and having their access to conversation enabled by that identification.

                                              I don't think it has anything to do with "identification." It has to do with interest. If your groups are centered around identity then that will be prioritized over content.

                                              Content needs little moderation. Identity needs constant moderation.

                                              • photochemsyn 2 hours ago

                                                The whole point of online discussion IMO is not to join some little hive mind where everyone agrees with each other (e.g. many subreddits) but rather to have discussion between people with different information bases and different viewpoints. That's why it's valuable: you learn new things and are exposed to different points of view.

                                                • bigbones 2 hours ago

                                                  That's true regardless; for example, pre-Elon Twitter was nothing but right-wing tears, post-Elon nothing but left-wing tears.

                                                  • ltoph an hour ago

                                                    Pre-Elon, the "right" had tears because they were not allowed to speak at all. Post-Elon, the "left" has tears because the "right" is allowed to speak.

                                                    What the current left wants is total censorship so their lies go uncontradicted.

                                                    It is interesting to see now that several right-wing YouTube channels criticize Trump's staff announcements and intend to call him out if he deviates from his announced no-war policy. Such a thing never happens on the left; all channels are in lockstep.

                                              • jmyeet 2 hours ago

                                                This is the big challenge of any platform for user-generated content, and it's incredibly difficult to scale, do well, and be economical. A bit like CAP, it's almost "pick 2". You will have to deal with:

                                                - CSAM

                                                - Less severe offensive material, e.g. YouTube had an issue a few years ago where (likely) predators were commenting timestamps on innocuous videos featuring children, and TikTok videos featuring children get saved far more often than others. I would honestly advise any parent to never publicly post videos or photos of your children to any platform, ever.

                                                - Compliance with regulation in different countries (e.g. NetzDG in Germany)

                                                - Compliance with legal orders to take down content

                                                - Compliance with legal orders to preserve content

                                                - Porn, real or AI

                                                - Weaponization of reporting systems to silence opinions. Anyone who uses TikTok is familiar with this: TikTok will clearly just take down comments and videos once they receive a certain number of reports, without a human ever reviewing them, and then give you the option to appeal

                                                - Brigading

                                                - Cyberbullying and harassment

                                                This is one reason why "true" federation doesn't really work. Either the content on Bluesky (or any other platform) has to go through a central review process, in which case it's not really federated, or these systems need to be duplicated across more than one node.

                                                • sailfast 15 minutes ago

                                                  Agreed - moderation at scale is a tough and expensive problem to get right.

                                                  That said, I wonder what it would take these days to get it working well enough using existing LLMs. I'm not sure how much you'd need to build that isn't more or less off the shelf, if you're mostly trying to keep your safe-harbor protections and avoid regulator scorn.

                                                • James_K 2 hours ago

                                                  Do they have any way to make money yet?

                                                • Waterluvian 2 hours ago

                                                  I suspect that a lot of why people love Bluesky so much is simply that it's free, has no ads, and the population has been quite manageable.

                                                  I don’t think I’ve seen a concrete plan for how it’s going to keep scaling and pay the bills.

                                                  • CharlesW an hour ago

                                                    "With this fundraise, we will continue supporting and growing Bluesky’s community, investing in Trust and Safety, and supporting the ATmosphere developer ecosystem. In addition, we will begin developing a subscription model for features like higher quality video uploads or profile customizations like colors and avatar frames. Bluesky will always be free to use — we believe that information and conversation should be easily accessible, not locked down. We won’t uprank accounts simply because they’re subscribing to a paid tier."

                                                    https://bsky.social/about/blog/10-24-2024-series-a

                                                    • Waterluvian 44 minutes ago

                                                      I'm hoping a subscription model without special upranking will be sufficient!

                                                      I’m very skeptical but I’m rooting for success!

                                                    • toss1 an hour ago

                                                      If it is an influence operation, the people who want to wield influence pay the bills. That's already the point of X/Twitter (large Saudi funding, likely to help prevent another Arab Spring-type event in their country), and it was the point of the hundreds of millions SBF spread around. Bluesky's Series A was led by Blockchain Capital; this seems like part of this year's significant movement of crypto influencers into politics. If so, they don't need it to turn a profit; they'll profit off the influence. Just like the corporations that normally jettison any money-losing department but buy and keep permanently loss-making news departments for the influence they can create.

                                                    • shams93 2 hours ago

                                                      I saw what happened on Threads: CSA material essentially flooding in. Very, very creepy, so I stopped using Threads.

                                                      • AzzyHN 2 hours ago

                                                        Blegh. Hopefully they're using some computer assistance, like PhotoDNA or Project Arachnid.

                                                      • OutOfHere 3 hours ago

                                                        Use an LLM to rapidly scale the moderation of both text and images (while keeping free speech in mind).

                                                        • pfisherman 2 hours ago

                                                          Why use an LLM as opposed to a narrower, purpose-built model? LLMs are not beating smaller, purpose-built models on tasks like POS tagging, NER, sentiment analysis, etc. And the inference costs scale quite poorly (unless you are self-hosting Llama or something).

                                                          • OutOfHere 2 hours ago

                                                            That's where "rapidly" comes in. Also, LLMs allow very high customization via the choice of prompt. It's a lot quicker to adapt the prompt than to retrain a fine-tuned model. I think the outputs of the stabilized LLM could later be used to properly fine-tune a custom model for efficient use.

                                                            As for sentiment, even embeddings can do a good job at it.
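
                                                            For illustration (assuming an OpenAI-style chat API; the policy text and labels below are placeholders, not a real ruleset), the "adapt the prompt instead of retraining" point amounts to editing a string:

                                                              from openai import OpenAI

                                                              client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

                                                              # The whole "retraining" step becomes editing this policy string.
                                                              POLICY = (
                                                                  "You are a content moderator. Classify the post as exactly one of: "
                                                                  "OK, SPAM, HARASSMENT, ILLEGAL. Reply with the label only."
                                                              )

                                                              def classify(post_text: str) -> str:
                                                                  resp = client.chat.completions.create(
                                                                      model="gpt-4o-mini",  # any capable chat model would do
                                                                      messages=[
                                                                          {"role": "system", "content": POLICY},
                                                                          {"role": "user", "content": post_text},
                                                                      ],
                                                                  )
                                                                  return resp.choices[0].message.content.strip()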

                                                          • whaaaaat 2 hours ago

                                                            Moderation is orthogonal to free speech. They are separate concerns.

                                                            • l33t7332273 2 hours ago

                                                                Unless you're taking the stance that free speech as a concept applies only to the government, it's definitely not orthogonal.

                                                              Almost all moderation concerns are obviously restrictions on free speech, it’s just that for several reasons people have started to shrink “speech” into “political speech within some Overton window”

                                                                For some obvious examples of typical moderation that conflicts with pure free speech, consider amoral speech like abuse material, violent speech like threats, and economically harmful speech like fraudulent advertising or copyright violations.

                                                              Extending these to things like hate speech, bigotry, pornography, etc are all moderation choices that are not orthogonal to free speech.

                                                              • jabroni_salad 2 hours ago

                                                                  As a booru and AO3 enjoyer, I can promise you that a tag-based system works perfectly if posters are good about applying consistent, agreed-upon tags and users are good about subscribing to tags they like and putting tags they don't like on their PBL.

                                                                  I don't think mega-big 'public square' type platforms will ever achieve this, since growth requires chasing the most passive types of consumers, who need everything done for them.

                                                                • jrvarela56 2 hours ago

                                                                  No it’s not. More moderation, more false positives, less free speech.

                                                                  Just having ‘moderation’ means the speech is not ‘free’.

                                                                  • ben_w an hour ago

                                                                    Counterpoint:

                                                                      import requests
                                                                      api_url = "https://example.invalid/api/messages"  # placeholder endpoint
                                                                      while True:  # unthrottled spam loop
                                                                          requests.post(api_url, json={"username": "@jrvarela56", "message": "hello"})
                                                                    
                                                                    If this was allowed to run without moderation, targeting your account on some social network, it would effectively jam you from receiving any other messages on that network.

                                                                    Moderation is noise reduction: undesirable content (junk mail, male junk, threats, even just uninteresting content) is noise; the stuff you want is signal; usability requires a good signal-to-noise ratio. Speech can be either signal or noise.

                                                                    • DrillShopper 2 hours ago

                                                                      So if I posted your home address, social security number, bank account and routing numbers, your work address, pictures of you, your spouse, and your kids, the schools they go to, your license plate numbers, and pictures of your car and its real-time location, moderators who believe in free speech couldn't take that down?

                                                                      Interesting world we live in then.

                                                                      • linotype 2 hours ago

                                                                        Most people would be OK with suppressing CSAM. At least I hope most people are.

                                                                        • erulabs 2 hours ago

                                                                          Sure, you’re not wrong, I am very okay with not seeing CSAM, but your argument doesn’t hold water. Every communication is speech, and moderating it by definition limits it. What limits are acceptable is the question, and I think zero human beings truly believe the answer is 100% or 0%. I am a free speech maximalist, but I also used to work at a place that had a huge 8ft wide sombrero you’d wear when working with content moderation teams to prevent unneeded trauma to coworkers.

                                                                          Anyone who pretends there is a totally morally clean way to solve this issue is naive or a liar.

                                                                          • almatabata 2 hours ago

                                                                            > I am very okay with not seeing CSAM, but your argument doesn’t hold water.

                                                                            By this you mean that it's very easy to define a clear set of rules for moderation?

                                                                            • erulabs 36 minutes ago

                                                                              In the case of CSAM, yes. In the case of other material? No. Again, I come down heavily on the side of freedom of speech, but the argument that limiting some specific kinds of widely condemned speech is somehow orthogonal to limiting speech does not stand up to the most basic scrutiny.

                                                                      • zeroonetwothree 2 hours ago

                                                                        How does that make any sense? More moderation clearly means speech is less free, in that you are blocking some of it (whether for good reason or not)

                                                                        • paxys 2 hours ago

                                                                          It isn't that clear cut. If you are trying to say something on a forum and a bot farm immediately gives you a thousand downvotes, will banning those bots increase or decrease free speech on that forum?

                                                                          • jp_nc 2 hours ago

                                                                              I do not have a right to put signs promoting my beliefs in your front yard, and preventing me from doing that is not a prohibition of free speech. What's going to slay me is that the group that bitched and moaned about Twitter preventing free speech has turned it into a hellhole nobody wants to be in. Now they will come over and say the same about Bluesky??? Guys, you can post your ridiculous nonsense on X... nobody is infringing on your right to free speech.

                                                                            • anon291 2 hours ago

                                                                                If you live in California, you have every right to say what you want even in privately owned spaces, so long as they're regularly open to the public. More states should be like this and enforce it.

                                                                        • bakugo 2 hours ago

                                                                          The audience Bluesky is currently cultivating is the kind of audience that mashes the report button every time they see something they disagree with, so this isn't surprising.

                                                                          If the user base actually keeps growing at a steady rate, I don't see how they'll get the resources to deal with so many reports (especially since they don't seem to have a plan to monetize the site yet) without resorting to the usual low-effort solutions, such as using some sort of algorithm that bans automatically based on keywords or number of reports.

                                                                          • almatabata 2 hours ago

                                                                            > without resorting to the usual low-effort solutions, such as using some sort of algorithm that bans automatically based on keywords or number of reports.

                                                                              Or you could prioritize reports from accounts with a proven track record. If I consistently report posts that clearly violate the rules, why shouldn't my reports count for more than those from an account that was just created?

                                                                              Conversely, an account that consistently reports nonsense should accumulate negative karma until, at some point, its future reports can safely be ignored.
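
                                                                              As a toy sketch (all thresholds made up), reporter karma could weight incoming reports like this:

                                                                                reporter_karma: dict[str, float] = {}

                                                                                def report_weight(reporter: str) -> float:
                                                                                    k = reporter_karma.get(reporter, 0.0)
                                                                                    if k < -5:
                                                                                        return 0.0  # serial false reporters are ignored entirely
                                                                                    return 1.0 + max(k, 0.0) * 0.1  # proven reporters count for more

                                                                                def resolve_report(reporter: str, was_valid: bool) -> None:
                                                                                    # Called after a human (or downstream system) reviews the report.
                                                                                    delta = 1.0 if was_valid else -1.0
                                                                                    reporter_karma[reporter] = reporter_karma.get(reporter, 0.0) + delta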