• wyldberry 2 days ago

    The finding is surprising, but I think their methodology is a bit flawed.

    Study 1 is described as a "Difference-in-Differences analysis of engagement with 154,122 posts by 1068 accounts before and after the policy change". All this tells us is that existing accounts did not noticeably change their behavior. It says nothing about accounts created afterwards, and the culture of Twitter appears to have shifted quite a bit since likes went private.

    Basically, "okay cool, existing accounts didn't change their behavior". What about new accounts? More anonymous accounts? Can we learn anything else about platform growth and interaction? And what about classes of user: verified users, anonymous accounts, accounts tied to real identities?
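
    For context, the DiD here boils down to comparing the before/after change in likes for high-reputational-risk accounts against the same change for low-risk accounts. A minimal sketch of that estimator (my own reconstruction, not the authors' code; the toy column names and numbers are made up):

        import pandas as pd
        import statsmodels.formula.api as smf

        # Toy per-post data: likes for high- vs low-reputational-risk accounts,
        # before (post_policy=0) and after (post_policy=1) the visibility change.
        df = pd.DataFrame({
            "account_id":  [1, 1, 2, 2, 3, 3, 4, 4],
            "high_risk":   [1, 1, 1, 1, 0, 0, 0, 0],
            "post_policy": [0, 1, 0, 1, 0, 1, 0, 1],
            "likes":       [10, 12, 8, 11, 20, 21, 15, 17],
        })

        # The interaction coefficient is the DiD estimate: the extra change in
        # likes that high-risk accounts saw after the policy, relative to the
        # change low-risk accounts saw. It is only identified from accounts
        # observed in BOTH periods, which is exactly the limitation above.
        model = smf.ols("likes ~ high_risk * post_policy", data=df).fit()
        print(model.params["high_risk:post_policy"])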

    Study 2 is also too limited to support that conclusion, because people are less likely to honestly report engagement with content or beliefs that could be punished in a given political environment. This was most astutely observed by the French Polymarket user who crushed it betting on the 2024 election using a neighbor-polling methodology [0]. Essentially, it appears to be more reliable to ask about the preferences of a respondent's social circle than to ask the respondent directly.

    [0] - https://www.cbsnews.com/news/french-whale-made-over-80-milli...

    • harvey9 2 days ago

      If it can be set to private then it can be set to public again. I don't use any of those platforms but I would always assume that all my usage might end up being published one day.

      • hombre_fatal 2 days ago

        Rug-pulling years of built-up blue-checkmark brand/trust just to sell $7 subscriptions shows how little they care about anything.

        I wouldn’t be surprised if Elon decides “nvm likes are public again” with zero consideration.

        • undefined 2 days ago
          [deleted]
          • surgical_fire 2 days ago

            > blue checkmark brand/trust

            Blue checkmark had no value then, has no value now.

            Before, it was just a "good boy" badge. It just meant that whoever handed it out liked you enough.

            • chownie a day ago

              Careful, when you ramp the hyperbole up to this point you are essentially just lying.

              A decade ago you could quickly check the blue mark and know that the account most likely belonged to the person it was labelled for. The people who had a mark when they shouldn't have were by far the minority, and mistakes were the exception.

              In 2026 a blue checkmark actually means the account is far more likely to be fake, more likely to be lying and more likely to be engagement trolling. There's no guarantee it even belongs to a human person. The platform gives the account holder money if they can convince you to click spam links!

              It's not even close to having the same value now as it did then.

              • surgical_fire a day ago

                > A decade ago you could quickly check the blue mark and know that the account most likely belonged to the person it was labelled for.

                They routinely removed blue checkmarks from people who had naughty opinions. The account was known to belong to the actual person. This was no mistake; it was just the good boy badge being removed for vibes.

                My point is not that Twitter now under its retarded billionaire king is better. It is not.

                It was shit then, it is shit now. The world would be objectively a better place if Twitter had never existed.

                The turd just smells different under Musk. You happened to enjoy the old smell.

                • chownie a day ago

                  > They routinely removed blue checkmarks of people that had naughty opinions

                  This is actually you doing it again, lying via hyperbole. This didn't happen routinely (it was high-profile enough to get news stories the few times it did), and it was pretty specifically white supremacist groups.

                  > You happened to enjoy the old smell.

                  If you approached the conversation with a little more honesty yourself, you might not fall into the assumption that other people are partisan.

                  • surgical_fire a day ago

                    > assumption other people are partisan.

                    That's what you are doing. I am pretty left-leaning myself, if you look at my post history.

                    Twitter was a notorious toxic dump long before Musk acquired it. Nothing in what I described was a lie.

                    The blue checkmark was supposed to be something that meant "account is verified, person is who they say they are". It was weaponized by the platform itself to mean "this person has no naughty opinions". Now I ask, was this an improvement?

                    Now it is actually more straightforward. "Blue checkmark means person gives money to Twitter on a monthly basis". It is still a toxic dump, it just smells different.

        • jacobgkau 2 days ago

          > We find no detectable platform-level increase in likes for high-reputational-risk content (Study 1). This finding is robust for both between-group comparison of high- versus low-reputational-risk accounts and within-group comparison across engagement types (i.e., likes vs. reposts). Additionally, while participants in the survey experiment report modest increases in willingness to like high-reputational-risk content under private versus public visibility, these increases do not lead to significant changes in the group-level average likelihood of liking posts (Study 2).

          That conclusion's a surprise to me. I used to basically never like anything (even innocuous stuff) unless I specifically wanted to endorse it (essentially treating it as a less direct retweet). I like stuff all the time now.

          They do note their methodology could be affected by inorganic engagement that wouldn't be affected by like visibility, though. I wonder what other factors could've led to that conclusion.

          • jauntywundrkind 2 days ago

            A lot of people are going to be upset by the idea of their likes being public, but I really like it, and I hope we see better analysis of likes on Bluesky/atproto, where this data is public!

            Imo it really sucks that social networking is a dark forest, controlled by a very few, who have offered less and less at higher and higher prices to researchers, academics, and more generally the bots and services that used to do cool things. Bluesky has the juice, imo, and while most folks using it today are only using official Bluesky services, some folks are using independent services for their PDS hosting and for viewing the network.

            That the network is public feels like a bare minimum: a basic, obvious, and essential baseline for society to have any trust in, or meaningful engagement with, mass communication systems like these.

            • CupricTea a day ago

              Something I've never understood about public likes is why they ever existed in the first place.

              Previously, retweeting would show something to your followers, and liking tweets would...show them to your followers...

              Two ways to do the exact same thing. So it just added cognitive pressure to pick which action to take.

              • linkage 2 days ago

                The vast majority of "likers" have never been real people in any case. All of the prominent accounts are boosted by bots and Mechanical Turk users in economically underdeveloped countries. This has been shown numerous times by comparing the likes/impressions ratios for different accounts posting similar content.
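
                The comparison itself is simple to sketch (purely illustrative numbers, not from any particular analysis): two accounts posting similar content but with wildly different like rates are what gets flagged as boosted.

                    # Hypothetical accounts posting similar content; an outsized
                    # like/impression ratio on one of them is what the bot-boosting
                    # argument points at.
                    posts = [
                        {"account": "A", "likes": 4200, "impressions": 95_000},
                        {"account": "B", "likes": 4100, "impressions": 1_300_000},
                    ]
                    for p in posts:
                        ratio = p["likes"] / p["impressions"]
                        print(p["account"], f"{ratio:.2%} like rate")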

                Anecdotally, I have been 'liking' (as a verb) posts about 3x more after anonymity went into effect. I used to be anonymous on X until I started meeting people at IRL events and then had to be more cautious about what I broadcast to my network. Anonymized likes gave me back a lot of that freedom.

                • mikkupikku 2 days ago

                  Pretty much all of these social media companies have been built on a foundation of fraud. It's understandable why: the easiest way to break the chicken-and-egg problem of network effects is to simply cheat and use bots to make the platform look popular. It is nonetheless fraud, and the criminal DNA of these companies never goes away.

                  • neilv 2 days ago

                    > the easiest way to break the chicken-and-egg problem of network effects is to simply cheat and use bots to make the platform look popular.

                    In the relatively early days of Reddit, before mainstream awareness, I thought it suspicious how clever or knowledgeable so many of the comments were. Better than any other general-purpose venue I could think of.

                    So, when telling people about Reddit, I'd sometimes remark that I suspected they'd enlisted a bunch of writer shills to frontload and elevate their comment traffic.

                    Maybe it was all genuine and organic, an artifact of the voting system and network effects, while the bar for quality had been set so low by other venues.

                    Though, years after Reddit was mainstream, I heard something about the founders originally writing a lot of the comments themselves.

                    • accrual 2 days ago

                      Reddit is an interesting case, but at least to me it felt genuine in the early years. Even today I generally trust Reddit comments, but it's important to check the context and the commenter before proceeding.

                      I feel like even though Reddit has undergone various management changes, technology changes, site UI/UX changes -- the core demographic is still there, and I hope they don't fuck that up. Once old.reddit.com is gone I'll know the shark has truly been jumped. Or maybe someone intelligent will take the reins and understand that domain is not to be fucked with.

                      • blell 2 days ago

                        IIRC Reddit used to have an option, visible only to admins, that allowed them to write comments under other accounts without going through the trouble of registering or logging into them.

                        • jjoonathan 2 days ago

                          The internet itself went through a similar growth pattern without astroturf. The original users were all researchers, which served as a strong implicit filter, and then the new users were students who had to be taught Netiquette every September, and eventually the floodgates opened to the public and the academics lost the ability to steer the culture in what was called The Eternal September (1993).

                          The same "initial implicit filter followed by gradual but inevitable reversion to the mean" dynamic explains your observations of early reddit without implying fraud, although it certainly doesn't imply the absence of fraud either. That said, "fraud" is probably a strong word for reddit astroturf in this present day and age where we have a (comparatively) planet-sized Dead Internet built on geological quantities of ads and slop.

                        • candiddevmike 2 days ago

                            If they started out doing this, why wouldn't they continue doing it in the form of click fraud for advertising? Surely if they created some minimum % of click fraud for each ad, they'd make more money and it would fly under the radar of any customers looking into it...

                          • hiccuphippo 2 days ago

                              They don't need to do the click fraud themselves; they only need to not catch all of it. That's much less work.

                            • ses1984 2 days ago

                              People buying ads are their real customers, users are there to be exploited.

                              They catch enough fraud that their customers get a positive ROI, but surely they don’t catch all of it.

                              • Lammy 2 days ago

                                > People buying ads are their real customers, users are there to be exploited.

                                It's one level further. The global intelligence apparatus is the real customer, and they economically reward those who would build the most-surveillable and/or most-opinion-influencing products and services.

                                • candiddevmike 2 days ago

                                    I meant more: what is stopping platforms like Meta from generating a smallish amount of click fraud, under the guise of the fake-user framework they initially set up to kickstart engagement, to juice their revenue?

                            • Ajedi32 2 days ago

                              > This has been shown numerous times by comparing the likes/impressions ratios for different accounts posting similar content.

                              That seems like dubious methodology. Obviously if a celebrity posts something that's going to get more engagement than some rando, even accounting for the difference in impressions.

                            • hekkle 2 days ago

                                I'd say this study is inherently flawed. As I'm sure most people on the Internet know these days, just because X states that 'likes' are 'anonymous' doesn't mean they are.

                                I think the potential reputational damage would still be at the forefront of most people's minds, knowing that at any stage, at the whim of Elon, these could be revealed.

                              • omoikane 2 days ago

                                I can't find any mention of paid versus free accounts in this study. It used to be the case that people who paid for Twitter were already able to hide their "likes", before Twitter just made "likes" hidden for everyone. I would be interested in knowing if the visibility change caused anyone to give up their subscriber status, i.e. those people who would pay extra because they really care about keeping their "likes" hidden.

                                  Note that Twitter "likes" are still not private today, in the sense that the original post's author can see who liked it. I suspect people who were really sensitive to this visibility simply wouldn't engage with risky content to begin with.

                                • dfxm12 2 days ago

                                    What's the difference between a private like and a bookmark? What's the difference between a public like and an RT? They can be tracked separately, but is that necessary?

                                  • madars 2 days ago

                                    There is only one type of "like" on X. Since June 2024, all likes (both historical and new) are hidden from profiles, but they aren't fully anonymous: post authors can still see who liked their content (unless the "liker" has a protected account the author doesn't follow). Bookmarks are the only truly private engagement—no one, including the author, can see who bookmarked a post, though the public count still increases. A retweet actively redistributes content to your followers; a like signals approval (the author will normally see it) and influences the algorithm without that same direct amplification. Prior to the June 2024 update, your feed also had likes from people you follow.

                                    • drdeca 2 days ago

                                      An RT is visible in the feed when following someone. Public likes are visible when going to their account and viewing their list of likes. (When they put both in the feed, it’s just dumb.)

                                        Private likes are different from bookmarks in that the post shows how many likes it got, but not the number of bookmarks.