I feel this article genuinely represents the gold standard for all such posts in this current genre and zeitgeist. It really does feel like a Spirit Bomb that Goku asked the entire planet to lend their energy to build, a collection of all the hopes and dreams against a mind-rending and incalculable opponent that seeks to eradicate the last remaining few shreds of the craft of software development that feel noble and decent.
I find myself wondering what to do with this article and the incredibly well-condensed collection of sentiments therein. Do I drop a furtive link in that #ai-enthusiasts Slack channel, in full view of everybody, including that CTO of mine who is REALLY excited about this stuff, and isn't quite ready to issue an AI usage mandate? Or do I keep this secreted away, like a bible in my left jacket pocket in case one of "their" sharpshooters attempts to assassinate me with an AI silver bullet, as though I were some wild and reckless heathen that seeks to abandon all good sense in the selfish pursuit of personal pride?
I don't mean to be hyperbolic, but I never envisioned this moment of feeling bizarre, feckless and iconoclastic for wanting to do what feels like the reasonable thing. The circular finances propping this stuff up don't add up. Do I really want to be left dependent on a thing that feels like it only just flashed into precarious existence, that will almost without a doubt be propped up, classed as "too big to fail" by one particular nation-state, whose actual intent and interests are to subjugate its subjects into a quivering oblivion when it's finally acknowledged the whole thing is in arrears? No, goddammit, I shouldn't have to accept this barrage against and wholesale weakening of my strongest assets: My mind and my discipline. At least that's how I feel about it anyway.
You wanna know why this article is great? I can't quote it. There isn't a single, gold nugget line in this post that can be copy-pasted into any form of short-form content without it losing some important aspect of the original message. Every idea is presented in conjunction with important supporting details that, if you take the time to digest them, will make you finally get it. Why we recoil at AI-generated content. Why code quality IS product quality. What the "craftsmanship" argument is actually about. And like 12 other nuanced ideas we've all heard before, but may not have fully understood. I have nothing but immense praise for the author.
Don't skim this one.
I see a lot of arguing over whether this is "good" or not. This seems like a subjective question. Some people enjoy it; if you don't, it wasn't written for you, so don't read it.
Maybe the arguing is really over whether it's higher-status to enjoy longform content, or to criticize it for not being more efficient? By identifying the argument, I've revealed it as silly, and clearly proven myself to be higher status than either side. The arguing may stop now. You're welcome.
The deep irony of the longform critiques is that the length is a proof of concept for the value of human effort.
I think this essay illustrates pretty well the value in indulging an experience not just for the sake of it but to try and truly know it emotionally. And perhaps, given some of the responses, it is rightly counterbalancing a lack of appreciation and understanding for anyone doing just that.
I do wonder about the prospects of any Etsy-like outcome for largely hand-crafted software, though. While you can personally find stylistic expression in the craft, I'm not sure how apparent the nuances of crafting code are to users of the product beyond the requirements of a UX design and vision. It's hard not to imagine generation industrializing a lot of this part of the craft of making software.
For me, I think the important thing not to lose sight of as we use generation more and more in software is our care for the workpiece. It feels like care and deep understanding are set up to become ever more valuable rarities in the future as we become less and less intimately involved, and they are something we will have to be intentional about in order to keep.
I feel like there are some parallels here to industrial designers and their desire to hold on to obsessing about and understanding the details in the face of using industrialized tooling, despite being very much removed from the intimate feeling of crafting every millimetre. Deeply caring is still meaningful and valuable even if it isn't minimally required.
> I do wonder about the prospects of any Etsy-like outcome for largely hand-crafted software, though. While you can personally find stylistic expression in the craft, I'm not sure how apparent the nuances of crafting code are to users of the product beyond the requirements of a UX design and vision.
The immediate example of something where good code DOES stand out to me is (one of my favorite games of all time) Factorio. There are lots of examples where I have been playing over the years and been amazed at the ability of the game engine to handle computationally complex operations at a really large scale. Coupled with a bunch of dev blogs explaining the little optimizations, it's given me a ton of respect for Factorio as a piece of software.
That said, I am not sure it strictly invalidates your point. That’s the only example I can come up with, and it requires knowledge of the game’s design via those devblogs which the average user of a messaging app or something won’t have.
I think there’s probably a market for high performance consumer code, but the vast majority of what makes it to end users will just be good enough.
Maybe it’s just me, but I feel that same kind of treachery when somebody tries to pass off a piece of AI-generated work as if it were their own voice.
There's a flaw in the Milli Vanilli argument. The band had no input into their songs. They 'performed' them by lip-syncing on stage, but all of the music and lyrics were someone else's. Milli Vanilli had no part in the creative process.
That's not technically true of AI content. There's some tiny little seed of a creative starting point in the form of a prompt needed for AI. When someone makes something with Claude or Nano Banana it's based on their idea, with their prompt, and their taste selecting whether the output is an acceptable artefact of what they wanted to make. I don't think you can just disregard that. They might not have wielded the IDE or camera or whatever, and you might believe that prompting and selecting which output you like has no value, but you can't claim there's no input or creativity required from the author. There is.
I also believe the Milli Vanilli argument to be flawed, but the other way around: music videos were all the rage back then, and the two supposed singers were actually just performers for the cameras. Does this mean they had no part in the success of the music? I don't believe that. That's not to say they were right in misleading the public and their fans, but it seems to me that Milli Vanilli was a fruitful combination of the public-facing performers and the musical process behind them. Everyone is fine with ghostwriters; why is this so different? The entertainment industry is fake through and through, but nobody is actually taking offense at this fact. I've often wondered if a similar project could find success if it were presented differently, as a collaboration of musicians and performers.
Actually, I am not fine with ghostwriters. I am fine with speechwriters, because public speeches are a shared product, and so is music. Performing someone else's music is normal and has probably been done ever since music existed. In that sense I would also not mind a performance of a song that was originally created using AI. If the song and the performance are good, it is no different from performing a traditional song. If you don't like AI music, that's fine; I also don't like every traditional song, but that's not the fault of the performer (beyond choosing a song you don't like). The problem with Milli Vanilli is that they violated the expectation that we know who the singers are. Milli Vanilli were dancers, not singers, and if that had been properly communicated, it would have been fine.
I'd challenge that assertion. LLMs still produce very bad results with greenfield work, so that seed was generated by people who had both creativity and skill a thousand times before. Having a glimmer of an idea that you've probably seen more or less intact somewhere else and getting an AI to take it from that point is much closer to Milli Vanilli than any actual creative work.
Good artists borrow; great artists steal. LLMs synthesizing previous works doesn't mean they're meritless, because humans do the same thing.
AI is like a camera. Photographs can be art, but they aren't always. If you prompt an LLM only with "write me a novel," you can't take credit for the result, any more than if you took a cell-phone snap of the Mona Lisa. But AI can be used intentionally to create art, same as a carefully planned portrait photoshoot is art.
Art isn't a binary. Art is a spectrum. A creation is art to the extent that it reflects an artist's unique vision, not because of the tools the artist used (or didn't use.)
> hey chatgpt give me a snarky response to this comment that would wittily refute the argument, make it funny and interesting, concise and to the point
Ah yes, the “tiny little seed” defense — because if I hum three notes and Quincy Jones writes the symphony, clearly we co-composed it.
Sure, prompting involves taste and direction. So does ordering at a restaurant. But if I tell the chef “spicy, but make it fusion” and then Instagram the plate as my culinary creation, I’m not suddenly Gordon Ramsay.
Nobody’s saying there’s zero input. We’re saying input isn’t authorship. A seed isn’t a forest — and picking your favorite output isn’t the same as growing it.
Yeah sorry, I can claim there's none, you can't stop me.
I could claim there's even less creativity than lip syncing, and I will.
And if there was any creativity, the use it is being put to is to do violence to artists. If you think you deserve someone else's work as your own, you better be prepared for the fact that you won't even really understand who you're ripping off, but someone else sure as shit will and they're going to be pissed as hell.
At least Milli Vanilli themselves a) knew what they were doing, b) paid the real singers, I presume, and c) presented real art created by real people.
But still everyone was mad at the lie of it, at being asked to venerate an imposter. And being asked to believe that in the future "impostor" will be the most venerable role. No. Just no.
> There's a flaw in the Milli Vanilli argument.
That reference though! I never thought I'd ever read on HN a blog mentioning Girl you know it's true from Milli Vanilli. Wow the throwback in (exceptionally cheesy) time.
I accidentally clicked on the article instead of the comments link for this one, a rare mistake as I usually glance at the comments before deciding to read, but I'm glad I did in this case.
I read it all, and found myself engaged throughout. Not to say that it was all riveting; there were certainly drier spots than others, but it felt 'real'. Maybe they did use AI (I somehow doubt that given the content), but even if they did, they went over everything in a way that retained a voice that felt authentic.
I hate that many of the articles I read now all feel like they have the same half-hearted attempt at trying to grab your attention without ever actually clearly saying what they mean.
As for the content, I had actually just been told by management this last week that I need to become AI 'fluent' as part of future performance evaluations and I have been deeply conflicted about it. I do think AI has value to add, but I don't think it's something that should be forced and so this article resonated with me.
It's a long read, and not for everyone, but I recommend it as a way of hearing another human's opinion and deciding for yourself if it has value.
>I had actually just been told by management this last week that I need to become AI 'fluent' as part of future performance evaluations and I have been deeply conflicted about it.
I hear this and FWIW, if there aren't very specific things being asked of you, using AI as a stack overflow replacement as the OP admits to doing is as "AI fluent" as anything else in my book.
If you trust the AI disclosure at the top, this was all human-made except for one heading.
This really resonated with me, thank you for writing it <3
> Companies value velocity and new launches and shipping first at all costs because of course they do; it’s table stakes. Speed of delivery is basically the number one corporate value of every organization whether they admit to it or not.
Yeah this one is again one of the causes of where we are today (alongside profit extraction, or perhaps because of it). It used to be the case that you would find companies that would offer quality at a slightly higher price, and people would be more than willing to pay for it. Now the feeling is that this is all marketing driven and there is no 'higher quality' because everyone gave up and went after speed of delivery. And well, as the old saying goes, that's valuable when you're catching fleas.
Great article, but a couple of things jarred a bit.
I'm in a technical but non-IT industry that currently rents its software at commercial scale. The software in question that I use is terrible, and it is probably the worst on the market. The industry is such that it is not an exaggeration to say that people have died on account of its flaws. All other solutions in the domain are better but not good.
I have had a slowly-congealing dream/blueprint in my head for over a decade about how the system I use should work, and in the last 6 months I have been accelerated enough by AI (significantly more so by Opus 4.5/6) that I have built a version of the software that I am now using in production at my job, and it is the most satisfied I have been in my career in the last decade.
Point being (and it was almost made in the article) that the software doesn't actually matter (no-one's reading the assembly either way) but the function and what it enables does. If it doesn't, no end user cares whether it was 25k SLoC or 25M.
I’ve only managed to get halfway through this. I don’t tend to read ‘AI BAD’ but this was really insightful and thoughtful.
To nobody in particular: I loved this article, and all the little jokes and asides.
I liked the phrase "Aislop's fables".
And "if a clod be washed away by the C" which is a reference to John Donne's poem that introduced the phrases "No man is an island" and "for whom the bell tolls".
"the Ghibli-inspired scenes that really, really love using every available shade of brown"
Also, Aislopica. He missed the opportunity to say Aislopica Fables.
Aislop's fables - Aesop's fables. That's pretty easy to see. What does Aislopica fables refer to?
Aesopica, an other name of Aesop's fables
That was an excellent article.
"I’m not arguing that this technology should be unilaterally destroyed; I am arguing that we are collectively using it in the dumbest possible way, causing the most self-inflicted injury, and maximizing the amount of angst and suffering we’ll all have to contend with. I am angry at generative AI because it seems to be making us think and act like complete idiots."
Made me smile.
> It is a miracle of human ingenuity that we can etch 100 billion transistors onto a piece of rock we dug out of the ground.
I know this is probably a deliberate simplification as part of a rhetorical flourish, but one of my favourite parts about semiconductors is the fact that we don't dig up the rocks, we grow them to order. The thought fills me with childlike wonder...
It’s a very principled and well-reasoned stance. I think it underestimates the relentlessness of progress and capitalism, though. Short of those who are independently wealthy and can do artisanal things for the sake of it, I suspect most shortly won't have a choice.
Sorry for going off-topic, but the typography on this site is beautiful.
I for one enjoyed this very long essay. It should've been a lot shorter, but you also didn't have to read it, it says right there in the title :)
>It should've been a lot shorter
Honestly I don't think so. An essay like this is more than just content, it's an experience for the reader. I value the time I got to spend with it and feel I came away with value that a summary or condensed version would just not have had.
Beautifully written.
> There were entire classes of Hacker News submissions that I refused to read the comments on. Including the comments about this article, should such comments ever materialize.
The author has made the correct call. There's a pretty deep irony that all the top-level comments at the time of this writing are about how the article is too long. It's quite clearly not trying to succinctly convince you of a point, it's meant to be a piece of genuinely human writing, and enjoyed (or not!) on the basis of that.
Author writes an interesting, nuanced, wide-reaching essay about AI and society, with a main theme being about AI and its impact on our humanity.
All other top level arguments offer AI summaries that miss all of the interesting, nuanced, wide-reaching topics about AI and its impact on our humanity, and complain it was too long to read.
Truly a gem of irony.
[flagged]
I am not here to tell you what to like or not, but my English Literature 'O' and 'A' levels were among my favourite parts of all my schooling, and even the books and plays and poems that I forced myself to wade through and hated have informed me for the rest of my life. Poems I hated at 15 I realised I loved deeply 30 or 40 years later.
And I really loved this essay. It's the single best piece of writing on "AI" I have read yet.
Everything you say, I disagree with.
My takeaway from this comment is that your emotional development was somehow arrested at the "angsty teenager who's mad at the world" phase
Yeah, this thread is eye opening.
I loved the essay. If anyone didn't enjoy it, quit halfway, or decided not to read it, that's absolutely fine. There's plenty of thoughtful writing I don't enjoy or don’t feel like spending my time with. But it is well-executed.
But apparently a sizable percentage of today's HN user base can't get through it, and finds the very idea of being able to get through anything longer than an LLM summary objectionable.
If I wanted to be an ass, I would call it a "skill issue."
But I don't want to be an ass. It's just deeply sad.
It's not worth 40 minutes to learn no new facts, while irrationally hoping for a payoff.
The feeling of boredom created in me is the author's fault. This is just content.
I personally love the appearance of the tl;dr about a third of the way through, that is some S tier trolling.
Design can go a long way when reading long form text. If someone here is in contact with the author please tell them to improve the typography; most notably smaller and justified text for mobile phones. Other designers could probably weigh in. I’m not an expert, but well designed text goes a long way towards comfortable reading.
Apart from that, content wise a preliminary abstract is nice to have. I do like how the author provides a table of contents.
I agree with the article's general thrust: use AI if you want to, don't use it if you don't want to, whether or not you'd want to will probably change as AI continues to evolve, and most people seem to be being pushed to use AI in the dumbest ways imaginable.
----
I rather strongly disagree with the framing around the environmental impacts, though; the article would make a much stronger point if it resisted the urge to peddle the same “muh water and electricity” disinformation that gets parroted all over the place by people who can't be bothered to put numbers into the context of other numbers.
For example:
> A single DGX B200 AI server is rated to consume 14,300 watts of electrical power at peak. You can cram about four of these on a rack if you like to live on the edge, and these four units might draw something like 200 amps of current combined. For a point of comparison, a typical single-family home in the United States will have wires from the utility company that are thick enough to provide a 200 amp service.
Cool, and how many households' worth of AI queries would that single B200 (let alone the rack of 'em) be able to handle? Probably a lot more than any individual household could ever hope to produce per second, even assuming a household consisting entirely of hardcore AI stans (let alone someone like the author, or like myself, who uses AI sparingly). Each of those servers is handling requests from thousands upon thousands of users; those power and water requirements get amortized over such a large quantity of requests (and people making them) that if you've ever eaten a single hamburger in your entire life then you've done more harm to the environment than hundreds (if not thousands) of those AI queries.
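For what it's worth, the quoted figures are easy to sanity-check. A minimal back-of-envelope sketch: the 14,300 W peak rating and the four-per-rack density come from the article, but the per-server concurrency and per-request duration below are rough assumptions invented purely for illustration, not real benchmark numbers.

```python
# Back-of-envelope check of the rack power figures quoted above.
SERVER_PEAK_W = 14_300        # DGX B200 peak draw (quoted in the article)
SERVERS_PER_RACK = 4          # quoted rack density
VOLTS = 240                   # typical US split-phase service voltage

rack_watts = SERVER_PEAK_W * SERVERS_PER_RACK
rack_amps = rack_watts / VOLTS            # in the ballpark of the "200 amps" cited

# Amortization over users: assume 50 concurrent requests per server,
# each taking 30 seconds of peak-power compute (both numbers assumed).
CONCURRENT_REQUESTS = 50
SECONDS_PER_REQUEST = 30
joules_per_request = SERVER_PEAK_W * SECONDS_PER_REQUEST / CONCURRENT_REQUESTS
wh_per_request = joules_per_request / 3600

print(f"{rack_amps:.0f} A per rack, ~{wh_per_request:.1f} Wh per request")
```

Under those (assumed) loads, a single request works out to a couple of watt-hours, which is the amortization point being made: the household-scale power feed is shared across thousands of users' requests, not burned per query.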
This all comes after this quip in the margins:
> You think you’re just gonna self-host an open weight model like GLM-5 on your personal hardware and cut out the hosting costs? Well, alright, hope you have 1,727 GB of VRAM lying around.
and like… the author does understand that not everyone needs such a large model with such a large VRAM requirement, right? Or that VRAM itself ain't even strictly necessary (it just happens to make things faster — which is more important for a server handling requests from thousands of users than it is for my laptop handling requests from exactly one user: me)? That's indeed part of the issue the author correctly identifies with people using AI in seemingly the dumbest way possible: that dumbness includes the demand for instantaneous responses, and the consequent demand for throwing more and more VRAM and SSDs at the problem, when “just make a cup of coffee while the LLM ‘thinks’ about what you asked of it” is a perfectly workable approach. As I'm typing out this comment, I've got Olmo 3.1 on this same exact machine doing a bunch of thinking about how to respond to me asking it “How much wood would a woodchuck chuck if a woodchuck could chuck wood?”¹, and it's totally fine that it's taking multiple minutes because there are other things I can do while I wait.
This all ain't to say that we shouldn't care about AI's power and water usage. We should absolutely be pushing for better efficiency. That includes acknowledging that there are options besides “throw more and more VRAM at it and hope for the best”; the article instead prefers to assume that the big beefy servers are the only option, dismissing the notion of self-hosting with little thought, and that dismissal does the article's broader point a disservice.
----
The discussion around AI being considered a “tool” also rubbed me the wrong way a bit:
> This unlocks a common refrain from the booster class: “A true craftsperson uses every tool at their disposal!” Which, if you think about it for more than three seconds, is ridiculous on its face. Gotta dig some holes for fence posts? Okay! Bring along every shovel on the truck, the Ditch Witch, a box of ANFO and the Bagger 293. Have the people who echo this kind of stuff ever built anything in the physical world? Your average craftsperson has one real good compound miter saw that they use for basically every cut on the jobsite. They’ll use it until it breaks down, then they’ll replace it with a newer model of substantially the same thing. In what world is constantly switching tools for the sake of switching tools a remotely smart use of time?
That's pretty blatantly a strawman, and seemingly the exact opposite of how even the most vibe-codey of vibe-coders use AI. They're largely using AI as that miter saw; they might switch out blades/models for a given job, but at the end of the day it's the same tool. That's indeed yet another part of that “people using AI in the dumbest ways imaginable” problem that's otherwise correctly-identified: AI maximalists having a hammer called ChatGPT and seeing everything as a nail.
And also: who cares whether or not someone brings along every shovel + the Ditch Witch + the ANFO + the Bagger 293 if it's easy enough to bring them all? That's only a problem to the extent that carrying one tool comes at the expense of one's ability to carry another tool. If you've got a big enough truck to carry all that gear around, and you're okay with taking the time to load and unload it all, then fuck it, might as well full send — and then if there happens to be a boulder blocking the path of your fence, then it's a good thing you have that ANFO handy, right?
And of course, most software developers ain't doing their work in a pickup truck in the middle of nowhere (though some are, and that's fucking rad). Most are doing their work at their desks, in their offices or homes, wherein they're probably in close proximity to the entirety of their collection of tools. Hell, even if they are doing their work in a pickup truck in the middle of nowhere, the vast majority of the tools they need are probably already present (or could readily be made present) on whatever laptop they're bringing along for the job. Toby and Lyle don't need to worry about the logistics of carrying their tools (in particular Lyle's trusty lathe) because they do their jobs in a workshop wherein those tools already live; I don't need to worry about the logistics of carrying around my compilers and editors and manpages and such (or even an LLM!) because I do my job on a laptop wherein those tools already live.
----
¹ For the record, Olmo 3.1 concluded (like most models do these days) that “If a woodchuck could chuck wood, it would chuck as much as it could—but given its actual habits, it would probably just dig a very efficient burrow instead.”
[dead]
Tl;dr:
Over sixteen thousand words about how the author doesn’t really use language models very much but might in the future
I would imagine that the target audience has an attention span and literacy level that allows reading sixteen thousand words without too much trouble.
I like the idea that people are downvoting and not rebutting a summary because they think that an accurate summary will cause folks to not read sixteen thousand words that they would otherwise. It’s kind of agreement by dissent
Bully for them I guess. Thanks for finishing that.
Either they actually wrote all that on their own, or they had an LLM spew it. Either way, why? They had a valid point; you don't have to use LLMs to write your stuff. Why bury that point in this insane pile of verbiage?
But thanks for saving the rest of us. This is why I read the comments first.
Because it was, even if you disagree with it, beautifully written, emotionally resonant, full of funny jokes and cute stories and metaphors, and states well — and encapsulates — all of the nuances and sub-arguments of its side of the argument?
...because reading and writing well-written prose is meaningful and enjoyable?
It feels like half the people here do not read or write in their free time, which would be understandable if this were not primarily a site for software engineers who write (sorta) as a job
It is funny how that's basically one of the core points the article makes -- and in fact the article paints Hacker News commenters specifically as people who don't see that kind of inherent value in craft and artistry -- but the AI-generated summaries those people are relying on have missed it completely.
I actually disagreed with that particular point made in the article, because I don't really see myself as somebody who sees value in craft and artistry, I just want effective code that works (which imho LLMs cannot create).
But after reading this comment section... I mean if enjoying well written prose counts as enjoying craft and artistry I guess I do then? Damn.
Nobody reads any more. It's been that way for at least ten years. Nobody writes any more, without prompting.
plenty of people read. maybe you're just an illiterate surrounded by illiterates?
I'm on Hypocrisy News, with you, so yes, you're right.
> because reading and writing well-written prose is meaningful and enjoyable?
This is not prose, it is exposition. It is perfectly valid to critique any expository essay, especially one of this length, for its density (or lack thereof) of substantive information.
Sometimes writing can both contain information and be beautiful? This article is charming and thoughtful. Its style may not be for everyone, but for me it really hit, I am thoroughly enjoying reading it. Its style gives me no problem calling it prose.
A person writing an essay on their own site doesn't need to have the information density of a bus timetable.
This is a hilariously ironic parallel to the debate over whether code is an art or a science, referenced right in the article. It can be both.
I somewhat disagree that this is not prose. This didn't seem like a purely expository piece. If it were just a straightforward technical piece, then yeah, it's way too long; it could have been a few sentences.
But this seemed like it bridges the gap between prose and an expository essay; it was doing both.
> prose and an expository essay; it was doing both.
Putting prose in an essay means there are more valid criticisms of a piece of writing, not fewer. If somebody is breakdancing and reciting the periodic table at the same time it’s ok if somebody notices if they skipped the lanthanides and actinides.
I’m a fan of blending the two! It’s just really really hard to do both well at the same time. My most recent example is Malcolm Harris’ history of Palo Alto, it is incredibly well-done.
Sure, but the specific critique that it is too verbose seems less valid if one of the primary purposes of the piece was to be prose.
That’s kind of the point that I was making. When you mash the two together, both lenses are valid critiques.
It’s an exponentially more difficult way to accomplish either goal because one reader will see it and think “this is a sixteen thousand word essay that says very little” and another will see it and think “what a wonderful story” and there’s nobody to adjudicate who is correct.
Like I posted “this is sixteen thousand words about how the author doesn’t really use language models but might one day” and some folks’ rebuttal is that they enjoyed reading it. Those are two completely unrelated things! It’s like if folks saw the cover of The Hobbit and thought “Hell yeah!” and then when they read “there and back again” thought “whoever wrote that was being unnecessarily reductive”
Because the article says much more than that and your LLM summary misses all the nuance.
A tweet might have sufficed?
>Either they actually wrote all that on their own, or they had an LLM spew it. Either way, why?
I mostly skimmed it. It’s entirely feasible that the author buried a confession about getting away with manslaughter or whatever that I missed somewhere in a few sentences in the middle of that novella though. It does begin with several paragraphs essentially telling you not to read the post and has a lot of completely unnecessary exposition (for example the section on Luddites)
Edit: I want to point out that I went over the post with my own eyeballs and brain
> then I become a little pissed off at having my time and attention wasted by somebody who didn’t care enough about what they were doing to actually do that thing.
I remember people saying this about emails vs postal mail.
"If I cared as much as I want you to, I'd have written a shorter article"
"I would have written a shorter letter, but did not have the time."
Blaise Pascal, French mathematician, in “Lettres Provinciales,” circa 1657.
This was so wordy I had to ask an LLM to tell me what the point is.
So you don't have to:
"you don’t have to embrace a trend, tool, or narrative simply because others say you should — especially if it doesn’t resonate with you or align with your values"
An important new twist to add to the great AI versus NO AI discussion.
> This was so wordy I had to ask an LLM to tell me what the point is.
Every time I check this comment section, this sentence jumps out at me again. You "had to" ask an LLM. You "had to".
>The rent-a-brain aspect is more acutely alarming. And I will be blunt here: It sure does seem like the prolonged use of LLMs can reliably turn certain people’s minds into mush...
>Stop me if you’ve heard this one before: “After [however long] using AI coding assistants, there’s no way I’m going back!” You know, I don’t doubt that this is true. Because I’m not sure some of the people who say this could go back. It reads like praise on the surface, but those same words betray a chilling sense of dependence.
Perhaps, very ironically, they did "have to."
What if, and hear me out here, "You don't have to"
Most people simply do not have the patience to spend 30 minutes reading something anymore. It's why magazines like The New Yorker are on life support. So, yes. "Had to."
I should point out that simply not reading a blog post that you're not interested in reading is also an option...
I guess it's a lost skill.
Social media brought us the age of 'the irresistible urge to proffer one's opinion on everything'. AI evolved that age to the dizzying advancement of 'not requiring a brain to do so'.
I noped out of this article because it was using 10 paragraphs to say nothing.
Genuine human writing can be great; this isn't it.
> You don't have to
> I had to
got 'em
[flagged]
Exceptionalism says we have the best of everything, including idiots.
Because we went through so many years of school for it