AI is already so much better than 99% of customer support employees.
It also improves brand reputation by actually paying attention to what customers are saying and responding in a timely manner, with expert-level knowledge, unlike typical customer service reps.
I've used LLMs to help me fix Windows issues using pretty advanced methods, where MS employees would have just told me to either re-install Windows or send them the laptop and pay hundreds of dollars.
I don't want AI customer support. I want open documentation so I can ask AI if I want to, or ask human support if the issue isn't resolvable with the available documentation.
All my interactions with any AI support so far have been me repeatedly saying "call a human" until it calls a human.
This is great but most customer support is actually designed as a “speed bump” for customers.
Cancel account - have them call someone.
Withdraw too much - make it a phone call.
Change their last name? - that would overwhelm our software, let’s have our operator do that after they call in.
Etc.
>Change their last name? - that would overwhelm our software, let’s have our operator do that after they call in.
That doesn't make much sense. Either your system can handle it or it can't. Putting a support agent in front isn't going to change that.
Except AI support agents are only using content that is already available in support knowledge bases, making the entire exercise futile and redundant. But sure, they're eloquent while wasting your time.
"Only" kind of misses the benefit though. I'm very bearish on "AI", but this is an absolutely perfect use case for LLMs. The issue is that if you describe a problem in natural language on any search engine, your results are going to be garbage unless you randomly luckboxed into somebody asking, with near identical verbiage, the question on some Q&A site.
That is because search is still mostly stuck in ~2003. But now ask the exact same thing of an LLM and it will generally be able to provide useful links. There's just so much information out there, but search engines just suck because they lack any sort of meaningful natural language parsing. LLMs provide that.
There are AI agents that train on knowledge bases but also keep improving from actual conversations. For example, our Mava bot actually learns from mods directly within Discord servers. So it's not about replacing human mods but assisting them so they can take better care of users in the end.
I don't see how this is any different from enriching knowledge bases from feedback and experience. You just find yourself duplicating all the information, locking yourself into your AI vendor and investing in a technology that doesn't add anything to what you had before. It's utterly nonsensical.
Most of my questions are answerable from the support knowledge base.
If I am calling support, it is probably because I already scoured the resources.
Over the past 3 years of calling support for any service or infrastructure (bank, health insurance, doctor, whatever), over 90% of my requests were things only solvable via customer support or escalation.
I only keep track because I document the cases where I didn't need support in a list of "phone hacks" (like press this sequence of buttons when calling this provider).
Most recently, I went to an urgent care facility a few weekends ago, and they keep submitting claims to the arm of my insurance that is based in a different state instead of my proper state.
It’s even worse than you think. I work with Amazon Connect. Now the human agent doesn’t have to search the Knowledge Base manually, likely answers will automatically be shown to the agent based on the conversation that you are having. They are just regurgitating what you can find for yourself.
But I can’t imagine ever calling tech support for help unless it is more than troubleshooting and I need them to actually do something in their system or it’s a hardware problem where I need a replacement.
It isn’t empowered to do anything you can’t already do in the UI, so it is useless to me.
Perhaps there is a group that isn’t served by legacy ui discovery methods and it’s great for them, but 100% of chat bots I’ve interacted with have damaged brand reputation for me.
The actual report (which this article doesn't link to; bad Axios):
> Despite $30–40 billion in enterprise investment into GenAI, this report uncovers a surprising result in that 95% of organizations are getting zero return.
Oof
These are probably the same people who are "surprised" when 100 offshore agency dredgings don't magically build the app 10x faster than 10 expensive onshore workers.
To be fair, the PowerPoint they were shown at that AI Synergies retreat probably was very slick.
I have tried to digest why this is done. It is not because they believe they are 10x faster.
It is because they think it will 10x their chances of getting a really good engineer at 1/10th the cost.
At least that is my theory. Maybe I am wrong. I try to be charitable.
It's almost like the people in charge of these businesses have no goddamn clue what they're actually selling, how it works or why it's good (or isn't).
It's almost like, and stay with me here, but it's almost like the vast majority of tech companies are now run by business graduates who do not understand tech AT ALL, have never written a single line of code in their lives, and only know how to optimize businesses by cutting costs and making the products worse until users revolt.
This is no different from the personal computer, and it is to be expected.
The initial years of adopting new tech have no net return because it's an investment. The money saved is offset by the cost of setting up the new tech.
But then once the processes all get integrated and the cost of buying and building all the tech gets paid off, it turns into profit.
Also, some companies adopt new tech better than others. Some do it badly and go out of business. Some do it well and become a new market leader. Some show a net return much earlier than others because they're smarter about it.
No "oof" at all. This is how investing in new transformative business processes works.
> transformative business processes
Many new ideas came through promising to be "transformative" but never reached anywhere near the impact that people initially expected. Some examples: SOA, low-code/no-code, blockchain for anything other than cryptocurrency, IoT, NoSQL, the Semantic Web.
Each of these has had some impact, but they've all plateaued, and there are very good reasons (including the results cited in TA) to think GenAI has also plateaued.
My bet: although GenAI has plateaued, new variants will appear that integrate or are inspired by "old AI" ideas[0] paired with modern genAI tech, and these will bring us significantly more intelligent AI systems.
[0] a few examples of "old AI": expert systems, genetic algorithms, constraint solving, theorem proving, S-expression manipulation.
> This is no different from the personal computer, and it is to be expected.
What are you talking about? The return on investment from computers was immediate and extremely identifiable. For crying out loud "computers" are literally named after the people whose work they automated.
With Personal Computers the pitch is similarly immediate. It's trivial to point at what labour VisiCalc automated & improved. The gains are easy to measure and for every individual feature you can explain what it's useful for.
You can see where this falls apart in the Dotcom Bubble. There are very clear pitches; "Catalogue store but over the internet instead of a phone" has immediately identifiable improvements (Not needing to ship out catalogues, being able to update it quickly, not needing humans to answer the phones)
But the hype and failed infrastructure buildout? Sure, Cisco could give you an answer if you asked them what all the internet buildout was good for. Not a concrete one with specific revenue streams attached, and we all know how that ends.
The difference between Pets.com and Amazon is almost laughably poignant here. Both were ultimately attempts to make "catalogue store but on the computer" work, but Amazon focussed on broad inventory and UX. It had losses, but managed to contain them and became profitable quickly (Q4 2001); its losses shrank as revenue grew.
Pets.com's selling point was selling you stuff below cost. Good for growth, certainly, but this also means that their losses grew with their growth. The pitch is clearly and inherently flawed. "How are you going to turn profitable?" "We'll shift into selling less expensive goods." "How are you going to do that?" "Uhhh....."
...
The observant will note: This is the exact same operating model of the large AI companies. ChatGPT is sold below unit cost. Claude is sold below unit cost. Copilot is sold below unit cost.
What's the business pitch here? Even OpenAI struggles to explain what ChatGPT is actually useful for. Code assistants are the big concrete pitch, and even those crack at the edges as study after study shows the benefits appear to be psychosomatic. Even if Moore's law hangs on long enough to bring inference cost down (never mind per-task token usage skyrocketing, so even that appears moot), what's the pitch? Who's going to pay for this?
Who's going to pay for a Personal Computer? Your accountant.
I highly doubt that the return in investment was seen immediately for personal computers. Do you have any evidence? Can you show me a company that adopted personal computers and immediately increased its profits? I’ll change my mind.
The document actually debunks this take:
> GenAI has been embedded in support, content creation, and analytics use cases, but few industries show the deep structural shifts associated with past general-purpose technologies such as new market leaders, disrupted business models, or measurable changes in customer behavior.
They are not seeing the structural "disruptions" that were present for previous technological shifts.
I think it just turns into table stakes.
For companies competing in the same niche, the same low hanging fruits will be automated first if they invest in ML. So within the niche there is no comparative advantage.
It's pay big tech or fall behind.
I'm wondering, if the return is that the employees get 20 minutes extra free time per day, is that a good, quantifiable return? Would anyone consider as a "return" anything that you can't put on your balance sheet?
Not Found
The requested URL was not found on this server. Apache/2.4.62 (Debian) Server at nanda.media.mit.edu Port 443
IMHO this is going to be part of a broader trend where advancements in AI and robotics nullify any comparative advantages low wage countries had.
> IMHO this is going to be part of a broader trend where advancements in AI and robotics nullify any comparative advantages low wage countries had.
Then why hasn't it yet? In fact, some lower-wage countries such as China are at the forefront of industrial automation.
I think the bottom line is that many Western countries went out of their way to make manufacturing - automated or not - very expensive and time-consuming to get off the ground. Robots don't necessarily change that if you still need to buy land, get all the permits, if construction costs many times more, and if your ongoing costs (energy, materials, lawyers, etc) are high.
We might discover that AI capacity is easier to grow in these markets too.
Hard to honestly say if China is low wage. On one hand, their wages have risen as the work force has shrunk for a few years now, and tasks are being outsourced to other countries. On the other hand, their currency is pegged, meaning the earning power of the workers should be much higher, so that they can afford the things they are making and transition to a consumer-driven economy.
They are very much devaluing their currency. This is all the rage and I expect a currency devaluation race as the US tries to deal with crushing government liabilities.
> Then why hasn't it yet?
Because the current companies are behind the curve. Most of finance still runs on Excel. A lot of other things, too. AI doesn't add much to that. But the new wave of Tech-first companies now have the upper hand since the massive headcount is no longer such an advantage.
This is why Big Tech is doing layoffs. They are scared. But the traditional companies would need to redo the whole business, and that is unlikely to happen. Not with the MBAs and Boomers running the board. So they are doing the old stupid things they know, like cutting costs by offshoring everything they can and abusing visas. They end up losing knowledgeable people who could've turned the ship around, the remaining employees become apathetic/lazy, and brand loyalty sinks to the bottom. See how the S&P 500 minus the top 10 is flat or dumping.
>They end up losing knowledgeable people who could've turned the ship around, the remaining employees become apathetic/lazy, and brand loyalty sinks to the bottom
Right. And AI is here to fix that!
> We might discover that AI capacity is easier to grow in these markets too.
If only because someone else has to build all the nuclear reactors that supply the data centers with electricity. /s
I don’t fully agree. Yes, AI can be seen as a cheaper outsourcing option, but there’s also a plausible future where companies lean more on outsourced engineers who are good at wielding AI effectively, to replace domestic mid-level roles. In other words, instead of nullifying outsourcing, AI might actually amplify it by raising the leverage of offshore talent.
Consider the kinds of jobs that are popular with outsourcing right now.
Jobs like customer/tech support aren't uniquely suited to outsourcing. (Quite the opposite; people rightfully complain about outsourced support being awful. Training outsourced workers on the fine details of your products/services and your own organisation, never mind empowering them to do things, is much harder.)
They're jobs that companies can neglect. Terrible customer support will hurt your business, but it's not business-critical in the way that outsourced development breaking your ability to put out new features and fixes is.
AI is a perfect substitute for terrible outsourced support. LLMs aren't capable of handling genuinely complex problems that need to be handled with precision, nor can they be empowered to make configuration changes. (Consider: Prompt-injection leading to SIM hijacking and other such messes.)
But the LLM can tell meemaw to reset her dang router. If that's all you consider support to be (which is almost certainly the case if you outsource it), then you stand nothing to lose from using AI.
Also consider the mental health crisis among outsourced content moderation staff that have to appraise all kinds of depravity on a daily basis. This got some heavy reporting a year or two ago, in particular from Facebook. These folks for all their suffering are probably being culled right now.
You could anticipate a shift to using AI tools to achieve whatever content moderation goals these large networks have, with humans only handling the uncertain cases.
Still brain damage, but less. A good thing?
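One way that split could work, sketched in Python with made-up thresholds and a stand-in classify() function rather than any real moderation API: the model auto-handles the clear-cut cases at both ends of the confidence range, and only the uncertain middle band ever reaches a human reviewer.

    # Hypothetical sketch: only low-confidence moderation calls go to humans.
    def classify(content: str) -> float:
        """Stand-in for a real model; returns the probability of a policy violation."""
        return 0.5  # placeholder score

    def moderate(content: str, remove_above: float = 0.95, allow_below: float = 0.05) -> str:
        score = classify(content)
        if score >= remove_above:
            return "removed"        # model is confident it's a violation
        if score <= allow_below:
            return "allowed"        # model is confident it's fine
        return "human_review"       # uncertain middle band goes to a person

    print(moderate("example post"))  # -> "human_review" with the placeholder score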
> But the LLM can tell meemaw to reset her dang router. If that's all you consider support to be (which is almost certainly the case if you outsource it), then you stand nothing to lose from using AI.
I worked in a call center before getting into tech when I was young. I don't have any hard statistics, but by far the majority of calls to support were basic questions or situations (like Meemaw's router) that could easily be solved with a chatbot. If not that, the requests that did require action on accounts could be handled by an LLM with some guardrails, if we can secure against prompt injection.
Companies can most likely eliminate a large chunk of customer service employees with an LLM and the customers would barely notice a difference.
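A minimal sketch of what those guardrails could look like, in Python; the allow-list, the confirmation step, and every function name here are made up for illustration rather than any vendor's actual API. The idea is that the LLM can only ever propose an action from a fixed allow-list, sensitive actions require explicit user confirmation, and everything else escalates to a human.

    # Hypothetical sketch: validate every action an LLM support agent proposes.
    # Actions the bot may perform, and whether they need explicit user confirmation.
    ALLOWED_ACTIONS = {
        "resend_invoice": {"confirm": False},
        "send_router_reset_instructions": {"confirm": False},
        "update_mailing_address": {"confirm": True},
    }

    def handle_llm_proposal(action: str, params: dict, user_confirmed: bool) -> str:
        """Nothing touches the account unless it passes these checks."""
        if action not in ALLOWED_ACTIONS:
            return escalate_to_human(action)                # unknown or risky -> human
        if ALLOWED_ACTIONS[action]["confirm"] and not user_confirmed:
            return ask_user_to_confirm(action, params)      # never act on the LLM's word alone
        return execute_action(action, params)               # allow-listed and confirmed

    def escalate_to_human(action):
        return f"Escalating '{action}' to a human agent."

    def ask_user_to_confirm(action, params):
        return f"Please confirm: {action} with {params}."

    def execute_action(action, params):
        return f"Performed '{action}'."

    print(handle_llm_proposal("update_mailing_address", {"street": "123 Main St"}, False))
    # -> asks the customer to confirm instead of acting on the LLM's proposal directly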
In a vacuum, sure. But when you take two resources of similar ability and amplify their output, it makes those resources closer in cost per output, and in turn amplifies the risk factors for choosing the cheaper by cost resource. So locality, availability, communication, culture, etc, become more important.
> AI and robotics nullify any comparative advantages low wage countries had
If we project long term, could this mean that countries with the most capital to invest in AI and robotics (like the U.S.) could take back manufacturing dominance from countries with low wages (like China)?
> could take back manufacturing dominance from countries with low wages (like China)?
The idea that China is a low-wage country should just die. It was the case 10 years ago, not anymore.
Some parts of China have higher average salaries than some Eastern European countries.
The chance of a robotic industry in the US massively moving jobs away from China purely due to a pseudo-AI revolution replacing low-paid labor (without other external factors, e.g. tariffs or sanctions) is close to 0.
Now if we speak about India and the low-skill IT jobs there, the story is completely different.
> The idea that China is a low wages country should just die. It was the case 10y ago, not anymore.
The wages for factory work in a few Eastern European countries are cheaper than Chinese wages. I suppose they don't have the access to infrastructure and supply chains the Chinese do, but that is changing quickly due to the Russian war against Ukraine.
China's dominance in manufacturing, at least in tech, is not based on cheap labor, but rather on skills, tooling, and supply chain advantages.
Tim Cook explains it better than I ever could:
But it's not like China had the skills, tooling and supply chain to begin with... and it's not like the US suddenly stopped having all those things. There are reasons manufacturing moved out of the US, and it was not "They are soooo much better at all the things over there!"
Tim Cook had a direct hand in this, knows it, and is now deflecting because it looks bad.
One of the comments on the video puts it way better than I could:
@cpaviolo : "He’s partially right, but when I began my career in the industry 30 years ago, the United States was full of highly skilled workers. I had the privilege of being mentored by individuals who had worked on the Space Shuttle program—brilliant professionals who could build anything. I’d like to remind Mr. Cook that during that time, Apple was manufacturing and selling computers made in the U.S., and doing so profitably.
Things began to change around 1996 with the rise of outsourcing. Countless shops were forced to close due to a sharp decline in business, and many of those exceptionally skilled workers had to find jobs in other industries. I remember one of my mentors, an incredibly talented tool and die maker, who ended up working as a bartender at the age of 64.
That generation of craftsmen has either retired or passed away, and the new generation hasn’t had the opportunity to learn those skills—largely because there are no longer places where such expertise is needed. On top of that, many American workers were required to train their Chinese replacements. Jobs weren’t stolen by China; they were handed over by American corporations, led by executives like Tim Cook, in pursuit of higher profits."
> it was not "They are soooo much better at all the things over there!"
Though I think we should also disabuse ourselves of the idea that this can't ever be the case.
An obvious example that comes to mind is the US' inability to do anything cheaply anymore, like build city infrastructure.
Also, once you enumerate the reasons why something is happening somewhere but not in the US, you may have just explained how they are better de facto than the US. Even if it just cashes out into bureaucracy, nimbyism, politics, lack of will, and anything else that you wouldn't consider worker skillset. Those are just nation-level skillsets and products.
Hence "had the skills" and "was not". They are not making claims about the present day, they are talking about why the shift happened in the first place and who brought it about.
Manufacturing isn’t one uniform block of the economy that is either won or lost. US manufacturers focus on high quality, high precision, and high price orders. China excels at factories that will take small orders and get something shipped.
The reason US manufacturers aren’t interested in taking small volume low cost orders is that they have more than enough high margin high quality orders to deal with. Even the small-ish machine shop out in the country near the farm fields by some of my family’s house has pivoted into precision work for a big corporation because it pays better than doing small jobs
Hard disagree. You can't just one day wake up and double your energy infrastructure.. China is way ahead.
China has more robots per capita than the US
And the idea that China has low wages is outdated. Companies like Apple don't use China for its low wages; countries like Vietnam have lower wages. China's strength lies in its manufacturing expertise.
Manufacturing expertise that has been transferred from the West over the last 40 years. Knowledge and expertise are fluid; they can go both ways and can be transferred to other countries as well: India, Vietnam, etc. The world doesn't stand still.
I don't get why I was downvoted. I didn't say anything that contradicts what you just said.
Western engineers worked relentlessly on knowledge transfer to China; it might be easy to bring back with the kind of 10x industrial subsidies the CCP provided to make it happen.
And the US is already starting to do it, for example partnering with South Korea or Japan to rebuild American shipbuilding.
Depends where you draw the line. I would expect countries like China will continue to leverage AI to extend their lead in areas like low cost manufacturing. Some of the very low cost Chinese vendors I use are already using AI tools to review submitted pieces with mixed results, but they’re only going to get better at it.
Lemme know when robots will make your sneakers and T-shirts and pick fruit from fields at a price competitive with third-world slave labor.
Yes, I agree. And it is not that AI is any good, but those outsourcing shops are most of the time not adding any value; on the contrary, it takes time to babysit them. Some of this even looks like an elaborate scam, as if someone in the organization is laundering money through these companies somehow; otherwise I don't understand how they are useful. Obviously there are some good ones, but in my experience that is not the norm.
Yeah I think this will be a noticeable trend moving forward. We've frozen backfills in our offshore subsidiaries for the same reason; the quality is nonexistent and onshore resources spend hours every day fixing what the offshore people break.
I wonder if AI automation will even lead to a recession in total software engineering revenue.
At my job, thanks to AI, we managed to rewrite one of the boxed vendor tools we were dissatisfied with as an in-house solution.
I'm sure the company we were ordering from misses the revenue. The SaaS industry is full of products whose value proposition is 'it's cheaper to buy the product from us than hire a guy who handles it in house'
What you are saying is not intuitive. Software engineers are a cost to software companies. With automation the profits would increase so I’m not sure how it can lead to recession.
They used the word in an irregular way. They meant a decline in software company revenue, not an economic recession.
You might well see more software profits if costs go down, but less revenue. Depends on Jevons paradox, really.
I've done a vibe coding hobby project where I simply give AI instructions on what I want, using a persona-based approach for the agent to generate or fix the code.
It worked out pretty well. Who knows how the software engineering landscape will change in 10 to 20 years?
I enjoyed Andrej Karpathy's talk about software in the era of AI.
As an aside, his talk isn't about using AI to write code, it's about using AI instead of code itself.
Historically, improvements in programmer productivity (e.g. via better languages, tooling and hardware) didn't correlate with a decrease in demand for programmers, but quite the opposite.
This is completely different - said as someone who has been in the industry professionally for 30 years and as a hobbyist before then for a decade.
There are projects I lead now where I would previously have needed at least one or maybe two junior devs to do the grunt work after I had very carefully specified requirements (which I would have to do anyway) and diagrams; now ChatGPT can do that work for me.
That’s never been the case before and I’ve personally gone from programming in assembly, to C, to higher level languages and on the hardware side, personally managing the build out of a data center that had an entire room dedicated to a SAN with a whopping 3TB of storage to being able to do the same with a yaml/HCL file.
Makes sense; current LLMs seem to be at a similar level considering quality and supervision.
Original title "AI is already displacing these jobs" tweaked using context from first paragraph to be less clickbaity.
"You'll never guess which jobs AI is about to replace!"
haha
That has long been my personal theory as well, though I never had a way of firmly backing it up with evidence, and this article hardly does that either.
But it does make sense on a superficial level at least: why pay a six-pack of nobodies half-way 'round the world to.. use AI tools on your behalf? Just hire a mid/senior developer locally and have them do it.
The Indian IT sector is almost certainly going to be decimated (at least in its current form), and we haven’t really wrapped our heads around what that means for the world’s fourth-largest economy.
Indian IT sector in nutshell : https://youtu.be/LWfFD4DiUiA?si=VCpWVx_cKKCqCBxH&t=496
What this person has started is being done by WITCH companies at the largest scale and in the most fraudulent way possible.
I can see this. We employ a lot of off shore for what we call support engineering. Things like jdk upgrades or cert updates. It’s grunt work that lets higher paid engineers utilize their time on business value work. As AI continues to grow in scope, it will surely commandeer much of this. Employing a human is more expensive when compared to compute for these tasks at a certain scale.
Can I ask how the offshore team manages to deploy these changes? What skills are required for such a role in support engineering?
“AI reshoring” is what I call it. Makes perfect sense.
I have hired “outsourced, offshore workers”. Anyone who has similar experience knows the challenges of finding quality talent. Generally, you don’t know what you’re going to get until you’ve already written the first check. Sometimes it is good quality, but lots of the time it is “acceptable” quality or poor quality that needs cleaning (or re-hiring), and 5% of the time it is absolute garbage. Since the costs are typically 1/10th to 1/20th that of US engineer, you can afford to make a few mistakes. However, I can see a future where I can hire local (US) and outsource to AI with oversight within the same budget.
So, aside from trust the biggest barrier is lack of adaptability?
In hindsight, remote working is an obvious stepping stone to offshoring, which itself is an inevitable milestone toward full automation. It is the work we do in in-person collaboration which will keep the moat high against AI disintermediation.
Doubt. The meaningful work in person is organisational, and it's only marginally better onsite due to whiteboard > Excalidraw. Who does what, how it'll all interact, architecture, etc. If an LLM can code the difficult bits and doesn't fall apart once the project isn't a brand new proof of concept, it'll surely be able to pick the correct pattern and tooling and/or power through the mediocre/bad decision.
> The meaningful work in person is organisational
And also gaining information about the domain from the business and the business requirements for the system or feature.
The invisible hand will provide the answer!
Have you used a whiteboard recently? It sucks. Writing anything significant takes forever, there’s no undo or redo, difficult to save and version. There’s just no way it’s better.
It takes forever to make a beautiful diagram, but the usual flow is that you have your presentation for the base idea, and then when the questions come, you can all grab a marker and start making a mess on the boards around the room. We also have one in the dev room, which is nice for smaller topics.
It's not meant to be the actual documentation, and it makes sense to me since you don't want to write the actual documentation during the discussion with multiple highly paid devs and managers. Just take a photo at the end, and it's saved for when you make the documentation.
I do the same with Lucid App shared on Zoom. I have the base diagram in Lucid and I start making changes during the meeting and adding sticky notes docs.
> It's not meant to be the actual documentation, and it makes sense to me since you don't want to write the actual documentation during the discussion with multiple highly paid devs and managers. Just take a photo at the end, and it's saved for when you make the documentation.
This is 2025, over Zoom, we use Gong, it records, transcribes and summarizes the action items and key discussion points. No need to take notes.
My diagrams are already in Lucid with notes
That just sounds like office theatre.
We have some of that, but it's not the whiteboards. The dev one gets used multiple times a day in a room with only developers. No management, no power structure around.
It's my general experience, also in prior workplaces, that sometimes a little drawing can tell a lot, and there's no quicker way to start it than to walk 3 meters and grab a marker. Same for getting attention towards a particular part of the board. On Excalidraw, it's difficult to coordinate people dynamically. On a whiteboard, people just point to the parts they're talking about while talking instinctively, so you don't get person A arguing with person B about Y while B thinks they are talking about D which is pretty close to Y as a topic.
> remote working is an obvious stepping stone to offshoring
This I largely agree with. If your tech job can be done from Bozeman instead of the Bay Area there's a decent chance it can be done from Bangalore.
> which itself is an inevitable milestone toward full automation
But IMHO this doesn't follow at all. Plenty of factory work (e.g. sewing) was offshored decades ago but is still done by humans (in Bangladesh or wherever) rather than robots. I don't see why the fact that a job can move from the Bay Area to Bozeman to Bangalore inherently means it can be replaced with AI.
While I agree with remote work leading to offshoring, I'm not sure about the next step.
I would have been hard pressed to find decent-paying remote work as a fully hands-on-keyboard developer. My one competitive advantage is that I am in the US, can fly out to a customer's site and talk to people who control budgets, and I'm a better than average English communicator.
In-person collaboration, though, is overrated. I've led mid six figure cross-organization implementations for the last five years sitting at my desk at home with no pants on, using Zoom, a shared Lucid App document and shared Google Docs.
I'm skeptical it's even replacing those.
Pretend for a moment that capital investors do any work. Can AI replace that work?
I'm sorry Dave. I can't answer that.
The investment bankers I know worked harder than any software developer to get where they are.
They do a lot of work but a lot of market research can be automated too.
Yep! The low value outsourcing firms like Indian WITCH companies have been heavily leveraging LLMs and laying off employees as a result.
High value product work remains safe from AI automation for now, but it was also safe from offshoring so long as domestic capacity existed.
Source: https://nanda.media.mit.edu/ai_report_2025.pdf (https://news.ycombinator.com/item?id=44941374)
Or err, since that's been taken down: https://web.archive.org/web/20250818145714/https://nanda.med...
This makes a ton of sense. The interaction is similar: write specs, give orders, wait, review and fix results.
Good riddance.
For now!
AI is going to force the issue of having to deal with the inequity in our economic system. And my belief is that this confrontation will be violent and many people are going to die.
The fundamental issue is wealth inequality. The ultimate forms of wealth redistribution are war and revolution. I personally believe we are already beyond the point where electoral politics can solve this issue and a violent resolution is inevitable.
The issue is that there are a handful of people who are incredibly wealthy and are only getting wealthier. The majority of the population is struggling to survive and only getting poorer.
AI and automation will be used to further displace working people to eke out a tiny percentage increase in profits, which will further this inequality as people can no longer afford to live. Plus those still working will have their wages suppressed.
Offshored work originally displaced local workers and created a bunch of problems. AI and automation is a rising tide at this point. Many in tech considered themselves immune to such trends, being highly technical and educated professionals. Those people are in for a very rude shock, and it'll happen sooner than they think.
Our politics is divided by those who want to blame marginalized groups (eg immigrants, trans people, "woke" liberals) for declining material conditions (and thus we get Brownshirts and concentration camps) and the other side who wants to defend the neoliberal status quo in the name of institutional norms.
It's about economics, material conditions and, dare I say it, the workers relationship to the means of production.
No war but class war.
Not sure how long it will take for a critical mass to realize that we are in a class war, and that placing the blame on anything else won't solve the problem.
IOW, I agree with you, I also think we are beyond the point where electoral politics can solve it - we have full regulatory capture by the wealthy now. When governments can force striking workers back to work, workers have zero power.
What I wonder, though, is why the wealthy allow this to persist. What's the end game here? When no one can afford to live, who's buying products and services? There'll be nothing to keep the economy going. The wealthy can end it at any time, so what is the real goal? To be the only ones left on earth?
You write as though "the wealthy" are a unified group acting in concert. They're not; they're just like everyone else in that regard, acting in their own, mostly short to medium term best interest. Seems like a pretty ordinary tragedy of the commons type of situation.
It's greed and short-term thinking. We shouldn't be surprised by this because we see companies do it all the time. How many times have you thought an employer or some company in the news is operating on a time horizon no further than the next quarterly results?
To be ultra-wealthy requires you to be a sociopath, to believe the bullshit that you deserve to be wealthy, that it's because of how good you are and, more importantly, that any poverty is a personal moral failure.
You see this manifest in the popularity of transhumanism in tech circles. And transhumanism is nothing more than eugenics. Extend this further and you believe that a future war and revolution in which many people die is actually good because it'll separate the wheat from the chaff, so to speak.
On top of all that, in a world of mobile capital, the ultra-wealthy ultimately believe they can escape the consequences of all this. Switzerland, a Pacific island, space, or, you know, Mars.
The neofeudalistic future the ultra-wealthy desire will be one where they are protected from the consequences of their actions on massive private estates where a handful of people service their needs. Working people will own nothing and live in worker housing. If a few billion of them have to die, so be it.
> personally believe we are already beyond the point where electoral politics can solve this issue and a violent resolution is inevitable.
I do more or less think this too, but it could be 4 years or 40 before people get mad enough. And to be honest, the tech gap between civilian violence and state-sponsored violence has never been wider. Or in other words, civilians don't have Reaper drones, etc.
I agree on time frames. This system can limp on for decades yet. Or fall apart in 5 years (though probably not).
As for the tech gap, I disagree.
The history of post-WW2 warfare is that asymmetric warfare has been profoundly successful, to the point where the US hasn't won a single war (except, arguably, Grenada, if that counts, which it does not) since 1945. And that's a country that spends more on defence than something like the next 23 countries combined (IIRC).
Obviously war isn't exactly the same thing, but it's honestly not that different from suppressing violent dissent. The difficulty (since 1945) hasn't been defeating an opposing military on the battlefield. The true cost is occupying territory after the fact. And that is basically the same thing.
Ordinary people may not have Reaper drones, but as we've seen in Ukraine, consumer drones are still capable of dropping a hand grenade.
Suppressing an insurrection or revolt is unbelievably expensive in terms of manpower, equipment and political will. It is absolutely untenable in the long term.
Whether or not you are correct about the concrete details here, it is laughable for regular people[1] to bicker about whose job will be replaced first when the people who profit from that are just sitting on their ass, ready to get labor for nothing instead of relatively little.
[1] Although I wouldn’t be surprised if some of the people who argue about this topic online are already independently wealthy
H1Bs are replacing workers.