• Hansenq 2 hours ago

    If you know anything about tech, you will know that tech as an industry is highly deflationary--billionaires use the same iPhones as you do! (In contrast, they don't drive the same cars you do.)

    This boils down to the fact that chip fabs have massive fixed costs and near-zero marginal costs, and these chips power all of tech. So the more chips they can produce for a given fab, the more profit they can make, meaning that companies are incentivized to sell as many products as possible for as low a price as possible.

    We're supply constrained in the short-term because demand for these AI tools is so high that TSMC and other chip manufacturers can't keep up. But long term, supply/demand will equalize and tech will continue its deflationary trend. Sure, the frontier will always require the best possible chips, but AI coding is highly competitive, and competition drives price decreases. So prices may stay high right now, but it seems unlikely to me that this will stay true long-term.

    All four of the author's steelmanned arguments at the end for a price decrease already seem likely to come true: competition is intense (OAI brags about how much cheaper they are compared to Claude), OAI already subsidizes open-source influencers, companies' earnings calls all call for more investment in fabs, and we're already close to saturating all of the benchmarks used for RL!

    • jsheard 2 hours ago

      > billionaires use the same iPhones as you do!

      Not if they have the brain disease which makes this kind of thing appealing:

      https://caviar.global/catalog/custom-iphone/iphone-17/?sort=...

      Yes that flagship model incorporates an actual Rolex Daytona in solid gold.

      • necovek an hour ago

        It is still exactly the same iPhone tech-wise, just with a custom "case".

        I wouldn't go as far as to call it "brain disease" though: in a sense, it is OK for someone well off to spend on expensive products (made by the less rich), so things would equalize at least a bit.

        Just like we in IT might happily spend 3% of our salary on slightly better shoes, and someone else would claim we have a "brain disease" because you can get perfectly good shoes for 5x less money.

        • relaxing an hour ago

          That iPhone is not 5x more.

          Furthermore, who in IT is paying 3% on shoes? Even if you’re the hypebeast buying $1200 Balenciagas, I don’t see how the math works out.

          • necovek an hour ago

            That iPhone is 15x more (I don't know the exact price, sorry)? Same order of magnitude.

            3% of your monthly salary for $200-$500 shoes (I see plenty of amateur runners getting carbon-sole shoes in this price range, for instance), when you could get a pair for less than $50.

        • wffurr an hour ago

          That's got the exact same processing hardware in it though, which was the OP's point. Not that they can't have a fancier case.

          • A_D_E_P_T 2 hours ago
            • phpnode 2 hours ago

              Both of these are great, true expressions of a total void of taste

            • stronglikedan 2 hours ago

              I guess billionaires don't charge wirelessly. They just grab their backup custom iphone when the first one dies.

              • undefined an hour ago
                [deleted]
                • necovek an hour ago

                  They've got someone carrying their battery packs with an extendable cable.

                  Or maybe Big Ben hides the charging coils too.

              • floatrock 2 hours ago

                > This boils down to the fact that chip fabs have massive fixed costs and near-zero marginal costs, and these chips power all of tech.

                But what powers the chips?

                You're talking about chip economics. Inference economics also depends on electricity economics.

                • runarberg 2 hours ago

                  > competition drives price decreases

                  This is something that is often cited as a truism, but there is no natural law that makes it necessarily true. There is plenty of room for black swans in market laws. So much so that the term "black swan" is probably better known in economics than in any other field.

                  Competition may drive down the price of LLMs; however, there is a greater-than-zero probability that it won’t, and if it won’t, your whole counter-argument falls apart.

                  • beachy an hour ago

                    I can't think of any high volume/consumer electronics/computer technology that has not been driven down in price over time. So based on historical precedent, I think your "greater than zero probability" might be only a tiny bit greater than zero.

                    • runarberg an hour ago

                      xkcd did one about graphic calculators

                      https://xkcd.com/768/

                      But others that come to mind are MRI scanners, superconductors, and quantum computers.

                      I think in general this market law is subject to selection bias. The technology which does decrease in price will become commonplace and easy to find, whereas the technology which doesn’t risks becoming obscure and maybe even removed from consumer markets.

                      EDIT: just to clarify, the point about black swans is that people always assign a close-to-0 probability to their existence, until we actually observe one; then the probability is suddenly exactly 1. If LLMs are a black swan for this market law, most people will assign a close-to-0 probability ... until they don’t.

                • 827a 2 hours ago

                  > The top tier subscription prices are increasing exponentially

                  WILD graph that misrepresents what is happening.

                  There's a bunch of $20 subscriptions, and a bunch of $200 subscriptions. Devin has a $500 subscription. That's it.

                  The cost per unit of intelligence has been dropping every month. The cost per "completed task" has also been dropping. There is no sign of this reversing course. Graphing the price of a subscription, without taking into account what that subscription is getting you, is poor authorship.

                  • MattDaEskimo 2 hours ago

                    What's also wild is this being the first comment to mention it!

                    Although there is an underlying truth: using LLMs for large-context tasks like coding is still extremely expensive.

                    • croes 2 hours ago

                      > The cost per "completed task" has also been dropping. There is no sign of this reversing course.

                        Didn’t happen for me.

                        On the Plus plan, newer models reached the limit faster, so fewer tasks were done before I had to wait 5 hours.

                      • NewsaHackO an hour ago

                        I mean when it equated the $10 a month Copilot to Claude Max, I stopped taking the article seriously.

                      • overgard 2 hours ago

                        I saw a quote today that resonated with me:

                          "The underlying purpose of AI is to allow wealth to access skill while removing from the skilled the ability to access wealth." --@jeffowski

                        While I don't think that's the only purpose, I can't help but think that people that become dependent on these tools will have neither wealth nor skill. Keep your skills sharp!

                        • floatrock 2 hours ago

                          That just sounds like "controlling the means of production" with more clever wordplay.

                          • simianwords 2 hours ago

                            The same thing can be said about a personal computer.

                          • iambateman 2 hours ago

                            I think Warhol’s quote is nostalgic but incomplete.

                            I’m priced out of the best cars, best houses, best home theater systems, best schools. Even someone making $300k/year can’t afford all of the best of everything.

                            Sure, the iPhone has been “the best” possible phone which was also used by nearly everyone, but I think that’s an anomaly even in the short run.

                            Right now I’m paying $200/mo for Claude code to do an amount of work I would’ve had to pay $10,000/mo for. Of course I’m expecting those numbers to get closer to each other.

                            No VC-funded gravy train lasts forever.

                            • orthogonal_cube 2 hours ago

                              It’s a common tactic. Shock an industry with a new product and advertise it as being very affordable. Once you get a solid consumer base with enough organizations that have rebuilt their operations around it, slowly increase the cost and find more ways to produce revenue.

                              • skybrian 2 hours ago

                                It all depends. Yes, something like that happened with Uber, but computers and consumer electronics have Moore's law working for them, so prices usually go down. (With occasional shortages like we see now with RAM - not for the first time, but it's usually temporary.)

                                My guess is that AI will be more like consumer electronics than like Uber.

                                • orthogonal_cube an hour ago

                                  I agree that consumer goods normally get cheaper over time. Software that becomes commercialized, or sees a surge in enterprise demand, tends to go the other way. Splunk, Elasticsearch, and Slack for example.

                              • whynotmaybe 2 hours ago

                                Why do you expect the price to get closer?

                                You can get a table from Ikea that costs a fraction of what an artisan makes. They're not the same final product, but their function is the same.

                                • hahn-kev 2 hours ago

                                  Either AI gets more expensive, or the 10k outsourcing gets cheaper.

                              • elashri 2 hours ago

                                > OpenAI reportedly discussed charging $20k/month on PhD-level research agents with investors.

                                At this price point, it will be cheaper to hire a bunch of actual PhDs, the vast majority of whom will not earn anything close to $250k per year in most of the world.

                                • ottah an hour ago

                                  I also seriously question what "PhD-level" even means in the context of a model. Someone with a PhD has developed very deep but narrow knowledge of a particular domain and has contributed to pushing out our sphere of knowledge at least a tiny bit in that pillar of competency. A model is at best a brittle, fractured, and often inconsistent representation of written human knowledge, and lacks most basic intuitive grounding in the world due to the lack of embodiment.

                                  In my experience, to safely get any value out of an LLM, you have to be more knowledgeable than the LLM on a topic. So in this case, you'd really need a PhD to use this tool, so at best it's a $20k-a-month research aid, which honestly is far more expensive than a handful of grad students, and probably less effective.

                                • pram 2 hours ago

                                  From my recent experience with Qwen 3.5 I am less concerned about this. It certainly will never be “the best” but I did some TS refactoring with Qwen + Opencode over the weekend and it was surprisingly good. I even asked Opus 4.6 to grade the commits and it usually gave it a B- haha..

                                  Anyway, it might be worth it to invest in an LLM rig today if you’re paranoid.

                                  • reenorap 2 hours ago

                                      I used Qwen 3.5 for image descriptions and I was shocked at how great it was. Open-source models may be very useful now; one year ago they were really bad.

                                  • biddit 2 hours ago

                                    Strongly disagree with the thesis.

                                    Everything points to commoditization of models. Open/distilled models lag behind frontier only by 6-12 months.

                                    Regulatory capture is the only thing I’m scared of with regards to tooling options and cost.

                                    • supern0va 2 hours ago

                                      >Everything points to commoditization of models. Open/distilled models lag behind frontier only by 6-12 months.

                                      Yes, but every high performing open weights model coming out of China has (supposedly) been caught distilling frontier models.

                                      It seems like a lot of people are making assumptions about the state of the open weights ecosystem based on information that may not be accurate. And if the big labs are able to reliably block distillation, we could see divergence between the two groups in terms of performance.

                                      • dragonwriter 2 hours ago

                                        > And if the big labs are able to reliably block distillation,

                                        The big labs will not be able to reliably block distillation without further inhibiting general use of the models, which itself will help tip the balance away from commercial models.

                                        • reenorap 2 hours ago

                                            No, you're wrong. It won't tip it away from commercial models. Trying to run open weight models to do inference is something 99% of people around the world can't do, because it's expensive and technically challenging, and the results are poor compared to the main companies. If they get rid of free usage, people will simply pay for it.

                                          • dragonwriter 2 hours ago

                                              > Trying to run open weight models to do inference is something 99% of people around the world can't do because it's expensive and technically challenging and the results are poor compared to the main companies.

                                              Just because a model is open doesn't mean that there aren't services that will run it for you (and which won't share any limits that the commercial model vendors impose to fight distillation, because neither the host nor the model creator cares if you are using the service to distill the model).

                                            Many users of, particularly the larger, open models now are using such services, not running them using their own local or cloud compute.

                                    • firefoxd 2 hours ago

                                      I used to take Uber to work daily in 2016. It cost around 3-4 dollars per 5-mile ride. Now the same ride costs $24 [0]. There's no indication that AI coding tools won't follow the same path, given they are funded by VC.

                                      But I think what matters is that the new generation of coders will adopt it as the norm. Gone are the days when you download a free text editor and just trial-and-error with the documentation one tab away. Every bootcamp is teaching React with Claude and Cursor. You have to pay for a subscription to build your BMI calculator.

                                      [0]: https://idiallo.com/blog/paying-for-my-8-years-old-ride

                                      • sparkler123 an hour ago

                                        I was continually hitting quota on the $20/mo Claude sub. I started doing the "pay for extra tokens" thing when I did hit it, but just upgrading proved to be far more cost effective. I upgraded to the $200/mo Max subscription and have been using almost exclusively Opus and barely get to 25% quota in any session (a couple times I got over 50% but I was having it go wild in concurrent sessions). I could probably downgrade to the $100/mo one and be fine, though.

                                        Sounds like a lot, but in the few weeks I've had it, I was able to complete two projects I had given up on due to not having time in the past. I re-jiggered some other monthly subscriptions so the net cost wasn't ultimately that much more than what I was paying previously. I also weighed it against buying something like a DGX Spark for local inference, but ultimately I don't want to mess with serving models (and the ones available just aren't as good, realistically), I just want a good one that works.

                                        I probably can't justify much more than $200/mo, but for what I get out of it, I'm happy to pay it. I've done more in the past few weeks on side projects than I had in a couple years.

                                        • mackeye 2 hours ago

                                          i don't entirely disagree, but

                                          > the cheapest usable tier of Claude Code is $100/mo

                                          is, imo, false. cc pro, $20 per month, gets you a lot of sonnet usage, and code review with opus (which i find very valuable, even as someone who tries to use ai little). i guess it depends how you use ai, but if you use it to plan, debug, and review, rather than having it write code, i think pro is pretty comfortable.

                                          to add, i've seen people say these subscriptions will get far more expensive, as they're offered at a loss. but, it seems far more likely that free tiers will be degraded or disappear, as (especially for openai?) the relative number of subscribers to free users is very small, so the latter probably dominates compute time greatly. anthropic probably has a higher relative number of people who pay for claude code (and use it to its fullest), so this is probably less true. i can see pro getting less usage, and max increasing in cost.

                                          • madrox 2 hours ago

                                            I've been thinking about this as well, and I'm glad the author is talking about it. However, I don't think he took it far enough.

                                            It is correct to say there's near-infinite demand for AI, and supply is limited. It stands to reason that wealthier people will pay more, and therefore get more, out of AI.

                                             However, this has always been true, but historically instead of AI it's been workers. The economics of labor haven't changed. So it will, as always, be a game of how you deploy the workers you hire. Are you generating useless morning briefs, or are you actually generating value for yourself and others with the AI you buy? If you generate more value than the tokens you burn, you'll get ahead.

                                            This will be true in academia as well, the area of interest to the author. He writes like, before AI, grad student level intelligence came for free.

                                            Ok, wait, sorry, bad example...

                                            • AstroBen 2 hours ago

                                              The one saving grace I see here is that open models are getting really good, and they're already profitable at an affordable price.

                                              So maybe it's true you won't get "the best", but I don't think you'll be that far off.

                                              • sharkjacobs 2 hours ago

                                                There's this weird race: I have in my head some level of LLM performance that is "good enough", and the open models keep improving to that level, but by the time they do, my "good enough" has acclimatized to what I'm used to doing with the latest frontier models, and the open models aren't good enough anymore.

                                                The "good enough" points so far have been

                                                - "as good as ChatGPT"

                                                - "as good as GPT4"

                                                - "as good as Sonnet 3.5"

                                                - "as good as Opus 4.5 or Codex 5.2"

                                                Anyway, we'll see where the chinese models are in a year, and we'll see where my expectations are. Hopefully they overlap at some point.

                                              • reenorap 2 hours ago

                                                I predicted months ago that $20/month isn't going to fly anymore. I think if it produces code, it will jump to $1000/month at least. The value of the LLM nowadays is much higher than $1000/month and I think we will see that happen in 2026, because these companies need to make money ASAP in order to get more funding for the next round of training.

                                                • barrkel an hour ago

                                                    Demand (value created) isn't enough to make prices rise; you also need supply to be constrained. If there were only one competent coding model, I'd be worried. But between competition and open-weight models, we're not looking supply-constrained any time soon.

                                                • Sevii 2 hours ago

                                                    AI providers can only charge what the market can bear. AI isn't worth $20k/month for "PhD"-level work. But people are willing to pay for several $200/month subscriptions.

                                                    But fundamentally, AI compute is a commodity. GPUs are made in factories at scale. Assuming AI quality tapers off, eventually supply will catch up to demand.

                                                  Finally open weights models are good enough that the leading labs cannot charge high margins.

                                                  • stephc_int13 an hour ago

                                                    This is very early.

                                                      If there is an AI boom, what we're seeing is its infancy. Semi-autonomous coding is the first and most natural use case, thanks to the vast amount of training material, the opportunities for closed-loop RL with minimal human supervision, and the eagerness of the community to try and embrace new tools.

                                                      But it is still not much more than a QoL improvement at this stage, and maybe some velocity gains for the most hardcore users willing to spend time and money to stay at the bleeding edge.

                                                    But there is also a rather large appetite for local models, I am not sure the future of AI will be 100% cloud based.

                                                    • bambax 2 hours ago

                                                      It's possible that prices will go up (although the cost of pure inference tends to go down; the question is more how to amortize the cost of training new models).

                                                      But this is absurd:

                                                      > the cheapest usable tier of Claude Code is $100/mo

                                                      If you pay by the token instead of with a subscription, and don't send the entirety of your code base with each request, costs are ridiculously low. Like, $50 will last a minimum of 3 months of heavy use on openrouter.

                                                      It's also far from certain everyone needs the latest version of the best "frontier model"; it very much depends on what you do.
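                                                      A rough back-of-envelope sketch of that claim (the per-token rates below are illustrative assumptions, not actual OpenRouter pricing, and "session" sizes are made up):

```python
# Back-of-envelope: how far $50 of pay-per-token credit goes.
# Rates are illustrative assumptions, not actual OpenRouter pricing.
INPUT_PER_M = 3.00    # dollars per million input tokens (assumed)
OUTPUT_PER_M = 15.00  # dollars per million output tokens (assumed)

def session_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one coding session at the assumed rates."""
    return (input_tokens / 1e6) * INPUT_PER_M + (output_tokens / 1e6) * OUTPUT_PER_M

# A lean session: small prompts, not resending the whole codebase each turn.
per_session = session_cost(input_tokens=100_000, output_tokens=20_000)
print(f"~${per_session:.2f} per session, ~{50 / per_session:.0f} sessions per $50")
```

                                                      At those assumed rates a lean session costs well under a dollar, so $50 covers dozens of sessions (roughly one a day for three months); resending a large codebase as context on every request is what blows the budget.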

                                                      • barrkel an hour ago

                                                        $50 will last you a good long time if you don't use many tokens, and you are judicious about which model you use, and don't need web search much.

                                                        However, on a fixed price plan your behavior changes. It's a qualitative change in how you work, rather than quantitative. Ideation and product design and specification start becoming bottlenecks.

                                                        I started out on the API route. I switched to spending $100 a month once I was spending upwards of $10 in tokens per session.

                                                        • cactusplant7374 an hour ago

                                                          I am imagining working for a company in the future where prompt reviews are required because the company is cheap.

                                                        • samiv an hour ago

                                                          I expect that in the "post-scarcity" world, where the capital class doesn't need the majority of human labor for anything, most people will be priced out of everything, including the basic necessities.

                                                          But sure, let's just keep automating ourselves out of jobs (and helping other industries do it too) with no plan for how to help all the displaced people.

                                                          • viblo 2 hours ago

                                                            Regardless of whether the exponential trend the author writes about is correct, I do think the cost of AI is reversing the trend for coding. For quite some time our tools have become cheaper and more available than before. Nowadays compilers, IDEs, and other tools are increasingly open source, or at least free or very cheap. But with AI, that's no longer the case.

                                                            I wrote more about this in a blog, at https://www.viblo.se/posts/ai-hobbycoding/

                                                            • EliRivers 2 hours ago

                                                              "There was a time when everyone used Github Copilot."

                                                              There was no such time, even if "everyone" means "every software engineer" or any variation thereof, and even if we substitute any other such tool for GitHub Copilot.

                                                              • armchairhacker 2 hours ago

                                                                We’re already priced out of the best coding tools: human domain experts (https://news.ycombinator.com/item?id=47234325)

                                                                Unless you’re a top-tier domain expert. Then you’re safe until (if…) ASI.

                                                                • 9cb14c1ec0 2 hours ago

                                                                  I don't agree. There are a lot of inference performance improvements to make. I think the cost of inference continues to fall, and pretty much every application of AI becomes a commodity with brutal competition.

                                                                  • dimgl 2 hours ago

                                                                    The only way this happens is if models that are specifically made to do certain kinds of coding start to exist. Then this would start to become an issue, yes, until those models are distilled into smaller models.

                                                                    • ekjhgkejhgk 2 hours ago

                                                                      None of this matters. Open weights will continue to be released. I don't need to have the absolute greatest LLM. I run Qwen3 locally and that's not even the best Qwen.

                                                                      • relaxing an hour ago

                                                                        How’s that going for you? Honestly, I’d like to read a review.

                                                                      • skybrian 2 hours ago

                                                                        I've already switched to Sonnet 2.6 by default. It seems okay for the coding I do (working on a personal website) and it's 40% cheaper.

                                                                        Businesses will pay more since they can justify the cost. That seems fine?

                                                                        • jatari 2 hours ago

                                                                          You get plenty from the $20 a month claude subscription. Just don't expect to leave it running on its own for hours a day.

                                                                          • profstasiak 2 hours ago

                                                                            Idiots will pay for AI to kill their skills.

                                                                            • raincole 2 hours ago

                                                                              > The top tier subscription prices are increasing exponentially

                                                                              "Let's just make random shit up and expand it into a whole blog post."

                                                                              Seriously does anyone believe this premise? The Claude Max ($200/mo) is the same kind of product as Github Copilot ($10/mo) so the price 20x-ed?

                                                                              • cactusplant7374 an hour ago

                                                                                OpenAI doubled Codex limits until April. If there is an issue with their platform they reset the limits early. This happened many times in December. They also added the 5.3 Spark model that has its own limit!

                                                                                The author doesn't even mention Codex, even though it will likely outcompete Claude Code.

                                                                              • TIPSIO 2 hours ago

                                                                                An even worse day is probably coming:

                                                                                Imagine if a model ever does get scary good: would the big labs even release it for general use? You couldn't buy it even if you wanted to. Exceptions would be enterprise deals, e.g. niche super-contracts with the likes of $AMZN.

                                                                                • eatsyourtacos 2 hours ago

                                                                                  Very true.. also I would say even what I get out of claude code is absolutely phenomenal right now.. but sometimes it does take minutes. I just had it take 15 minutes to do something. But what if you had access to the hardware to run it basically instantly?

                                                                                  Just think how these big companies will use that kind of power for themselves to get even more extreme uses out of it.

                                                                                • ottah an hour ago

                                                                                    This chart is missing all of the non-US competition, which, while not at the top, is always right behind the flagship models. These competitors also have much lower inference costs, due to model architectures built with a focus on efficiency. Silicon Valley is building big-block Chevys, while China is making Datsuns.

                                                                                  • Simulacra 2 hours ago

                                                                                    "OpenAI reportedly discussed charging $20k/month on PhD-level research agents with investors."

                                                                                    I've been wondering about this, that there might be a day when certain models are sold at a much higher price, like luxury cars, and only people who are willing to pay a lot of money get them. Everyone else has to settle for a cheaper LLM.

                                                                                    • pluc 2 hours ago

                                                                                      Or ad-supported LLMs where you can't guarantee the answer isn't sponsored.

                                                                                      • Leynos 2 hours ago

                                                                                        Why doesn't the same caveat apply to a paid account?

                                                                                        I mean, you have to declare when content is an advert, and if you are asserting that the owners of the chatbot are going to just ignore that requirement, won't they just do the same thing for paid accounts?

                                                                                        • pluc an hour ago

                                                                                          I would assume they would until the profit of subscriptions surpasses the profit of anything else they can monetize. Why would they leave money on the table?

                                                                                          • Leynos 12 minutes ago

                                                                                            So "ad-supported" is redundant in your comment since you believe it applies to the paid accounts too?

                                                                                    • deadbabe 2 hours ago

                                                                                      It’s worse than this…

                                                                                      Companies have built entire systems of such complexity and slop that they require AI just to do the maintenance. They have fired engineers, thinking they can just replace them with AI.

                                                                                      Well, when prices rise, they have no choice but to stay locked in, paying whatever it takes just to keep their companies running. If they stop using AI, their workforce suddenly does not have the capacity to do the work required, because of the layoffs. And there are not enough people to hire, because people are quickly turning away from software engineering as a career. What a disaster it will be.

                                                                                      • allthetime 2 hours ago

                                                                                        Or, they have to hire people who still know what they're doing at a premium.

                                                                                        Be those people.

                                                                                      • llm_nerd 2 hours ago

                                                                                        I agree with the core assumption (to a point -- there is a point of diminishing returns where pretty excellent tools are cheap), but this line is ridiculous:

                                                                                        "the cheapest usable tier of Claude Code is $100/mo"

                                                                                        Bullshit. I have the $20 plan and seldom hit the quota. I used to hit the distinct Opus quota, but now that isn't separate I just don't anymore. Even enabled the extra quota charging and have never paid a penny more.

                                                                                        And to be clear, to most people I'm a pretty heavy user. Like, practically it has a heavy influence on my day to day work, and is an amazing contributor to my functions.

                                                                                        The people who think only the $100+ tier is "usable" are often (albeit not always) the people doing worthless but "forward-thinking" nonsense, throwing millions of tokens at aspirational projects. The OpenClaw stuff is 99.99% worthless filler, where people chased a productivity hack that in reality is just hobbyist silliness.

                                                                                        It's token shredding for almost no value, done so people can show they're with it. The people gloating about swarms of agents doing effectively nothing are another breed of "I need to max out everything" user. These are the ones who produce the result that AI has no benefit to productivity, because they overdo everything to such ridiculous extremes.

                                                                                        The same for the laughably poorly thought out MCP servers that flood a service with a quarter million tokens for negligible value. So much insanely poorly considered nonsense is in use, to the great glee of the AI companies. And, I mean, I guess I should thank these people for basically subsidizing it for the rest of us.

                                                                                        The rest of us are surgically applying AI precisely, to incredible effect. The cheap plans are ridiculously valuable.

                                                                                        • yieldcrv 2 hours ago

                                                                                          traders use Bloomberg Terminals at $30,000/yr

                                                                                          they don't strictly have to, aside from the industry going in that direction and the stickiness of communication through it, but it simplifies some of their job

                                                                                          it didn't revolutionize trading or make it more democratized, despite simplifying some aspects of the industry

                                                                                          the technology could have but it remains a specialized tool

                                                                                          that's the way I see agentic coding tools, and the trend is following it

                                                                                          once the UX designers, PMs and ideas guys get bored of their newfound SaaS slop capabilities, it will be back to specialists doing this and nobody else

                                                                                          • Sarahcot 14 minutes ago

                                                                                            [dead]

                                                                                            • aplomb1026 2 hours ago

                                                                                              [dead]