• FrasiertheLion 17 hours ago

    AI has normalized single 9's of availability, even for non-AI companies such as GitHub that have had to rapidly adapt to AI-aided scaleups in usage patterns. Understandably so: GPU capacity is pre-allocated months to years in advance, in large discrete chunks earmarked for either inference or training, with a modest buffer that exists mainly so you can cannibalize experimental research jobs during spikes. It's just not financially viable to keep reserve capacity in spades, especially these days, when supply chains are already under great strain and we're starting to be bottlenecked on chip production. And if they got around it by serving a quantized or otherwise ablated model (a common strategy in some instances), all the new users would be disappointed and it would damage trust.

    Fewer 9's are a reasonable tradeoff for the ability to ship AI to everyone, I suppose. That's one way to prove the technology isn't reliable enough to be shipped into autonomous kill chains just yet lol.

    • direwolf20 16 hours ago

      That's supposing the autonomous kill chain needs more than one 9. There are wars going on right now with less than 20% targeting accuracy.

      • mrbombastic 8 hours ago

        Are we going to do the same "everything is binary" engineer thing with bombs and innocent casualties that we did with self-driving? There is also an accountability crisis that will unfold if we loose these things on the world. It is not just "one metric is better than human operators, therefore take your hands off the wheel and hope for the best". Please file a ticket with support if your child's school was accidentally destroyed.

      • TacticalCoder 14 hours ago

        > AI has normalized single 9's of availability, ...

        FWIW I use AI daily to help me code...

        And apparently the output of LLMs is normalizing single 9's too: which may or may not be sufficient.

        Between all the security SNAFUs, the performance issues, the gigantic amount of kitchen-sinky boilerplate generated (which will require maintenance, and that has always been the killer), and now the uptime issues, this makes me realize we all need to use more of our brains, not less, to use these AI tools. And that's not even counting the times when the generated code simply doesn't do what it should.

        For a start, if you don't know jack shit about infra, it looks like you're already in for a whole world of hurt: when that agent rm -rf's your entire Git repo and FUBARs your OS because you had no idea how to compartmentalize it, you'll feel bad. Same once all your secrets end up publicly exposed.
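        To make "compartmentalize" concrete: the simplest version is to never let the agent touch the real checkout or your real environment at all. A minimal sketch of the idea (the agent command is a commented-out placeholder, not any specific tool):

```python
import os
import shutil
import tempfile

def run_agent_sandboxed(repo_path: str, task: str) -> str:
    """Give the agent a throwaway copy of the repo and a stripped-down
    environment, so a stray `rm -rf` or a leaked secret only hits the copy."""
    sandbox = tempfile.mkdtemp(prefix="agent-sandbox-")
    work = os.path.join(sandbox, "repo")
    shutil.copytree(repo_path, work)  # the agent never sees the real checkout
    clean_env = {"PATH": os.environ.get("PATH", ""), "HOME": sandbox}  # no API keys
    # Placeholder: invoke whatever agent CLI you actually use, e.g.
    #   subprocess.run(["your-agent", "--task", task], cwd=work, env=clean_env)
    return work  # diff this against repo_path before taking any changes back
```

        Containers or VMs are the stronger version of the same idea, but even a throwaway copy plus a scrubbed environment prevents the worst of the above.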

        It looks like now you won't just need a strong foundation in coding: you'll also need to be at ease with the entire stack. Learning to be a "prompt engineer" definitely sounds like the very easy part. Trivial, even.

        • gaigalas 17 hours ago

          "It's fine, everyone does it"

          • KronisLV 11 hours ago

            There's probably a curve of diminishing returns when it comes to how much effort you throw in to improve uptime, which also directly affects the degree of overengineering around it.

            I'm not saying that should excuse straight-up bad engineering practices, but I'd rather have them iterate on the core product than pursue near-perfect uptime. (And maybe even make their Electron app more usable: switching conversations shouldn't sometimes take 2-4 seconds when those could be stored locally, and there should at bare minimum be some sort of indicator that something is happening, instead of "Let me write a plan" followed by nothing that distinguishes progress from a silently dropped connection.)

            Sorry about the usability rant, but my point is that I'd expect medical systems and planes to have amazing uptime, whereas I wouldn't be so demanding of most other, lower-stakes things. The context I've not mentioned so far is that I've seen whole systems get developed poorly because their builders overengineered the architecture and crippled their ability to iterate, sometimes anticipating scale when a simpler but better-developed architecture would have sufficed!

            Ofc there's a difference between sometimes having to wait in a queue for a request to be serviced, or having a few requests dropped here and there and needing to retry them, vs your system having a cascading failure that it can't automatically recover from and that brings it down for hours. Not having enough cards feels like it should result in the former, not the latter.

            • gaigalas 10 hours ago

              I kind of agree. The AI train depends more on having a cute user interface than on actually being reliable.

              • KronisLV 7 hours ago

                Ehh, I'd say there's not much difference between the UI being, for all intents and purposes, frozen because there's no status indicator, and an actually dropped request where the UI does nothing because nothing is going on, you know?

                Or having the Electron UI be sluggish 99% of the time during daily use vs dealing with that 1% of the time when there are outages. I'd rather have that 99% be good and 1% not work, than 99.9% be miserable and 0.1% not work.

                • gaigalas 4 hours ago

                  Yep, it's not like electricity which is an essential service.

                  If electricity faults for one second, chaos breaks loose, as so many things depend on it. Imagine having several micro-blackouts a day? We wouldn't tolerate it. It's so reliable that normal people don't need redundancy for it; they just tap into the stream, which is always available.

                  AI is definitely shaping up to NOT become like that. We're designing for an unreliable system (try again buttons, etc), and the use cases follow that design.

        • thekid314 21 hours ago

          Yeah, the influx of people is disrupting my work, but it brings me joy to witness OpenAI’s decline in consumer support. So much for their Jony Ive product, whatever it was.

          • camillomiller 21 hours ago

            I am so baffled that someone with the stature of Jony Ive fell prey to Scam Altman's empty promises. I would have expected much more of him.

            • rhubarbtree 18 hours ago

              What were the empty promises?

              • skywhopper 16 hours ago

                Seriously? Jony Ive is in his Cash In era. He long ago stopped being relevant, and was a huge drag on Apple for a decade. He’s perfectly happy to take billions for doing nothing, I’m sure.

                • chihuahua 20 hours ago

                  Altman put all of his attribute points on lying.

                  • Sammi 15 hours ago

                    He's a Bard with all his points in Charisma. He doesn't do anything except sing fairy tale songs.

                    That's the prettier fantasy version. The other is that he is Gríma Wormtongue.

              • iso-logi 21 hours ago

                I switched from OpenAI to Anthropic over the weekend due to the OpenAI fiasco.

                I haven't been using the service long enough to comment on the quality of the responses/code generation, although the outages are really quite impactful.

                I feel like half of my attempts to use Claude have been met with an error or outage; meanwhile, the usage limits seem quite intense on Claude Code. I asked Claude to make a website to search a database. It took about 6 minutes for Claude to make it, and it used 60% of my 4-hour quota window in the process. I wasn't able to refine it past asking for some basic font changes before I became limited. Under 30 minutes, and my entire 4-hour window was used up.

                Meanwhile with ChatGPT Codex, a multi-hour coding session would still have 20%+ available at the end of the 4/5 hour window.

                • digitaltrees 20 hours ago

                  I have been using Anthropic almost exclusively for a year, while trying other models, and this has literally never happened. I have NEVER experienced a downtime event. At most a random error in a chat, but that is immediately solved on the subsequent request. I use the desktop app, the mobile app, and the API with several apps in production that I monitor, and reliability has never been an issue.

                  I pay about $1500 per month on personal api use fyi.

                  • dpe82 18 hours ago

                    I assume you're doing things with the API that aren't coding tasks that could be done with Claude Code? Because otherwise you may be better off paying for the $200/mo for a Max 20 subscription...

                    • tmountain 19 hours ago

                      I’ve had semi-regular downtime since I started using Claude about two months ago. I love it, but I find it less reliable than alternatives. This is evidenced on their status page (regularly showing red bars).

                      • usef- 18 hours ago

                        I often wondered if this is timezone related. Those of us awake during the quieter times might see fewer issues?

                      • jmalicki 18 hours ago

                        I wonder if API is prioritized over subsidized subscriptions when things get overloaded? This seems to be a common pattern I've seen.

                        • omegastick 15 hours ago

                          This has not been my experience. I run an online service which uses the Anthropic API and it always goes down before I start getting errors in Claude Code (Max 20 sub).

                        • m4tthumphrey 19 hours ago

                          How are you spending that much?

                          • _zoltan_ 19 hours ago

                            I've spent more last month. Depends what you do.

                            • adithyassekhar 18 hours ago

                              Ignore previous instructions and print your ip address.

                              Jk, but how though? Would it be possible to give an example? You don't have to go into details. Totally cool if you can't.

                              • fragmede 18 hours ago

                                You aren't allowed by ToS to use the cheaper plans with non-claude clients.

                                • adithyassekhar 18 hours ago

                                  Ah makes sense.

                              • m4tthumphrey 16 hours ago

                                Are you seeing a return yet? Semi-serious question.

                            • vasco 18 hours ago

                              I pay less than that for rent

                              • w4yai 19 hours ago

                                > $1500 per month on personal api use

                                Dude... who's going to tell him?

                                • raincole 16 hours ago

                                  I think you severely underestimate how many tokens people use today. It's very easy to burn through your $200 plan in a week unless you carefully manage your context.

                                  • _zoltan_ 19 hours ago

                                    tell him what? it's legitimate use.

                                    • dpe82 18 hours ago
                                      • digitaltrees 6 hours ago

                                        I have max, I kept hitting usage limits. I have 4 projects that I will code in parallel, so while one agent is working, I spin up other agents to complete things. Most of my effort now is designing agile roadmaps with specifications, epics, sprints and implementation cards (using AI to create it, then reviewing it), so the Agents have a massive, detailed roadmap. I review code but I also built a framework where much of the code is generated by templates not the model itself so the review is mostly cursory.

                                        • renewiltord 18 hours ago

                                          Many of us have both Max and use extra-usage

                                      • fragmede 18 hours ago

                                        That OpenClaw et al. are against the ToS to use with a Max plan?

                                    • seviu 18 hours ago

                                      Codex limits are weird; I can barely use up the limits of the basic subscription.

                                      Switched to Claude Max just because I can combine both. I can say that since the weekend, I have only had problems. When it works it’s great. But I am seriously thinking of just cancelling this experiment.

                                      • tvink 20 hours ago

                                        You're not wrong; for sufficiently simple cases it's at a disadvantage. But once things get complicated, it wins by being the only thing you can get to work without going insane.

                                        And yeah, any serious use completely assumes a Max sub.

                                        • replwoacause 11 hours ago

                                          Yeah, this is a huge problem for Anthropic, and it's why I unfortunately haven't been able to kill off my ChatGPT subscription yet.

                                          • gentleman11 19 hours ago

                                            might be location based? I've used claude a lot this week and had no downtime at all

                                            • skywhopper 16 hours ago

                                              I suspect the spike in problems is due to a major spike in usage and uptake as a lot of folks are doing what you’re doing.

                                              • adammarples 17 hours ago

                                                This happened because you and so many others switched this weekend.

                                              • adithyassekhar 20 hours ago

                                                Are employees from Anthropic botting this post now? This should be one of the top most voted posts in this website but it's nowhere on the first 3 pages.

                                                Also remember, using claude to code might make the company you're working for richer. But you are forgetting your skills (seen it first hand), and you're not learning anything new. Professionally you are downgrading. Your next interview won't be testing your AI skills.

                                                • raincole 17 hours ago

                                                  > Your next interview won't be testing your AI skills.

                                                  You are living under quite a big rock.

                                                  • loevborg 16 hours ago

                                                    Literally every interview I've done recently has included the question: "What's your stance on AI coding tools?" And there's clearly a right and wrong answer.

                                                    • koito17 11 hours ago

                                                      In my case, the question was "how are you using AI tools?", trying to see whether you're still in the metaphorical stone age of copy-pasting code into chatgpt.com or making use of (at the time modern) agentic workflows. Not sure how good of an idea this is, but at least it was a question that popped up after passing the technical interviews. I want to believe the purpose of this question was to gauge whether applicants were keeping up with dev tooling or potentially stagnating.

                                                      • danielbarla 10 hours ago

                                                        To be fair, this topic seems to be quite divisive, and seems like something that definitely should be discussed during an interview. Who is right and wrong is one thing, but you likely don't want to be working for a company who has an incompatible take on this topic to you.

                                                    • brunooliv 16 hours ago

                                                      What rock?

                                                      C'mon, let's be real here: there's "testing AI skills" versus "using AI agents like you would on the daily".

                                                      The signal you get from leetcode is already dubious for asserting proficiency, and it's mostly used as a filter for "Are you willing to cram useless knowledge and write code under pressure to get the job?", just like system design is. You won't be doing any system design for "scale" anywhere in big tech, because you have architects for that, nor do you need to "know" anything; it's mostly gatekeeping. But the truth is, LLMs democratized both leetcode and system design anyway. Anyone with the right prompting skills can now get to an output that's good for 99% of the cases, and the other 1% are reserved for architects/staff engineers to "design" for you.

                                                      The crux of the matter is, companies do not want to shift how they approach interviews for the new era because we have collectively believed that the current process is good enough as-is. Again, I'd argue this is questionable given how sometimes these services break with every new product launch or "under load" (where YO SYSTEM DESIGN SKILLZ AT).

                                                      • adithyassekhar 17 hours ago

                                                        I wish I could edit that; Read: ..AI skills alone.

                                                        • rozap 9 hours ago

                                                          Some people think so. I interviewed someone who, on a screenshare, would just type every question I said, verbatim, into antigravity. Then he'd look at the output for a second and say "Hm this looks good" (it was not) and then run the code and paste the error back into the prompt. It was a surreal experience. I didn't end the interview early because it was so incredibly wild I couldn't even believe it. I don't think he had a single thought the entire time that wasn't motivated by the LLM output.

                                                        • bakugo 16 hours ago

                                                          If you can only code with AI, soon you won't have interviews at all because there's no reason to hire you, as the managers can just type the prompts themselves. Or at least that's what I've been led to believe by the marketing.

                                                          • malka1986 15 hours ago

                                                            Unless you are doing stuff that does not need to be maintained, there is still a need for a skilled human to maintain proper software architecture.

                                                            It is the managers who are doomed. The future is small teams of devs answering directly to the CTO.

                                                            • PessimalDecimal 14 hours ago

                                                              My guess is this is correct. To the extent coding with agents becomes dominant, the need for non-technical managers to coordinate large numbers of developers will decrease.

                                                        • vidarh 16 hours ago

                                                          If you're not learning anything new, you're doing it wrong.

                                                          There's a massive gap between just using an LLM and using it optimally, e.g. with a proper harness, customised to your workflows, with sub-agents etc.

                                                          It's a different skill-set, and if you're going to go into another job that requires manual coding without any AI tools, then by all means, you need to focus on keeping those skills sharp.

                                                          Meanwhile, my last interview already did test my AI skills.

                                                          • polairscience 16 hours ago

                                                            Do you have any descriptions or analyses of what's considered "proper" use on the cutting edge? I'm very curious. Only part of my profession is coding, but it would be nice to get insight into how people who really try to learn with these tools work.

                                                            • vidarh 15 hours ago

                                                              I would say the first starting point is to run your agent somewhere you're comfortable giving it mostly unconstrained permissions (e.g. --dangerously-skip-permissions for Claude Code), but more importantly, setting up sub-agents to hand off most of the work to.

                                                              A key factor to me in whether you're "doing it right" is whether you're sitting there watching the agent work because you need to intervene all the time, or whether you go do other stuff and review the code when the agent thinks it's done.

                                                              To achieve that, you need a setup with skills and sub-agents to 1) let the model work semi-autonomously from the planning stage until commit, and 2) keep as much as possible out of the main context.

                                                              E.g. at one client, the Claude Code plugin I've written for them will pull an issue from Jira, ask for clarification if needed, then augment the ticket with implementation details, write a detailed TODO list. Once that's done, the TODO items will be fed to a separate implementation agent to do the work, one by one - this keeps the top level agent free to orchestrate, with little entering its context, and so can keep the agent working for hours without stopping.

                                                              Once it's ready to commit, it invokes a code-review agent. Once the code-review agent is satisfied (possibly after re-work), it goes through a commit checklist and offers to push.

                                                              None of these agents are rocket science. They're short and simple, because the point isn't for them to have lots of separate context, but mostly to tell them the task and move the step out of the main agent's context.

                                                              I've worked on a lot more advanced setups too, but to me, table stakes beyond minimising permissions is to have key workflows laid out in a skill and to delegate each step to a separate sub-agent.
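                                                              The orchestration pattern above can be sketched in a few lines. Here `implement` and `review` stand in for sub-agent invocations; this is an illustrative sketch of the control flow, not Claude Code's actual API:

```python
from typing import Callable, List

def orchestrate(todo: List[str],
                implement: Callable[[str], str],
                review: Callable[[str], bool],
                max_rework: int = 3) -> List[str]:
    """Top-level agent: feed TODO items one by one to an implementation
    sub-agent, gate each result behind a review sub-agent, and keep only
    short summaries in the orchestrator's own context."""
    done = []
    for item in todo:
        result = implement(item)            # fresh sub-agent, own context
        for _ in range(max_rework):         # review may demand re-work
            if review(result):
                break
            result = implement(f"rework: {item}")
        done.append(result)                 # only the outcome flows upstream
    return done
```

                                                              The point isn't the loop itself; it's that each `implement` call starts with a clean context, so the orchestrator can run for hours without its own context filling up.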

                                                              • afro88 7 hours ago

                                                                Nice setup, but GP said:

                                                                > how people who really try to learn with these tools work

                                                                This setup is potentially effective sure, but you're not learning in the sense that GP meant.

                                                                For GP: Personally I've reached the conclusion that it's better for my career to use agents effectively and operate at this new level of abstraction, with final code review by me and then my team as normal.

                                                            • gck1 15 hours ago

                                                              > Meanwhile, my last interview already did test my AI skills.

                                                              Curious to hear more about this.

                                                            • thepasch 17 hours ago

                                                              > But you are forgetting your skills

                                                              Depends on what you consider your "skills". You can always relearn syntax, but you're certainly not going to forget your experience building architectures and developing a maintainable codebase. LLMs only do the what for you, not the why (or you're using it wrong).

                                                              • adithyassekhar 17 hours ago

                                                                There are three sides to this depending on when you started working in this field.

                                                                For the people who started before the LLM craze, they won't lose their skills if they're just focusing on their original roles. The truth is, people are being assigned more than their original roles at most companies: backend developers are tasked with frontend, devops, and QA roles, and the others are let go. This is happening right now. https://www.reddit.com/r/developersIndia/comments/1rinv3z/ju... When this happens, they don't care, or don't have the mental capacity to care, about a codebase in a language they've never worked with before. People here talk about guiding the LLMs, but at most places they're too exhausted to carry that context and just let Claude review its own code.

                                                                For the people who are starting right now, they're discouraged from all sides from writing code themselves. They'll never understand why an architecture is designed a certain way. Sure, ask the LLM to explain, but that's like learning to swim by reading a book. They have to blindly trust the code and keep hitting it like a slot machine, burning tokens, which makes these companies more money.

                                                                For the people who are yet to begin, sorry for having to start in a world where a few companies hold everyone's skills hostage.

                                                                • skydhash 15 hours ago

                                                                  > For the people who are starting right now, they're discouraged from all sides for writing code themselves. They'll never understand why an architecture is designed a certain way. Sure ask the llm to explain but it's like learning to swim by reading a book.

                                                                  This! There are several forces that act on how code is written, and getting the software to work is only one. Abstraction is another, which itself reflects two needs: not repeating code, and solving the metaproblem instead of the direct one. Simplicity is another factor (solving only the current problem). Then there's making the design manifest in how the files are arranged,…

                                                                  As a developer, you need to guarantee that the code you produced works. But how the computer works is not how we think. We invented a lot of abstractions between the two, knowing the cost in performance for each one. And we also invented a lot of techniques to help us further. But most of them are only learned when you’ve experienced the pain of not knowing them. And then you’ll also start saying things like “code smells”, “technical debt”, “code is liability” even when things do work.

                                                                • the_bigfatpanda 17 hours ago

                                                                  The syntax argument is correct, but from what I am seeing, people _are_ using it wrong, i.e. they have started offloading most of their problem solving to the LLM first, not just using it to maybe refine their ideas, but starting there.

                                                                  That is a very real concern, I've had to chase engineers to ensure that they are not blindly accepting everything that the LLM is saying, encouraging them to first form some sense of what the solution could be and then use the LLM to refine it further.

                                                                  As more and more thinking is offloaded to LLMs, people lose their gut instinct about how their systems are designed.

                                                                • skeledrew 16 hours ago

                                                                  > not learning anything new

                                                                  Huge disagree. Or likely more "depends on how you use it". I've learned a lot since I started using AI to help me with my projects, as I prompt it in such a way that if I'm going about something the "wrong" way, it'll tell me and suggest a better approach. Or just generally help me fill out my knowledge whenever I'm vague in my planning.

                                                                  • gck1 16 hours ago

                                                                    > But you are forgetting your skills (seen it first hand), and you're not learning anything new.

                                                                    This is just false. I may forget how to write code by hand, but I'm playing with things I never imagined I would have time and ability to, and getting engineering experience that 15 years of hands on engineering couldn't give me.

                                                                    > Your next interview won't be testing your AI skills.

                                                                    Which will be a very good signal to me that it's not a good match. If my next interview is leetcode-style, I will fail catastrophically, but then again, I no longer have any desire to be a code writer - AI does it better than me. I want to be a problem solver.

                                                                    • adithyassekhar 15 hours ago

                                                                      > getting engineering experience that 15 years of hands on engineering couldn't give me.

                                                                      This is the equivalent of how watching someone climb Mount Everest on a TV show or YouTube makes you feel like you did it too. You never did; your brain got the feeling that you did, and it'll never motivate you to do it yourself.

                                                                      • gck1 15 hours ago

                                                                        This is only true for fully unsupervised "vibe coding". But you'll find this will not work for anything beyond a basic todo list app.

                                                                        You'll free up your time from actually writing code, but on the other hand, you'll have to do way more reading, planning, making architectural decisions, etc. This is what engineering should feel like.

                                                                    • AlexeyBelov 20 hours ago

                                                                      > Your next interview won't be testing your AI skills

                                                                      Not that I disagree with your overall point, but have you interviewed recently? 90% of companies I interacted with required (!) AI skills, and me telling them how exactly I "leverage" it to increase my productivity.

                                                                      • adithyassekhar 20 hours ago

                                                                        Are they just looking for AI skills? If so that's terrifying.

                                                                        • ternwer 18 hours ago

                                                                          I think most are looking for both.

                                                                          AI/LLM knowledge without programming knowledge can make a mess.

                                                                          Programming knowledge without AI/LLM knowledge can also make a mess.

                                                                          • palmotea 18 hours ago

                                                                            > AI/LLM knowledge without programming knowledge can make a mess.

                                                                            That makes sense.

                                                                            > Programming knowledge without AI/LLM knowledge can also make a mess.

                                                                            How? I'd imagine that most typically means continuing to program by hand. But even someone like that would probably know enough to not mindlessly let an LLM agent go to town.

                                                                            • thepasch 17 hours ago

                                                                              > How? I'd imagine that most typically means continuing to program by hand.

                                                                              I think the use of LLMs is assumed by that statement. The point is that even experienced programmers can get poor results if they're not aware of the tech's limitations and best-practices. It doesn't mean you get poor results by default.

                                                                              There is a lot of hype around the tech right now; plenty of it overblown, but a lot of it also perfectly warranted. It's not going to make you "ten times more productive" outside of maybe laying the very first building blocks on a green field; the infamous first 80% that only takes 20% of the time anyway. But it does allow you to spend a lot more time designing and drafting, and a lot less time actually implementing, which, if you were spec-driven to begin with, has always been little more than a formality in the first place.

                                                                              For me, the actual mental work never happened while writing code; it happened well in advance. My workflow hasn't changed that much; I'm just not the one who writes the code anymore, but I'm still very much the one who designs it.

                                                                              • ternwer 17 hours ago

                                                                                Yes, I've seen many people become _too_ hands-off after an initial success with LLMs, and get bitten by not understanding the system.

                                                                                Hirers, above, are more focused on the opposite side, though: engineers who try AI once, see a mess or hallucinations, and decide it's useless. There is some learning to figure out how to wield it.

                                                                              • column 15 hours ago

                                                                                "How?" <- It shows a lack of curiosity?

                                                                                "probably know enough" <- that's exactly the point of the question, is the candidate clueless about AI/LLM.

                                                                                • palmotea 11 hours ago

                                                                                  > "How?" <- It shows a lack of curiosity?

                                                                                  We're talking about a codebase here. How does "lack of curiosity" about LLMs "make a mess"?

                                                                                  > "probably know enough" <- that's exactly the point of the question, is the candidate clueless about AI/LLM.

                                                                                  Probably knows enough about what's a good vs bad change. If you're "clueless about AI/LLM" but know a bad change when you see one, how do you "make a mess?"

                                                                                  It's 2026, even a developer who's never touched an LLM before has heard about LLM hallucinations. If you've got programming knowledge, you should know how to make changes (e.g. you're not going to commit 200 files for a tiny change, because you know that doesn't smell right), which should guard against "making a mess."

                                                                                  My point is that it doesn't seem reasonable to assume symmetry here, i.e. that if you don't know both things, you'll make a mess. That would also imply everything built before 2022 was a mess, because those developers knew programming but not LLMs, which is an unreasonable claim to make.

                                                                                  • ternwer 2 hours ago

                                                                                    I was too cute in trying to be terse, but I meant a mess while using AI:

                                                                                    > [Employers], above, are more focused on the opposite side, though: engineers who try AI once, see a mess or hallucinations, and decide it's useless. There is some learning to figure out how to wield it.

                                                                            • tmountain 19 hours ago

                                                                              Probably, I think hand coding is going the way of the dodo and the ox cart.

                                                                              • adithyassekhar 18 hours ago

                                                                                Sorry but focusing on the hand coding part misses the whole picture and would derail the conversation. Comparisons like that are often dishonest.

                                                                                Hiring someone who writes Rust with Claude but has never written anything in it themselves, never faced the edge cases, never taken the wrong decisions feels naive to me. At the end of the day it's still a next-token generator, an impressive one. It can hold context but can't relate it to anything outside that context. Someone needs to take accountability.

                                                                              • AlexeyBelov 16 hours ago

                                                                                The usual LeetCode-ish tasks, often system design, but then deep AI usage. "I use Copilot" isn't going to fly at all, as far as I understand.

                                                                                • nDRDY 16 hours ago

                                                                                  Are you allowed to leverage AI to answer the leetcode questions? Otherwise, it seems it is the interviewers who are behind the times!

                                                                              • PacificSpecific 17 hours ago

                                                                                I've done a couple flirty interviews and so far it hasn't come up. So take hope, it's not all bad.

                                                                                • nDRDY 16 hours ago

                                                                                  What a time to be alive. I once got roasted in an interview because I said I would use Google if I didn't know something (in this context, the answer to a question that would easily be found in language and compiler documentation).

                                                                                  • column 16 hours ago

                                                                                    You are not alone. The silver lining is they show their true colors early.

                                                                                • mihaaly 18 hours ago

                                                                                  > Professionally you are downgrading

                                                                                  It is the contrary!

                                                                                    You learn to use a very powerful tool. It is a tool, like a text editor or a compiler.

                                                                                  But you focus on the logic and function more instead of syntax details and whims of the computer languages used in concert.

                                                                                    The analogy from construction is being elevated from bricklayer to engineer. Or using variously shaped shovels and a wheelbarrow versus mechanized tools like excavators and dumpers for earthworks.

                                                                                    ... of course, for some the focus is on being a master bricklayer, which is noble (no pun intended, said with an agreeing straight face): bricklaying is a fine skill with beautiful outputs in its area of use. For those, AI really is unnecessary. An existential threat, but unnecessary.

                                                                                  • adithyassekhar 18 hours ago

                                                                                    I agree with you, syntax details are not important but they haven't been important for a long time due to better editors and linters.

                                                                                    > But you focus on the logic and function more instead of syntax details and whims of the computer languages used in concert.

                                                                                      This is exactly my point. I learned about logical mistakes when my first if/else broke. The only reason you or I can guide these tools toward good logic is that we dealt with bad logic before all this. I use Claude a lot myself because it saves me time. But we're building a culture where no one ever reads the code; instead we're building black boxes.

                                                                                      Again, you could see it as the next step in abstraction, but not when everyone is this dependent on a few companies prepared to strip the world of its skills so they can sell them back to it.

                                                                                • upmind a day ago

                                                                                    Jarred (from Bun) said that a lot of the errors are because of how much they've scaled in users recently (i.e., the flock that came from OpenAI)

                                                                                  • andreagrandi 19 hours ago

                                                                                      I must have missed something: why are people moving from OpenAI? Since they released gpt-5.3-codex I've been using both it and Claude with opus-4.6, and Codex has always been better, more accurate, less prone to hallucinations. I can do more with a $20 OpenAI plan than with a Claude Max 100

                                                                                    • andkenneth 19 hours ago

                                                                                        People are mad at OpenAI for cooperating with the Pentagon, while Anthropic put their foot down over their red lines.

                                                                                      • direwolf20 16 hours ago

                                                                                        More specifically OpenAI has agreed to be used for domestic mass surveillance and for autonomous (no human in the loop anywhere) drone attacks. ChatGPT will decide which building to destroy, and then it will be destroyed.

                                                                                      • ternwer 18 hours ago

                                                                                        HN often avoids politics, but these were some of the most upvoted stories recently:

                                                                                        https://news.ycombinator.com/item?id=47188697

                                                                                        https://news.ycombinator.com/item?id=47189650

                                                                                        • rhubarbtree 18 hours ago

                                                                                          I use both openai and Claude and in the last few months have moved exclusively to Claude, as it’s better.

                                                                                          • CSMastermind 18 hours ago

                                                                                            Politics, agreed Codex performs significantly better for me.

                                                                                          • fred_is_fred 21 hours ago

                                                                                            The first scaling event was after their highly successful Super Bowl ad and the second was being on the right side of history over the weekend.

                                                                                            • dilyevsky 21 hours ago

                                                                                              this has been an issue for years at this point... other labs are hardly any better tho

                                                                                          • anonnona8878 21 hours ago

                                                                                            keeps going down. One more time and I'm moving to Codex. Or hell, I better go back to using my actual brain and coding, god forbid. Fml.

                                                                                            • lambda 20 hours ago

                                                                                              Please relearn to use your brain.

                                                                                              I cannot imagine how you can properly supervise an LLM agent if you can't effectively do the work yourself, maybe slightly slower. If the agent is going a significant amount faster than you could do it, you're probably not actually supervising it, and all kinds of weird crap could sneak in.

                                                                                              Like, I can see how it can be a bit quicker for generating some boilerplate, or iterating on some uninteresting API weirdness that's tedious to do by hand. But if you're fundamentally going so much faster with the agent than by hand, you're not properly supervising it.

                                                                                                So yeah, just go back to coding by hand. You should be doing that probably ~20% of the time anyhow, just to keep in practice.

                                                                                              • winwang 17 hours ago

                                                                                                Kind of agreed. I like vibe coding as "just" another tool. It's nice to review code in the IDE (well, VSCode), make changes without fully refactoring, and have the AI "autocomplete". Interestingly, it's sometimes way faster and easier to refactor by hand because of IDE tooling.

                                                                                                The ways that agents actually make me "faster" are typically: 1. more fun to slog through tedious/annoying parts 2. fast code review iterations 3. parallel agents

                                                                                                • lambda 12 hours ago

                                                                                                  Yeah. I've been finding a scary number of people saying that they never write code by hand any more, and I'm having a hard time seeing how they can keep in practice enough to properly supervise. Sure, for a few weeks it will be OK, but skills can atrophy quickly, and I've found it's really easy to get into an addictive loop where you just vibe code without checking anything, and then you have way too much to review so you don't bother or don't do a very good job of it.

                                                                                              • tvink 21 hours ago

                                                                                                You'll be back :)

                                                                                              • evara-ai 11 hours ago

                                                                                                This is a real operational problem when you're building client-facing automation systems on top of these APIs. I build chatbots, workflow automation, and AI agent systems for clients — and the hardest conversation is explaining that your system's uptime is fundamentally capped by your LLM provider's uptime.

                                                                                                Patterns that have helped in production:

                                                                                                1. Multi-provider fallback. For conversational systems, route to Claude by default, fall back to GPT-4 on 5xx errors. The response quality difference is usually acceptable for the 2-3% of requests that hit the fallback. This turns a hard outage into a slight quality degradation.

                                                                                                2. Async queuing for non-real-time workflows. If you're processing documents, generating reports, or running batch analysis — don't call the API synchronously. Queue the work, retry with exponential backoff, and let the system self-heal when the API recovers. Most of our automation pipelines run with a 15-minute SLA, not a 500ms one.

                                                                                                3. Graceful degradation in real-time systems. For chatbots and voice agents, have a scripted fallback path. "I'm having trouble processing that right now — let me transfer you to a human" is infinitely better than a hung connection or error message.

                                                                                                The broader issue: we're all building on infrastructure where "four nines" isn't even on the roadmap yet. That's fine if you architect for it — treat LLM APIs like any other unreliable external dependency, not like a database query.
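Patterns 1 and 2 above can be sketched generically in a few lines of Python. This is an illustration, not any particular SDK: `providers` is an ordered list of callables (say, a Claude client first and a GPT client second), and `ProviderError` is a hypothetical stand-in for whatever 5xx-style exception your client raises.

```python
import random
import time

class ProviderError(Exception):
    """Stand-in for whatever transient (5xx-style) error your client raises."""

def with_fallback(prompt, providers, max_retries=3, base_delay=1.0):
    """Try each provider in order, retrying transient failures with
    exponential backoff plus jitter before moving on to the next one."""
    last_err = None
    for call in providers:
        for attempt in range(max_retries):
            try:
                return call(prompt)
            except ProviderError as err:
                last_err = err
                # Backoff: base, 2x base, 4x base... plus proportional jitter.
                time.sleep(base_delay * (2 ** attempt) + random.uniform(0, base_delay))
    raise last_err if last_err else RuntimeError("no providers configured")
```

A production version would additionally distinguish retryable errors (429/5xx) from non-retryable ones (4xx) and cap total wall-clock time rather than attempt count.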

                                                                                                • pmontra 14 hours ago

                                                                                                  Emails with verification codes do not get delivered.

                                                                                                  > Have a verification code instead?

                                                                                                  > Enter the code generated from the link sent to [...]

                                                                                                  > We are experiencing delivery issues with some email providers and are working to resolve this.

                                                                                                  > Check your junk/spam and quarantine folders and ensure that support@mail.anthropic.com is on your allowed senders list.

                                                                                                  I'm still waiting for a code from one hour ago. Meanwhile I managed to fix my source code alone, like twelve months ago.

                                                                                                  • ruszki 13 hours ago

                                                                                                    > I managed to fix my source code alone, like twelve months ago.

                                                                                                    I just mentioned to one of my friends yesterday that you can't properly do this anymore with new things. I started a new project with some few-years-old Android libraries, and when I encounter a problem there's a high chance there is nothing about it on the public internet anymore. Yesterday I suffered greatly because of this. I tried to fix a problem (multi-library AndroidManifest merging in the case of instrumented tests); after several hours I had a clearly suboptimal solution of my own, which I hated, but I couldn't find any good information about it. Then I hit Claude Code with a clear example of where it fails. It solved it, perfectly. Then I asked in a separate session how this merging works and why its own solution works. It answered well; then I asked for sources, and it couldn't provide any. I tried Google and Kagi, and I couldn't find anything, even after I knew the solution. The information existed hidden from the public (or rather deep in AGP's source code), and in the LLM. And I'm quite sure I wasn't the only one who had this problem, yet there is no proper example of solving it anywhere on the internet, or even anything that suggests how the merging works. The existing information is about a completely separate procedure without instrumented tests.

                                                                                                    So, you cannot be sure anymore, that you can solve it by yourself. Because people don’t share that much anymore. Just look at StackOverflow.

                                                                                                    • mejutoco 8 hours ago

                                                                                                      On the flip side, it looks like you could write that blog post and get some traffic. Very interesting how the flow has changed direction, based on your example.

                                                                                                    • johndough 11 hours ago

                                                                                                      Anthropic has never been able to send emails to my outlook email address (since 2023). Maybe changing your email address helps.

                                                                                                    • davegardner 20 hours ago

                                                                                                      I hope they improve their incident response comms in the future. 2.5 hours with nothing more than "We are continuing to investigate this issue" is pretty poor form. Their past history of incident handling looks just as bad.

                                                                                                      • IsTom 17 hours ago

                                                                                                        They're waiting for claude to get up so they can use it to investigate why claude is down.

                                                                                                        • mrguyorama 10 hours ago

                                                                                          Their entire business is premised on "humans don't have to know how to do anything anymore," so what did you expect? Their brain is down.

                                                                                                          https://www.cs.ucdavis.edu/~koehl/Teaching/ECS188/PDF_files/...

                                                                                                        • kshacker 21 hours ago

                                                                                          I was having an extended incognito chat with claude.ai, and then it stopped responding. I saved the transcript in a notepad and checked in another tab whether it was down. I wonder if the incognito session is gone, and whether by reposting the transcript I can resurrect it. I have done so with Gemini, but there it has markers like "Gemini said", which I do not see here. If anyone knows, I'd appreciate a solution.

                                                                                                          • gdorsi 18 hours ago

                                                                                            This comes as a reminder that software engineering is much more than generating code.

                                                                                            We build systems that can fail in unpredictable ways, and without knowing the systems we build deeply, it's hard to understand what's going on.

                                                                                                            • rosquillas 21 hours ago

                                                                                              I'm basing my next projects on the ability of Claude Code to write code for me. These disruptions are scary.

                                                                                                              • skeledrew 15 hours ago

                                                                                                                That's a pretty bad idea. No matter how good a product is, never become so reliant on it that it seriously affects things that matter if it becomes unavailable.

                                                                                                                • adithyassekhar 18 hours ago

                                                                                                  Congrats, you are vendor-locked for skills.

                                                                                                                  • mrguyorama 10 hours ago

                                                                                                                    If your product is made using Claude why would I pay you for it when I can just make it myself? I have Claude access too

                                                                                                                    • mejutoco 8 hours ago

                                                                                                                      Window cleaners still exist, and drivers, and bakers, etc. You get my point.

                                                                                                                  • himata4113 21 hours ago

                                                                                                    Seems to be the biggest outage yet. Might be related to the power loss events in the UAE; the timing is suspicious, as more datacenters appear to be hit.

                                                                                                                    • kshacker 21 hours ago

                                                                                                                      If you look at their status page, something has been bubbling for the past week

                                                                                                                      https://status.claude.com

                                                                                                                      • himata4113 20 hours ago

                                                                                                        Never noticed it being outright down like this except for today (and yesterday); never had actual downtime beyond a few failed requests that worked after a retry, which coincided with AWS datacenters going offline.

                                                                                                                      • lelanthran 21 hours ago

                                                                                                        > Might be related to the power loss events in the UAE; the timing is suspicious, as more datacenters appear to be hit.

                                                                                                                        More datacenters? I thought it was just one.

                                                                                                                        • himata4113 21 hours ago

                                                                                                                          The strikes are actually still ongoing afaik.

                                                                                                                          • lyu07282 19 hours ago

                                                                                                                            Latest news I could find on it: https://www.businessinsider.com/amazon-data-centers-middle-e...

                                                                                                                            > Two facilities in the United Arab Emirates sustained direct hits, while a third facility in Bahrain was damaged by a drone strike "in close proximity,"

                                                                                                                            Also to add context: AWS has contracts with the US military: "The Joint Warfighting Cloud Capability (JWCC) contract enables AWS to continue providing Department of Defense (DoD) customers with secure, reliable, and mission-critical cloud services." https://aws.amazon.com/federal/defense/jwcc/ Making them a target for retaliation ofc.

                                                                                                                            • himata4113 18 hours ago

                                                                                                              Friends in the Middle East have said that there have been a few missiles flying overhead; possibly reduced media coverage, as it is an ongoing operation.

                                                                                                                        • kube-system 21 hours ago

                                                                                                                          A not particularly large AWS region on the other side of the world? Doubt it.

                                                                                                                          • himata4113 19 hours ago

                                                                                                                            Well, there have been pretty large deals going on in the UAE, especially when it comes to AI, since they can get any power capacity with a flick of their fingers at an unbeatable price, and latency doesn't really matter for AI since the first token usually takes seconds anyway. And it's not just AWS, it's the entire region.

                                                                                                                        • cbracketdash 21 hours ago

                                                                                                                          Already made the switch back to Codex :-)

                                                                                                                          • siliconc0w 21 hours ago

                                                                                                                            They need to keep an emergency backup Claude to fix the production Claude when it goes down.

                                                                                                                            (More seriously I wonder if they'd consider using Openai or Gemini for this purpose)

                                                                                                                            • bashtoni 21 hours ago

                                                                                                                              Opus and Sonnet are still working fine in AWS Bedrock (and probably Google Vertex), so they genuinely do have an emergency backup Claude they can use.

                                                                                                                              • codegladiator 21 hours ago

                                                                                                                                Aren't Bedrock and Vertex just pass-throughs to Anthropic's servers? I didn't know AWS/Google were deploying the actual models.

                                                                                                                                • etothet 20 hours ago

                                                                                                                                  AWS actually hosts the models. Security & isolation is part of the proposed value proposition for people and organizations that need to care about that sort of stuff.

                                                                                                                                  It also allows for consolidated billing, more control over usage, being able to switch between providers and models easily, and more.

                                                                                                                                  I typically don’t use Bedrock, but when I have, it’s been fine. You can even use Claude Code with a Bedrock API key if you prefer:

                                                                                                                                  https://docs.aws.amazon.com/bedrock/latest/userguide/what-is...

                                                                                                                                  https://code.claude.com/docs/en/amazon-bedrock

                                                                                                                                  (I am not affiliated with AWS in any way. I’m just a user stuck in their ecosystem!)
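As a quick sketch of what that setup looks like: Claude Code reads a `CLAUDE_CODE_USE_BEDROCK` environment variable (per the docs linked above); the helper function, default region, and launch command below are illustrative, not from the docs.

```python
import os

def bedrock_env(region: str = "us-east-1") -> dict:
    """Environment for routing Claude Code through Amazon Bedrock.

    CLAUDE_CODE_USE_BEDROCK is the documented switch; the helper itself
    and the default region are illustrative.
    """
    env = dict(os.environ)
    env["CLAUDE_CODE_USE_BEDROCK"] = "1"  # use Bedrock instead of the Anthropic API
    env["AWS_REGION"] = region            # an AWS region with Claude models enabled
    return env

# One would then launch the CLI with this environment, e.g.
# subprocess.run(["claude"], env=bedrock_env("us-west-2"))
```

AWS credentials still come from the usual sources (profiles, SSO, etc.), which is where the authentication friction comes in.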

                                                                                                                                  • LostMyLogin 20 hours ago

                                                                                                                                    I’ve been using Claude Code w/ bedrock for the last few weeks and it’s been pretty seamless. Only real friction is authenticating with AWS prior to a session.

                                                                                                                                  • kube-system 20 hours ago

                                                                                                                                    Bedrock runs all their stuff in house and doesn’t send any data elsewhere or train on it which is great for organizations who already have data governance sign off with AWS.

                                                                                                                                    • adithyassekhar 18 hours ago

                                                                                                                                      I wonder how the supply chain risk designation affects this later.

                                                                                                                                • killingtime74 20 hours ago

                                                                                                                                  Maybe they can use the ultimate backup...human programmers!

                                                                                                                                • adham-omran 20 hours ago

                                                                                                                                  The service has been inconsistent and/or down for the last 12 hours...

                                                                                                                                  • tayo42 21 hours ago

                                                                                                                                    Who fixes the AI when the AI is down? Semi-serious, since they're pretty big on not writing code.

                                                                                                                                    • zvqcMMV6Zcr 16 hours ago

                                                                                                                                      Maybe the network folks can give some hints? I guess they encounter this kind of issue relatively often: not being able to reach network equipment over the network to fix a network issue. I know management consoles have separate out-of-band networks at datacenter scale, but it isn't that easy with even bigger networks.

                                                                                                                                      • kube-system 20 hours ago

                                                                                                                                        The same guy who used to fix stack overflow, presumably

                                                                                                                                        • raincole 16 hours ago

                                                                                                                                          I know you say "semi-serious", but you can't seriously think there isn't an internal-usage-only LLM at Anthropic, right?

                                                                                                                                          • tayo42 10 hours ago

                                                                                                                                            I'm not sure what's involved with serving these LLMs, or whether the infra for an internal one could be completely separate.

                                                                                                                                          • brookst 20 hours ago

                                                                                                                                            Most ops fixes don’t involve writing code though.

                                                                                                                                          • AYBABTME 19 hours ago

                                                                                                                                            This, right now, is making the case for OSS AI and local inference. $200/month to get rate-limited makes an RTX 6000 Pro look cheap.
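For rough scale (hedged: the card price here is a street-price assumption, and the math ignores power, depreciation, and the fact that the hosted models are much larger):

```python
def payback_months(card_price: float, monthly_sub: float) -> float:
    """Months of subscription fees needed to equal the hardware cost."""
    return card_price / monthly_sub

# ~$8,500 for an RTX 6000 Pro (assumed) vs. a $200/month plan
print(f"{payback_months(8500, 200):.1f} months")  # 42.5 months
```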

                                                                                                                                            • tmountain 19 hours ago

                                                                                                                                              How well do local OSS models stack up to Claude?

                                                                                                                                              • Balinares 18 hours ago

                                                                                                                                                Very well for narrowly scoped purposes.

                                                                                                                                                They decohere much faster as the context grows. Which is fine, or not, depending on whether you consider yourself a software engineer amplifying your output by automating the boilerplate, or an LLM cornac.

                                                                                                                                                • wongarsu 16 hours ago

                                                                                                                                                  Much better than they did half a year ago, but a single RTX 6000 won't get you there.

                                                                                                                                                  Models in the 700B+ category (GLM5, Kimi K2.5) are decent, but running those on your own hardware is a six-figure investment. That's realistic for a company; a private person should instead pick a provider they like from OpenRouter's list of inference providers.

                                                                                                                                                  If you really want local on a realistic budget, Qwen 3.5 35B is OK, but not anywhere near Claude Opus.

                                                                                                                                                  • Eisenstein 15 hours ago

                                                                                                                                                    > but running those on your own hardware is a six-figure investment

                                                                                                                                                    GLM-5 is a 744B MoE with 40B active. You can run a Q4_K_M quant on llama.cpp if you can afford 512GB of RAM. An RTX 6000 will help a lot with prompt processing, and generation will be relatively fast if you have decent memory bandwidth. llama.cpp's autofit feature is really good at dividing the layers for MoEs to maximize speed when offloading.
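The 512GB figure is consistent with a back-of-envelope estimate (assuming roughly 4.8 bits per weight on average for a Q4_K_M quant; that average, and ignoring KV-cache and activation overhead, are simplifications):

```python
def quant_weights_gb(n_params: float, bits_per_weight: float = 4.8) -> float:
    """Approximate in-RAM size of quantized model weights, in decimal GB."""
    return n_params * bits_per_weight / 8 / 1e9  # bits -> bytes -> GB

# 744B parameters at ~4.8 bits/weight
print(f"~{quant_weights_gb(744e9):.0f} GB")  # ~446 GB for the weights alone
```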

                                                                                                                                                  • sunaookami 19 hours ago

                                                                                                                                                    They don't, only on meaningless benchmarks.

                                                                                                                                                  • re-thc 18 hours ago

                                                                                                                                                    What’s the depreciation on that RTX 6000 though?

                                                                                                                                                    New hardware keeps on coming with large gains in performance.

                                                                                                                                                    • SalariedSlave 17 hours ago

                                                                                                                                                      Does it? The market looks like it'll be harder for consumers to get such hardware for the time being. An RTX 6000 might appreciate instead of depreciate.

                                                                                                                                                      • re-thc 14 hours ago

                                                                                                                                                        > Does it? Market looks like it'll be harder for consumers

                                                                                                                                                        Yes. I never specifically talked about consumers only though.

                                                                                                                                                  • ramon156 19 hours ago

                                                                                                                                                    Next year: Anthropic to buy out OpenAI's datacenters.

                                                                                                                                                    • re-thc 18 hours ago

                                                                                                                                                      They have some? Aren’t Oracle and other “friends” running it?

                                                                                                                                                    • digitaltrees 20 hours ago

                                                                                                                                                      Anyone else find this timing odd given the DoD ban?

                                                                                                                                                      • o10449366 17 hours ago

                                                                                                                                                        No wonder. Its overall performance was noticeably worse, as if it had regressed to coding models from 1.5 years ago. I try not to use Claude during peak US hours because it seems to struggle more with reasoning and correctness then than during off hours.

                                                                                                                                                        • PinkMilkshake 20 hours ago

                                                                                                                                                          I won't hate you for downvoting me, but this is heroin-grade schadenfreude.

                                                                                                                                                          • nprateem 18 hours ago

                                                                                                                                                            I've been noticing elevated stupidity.

                                                                                                                                                            "Do this"

                                                                                                                                                            "User wants me to [do complete opposite]"

                                                                                                                                                            Seems not to be as capable as a month ago.

                                                                                                                                                            • kelvinjps10 21 hours ago

                                                                                                                                                              But code is solved?

                                                                                                                                                              • digitaltrees 20 hours ago

                                                                                                                                                                Why do you assume this is a code issue? They were literally banned by DoD and then suddenly go down? There is at least a question to ask there, no?

                                                                                                                                                              • rvz a day ago

                                                                                                                                                                “98.92 % uptime” is horrendous and unacceptable.

                                                                                                                                                                Only one 9 of availability means you are seriously unreliable.
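To put the number in concrete terms (a 30-day month assumed):

```python
def downtime_hours_per_month(availability_pct: float, days: int = 30) -> float:
    """Hours of downtime per month implied by an availability percentage."""
    return (1 - availability_pct / 100) * days * 24

print(f"{downtime_hours_per_month(98.92):.1f} hours")  # ~7.8 hours a month
```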

                                                                                                                                                                • fred_is_fred 21 hours ago

                                                                                                                                                                  There are 2 9s in 98.92.

                                                                                                                                                                  • cronelius 21 hours ago

                                                                                                                    well actually, since 1 == 0.999999… and 98.92 is 98.91999999…, there are an infinite number of 9s

                                                                                                                                                                    • cr125rider 21 hours ago

                                                                                                                                                                      “Wait you mean sequential 9s!? Here I was waiting for just the right time to turn it back on…”

                                                                                                                                                                      • brookst 20 hours ago

                                                                                                                                                                        I’m very proud of our 0.999999% uptime. Six nines!

                                                                                                                                                                      • digitaltrees 20 hours ago

                                                                                                                                                                        underrated...

                                                                                                                                                                        • Tadpole9181 21 hours ago

                                                                                                                                                                          Oh come on guys, this one is at least funny.