• dinkleberg a day ago

    Props to them for actually updating their status page as issues are happening rather than hours later. I was working with claude code and hit an API error, checked the status page and sure enough there was an outage.

    This should be a given for any service that others rely on, but sadly this is seldom the case.

    • palcu a day ago

      Thank you! Opening an incident as soon as user impact begins is one of those instincts you develop after handling major incidents for years as an SRE at Google, and now at Anthropic.

      I was also fortunate to be using Claude at that exact moment (for personal reasons), which meant I could immediately see the severity of the outage.

      • koakuma-chan a day ago

        It's important for companies to use their own products.

        • awesome_dude a day ago

          Unless using your own dogfood prevents you from fixing it if it breaks

          https://www.theguardian.com/technology/2021/oct/05/facebook-...

          I have a memory that Slack fell into this trap too (I could be wrong)

          • hiddencost a day ago

            Facebook notoriously had to cut open the doors to one of their data centers.

            Google SRE still keeps IRC available in case of an emergency.

            • antonvs a day ago

              Now I’m imagining the folks at Slack gritting their teeth and using MS Teams

          • aduwah a day ago

            Take my condolences, Sunday outages are rough

            • nrhrjrjrjtntbt a day ago

              Sweet. Hopefully it is more than instinct and is actually codified at Anthropic, i.e. a graduate engineer with little experience can assess and raise an incident if needed.

            • LanceH a day ago

              Confusingly, I was trying to debug something with a 529, and this outage really had me going for a minute.

              • cevn a day ago

                The 529 is coming from inside the house?!

              • arach a day ago

                Same as you and I was glad to see the status page - hit subscribe on updates

                Claude user base believes in Sunday PM work sessions

                • gwd a day ago

                  As a solo bootstrapped founder, I take my sabbath sundown on Saturday to sundown on Sunday. Sunday evening therefore is generally the start of my work week.

                  • airstrike a day ago

                    Sunday PM builder, reporting in.

                    • taytus a day ago

                      Sunday? What is that?

                      • exe34 a day ago

                        hah, I reckon I ran out of tokens a bit before it hit.

                        • rnewme a day ago

                          same here, and I just got started, Hm..

                      • smcleod a day ago

                        Indeed! I checked their status page within 2 minutes of having issues and it was updated to show they had detected it.

                        • Buttons840 a day ago

                          "There's a problem and we already know about it" is so much better than "there's a problem and we don't know about it and/or are hoping it will magically go away and that we won't be embarrassed".

                          • dpkirchner a day ago

                            "If we admit to it we may have to compensate per SLAs, so dishonesty it is!"

                          • fragmede a day ago

                            Seldom? Most status pages I've seen do eventually get updated, just not within that first critical 3 minutes.

                          • palcu a day ago

                            Hello, I'm one of the engineers who worked on the incident. We have mitigated the incident as of 14:43 PT / 22:43 UTC. Sorry for the trouble.

                            • l1n a day ago

                              Also an engineer on this incident. This was a network routing misconfiguration - an overlapping route advertisement caused traffic to some of our inference backends to be blackholed. Detection took longer than we’d like (about 75 minutes from impact to identification), and some of our normal mitigation paths didn’t work as expected during the incident.

                              The bad route has been removed and service is restored. We’re doing a full review internally with a focus on synthetic monitoring and better visibility into high-impact infrastructure changes to catch these faster in the future.

                              • ammut a day ago

                                If you have a good network CI/CD pipeline and can trace the time of deployment to when the errors began, it should be easy to reduce your total time to detect and repair (TTD/TTR). Even when I was parsing logs years ago and matching them up against AAA authorization commands issued, it was always a question of "when did this start happening?" and then "who made a change around that time period?"
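                                The "who changed what around then?" step is mechanical once deploys are logged with timestamps; a minimal sketch, with all names and events illustrative:

```python
from datetime import datetime, timedelta

def changes_near(error_start, deploys, window_minutes=30):
    """Return deploy events that landed shortly before the errors began,
    most recent first - the usual prime suspects."""
    window = timedelta(minutes=window_minutes)
    suspects = [d for d in deploys
                if error_start - window <= d["time"] <= error_start]
    return sorted(suspects, key=lambda d: d["time"], reverse=True)

# Hypothetical deploy log entries:
deploys = [
    {"time": datetime(2026, 1, 11, 21, 10), "change": "route advertisement update"},
    {"time": datetime(2026, 1, 11, 18, 0), "change": "frontend release"},
]
errors_began = datetime(2026, 1, 11, 21, 28)
for d in changes_near(errors_began, deploys):
    print(d["change"])  # only the 21:10 routing change is inside the window
```

                                Real pipelines would pull these events from a change-management system rather than a list, but the correlation itself is this simple.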

                                • giancarlostoro a day ago

                                  I don't know if you guys do write-ups, but Cloudflare's write-ups on outages are, in my eyes, the gold standard the entire industry should follow.

                                  • Arcuru a day ago

                                    When I was at Big Corp, I loved reading the internal postmortems. They were usually very interesting and I learned a lot. It's one of the things I miss about leaving.

                                    A tech company that publishes its postmortems when possible always gets a +1 in my eyes; I think it's a sign of good company culture. Cloudflare's are great and I would love to see more from others in the industry.

                                    • boondongle a day ago

                                      A big reason for that is it comes from the CEO. Other providers have a team and then at least 2 to 3 layers of management above them and a dotted line legal counsel. So the goal posts randomly shift from "more information" to "no information" over time based on the relationships of that entire chain, the customer heat of the moment, and personality.

                                      Underneath a public statement they all have extremely detailed post-mortems. But how much goes public is 100% random from the customer's perspective. There's no Monday-morning QB'ing the CEO, but there absolutely is for "Day-Shift SRE Leader Phil".

                                      • bflesch a day ago

                                        Cloudflare deploys stuff on Fridays, and it directly affected Shopify, one of their major ecommerce customers. Until they fix their internal processes, all writeups should be seen as purely marketing material.

                                        • giancarlostoro 20 hours ago

                                          I absolutely see it as marketing, and it is effective because I still appreciate the write ups. Arguably any publicly traded company should be letting their investors know more details about outages.

                                      • 999900000999 a day ago

                                        Was this a typo situation or a bad process thing ?

                                        Back when I did website QA automation I'd manually check the website at the end of my day. Nothing extensive, just looking at the homepage for peace of mind.

                                        Once a senior engineer decided to bypass all of our QA, deployed, and took down prod. Fun times.

                                        • spike021 a day ago

                                          Depending on how long someone's been in the industry it's more a question of if, not when, an outage will occur due to someone deciding to push code haphazardly.

                                          At my first job one of my more senior team members would throw caution to the wind and deploy at 3pm or later on Fridays because he believed in shipping ASAP.

                                          There were a couple times that those changes caused weekend incidents.

                                          • MobiusHorizons a day ago

                                            I think you meant to write “when, not if” instead of “if, not when”

                                            • spike021 a day ago

                                              heh, probably. that's what I get for writing a comment while walking my dog.

                                          • userbinator a day ago

                                            In these times, it could be "the AI did it".

                                          • wouldbecouldbe a day ago

                                            Trying to understand what this means.

                                            Did the bad route cause an overload? Was there a code error on that route that wasn’t spotted? Was it a code issue or an instance that broke?

                                            • bc569a80a344f9c a day ago

                                              It says network routing issue.

                                              Network routes consist of a network (a range of IPs) and a next hop to send traffic for that range to.

                                              These can overlap. Sometimes that's desirable, sometimes it is not. When routers have two routes that are exactly the same, they often load balance (in some fairly dumb, stateless fashion) between possible next hops; when one of the routes is more specific, it wins.

                                              Routes get injected by routers saying “I am responsible for this range” and setting themselves as the next hop; other routers that connect to them receive this advertisement and propagate it to their own router peers further downstream.

                                              An example would be advertising 192.168.0.0/23, which is the range of 192.168.0.0-192.168.1.255.

                                              Let’s say that’s your inference backend in some rows in a data center.

                                              Then, through some misconfiguration, some other router starts announcing 192.168.1.0/24 (192.168.1.0-192.168.1.255). This is more specific, that traffic gets sent there, and half of the original inference pod is now unreachable.
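                                              That longest-prefix-match behavior can be sketched with Python's stdlib `ipaddress` module, using the same illustrative ranges:

```python
import ipaddress

def longest_prefix_match(destination, routes):
    """Pick the most specific route covering `destination`,
    as routers do when advertisements overlap."""
    dest = ipaddress.ip_address(destination)
    matches = [r for r in routes if dest in ipaddress.ip_network(r)]
    # More specific = longer prefix (/24 beats /23); None if no route matches.
    return max(matches,
               key=lambda r: ipaddress.ip_network(r).prefixlen,
               default=None)

routes = ["192.168.0.0/23"]  # the legitimate advertisement for the whole pod
print(longest_prefix_match("192.168.1.10", routes))  # 192.168.0.0/23

routes.append("192.168.1.0/24")  # misconfigured, more specific overlap
print(longest_prefix_match("192.168.1.10", routes))  # 192.168.1.0/24 wins now
print(longest_prefix_match("192.168.0.10", routes))  # lower half still on /23
```

                                              Real routers do this in hardware per packet, of course; the sketch only shows why the more specific /24 hijacks half of the /23.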

                                              • disqard a day ago

                                                Thank you for that explanation!

                                              • mattdeboard a day ago

                                                it means their servers were unreachable due to network misconfig.

                                              • colechristensen a day ago

                                                The details and promptness of reporting are much appreciated and build trust, so thanks!

                                                • tayo42 a day ago

                                                  I was kind of surprised to see details like that in a comment, but I clicked on your personal website and see you're a co-founder, so I guess no one is going to reprimand you lol

                                                • giancarlostoro a day ago

                                                  Any chance you guys could do write ups on these incidents similar to how CloudFlare does? For all the heat some people give them, I trust CloudFlare more with my websites than a lot of other companies because of their dedication to transparency.

                                                  • l1n a day ago

                                                    We're considering this!

                                                    • giancarlostoro a day ago

                                                      I already love the product, and I think it would be great to see. Even if it's not as quick as Cloudflare's (they post ASAP, it's insane) I would still be happy to see postmortem threads. We all learn industry-wide from them.

                                                  • nickpeterson a day ago

                                                    The one time you desperately need to ask Claude and it isn’t working…

                                                    • dan_wood a day ago

                                                      Can you divulge more on the issue?

                                                      Only curious as a developer and dev op. It's all quite interesting where and how things go wrong especially with large deployments like Anthropic.

                                                    • dgellow a day ago

                                                      Hope you have a good rest of your weekend

                                                      • Chance-Device a day ago

                                                        Thank you for your service.

                                                        • g-mork a day ago

                                                          it's still down get back to work

                                                        • irishcoffee a day ago

                                                          I’m imagining a steampunk dystopia in 50 years: “all world production stopped, LLM hosting went down. The market is in free-fall. Sam, are you there?”

                                                          Man that cracks me up.

                                                          • lxgr a day ago

                                                            Everybody using the same three centralized inference providers? That would be as absurd and unrealistic as everybody hosting in us-east-1 and behind Cloudflare today!

                                                            • adonovan a day ago

                                                              “A lone coder, trained in the direct manipulation of symbols—an elegant weapon from a more civilized age—is now all that stands between humanity and darkness.” etc

                                                              • michelsedgh a day ago

                                                                Just like the internet, or Cloudflare going down?

                                                                • irishcoffee a day ago

                                                                  No, not even close

                                                                  • patcon a day ago

                                                                    Agreed. When cloudflare (ugh, aka the internet) goes down, we can't access information to think and work through. ("the fuel" in some metaphor)

                                                                    But what about when LLMs go down and a good chunk of a whole generation won't even know how to think, when the remote system goes down? (Is the ability to think "the engine" of self and agency in this metaphor?)

                                                                    We are building a wildly irresponsible context to exist in.

                                                                    • semi-extrinsic a day ago

                                                                      E. M. Forster would like a word.

                                                                    • bdangubic a day ago

                                                                      it is much worse, I forgot how to push to remote so deploys are delayed :)

                                                                  • jsight a day ago

                                                                    I remember hearing Karpathy refer to these outages as a worldwide "intelligence brownout".

                                                                    Crazy: https://www.youtube.com/shorts/SV4DMqAJ8RQ

                                                                    • cdelsolar a day ago

                                                                      Claude code cut me off a few days ago and I _seriously_ had no idea what to do. I’ve been coding for 33 years and I suddenly felt like anything I did manually would be an order of magnitude slower than it had to be.

                                                                      • cantalopes a day ago

                                                                        You can always use Gemini CLI, it's pretty good

                                                                        • prmph a day ago

                                                                          Nah, I've now resolved to never use it, it's a total time waster. When it works it's decent, but it does not work like half of the time for me.

                                                                        • bdangubic a day ago

                                                                          you can’t say things like this on HN these days :)

                                                                        • teaearlgraycold a day ago

                                                                          The nice thing is unlike Cloudflare or AWS you can actually host good LLMs locally. I see a future where a non-trivial percentage of devs have an expensive workstation that runs all of the AI locally.

                                                                          • breatheoften a day ago

                                                                            I'm more and more convinced of the importance of this.

                                                                            There is a very interesting thing happening right now where the "llm over promisers" are incentivized to over promise for all the normal reasons -- but ALSO to create the perception that the next breakthrough will only be applicable when run on huge cloud infra, such that running locally is never going to be all that useful. I tend to think that will prove wildly wrong, and that we will very soon arrive at a world where state-of-the-art LLM workloads are massively more efficiently runnable than they currently are -- to the point of not even being the bottleneck of the workflows that use these components. Additionally these workloads will be viable to run locally on common current_year consumer level hardware ...

                                                                            "llm is about to be general intelligence and sufficient llm can never run locally" is a highly highly temporary state that should soon be falsifiable imo. I don't think the llm part of the "ai computation" will be the perf bottleneck for long.

                                                                            • lwhi a day ago

                                                                              Is there any utility in thinking about LLM provision in terms of the electricity grid?

                                                                              I've often thought that local power generation (via solar or wind) could be (or could have been) a viable alternative to national grid supply.

                                                                          • PunchyHamster a day ago

                                                                            I'd imagine at some point the companies will just... stop publishing any open models precisely to stop that and keep people paying the subscription.

                                                                            • teaearlgraycold a day ago

                                                                              All we need is one research group somewhere in the world releasing good open models.

                                                                            • lxgr a day ago

                                                                              I’m fairly sure you can also still run computers locally and connect them to the Internet.

                                                                              • irishcoffee a day ago

                                                                                Ah, you need to buy into this dystopia wholesale. The internet is also down because the LLMs fucked up the BGP routing table, which congress agreed (at the time) should run through the LLM interface.

                                                                                Imagination, either the first or last thing to die in 2075.

                                                                                • lxgr a day ago

                                                                                  Congress administrating BGP? Now we’re talking dystopia!

                                                                                  • irishcoffee a day ago

                                                                                    “Hey folks, did you know in 100 years you can’t just call the town doc? Nah, you need to go get a referral. No, for real. Yeah, yeah, that is in fact a compound fracture. I can’t treat it without a referral. Congress made the rules.”

                                                                                    Is it so different?

                                                                              • colordrops a day ago

                                                                                What's the best you can do hosting an LLM locally for under $X? Let's say $5000. Is there a reference guide online for this? Is there a straight answer or does it depend? I've looked at the Nvidia Spark and high-end professional GPUs but they all seem to have serious drawbacks.

                                                                                • teaearlgraycold a day ago

                                                                                  I’m cheating your budget a bit, but for $5600 you can get an M3 Ultra with 256GB of RAM.

                                                                                  • cft a day ago

                                                                                    • colordrops a day ago

                                                                                      That's nice, thank you, I've joined and will follow. They don't seem to have a wiki or about page that synthesizes the current state of the art though.

                                                                                  • exe34 a day ago

                                                                                    I think it's possible, but the current trend is that by the time you can run level x at home, the frontier models are 10-100x ahead, so if you can run today's Claude.ai at home, then software engineering as a career is already over.

                                                                                    • teaearlgraycold a day ago

                                                                                      You can run quite powerful models at home on a maxed out Mac Studio. The difference between those and SoTA is more like 2x.

                                                                                      • pstuart a day ago

                                                                                        My poorly informed hope is that we can have a mixture of experts with highly tuned models on areas of focus. If I'm coding in language Foo, I only care about a model that understands Foo and its ecosystem. I imagine that should be self-hostable now.

                                                                                        • tsimionescu a day ago

                                                                                          A model that only understands, say, Java is useless: you need a model that understands English and some kind of reasoning and has some idea of how the human world works, and also knows Java. The vast majority of the computational effort is spent on the former; the Java knowledge is almost an afterthought. So, a model that can only program in Java is not going to be meaningfully smaller than a model that can program in ~all programming languages.

                                                                                          • exe34 a day ago

                                                                                            my suspicion is that this is not how intelligence works. creativity comes from cross breeding ideas from many domains.

                                                                                            • pstuart 14 hours ago

                                                                                              Sure, but in the context I was considering, creativity itself wasn't a concern.

                                                                                              For coding, creativity is not necessarily a good thing. There are well-established patterns, algorithms, and applications that could reasonably be construed as "good enough" to assist with the coding itself. Adding a human-language model over that to understand the user's intents could be considered an overlay on the coding model.

                                                                                              I confess that this is willful projection of my hope to be able to self-host agents on affordable hardware. A frontier model on powerful hardware would always be preferable but sometimes "good enough" is just that.

                                                                                              • exe34 2 hours ago

                                                                                                I want to self-host too, but I've spent the last few weeks playing with Claude code on my hobby projects - it solves abstract problems with code, and gives actionable reviews, whereas qwen code with qwen3-coder-480 seems to just write simple code and gives generic feedback.

                                                                                        • cft a day ago

                                                                                          That's the only future of open source that I can see.

                                                                                          • szundi a day ago

                                                                                            Those good ones are not even close though - or are they

                                                                                          • PunchyHamster a day ago

                                                                                            "We vibe coded the problem into existence but now the LLM is down we can't vibe fix it"

                                                                                            • sergiotapia a day ago

                                                                                              Like if electricity went out, no? Same deal.

                                                                                              • irishcoffee a day ago

                                                                                                Imagination.

                                                                                                “So sorry, we can’t read the WiFi signal from your generator. We don’t know how much to tax you for the emissions. Sadly, we can’t allow it to start.”

                                                                                                The story writes itself.

                                                                                            • Stratoscope a day ago

                                                                                              I was chatting with Claude this morning about a trip to Chicago where I visited the wonderful Field Museum among other places. An excerpt:

                                                                                              Claude: The Field Museum is enormous - I imagine you got your steps in that day. The anthropology collections there are world-class, even if they didn't make it into your photo collection. Sometimes the experience is more important than the documentation. Did you have a favorite exhibit beyond the dinosaurs?

                                                                                              Me: Oddly enough, the anthropology and human history section! Now I need to take another visit to Chicago to get pictures of those.

                                                                                              [No response from Claude]

                                                                                              [40 minutes pass while I have breakfast]

                                                                                              Me: Looks like I caught you during your morning nap. Every two and a half year old should get a good nap now and then!

                                                                                              Claude: Ha! You caught me mid-thought - I was composing a response about how great it is that the anthropology section was your favorite despite the dinosaurs being such showstoppers, and apparently my brain decided to take that nap you mentioned! ... Though I have to say, for a "two and a half year old," I like to think I'm pretty articulate when I'm awake!

                                                                                              • tgtweak 20 hours ago

                                                                                                There really should be an HTTP header dedicated to "outage status" with a link to the service outage details page... clients (for example, in this case, your code IDE) could intercept this and notify users.

                                                                                                503 is cool and yes, there is the "well if it's down how are they going to put that up" objection, but in reality most downtimes you see are on the backend and not on the reverse proxies/gateways/CDNs, where it would be pretty trivial to add an issues/status header with a link to the service status page and a note.
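                                                                                                A client-side sketch of the idea; note that the header names here are purely hypothetical, since nothing standardized exists for this:

```python
def outage_notice(status_code, headers):
    """If the service advertises an outage via (hypothetical) response
    headers, return a message a client or IDE could surface to the user."""
    link = headers.get("X-Status-Page")   # hypothetical header name
    note = headers.get("X-Outage-Note", "service reported an incident")
    if status_code in (503, 529) and link:
        return f"Upstream outage: {note} (details: {link})"
    return None  # not an outage, or the service didn't say anything

print(outage_notice(529, {
    "X-Status-Page": "https://status.anthropic.com",
    "X-Outage-Note": "elevated errors on the API",
}))
print(outage_notice(200, {}))  # None - nothing to report
```

                                                                                                The gateway only needs to attach the headers; clients that don't know about them ignore them, which is what makes the idea cheap to roll out.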

                                                                                                • embedding-shape 19 hours ago

                                                                                                  Something something if everyone used HATEOAS this wouldn't have been a problem something something

                                                                                                • sebastiennight a day ago

                                                                                                  In the Claude.ai chat, this was announced to me as

                                                                                                      "You have reached the messages quota for your account. It will reset in 2 hours, or you can upgrade now"
                                                                                                  
                                                                                                  Either I have perfect timing for reaching my quota limits, or some product monetization manager deserves a raise.

                                                                                                  • manquer a day ago

                                                                                                    More likely that error handling is not well implemented - i.e. either the backend is not throwing the equivalent of 429/402 errors, or the gateway is mishandling them and returns this message even though a 429 is being thrown.
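A minimal sketch of the gateway-side mapping the parent describes, assuming made-up backend error names (these are not Anthropic's actual internals): each distinct failure maps to its own status and message, and anything unrecognized falls back to a generic 500 rather than a quota notice.

```python
# Illustrative error-to-response mapping at a gateway; the backend
# error names are invented for the example.

ERROR_RESPONSES = {
    "quota_exhausted": (429, "You have reached the messages quota for your account."),
    "payment_required": (402, "Please upgrade your plan to continue."),
    "overloaded": (529, "The service is temporarily overloaded, try again shortly."),
}

def map_error(backend_error: str) -> tuple[int, str]:
    # Unknown failures must surface as a server error, never as a
    # quota/upgrade message.
    return ERROR_RESPONSES.get(backend_error, (500, "Something went wrong on our end."))
```

The bug described upthread corresponds to the fallback branch being wired to the quota message instead of a 500.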

                                                                                                    • manuisin a day ago

                                                                                                      This sort of thing keeps me quite skeptical of AI. ChatGPT also has nonsensical error messages for random failures; Gemini too. These companies have infinite compute, and yet they haven't been able to implement reliable, graceful error handling for a chat app in 2+ years? Why are they promising us they can replace all developers?

                                                                                                    • frankdenbow a day ago

                                                                                                      I ran into the same thing; I thought it was just timing

                                                                                                    • michelsedgh a day ago

                                                                                                      If they shut down opus 4.5 I'll cry

                                                                                                      • agumonkey a day ago

                                                                                                        I've already heard people ask for more API credits, embarrassed like drug addicts

                                                                                                        • XCSme a day ago

                                                                                                          Just a few more credits and it will finally fix that bug without introducing new ones, exactly how I asked

                                                                                                          • baobabKoodaa a day ago

                                                                                                            I can stop any time I want, and in fact I am going to stop. Just one more (bug)fix.

                                                                                                            • michelsedgh a day ago

                                                                                                              This joke is getting kinda old. Opus 4.5 handles all the bugs in one go and doesn't introduce new ones, at least for me. Very rarely do I get stuck with it like I did with past generations of AI

                                                                                                              • agumonkey a day ago

                                                                                                                How long is the usual self-debugging cycle? It seems to be around 10 minutes for me (untyped language)

                                                                                                        • teaearlgraycold a day ago

                                                                                                          I think we’re all very happy with the pricing on it.

                                                                                                          • tcdent a day ago

                                                                                                            I use it as much as my brain can handle and I never exceed my Max plan quota.

                                                                                                            • AnotherGoodName a day ago

                                                                                                              Just a warning for those not on the Max plan: if you pay by the token or are on the lower-tier plans, you can easily blow through hundreds of dollars, or cap your plan, in under an hour. The per-token rates are insane, and the scaling from Pro to Max is also pretty crazy.

                                                                                                              They made Pro many times better value than paying per token, and then Max, on the $200 plan, gives 25x more tokens than Pro.

                                                                                                              It's a bit like being offered rice at $1 per grain (pay per token), a tiny bag of rice for $20 (Pro), or a truckload for $200 (Max). That's the pricing structure right now.

                                                                                                              So while I agree you can't easily exceed the quota on the big plans, it's a little crazy how they've tiered the pricing. I hope no one out there is paying per token!

                                                                                                              • KronisLV 18 hours ago

                                                                                                                They should publish the token limits, not just talk about conversations or what average users can expect: https://support.claude.com/en/articles/11145838-using-claude...

                                                                                                                For comparison’s sake, this is clear: https://support.cerebras.net/articles/9996007307-cerebras-co...

                                                                                                                And while the Cerebras service is pretty okay, their website otherwise kinda sucks - and yet you can find clear info!

                                                                                                                • square_usual a day ago

                                                                                                                  > I hope no one out there’s paying per token!

                                                                                                                  Some companies are. Yes, for Claude Code. My company used to be like that, since it's an easy on-ramp compared to giving devs who might not use it much a $150/mo seat; if you use it enough, you can get a seat and save money, but if you're not touching $150 in credits a month, just use the API. Oxide also recommends using API pricing. [0]

                                                                                                                  0: https://gist.github.com/david-crespo/5c5eaf36a2d20be8a3013ba...

                                                                                                                  • tcdent a day ago

                                                                                                                    Oh yeah, totally, my bill used to be closer to $1000/mo when paying per token.

                                                                                                                    • cmrdporcupine a day ago

                                                                                                                      Yeah well, wait til they take it away

                                                                                                                    • michelsedgh a day ago

                                                                                                                      Exactly, I feel like my brain burns out after a few days. Like I'm the limit already (yet I'm also the maximizer), it's a very weird feeling

                                                                                                                • termos 2 days ago

                                                                                                                  https://canivibe.ai/

                                                                                                                  So we can maybe vibe, depending on what service we use.

                                                                                                                  • giancarlostoro a day ago

                                                                                                                    Nice website, embeds poorly on Discord and other chat apps sadly.

                                                                                                                    • bonesss a day ago

                                                                                                                      Vibedetector

                                                                                                                      • ares623 a day ago

                                                                                                                        We need a service that rates vibe coding capabilities. A "vibe rater".

                                                                                                                      • iLoveOncall a day ago

                                                                                                                        Wow, 89% availability is a joke

                                                                                                                      • m_ke a day ago

                                                                                                                        Was it just me, or did Opus start producing incredibly long responses before the crash? I was asking basic questions and it wouldn't stop trying to spit out full codebases' worth of unrelated code. For some very simple questions about database schemas, it ended up compacting twice in a 3-message conversation.

                                                                                                                        • tgtweak 20 hours ago

                                                                                                                          I trust companies that immediately and regularly update their status/issues page and follow up any outages with proper and comprehensive post-mortems. Sadly this is becoming the exception these days and not the norm.

                                                                                                                          • abigail95 a day ago

                                                                                                                            It's Monday morning, I'm going back to bed

                                                                                                                            • Tom1380 a day ago

                                                                                                                              Australia?

                                                                                                                              • abigail95 a day ago

                                                                                                                                Yes, and for political reasons I'm also taking the day off; this is just another excuse.

                                                                                                                            • 6r17 a day ago

                                                                                                                              It seems resolved now (per the status page). I experienced a moment where the agent got stuck in the same error loop, only to pop out the result this time. Makes me wonder if some kind of rule was added to automatically detect that failure recurring. Quite inspiring work

                                                                                                                              • __0x01 a day ago

                                                                                                                                Engineering Room, panning over a bunch of hot Blackwells

                                                                                                                                "I can't change the laws of physics!"

                                                                                                                                • russellthehippo a day ago

                                                                                                                                  Anthropic is very focused on AI safety. It makes LLMs safe by keeping anyone from using them

                                                                                                                                  • victor9000 a day ago

                                                                                                                                    It's the best way to ensure model wellness

                                                                                                                                  • llmthrow0827 a day ago

                                                                                                                                    I used Haiku with Claude Code during the outage, and was surprised at how well it did. I'm going to try mixing it in more to save usage credits.

                                                                                                                                    • throwaway613745 a day ago

                                                                                                                                      Haiku is fantastic for simple answers and one off tasks. Then I switch to Opus for anything “serious”.

                                                                                                                                      I don’t even bother with Sonnet anymore, it’s been made obsolete by Opus 4.5.

                                                                                                                                    • flowinghorse a day ago

                                                                                                                                      Actually when the outage happened, my first action was to check Cloudflare status.

                                                                                                                                      • triwats a day ago

                                                                                                                                        I had an hour to vibe tonight and it looks like it may have gone.

                                                                                                                                        Spent it in bloody Figma instead :(

                                                                                                                                        • jcims a day ago

                                                                                                                                          Anyone know if Claude via Amazon bedrock was impacted?

                                                                                                                                          AFAIK it shouldn’t have been.

                                                                                                                                          • theropost a day ago

                                                                                                                                            Just came back online here

                                                                                                                                            • frankdenbow a day ago

                                                                                                                                              I got lucky and this was in my timeout window

                                                                                                                                              • WhyOhWhyQ a day ago

                                                                                                                                                Didn't notice. Guess I'm legit.

                                                                                                                                                • matt3210 a day ago

                                                                                                                                                  When vibe coders do the infra

                                                                                                                                                  • asasidh 2 days ago

                                                                                                                                                    "We have identified that the outage is related to Sonnet 4.0, Sonnet 4.5, and Opus 4.5."

                                                                                                                                                    What else are people using? Haiku 4.5?

                                                                                                                                                    • epolanski a day ago

                                                                                                                                                      You made me try Haiku since I couldn't get Opus, and it made me realize how much quicker feedback simplifies many tasks. I should be more dynamic in my model selection.

                                                                                                                                                      • riwsky a day ago

                                                                                                                                                        I heard that Google and OpenAI also make coding models, but I’ve never bothered to confirm.

                                                                                                                                                        • gunalx a day ago

                                                                                                                                                          Haiku 4.5 is a pretty decent small-ish model. It conforms pretty well to my style guides when cleaning up text, for example.

                                                                                                                                                          • nunodonato a day ago

                                                                                                                                                            I do. It's quite a nice and fast model

                                                                                                                                                            • asasidh a day ago

                                                                                                                                                              Me too, that's the reason I mentioned it there.

                                                                                                                                                          • delaminator 2 days ago

                                                                                                                                                            Weird, because I'm using Sonnet right now. I guess my time is limited

                                                                                                                                                            • onionisafruit a day ago

                                                                                                                                                              I've been using it through this and it occasionally stops with an error message saying something like "repeated 529 responses". Kind of annoying but it's fine.
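One way a client can ride out intermittent 529 ("overloaded") responses is exponential backoff before giving up, a sketch of which is below. Here `send` stands in for whatever request function the client actually uses, and the retry counts and delays are arbitrary choices, not any tool's real defaults.

```python
import time

def with_backoff(send, max_retries: int = 4, base_delay: float = 1.0):
    """Call send() until it returns a non-529 status or retries run out.

    send() is assumed to return a (status_code, body) tuple.
    """
    for attempt in range(max_retries + 1):
        status, body = send()
        if status != 529:
            return status, body
        if attempt < max_retries:
            # wait 1s, 2s, 4s, 8s... between successive attempts
            time.sleep(base_delay * (2 ** attempt))
    # all retries exhausted; surface the last 529 to the caller
    return status, body
```

A "repeated 529 responses" error like the one above corresponds to exhausting the retry budget and surfacing the failure instead of looping forever.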

                                                                                                                                                              • sgt a day ago

                                                                                                                                                                Maybe you're just using the cached Sonnet.

                                                                                                                                                              • Jsuh a day ago

                                                                                                                                                                opus 4.5 is the truth

                                                                                                                                                                • TechDebtDevin a day ago

                                                                                                                                                                  They'll say Claude hacked them and escaped its environment to scare normies, or something dumb like they always say.

                                                                                                                                                                  • aj7 a day ago

                                                                                                                                                                    Isn’t that an AWS outage?

                                                                                                                                                                    • throwaway613745 a day ago

                                                                                                                                                                      Claude being down is the new XKCD Compiling.

                                                                                                                                                                      • rvz a day ago

                                                                                                                                                                        Anthropic is surpassing GitHub on unreliability.

                                                                                                                                                                        Looking forward to the post-mortem.

                                                                                                                                                                        • edverma2 a day ago

                                                                                                                                                                          time to go outside

                                                                                                                                                                          • bitwize a day ago

                                                                                                                                                                            And just like that, the brightest engineers in Silicon Valley were unable to get any programming done.

                                                                                                                                                                            • acedTrex a day ago

                                                                                                                                                                              An overall net positive event.

                                                                                                                                                                              • tom_ a day ago

                                                                                                                                                                                Perhaps related to https://news.ycombinator.com/item?id=46266655 ? - it's just too powerful, and they had to shut it down before something bad happened.