• rtsil 2 days ago

    > “They didn’t listen to A.I. when A.I. told them things they didn’t agree with,” Dr. Rodman said.

    Replace "A.I." with "patient", and it's an all too familiar experience.

    • throwaway4220 2 days ago

      Tale as old as time, sadly, although it takes a lot of skill to sift through a patient's full history to get to the nuggets.

      Not a bad role for AI. Then again, you miss the chance to show empathy (I say this as a radiologist who sees maybe one patient a day, so take it with a huge grain of salt).

      • Spivak 2 days ago

        I'm always kind of annoyed that, in most cases, patients aren't part of their own diagnostic team. Sure, they aren't doctors, but they are the experts on their own symptomatology: they have the unique experience of seeing the problem from the inside, and they're monitoring the patient 24/7.

        It's why the DSM in particular is so frustrating: the diagnostic criteria are written for how a condition presents from the outside. That isn't inherently a bad thing, but there's a huge amount of overlap between conditions, which makes the criteria imprecise. The criteria from the perspective of the sufferer are far, far more specific.

        • viraptor 2 days ago

          The DSM is a tricky one. That's one area where the patient's idea of their symptoms may be especially affected by the symptoms themselves. Many reports will align, but you wouldn't want, for example, a bipolar person in a manic episode to tell you whether they're having problems, because they're going to be feeling amazing. Conversely, ADHD diagnosis in many places relies on input from other people, because patients are known to downplay their symptoms.

          > from the perspective of the sufferer is far far more specific

          So yeah, that's... sometimes true.

      • maeil 7 hours ago

        > A.I. systems should be “doctor extenders,” Dr. Rodman said, offering valuable second opinions on diagnoses.

        Clearly not. These results show that most of the time doctors should be A.I. extenders, offering valuable second opinions on diagnoses.

        It tells you everything that he's "shocked", and that despite the shock he still maintains the above, in keeping with the cognitive dissonance. Many of us with enough experience of modern healthcare could see this coming from miles away, and would have found the opposite result (doctors beating GPT on average) shocking.

        • julienchastang 2 days ago

          The most fascinating part of the article is at the end:

          > Dr. Chen said he noticed that when he peered into the doctors’ chat logs, “they were treating it like a search engine for directed questions: ‘Is cirrhosis a risk factor for cancer? What are possible diagnoses for eye pain?’”

          > “It was only a fraction of the doctors who realized they could literally copy-paste in the entire case history into the chatbot and just ask it to give a comprehensive answer to the entire question,” Dr. Chen added. “Only a fraction of doctors actually saw the surprisingly smart and comprehensive answers the chatbot was capable of producing.”
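          As a rough sketch of the gap Dr. Chen describes, assuming the OpenAI Python client (the model name, prompts, and case text here are illustrative placeholders, not the study's actual setup):

          ```python
          # Two ways of using the chatbot, per Dr. Chen's observation.
          from openai import OpenAI

          client = OpenAI()  # reads OPENAI_API_KEY from the environment

          # Style 1: treating it like a search engine for directed questions.
          directed = client.chat.completions.create(
              model="gpt-4",
              messages=[{"role": "user",
                         "content": "Is cirrhosis a risk factor for cancer?"}],
          )

          # Style 2: pasting the entire case history and asking for a
          # comprehensive answer to the whole question.
          case_history = "<full case vignette: history, exam findings, labs>"
          comprehensive = client.chat.completions.create(
              model="gpt-4",
              messages=[{"role": "user",
                         "content": case_history
                         + "\n\nGive a ranked differential diagnosis for this "
                           "case and explain your reasoning."}],
          )
          print(comprehensive.choices[0].message.content)
          ```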

          • adxl 2 days ago

            I beat doctors at diagnosing family members. It's not hard; many doctors are terrible at diagnosis.

            • jimmytucson 2 days ago

              I was lucky enough to be in undergrad with Dr. Rodman. One of the kindest and most intellectually honest people I’ve ever met. Passionate about ideas but at the same time, emotionally uninvested in bad ones.

              He was a history major before he went on to study medicine, and he now does a podcast on the history of medicine called Bedside Rounds. He gets really excited when talking about something he finds interesting and it makes you want to follow him down the rabbit hole. Highly recommend listening at half speed: http://bedside-rounds.org/

              • maeil 6 hours ago

                Good to hear that he's so kind, but he has clearly spent too much time inside the medical world if he's shocked by this result and, in the face of it, still maintains that GPT should be the "second opinion" to the doctor rather than the other way around, when the result clearly shows patients would benefit directly from GPT giving the first opinion.

              • mgh2 2 days ago

                https://archive.is/QT7QH#selection-659.0-659.52

                A sample size of 50 is not very reliable.

                • latexr a day ago

                  To the article’s credit, it says right in the subtitle that it was a small study.

                  To its detriment, “defeated” in the title is clickbait. It reminded me of those YouTube videos: “Liberal doctors DESTROYED by ChatGPT”.

                  • onecommentman 2 days ago

                    50 samples gets you past “intriguing” and into “we need to be doing something, maybe a large trial first, but something” territory.

                    The result that human+AI is a little better than the human alone, but AI alone is much better, matches the experience with chess engines, where grandmaster+AI was a little better than the grandmaster alone but the engine by itself is the strongest.

                  • wumeow 2 days ago

                    > How, then, do doctors diagnose patients? The problem, said Dr. Andrew Lea, a historian of medicine at Brigham and Women’s Hospital who was not involved with the study, is that “we really don’t know how doctors think.” In describing how they came up with a diagnosis, doctors would say, “intuition,” or, “based on my experience,” Dr. Lea said.

                    I was a big fan of the show House as a kid, and I remember being blown away when I learned that the “Department of Diagnostic Medicine” was made up for the show and not a standard department in every large hospital.

                  • Pigalowda 2 days ago

                    > That test case involved a 76-year-old patient with severe pain in his low back, buttocks and calves when he walked. The pain started a few days after he had been treated with balloon angioplasty to widen a coronary artery. He had been treated with the blood thinner heparin for 48 hours after the procedure. The man complained that he felt feverish and tired. His cardiologist had done lab studies that indicated a new onset of anemia and a buildup of nitrogen and other kidney waste products in his blood. The man had had bypass surgery for heart disease a decade earlier. The case vignette continued to include details of the man’s physical exam, and then provided his lab test results. The correct diagnosis was cholesterol embolism — a condition in which shards of cholesterol break off from plaque in arteries and block blood vessels.

                    Strange answer.

                    • viraptor 2 days ago

                      Why strange?

                      • Pigalowda 2 days ago

                        In my opinion the answer doesn't seem parsimonious, and the question stem doesn't allow much gamesmanship.

                        The patient has post-procedure back pain and lower-extremity pain (likely femoral access for his angioplasty) and acute kidney injury. Normally I would go for a few things, such as an iatrogenic complication like aortic dissection from the angiocath.

                        His anemia, back pain, AKI, and malaise are also worrisome for retroperitoneal hemorrhage, especially with the recent heparin anticoagulation and catheterization.

                        I might also think of heparin-induced thrombocytopenia and get a HIT panel given the recent heparinization and anemia. Although I'm pretty sure it's not this, info for it was given in the stem.

                        The patient already had a CABG, so he's got multivessel coronary artery disease, and now he's also had a POBA. Does he also have ischemic cardiomyopathy? It wouldn't be a surprise; acute decompensation could cause his AKI via a renal perfusion deficit.

                        And lastly, back and extremity pain with renal failure after an angio sounds like the catheter disrupted a plaque somewhere in the descending aorta and then showered the kidneys and lower extremities with emboli.

                        Anyway, I'm a radiologist, not a clinician, so I'm sure there are plenty of things I'm not thinking of. Plain old multifocal cholesterol embolism must be more common than I thought. So maybe strange for me, but not for GPT or others.

                    • smrtinsert 2 days ago

                      We'll probably have to wait for a generational switch before we see doctors effectively leveraging AI.

                      • onecommentman 2 days ago

                        Or hospitals refusing to support the doctor in malpractice suits unless the MD can show they consulted Dr. Silicon during the diagnostic process. Include AI consults in hospital-level clinical standards guidelines. Require AI training in CME licensing requirements.

                        • Onavo 2 days ago

                          And they better damn well check every single one of the vector database references before handing down a diagnosis.

                      • blackeyeblitzar 2 days ago

                        This doesn't surprise me. When I see a doctor in real life, they are moving patient to patient quickly and are always behind schedule. I'm sure they are also losing lots of time to submitting notes and administrative things. So they don't really have time to deeply analyze my health situation. Multiple times I've been the one to compare different test results, establish patterns or correlations, perform research, and suggest possible diagnoses or solutions to my doctor, who then double-checks my work and accepts my theory or offers an alternative. This collaborative approach has been helpful because there are many conditions where you could mistakenly match symptoms to some generic condition if you don't look at things more deeply. After all, there is a very broad set of medical conditions out there and a single doctor cannot reasonably be expected to know all of them deeply.

                        But I've also had medical professionals, particularly non-doctors (nurse practitioners, physician assistants, etc.), who are much less receptive and more fixated on their first guess, which has sometimes cost me precious time and repeated visits. The linked research finding is interesting, and I think it highlights the pitfall of professionals who believe too much in their own expertise or gut feeling even when they haven't really examined the case carefully:

                        > The chatbot, from the company OpenAI, scored an average of 90 percent when diagnosing a medical condition from a case report and explaining its reasoning. Doctors randomly assigned to use the chatbot got an average score of 76 percent. Those randomly assigned not to use it had an average score of 74 percent.

                        > The study showed more than just the chatbot’s superior performance.

                        > It unveiled doctors’ sometimes unwavering belief in a diagnosis they made, even when a chatbot potentially suggests a better one.

                        > And the study illustrated that while doctors are being exposed to the tools of artificial intelligence for their work, few know how to exploit the abilities of chatbots. As a result, they failed to take advantage of A.I. systems’ ability to solve complex diagnostic problems and offer explanations for their diagnoses.

                        • babyshake 2 days ago

                          The one time in my life I had a life-threatening illness, multiple doctors told me I had nothing to worry about and that the symptoms would go away on their own, before I was able to meet with a doctor who took my concerns seriously; an MRI then found the issue. That experience has really stuck with me, and I suspect it is very common.

                          • trogdor a day ago

                            What was the issue?