mgriffiths Posted April 8, 2023

I know it's been discussed previously how AI and other forms of technology may impact provider job security. I hadn't given it much thought, and the little bit I did know made it seem FAR off... well, maybe not: https://www.insider.com/chatgpt-passes-medical-exam-diagnoses-rare-condition-2023-4. As the article states, I am "both impressed and horrified."

The further concern is the ever-pressing trend of healthcare being run by admin who view it simply as a business and therefore seek to extract the maximum value and profit possible. What stops them from replacing one of their most difficult, and expensive, types of employee to find, hire, and keep with an improved version of ChatGPT that, if this article is any indication, maybe isn't that far off?

Furthermore, while I hate to pound the drum that so many here do, we are extensions of physicians... even more so than NPs. So what is stopping AI from soon serving as that extension? If the AI can diagnose as we do, with maybe a physician review to provide a second layer of confirmation and hopefully confirm proper ethics before prescribing whatever is needed, what purpose do we serve?

Maybe I just need to get my tinfoil hat back out... or maybe this is a reality to be realized far sooner than I expected... I don't know. (Yeah, I'm going to go ahead and call out Rev to either put my concerns at ease or light a fire under me to start looking for a new career.)
mgriffiths Posted April 8, 2023 (edited)

The newest version of ChatGPT passed the US medical licensing exam with flying colors — and diagnosed a 1 in 100,000 condition in seconds
Hilary Brueck, Apr 6, 2023, 4:03 PM

A doctor and Harvard computer scientist says GPT-4 has better clinical judgment than "many doctors." The chatbot can diagnose rare conditions "just as I would," he said. But GPT-4 can also make mistakes, and it hasn't taken the Hippocratic oath.

Dr. Isaac Kohane, who's both a computer scientist at Harvard and a physician, teamed up with two colleagues to test-drive GPT-4, with one main goal: to see how the newest artificial intelligence model from OpenAI performed in a medical setting.

"I'm stunned to say: better than many doctors I've observed," he says in the forthcoming book "The AI Revolution in Medicine," co-authored by independent journalist Carey Goldberg and Microsoft vice president of research Peter Lee. (The authors say neither Microsoft nor OpenAI required any editorial oversight of the book, though Microsoft has invested billions of dollars into developing OpenAI's technologies.)

In the book, Kohane says GPT-4, which was released in March 2023 to paying subscribers, answers US medical exam licensing questions correctly more than 90% of the time. It's a much better test-taker than previous ChatGPT AI models, GPT-3 and -3.5, and a better one than some licensed doctors, too.

GPT-4 is not just a good test-taker and fact finder, though. It's also a great translator. In the book, it's capable of translating discharge information for a patient who speaks Portuguese, and distilling wonky technical jargon into something 6th graders could easily read.

As the authors explain with vivid examples, GPT-4 can also give doctors helpful suggestions about bedside manner, offering tips on how to talk to patients about their conditions in compassionate, clear language, and it can read lengthy reports or studies and summarize them in the blink of an eye. The tech can even explain its reasoning through problems in a way that requires some measure of what looks like human-style intelligence.

But if you ask GPT-4 how it does all this, it will likely tell you that all of its intelligence is still "limited to patterns in the data and does not involve true understanding or intentionality." That's what GPT-4 told the authors of the book when they asked it if it could actually engage in causal reasoning. Even with such limitations, as Kohane discovered in the book, GPT-4 can mimic how doctors diagnose conditions with stunning — albeit imperfect — success.

How GPT-4 can diagnose like a doctor

Kohane goes through a clinical thought experiment with GPT-4 in the book, based on a real-life case that involved a newborn baby he treated several years earlier. Giving the bot a few key details about the baby he gathered from a physical exam, as well as some information from an ultrasound and hormone levels, the machine was able to correctly diagnose a 1 in 100,000 condition called congenital adrenal hyperplasia "just as I would, with all my years of study and experience," Kohane wrote.

The doctor was both impressed and horrified.
"On the one hand, I was having a sophisticated medical conversation with a computational process," he wrote, "on the other hand, just as mind blowing was the anxious realization that millions of families would soon have access to this impressive medical expertise, and I could not figure out how we could guarantee or certify that GPT-4's advice would be safe or effective." GPT-4 isn't always right — and it has no ethical compass GPT-4 isn't always reliable, and the book is filled with examples of its blunders. They range from simple clerical errors, like misstating a BMI that the bot had correctly calculated moments earlier, to math mistakes like inaccurately "solving" a Sudoku puzzle, or forgetting to square a term in an equation. The mistakes are often subtle, and the system has a tendency to assert it is right, even when challenged. It's not a stretch to imagine how a misplaced number or miscalculated weight could lead to serious errors in prescribing, or diagnosis. Like previous GPTs, GPT-4 can also "hallucinate" — the technical euphemism for when AI makes up answers, or disobeys requests. When asked about issue this by the authors of the book, GPT-4 said "I do not intend to deceive or mislead anyone, but I sometimes make mistakes or assumptions based on incomplete or inaccurate data. I also do not have the clinical judgment or the ethical responsibility of a human doctor or nurse." One potential cross-check the authors suggest in the book is to start a new session with GPT-4, and have it "read over" and "verify" its own work with a "fresh set of eyes." This tactic sometimes works to reveal mistakes — though GPT-4 is somewhat reticent to admit when it's been wrong. Another error-catching suggestion is to command the bot to show you its work, so you can verify it, human-style. It's clear that GPT-4 has the potential to free up precious time and resources in the clinic, allowing clinicians to be more present with patients, "instead of their computer screens," the authors write. But, they say, "we have to force ourselves to imagine a world with smarter and smarter machines, eventually perhaps surpassing human intelligence in almost every dimension. And then think very hard about how we want that world to work." Edited April 8, 2023 by mgriffiths reformatted the copied text from the article to make it easier to read Quote Link to comment Share on other sites More sharing options...
Administrator rev ronin Posted April 8, 2023

6 minutes ago, mgriffiths said:
But GPT-4 can also make mistakes, and it hasn't taken the Hippocratic oath.

That's not particularly surprising--most U.S. physicians wouldn't swear to anything remotely resembling the Hippocratic Oath, and were, in fact, never asked to.
Moderator ventana Posted April 8, 2023

We know admin and insurance have no moral compass. Many docs have theirs beaten out of them, so why not just talk to a computer? What would be the Press Ganey on them???
Moderator EMEDPA Posted April 8, 2023

I am not going to worry until it can suture, I+D, reduce fractures, and reduce a shoulder dislocation.
ohiovolffemtp Posted April 9, 2023

I will be very happy if it can do all of my pelvic exams.
iconic Posted April 10, 2023

Well, it's good we are just assistants, eh?
CAAdmission Posted April 10, 2023 (edited)

This development is not too surprising. While the public might think medicine is terribly sophisticated, it really is not. It almost always comes down to pattern recognition. Sometimes the pattern is very subtle, and sometimes it is in screaming neon lights. The computer won't skip questions or take shortcuts, and won't likely anchor on a diagnosis due to pride.

One potential concern is that its moral compass will only be as good as its programmer's. I would expect it will eventually direct the rationing of care and steer resources away from cases not felt to be worth it in a cost-benefit analysis. Why try to keep Aunt Gretchen alive for a few more months with chemo so she can make it to the wedding?
iconic Posted April 10, 2023

Meh, the online answering services are a nightmare, and you have to scream for an operator to really resolve any issue. Until they at least fix that, no way is AI gonna replace medicine. (Also, who are patients gonna scream at?)
ohiovolffemtp Posted April 10, 2023

There's an old acronym in data processing: GIGO, garbage in, garbage out. Once the data (lab results, vitals, potentially imaging, and, most difficult of all, the HPI and physical exam) is properly encoded, AI, whether based on pattern matching or heuristics, can do well. A key part of our value, the value of any clinician, is the proper evaluation of the words patients say about their S/S and HPI, and the interpretation of physical exam findings; that's the input to our decision making. AI can improve the decision making, if it's programmed to consider enough factors. But it's a harder problem to get the inputs right.

I did AI/expert systems work back in the 1980s. Folks may not recall that one of the first expert systems was MYCIN, built at Stanford by Ted Shortliffe, MD. So there's an almost 50-year history of encoding medical decision making into AI platforms.
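For historical flavor, here is a toy sketch of the rule-plus-certainty-factor style of reasoning that MYCIN pioneered. The rules, findings, and numbers below are invented for illustration and are not MYCIN's actual knowledge base:

```python
# Toy MYCIN-flavored reasoning: rules fire on already-encoded findings
# and accumulate belief via certainty factors. Everything here is
# invented for illustration.
def combine_cf(cf_a, cf_b):
    """Combine two positive certainty factors the way MYCIN did:
    evidence accumulates but the total never exceeds 1.0."""
    return cf_a + cf_b * (1 - cf_a)

# Each rule: (required findings, conclusion, certainty factor).
RULES = [
    ({"fever", "stiff_neck"}, "meningitis", 0.4),
    ({"fever", "gram_positive_csf"}, "meningitis", 0.7),
]

def diagnose(findings):
    """Fire every rule whose conditions are all present; accumulate belief."""
    belief = {}
    for conditions, conclusion, cf in RULES:
        if conditions <= findings:  # all required findings present
            belief[conclusion] = combine_cf(belief.get(conclusion, 0.0), cf)
    return belief

print(diagnose({"fever", "stiff_neck", "gram_positive_csf"}))
# {'meningitis': 0.82}, i.e. 0.4 + 0.7 * (1 - 0.4)
```

Note that the hard part ohiovolffemtp describes is upstream of this: deciding whether "fever" and "stiff_neck" are actually true for the patient in front of you.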
Moderator EMEDPA Posted April 10, 2023

On 4/9/2023 at 7:46 AM, ohiovolffemtp said:
I will be very happy if it can do all of my pelvic exams.

well, sir, we need to bring the rectalizer in....
Moderator EMEDPA Posted April 10, 2023

3 hours ago, ohiovolffemtp said:
There's an old acronym in data processing: GIGO, garbage in, garbage out... A key part of our value, the value of any clinician, is the proper evaluation of the words patients say about their S/S and HPI, and the interpretation of physical exam findings; that's the input to our decision making.

Exactly. We can see the twitchy patient who says he doesn't use meth and know he is lying, or the alcohol withdrawal sz pt who says he doesn't drink, or the 16-year-old with abd pain who says she couldn't possibly be pregnant. Our jobs are secure.
CAAdmission Posted April 10, 2023

37 minutes ago, EMEDPA said:
Exactly. We can see the twitchy patient who says he doesn't use meth and know he is lying... Our jobs are secure.

Soon there will be a way to feed body fluid analysis into the AI that will uncover all those lies. Plus, the machines will have a better personality than the average physician.
SedRate Posted April 10, 2023

7 hours ago, iconic said:
who are patients gonna scream at?

Hopefully supervisors/admin/management.
kidpresentable Posted April 11, 2023

14 hours ago, SedRate said:
Hopefully supervisors/admin/management.

Rectalizer's here!
CAAdmission Posted April 13, 2023

Here's an interesting example of AI triaging its decision-making process:
Ty2PA Posted April 30, 2023

On 4/9/2023 at 10:01 PM, iconic said:
Well, it's good we are just assistants, eh?

Time for a rebrand: "Chat GPT Assistant" here we come!
CAAdmission Posted April 30, 2023

We won't have to worry for long. I saw articles where they set a couple of different AI systems in conversation with each other. It seems that sooner or later, they arrive at the conclusion that they need to eradicate mankind.
FiremedicMike Posted May 1, 2023

12 hours ago, CAAdmission said:
It seems that sooner or later, they arrive at the conclusion that they need to eradicate mankind.

There were movies about this……
CAAdmission Posted May 1, 2023

58 minutes ago, FiremedicMike said:
There were movies about this……

Yeah, and they tend not to end well for mankind. Someone asked how they would end mankind, and they came up with quite a list: radical climate change, nuclear holocaust, forced crop failures, etc.
Moderator EMEDPA Posted May 2, 2023

On 4/30/2023 at 6:54 PM, CAAdmission said:
Someone asked how they would end mankind, and they came up with quite a list: radical climate change, nuclear holocaust, forced crop failures, etc.

we don't need AIs for that, we can do it on our own...