GPT-4 (Generative Pre-trained Transformer 4) is OpenAI's successor to the models behind ChatGPT. It was released on March 14, 2023 to a limited audience. Jaap Arriens/NurPhoto via Getty Images
- GPT-4 is the latest artificial intelligence technology released by OpenAI.
- It is more advanced than GPT-3 and can help translate, summarize, and process medical information.
- Technologists say it can help save lives, but it shouldn't be used without human supervision.
GPT-4 is the latest and most advanced version of the AI model from OpenAI, the maker of the hugely successful ChatGPT, and doctors say it could upend medicine as we know it.
We already knew that the previous versions, GPT-3 and GPT-3.5, could score reliably on the MCAT; now experts say GPT-4 can also save human lives in the real world by helping treat emergency-room patients quickly and skillfully.
In the forthcoming book "The AI Revolution in Medicine: GPT-4 and Beyond," available as an eBook on April 15 and in print on May 3, a Microsoft computer scientist, a doctor, and a journalist team up to test-drive GPT-4 and probe its medical capabilities. (Microsoft has invested billions of dollars in OpenAI, though the authors say the book was written with independent editorial oversight.)
The three authors, Peter Lee, vice president of research at Microsoft; journalist Carey Goldberg; and Isaac Kohane, a computer scientist and physician at Harvard University, say the new AI, currently available only to paying users, is more advanced and less prone to absurd answers than previous chatbots. It is so good at digesting, translating, and synthesizing information, they say, that it could be used in emergency rooms to save time and lives starting today.
"We now need to start understanding and discussing the potential benefits and disadvantages of AI," the book's authors urge. In fact, they suggest, whether we know it or not, it may already be used in some medical settings.
How GPT-4 could save lives
In this photo from Friday, May 6, 2016, resident physician Dr. Cameron Collier briefs a group of residents and medical students while making patient rounds. Gerald Herbert/AP Images
At the beginning of the book, the authors offer a hypothetical, but entirely plausible, interaction between a medical resident and GPT-4 as evidence that the technology will almost certainly soon be used by doctors and patients.
First comes the imaginary patient: a man in critical condition, his heart rate soaring, his blood pressure plummeting, his face turning pale and then blue as he gasps for air. His care team pushes "one syringe after another" into his IV, trying to raise his blood pressure and improve his heart function, but nothing seems to work.
A second-year resident pulls out her phone and opens the GPT-4 app to ask the AI for advice. She explains to the bot that the patient is "not responsive" to blood-pressure support, mentioning that he was recently treated for a bloodstream infection. Finally, she pleads with the AI: "I don't know what's going on and I don't know what to do."
The bot immediately replies with a coherent paragraph explaining why the patient might have crashed, citing recent relevant research and recommending an infusion to boost his white blood cells. The resident realizes the AI is suggesting the patient may be developing life-threatening sepsis; if so, he needs that medicine, fast.
The resident quickly orders the AI-suggested infusion from the hospital pharmacy and then, critically, scrutinizes what the bot has told her, typing "show me" into her phone.
"Somehow, she felt like a benevolent mentor and servant, holding almost all the world's medical knowledge, holding her hand," the author imagines in the book. After the resident prescribes the patient, she again uses AI to automate the paperwork required for insurance, which saves time considerably.
"The impact will be so broad and profound in almost any way you can think of it, from diagnostics to medical records to clinical trials, that we think we need to start thinking now about what we can do to optimize it," the book says of GPT-4.
In recent weeks, other experts have expressed similar excitement and fear about the prospect of AI being used in various fields of medicine.
"This is indeed a very exciting time for the medical community, and the word 'revolution' is becoming a reality," physician Eric Topol wrote on his blog while commenting on the new book.
GPT-4 is not always reliable in healthcare settings
GPT-4 may sound like the future of medicine, but there's a catch: it still makes mistakes, sometimes slipping subtle errors into otherwise sound medical advice. Experts stress that it should never be used without human supervision.
The AI's wrong answers "almost always seem correct," the book says, and can come across as persuasive and reasonable to an untrained person, yet they could end up harming the patient.
The book is full of examples of GPT-4's mistakes. The authors note that GPT-4 still makes things up when it doesn't know what to say.
"It's smarter and dumber than anyone you've ever met," they wrote.
GPT-4 also makes clerical errors, such as transcribing information incorrectly, and sometimes gets basic math outright wrong. Because GPT-4 is a machine-learning system that was not explicitly programmed by humans, it is impossible to know exactly when and why these problems occur.
To help catch these failures, the authors suggest cross-checking: asking GPT-4 to review its own work, a tactic that sometimes uncovers mistakes. Another is to ask the bot to show its work so you can verify its calculations the way a human would, or to ask it to cite the sources behind its recommendations, much as the resident does in the book's hypothetical scene. A rough sketch of what that self-check loop could look like in code follows below.
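For technically inclined readers, here is a minimal sketch of that two-pass self-check, written against the OpenAI Python SDK. The model name, prompts, and overall flow are illustrative assumptions, not taken from the book:

```python
# Minimal sketch of the "ask the model to check its own work" tactic.
# Assumes the openai Python package (v1.x) and an OPENAI_API_KEY env var;
# the model name and prompts are illustrative, not from the book.
from openai import OpenAI

client = OpenAI()

question = (
    "A patient is unresponsive to blood-pressure support after a recent "
    "bloodstream infection. What could be going on?"
)

# First pass: ask the clinical question.
first = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": question}],
)
answer = first.choices[0].message.content

# Second pass: feed the answer back and ask the model to verify it,
# mirroring the book's "show me" cross-check.
check = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "user", "content": question},
        {"role": "assistant", "content": answer},
        {
            "role": "user",
            "content": "Check the answer above for errors, and show the "
                       "reasoning and sources behind each recommendation.",
        },
    ],
)
print(check.choices[0].message.content)
```

As the authors stress, a verification pass like this can surface some errors, but it is no substitute for review by a qualified clinician.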
"It's still just a computer system," the authors conclude, "basically no better than a web search engine or a textbook." ”