
Nature | A general-purpose medical large language model assists medical decision-making

Author: CNS Guide

Traditional medical prediction models require structured data as input, which greatly limits their application (1).

A recent study uses the natural language understanding capabilities of large language models (LLMs) to "read" clinical notes directly, enabling more convenient and accurate medical prediction (1).

Researchers led by Eric Karl Oermann of NYU Langone Health used the full NYU Langone Health electronic health record (7.25 million clinical notes from 387,144 patients across four hospitals) to pretrain NYUTron, a 109-million-parameter large language model based on the BERT (Bidirectional Encoder Representations from Transformers) architecture (1, 2).
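For readers who want a concrete picture of this kind of pretraining, below is a minimal sketch using the Hugging Face transformers library. It is not the authors' code: the corpus file, output path, and hyperparameters are hypothetical, and a BERT-base-sized configuration is used only because it roughly matches the reported 109 million parameters.

```python
# Minimal sketch: masked-language-model pretraining of a BERT-style model
# on clinical-note text. File names and hyperparameters are illustrative,
# not the values used in the paper.
from datasets import load_dataset
from transformers import (
    BertConfig,
    BertForMaskedLM,
    BertTokenizerFast,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

# Hypothetical corpus of de-identified clinical notes, one note per line.
dataset = load_dataset("text", data_files={"train": "clinical_notes.txt"})

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

# A BERT-base-sized configuration (~110M parameters, close to the
# reported 109M), trained from scratch on the note corpus.
config = BertConfig(vocab_size=tokenizer.vocab_size)
model = BertForMaskedLM(config)

# Randomly mask 15% of tokens; the model learns to reconstruct them.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="nyutron-sketch", num_train_epochs=1),
    train_dataset=tokenized["train"],
    data_collator=collator,
)
trainer.train()
```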

Through further task-specific fine-tuning, the model makes more accurate medical predictions than models based on structured data (3) across tasks including 30-day all-cause readmission, in-hospital mortality, comorbidity index, length of stay, and insurance denial, and it can be integrated seamlessly into the clinical workflow to help physicians and other relevant staff make judgments (1). A sketch of one such fine-tuning step follows.
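The fine-tuning recipe can be sketched as follows, again with Hugging Face tooling: starting from the domain-pretrained checkpoint, a classification head is trained on labeled notes (here, 30-day readmission as a binary label). The checkpoint path and the labeled CSV are hypothetical; this illustrates the general recipe, not the paper's actual pipeline.

```python
# Minimal sketch: task-specific fine-tuning for binary 30-day readmission
# prediction. Paths and data files are hypothetical.
from datasets import load_dataset
from transformers import (
    BertForSequenceClassification,
    BertTokenizerFast,
    Trainer,
    TrainingArguments,
)

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")

# Load the domain-pretrained weights (hypothetical checkpoint path);
# the classification head on top is newly initialized.
model = BertForSequenceClassification.from_pretrained(
    "nyutron-sketch/checkpoint-final", num_labels=2
)

# Hypothetical CSV with columns "text" (discharge note) and "label" (0/1).
data = load_dataset("csv", data_files={"train": "readmission_train.csv"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = data.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="readmission-head", num_train_epochs=3),
    train_dataset=tokenized["train"],
)
trainer.train()
```

The same recipe is repeated with a different labeled dataset for each prediction task, which is what makes a single pretrained note-reading model an "all-purpose" prediction engine.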

[Figure] Training and deployment of the NYUTron model (1)

The researchers then analyzed NYUTron's predictive performance on 30-day all-cause readmission across a range of scenarios, examining the model's data efficiency, generalizability, and deployability (1).
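For these comparisons the paper reports the area under the receiver operating characteristic curve (AUC). As a minimal illustration of that evaluation step, with placeholder labels and scores rather than data from the study:

```python
# Minimal sketch: scoring held-out predictions by AUC.
# Labels and probabilities below are placeholders, not study data.
from sklearn.metrics import roc_auc_score

y_true = [0, 1, 0, 1, 1, 0]               # held-out readmission labels (hypothetical)
y_score = [0.1, 0.8, 0.3, 0.6, 0.9, 0.2]  # model-predicted probabilities (hypothetical)
print(f"AUC: {roc_auc_score(y_true, y_score):.3f}")
```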

Finally, in a prospective, single-arm, non-interventional clinical trial, the researchers made a preliminary assessment of the model's potential clinical impact (1).

The work was published in Nature on June 7, 2023. The researchers say it opens the way for translational applications of deep learning and advances in natural language models that deliver higher-quality, affordable healthcare (1).

Comments:

Although the results are good, the amount of medical knowledge that a 109-million-parameter model can "master" should still be quite limited. In the future, fine-tuning larger-scale pretrained models such as GPT-4 on medical/clinical data may yield better results; alternatively, such models could complement the current one, with the two cross-checking each other's predictions.

In addition, this work trains on the text of clinical notes, a reasonable choice given current data volumes and computing power. Going forward, it will be worth training multimodal large language models directly on first-hand data such as medical images and other examination results, so as to produce more meaningful predictions (4).

About the corresponding author (5):

[Figure] Google Scholar profile of the corresponding author, Eric Karl Oermann (5)

References:

1. L. Y. Jiang, X. C. Liu, N. P. Nejatian, M. Nasir-Moin, D. Wang, A. Abidin, K. Eaton, H. A. Riina, I. Laufer, P. Punjabi, M. Miceli, N. C. Kim, C. Orillac, Z. Schnurman, C. Livia, H. Weiss, D. Kurland, S. Neifert, Y. Dastagirzada, D. Kondziolka, A. T. M. Cheung, G. Yang, M. Cao, M. Flores, A. B. Costa, Y. Aphinyanaphongs, K. Cho, E. K. Oermann, Health system-scale language models are all-purpose prediction engines. Nature, 1–6 (2023).

2. J. Devlin, M.-W. Chang, K. Lee, K. Toutanova, "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding" in Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Association for Computational Linguistics, Stroudsburg, PA, USA, 2019; https://aclanthology.org/N19-1423), pp. 4171–4186.

3. T. Chen, C. Guestrin, "XGBoost: A Scalable Tree Boosting System" in Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (ACM, New York, NY, USA, 2016; http://dx.doi.org/10.1145/2939672.2939785), pp. 785–794.

4. P. Rajpurkar, M. P. Lungren, The Current and Future State of AI Interpretation of Medical Images. N. Engl. J. Med. 388, 1981–1990 (2023).

5. Eric Karl Oermann - Google Scholar (available at https://scholar.google.com/citations?user=GQum-K4AAAAJ&hl=en).

Original link:

https://www.nature.com/articles/s41586-023-06160-y
