TOPLINE:
ChatGPT provides mostly accurate information on cardiac arrest that may benefit laypeople, but its cardiopulmonary resuscitation (CPR)-related answers need improvement.
METHODOLOGY:
- A list of 40 cardiac arrest- and CPR-related questions was prepared.
- ChatGPT responses to the questions were assessed and rated by 14 healthcare professionals and 16 laypeople on a scale of 1 (poor) to 5 (excellent).
- The ChatGPT-generated answers were evaluated for relevance, clarity, content, comprehensiveness, and accuracy.
- Overall, both professionals and laypeople rated the answers positively (mean score, 4.3±0.7), with laypeople giving significantly higher scores than professionals (4.6±0.7 vs 4.0±0.5; P = .02).
- Answers scored highly among both professionals and laypeople for clarity (4.4±0.6), relevance (4.3±0.6), accuracy (4.0±0.6), and comprehensiveness (4.2±0.7).
- CPR-related questions consistently received lower scores from both groups across all parameters.
"As large language models like ChatGPT will play an increasingly significant role in the future, it is imperative to establish robust monitoring measures for healthcare-related content generated by these systems," the authors concluded.
SOURCE:
This study, led by Tommaso Scquizzato, MD, of the Department of Anesthesia and Intensive Care at IRCCS San Raffaele Scientific Institute, Italy, was published online on December 9, 2023, in Resuscitation.
LIMITATIONS:
- The study's list of questions may not have covered all potential inquiries, possibly missing inputs from certain groups, such as family members of nonsurvivors or those with specific neurological sequelae.
- The laypeople surveyed were members of Sudden Cardiac Arrest UK, who are usually younger than the average cardiac arrest survivor and might not fully represent the broader population of survivors.
DISCLOSURES:
The study received no external funding. The authors reported no conflicts of interest.