ChatGPT-4 Performs Clinical Information Retrieval Tasks Utilizing Consistently More Trustworthy Resources Than Does Google Search for Queries Concerning the Latarjet Procedure.

Oeding, Jacob F;Lu, Amy Z;Mazzucco, Michael;Fu, Michael C;Taylor, Samuel A;Dines, David M;Warren, Russell F;Gulotta, Lawrence V;Dines, Joshua S;Kunze, Kyle N;
Arthroscopy: The Journal of Arthroscopic & Related Surgery (official publication of the Arthroscopy Association of North America and the International Arthroscopy Association), 2024

Abstract

To assess the ability of ChatGPT-4, an automated chatbot powered by artificial intelligence (AI), to answer common patient questions concerning the Latarjet procedure for patients with anterior shoulder instability, and to compare this performance with that of Google Search Engine.

Using previously validated methods, a Google search was first performed using the query "Latarjet." The top ten frequently asked questions (FAQs) and their associated sources were then extracted. ChatGPT-4 was subsequently prompted to provide the top ten FAQs and answers concerning the procedure. This process was repeated to identify additional FAQs requiring discrete numeric answers, allowing a direct comparison between ChatGPT-4 and Google. Discrete numeric answers were assessed for accuracy based on the clinical judgment of two fellowship-trained sports medicine surgeons blinded to search platform.

Mean (±standard deviation) accuracy for numeric-based answers was 2.9±0.9 for ChatGPT-4 versus 2.5±1.4 for Google (p=0.65). ChatGPT-4 derived its answers exclusively from academic sources, a significant difference from Google Search Engine (p=0.003), which drew on academic sources for only 30% of answers, with the remainder coming from the websites of individual surgeons (50%) and larger medical practices (20%). For general FAQs, 40% were identical between ChatGPT-4 and Google Search Engine. Regarding the sources used to answer these questions, ChatGPT-4 again used 100% academic resources, whereas Google Search Engine used 60% academic resources, 20% surgeon personal websites, and 20% medical practice websites (p=0.087).

ChatGPT-4 demonstrated the ability to provide accurate and reliable information about the Latarjet procedure in response to patient queries, drawing on multiple academic sources in all cases. In contrast, Google Search Engine more frequently relied on single-surgeon and large medical practice websites. Despite these differences in the resources accessed to perform information retrieval tasks, the clinical relevance and accuracy of the information provided did not differ significantly between ChatGPT-4 and Google Search Engine.

Citation

ID: 279331
Ref Key: oeding2024chatgpt4arthroscopy
