Expert of Experts Verification and Alignment (EVAL) Framework for Large Language Models Safety in Gastroenterology
2025
Journal
npj Digital Medicine
Abstract
Large language models (LLMs) generate plausible text responses to medical questions, but inaccurate responses pose significant risks in medical decision-making. Grading LLM outputs to determine the best model or answer is time-consuming and impractical in clinical settings; therefore, we introduce EVAL (Expert-of-Experts Verification and Alignment) to streamline this process and enhance LLM safety for upper gastrointestinal bleeding (UGIB). We evaluated OpenAI's GPT-3.5/4/4o/o1-preview, Anthropic's Claude-3-Opus, Meta's LLaMA-2 (7B/13B/70B), and Mistral AI's Mixtral (7B) across 27 configurations, including zero-shot baseline, retrieval-augmented generation, and supervised fine-tuning. EVAL uses similarity-based ranking and a reward model trained on human-graded responses for rejection sampling. Among the similarity metrics employed, fine-tuned ColBERT achieved the highest alignment with human performance across three separate datasets (ρ = 0.81-0.91). The reward model replicated human grading in 87.9% of cases across temperature settings, and rejection sampling significantly improved accuracy by 8.36% overall. EVAL offers scalable potential to assess accuracy for high-stakes medical decision-making.
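The rejection-sampling step described above can be sketched in a few lines: draw several candidate answers from an LLM, score each with a reward model trained on human grades, and keep the highest-scoring one. This is a minimal illustration, not the paper's implementation; `generate` and `reward_model` are hypothetical stand-ins for the actual LLM call and trained scorer.

```python
# Sketch of EVAL-style rejection sampling (illustrative only).
# `generate` and `reward_model` are hypothetical placeholders.
from typing import Callable, List, Tuple


def rejection_sample(
    prompt: str,
    generate: Callable[[str], str],             # LLM sampler (e.g., at temperature > 0)
    reward_model: Callable[[str, str], float],  # scorer trained on human-graded responses
    n_candidates: int = 8,
) -> Tuple[str, float]:
    """Sample n_candidates answers and return the one with the highest reward."""
    candidates: List[str] = [generate(prompt) for _ in range(n_candidates)]
    scored = [(c, reward_model(prompt, c)) for c in candidates]
    return max(scored, key=lambda pair: pair[1])


if __name__ == "__main__":
    # Toy demo with stub models: the stub reward simply favors longer answers.
    import random

    random.seed(0)
    answers = ["short", "a longer answer", "the longest answer of all"]
    best, score = rejection_sample(
        "What is the recommended first step in suspected UGIB?",
        generate=lambda p: random.choice(answers),
        reward_model=lambda p, a: float(len(a)),
        n_candidates=5,
    )
    print(best, score)
```

In practice the reward model would be the one trained on human-graded responses, and the candidates would come from one of the evaluated LLM configurations; the selection logic itself is unchanged.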
Rights
open access
License: Creative Commons
License URI: http://creativecommons.org/licenses/by-nc-nd/4.0/
Subjects
- Best model
- Clinical setting
- Fine tuning
- Gastrointestinal bleeding
- Human performance
- Language model
- Medical decision making
- Rejection sampling
- Similarity metric
- Upper gastrointestinal bleeding
- artificial intelligence
- benchmarking
- gastroenterology
- human
- large language model
- multiple choice test
- reinforcement (psychology)
- retrieval augmented generation
- safety
- supervised fine tuning
- upper gastrointestinal bleeding
- zero shot prompting