Large language models can be remarkably capable, but they can also fail in ways that are subtle, surprising, and at times consequential. This tutorial explores how biases are introduced and amplified during model training, why these systems sometimes prioritise agreeable responses over accurate ones, and how their confident tone can mask uncertainty or error. Through real-world case studies, we'll examine the practical implications of these limitations for AI safety and healthcare settings, and equip participants with foundational prompt engineering techniques to use these tools more critically and effectively.