Education and Training

Education and training in AI are essential to medical research because they determine whether AI is used as a meaningful clinical tool or becomes a misunderstood, misapplied, or even harmful technology. In medicine, where decisions directly affect human lives, this need is especially acute.

1. Ensuring safe and accurate use of AI
Medical researchers and clinicians must understand how AI systems work, the assumptions they make, and the limits of their validity. Proper training helps users interpret outputs correctly, recognize errors or bias, and avoid over-reliance on automated recommendations.

2. Improving research quality and reproducibility
AI education equips medical researchers with skills in data quality, model validation, and statistical reasoning. This leads to more rigorous study design, transparent reporting, and replicable results—critical standards in medical research.

3. Bridging the gap between clinicians and data scientists
AI in medicine is inherently interdisciplinary. Training enables clinicians to communicate effectively with engineers and data scientists, aligning clinical needs with technical development and ensuring models address real medical problems.

4. Addressing ethical, legal, and regulatory challenges
Education in AI includes awareness of patient privacy, data governance, bias, and accountability. Trained researchers are better prepared to design systems that comply with regulations and uphold ethical standards in patient care.

5. Supporting clinical adoption and trust
Clinicians are more likely to trust and adopt AI tools when they understand how an AI system works and how its outputs are generated. Training promotes informed skepticism and confidence, enabling AI to support—not replace—clinical judgment.

6. Preparing for future medical practice
As AI becomes embedded in diagnostics, treatment planning, and population health, education ensures that current and future medical professionals are equipped to work effectively alongside AI systems.

Related Grant

1. CAREER: Cultivating Autistic Resilience and Empowerment through Employment Readiness with Generative AI. PI: Asst. Prof. Fan Xiuyi


Related Publications

1. Hong, S., Cai, C., Du, S., Feng, H., Liu, S., & Fan, X. (2025). "My grade is wrong!": A contestable AI framework for interactive feedback in evaluating student essays. In A. I. Cristea, E. Walker, Y. Lu, O. C. Santos, & S. Isotani (Eds.), Artificial intelligence in education: 26th International Conference, AIED 2025. Springer. https://doi.org/10.1007/978-3-031-98462-4_4

2. Cai, C., Duell, J., Chen, D. M., Ho, W. K., Lee, B. T. K., Li, F., Liu, S., Ng, O., Sudarshan, V., Zhou, S.-M., Zhu, G., & Fan, X. (2025). Advancing AI literacy in medical education: A medical AI competency framework development. In A. I. Cristea, E. Walker, Y. Lu, O. C. Santos, & S. Isotani (Eds.), Artificial intelligence in education: 26th International Conference, AIED 2025 (pp. 116-123). Springer. https://doi.org/10.1007/978-3-031-98462-4_15

3. Toh, S. Y., Cai, C., Wang, L. R., Bai, X., Ngeow, J., & Fan, X. (2025). The effect of explainable AI and uncertainty quantification on medical students' perspectives of decision-making AI: A cancer screening case study. In Extended abstracts of the CHI Conference on Human Factors in Computing Systems (CHI EA '25) (Article 515, pp. 1-13). ACM. https://doi.org/10.1145/3706599.3719791