Human AI Interaction

Even the smartest AI is ineffective if users cannot understand or control it. Human–AI interaction (HAI) research focuses on interfaces, explanations, and workflows that let people use AI confidently, rather than blindly or reluctantly. HAI studies help design systems that communicate uncertainty, limitations, and confidence clearly, so that users rely on AI when it is warranted and question it when it is not. HAI also explores task-sharing, decision support, and cooperative workflows in which humans bring judgment and context while AI brings speed and pattern recognition.
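As a small illustration of the reliance-calibration idea above, the sketch below shows one common pattern for communicating confidence in a decision-support workflow: the model reports its confidence and defers low-confidence cases to a human reviewer instead of deciding automatically. The function name, output fields, and threshold value are illustrative assumptions, not taken from any of the projects listed on this page.

```python
import numpy as np

def route_prediction(probs, threshold=0.85):
    """Return the model's label when it is confident enough,
    otherwise defer the case to a human reviewer.

    probs: 1-D array of class probabilities for one case.
    threshold: confidence below which the AI abstains (illustrative value).
    """
    confidence = float(np.max(probs))
    if confidence >= threshold:
        return {"decision": int(np.argmax(probs)),
                "confidence": confidence,
                "handled_by": "model"}
    # Low confidence: surface the uncertainty instead of guessing,
    # so the human brings judgment and context to the final call.
    return {"decision": None,
            "confidence": confidence,
            "handled_by": "human_review"}

# A borderline case is deferred; a clear-cut case is decided by the model.
print(route_prediction(np.array([0.55, 0.45])))   # -> handled_by: human_review
print(route_prediction(np.array([0.97, 0.03])))   # -> handled_by: model
```

The design choice here is that the system never hides its uncertainty: every output carries an explicit confidence value, which is the kind of information a user needs to decide when to rely on the AI and when to question it.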

Human–AI interaction is highly interdisciplinary, combining:

• Computer science (machine learning, AI systems)
• Psychology and cognitive science (perception, trust, decision-making)
• Design and usability (interfaces, workflows)
• Ethics and policy (fairness, accountability, transparency)
• Social sciences (human behavior, culture, power dynamics)

Poorly designed AI can lead to misuse or over-trust, confusion or frustration, and unsafe decisions in high-stakes settings such as healthcare. Human–AI interaction research provides guidance on how AI systems can be designed to work with people, aligning technological capability with human needs, values, and limitations.

Projects

1. An Explainable Deep Learning Algorithm to Analyse Bone Marrow Aspirates. PI: Asst. Prof Eugene Fan

2. Explainable Artificial Intelligence for Peripheral Blood Film Analysis. Co-PI: Asst. Prof Fan Xiuyi


Publications

1. Kumar, R.S., et al. (2024). On Identifying Effective Investigations with Feature Finding Using Explainable AI: An Ophthalmology Case Study. In: Finkelstein, J., Moskovitch, R., Parimbelli, E. (eds), Artificial Intelligence in Medicine. AIME 2024. Lecture Notes in Computer Science, vol 14845. Springer, Cham. https://doi.org/10.1007/978-3-031-66535-6_34

2. Fan, X. (2025). Position Paper: Integrating Explainability and Uncertainty Estimation in Medical AI. In: 2025 International Joint Conference on Neural Networks (IJCNN), Rome, Italy, pp. 1-8. https://doi.org/10.1109/IJCNN64981.2025.11228229

3. Bong, J.H., Cai, C., Toh, S.Y., Liu, S., Fan, X. (2025). Interactive Sensemaking with SurveySense: Enhancing Survey Insights Through Human-AI Collaboration on an LLM-Based Platform. In: Sottilare, R.A., Schwarz, J. (eds) Adaptive Instructional Systems. HCII 2025. Lecture Notes in Computer Science, vol 15813. Springer, Cham. https://doi.org/10.1007/978-3-031-92970-0_11