Mitigating AI-Driven Information Disorder via a Fit-Based Framework
Name of Recipient: Assoc Prof Yuen Kum Fai
Project Title: Mitigating AI-Driven Information Disorder via a Fit-Based Framework
Project Status: Ongoing
Year Awarded: 2025
Type of Grant: Social Science & Humanities Research Thematic Grant
Funding Type: A
This project proposes a novel fit-based framework grounded in Person-Environment (P-E) fit theory to systematically model and mitigate AI-driven information disorder by aligning user informational demands and capabilities with AI informational features. The overarching objective is to enhance public resilience to AI-driven information disorder by empirically modelling the informational fits and misfits in human–AI interaction.
Specifically, the project aims to:
- Identify key informational vectors (i.e. user informational demands, user informational capabilities, and AI informational features) that influence user identification of information disorder;
- Evaluate identification outcomes through an adjusted confusion matrix framework that captures true and false positives and negatives;
- Construct a vector-based model to quantify four types of informational fits (demand–feature, feature–capability, within-demand, and demand–capability) and their effects on identification performance; and
- Develop targeted interventions and system-level simulations that enhance fit, reduce misfit, and inform governance strategies.
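As a minimal illustrative sketch only (the vector dimensions, the choice of cosine similarity as the fit metric, and the within-demand consistency measure below are hypothetical assumptions, not the project's actual specification), the vector-based fit scores and the adjusted confusion matrix for a human classifier could be prototyped as:

```python
import numpy as np

def cosine_fit(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity as a simple illustrative fit score between two informational vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical informational vectors; the three dimensions are placeholders,
# not constructs defined by the project.
demand = np.array([0.9, 0.2, 0.7])      # user informational demands
capability = np.array([0.8, 0.3, 0.6])  # user informational capabilities
feature = np.array([0.7, 0.4, 0.8])     # AI informational features

fits = {
    "demand-feature": cosine_fit(demand, feature),
    "feature-capability": cosine_fit(feature, capability),
    "demand-capability": cosine_fit(demand, capability),
    # Within-demand fit sketched here as internal consistency of the demand
    # vector (an assumed proxy, not the project's definition).
    "within-demand": 1.0 - float(np.std(demand)),
}

def confusion_cell(is_disordered: bool, flagged: bool) -> str:
    """Adjusted confusion matrix cell for a human classifier, where the
    'positive' class is 'this item is information disorder'."""
    if is_disordered and flagged:
        return "TP"   # user correctly identifies disordered information
    if not is_disordered and flagged:
        return "FP"   # user wrongly flags legitimate information
    if is_disordered and not flagged:
        return "FN"   # user misses disordered information
    return "TN"       # user correctly accepts legitimate information
```

In such a sketch, each fit score could then be regressed against the rates of the four confusion-matrix cells to estimate how fit and misfit shape identification performance.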
The project’s contributions are twofold. Academically, it advances theoretical and methodological frontiers by introducing a scalable, empirically validated framework for studying human–AI informational fit. It bridges the gap between behavioural sciences and technical AI design, offering new conceptual tools (e.g. vector-based fit metrics, adjusted confusion matrix for human classifiers) and analytical approaches (e.g. latent class fit archetypes, agent-based simulations) to the study of information disorder. Non-academically, it offers timely, evidence-based insights to guide national and regional policies on AI explainability, digital literacy, and media governance.