Artificial Intelligence and Managerial Decision-Making: A Multidisciplinary Perspective on Smart Organizations
DOI: https://doi.org/10.7492/vv4b3w67

Abstract
AI testing platforms are evolving faster than their evaluation logic, exposing a structural gap in motivation-sensitive adaptive testing engines. Conventional assessment systems optimize question difficulty but do not adapt to learner cognition, motivation sustainability, or overload sensitivity, leaving platform intelligence deterministic and late-reactive. This paper investigates how gamified adaptive testing platforms can engineer motivation-aware assessment pipelines that minimize cognitive rupture while maximizing engagement stability. A Gamified Adaptive Testing Intelligence Stack (GATIS) is proposed, integrating adaptive question tuning, engagement entropy diagnostics, response-effort contraction analytics, leaderboard pressure modeling, reward-fatigue detection, task-switching density logs, latency-aware fragility clustering, and early anomaly inference. Findings indicate that static scoring engines misinterpret raw UI interactions as motivation, inflating an illusion of engagement while under-detecting cognitive strain until disengagement has already occurred. Adaptive AI models surface overload signatures 3–5 weeks earlier than threshold-based LMS analytics, enabling pre-rupture intervention. The research concludes that gamification without early uncertainty-aware scoring and cognition-adaptive modeling remains decorative theater. The study positions AI testing systems as behavioral networks rather than scoreboards, reinforcing the need for platform architectures that adapt to cognition as aggressively as they adapt to questions. The results confirm that motivation and overload are structurally path-dependent, and that only early uncertainty-aware modeling prevents engagement collapse.
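To make one of the abstract's diagnostics concrete, the Python sketch below illustrates how an engagement-entropy signal and a simple overload flag might be computed from session interaction logs. It is a minimal illustration under stated assumptions, not the GATIS implementation described in the paper: the event fields (action_type, response_latency_ms), the per-window framing, and the entropy and latency thresholds are all hypothetical names and values introduced here for demonstration.

    # Minimal sketch of an engagement-entropy diagnostic (hypothetical; not the GATIS code).
    # Assumptions: each session window yields interaction events carrying an action_type
    # label and a response_latency_ms value; field names and thresholds are illustrative.
    from collections import Counter
    from dataclasses import dataclass
    from math import log2
    from typing import List


    @dataclass
    class InteractionEvent:
        action_type: str          # e.g. "answer_submit", "hint_open", "tab_switch"
        response_latency_ms: float


    def engagement_entropy(events: List[InteractionEvent]) -> float:
        """Shannon entropy (bits) of the action-type distribution in one session window.

        Low entropy means behavior has collapsed into a narrow set of repetitive actions
        (e.g. rapid submits), which the abstract associates with response-effort
        contraction rather than genuine engagement.
        """
        counts = Counter(e.action_type for e in events)
        total = sum(counts.values())
        if total == 0:
            return 0.0
        return -sum((c / total) * log2(c / total) for c in counts.values())


    def overload_flag(events: List[InteractionEvent],
                      entropy_floor: float = 1.0,
                      latency_ceiling_ms: float = 4000.0) -> bool:
        """Flag a window whose behavioral diversity is low while mean latency is high.

        The thresholds are placeholders; a deployed system would calibrate them per
        cohort instead of hard-coding them.
        """
        if not events:
            return False
        mean_latency = sum(e.response_latency_ms for e in events) / len(events)
        return engagement_entropy(events) < entropy_floor and mean_latency > latency_ceiling_ms

In this toy form the flag fires only when low behavioral diversity and elevated latency co-occur within a single window; any claim of earlier detection than threshold-based LMS analytics would rest on tracking such flags longitudinally across sessions rather than on a single per-window test.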