
Xinying Hou

xyhou [at] umich [dot] edu

PhD Candidate
School of Information
University of Michigan
Ann Arbor

Google Scholar · LinkedIn · Email

News

7/2025 I am serving on the Program Committee of EAAI 2026, co-located with AAAI-26.
7/2025 I coordinated the 1st International Workshop on AI Literacy Education For All at AIED 2025. Thanks to everyone who came and contributed!
4/2025 One full paper and one late-breaking paper were accepted to AIED 2025!

Selected Publications

Check my Google Scholar profile for a full list of publications and see who's citing them!

* Equal Contribution

Underlined names indicate that I contributed in a mentoring capacity

An LLM-Enhanced Multi-agent Architecture for Conversation-Based Assessment
Xinying Hou, Carol Forsyth, Jessica Andrews-Todd, James Rice, Zhiqiang Cai, Yang Jiang, Diego Zapata-Rivera, Art Graesser
International Conference on Artificial Intelligence in Education
Conversation-based assessments (CBA), which evaluate student knowledge through interactive dialogues with artificial agents on a given topic, can help address non-effortful formative test-taking and the lack of adaptability in traditional assessment. This work combines the evidence-centered design framework with LLM techniques to establish a multi-agent architecture for conversation-based assessment. It includes four LLM agents: two student-facing agents and two behind-the-scenes agents. All agents are monitored by a non-LLM agent (Watcher), which manages the assessment flow by updating agent instructions and controlling turn-taking.
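To give a rough feel for the idea, here is a toy sketch of a non-LLM controller coordinating turn-taking among LLM-backed agents. It is not the paper's actual implementation; all names (Watcher, Agent, next_speaker, etc.) and the turn-taking policy are made up for illustration.

```python
# Illustrative sketch only: a non-LLM "Watcher" that tracks the dialogue and
# decides which agent speaks next. In the real system the agents are LLM calls.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

# An "agent" here is just a callable that maps the dialogue history to a reply.
Agent = Callable[[List[str]], str]

@dataclass
class Watcher:
    """Non-LLM controller: manages assessment flow and turn control."""
    agents: Dict[str, Agent]                      # e.g., student-facing + behind-the-scenes agents
    history: List[str] = field(default_factory=list)
    turns_remaining: int = 6

    def next_speaker(self) -> str:
        # Placeholder policy: alternate between the student-facing agents.
        facing = [name for name in self.agents if name.startswith("student_facing")]
        return facing[len(self.history) % len(facing)]

    def step(self, student_utterance: str) -> str:
        self.history.append(f"student: {student_utterance}")
        speaker = self.next_speaker()
        reply = self.agents[speaker](self.history)   # an LLM call in a real system
        self.history.append(f"{speaker}: {reply}")
        self.turns_remaining -= 1
        return reply

# Toy stand-ins for LLM-backed agents.
watcher = Watcher(agents={
    "student_facing_interviewer": lambda h: "Can you explain why you chose that step?",
    "student_facing_peer": lambda h: "I solved it differently; what do you think of this idea?",
})
print(watcher.step("I think the answer is 42 because the sample said so."))
```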
Improving Student-AI Interaction Through Pedagogical Prompting: An Example in Computer Science Education
Ruiwei Xiao, Xinying Hou*, Runlong Ye*, Majeed Kazemitabaar*, Nicholas Diana, Michael Liut, John Stamper
Under Review: Computers & Education: Artificial Intelligence (Journal)
We first proposed pedagogical prompting, a theoretically grounded new concept for eliciting learning-oriented responses from LLMs. As a proof-of-concept learning intervention in a real educational setting, we selected early undergraduate CS education (CS1/CS2) as the example context. Based on instructor insights, we designed and developed a learning intervention as an interactive system with scenario-based instruction to train pedagogical prompting skills. Finally, we assessed its effectiveness with pre/post-tests in a user study with CS undergraduates.
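As a quick illustration of the spirit of the idea (these prompt strings are invented for this page, not the study's actual materials), a pedagogical prompt asks the LLM to guide learning rather than hand over a finished answer:

```python
# Illustrative sketch: a solution-seeking prompt vs. a learning-oriented one.
task = "Write a Python function that returns the n-th Fibonacci number."

# Typical solution-seeking prompt: the model simply produces working code.
direct_prompt = f"{task} Give me the full solution."

# A pedagogical prompt: constrains the model to guide rather than solve.
pedagogical_prompt = (
    f"I am a CS1 student working on this task: {task}\n"
    "Do not give me the complete code. Instead, ask me one question about my plan, "
    "name the concept I should review, and give a small hint for my next step."
)

print(pedagogical_prompt)
```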
CodeTailor: LLM-Powered Personalized Parsons Puzzles for Engaging Support While Learning Programming
Xinying Hou, Zihan Wu, Xu Wang, Barbara J Ericson
ACM Conference on Learning @ Scale 🏅 Best Paper Nomination
We presented CodeTailor, a system that leverages a large language model (LLM) to provide personalized help to students while still encouraging cognitive engagement. CodeTailor provides a personalized Parsons puzzle to support struggling students. In a Parsons puzzle, students place mixed-up code blocks in the correct order to solve a problem. CodeTailor distinguishes itself from existing LLM-based products by providing an active learning opportunity where students are expected to "solve" the puzzle rather than simply acting as passive consumers by "reading" a direct solution.
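For readers unfamiliar with Parsons puzzles, here is a toy sketch of the interaction: the solution is split into blocks, shown in a shuffled order, and the student restores the correct order. The names and the example problem are made up; the actual system generates and personalizes the blocks with an LLM.

```python
# Illustrative sketch of a Parsons puzzle: reorder shuffled code blocks.
import random

# The correct solution, split into blocks that will be shown mixed up.
solution_blocks = [
    "def average(nums):",
    "    total = sum(nums)",
    "    return total / len(nums)",
]

def make_puzzle(blocks):
    """Return the blocks in a shuffled order for the student to rearrange."""
    shuffled = blocks[:]
    random.shuffle(shuffled)
    return shuffled

def check_answer(student_order, blocks):
    """The puzzle is solved when the student restores the original order."""
    return student_order == blocks

puzzle = make_puzzle(solution_blocks)
print("Reorder these lines:", puzzle)
print("Restored order correct?", check_answer(solution_blocks, solution_blocks))  # True
```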
How novices use LLM-based code generators to solve CS1 coding tasks in a self-paced learning environment
Majeed Kazemitabaar, Xinying Hou, Austin Henley, Barbara J Ericson, David Weintrop, Tovi Grossman
ACM Koli Calling International Conference on Computing Education Research
We presented the results of a thematic analysis of data from 33 learners as they independently learned Python by working on 45 code-authoring tasks with access to an AI code generator based on OpenAI Codex. Our analysis revealed four distinct coding approaches when writing code with an AI code generator: AI Single Prompt; AI Step-by-Step; Hybrid; and Manual coding, where learners wrote the code themselves.
Using Adaptive Parsons Problems to Scaffold Write-code Problems
Xinying Hou, Barbara J Ericson, Xu Wang
ACM Conference on International Computing Education Research
In this paper, we explore using Parsons problems to scaffold novice programmers who are struggling while solving write-code problems. Parsons problems, in which students put mixed-up code blocks in order, can be created quickly and already serve thousands of students, whereas other programming support methods are expensive to develop or do not scale. We conducted studies in which novices were given equivalent Parsons problems as optional scaffolding while solving write-code problems. We investigated when, why, and how students used the Parsons problems, as well as their perceptions of the benefits and challenges.
Assessing the effects of open models of learning and enjoyment in a digital learning game
Xinying Hou, Huy Anh Nguyen, J Elizabeth Richey, Erik Harpstead, Jessica Hammer, Bruce M McLaren
International Journal of Artificial Intelligence in Education
In this math digital learning game, one version encouraged playing and learning through an open learner model, while the other encouraged playing for enjoyment through an analogous open enjoyment model. We compared these versions to a control version that was neutral with respect to learning and enjoyment. The learning-oriented group engaged more in re-practicing, while the enjoyment-oriented group explored more of the different mini-games. In turn, our analyses led to preliminary ideas about how to use AI to provide recommendations that are better aligned with students' dynamic learning and enjoyment states and preferences.