Announcement:
I'm honored to have received the NSF Graduate Research Fellowship (GRFP)! I'll be using this support to study whether connecting coursework to students' interests and lived experiences can improve learning.

About

I’m interested in making computing education more accessible and personally meaningful through Human-Computer Interaction (HCI) and AI. My interest in computing began in seventh grade when I started programming on my TI-84 calculator. At Temple University, I began researching how to help novice students understand programming concepts.

My work currently explores how large language models (LLMs) can generate analogies, explanations, and learning materials that reflect students' interests and backgrounds. I am passionate about making computing more inclusive, especially for students who don’t yet see themselves represented in technical spaces. In the fall of 2025, I began my PhD at the University of Michigan School of Information, where I am advised by Dr. Barbara Ericson. Go Blue!

Research

My research focuses on using LLMs to personalize learning in computing education. I study how AI-generated explanations and analogies can be adapted to align with students’ interests, cultural backgrounds, and learning needs.

I combine LLMs with perspectives from HCI to better understand how to support intrinsic motivation and deliver adaptive feedback. This includes building interactive tools, analyzing student responses, and evaluating how personalized support affects comprehension. I’m focused on making sure these systems help students learn, not replace or mislead them.

Publications


Beyond the Benefits: A Systematic Review of the Harms and Consequences of Generative AI in Computing Education

Seth Bernstein, Ashfin Rahman, Nadia Sharifi, Ariunjargal Terbish, Stephen MacNeil

Koli Calling 2025 (Conference)

Like a Nesting Doll: Analyzing Recursion Analogies Generated by CS Students using Large Language Models

Seth Bernstein, Paul Denny, Juho Leinonen, Stephen MacNeil, et al.

ITiCSE 2024 (Conference)

Analyzing Students' Preferences for LLM-Generated Analogies

Seth Bernstein, Paul Denny, Juho Leinonen, Stephen MacNeil, et al.

ITiCSE 2024 (Poster)

Assessing the Role of Diversity in LLM Explanations for Enhancing Student Understanding

Kush Patel, Seth Bernstein, Rayhona Nasimova, Paul Denny, Juho Leinonen, Stephen MacNeil

SIGCSE TS 2026 (Poster)

Generating Diverse Code Explanations using the GPT-3 Large Language Model

Stephen MacNeil, Andrew Tran, Dan Mogil, Seth Bernstein, et al.

ICER 2022 (Poster)

Comparing Code Explanations Created by Students and Large Language Models

Juho Leinonen, Paul Denny, Stephen MacNeil, Sami Sarsa, Seth Bernstein, et al.

ITiCSE 2023 (Conference)

The Implications of Large Language Models for CS Teachers and Students

Stephen MacNeil, Joanne Kim, Juho Leinonen, Paul Denny, Seth Bernstein, et al.

SIGCSE 2023 (BoF)

Decoding Logic Errors: A Comparative Study on Bug Detection by Students and Large Language Models

Stephen MacNeil, Paul Denny, Andrew Tran, Juho Leinonen, Seth Bernstein, et al.

ACE 2024 (Conference)

Experiences from Using Code Explanations Generated by Large Language Models in a Web Software Development E-Book

Stephen MacNeil, Andrew Tran, Arto Hellas, Joanne Kim, Sami Sarsa, Paul Denny, Seth Bernstein, et al.

SIGCSE 2023 (Conference)

Automatically Generating CS Learning Materials with Large Language Models

Stephen MacNeil, Andrew Tran, Juho Leinonen, Paul Denny, Joanne Kim, Arto Hellas, Seth Bernstein, et al.

SIGCSE 2023 (Workshop)

Prompt Middleware: Mapping Prompts for Large Language Models to UI Affordances

Stephen MacNeil, Andrew Tran, Joanne Kim, Ziheng Huang, Seth Bernstein, Dan Mogil

arXiv 2023 (Preprint)