The Promises and Pitfalls of Using Language Models to Measure Instruction Quality in Education


Conference paper


Paiheng Xu, Jing Liu, Nathan Jones, Julie Cohen, Wei Ai
arXiv, NAACL 2024, 2024


Cite

APA
Xu, P., Liu, J., Jones, N., Cohen, J., & Ai, W. (2024). The Promises and Pitfalls of Using Language Models to Measure Instruction Quality in Education. arXiv, NAACL 2024. https://doi.org/10.48550/arXiv.2404.02444


Chicago/Turabian
Xu, Paiheng, Jing Liu, Nathan Jones, Julie Cohen, and Wei Ai. “The Promises and Pitfalls of Using Language Models to Measure Instruction Quality in Education.” arXiv, NAACL 2024 (2024).


MLA
Xu, Paiheng, et al. “The Promises and Pitfalls of Using Language Models to Measure Instruction Quality in Education.” arXiv, NAACL 2024, 2024, doi:10.48550/arXiv.2404.02444.


BibTeX

@article{xu2024promises,
  title = {The Promises and Pitfalls of Using Language Models to Measure Instruction Quality in Education},
  year = {2024},
  journal = {arXiv, NAACL 2024},
  doi = {10.48550/arXiv.2404.02444},
  author = {Xu, Paiheng and Liu, Jing and Jones, Nathan and Cohen, Julie and Ai, Wei}
}

Abstract

Assessing instruction quality is a fundamental component of any improvement effort in the education system. However, traditional manual assessments are expensive, subjective, and heavily dependent on observers' expertise and idiosyncratic factors, preventing teachers from receiving timely and frequent feedback. Unlike prior research, which mostly focuses on low-inference instructional practices on a singular basis, this paper presents the first study that leverages Natural Language Processing (NLP) techniques to assess multiple high-inference instructional practices in two distinct educational settings: in-person K-12 classrooms and simulated performance tasks for pre-service teachers. It is also the first study to apply NLP to measure a teaching practice that is widely acknowledged to be particularly effective for students with special needs. We confront two challenges inherent in NLP-based instructional analysis: noisy, lengthy input data and highly skewed distributions of human ratings. Our results suggest that pretrained Language Models (PLMs) demonstrate performance comparable to the agreement level of human raters for variables that are more discrete and require lower inference, but their efficacy diminishes with more complex teaching practices. Interestingly, using only teachers' utterances as input yields strong results for student-centered variables, alleviating common concerns over the difficulty of collecting and transcribing high-quality student speech data in in-person teaching settings. Our findings highlight both the potential and the limitations of current NLP techniques in the education domain, opening avenues for further exploration.
