
ClassEval Frequently Asked Questions (FAQ)

Response Rates

Research indicates that higher-achieving students are more likely to participate and tend to rate their instructors more favorably than lower-achieving students.1

In many cases, student responses focus on the organization, rapport, entertainment level, and difficulty of the course, not on instructor pedagogy.2

Low response rates are likely to increase bias in results if the students filling out the evaluations are not representative of the entire class population.1, 3

Research indicates that higher-performing students appear to be more likely than less successful students to complete online evaluations.1

A study at NC State showed that higher-performing and engaged students have higher response rates (e.g., students complete evaluations for classes in their major at higher rates).1

Students with GPAs between 1.0 and 1.99 had an average response rate of 23%, while the response rates for GPAs between 2.0-2.99 and 3.0-4.00 were 37% and 48.1%, respectively.1

Research indicates that online evaluations nearly always have lower response rates than paper systems.4

The majority of studies, including one at NC State, confirm lower response rates for online versus in-class paper systems.1, 3, 4

Students often lack the motivation to complete online evaluations because of the length of each survey and the number of evaluations they are asked to complete. On average, students at NC State are asked to complete 5.96 class evaluations per semester.1

Research indicates several differences between online and paper-based evaluation systems.

Research shows that there is a 50-75% increase in the number of written comments on online versus paper evaluations.

Comments written online are typically longer and provide greater detail than comments on paper evaluations, making them more useful for instructional improvement.

There is not an overall increase in negative comments on online evaluations. As with paper evaluations, low-rated teachers — those perceived by students to be poorer — typically get more negative online comments while teachers perceived to be better receive only a few negative comments.1, 5

Because research findings indicate that online evaluation systems nearly always have lower response rates and potential biases, caution should be taken when using response data to evaluate teacher effectiveness.

As with any self-administered survey, student evaluations of teaching potentially suffer from nonresponse bias. In addition, because of small sample sizes, they often suffer from sampling error as well.

Because of these possible errors in representation, the EoT committee recommends using caution when interpreting evaluation results. Relying on results from one section, or even one semester, is not advised. Instead, it is best to look at patterns across classes over multiple semesters.

Class evaluation results are one part of the complex issue of measuring teaching effectiveness. To make these data more meaningful and useful, they should be triangulated with multiple sources, including peer evaluations of teaching and students’ qualitative comments, among others.

Evaluation data can be useful for faculty seeking to improve their courses. In particular, many faculty find that student comments can be used for improving learning.

Faculty should strive to increase participation by employing the methods below, thereby reducing nonresponse error.

Ways to Increase ClassEval Response Rates

Conduct mid-course evaluations

Time Commitment: 10-20 minutes in class to administer. The time to review and discuss feedback varies with class size.

Potential to improve ClassEval response rates by 9-16%.6, 7, 8

Conducting mid-course evaluations can improve ratings on end-of-course evaluations, as students become more able evaluators as well as more engaged in the course.7

Students respond positively when their comments result in changes to the course, leading to improved student attitudes about the class and/or instructor.8

Show students that their feedback is valued

Time Commitment: 5-10 minutes to modify the provided verbiage and mention how you have used the feedback.

Showing students in multiple ways that their feedback is valued can increase response rates. In one instance, average rates rose to over 95% as a result of using this strategy along with several other techniques to demonstrate the importance of evaluations to students.9, 10

Make in-class announcements

Time Commitment: Periodic announcements take less than five minutes at the beginning or end of class.

Faculty discussion of the importance of completing evaluations was associated with an increase in online evaluation rates from 54% to 72% in one study.11

Write the response rate on the board daily.

Turn it into a competition with another section or last year’s class. Compete to see which section or class receives the highest response rate.

Send reminders

Time Commitment: Less than five minutes to send an email or announcement.

Reminders from faculty, including emails and online discussion board postings, have been shown to increase evaluation response rates.12, 13

Administer evaluations in class

Time Commitment: 10-20 minutes during one class to administer. Book a computer lab or ask students to bring laptops, tablets, or smartphones to class.

Online evaluations completed in class have a 30% higher response rate than those completed outside of class.4, 14, 15 In 2013, an NC State CHASS pilot saw a similar increase: 76% of sections achieved response rates of 60% or higher, compared with 13% of the same sections in 2012.

The ability to save and update evaluations makes it easier for students to fill out the scaled questions during class and then add comments or make changes later.

Where can I go for more information?

Adams, M. J. D., & Umbach, P. D. (2012). Nonresponse and on-line student evaluations of teaching: Understanding the influence of salience, fatigue, and academic environments. Research in Higher Education, 53, 576-591.

Young, K., Joines, J., Standish, T., & Gallagher, V. (2019). Student evaluations of teaching: the impact of faculty procedures on response rates. Assessment & Evaluation in Higher Education, 44(1), 37-49.

Chapman, D., & Joines, J. A. (2017). Strategies for increasing response rates for online end-of-course evaluations. International Journal of Teaching and Learning in Higher Education, 29(1), 47-60.

Nulty, D. D. (2008). The adequacy of response rates to on-line and paper surveys: What can be done? Assessment & Evaluation in Higher Education, 33, 301-314.

Hativa, N. (2013). Student ratings of instruction: A practical approach to designing, operating, and reporting. Create Space Independent Publishing Platform.

McGowen, W. R., & Osgathorpe, R. T. (2011). Student and faculty perceptions of effects of midcourse evaluation. To Improve the Academy, 29, 160-172.

Lewis, K. (2001). Using mid-semester student feedback and responding to it. New Directions for Teaching and Learning, 87, 33-44.

Marsh, H. W., & Overall, J. U. (1979). Long-term stability of students’ evaluations: A note on Feldman’s “Consistency and variability among college students in rating their teachers and courses.” Research in Higher Education, 10(2), 139-147.

Keutzer, C. S. (1993). Midterm evaluation of teaching provides helpful feedback to instructors. Teaching of Psychology, 20(4), 238-240.

Tucker, B., Jones, S., & Straker, L. (2008). Online student evaluation improves course experience questionnaire results in a physiotherapy program. Higher Education Research and Development, 27, 281-296.

University of British Columbia Vancouver (April 15, 2010). Student evaluation of teaching: Response rates. Retrieved from

Laubsch, P. (2006). Online and in-person evaluations: A literature review and exploratory comparison. Journal of Online Learning and Teaching 2(2). Retrieved from

Wode, J., & Keiser, J. (2011). Online course evaluation literature review and findings. A report from Academic Affairs, Columbia College Chicago. Retrieved from

Smith, P., & Morris, O. (2012). Effective course evaluation: The future for quality and standards in higher education. Electric Paper Ltd. International House. Retrieved from

Evaluation of Teaching Committee, OFD (2013). Survey of fall 2012 faculty with response rates > 70%. Unpublished raw data. North Carolina State University, Raleigh, NC.

Crews, T. B., & Curtis, D. F. (2011). Online course evaluations: Faculty perspective and strategies for improved response rates. Assessment & Evaluation in Higher Education, 36(7), 865-878.

Hornstein, H. A. (2017). Student evaluations of teaching are an inadequate assessment tool for evaluating faculty performance. Cogent Education, 4(1), 1304016.

Boring, A., Ottoboni, K., & Stark, P. (2016). Student evaluations of teaching (mostly) do not measure teaching effectiveness. ScienceOpen Research. Retrieved from

Benton, S. L., & Ryalls, K. R. (2013). Challenging misconceptions about student ratings of instruction. IDEA Paper #58. The IDEA Center. Retrieved from

Wright, R. E. (2000). Student evaluations and consumer orientation of universities. Journal of Nonprofit and Public Sector Marketing, 8, 33-40.

Cohen, P. (1980). Effectiveness of student-rating feedback for improving college instruction: A meta-analysis of findings. Research in Higher Education, 13(4), 321-341.

Ladson-Billings, G. (1996). Silences as weapons: Challenges of a black professor teaching white students. Theory into Practice, 35(2), 79-85.

Standish, T., Joines, J. A., Young, K. R., & Gallagher, V. J. (2018). Improving SET response rates: Synchronous online administration as a tool to improve evaluation quality. Research in Higher Education, 59(6), 812-823.

Chapman, D. D. (2019). Mid-semester evaluation: Don’t wait until it’s too late for student feedback. ISETL website. Retrieved from


Updated by the Evaluation of Teaching (EOT) Standing Committee – Spring 2019