By Chelsea LoCascio
Each semester, students are asked to fill out an Online Student Feedback on Teaching evaluation for every course they take. This semester is no different, as some of the undergraduate students at the College can complete them at any time from Monday, April 25, to Friday, May 6, in PAWS, according to the Office of Records and Registration’s website.
However, there is a disconnect between professors and students, with the latter generally believing their responses have no impact on their professors’ careers.
“Some people think that they don’t (affect the professors) because I feel like most people don’t really want to (fill them out) and just think it’s a waste of time,” junior psychology major Christine Dunne said.
Sociology professor Diane Bates said that the evaluations have more of an effect on untenured professors. Department chairs review and renew adjunct professors’ contracts every semester, which includes looking at their student evaluations, according to Bates.
“(This is) so that if they see something, they can act very quickly,” Bates said.
Likewise, pre-tenured faculty are under annual scrutiny from their department chair, whereas tenured professors are reviewed every five years, Bates said.
The student evaluations are also a crucial part of the promotion process for pre-tenure professors, College Promotions Committee (CPC) Chair and the Library’s Head of Cataloguing Cathy Weng said.
For promotions, a professor’s application is reviewed first by the Department Promotion and Reappointment Committee, then by the dean of their school, the CPC, Provost and Vice President for Academic Affairs Jacqueline Taylor and finally College President R. Barbara Gitenstein, according to Weng.
According to Academic Affairs’ website, the CPC — made up of 12 faculty members and librarians from a variety of disciplines — evaluates each applicant on the criteria and standards detailed in the Board of Trustees-approved Promotion and Reappointment document.
This document details the three primary criteria: teaching, scholarship and service, with teaching being the most important, Weng said.
In order to evaluate teaching effectiveness, every evaluating entity, including the CPC, looks at the applicant’s student evaluations and course syllabi from three to five years prior to submitting the application, colleagues’ peer reviews of their teaching and course materials that are deemed relevant by the candidate, according to the document.
Since faculty of all ranks are affected by the student evaluations, Bates questions how representative and accurate the evaluations’ results are. She pointed out that certain courses and instructors always receive lower ratings in evaluations.
“It’s not just at TCNJ, it’s a national issue,” Bates said. “I have concerns that student evaluations are not a good measure of the quality of education or the quality of instruction, frankly. Because we know that certain patterns exist and some of them are that some types of instructors get lower student evaluations than others.”
According to Bates, liberal learning courses get lower ratings than upper-level seminars within the student’s major. In addition, writing-intensive courses, as well as “threshold classes that are designed to sort of weed out students” in each major, get lower ratings, too, Bates said.
“It makes sense. If people are taking an elective in their major, they’re happier — they’re likely to give higher student evaluations,” Bates said. “Whether or not those evaluations capture who is a good instructor and who is not a good instructor, I have some concerns about that because none of that is ever taken into consideration.”
Bates expressed concern about the way the College interprets the student evaluation data.
“The promotions process here at TCNJ really just wants to look at the numbers,” she said. “‘Well, what was your average? What was your mean score?’ And while those I think are reasonable to include (in the process), I just have some serious hesitation about using that as a very powerful measure of the quality of instruction.”
As a sociologist, Bates is worried about how representative the surveys are since they moved from print to online. Because responses are now voluntary, the surveys generally garner only the most positive and most negative responses.
“(The results) will look more like (ratemyprofessor.com) than a legitimate sample of students,” Bates said.
Dunne, who gives mostly positive evaluations, thinks they are a useful tool for both the students and the faculty.
“I feel like it’s good that the College (has evaluations),” Dunne said. “I think it’s better than not doing it because it’s just a way for students to express their opinions about a course and I think that is super important because nobody knows the course better than the students that are in it.”
However, fewer students have been participating since the format changed. According to Taylor, when the evaluations officially went online in the Fall 2014 semester, they had a 65 percent participation rate among undergraduate and graduate students combined. The rate then fell to 52 percent in Spring 2015 and 49 percent in Fall 2015, Taylor said.
“We really need to get the word out to students,” Taylor said. “That’s the best way to counter the skepticism that students have in whether the (evaluations) matter.”
Although student evaluations are an important part of the application review process, their potential unreliability is why other criteria are also examined during performance reviews and promotion applications.
“It’s possible to see some negative evaluations from students, but we also look at peer evaluations because we cannot all rely on students’ evaluations,” Weng said. “We also look up the percentage of students filling out (the evaluations). For example, if there are 20 or 15 students in one class and only three… submitted their evaluations, then this could skew the final (results of the) evaluations.”
As a personal solution to this problem, women’s and gender studies Professor Janet Gray gives all of her students her own questionnaire.
“In doing my own evaluations, which are really sort of more focused on ‘What have you put into the course? What are you going to take away? What are your favorite bits? Most memorable bits?’ That’s far more meaningful to me than the standardized evaluations,” Gray said.
Bates agrees that Gray’s own questionnaire is the best option for teachers looking to improve.
“Students actually typically provide more useful feedback in that context than in anonymous student evaluations, which tend to bring out both efficiency in answers — so people just quickly fill it out,” Bates said. “Especially now with the electronic, voluntary process, it’s going to bring out the very angry and the very satisfied students and probably miss a lot of the average students.”
In order to combat these biased results and help professors improve their teaching, every student should fill out the student evaluations each semester, according to Weng.
“I believe professors here want students’ feedback to improve their own teaching, not only to get tenured or get a promotion — (that’s) not their primary purpose,” Weng said. “We care about teaching and we care about the success of our students.”