First, let me be clear that we at Hubert.ai have something of a bone to pick with student surveys. They are almost always uninspiring, long, and boring, and only scratch the surface of the real information. That said, we know how important they are and the value they bring educators every day.
The key to getting accurate and valid data from student surveys lies largely in the survey design. There are professionals to consult if you’re responsible for designing a survey, but it’s always worth having a solid understanding of the many areas that need consideration.
We’ll start with survey length.
A survey’s length is always a compromise between the amount of desired information and how far you think you can push students before they go completely mental. In student evaluation research, it’s not uncommon to come across daunting surveys with more than 65 questions.
With that many questions, the risk of low response rates and survey fatigue is overwhelming. From the student’s point of view, a short, effective form with questions that are easy to understand and respond to is what really matters.
Even so, the average number of questions seems to be somewhere around 30 in total for an end-of-course evaluation. A well-designed survey can often extract similar information from around 10 questions.
A common misconception, and a frequent criticism of teacher feedback forms, is that they let students judge the teacher’s ability to teach and how knowledgeable they are in their respective subject.
But that’s not what they are for.
Student evaluations of teaching should be treated as a data source to provide insights into the teachers’ ability. Not as the unquestionable truth.
That’s why it’s a good idea to rely mainly on descriptive scales in student evaluations, not evaluative ones. It also makes it clearer to students that they are not judging some sort of popularity contest.
Question: How well did the teacher attend to questions from the class?
Scale: He/she didn’t answer any questions / A few questions / Some questions / Most questions / All questions
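One advantage of descriptive scales like this is that the responses can be reported as a frequency profile rather than collapsed into a single “grade”. As a minimal sketch (the scale labels match the example above, but the responses are invented for illustration):

```python
from collections import Counter

# Descriptive scale from the example question above
SCALE = [
    "He/she didn't answer any questions",
    "A few questions",
    "Some questions",
    "Most questions",
    "All questions",
]

# Hypothetical responses from six students
responses = [
    "Most questions", "All questions", "Some questions",
    "Most questions", "Most questions", "All questions",
]

counts = Counter(responses)
total = len(responses)

# Report how often each behavior level was observed — a profile,
# not an averaged score
for label in SCALE:
    share = counts.get(label, 0) / total
    print(f"{label}: {share:.0%}")
```

The output tells the reader “the teacher answered most or all questions according to 5 of 6 students”, which keeps the descriptive character of the scale intact.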
Secondly, another criticized factor is that students evaluate teachers on a norm-based scale rather than a criterion-based one. In other words, students are asked to compare one teacher to another without any clear reference point. Feedback from a senior would likely differ from that of a sophomore, since they have different prior experience of teaching and teachers.
Question: How would you rank the teacher in this course?
Scale: Very poor/ Below Average/ Average / Above Average / Excellent
When it comes to what the questions should cover and how they should be phrased, there are a few things constructors of student feedback surveys tend to forget.
As one of the purposes of student evaluations is to measure how much students are actually learning, it’s important to ask students to self-evaluate their knowledge gains. According to Elizabeth Barre, the most important difference between validated assessment instruments and unvalidated evaluation instruments is that the validated ones ask students to reflect on their learning.
That means questions in line with:
– Did you learn this and that?
– How much did you learn?
– How did you learn it?
Questions covering an overall assessment of an entire course or a specific teacher are often seen in validated surveys despite having evaluative scales. Such questions are often seen as a good way of quantifying opinions, but they are also intensely debated in the scientific community.
Teaching Strategies and Bias Control
To get a sense of teaching effectiveness, it’s generally a good idea to ask about specific teaching behaviors and methods, and how frequently they occur.
Finally, to combat factors that might bias and influence the results, questions dealing with prior interest, course workload, why they took the course, how much effort they exerted, and so on are valuable during analysis.
Here’s a good example of how a student evaluation form should be built (again, according to Barre and her excellent work):
We see scales that are descriptive and criterion-based, not comparative and not evaluative. As Elizabeth Barre puts it herself:
All the students are doing is telling whoever is reading these forms that this teacher hardly ever related course material to real life, or this teacher always found ways to help students answer their own questions. And this is going to give us a sense of how often a teacher did behaviors that we think are desirable, which is much more useful than having a sense as to whether a student thinks a faculty member is ”excellent”
Analysis — The Tricky Part
In the final analysis of the student evaluation forms, there are a few things to think about.
When handling biases, you first need to decide which biases should be controlled for and which should be left untouched. You can find more info regarding bias here.
As discussed in more detail in the section on biases: discipline, class size, and student preconditions should all be accounted for in the presentation of results. Grades and workload are not biases and should therefore not be statistically adjusted for.
As far as comparisons go, teachers can normally be compared when they teach classes of similar size, in the same discipline, to students with similar interest, work attitude, and effort exerted. When these characteristics differ notably, the teachers should not be compared directly. To make comparisons even more objective, gender-related aspects, such as whether the teacher is male or female and the composition of the class, can be weighed into the breakdown of results.
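A minimal way to keep comparisons within comparable groups is to bucket results by discipline and a class-size band before averaging, so a large lecture is never compared directly with a small seminar. The data, the size threshold, and the grouping keys below are all invented for illustration:

```python
from collections import defaultdict
from statistics import mean

# Invented example data: (teacher, discipline, class_size, mean_rating)
results = [
    ("A", "math",    28, 3.9),
    ("B", "math",    31, 3.4),
    ("C", "math",   120, 3.1),  # large lecture: lands in its own group
    ("D", "history", 25, 4.2),
]

def size_band(n):
    """Bucket class sizes so only roughly similar classes are compared.
    The 40-student cutoff is an arbitrary illustrative choice."""
    return "small" if n < 40 else "large"

groups = defaultdict(list)
for teacher, discipline, size, rating in results:
    groups[(discipline, size_band(size))].append((teacher, rating))

# Only compare teachers inside the same (discipline, size band) group
for key, members in sorted(groups.items()):
    avg = mean(r for _, r in members)
    print(key, members, f"group mean: {avg:.2f}")
```

In this sketch, only teachers A and B end up in the same bucket, so only they are compared against each other; further characteristics from the text (student interest, effort exerted) could be added as extra keys in the same way.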
If other institutes use the same survey vendor, it might be possible to compare your results to other similar teachers. Which can be fun. Or scary.
The most important thing after all the hard work of collecting student opinions is, of course, to make effective changes to your routines. This can be anything from speaking louder to changing course material. Many small changes bring big improvements over time!