Surveys serve a multitude of purposes within education, giving students, teachers, and parents a chance to share their opinions in a format that has been familiar since the 1920s.
The collected information is, among other things, meant to form a framework upon which teacher improvement is built. Improvements are then implemented and the cycle begins again. In theory, this sounds like a reasonable way of working. In reality: not so much.
Let’s look at the average teacher feedback survey.
Most contain about two pages of multi-faceted questions written in strict academic language in order to elicit valid responses. Each question is followed by a 5- or 7-point scale that is supposed to represent every possible opinion there is to have on the matter.
Basically, the information surveys can provide is:
“Student X thinks the last semester was 3/5 good”
“Student X thinks his professor was 2/5 knowledgeable in his field“
“Student X thinks the course material was 4/5 good”
What closed-ended survey responses really have going for them is the fact that they are numbers-based. It’s easy to apply all sorts of statistical tools and to track differences over time. Computers also happen to be numbers-based, and thus make great friends with data that can be represented by a number.
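To make that concrete, here is a minimal sketch of the kind of number-crunching Likert data invites. The ratings are made-up illustrations, not real survey results:

```python
from statistics import mean, stdev

# Hypothetical 1-5 ratings for the same question across two semesters.
fall = [3, 4, 2, 5, 3, 4, 3]
spring = [4, 4, 3, 5, 4, 5, 4]

def summarize(scores):
    """Return the rounded mean and standard deviation of a batch of ratings."""
    return round(mean(scores), 2), round(stdev(scores), 2)

fall_mean, fall_sd = summarize(fall)
spring_mean, spring_sd = summarize(spring)

# Numbers make comparison over time trivial: did the average rating move?
print(f"fall:   {fall_mean} ± {fall_sd}")
print(f"spring: {spring_mean} ± {spring_sd}")
print(f"change: {round(spring_mean - fall_mean, 2)}")
```

A dozen lines and you have trends, spreads, and semester-over-semester deltas. What you still don't have is the *why* behind any of those numbers.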
So what’s the matter then?
Qualitative data matters!
If teachers are to be able to use feedback as guidance on how to improve methods and skills, we need to dig a lot deeper.
Mining valuable information from Likert-scale responses is hard. Other than trying to ‘read between the lines’ (i.e. guessing) to understand why the student answered the way they did, you’re left with few options. The standard way of getting more in-depth feedback is to either replace or complement your closed-ended questions with open-ended ones.
So now, in addition to knowing that Student X thinks the course material was 4/5 good, we hopefully also know why: it filled the gaps from the lectures well, was very relevant, and had an ideal level of complexity. What was missing was a thorough explanation of quadratic polynomials.
Now, that’s the kind of information we can use to make things better.
The truth is that open-ended responses are what backs improvement, fuels progression, and shows us why a student, parent, or teacher thinks a certain way. Scholars are in strong agreement regarding the effectiveness of open-ended feedback (1, 2, 3, 4, 5, 6).
Unfortunately, there are a few issues that come with open-ended feedback as well.
Filling out surveys is one of the most boring tasks a student can take on. They’ll preferably spend as little time on it as possible, and may not always supply the most valuable data to the teacher.
In the (odd) case that students do supply lots of valuable information, analyzing comments can add considerable strain to an already busy teaching schedule.
It’s not uncommon for introductory-level courses in college to contain 150–300 students. Just imagine analyzing feedback from 300 students several times per year – can you feel your hair turning gray yet?
Replacing surveys with an intelligent chatbot would not only make for a more engaging experience for students. It would also be able to dig considerably deeper than regular open-ended comments by asking follow-up questions and finding out why the student answered the way they did. Basically, it’s a compromise between a survey and a personal interview.
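The core idea of probing a rating with a follow-up can be sketched in a few lines. This is a toy rule-based version under my own assumptions, not how Hubert actually works:

```python
def follow_up(topic, rating):
    """Pick a probing follow-up question based on a 1-5 rating.

    The thresholds and question wording are illustrative assumptions.
    """
    if rating <= 2:
        return f"Sorry to hear that. What about the {topic} fell short?"
    if rating >= 4:
        return f"Great! What about the {topic} worked well for you?"
    return f"What would move the {topic} from okay to great?"

# A low rating triggers a "why" probe instead of ending the conversation.
print(follow_up("course material", 2))
print(follow_up("course material", 5))
```

Even this crude branching turns a bare "2/5" into an invitation to explain, which is exactly the data a closed-ended form never captures.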
Garbage in, garbage out is something that truly applies to student feedback surveys.
Deep-Learning Text Analytics
Pair your new way of collecting qualitative data with a way of quickly categorizing and analyzing it, and we’re starting to feel like we’re back in 2018 again.
Text analytics is developing at a faster rate than ever before and is about to revolutionize the way we handle large amounts of written comments. Hotels already use it to dig through heaps of customer reviews to find recurring areas that could be improved upon. So why can’t colleges and professors use it?
It’s reasonable to believe that teachers do not always have time to read every comment and are therefore missing out on tons of valuable information. Technology is making it possible to analyze hundreds of comments and compile them down to a few short action points with the highest impact on student learning.
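The simplest form of this is surfacing the themes that come up most often across a pile of comments. Here is a minimal keyword-counting sketch; real systems use deep-learning models rather than keyword lists, and the comments and theme keywords below are invented for illustration:

```python
import re
from collections import Counter

# Hypothetical open-ended student comments.
comments = [
    "More examples in lectures would help.",
    "The pace was too fast near the end.",
    "Loved the examples, but the pace was brutal.",
    "Office hours were great; lectures need more examples.",
]

# Assumed theme -> keyword mapping; a real system would learn these.
THEMES = {
    "examples": ["example", "examples"],
    "pace": ["pace", "fast", "slow"],
    "lectures": ["lecture", "lectures"],
}

def theme_counts(texts):
    """Count how many comments mention each theme at least once."""
    counts = Counter()
    for text in texts:
        words = set(re.findall(r"[a-z]+", text.lower()))
        for theme, keywords in THEMES.items():
            if words & set(keywords):
                counts[theme] += 1
    return counts

# Most-mentioned themes first: candidate action points for the teacher.
for theme, n in theme_counts(comments).most_common():
    print(f"{theme}: mentioned in {n} comments")
```

Ranking themes this way is what compresses 300 comments into a short, prioritized to-do list instead of an evening of reading.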
Now, it’s time to bring this solution to classrooms and hubert.ai is leading the way. In our free beta version, Hubert (our chatbot) asks students what should start, stop and continue in the classroom and can probe even deeper by asking follow-up questions. Sign up here to try.
Along with these technologies comes a range of exciting next step possibilities such as:
- Making it possible to share and recommend successful methods and practices between teachers of similar classes.
- Cross-referencing feedback with research findings to find scientifically backed recommendations on how to improve teaching further.
- Automatically censoring insults and personal attacks on the teacher, if present.
- Decreasing the number of questions if the student already covered them in a previous response.
- Introducing fast and qualitative evaluations to help teachers improve throughout the whole semester.
- And so on.
I’m certain that these technologies will help schools develop teachers in a way that is both more effective and less work-intensive. The development in this area is extremely exciting and I look forward to continuing to follow it first-hand.
Article originally posted at: https://blog.hubert.ai/how-chatbots-and-text-analytics-will-replace-surveys-in-education/