Rate My Tweet: Understanding Comparative Judgement in the Wild
Abstract
Marking and feedback are an essential part of teaching and learning. For students to improve, they need to receive feedback, and for students to receive feedback, teachers need to mark their work. Marking takes teachers considerable time and imposes a significant cognitive load. An alternative approach to marking, adaptive comparative judgement (ACJ), has therefore been proposed in the educational space. ACJ derives from the law of comparative judgement (LCJ), a pairwise method that compares and ranks items. While some studies suggest that ACJ is highly reliable, accurate, and quick for teachers, others have questioned this claim, suggesting that the adaptive nature of the process can bias the results. Studies have also found that ACJ can make the overall marking process take longer than more traditional marking methods. At the same time, current ACJ applications offer little support for personalised feedback to individual students.
We therefore propose an alternative ranking system, the Elo rating system, to rank the outcomes of the comparative judgement marking approach. In doing so, we aim to reduce teachers' cognitive load, reduce the time required to mark, and ultimately provide personalised feedback to students using NLP techniques.
We experimented with tweets on the topic of Brexit, asking users which tweets they found funnier. The findings show that the Elo system is a suitable system for ranking the tweet outcomes. The NLP feedback process, however, did not have the desired positive impact, although its results provide good building blocks for future experiments.
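As a minimal sketch of the Elo ranking mentioned above (assuming the standard Elo formulation with a conventional 400-point scale and a generic step size $K$; the exact parameters used in this work are not specified here), each pairwise judgement between items $A$ and $B$ updates the rating of $A$ as

\[
E_A = \frac{1}{1 + 10^{(R_B - R_A)/400}}, \qquad R_A \leftarrow R_A + K\,(S_A - E_A),
\]

where $R_A$ and $R_B$ are the current ratings, $S_A \in \{0, 1\}$ is the judgement outcome for $A$ (1 if $A$ is preferred, 0 otherwise), and $E_A$ is the expected outcome; $B$ is updated symmetrically. Repeated pairwise judgements thus yield a ranking without requiring every pair to be compared.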