Case study: raising the quality and impact of feedback with screen-captured essay analysis videos

This case was created with Russell Stannard, Principal Teaching Fellow, Applied Linguistics, University of Warwick.

What, why, how? – Practitioner/Advisor Statement

My innovations were motivated by the challenge of making feedback (originally on written work, but now broader) more engaging, efficient and multi-modal (rather than just text-based), and therefore more inclusive. We need to motivate students to make use of feedback. This challenge is encountered across education generally.

We are now successfully addressing this, in a very well-received way, using a “screen-captured feedback” approach.

I was frustrated by this lack of engagement with feedback. Practice in this area was getting out of sync with other changes in HE, so I was deliberately looking for a solution. I first saw the technology in 2000, but it took six years to realise that it was easy and doable. I then started to experiment with Camtasia (while teaching at Westminster University). I carried out a research experiment with 11 Chinese students and was published in the Times Higher, and I won a Times Higher award for outstanding initiative in ICT. The work was used in a national student teaching report for the government. I developed new techniques through a community of practitioners that grew as a result of this publicity. The approach has also worked well in disciplines with a design element (engineering, arts, etc.).

Each screencast combines a recording of the student’s work being explored on screen with a commentary from the teacher. I now use the Jing software for this, but have also used Screencast-o-matic and Camtasia. I record myself viewing, correcting, commenting on and giving feedback on the student’s work on screen. The recording is exported as a video, the video is uploaded to a server, and a link is sent to the student. The feedback is personalised to the student.

The technique is used to address issues of deeper meaning and structure, allowing complicated feedback on these issues to be developed more easily. I tend not to use it to address superficial issues.

This approach is currently used by four academics in Applied Linguistics (almost a quarter of the department), with 50% of students. Based on research carried out in Norway, and the priority given to feedback in the NSS, this is of great significance to all students and to all departments.

I have evaluated the impact of the technique on teachers, and have found that it alters the style of feedback, with fewer ‘tick marks’ and more comments that justify the feedback given. However, it is possible to end up providing too much feedback. I have tried modifying the technique by limiting myself to five key comments per essay.

The response from students has been excellent, and engagement with feedback has clearly improved. This should produce a long-term change in how students use feedback.

The simplicity of the approach enables wider uptake, and the software is available for free. However, even with this technical simplicity, uptake depends upon teachers being able to establish a natural flow when speaking into the recording; some people struggle with this. Perhaps a purpose-designed screen-recording tool could help here?

The approach could also be improved if on-campus upload speeds were better. The use of Moodle (starting in 2013) might also simplify the workflow. In addition, when working with large cohorts the work can become repetitious, with many students requiring similar feedback; this is a constraint on wider adoption. It could be addressed with a simple mechanism for combining examples from different students, although the personal aspect of feedback is an important part of the technique’s value and would have to be preserved. US users have suggested flipping the technique for student reflection: self-review is an approach that can increase the impact of the technique while maintaining the personal aspect. Students are asked to give feedback on their own work reflexively (with training and guidance on how to review), and then to share their self-assessment.

Finally, wider adoption might be constrained by concerns regarding validation by external examiners. It is not clear how this would work at Warwick, as there is no official policy or guidance in place. In institutions where there is that official push (such as the OU), the approach is more widely embedded. It might be the case that Warwick centrally is keen on this, but people still aren’t confident enough to try something new. Official guidance and encouragement might enable wider adoption.

Academic Technology Advisor’s Analysis

Summary of the design change

This is a complex innovation, involving changes in hardware, software, individual and collective practice, and roles and relationships. A core existing practice, essay feedback, is transferred from its traditional media (notes handwritten on text, and less commonly one-to-one face-to-face communication) to an unfamiliar medium (online video). There are two significant transformations:

1. in the different temporal aspect of the acts of giving and receiving feedback (Russell recognises this as a desired and significant effect) – rather than briefly skipping around the text and the scribbled notes, the student has to stop and engage with each feedback point in a more directed, linear fashion.

2. by establishing, through the presence of the academic’s voice in the feedback, a stronger connection between the academic, their authority, the advice, specific points in the student’s text, the actions of the student, and their ideas/skills/learning.

What led to the design change?

A convergence of the innovator being immersed in the problem domain while keeping watch for changing hardware and software affordances and patterns that might be transferred and adapted to re-invent that domain.

How significant are its intended impacts?

This might be viewed as a superficial application of new gadgetry, but there’s much more to it than that. The shift in format transforms the acts of giving and receiving feedback, from an often light, shallow and inconsequential engagement (the student briefly looking for expected words to confirm their expectations) to a more focussed, deeper, directed engagement that holds the attention of the student. In the cases where the video-based technique is applied, the difference can be very significant. If this level of engagement were to transfer beyond these cases and the video medium, with the more focussed, deeper engagement style being used in written, oral and peer-to-peer feedback, then the significance would be even greater. If the approach were to be more widely adopted, with all students getting some exposure to it (not necessarily for all assignments), the positive impacts could be enormous.

How was it implemented?

The technique was invented quickly, once the hardware and software became available and their affordances were recognised by the innovator. It was an individual innovation-decision, with little dependency on supporting infrastructure and agencies. Any institutional structural constraints present had little impact on the invention and adoption by the innovator. The practice is very much about improving student-teacher interaction on a one-to-one basis. Its introduction into these relationships would have needed thought and care, along with some action to establish it as normative amongst peer groups.

How successful is the design change?

For the inventor, and his students, this has been very successful, and has become normal practice. Through national and international publicity and networking, it has been replicated by many others outside of the university. However, it might be that for the innovation to achieve its potential, and to transform attitudes and practice (by teachers and students) it needs to be used more widely and consistently.

How durable are its impacts?

Potentially life-long, if it does make a permanent change to attitudes and practices. It seems to scaffold behaviours that could become cognitively ingrained.

How transferable is the design?

Although “feedback” is widely regarded as a significant issue, there is less commonality as to the nature of the problem: does feedback need to be delivered more immediately? Is more feedback needed? Better quality feedback? Feedback that justifies the mark? Feedback that directs the student to specific improvements? There are many views on the issue. This approach deals with the “problem” of feedback when conceived in a specific way: students not using feedback effectively, and teachers not creating feedback that engages students and encourages them to use it effectively. The definition of the problem in this way is the starting point for the diffusion of this approach. However, there are then issues concerning its compatibility (or perceived compatibility) with existing practice, especially institutional rules, norms and expectations. The technology may itself also be perceived as too complex, when compared to existing practice. However, it is argued that once the teacher is set up with the required skills and facilities, it is simpler and quicker. This needs to be demonstrated and made widely observable. Although the software itself is free, and the hardware is in most cases already present, additional support is probably needed to make it easy for all teachers to try it out for themselves.

Academic Technology team actions

Wider understanding, consideration and, where appropriate, adoption would be eased by identifying the simplest, quickest technology solution, documenting it in a simple user guide and video, and showcasing it in person.