Author Archives: Dr Robert O'Toole NTF

About Dr Robert O'Toole NTF

Senior Academic Technologist, IT Services, University of Warwick. Fellow of the Higher Education Academy. National Teaching Fellow. Warwick Award for Teaching Excellence.

20 ways to use ResponseWare

The following activities may be conducted using ResponseWare:

1. Attendance registration – students, signed into the system with their username and password, respond to an attendance poll to indicate their presence in the room; the teacher sees a list of students who are present and a list of those who are not, and the data may then be analysed, downloaded, or uploaded through a VLE integration (e.g. Moodle).

2. Revision exercises – these may be run towards the end of a module, near exam time, or regularly throughout, often at the start of a lecture to revise what was covered in the previous one.

3. Introduction, ice-breaker, warm-up – an intensive series of questions at the start of a lecture, to get the students in the right frame of mind.

4. Maths and statistics – ResponseWare includes a ‘numeric response’ type of question, in which students must respond with the correct number, or within a specified range.

5. Enhancing student engagement by more frequently testing understanding and providing micro-feedback – this is the most common use of ResponseWare; the aim is to prevent the teacher’s actions and the students’ understanding from drifting apart, as the teacher can quickly judge whether the students have understood a topic and are ready to move on, and each student can recognise their own understanding and progress.

6. Peer learning – the teacher asks the students to discuss a question before, or sometimes after, they answer it, perhaps asking them to discuss their response with someone who has a different response (the method pioneered by Professor Eric Mazur of Harvard University).

7. Confidence-based testing – the teacher poses a question but does not immediately provide the answer; the students state how confident they are (via a Likert-scale question in ResponseWare); the teacher then reveals the answer and asks the students to reflect on the accuracy of their self-efficacy assessment (especially important in disciplines such as medical training, but also used in Economics, where it was pioneered by Dr Fabio Arico of UEA to address the Dunning-Kruger effect amongst students).

8. Rhetorical questioning – the teacher poses a question deliberately designed to highlight misconceptions, contentious points and false assumptions, leading to deeper investigation and discussion, and potentially to dispelling errors that prevent students from mastering threshold concepts.

9. Gathering creative responses – ResponseWare includes several mechanisms for gathering ideas from the audience, including a system that builds a word cloud from the responses.

10. Decision making – students make choices, either by straightforward voting on a list of options or by arranging options into ‘priority rankings’.

11. Crowdsourcing choices and definitions – questions and answers can easily be added during a session, meaning that we can gather ideas from students (for example, alternative definitions of a word), add them as questions, and get the class to vote on them (this has been used at Warwick in medical research).

12. Working against the clock – the teacher controls how long students have to respond to a question, either manually or with a countdown timer, thus encouraging the students to think fast; this can be combined with a rhetorical question to prompt students to respond intuitively, or on the basis of possibly unsound assumptions.

13. Enhancing student engagement through competitions – competitions may be between individuals, or the class may be divided (before or during the live session) into teams; points are awarded for correct answers, and scores are automatically compiled into a leaderboard.

14. Speed scoring – when the teacher records and tracks individual student performance, or polling is used within a competition, points scored may be adjusted for speed, so that faster responses gain higher scores.

15. Gathering instant student feedback – in addition to making inferences based upon responses to ordinary questions, the teacher may explicitly ask for feedback at any time during a lecture, simply by adding an anonymous feedback question.

16. Module evaluation – more comprehensive feedback surveys may be conducted quickly and efficiently at any time, combining the benefits of gathering feedback in-class (not later online) with the benefits gained by doing this in a digital (not paper) format.

17. Learner/learning analytics – with the ability to download data into Excel or Access (as CSV files) or upload it into the VLE, we may easily apply sophisticated analysis to data from individual or multiple lectures, looking at learning gain and other dimensions from an individual student or group perspective (a sketch of this kind of analysis follows this list).

18. Demographic learner/learning analytics – we can create demographic groupings before or during lectures and analyse responses accordingly; for example, if we are teaching students from two different disciplines, we can analyse the differences between their responses (also illustrated in the sketch below).

19. Evaluating the impact of specific teaching and learning activities – using the learning analytics potential of ResponseWare to evaluate the efficacy of specific teaching techniques and activities, testing (for example) the constructive alignment (Biggs) between Intended Learning Outcomes, Learner Activities and Assessment Activities.

20. Social, outreach and fun uses – we have also seen ResponseWare used in many contexts beyond the conventional lecture, as it is an easy-to-use and fun set of tools.
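
The analytics described in items 17 and 18 can be sketched in a few lines of code. This is a minimal sketch only: the file name and column layout below are assumptions made for illustration, not the actual format of a ResponseWare export.

```python
# Minimal sketch of the analytics in items 17 and 18.
# Assumed CSV columns (illustrative only): student_id, discipline,
# question, correct (1 = correct, 0 = incorrect).
import pandas as pd

responses = pd.read_csv("responses.csv")

# Item 17: per-student proportion of correct answers across one or
# more lectures, as a simple starting point for learning analytics.
per_student = responses.groupby("student_id")["correct"].mean()

# Item 18: demographic analysis, e.g. comparing students from two
# different disciplines taught in the same lecture.
by_discipline = responses.groupby("discipline")["correct"].mean()

print(per_student.sort_values(ascending=False).head())
print(by_discipline)
```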

What’s your approach to learning and teaching?

For a series of VR workshops that we are running next week, we want the participants to give us a very brief view of their approach to learning and teaching. This is an example of the kind of thing we are looking for, to put their responses to VR into context. The prompting questions are:

Briefly explain what you aim to get out of the teaching and/or learning that you do. What matters most about the design and implementation of teaching and learning? What values are important in guiding the choices you make in what you do and how you do it?

And an example response (by me as a student, although it could easily be recast as being about my approach to teaching):

I study what might be called “the philosophy of design” and “designerly practices applied to everyday life”. I’m very much motivated by wanting to improve the world, through helping people to work more effectively together in understanding their collective interests and shaping the things that they do. So I’m not a particularly career-minded or instrumental kind of learner. But I carefully choose what I engage in, so as to use my precious time and energy to find ideas and practices that will help me with what I do. I like some lectures – but only when they are really engaging and social. I don’t really like seminars, as I have always found them to be too short and too contrived. I like to formulate my ideas through writing, but am increasingly experimenting with other media, including diagrams, photography and video.

A problem with assessment in super-selective institutions

“As argued in Chapter 1, good teaching narrows the initial gap between Robert and Susan therefore producing a smaller spread of final grades than that predicted by the initial spread of ability. The distribution of results after good teaching should not be bell shaped but skewed, with high scores more frequent than low scores. At university level there is therefore every reason not to expect a bell curve distribution of assessment results in our classes.” Biggs & Tang (2011) Teaching for Quality Learning at University (4th edition), p.200
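
To make the quoted claim concrete, here is a minimal simulation with invented numbers: if good teaching moves a bell-shaped intake towards a mastery ceiling (lifting weaker students proportionally more), the spread of final marks narrows and the distribution becomes negatively skewed, with high scores more frequent than low scores.

```python
# Minimal simulation of Biggs & Tang's claim (all numbers invented).
import numpy as np

rng = np.random.default_rng(0)
initial = np.clip(rng.normal(60, 10, 10_000), 0, 100)  # bell-shaped intake
final = 100 * (initial / 100) ** 0.4  # concave gain towards a ceiling of 100

def skewness(x):
    return float(((x - x.mean()) ** 3).mean() / x.std() ** 3)

print(f"spread of marks: {initial.std():.1f} -> {final.std():.1f}")
print(f"skewness: {skewness(initial):+.2f} -> {skewness(final):+.2f}")
```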

In a super-selective university this is even more so. If we assume a high-quality intake, with a very narrow spread of capabilities, then the eventual attainment spread should be extremely narrow. When we compare a student who achieved 65% (student 1) with a student who achieved 80% (student 2), in reality that difference might mean very little. It might simply be the product of entirely extraneous variables or random events (student 1 having a cold during exam week, for example).

Unless we can demonstrate a difference in kind between the high achiever and the slightly lower achiever, the difference in marks is meaningless. It might be (and I think I see this happening) that academics invest a great deal in the identification and application of such differences in kind – “student 2 really got it, they have become a proper philosopher/physicist/economist”.

“The categories of honours (first class, upper second, lower second) originally suggested qualities that students’ work should manifest: a first was qualitatively different from an upper second, it was not simply that the first got more sums right.” ibid. p.210

But that then is also open to subjective biases. Biggs and Tang don’t really seem to have an answer to this. But they are very much entrapped by their strict adherence to definitive “intended learning outcomes” within the system of constructive alignment. Hussey and Smith’s alternative combination of ILOs and “emergent learning outcomes” within an “articulated curriculum” leaves room for student creative input, risk taking, genuine innovation, individuation and other (possibly) less determinate characteristics of learning as research/innovation/creativity. As such, the curriculum offers opportunities for more significant and transformative student input, and consequently aspects of student transformation-through-learning that can be meaningfully assessed and reported upon. Having experienced such learning activities, and achieved unforeseeable outcomes, the student is more likely to value and build upon their success. Thus the learning itself, and the transformation being evaluated, is a more reliable indicator of the student’s future capabilities. And that IS what we are looking for when we assess students in the university.

“The extent to which emergent learning outcomes (ELOs) contribute to the achievement of intended learning outcomes (ILOs) varies. Some emergent outcomes are relatively close to the intended learning outcomes and can be perceived to contribute directly towards their achievement. The contribution of others is less direct, being capable of inclusion on the basis of their contribution to the student’s knowledge of the subject in general, whilst the contribution of other emergent learning outcomes is to the field of studies in general and might be included on those terms. Yet other ELOs contribute to the overall development of the students as autonomous, self-managing learners, far beyond the field of study.” Hussey & Smith (2003) “The Uses of Learning Outcomes”, Teaching in Higher Education, Vol. 8, No. 3, pp. 357–368.

Understanding and shaping student engagement live and online

This lecture was originally created as part of the LDC APP PGR course, introducing postgraduate research students to teaching. I’ve done it in various forms now (from 30 minutes to 2 hours) and it is always really good. It is all about designing to gauge and shape behavioural, emotional and cognitive engagement. The longer version includes some advice about using online tools. This is put into the Warwick context, where most use of online tools is to “sustain and amplify” good classroom teaching – the Extended Classroom approach.

Peer-learning methods are used in the lecture, with ResponseWare, to illustrate how we can understand and shape student engagement. Simple MCQs are used in some cases, with and without set answers. Text-based responses to some questions are also used, with results presented as word clouds. There is also a numeric response question (about how often one should stop and prompt student thinking and discussion during a lecture) with a range specified as correct.

I’ve added notes to the slides to explain what I was doing and how I used ResponseWare.

Method: the VRIP matrix for analysing current and ideal tech adoption

Over the last few days I’ve been involved in yet another of the long, winding discussions that occur in the academic tech business concerning how a particular requirement is best satisfied, and what part institutional IT should play in providing a solution. In this case it is “video and audio streaming of live events” (yes, that one again). The debate is messy in two dimensions:

  1. “Live event” in an academic context covers a wide range of quite different things, from lectures through to physical theatre workshops.
  2. There’s a huge range of possible technology elements, including cameras, mics, encoders, streaming platforms, and add-ons to common tools (YouTube, Facebook), with a super-complex matrix of features and quality levels.

Fortunately we now have analytical tools that can help us – so long as they are used systematically. Here are some notes I have written to explain one such tool to colleagues.

The VRIP (visitor/resident, institutional/personal) matrix, as I’ve decided to call it so as to sound more impressive, emerged out of work done on digital capabilities and the visitor/resident model (Dave White, Helen Beetham, Alison Le Cornu, Lawrie Phipps, Donna Lanclos, James Clay and others). The idea is that a tool/technique can be placed onto the matrix to indicate the type of adoption and integration into institutional services. This can be used to show the situation as it is, or the ideal we think we should design and work towards. We can also indicate the extent of adoption by using a colour-coding scheme. Typically we map out the situation regarding a specific group (e.g. undergraduates).

The matrix can be used to graph the current situation – i.e. where most people currently are with a tool or technique. It may also be used to show the ideal degree and type of adoption and support.

At the institutional end of the matrix we place things that are, or should be, chosen, provided and supported by the institution. At the other end, we put things that are personally (or communally) chosen, owned and supported.

At the visitor end of the visitor/resident axis we put tools and techniques that people use less often and with which they never become familiar (they often have to relearn them each time they visit). At the resident end are the tools and techniques that they live and breathe all the time.
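
As a minimal sketch of how such a map might be drawn, here is one way to plot it; the tools, their positions on the two axes, and their adoption levels are all invented for illustration.

```python
# Minimal sketch of plotting a VRIP matrix with matplotlib.
# Tool names, positions and adoption levels are invented for illustration.
import matplotlib.pyplot as plt

# (name, visitor..resident 0-1, personal..institutional 0-1, adoption 0-1)
tools = [
    ("Lecture capture", 0.3, 0.9, 0.7),
    ("Personal YouTube channel", 0.8, 0.1, 0.3),
    ("Live event streaming", 0.2, 0.5, 0.1),
]

fig, ax = plt.subplots(figsize=(6, 6))
for name, vr, ip, adoption in tools:
    # Colour-code the extent of adoption (darker blue = more widely adopted).
    ax.scatter(vr, ip, s=300, color=(1 - adoption, 1 - adoption, 1.0))
    ax.annotate(name, (vr, ip), textcoords="offset points", xytext=(8, 8))

ax.set_xlim(0, 1)
ax.set_ylim(0, 1)
ax.set_xlabel("Visitor → Resident")
ax.set_ylabel("Personal → Institutional")
ax.set_title("VRIP matrix (illustrative data)")
plt.show()
```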

So where is “streaming a lecture” on the matrix, for specific groups of people? That’s an empirical question for which we haven’t really got an answer.

And where should it be? That’s a complex strategy question.

And how do we get to the point at which a tool/technique is in the right place on the matrix for the right people?

So this is the kind of sophisticated analysis that needs to be backed up by empirical investigation (of the kind I’m doing for VR at the moment).

Workshop: Design Thinking techniques for effective participation in the design process

I’ll be running this workshop on 14th March 2017 at 12.30, Oculus Building OC1.01, University of Warwick.

The University is a design-rich environment: we design courses, tools, technologies, spaces, organisations, publications and many other aspects of the educational process. Across all of these fields there is a desire to do designing more collaboratively. Effective participation by staff and students should ensure outcomes that fit more neatly with needs and capabilities, stick in use for longer, spread more widely, and grow our capabilities for further development.

Professional designers have developed a broad repertoire of techniques for ensuring effective participation. In this workshop, we will learn about and try out techniques that focus upon the language and dialogue used in design collaborations throughout the lifecycle of a design (right through to supporting its use once implemented).

Design Language

Designs are typically described and recognised using a remarkably limited and unexamined vocabulary. Building and critically assessing a richer shared vocabulary is essential for successful designing: it can counter unhelpful assumptions and cognitive biases, and is especially important when aiming for inclusive and universally accessible designs. Finding just the right words to describe an actual or possible aspect of a design may also unlock new possibilities or new pathways for investigation. Working with language in this way, bringing together all of the perspectives involved in the design process (including users), helps to ensure a higher level of engagement and a sense of ownership.

How do you and your collaborators describe your designs? How might that language be refined and enriched?

Design Patterns

A design pattern is a statement of a problem, plus a pattern of actions and interactions that addresses that problem, with links to related patterns. It is elaborated with information on its originating context (the concerns, values and problems out of which it arose) and with advice on implementation and customisation. The pattern is usually headed with a catchy and meaningful title. In some disciplines the inclusion of diagrams and images is considered essential; in education, this might be best achieved with a storyboard or even a video. All of these elements are intended to act as a guide to design activity and a prompt for thinking and prototyping.
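
To make that structure concrete, here is an invented example (not a pattern from the workshop), expressed as a simple structured template:

```python
# An invented example pattern, structuring the elements described above.
pattern = {
    "title": "Pause-and-Poll Checkpoint",  # a catchy, meaningful title
    "problem": "Students disengage during long stretches of exposition.",
    "actions": [
        "Pause the lecture and pose a question to the room.",
        "Students respond individually, then discuss in pairs.",
        "Reveal the spread of responses and discuss as a class.",
    ],
    "originating_context": "Large-group lectures; concern about passive listening.",
    "implementation_advice": "Repeat every 10-15 minutes; keep questions focused.",
    "related_patterns": ["Peer Learning", "Confidence-Based Testing"],
}
```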

Could you benefit from stating your design patterns more explicitly? Could that enable more objective and precise collaborative designing?