David Wiley has posed a little thought experiment that encourages reflection around levels of automation and “personalisation” within a University course. Judging by my Twitter stream it appears to have arisen out of a session or happening from the ELI conference. The experiment describes a particular teacher purpose, outlines four options for fulfilling that purpose, and offers a standard against which to consider those options.
It’s a thought experiment that connects to a practice of mine and the growing status quo around higher education (at least in Australia). It’s also generated some interesting responses.
I’d like to extend that experiment in order to
- Reflect on some of the practices I have engaged in.
- Highlight some limitations with the current practice of e-learning in Australian higher education.
- Point out a potential problem with one perceived future for e-learning (replace the teacher with technology).
First, it would be useful to read Wiley’s original (and short) thought experiment and the responses.
Types of extensions
There are a range of ways in which the original thought experiment could be extended or modified. I’ll be looking at the following variations
- Modify the teacher’s purpose. (The support extension)
In Wiley’s experiment the teacher is seeking to acknowledge success (score 80% or higher on an exam). Does a change in purpose impact your thinking?
- Clarify the context. (The inappropriate massification extension)
Does the nature and complexity of the educational context matter? Does it change your thoughts?
- Add or modify an option. (The personalisation extension)
Wiley gives four options ranging on a scale from manual/bespoke/human to entirely automated. Some of the comments on Wiley’s post offer additional options that vary the relationship between what is automated and what is manual/human, generally increasing the complexity of the automation to increase its level of “personalisation”. At what level does automation of personalisation become a problem? Why?
- Question the standard
The standard Wiley sets is that the students “receive a message ‘from their teacher’ and that students will interpret the messages as such”. In a world of increasingly digitally mediated experiences, does such a standard make sense?
- Change the standard each practice is being measured against. (The “connection not the message” extension).
The support extension
In Wiley’s experiment the purpose is stated as the faculty member deciding
that each time a student scores 80% or higher on an exam, she’ll send them an email congratulating them and encouraging them to keep up the good work
What if the purpose was to
Identify all those students who have not submitted an assignment by the due date and don’t already have an extension. Send each of those students an email asking if there’s a problem that she can help with.
This is the purpose for which I’ve recently developed and used an option similar to Wiley’s option #3.
Changing the purpose doesn’t appear to really change my thoughts about each of the options, if I use the standard from Wiley’s thought experiment
to ensure that students are actually receiving a message “from their teacher” and that students will interpret the messages as such.
With an option #3-like approach, it’s possible that students may not interpret the message as being “from their teacher”/personal. But that’s not sufficient reason for me to stop (more below).
But it does rule out an automation option suggested by @KateMfD
Email is bad enough, but faux email? Why not make them badges and be done?
A non-submission badge strikes me as problematic.
The inappropriate massification extension
Does the context within which the course is taught have any impact on your thinking?
The context in which I adopted option #3 was a course with 300+ students. About 160 of those students are online students. That is, they aren’t expected to attend a campus, and the geographic location of most means it would be impossible for them to do so. I’m directly responsible for about 220 of those students and responsible for the course overall. There are 2 other staff responsible for two different campus cohorts.
The course is 100% assignment based. All assignments are submitted via a version of the Moodle assignment submission activity that has been modified somewhat by my institution. For the assignment described in this post only 193 of 318 enrolled students had submitted assignments by the due date. Another 78 students had received extensions, meaning that 47 students had neither submitted by the due date nor received an extension.
The tool being used to manage this process does not provide any method to identify the 47 who haven’t submitted AND don’t have extensions. Someone needs to manually step through the 125 students who haven’t submitted and exclude those who have extensions.
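That filtering step is easy enough to script. A minimal sketch in Python, assuming three CSV exports each with a `student_id` column (the file and column names here are hypothetical, not the actual format of the institution’s Moodle reports):

```python
import csv

# Hypothetical sketch: identify students who neither submitted nor have
# an extension. Assumes each CSV export has a "student_id" column.
def students_to_contact(enrolled_csv, submitted_csv, extensions_csv):
    """Return the set of student ids with no submission and no extension."""
    def ids(path):
        with open(path, newline="") as f:
            return {row["student_id"] for row in csv.DictReader(f)}

    return ids(enrolled_csv) - ids(submitted_csv) - ids(extensions_csv)
```

A few lines of set arithmetic replaces manually stepping through 125 records, which is exactly the sort of bricolage described below.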
Having done that, the teacher is then expected to personally contact 47 different students? Many of whom the teacher will never meet face-to-face? Many of whom chose the online study option due to how well asynchronous learning fits their busy life and part-time study? Even though attempting to personally contact these 47 students is going to consume a significant amount of time?
Another problem is that the system provided by the institution doesn’t provide any choice other than to adopt Wiley’s option #1 (send them each an email). Not only does the system NOT support the easy identification of non-submit, no-extension students, it provides no support for sending a bulk email to each student within that category (or any other category).
In order to choose Wiley’s other options a teacher would have to engage in a bit of bricolage, just like I did. Which tends not to happen. As an example, consider that my course is a 3rd year course. The 300+ students in my course have been studying for at least 3 years in an online mode. Many of them for longer than that because they are studying part-time. They will typically have studied around 16 courses before starting my course. With that in mind, here’s what one student wrote in response to my adopting option #3
Thank you for contacting me in regards to the submission. You’re the first staff member to ever do that so I appreciate this a lot.
Does a teaching context that has seen significant massification unaccompanied by appropriate changes in support for both students and teachers make any difference in your thoughts? If the manual options are seen to take time away from supporting other (or all) students? What if the inappropriate massification of higher education means that the teacher doesn’t (and can’t) know enough personal information about (most of the) individual students to craft a meaningful, personal email message?
The personalisation extension
Wiley’s options and some of the responses tend to vary based on the amount of personalisation, and how much of the personalisation is done by a human or is automated.
A human manually checking the gradebook and writing an individual email to each student seems to strike some as more appropriate (more human?). Manually sending an email from a range of pre-written versions also may be ok. But beyond that, people appear to start to struggle.
What about the option suggested by James DiGioai
scripting the criterion matching step, which informs the teacher which students are above 80%, and pushes her to write bespoke messages for each matching student. She automates the tedious part of the task and let the teacher do the emotional work of connecting with and support her students.
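As a sketch, DiGioia’s split might look something like the following. The gradebook structure and the interactive prompt are my assumptions, not his code; the point is only where the automated/human line falls:

```python
PASS_MARK = 80  # Wiley's 80% criterion

def matching_students(gradebook):
    """Automated part: find students at or above the criterion.
    gradebook is assumed to be a dict of student name -> exam score."""
    return sorted(name for name, score in gradebook.items()
                  if score >= PASS_MARK)

def gather_messages(gradebook):
    """Human part: the teacher writes a bespoke message for each match."""
    for name in matching_students(gradebook):
        yield name, input(f"Message for {name} ({gradebook[name]}%): ")
```

The script removes the tedium of scanning the gradebook; the words sent to each student remain entirely the teacher’s.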
Is it the type of work that is automated that is important?
What about the apparent holy grail of many: automating the teacher out of the learning experience? Are we fearful that technology will replace teachers? Can technology replace teachers?
Or is it the case that technology can and should
replace many of the routine administrative tasks typically handled by teachers, like taking attendance, entering marks into a grading book
Which brings us back to the question: where do you draw the line?
Question the standard
Wiley’s standard is
our faculty member wants to ensure that students are actually receiving a message “from their teacher” and that students will interpret the messages as such.
The assumption being that there is significant value to the student in the teacher sending and being seen to send a message written specifically for the student. A value evident in some of the responses to Wiley’s post.
In this “digital era” does such a standard/value continue to make sense? @KateMfD suggests that in some cases it may not, but in Wiley’s original case it does
But an email of encouragement strikes me as a different kind of thing. It’s intended either to be a personal message, or to masquerade as one. Political campaigning, marketing, all the discourses that structure our lives, and that we justly dismiss as inauthentic, reach for us with the mimicry of personal communication. “Dear Kate” doesn’t make it so.
Is the “is there a problem? can I help?” message that I use in my context one that can be automated? After all, the purpose of the message is that I don’t know enough about the student’s reason for not submitting to personalise the message.
What if the massification of higher education means that the teacher doesn’t (and can’t) know enough about (most of) the students to craft a personal message? Alright to automate?
I have some anecdotal evidence to support this. I have been using options at or around Wiley’s 3rd option for years. An “email merge” facility was a feature we added to a system I designed in the early 2000s. It was one of the most used features, including use by teachers who were using a different system entirely. This facility mirrored the functionality of a “mail merge”: you could insert macros in a message that would be replaced with information personal to each individual.
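At its core the macro idea is just template substitution. A minimal sketch of such a facility (the template syntax and field names are illustrative, not those of the original system):

```python
from string import Template

# Illustrative template: $-prefixed macros are replaced per student.
TEMPLATE = Template(
    "Hi $first_name,\n\n"
    "How's $course going for you so far? "
    "Let me know if there's anything I can help with.\n"
)

def merge(template, students):
    """Produce one personalised message per student record (a dict)."""
    return [template.substitute(s) for s in students]
```

Trivial to implement, which is part of the point: the barrier to this kind of practice is institutional systems, not technical difficulty.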
One example of how I used it was a simple “how’s it going” message that I would send out at key points of the semester. One response I received from a student (which I’m sure I’ve saved somewhere, but can’t find) was along the lines of “I know this is being sent out as a global email, but it still provides a sense of interaction”.
Suggesting that at least for that student there was still value in the message, even though they knew I didn’t hand craft it.
The “connection not the message” extension
Which brings me to my last point. The standard for Wiley’s thought experiment is based on the value of the message being and being seen to be a personal message to the student. That’s not the standard or the value that I see for my practices.
For what it’s worth I think that the “7 Principles of Good Practice for Undergraduate Education” from Chickering and Gamson (1997) are an ok framework for thinking about learning and teaching. The first of their 7 principles is
- Encourages Contact Between Students and Faculty
Frequent student-faculty contact in and out of classes is the most important factor in student motivation and involvement. Faculty concern helps students get through rough times and keep on working
The standard I use is whether or not my practices encourage contact between my students and me. Does it create a connection?
Whether or not the students see the message I sent as being personally written for them is not important. It’s about whether or not it encourages them to respond and helps a connection form between us.
In the case of the not submitted, no extension students I’m hoping they’ll respond, explain the reason they haven’t submitted, and provide an opportunity for me to learn a little more about the problems they are having.
While I haven’t done the analysis, anecdotally I know that each time I send out this email I get responses from multiple students. Most, but not all, respond.
For me, this standard is more important than the standard in Wiley’s thought experiment. It’s also a standard against which, my personal experience suggests, moving further up Wiley’s options is okay.
It’s also a standard which argues against the complete automation of the personalisation process. The reasons why students haven’t submitted their assignment, and the interventions that may be needed and appropriate, tend to represent the full richness and variety of the human condition. The type of richness and variety that an automated system can’t (currently?) handle well.