Extending a little thought experiment

David Wiley has posed a little thought experiment that encourages reflection around levels of automation and “personalisation” within a University course. Judging by my Twitter stream it appears to have arisen out of a session or happening from the ELI conference. The experiment describes a particular teacher purpose, outlines four options for fulfilling that purpose, and offers a standard against which to consider those options.

It’s a thought experiment that connects to a practice of mine and the growing status quo around higher education (at least in Australia). It’s also generated some interesting responses.

I’d like to extend that experiment in order to

  1. Reflect on some of the practices I have engaged in.
  2. Highlight some limitations with the current practice of e-learning in Australian higher education.
  3. Point out a potential problem with one perceived future for e-learning (replace the teacher with technology).

First, it would be useful to read Wiley’s original (and short) thought experiment and the responses.

Types of extensions

There are a range of ways in which the original thought experiment could be extended or modified. I’ll be looking at the following variations

  1. Modify the teacher’s purpose. (The support extension)
    In Wiley’s experiment the teacher is seeking to acknowledge success (score 80% or higher on an exam). Does a change in purpose impact your thinking?
  2. Clarify the context. (The inappropriate massification extension)
    Does the nature and complexity of the educational context matter? Does it change your thoughts?
  3. Add or modify an option. (The personalisation extension)
    Wiley gives four options ranging on a scale from manual/bespoke/human to entirely automated. Some of the comments on Wiley’s post offer additional options that vary the relationship between what is automated and what is manual/human, generally increasing the complexity of the automation to increase its level of “personalisation”. At what level does automation of personalisation become a problem? Why?
  4. Question the standard
    The standard Wiley sets is that the students “receive a message ‘from their teacher’ and that students will interpret the messages as such”. In a world of increasingly digitally mediated experiences, does such a standard make sense?
  5. Change the standard each practice is being measured against. (The “connection not the message” extension).

The support extension

In Wiley’s experiment the purpose is stated as the faculty member deciding

that each time a student scores 80% or higher on an exam, she’ll send them an email congratulating them and encouraging them to keep up the good work

What if the purpose was to

Identify all those students who have not submitted an assignment by the due date and don’t already have an extension. Send each of those students an email asking if there’s a problem that she can help with.

This is the purpose for which I’ve recently developed and used an option similar to Wiley’s option #3.

Changing the purpose doesn’t appear to really change my thoughts about each of the options, if I use the standard from Wiley’s thought experiment

to ensure that students are actually receiving a message “from their teacher” and that students will interpret the messages as such.

With an option #3-like approach, it’s possible that students may not interpret the message as being “from their teacher”/personal. But that’s not a sufficient reason for me to stop (more below).

But it does rule out an automation option suggested by @KateMfD

Email is bad enough, but faux email? Why not make them badges and be done?

A non-submission badge strikes me as problematic.

The inappropriate massification extension

Does the context within which the course is taught have any impact on your thinking?

The context in which I adopted option #3 was a course with 300+ students. About 160 of those students are online students. That is, they are never expected to attend a campus, and the geographic location of most means it would be impossible for them to do so. I’m directly responsible for about 220 of those students and responsible for the course overall. There are two other staff responsible for two different campus cohorts.

The course is 100% assignment based. All assignments are submitted via a version of the Moodle assignment submission activity that has been modified somewhat by my institution. For the assignment described in this post, only 193 of 318 enrolled students had submitted by the due date. Of the 125 who hadn’t, 78 had been granted extensions, leaving 47 students who had neither submitted nor received an extension.

The tool being used to manage this process does not provide any method to identify the 47 who haven’t submitted AND don’t have an extension. Someone needs to manually step through the 125 students who haven’t submitted and exclude those who have extensions.
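Automating that identification step doesn’t require much. Here’s a minimal sketch in Python (my actual scripts are Perl, but the idea is the same). It assumes two hypothetical CSV exports, one of submissions and one of extensions; the file and column names are illustrative assumptions, not anything Moodle actually produces.

import csv

def load_rows(filename):
    # Read a CSV export into a list of dictionaries, one per row.
    with open(filename, newline="") as f:
        return list(csv.DictReader(f))

# Hypothetical exports: one row per enrolled student, one row per granted extension.
submissions = load_rows("submissions.csv")
extensions = {row["email"] for row in load_rows("extensions.csv")}

# Students who haven't submitted...
not_submitted = [s for s in submissions if s["submitted"].lower() == "no"]
# ...and who don't already have an extension.
needs_contact = [s for s in not_submitted if s["email"] not in extensions]

print(len(not_submitted), "students have not submitted")
print(len(needs_contact), "of those have no extension")
for student in needs_contact:
    print(student["name"], student["email"])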

Having done that the teacher is then expected to personally contact 47 different students? Many of whom the teacher will never meet face-to-face? Many of whom chose the online study option due to how well asynchronous learning fits their busy life and part-time study? Even though attempting to personally contact these 47 students is going to consume a significant amount of time?

Another problem is that the system provided by the institution offers no choice other than to adopt Wiley’s option #1 (send them each an email). Not only does the system NOT support the easy identification of non-submit, no extension students, it provides no support for sending a bulk email to each student within that category (or any other category).

In order to choose Wiley’s other options a teacher would have to engage in a bit of bricolage, just as I did. That tends not to happen. As an example, consider that mine is a 3rd-year course. The 300+ students in my course have been studying in an online mode for at least three years, many of them for longer because they are studying part-time. They will typically have studied around 16 courses before starting mine. With that in mind, here’s what one student wrote in response to my adoption of option #3

Thank you for contacting me in regards to the submission. You’re the first staff member to ever do that so I appreciate this a lot.

Does a teaching context that has seen significant massification unaccompanied by appropriate changes in support for both students and teachers make any difference in your thoughts? If the manual options are seen to take time away from supporting other (or all) students? What if the inappropriate massification of higher education means that the teacher doesn’t (and can’t) know enough personal information about (most of the) individual students to craft a meaningful, personal email message?

The personalisation extension

Wiley’s options and some of the responses tend to vary based on the amount of personalisation, and how much of the personalisation is done by a human or is automated.

A human manually checking the gradebook and writing an individual email to each student seems to strike some as more appropriate (more human?). Manually sending an email chosen from a range of pre-written versions also may be ok. But beyond that, people appear to start to struggle.

What about the option suggested by James DiGioia?

scripting the criterion matching step, which informs the teacher which students are above 80%, and pushes her to write bespoke messages for each matching student. She automates the tedious part of the task and let the teacher do the emotional work of connecting with and support her students.
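In code, DiGioia’s division of labour might look something like the following sketch. The gradebook data and the send_message helper are hypothetical placeholders rather than any particular system’s API; the point is that the script does the matching and the teacher still writes every message.

# A sketch of the "automate the tedious part" option: the script matches
# students against the 80% criterion, the teacher writes each message.
gradebook = [
    {"name": "Alex", "email": "alex@example.com", "exam": 92},
    {"name": "Sam", "email": "sam@example.com", "exam": 74},
    {"name": "Jo", "email": "jo@example.com", "exam": 85},
]

def send_message(email, body):
    # Placeholder: in practice this would hand off to an institutional mail system.
    print("Sending to", email)
    print(body)

for student in gradebook:
    if student["exam"] >= 80:
        print(f"{student['name']} scored {student['exam']}%.")
        # The emotional work of connecting stays with the teacher.
        body = input("Type a personal message (press Enter to skip): ")
        if body:
            send_message(student["email"], body)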

Is it the type of work that is automated that is important?

What about the apparent holy grail for many: automating the teacher out of the learning experience? Are we fearful that technology will replace teachers? Can technology replace teachers?

Or is it the case that technology can and should

replace many of the routine administrative tasks typically handled by teachers, like taking attendance, entering marks into a grading book

Bringing us back to the question: where do you draw this line?

Question the standard

Wiley’s standard is

our faculty member wants to ensure that students are actually receiving a message “from their teacher” and that students will interpret the messages as such.

The assumption being that there is significant value to the student in the teacher sending and being seen to send a message written specifically for the student. A value evident in some of the responses to Wiley’s post.

In this “digital era” does such a standard/value continue to make sense? @KateMfD suggests that in some cases it may not, but in Wiley’s original case it does

But an email of encouragement strikes me as a different kind of thing. It’s intended either to be a personal message, or to masquerade as one. Political campaigning, marketing, all the discourses that structure our lives, and that we justly dismiss as inauthentic, reach for us with the mimicry of personal communication. “Dear Kate” doesn’t make it so.

Is the “is there a problem? can I help?” message that I use in my context one that can be automated? After all, the message exists precisely because I don’t know enough about the student’s reason for not submitting to personalise it.

What if the massification of higher education means that the teacher doesn’t (and can’t) know enough about (most of) the students to craft a personal message? Alright to automate?

I have some anecdotal evidence to support this. I have been using options at or around Wiley’s 3rd option for years. An “email merge” facility was a feature we added to a system I designed in the early 2000s. It was one of the most used features, including use by teachers who were using a different system entirely. This facility mirrored the functionality of a “mail merge”, where you could insert macros in a message that would be replaced with information personal to each individual.
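Stripped of the surrounding system, the core of such an email merge is tiny. The following Python sketch substitutes per-student macros into a template and sends the result; the SMTP server, sender address and field names are assumptions for illustration, not what the original system used.

import smtplib
from email.message import EmailMessage
from string import Template

# Macros ($first_name, $week) are replaced with information personal to each student.
TEMPLATE = Template(
    "Hi $first_name,\n\n"
    "We're now up to week $week. How's it going? "
    "Is there anything I can help with?\n\n"
    "David"
)

# Hypothetical student data; in practice this would come from the LMS.
students = [
    {"first_name": "Kim", "email": "kim@example.com", "week": "6"},
    {"first_name": "Lee", "email": "lee@example.com", "week": "6"},
]

# Assumes a local SMTP server is available; sender address is illustrative.
with smtplib.SMTP("localhost") as smtp:
    for student in students:
        msg = EmailMessage()
        msg["From"] = "teacher@example.edu"
        msg["To"] = student["email"]
        msg["Subject"] = "How's it going?"
        msg.set_content(TEMPLATE.substitute(student))
        smtp.send_message(msg)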

One example of how I used it was a simple “how’s it going” message that I would send out at key points of the semester. One response I received from a student (which I’m sure I’ve saved somewhere, but can’t find) was along the lines of “I know this is being sent out as a global email, but it still provides a sense of interaction”.

Suggesting that at least for that student there was still value in the message, even though they knew I didn’t hand craft it.

The “connection not the message” extension

Which brings me to my last point. The standard for Wiley’s thought experiment is based on the value of the message being and being seen to be a personal message to the student. That’s not the standard or the value that I see for my practices.

For what it’s worth I think that the “7 Principles of Good Practice for Undergraduate Education” from Chickering and Gamson (1997) are an ok framework for thinking about learning and teaching. The first of their 7 principles is

  1. Encourages Contact Between Students and Faculty
    Frequent student-faculty contact in and out of classes is the most important factor in student motivation and involvement. Faculty concern helps students get through rough times and keep on working

The standard I use is whether or not my practices encourage contact between my students and me. Do they create a connection?

Whether or not the students see the message I sent as being personally written for them is not important. It’s about whether or not it encourages them to respond and helps a connection form between us.

In the case of the not submitted, no extension students I’m hoping they’ll respond, explain the reason they haven’t submitted, and provide an opportunity for me to learn a little more about the problems they are having.

While I haven’t done the analysis, anecdotally I know that each time I send out this email I get responses from multiple students. Most, but not all, respond.

For me, this standard is more important than the standard in Wiley’s thought experiment. It’s also a standard which, my personal experience suggests, makes moving further up Wiley’s options okay.

It’s also a standard which argues against the complete automation of the personalisation process. The reasons why students haven’t submitted their assignment, and the interventions that may be needed and appropriate, tend to represent the full richness and variety of the human condition. The type of richness and variety that an automated system can’t (currently?) handle well.

 

Why is e-learning like teenage sex and what can be done about it?

There are two versions of this talk.

  1. A joint presentation that Professor Peter Albion and I gave in May 2015 at USQ.

    This version of the talk combines and builds upon ideas from two papers co-written with Damien Clark and Amanda Heffernan.

    Update: The video of the session is available. Talk starts at about 4m30s.

  2. A solo presentation that I gave at CSU as part of a workshop on Learning Technology Innovation.

CSU version

USQ version

Video will be made available eventually.

Slides

Abstract

The implementation of e-learning – defined by the OECD (2005) as the use of information and communications technology (ICT) to support and enhance learning and teaching – in universities has a problem. A problem perhaps best summed up by Professor Mark Brown (Laxon, 2013)

E-learning’s a bit like teenage sex. Everyone says they’re doing it but not many people really are and those that are doing it are doing it very poorly. (n.p.)

E-learning’s teenage sex problem is apparent at USQ with the perception that some academic staff are not as engaged with the use of learning technologies as they perhaps could be (Sankey, 2015).

This is not a new problem. In a paper written over 20 years ago, Geoghegan (1994) sought to explain why a three-decade-long “vision of pedagogical utopia” (n.p.) promised by instructional technologies had failed to eventuate. Given that “Australian universities have made very large investments in corporate educational technologies” (Holt et al., 2013, p. 388) it would appear increasingly important to understand and address e-learning’s on-going teenage sex problem.

This session will discuss and demonstrate both practical and theoretical perspectives of and solutions to the problem. The practical approaches and tools to be demonstrated have been applied successfully within USQ by individual and small groups of academics. Similar approaches and tools have also been used at CQUniversity to develop a strategic, learning analytics-enabled, student retention project.

The session will argue that the dominant deficit model of academic staff – perhaps best illustrated by the suggestion from the 2014 Horizon Report for Higher Education (Johnson et al, 2014) that the low digital fluency of faculty was the most significant challenge impeding higher education technology adoption – is less than helpful. Instead, the session will argue that e-learning’s teenage sex problem arises from an inappropriate mindset, and a limited conception of knowledge and learning. The session will demonstrate how a different mindset and conception of knowledge and learning can help address e-learning’s on-going teenage sex problem.

The session will build upon ideas from two earlier papers (Jones and Clark, 2014; Jones, Heffernan and Albion, 2015)

References

Geoghegan, W. (1994). Whatever happened to instructional technology? Paper presented at the 22nd Annual Conference of The International Business Schools Computing Association. Baltimore, MD.

Holt, D., Palmer, S., Munro, J., Solomonides, I., Gosper, M., Hicks, M., Sankey, M., Allan, G., & Hollenbeck, R. (2013). Leading the quality management of online learning environments in Australian higher education. Australasian Journal of Educational Technology, 29(3), 387-402. Retrieved from http://www.ascilite.org.au/ajet/submission/index.php/AJET/article/view/84

Johnson, L., Adams Becker, S., Estrada, V., & Freeman, A. (2014). NMC Horizon Report: 2014 Higher Education Edition. Austin, Texas. Retrieved from http://www.nmc.org/publications/2014-horizon-report-higher-ed

Jones, D., Heffernan, A., & Albion, P. R. (2015). TPACK as shared practice: Toward a research agenda. In D. Slykhuis & G. Marks (Eds.), Proceedings of Society for Information Technology & Teacher Education International Conference 2015 (pp. 3287-3294). Las Vegas, NV: AACE. Retrieved from http://www.editlib.org/p/150454/

Jones, D., & Clark, D. (2014). Breaking BAD to bridge the reality/rhetoric chasm. In Rhetoric and Reality: Critical perspectives on educational technology. Proceedings ascilite Dunedin 2014 (pp. 262-272). Dunedin. Retrieved from http://ascilite2014.otago.ac.nz/files/fullpapers/221-Jones.pdf

Laxon, A. (2013, September 14). Exams go online for university students. The New Zealand Herald.

OECD. (2005). E-Learning in Tertiary Education: Where do we stand? Paris, France: Centre for Educational Research and Innovation, Organisation for Economic Co-operation and Development. Retrieved from http://www.oecd-ilibrary.org/education/e-learning-in-tertiary-education_9789264009219-en

Sankey, M. (2015). Train a teacher in the way s/he should go and s/he will not depart… Retrieved April 10, 2015, from http://www.linkedin.com/pulse/train-teacher-way-she-should-go-depart-michael-sankey

Where does the LMS sit in the reusability paradox

This post continues the adaptation of the original work of David Wiley around the reuse and remixing of open content and applying that knowledge to the LMS and other institutional e-learning systems and practices. The idea is that explicitly ignoring the distinction between the “content” and the digital systems (and perhaps also the physical equipment) that are used in contemporary learning/teaching spaces is useful in identifying problems with current practice and identifying alternatives.

The Reusability Paradox

The inverse relationship between reusability and pedagogical effectiveness

The graph above represents “The Reusability Paradox” from David Wiley. Developed in the context of learning objects, the paradox proposes that there is an inverse relationship between the reusability of a learning object and its pedagogical effectiveness. That is, the more easily you can re-use it in different courses, the less impact it will have on student learning (and vice versa).

Wiley argues that this paradox arises because “humans make meaning by connecting new information to that which they already know”. The more elaborate the connections that a learning object has to my context, the easier it is for me to see and make connections with it. It’s easier for me to learn. However, those more elaborate connections make it more difficult to take that learning object and use it in another context. Those elaborate connections don’t make sense in a different context, they cause confusion.

Thus to make a learning object portable, you have to minimise those elaborate, context-specific connections. You end up with a vanilla or standard object that is usable in more contexts. The cost, however, is that it’s now harder for the human being to make a connection to that learning object. They have to do much more work to connect the object to their existing context and knowledge. They have to do much more work to learn.

What’s good for “open content” is good for the LMS

My last post sought to apply Wiley’s 5Rs Framework to the LMS. The aim here is to explore what might be revealed by applying the Reusability Paradox to the LMS.

The Learning Management System (LMS) is designed to be general. To be reusable across different institutions and people. For example, the Moodle LMS is described as

Powering tens of thousands of learning environments globally, Moodle is trusted by institutions and organisations large and small, including Shell, London School of Economics, State University of New York, Microsoft and the Open University. Moodle’s worldwide numbers of more than 65 million users across both academic and enterprise level usage makes it the world’s most widely used learning platform.

The Reusability Paradox would imply that in order to achieve this level of successful reuse, the LMS must be focusing a bit more on reusability than pedagogical effectiveness. It would imply that, at the level of individual learners and teachers, there should exist some difficulties in making connections. The learners and teachers must be engaged in some additional effort to connect to and learn with the LMS. It doesn’t take a lot of looking to find evidence of this. At the institutional level there will be training sessions run to help people understand the system and overcome the gap between what they’d like to do and what the system can do. At a more invisible level are the ad hoc social connections linking people who aren’t quite as technically literate (able to connect with the general tool) with the sprinkling of technically literate people – every academic organisational unit has at least one of these.

More recently you can see evidence of code being written by people to make these connections. Some recent examples include:

  1. @palbion’s creation of a Greasemonkey script last weekend to add important functionality to the Moodle assignment module.
  2. The 10 (so far) Perl scripts I use to manipulate Moodle and other institutional systems to achieve the learning outcomes I want with my course.

    Including those required to implement the process analytics I’ve added to my course.

  3. The work @damoclarky has done with MAV to provide a more useful reporting mechanism for Moodle.

At this point, I should strongly point out that the problem here is not Moodle. The problem is the implications that the Reusability Paradox has for systems like an LMS that are trying to be reusable across contexts. Almost by definition such systems will have a gap between what they offer and the requirements of the context. Someone or something has to make those connections, and sadly most institutions don’t seem to be doing a good job of it.

What can be done?

David Wiley identifies four choices in terms of open content

  1. create highly decontextualized resources that can be reused broadly but teach very little;
  2. we can build highly contextualized resources that teach effectively in a single setting but are very difficult to reuse elsewhere;
  3. we can shoot for the mediocre middle; or,
  4. allow and enable for contextual modification of the learning object.

In terms of open content, Wiley talks about the open license as being the great enabler. He argues

The way to escape from the Reusability Paradox is simply by using an open license. If I publish my educational materials using an open license, I can produce something deeply contextualized and highly effective for my local context AND give you permission to revise and remix it until it is equally effective to reuse in your own local context. Poof! The paradox disappears. I’ve produced something with a strong internal context which you have permission to make fit into other external contexts.

Problem fixed, not!

So the problem is fixed, at least for Moodle, because it has an open license and

can be customised in any way and tailored to individual needs. Its modular set up and interoperable design allows developers to create plugins and integrate external applications to achieve specific functionalities. Extend what Moodle does by using freely available plugins and add-ons – the possibilities are endless!

But it’s not quite as simple as that. Once Moodle is adopted, installed, and used by a university, the institution must attempt to make its instance of Moodle reusable across the entire institution. It’s too inefficient to do otherwise. The learners and teachers at that institution are only allowed to use the institutional instance of Moodle as it stands. They are typically unable to make changes to Moodle. They do not have the access necessary to make such changes. They are stuck in the Reusability Paradox.

Of course, learners and teachers won’t sit still in this paradox. They won’t accept the need to continually overcome the lack of contextual appropriateness of these systems. They take steps, like those outlined above. The evolution of technology (LTI, JSON, Greasemonkey etc) is making it easier for individuals to modify systems for their own purposes (e.g. @palbion’s creation of a Greasemonkey script).

One university and minimum course standards

At the same time, there is a growing trend for institutional management to promulgate ideas such as minimum course standards. Where it is argued that it is better for students and the institution if all course sites look the same and have – at least at some minimum standard – the same functionality. A level of consistency that smacks headlong into the Reusability Paradox and causes no end of trouble. Especially for those of us expected to step backwards to meet the minimum standard.

Allow and enable for contextual modification of the learning object

If institutions wish to improve the quality of their students’ learning, then it would appear that some consideration of the Reusability Paradox is required. In particular, it appears sensible that they adopt Wiley’s fourth choice for dealing with the paradox

allow and enable for contextual modification of the learning object

Where, in this case, the learning object is the LMS and other digital systems.

The problem I see is that institutions are reliant on a mindset that I’ve labelled the Strategic/Established/Tree-like (SET) mindset. Such a mindset is going to find it incredibly hard to “allow and enable for contextual modification” because it assumes that:

  1. there must be one plan and one aim (Strategic).
  2. digital technologies cannot be cost effectively changed (if at all – Established).
  3. the world is best understood through logical decomposition into hierarchies (Tree-like).

The big question is how to help organisations adopt more of a BAD mindset. A mindset that is all about allowing and enabling for contextual modification (i.e. learning).

References

Wiley, D. (2013, May 25). The Reusability Paradox. OpenStax CNX. Retrieved from http://cnx.org/contents/dad41956-c2b2-4e01-94b4-4a871783b021@19

Contradictions in adjectives: You can’t be consistent and optimal

One current challenge is attempting to engage productively with institutional strategic/operational planning. The big challenge in doing so is balancing the perceived importance of institutional level concerns (governance etc) with those of an individual teacher.

As part of this challenge I was reading a document summarising the aims of a rather large institutional project in ICT around learning and teaching. Yesterday I tweeted part of the strategies from that project (it starts with “Ensure the development of …”).

As my tweet suggests I see some contradictions in the adjectives.

Here’s a story from the dim dark past to illustrate how it’s essentially impossible to have an online student experience that is both consistent and optimal.

You shall not use single quotes!

Back in the mid-1990s CQU was a fairly traditional second generation distance education provider. As such it had a style guide for print-based materials (almost the only sort) that were distributed to students. In large part the aim of the style guide was to provide a consistent learning experience for students. One such element of the style guide was ‘You shall not use single quotes’. “Double quotes” were the only acceptable option.

So, that’s consistent.

Less than optimal

As it happens, in the mid-1990s I was the tutor in the course 85343, Machine Intelligence. The practical application of the concepts in this course was done in the Prolog programming language. Here’s a brief section of Prolog code taken from here. Can you see the problem this is going to cause in terms of consistency?

move(1,X,Y,_) :-
    write('Move top disk from '),
    write(X),
    write(' to '),
    write(Y),
    nl.

That’s write: Prolog code makes use of single quotes. The distance education study material for 85343 included sections of Prolog code. Do you know what the central distance education organisation did?

Obviously, because ‘You shall not use single quotes’ they automatically converted all of the single quotes into double quotes, printed the materials, and sent them out to students.

I don’t know whether the coordinator of the course got to proof the study material before it went out. But he was the Head of School and I’m willing to bet that if he did, he didn’t even think to check the style of quotes used in the Prolog code.

Consistent can’t be optimal

The lesson (for me at least) is that you can’t be consistent across all the courses in a university while at the same time claiming to provide an optimal learning experience for students.

This quote from Dede (2008) picks up on why this is a problem (or you can listen to the man himself)

Educational research strongly suggests that individual learning is as diverse and as complex as bonding, or certainly as eating. Yet theories of learning and philosophies about how to use ICT for instruction tend to treat learning like sleeping, as a simple activity relatively invariant across people, subject areas, and educational objectives. Current, widely used instructional technology applications have less variety in approach than a low-end fast-food restaurant. (p. 58)

And it’s not new

Here’s a quote from Jones (1996) – yep I had a bug in my bonnet about this almost 20 years ago and here I am again

With traditional on-campus teaching academics generally have complete control over what they teach and how it is presented. In CQU’s distance education model the subject matter’s presentation is controlled by DDCE. This results in significant tension between the desire to operate standardised systems for production and distribution of courseware and the desire for course designers to be creative and imaginative (Mark, 1990).

‘It’s like deja vu all over again’

There’s a paper or two here.

References

Dede, C. (2008). Theoretical perspectives influencing the use of information technology in teaching and learning. In J. Voogt & G. Knezek (Eds.), International Handbook of Information Technology in Primary and Secondary Education (pp. 43–62). New York: Springer.

Mark, M. (1990). The differentiation of institutional structures. In M. Moore (Ed.), Contemporary Issues in American Distance Education (pp. 30-43).

Metaphors and organisational change

Metaphors are useful. They reveal some of the underlying assumptions held by people. For example, this paper (Behrens, 2007) reveals that Information Systems research has a strong bias towards thinking of organisations as if they were machines. A bias that tends to invade most organisational practice. The following picks up on a couple of recent events to examine one of the metaphors commonly used in academia. It argues that this metaphor reveals some problematic assumptions.

Herding cats

The standard trope around academics and change is herding cats. Captured memorably by an EDS commercial

Of course, anyone who knows cats, knows that’s not the way to get cats to do anything.

Update: (26/06/2015) Apparently the Mythbusters have some empirical proof of how difficult this might be. As pointed to by this tweet

Obviously this difficulty was foreseen by others, as (end update) is illustrated by a tweet yesterday from @SAlexander_UTS summarising a point made by a senior academic

Even this modification of the herding cats metaphor misses that the entire metaphor is based on potentially problematic assumptions. Problematic assumptions that I have observed failing to have any impact on universities for something approaching 20 years.

I’ll focus on three.

That you know where to move the food to

The first assumption is that someone (typically senior management) knows where to move the food, i.e. someone knows the best strategy, the best practice. This is the assumption that underpinned Mao’s four pests campaign to eliminate rats, flies, mosquitoes and sparrows. The sparrows ate grain meant for the people, so they had to go. They were eliminated by (amongst other measures) millions of villagers heading out to bang pots and pans to continually scare the sparrows so they would never land and hence would die from exhaustion. The campaign was so successful that there was a locust plague.

Apparently, the sparrows also ate insects, including locusts. With most of the sparrows dead, the locusts bred leading to somewhat troubling and unintended consequences.

Change of a complex adaptive system – like a university/organisation – is very very difficult because it’s difficult for a group of people (even if they are super intelligent senior management) to understand all of the consequences of changing where the food is located.

That you can successfully move the food

Be Water Wise

The second assumption underpinning the herding cats metaphor is that you can successfully move the cat food (or herd the cats), i.e. once you’ve identified where to move the cat food, you are capable of picking it up and moving it to the new location. At an organisational level this is very hard for any meaningful change.

For example, over the last two days I was attending a planning session for the two schools of education at my current institution (I work in one of those schools). The sessions were held in the dining room of one of the colleges on-campus. The restroom for men at this college provides a wonderful metaphor for just how difficult it is to move the cat food and illustrates what “moving the cat food” typically looks like in most universities.

As the image shows, the door into the restroom had prominently displayed a sticker promoting the idea of being water wise. Someone in the college or broader institution had identified being water wise as a good idea and was trying to herd the cats in that direction.

The only trouble is that when you entered the restroom you soon became aware of running water. As the next image shows it appears that the washer in the basin tap was shot so that tap was continually leaking. No matter how water wise I wanted to be….

Running water

When it comes to “moving the cat food” in universities, it often more closely resembles the distribution of lots of stickers rather than effectively modifying the environment to achieve the stated goal. So an institution that is keen on Open Educational Resources runs lots of special events and creates websites espousing the benefits of open educational resources. But at the same time it retains a default position that the copyright for all teaching materials created by staff remains with the university. If I want to convert my teaching resources into open educational resources, I have to ask the legal office for permission.

That the cats will follow the food

So, assuming that you can

  1. Identify the best destination for the cat food; and,
  2. Successfully move the cat food to that destination.

The assumption is that the cats will follow the food. That they will happily accept your arbitrary decision that they should eat in a new location.

Anyone who knows cats knows that this isn’t going to work. For example.

If there is a defining characteristic of cats it is that they have a fairly high level of agency. They will decide whether or not the new destination suits. If it doesn’t, they will do something else.

For example, if you design a new standard look and feel for the institutional LMS and it is a step backwards in terms of functionality, then some academics will work around that look and feel.

It’s called task corruption.

Other alternatives

I’m a cat person (I’m also a dog person) and based on my experience there are other alternatives.

Scruff of the neck

You could take a leaf out of the species’ own book and grab them by the scruff of the neck and take them where you want. This is an approach that is being taken by some management. However, it still suffers from exactly the same problems as outlined above.

Beyond those problems, it adds the additional problem of changing the relationship between you and any adult cat you try this with. Especially if you try it repeatedly.

Squirting water

If you wanted the cat to stop doing the wrong thing, you could always use the squirt bottle approach. Whenever the cat does the wrong thing, you squirt water at it, yell loudly, or apply some other form of punishment.

Of course, this actually can only ever prevent the cat from doing the wrong thing, rather than take them to a new place. It also assumes you can identify the “wrong thing” to do.

But worse than that, there is an argument that it doesn’t even work and I quote

The squirt bottle technique only accomplishes three things:

  1. It creates frustration in the cat
  2. It causes the cat to become afraid of you
  3. The cat learns to wait until you aren’t around before engaging in the behavior

Trust

Winter in Toowoomba

This is Tommy (aka Son). He’s my cat/I’m his human. We’ve been together for what must be almost 9 years now. Tommy can be on the other side of the yard, but if I make a particular noise (and all things being equal) he will generally head my way (at his own speed). He knows that there will be a positive outcome and generally desires that outcome. He trusts me. If something in the environment changes (e.g. visitors) he may not come, but in the right circumstances I might still be able to get him to surface. There are limits.

This is another approach you can take with cats. However, it still suffers from the same problems as above. It assumes that I (senior management) know where to go and can successfully get everyone there.

Surprise – let the cat(s) take you where it will

An approach that doesn’t seem to be all that much discussed is to let the cats be cats. Enjoy what they do and what they will give you. Perhaps establish a few routines and the appropriate environment, but the reason anyone owns a cat is because cats surprise and give enjoyment. Letting cats be cats.

Just a bit like organising a children’s birthday party.