Self-reporting considered harmful?

I’m currently on my way to Auckland for the 2009 ASCILITE conference. A few weeks ago I was lucky enough to be in Denver for the 2009 EDUCAUSE conference. Both were/will be useful and interesting experiences. But having been at EDUCAUSE, read a couple of the ASCILITE papers and talked with a colleague, I’m once again wondering:

Should self-reporting be considered harmful?

I’m now into the second day at ASCILITE; it has taken me that long to get back to this.

Educational technology and self-reporting

My interest, and the focus of both conferences, is the application of educational technology within universities. A fairly typical, if not predominant, model of the work done in this area and presented at these conferences follows this process:

  • Someone, or some group, has a good idea and/or sees a problem.
  • They design and implement a project.
  • They evaluate the project.
  • They write and present the paper at a conference/journal.

i.e. the people who had the idea and were responsible for the design and implementation of the intervention are the same people who evaluate and report on the success or otherwise of the project.

They self-report.

Why is this a problem?

Is this really a problem? These folk should be professionals. They should be following accepted research methods, documenting those methods and having the results peer reviewed. Shouldn’t this address any problems with self-reporting?

In a perfect world this might well be the case. But we are far from a perfect world. I have a theoretical argument for why this is the case and empirical observations to back it up.

The theory arises from cognitive science and is the simple observation that human beings are not rational decision makers. We are not information-processing machines (i.e. the brain is not a digital computer). We are pattern-matching intelligences (anyone familiar with Dave Snowden will have heard this before). We look for and interpret situations through the lens of first-fit pattern matching. Patterns formed by regular and recent experience get hit first.

This theory/fact is probably a strong contributing factor to Kaplan’s law of the instrument.

My problem is that, in the best case, the person designing the intervention will already have an established set of expectations/patterns that they are looking for. They may not test for information outside of those expectations. e.g. a straight information technology intervention will be measured on whether it delivered the promised features on time and on budget, and not by the implications for the users, how they work around the limitations of the system and the resulting costs.

At worst, you will get post-hoc rationalisation of the sort talked about, in relation to planning, in the post “Make it look planned”, where order that didn’t exist during the design is added on afterwards.

I can point to at least two papers at the ASCILITE’09 conference that suffer from this problem. Rationale and order that were not present in the design and implementation have been added to the papers after the fact. I’m pretty sure that these aren’t the only two papers.

Now, in some cases, a bit of post-hoc rationalisation may be a good thing, especially where the implementation of the project is itself intended to be a learning process. Anything truly innovative can’t be planned. What I’m complaining about here is when certain principles or goals are added after the fact and the impression is given that they were present from the start.

Solutions?

Addressing this problem is incredibly difficult to do in any widespread way.

One solution that we played with a little, but which never got off the ground, was the REACT process. In this process, the problem, possible solutions, the chosen solution and implementation plans are presented and peer reviewed before implementation. This not only provides some evidence that post-hoc rationalisation hasn’t occurred, it also provides an opportunity for good people to contribute additional input and ideas to a project before it is implemented, hopefully strengthening the proposal.

After this, the intervention is implemented and evaluated and then reported back.

I wonder if you could get something like this going around a conference? Perhaps we should try and get it going back at CQUniversity first.

One thought on “Self-reporting considered harmful?”

  1. Rolley

    Hrm, yeah, good idea regarding REACT. You’re absolutely right about self-reporting though, I think – it’s all too easy for the mind to be closed to real objectivity after already investing a lot of thought and effort towards subjective goals.
