Learning analytics is one of the new management fashions sweeping higher education. As a consequence, every Australian University I know has some sort of project around learning analytics underway. Some of these projects are actually considering how to help the folk at the coal face – students and teaching staff – use and benefit from learning analytics.
(Aside: It feels like the focus is more on the students than the teaching staff. This feels like a continuation of a recent trend where central institutional L&T is increasingly focusing on working directly with the students and bypassing the teaching staff, perhaps because it’s easier. But that’s a topic for another time).
The “end in mind” – a success factor
The broader business analytics field is starting to identify what it considers to be success factors. For example, a recent post titled “12 predictive analytics screw-ups” lists 12 major mistakes (the inverse of success factors?).
Number 1 on their list is:
1. Begin without the end in mind
You’re excited about predictive analytics. You see the potential value of it. There’s just one problem: You don’t have a specific goal in mind.
I wonder if this is going to be a source of problems for learning analytics when applied to teaching staff (or perhaps students)?
How can you have a common “end in mind” for such a diverse set of people?
The problem of diversity
Is there any university, anywhere in the world, where there is a consistent pedagogical practice across all of its courses (units) and all of its teaching staff? I think not.
The “end in mind” is going to be different for different academics. The information they most need at any point in time is going to differ from what others need. In fact, it’s likely to differ across time as well.
Generic indicators such as engagement may be useful at a certain level. But when they are based on standard assumptions about the use of certain types of LMS tools at certain levels, there are problems. For example, the pedagogy in the course I’m currently teaching is trying to push most student engagement out onto the students’ own blogs, hosted on their choice of blog provider.
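To make the point concrete, here’s a toy sketch (the tool names, weights, and data are all invented, not any real product’s algorithm) of how a click-count indicator blinds itself to pedagogy that happens outside the LMS:

```python
# Hypothetical weights for standard LMS tool events -- invented for this example.
LMS_WEIGHTS = {"forum_post": 3, "quiz_attempt": 2, "content_view": 1}

def engagement_score(events):
    """Sum weighted LMS events. Anything the model doesn't know about
    (e.g. activity on an externally hosted blog) contributes nothing."""
    return sum(LMS_WEIGHTS.get(tool, 0) for tool in events)

# Student A works inside the LMS; Student B does a similar amount of work
# on an external blog the indicator can't see.
student_a = ["forum_post", "content_view", "quiz_attempt"]
student_b = ["external_blog_post", "external_blog_comment", "external_blog_post"]

print(engagement_score(student_a))  # 6
print(engagement_score(student_b))  # 0 -- B looks "disengaged"
```

The indicator isn’t measuring engagement; it’s measuring conformance to the assumptions baked into its weights.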
Retention: the solution
Most university learning analytics projects seem to focus on retention, i.e. the bottom line. There is some value in this, but an ongoing focus on it alone is going to cause problems. For example, consider the recent story about a university proposing to include student grades as part of staff appraisals. Goodhart’s law, anyone?
How do you succeed?
If having an “end in mind” is a success factor for analytics projects, then what is an appropriate “end in mind” for learning analytics within a University?
What impacts will arise from the chosen “end in mind”? Will adoption be high? Will task corruption be high?