Learning analytics is one of the research areas I’m interested in. Consequently, I’ve read and listened to a bit about learning analytics over recent times. In that time I’ve often heard Moneyball used as an example or analogy for learning analytics.
I can see the reason for this. It’s a good example of how data can inform decision making in a field many people (especially those in America) are familiar with. Having a best-selling book that’s been turned into a Brad Pitt movie doesn’t hurt either. But I think it’s the wrong analogy for learning analytics.
As it happens I’ve been reading Nate Silver’s book The Signal and the Noise: Why most predictions fail but some don’t over recent weeks, and I’ll use it to make my case. Silver has had success in applying “analytics” to make predictions in both baseball and US politics, and in the book he talks to experts from a range of fields about prediction. Through this process he concluded
I came to realize that prediction in the era of Big Data was not going very well
One of the reasons he gives is
Baseball, for instance, is an exceptional case. It happens to be an especially rich and revealing exception
Why? Well, one reason is given when Silver talks about economics, a discipline with a poor track record when it comes to prediction.
This isn’t baseball, where the game is always played by the same rules.
If you don’t play by the rules set down in baseball you are going to get pulled up. What are the rules for learning? How can you be sure that each of the students is aware of the rules, has interpreted them the same way, and is playing by them?
A little further on in Silver’s book comes this
The third major challenge for economic forecasters is that their raw data isn’t much good
If the raw data isn’t much good, any predictions you make based on that data are going to have flaws.
How good is the data in learning? Well, in the face-to-face classroom it’s next to non-existent. At least in the hard, quantitative, consistent form required for most learning analytics. If it’s e-learning, the data is currently limited to usage logs from the LMS, which are at best a vague indicator of what’s going on.
Intelligent Tutoring Systems tend to solve these problems by having a fixed set of rules (a model) of learning and learners in a particular area. These rules, however, would appear to limit the adoption of the system. How many other contexts can these rules be applied to? Can you actually create such rules for all contexts?
I’m not convinced you can. Especially when broader trends are pushing for an increasingly diverse set of students, but also when learning is seen as a broader, more open and individual activity. Are there “rules” for learning that are broadly applicable?
What the economics analogy suggests for learning analytics
Is learning analytics about prediction? I’d argue that largely it is. You want to understand what is happening and to make predictions about what will happen next. If the learner isn’t going to learn, you want to know that and be able to intervene. You want to make predictions that enable intervention.
The lack of success in prediction in economics suggests that the future of learning analytics may not be bright. At least if it relies on the same models and assumptions as economics. So what needs to change?