Learning analytics should not promote one size fits all: The effects of instructional conditions in predicting academic success

What follows is a summary of

Gašević, D., Dawson, S., Rogers, T., & Gasevic, D. (2015). Learning analytics should not promote one size fits all: The effects of instructional conditions in predicting academic success. The Internet and Higher Education, 28, 68–84. doi:10.1016/j.iheduc.2015.10.002

I’ve skimmed it before, but renewed interest is being driven by a local project to explore what analytics might reveal about 9 teacher education courses, especially in light of the QILT process and data.

Reactions

Good paper.

There are connections to the work we’re doing: a similar number of courses (9) and a focus on uncovering the diversity hidden by aggregated and homogenised data analysis. The differences are:

  • we’re looking at the question of engagement, not (necessarily) prediction;
  • we’re looking for differences within a single discipline/program, aiming to explore diversity within/across a program;
  • in particular, what it might reveal about our assumptions and practices;
  • some of our offerings are online only.

Summary

Gašević et al. (2015) examine the influence of specific instructional conditions in 9 blended courses on the prediction of academic success using learning analytics and log data.

A lack of attention to instructional conditions can lead to an over or under estimation of the effects of LMS features on students’ academic success

Learning analytics

Interest in, but questions around, the portability of learning analytics.

the paper aims to empirically demonstrate the importance for understanding the course and disciplinary context as an essential step when developing and interpreting predictive models of academic success and attrition (Lockyer, Heathcote, & Dawson, 2013)

Some work aims to decontextualise – i.e. to identify predictive models that can

inform a generalized model of predictive risk that acts independently of contextual factors such as institution, discipline, or learning design. These omissions of contextual variables are also occasionally expressed as an overt objective.

While there are some large-scale projects, most are small scale and (emphasis added)

small sample sizes and disciplinary homogeneity adds further complexity in interpreting the research findings, leaving open the possibility that disciplinary context and course specific effects may be contributing factors

Absence of theory in learning analytics – at least until recently. Theory points to the influence of diversity in context, subject, teacher, and learner.

Most post-behaviorist learning theories would suggest the importance of elements of the specific learning situation and student and teacher intentions

Impact of context – mentions Finnegan, Morris and Lee (2009) as a study that looked at the role of contextual variables and found disciplinary differences and “no single significant predictor shared across all three disciplines”

Role of theoretical frameworks – an argument for the benefits of integrating theory:

  • connect with prior research;
  • make clear the aim of research designs and thus what outcomes mean.

Theoretical grounding for study

Winne and Hadwin’s “constructivist, meta-cognitive approach to self-regulated learning”, in which:

  1. learners construct their knowledge by using tools (cognitive, physical, and digital);
  2. to operate on raw information (the stuff given by courses);
  3. to construct products of their learning;
  4. learning products are evaluated via internal and external standards;
  5. learners make decisions about the tactics and standards used;
  6. decisions are influenced by internal and external conditions.

Leading to the proposition

that learning analytics must account for conditions in order to make any meaningful interpretation of learning success prediction

The focus here is on instructional conditions.

Predictions from this

  1. Students will tend to interact more with recommended tools;
  2. There will be a positive relationship between students’ level of interaction and the instructional conditions of the course (tools used with high frequency will have a larger impact on success);
  3. The central tendency will prevail, so models that aggregate variables about student interaction across courses may over- or under-estimate effects (a small illustration follows this list).
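
To make prediction 3 concrete, here is a minimal sketch – my own synthetic illustration in Python, not the paper’s data or analysis – of how aggregating across courses can flip the apparent effect of a trace-data predictor (a Simpson’s-paradox-style effect):

    # Illustrative only: synthetic data showing how pooling courses can
    # distort the apparent effect of a trace-data predictor.
    # All numbers are invented, not from Gasevic et al. (2015).
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)

    def simulate_course(n, base_mark, tool_mean, slope=0.8):
        """Within a course, more tool use means higher marks."""
        tool_use = rng.normal(tool_mean, 5, n)
        mark = base_mark + slope * (tool_use - tool_mean) + rng.normal(0, 5, n)
        return tool_use, mark

    # Course B relies on the tool more but has lower marks overall.
    tool_a, mark_a = simulate_course(200, base_mark=75, tool_mean=40)
    tool_b, mark_b = simulate_course(200, base_mark=55, tool_mean=60)

    for name, t, m in [("course A", tool_a, mark_a), ("course B", tool_b, mark_b)]:
        fit = sm.OLS(m, sm.add_constant(t)).fit()
        print(name, "slope:", round(fit.params[1], 2))   # ~ +0.8 in each course

    # The pooled model conflates between-course and within-course variation,
    # so the aggregate slope comes out negative.
    pooled = sm.OLS(np.concatenate([mark_a, mark_b]),
                    sm.add_constant(np.concatenate([tool_a, tool_b]))).fit()
    print("pooled slope:", round(pooled.params[1], 2))   # ~ -0.6

Within each course the predictor has a positive effect, yet the pooled estimate is negative – exactly the over/under-estimation the paper warns about.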

Method

Correlational (non-experimental) design. 9 first-year courses that were part of an institutional project on retention. Participation in that project was based on a discipline-specific low level of retention – a threshold of 20%, which seems quite low (at least to me). 4134 students from 9 courses over 5 years – not big numbers.

Outcome variables – percent mark and academic status – pass, fail, or withdrawn (n=88).

Data selection was based on other studies and availability:

  • Student characteristics: age, gender, international student, language at home, home remoteness, term access, previous enrolment, course start.
  • LMS trace data: usage of various tools – heavily used tools as continuous variables, lesser-used tools as dichotomous and then categorical variables (reasons given); see the sketch after this list.
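
A rough sketch of the kind of trace-variable coding described – column names and thresholds are my guesses, not the paper’s actual scheme:

    # Heavily used tools stay continuous; lesser-used tools are recoded
    # as dichotomous or categorical variables. Names/thresholds invented.
    import pandas as pd

    trace = pd.DataFrame({
        "logins": [42, 7, 0, 19],      # heavily used tool: keep continuous
        "forum_posts": [0, 3, 0, 12],  # lesser-used tool: recode
    })

    # Dichotomous: did the student use the tool at all?
    trace["used_forum"] = (trace["forum_posts"] > 0).astype(int)

    # Categorical: bin a skewed count into ordered levels.
    trace["forum_level"] = pd.cut(trace["forum_posts"],
                                  bins=[-1, 0, 5, float("inf")],
                                  labels=["none", "low", "high"])
    print(trace)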

Various statistical tests and models were used.

Discussion

Usage across courses was variable, hence the advice (p. 79):

  1. there is a need to create models for academic success prediction for individual courses, incorporating instructional conditions into the analysis model.
  2. there must be careful consideration in any interpretation of any predictive model of academic success, if these models do not incorporate instructional conditions
  3. particular courses, which may have similar technology use, may warrant separate models for academic success prediction due to the individual differences in the enrolled student cohort.
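
As a minimal sketch of the first recommendation – one prediction model per course rather than a single pooled model – something like the following; the logistic regression and the feature names are stand-ins of mine, the paper’s actual modelling is more involved:

    # Minimal sketch: fit a separate pass/fail model per course instead of
    # one pooled model. Features and model choice are illustrative stand-ins.
    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    FEATURES = ["logins", "forum_posts", "quiz_attempts"]  # invented names

    def fit_per_course(students: pd.DataFrame) -> dict:
        """Return one fitted model per course_id."""
        models = {}
        for course_id, group in students.groupby("course_id"):
            model = LogisticRegression(max_iter=1000)
            model.fit(group[FEATURES], group["passed"])
            models[course_id] = model
        return models

    # models = fit_per_course(students)
    # models["COURSE_X"].coef_   # per-course effects, not pooled averages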

And

we draw two important conclusions: a) generalized models of academic success prediction can overestimate or underestimate effects of individual predictors derived from trace data; and b) use of a specific LMS feature by the students within a course does not necessarily mean that the feature would have a significant effect on the students’ academic success; rather, instructional conditions need to be considered in order to understand if, and why, some variables were significant in order to inform the research and practice of learning and teaching (pp. 79, 81)

Closes out with some good comments on moving students/teachers beyond being passive consumers of these models, and on the danger of existing institutional practice around analytics, where decisions are made too far removed from the teaching context.

 
