Learning analytics should not promote one size fits all: The effects of instructional conditions in predicting academic success

What follows is a summary of

Gašević, D., Dawson, S., Rogers, T., & Gasevic, D. (2015). Learning analytics should not promote one size fits all: The effects of instructional conditions in predicting academic success. The Internet and Higher Education, 28, 68–84. doi:10.1016/j.iheduc.2015.10.002

I’ve skimmed it before, but renewed interest is being driven by a local project to explore what analytics might reveal about 9 teacher education courses, especially in light of the QILT process and data.

Reactions

Good paper.

There are connections to the work we're doing: a similar number of courses (9) and a focus on looking into the diversity hidden by aggregated and homogenised data analysis. The differences are:

  • we’re looking at the question of engagement, not prediction (necessarily);
  • we’re looking for differences within a single discipline/program and aiming to explore diversity within/across a program
  • in particular, what it might reveal about our assumptions and practices
  • some of our offerings are online only

Summary

Gašević et al (2015) looks at the influence of specific instructional conditions in 9 blended courses on success prediction using learning analytics and log data.

A lack of attention to instructional conditions can lead to an over or under estimation of the effects of LMS features on students’ academic success

Learning analytics

There is interest in learning analytics, but also questions around its portability.

the paper aims to empirically demonstrate the importance of understanding the course and disciplinary context as an essential step when developing and interpreting predictive models of academic success and attrition (Lockyer, Heathcote, & Dawson, 2013)

Some work aims to decontextualise – i.e. to identify predictive models that can

inform a generalized model of predictive risk that acts independently of contextual factors such as institution, discipline, or learning design. These omissions of contextual variables are also occasionally expressed as an overt objective.

While there are some large scale projects, most are small scale and (emphasis added)

small sample sizes and disciplinary homogeneity adds further complexity in interpreting the research findings, leaving open the possibility that disciplinary context and course specific effects may be contributing factors

 Absence of theory in learning analytics – at least until recently.  Theory that points to the influence of diversity in context, subject, teacher, and learner.

Most post-behaviorist learning theories would suggest the importance of elements of the specific learning situation and student and teacher intentions

Impact of context – Mentions Finnegan, Morris and Lee (2009) as a study that looked at the role of contextual variables, found disciplinary differences, and found “no single significant predictor shared across all three disciplines”

Role of theoretical frameworks – argument for benefits of integrating theory

  • connect with prior research;
  • make clear the aim of research designs and thus what outcomes mean.

Theoretical grounding for study

Winne and Hadwin’s “constructivist, meta-cognitive approach to self-regulated learning”

  1. learners construct their knowledge by using tools (cognitive, physical, and digital);
  2. to operate on raw information (the stuff given by courses);
  3. to construct products of their learning;
  4. learning products are evaluated via internal and external standards;
  5. learners make decisions about the tactics and standards used;
  6. decisions are influenced by internal and external conditions.

Leading to the proposition

that learning analytics must account for conditions in order to make any meaningful interpretation of learning success prediction

The focus here is on instructional conditions.

Predictions from this

  1. Students will tend to interact more with recommended tools.
  2. There will be a positive relationship between students’ level of interaction and the instructional conditions of the course (tools with a high frequency of use will have a larger impact on success).
  3. The central tendency will prevail, so models that aggregate variables about student interaction may lead to over- or under-estimation (sketched below).
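
To make prediction 3 concrete, here is a sketch (mine, not the paper’s actual analysis) of the difference between a pooled model and per-course models. A pooled model estimates a single slope for a trace variable \(x\) (e.g. frequency of tool use):

\[ \mathrm{logit}\, P(\mathrm{success}_{ij}) = \beta_0 + \beta_1 x_{ij} \]

while a per-course model lets the slope vary with the instructional conditions of course \(j\):

\[ \mathrm{logit}\, P(\mathrm{success}_{ij}) = \beta_{0j} + \beta_{1j} x_{ij} \]

where \(x_{ij}\) is student \(i\)’s use of the tool in course \(j\). If the \(\beta_{1j}\) genuinely differ across the 9 courses, the pooled \(\beta_1\) ends up being roughly a weighted average of them – overestimating the effect of a tool in courses where it is peripheral to the design and underestimating it where it is central.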

Method

Correlational (non-experimental) design. 9 first-year courses that were part of an institutional project on retention. Participation in that project was based on a discipline-specific low level of retention – 20%, which seems quite low (at least to me). 4134 students from 9 courses over 5 years – not big numbers.

Outcome variables – percent mark, and academic status: pass, fail, or withdrawn (n=88).

Data based on other studies and availability

  • Student characteristics: age, gender, international student, language at home, home remoteness, term access, previous enrolment, course start.
  • LMS trace data: usage of various tools – some as continuous variables, some lesser-used tools as dichotomous and then categorical variables (reasons given)

Various statistical tests and models were used.

Discussion

Usage across courses was variable, hence the advice (p. 79)

  1. there is a need to create models for academic success prediction for individual courses, incorporating instructional conditions into the analysis model.
  2. there must be careful consideration in any interpretation of any predictive model of academic success, if these models do not incorporate instructional conditions
  3. particular courses, which may have similar technology use, may warrant separate models for academic success prediction due to the individual differences in the enrolled student cohort.

And

we draw two important conclusions: a) generalized models of academic success prediction can overestimate or underestimate effects of individual predictors derived from trace data; and b) use of a specific LMS feature by the students within a course does not necessarily mean that the feature would have a significant effect on the students’ academic success; rather, instructional conditions need to be considered in order to understand if, and why, some variables were significant in order to inform the research and practice of learning and teaching (pp. 79, 81)

Closes out with some good comments on moving students/teachers beyond being passive consumers of these models, and on the danger of existing institutional practice around analytics placing decisions too far removed from the teaching context.

Helping teachers “know thy students”

The first key takeaway from Motz, Teague and Shepard (2015) is

Learner-centered approaches to higher education require that instructors have insight into their students’ characteristics, but instructors often prepare their courses long before they have an opportunity to meet the students.

The following illustrates one of the problems teaching staff (at least in my institution) face when trying to “know thy student”. It ponders whether learner experience design (LX design) plus learning analytics (LA) might help, shows off one example of what I’m currently doing to fix this problem, and suggests some future directions for development.

The problem

One of the problems I identified in this talk was what it took for me to “know thy student” during semester. For example, the following is a question asked by a student on my course website earlier this year (in an offering that included 300+ students).

Question on a forum

To answer this question, it would be useful to “know thy student” in the following terms:

  1. Where is the student located?
    My students are distributed throughout Australia and the world. For this assignment they should be using curriculum documents specific to their location. It’s useful to know if the student is using the correct curriculum documents.
  2. What specialisation is the student working on?
    As a core course in the Bachelor of Education degree, my course includes all types of pre-service teachers: students studying to be Early Childhood, Primary school, and Secondary teachers, and even some looking to be VET teachers/trainers.
  3. What activities and resources has the student engaged with on the course site?
    The activities and resources on the site are designed to help students learn. There is an activity focused on this question; has this student completed it? When did they complete it?
  4. What else has the student written and asked about?
    In this course, students are asked to maintain their own blog for reflection. What the student has written on that blog might help provide more insight. Ditto for other forum posts.

To “know thy student” in the terms outlined above and limited to the tools provided by my institution requires:

  • the use of three different systems;
  • the use of a number of different reports/services within those systems; and,
  • at least 10 minutes to click through each of these.
Norman on affordances

Given Norman’s (1993) observations, is it any wonder that I might not spend 10 minutes on that task every time I respond to a question from the 300+ students?

Can learner experience (LX) design help?

Yesterday, Joyce (@catspyjamasnz) and I spent some time exploring if and how learner experience design (Joyce’s expertise) and learning analytics (my interest) might be combined.

As I’m currently working on a proposal to help make it easier for teachers to “know thy students”, this was uppermost in my mind. And, as Joyce pointed out, “know the students” is a key step in LX design. And, as Motz et al (2015) illustrate, there appears to be some value in using learning analytics to help teachers “know thy students”. And, beyond Motz et al’s (2015) focus on planning, learning analytics has been suggested to help with the orchestration of learning in the form of process analytics (Lockyer et al, 2013) – a link I was thinking about before our talk.

Out of all this, a few questions:

  1. Can LX design practices be married with learning analytics in ways that enhance and transform the approach used by Motz et al (2015)?
  2. Learning analytics can be critiqued as being driven more by the available data and the algorithms available to analyse it (the expertise of the “data scientists”). Some LA work is driven by educational theories/ideas. Does LX design offer a different set of “purposes” to inform the development of LA applications?
  3. Can LX design practices + learning analytics be used to translate what Motz et al (2015) see as “relatively rare and special” into more common practice?

    Exceptionally thoughtful, reflective instructors do exist, who customize and adapt their course after the start of the semester, but it’s our experience that these instructors are relatively rare and special, and these efforts at learning about students requires substantial time investment.

  4. Can this type of practice be done in a way that doesn’t require “data analysts responsible for developing and distributing” (Motz et al, 2015) the information?
  5. What type of affordances can and should such an approach provide?
  6. What ethical/privacy issues would need to be addressed?
  7. What additional data should be gathered and how?

    e.g. in the past I’ve used the course barometer idea to gather student experience during a course. Might something like this be added usefully?

More student details

“More student details” is the kludge that I’ve put in place to solve the problem at the top of this post. I couldn’t live with the current systems and had to scratch that itch.

The technical implementation of this scratch involves

  1. Extracting data from various institutional systems via manually produced reports and screen scraping, and placing that data into a database on my laptop.
  2. Adapting the MAV architecture to create a Greasemonkey script that talks to a server on my laptop, which in turn extracts data from the database (a sketch follows this list).
  3. Installing the Greasemonkey script on the browser I use on my laptop.
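
To give a feel for step 2, the following is a minimal sketch of the kind of userscript involved. It is not the actual MAV-derived code: the local server port, the /details endpoint, and the bare popup window are all assumptions standing in for the real implementation.

```javascript
// ==UserScript==
// @name   More student details (sketch)
// @match  https://moodle.example.edu/*
// @grant  GM_xmlhttpRequest
// ==/UserScript==

// Hypothetical endpoint for the server running on my laptop
const SERVER = 'http://localhost:8080/details';

// Moodle user profile links all look like .../user/view.php?id=NNN
document.querySelectorAll('a[href*="/user/view.php?id="]').forEach((link) => {
  const userId = new URL(link.href).searchParams.get('id');

  // Append a [details] link after each profile link
  const details = document.createElement('a');
  details.textContent = ' [details]';
  details.href = '#';
  details.addEventListener('click', (event) => {
    event.preventDefault();
    // Ask the local server for everything it knows about this student
    GM_xmlhttpRequest({
      method: 'GET',
      url: `${SERVER}?user=${userId}`,
      onload: (response) => {
        // The real popup has three tabs (personal details, activity
        // completion, blog posts); a bare window is a placeholder here
        const popup = window.open('', '_blank', 'width=700,height=500');
        popup.document.write(response.responseText);
      },
    });
  });
  link.after(details);
});
```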

As a result, when I use that browser to view the forum post at the top of this post, I actually see the following. The red arrows have been added to the image to highlight what’s changed: the addition of [details] links.

Forum post + more student details

Whenever the Greasemonkey script sees a Moodle user profile link, it adds a [details] link, regardless of which page of my Moodle course sites I’m on. The following image shows an excerpt from the results page for a quiz; it has the [details] links as well.

Quiz results + more student details

It’s not beautiful, but at the moment I’m the only one using it, and I was after utility.

Clicking on a [details] link results in a popup window appearing – a window that helps me “know thy student”. The window has three tabs. The first is labelled “Personal Details” and is visible below. It provides information from the institutional student records system, including name, email address, age, specialisation, which campus or mode the student is enrolled in, the number of prior units they’ve completed, their GPA, and their location and phone numbers.

Student background

The second tab on “more student details” shows details of the student’s activity completion. This is a Moodle feature that tracks if and when a student has completed an activity or resource. My course site is designed as a collection of weekly “learning paths”. Each path is a series of activities and resources designed to help the student learn. Each week belongs to one of three modules.

The following image shows part of the “Activity Completion” tab for “more student details”. It shows that Module 2 starts with week 4 (Effective planning: a first step) and week 5 (Developing your learning plan). Each week has a series of activities and resources.

For each activity the student has completed, it shows when they completed that activity. This student completed the “Welcome to Module 2” activity 2 months ago. If I hold the mouse over “2 months ago” it will display the exact time and date it was completed.

I did mention above that it’s useful, rather than beautiful.

Student activity completion

The “blog posts” tab shows details about all the posts the student has written on their blog for this course. Each of the blog posts includes a link to that blog post and shows how long ago the post was made.

Student blog posts

With this tool available, when I answer a question on a discussion forum I can quickly refresh what I know about the student and their progress before answering. When I consider a request for an assignment extension, I can check on the student’s progress so far. Without spending 10+ minutes doing so.

API implementation and flexibility

As currently implemented, this tool relies on a number of manual steps and my personal technology infrastructure. To scale this approach will require addressing these problems.

The traditional approach to doing this might involve modifying Moodle to add this functionality directly. I think this is the wrong way to do it. It’s too heavyweight, largely because Moodle is a complex bit of software used by huge numbers of people across the world, and because most of the really useful information here is going to be unique to different courses. For example, not many courses at my institution currently use activity completion in the way my course does. Almost none of the courses at my institution use BIM and student blogs the way my course does. Beyond this, the type of information required to “know thy student” extends beyond what is available in Moodle.

To “know thy student”, especially when thinking of process analytics that are unique to the specific learning design used, it will be important that any solution be flexible. It should allow individual courses to adapt and modify the data required to fit the specifics of the course and its learning design.

Which is why I plan to continue the use of augmented browsing as the primary mechanism, and why I’ve started exploring Moodle’s API. It appears to provide a way to develop a flexible and customisable approach, allowing “know thy student” to respond to the full diversity of learning and teaching.
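
As a sketch of where that might head – assuming a site with web services enabled, a valid token, and access to the relevant function (the URL, token, and ids below are placeholders):

```javascript
// Sketch: querying Moodle's web services REST endpoint for one student's
// activity completion. Site URL, token, and ids are placeholders.
const MOODLE = 'https://moodle.example.edu';
const TOKEN = 'your-web-service-token';

async function getActivityCompletion(courseId, userId) {
  const params = new URLSearchParams({
    wstoken: TOKEN,
    wsfunction: 'core_completion_get_activities_completion_status',
    moodlewsrestformat: 'json',
    courseid: courseId,
    userid: userId,
  });
  const response = await fetch(`${MOODLE}/webservice/rest/server.php?${params}`);
  const data = await response.json();
  // Each status includes the course module id, completion state, and
  // (for completed activities) a timecompleted timestamp
  return data.statuses;
}
```

Something like this could replace the manual reports and screen scraping in the current kludge, while leaving each course free to decide what data matters for its learning design.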

Now, I wonder how LX design might help?

What might a project combining LX Design and Analytics look like?

In a bit more than an hour I’ll be talking to @catspyjamasnz trying to nut out some ideas for a project around LX Design and Learning Analytics. The following is me thinking out loud and working through “my issues”.

What is LX Design?

I’ve got some vague ideas which I need to work on. Obviously start with a Google search.

Oh dear, the top result is for Learning Experience Design™, which is apparently

a synthesis of instructional design, educational pedagogy, neuroscience, social sciences, design thinking, and UI/UX—is critical for any organization looking to compete in the modern educational marketplace.

While I won’t dwell on this particular approach, it does link to some of my vague qualms about LX design. First, there’s the danger of it becoming another collection of meaningless buzzwords used to label the same old practice as conforming to the latest trend – mainly because the people adopting it don’t fully understand it and fail to transform their practice. Old wine, new bottles.

Second, there’s the problem of the “product focus” in learning, where the focus is on building the best product. This troubles me. Perhaps this says more about my biases, but I worry that LX Design will become just another tool (perhaps a very good tool) applied within the dominant SET mindset within institutional e-learning (which is my context). Not surprisingly, this is also one of my concerns about the direction of learning analytics.

And talking about old wine in new bottles, this post suggests that

Although LXD is a relatively new term in the field of design, there are some established best practices emerging as applied to creating online learning interfaces:

Mmm, not much there that I’d class as something that LXD has provided to the world. e.g. Donald Clark’s current sequence of “10” posts, including “10 essential rules on use of GRAPHICS in online learning”.

Needs and wants of the user?

This overview of User Experience Design (UX Design) – the foundation on which LX design is built – suggests

The term “user experience” was coined by Dr. Donald Norman, a cognitive science researcher who was also the first to describe the importance of user-centered design (the notion that design decisions should be based on the needs and wants of users).

As I wrote last week, I’m not convinced that the “needs and wants of users” is always the best approach. Especially if we’re talking about something very new that the user doesn’t yet understand.

Which begs the question:

Who is the user in a learning experience?

The obvious answer from a LX design perspective is that the user is the learner. That the focus should be on the learner has been broadly accepted in higher education for some time now. But then all models are wrong, but some are useful. In critiquing the rise of the term Technology Enhanced Learning, Bayne (2014) draws on a range of publications by Biesta to critique the focus on learning and learners. I’ve just skimmed this argument for this post, but there is potentially something interesting and useful here.

Beyond this more theoretical question about the value of a “learner focus”, I’d also like to mention something a little closer to home. The context in which I’m framing this post is within higher education’s practice of formal learning. A practice that currently still assumes that there is some value in having a teacher involved in the learning experience. Where “teacher” may not be a single individual, but actually be a small team with diverse roles. Which leads me to the proposition that the “teacher” is also a user within a learning experience.

As I’m employed as a teacher within higher education, I can speak to the negative impact of the blindingly obvious, almost complete lack of user experience design in the tools and systems teachers are required to engage with for learning and teaching. Given the low quality of those tools, it’s no surprise to me that most learning in higher education has some flaws.

This is one of the reasons behind the 4 paths for learning analytics focusing on the teacher (as designer of learning, if you must) and not the learner.

Increasingly, I wonder if the focus on being learner centered is arising from a frustration with the perceived lack of quality of the learning experiences produced by teachers combined with a deficit model of teachers. Which brings me to this quote from Bayne (2014)

points us toward a need to move beyond anthropocentrism and the focus on the individual, toward a greater concern with the networks, ecologies and sociomaterial contexts of our engagement with education and technology.

Impact of LX design for teachers?

What would happen to the quality of learning overall, if LX design were applied to the systems and processes that teachers use to design, implement, support, and revise learning and teaching? Would this help teachers learn more about how to teach better?

Learning analytics

I assume the link between LX design and learning analytics is that learning analytics can provide the data to better inform LX design. In particular, what Lockyer et al (2013) call “process analytics” would be useful

These data and analyses provide direct insight into learner information processing and knowledge application (Elias, 2011) within the tasks that the student completes as part of a learning design. (p. 1448)

One of the problems @beerc and I have with learning analytics is that it really only ever focuses on two bits of the PIRAC framework, i.e. information and representation. It hardly ever does anything about affordances or change. This is why dashboards suck and are a broken metaphor. A dashboard without the ability to do anything to control the car is of no value whatsoever.

My questions about LXD

  1. Just another FAD? Old wine in new bottles?
  2. Another tool reinforcing the SET mindset? Especially the product focus.
  3. Does LX design have a problem because it doesn’t include complex adaptive systems theory? It appears to treat learner experience design as a complicated problem, rather than a complex problem.
  4. The “meta-learning” problem – can it be applied to teachers learning how to teach?
  5. Where does it fit on the spectrum of: sage on the stage, guide on the side, and meddler in the middle?
  6. How to make it useful for the majority of teachers and learners?
  7. What type of affordances can/should analytics provide LX design to help all involved?

References

Bayne, S. (2014). What’s the matter with Technology Enhanced Learning? Learning, Media & Technology, 40(1), 5–20. doi:10.1080/17439884.2014.915851

Dashboards suck: learning analytics’ broken metaphor

I started playing around with what became learning analytics in 2007 or so. Since then, every/any time “learning analytics” is mentioned in a university there’s almost an automatic mention of dashboards. So much so I was led to tweet.

I’ve always thought dashboards suck. This morning when preparing the slides for this talk on learning analytics I came across an explanation which I think captures my discomfort around dashboards (I do wonder whether I’d heard it somewhere else previously).

What is a dashboard

In the context of an Australian university discussion about learning analytics the phrase “dashboard” is typically mentioned by the folk from the business intelligence unit. The folk responsible for the organisational data warehouse. It might also get a mention from the web guru who’s keen on Google Analytics. In this context a dashboard is typically a collection of colourful charts, often even doing a good job of representing important information.

So what’s not to like?

The broken metaphor

Obviously “analytics” dashboards are a metaphor referencing the type of dashboard we’re familiar with in cars. The problem is that many (most?) of the learning analytics dashboards are conceptualised and designed like the following dashboard.

The problem is that this conceptualisation of dashboards misses the bigger picture. Rather than being thought of like the above dashboard, learning analytics dashboards need to be thought of as like the following dashboard.

Do you see the difference? (and it’s not the ugly, primitive nature of the graphical representation in the second dashboard).

Representation without Affordances and removed from the action

The second dashboard image includes: the accelerator, brake, and clutch pedals; the steering wheel; the indicators; the radio; air conditioning; and all of the other interface elements a driver requires to do something with the information presented in the dashboard. All of the affordances a driver requires to drive a car.

The first dashboard image – like many learning analytics dashboards – provides no affordances for action. The first vision of a dashboard doesn’t actually help you do anything.

What’s worse, the dashboards provided by most data warehouses aren’t even located within the learning environment. You have to enter into another system entirely, find the dashboard, interpret the information presented, translate that into some potential actions, exit the data warehouse, return to the learning environment, translate those potential actions into the affordances of the learning environment.

Picking up on the argument of Don Norman (see quote in image below), the difficulty of this process would seem likely to reduce the chances of any of those potential actions being taken. Especially if we’re talking about (casual) teaching staff working within a large course with limited training, support and tools.

Norman on affordances

Affordances improve learning analytics

Hence, my argument is that the dashboard (Representation) isn’t sufficient. In designing your learning analytics application you need to include the pedals, steering wheel etc (Affordances) if you want to increase the likelihood of that application actually helping improve the quality of learning and teaching. Which tends to suggest that your learning analytics application should be integrated into the learning environment.

Revisiting the IRAC framework and looking for insights

The Moodlemoot’AU 2015 conference is running working groups, one of which is looking at assessment analytics. In essence, trying to think about what can be done in the Moodle LMS code to enhance assessment.

As it happens I’m giving a talk during the Moot titled “Four paths for learning analytics: Moving beyond a management fashion”. The aim of the talk is to provide some insights to help people think about the design and evaluation of learning analytics. The working group seems like a good opportunity to (at some level) “eat my own dogfood” and fits with my current task of developing the presentation.

As part of getting ready for the presentation, I need to revisit the IRAC framework. A bit of work from 2013 that we’ve neglected, but which (I’m surprised and happy to say) I think holds much more promise than I may have thought. The following explains IRAC and what insights might be drawn from it. A subsequent post will hopefully apply this more directly to the task of Moodle assessment analytics.

(Yes, Col and Damien, I have decided once again to drop the P and stick with IRAC).

The IRAC Framework

Originally developed to “improve the analysis and design of learning analytics tools and interventions” and hopefully be “a tool to aid the mindful implementation of learning analytics” (Jones, Beer, Clark, 2013). The development of the framework drew upon “bodies of literature including Electronic Performance Support Systems (EPSS) (Gery, 1991), the design of cognitive artefacts (Norman, 1993), and Decision Support Systems (Arnott & Pervan, 2005)”.

This was largely driven by our observation that most of the learning analytics stuff wasn’t that much focused on whether or not it was actually adopted and used, especially by teachers. The EPSS literature was important because an EPSS is meant to embody a “perspective on designing systems that support learning and/or performing” (Hannafin, McCarthy, Hannafin, & Radtke, 2001, p. 658). EPSS are computer-based systems intended to “provide workers with the help they need to perform certain job tasks, at the time they need that help, and in a form that will be most helpful” (Reiser, 2001, p. 63).

Framework is probably not the right label.

IRAC was conceptualised as four questions to ask yourself about the learning analytics tool you were designing or evaluating. As outlined in Jones et al (2013)

The IRAC framework is intended to be applied with a particular context and a particular task in mind. A nuanced appreciation of context is at the heart of mindful innovation with Information Technology (Swanson & Ramiller, 2004). Olmos & Corrin (2012), amongst others, reinforce the importance for learning analytics to start with “a clear understanding of the questions to be answered” (p. 47) or the task to be achieved.

Once you’ve got your particular context and task in mind, then you can start thinking about these four questions:

  1. Is all the relevant Information and only the relevant information available?
  2. How does the Representation of the information aid the task being undertaken?
  3. What Affordances for interventions based on the information are provided?
  4. How will and who can Change the information, representation and the affordances?

The link with the LA literature

Interestingly, not long after we’d submitted the paper for review, Siemens (2013) came out and that paper included the following Learning Analytics (LA) Model (LAM). LAM was meant to help move LA from small scale “bottom-up” approaches into a more systemic and institutional approach. The “data team” was given significant emphasis in this.

Siemens (2013) Learning Analytics Model

Hopefully you can see how Siemens’ LAM and the IRAC framework, at least on the surface, seem to cover much of the same ground. In case you can’t, the following image makes that connection explicit.

IRAC and LAM

Gathering insights from IRAC and LAM

The abstract for the Moot presentation promises insights, so let’s see what insights you might gain from IRAC. The following is an initial list of potential insights. Insights might be too strong a word; provocations or hypotheses might be better suited.

  1. An over-emphasis on Information.

    When overlaying IRAC onto the LAM, the most obvious point for me is the large amount of space in the LAM dedicated to Information. This very large focus on the collection, acquisition, storage, cleaning, integration, and analysis of information is not all that surprising. After all, that is what big data and analytics bring to the table. The people who developed the field of learning analytics came to it with an interest in information and its analysis. It’s important stuff. But it’s not sufficient to achieve the ultimate goal of learning analytics, which is captured in the following broadly used definition (emphasis added)

    Learning analytics is the measurement, collection, analysis and reporting of data about learners and their contexts, for purposes of understanding and optimising learning, and the environments in which it occurs.

    The point of learning analytics is to find out more about learning and the learning environment and change it for the better. That requires action – on the part of the learner, the teacher, or perhaps the institution or other actors. There’s a long list of literature that strongly argues that simply providing information to people is not sufficient for action.

  2. Most of the information currently available is of limited value.

    In not a few cases, “big data is driven more by storage capabilities than by superior ways to ascertain useful knowledge” (Bollier & Firestone, 2010, p. 14). There have been questions asked about how much the information currently captured by LMSes and other systems can actually “contribute to the understanding of student learning in a complex social context such as higher education” (Lodge & Lewis, 2012, p. 563). Click streams reveal a lot about when and how people traverse e-learning environments, but not why and with what impacts. Beyond that is the problem raised by observations that most courses do not make particularly heavy or well-designed use of the learning environment.

  3. Don’t stop at a dashboard (Representation).

    It appears that most people think that if you’ve generated a report or (perhaps worse) a dashboard, you have done your job when it comes to learning analytics. This fails on two counts.

    First, these are bad representations. Reports and many dashboards are often pretty crappy at helping people understand what is going on. Worse, they are typically presented outside of the space where the action happens, breaking the goal of an information system/EPSS, i.e. to “provide workers with the help they need to perform certain job tasks, at the time they need that help, and in a form that will be most helpful” (Reiser, 2001, p. 63).

    Second, just providing data in a pretty form is not sufficient. You want people to do something with the information. Otherwise, what’s the point? That’s why you have to consider the affordances question.

  4. Change is never considered.

    At the moment, most “learning analytics” projects involve installing a system, be it standalone or part of the LMS etc. Once it’s installed, it’s all just a matter of ensuring people are using it. There’s actually no capacity to change the system or the answers to the I, R, or A questions of IRAC that the system provides. This is a problem on so many levels.

    In the original IRAC paper we mentioned: how development through continuous action cycles involving significant user participation was at the core of the theory of decision support systems (Arnott & Pervan, 2005), a precursor to learning analytics; Buckingham Shum’s (2012) observation that most LA is based on data already being captured by systems and that analysis of that data will perpetuate existing dominant approaches to learning; and the problem of gaming once people learn what the system wants. Later we added the task-artifact cycle.

    More recently, Macfadyen et al (2014) argue that one of the requirements of learning analytics tools is “an integrated and sustained overall refinement procedure allowing reflection” (p. 21).

  5. The more context sensitive the LA is, the more value it has.

    In talking about the use of the SNAPP tool to visualise connections in discussion forums, Lockyer et al (2013) explain that the “interpretation of visualizations also depends heavily on an understanding of the context in which the data were collected and the goals of the teacher regarding in-class interaction” (p. 1446). The more you know about the learning context, the better the insight you can draw from learning analytics. An observation that brings the reusability paradox into the picture. Most LA – especially those designed into an LMS – have to be designed with the potential to be reused across all of the types of institutions that use the LMS. This removes the LMS (and its learning analytics) from the specifics of the context, which reduces its pedagogical value.

  6. Think hard about providing and enhancing affordances for intervention

    Underpinning the IRAC work is the work of Don Norman (1993), in particular the quote in the image of him below. If LA is all about optimising learning and the learning environment then the LA application has to make it easy for people to engage in activities designed to bring that goal about. If it’s hard, they won’t do it. Meaning all that wonderfully complex algorithmic magic is wasted.

    Macfadyen et al (2014) identify facilitating the deployment of interventions that lead to change to enhance learning as a requirement of learning analytics. Wise (2014) defines learning analytics intervention “as the surrounding frame of activity through which analytics tools, data and reports are taken up and used” – an area of learning analytics that is relatively unexplored (Wise, 2014). I’ll close with another quote from Wise (2014), which sums up the whole point of the IRAC framework and identifies what I think is the really challenging problem for LA:

    If learning analytics are to truly make an impact on teaching and learning and fulfill expectations of revolutionizing education, we need to consider and design for ways in which they will impact the larger activity patterns of instructors and students. (Wise, 2014, p. 203)

    (and I really do need to revisit the Wise paper).

Norman on affordances

References

Arnott, D., & Pervan, G. (2005). A critical analysis of decision support systems research. Journal of Information Technology, 20(2), 67–87. doi:10.1057/palgrave.jit.2000035

Bollier, D., & Firestone, C. (2010). The promise and peril of big data. Washington DC: The Aspen Institute. Retrieved from http://india.emc.com/collateral/analyst-reports/10334-ar-promise-peril-of-big-data.pdf

Buckingham Shum, S. (2012). Learning Analytics. Moscow. Retrieved from http://iite.unesco.org/pics/publications/en/files/3214711.pdf

Gery, G. J. (1991). Electronic Performance Support Systems: How and why to remake the workplace through the strategic adoption of technology. Tolland, MA: Gery Performance Press.

Hannafin, M., McCarthy, J., Hannafin, K., & Radtke, P. (2001). Scaffolding performance in EPSSs: Bridging theory and practice. In World Conference on Educational Multimedia, Hypermedia and Telecommunications (pp. 658–663). Retrieved from http://www.editlib.org/INDEX.CFM?fuseaction=Reader.ViewAbstract&paper_id=8792

Jones, D., Beer, C., & Clark, D. (2013). The IRAC framework: Locating the performance zone for learning analytics. In H. Carter, M. Gosper, & J. Hedberg (Eds.), Electric Dreams. Proceedings ascilite 2013 (pp. 446–450). Sydney, Australia.

Lockyer, L., Heathcote, E., & Dawson, S. (2013). Informing Pedagogical Action: Aligning Learning Analytics With Learning Design. American Behavioral Scientist, 57(10), 1439–1459. doi:10.1177/0002764213479367

Lodge, J., & Lewis, M. (2012). Pigeon pecks and mouse clicks : Putting the learning back into learning analytics. In M. Brown, M. Hartnett, & T. Stewart (Eds.), Future challenges, sustainable futures. Proceedings ascilite Wellington 2012 (pp. 560–564). Wellington, NZ. Retrieved from http://www.ascilite2012.org/images/custom/lodge,_jason_-_pigeon_pecks.pdf

Macfadyen, L. P., Dawson, S., Pardo, A., & Gasevic, D. (2014). Embracing big data in complex educational systems: The learning analytics imperative and the policy challenge. Research and Practice in Assessment, 9(Winter), 17–28.

Norman, D. A. (1993). Things that make us smart: Defending human attributes in the age of the machine. Reading, MA: Addison Wesley.

Olmos, M., & Corrin, L. (2012). Learning analytics: a case study of the process of design of visualizations. Journal of Asynchronous Learning Networks, 16(3), 39–49. Retrieved from http://ro.uow.edu.au/medpapers/432/

Reiser, R. (2001). A history of instructional design and technology: Part II: A history of instructional design. Educational Technology Research and Development, 49(2), 57–67.

Siemens, G. (2013). Learning Analytics: The Emergence of a Discipline. American Behavioral Scientist, 57(10), 1371–1379. doi:10.1177/0002764213498851

Swanson, E. B., & Ramiller, N. C. (2004). Innovating mindfully with information technology. MIS Quarterly, 28(4), 553–583.

Wise, A. F. (2014). Designing pedagogical interventions to support student use of learning analytics. In Proceedings of the Fourth International Conference on Learning Analytics And Knowledge – LAK ’14 (pp. 203–211). doi:10.1145/2567574.2567588

Reading – Embracing Big Data in Complex Educational Systems: The Learning Analytics Imperative and the Policy Challenge

The following is a summary and ad hoc thoughts on Macfadyen et al (2014).

There’s much to like in the paper. But the basic premise I see in the paper is that the fix for the current inappropriate teleological processes used in institutional strategic planning and policy setting is an enhanced/adaptive teleological process. The impression I take from the paper is that it’s still missing the need for institutions to enable actors within them to make greater use of ateleological processes (see Clegg, 2002). Of course, Clegg goes on to do the obvious and develop a “dialectical approach to strategy” that merges the two extremes.

Is my characterisation of the adaptive models presented here appropriate?

I can see very strong connections between the arguments made in this paper about institutions and learning analytics and the reasons why I think e-learning is a bit like teenage sex.

But given the problems with “e-learning” (i.e. most of it isn’t much good in pedagogical terms) what does that say about the claim that we’re in an age of “big data” in education. If the pedagogy of most e-learning is questionable, is the data being gathered any use?

Conflating “piecemeal” and “implementation of new tools”

The abstract argues that there must be a shift “from assessment-for-accountability to assessment-for-learning” and suggests that it won’t be achieved “through piecemeal implementation of new tools”.

It seems to me that this is conflating two separate ideas. They are:

  1. piecemeal; and,

    i.e. unsystematic or partial measures. It can’t happen bit-by-bit; instead, it has to proceed at the whole-of-institution level. This is the necessary step in the argument that institutional change is (or must be) involved.

    One of the problems I have with this is that if you are thinking of educational institutions as complex adaptive systems, then that means they are the type of system where a small (i.e. piecemeal) change could potentially (but not always) have a large impact. In a complex system, a few very small well-directed changes may have a large impact. Alternatively, and picking up on ideas I’ve heard from Dave Snowden, implementing large numbers of very small projects and observing the outcomes may be the only effective way forward. By definition, a complex system is one where being anything but piecemeal may be an exercise in futility, as you can never fully understand a complex system, let alone guess the likely impacts of proposed changes.

    The paper argues that systems of any type are stable and resistant to change. There’s support for this argument. I need to look for dissenting voices and evaluate.

  2. implementation of new tools.

    i.e. the build it and they will come approach won’t work. Which I think is the real problem and is indicative of the sort of simplistic planning processes that the paper argues against.

These are two very different ideas. I’d also argue that while these alone won’t enable the change, they are both necessary for the change. I’d also argue that institutional change (by itself) is unlikely to achieve the type of cultural change required. The argument presented in seeking to explain “Why e-learning is a bit like teenage sex” is essentially this: institutional attempts to enable and encourage change in learning practice toward e-learning fail because they are too focused on institutional concerns (large scale strategic change) and not enough on enabling elements of piecemeal growth (i.e. bricolage).

The Reusability Paradox and “at scale”

I also wonder about considerations raised by the reusability paradox in connection with statements like (emphasis added) “learning analytics (LA) offer the possibility of implementing real–time assessment and feedback systems and processes at scale”. Can the “smart algorithms” of LA marry the opposite ends of the spectrum – pedagogical value and large scale reuse? Can the adaptive planning models bridge that gap?

Abstract

In the new era of big educational data, learning analytics (LA) offer the possibility of implementing real–time assessment and feedback systems and processes at scale that are focused on improvement of learning, development of self–regulated learning skills, and student success. However, to realize this promise, the necessary shifts in the culture, technological infrastructure, and teaching practices of higher education, from assessment–for–accountability to assessment–for–learning, cannot be achieved through piecemeal implementation of new tools. We propose here that the challenge of successful institutional change for learning analytics implementation is a wicked problem that calls for new adaptive forms of leadership, collaboration, policy development and strategic planning. Higher education institutions are best viewed as complex systems underpinned by policy, and we introduce two policy and planning frameworks developed for complex systems that may offer institutional teams practical guidance in their project of optimizing their educational systems with learning analytics.

Introduction

First para is a summary of all the arguments for learning analytics

  • awash in data (I’m questioning)
  • now have algorithms/methods that can extract useful stuff from the data
  • using these methods can help make sense of complex environments
  • education is increasingly complex – increasing learner diversity, reducing funding, increasing focus on quality and accountability, increasing competition
  • not using the data is no longer an option

It also includes a quote from a consulting company promoting FOMO/falling behind if you don’t use it. I wonder how many different fads they’ve said that about?

Second para explains what the article is about – “new adaptive policy and planning approaches….comprehensive development and implementation of policies to address LA challenges of learning design, leadership, institutional culture, data access and security, data privacy and ethical dilemmas, technology infrastructure, and a demonstrable gap in institutional LA skills and capacity”.

But it is based on the idea of universities as complex adaptive systems, and the view that “simplistic approaches to policy development are doomed to fail”.

Assessment practices: A wicked problem in a complex system

Assessment is important. Demonstrates impact – positive and negative – of policy. Assessment still seen too much as focused on accountability and not for learning. Diversity of stakeholders and concerns around assessment make substantial change hard.

“Assessment practice will continue to be intricately intertwined both with learning and with program accreditation and accountability measures.” (p. 18). NCLB is used as an example of the problems this creates, and Goodhart’s law gets a mention.

Picks up the on-going focus on “high-stakes snapshot testing” to provide comparative data. Mentions

Wall, Hursh and Rodgers (2014) have argued, on the other hand, that the perception that students, parents and educational leaders can only obtain useful comparative information about learning from systematized assessment is a false one.

But also suggests that learning analytics may offer a better approach – citing (Wiliam, 2010).

Identifies the need to improve assessment practices at the course level. Various references.

Touches on the difficulties in making these changes. Mentions wicked problems and touches on complex systems

As with all complex systems, even a subtle change may be perceived as difficult, and be resisted (Head & Alford, 2013).

But doesn’t pick up the alternate possibility that a subtle change that might not be seen as difficult could have large ramifications.

Learning analytics and assessment-for-learning

This paper is part of a special issue on LA and assessment. Mentions other papers that have shown the contribution LA can make to assessment.

Analytics can add distinct value to teaching and learning practice by providing greater insight into the student learning process to identify the impact of curriculum and learning strategies, while at the same time facilitating individual learner progress (p. 19)

The argument is that LA can help both assessment tasks: quality assurance, and learning improvement.

Technological components of the educational system and support of LA

The assumption is that there is a technological foundation for storing, managing, visualising and processing big educational data. There’s a need for more than just the LMS. Need to mix it all up, and hence “institutions are recognizing the need to re–assess the concept of teaching and learning space to encompass both physical and virtual locations, and adapt learning experiences to this new context (Thomas, 2010)” (p. 20). Add to that the rise of multiple devices etc.

Identifies the following requirements for LA tools (p. 21) – emphasis added

  1. Diverse and flexible data collection schemes: Tools need to adapt to increasing data sources, distributed in location, different in scope, and hosted in any platform.
  2. Simple connection with institutional objectives at different levels: information needs to be understood by stakeholders with no extra effort. Upper management needs insight connected with different organizational aspects than an educator. User–guided design is of the utmost importance in this area.
  3. Simple deployment of effective interventions, and an integrated and sustained overall refinement procedure allowing reflection

Some nice overlaps with the IRAC framework here.

It does raise interesting questions about what institutional objectives are and, even more importantly, how easy it is or isn’t to identify what they are and what they mean at the various levels of the institution.

Interventions

An inset talks about the sociotechnical infrastructure for LA. It mentions the requirement for interventions (p. 21):

The third requirement for technology supporting learning analytics is that it can facilitate the deployment of so–called interventions, where intervention may mean any change or personalization introduced in the environment to support student success, and its relevance with respect to the context. This context may range from generic institutional policies, to pedagogical strategy in a course. Interventions at the level of institution have been already studied and deployed to address retention, attrition or graduation rate problems (Ferguson, 2012; Fritz, 2011; Tanes, Arnold, King, & Remnet, 2011). More comprehensive frameworks that widen the scope of interventions and adopt a more formal approach have been recently proposed, but much research is still needed in this area (Wise, 2014).

And then this (pp. 21-22) which contains numerous potential implications (emphasis added)

Educational institutions need technological solutions that are deployed in a context of continuous change, with an increasing variety of data sources, that convey the advantages in a simple way to stakeholders, and allow a connection with the underpinning pedagogical strategies.

But what happens when the pedagogical strategies are very, very limited?

Then makes this point as a segue into the next section (p. 22)

Foremost among these is the question of access to data, which needs must be widespread and open. Careful policy development is also necessary to ensure that assessment and analytics plans reflect the institution’s vision for teaching and strategic needs (and are not simply being embraced in a panic to be seen to be doing something with data), and that LA tools and approaches are embraced as a means of engaging stakeholders in discussion and facilitating change rather than as tools for measuring performance or the status quo.

The challenge: Bringing about institutional change in complex systems

“the real challenges of implementation are significant” (p. 22). The above identifies “only two of the several and interconnected socio-technical domains that need to be addressed by comprehensive institutional policy and strategic planning”

  1. influencing stakeholder understanding of assessment in education
  2. developing the necessary institutional technological infrastructure to support the undertaking

And this has to be done whilst attending to business as usual.

Hence it’s not surprising that education lags other sectors in adopting analytics. Identifies barriers:

  • lack of practical, technical and financial capacity to mine big data

    A statement from the consulting firm that also just happens to be in the business of selling services to help.

  • perceived need for expensive tools

Cites various studies showing education institutions stuck at gathering and basic reporting.

And of course even if you get it right…

There is recognition that even where technological competence and data exist, simple presentation of the facts (the potential power of analytics), no matter how accurate and authoritative, may not be enough to overcome institutional resistance (Macfadyen & Dawson, 2012; Young & Mendizabal, 2009).

Why policy matters for LA

Starts with establishing higher education institutions as a “superb example of complex adaptive systems” but then suggests that (p. 22)

policies are the critical driving forces that underpin complex and systemic institutional problems (Corvalán et al., 1999) and that shape perceptions of the nature of the problem(s) and acceptable solutions.

I struggle a bit with that observation and even more with this argument (p. 22)

we argue that it is therefore only through implementation of planning processes driven by new policies that institutional change can come about.

Expands on the notion of CAS and wicked problems. Makes this interesting point

Like all complex systems, educational systems are very stable, and resistant to change. They are resilient in the face of perturbation, and exist far from equilibrium, requiring a constant input of energy to maintain system organization (see Capra, 1996). As a result, and in spite of being organizations whose business is research and education, simple provision of new information to leaders and stakeholders is typically insufficient to bring about systemic institutional change.

Now talks about the problems more specific to LA and the “lack of data-driven mind-set” from senior management. Links this to an earlier example of institutional research to inform institutional change (McIntosh, 1979) and to a paper by Ferguson applying those findings to LA. From there and other places, the factors identified include:

  • academics don’t want to act on findings from other disciplines;
  • disagreements over qualitative vs quantitative approaches;
  • researchers & decision makers speak different languages;
  • lack of familiarity with statistical methods;
  • data not presented/explained to decision makers well enough;
  • researchers tend to hedge and qualify conclusions;
  • valorised education/faculty autonomy and resistance to any administrative efforts perceived to interfere with T&L practice.

Social marketing and change management are drawn upon to suggest that “social and cultural change” isn’t brought about simply by giving access to data – “scientific analyses and technical rationality are insufficient mechanisms for understanding and solving complex problems” (p. 23). Returns to

what is needed are comprehensive policy and planning frameworks to address not simply the perceived shortfalls in technological tools and data management, but the cultural and capacity gaps that are the true strategic issues (Norris & Baer, 2013).

Policy and planning approaches for wicked problems in complex systems

Sets about defining policy. Includes this which resonates with me

Contemporary critics from the planning and design fields argue, however, that these classic, top–down, expert–driven (and mostly corporate) policy and planning models are based on a poor and homogenous representation of social systems mismatched with our contemporary pluralistic societies, and that implementation of such simplistic policy and planning models undermines chances of success (for review, see Head & Alford, 2013).

Draws on wicked problem literature to expand on this. Then onto systems theory.

And this is where the argument about piecemeal growth being insufficient arises (p. 24)

These observations not only illuminate why piecemeal attempts to effect change in educational systems are typically ineffective, but also explains why no one–size–fits–all prescriptive approach to policy and strategy development for educational change is available or even possible.

and perhaps more interestingly

Usable policy frameworks will not be those which offer a to do list of, for example, steps in learning analytics implementation. Instead, successful frameworks will be those which guide leaders and participants in exploring and understanding the structures and many interrelationships within their own complex system, and identifying points where intervention in their own system will be necessary in order to bring about change

One thought is whether this idea strikes “management” as “researchers hedging their bets” – one of the problems mentioned above.

Moves on to talking about “adaptive management strategies” (Head and Alford, 2013), which offer new means for policy and planning that “can allow institutions to respond flexibly to ever-changing social and institutional contexts and challenges”, and which talk about

  • role of cross-institutional collaboration
  • new forms of leadership
  • development of enabling structures and processes (budgeting, finance, HR etc)

Interesting that notions of technology don’t get a mention.

Two “sample policy and planning models” are discussed.

  1. Rapid Outcome Mapping Approach (ROMA) – from international development

    “focused on evidence-based policy change”. An iterative model. I wonder about this

    Importantly, the ROMA process begins with a systematic effort at mapping institutional context (for which these authors offer a range of tools and frameworks) – the people, political structures, policies, institutions and processes that may help or hinder change.

    Perhaps a step up, but isn’t this still big up front design? Assumes you can do this? But then some is better than none?

    Apparently this approach is used more in Ferguson et al (2014)

  2. “cause-effect framework” – DPSEEA framework

    Driving force, Pressure, State, Exposure, Effect (DPSEEA) – a way of identifying linkages between the forces underpinning complex systems.

Ferguson et al (2014) apparently find that “apparently successful institutional policy and planning processes have pursued change management approaches that map well to such frameworks”. So not yet informed by such frameworks? Of course, there’s always the question of the people driving those systems reporting on their own work.

I do like this quote (p. 25)

To paraphrase Head and Alford (2013), when it comes to wicked problems in complex systems, there is no one– size–fits–all policy solution, and there is no plan that is not provisional.

References

Ferguson, R., Clow, D., Macfadyen, L., Essa, A., Dawson, S., & Alexander, S. (2014). Setting Learning Analytics in Context: Overcoming the Barriers to Large-Scale Adoption. Journal of Learning Analytics, 1(3), 120–144. doi:10.1145/2567574.2567592

Macfadyen, L. P., Dawson, S., Pardo, A., & Gasevic, D. (2014). Embracing big data in complex educational systems: The learning analytics imperative and the policy challenge. Research and Practice in Assessment, 9(Winter), 17–28.

The four paths for implementing learning analytics and enhancing the quality of learning and teaching

The following is a place holder for two presentations that are related. They are:

  1. “Four paths for learning analytics: Moving beyond a management fashion”; and,

    An extension of Beer et al (2014) (e.g. there are four paths now, rather than three) that’s been accepted to Moodlemoot’AU 2015.

  2. “The four paths for implementing learning analytics and enhancing the quality of learning and teaching”;

    A USQ research seminar that is in part a warm-up for the Moot presentation, but also an early attempt to extend the 4 paths idea beyond learning analytics and into broader institutional attempts to improve learning and teaching.

Eventually the slides and other resources from the presentations will show up here. What follows is the abstract for the second talk.

Slides for the MootAU15 presentation

Only 15 minutes for this talk. Tried to distill the key messages. Thanks to @catspyjamasnz the talk was captured on Periscope

Slides for the USQ talk

Had the luxury of an hour for this talk. Perhaps too verbose.

Abstract

Baskerville and Myers (2009) define a management fashion as “a relatively transitory belief that a certain management technique leads rational management progress” (p. 647). Maddux and Cummings (2004) observe that “education has always been particularly susceptible to short-lived, fashionable movements that come suddenly into vogue, generate brief but intense enthusiasm and optimism, and fall quickly into disrepute and abandonment” (p. 511). Over recent years learning analytics has been looming as one of the more prominent fashionable movements in educational technology, illustrated by the apparent engagement of every institution and vendor in some project badged with the label learning analytics. If these organisations hope to successfully harness learning analytics to address the challenges facing higher education, then it is important to move beyond the slavish adoption of the latest fashion and aim for more mindful innovation.

Building on an earlier paper (Beer, Tickner, & Jones, 2014) this session will provide a conceptual framework to aid in moving learning analytics projects beyond mere fashion. The session will identify, characterize, and explain the importance of four possible paths for learning analytics: “do it to” teachers; “do it for” teachers; “do it with” teachers; and, teachers “DIY”. Each path will be illustrated with concrete examples of learning analytics projects from a number of universities. Each of these example projects will be analysed using the IRAC framework (Jones, Beer, & Clark, 2013) and other lenses. That analysis will be used to identify the relative strengths, weaknesses, and requirements of each of the four paths. The analysis will also be used to derive implications for the decision-makers, developers, instructional designers, teachers, and other stakeholders involved in both learning analytics, and learning and teaching.

It will be argued that learning analytics projects that follow only one of the four paths are those most likely to be doomed to mere fashion, and that moving a learning analytics project beyond mere fashion will require a much greater focus on the “do it with” and “DIY” paths. An observation that is particularly troubling when almost all organizational learning analytics projects appear focused primarily on either the “do it to” or “do it for” paths.

Lastly, the possibility of connections between this argument and the broader problem of enhancing the quality of learning and teaching will be explored. Which paths are used by institutional attempts to improve learning and teaching? Do the paths used by institutions inherently limit the amount and types of improvements that are possible? What implications might this have for both research and practice?

References

Baskerville, R. L., & Myers, M. D. (2009). Fashion waves in information systems research and practice. MIS Quarterly, 33(4), 647–662.

Beer, C., Tickner, R., & Jones, D. (2014). Three paths for learning analytics and beyond : moving from rhetoric to reality. In B. Hegarty, J. McDonald, & S. Loke (Eds.), Rhetoric and Reality: Critical perspectives on educational technology. Proceedings ascilite Dunedin 2014 (pp. 242–250).

Jones, D., Beer, C., & Clark, D. (2013). The IRAC framework: Locating the performance zone for learning analytics. In H. Carter, M. Gosper, & J. Hedberg (Eds.), Electric Dreams. Proceedings ascilite 2013 (pp. 446–450). Sydney, Australia.

Maddux, C., & Cummings, R. (2004). Fad, fashion, and the weak role of theory and research in information technology in education. Journal of Technology and Teacher Education, 12(4), 511–533.

Learning analytics is better when…?

Trying to capture some thinking that arose during an institutional meeting re: learning analytics. The meeting was somewhat positive, but – as is not uncommon – there seemed to be some limitations around what learning analytics actually is and what it might look like. Wondering if the following framing might help. It draws on points made by numerous people about learning analytics and has some strong echoes of the (P)IRAC framework.

Learning analytics is better when it

  1. knows more about the learning environment;
    (learning environment includes learners, teachers, learning designs etc.)
  2. is accessible from within the learning environment;
    i.e. learners and teachers don’t need to remove themselves from the learning environment to access the learning analytics.
  3. provides affordances for action within the learning environment;
    If no change results from the learning analytics, then there is little value in it.
  4. can be changed by people within the learning environment.
    i.e. learners and teachers (and perhaps others) can modify the learning analytics for their own (new) purposes.

The problem is that I don’t think institutional considerations of learning analytics pay much attention to these four axes, which may explain the limited usage and impact arising from the tools.

All four axes tend to require knowing a lot about the specifics of the learning environment and being able to respond to what you find in that environment in a contextually appropriate way.

The more learning analytics enables this, the more useful it is. The more useful it is, the more it is used and the more impact it will have.
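
Before those examples, a minimal sketch of how the four axes might be treated as a simple comparative rubric. This is purely illustrative: the Axes class, the ordinal scale and the ratings are my own rough reading of the three examples that follow, not anything formal.

```python
from dataclasses import dataclass, astuple

RATINGS = ("none", "limited", "somewhat", "good")  # assumed ordinal scale

@dataclass
class Axes:
    """One learning analytics tool rated on the four axes above."""
    name: str
    knows_environment: str   # 1. knows about the learning environment
    accessible_within: str   # 2. accessible from within the environment
    affords_action: str      # 3. provides affordances for action
    changeable: str          # 4. changeable by people in the environment

tools = [
    # Ratings are my rough reading of the three examples below.
    Axes("Data warehouse", "limited", "none", "none", "none"),
    Axes("Moodle reports", "limited", "somewhat", "limited", "limited"),
    Axes("MAV-enabled analytics", "limited", "good", "limited", "somewhat"),
]

for tool in tools:
    assert all(rating in RATINGS for rating in astuple(tool)[1:])
    print(tool)
```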

A few examples to illustrate.

Data warehouse

  1. What does it know about the learning environment? Limited
    Generally will know who the learners are, what they are studying, where they are from etc. May know what they have done within various institutional systems.
    Almost certainly knows nothing about the learning design.
    Probably knows who’s teaching and what they’ve taught before.
  2. Accessible from the learning environment? Probably not
    Access is via a dashboard tool which is separate from the learning environment, i.e. not embedded within the discussion forum tool or the wiki tool.
    A knowledgeable user of the tool may well set up their own broader environment so that the data warehouse is integrated into it.
  3. Affordances for action? NONE
    It can display information, that’s it.
  4. Change? Difficult and typically the same for everyone
    Only the data warehouse people can change the representation of the information the warehouse provides. They probably can’t change the data that is included in the data warehouse without buy in from external system owners. IT governance structures need to be traversed.

Moodle reports

  1. What does it know about the learning environment? Limited
    Know what the students have done within Moodle. But does not typically know of anything outside Moodle.
  2. Accessible from the learning environment? Somewhat
    If you’re learning within Moodle, you can get to the Moodle reports. But the Moodle reports are a separate module (functionality) and thus aspects of the Moodle reports cannot be easily included into other parts of the Moodle learning environment and certainly cannot be integrated into non-Moodle parts of the learning environment.
  3. Affordances for action? Limited
    The closest is that some reports provide the ability to digitally contact students who meet certain criteria. However, the difficulty of using the reports suggests that the actual “affordances” are somewhat more limited.
  4. Change? Difficult, limited to Moodle
    Need to have some level of Moodle expertise and some greater level of access to modify reports. Typically would need to go through some level of governance structure. Probably can’t be changed to access much outside of Moodle.

“MAV-enabled analytics”

A paper last year describes the development of MAV at CQU and some local tinkering I did using MAV i.e. “MAV-enabled analytics”.

  1. What does it know about the learning environment? Limited but growing
    As described, both MAV (student clicks on links in Moodle) and my tinkering (student records data) draw on low-level information. A month or so of on-going tinkering has the tool now including information about student completion of activities in my course site and what the students have written on their individual blogs. Hopefully that will soon be extended with SNA and some sentiment analysis.
  2. Accessible from the learning environment? Yes
    Both analytics tools are embedded into the Moodle LMS – the prime learning environment for this context.
  3. Affordances for action? Limited but growing
    My tinkering offers little. MAV @ CQU is integrated with other systems to support a range of actions associated with contacting and tracking students. Both systems are very easy to use, hence increasing the affordances.
  4. Change? Slightly better than limited.
    MAV has arisen from tinkering and thus new functionality can be added. However, it requires someone who knows how MAV and its children work. It can’t be changed by learners/teachers. However, as I am the teacher using the results of my tinkering, I can change it. However, I’m constrained by time and system access.

Using the PIRAC – Thinking about an “integrated dashboard”

On Monday I’m off to a rather large meeting to talk about what data might be usefully syndicated into an integrated dashboard. The following is an attempt to think out loud about the (P)IRAC framework (Jones, Beer and Clark, 2013) in the context of this local project – to help prepare me for the meeting, but also to ponder some recent thoughts about the framework.

This is still a work in progress.

Get the negativity out of the way first

Dashboards sux!!

I have a long-term negative view of the value of dashboards and traditional data warehouse/business intelligence type systems. A view that has arisen out of both experience and research. For example, the following is a slide from this invited presentation. There’s also a paper (Beer, Jones, & Tickner, 2014) that evolved from that presentation.

[Slide 19 from that presentation]

I don’t have a problem with the technology. Data warehouse tools do have a range of functionality that is useful. However, in terms of providing something useful to the everyday life of teachers in a way that enhances learning and teaching, they leave a lot to be desired.

The first problem is the Law of Instrument.

[Image: “Hammer ... Nail ...” by Theen on Flickr, CC BY-NC-SA 2.0]

The only “analytics” tool the institution has is the data warehouse, so that’s what it has to use. The problem is that the data warehouse cannot be easily and effectively integrated into the daily act of learning and teaching in a way that provides significant additional affordances (more on affordances below).

Hence it doesn’t get used.

Now, leaving that aside.

(P)IRAC

After a few years of doing learning analytics stuff, we put together the IRAC framework as an attempt to guide learning analytics projects: to broaden the outlook and what needed to be considered, especially what needed to be considered to ensure that the project outcome was widely and effectively used. The idea is that the four elements of the framework can help ponder what is available and what might be required. The four original components of IRAC are summarised in the following table.

IRAC Framework (adapted from Jones et al, 2013) – each component and its associated considerations:
Information
  • the information we collect is usually about “those things that are easiest to identify and count or measure” but which may have “little or no connection with those factors of greatest importance” (Norman, 1993, p. 13).
  • Verhulst’s observation (cited in Bollier & Firestone, 2010) that “big data is driven more by storage capabilities than by superior ways to ascertain useful knowledge” (p. 14).
  • Is the information required technically and ethically available for use?
  • How is the information to be cleaned, analysed and manipulated?
  • Is the information sufficient to fulfill the needs of the task?
  • In particular, does the information captured provide a reasonable basis upon which to “contribute to the understanding of student learning in a complex social context such as higher education” (Lodge & Lewis, 2012, p. 563)?
Representation
  • A bad representation will turn a problem into a reflective challenge, while an appropriate representation can transform the same problem into a simple, straightforward task (Norman, 1993).
  • To maintain performance, it is necessary for people to be “able to learn, use, and reference necessary information within a single context and without breaks in the natural flow of performing their jobs.” (Villachica et al., 2006, p. 540).
  • Olmos and Corrin (2012) suggest that there is a need to better understand how visualisations of complex information can be used to aid analysis.
  • Considerations here focus on how easy is it to understand the implications and limitations of the findings provided by learning analytics? (and much, much more)
Affordances
  • A poorly designed or constructed artefact can greatly hinder its use (Norman, 1993).
  • To have a positive impact on individual performance an IT tool must be utilised and be a good fit for the task it supports (Goodhue & Thompson, 1995).
  • Human beings tend to use objects in “ways suggested by the most salient perceived affordances, not in ways that are difficult to discover” (Norman, 1993, p. 106).
  • The nature of such affordances are not inherent to the artefact, but are instead co-determined by the properties of the artefact in relation to the properties of the individual, including the goals of that individual (Young, Barab, & Garrett, 2000).
  • Glassey (1998) observes that through the provision of “the wrong end-user tools and failing to engage and enable end users” even the best implemented data warehouses “sit abandoned” (p. 62).
  • The consideration for affordances is whether or not the tool and the surrounding environment provide support for action that is appropriate to the context, the individuals and the task.
Change
  • Evolutionary development has been central to the theory of decision support systems (DSS) since its inception in the early 1970s (Arnott & Pervan, 2005).
  • Rather than being implemented in linear or parallel fashion, development occurs through continuous action cycles involving significant user participation (Arnott & Pervan, 2005).
  • Buckingham-Shum (2012) identifies the risk that research and development based on data already being gathered will tend to perpetuate the existing dominant approaches from which the data was generated.
  • Bollier and Firestone (2010) observe that once “people know there is an automated system in place, they may deliberately try to game it” (p. 6).
  • Universities are complex systems (Beer, Jones, & Clark, 2012) requiring reflective and adaptive approaches that seek to identify and respond to emergent behaviour in order to stimulate increased interaction and communication (Boustani et al., 2010).
  • Potential considerations here include, who is able to implement change? Which, if any, of the three prior questions can be changed? How radical can those changes be? Is a diversity of change possible?

Adding purpose

Whilst on holiday enjoying the Queenstown view below and various refreshments, @beerc and I discussed a range of issues, including the IRAC framework and what might be missing. Both @beerc and @damoclarky have identified potential elements to be added, but I’ve always been reluctant. However, one of the common themes underpinning much of the discussion of learning analytics at ASCILITE’2014 was: for whom is learning analytics being done? We raised this question somewhat in our paper when we suggested that much of learning analytics (and educational technology) is mostly done to academics (and students), typically in the service of some purpose serving the needs of senior management or central services. But the issue was also raised by many others.

Which got us thinking about Purpose.

[Image: Queenstown view]

As originally framed (Jones et al, 2013)

The IRAC framework is intended to be applied with a particular context and a particular task in mind……Olmos & Corrin (2012), amongst others, reinforce the importance for learning analytics to start with “a clear understanding of the questions to be answered” (p. 47) or the task to be achieved.

If you start the design of a learning analytics tool/intervention without a clear idea of the task (and its context) in mind, then it’s going to be difficult to implement.

In our discussions in NZ, I’d actually forgotten about this focus in the original paper. This perhaps reinforces the need for IRAC to become PIRAC. To explicitly make purpose the initial consideration.

Beyond increasing focus on the task, purpose also brings in the broader organisational, personal, and political considerations that are inherent in this type of work.

So perhaps purpose encapsulates

  1. Why are we doing this? What’s the purpose?
    Reading between the lines, this particular project seems to be driven more by the availability of the tool and a person with the expertise to do stuff with the tool. The creation of a dashboard seems the strongest reason given.
    Tied in with this seems to be the point that the institution needs to be seen to be responding to the “learning analytics” fad (the FOMO problem). Related to this will, no doubt, be some idea that by doing something in this area, learning and teaching will improve.
  2. What’s the actual task we’re trying to support?
    In terms of a specific L&T task, nothing is mentioned.
  3. Who is involved? Who are they? etc.
    The apparent assumption is that it is teaching staff. The integrated dashboard will be used by staff to improve teaching?

Personally, I’ve found thinking about these different perspectives useful. Wonder if anyone else will?

(P)IRAC analysis for the integrated dashboard project

What follows is a more concerted effort to use PIRAC to think about the project. Mainly to see if I can come up with some useful questions/contributions for Monday.

Purpose

  • Purpose
    As above the purpose appears to be to use the data warehouse.

    Questions:

    • What’s the actual BI/data warehouse application(s)?
    • What’s the usage of the BI/data warehouse at the moment?
    • What’s it used for?
    • What is the difference in purpose in using the BI/data warehouse tool versus Moodle analytics plugins or standard Moodle reports?
  • Task
    Without knowing what the tool can do, I’m left pondering which information-related tasks are currently frustrating or limited. A list might include

    1. Knowing who my students are, where they are, what they are studying, what they’ve studied and when they add/drop the course (in a way that I can leverage).
      Which is part of what I’m doing here.
    2. Having access to the results of course evaluation surveys in a form that I can analyse (e.g. with NVivo).
    3. How do I identify students who are not engaging, struggling, not learning, or doing fantastically – and how do I intervene?

    Questions:

    • Can the “dashboards” help with the tasks above?
    • What are the tasks that a dashboard can help with that aren’t available in the Moodle reports?
  • Who
  • Context

What might be some potential sources for a task?

  1. Existing practice
    e.g. what are staff currently using in terms of Moodle reports and is that good/bad/indifferent?

  2. Widespread problems?
    What are the problems faced by teaching staff?
  3. Specific pedagogical goals?
  4. Espoused institutional priorities?
    Personalised learning appears to be one. What are others?

Questions:

  • How are staff using existing Moodle reports and analytics plugins?
  • How are they using the BI tools?
  • What are widespread problems facing teaching staff?
  • What is important to the institution?

Information

The simple questions

  • What information is technically available?
    It appears that the data warehouse includes data on

    • enrolment load
      Apparently aimed more at trends, but can do semester numbers.
    • Completion of courses and programs.
    • Recruitment and admission
      The description of what’s included in this isn’t clear.
    • Student evaluation and surveys
      Appears to include institutional and external evaluation results. Could be useful.

    As I view the dashboards, I do find myself asking questions (fairly unimportant ones) related to the data that is available, rather than the data that is important.

    Questions

    • Does the data warehouse/BI system know who’s teaching what when?
    • When/what information is accessible from Moodle, Mahara and other teaching systems?
    • Can the BI system enrolment load information drill down to course and cohort levels?
    • What type of information is included in the recruitment and admission data that might be useful to teaching staff?
    • Can we get access to course evaluation surveys for courses in a flexible format?
  • What information is ethically available?

Given the absence of a specific task, it would appear

Representation

  • What types of representation are available?
    It would appear that the dashboards etc. are being implemented with PerformancePoint, hence its integration with SharePoint (off to a strong start there). I assume it relies on PerformancePoint’s “dashboards” feature. There would also appear to be a requirement for Silverlight to see some of the representations.

    Questions

    • Can the data warehouse provide flexible/primitive access to data?
      i.e. CSV, text or direct database connections?
  • What knowledge is required to view those representations?
    There doesn’t appear to be much in the way of contextual help with the existing dashboards. You have to know what the labels/terminology mean. Which may not be a problem for the people for whom the existing dashboards are intended.
  • What is the process for viewing these representations?

Affordances

Based on the information above about the tool, it would appear that there are no real affordances that the dashboard system can provide. It will tend to be limited to representing information.

  • What functionality does the tool allow people to do?
  • What knowledge and other resources are required to effectively use that functionality?

Change

  • Who, how, how regularly and with what cost can the
    1. Purpose;
      Will need to be approved via whatever governance process exists.
    2. Information;
      This would be fairly constrained. I can’t see much of the above information changing, at least not in terms of getting access to more or different data. The question about ethics could potentially mean that there would be less information available.
    3. Representation; and,
      Essentially it would appear that the dashboards are all that can change, and any change will be limited by the specifics of the tool.
    4. Affordances.
      You can’t change what you don’t have.

    be changed?

Adding some learning process analytics to EDC3100

In Jones and Clark (2014) we drew on Damien Clark’s development of the Moodle Activity Viewer (MAV) as an example of how bricolage, affordances and distribution (the BAD mindset) can add some value to institutional e-learning. My empirical contribution to that paper was talking about how I’d extended MAV so that when I was answering a student query in a discussion forum I could quickly see relevant information about that student (e.g. their major, which education system they would likely be teaching into, etc.).

A major point of that exercise was that it was very difficult to actually get access to that data at all, let alone get access to it within the online learning environment for the course – at least if I had to wait upon the institutional systems and processes to lumber into action.

As this post evolved, it also became an early test to see if the IRAC framework can offer some guidance in designing the extension of this tool by adding some learning process analytics. The result is that this post

  1. Defines learning process analytics.
  2. Applies that definition to my course.
  3. Uses the IRAC framework to show off the current mockup of the tool and think about what other features might be added.

Very keen to hear some suggestions on the last point.

At this stage, the tool is working but only the student details are being displayed. The rest of the tool is simply showing the static mockup. This afternoon’s task is to start implementing the learning process analytics functionality.

Some ad hoc questions/reflections that arise from this post

  1. How is the idea of learning process analytics going to be influenced by the inherent tension between the tendency for e-learning systems to be generic and the incredible diversity of learning designs?
  2. Can generic learning process analytics tools help learners and teachers understand what’s going on in widely different learning designs?
  3. How can the diversity of learning designs (and contexts) be supported by learning process analytics?
  4. Can a bottom-up approach work better than a top-down?
  5. Do I have any chance of convincing the institution that they should provide me with
    1. Appropriate access to the Moodle and Peoplesoft database; and,
    2. A server on which to install and modify software?

Learning process analytics

The following outlines the planning and implementation of the extension of that tool through the addition of process analytics. Schneider et al (2012) (a new reference I’ve just stumbled across) define learning process analytics

as a collection of methods that allow teachers and learners to understand what is going on in a learning scenario, i.e. what participants work(ed) on, how they interact(ed), what they produce(d), what tools they use(d), in which physical and virtual location, etc. (p. 1632)

and a bit later on learning scenario and learning process analytics are defined as

as the measurement and collection of learner actions and learner productions, organized to provide feedback to learners, groups of learners and teachers during a teaching/learning situation. (p. 1632)

This is a nice definition in terms of what I want to achieve. My specific aim is to

collect, measure, organise and display learner actions and learner productions to provide feedback to the teacher during a teaching/learning situation

Two main reasons for the focus on providing this information to the teacher

  1. I don’t have the resources or the technology (yet) to easily provide this information to the learners.
    The method I’m using here relies on servers and databases residing on my computer (a laptop). Not something I can scale to the students in my class. I could perhaps look at using an external server (the institution doesn’t provide servers) but that would be a little difficult (I haven’t done it before) and potentially get me in trouble with the institution (not worth the hassle just yet).

    As it stands, I won’t even be able to provide this information to the other staff teaching into my course.

  2. It’s easier to see how I can (will?) use this information to improve my teaching and hopefully student learning.
    It’s harder to see how/if learners might use any sort of information to improve their learning.

Providing this information to me is the low hanging fruit. If it works, then I can perhaps reach for the fruit higher up.

Learner actions and productions

What are the learner actions and productions I’m going to generate analytics from?

The current course design means that students will be

  1. Using and completing a range of activities and resources contained on the course site and organised into weekly learning paths.
    These actions are in turn illustrated through a range of data including

    • Raw clicks around the course site stored in system logs.
    • Activity completion.
      i.e. if a student has viewed all pages in a resource, completed a quiz, or posted the required contributions to a discussion forum they are counted as completing an activity. Students get marks for completing activities.
    • Data specific to each activity.
      i.e. the content of the posts they contributed to a forum, the answers they gave on a quiz.
  2. Posting to their individual blog (external to institutional systems) for the course.
    Students get marks for # of posts, average word count and links to other students and external resources.
  3. Completing assignments.
  4. Contributing to discussions on various forms of social media.
    Some officially associated with the course (e.g. Diigo) and others unofficially (student Facebook groups).

I can’t use some of the above as I do not have access to the data. Private student Facebook groups are one example, but more prevalent is institutional data that I’m unable to access. In fact, the only data I can easily get access to is

  • Student blog posts; and,
  • Activity completion data.

So that’s what I’ll focus on. Obviously there is a danger here that what I can measure (or in this case access) is what becomes important. On the plus side, the design of this course does place significant importance on the learning activities students undertake and the blog posts. It appears what I can measure is actually important.

Here’s where I’m thinking that the IRAC framework can scaffold the design of what I’m doing.

Information

Is all the relevant Information and only the relevant information available?

Two broad sources of information

  1. Blog posts.
    I’ll be running a duplicate version of the BIM module in a Moodle install running on my laptop. BIM will keep a mirror of all the posts students make to their blogs. The information in the database will include

    • Date, time, link and total for each post.
    • A copy of the HTML for the post.
    • The total number of posts made so far, and the URL for the blog and its feed.
  2. Activity completion.
    I’ll have to set up a manual process for importing activity completion data into a database on my computer. For each activity I will have access to the date and time when the student completed the activity (if they have). (A sketch of such an import follows this list.)
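
Something like the following is what I have in mind for that manual process – a sketch only. It assumes a CSV export of a Moodle activity completion report with one row per student and one column per activity; the column names, file name and date format are my assumptions, not Moodle’s actual export format.

```python
import csv
import sqlite3

# Import an (assumed) activity completion CSV export into a local SQLite
# database, one row per (student, activity) pair.
conn = sqlite3.connect("analytics.db")
conn.execute("""CREATE TABLE IF NOT EXISTS activity_completion (
                    student_id TEXT,
                    activity   TEXT,
                    completed  TEXT  -- ISO date/time; empty if not completed
                )""")

with open("activity_completion.csv", newline="") as f:
    for row in csv.DictReader(f):
        student = row.pop("student_id")
        for activity, completed in row.items():
            conn.execute("INSERT INTO activity_completion VALUES (?, ?, ?)",
                         (student, activity, completed))

conn.commit()
```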

What type of analysis or manipulation can I perform on this information?

At the moment, not a lot. I don’t have a development environment that will allow me to run lots of complex algorithms over this data. This will have to evolve over time. What do I want to be able to do initially? An early incomplete list of some questions (a rough sketch of computing a few of these follows the list)

  1. When was the last time the student posted to their blog?
  2. How many blog posts have they contributed? What were they titled? What is the link to those posts?
  3. Are the blog posts spread out over time?
  4. Who are the other students they’ve linked to?
  5. What activities have they completed? How long ago?
  6. Does it appear they’ve engaged in a bit of task corruption in completing the activities?
    e.g. is there a sequence of activities that were completed very quickly?
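
To make that list a little more concrete, here’s a rough sketch of computing a few of these over the local database described above. The blog_posts table is an assumed simplification (BIM’s actual schema will differ), and the thresholds for spotting a suspiciously quick run of activity completions are plucked from the air.

```python
import sqlite3
from datetime import datetime, timedelta

# Assumes the activity_completion table from the earlier sketch plus an
# assumed blog_posts table: (student_id, title, url, posted ISO date/time).
conn = sqlite3.connect("analytics.db")

def last_blog_post(student_id):
    """Q1: when did the student last post to their blog?"""
    return conn.execute("SELECT MAX(posted) FROM blog_posts "
                        "WHERE student_id = ?", (student_id,)).fetchone()[0]

def blog_posts(student_id):
    """Q2: the posts contributed, their titles and links."""
    return conn.execute("SELECT title, url FROM blog_posts "
                        "WHERE student_id = ? ORDER BY posted",
                        (student_id,)).fetchall()

def suspicious_bursts(student_id, run=4, window=timedelta(minutes=5)):
    """Q6: flag possible task corruption -- a run of activities all
    completed within a few minutes of each other."""
    times = [datetime.fromisoformat(t) for (t,) in conn.execute(
        "SELECT completed FROM activity_completion WHERE student_id = ? "
        "AND completed != '' ORDER BY completed", (student_id,))]
    return [times[i] for i in range(len(times) - run + 1)
            if times[i + run - 1] - times[i] <= window]
```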

Representation

Does the representation of the information aid the task being undertaken?

The task here is basically giving me some information about the student progress.

For now it’s going to be a simple extension to the approach talked about in the paper, i.e. whenever my browser sees a link to a user profile on a course website, it will add a link [Details] next to it. If I click on that link I see a popup showing information about that student. The following is a mockup (click on the images to see a larger version) of what is currently partially working.

[Mockup: 001 – Personal details]

By default the student details are shown. There are two other tabs, one for activity completion and one for blog posts.

Requirement suggestion: Add some initial information into the title of each tab, e.g. “Activity completion” should include something like “(55%)” indicating the percentage of activities currently completed. Or perhaps it might be the percentage of the current week’s activities that have been completed (or perhaps the current module).

The activity completion tab is currently the most complicated and the ugliest. Moving the mouse over the Activity Completion tab brings up the following.

[Mockup: 002 – Activity completion]

The red, green and yellow colours are ugly and are intended as a simple traffic light representation: green means all complete, red means none complete, and yellow means somewhere in between.

The course is actually broken up into 3 modules. The image above shows each module being represented. Open up a module and you see the list of weeks for that module, also with the traffic light colours. Click on a particular week and you see the list of activities for that week – again with colours, but also with the date when the student completed each activity.
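
A small sketch of the traffic light logic (and the “(55%)” tab-title suggestion above). The data shape is an assumption for illustration: a week or module is just a list of booleans recording whether each activity is complete.

```python
def rollup_colour(completions):
    """Traffic light for a week/module: green if everything is complete,
    red if nothing is, yellow otherwise."""
    if all(completions):
        return "green"
    if not any(completions):
        return "red"
    return "yellow"

def percent_complete(completions):
    """e.g. the "(55%)" suggested for the tab titles above."""
    return f"({round(100 * sum(completions) / len(completions))}%)"

week = [True, True, False]
print(rollup_colour(week), percent_complete(week))  # yellow (67%)
```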

Requirement suggestion: The title bars for the weeks and modules could show the first and last time the student completed an activity in that week/module.

Requirement suggestion: The date/time when an activity was completed could be a roll-over. Move the mouse over the date/time and it will change the date/time to how long ago that was.

Requirement suggestion: What about showing the percentage of students who have completed activities? Each activity could show the % of students who had completed it. Each week could show the percentage of students who had completed that week’s activities. Each module could….
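
The cohort-level version of that suggestion might look something like the following sketch, re-using the (assumed) shape of the activity_completion table from earlier.

```python
from collections import defaultdict

def completion_percentages(rows):
    """rows: (student_id, activity, completed) tuples. Returns a mapping
    of activity -> % of enrolled students who have completed it.
    Activities nobody has completed simply won't appear."""
    done, students = defaultdict(set), set()
    for student, activity, completed in rows:
        students.add(student)
        if completed:
            done[activity].add(student)
    return {activity: round(100 * len(who) / len(students))
            for activity, who in done.items()}

print(completion_percentages([("s1", "quiz 1", "2015-07-20T10:00"),
                              ("s2", "quiz 1", "")]))  # {'quiz 1': 50}
```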

Requirement suggestion: Find some better colours.

The blog post tab is the most under-developed. The mockup currently only shows some raw data that is used to generate the student’s mark.

[Mockup: 003 – Blog posts]

Update: The following screenshot shows progress on this tab; it is from the working tool.

[Screenshot: blog process analytics in the working tool]

Requirement suggestions:

  • Show a list of recent blog post titles that are also links to those posts.
    Knowing what the student has (or hasn’t) blogged recently may give some insight into their experience.
    Done: see above image.
  • Show the names of students where this student has linked to their blog posts.
  • Organise the statistics into modules and show the interim mark the student would get.
    This would be of immediate interest to the students. (A sketch of computing these statistics follows this list.)
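
For the last suggestion, a sketch of computing the raw statistics the mark is based on. The (title, html) shape of a post is my assumption, not BIM’s actual schema, and the weighting that turns these numbers into an interim mark is the course’s marking scheme, so it’s left out.

```python
import re

def blog_stats(posts):
    """posts: a list of (title, html) pairs for one student's posts in a
    module. Returns the statistics the mark is based on."""
    word_counts, links = [], 0
    for title, html in posts:
        text = re.sub(r"<[^>]+>", " ", html)  # crude tag stripping
        word_counts.append(len(text.split()))
        links += len(re.findall(r"<a\s[^>]*href", html, re.I))
    return {
        "posts": len(posts),
        "avg_words": round(sum(word_counts) / len(word_counts)) if posts else 0,
        # all links; separating links to other students from external
        # links would need the class list of blog URLs
        "links": links,
    }

print(blog_stats([("Week 1", "<p>Hi <a href='http://example.com'>all</a></p>")]))
```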

Affordances

Are there appropriate Affordances for action?

What functionality can this tool provide to me that will help?

Initially it may simply be the display of the information. I’ll be left to my own devices to do something with it.

Have to admit to being unable to think of anything useful, just yet.

Change

How will the information, representation and the affordances be Changed?

Some quick answers

  1. ATM, I’m the only one using this tool and it’s all running from my laptop. Hence no worry about impact on others if I make changes to what the tool does. Allows some rapid experimentation.
  2. Convincing the organisation to provide an API or some other form of access directly (and safely/appropriately) to the Moodle database would be the biggest/easiest way to change the information.
  3. Exploring additional algorithms that could reveal new insights and affordances is also a good source.
  4. Currently the design of the tool and its environment is quite kludgy. Some decent design could make it particularly flexible.
    e.g. simply having the server return JSON data rather than HTML, and having some capacity on the client side to format that data, could enable some experimentation and change. (A minimal sketch of such a server follows this list.)
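
To illustrate that last point, a minimal sketch of a JSON-returning server. Flask is my choice for illustration (not what the tool currently uses), and the data is canned rather than pulled from the local database sketched earlier.

```python
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/student/<student_id>")
def student_details(student_id):
    # In the real tool this would query the local database; canned data
    # keeps the sketch self-contained.
    return jsonify({
        "student": student_id,
        "activity_completion": {"module 1": "green", "module 2": "yellow"},
        "blog": {"posts": 7, "avg_words": 310, "links": 12},
    })

if __name__ == "__main__":
    app.run(port=8080)  # a MAV-style client would then format the JSON
```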

References

Schneider, D. K., Class, B., Benetos, K., & Lange, M. (2012). Requirements for learning scenario and learning process analytics. In Proceedings of World Conference on Educational Multimedia, Hypermedia and Telecommunications 2012 (pp. 1632–1641).