Learning analytics is better when…..?

Trying to capture some thinking that arose during an institutional meeting re: learning analytics. The meeting was somewhat positive, but – as is not uncommon – there seemed to be some limitations around what learning analytics actually is and what it might look like. Wondering if the following framing might help. It draws on points made by numerous people about learning analytics and has some strong echoes of the (P)IRAC framework.

Learning analytics is better when it

  1. knows more about the learning environment;
    (learning environment includes learners, teachers, learning designs etc.)
  2. is accessible from within the learning environment;
    i.e. learners and teachers don’t need to remove themselves from the learning environment to access the learning analytics.
  3. provides affordances for action within the learning environment;
    If no change results from the learning analytics, then there is little value in it.
  4. can be changed by people within the learning environment.
    i.e. learners and teachers (and perhaps others) can modify the learning analytics for their own (new) purposes.

The problem is that I don’t think institutional considerations of learning analytics pay much attention to these four axes, and this may explain the limited usage and impact arising from the tools.

All four axes tend to require knowing a lot about the specifics of the learning environment and being able to respond to what you find in that environment in a contextually appropriate way.

The more learning analytics enables this, the more useful it is. The more useful it is, the more it is used and the more impact it will have.

A few examples to illustrate.

Data warehouse

  1. What does it know about the learning environment? Limited
    Generally will know who the learners are, what they are studying, where they are from etc. May know what they have done within various institutional systems.
    Almost certainly knows nothing about the learning design.
    Probably knows who’s teaching and what they’ve taught before.
  2. Accessible from the learning environment? Probably not
    Access is via a dashboard tool which is separate from the learning environment, i.e. it is not going to be embedded within the discussion forum tool or the wiki tool.
    A knowledgeable user of the tool may well set up their own broader environment so that the data warehouse is integrated into it.
  3. Affordances for action? NONE
    It can display information, that’s it.
  4. Change? Difficult and typically the same for everyone
    Only the data warehouse people can change the representation of the information the warehouse provides. They probably can’t change the data that is included in the data warehouse without buy-in from external system owners. IT governance structures need to be traversed.

Moodle reports

  1. What does it know about the learning environment? Limited
    Knows what the students have done within Moodle, but does not typically know about anything outside Moodle.
  2. Accessible from the learning environment? Somewhat
    If you’re learning within Moodle, you can get to the Moodle reports. But the Moodle reports are a separate module (functionality) and thus aspects of the Moodle reports cannot be easily included into other parts of the Moodle learning environment and certainly cannot be integrated into non-Moodle parts of the learning environment.
  3. Affordances for action? Limited
    The closest is that some reports provide the ability to digitally contact students who meet certain criteria. However, the difficulty of using the reports suggests that the actual “affordances” are somewhat more limited.
  4. Change? Difficult, limited to Moodle
    Need to have some level of Moodle expertise and some greater level of access to modify reports. Typically would need to go through some level of governance structure. Probably can’t be changed to access much outside of Moodle.

“MAV-enabled analytics”

A paper from last year describes the development of MAV at CQU and some local tinkering I did using MAV, i.e. “MAV-enabled analytics”.

  1. What does it know about the learning environment? Limited but growing
    As described, both MAV (student clicks on links in Moodle) and my tinkering (student records data) draw on low level information. A month or so of on-going tinkering has the tool including information about student completion of activities in my course site and what the students have written on their individual blogs. Hopefully that will soon be extended with SNA and some sentiment analysis.
  2. Accessible from the learning environment? Yes
    Both analytics tools are embedded into the Moodle LMS – the prime learning environment for this context.
  3. Affordances for action? Limited but growing
    My tinkering offers little. MAV @ CQU is integrated with other systems to support a range of actions associated with contacting and tracking students. Both systems are very easy to use, hence increasing the affordances.
  4. Change? Slightly better than limited.
    MAV has arisen from tinkering and thus new functionality can be added. However, it requires someone who knows how MAV and its children work; it can’t be changed by learners/teachers. As I am the teacher using the results of my tinkering, I can change it, but I’m constrained by time and system access.

Using the PIRAC – Thinking about an “integrated dashboard”

On Monday I’m off to a rather large meeting to talk about what data might be usefully syndicated into an integrated dashboard. The following is an attempt to think out loud about the (P)IRAC framework (Jones, Beer and Clark, 2013) in the context of this local project. To help prepare me for the meeting, but also to ponder some recent thoughts about the framework.

This is still a work in progress.

Get the negativity out of the way first

Dashboards sux!!

I have a long-term negative view of the value of dashboards and traditional data warehouse/business intelligence type systems. A view that has arisen out of both experience and research. For example, the following is a slide from this invited presentation. There’s also a paper (Beer, Jones, & Tickner, 2014) that evolved from that presentation.

Slide19

I don’t have a problem with the technology. Data warehouse tools do have a range of functionality that is useful. However, in terms of providing something useful to the everyday life of teachers in a way that enhances learning and teaching, they leave a lot to be desired.

The first problem is the Law of Instrument.

Image: “Hammer ... Nail ...” by Theen …, on Flickr (CC BY-NC-SA 2.0 Generic)

The only “analytics” tool the institution has is the data warehouse, so that’s what it has to use. The problem is that the data warehouse cannot be easily and effectively integrated into the daily act of learning and teaching in a way that provides significant additional affordances (more on affordances below).

Hence it doesn’t get used.

Now, leaving that aside.

(P)IRAC

After a few years of doing learning analytics stuff, we put together the IRAC framework as an attempt to guide learning analytics projects: to broaden the outlook and what needed to be considered, especially what needed to be considered to ensure that the project outcome was widely and effectively used. The idea is that the four elements of the framework could help ponder what was available and what might be required. The four original components of IRAC are summarised in the following table.

IRAC Framework (adapted from Jones et al, 2013)
Information
  • the information we collect is usually about “those things that are easiest to identify and count or measure” but which may have “little or no connection with those factors of greatest importance” (Norman, 1993, p. 13).
  • Verhulst’s observation (cited in Bollier & Firestone, 2010) that “big data is driven more by storage capabilities than by superior ways to ascertain useful knowledge” (p. 14).
  • Is the information required technically and ethically available for use?
  • How is the information to be cleaned, analysed and manipulated?
  • Is the information sufficient to fulfill the needs of the task?
  • In particular, does the information captured provide a reasonable basis upon which to “contribute to the understanding of student learning in a complex social context such as higher education” (Lodge & Lewis, 2012, p. 563)?
Representation
  • A bad representation will turn a problem into a reflective challenge, while an appropriate representation can transform the same problem into a simple, straightforward task (Norman, 1993).
  • To maintain performance, it is necessary for people to be “able to learn, use, and reference necessary information within a single context and without breaks in the natural flow of performing their jobs.” (Villachica et al., 2006, p. 540).
  • Olmos and Corrin (2012) suggest that there is a need to better understand how visualisations of complex information can be used to aid analysis.
  • Considerations here focus on how easy it is to understand the implications and limitations of the findings provided by learning analytics (and much, much more).
Affordances
  • A poorly designed or constructed artefact can greatly hinder its use (Norman, 1993).
  • To have a positive impact on individual performance an IT tool must be utilised and be a good fit for the task it supports (Goodhue & Thompson, 1995).
  • Human beings tend to use objects in “ways suggested by the most salient perceived affordances, not in ways that are difficult to discover” (Norman, 1993, p. 106).
  • The nature of such affordances is not inherent to the artefact, but is instead co-determined by the properties of the artefact in relation to the properties of the individual, including the goals of that individual (Young, Barab, & Garrett, 2000).
  • Glassey (1998) observes that through the provision of “the wrong end-user tools and failing to engage and enable end users” even the best implemented data warehouses “sit abandoned” (p. 62).
  • The consideration for affordances is whether or not the tool and the surrounding environment provide support for action that is appropriate to the context, the individuals and the task.
Change
  • Evolutionary development has been central to the theory of decision support systems (DSS) since its inception in the early 1970s (Arnott & Pervan, 2005).
  • Rather than being implemented in a linear or parallel fashion, development occurs through continuous action cycles involving significant user participation (Arnott & Pervan, 2005).
  • Buckingham-Shum (2012) identifies the risk that research and development based on data already being gathered will tend to perpetuate the existing dominant approaches from which the data was generated.
  • Bollier and Firestone (2010) observe that once “people know there is an automated system in place, they may deliberately try to game it” (p. 6).
  • Universities are complex systems (Beer, Jones, & Clark, 2012) requiring reflective and adaptive approaches that seek to identify and respond to emergent behaviour in order to stimulate increased interaction and communication (Boustani et al., 2010).
  • Potential considerations here include, who is able to implement change? Which, if any, of the three prior questions can be changed? How radical can those changes be? Is a diversity of change possible?

Adding purpose

Whilst on holiday enjoying the Queenstown view below and various refreshments, @beerc and I discussed a range of issues, including the IRAC framework and what might be missing. Both @beerc and @damoclarky have identified potential elements to be added, but I’ve always been reluctant. However, one of the common themes underpinning much of the discussion of learning analytics at ASCILITE’2014 was the question of for whom learning analytics is being done. We raised this question somewhat in our paper when we suggested that much of learning analytics (and educational technology) is mostly done to academics (and students), typically in the service of some purpose serving the needs of senior management or central services. But the issue was also raised by many others.

Which got us thinking about Purpose.

Queenstown View

As originally framed (Jones et al, 2013)

The IRAC framework is intended to be applied with a particular context and a particular task in mind … Olmos & Corrin (2012), amongst others, reinforce the importance for learning analytics to start with “a clear understanding of the questions to be answered” (p. 47) or the task to be achieved.

If you start the design of a learning analytics tool/intervention without a clear idea of the task (and its context) in mind, then it’s going to be difficult to implement.

In our discussions in NZ, I’d actually forgotten about this focus in the original paper. This perhaps reinforces the need for IRAC to become PIRAC, i.e. to explicitly make purpose the initial consideration.

Beyond increasing focus on the task, purpose also brings in the broader organisational, personal, and political considerations that are inherent in this type of work.

So perhaps purpose encapsulates

  1. Why are we doing this? What’s the purpose?
    Reading between the lines, this particular project seems to be driven more by the availability of the tool and a person with the expertise to do stuff with the tool. The creation of a dashboard seems the strongest reason given.
    Tied in with this seems to be the point that the institution needs to be seen to be responding to the “learning analytics” fad (the FOMO problem). Related to this will, no doubt, be some idea that by doing something in this area, learning and teaching will improve.
  2. What’s the actual task we’re trying to support?
    In terms of a specific L&T task, nothing is mentioned.
  3. Who is involved? Who are they? etc.
    The apparent assumption is that it is teaching staff. The integrated dashboard will be used by staff to improve teaching?

Personally, I’ve found thinking about these different perspectives useful. Wonder if anyone else will?

(P)IRAC analysis for the integrated dashboard project

What follows is a more concerted effort to use PIRAC to think about the project. Mainly to see if I can come up with some useful questions/contributions for Monday.

Purpose

  • Purpose
    As above, the purpose appears to be to use the data warehouse.

    Questions:

    • What’s the actual BI/data warehouse application(s)?
    • What’s the usage of the BI/data warehouse at the moment?
    • What’s it used for?
    • What is the difference in purpose in using the BI/data warehouse tool versus Moodle analytics plugins or standard Moodle reports?
  • Task
    Without knowing what the tool can do, I’m left pondering which information-related tasks are currently frustrating or limited. A list might include

    1. Knowing who my students are, where they are, what they are studying, what they’ve studied and when they add/drop the course (in a way that I can leverage).
      Which is part of what I’m doing here.
    2. Having access to the results of course evaluation surveys in a form that I can analyse (e.g. with NVivo).
    3. How do I identify students who are not engaging, struggling, not learning, doing fantastic and intervene?

    Questions:

    • Can the “dashboards” help with the tasks above?
    • What are the tasks that a dashboard can help with that aren’t available in the Moodle reports?
  • Who
  • Context

What might be some potential sources for a task?

  1. Existing practice
    e.g. what are staff currently using in terms of Moodle reports and is that good/bad/indifferent?

  2. Widespread problems?
    What are the problems faced by teaching staff?
  3. Specific pedagogical goals?
  4. Espoused institutional priorities?
    Personalised learning appears to be one. What are others?

Questions:

  • How are staff using existing Moodle reports and analytics plugins?
  • How are they using the BI tools?
  • What are widespread problems facing teaching staff?
  • What is important to the institution?

Information

The simple questions

  • What information is technically available?
    It appears that the data warehouse includes data on

    • enrolment load
      Apparently aimed more at trends, but can do semester numbers.
    • Completion of courses and programs.
    • Recruitment and admission
      The description of what’s included in this isn’t clear.
    • Student evaluation and surveys
      Appears to include institutional and external evaluation results. Could be useful.

    As I view the dashboards, I do find myself asking questions (fairly unimportant ones) related to the data that is available, rather than the data that is important.

    Questions

    • Does the data warehouse/BI system know who’s teaching what when?
    • When/what information is accessible from Moodle, Mahara and other teaching systems?
    • Can the BI system enrolment load information drill down to course and cohort levels?
    • What type of information is included in the recruitment and admission data that might be useful to teaching staff?
    • Can we get access to course evaluation surveys for courses in a flexible format?
  • What information is ethically available?

Given the absence of a specific task, it would appear

Representation

  • What types of representation are available?
    It would appear that the dashboards etc. are being implemented with PerformancePoint, hence its integration with SharePoint (off to a strong start there). I assume it is relying on the “dashboards” feature, which would appear to mean a requirement for Silverlight to see some of the representations.

    Questions

    • Can the data warehouse provide flexible/primitive access to data?
      i.e. CSV, text or direct database connections?
  • What knowledge is required to view those representations?
    There doesn’t appear to be much in the way of contextual help with the existing dashboards. You have to know what the labels/terminology mean. Which may not be a problem for the people for whom the existing dashboards are intended.
  • What is the process for viewing these representations?

Affordances

Based on the information above about the tool, it would appear that there are no real affordances that the dashboard system can provide. It will tend to be limited to representing information.

  • What functionality does the tool allow people to do?
  • What knowledge and other resources are required to effectively use that functionality?

Change

  • Who, how, how regularly and with what cost can the
    1. Purpose;
      Will need to be approved via whatever governance process exists.
    2. Information;
      This would be fairly constrained. I can’t see much of the above information changing, at least not in terms of getting access to more or different data. The question about ethics could potentially mean that there would be less information available.
    3. Representation; and,
      Essentially, it would appear that the dashboards are all that can change. Any change will be limited by the specifics of the tool.
    4. Affordances.
      You can’t change what you don’t have.

    be changed?

Adding some learning process analytics to EDC3100

In Jones and Clark (2014) we drew on Damien’s (Clark) development of the Moodle Activity Viewer (MAV) as an example of how bricolage, affordances and distribution (the BAD mindset) can add some value to institutional e-learning. My empirical contribution to that paper was talking about how I’d extended MAV so that when I was answering a student query in a discussion forum I could quickly see relevant information about that student (e.g. their major, which education system they would likely be teaching into etc).

A major point of that exercise was that it was very difficult to actually get access to that data at all. Let alone get access to that data within the online learning environment for the course. At least if I had to wait upon the institutional systems and processes to lumber into action.

As this post evolved, it’s also become an early test to see if the IRAC framework can offer some guidance in designing the extension of this tool by adding some learning process analytics. As a result, this post

  1. Defines learning process analytics.
  2. Applies that definition to my course.
  3. Uses the IRAC framework to show off the current mockup of the tool and think about what other features might be added.

Very keen to hear some suggestions on the last point.

At this stage, the tool is working but only the student details are being displayed. The rest of the tool is simply showing the static mockup. This afternoon’s task is to start implementing the learning process analytics functionality.

Some ad hoc questions/reflections that arise from this post

  1. How is the idea of learning process analytics going to be influenced by the inherent tension between the tendency for e-learning systems to be generic and the incredible diversity of learning designs?
  2. Can generic learning process analytics tools help learners and teachers understand what’s going on in widely different learning designs?
  3. How can the diversity of learning designs (and contexts) be supported by learning process analytics?
  4. Can a bottom-up approach work better than a top-down?
  5. Do I have any chance of convincing the institution that they should provide me with
    1. Appropriate access to the Moodle and Peoplesoft database; and,
    2. A server on which to install and modify software?

Learning process analytics

The following outlines the planning and implementation of the extension of that tool through the addition of process analytics. Schneider et al (2012) (a new reference I’ve just stumbled across) define learning process analytics

as a collection of methods that allow teachers and learners to understand what is going on in a learning scenario, i.e. what participants work(ed) on, how they interact(ed), what they produced(ed), what tools they use(ed), in which physical and virtual location, etc. (p. 1632)

and a bit later on learning scenario and learning process analytics are defined as

as the measurement and collection of learner actions and learner productions, organized to provide feedback to learners, groups of learners and teachers during a teaching/learning situation. (p. 1632)

This is a nice definition in terms of what I want to achieve. My specific aim is to

collect, measure, organise and display learner actions and learner productions to provide feedback to the teacher during a teaching/learning situation

Two main reasons for the focus on providing this information to the teacher

  1. I don’t have the resources or the technology (yet) to easily provide this information to the learners.
    The method I’m using here relies on servers and databases residing on my computer (a laptop). Not something I can scale to the students in my class. I could perhaps look at using an external server (the institution doesn’t provide servers) but that would be a little difficult (I haven’t done it before) and potentially get me in trouble with the institution (not worth the hassle just yet).

    As it stands, I won’t even be able to provide this information to the other staff teaching into my course.

  2. It’s easier to see how I can (will?) use this information to improve my teaching and hopefully student learning.
    It’s harder to see how/if learners might use any sort of information to improve their learning.

Providing this information to me is the low hanging fruit. If it works, then I can perhaps reach for the fruit higher up.

Learner actions and productions

What are the learner actions and productions I’m going to generate analytics from?

The current course design means that students will be

  1. Using and completing a range of activities and resources contained on the course site and organised into weekly learning paths.
    These actions are in turn illustrated through a range of data including

    • Raw clicks around the course site stored in system logs.
    • Activity completion.
      i.e. if a student has viewed all pages in a resource, completed a quiz, or posted the required contributions to a discussion forum they are counted as completing an activity. Students get marks for completing activities.
    • Data specific to each activity.
      i.e. the content of the posts they contributed to a forum, the answers they gave on a quiz.
  2. Posting to their individual blog (external to institutional systems) for the course.
    Students get marks for # of posts, average word count and links to other students and external resources.
  3. Completing assignments.
  4. Contributing to discussions on various forms of social media.
    Some officially associated with the course (e.g. Diigo) and others unofficially (student Facebook groups).

I can’t use some of the above as I do not have access to the data. Private student Facebook groups are one example, but more prevalent is institutional data that I’m unable to access. In fact, the only data I can easily get access to is

  • Student blog posts; and,
  • Activity completion data.

So that’s what I’ll focus on. Obviously there is a danger here that what I can measure (or in this case access) is what becomes important. On the plus side, the design of this course does place significant importance on the learning activities students undertake and the blog posts. It appears what I can measure is actually important.

Here’s where I’m thinking that the IRAC framework can scaffold the design of what I’m doing.

Information

Is all the relevant Information and only the relevant information available?

Two broad sources of information

  1. Blog posts.
    I’ll be running a duplicate version of the BIM module in a Moodle install running on my laptop. BIM will keep a mirror of all the posts students make to their blogs. The information in the database will include

    • Date, time, link and total for each post.
    • A copy of the HTML for the post.
    • The total number of posts made so far, the URL for the blog and its feed.
  2. Activity completion.
    I’ll have to set up a manual process for importing activity completion data into a database on my computer. For each activity I will have access to the date and time when the student completed the activity (if they have). A rough sketch of how both sources might be stored locally follows this list.
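
To make those two sources concrete, the following is a minimal sketch of how I might store them locally, assuming Python and SQLite; the table and column names are my own invention for illustration, not the actual BIM or Moodle schema.

```python
import sqlite3

# Minimal local store for the two information sources described above.
# Table and column names are illustrative only, not the actual BIM/Moodle schema.
conn = sqlite3.connect("edc3100_analytics.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS blog_posts (
    student_id   TEXT,    -- institutional student identifier
    title        TEXT,    -- post title
    link         TEXT,    -- URL for the post
    posted_at    TEXT,    -- ISO date/time the post was made
    html         TEXT     -- copy of the HTML for the post
);

CREATE TABLE IF NOT EXISTS activity_completion (
    student_id   TEXT,
    module       INTEGER, -- course module (1-3)
    week         INTEGER, -- week within the module
    activity     TEXT,    -- activity name
    completed_at TEXT     -- ISO date/time, NULL if not yet completed
);
""")
conn.commit()
```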

What type of analysis or manipulation can I perform on this information?

At the moment, not a lot. I don’t have a development environment that will allow me to run lots of complex algorithms over this data. This will have to evolve over time. What do I want to be able to do initially? An early, incomplete list of some questions (a rough sketch of how a few of them might be computed follows the list)

  1. When was the last time the student posted to their blog?
  2. How many blog posts have they contributed? What were they titled? What is the link to those posts?
  3. Are the blog posts spread out over time?
  4. Who are the other students they’ve linked to?
  5. What activities have they completed? How long ago?
  6. Does it appear they’ve engaged in a bit of task corruption in completing the activities?
    e.g. is there a sequence of activities that were completed very quickly?
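
A rough sketch of how a few of these questions might be answered from the local store sketched above (again Python/SQLite with the same illustrative table names; the “task corruption” check is just a crude heuristic, not a validated measure):

```python
import sqlite3
from datetime import datetime, timedelta

conn = sqlite3.connect("edc3100_analytics.db")

def blog_summary(student_id):
    """Last post date, post count, titles and links for one student's blog."""
    rows = conn.execute(
        "SELECT title, link, posted_at FROM blog_posts "
        "WHERE student_id = ? ORDER BY posted_at", (student_id,)).fetchall()
    return {
        "post_count": len(rows),
        "last_post": rows[-1][2] if rows else None,
        "posts": [{"title": title, "link": link} for title, link, _ in rows],
    }

def rapid_completions(student_id, window_minutes=5, run_length=5):
    """Crude 'task corruption' heuristic: flag runs of activities completed
    within a few minutes of each other."""
    times = [datetime.fromisoformat(row[0]) for row in conn.execute(
        "SELECT completed_at FROM activity_completion "
        "WHERE student_id = ? AND completed_at IS NOT NULL "
        "ORDER BY completed_at", (student_id,))]
    window = timedelta(minutes=window_minutes)
    run, flagged = 1, []
    for prev, cur in zip(times, times[1:]):
        run = run + 1 if cur - prev <= window else 1
        if run >= run_length:
            flagged.append(cur)
    return flagged
```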

Representation

Does the representation of the information aid the task being undertaken?

The task here is basically giving me some information about the student progress.

For now it’s going to be a simple extension to the approach talked about in the paper, i.e. whenever my browser sees a link to a user profile on a course website, it will add a [Details] link next to it. If I click on that link I see a popup showing information about that student. The following is a mockup (click on the images to see a larger version) of what is currently partially working

001 - Personal Details

By default the student details are shown. There are two other tabs, one for activity completion and one for blog posts.

Requirement suggestion: Add into the title of each tab some initial information. e.g. Activity completion should include something like “(55%)” indicating the percentage of activities currently completed. Or perhaps it might be the percentage of the current week’s activities that have been completed (or perhaps the current module).

The activity completion tab is currently the most complicated and the ugliest. Moving the mouse over the Activity Completion tab brings up the following.

002 - Activity completion

The red, green and yellow colours are ugly and are intended to indicate a simple traffic light representation: green means all complete, red means none complete, and yellow means in progress on some scale.

The course is actually broken up into 3 modules. The image above shows each module being represented. Open up a module and you see the list of weeks for that module, also with the traffic light colours. Click on a particular week and you see the list of activities for that week, also with colours, but also with the date when the student completed the activity.
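
As a sketch of how those traffic light colours and the week/module rollup might be computed from the activity completion data (the logic and names here are my own illustration of the idea, not how the actual tool does it):

```python
def traffic_light(completed, total):
    """Map completion counts to the traffic light colours used in the mockup."""
    if total == 0 or completed == 0:
        return "red"      # nothing completed yet
    if completed == total:
        return "green"    # everything completed
    return "yellow"       # somewhere in between

def week_rollup(activities):
    """activities: list of (activity_name, completed_at_or_None) for one week.
    Returns the week's colour, completion percentage and per-activity status."""
    done = [name for name, when in activities if when is not None]
    return {
        "week_colour": traffic_light(len(done), len(activities)),
        "percent_complete": round(100 * len(done) / max(len(activities), 1)),
        "activities": [
            {"name": name, "completed_at": when,
             "colour": "green" if when else "red"}
            for name, when in activities
        ],
    }

# Example with dummy data: one week with three activities, one not yet completed.
print(week_rollup([
    ("Introduce yourself", "2015-03-02T09:12:00"),
    ("Week 1 quiz", "2015-03-02T09:15:00"),
    ("Reflective blog post", None),
]))
```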

Requirement suggestion: The title bars for the weeks and modules could show the first and last time the student completed an activity in that week/module.

Requirement suggestion: The date/time when an activity was completed could be a roll-over. Move the mouse over the date/time and it will change the date/time to how long ago that was.

Requirement suggestion: What about showing the percentage of students who have completed activities? Each activity could show the % of students who had completed it. Each week could show the percentage of students who had completed that week’s activities. Each module could….

Requirement suggestion: Find some better colours.

The blog post tab is the most under-developed. The mockup currently only shows some raw data that is used to generate the student’s mark.

003- blog posts

Update: The following screenshot shows progress on this tab. It is from the working tool.

BlogProcessAnalytics

Requirement suggestions:

  • Show a list of recent blog post titles that are also links to those posts.
    Knowing what the student has (or hasn’t) blogged recently may give some insight into their experience.
    Done: see above image.
  • Show the names of students where this student has linked to their blog posts.
  • Organise the statistics into Modules and show the interim mark they’d get.
    This would be of immediate interest to the students.

Affordances

Are there appropriate Affordances for action?

What functionality can this tool provide to me that will help?

Initially it may simply be the display of the information. I’ll be left to my own devices to do something with it.

Have to admit to being unable to think of anything useful, just yet.

Change

How will the information, representation and the affordances be Changed?

Some quick answers

  1. ATM, I’m the only one using this tool and it’s all running from my laptop. Hence no worry about impact on others if I make changes to what the tool does. Allows some rapid experimentation.
  2. Convincing the organisation to provide an API or some other form of access directly (and safely/appropriately) to the Moodle database would be the biggest/easiest way to change the information.
  3. Exploring additional algorithms that could reveal new insights and affordances is also a good source.
  4. Currently the design of the tool and its environment is quite kludgy. Some decent design could make this particularly flexible.
    e.g. simply having the server return JSON data rather than HTML and having some capacity on the client side to format that data could enable some experimentation and change (a rough sketch of that idea follows).
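
For illustration, here is a minimal sketch of the sort of JSON the server could return, using only Python’s standard library; the URL path, payload shape and values are invented for the example rather than being what the current tool does.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class AnalyticsHandler(BaseHTTPRequestHandler):
    """Return learner analytics as JSON and leave rendering to the client."""

    def do_GET(self):
        # e.g. GET /student/s1234567 -- the path format is illustrative only
        student_id = self.path.rsplit("/", 1)[-1]
        payload = {
            "student_id": student_id,
            "blog": {"post_count": 12, "last_post": "2014-08-01T10:15:00"},  # dummy values
            "completion": {"module_1": "green", "module_2": "yellow"},       # dummy values
        }
        body = json.dumps(payload).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Serve on localhost only -- this is a sketch, not production code.
    HTTPServer(("localhost", 8080), AnalyticsHandler).serve_forever()
```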

References

Schneider, D. K., Class, B., Benetos, K., Lange, M., et al. (2012). Requirements for learning scenario and learning process analytics. In World Conference on Educational Multimedia, Hypermedia and Telecommunications (pp. 1632–1641).

On the difference between “rational”, “possible” and “desirable”

A couple of weeks ago @kateMfD wrote a post asking “What next for the LMS?”. (one of a raft of LMS posts doing the rounds recently). Kate’s point was in part that

The LMS is specifically good at what universities need it to do. Universities have learning management systems for the same reason they have student information systems: because their core institutional business isn’t learning itself, but the governance of the processes that assure that learning has happened in agreed ways.

The brief comment I shared on Kate’s post drew on some discussions @beerc and I had 6 or 7 years ago. Back then we were responsible for helping academic staff use the institution’s LMS. I was amazed at how manual the process was and how limited it was in its use of standard checks. For example, it was quite common for a new course site to be pointing to last year’s course profile/synopsis (a PDF). This wasn’t being picked up until a student or two completed all of the following steps

  1. Actually bothered to use the link to the course profile from the course site.
  2. Figured out that it was pointing to last year’s course profile.
  3. Was bothered enough by this problem to report it to someone.

Rather than be reactive, it seemed sensible to write a Perl script or two that would “mine” the Blackboard database and identify these types of problems very early in the semester so we could proactively fix them.
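
The idea was a Perl script or two; the following is a rough Python sketch of the same kind of check, with the table and column names invented purely for illustration (they are not the actual Blackboard schema).

```python
import sqlite3                     # stand-in for a real Blackboard DB connection
from datetime import datetime

# Flag course sites whose "course profile" link still points at last year's PDF.
# Table and column names are invented for illustration, not the Blackboard schema.
CURRENT_YEAR = str(datetime.now().year)

def stale_profile_links(conn):
    rows = conn.execute(
        "SELECT course_code, link_url FROM course_links "
        "WHERE link_label LIKE '%course profile%'")
    # Crude heuristic: the current year should appear somewhere in the URL.
    return [(code, url) for code, url in rows if CURRENT_YEAR not in url]

if __name__ == "__main__":
    conn = sqlite3.connect("blackboard_mirror.db")
    for code, url in stale_profile_links(conn):
        print(f"{code}: course profile link looks stale -> {url}")
```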

At that stage, “management” couldn’t grasp the value of this process and it never went anywhere. I never could understand that.

Fear of management rising

Not long after that – as the learning analytics fad started to rise – Col and I were worried about what management would do once they joined the bandwagon. In particular, we wondered when they might identify the problems that ideas like “Web 2.0 tools” (blogs, Second Life etc) or Personal Learning Environments (another fad we were playing with at the time) would pose for learning analytics, i.e. to run “learning analytics” you need to have access to the data, and a University generally won’t have access to the data from tools that are personal to the learner and external to the institution.

Given Kate’s identification of management’s purpose around learning – “governance of the processes that assure that learning has happened in agreed ways” – Col and I have been waiting to hear of Universities banning the use of external/personal tools for learning/teaching because it broke their “learning analytics”. Around the same time as Kate’s post, I heard that one southern University was indeed going down that route, and that’s the comment I made on Kate’s post.

Why is this a problem?

This morning @drwitty_knitter replied to my comment with

I would think this is quite common. Universities like to be able to track where money is being spent and what the outcomes are for students. Unless tools have some way to report what students are doing, and how that relates to their curricular goals, it would be hard to justify their use.

And I agree, I think it will become increasingly common. But I also still think it’s a really, really bad idea. @beerc, @damoclarky and I offered one explanation why this is a bad idea in this ASCILITE’2012 paper, i.e.

Insight gained over the last four years exploring learning analytics at one university suggest that the assumptions embodied by managerialism may be an inappropriate foundation for the application of learning analytics into tertiary learning environments

In short, the belief that it is possible to use analytics to connect what students are doing with their curricular goals can only be sustained if you make a range of assumptions about the nature of people, learning, and universities that fail to engage effectively with reality. No matter how complex the learning analytics algorithms and systems used, the only way you can achieve the stated purpose is to attempt to reduce the variability of learning and teaching to fit the limitations of the capabilities of the technology.

Which is exactly what is happening when institutions ban the use of personal or external tools.

This won’t be enough. As we show in the ASCILITE paper, even if you limit yourself to the LMS, the diversity of learners and learning, and the chasm between what happens in the LMS and actual student learning, are such that there will still be huge questions about what the analytics can tell you. This will lead to at least two likely outcomes

  1. Management will believe what the analytics tells them and plan future action on this poor foundation; and,
  2. Management won’t believe the analytics and thus will further reduce the variability of learning and teaching to fit the limitations of the capabilities of the technology.

The last option contributes to the problem that Chris Dede identifies in this clip:

that the very, very best of our high-end learning environments have less variety than a bad fast food restaurant

The three paths

In an ASCILITE’2014 paper we identify three paths that might be followed with learning analytics

  1. Do it to.
  2. Do it for.
  3. Do it with.

Our argument is that almost all of the learning analytics work (and increasingly much of what passes for learning and teaching support activities) is following the first two paths. We also argue that this will end badly for the quality of learning and teaching and will contribute to learning analytics being yet another fad.

The “Do it to” path is completely rational if your task is to ensure the quality of learning across the institution. But it’s only possible if you assume that there is no diversity in learning and teaching and that “learning” is the data captured in the digital trails left in institutional databases. I don’t think it is either possible or desirable, hence I don’t think it’s rational. YMMV.

Three paths for learning analytics and beyond: Moving from rhetoric to reality

Paper accepted to ASCILITE’2014 and nominated for best paper.

Beer, C., Tickner, R., & Jones, D. (2014). Three paths for learning analytics and beyond: moving from rhetoric to reality. In B. Hegarty, J. McDonald, & S. Loke (Eds.), Rhetoric and Reality: Critical perspectives on educational technology. Proceedings ascilite Dunedin 2014 (pp. 242–250).

Abstract

There is growing rhetoric about the potential of learning analytics in higher education. There is concern about what the growing hype around learning analytics will mean for the reality. Will learning analytics be a repeat of past mistakes where technology implementations fail to move beyond a transitory fad and provide meaningful and sustained contributions to learning and teaching? How can such a fate be avoided? This paper identifies three paths that learning analytics implementations might take, with particular consideration to their likely impact on learning and teaching. An ongoing learning analytics project – currently used by hundreds of teaching staff to support early interventions to improve student retention – at a regional Australian university is examined in relation to the three paths, and some implications, challenges and future directions are discussed.

Keywords: learning analytics, learning, teaching, data, complexity, bricolage

Introduction

The delivery of distance education via the Internet is the fastest growing segment of adult education (Carr-Chellman, 2004; Macfadyen & Dawson, 2010) and there is considerable pressure for institutions to ‘join the herd’. Burgeoning demand for university places, increased competition between universities, the introduction of globalisation coupled with reduced public funding are driving universities to expend time and resources on e-learning (Ellis, Jarkey, Mahony, Peat, & Sheely, 2007). There is however, evidence to suggest that the ubiquitous adoption of learning management systems (LMS) to meet institutional e-learning needs, has constrained innovation and negatively impacted on the quality of the learning experience (Alexander, 2001; Paulsen, 2002). This has contributed to a gap between the rhetoric around the virtues of e-learning and the complicated reality of the e-learning ‘lived experience’. Increasingly the adoption of technology by universities is being driven by a search for any panacea that will bridge this gap and is showing a tendency toward faddism.

Managerial faddism or hype is the tendency of people to eagerly embrace the newest fad or technology of the moment and to see problems as being largely solvable (or preventable) through better or more ‘rational’ management (Goldfinch, 2007). Birnbaum (2001) says about managerial fads: “they are usually borrowed from other settings, applied without full consideration of their limitations, presented either as complex or deceptively simple, rely on jargon, and emphasize rational decision making” (p. 2). Maddux and Cummings (2004) suggest that the use of information technology in higher education has been “plagued by fad and fashion since its inception” (p. 514). It is argued that management hype cycles are propagated by top-down, teleological approaches that dominate technology innovation, and indeed management, in higher education (Duke, 2001). Given the higher education sector’s disposition to adopting technological concepts based on hype and apparent rationality (Duke, 2001), there is a danger that the implementation of emerging technology related concepts, such as learning analytics (LA), will fail to make sustained and meaningful contributions to learning and teaching (L&T).

The aim of this paper is to explore how LA can avoid becoming yet another fad, by analysing the likely implementation paths institutions might take. The paper starts by examining what we now know about LA for evidence that suggests LA appears to be in the midst of a hype cycle that is likely to impede its ability to provide a sustained and meaningful contribution to L&T. The paper then examines some conceptual and theoretical frameworks around hype cycles, technology implementation, complex systems and models of university learning. These frameworks form the basis for identifying and analysing three likely paths universities might take when implementing LA. CQUniversity’s recent experience with a LA project that aims to assist with student retention is drawn upon to compare and contrast these paths before implications and future work are presented.

What we know about learning analytics

Johnson et al. (2013) define Learning Analytics (LA) as the collection and analysis of data in education settings in order to inform decision-making and improve L&T. Siemens and Long (2011) define LA as “the measurement, collection, analysis and reporting of data about learners and their contexts, for purposes of understanding and optimizing learning and the environments in which it occurs”(p. 34). Others have said, “learning analytics provides data related to students’ interactions with learning materials which can inform pedagogically sound decisions about learning design” (Fisher, Whale, & Valenzuela, 2012, p. 9). Definitions aside, it can be said that the widespread use of technology in higher education has allowed the capture of detailed data on events that occur within learning environments (Pardo, 2013). The explosion in the use of digital technologies with L&T has contributed to the sectors’ growing interest in LA due to the ability of technology to generate digital trails (Siemens, 2013a), which can be captured and analysed. These digital trails have the potential to inform L&T practices in a variety of ways.

It is said that LA can contribute to course design, student success, faculty development, predictive modelling and strategic information (Diaz & Brown, 2012). Others say that LA can identify students at risk, highlight student learning needs, aid in reflective practice and can enable teachers to appropriately tailor instruction among other things (Johnson et al., 2013). Reports abound identifying LA as a key future trend in L&T with many reporting its rise to mainstream practice in the near future (Johnson et al., 2013; Lodge & Lewis, 2012; New Media Consortium, 2012; Siemens, 2011). Siemens and Long (2011) typify this rhetoric when they say that LA “is essential for penetrating the fog that has settled over much of higher education” (p. 40). While the promise of LA still has a long way to go to live up to expectation, the prospect of evidence-informed practice and improved data services in an increasingly competitive and online higher education marketplace is fuelling institutions’ interest in LA.

There are many reasons for the normative pull towards improved data services in higher education. Administrators demand data for the support of resource and strategic planning, faculty and administrators are hungry for information that can assist institutions with student recruitment and student retention, and external agencies, such as governments, require a range of data indicators about institutional performance (Guan, Nunez, & Welsh, 2002). Prior to the emergence of LA, this desire for improved data services had, in many cases, led to the adoption of data warehouses by universities. Data warehouses are “… a subject-oriented, integrated, non-volatile and time-variant collection of data in support of management’s decisions” (Inmon, 2002). Data warehouses grew out of decision support systems and their use has escalated over recent years with increasing volumes and varieties of data being collected by institutions (Guan et al., 2002). Unfortunately, and despite large volumes of data, data warehouses suffer from high failure rates and limited use by users (Goldfinch, 2007).

It has been said that a majority of information systems (IS) fail, and the larger the development, the more likely it will fail (Goldfinch, 2007). While there are many potential reasons for IS project failure, managerial faddism, management approaches and immense complexity are shown to be significant factors (Goldfinch, 2007). These factors are of particular concern for LA, due to a range of underlying complexities and the ‘contextuality’ of what LA is representing (Beer, Jones, & Clark, 2012). Managerial faddism and management approaches to technology adoption can constrain the implementation’s ability to deal with complexity, as solutions are often presented as universally applicable, ‘quick fixes’ (Birnbaum, 2001). This is a concern for LA, as there is evidence to suggest that it is currently in the midst of a hype cycle.

The learning analytics hype

In observing the growing interest and attempted implementations of learning analytics within Australian universities, it is increasingly apparent that learning analytics is showing all the hallmarks of a management fashion or fad (Jones, Beer, & Clark, 2013). Fads are innovations that appear to be rational and functional, and are aimed at encouraging better institutional performance (Gibson & Tesone, 2001). Environmental factors such as increasing competition, regulation and turbulence contribute to the development of fads where there is an overwhelming desire to ‘be part of the in crowd’ (Gibson & Tesone, 2001). Fads often ‘speak to managers’ in that they appear to be common-sense and appeal to organisational rationality around efficiency and effectiveness, which makes counter-argument difficult (Birnbaum, 2001). Learning analytics talks strongly to managerialism due to its potential to facilitate data-driven decision-making and to complement existing institutional business intelligence efforts (Beer et al., 2012). Once an innovation such as LA achieves a high public profile, it can create an urgency to ‘join the bandwagon’ that swamps deliberative, mindful behaviour (Swanson & Ramiller, 2004).

The Horizon Project is an on-going collaborative research effort between the New Media Consortium and various partners to produce annual reports intended to help inform education leaders about significant developments in technology in higher education. LA has been mentioned in the Horizon Project’s reports in some form for the last five years. In the 2010 and 2011 reports, visual data analysis (Johnson, Levine, Smith, & Stone, 2010) and then learning analytics (Johnson et al., 2011) were placed in the four to five year time frame for widespread adoption. In 2012 and 2013, perhaps as a sign of developing hype, LA moved to ‘1 year or less until widespread adoption’. However, in a 2014 report (Johnson, Adams Becker, Cummins, & Estrada, 2014), predictions about the widespread adoption of learning analytics have moved back to the 2 to 3 year time frame. Johnson et al (2014) explain that this increase in time frame is not because “learning analytics has slowed in Australian tertiary education” (p. 2), but instead due to new aspects of learning analytics that add “more complexity to the topic that will require more time to explore and implement at scale” (p. 2). Could this perhaps echo Birnbaum’s (2001) earlier observation that fads are often presented as complex or deceptively simple? During a trip to Australia in 2013, George Siemens, a noted international scholar in the LA arena, said “I’m not familiar with (m)any universities that have taken a systems-level view of LA… Most of what I’ve encountered to date is specific research projects or small deployments of LA. I have yet to see a systemic approach to analytics use/adoption.” (Siemens, 2013b).

The gathering hype around LA (Jones et al., 2013) appears to be following a similar trend to the business world around the concept of “big data” – the analysis and use of large datasets in business. Universities also followed the business world with the widespread adoption of data warehouse technology for decision support (Ramamurthy, Sen, & Sinha, 2008). While data warehouses have been around for some time, they have been plagued by high failure rates and limited spread or use (Ramamurthy et al., 2008). This is indicative of a larger trend in industry, where “the vast majority of big data and magical business analytics project fail. Not in a great big system-won’t-work way…They fail because the users don’t use them” (Schiller, 2012). The adoption of these technologies appears to be perilous even when the rush to adoption is not being driven by hype. If learning analytics does appear to be showing all the signs of being yet another fad, what steps can organisations take to avoid this outcome? The following section describes some theoretical frameworks that are drawn upon to help identify potential paths.

Theoretical frameworks

Hype cycles characterise a technology’s typical progression from an emerging technology to either productive use or disappointment (Linden & Fenn, 2003). Hype cycles have been linked to a recognition that imitation is often the driving force behind the diffusion of any technological innovation (Ciborra, 1992). Birnbaum (2001) suggests that technology hype cycles start with a technological trigger, which is followed by a growing revolution and the rapid expansion of narrative and positivity around the technology. Then comes the minimal impact where enthusiasm for the technology starts to wane and initial reports of success become tempered by countervailing reports of failure. This is followed by the resolution of dissonance where the original promoters of the fad seek to explain the failure of the fad to achieve widespread impact. Such explanations tend not to see the blame arising from the fad itself, but instead attribute it to “a lack of leadership, intransigence of followers, improper implementation, and lack of resources” (Birnbaum, 2001). Hype cycles are linked with teleological or top-down approaches to technology adoption, which have primacy in higher education (Birnbaum, 2001), a practice that seems ignorant of research suggesting that ateleological or bottom-up approaches to technology adoption can lead to more meaningful implementations (Duke, 2001).

Defining what constitutes a successful implementation of an Information or Communication Technology (ICT) is perilous. The conventional approach to recognising a successful ICT project, according to Marchand & Peppard (2013), relates to some easily answered questions. Does it work? Was it deployed on time? Was it within budget? Did it adhere to the project plan? Goldfinch (2007) extends this to say that ICT projects can often fail simply because they are not used as intended, or users do not use them at all for reasons such as recalcitrance, lack of training or usability. More traditional project success measures might be useful for straightforward ICT projects where the requirements can be determined at the design stage; however, ICT projects around data and analytics are much more difficult to evaluate in terms of success or failure (Marchand & Peppard, 2013). These systems require people to interpret and create meaning from the information the systems provide. While deploying analytical ICT is relatively easy, understanding how they might be used is much less clear and these projects cannot be mapped out in a neat fashion (Marchand & Peppard, 2013). This suggests that traditional ‘top down’ approaches associated with technology implementation might be less than ideal for LA implementations.

Teleological, top-down or plan-based approaches dominate technology adoption in higher education (McConachie, Danaher, Luck, & Jones, 2005). Known as planning or plan-based approaches, they are typically idealistic, episodic and follow a deliberate plan or strategy (Boehm & Turner, 2003). The suitability of these approaches for resolving complex problems has been questioned (Camillus, 2008). By contrast, ateleological or learning approaches follow an emergent path and are naturalistic and evolutionary (Kurtz & Snowden, 2003). The debate between the planning and learning schools of process has been one of the most pervasive debates in the management literature (Clegg, 2002) with many authors critically evaluating the two schools (e.g., Mintzberg, 1989; Kurtz & Snowden, 2003; McConachie et al, 2005).

The use of planning-based processes for the implementation of LA projects creates a problem when online learning environments are acknowledged as non-linear complex systems (Barnett, 2000; Beer et al., 2012; Mason, 2008a, 2008b). Complex systems are systems that adapt, learn or change as they interact (Holland, 2006). They are non-linear systems in that they contain nested agents and systems that are all interacting and evolving, so we cannot understand any of the agents or systems without reference to the others (Plsek & Greenhalgh, 2001). Cause and effect is not evident and cannot be predicted, meaning that even small interventions can have far-reaching, disproportionate and impossible to predict consequences (Boustani et al., 2010; Shiell, Hawe, & Gold, 2008). If LA is about understanding learners and the contexts within which they learn, considering online learning environments as complex systems has a profound effect on how we approach LA projects. It follows from this that what contemporary universities need is the most productive elements of both teleological and ateleological approaches to the eight elements of the design process identified by Introna (1996). Such a synthesis is crucial to addressing the plethora of issues competing for the attention of university decision-makers, whether in Australia or internationally.

The development of LA tools and processes is only the first of the steps Elias (2011) identifies as necessary for the implementation of LA. The second step identified by Elias (2011), and arguably the far more difficult step, is “the integration of these tools and processes into the practice of teaching and learning” (p. 5). Beer et al. (2012) argue that it is likely to be the teachers who have the right mix of closeness and expertise with the learning context to make the best use of LA derived information. This echoes earlier arguments that teachers are perhaps the most important element of any attempt to enhance learning and teaching (L&T) (Radloff, 2008). Achieving such a goal would appear to require some understanding of the practice of teaching and learning. One such understanding is provided by Trigwell’s (2001) model of university teaching. As shown in Figure 1, Trigwell’s (2001) model suggests that the student learning experience is directly related to teachers’ strategies, teachers’ planning, teachers’ thinking including knowledge, conceptions and reflections, along with the L&T context. This is difficult as the teacher’s context is complex and dynamic. If LA is representing data about learners and their contexts and its goal is to enhance L&T, it is crucial that it engages with teachers and their dynamic contexts (Sharples et al., 2013).

Figure 1. Trigwell’s (2001) model of university teaching.

The three paths

Based on the preceding theoretical perspectives and personal experience within Australian universities, it is possible to identify at least three potential paths – ‘do it to’, ‘do it for’, and ‘do it with’ – that universities might take when considering how to harness LA. In the rest of the paper we describe these three paths and then use them to understand the emergence of an LA project at a particular Australian university.

Do it to the teachers

‘Do it to’ describes the top-down, techno-rational and typical approach to ICT adoption in higher education. In theory, this approach starts with the recognition that LA aligns with identified institutional strategic goals. From there a project is formed that leads to a technology being identified and considered at the institutional level, usually with input from a small group of people, before being implemented institution-wide. ‘Do it to’ approaches will typically involve the setting up of a formal project with appropriate management sponsorship, performance indicators, budgets, project teams, user groups, and other project management requirements.

The ‘do it to’ approach focuses much of its attention on changing the teaching and learning context (the left-hand end of Figure 1) in terms of policies and systems. The assumption is that this change in context will lead to changes in teacher thinking, planning and strategy. ‘Do it to’ provides a focus on creating the right context for L&T, but its effect on teacher thinking, planning and strategy is arguably deficient. ‘Do it to’ represents a mechanistic approach that, although common, is likely to fail (Duke, 2001), and this is particularly troublesome for LA implementations for a range of reasons.

The difficulty of ICT implementation for data and analytics projects (Marchand & Peppard, 2013) is compounded in LA by its novelty and the absence of predefined approaches that are known to work across multiple contexts (Siemens, 2013a). L&T contexts are complex and diverse (Beer et al., 2012), and imposing technological solutions on these environments can lead to a problem of task corruption, where staff engagement is superficial and disingenuous (Rosh White, 2006). Centralised approaches to LA can often be mistakenly viewed as purely an exercise in technology (Macfadyen & Dawson, 2012) and may provide correlative data that can be misleading or erroneous at the course or individual level (Beer et al., 2012).

Do it for the teachers

Geoghegan (1994) identifies the growth of a “technologists’ alliance” between innovative teaching staff, central instructional technology staff and information technology vendors as responsible for many of the developments that seek to harness information technology to enhance student learning. While this alliance is often called upon to contribute to the “do it to” path, its members are also largely responsible for the “do it for” path. Driven by a desire and responsibility to enhance student learning, members of the alliance will seek to draw on their more advanced knowledge of technology and its application to learning and teaching to: organize staff development sessions; experiment with, adopt or develop new applications of technology; and help with the design and implementation of exemplar information technology enhanced learning designs. Such work may lead to changes in the L&T context – in much the same way as the “do it to” path – through the availability of a new Moodle plugin for learning analytics or visits from experts on certain topics. It can also lead to changes in the thinking, planning and strategies of small numbers of teaching staff, typically those innovative teaching staff participating in the exemplar applications of technology who are becoming, or are already, part of the technologists’ alliance.

While the technologists’ alliance is responsible for many of the positive examples of harnessing information technology to enhance L&T, Geoghegan (1994) also argues that its members have “also unknowingly worked to prevent the dissemination of these benefits into the much larger mainstream population”. Geoghegan (1994) attributes much of this to the extensive differences between the members of the technologists’ alliance and the majority of teaching staff. Rather than recognizing and responding to this chasm, there has been a failure to acknowledge its existence, an assumption of homogeneity, and a belief that it is simply a matter of overcoming increased resistance to change, rather than addressing qualitatively distinct perspectives and needs (Geoghegan, 1994).

Do it with the teachers

This approach is firmly grounded in the learning (ateleological) approach to process mentioned previously. This path starts by working with teaching academics inside the course or unit ‘black box’ during the term. The idea is to develop an understanding of the lived experience, encompassing all its diversity and difficulty, so as to establish how LA can contribute within that context. The aim is to fulfil Geoghegan’s (1994) advice to develop an application “well-tuned to the instructional needs” that provides a “major and clearly recognizable benefit or improvement to an important process”. Such applications provide those outside the technologists’ alliance with a compelling reason to adopt a new practice. It is through the adoption of new practices that educators can gain new experiences “that enable appropriation of new modes of teaching and learning that enable them to reconsider and restructure their thinking and practice” (Cavallo, 2004). This is an approach which Cavallo (2004) argues is significantly more effective in changing practice than “merely being told what to do differently” (p. 97). Thus the ‘do it with’ path starts with the current reality of L&T – the right-hand end of Figure 1 – and works back toward the left.

Beyond being a potentially more effective way of changing thinking and practice around learning, the “do it with” approach brings a number of other potential benefits. This type of bottom-up or evolutionary approach is also known as bricolage – “the art of creating with what is at hand” (Scribner, 2005) – and has been identified as a major component of how teachers operate (Scribner, 2005). It is also a primary source of strategic benefit from information technology (Ciborra, 1992). However, the ‘do it with’ path also has some hurdles to overcome. These approaches are messy and tend not to fit with existing institutional approaches to technology adoption or innovation. Learning approaches are agile and require freedom to adapt and change, which clashes with existing organisational cultural norms around technology innovation, implementation and uniformity. ‘Do it with’ approaches do not fit with existing organisational structures that are rationally decomposed into specialised units (Beer et al., 2012). Other problems can be attributed to workloads and competing requirements, which can inhibit the collaborative, reflective and adaptable approaches required for bricolage. There are also questions about whether such approaches can be translated into sustainable, long-term practices.

A question of balance

The three approaches described above are not mutually exclusive. Elements of all three are very likely to exist, and perhaps need to exist, within LA implementations. It is a question of balance. The typical approach to ICT implementation is ‘do it to’, which constrains the impact the implementation might have on L&T. This paper has suggested that ‘do it with’ and even ‘do it for’ approaches may allow LA to make more sustained and meaningful contributions to L&T. However, they starkly contrast with existing institutional technology adoption and implementation norms based on ‘do it to’. While the way forward may not be clear, it is clear that we need a better balance between all three of these approaches if LA is going to enhance learning, teaching and student success. The following section describes an LA implementation at a regional Australian university with a very complex and diverse L&T environment.

EASI @ CQU

EASI (Early Alert Student Indicators) is an LA project at CQUniversity targeting a strategic goal around student retention by improving academic-student contact. It combines student descriptive data from the student information system with student behaviour data from the Moodle LMS, and provides this data, in real-time, to teaching academics within their Moodle course sites. It also provides academics with a number of ways to ‘nudge’ students who might be struggling at any point during the term. The term 1, 2014 trial was deemed very successful, with 5,968 visits to EASI across the term by 283 individual academic staff who looked at 357 individual courses. A majority of the 39,837 nudges recorded were mail-merges, where academics used the in-built mail-merge facility to send personalised emails to students. The 7,146 students who received at least one ‘nudge’ email during the term had, by the end of term, 51% more Moodle clicks on average than students who did not receive nudges. This may be indicative of heightened engagement and aligns with anecdotal comments from academics, who indicated that the personalised email ‘nudges’ promoted increased student activity and dramatically elevated staff-student conversation.
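As a rough illustration of the sort of calculation behind the 51% figure (a minimal sketch only, not the actual EASI code), the comparison could be made from a hypothetical export of per-student click totals and nudge records:

```python
import pandas as pd

# Hypothetical exports; file and column names are assumptions, not the real EASI schema.
clicks = pd.read_csv("clicks.csv")                          # columns: student_id, clicks
nudged_ids = set(pd.read_csv("nudges.csv")["student_id"])   # students who received >= 1 nudge

clicks["nudged"] = clicks["student_id"].isin(nudged_ids)
means = clicks.groupby("nudged")["clicks"].mean()

pct_more = (means.loc[True] - means.loc[False]) / means.loc[False] * 100
print(f"Nudged students averaged {pct_more:.0f}% more clicks than non-nudged students")
```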

To address a growing student retention problem, a formal project was proposed in 2012, based on a project proposal document (PPD) that outlined how the project would contribute to the strategic goal. There were more than a dozen iterations of this document before the project gained approval, which then required a project initiation document (PID) to be submitted. The PID, over a number of iterations, provided fine-grained detail on a range of plans for the project, including the project plan, scope, deliverables, milestones, budget and quality. Twelve months after the PPD, work officially began on the project following the final approval of the PID. On the surface it would appear that this particular LA project followed a ‘do it to’ approach with a formal project management methodology, and early indications about its success are encouraging. However, the underlying and invisible reality suggests a different story.

The idea for EASI evolved from many conversations and collaborations, going back to 2008, between staff from the central L&T unit and coalface academic staff. These conversations and collaborations were predominantly around finding ways of making better use of data that could inform L&T. The central L&T staff were somewhat unusual in that they were active LA researchers, had experience with software development, and were in daily contact with front-line academic teaching staff with whom they shared insights. The central L&T staff pursued LA in their own time, using informal access to test data that was often incomplete or inconsistent. The EASI concept developed during 2011, when these staff identified the potential for LA to contribute to the strategic imperative of improving student retention. A number of small-scale pilots/experiments were conducted in close partnership with the participating teaching academics on a trial-and-error basis.

These trials, which used a combination of the ‘do it with’ and ‘do it for’ paths, occurred before the approval of the formal project, whose requirements then constrained the approach strictly to ‘do it to’. The essence of this story is that the project’s success, as defined by senior management (Reed, 2014), is directly attributable to the tinkering and experimentation that occurred with the front-line academics prior to the commencement of the formal project. The ‘do it with’ and ‘do it for’ components allowed the bricolage that made the implementation meaningful (Ciborra, 1992), while the ‘do it to’ component provided the resourcing necessary to progress the idea beyond the tinkering stage. Perhaps the key message from the EASI experience is that there needs to be balance between all three approaches if LA is going to make sustained and meaningful contributions to L&T.

Conclusion

A story was told in this paper of an apparently successful ‘do it to’ LA project. It was suggested that this project was successful only because of its underpinning and preceding ‘do it with’ and ‘do it for’ processes. These processes allowed the project to adapt in response to the needs of the users over time, prior to the start of the formal project. Based on this experience and the theoretical frameworks described in this paper, it would appear likely that attempts to implement LA without sufficient ‘do it with’ will fail. Turn-key solutions and the increasing trend toward ‘systems integration’ and outsourcing are unlikely to allow the bricolage required for sustained and meaningful improvement in complex L&T contexts. There is even a question of how long EASI can remain successful given that the formal project, and its associated resourcing, will cease at the end of the project.

While this paper specifically targeted LA, there is a question as to whether the same paths, or combination thereof, are required more broadly for improving L&T in universities. Is the broader e-learning rhetoric/reality gap a result of an increasing amount of ‘do it to’ and ‘do it for’ and not enough ‘do it with’? How much effort are universities investing in each of the three paths? How could a university appropriately follow the ‘do it with’ path more often? What impacts might this have on the quality of learning and teaching? The exploration of these questions may help universities to bridge the gap between e-learning rhetoric and reality.

References

Alexander, S. (2001). E-learning developments and experiences. Education+ Training, 43 (4/5), 240-248.

Barnett, R. (2000). Supercomplexity and the Curriculum. Studies in Higher Education, 25 (3), 255-265. doi: 10.1080/03075070050193398

Beer, C., Jones, D., & Clark, D. (2012). Analytics and complexity: Learning and leading for the future. Paper presented at ASCILITE 2012: Future challenges, sustainable futures, Wellington, New Zealand.

Birnbaum, R. (2001). Management fads in higher education: Where they come from, what they do, why they fail. San Francisco: Jossey-Bass.

Boehm, B., & Turner, R. (2003). Using Risk to Balance Agile and Plan-Driven Methods. Computer, 36 (6), 57.

Boustani, M. A., Munger, S., Gulati, R., Vogel, M., Beck, R. A., & Callahan, C. M. (2010). Selecting a change and evaluating its impact on the performance of a complex adaptive health care delivery system. Clinical Interventions In Aging, 5 , 141-148.

Camillus, J. C. (2008). Strategy as a Wicked Problem. Harvard Business Review, 86 (5), 98-106.

Carr-Chellman, A. A. (2004). Global perspectives on e-learning: Rhetoric and reality : Sage.

Cavallo, D. (2004). Models of growth—towards fundamental change in learning environments. BT Technology Journal, 22 (4), 96-112.

Ciborra, C. U. (1992). From thinking to tinkering: The grassroots of strategic information systems. The Information Society, 8 (4), 297-309.

Diaz, V., & Brown, M. (2012). Learning analytics: A report on the ELI focus session (ELI Paper 2: 2012). Educause Learning Initiative.

Duke, C. (2001). Networks and Managerialism: field-testing competing paradigms. Journal of Higher Education Policy & Management, 23 (1), 103-118. doi: 10.1080/13600800020047270

Elias, T. (2011). Learning analytics: Definitions, processes and potential. Learning, 23 , 134-148.

Ellis, R. A., Jarkey, N., Mahony, M. J., Peat, M., & Sheely, S. (2007). Managing Quality Improvement of eLearning in a Large, Campus-Based University. Quality Assurance in Education: An International Perspective, 15 (1), 9-23.

Fisher, J., Whale, S., & Valenzuela, F.-R. (2012). Learning analytics: A bottom-up approach to enhancing and evaluating students’ online learning (pp. 18). University of New England: Office for Learning and Teaching.

Geoghegan, W. (1994). Whatever happened to instructional technology? Paper presented at the 22nd Annual Conference of the International Business Schools Computing Association.

Gibson, J. W., & Tesone, D. V. (2001). Management fads: Emergence, evolution, and implications for managers. Academy of Management Executive, 15 (4), 122-133. doi: 10.5465/AME.2001.5898744

Goldfinch, S. (2007). Pessimism, computer failure, and information systems development in the public sector. Public Administration Review, 67 (5), 917-929.

Guan, J., Nunez, W., & Welsh, J. F. (2002). Institutional strategy and information support: The role of data warehousing in higher education. Campus-Wide Information Systems, 19(5), 168.

Holland, J. (2006). Studying complex adaptive systems. Journal of Systems Science and Complexity, 19(1), 1-8. doi: 10.1007/s11424-006-0001-z

Inmon, W. H. (2002). Building the data warehouse (3rd ed.). New York: Wiley.

Introna, L. D. (1996). Notes on ateleological information systems development. Information Technology & People, 9(4), 20-39.

Johnson, L., Adams Becker, S., Cummins, M., & Estrada, V. (2014). 2014 NMC Technology Outlook for Australian Tertiary Education: A Horizon Project Regional Report. In N. M. Consortium (Ed.), An NMC Horizon Project Regional Report . Austin, Texas.

Johnson, L., Adams, S., Cummins, M., Freeman, A., Ifenthaler, D., Vardaxis, N., & Consortium, N. M. (2013). Technology Outlook for Australian Tertiary Education 2013-2018: An NMC Horizon Report Regional Analysis. In T. N. M. Consortium (Ed.): New Media Consortium.

Johnson, L., Becker, S., Estrada, V., & Freeman, A. (2014). Horizon Report: 2014 Higher Education.

Johnson, L., Levine, A., Smith, R., & Stone, S. (2010). The 2010 Horizon Report : ERIC.

Johnson, L., Smith, R., Willis, H., Levine, A., Haywood, K., New Media, C., & Educause. (2011). The 2011 Horizon Report: New Media Consortium.

Jones, D., Beer, C., & Clark, D. (2013). The IRAC framework: Locating the performance zone for learning analytics . Paper presented at the Electric Dreams., Sydney. http://www.ascilite.org.au/conferences/sydney13/program/papers/Jones.pdf

Kurtz, C. F., & Snowden, D. J. (2003). The new dynamics of strategy: Sense-making in a complex and complicated world. IBM Systems Journal, 42 (3), 462-483.

Linden, A., & Fenn, J. (2003). Understanding Gartner’s hype cycles. Strategic Analysis Report Nº R-20-1971. Gartner, Inc .

Lodge, J., & Lewis, M. (2012). Pigeon pecks and mouse clicks: Putting the learning back into learning analytics. Paper presented at ASCILITE 2012, Wellington, New Zealand.

Macfadyen, L. P., & Dawson, S. (2010). Mining LMS data to develop an “early warning system” for educators: A proof of concept. Computers & Education, 54(2), 588-599. doi: 10.1016/j.compedu.2009.09.008

Macfadyen, L. P., & Dawson, S. (2012). Numbers Are Not Enough. Why e-Learning Analytics Failed to Inform an Institutional Strategic Plan. Journal of Educational Technology & Society, 15 (3), 149-163.

Maddux, C., & Cummings, R. (2004). Fad, fashion, and the weak role of theory and research in information technology in education. Journal of Technology and Teacher Education, 12 (4), 511-533.

Marchand, D. A., & Peppard, J. (2013). Why IT Fumbles Analytics. Harvard Business Review, 91 (1), 104-112.

Mason, M. (2008a). Complexity theory and the philosophy of education. Educational Philosophy and Theory, 40 (1), 15. doi: 10.1111/j.1469-5812.2007.00412.x

Mason, M. (2008b). What Is Complexity Theory and What Are Its Implications for Educational Change? Educational Philosophy and Theory, 40 (1), 35-49.

McConachie, J., Danaher, P. A., Luck, J., & Jones, D. (2005). Central Queensland University’s Course Management Systems: Accelerator or Brake in Engaging Change? International Review of Research in Open and Distance Learning, 6 (1).

New Media Consortium. (2012). The NMC Horizon Report, Higher Education Edition. In N. M. Consortium (Ed.), Horizon Project (2012 ed., Vol. 2012, pp. 36). Austin, Texas USA 78730: New Media Consortium and Educause Learning Initiative.

Pardo, A. (2013). Social learning graphs: combining social network graphs and analytics to represent learning experiences. International Journal of Social Media and Interactive Learning Environments, 1 (1), 43-58.

Paulsen, M. (2002). Online education systems in Scandinavian and Australian universities: A comparative study. The International Review of Research in Open and Distance Learning, 3 (2).

Plsek, P. E., & Greenhalgh, T. (2001). Complexity science: The challenge of complexity in health care. BMJ (Clinical Research Ed.), 323 (7313), 625-628.

Radloff, A. (2008). Engaging staff in quality learning and teaching: what’s a Pro Vice Chancellor to do? Sydney: HERDSA.

Ramamurthy, K. R., Sen, A., & Sinha, A. P. (2008). An empirical investigation of the key determinants of data warehouse adoption. Decision Support Systems, 44 (4), 817-841.

Reed, R. (2014, 10/7/2014). [EASI project success].

Rosh White, N. (2006). Tertiary education in the Noughties: the student perspective. Higher Education Research & Development, 25 (3), 231-246. doi: 10.1080/07294360600792947

Schiller, M. J. (2012). Big Data Fail: Five Principles to Save Your BI Butt. Retrieved 1/6/2014, 2014, from http://www.cioinsight.com/c/a/Expert-Voices/Big-Data-Fail-Five-Principles-to-Save-Your-BI-Butt-759074/

Scribner, J. P. (2005). The problems of practice: Bricolage as a metaphor for teachers’ work and learning. Alberta Journal of Educational Research, 51(4).

Sharples, M., McAndrew, P., Ferguson, R., FitzGerald, E., Hirst, T., & Gaved, M. (2013). Innovating Pedagogy 2013 (Open University Innovation Report 2). Milton Keynes, United Kingdom: The Open University.

Shiell, A., Hawe, P., & Gold, L. (2008). Complex interventions or complex systems? Implications for health economic evaluation. BMJ, 336 (7656), 1281-1283.

Siemens, G. (2011). Learning and Knowledge Analytics. Retrieved 1/11/2011, 2011, from http://www.learninganalytics.net/?p=131

Siemens, G. (2013a). Learning Analytics: The Emergence of a Discipline. American Behavioral Scientist, 57 (10), 1380-1400. doi: 10.1177/0002764213498851

Siemens, G. (2013b). [Systems level learning analytics].

Siemens, G., & Long, P. (2011). Penetrating the Fog: Analytics in Learning and Education. Educause Review, 46 (5), 9. Retrieved from Educause Review Online website: http://www.educause.edu/ero/article/penetratingfog-analytics-learning-and-education

Swanson, E. B., & Ramiller, N. C. (2004). Innovating mindfully with information technology. MIS quarterly , 553-583.

Trigwell, K. (2001). Judging university teaching. International Journal for Academic Development, 6 (1), 65-73. doi: 10.1080/13601440110033698

Does my course suffer from semester droop?

The institutional LMS seems to be having some problems, so I’ll post this instead.

Quite a few folk I work with have made observations about semester droop, i.e. attendance at lectures/tutorials dropping off as the semester progresses. @damoclarky and @beerc confirmed that the same droop can be seen in the longitudinal LMS data they have access to across most courses.

So the question I wanted to explore was

Does the course site from the S2, 2013 offering of my course show evidence of semester droop?

The quick answer is “Yes, a bit”. But it’s not huge or entirely unexpected. Might be interesting to explore more in other courses and especially find out what’s behind it.
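For what it’s worth, the droop check itself is a simple aggregation. A minimal sketch, assuming a hypothetical CSV export of the course’s activity log (not the actual Moodle log schema):

```python
import pandas as pd

# Hypothetical export of the course's activity log; column names are assumptions,
# not the actual Moodle log schema.
log = pd.read_csv("course_log.csv")        # columns: userid, time (Unix seconds)
log["week"] = pd.to_datetime(log["time"], unit="s").dt.isocalendar().week

# Two simple droop indicators per calendar week: total clicks and distinct active students
weekly = log.groupby("week").agg(
    clicks=("userid", "size"),
    active_students=("userid", "nunique"),
)
print(weekly)
```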

Why do this?

I thought this would be interesting because

  1. I have a little tool that allows me to view usage of the site very easily.

    If it were harder, I probably wouldn’t have done it.

  2. The S2, 2013 offering is entirely online, no on-campus students so the course site is the main form of official interaction.
  3. Part of the final result (no more than 3%) comes from completing the sequence of weekly activities on the course site.
  4. I’ve tried to design these activities so that they explicitly link with the assessment and Professional Experience (the course is for pre-service teachers, who teach in schools for 3 weeks during the semester).

How?

Created two views of the S2, 2013 EDC3100 course site using MAV (a sketch of how similar counts might be computed follows this list)

  1. Clicks; and,

    Shows the entire course site with the addition of a heat map that shows the number of times students have clicked on each link.

  2. Students.

    The same image, but rather than clicks the heat map shows the number of students that clicked on each link.
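The two views are essentially per-link aggregates. A rough sketch of the equivalent calculation, assuming a hypothetical export of the course’s click log rather than whatever MAV does internally:

```python
import pandas as pd

# Hypothetical export of the course's click log; column names are assumptions.
log = pd.read_csv("course_log.csv")      # columns: userid, url

per_link = log.groupby("url").agg(
    clicks=("userid", "size"),           # the "Clicks" view
    students=("userid", "nunique"),      # the "Students" view
).sort_values("clicks", ascending=False)

print(per_link.head(20))
```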

Findings

  1. Students – there is some drop off.

    91 students completed all the assessment. 9 did not.

    97 students is the largest number of students clicking on any link. This is limited to the Assessment link and a couple of links in the first week. Where did the other two go?

    The activities in the last week range from 48 students clicking on a link up to 83 students.

    So there is a definite drop-off, with some students not completing the activities in the last few weeks.

  2. Clicks.

    Assessment link had the most clicks – 1559 clicks.

    The “register your blog” link had 1211 clicks. This is where students registered and looked for other students’ blog addresses. The blog contributed to the final result.

    Discussion forums for Q&A and Assignment 3 – 977 clicks and 949 clicks.

    Activities in the first week ranged from 177 clicks up to 352, indicating that many students started these more than once.

    Activities in the last week ranged from 83 to 146 clicks. The 146 clicks was titled “Pragmatic assignment 3 advice”.

    Definite drop-off. The most popular activity in the last week got fewer clicks than the least popular activity from week 1.

Reasons?

@palbion made the point that students are pragmatic and do what they think they need. It appears the EDC3100 design addresses this somewhat, in that students tend to stick with the activities for as long as they need to.

However, by the last week the students have the results from two assignments that make up 59% of their assessment. I wonder if the small percentage associated with completing study desk activities, combined with knowing their likely mark, results in them making a pragmatic decision to stop? That is one potential explanation for the drop-off in the last week.

The other is that they are probably busy with other assignments and courses they need to catch up on after being on Professional Experience.

@beerc has suggested that perhaps by the end of semester the students are more confident with the value of the course site and how to use it. They’ve had the full semester to become familiar with it, hence fewer clicks searching around to make sure everything is checked.

Of course, asking them would be the only way to find out.

Thoughts?

Designing for situation awareness in a complex system

The following is a summary and probably some thoughts on

Endsley, M. (2001). Designing for Situation Awareness in Complex System. In Proceedings of the Second International Workshop on the symbiosis of humans, artifacts and environment. Kyoto, Japan.

@beerc is excited by some of the potential of this and related work for university e-learning. It seems to fit with our thoughts that most universities and the individuals therein aren’t even scratching the surface in terms of what technology could offer.

My initial thoughts

I like some of the initial outlining of the problem; however, I think the solution smacks too much of the complicated systems approach, rather than the complex adaptive approach. The solution is essentially a specialised waterfall model (requirements/design/evaluate) with a focus on situation awareness. There’s some interesting stuff there, but I’m not sure how applicable it is to university e-learning. I remain leery of this idea that experts can come in and analyse the problem and fix it. It needs to be more evolutionary.

There are some nice quotes for higher ed and its systems.

The challenge of the information age

“The problem is no longer lack of information, but finding what is needed when it is needed.” Actually, I’d have to argue that when it comes to information about the learners in the courses I teach, I’m still suffering the former problem when it should be the latter.

Describes “The information gap” where “more data != more information”. Draws on some of the common explanations.

From data to information

Draws on a Bennis (1977) quote “This post-technological age has been defined as one in which only those who have the right information, the strategic knowledge, and the handy facts can make it”….makes the point “making the right decisions will depend on having a good grasp of the true picture of the situation”.

The overflow of data needs translating into information. But it will “need to be processed and interpreted slightly differently by different individuals, each of whom has varied dynamically changing but inter-related information needs”.

This translation “depends on understanding how people process and utilize information in the decision making activities”

Understanding “human error”

A few examples of this before “the biggest challenge within most industries and the most likely cause of an accident receives the label of human error. This is a most misleading term, however, that has done much to sweep the real problems under the rug.”

Instead it’s argued that the human was “striving against significant challenges”….coping “with hugely demanding complex systems”. Overload in terms of data and technology. This is addressed through long lists of procedures and checklists which are apt to eventually fail. Instead

The human being is not the cause of these errors, but the final dumping ground for the inherent problems and difficulties in the technologies we have created. The operator is usually the one who must bring it all together and overcome whatever failures and inefficiencies exist in the system

This resonates quite strongly with my experience at different universities when trying to teach a large course with the provided information systems.

Situation awareness: The key to providing information

“Developing and maintaining a high level of situation awareness is the most difficult part of many jobs”. SA is defined as “an internalised mental model of the current state of the operator’s environment…..This integrated picture forms the central organising feature from which all decision making and action takes place”.

Developing and keeping SA up to date makes up a vast portion of the person’s job.

“The key to coping in the ‘information age’ is developing systems that support this process. This is where our current technologies have left human operators the most vulnerable to error.”…..cites research that shows SA problems were “the leading causal factor”.

I wonder if such research could be done in a contemporary university setting?

Success will come if you can combine and present the vast amounts of data in a way that provides SA. “The key here is in understanding that true situation awareness only exists in the mind of the human operator”.

This is an interesting point given the rush to automated analytics.

The successful improvement of SA through design or training programs requires the guidance of a clear understanding of SA requirements in the domain, the individual, system and environmental factors that affect SA, and a design process that specifically addresses SA in a systematic fashion

SA defined

Citing Endsley (1988) SA is defined as

the perception of the elements in the environment within a volume of time and space, the comprehension of their meaning and the projection of their status in the near future”

and there are three levels of SA

  1. Perception of the elements in the environment

    “perceiving the status, attributes and dynamics of relevant elements in the environment”

  2. Comprehension of the current situation

    More than awareness of the elements, includes “an understanding of the significance of those elements in light of one’s goals”.

    A novice operator may achieve the same Level 1 SA as an experienced operator, but will likely fall short at Level 2.

  3. Projection of future status

    What are the elements in the environment going to do?

Theoretical underpinnings

Links to broader literature that has developed a theoretical framework model. Apparently heavily based on the cognitivist/psychology research. Working memory, long term memory etc. e.g. Fracker’s (1987) hypothesis that working memory is the main bottleneck for situational awareness and other perspectives. Mental models/schema get a mention as a solution.

“Of prime importance is that this process can be almost instantaneous due to the superior abilities of human pattern matching mechanisms”. Hence the importance of expertise and experience.

Designing for situation awareness enhancement

The type of systems integration required for SA

usually requires very unique combinations of information and portrayals of information that go far beyond the black box “technology oriented” approaches of the past

Designing these systems is complex, but progress has been made. It is too complex to cover here, but the paper talks about three major steps

  1. SA requirements analysis

    Frequently done with a form of cognitive task analysis/goal-directed task analysis. The point is that goals/objectives form the focus, NOT tasks.

    Done using a combination of cognitive engineering procedures with a number of operators.

    Done (with references) in many domains.

  2. SA-Oriented design

    Presents 6 design principles for SA, that is also applicable more broadly.

  3. Measurement of SA in design evaluation

    Mentions the Situation Awareness Global Assessment Technique (SAGAT) measuring operator SA.

When initially reading those three steps my first reaction was “Arggh, it’s the SDLC/waterfall model all over again. That’s extremely disappointing”. I then started wondering if this was because they are thinking of complicated systems, not complex adaptive systems?

Does GPA make any difference to #moodle course usage?

Summary

In short, there is definitely a pattern. In fact, there are two patterns evident:

  1. Students in higher GPA groups click on a range of different activities and resources at much higher rates than those in lower GPA groups.
  2. A greater percentage of students in higher GPA groups will click on resources.

There are a few exceptions to this. Another less explored pattern is a drop off in usage as the semester progresses.

This is in the context of a single offering of a course I teach with all students (n=~100) enrolled as online students.

The pattern seems to exist across different types of resources from across the semester. Though there does appear to be a drop off toward the end of semester.

This aligns with findings from prior work such as Dawson et al (2008) and Beer et al (2009).

My next step is to see what reaction presenting this pattern to the next crop of students will have.

Origins

In just over a week a new semester starts. Institutional requirements mean that course sites need to be available 2 weeks prior to the start of semester. Consequently there’s already been some activity on the new site for the course I teach. In response, I observed

To which @s_palm replied

Which got me wondering. Is there a link between accessing the course site and GPA? Do “good” students use the LMS more? What happens if students are aware of this pattern?

With the Moodle Activity Viewer installed, I have one way to explore the usage question for a past course site. To be clear

  1. This is just an initial look to see if there are any obvious patterns.
  2. As @s_palm has pointed out

To test this, I’m going to

  1. Create some new groups on my old Moodle course site based on student GPA.

    I could also do this based on the final grade in this course, might be an interesting comparison.

    Glad I had access to the database; creating these groups through the Moodle interface would have been painful. (A rough sketch of the idea follows this list.)

  2. I can then use MAV’s “select a group” feature to view how they’ve accessed the course site.

    MAV will show the number of clicks or the number of students who have visited every link on a Moodle course site. I don’t expect the number of students to reveal too much – at least not on the home page – as completing activities/resources is part of the assessment. Comparing raw click counts between groups is not going to be straightforward given the different numbers of students in each group (and MAV not offering any way to normalise this).
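Something like the following gives the flavour of step 1. It is a rough sketch only, not the SQL I actually ran: it assumes stock mdl_groups and mdl_groups_members tables (a real install may have extra required columns), a hypothetical gpas.csv exported from the student records system, and sqlite3 standing in for the real database connection.

```python
import csv
import sqlite3   # stand-in DB-API connection; a real Moodle site would use MySQL/Postgres
import time

conn = sqlite3.connect("moodle.db")      # placeholder connection details
cur = conn.cursor()
COURSE_ID = 1234                         # hypothetical course id
now = int(time.time())

def band(gpa):
    """Map a GPA to one of the four group names (boundary handling approximate)."""
    if gpa >= 6:
        return "6 GPA"
    if gpa >= 5:
        return "5 GPA"
    if gpa >= 4:
        return "4 GPA"
    return "Less than 4 GPA"

# Create the four groups and remember their ids
group_ids = {}
for name in ("6 GPA", "5 GPA", "4 GPA", "Less than 4 GPA"):
    cur.execute(
        "INSERT INTO mdl_groups (courseid, name, timecreated, timemodified) VALUES (?, ?, ?, ?)",
        (COURSE_ID, name, now, now),
    )
    group_ids[name] = cur.lastrowid

# GPAs live outside Moodle; assume a CSV export with columns userid, gpa
with open("gpas.csv") as f:
    for row in csv.DictReader(f):
        gpa = float(row["gpa"])
        if gpa == 0:                     # skip the handful with a GPA of 0 (exemptions?)
            continue
        cur.execute(
            "INSERT INTO mdl_groups_members (groupid, userid, timeadded) VALUES (?, ?, ?)",
            (group_ids[band(gpa)], int(row["userid"]), now),
        )

conn.commit()
```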

Explanation of the “analysis”

The quick and dirty comparison is between the following groups

  • 6 GPA (n=11) – all students with a GPA of 6 or above.
  • 5 GPA (n=49) – all students with a GPA of above 5, but less than 6.
  • 4 GPA (n=35) – GPA above 4, but less than 5.
  • Less than 4 GPA (n=28) – the remaining students, apart from a handful with a GPA of 0 (exemptions?)

The analysis will compare two usage “indicators” for a range of course resources/activities.

The “indicators” being compared are

  • Clicks / Students – the total number of clicks on the resource/activity by all students in a group divided by the number of students in the group.
  • Percentage – the percentage of students in that group who clicked on the activity/resource.
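A minimal sketch of how these two indicators could be computed for any link, assuming a hypothetical click-log export and group membership CSV (not how MAV calculates them):

```python
import pandas as pd

# Hypothetical inputs; file and column names are assumptions, not the Moodle/MAV schema.
log = pd.read_csv("course_log.csv")        # columns: userid, url
groups = pd.read_csv("gpa_groups.csv")     # columns: userid, group (e.g. "6 GPA")
group_size = groups.groupby("group")["userid"].nunique()

def indicators(url):
    """Clicks / Student and % Students for one link, broken down by GPA group."""
    hits = log[log["url"] == url].merge(groups, on="userid")
    clicks_per_student = hits.groupby("group").size() / group_size
    percent_students = hits.groupby("group")["userid"].nunique() / group_size * 100
    return pd.DataFrame({
        "Clicks / Student": clicks_per_student.round(1),
        "% Students": percent_students.round(1),
    })

# e.g. the Assessment book (hypothetical URL)
print(indicators("/mod/book/view.php?id=12345"))
```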

Assessment and Study Schedule

The first resources compared are

  • Assessment – a Moodle book that contains all details of the assessment for the course.
  • Study Schedule – a page that gives an overall picture of the schedule of the course with links to each week’s block.
Study Schedule
Group           Clicks / Student    % Students
6 GPA           4.2                 100.0
5 GPA           2.9                 75.5
4 GPA           2.8                 74.3
Less than 4     1.3                 53.6

Assessment
Group           Clicks / Student    % Students
6 GPA           22.7                100.0
5 GPA           12.2                75.5
4 GPA           11.0                74.3
Less than 4     8.2                 64.3

The pattern is established early. The higher GPA groups access these resources more.

Unsurprisingly, the assessment information is used more than the study schedule.

Forums

Next comparison is two forums. Each assignment has its own forum. There is a general discussion forum. Finally, there are a range of forums used for specific learning activities during the semester. The two forums being compared here are

  • Q&A forum – is a forum for general questions and discussion.
  • Assignment 3 and Professional Experience Forum – assignment 3 is wrapped around the students’ 3 weeks practical teaching period.
Q&A Forum
Group           Clicks / Student    % Students
6 GPA           19.3                90.9
5 GPA           7.9                 65.3
4 GPA           7.3                 54.3
Less than 4     1.6                 35.7

A3 and PE forum
Group           Clicks / Student    % Students
6 GPA           16.0                100.0
5 GPA           8.4                 73.5
4 GPA           5.5                 68.6
Less than 4     1.2                 35.7

The pattern continues. Particularly troubling is the significant reduction in use of the forums by the “Less than 4 GPA” group. Only about a third of them use the forums as opposed to over half accessing the study schedule and even more accessing the assessment.

I wonder how much of this percentage difference is due to students who have dropped out early?

Week 1 activities

In week 1 of the semester the students have to undertake a range of activities including the three compared here

  • Register their blog – they are required to create and use a personal blog throughout the semester. This activity has them register and be able to view the registered blogs of other students.
  • Share introductions – post an introduction of themselves and look at others. An activity that has been recently revisited for the coming semester.
  • PKM and reflection – a Moodle book introducing Personal Knowledge Management and reflection through a range of external resources. These two processes are positioned as key to the students’ learning in the course.
Register your blog
Group           Clicks / Student    % Students
6 GPA           12.9                100.0
5 GPA           9.2                 75.5
4 GPA           10.8                77.1
Less than 4     6.6                 60.7

Share introductions forum
Group           Clicks / Student    % Students
6 GPA           6.6                 100.0
5 GPA           4.6                 75.5
4 GPA           5.6                 77.1
Less than 4     2.2                 57.1

PKM and reflection
Group           Clicks / Student    % Students
6 GPA           3.8                 100.0
5 GPA           2.3                 75.5
4 GPA           2.1                 74.3
Less than 4     1.4                 53.6

Generally the pattern continues. The “4 GPA” group bucks this trend with the “Register your blog” activity. This generates at least two questions

  • Are the increased clicks / students due to troubles understanding the requirements?
  • Or is it due to wanting to explore the blogs of others?

Given that the percentage of students in the “4 GPA” group also bucks the trend, it might be the former.

Late semester resources

Finally, three resources from much later in the semester to explore how folk are keeping up. The three resources are

  • Overview and expectations – a Moodle book outlining what is expected of the students when they head out on their Professional Experience. At this point there are still four weeks of theory left in the course, followed by 3 weeks of Professional Experience.
  • Your two interesting points – a Moodle forum in the last week of new content, which is the last week before the students go on Professional Experience. The students are asked to share in this forum the two points that resonated most with them from the previous reading, which was made up of reflections from prior students about Professional Experience.
  • Pragmatic advice on assignment 3 – another Moodle book with fairly specific advice about how to prepare and complete the 3rd assignment (should generate some interest, you’d think).
Overview and expectations
Group           Clicks / Student    % Students
6 GPA           1.7                 90.9
5 GPA           2.4                 75.5
4 GPA           1.6                 65.7
Less than 4     0.8                 50.0

Your two interesting points
Group           Clicks / Student    % Students
6 GPA           1.2                 63.6
5 GPA           1.0                 55.1
4 GPA           0.6                 34.3
Less than 4     0.2                 14.3

Pragmatic A3 advice
Group           Clicks / Student    % Students
6 GPA           1.5                 90.9
5 GPA           1.4                 73.5
4 GPA           1.0                 60.0
Less than 4     0.6                 42.9

The established pattern linking GPA with usage largely remains. However, the “5 GPA” students buck that pattern with the “Overview and Expectations” book. The “gap” between the top group and the others is also much lower with the other two resources (0.2 and 0.1 click / student) compared to some much larger margins with earlier resources.

There is also a drop-off within each group toward the end of semester, as shown in the following table comparing the main Assessment link with the pragmatic A3 advice.

                Assessment                  Prag A3 advice
Group           C / S       % studs         C / S       % studs
6 GPA           22.7        100.0           1.5         90.9
5 GPA           12.2        75.5            1.4         73.5
4 GPA           11.0        74.3            1.0         60.0
Less than 4     8.2         64.3            0.6         42.9

Some “warnings”. The 10% drop for the “6 GPA” group represents one student. There’s also a chance that by the end of semester the students have worked out they can print a Moodle book (it can be used to produce a PDF), so they visit it once, save the PDF and refer to that. This might explain the drop-off in clicks / students.

References

Beer, C., Jones, D., & Clark, K. (2009). The indicators project identifying effective learning: Adoption, activity, grades and external factors. Same places, different spaces. Proceedings ascilite Auckland 2009. Auckland, New Zealand. Retrieved from http://www.ascilite.org.au/conferences/auckland09/procs/beer.pdf

Dawson, S., McWilliam, E., & Tan, J. P. L. (2008). Teaching smarter: How mining ICT data can inform and improve learning and teaching practice. Melbourne. Retrieved from http://www.ascilite.org.au/conferences/melbourne08/procs/dawson.pdf

MAV, #moodle, process analytics and how I’m an idiot

I’m currently analysing the structure of a course I teach and have been using @damoclarky’s Moodle Activity Viewer to help with that. In the process, I’ve discovered that I’m an idiot for having missed a much more interesting and useful application of MAV than the one I’ve mentioned previously. The following explains (at least one example of) how I’m an idiot and how MAV can help provide a type of process analytics as defined by Lockyer et al (2013).

Process analytics

In summary, Lockyer et al (2013) define process analytics as one of two broad categories of learning analytics that can help inform learning design. Process analytics provide insight into “learner information processing and knowledge application … within the tasks that the student completes as part of a learning design” (Lockyer et al, 2013, p. 1448). As an example, they mention the use of social network analysis of student discussion activity to gain insight into how engaged a student is with the activity and who the student is connecting with within the forum.
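As a toy illustration of that social network analysis example (not something from Lockyer et al or MAV), a reply network could be built from a hypothetical export of forum posts along these lines:

```python
import pandas as pd
import networkx as nx

# Hypothetical export of forum posts; columns roughly mirror Moodle's forum post data.
posts = pd.read_csv("forum_posts.csv")     # columns: id, parent, userid
author_of = posts.set_index("id")["userid"]

# Directed edge: the author of a reply -> the author of the post being replied to
G = nx.DiGraph()
for _, post in posts.iterrows():
    if post["parent"] and post["parent"] in author_of.index:
        G.add_edge(post["userid"], author_of[post["parent"]])

# Who attracts replies (in-degree) and who replies to others (out-degree)?
print(sorted(G.in_degree(), key=lambda x: x[1], reverse=True)[:5])
print(sorted(G.out_degree(), key=lambda x: x[1], reverse=True)[:5])
```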

The idea is that a learning analytics application becomes really useful when combined with the pedagogical intent of the person who designed the activity. The numbers and pretty pictures are of limited value by themselves; they become much more valuable in combination with teacher knowledge.

A MAV example – Introduction discussion forum

I’m currently looking through the last offering of my course, trying to figure out what worked and what needs to be changed. As part of this, I’m drawing on MAV to give me some idea of how many students clicked on particular parts of the course site and how many times they clicked. At this level, MAV is an example of a very primitive type of learning analytics.

Up until now, I’ve been using MAV to look at the course home page as captured in this large screen shot. When I talk about MAV, this is what I show people. But now that I have MAV on a computer where I can play with it, I’ve discovered that it actually generates an access heat map on any page produced by Moodle.

This includes discussion forums, as shown in the following image (click on it to see a larger version).

Forum students by David T Jones, on Flickr

This is a modified (I’ve blurred out the names of students) capture of the Introduction discussion forum from week 1 of the course. This is where students are meant to post a brief introduction to themselves, including a link to their newly minted blog.

With a standard Moodle discussion forum, you can see information such as: how many replies to each thread; who started the thread; and, who made the last post. What Moodle doesn’t show you is how many students have viewed those introductions. Given the pedagogical purpose of this activity is for students to read about other students, knowing if they are actually even looking at the posts is useful information.

MAV provides that information. The above image is MAV’s representation of the forum showing the number of students who have clicked each link. The following image is MAV’s representation of the number of clicks on each link.

Forum clicks by David T Jones, on Flickr

What can I derive from these images by combining the “analytics” of MAV with my knowledge of the pedagogical intent?

  • Late posts really didn’t help make connections.

    The forum shows the posts from most recent to least recent, i.e. the posts near the top are the late posts. This forum is part of week 1, which was 15th to 19th of July, 2013. The most recent reply (someone posting their introduction) was made in October. The next most recent posts are from 7th to 10th August, almost a month after the task was initially due (the first assignment was due 12th August; completing this task contributed a small part of the mark for the first assignment).

    These late posts had very limited views, with no more than 4 students viewing them.

  • But then neither did many of them.

    Beyond the main thread started by my introduction, the most “popular” other introduction was clicked on 41 times by 22 students (out of 91 in the course). Most attracted significantly fewer views than this.

    Students appear not to place any importance on reading the introductions of others. i.e. the intent is not being achieved.

  • Students didn’t bother looking at my Moodle profile.

    The right hand column of the images shows the name of the author and the time/date of the last post in a thread. The author’s name is also a link to their Moodle profile.

    MAV has generated an access heat map for all the links, including these. There are no clicks on my profile link. This may be because the course site has a specific “Meet the teaching team” page, or it may be that they simply don’t care about learning more about me.

  • It appears students who posted in a timely manner had more people looking at their profiles.

    This is a bit of a stretch, but the folk who provided the last post to messages toward the bottom of the above images tend to have more clicks on their profiles than those later in the semester. For example, 19, 22, and 12 for the three students providing the last posts for the earliest threads, versus 1, 1, and 7 for the students providing the last post for the more recent threads.

  • Should I limit this forum to one thread?

    The most popular thread is the one containing my introduction (549 clicks, 87 students). Many students posted their introduction as a reply to my introduction. However, of the 122 replies to my post, I posted 30+ of those replies.

In short, I need to rethink this activity.

Implications

I wonder if the networks between student blog posts differ depending on when they posted to this discussion forum? This assumes that posting to this discussion forum on time is an indicator of engagement with the pedagogical intent.

If the aim behind an institutional learning analytics intervention is to improve learning and teaching, then perhaps there is no need for a complex, large-scale, enterprise (expensive) data warehouse project. Perhaps what is needed is the provision of simple – but currently invisible – information/analysis via a representation that is embedded within the context of learning and teaching, making it easier for the pedagogical designer to combine the analytics with their knowledge of the pedagogical intent.

Answering the questions of what information/analysis and what representation is perhaps best understood by engaging and understanding existing practice.

@damoclarky needs to be encouraged to do some more writing and work on MAV and related ideas.

References

Lockyer, L., Heathcote, E., & Dawson, S. (2013). Informing Pedagogical Action: Aligning Learning Analytics With Learning Design. American Behavioral Scientist, 57(10), 1439–1459. doi:10.1177/0002764213479367

Challenges in employing complex e-learning strategies in campus-based universities

The following is a summary of McNaught et al (2009). This is one of three papers from the same institution around the LMS that I’ve looked at recently.

The abstract for the paper is

Despite the existence of a significant number of established interactive e-learning tools and strategies, the overall adoption of e-learning is not high in many universities. It is thus important for us to identify and understand the challenges that face more complex e-learning projects. Using a qualitative method that gathered together the reflections of experienced practitioners in the field, this paper outlines many types of challenges that arise in the planning and development, implementation and evaluation stages of e-learning projects. Some of these challenges are related to human factors and some are associated with external factors such as technological infrastructure, university policy and support and the teaching and learning culture as a whole. A number of models are presented to assist our understanding of this situation – one on understanding the nature of innovation, a grounded model of the challenge factors we have encountered in our own experience and one to show possible future directions.

The paradox of e-learning

Lots of e-learning conferences are full of presentations about digital resources and tools, but the reality of institutional adoption of e-learning is very different. “..this paper was born out of a desire to ‘come clean’ and see if we can advance e-learning from its often mundane position as the repository for lecture notes and PowerPoints” (McNaught et al, 2009, p. 268).

The context of campus-based universities

The cases arise from campus-based universities in Hong Kong, though “we believe our ‘story’ is more generally applicable” (p. 268). The authors do suggest that the “dynamic of distance universities are quite different” given that distance may provide more of an incentive for better e-learning strategies.

Note: I really don’t think that the distance dynamic plays much of a role at an overall level. There is perhaps more thought, but I wonder how much of that translates into action?

Even writing in 2009, the authors suggest that most of the success stories arise from pioneering teachers. The early adopters. References a 1998 paper by Anderson et al as support. Gives some statistics from CUHK from 2004 to show limited use. This draws on some of the previous papers.

More interactive uses of technology are often “development-intensive and/or administrative-intensive. They require teachers to spend a great deal of time in planning and creating the online resources, and then usually sustained significant effort in monitoring and regulating the online strategies while they are in progress”. Cites Weaver’s (2006) challenge to “encourage quality teaching practices…that seamlessly integrates the technical skills with the pedagogical and curricular practices… and does not promote transfer of existing poor teaching practices to the online environment”

Examples of unsuccessful complex e-learning

Appears to draw on literature to give examples of complex e-learning projects that failed in various stages of the development process

  1. During development – “getting it finished” – Cheng et al (2006).
  2. “Getting it used”

A model to show why innovation is challenging

Going to present two ways of “representing the challenges that face innovation and change – in this case we are considering a complex interactive use of e-learning in campus-based universities”.

The first is the J Curve. i.e. things will get worse before they get better “because of the expenses and challenges that occur early on in the innovation cycle”.

Note: But like much of the innovation literature this simplification doesn’t capture the full complexity of life, innovation and e-learning. If innovation is in something that is rapidly changing (e.g. university e-learning) then is there ever an upward swing? Or does the need for a new innovation – and another downward spiral – occur before you get the chance to climb out of the trough? For example, does the regular LMS migration phase in most universities (or the next organisational restructure) prevent any ability to climb up the J curve?

The second is the S curve (a related representation below), i.e. diffusion occurs through innovation, growth and maturity, with the transition from the “innovation” phase to the “growth” phase being the most important. And it’s hard:

Leading innovation through the bottom of the J-curve or through the transition from ‘innovation’ to ‘growth’ in the S-curve is not easy as this process often requires people to rethink their beliefs and reformulate their ways of working; this is difficult. (p. 271)

Now brings in Lewin’s ideas about the conceptual change process as a way of thinking about the challenge of changing beliefs and practices (a model the authors have used previously). This process has three stages

  1. “a process for diagnosing existing conceptual frameworks and revealing them to those involved;”
  2. a period of disequilibrium and conceptual conflict which makes the subject dissatisfied with existing conceptions; and
  3. a reforming or reconstruction phase in which a new conceptual framework is formed” [Kember et al. (2006), p.83]

Note: A few years ago I expressed some reservations about the applicability of Lewin’s model. I think they still apply.

To some extent this quote from the author’s gets at some of my reservations about this perspective on encouraging change with e-learning (emphasis added) “The process of demonstrating to teachers that there might be a better way forward with their use of e-learning requires evidence and this is why evaluation is so critical” (p. 271).

The assumption here is that there is a better way for the teacher to teach. We – the smart people who know better – just need to show them. Given the paucity of quality technology available within universities; the diversity of teachers, students and pedagogies; and the challenge from Weaver above, I wonder if there is always a better way to demonstrate that is – to employ some Vygotsky – within the Zone of Proximal Development of the particulars of the learning/teaching context.

The author’s model for understanding the challenges facing e-learning, innovation and change is

  1. An understanding that change is not easy and always meets resistance (J-curve).
  2. An appreciation that there will be no significant gains unless significant numbers of teachers begin to adopt the innovation – in this case, complex e-learning (S-curve).
  3. A suggestion that the process of implementation should model the three stages of the conceptual change process. Evaluation is integral to this process.

Note: Are people all that change averse? Sure, we are/can be creatures of habit. However, when it comes to e-learning and that sort of “innovation”, the change is often done to students and staff, rather than with them. i.e. the particular tool (e.g. a new LMS) or pedagogical approach (e.g. MOOC, flipped classroom etc) is chosen centrally and pushed out. Systems that are developed with people to solve problems or provide functions that were identified as needed are different (I think).

Note: I find #1 interesting in that it uses the J-curve to suggest that there will always be resistance. From their introduction to the J-curve, the point seems to be that innovation brings challenges and expense that mean ROI will drop initially. This doesn’t seem to be about resistance.

Note: #2 is also interesting. The requirement that there be significant levels of adoption before significant gains arise from an innovation is a problem if you accept that good quality L&T is about contextually appropriate approaches. The sheer diversity of learners, teachers etc – not to mention the rapid on-going change – suggests that this model of “significant gains == significant levels of adoption” may not fit with the context. Or at least cause some problems.

The study

A qualitative method to collect the reflections of practitioners in the field “regarding the challenges in the various stages of development and use of complex e-learning strategies”. Of the 5 authors, 3 were from central L&T and 2 were pioneering teachers.

Note: Would appear that the sample is biased toward the “innovators”; involving other folk may have revealed very different insights.

Three sources of data

  1. Detailed interviews with teachers and programmers and analysis of email communication logs for projects that were never implemented.
  2. Publications about the work of one of the authors.
  3. Similar from another author.

Findings

Iterations of reflection and discussion led to a table of challenges.

The table crosses the stages of a project (planning and development, implementation, dissemination, evaluation) against four sources of challenge – teachers, students, supporting staff, and the technology/environment/culture. The challenges identified include

  • Planning and development – limited time and resources; the necessity of new skills; miscommunication and different perceptions of the task between teachers and the support team; limitations in resources and expertise; the idiosyncratic nature of development; restrictions in university resources and support; technology being inflexible.
  • Implementation – being new to the strategies; unwillingness to learn differently; sustainability; cost-effectiveness.
  • Dissemination – unwillingness to share; being unmotivated to learn new technologies; strategies that do not match teaching styles; being contrary to existing T&L practice.
  • Evaluation – lack of cases; lack of appreciation; questions about effectiveness.

These challenges are elaborated with examples in the paper.

Discussion

Taking the four sources from the above table, the authors propose the idea of a “mutual comfort zone” (MCZ). An e-learning project needs all of these factors to sit within the MCZ for it to be successful. The paper illustrates this with the obligatory overlapping circle diagram.
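
Note: Not the paper’s notation, but one way to read the MCZ is as the intersection of the comfort zones of the four sources. A purely illustrative sketch:

% Illustrative only – my notation, not from McNaught et al. (2009).
% T, S, P and E are the sets of conditions under which teachers, students,
% supporting staff and the technology/environment/culture are each “comfortable”.
\[ \mathrm{MCZ} = T \cap S \cap P \cap E \]

Read this way, a project has a chance of success only when what it requires lies in all four sets at once, which is one way of seeing why successful cases turn out to be rare.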

Cases of successful complex e-learning strategies, thus, seem to be limited to the instances when all the factors noted in Figure 4 work in unison. It is therefore easy to see why successful cases of complex e-learning are not all that common and are restricted to highly motivated pioneering teachers who are comfortable with innovative technologies and may also be in an innovation-friendly environment. (p. 281)

Note: resonating with the mention of ZPD above.

The authors become more optimistic: the future is promising because

  • The tools are getting more “e-learning friendly”.
  • LMSs are “now more user-friendly and more flexible” – mentions open source LMSs like Moodle.

    Note: But doesn’t more flexibility bring complexity?

  • Teachers now have better IT skills and want to use technology.
  • Supporting services are improving based on accumulated experience.

    Note: I wonder about this. Organisational restructures and the movement of people aren’t necessarily helping with this. I can point to a number of situations where it has gone the other way.

  • Institutions are adopting e-learning, so the policy problem is solved.

    Note: Assumes that the policy is done well and actually can and does have an impact on practice. I’m not sure those conditions are met all the time.

Given all this “E-learning might then reach a critical mass and so that e-learning will progress beyond the valley bottom of the J-curve and will start climbing the growth phase in the S-curve”.

I wonder if this is evident? This links very nicely with some of the ideas in my last post.

References

McNaught, C., Lam, P., Cheng, K., Kennedy, D. M., & Mohan, J. B. (2009). Challenges in employing complex e-learning strategies in campus-based universities. International Journal of Technology Enhanced Learning, 1(4), 266–285.