Barriers to harnessing academic analytics

The Indicators project is an attempt to enable the examination, analysis and comparison of LMS usage across time, systems and institutions. The project is nothing new: academic analytics projects have been around for a while, and others at my current institution and I have long talked about using the collective data in system logs to inform the practice of L&T at universities.

Given the long-term interest and the need to do this, why aren’t more universities doing it? What is getting in the way? This post starts a list of factors I have gleaned from the literature, personal experience and talking with others. What would you add?

(PS. I’m trying to follow advice in this post on improving a blog post)

Nature of the technology: Limitations of the LMS

Currently, for most universities, the learning management system (LMS – Blackboard, Moodle etc.) is the main environment for L&T. For good or bad, this is a major source of data. Some of the barriers are inherent to the nature of the LMS. These include:

  • Minor variations in naming and approach.
    Even though there are more similarities than differences between different LMS (Black et al., 2007), those differences are enough to make comparisons between LMS somewhat difficult. There needs to be work like that carried out by Malikowski et al. (2007) to enable meaningful comparisons.
  • The LMS focus on individual courses.
    An LMS is focused on enabling individual academics to create and maintain course sites. This means the features, including reporting, are targeted at the course level.
  • The LMS doesn’t encompass all data.
    To really understand the impact of LMS activity you need to be able to match student activity with student performance (i.e. their grades). Typically, student grade information is not available within the LMS. If all you rely on is the LMS data, you can’t access this and other information (e.g. demographic information about students or staff). A sketch of the kind of join this requires follows this list.
  • Advice to clear system logs.
    The version of Blackboard my institution used to run came with advice to purge the activity accumulator, one of the major tables of the LMS, which recorded what was being done with Blackboard. Purging it means destroying the data: unless the IT folk involved archived it first, it is lost.
  • The general poor quality of the reporting/visualisation features of an LMS.
    From Dawson and McWilliam (2008) “the current LMS present poor data aggregation and similarly poor visualisation tools in terms of assisting staff in understanding the recorded patterns of student learning behaviour.”
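
To make the grade-matching point concrete, here is a minimal sketch, assuming hypothetical CSV exports (activity.csv from the LMS logs, grades.csv from student records) with invented column names, of the join the LMS itself can’t do:

    import pandas as pd

    # Hypothetical exports - neither lives inside the LMS itself.
    # activity.csv: one row per student click (student_id, course_id, hits)
    # grades.csv: final results from student records (student_id, course_id, grade)
    activity = pd.read_csv("activity.csv")
    grades = pd.read_csv("grades.csv")

    # Total LMS hits per student per course.
    usage = activity.groupby(["student_id", "course_id"], as_index=False)["hits"].sum()

    # The join the LMS can't do: match usage against performance.
    merged = usage.merge(grades, on=["student_id", "course_id"])

    # Average hits for each grade band - the basic "indicator" pattern.
    print(merged.groupby("grade")["hits"].mean())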

Technical furphies

It’s been my experience that when technical people don’t really want to do something, for whatever reason, they come up with a furphy to explain why it can’t be done. The best of these furphies have an element of truth in order to make them plausible. However, if you know a bit more about technology, there are usually simple solutions. The ones I’m aware of include:

  • It will overload the live database.
    The idea here is that trawling the LMS database to generate analysis of how it is being used will raise “concerns with system load for querying the database in order to extract the required data” (Dawson & McWilliam, 2008), i.e. the users of the LMS will suffer from poor performance. The problem with this furphy is that any IT department worth its salt will already have a copy of the database, regularly updated, that it uses for development and testing. If they don’t, they’re not doing their job properly. That test database is not directly used by users and can generally be queried without performance concerns. In addition, with any of the “enterprise” database systems, it shouldn’t be difficult to create another copy of the database if they don’t want the analytics work touching the dev database. A sketch of querying such a copy follows this list.
  • We don’t have enough resources.
    This is perhaps a superset of the previous point. It usually means that we don’t have the database administrators, developers or hardware resources available to support the project. Providing access to an existing dev database on an “enterprise database” should take seconds – at least if the processes and automation used by the IT department are any good.

    In defence of the IT folk, and to make clear that these are not universals: in some cases they genuinely don’t have the resources. Many institutions don’t put enough resourcing into IT to achieve what they want. In these cases, IT are probably better off collaborating with people who want to do things, as a better way to generate support for the additional resources.

  • The database is too complicated to understand.
    Some of the folk working on L&T analytics don’t have technical backgrounds and must rely on IT. As another version of the previous points, there is the argument that the databases are too complicated to easily extract useful data from.
  • It’s not us, it’s the system owner.
    We’d love to give you access, but first you have to talk to Y as they are the system owner. We’re pretty sure that they are not keen on the idea. Not sure how you might contact them. Just a bit of passive resistance.
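
As a concrete response to the first furphy, here is a minimal sketch of running an analytics query against the test/reporting copy rather than the production system. The connection string, table and column names are assumptions, not any real LMS schema:

    import pandas as pd
    from sqlalchemy import create_engine

    # Point at the test/reporting copy, NOT the production database.
    # The DSN and schema below are hypothetical.
    engine = create_engine("postgresql://indicators@reporting-db/lms_copy")

    # An aggregate query like this touches only the copy, so live users
    # of the LMS never see the load.
    query = """
        SELECT course_id,
               date_trunc('week', event_time) AS week,
               count(*) AS hits
        FROM activity_log
        GROUP BY course_id, week
        ORDER BY course_id, week
    """
    weekly_usage = pd.read_sql(query, engine)
    print(weekly_usage.head())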

Organisational barriers

  • Ownership of the data.
    Some teaching staff see all information about their course in the LMS as belonging to them. Some faculties believe the same for their faculty courses. IT may see ownership of the database as belonging to them. The institution may not have any firm written policy on this and simply rely on who speaks most strongly. This problem is often connected with the following.
  • Mismatch with system owner requirements.
    For example, the organisational owner of student records data at our institution has been the student administration section: the folk responsible for getting students enrolled, processing results and accepting money. These folk are mostly not focused on improving learning and teaching, so they may not see the rationale for analytics or other such tasks.
  • Privacy concerns.
    Fears that information about individual students, or even staff, will be made public and/or misused, leading to the next point.
  • Plans and fears of using the data as a stick.
    It’s not hard to see some management using reports from the LMS to identify “bad” academics and punish them. This may be achieved simply through the creation of KPIs and standards for course delivery that assume that it’s possible to make such universal statements and ignore the inherent variability in university learning and teaching. This leads to the next one….
  • Task corruption and Goodhart’s law.
    If you set a particular measure as a performance indicator/target (e.g. the presence of a discussion forum or the number of contributions to the discussion forum by the staff member) you will get people achieving that target. However, in some/many cases they will be using task corruption to achieve it. A sketch of how mechanical such a measure is follows this list.
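
To show just how mechanical, and hence gameable, such a measure is, here is a small sketch of a “staff forum posts” KPI. The export file and column names are invented for illustration:

    import pandas as pd

    # Hypothetical forum export: one row per post.
    # Columns: author_id, role, course_id.
    posts = pd.read_csv("forum_posts.csv")

    # The "KPI": posts per staff member per course. Nothing here can tell
    # a considered reply from ten one-word posts made to hit the target.
    kpi = (posts[posts["role"] == "staff"]
           .groupby(["course_id", "author_id"])
           .size()
           .rename("staff_posts"))

    # A hypothetical "standard": flag staff with fewer than 5 posts.
    print(kpi[kpi < 5])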

Solutions

Many people complain that I point out problems without pointing out solutions, hence the inclusion of this section.

The solutions we’ve used, more by luck than forethought, have included:

  • Historical responsibilities.
    At one stage, members of our project were system owners or designers of at least two of the LMS used by our institution. This meant we had access to the databases for those systems, though it didn’t help with the LMS we weren’t responsible for.
  • Accidents.
    As mentioned above, the advice from Blackboard was to regularly clear/rotate the activity accumulator. For some reason, our IT folk never did this, so all that data, going back to 2005, was available.
  • Technical knowledge.
    At least two of our project members have some technical knowledge. When faced with technical furphies we could point out alternate representations.
  • Organisational knowledge.
    I’ve been at my current institution for nearly 20 years (god that’s sad). For good and bad, I know a fair bit about the organisation and the people within it. Often it is who you know, not what. For example, we got access to student records data over 10 years ago because I knew the system owner of the student data regularly had lunch at a certain place. I just happened to be there to ask nicely to have access to the data.
  • Personal connections.
    Both internally and externally project members know a broad cross-section of people who can help provide alternate representations of stories that have been told. Representations that often unlock doors.
  • Organisational power.
    This is perhaps the most useful one. If you are doing work that is seen to be useful or important by one person in power, or lots of people throughout the organisation, that can often provide the necessary power to override/workaround objections.
  • Workarounds.
    Often you don’t have to ask permission. Often there are technical and social solutions to access that can work around barriers. For example, Bakharia et al. (2009) describe an approach to analysing course discussion forums that uses Greasemonkey to avoid the need to access the LMS database; a rough analogue is sketched below.
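
SNAPP itself is a Greasemonkey script running in the browser. A rough Python analogue of the same workaround, reading the rendered forum pages rather than the database, might look like the following; the URL and markup are invented for illustration:

    import requests
    from bs4 import BeautifulSoup
    from collections import Counter

    # Hypothetical forum URL; a real script would authenticate first and
    # respect the institution's access policies.
    page = requests.get("https://lms.example.edu/course/101/forum")
    soup = BeautifulSoup(page.text, "html.parser")

    # Assume each post marks its author with <span class="author">name</span>;
    # adjust the selector to the real markup.
    authors = [tag.get_text(strip=True) for tag in soup.select("span.author")]

    # Who is talking, and how much - the raw material for the kind of
    # social network analysis SNAPP does in the browser.
    print(Counter(authors).most_common(10))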

Of course, in a perfect world, all members of the organisation would be warm and fuzzy folk who are only too happy to collaborate with their colleagues to achieve important outcomes for the organisation. None of them would ever be advised not to provide assistance or support to individuals within the same organisation.

References

Bakharia, A., E. Heathcote, et al. (2009). Social networks adapting pedagogical practice: SNAPP. Same places, different spaces. Proceedings ascilite Auckland 2009. Auckland: 49-51.

Black, E., D. Beck, et al. (2007). “The other side of the LMS: Considering implementation and use in the adoption of an LMS in online and blended learning environments.” TechTrends 51(2): 35-39.

Dawson, S. and E. McWilliam (2008). Investigating the application of IT generated data as an indicator of learning and teaching performance. Melbourne, Australian Learning and Teaching Council: 45.

Malikowski, S., M. Thompson, et al. (2007). “A model for research into course management systems: bridging technology and learning theory.” Journal of Educational Computing Research 36(2): 149-173.

5 thoughts on “Barriers to harnessing academic analytics”

  1. babyshark13

    Hi David,

    Quite an interesting blog post you have written. I would agree with you that institutional data mining from a Course Management System has been a hot topic for many years.

    As it relates to the Blackboard Activity Accumulator, well I would argue that the data you want isn’t necessarily best found in that table. Plus, the tool that purges data from the Activity Accumulator doesn’t delete the data. Rather, it moves the data in bulk from a transactional schema to an operational schema.

    I digress…rather I think the data that is most meaningful to you comes from the entities themselves. Much of the persisted data is time based. Often institutions will keep more than 24 calendar months of data accessible in the system. Many of the entities will keep records not just of data that has been created, but also records about data that has been accessed or reviewed.

    Not every entity maintains a record of access or view (indirectly or directly). Content authors have to define tracking on an entity. I would hypothesize that less than 10% of the authoring population keenly defines tracking in advance.

    So you might have to approach your problem slightly differently. You might want to look at the use of an academic tool in the Course Management environment over time. This should help you understand the adoption curve from a usage perspective. There’s a lot to be correlated from this data.

    For example, let’s say that a new tool or application has been introduced by the Course Management Vendor or Open Source Community. The tool is available system wide. One of the key data points you would want to study is the adoption of the tool in relation to the volume of active courses/users in the system at the time the tool was introduced. You would then want to study over the next N months the frequency and use of the tool. You may even generate a time series data set demonstrating adoption over time. Now here’s where it gets interesting. Once you understand the adoption of the tool, you may want to correlate tool usage to some other data set. Maybe you want to correlate adoption of the tool to academic performance. Quite a grand topic…
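
    A minimal sketch of that adoption curve, with an invented event export and column names:

        import pandas as pd

        # Hypothetical event export: tool_name, course_id, event_date.
        events = pd.read_csv("tool_events.csv", parse_dates=["event_date"])

        # The newly introduced tool (name invented for illustration).
        blogs = events[events["tool_name"] == "blog"]

        # Courses using the tool each month...
        monthly = (blogs.groupby(blogs["event_date"].dt.to_period("M"))
                        ["course_id"].nunique())

        # ...as a share of all active courses that month: the adoption curve.
        active = (events.groupby(events["event_date"].dt.to_period("M"))
                        ["course_id"].nunique())
        print((monthly / active).fillna(0))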

    If that’s the case from a correlation perspective, I would take a sample of the population prior to the tool adoption, a sample of the population that adopted the tool set and a third sample which did not adopt the tool set, but existed or used the Course Management system during the time the tool was available. I would then call out some indicator about the three communities which could then be correlated to the use of the tool.

    There are obviously lots of factors involved…but when you are talking about this form of passive analytics a lot must be assumed.

    If you are interested in talking more about the topic, let me know…

    Regards,

    Steve

    1. G’day Steve,

      Thanks for the detailed and interesting comment.

      In terms of the digression, I’m guilty of writing quickly and not checking the details of the specific example in Blackboard. Though I will mention that in the above I am talking about v6.3 – not sure if that makes a difference.

      With your question about entities, tracking etc. you have hit on one of the problems facing the hope/aim of the Indicators project to do cross-LMS comparisons. What you reveal is that the internal operation of Blackboard is a lot more complex than might have been assumed.

      This is likely to be the case for many/all of the LMS we will try to look at. Attempting to generate meaningful LMS-independent measures that enable apples vs apples comparisons could well be quite challenging. It probably also highlights the importance of getting folk involved who are experts in the internal operation of each LMS.

      In terms of Blackboard and other commercial LMS we assumed the experts are likely to be within the companies themselves. So we have been pondering ways in which we could approach them to gauge interest. Any pointers?

      I’m currently at a conference, and have been talking with other folk and there appears to be great interest in cross-LMS, cross-institutional comparisons. Some of the scuttlebutt has been that someone did approach one such company (one you may be familiar with) and was shown the door somewhat quickly.

      The temporal perspective on feature adoption is not something we’ve seriously thought about, but it would be very interesting. To some extent I feel we’re starting to drown under all the possible interesting projects/perspectives to look at with this data.

      This is one of the reasons we’re very keen to talk to and work with anyone interested. It’s a project way bigger than we can handle alone, so any insights you have to share would be warmly welcome.

      Thanks again.

      David.

  2. Pingback: Academic Analytics: Data rich, information poor. Who should own the data? « Col’s Weblog

  3. Pingback: Academic Analytics: Data rich, information poor.

  4. Pingback: Functional fixedness, analytics, the LMS and the “V” word « The Weblog of (a) David Jones
