Introducing Hunch

One of the activities for the first week of the lak11 MOOC is to get started with using Hunch and reflect on it as a model for learning.

What is Hunch?

According to the Hunch about page, it is an application of machine learning that provides recommendations to users about what might interest them on the web. It’s the work of a bunch of self-confessed MIT “nerds”.

Using Hunch

Creating an account on Hunch starts with logging in with either a Facebook or Twitter account; I went with Twitter. Some of the other LAK11 participants have queried the privacy implications of this step. Account creation then continues with answering the questions.

The site then asks a range of questions using a fun(ish) photo-based approach, which increases interest somewhat. It also provides feedback on what others have answered.

As others have noted there is a North American cultural bias to the questions.

Interestingly, only 4% of respondents said they didn’t have a Facebook account.

After answering a few more than the minimum 20 questions, Hunch presents a selection of recommendations; in this case, five recommendations each for magazines, TV shows and books. I’m assuming that the categories of recommendations were also based on my answers. The recommendations are all good or close matches. All three categories included examples I had read/watched and enjoyed.

So, it appears that Hunch is designed with badges to earn as you use the site more and provide more information. Other features seek to encourage connections and feedback between users. After all, that would appear to be the currency Hunch needs to generate its recommendations: the more connections, the better the math, the better the recommendations.

And perhaps that is the problem. I don’t feel the need for a site like Hunch to get the recommendations I want. I already have strategies, social networks and information sources that I use. I can’t see myself expending the effort on this sort of site. The question then is how many others might be bothered to provide this information?

That said, it does appear to be working fairly well already.

Reflections

After using Hunch, the LAK11 syllabus asks

What are your reactions? How can this model be used for teaching/learning?

and suggests sharing views in the discussion forum. I’m going to reflect here first and then check the discussion forum. Mainly because the following will be more stream of consciousness dumping than well-considered insight.

The obvious academic question to ask is what is meant by teaching/learning. Most of my experience has been/will be with more formal areas of learning and teaching and thus my reflections are likely to be coloured/biased by that experience.

My first observation (taking the viewpoint of a teacher) would be that any additional information about my students would be useful, especially if a system like Hunch were able to provide useful recommendations. Such recommendations would be useful to the students as well, but I wonder how much freedom they would have to take up those recommendations within a formal educational setting. It would seem that what freedom does exist lies with the teaching staff.

Such information in an L&T situation might feel somewhat similar to some of the learning style surveys that are around. Similarly, I wonder how much these types of things would reinforce existing categories/beliefs, rather than offering new paths or opportunities.

I’m feeling somewhat ill-informed about the nature and capabilities of Hunch, and thus ill-equipped to reflect on its applicability to learning and teaching. Drawing some conclusions from the little I know: Hunch appears to build models based on users’ answers to the questions, and then compares those with models of the items to come up with recommended matches.
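My naive mental model of that matching, sketched below, treats a user’s answers and each item as vectors and ranks items by cosine similarity. To be clear, this is speculation about Hunch’s approach, not a description of it, and the answer vectors and item models are invented for illustration.

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def recommend(user_answers, item_models, top_n=3):
    """Rank items by similarity between the user's answers and each item's model."""
    scored = [(cosine(user_answers, model), name)
              for name, model in item_models.items()]
    return [name for _, name in sorted(scored, reverse=True)[:top_n]]

# Hypothetical answer vector (1 = yes, -1 = no, 0 = skipped) and item models.
user = [1, -1, 1, 1]
items = {
    "magazine A": [1, -1, 1, 0],
    "tv show B": [-1, 1, -1, 1],
    "book C": [1, 0, 1, 1],
}
print(recommend(user, items, top_n=2))
```

The real system presumably learns the item models from the whole user base rather than having them hand-coded, but the basic shape of "compare a user model against item models" is the same.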

I wonder how difficult building these models would be for learning and teaching. It’s my understanding that disciplines such as physics have built fairly complex conceptual models of the domain, in particular for undergraduate studies. But it’s also my belief that the construction of such models was a fairly resource-intensive task. Will that resource-intensive nature make it difficult to implement an L&T-focused Hunch? Making the connections between other models would also seem difficult. Hunch, after all, hasn’t handled the cross-cultural aspects all that well (it was probably designed with a North American emphasis), and it operates in an area (commercial products and services) in which there has been a lot of research and a lot of commercial interest/resources.

From the perspective of a motivated learner, an L&T-flavoured Hunch could be very useful. But what percentage of learners would use such a system, given, for example, my reservations about using the current Hunch? Especially since Hunch relies somewhat on the contributions users make to the system. Given the limited percentage of folk who contribute content to social networking sites, this is likely to limit an L&T-flavoured Hunch even further.

This perhaps sums up my cynical view of the difficulty of effectively and appropriately applying analytics in L&T.

Let’s see if the Moodle discussion forum has more positive contributions.

Applying “learning analytics” to BIM

The following floats/records some initial ideas for connecting two of my current projects, BIM and lak11. The ideas arise out of some discussion with my better half, who is currently using BIM in one of the courses she is teaching.

Some brief background: BIM is a Moodle module that allows teaching staff to manage and encourage the use of individual student blogs. The blogs are hosted by an external blog provider of the student’s choice, not within Moodle. Typical use is to encourage reflection and to make student work visible for comments and discussion.

BIM participation as indicator

The discussion started with the observation that, by the second or third required blog post, it was generally possible to identify the students in a course who would do really well and those who would do really badly. How and when students provide their blog posts is a good indicator of overall result.

This correlation was first observed with my first use of BAM in 2006 (BIM stands for BAM into Moodle) and matches the findings of others.

This correlation was not something new. We were both able to observe that similar sorts of patterns exist with most educational practices. The difference is that the nature of BIM assignments generally makes the pattern more obvious. The discussion turned to what this pattern actually tells us.
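To make the observation concrete, a first check of this sort could be as simple as correlating how early each student posts with their overall result. A rough sketch follows; the data is made up, and bim itself doesn’t currently do any of this.

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical data: days before the deadline each student posted
# (negative = posted late), and their final percentage result.
days_early = [7, 5, 3, 1, 0, -2]
final_mark = [85, 78, 70, 62, 55, 40]

r = pearson(days_early, final_mark)
print(round(r, 2))
```

A strong positive correlation here would match the informal observation; the interesting (and harder) work is what, pedagogically, to do with it.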

Students with good practices

We ended up agreeing (as much as we ever do) that what this pattern is showing us is not that some students are smart and that some are not. Instead it is showing us that the “really good” students simply have the “really good” study practices. They are the ones reading the material, reflecting upon it and engaging with the assessment requirements. The “really bad” students just never get going for whatever reason. The rest of the students are generally engaging in the work at a surface level.

So, if the use of BIM makes this pattern more obvious, what should be done about it?

Encouraging connections

The tag line for the lak11 course is

Analyzing what can be connected

A thought that “connects” with me and what I think analytics might be good for. More specifically, my interest in analytics is focused more at the idea of

Using analysis to encourage connections

Which, going by the definitions given in one of the early readings, is close to what is meant by action analytics.

In the case of BIM, the idea consists of two tasks

  1. Analyse what is going on within BIM to identify patterns; and then
  2. Bring those patterns/analysis to the attention of the folk associated with a course in order to encourage action.

Some ideas

This leads to some ideas for additional features for bim. None, some or all of them might get implemented.

Connect students with evidence of good practice

  1. Add a due date to each question a student is meant to respond to within a bim activity.
  2. Allow academic staff to choose (or perhaps create) a warning regimen.
  3. A warning regimen would specify a list of messages to send to individual students based on the due date and the student’s own contributions to the bim activity. The specification might include:
    • Time when to send messages.
      e.g. 1 week, 3 days and on the day.
    • Teacher provided content of the message.
    • Some bim analysis around the activity.
      e.g. the number of students who have already submitted answers to the question; perhaps some summary, from previous uses of bim, of the connections between when posts are submitted and overall performance; or some statistics about the posts so far, e.g. word counts and other textual statistics.
    • Links to other posts.
      This one could be seen as questionable. Links to other student posts could act as scaffolding for students not really sure what to post. Of course, the “scaffolding” could result in “copying”.

The idea is that awareness of what other students are posting, or of what is considered good practice, would encourage students to consider such practice, or at least make it more likely that they do.

This is very close to the idea behind Michael De Raadt’s progress bar for Moodle.
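A rough sketch of how such a warning regimen might be represented and evaluated follows. Everything here (the regimen structure, field names and messages) is hypothetical illustration, not existing bim code, which is PHP in any case.

```python
from datetime import date

# Hypothetical warning regimen: offsets (in days before the due date)
# paired with teacher-provided message templates.
REGIMEN = [
    (7, "A week to go: {answered} of {total} students have already answered."),
    (3, "Three days left. Have a look at what others are posting."),
    (0, "Due today. Don't forget to submit your post."),
]

def messages_due(today, due_date, has_posted, answered, total):
    """Return the warning messages a student should receive today.

    Students who have already posted get nothing.
    """
    if has_posted:
        return []
    days_left = (due_date - today).days
    return [
        template.format(answered=answered, total=total)
        for offset, template in REGIMEN
        if offset == days_left
    ]

due = date(2011, 3, 14)
print(messages_due(date(2011, 3, 7), due, False, answered=12, total=40))
```

The "some bim analysis around the activity" part is where it gets interesting: the `{answered}`/`{total}` placeholders could be extended with whatever statistics the analysis produces.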

What “theories” exist?

One of the initial readings identified four main classes of components for learning analytics. One of these is theory, which includes the statistical and data mining techniques that can be applied to the data.

I need to spend some time looking at what theories exist that might apply to BIM. e.g. I’m wondering if some of the textual analysis algorithms might provide a good proxy for evaluating the quality of blog posts and whether or not there might be some patterns/correlations with final/overall student results.
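As a crude starting point, some textual statistics are easy to extract from a post; whether any of them are a reasonable proxy for quality is exactly the open question. A sketch (the statistics chosen here are my guesses, not anything drawn from the literature yet):

```python
import re

def text_stats(post):
    """Crude textual statistics for a blog post: word count, average
    word length, and lexical diversity (unique words / total words)."""
    words = re.findall(r"[a-zA-Z']+", post.lower())
    if not words:
        return {"words": 0, "avg_len": 0.0, "diversity": 0.0}
    return {
        "words": len(words),
        "avg_len": sum(len(w) for w in words) / len(words),
        "diversity": len(set(words)) / len(words),
    }

stats = text_stats("Reflection matters. Reflection on practice changes practice.")
print(stats["words"], stats["diversity"])
```

Correlating such statistics with final results across a cohort would be the obvious next step, and where the real theories need to come in.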

Learning analytics: Definitions, processes and potential

The following is the summary of my first reading for the LAK11 MOOC and follows on from my initial thoughts.

I decided to start with the paper titled Learning analytics: Definitions, processes and potential, as the combination of the date published (Jan 2011) and the title suggested it would give the most current overview. It’s also written by one of the course facilitators, so should have some connection to the course.

Summary

The paper essentially

  • Defines some terms/concepts;
  • Abstracts from some published “analytics processes” a common set of 7 processes/tasks;
  • Identifies four types of resources; and
  • Combines them in the following model.

A model for learning analytics

The paper closes with what seems to be the ultimate goal of most of the folk involved with learning analytics – automated, individualised education. I’m not sure that this is a helpful aim. First, because I have my doubts that it can ever be achieved in the real world as opposed to a closed system (i.e. laboratory experiment). Second, because I think there is a chance that having this as the ultimate aim will result in less focus on what I think is the more fruitful approach: working out how analytics can supplement the role of human beings in the teaching process.

Mm, that’s probably got a few assumptions within it that need to be unpicked.

The following is a slightly expanded summary of the paper.

Introduction

It starts with defining learning as “a product of interaction”, with the nature of the interaction differing broadly depending on the assumptions underpinning the learning design.

Regardless, we want to know how well things went. Traditional methods – student evaluation, grade analysis, instructor perceptions – all have limitations and problems.

Question: What are the limitations and problems with learning analytics? There is no silver bullet.

As more learning is computer facilitated, there’s interest in seeing how data accumulated can be used to improve L&T…leading to learning analytics. The application of statistics to rich data sources to identify patterns is already being used in other fields to predict future events.

The paper aims to review literature on analytics and define it, its processes and potential.

Learning analytics and related concepts defined

The cynic in me finds the definition of business intelligence particularly frightening/laughable. I do need to learn to control that.

  • Learning analytics: “emerging field in [which] sophisticated analytic tools are used to improve learning and education”… drawing from other fields of study.
  • Business intelligence: established process through which decision makers in the business world integrate strategic thinking with information technology to synthesize vast amounts of data into powerful decision making capabilities.
  • Web analytics: using web site usage data to understand how well the site is achieving its goals.
  • Academic analytics: application of the principles and tools of business intelligence to academia; or, more narrowly for other authors, examining issues around student success.
  • Action analytics: greater emphasis on generating ‘action’, i.e. applying data in a “forward thinking manner”.

The paper does mention the problems faced when implementing these types of strategies within existing institutional arrangements, especially around data/system ownership. It suggests that learning analytics is intended more specifically to address these issues, especially in terms of providing the data/analysis to students/teachers within the teaching context, right up to some of the automated/intelligent tutoring type approaches.

Thus, the study and advancement of learning analytics involves: (1) the development of new processes and tools aimed at improving learning and teaching for individual students and instructors, and (2) the integration of these tools and processes into the practice of teaching and learning.

I can live with that. It’s what I’m interested in. Sounds good.

Learning analytics processes

Essentially a collection of four different models/abstractions of how to do this stuff, and then a synthesis into a common set of 7 processes of learning analytics:

  1. select
  2. capture
  3. aggregate and report
  4. predict
  5. use
  6. refine
  7. share

Knowledge continuum

This is the DIKW (Data/Information/Knowledge/Wisdom) stuff which some of the KM folk, including Dave Snowden, don’t have a lot of time for. In fact, they argue strongly against it (Fricke 2007).

TO DO: There is much of interest in Fricke (2007). I have not read it through, and some of it appears heavy going; I should take the time. An interesting reference/quote is this one

Results from data mining should be treated with skepticism

drawn from some work that I should describe more here

The DIKW stuff is connected to learning analytics through some work that suggests things like “Through analysis and synthesis that (sic) information becomes knowledge capable of answering the questions why and how”.

Another to do: Snowden’s thoughts on DIKW and his work suggest another “process” for learning analytics. Should take some time to look at that.

Web analytics objectives

From Hendricks, Plantz and Pritchard (2008), “four objectives essential to the effective use of web analytics in education:

  1. define the goals or objectives;
  2. measure the outputs and outcomes;
  3. use the resulting data to make improvements; and
  4. share the data for the benefit of others.”

Five steps of analytics

Campbell and Oblinger (2008)

  1. capture
  2. report
  3. predict
  4. act
  5. refine

Collective application model

Summary of a Dron and Anderson model

Learning analytics tools and resources

Draws on various sources to suggest that “learning analytics consists of”

  • Computers;
    Includes an interesting overview of the different bits of technology (and their limitations) that are currently available. Including some references criticising dashboards.
  • People;
    Interestingly, this is the smallest section of the four, but perhaps the most important. In particular, the observation that developing effective interventions remains dependent on people.
  • Theory;
    Points to the various “kernel theories” for analytics and the observation by MacFadyen and Dawson (2010) that there’s little advice about which of these work well from a pedagogical perspective.
  • Organisations.
    Importance of the organisation in developing analytics, and some of the standard “leadership is important” stuff.

A start to the “Introduction to Learning and Knowledge Analytics” MOOC

So, the year of study begins. First up is an attempt to engage in a MOOC (Massive Open Online Course) on Learning and Knowledge Analytics. This first post aims to contain some reflection on the course syllabus and what I hope to get out of the course.

The problem and the promise

As the course description suggests

The growth of data surpasses the ability of organizations or individuals to make sense of it

This is a general observation, but it also applies to learning and teaching related activities.

The promise is that analytics, through techniques such as modelling and data mining, will aid the analysis of this data and help people and organisations make sense of it all, improving their decision making, learning and other tasks.

The course aims to be

a conceptual and exploratory introduction to the role of analytics in learning and knowledge development

It is an introductory course, no heavy math.

My reservations

I’ve dabbled in work that is close to analytics, but have always had some reservations about its promise. One of the aims of engaging in the course is to encourage me to read and reflect more on these reservations. A quick summary/mind dump of those reservations includes:

  • The data is not complete;
    At the moment, the data that is available for analytics is limited. e.g. data from an LMS gives only a very small picture of what learning and learning related activities are going on. Consequently, data driven decision making is overly influenced by the data that is available, rather than the data that is important.
  • Models and abstractions are by nature lossy;
    A lot of analytics is based on mathematical/AI models or abstractions. By definition these “abstract away” details that are deemed to be not important. i.e. information is lost.
  • Not every system is causal, except in retrospect;
    There often seems to be an assumption of (near) causality in some of this work. Some events/systems/processes simply aren’t causal: there is no regular, repeating pattern of “a leading to b”. Just because a led to b this time doesn’t mean it will next time. Some of this is related to the previous two points, but some is also related to the nature of the systems themselves, especially when they are complex adaptive systems. It will be interesting to hear the take of Dave Snowden (one of the invited speakers) on this later in the course, as this reservation is directly influenced by his presentations.
  • People aren’t rational;
    Personally, I don’t think most people are rational. This shouldn’t suggest that people aren’t somewhat sensible in making their decisions. One’s decisions always make sense to oneself, but they are almost certainly not the decisions that someone else would have made in the same situation. As part of that, I think our experiences constrain/influence our decision making and actions.

    This generates two concerns about analytics. First, I wonder just how much change in decision outcomes will arise from folk seeing all the nice, lovely new visualisations produced by analytics. Are people going to make new decisions, or simply use the visualisations to justify the same sub-set of decisions that their experiences would have led them to make anyway? Second, how common amongst learners will the patterns, models and correlations that arise from analytics actually be? Just because the model says I did “A-B-C”, does that really imply I was doing it for the same reasons as the other 88% of the population?

  • Is there enough information;
    I believe, at this currently ill-informed stage, that some (much?) of the usefulness of analytics arises from a reliance on big-number statistics, i.e. there’s so much data that you get useful correlations and patterns. How many existing institutions are going to have sufficiently big data to usefully use these techniques?
  • The technologists alliance;
    Geohegan suggests there is a “technologists’ alliance” that has alienated the mainstream through its inability to produce an application of technology of absolutely compelling value in pragmatic, mainstream terms, one that provides a compelling reason to adopt. I think it’s important that there be researchers and innovators pushing the boundaries, but too little thought is given to the majority and to applications of innovations/new technologies/fads that they see as useful. SNAPP is a good start, but there’s some more work to be done.
  • Yet another fad;
    Analytics is showing all the hallmarks of a fad. There will almost certainly be some interesting ideas here, but the combination of the previous reservations will likely result in it being misapplied and misunderstood, and it will ultimately have limited widespread impact on practice.

    As evidence of the fad, I offer the photo below that comes from this blog post (which I reference again below).

    heads of data explosion/exploitation

  • Ethical related questions;
    A post from Johnathan MacDonald on “The Fallacy of Data Bubble Ignorance” includes the following quote

    People don’t want to be spied on. It’s an abuse of civil liberty. The fact that people don’t realise they are being spied on, is not justification to do so. Betting on a business model that goes against how society really works, will ultimately end in disaster.

    If this holds, does it hold for analytics? Will the exploitation of learning analytics lead to blowback from the learners?

    For some of the above reasons, I am not confident in the ability of most organisations to engage in the use of analytics in ways not destined to annoy and frustrate learners. Many are struggling to implement existing IT systems, let alone manage something like this. I can see the possibility of disasters.

  • Teleological implementation.
    This remains my major reservation about all these types of innovations. In the end, they will be applied to institutional contexts through teleological processes, i.e. the change will be done to the institution and its members to achieve some set plan. Implementation will have little contextual sensitivity and thus will see limited quality adoption, and it will be blind to some of the really interesting contextual innovations that could have arisen.

A bit of duplication and perhaps some limited logic, but a start.

Onto the week 1 readings.

Thesis acknowledgements version 0.5

What follows is an early attempt at the acknowledgements section of the thesis. My better half, also completing her PhD, queried why this section would be needed. I will be including it because there are some people who need to be acknowledged for their contributions.

Acknowledgements

The work described here has been made possible by a huge number of people. A number far too large to acknowledge appropriately within the space allowed. Consequently, I start by offering gratitude to all, before acknowledging a few groups and individuals.

I would like to start with the people who disagreed with the ideas expressed here and embodied in the Webfuse information system. The difficulties you had with understanding and appreciating these ideas pushed me further to understand and refine them. On reflection, the fact that so many of you filled management or senior information technology positions within the organisation remains somewhat troubling. But this work would not exist without you; thanks.

Perhaps more important are the tens of thousands of people who made use of the services provided by Webfuse over its years of service. Thanks for your patience and suggestions. It was your diversity that drove recognition of how important flexibility is, and of just how inflexible most IT systems actually are.

Responding to this flexibility is not something I could have done myself. The development of Webfuse owes much to the project students and IT staff who worked on or with Webfuse over its years of existence. There were many of you and you rarely received the recognition due. In no particular order, thank you: Andrew Newman, Andrew Whyte, Matthew Aldous, Arthur Watts, Bret Carter, Chriss Lenz, Adrian Yarrow, Russell Gibbings-Johns, Zhijie Lu, Paul Wilton, David Binney, Chris Richter, Shawn Dollin, Paula Turnbull, Damien Clark, Scott Bytheway, Matthew Walker, Stephen Jeffries and many more I have almost certainly forgotten. Special mention should be made of Derek Jones, the last man standing in terms of Webfuse and a major influence on its development.

Mary Cranston was also amongst the staff working on Webfuse. Her contributions to the support and use of Webfuse were as important and immeasurable as they were generally unrecognised and self-effacing. By far the largest shortcoming of the organisation we worked for was its failure to recognise just how much a contribution Mary made to the organisation. Perhaps only surpassed by its failure to recognise the magnitude of the contribution Mary might have made to the organisation. I cannot thank Mary enough.

Webfuse and the work described here would not have happened without Stewart Marshall. Stewart was the Foundation Dean of the Faculty of Informatics and Communication and, as described in Chapter 5, remains the only senior manager in my experience to not only understand ateleological development but also publicly embrace it as a strategy for the organisation he was responsible for. Without Stewart, chapter 5 would never have happened.

From the research perspective, I am deeply indebted to the Very Respectable Professor Gregor. Without Shirley’s knowledge, connections, influence and, most especially, patience, this work would have been much less than it is. Perhaps my largest regret from this thesis is that I was not in a position to do more with Shirley’s contribution. The same might be said about the folk I have co-written with over recent years. I would like to make special mention of Kieren Jamieson as someone who made significant and under-utilised contributions to this and related work.

Lastly, I would like to thank my family and ask forgiveness for all the time I spent on Webfuse and this thesis that I should have been spending on you. A special thanks to Sandy for starting her own PhD, thereby providing the motivation necessary for me to complete this thesis before she completed hers.

A command for organisations? Program or be programmed

I’ve just finished the Douglas Rushkoff book Program or be Programmed: Ten commands for a digital age. As the title suggests the author provides ten “commands” for living well with digital technologies. This post arises from the titular and last command examined in the book, Program or be programmed.

Douglas Rushkoff

This particular command was of interest to me for two reasons. First, it suggests that learning to program is important and that more people should be doing it. As I’m likely to become an information technology high school teacher, there is some significant self-interest in there being a widely accepted importance to learning to program. Second, and the main connection for this post, my experience with and observation of universities is that they tend “to be programmed”, rather than to program, in particular when it comes to e-learning.

This post is some thinking out loud about that experience and the Rushkoff command. In particular, it’s my argument that universities are being programmed by the technology they are using, and I’m wondering why. I am hoping this will be my last post on these topics; I think I’ve pushed the barrow for all it’s worth. Onto new things next.

Program or be programmed

Rushkoff’s (p 128) point is that

Digital technology is programmed. This makes it biased toward those with the capacity to write the code.

This also gives a bit of a taste for the other commands. i.e. that there are inherent biases in digital technology that can be good or bad. To get the best out of the technology there are certain behaviours that seem best suited for encouraging the good, rather than the bad.

One of the negative outcomes of not being able to program, of not being able to take advantage of this bias of digital technology, is (p 15)

…instead of optimizing our machines for humanity – or even the benefit of some particular group – we are optimizing humans for machinery.

But is all digital technology programmed?

In terms of software, yes, it is generally all created by people programming. But not all digital technology is treated as programmable. The majority of the time, money and resources being invested by universities (I’ll stick to unis; however, much of what I say may be applicable more broadly to organisations) is in “enterprise” systems. Originally this was in the form of Enterprise Resource Planning (ERP) systems like PeopleSoft. It is broadly recognised that modifications to ERPs are not a good idea and that, instead, the ERP should be implemented in “vanilla” form (Robey et al, 2002).

That is, rather than modifying the ERP system to respond to the needs of the university, the university should modify its practices to match the operation of the ERP system. This appears to be exactly what Rushkoff warns against: “we are optimizing humans for machinery”.

This is important for e-learning because, I would argue, the Learning Management System (LMS) is essentially an ERP for learning. And I would suggest that much of what goes on around the implementation and support of an LMS within a university is the optimization of humans for machinery. In some specific instances that I’m aware of, it doesn’t matter whether the LMS is open source or not. Why?

Software remains hard to modify

Glass (2001), describing one of the frequently forgotten fundamental facts about software engineering, suggested that maintenance consumes about 40 to 80 percent of software costs, with 60% of that maintenance cost due to enhancement, i.e. a significant proportion of the cost of any software system is adding new features to it. Remember that this is a general statement: if the software is part of a system that operates within a continually changing context, then the figure is going to be much, much higher.

Most software engineering remains focused on creation. On the design and implementation of the software. There hasn’t been enough focus on on-going modification, evolution or co-emergence of the software and local needs.

Take Moodle. It’s an LMS, good and bad like other LMSs, but it’s open source. It is meant to be easy to modify; that’s one of the arguments wheeled out by proponents when institutions are selecting a new LMS. And Moodle and its development processes are fairly flexible. It’s not that hard to add a new activity module to perform some task that isn’t supported by the core.

The trouble is that Moodle is currently entering a phase which suggests it suffers much the same problems as most large enterprise software applications. The transition from Moodle 1.x to Moodle 2.0 is highlighting the problems with modification. Some folk are reporting difficulties with the upgrade process, others are deciding to delay the upgrade as some of the third-party modules they use haven’t been converted to Moodle 2. There are even suggestions from some that mirror the “implement vanilla” advice for ERPs.

It appears that “we are optimizing humans for machinery”.

I’m wondering if anyone is doing research on how to make systems like Moodle more readily modifiable for local contexts: at the very least, looking at how/if the version upgrade problem can be improved, but also at the ability to modify the core to better suit local requirements. There are aspects of this there already. One of the difficulties is that achieving this would require crossing boundaries between the original developers, service providers (Moodle partners) and the practices of internal IT divisions.

Not everyone wants to program

One reason this will be hard is that not everyone wants to program. Recently, D’Arcy Norman wrote a post talking about the difference between the geeks and folk like his dad. His dad doesn’t want to bother with this techy stuff, he doesn’t want to “program”.

This sort of problem is made worse if you have an IT division whose senior management have backgrounds in non-IT work. For example, an IT director with a background in facilities management isn’t going to understand that IT is protean, that it can be programmed. Familiar with the relative permanence of physical buildings and infrastructure, such a person isn’t going to appreciate that IT can be changed, that it should be optimized for the human beings using the system.

Organisational structures and processes prevent programming

One of the key arguments in my EDUCAUSE presentation (and my thesis) is that the structures and processes that universities are using to support e-learning are biased away from modification of the system. They are biased towards vanilla implementation.

First, helpdesk provision is treated as a generic task. The folk on the helpdesk are seen as low-level, interchangeable cogs in a machine that provides support for all an organisation’s applications. The responsibility of the helpdesk is to fix known problems quickly. They don’t/can’t become experts in the needs of the users. The systems within which they work don’t encourage, or possibly even allow, the development of deep understanding.

For the more complex software applications there will be an escalation process. If the front-line helpdesk can’t solve the problem, it gets handed up to application experts. These are experts in using the application. They are trained and required to help the user figure out how to use the application to achieve their aims. In other words, the application experts are expert in optimizing the humans for the machinery. For example, if an academic says they want students to have an individual journal, and Moodle 1.9 doesn’t provide a direct match, the application expert will come back with suggestions for kludging together the functionality Moodle 1.9 does have, perhaps via the Moodle wiki or some other tool. The application expert usually can’t suggest using something else.

By this stage, an academic has either given up on the idea, accepted the kludge, gone and done it themselves, or (bravely) decided to escalate the problem further by entering into the application governance process. This is the heavyweight, apparently rational process through which requests for additional functionality are weighed against the needs of the organisation and the available resources. If it’s deemed important enough, the new functionality might get scheduled for implementation at some point in the future.

There are many problems with this process

  • Non-users making the decisions;
    Most of the folk involved in the governance process are not front-line users. They are managers, both IT and organisational. They might include a couple of experts – e-learning and technology. And they might include a couple of token end-users/academics. Though these are typically going to be innovators. They are not going to be representative of the majority of users.

    What these people see as important or necessary is not going to be representative of what the majority of academic staff/users think is important. In fact, these groups can quickly become biased against the users. I attended one such meeting where the first 10 to 15 minutes was spent complaining about the foibles of academic staff.

  • Chinese whispers;
    The argument/information presented to such a group will have had to pass through something like a game of Chinese whispers. An analyst is sent to talk to a few users asking for a new feature. The analyst talks to the developers and other folk expert in the application. The analyst’s recommendations will be “vetted” by their manager and possibly other interested parties. The analyst’s recommendation is then described at the governance meeting by someone else.

    All along this line, vested interests, cognitive biases, different frames of references, initial confusion, limited expertise and experience, and a variety of other factors contribute to the original need being morphed into something completely different.

  • Up-front decision making; and
    Finally, many of these requests will have to battle against already set priorities. As part of the budgeting process, the organisation will already have decided what projects and changes it will be implementing this year. The decisions have been made. Any new requirements have to compete for whatever is left.
  • Competing priorities.
    Last in this list, but not last overall, are competing priorities. The academic attempting to implement individual student journals has as their priority improving the learning experience of the student. They are trying to get the students to engage in reflection and other good practices. This priority has to battle with other priorities.

    The head of the IT division will have as a priority staying within budget and keeping the other senior managers happy with the performance of the IT division. Most of the IT folk will have a priority, or will be told that their priority is, to make the IT division and the head of IT look good. Similarly, and more broadly, the other senior managers on 5-year contracts will have as a priority making sure that the aims of their immediate supervisor are seen to be achieved.

These and other factors lead me to believe that as currently practiced, the nature of most large organisations is to be programmed. That is, when it comes to using digital technologies they are more likely to optimize the humans within the organisation for the needs of the technology.

Achieving the alternate path, optimizing the machinery for the needs of the humans and the organisation, is not a simple task. It is very difficult. However, by either ignoring or being unaware of the bias of their processes, organisations are sacrificing much of the potential of digital technology. If they can’t figure out how to start programming, such organisations will end up being programmed.

References

Robey, D., Ross, W., & Boudreau, M.-C. (2002). Learning to implement enterprise systems: An exploratory study of the dialectics of change. Journal of Management Information Systems, 19(1), 17-46.