Moodle curriculum mapping – Step 2

This is the second exploration of an idea for enhancing Moodle to enable curriculum mapping. It carries on from the first step and is part of a broader project.

The aim today is to:

  • Create a CSV file of Moodle outcomes for a couple of programs.
    Mostly to get a feel for the outcomes that accrediting bodies are after and to test out this “uploading” of outcomes. Also to get some insight into how the “scales” might work.
  • “Map” a course or two with those outcomes.
    The aim is to get a feel for how difficult doing this actually is and how well it works. Perhaps get some insights into ways it could be made easier/more effective.
  • Start identifying the database structures where that information is placed.
    This is a pre-cursor to starting to develop extensions to Moodle that will draw on this information. It helps identify where the information is, what is there and what might be possible in terms of development.

Am going to be updating this post throughout today (30 March, 2010)

Moodle outcomes CSV file testing

Moodle allows you to upload outcomes into Moodle via a CSV file. The format is a 6 field CSV file

  • outcome_name – full name
  • outcome_shortname – short name
  • outcome_description
  • scale_name – name of scale
  • scale_items – comma-separated list of scale items
  • scale_description


Participation;participation;;Participation scale;"Little or no participation, Satisfactory participation, Full participation";
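As a sanity check on the format, a line like the above can be parsed with a few lines of Python (Python purely for illustration here; the actual import is done by Moodle):

```python
import csv
import io

# An outcomes line: six semicolon-separated fields, with the
# comma-separated scale items quoted so they remain one field.
line = ('Participation;participation;;Participation scale;'
        '"Little or no participation, Satisfactory participation, '
        'Full participation";')

fields = next(csv.reader(io.StringIO(line), delimiter=';', quotechar='"'))
print(len(fields))            # 6 – one per field in the list above
print(fields[4].split(', '))  # the three scale items
```

Note the empty third and sixth fields: the two description fields are optional, but the semicolons for them still have to be there.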

Each outcome in Moodle is associated with a scale. The scale is typically used to mark student performance against the outcome. For curriculum mapping, I believe the scale can be used to measure how well the course/activity/resource meets the outcome/attribute etc.

The task now is to create a useful CSV outcomes file for my purposes. A few choices exist.

Am thinking I’ll start with the institutional graduate attributes – mostly for political reasons – and then do one of the disciplinary bodies’ outcomes for a bit more learning.

Graduate attributes

Not 100% certain this represents the current state of the institution’s graduate attributes, but it’s good for an experiment. The institution is apparently introducing graduate attributes progressively during 2010 with all undergraduate programs done from Jan 2011 and all other programs from 2012.

The institution has 8 graduate attributes:

  • Communication
  • Problem solving
  • Critical thinking
  • Information literacy
  • Team work
  • Information technology competence
  • Cross cultural competence
  • Ethical practice

As it stands, I’ve been unable to find any description of these. However, a document describing the project has developed some “levels of achievement” for the attributes and offered descriptions of those levels using learning outcomes and the revised Bloom’s taxonomy.

The three levels are: introductory, intermediate and graduate. Each outcome/level is associated with learning domains from the revised Bloom’s taxonomy.

Note: my aim here is to identify what has been done and work out how it can be translated into Moodle’s outcomes CSV file. Not to judge what’s been done.

The CSV file

The first version of a CSV file for the attributes is done and successfully imported. Will reflect more on this after lunch.

The Moodle help documentation suggests that the format is as listed above, with outcome_description and scale_description optional. That means those values can be left empty, but every field must still be present on each line. Getting the format exactly right was an interesting exercise in trial and error.

The first two lines of the file are

Communication;comm;"Described here";"CQU Graduate Attributes (Communication)";"Introductory – Use appropriate language to describe/explain discipline-specific fundamentals/knowledge/ideas (C2), Intermediate – Select and apply an appropriate level/style/means of communication (C3), Graduate – Formulate and communicate views to develop an academic argument in a specific discipline (A4)";

What it looks like

When trying to map an activity/resource in Moodle, you use the “edit” facility for that activity/resource and a part of the resulting page looks like the following – click on it to see it bigger.

Moodle outcomes

Some comments on this image:

  • Duplicate outcomes suggest outcome management isn’t great.
    You can see three outcomes for Communication. This is due to the problems associated with importing the CSV file – two failed attempts, followed by a successful one – and the subsequent difficulty of finding out how to delete the older versions of the outcome. Ahh, you have to go to “edit outcomes”.
  • Not enough information.
    While it wouldn’t be a problem eventually, the problem I’m currently facing is that I’m not familiar enough with the outcomes to understand what they mean. I want some additional pointers in the interface – even just the normal Moodle help link (a little question mark). This absence is somewhat related to the next point.
  • Can’t use the scale here and now.
    For curriculum mapping, I want to select the scale here and now. The idea is to specify to what level this activity/resource meets the outcome. This highlights the difference in purpose between outcomes in Moodle (focused on measuring individual student performance) and how outcomes would be used in many forms of curriculum mapping (mapping how well a course covers outcomes). For Moodle outcomes, the scale starts to apply in the gradebook, i.e. when you’re marking the individual student, not in the activity/resource.

    Graduate attributes could be used for both approaches: mapping the course and also tracking student progress.

  • The need for groupings of outcomes.
    The first outcome, “David’s first outcome”, is from some earlier testing. But it does highlight an additional requirement: the ability to separate (and perhaps map between) different groupings of outcomes, e.g. CQU’s graduate attributes, course learning outcomes and perhaps discipline accrediting body learning outcomes.
  • The Moodle workflow is somewhat limited.
    With outcomes, as with other aspects of Moodle, the “workflow” – the sequence of screens you go through as you perform a task – leaves a bit to be desired. It’s often not clear where to go or, having finished one step, how best to proceed.

Other outcomes

Am now looking at the accreditation requirements for psychology and public relations to understand what is there and what implications that might have for this idea.

For public relations, the requirements appear to be a combination of course outcomes, university graduate attributes and some specific “criteria/areas” specified by the program.

In psychology, there’s an odd mixture of discipline-specific “graduate attributes”, each with its own set of criteria, and a collection of “skills” to “map” assessment against.

Where’s the data?

Seems the outcomes stuff might be stored in three tables:

  • grade_outcomes: id, courseid, shortname, fullname, scaleid, description, timecreated, timemodified, usermodified
    Obviously the table the CSV import modifies.
  • grade_outcomes_courses: id, courseid, outcomeid
    Links a course with an outcome in the previous table.
  • grade_outcomes_history: id, action, oldid, source, timemodified, loggeduser, courseid, shortname, fullname, scaleid, description.
    Not sure on this one.
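To check my reading of the schema, here’s a sketch that recreates the first two tables in SQLite and does the obvious join. Field names come from the list above; the sample data is invented:

```python
import sqlite3

# Recreate (in SQLite, purely as a schema-reading exercise) the first
# two tables above, then join grade_outcomes_courses back to
# grade_outcomes to list which outcomes a course uses.
con = sqlite3.connect(':memory:')
con.executescript("""
    CREATE TABLE grade_outcomes (
        id INTEGER PRIMARY KEY, courseid INTEGER, shortname TEXT,
        fullname TEXT, scaleid INTEGER, description TEXT,
        timecreated INTEGER, timemodified INTEGER, usermodified INTEGER);
    CREATE TABLE grade_outcomes_courses (
        id INTEGER PRIMARY KEY, courseid INTEGER, outcomeid INTEGER);

    -- a site-wide outcome (courseid left NULL) used by course 42
    INSERT INTO grade_outcomes (id, shortname, fullname)
        VALUES (1, 'comm', 'Communication');
    INSERT INTO grade_outcomes_courses (courseid, outcomeid) VALUES (42, 1);
""")
rows = con.execute("""
    SELECT goc.courseid, go.fullname
    FROM grade_outcomes_courses goc
    JOIN grade_outcomes go ON go.id = goc.outcomeid
""").fetchall()
print(rows)
```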

So, one question is where does the mapping against a particular activity/resource get put?

What about code?

moodle/lib/grade/grade_outcome.php defines a class grade_outcome that is meant to handle it all, including database manipulation.

Moodle, Oracle, blobs and MS-Word – problem and solution?

This documents a problem, and hopefully an initial solution, around the combination of technologies that is Moodle, an Oracle database, and content that is copy and pasted from Word.

Origins of the problem

The problem first arose at my current institution and its installation of Moodle, which is using Oracle (11g, I believe) as the database. It first became apparent through BIM, an application I’ve written, but was also occasionally occurring with the Moodle wiki.

BIM allows students to create an individual blog on their choice of blog engine – mostly – and then provides management services to staff. Part of that is keeping a copy of all of the students’ blog posts within the Moodle database.

The problem was that some student posts weren’t being inserted into the database. They were reporting the following error

ORA-01704: string literal too long

These same posts work fine on my installation of Moodle – using MySQL – so it appeared to be an Oracle problem. The error relates to the strange requirements Oracle has when you want to insert long strings of text. On “good” databases you just do an ordinary INSERT statement. On Oracle, however, if you are trying to insert a long string into a BLOB or CLOB field, you have to use a different process: an INSERT statement that puts in a special empty value, followed by a separate step to write the actual data.

Gotta love a wonderfully designed and consistent enterprise database.

The question is, what is causing this problem?

The problem

After much diagnosis it appears that “special characters”, created by students copying and pasting content from MS Word into their blog, are at the core of the problem. When inserting long posts into the database, Moodle normally checks the length of the post; if it is greater than 4000, Moodle jumps through the special (and silly) hoops that Oracle requires.

However, for the problem posts, when Moodle checks the length, it is less than 4000. And the posts do have fewer than 4000 characters. Yet when Moodle tries the normal insert process into Oracle, we get the above error message.

It appears that the problem is being caused by the presence of “special characters” from Word. These appear to be “tricking” Oracle into thinking that these posts are greater than 4000 characters.
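My guess at the mechanism, sketched in Python (Moodle itself is PHP; this is only to illustrate the character-versus-byte difference):

```python
# A Word "smart" quote is one character but three bytes in UTF-8.
# A limit check that counts characters can pass while Oracle,
# counting bytes, sees a literal longer than 4000.
post = 'a' * 3999 + '\u2019'         # ends with a right single quote

chars = len(post)                     # what a character count sees
octets = len(post.encode('utf-8'))    # what a byte count sees
print(chars, octets)                  # 4000 vs 4002
```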

The solution

The solution appears to be to clean up the posts before inserting them into the database.

The Moodle discussion forum uses exactly the same process for inserting forum messages into the database. It doesn’t appear to have this problem, as the HTML editor in Moodle appears to do a reasonable job of cleaning up the “special characters”. Though this might not be 100% successful.

In a perfect world, I want to put some PHP code into BIM (at first, and perhaps into Moodle later) that does this cleaning. The obvious question is: does Moodle already support this and, if not, why not?

Existing Moodle support

Moodle has optional support for HTMLPurifier; however, I’m not sure this is an exact match for this purpose. To be clear, the problem here isn’t cleaning up the HTML generated by Word. It’s the special characters for quotes, dashes etc. that Word uses. In some cases these are actual “special characters”, but for some reason they are also appearing in the text as entities like &#8217; for a single quote. I realise by that description I’m revealing that I haven’t bothered to dig too far into this, yet.

In addition, my main interest is solving this problem in BIM for the short term. So something that needs to be changed at the Moodle level is of little interest.


There’s this bit of suggested PHP that is essentially what I’m looking for. It picks up some of the problems, but not all. In particular, it doesn’t deal with the “&#” issue.

Combining a few different bits and pieces brings me to this code

$badchr = array(
    '“',             // left double smart quote
    'â€' . chr(157), // right double smart quote (its mojibake byte form)
    '‘',             // left single smart quote
    '’',             // right single smart quote
    '…',             // ellipsis
    '—',             // em dash
    '–',             // en dash
    '&#8217;',       // single quote as an HTML entity
    '&#8211;'        // dash as an HTML entity
);

$goodchr = array('"', '"', "'", "'", "...", "-", "-", '\'', '-');

$post = str_replace($badchr, $goodchr, $post);

It seems to work with a couple of ad hoc posts. Need to test it more completely.
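For what it’s worth, the same substitution table is easy to express and test outside Moodle. A Python sketch (the character list is my own, not anything shipped with Moodle or BIM):

```python
# Map Word's smart characters, and their "&#..." entity forms,
# to plain ASCII equivalents.
REPLACEMENTS = {
    '\u201c': '"',    # left double smart quote
    '\u201d': '"',    # right double smart quote
    '\u2018': "'",    # left single smart quote
    '\u2019': "'",    # right single smart quote
    '\u2026': '...',  # ellipsis
    '\u2014': '-',    # em dash
    '\u2013': '-',    # en dash
    '&#8217;': "'",   # single quote as an HTML entity
    '&#8211;': '-',   # dash as an HTML entity
}

def clean_post(post):
    """Replace each problem character/entity with its ASCII stand-in."""
    for bad, good in REPLACEMENTS.items():
        post = post.replace(bad, good)
    return post

print(clean_post('It\u2019s \u201cdone\u201d &#8211; mostly\u2026'))
# → It's "done" - mostly...
```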


Aim here will be to write a test harness that attempts to insert all of the posts made by students so far into the Oracle database. The idea/hope is that this should capture all of the problem posts and give some confidence that the above code is catching all of the problems.

The testing code is written and running. At first run it comes up with five blogs that have additional problems.

So, I’ve started doing a character by character examination of the posts to find the “funny” characters. I’m then adding these to the “cleaning” process. (Yes, I know this is kludgy).

By the time I’d “fixed” the second of the five, the subsequent posts were working. So, let’s run the lot again.

Fixed. Had more “special chars” to tweak.

From theory to intervention: Mapping theoretically derived behavioural determinants to behaviour change techniques

The following draws on principles/theory from psychology to guide thinking about how to incorporate “data” from “academic analytics” into an LMS in a way that encourages and enables academic staff to improve their learning and teaching. It’s based on some of the ideas underpinning similar approaches that have been used for students, such as this Moodle dashboard and the signals work at Purdue University.

The following gives some background to this approach, summarises a paper from the psychology literature around behaviour modification and then explains one idea for a “signals” like application for academic staff. Some of this thinking is also informing the “Moodle curriculum mapping” project.

Very interested in pointers to similar work, suggestions for improvement or expressions of interest from folk.


I have a growing interest in how insights from psychology, especially around behaviour change, can inform the design of e-learning and other aspects of the teaching environment at universities in ways that encourage and enable improvement.
Important: I did not say “require”, I said “encourage”. Too much of what passes in universities at the moment takes the “require” approach, with obvious negative consequences.

This is where my current interest in “nudging” – the design of good choice architecture – and behaviour modification is coming from. The basic aim is to redesign the environment within which teaching occurs in a way that encourages and enables improvement in teaching practice, rather than discourages it.

To aid in this work, I’ve been lucky enough to become friends with a psychologist who has some similar interests. We’re currently talking about different possibilities, informed by our different backgrounds. As part of that he’s pointing me to bits of the psychological literature that offer some insight. This is an attempt to summarise/reflect on one such paper (Michie et al, 2008).

Theory to intervention

It appears that the basic aim of the paper is to

  • Develop methods to clarify the list of behaviour change techniques.
  • Identify links between the behaviour change techniques and behavioural determinants.

First a comparison of two attempts at simplifying the key behavioural determinants for change – the following table. My understanding is that there are some values of these determinants that would encourage behaviour change, and others that would not.

Key determinants of behaviour change (from Fishbein et al., 2001; Michie et al., 2004)

  Fishbein et al                  Michie et al
  ------------------------------  ----------------------------------------
  Self-standards                  Social/professional role and identity
  Skills                          Skills
  Self-efficacy                   Beliefs about capabilities
  Anticipated outcomes/attitude   Beliefs about consequences
  Intention                       Motivation and goals
  –                               Memory, attention and decision processes
  Environmental constraints       Environmental context and resources
  Norms                           Social influences
  –                               Action planning

It is interesting to see how well the categories in this table resonate with the limits I was planning to talk about in this presentation. i.e. it really seems to me, at the moment, that much of the environment within universities around teaching and learning is designed in a way that reduces the chance of these determinants leaning towards behaviour change.

Mapping techniques to determinants

They use a group of experts in a consensus process to link behaviour change techniques with determinants of behaviour. The “Their mapping” section below gives a summary of the consensus links. The smaller headings are the determinants of behaviour from the above table; the bullet points are the behaviour change techniques.

Now, I haven’t gone looking for more detail on the techniques. The following is going to be based solely on my assumptions about what those techniques might entail – and hence it will be limited. However, this should be sufficient for the goal of identifying changes in the LMS environment that might encourage change in behaviour around teaching.

First, let’s identify some of the prevalent techniques, i.e. those that are mentioned more than once and which might be useful/easy within teaching.

Prevalent techniques

Social encouragement, pressure and support

The technique “Social processes of encouragement, pressure, support” is linked to 4 of the 11 determinants: social/professional role and identity, beliefs about capabilities, motivation and goals, and social influences. I find this interesting as it can be suggested that most teaching is a lone and invisible act, especially in an LMS, where what’s going on in other courses is largely invisible. Making what happens more visible might enable this sort of social process.

There’s also some potential connection with “Information regarding behaviour of others” which is mentioned in 3 of 11.

Monitoring and self-monitoring

These get mentioned as linked to 4 of the 11 determinants. Again, most LMSs don’t appear to give good overall information about what a teacher is doing in a way that would enable monitoring/self-monitoring.

Related to this is “goal/target specified”, part of monitoring.

There’s more to do here, but let’s get onto a suggestion.

One suggestion

There’s a basic model process embedded here, something along the lines of:

  • Take a knowledge of what is “good” teaching and learning
    For example, Fresen (2007) argues that the level of interaction, facilitation or simply participation by academic staff is a critical success factor for e-learning. There’s a bunch more literature that backs this up, and our own research/analysis has too. Courses with high staff participation show much higher student participation and a clearer correlation between student participation and grade (i.e. the more student participation, the higher the grade).
  • Identify a negative/insight into the behavioural determinants that affect academic staff around this issue.
    There are a couple. First, it’s not uncommon for staff to have an information-distribution conception of teaching, i.e. they see their job as disseminating information, not talking, communicating or participating. Associated with this is that most staff have no idea what other staff are doing within their course sites. They don’t know how often other staff are contributing to the discussion forums or visiting the course site.
  • Draw on a behavioural technique or two to design an intervention in the LMS that can encourage a behaviour change. i.e. that addresses the negative in the determinants.
    In terms of increasing staff participation you might embed into the LMS a graph like the following. Embed it in such a way that it is the first thing an academic sees when they log in – perhaps as part of the home screen.

    Example staff posts feedback

    What this graph shows, for a single (hypothetical) staff member, is the number of replies they have made in course discussion forums for the three courses the staff member has taught. The number of replies is shown per term; in reality it might be shown by week of term, as the term progresses.

    This part can hit the “monitoring”, “self-monitoring” and “feedback” techniques.

    The extra, straight line represents the average number of replies made by staff in all courses in the LMS. Or alternatively, all courses in a program/degree into which the staff member teaches. (Realistically, the average would probably change from term to term).

    This aspect hits the “social processes of encouragement, pressure, support”, “modelling/demonstration behaviour of others”. By showing what other people are doing it is starting to create a social norm. One that might perhaps encourage the academic, if they are below the average, to increase their level of replies.

    But the point is not to stop here. Showing a graph like this is simple using business intelligence tools and is only a small part of the change necessary.

    It’s now necessary to hit techniques such as “graded task, starting with easy tasks”, “Increasing skills: problem-solving, decision-making, goal-setting”, “Planning, implementation”, “Prompts, triggers, cues”. It’s not enough to show that there is a problem, you have to help the academic with how to address the problem.

    In this case, there might be links associated with this graph that show advice on how to increase replies or staff participation (e.g. advice to post a summary of the week’s happenings in a course each week, or some other specific, context appropriate advice). Or it might also provide links to further, more detailed information to shed more light on this problem. For example, it might link to SNAPP to show disconnections.

    But it’s even more than this. If you wanted to hit the “Environmental changes (e.g. objects to facilitate behaviour)” technique you may want to go further than simply showing techniques. You may want this “showing of techniques” to sit within a broader community where people could comment on whether or not a technique worked. It would be useful if the tool helped automate/scaffold the performance of the task, i.e. moved up the abstraction layer from the basic LMS functionality. Or perhaps the tool and associated process could track and create “before and afters”, i.e. when someone tries a technique, store the graph before it is applied and then capture it again some time after.
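To make the idea concrete, here’s a Python sketch of the data behind such a graph. All names and numbers are made up for illustration:

```python
from statistics import mean

# Replies per term for one (hypothetical) staff member, against the
# LMS-wide average across all (equally hypothetical) courses.
my_replies = {'T1': 12, 'T2': 30, 'T3': 8}       # this staff member
all_staff = [12, 30, 8, 45, 22, 17, 60, 5]       # every course in the LMS

lms_average = mean(all_staff)
below = [term for term, n in my_replies.items() if n < lms_average]
print(lms_average, below)
```

The “below average” terms are where the social-norm nudge, and the accompanying advice links, would be most relevant.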

It’s fairly easy to see how the waterfall visualisation (shown below), developed by David Wiley and his group, could be used this way.


Their mapping

Social/professional role and identity

  • Social processes of encouragement, pressure, support


  • Information regarding behaviour by others

Skills

  • Goal/target specified: behaviour or outcome
  • Monitoring
  • Self-monitoring
  • Rewards; incentives (inc. self-evaluation)
  • Graded task, starting with easy tasks
  • Increasing skills: problem-solving, decision-making, goal-setting
  • Rehearsal of relevant skills
  • Modelling/demonstration of behaviour by others
  • Homework
  • Perform behaviour in different settings

Beliefs about capabilities

  • Self-monitoring
  • Graded task, starting with easy tasks
  • Increasing skills: problem-solving, decision-making, goal-setting
  • Coping skills
  • Rehearsal of relevant skills
  • Social processes of encouragement, pressure and support
  • Feedback
  • Self talk
  • Motivational interviewing

Beliefs about consequences

  • Self-monitoring
  • Persuasive communication
  • Information regarding behaviour, outcome
  • Feedback

Motivation and goals

  • Goal/target specified: behaviour or outcome
  • Contract
  • Rewards; incentives (inc. self-evaluation )
  • Graded task, starting with easy tasks.
  • Increasing skills: problem-solving, decision-making, goal-setting
  • Social processes of encouragement, pressure, support
  • Persuasive communication
  • Information regarding behaviour, outcome
  • Motivational interviewing

Memory, attention, decision processes

  • Self-monitoring
  • Planning, implementation
  • Prompts, triggers, cues

Environmental context and resources

  • Environmental changes (e.g. objects to facilitate behaviour)

Social influences

  • Social processes of encouragement, pressure, support
  • Modelling/demonstration of behaviour by others

Emotion

  • Stress management
  • Coping skills

Action planning

  • Goal/target specified: behaviour or outcome
  • Contract
  • Planning, implementation
  • Prompts, triggers, cues
  • Use of imagery


Fresen, J. (2007). “A taxonomy of factors to promote quality web-supported learning.” International Journal on E-Learning 6(3): 351-362.

Michie, S., M. Johnston, et al. (2008). “From theory to intervention: Mapping theoretically derived behavioural determinants to behaviour change techniques.” Applied Psychology: An International Review 57(4): 660-680.

Limits in developing innovative pedagogy with Moodle: The story of BIM

The following is a presentation abstract that I’ve submitted to MoodleMoot AU’2010. It’s based on some ideas and thoughts expressed here previously. In particular, the main focus of the paper will be showing that the reason most university e-learning/teaching is not that great is not really the fault of the academics. Instead it will argue that a range of limits within the higher education environment are mostly to blame.

Regardless of whether it gets accepted at MoodleMoot, I’ll be presenting it at CQU sometime in early July and will post the video etc. here then.


Open source Learning Management Systems (LMS) such as Moodle are widely recognised as addressing a number of the limitations of proprietary, commercial LMS. Just as those commercial LMS addressed some of the limitations of the “Fred-in-the-shed” era of early web-based e-learning. Early web-based e-learning addressed some problems with text-based Internet e-learning, which addressed limitations of text-based computer-mediated communications…and so it goes on. Rather than being “without limits” this presentation will suggest that e-learning with Moodle, as currently practised, has a number of limits and that progress can be made through the recognition, understanding and removal of those limits.

The presentation will argue and illustrate that these limits place significant barriers in the way of encouraging widespread, simple improvements in learning and teaching. Let alone the barriers these limits create for the development of true pedagogical innovation. The presentation will explain how these limits are not solely, or even primarily, due to the characteristics of Moodle. It will outline how the majority, but not all, of these limits arise from the nature and characteristics of the broader social context, institutions, purpose, processes and people involved with e-learning. It will show how a number of these limitations have been known about and generally ignored for decades, to the detriment of the quality of learning and teaching. The presentation will also seek to identify a variety of approaches or ways of thinking that may help transform the practice of e-learning with Moodle into something that truly is “without limits”.

The presentation’s argument, the identified limitations and the potential solutions all arise from and will be illustrated by drawing on the experience of developing BIM (BAM into Moodle). BIM is a Moodle module released in early 2010, currently being used at CQUniversity and under consideration by the University of Canberra. BIM allows teaching staff to manage, mark and comment (privately) on individual student blogs that are hosted on the students’ choice of external blog provider. BIM is based on BAM (Blog Aggregation Management), a similar tool integrated into CQUniversity’s home-grown “LMS”. Since 2006, BAM/BIM has been used in 30+ course offerings, by 70+ staff, and with 3000+ students making 20000+ blog posts.

The design of BAM/BIM is intended to remove an inherent limit that underpins the design of all LMS. That is, as an integrated system, the LMS must provide all functionality. It appears that this is a limit that the design of Moodle 2.0 seems focused on removing. However, this presentation will suggest that this is only one of many, often fundamental, limits surrounding the use of Moodle, and that these limits need to be recognised, understood and addressed. The suggestion will be that it is only by doing this that we can aid in the development of truly innovative pedagogy.

The suffocating straightjackets of liberating ideas

Doing some reading and came across the following quote, which I had to store for further use. It is quoted in Chua (1986) and is ascribed to Berlin (1962, p. 19).

The history of thought and culture is, as Hegel showed with great brilliance, a changing pattern of great liberating ideas which inevitably turn into suffocating straightjackets, and so stimulate their own destruction by new emancipatory, and at the same time, enslaving conceptions. The first step to understanding of men is the bringing to consciousness of the model or models that dominate and penetrate their thought and action. Like all attempts to make men aware of the categories in which they think, it is a difficult and sometimes painful activity, likely to produce deeply disquieting results. The second task is to analyse the model itself, and this commits the analyst to accepting or modifying or rejecting it and in the last case, to providing a more adequate one in its stead.

Chua (1986) uses it as an intro to alternate views of research perspectives. But it applies to so much.

For me the most obvious application, the one I’m dealing with day to day, is the practice of e-learning within universities. Post-thesis, I think I need to figure out how to engage more effectively in this two-stage process. I think the Ps Framework provides one small part of a tool to help this process; need to figure out what needs to be wrapped around it to encourage both steps to happen.


Berlin, R. J. (1962). Does political theory still exist? Philosophy, Politics and Society. P. Laslett and W. G. Runciman, Basil Blackwell: 1-33.

Chua, W. F. (1986). “Radical developments in accounting thought.” The Accounting Review 61(4): 601-632.

First step in “Moodle curriculum mapping”

This is perhaps the first concrete step in a project aiming to look at how the act of curriculum mapping can be embedded into the increasingly common tasks and tools used by academics. That is, how can an LMS (like Moodle) be used/modified to make curriculum mapping a part of what academics do, both in terms of maintaining the mapping and, more importantly, using the mapping in interesting and useful ways.

As outlined in a previous post it appears that Moodle (the institutional LMS at my current institution) already has functions that offer some basic level of support for curriculum mapping. However, they are mostly used/intended for tracking student outcomes/performance. This post documents an initial foray into using these functions to implement some form of curriculum mapping. The plan is:

  • Use existing functions to map a course or two and find out how that works and how it might be made better.
  • Use the data of the mapping to generate some applications that use the data.

Turned out, due to having to fight other fires, that today’s work was limited. Only small progress.

The courses and the people

I’m working with two or three courses: two from in and around public relations and one from psychology. More detail on these later.

The set up

The plan is to perform this project on a copy of Moodle running on my laptop, i.e. separate from any systems people rely upon, which allows me the freedom to code. I’ll be taking backups of the live course sites for the above courses, restoring them on my laptop’s Moodle and mapping the courses.

My first problem was restoring the backups. I had an old version of libxml, which meant the restore process in Moodle wasn’t handling the HTML code all that well. So: a new install of XAMPP and Moodle – some wasted time. Really didn’t like the draconian password policy that is now the default in the version of Moodle I’m using. More passwords to write down. ;)

Getting outcomes up and going

I’d had outcomes working on the old version of Moodle. My next barrier was getting outcomes to appear on the new. It wasn’t happening simply and I was running out of time, so it sat for a bit. Here’s what I’ve done to get it working:

  • As the Moodle administrator, turn on outcomes under “General Settings”
    Typing “outcome” in the Moodle administrator’s block was the quickest way to find it.
  • Create some outcomes
    Either in the Admin box under grades or inside an individual course.
  • Possibly add site wide outcomes to the course.
    Outcomes option in course modify box.

Having completed those tasks, the theory is that every time you edit an activity or resource you will have an option to view and select appropriate outcomes.


An outcome has the following data associated with it:

  • Full and short name.
  • Standard outcome – is it available site wide.
  • Scale – which existing scale to use with the outcome.
  • Description – textual description

Outcomes can be imported via a CSV file. This could be useful: you could create a set of outcomes for a particular discipline in a CSV file and make it available for anyone to use. Folk at other institutions could import the file and have a consistent set of outcomes.

Also, you may not want all discipline outcomes to be available site-wide. It could annoy the mathematicians if they kept seeing outcomes from psychology, etc. Having outcomes as a CSV file would allow them to be imported at the course level. But maybe not…
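To get a feel for creating such a file, here is a minimal sketch that generates an outcomes CSV. It assumes the six-field, semicolon-delimited format with a header line that Moodle’s outcome import expects (outcome_name, outcome_shortname, outcome_description, scale_name, scale_items, scale_description); the outcome and scale values are invented for illustration.

```python
import csv

# The six fields of Moodle's outcomes import format (assumed here;
# check against your Moodle version's sample file).
FIELDS = ["outcome_name", "outcome_shortname", "outcome_description",
          "scale_name", "scale_items", "scale_description"]

# Illustrative outcome only; a real file would hold a discipline's
# full set of outcomes/graduate attributes.
outcomes = [
    {
        "outcome_name": "Participation",
        "outcome_shortname": "participation",
        "outcome_description": "",
        "scale_name": "Participation scale",
        # Commas separate the individual scale items within this field.
        "scale_items": "Little or no participation, "
                       "Satisfactory participation, Full participation",
        "scale_description": "",
    },
]

with open("outcomes.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS, delimiter=";")
    writer.writeheader()
    writer.writerows(outcomes)
```

Because the field separator is the semicolon, the commas inside scale_items don’t need quoting for the file to parse; a whole discipline’s outcomes could be maintained in one such file and shared between institutions.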

Checking when outcomes appear

I’m interested in seeing whether outcomes appear for all activities/resources, so I’m doing a quick test with a couple of courses and reporting where it works. It works for

  • Forums
  • Resource
    • Web page
    • Link to file or web page

Doesn’t work for

  • Labels
    Means of inserting text/HTML into the topics. Used by some to specify readings. Might want to have outcomes on these.
  • Summary


As I was doing the above test, a few thoughts arose:

  • What outcomes would you have for a course synopsis?
    For some resources/activities they are too global, too high level to specify a list of outcomes/attributes etc. What do you do with these?

    Given that one of the aims might be to highlight “coverage”, there are some things you wouldn’t allocate anything to.

  • Why wouldn’t you have outcomes associated with labels?
  • The obvious question which has been bugging me for a while: not all activities/resources for a course are likely to be in the course site. Any curriculum mapping based on the LMS site is not going to be complete, unless there is some change in practice on the part of academics. Not a straightforward thing to do.
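The “coverage” idea above can be sketched with plain data structures: given some mapping of course activities to outcomes, it becomes trivial to report the outcomes nothing has been allocated to. All activity and outcome names below are made up for illustration; in a real implementation the mapping would be pulled from whatever Moodle database tables the outcomes end up in (still to be identified).

```python
# Hypothetical outcomes for a course (names invented for illustration).
outcomes = ["Communication", "Teamwork", "Ethics", "Discipline knowledge"]

# Which outcomes each activity/resource has been mapped to.
# A label currently can't have outcomes attached, hence the empty list.
activity_outcomes = {
    "Week 1 forum": ["Communication", "Teamwork"],
    "Essay": ["Communication", "Discipline knowledge"],
    "Week 3 reading (label)": [],
}

# Any outcome not attached to at least one activity is a coverage gap.
covered = {o for mapped in activity_outcomes.values() for o in mapped}
uncovered = [o for o in outcomes if o not in covered]

print("Outcomes with no coverage:", uncovered)  # -> ['Ethics']
```

A report like this (or its inverse, listing activities with no outcomes) is one of the simple applications that mapped data would make possible, with the obvious caveat above: it can only ever see what is actually in the course site.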

Why is University/LMS e-learning so ugly?

Yesterday, Derek Moore tweeted

University webs require eye candy + brain fare. Puget Sound’s site does both with colour palate & info architecture

In a later tweet he also pointed out that even Puget Sound’s instance of Moodle looked good. I agreed.

This resonated strongly with me because I and a few colleagues have recently been talking about how most e-learning within universities and the LMS is ugly: depressing corporate undesign that seeks to achieve quality through consistency and instead sinks to the lowest common denominator. Sorry, I’m starting to mix two of my bêtes noires:

  1. Most LMS/University e-learning is ugly.
  2. Most of it is based on the assumption that everything must be the same.

Let’s just focus on #1.

I’m using ugly/pretty in the following in the broadest possible sense. Pretty, at its extreme end, is something that resonates positively in the soul as you’re using it effectively to achieve something useful. It helps you achieve the goal, but you feel good while you’re doing it, even when you fail and even without knowing why. There’s a thesis or three in this particular topic alone, so I won’t have captured it fully.

Why might it be ugly? An absence of skill?

Let me be the first to admit that the majority of the e-learning I’ve been responsible for is ugly. This design (used in 100s of course sites) is mostly mine, but has thankfully been improved (as much as possible) by other folk. At best you might call it functional. But it doesn’t excite the eyes or resonate. And sadly, it’s probably all downhill from there as you go further back in history.

Even my most recent contribution – BIM – is ugly. If you wish to inflict more pain on your aesthetic sensibility look at this video. BIM rears its ugly head from about 1 minute 22 seconds in.

In my case, these are ugly because of an absence of skill. I’m not a graphic designer, I don’t have training in visual principles. At best I pick up a bit, mostly from what I steal, and then proceed to destroy those principles through my own ineptitude.

But what about organisations? What about the LMS projects like Moodle?

Why might it be ugly? Trying to be too many things to too many?

An LMS is traditionally intended to be a single, integrated system that provides all the functionality required for institutional e-learning. It is trying to be a jack of all trades. Making something so all-encompassing look good in its entirety is very difficult. For me, part of looking good is responding to the specifics of a situation in an appropriate way.

It’s also not much use being pretty if you don’t do anything. At some level the developers of an LMS have to focus on making it easy to get the LMS to do things, and that will limit the ability to make it look pretty. The complexity of LMS development places limits on making it look pretty.

At some level, the complexity required to implement a system like an LMS also reduces the field of designers who can effectively work with it to improve the design of the system.

But what about organisations adopting the LMS, why don’t they have the people to make it look good?

Why might it be ugly? Politics?

The rise of marketing and the “importance of brand” brings with it the idea of everything looking the same. It brings out the “look and feel” police, those folk responsible for ensuring that all visual representations of the organisation capture the brand in accepted ways.

In many ways this is an even worse example of “trying to be too many things”, as the “brand” must capture a full range of print, online and other media, which can be a bridge too far for many. The complexity kills the ability of the brand to capture and make complete use of the specifics of each medium. Worse, often the “brand police” don’t really understand the media and thus can’t see the benefits of the media that could be used to improve the brand.

The brand and the brand police create inertia around the appearance of e-learning. They help enshrine the ugliness.

Then we get into the realm of politics and irrationality. It is no longer about aesthetic arguments (difficult at the best of times); it becomes about who plays the game the best, who has the best connection to leadership, who has the established inertia, who can spin the best line.

The call to arms

I think there is some significant value in making e-learning look “pretty”. I think there’s some interesting work to be done in testing that claim and finding out how you make LMS and university e-learning “pretty”.

Some questions for you:

  • Is there already, or can we set up, a gallery of “pretty” LMS/institutional e-learning?
    Perhaps something for Moodle (my immediate interest) but other examples would be fun.
  • What bodies of literature can inform this aim?
    Surely some folk have already done stuff in this area.
  • What might be some interesting ways forward i.e. specific projects to get started?

Limits in developing innovative pedagogy with Moodle: The story of BIM

The following is the extended presentation abstract I plan to submit to MoodleMoot AU 2010. The idea was to submit a paper, but time has run out. The recent blog posts (starting with this one) about the story of BIM provide some of the early reflection that will form the basis of the presentation. The challenges mentioned in those posts will be abstracted somewhat to generate a series of limitations.

In part, the approach I am taking with this presentation is to respond to the Pollyannas who complain about me being too negative and never seeing the positives. It has always been my argument that what I do does not ignore the positives; recognising and reusing what has worked forms an important part of my information systems design theory for e-learning. For example, as far back as 1999 I have three publications (1, 2, 3) where recognising and reusing the positives is a key feature. It has been my argument that the Pollyannas are so busy focusing on the positives because they don’t want to recognise and engage with what doesn’t work. It’s a case of “don’t mention the war”, the SNAFU principle, confirmation bias and pattern entrainment, defensive routines, and a lack of willingness to question the practices on which one’s self-esteem is built. For me, it is only through recognising, understanding and addressing the limits that you can encourage innovative learning and teaching. You have to recognise and respond to the context.


Open source Learning Management Systems (LMS) such as Moodle are widely recognised as addressing a number of the limitations of proprietary, commercial LMS, just as those commercial LMS addressed some of the limitations of the “Fred-in-the-shed” era of early web-based e-learning. Early web-based e-learning addressed some problems with text-based Internet e-learning, which addressed limitations of text-based computer-mediated communications… and so it goes on. Rather than being “without limits”, this presentation will suggest that e-learning with Moodle, as currently practised, has a number of limits and that progress can be made through the recognition, understanding and removal of those limits.

The presentation will argue and illustrate that these limits place significant barriers in the way of encouraging widespread, simple improvements in learning and teaching, let alone the development of true pedagogical innovation. The presentation will explain how these limits are not solely, or even primarily, due to the characteristics of Moodle. It will outline how the majority, but not all, of these limits arise from the nature and characteristics of the broader social context, institutions, purpose, processes and people involved with e-learning. It will show how a number of these limitations have been known about, and generally ignored, for decades, to the detriment of the quality of learning and teaching. The presentation will also seek to identify a variety of approaches or ways of thinking that may help transform the practice of e-learning with Moodle into something that truly is “without limits”.

The presentation’s argument, the identified limitations and the potential solutions all arise from, and will be illustrated by, the experience of developing BIM (BAM into Moodle). BIM is a Moodle module released in early 2010, currently being used at CQUniversity and under consideration by the University of Canberra. BIM allows teaching staff to manage, mark and comment (privately) on individual student blogs hosted on the students’ choice of external blog provider. BIM is based on BAM (Blog Aggregation Management), a similar tool integrated into CQUniversity’s home-grown “LMS”. Since 2006, BAM/BIM has been used in 30+ course offerings, by 70+ staff, and with 3000+ students making 20000+ blog posts.

The design of BAM/BIM is intended to remove an inherent limit that underpins the design of all LMS: as an integrated system, the LMS must provide all functionality. This appears to be a limit that the design of Moodle 2.0 is focused on removing. However, this presentation will suggest that it is only one of many, often fundamental, limits surrounding the use of Moodle, and that these limits need to be recognised, understood and addressed. The suggestion will be that only by doing this can we aid the development of truly innovative pedagogy.

Research Method – Overview

The following is the first part of chapter 3 of my thesis. The aim of this part is to explain the broad view of research that informs the work; the second part will give more details about the specific method used. Over the next week I’m re-reading this chapter; when the fixes are done, I will upload a completed version.

Update: The latest version of the complete chapter is available from this page


This thesis aims to answer the “how” question associated with the design, development and evolution of information systems to support e-learning in universities. It seeks to achieve this by using an iterative action research process (Cole, Purao et al. 2005) to formulate an information systems design theory (ISDT) (Walls, Widmeyer et al. 1992; Walls, Widmeyer et al. 2004; Gregor and Jones 2007). This chapter aims to situate, explain and justify the nature of the research method adopted in this work. It starts by examining the question of research paradigm and its connection with theory (Section 3.2). In particular, it seeks to explain why the choice of paradigm is seen as secondary to deciding the type of theory to be produced, in terms of selecting a research method. The chapter then uses four questions about a body of knowledge identified by Gregor (2006) to describe the particular perspectives that inform the research method to formulate the ISDT developed in this thesis (Section 3.3).

The formulation of an ISDT is one example of design research (Simon 1996; Hevner, March et al. 2004). At the start of this work, design research was not a dominant research methodology within the field of information systems (Lee 2000). There was a reluctance to accept the importance of this type of knowledge within information systems (Gregor 2002) and to this day there remain diverse opinions and on-going evolutionary understanding about the nature, place and process associated with design research and design theory (Baskerville 2008; Kuechler and Vaishnavi 2008). Consequently, the thinking underlying this thesis, and the content and structure of this chapter, have undergone a number of iterations as understanding has improved throughout the entire research process. For example, initial descriptions of this work (Jones and Gregor 2004; Jones and Gregor 2006) used the structure of an ISDT presented by Walls, Widmeyer and El Sawy (1992). This thesis uses the improved specification of an ISDT presented by Gregor and Jones (2007), an improvement that arose, in part, from work associated with this thesis. For these reasons, this chapter may delve into greater detail about these issues than is traditional.

Paradigms and theory

It seems traditional at this point to describe the type of research paradigm that has informed the research for this work based on the assumption that the paradigm embodies a world view that has provided the fundamental assumptions to guide this research project and its selection of method. This section takes a slightly different approach.

This section argues that the question of research paradigm is of secondary importance to matching the research question to the type of theory that best fits and, subsequently, to the most appropriate research methodology or paradigm. It argues that the aim of research is the generation and evaluation of knowledge (Section 3.2.1) and that this knowledge is typically expressed as different types of theory (Section 3.2.2). Lastly, the section seeks to connect this view with similar views of research paradigms (Section 3.2.3).

What is research?

The sixth edition of the OECD’s (2002) Frascati Manual defines research and experimental development as

creative work undertaken on a systematic basis in order to increase the stock of knowledge, including knowledge of man, culture and society, and the use of this stock of knowledge to devise new applications

Vaishnavi and Kuechler (2004) define research as “an activity that contributes to the understanding of a phenomenon”. Research, in its most conceptual sense, is nothing more than the search for understanding (Hirschheim 1992). Research is systematic, self-critical enquiry founded in a stable, systematic and sustained curiosity and a desire to understand, subjected to public criticism and, where appropriate, empirical tests (Stenhouse 1981).

Based on these perspectives, it appears that a major aim of research is to generate and evaluate knowledge. Various perspectives exist on the nature of that knowledge, its purpose, validity, novelty, utility, etc. Returning to the OECD (2002), they define research and development as covering three activities:

  1. basic research;
    Experimental or theoretical work, without practical application in view, that aims to acquire new knowledge of the foundations of phenomena and observable facts.
  2. applied research; and
    Original investigation aimed at acquiring new knowledge primarily for a specific practical aim or objective.
  3. experimental development.
    Systematic work based on existing knowledge that is directed towards producing new, or improving existing, processes, systems or services.

Even with these differences, a major aim of research appears to be to make a contribution to knowledge. If this is the case, then how is that knowledge represented? In creating and validating knowledge, scientists rely on the clear and succinct statement of theory, theory that embodies statements of the knowledge that has been developed (Venable 2006). Developing theory is what separates academic researchers from practitioners and consultants (Gregor 2006).

The role of theory and method

If an aim of research is to make a contribution to knowledge, should theory be used to represent that knowledge? Theory should be a primary output of research (Venable 2006). Theory development is a central activity in organisational research (Eisenhardt 1989). There is value in theory because it is practical. The practicality of good theory arises because it advances knowledge in a scientific discipline and guides research towards crucial questions (van de Ven 1989). Theories are practical as they enable knowledge to be accumulated in a systematic manner and this knowledge to be used to inform practice (Gregor 2006).

While there is recognition of the importance of theory, there remain questions about what it is. There has been a long-running search for the meaning of “theory” (Baskerville 2008). DiMaggio (1995) identifies at least three views of what theory should be and suggests that each has some validity and limitations. There is and has been disagreement about whether a model and a theory are different, whether or not a typology is a theory, and other questions about theory (Sutton and Staw 1995). Many researchers within information systems use the word theory but fail to give any explicit definition (Gregor 2006). This lack of consensus about what theory is may explain why it is difficult to develop strong theory in the behavioural sciences (Sutton and Staw 1995).

Types of theory

Part of the confusion around theory has been around its purpose and whether or not there are different types of theory. Within the information systems field there have been several different approaches to identifying types of theory. Iivari (1983) described three levels of theorising: conceptual, descriptive and prescriptive. A number of authors (Nunamaker, Chen et al. 1991; Walls, Widmeyer et al. 1992; Kuechler and Vaishnavi 2008) have used the distinction between kernel and design theories. Taking a broad view of theory, Gregor (2006) identified five inter-related categories of theory based on the primary type of question at the foundation of a research project. These five categories and their questions of interest are summarized in Table 3.1.

Table 3.1 – Gregor’s Taxonomy of Theory Types in Information Systems Research (adapted from Gregor 2006)
  • I. Analysis – Says “what is”. The theory does not extend beyond analysis and description. No causal relationships among phenomena are specified and no predictions are made.
  • II. Explanation – Says “what is”, “how”, “why”, “when”, “where”. The theory provides explanations but does not aim to predict with any precision. There are no testable propositions.
  • III. Prediction – Says “what is” and “what will be”. The theory provides predictions and has testable propositions but does not have well-developed justificatory causal explanations.
  • IV. Explanation and prediction (EP) – Says “what is”, “how”, “why”, “when”, “where” and “what will be”. Provides predictions and has both testable propositions and causal explanations.
  • V. Design and action – Says “how to do something”. The theory gives explicit prescriptions (e.g., methods, techniques, principles of form and function) for constructing an artifact.

The taxonomy presented in Table 3.1 is based on little prior work and there exist opportunities for further work and improvement (Gregor 2006). There also remains some disagreement about the designation of design theory as Theory type V (Venable 2006). However, it does seem to provide a foundation on which to build sound, cumulative, integrated and practical bodies of theory within the information systems discipline (Gregor 2006).

Relationship between theory and method

Gregor (2006) suggests that research begins with a problem to be solved or a question of interest. The type of theory to be developed or tested depends on the nature of this problem and the questions the research wishes to address (Gregor 2006). This connection is made on the basis of the primary goals of theory (Gregor 2006). Assuming this image of the research process, it seems logical that the next step is selecting the research methods or paradigms most appropriate to develop or test the selected theory type. This is not to suggest that there is a one-to-one correspondence between a particular theory type and a particular method or paradigm. Gregor (2006) argues that none of the theory types necessitate a specific method; however, proponents of specific paradigms do favour certain types of theory over others. While there is no necessary correspondence between theory types and methods/paradigms, it is suggested that certain methods/paradigms are better suited to certain types of theory, research problems and researchers.

Recognising different types of theory makes it possible to see the differences as complementary and consequently enables integration into a larger whole (Gregor 2006). It is possible for research to make a contribution to more than one type of theory. Baskerville (2008) argues that there is clearly more to design research than design theory alone. Kuechler and Vaishnavi (2008) show how a design research project can contribute to both design theory (Gregor’s Type V) and kernel theory (Gregor’s other types). The possibility for a research project to make contributions to different types of theory suggests that a research project may draw upon several different methods or paradigms.

The role of research paradigms

Having briefly summarised the perspective on research, theory and method in previous sections, this section makes some connections between this perspective and the views on research paradigms expressed by Mingers (2001) and the pragmatic view of science/paradigm (Goles and Hirschheim 2000).

Research methodology attempts to approximate a compatible collection of assumptions and goals which underlie methods, the actual methods, and the way the results of performing those methods are interpreted and evaluated (Reich 1995). The assumptions or beliefs about the world, how it works and how it may be understood have been termed a paradigm (Kuhn 1996; Guba 1999). Numerous authors have sought to identify and describe different research paradigms. Lincoln and Guba (2000) identify five major paradigms: positivism, postpositivism, critical theory, constructivism and participatory action. Within the information systems discipline, Orlikowski and Baroudi (1991) identify three broad research paradigms: positivist, interpretive and critical. In connection with the rise of design research, numerous authors (Nunamaker, Chen et al. 1991; March and Smith 1995; Hevner, March et al. 2004) have suggested that it is possible to identify two broad research paradigms within information systems: descriptive and prescriptive, where descriptive research is traditional research and prescriptive research is design research. There are some who take issue with seeing design research as a separate paradigm (McKay and Marshall 2007).

Just as there are differing views on the number and labels of different research paradigms, there are differences in how to describe them. Guba and Lincoln (1994) describe the beliefs encompassed by a paradigm through three interconnected questions: ontology, epistemology and methodology. Mingers (2001) describes a paradigm as a general set of philosophical assumptions covering ontology, epistemology, ethics or axiology, and methodology. Goles and Hirschheim (2000) use ontology, epistemology and axiology.

Mingers (2001) describes three perspectives on paradigms. These are:

  • isolationism;
    Views paradigms as based on contradictory assumptions which makes them mutually exclusive and consequently a researcher should follow a single paradigm.
  • complementarist; and
    Paradigms are seen as more or less suited to particular problems and selection is based on a process of choice.
  • multi-method.
    Paradigms are seen to focus on different aspects of reality and can be combined to provide a richer understanding of the problem.

Mingers’ (2001) multi-method perspective seems to fit well with a research project seeking to address a research problem by making contributions to different types of theory (as described in Section 3.2.2). Such a perspective suggests that the question of whether a researcher is positivist, interpretivist or critical is of secondary importance to the question of fit between problem, theories and methods.

Such a perspective has connections with the pragmatist perspective of research described by Goles and Hirschheim (2000). Pragmatists consider the research question more important than the method used or the worldview meant to underpin the method (Tashakkori and Teddlie 1998). Table 3.2 compares four important paradigms, including pragmatism. It has been suggested that pragmatism draws on a philosophical basis of pluralism to undercut the traditional dichotomous battle between conflicting paradigms (Goles and Hirschheim 2000). It facilitates the construction of connections and interplay between conflicting paradigms (Wicks and Freeman 1998).

If a paradigm must be chosen, then pragmatism seems the best fit. This research puts the question of “how to design and support an information system for e-learning within universities” as its focus. The type(s) of theories, the methods to be used and their appropriateness should flow from and align with that question. The following section provides an explanation of this alignment and describes the choices made for this work.

Table 3.2 – Comparisons for four important paradigms used in the social and behavioural sciences (adapted from Tashakkori and Teddlie 1998)
  • Methods – Positivism: quantitative; Postpositivism: primarily quantitative; Pragmatism: quantitative + qualitative; Constructivism: qualitative.
  • Logic – Positivism: deductive; Postpositivism: primarily deductive; Pragmatism: deductive + inductive; Constructivism: inductive.
  • Epistemology – Positivism: objective point of view, knower and known are a dualism; Postpositivism: modified dualism, findings probably objectively “true”; Pragmatism: both objective and subjective points of view; Constructivism: subjective point of view, knower and known are inseparable.
  • Axiology – Positivism: inquiry is value-free; Postpositivism: inquiry involves values, but they may be controlled; Pragmatism: values play a large role in interpreting results; Constructivism: inquiry is value-bound.
  • Ontology – Positivism: naive realism; Postpositivism: critical or transcendental realism; Pragmatism: accept external reality, choose explanations that best produce desired outcomes; Constructivism: relativism.
  • Causal linkages – Positivism: real causes temporally precedent to or simultaneous with effects; Postpositivism: there are some lawful, reasonably stable relationships among social phenomena, known imperfectly, with causes identifiable in a probabilistic sense that changes over time; Pragmatism: there may be causal relationships, but we will never be able to pin them down; Constructivism: all entities simultaneously shaping each other, making it impossible to distinguish causes from effects.


Baskerville, R. (2008). "What design science is not." European Journal of Information Systems 17(5): 441-443.

Cole, R., S. Purao, et al. (2005). Being proactive: Where action research meets design research. Twenty-Sixth International Conference on Information Systems: 325-336.

DiMaggio, P. (1995). "Comments on "What theory is not"." Administrative Science Quarterly 40(3): 391-397.

Eisenhardt, K. (1989). "Building theories from case study research." The Academy of Management Review 14(4): 532-550.

Goles, T. and R. Hirschheim (2000). "The paradigm is dead, the paradigm is dead…long live the paradigm: the legacy of Burrell and Morgan." Omega 28: 249-268.

Gregor, S. (2002). "Design Theory in Information Systems." Australian Journal of Information Systems: 14-22.

Gregor, S. (2006). "The nature of theory in information systems." MIS Quarterly 30(3): 611-642.

Gregor, S. and D. Jones (2007). "The anatomy of a design theory." Journal of the Association for Information Systems 8(5): 312-335.

Hevner, A., S. March, et al. (2004). "Design science in information systems research." MIS Quarterly 28(1): 75-105.

Hirschheim, R. A. (1992). Information Systems Epistemology: An Historical Perspective. Information Systems Research: Issues, Methods and Practical Guidelines. R. Galliers. London, Blackwell Scientific Publications: 28-60.

Iivari, J. (1983). Contributions to the theoretical foundations of systemeering research and the Picoco model. Oulu, Finland, Institute of Data Processing Science, University of Oulu.

Jones, D. and S. Gregor (2004). An information systems design theory for e-learning. Managing New Wave Information Systems: Enterprise, Government and Society, Proceedings of the 15th Australasian Conference on Information Systems, Hobart, Tasmania.

Jones, D. and S. Gregor (2006). The formulation of an Information Systems Design Theory for E-Learning. First International Conference on Design Science Research in Information Systems and Technology, Claremont, CA.

Kuechler, B. and V. Vaishnavi (2008). "On theory development in design science research: anatomy of a research project." European Journal of Information Systems 17(5): 489-504.

Kuhn, T. S. (1996). The Structure of Scientific Revolutions. Chicago, University of Chicago Press.

Lee, A. S. (2000). "Irreducibly Sociological Dimensions in Research and Publishing." MIS Quarterly 24(4): v-vii.

March, S. T. and G. F. Smith (1995). "Design and Natural Science Research on Information Technology." Decision Support Systems 15: 251-266.

McKay, J. and P. Marshall (2007). Science, Design and Design Science: Seeking Clarity to Move Design Science Research Forward in Information Systems. 18th Australasian Conference on Information Systems, Toowoomba.

Mingers, J. (2001). "Combining IS Research Methods: Towards a Pluralist Methodology." Information Systems Research 12(3): 240-259.

Nunamaker, J. F., M. Chen, et al. (1991). "Systems development in information systems research." Journal of Management Information Systems 7(3): 89-106.

OECD (2002). Frascati Manual: Proposed standard practice for surveys on research and experimental development. Paris, France, Organisation for Economic Co-operation and Development: 254.

Orlikowski, W. and J. Baroudi (1991). "Studying information technology in organizations: Research approaches and assumptions." Information Systems Research 2(1): 1-28.

Reich, Y. (1995). "The study of design research methodology." Transactions of the ASME.

Simon, H. (1996). The sciences of the artificial, MIT Press.

Stenhouse, L. (1981). "What counts as research?" British Journal of Educational Studies 29(2): 103-114.

Sutton, R. and B. Staw (1995). "What theory is not." Administrative Science Quarterly 40(3): 371-384.

Tashakkori, A. and C. Teddlie (1998). Mixed methodology: combining qualitative and quantitative approaches. Thousand Oaks, California, SAGE.

Vaishnavi, V. and B. Kuechleer. (2004, 18 January 2006). "Design Research in Information Systems."   Retrieved 20 April 2004, 2004, from

van de Ven, A. (1989). "Nothing is quite so practical as a good theory." The Academy of Management Review 14(4): 486-489.

Venable, J. (2006). The role of theory and theorising in design science research. First International Conference on Design Science Research in Information Systems and Technology, Claremont, CA.

Walls, J., G. Widmeyer, et al. (2004). "Assessing information system design theory in perspective: How useful was our 1992 initial rendition." Journal of Information Technology, Theory and Application 6(2): 43-58.

Walls, J., G. Widmeyer, et al. (1992). "Building an Information System Design Theory for Vigilant EIS." Information Systems Research 3(1): 36-58.

Wicks, A. and R. E. Freeman (1998). "Organization studies and the new pragmatism: Positivism, anti-positivism and the search for ethics." Organization Science 9(2): 123-140.

Embedding behaviour modification – paper summary

A growing interest of mine is investigating how the design of the environments and information systems that support university learning and teaching can be improved by giving greater consideration to factors that help encourage improvement and change. That is, not just building systems that do a task (e.g. manage a discussion forum), but designing a discussion forum that encourages and enables an academic to adopt strategies and tactics that are known to be good. If they choose to.

One aspect of the thinking around this is the idea of behaviour modification. The assumption is that, to some extent, improving the teaching of academics is about changing their behaviour. The following is a summary of a paper (Nawyn et al, 2006) available here.

The abstract

Ubiquitous computing technologies create new opportunities for preventive healthcare researchers to deploy behavior modification strategies outside of clinical settings. In this paper, we describe how strategies for motivating behavior change might be embedded within usage patterns of a typical electronic device. This interaction model differs substantially from prior approaches to behavioral modification such as CD-ROMs: sensor-enabled technology can drive interventions that are timelier, tailored, subtle, and even fun. To explore these ideas, we developed a prototype system named ViTo. On one level, ViTo functions as a universal remote control for a home entertainment system. The interface of this device, however, is designed in such a way that it may unobtrusively promote a reduction in the user’s television viewing while encouraging an increase in the frequency and quantity of non-sedentary activities. The design of ViTo demonstrates how a variety of behavioral science strategies for motivating behavior change can be carefully woven into the operation of a common consumer electronic device. Results of an exploratory evaluation of a single participant using the system in an instrumented home facility are presented


Tells how a PDA plus additional technology was used to embed behaviour modification strategies aimed at decreasing the amount of television watching. Describes a successful test with a single person.

Has some links/references to strategies and research giving principles for how to guide this type of design.


Sets the scene. Too many Americans watch too much TV, are overweight and don’t get enough exercise. Reducing TV watching should improve health, if it is replaced with activities that aren’t sedentary. But this is difficult because TV watching is addictive, while exercise is seen to have high costs and the initial experience is not so good.

The idea is that “successful behavior modification depends on delivery of motivational strategies at the precise place and time the behavior occurs”, and that “sensor-enabled mobile computing technologies” can help achieve this. This work aims to:

  • use technology to disrupt the stimulus-reward cycle of TV watching;
  • decrease the costs of physical activity.

Technology-enabled behavioral modification strategies

Prior work has included knowledge campaigns and clinical interventions – the two most common approaches. Technology used to reduce television viewing has usually taken the form of gatekeeper devices that limit access – not likely to be used by adults on themselves. There are also exercise-contingent TV activation systems.

More work has aimed at increasing physical activity independent of television. Approaches used include measuring activity and providing open-loop feedback, i.e. simple, non-intrusive aids to increase activity. More interactive, just-in-time feedback may help short-term motivation – e.g. video games. There are also technology interventions that mimic a human trainer.

For those not already exercising, small increases in physical activity may be better than intense regimens.

The opportunity: just-in-time interactions

The technological intervention is based on the view that people respond best to information that is timely, tailored to their situation, often subtle, and easy to process. This intervention uses a PDA intended to replace the television remote control, adding a graphical interface, built-in program listings, access to a media library, integrated activity management, and interactive games.

It tries to determine the goals of the user and suggest alternatives to watching TV in a timely manner. With the addition of wearable acceleration sensors it can also function as a personal trainer.


The aim is to provide a user experience rewarding enough to be used over time.

Grabbing attention without grabbing time

Prior work on behavior change interventions reveals them to be:

  • resource-intensive, requiring extensive support staff;
  • time-intensive, requiring the user to stop everyday activity to focus on relevant tasks.

This is why the remote is seen as a perfect device: it’s part of the normal experience and doesn’t require separate time to use.

Sustaining the interaction over time

Behavior change needs to be sustained over years to have a meaningful impact.
Extended use of a device runs the risk of annoyance, so paternalistic or authoritarian strategies were avoided. The focus is instead on strategies that promote intrinsic motivation and self-reflection. Elements of fun, reward and novelty are used to induce positive affect rather than feelings of guilt.

Avoiding the pitfall of coercion

There is a temptation to use coercion for motivation, but the likelihood that users will tolerate coercive devices for long is questionable.

Avoiding reliance on extrinsic justification

The optimal outcome of any behavioural intervention is change that persists. Heavy reliance on extrinsic justification – rewards or incentives – may result in a dependency that hurts persistence once the rewards are removed. There are also problems if the undesirable behaviour – watching TV – becomes the reward for exercise.

Case study

A low-cost remote was produced from consumer hardware, with a laptop provided to manage the media library. The GUI uses finger input.

Provides puzzles that use the TV for display and physical activity for input.

Behavior modification strategies

Most are derived from basic research on learning and decision-making (suggestibility, goal-setting and operant conditioning). Examples include:

  • value integration – having persuasive strategies embedded within an application that otherwise provides value to the user increases the likelihood of adoption.
  • reduction – reducing the complexity of a task increases the likelihood that it will be performed.
  • convenience – embedding within something used regularly, increases opportunities for delivery of behaviour change strategies.
  • ease of use – easier to use = more likely to be adopted over long term.
  • intrinsic motivation – incorporating elements of challenge, curiosity and control into an activity can sustain interest.
  • suggestion – you can bias people toward a course of action through even very subtle prompts and cues.
  • encouraging incompatible behaviour – encouraging behaviour that cannot be done at the same time as the undesirable behaviour can be effective.
  • disrupting habitual behaviour – bad habits can be eliminated when the conditions that create them are removed or avoided.
  • goal setting – concrete, achievable goals promote behaviour change by orienting the individual toward a definable outcome.
  • self-monitoring – motivated people can be more effective when able to evaluate progress toward outcome goals.
  • proximal feedback – feedback that occurs during or immediately after an activity has the greatest impact on behaviour change.
  • operant conditioning – increase frequency of desirable behaviour by pairing with rewarding stimuli.
  • shaping – transform an existing behaviour into more desirable one by rewarding successive approximations of the end goal.
  • consistency – draw on the desire of people to have a consistency between what they say and do to help them adhere to stated goals.
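A few of the strategies above (goal setting, self-monitoring, suggestion, encouraging incompatible behaviour, avoiding coercion) lend themselves to simple rules inside a device’s event loop. The following is a minimal sketch of that idea, not code from the paper; all names, thresholds and messages are illustrative assumptions.

```python
# Hypothetical sketch (not from Nawyn et al): a few behaviour modification
# strategies encoded as rules a ViTo-style remote might evaluate each time
# the user interacts with it.
from dataclasses import dataclass

@dataclass
class ViewerState:
    minutes_watched_today: int = 0
    steps_today: int = 0          # e.g. from a wearable acceleration sensor
    daily_tv_goal: int = 60       # goal setting: a concrete, achievable target
    daily_step_goal: int = 5000

def suggestions(state: ViewerState) -> list[str]:
    """Return subtle, timely prompts (suggestion + proximal feedback)."""
    msgs = []
    if state.minutes_watched_today > state.daily_tv_goal:
        # self-monitoring: report progress rather than scold (no coercion)
        msgs.append(f"You've watched {state.minutes_watched_today} of your "
                    f"{state.daily_tv_goal} minute TV goal today.")
    if state.steps_today < state.daily_step_goal:
        # encouraging incompatible behaviour: propose a non-sedentary activity
        msgs.append("A short walk now would move you closer to "
                    f"{state.daily_step_goal} steps.")
    return msgs

state = ViewerState(minutes_watched_today=75, steps_today=3200)
for msg in suggestions(state):
    print(msg)
```

The point of the sketch is the design stance: prompts are generated at the moment of interaction (proximal feedback), framed as information rather than restriction, and disappear entirely once the goals are met.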

Exploratory evaluation

Used with a single (real life) person to find out what happens.

Done in a specially instrumented apartment, with three phases: a baseline with a normal remote, 12 days at home, and 7 days in the lab with the special remote. The participant was not told that this was aimed at changing behaviour around watching TV and physical activity.


Television watching was reduced from 133 minutes a day during baseline to 41 minutes a day during the intervention – a reduction of roughly 69%.

Evaluation against the adopted strategies was positive.


The substantial improvement is important. Strategies could be phased in over time: they are initially seen as novel, and this curiosity can be used. Not all users will react well.


Nawyn, J., S. Intille, et al. (2006). Embedding behavior modification strategies into a consumer electronic device: A case study. 8th International Conference on Ubiquitous Computing: 297-314.