Leadership as defining what’s successful

After spending a few days visiting friends and family in Central Queensland – not to mention enjoying the beach – a long 7+ hour drive home provided an opportunity for some thinking. I’ve long had significant qualms about the notion of leadership, especially as it is increasingly being understood and defined by the current corporatisation of universities and schools. The rhetoric is increasingly strong amongst schools, with the current fashion of assuming that Principals can be the saviours of schools that have broken free from the evils of bureaucracy. I even work within an institution where a leadership research group is quite active amongst the education faculty.

On the whole, my experience of leadership in organisations has been negative. At best, the institution bumbles along despite bad leadership. I’m wondering whether questioning this notion of leadership might form an interesting future research agenda. The following is an attempt to make concrete some thinking from the drive home, spark some comments, and set me up for some more (re-)reading. It’s an ill-informed mind dump sparked somewhat by some early experiences on return from leave.

Fisherman’s beach by David T Jones, on Flickr

In the current complex organisational environment, I’m thinking that “leadership” is essentially the power to define what success is, both prior to and after the fact. I wonder whether any apparent success attributed to the “great leader” is solely down to how they have defined success? I’m also wondering how much of that success is due to less than ethical or logical definitions of success?

The definition of success prior to the fact is embodied in the model of process currently assumed by leaders, i.e. teleological processes. The great leader must define some ideal future state (e.g. adoption of Moodle, Peoplesoft, or some other system; an organisational restructure that creates “one university”; or, perhaps even worse, a new 5-year strategic plan) behind which the weight of the institution will then be thrown. All roads and work must lead to the defined point of success.

This is Dave Snowden’s idea of giving up the evolutionary potential of the present for the promise of some ideal future state. A point he’ll often illustrate with this quote from Seneca:

The greatest loss of time is delay and expectation, which depend upon the future. We let go the present, which we have in our power, and look forward to that which depends upon chance, and so relinquish a certainty for an uncertainty.

Snowden’s use of this quote comes from the observation that some systems/situations are examples of Complex Adaptive Systems (CAS). These are systems where traditional expectations of cause and effect don’t hold. When you intervene in such systems you cannot predict what will happen, only observe it in retrospect. In such systems the idea that you can specify up front where you want to go is little more than wishful thinking. So defining success in these systems prior to the fact is a little silly. This questions the assumptions of such leadership, including the assumption that leaders can make a difference.

So when the Executive Dean of a Faculty – that includes programs in information technology and information systems – is awarded “ICT Educator of the Year” for the state because of the huge growth in student numbers, is it because of the changes he’s made? Or is it because he was lucky enough to be in power at (or just after) the peak of the IT boom? The assumption is that this leader (or perhaps his predecessor) made logical contributions and changes to the organisation to achieve this boom in student numbers. Or perhaps they made changes simply to enable the organisation to be better placed to handle and respond to the explosion in demand created by external changes.

But perhaps, rather than this single reason for success (great leadership), there were instead simply a large number of small factors – with no central driving intelligence or purpose – that enabled this particular institution to achieve what it achieved. Similarly, when a few years later the same group of IT related programs had few if any students, it wasn’t because this “ICT Educator of the Year” had failed. Nor was it because of any other single factor, but instead hundreds and thousands of small factors, both internal and external (some larger than others).

The idea that there can be a single cause (or a single leader) for anything in a complex organisational environment seems to be faulty. But because it is demanded of them, leaders must spend more time attempting to define and convince people of their success. In essence, then, successful leadership becomes more about your ability to define success and to promulgate widespread acceptance of that definition.

KPIs and accountability galloping to help

This need to define and promulgate success is aided considerably by simple numeric measures. The number of student applications; DFW rates; numeric responses on student evaluation of courses – did you get 4.3?; journal impact factors and article citation metrics; and many, many more. These simple figures make it easy for leaders to define specific perspectives on success. This is problematic and its many problems are well known. For example,

  • Goodhart’s law – “When a measure becomes a target, it ceases to be a good measure.”
  • Campbell’s law – “The more any quantitative social indicator (or even some qualitative indicator) is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor.”
  • the Lucas critique.
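A toy sketch of Goodhart’s law at work, with entirely made-up numbers and a hypothetical measure: once “posts per forum” becomes a target, the number can be driven up without any change in the thing it was meant to indicate.

```python
# Toy illustration of Goodhart's law with entirely made-up numbers.
# "Posts per course forum" starts as an indicator of engagement,
# then becomes a target that can be met without any engagement.

def forum_posts(engagement, gaming_posts=0):
    """Observed posts = genuine posts driven by engagement, plus
    posts made purely to satisfy the target."""
    return engagement * 10 + gaming_posts

# Before: the measure tracks engagement.
low, high = forum_posts(engagement=1), forum_posts(engagement=5)
assert low < high  # more engagement, more posts

# After the measure becomes a target: same low engagement, but the
# target of 50 posts is met by posting for posting's sake.
gamed = forum_posts(engagement=1, gaming_posts=40)
assert gamed == high  # the measure can no longer tell the two apart
```

Once the target can be gamed, the number says nothing about the engagement it was introduced to measure.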

For example, you have the problem identified by Tutty et al (2008) where, rather than improve teaching, institutional quality measures “actually encourage inferior teaching approaches” (p. 182). It’s why you have the LMS migration project receiving an institutional award for quality even though, for the first few weeks of the first semester, it was largely unavailable to students due to dumb technical decisions by the project team and required a large additional investment in consultants to fix.

Would this project have received the award if a senior leader in the institution (and the institution itself) weren’t heavily reliant upon the project being seen as a success?

Would the people involved in giving the project the award have reasonable reasons for thinking it award winning? Is success of the project and of leadership all about who defines what perspective is important?

Some other quick questions

Some questions for me to consider.

  • Where does this perspective sit within the plethora of literature on leadership and organisational studies? Especially within the education literature? How much of this is influenced by earlier reading of “Managing without Leadership: Towards a Theory of Organizational Functioning”?
  • Given the limited likelihood of changing how leadership is practiced within the current organisational and societal context, how do you act upon any insights this perspective might provide? i.e. how the hell do I live (and heaven forbid thrive) in such a context?


Tutty, J., Sheard, J., & Avram, C. (2008). Teaching in the current higher education environment: perceptions of IT academics. Computer Science Education, 18(3), 171–185.

Learning analytics and complexity

Another day and another #ascilite12 paper to think about. This is the one where my periodic engagement is potentially driving my co-author slightly crazy. I’m sure this contribution will add further to that.

The idea

The basic idea of the paper is to

  1. Draw on a few more insights/patterns from the data gathered as part of the Indicators project.
    This includes usage data from a single Australian university from 3 different “Learning Management Systems” over a 10+ year period.
  2. Use the lens of complex adaptive systems to
    1. Identify potential problems with existing trends in the application of learning analytics.
    2. Identify some potential alternative applications of learning analytics.

The idea builds on some of Col’s earlier thinking in this area and is likely to inform some of the next steps we take in this area.

Potential problems

The problems that we seem to have identified so far are

  1. The hidden complexity behind the simple patterns.
    1. Abstraction losing detail.
    2. Decomposition preventing action.
  2. It’s not a causal system
    1. Correlation, causation confusion.
    2. Overweening conceit of causality.

There must be more of these problems. I do wonder if a closer reading of some of the CAS literature would provide more insights.

For each of these problems we’re hoping to

  • Illustrate the nature of the problem with evidence from the data.
  • Offer insight into why this is a problem from Complex Adaptive Systems theory.
    Morrison (2006) gives a good overview of CAS, some of its applications to education and some limitations.
  • Suggest a better approach based on the insights from CAS.

Hidden complexity – abstraction losing detail

This in part picks up and illustrates part of the message Gardner Campbell made in his presentation “Here I stand” as part of LAK’12, i.e. that the reliance of learning analytics on abstracting patterns or relationships from data has a tendency to hide the complexity of reality. Especially when used for decision making by people who are not directly engaged in that reality.

Col has illustrated this in this post using the traditional relationship between LMS use and grades (more use == better grades). The nice trend gets interrupted when you start looking at the complexity behind it. For example, one student who achieved a HD in every course had widely varying numbers of posts/replies in different courses. This is similar to Ken’s discoveries when looking at his teaching: the same academic in a few different courses having widely varying practices.
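The sort of hidden variance Col describes can be sketched with entirely hypothetical numbers: the group averages show the tidy “more use == better grades” trend, while the spread within the HD group tells a much messier story.

```python
from statistics import mean, stdev

# Entirely hypothetical forum-post counts per course, grouped by grade.
posts_by_grade = {
    "F":  [0, 2, 1, 3],
    "P":  [4, 6, 5, 8],
    "D":  [9, 12, 10, 15],
    "HD": [2, 40, 18, 95],  # one HD student barely posts, another posts constantly
}

# The aggregate view: a tidy upward trend.
averages = {g: mean(p) for g, p in posts_by_grade.items()}
assert averages["F"] < averages["P"] < averages["D"] < averages["HD"]

# The hidden complexity: the spread within the HD group is huge
# relative to its mean, so the average hides wildly different practices.
print("HD mean:", averages["HD"], "HD spread:", stdev(posts_by_grade["HD"]))
```

The same average can be produced by students with almost nothing in common, which is exactly what the abstraction throws away.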

A concrete example is management passing a rule that every course must have a course website that includes a discussion forum, even when an entirely on-campus course has perfectly good reasons to decide that a forum isn’t appropriate.

Decomposition preventing action

The structure and organisation of universities are based on top-down decomposition. All of the various required functions are identified and divided into various parts: there’s HR, IT, the teaching academics in faculties, etc. With each decomposition there is a loss of the whole. Each sub-component starts to focus on its own bit. This is where you get IT departments focusing on uptime, security and risk regardless of the effects this has on the quality of learning.

You can see the effect of this in the learning analytics literature: ALTC grants having to take a browser/client-based approach to tool development because the IT department won’t provide access to the database. It’s one of the reasons why the Indicators project is a little further ahead than most, even though we are a very small group. Through a series of accidents we had access to data and the skills necessary to do something with it.

The effect is also visible in the location of data. Student results are in the student records system, LMS activity is in the LMS, etc. Bringing the data into a single system maintained by the institution’s data mining folk is what makes “dashboards” look like the solution, even though the real value of the patterns revealed lies within the learning environment (the LMS for most), not in yet another system.

You can also see this in the increasing tendency for “dashboards” to be the organisational solution to learning analytics. It’s what the data mining folk in the institution already do, so why not do it for learning analytics? The only trouble is that the information provided by learning analytics is most useful within the LMS, which is a contributing factor to some of the limitations of the tools and the difficulty staff and students have using them.

The major difficulty for learning analytics is that action in response to learning analytics takes place at the teaching/learning coal-face, not in the dashboard system or the other places inhabited by senior management and support staff.

It’s not a causal system

University senior management assume that they can manipulate the behaviour of people. For example, there is the lovely quote I often use from an LMS working group, where one of the technical people suggested “that we should change people’s behaviour because information technology systems are difficult to change”. In a complex system, that sort of causality simply isn’t there.

For example, when Moodle was introduced at the institution in question there was grave concern about how few Blackboard course sites actually contained discussion forums. A solution to this was the implementation of “minimum course standards”, accompanied by a checklist that was ticked by the academic and double-checked by the moderator to assure certain standards were implemented, e.g. a discussion forum. Subsequent data reveals that while all courses may have had a discussion forum, a significant proportion of courses have discussion forums with fewer than 5 posts. This is the mistaken “overweening conceit of causality”.
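A sketch with entirely hypothetical course codes and post counts shows the gap between the compliance measure (every course ticks the forum box) and the engagement the standard was meant to produce:

```python
# Entirely hypothetical post counts for the mandated forum in each course.
posts_in_forum = {"COIT101": 0, "COIT102": 3, "EDUC201": 120,
                  "EDUC202": 1, "MGMT301": 4, "MGMT302": 87}

# The compliance measure: every course has a forum, so 100% tick the box.
compliance = sum(1 for _ in posts_in_forum) / len(posts_in_forum)
assert compliance == 1.0

# What the standard was actually after: active discussion.
dead = [course for course, posts in posts_in_forum.items() if posts < 5]
print(f"{len(dead)} of {len(posts_in_forum)} forums have fewer than 5 posts")
```

The checklist reports total success while most of the forums it mandated are dead.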

Then there is the obvious confusion between correlation and causation, i.e. simply because HD students average more use of the LMS, this doesn’t mean that if all students use the LMS more they’ll get better marks.
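A minimal simulation of this confusion, under an assumed (and entirely invented) model in which a hidden factor such as motivation drives both LMS use and marks:

```python
import random
random.seed(1)

# Assumed model: motivation is the hidden cause of both use and marks.
motivation = [random.random() for _ in range(200)]
use   = [m * 50 for m in motivation]   # LMS sessions per student
marks = [m * 80 for m in motivation]   # final mark per student

def corr(xs, ys):
    """Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

# Use and marks correlate perfectly...
assert abs(corr(use, marks) - 1.0) < 1e-9

# ...but in this model marks depend only on motivation, so an
# intervention that doubles every student's LMS use leaves the
# marks exactly as they were.
use_after = [u * 2 for u in use]
marks_after = [m * 80 for m in motivation]
assert marks_after == marks
```

The correlation is real; the causal inference (“make them use it more and marks will rise”) is not.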

Some alternatives

Okay, so given these problems, what might you do differently? A few initial suggestions:

  • Put a focus on the technology to aid sense-making and action especially to aid academics and students.
    The technology can’t know enough about the context to make decisions. It can, however, help the people make decisions.
  • Put this into the learning environment currently used by these folk (i.e. the LMS).
    It has to be a part of the environment. If it’s separate, it won’t be used.
  • Break-down the silos.
    Currently, much of learning analytics is within a course or across an institution, or perhaps focused on a specific student. Academics within a program need to be sharing their insights and actions. Students need to be able to see how they are going against others…

This is not meant to represent a new direction for the practice of learning analytics. Rather one interesting avenue for further research.


Morrison, K. (2006). Complexity theory and education. APERA Conference, Hong Kong (pp. 1-12). Retrieved from http://edisdat.ied.edu.hk/pubarch/b15907314/full_paper/SYMPO-000004_Keith Morrison.pdf

Alternate ways to get the real story in organisations

I’ve just been to a meeting with a strangely optimistic group of people who are trying to gather “real stories” about what is going on within an organisation through focus groups. They are attempting to present this information to senior management in an attempt to get them to understand what staff are experiencing, to indicate that something different might need to be done.

We were asked to suggest other things they could be doing. For quite some time I’ve wanted to apply some of the approaches of Dave Snowden to tasks like this. The following mp3 audio is an excerpt from this recording of Dave explaining the results of one approach they have used. I recommend the entire recording, or any of the others that are there.

Why do we shit under trees?

Imagine this type of approach applied to students undertaking courses at a university as a real alternative to flawed smile sheets.

Tell a story about your garden – narrative and SenseMaker

There have been a few glimmers in this blog in my undeveloped, long stalled but slowly growing interest in the use of narrative, metaphor and myth to understand and engage in innovation around learning and teaching. Much, but not all, of this arises from the work of Dave Snowden and attending one of his workshops.

A chance to play with SenseMaker

One of my interests is in the SenseMaker suite as a tool that might be useful for a number of tasks. In particular, I’m interested in seeing if this might provide some interesting alternatives to the evaluation of learning and teaching. However, apart from seeing SenseMaker in action at the workshop I attended and reading about it, I haven’t had a chance to play with it.

In a recent blog post Dave Snowden extends an invitation to use a part of the SenseMaker suite to contribute to an open project about gardens.

I encourage you to go to Dave’s post and contribute a story about your garden. The rest contains some reflections on my contribution.

Some reflections

The Flash interface has some issues, at least on my combination of hardware and software. The drop-down boxes on the initial set of questions don’t provide some of the traditional cues you expect:

  • highlighting options as you hover the mouse while figuring out which one to select;
  • you have to click on the actual down arrow to get the menu of options to appear rather than being able to click anywhere on the box;
  • it only appears to use the first letter typed to jump to choices
    i.e. when selecting which country you are from I will often type “aust” to bring up those options (I’m in Australia) rather than scroll through a long list. The Flash interface only appears to take the first letter ‘a’.

Finding a story to tell about my garden was interesting and took a while. In fact the story emerged and changed as I was writing it. It took perhaps as long as, possibly longer than, a survey might. I wonder how that impacts on the likelihood of people contributing.

In some of the questions asked after contributing the story – used as signifiers – I sometimes found myself wanting a “not applicable” option. I wonder what effect this has on the usefulness of the stories and the signifiers.

Quotes from Snowden and the mismatch between what university e-learning does and what it needs

For the PhD I’m essentially proposing that the current industrial model of e-learning adopted (almost without exception) by universities is a complete and utter mismatch with the nature of the problem. As a consequence of this mismatch e-learning will continue to have little impact, be of limited quality and continue to be characterised by 5 yearly projects to replace a software system rather than a focus on an on-going process of improving learning and teaching by using the appropriate and available tools.

Dave Snowden has described a recent keynote he gave, and from that description/keynote I get the following two quotes which illustrate important components of my thesis and its design theory. I share them here.

Tools and fit

The technology in e-learning is a tool, a tool to achieve a certain goal. The trouble is that the Learning Management System/LMS model (be it open source or not), as implemented within universities, typically sacrifices flexibility. It’s too hard to adapt the tool, so the people have to adapt. The following is a favourite quote of mine from Sturgess and Nouwens (2004). It’s from a member of the technical group evaluating learning management systems:

“we should change people’s behaviour because information technology systems are difficult to change”

While I recognise that this may actually be the case with existing LMSes and the constraints that exist within universities around how they can be supported, I do not agree with it. I believe the tools should adapt to the needs of the people, that a lot more effort needs to be expended doing this, and that if it is, significant benefits flow.

Consequently, it’s no surprise that Dave’s quote about tools resonates with me:

Technology is a tool and like all tools it should fit your hand when you pick it up, you shouldn’t have to bio-re-engineer your hand to fit the tool.

Seneca the Younger and ateleological design

Dave closes his talk with the following quote from Seneca

The greatest loss of time is delay and expectation, which depend upon the future. We let go the present, which we have in our power, and look forward to that which depends upon chance, and so relinquish a certainty for an uncertainty.

For me this connects back to the fact that (almost) all implementations of e-learning within universities use a plan-driven approach, a teleological design process. It assumes that they can know what is needed into the future which, given the context of universities and the rhetoric about “change being the only thing that is constant”, is just a bit silly.

Teleological design causes problems; ateleological design is a better fit.

Measuring the design process – implications for learning design, e-learning and university teaching

I came across the discussion underpinning these thoughts via a post by Scott Stevenson. His post was titled “Measuring the design process”. It is his take on a post titled “Goodbye Google” by Doug Bowman. Bowman was the “Visual Design Lead” at Google and has recently moved to Twitter as Creative Director.

My take on the heart of the discussion is the mismatch between the design and engineering cultures. Design is okay with relying on experience and intuition as the basis for a decision, while the engineering culture wants everything measured, tested and backed up by data.

In particular, Bowman suggests that the reason for this reliance on data is that

a company eventually runs out of reasons for design decisions. With every new design decision, critics cry foul. Without conviction, doubt creeps in. Instincts fail. “Is this the right move?”

The doubt, the lack of a reason, purpose, or vision for a change creates a vacuum that needs to be filled. There needs to be some reason to point to for the decision.

When a company is filled with engineers, it turns to engineering to solve problems. Reduce each decision to a simple logic problem. Remove all subjectivity and just look at the data. Data in your favor? Ok, launch it. Data shows negative effects? Back to the drawing board.

Can’t be anything wrong with that, can there? If you’re rational and have data to back you up then you can’t be blamed. Bowman suggests that there is a problem:

And that data eventually becomes a crutch for every decision, paralyzing the company and preventing it from making any daring design decisions.

He goes on to illustrate the point: the focus goes to small questions – should a border be 3, 4 or 5 pixels wide? – while the big questions, the important questions that can make a design distinctive, become ignored. This happens because hard problems are hard, and it is almost certainly impossible to gather objective data for them.

Stevenson makes this point:

Visual design is often the polar opposite of engineering: trading hard edges for subjective decisions based on gut feelings and personal experiences. It’s messy, unpredictable, and notoriously hard to measure.

Learning design, e-learning and university teaching

This same problem arises in universities around learning design, e-learning and university teaching. The design of university teaching and learning has some strong connections with visual design. It involves subjective and contextual decisions, it’s messy, unpredictable and hard to measure.

The inherently subjective and messy nature of university teaching brings it into direct tension with two increasingly important and powerful cultures within the modern university:

  1. Corporate management; and
    Since sometime in the 90s, at least within Australia, corporate managerialism has been on the rise within universities. Newton (2003) has a nice section on some of the external factors that have contributed to this rise; I’ve summarised Newton here. Further underpinning this rise has been what Birnbaum (2000) calls “education’s Second Management Revolution” from around 1960, which “marks the ascendance of rationality in academic management”.
  2. Information technology.
    With the rise of e-learning and other enterprise systems, the corporate IT culture within universities is increasingly strong. In particular, from my cynical perspective, when they can talk the same “rational” talk as the management culture, back this up with reams of data (regardless of validity) and can always resort to techno-babble to confuse management.

Both these cultures put an emphasis on rationality, on having data to support decisions and on being able to quantify things.

Symptoms of this problem

Just taking the last couple of years, I’ve seen the following symptoms of this:

  • The desire to have a fixed, up-front estimate of how long it takes to re-design a course.
    I want you to re-design 4 courses. How long will it take?
  • The attempt to achieve quality through consistency.
    This is such a fundamentally flawed idea, but it is still around. Sometimes it is proposed by people who should know better. The idea that a single course design, word template or educational theory is suitable for all courses at an institution, let alone all learners, sounds good, but doesn’t work.
  • Reports indicating that the re-design and conversion of courses to a new LMS are XX% complete.
    Heard about this just recently. If you are re-designing a raft of different courses, taught by different people, in different disciplines, using different approaches, and then porting them to a new LMS, how can you say it is XX% complete? The variety in courses means that you can’t quantify how long it will take. You might have 5 of 10 courses completed, but that doesn’t mean you’re 50% complete. The last 5 courses might take much longer.
  • The use of a checklist to evaluate LMSes.
    This has to be the ultimate: use a checklist to reduce the performance of an LMS to a single number!
  • Designing innovation by going out to ask people what they want.
    For example, let’s go and ask students or staff how they want to use Web 2.0 tools in their learning and teaching. Henry Ford, that old “Fordist” and archetypal example of rationalism, knew better than this:

    “If I had asked people what they wanted, they would have said faster horses.”
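Two of the symptoms above lend themselves to a quick sketch with entirely invented numbers: the “XX% complete” report and the single-number checklist.

```python
# Invented numbers for two of the symptoms above.

# 1. The "XX% complete" report. The easy courses get done first;
#    the hard ones remain (effort estimates in weeks, all made up).
done      = {"A": 1, "B": 1, "C": 2, "D": 1, "E": 1}
remaining = {"F": 6, "G": 8, "H": 4, "I": 10, "J": 6}

headline = len(done) / (len(done) + len(remaining))
assert headline == 0.5            # "50% complete" on the report
by_effort = sum(done.values()) / (sum(done.values()) + sum(remaining.values()))
print(f"effort actually completed: {by_effort:.0%}")  # far below 50%

# 2. The checklist evaluation. Two very different LMSes collapse to
#    the same single number (1 = has the feature, features made up).
lms_a = {"quizzes": 1, "forums": 1, "wiki": 0, "blogs": 0, "open_source": 1}
lms_b = {"quizzes": 0, "forums": 1, "wiki": 1, "blogs": 1, "open_source": 0}
assert sum(lms_a.values()) == sum(lms_b.values()) == 3
```

In both cases the single number is easy to report and says almost nothing about the messy reality underneath it.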

The scary thing is that, because design is messy and hard, the rational folk don’t want to deal with it. It is much easier to deal with data and quantifiable problems.

Of course, the trouble with this is summarised by a sign that (apparently) used to hang in Einstein’s office at Princeton:

Not everything that counts can be counted, and not everything that can be counted counts.


This mismatch between rationality and the nature of learning and teaching leads, from my perspective, to most of the problems facing universities around teaching. Task corruption and a reliance on “blame the teacher”/prescription approaches to improving teaching arise from this mismatch.

This mismatch arises, I believe, for much the same reason as Bowman identified in his post about Google. The IT and management folk don’t have any convictions or understanding about teaching or, perhaps, about leading academics. Consequently, they fall back onto the age-old (and disproved) management/rational techniques, as these give the appearance of rationality.


Birnbaum, R. (2000). Management Fads in Higher Education: Where They Come From, What They Do, Why They Fail. San Francisco, Jossey-Bass.

Cognition – we’re not rational and how it impacts e-learning

It’s a small world. I work in Rockhampton at a university and last year travelled to Canberra for a Cognitive Edge workshop (which I recommend). One of the other participants was Cory Banks who, a few years ago, was a student at the university I work at. He’s obviously moved on to bigger and better things.

Our joint Cognitive Edge experience indicates some similar interests, which brings me to this post on cognition on Cory’s blog. In the post he suggests a number of aspects of cognition that impact upon problem solving. He’s asking for help in validating and sourcing these aspects.

If you can help, please comment on his post.

My particular interest in cognition is that most information systems processes (e.g. governance, software development) are based on the assumption of rational people making objective decisions drawing on all available evidence. My experience suggests that this is neither possible nor true. For me, this observation explains most of the limitations and failures associated with the design and support of information systems for e-learning (and information systems more generally).

I’ve written about aspects of this before and again.

So, as time progresses I’m hoping to add to this list in terms of references, examples and additional aspects.

Cory’s cognition list

Cory’s cognition list includes the following (a little paraphrasing)

  • We evolved as ‘first fit’ pattern matchers.
    A quote from Snowden (2005)

    This builds on naturalistic decision theory in particular the experimental and observational work of Gary Klein (1944) now validated by neuro-science, that the basis of human decision is a first fit pattern matching with past experience or extrapolated possible experience. Humans see the world both visually and conceptually as a series of spot observations and they fill in the gaps from previous experience, either personal or narrative in nature. Interviewed they will rationalize the decision in whatever is acceptable to the society to which they belong: “a tree spirit spoke to me” and “I made a rational decision having considered all the available facts” have the same relationship to reality

    I’m guessing that Kaplan’s law of instrument is somewhat related.

  • The fight or flight reaction.
  • We make assumptions.
  • We’re not analytical
    I wonder if this and most of the above points fit under “first fit pattern matchers”?
  • Failure imprints better than success.
  • Serendipitous recall (we only know what we need to know, when we need to know it).
  • We seek symmetry (attractiveness).


Snowden, D. (2005). Multi-ontology sense making: A new simplicity in decision making. Management Today, Yearbook 2005. R. Havenga.

How to improve L&T and e-learning at universities

Over the last week or so I’ve been criticising essentially all current practice used to improve learning and teaching. There are probably two main prongs to my current cynicism:

  1. Worse than useless evaluation of learning and teaching; and
    Universities are using evaluation methods that are known to be worthless and/or can’t get significant numbers of folk to agree on some definition of “good” learning and teaching.
  2. A focus on what management do.
    Where, given the difficulty of getting individual academics (let alone a significant number of them) to change and/or improve their learning and teaching (often because of the problems with point #1), the management/leadership/committee/support hierarchy within universities embarks on a bit of task corruption and starts to focus on what they do, rather than on what the teaching staff do.

    For example, the university has improved learning and teaching if the academic board has successfully mandated the introduction of generic attributes into all courses, had the staff development center run appropriate staff development events, and introduced “generic attributes” sections within course outlines. They’ve done lots of things, hence success. Regardless of what the academics are really doing and what impacts it is having on the quality of learning and teaching (i.e. see point #1).

So do you just give up?

So does this mean you can’t do anything? What can you do to improve learning and teaching? Does the fact that learning and teaching (and improving learning and teaching) are wicked problems mean that you can’t do anything? This is part of the problem Col is asking about with his Indicators project. This post is mostly aimed at trying to explain some principles and approaches that might work. As well as attempting to help Col, it’s attempting to make concrete some of my own thoughts. It’s all a work in progress.

In this section I’m going to try and propose some generic principles that might help inform how you might plan something. In the next section I’m going to try and apply these principles to Col’s problem. Important: I don’t think this is a recipe. The principles are going to be very broad and leave a lot of room for the application of individual knowledge. Knowledge of both generic theories of teaching, learning, people etc. and also of the specific contexts.

The principles I’m going to suggest are drawn from:

  • Reflective alignment – a focus on what the teachers do.
  • Adopter-based development processes.
  • A model for evaluating innovations informed by diffusion theory.
  • Emergent/ateleological design.
  • The Cynefin framework.

Reflective alignment

In proposing reflective alignment I believe it is possible to make a difference, but only if the focus is on what the teacher does to design and deliver their course. The aim is to ensure that the learning and teaching system, with its processes, rewards and constraints, encourages the teacher to engage in those activities that lead to quality learning and teaching, in a way that makes sense for the teacher, their course and their students.

The last point is important: it is what makes sense for the teacher. It is not what some senior manager thinks should work, or what the academic board thinks is important or good. Any attempt to introduce something that doesn’t engage with the individual teacher, and doesn’t encourage them to reflect on what they are doing and hopefully make a small improvement, will fail.

Adopter-based development

This has strong connections with the idea of adopter-based development processes, which are talked about in this paper (Jones and Lynch, 1999), which

places additional emphasis on being adopter-based and concentrating on the needs of the individuals and the social system in which the final system will be used.

Forget about the literature, forget about the latest fad (mostly) and concentrate first and foremost on developing a deep understanding of the local context, the social system and its mores and the people within it. What they experience, what their problems are, what their strengths are and what they’d like to do. Use these as the focus for deciding what you do next, not the latest, greatest fad.

How do you decide?

In this paper (Jones, Jamieson and Clark, 2003) we drew on Rogers’ diffusion theory (Rogers, 1995) to develop a model that might help folk make these sorts of decisions. The idea was to evaluate a potential innovation against the model in order to

increase their awareness of potential implementation issues, estimate the likelihood of reinvention, and predict the amount and type of effort required to achieve successful implementation of specific … innovations.

Variables influencing rate of adoption

The model consists of five characteristics of an innovation diffusion process that will directly influence the rate of adoption of the innovation. These characteristics, through the work of Rogers and others, also help identify potential problems facing adoption and potential solutions.

This model can be misused. It can be used as an attempt to encourage adoption of Level 2 approaches to improving learning and teaching. i.e. someone centrally decides on what to do and tries to package it in a way to encourage adoption. IMHO, this is the worst thing that can happen. Application of the model has to be driven by a deep understanding of the needs of the people within the local context. In terms of reflective alignment, driven by a desire to help encourage academics to reflect more on their learning and teaching.
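As a rough illustration of how such a model might be used (this is my own hypothetical sketch, not the model from the 2003 paper), Rogers’ five perceived attributes of an innovation could be turned into a crude screening checklist that flags likely adoption problems:

```python
# Hypothetical sketch: screening a proposed innovation against Rogers' (1995)
# five perceived attributes. Scores run from 1 (low) to 5 (high). Attribute
# weights, thresholds and the example scores are invented for illustration.

ROGERS_ATTRIBUTES = {
    "relative_advantage": +1,  # higher perceived advantage -> faster adoption
    "compatibility": +1,       # fit with existing values and practices
    "complexity": -1,          # higher perceived complexity slows adoption
    "trialability": +1,        # can it be tried on a limited basis?
    "observability": +1,       # are the results visible to others?
}

def adoption_outlook(scores):
    """Return a crude signed total and a list of likely problem attributes."""
    total = sum(sign * scores[name] for name, sign in ROGERS_ATTRIBUTES.items())
    problems = [name for name, sign in ROGERS_ATTRIBUTES.items()
                if (sign > 0 and scores[name] <= 2) or (sign < 0 and scores[name] >= 4)]
    return total, problems

# e.g. a centrally mandated system: little perceived advantage for academics,
# complex, and hard to trial before committing
scores = {"relative_advantage": 2, "compatibility": 2, "complexity": 4,
          "trialability": 1, "observability": 3}
total, problems = adoption_outlook(scores)
```

The point of such an exercise isn’t the number it produces; it’s forcing an early, explicit look at where the adoption problems are likely to be, from the adopters’ perspective.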

Emergent/ateleological design

Traditional developer-based approaches to information systems are based on a broadly accepted and unquestioned set of principles that are completely and utterly inappropriate for learning and teaching in universities. Since at least this paper (Jones, 2000) I’ve been arguing for different design processes based on emergent development (Truex, Baskerville and Klein, 1999) and ateleological design (Introna, 1996).

Truex, Baskerville and Klein (1999) suggest the following principles for emergent development:

  • Continual analysis;
  • Dynamic requirements negotiation;
  • Useful, incomplete specifications;
  • Continuous redevelopment; and
  • The ability to adapt.

They are expanded in more detail in the paper. There have been many similar discussions about processes. This paper talks about Introna’s ateleological design process and its principles. Kurtz and Snowden (2007) talk about idealistic versus naturalistic approaches, summarised in the following table.

  • Idealistic: achieve an ideal state. Naturalistic: understand a sufficiency of the present in order to stimulate evolution.
  • Idealistic: privilege expert knowledge, analysis and interpretation. Naturalistic: favour enabling emergent meaning at the ground level.
  • Idealistic: separate diagnosis from intervention. Naturalistic: diagnosis and intervention are intertwined with practice.

No prizes for guessing that I believe a naturalistic process is much more appropriate.

Protean technologies

Most software packages are severely constraining. I’m thinking mostly of enterprise systems here, which tend to embody the design assumption that controlling what users do is necessary to ensure efficiency. This constrains what people can do and limits innovation, which in an environment like learning and teaching is a huge problem.

Truex et al (1999) make this point about systems and include “ability to adapt” as a prime requirement for emergent development. The software/systems in play have to be adaptable. As many people as possible, as quickly as possible, need to be able to modify the software to enable new functionality as the need becomes apparent. The technology has to enable, in Kurtz and Snowden’s (2007) words, “emergent meaning at the ground level”. It also has to allow “diagnosis and intervention to be intertwined with practice”.

That is, the software has to be protean. As much as possible the users of the system need to be able to play with the system and try new things, and where appropriate there have to be developers who can help these things happen more quickly. This implies that the software has to enable and support discussion amongst many different people, to help share perspectives and ideas. The mixing of ideas helps generate new and interesting ideas for changes to the software.

Cynefin framework


Which brings us to the Cynefin framework. As a wicked problem, I place teaching and attempting to improve teaching into the Complex domain of the Cynefin framework. This means that the most appropriate approach is to “Probe – Sense – Respond”. i.e. do something small, see how it works and then encourage the stuff that works and cease/change the stuff that doesn’t.
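As a toy sketch of what “Probe – Sense – Respond” might look like operationally (all names, signals and thresholds here are invented for illustration): run several small safe-fail probes in parallel, sense the outcome of each, amplify what works and dampen what doesn’t.

```python
# Illustrative sketch of Probe - Sense - Respond in the Complex domain.
# probes: dict mapping a probe's name to the effort invested in it.
# sense: a function returning a signal in [0, 1] for a probe's observed outcome.

def probe_sense_respond(probes, sense, rounds=3):
    for _ in range(rounds):
        for name in list(probes):       # copy keys: we mutate the dict below
            signal = sense(name)
            if signal > 0.6:
                probes[name] *= 2       # amplify: put more effort behind it
            elif signal < 0.3:
                del probes[name]        # dampen: cease the probe
    return probes

# e.g. three small teaching interventions, one clearly working, one clearly not
surviving = probe_sense_respond(
    {"peer_review": 1, "weekly_quiz": 1, "video_summaries": 1},
    lambda name: {"peer_review": 0.9, "weekly_quiz": 0.1, "video_summaries": 0.5}[name])
```

The essential point the sketch tries to capture is that no probe is bet-the-farm: each is small enough that its failure is tolerable and informative.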

Some ideas for a way forward

So, to quickly finish this off, some off-the-cuff ideas for the indicators project:

  • Get the data from the indicators into a form that provides some information to real academics in a form that is easy to access and preferably as a part of a process or system they already use.
  • Make sure the form is perceived by the academics to provide some value.
  • Especially useful if the information/services provided by the indicators project enables/encourages reflection on the part of the academics.
    For example, giving a clear, simple, regular update on some information about student activity that is currently unknown. Perhaps couched with advice that helps provide options for a way to solve any potential problems.
  • Use a process and/or part of the product that encourages a lot of people talking about/contributing to ideas about how to improve what information/services the indicators provides.
  • Adopt the “open source” development ethos: “release early, release often”.
  • Perhaps try and create a community of academics around the project who are interested and want to use the services.
  • Pick people who are likely to be good change agents. Keep in mind Moore’s chasm and Geoghegan’s identification of the technologists’ alliance.


Introna, L. (1996). “Notes on ateleological information systems development.” Information Technology & People 9(4): 20-39.

Jones, D. and T. Lynch (1999). A Model for the Design of Web-based Systems that Supports Adoption, Appropriation, and Evolution. Proceedings of the 1st ICSE Workshop on Web Engineering, S. Murugesan and Y. Deshpande (eds), Los Angeles, pp. 47-56.

Jones, D., K. Jamieson and D. Clark (2003). A Model for Evaluating Potential WBE Innovations. Proceedings of the 36th Annual Hawaii International Conference on System Sciences (HICSS’03), Track 5, p. 154a.

Kurtz, C. and D. Snowden (2007). Bramble Bushes in a Thicket: Narrative and the intangiables of learning networks. Strategic Networks: Learning to Compete. Gibbert, Michel, Durand and Thomas, Blackwell.

Rogers, E. (1995). Diffusion of Innovations. New York, The Free Press.

Truex, D., R. Baskerville, et al. (1999). “Growing systems in emergent organizations.” Communications of the ACM 42(8): 117-123.

Patterns for e-learning – a lost opportunity or destined to fail

In the following I reflect on my aborted and half-baked attempts at harnessing design patterns within the practice of e-learning at universities and wonder whether it was a lost opportunity and/or a project that was destined to fail. This is written in the light shed by the work of a number of other folk (Google “patterns for e-learning”), including the current JISC-emerge project and, I believe, the related Pattern Language Network.

I think I’ll end up contending that it was destined to fail and hope I can provide some justification for that. Or at least that’s what I currently think, before writing the following. Any such suggestion will be very tentative.


Way back in 1999 I was a young, naive guy at the crossroads of software development and e-learning, wondering why more academics weren’t being innovative. Actually, the biggest and most troubling question was much simpler: “Why were they repeating the same mistakes I and others had made previously?”. For example, I lost count of the number of folk who tried to use email for online assignment submission in courses with more than 10 or 20 students, even though many folk before them had tried it, had problems, and talked about the additional workload it creates.

At the same time I was looking at how to improve the design of Webfuse, the e-learning system I was working upon, and object-oriented programming seemed like a good answer (it was). Adopting OOP also brought me into contact with the design patterns community within the broader OOP community. Design patterns within OOP were aimed at solving many of the same problems I was facing with e-learning.

Or perhaps this was an example of Kaplan’s law of the instrument, i.e. patterns were the hammer and the issues around e-learning looked like nails.

Whatever the reason, some colleagues and I tried to start up a patterns project for online learning (I’m somewhat amazed that the website is still operating). The “why page” for the project explains the rationale. We wrote a couple of papers explaining the project (Jones and Stewart, 1999; Jones, Stewart and Power, 1999), gave a presentation (the audio for the presentation is there in RealAudio format, which shows how old this stuff is) and ran an initial workshop with some folk at CQU. One of the publications also got featured in ERIC and on OLDaily.

The project did produce a few patterns before dying out:

There’s also one that was proposed but nothing concrete was produced – “The Disneyland Approach”. This was based on the idea of adapting ideas from how Disney designs their theme parks to online learning.

I can’t even remember what all the reasons were. Though I did get married a few months afterwards and that probably impacted my interest in doing additional work. Not to mention that my chief partner in crime also left the university for the paradise of private enterprise around the same time. That was a big loss.

One explanation and a “warning” for other patterns projects?

At the moment I have a feeling (it needs to be discussed and tested to become more than that) that these types of patterns projects are likely to be very difficult to get to work within the e-learning environment, especially if the aim is to get a broad array of academics to, at least, read and use the patterns. If the aim is to get a broad array of academics to contribute to patterns, then I think it becomes almost impossible. This feeling/belief is based on three “perspectives” that I’ve come to draw upon recently:

  1. Seven principles for knowledge management that suggest pattern mining will be difficult;
  2. the limitations of using the Technologists’ Alliance to bridge the gap; and
  3. people (and academics) aren’t rational, which is why they won’t use patterns when designing e-learning.

7 Principles – difficulty of mining patterns

Developing patterns is essentially an attempt at knowledge management. Pattern mining is an attempt to capture what is known about a solution and its implementation and distill it into a form that is suitable for others to access and read. To abstract that knowledge.

Consequently, I think the 7 principles for knowledge management proposed by Dave Snowden apply directly to pattern mining. To illustrate the potential barriers here’s my quick summary of the connection between these 7 principles and pattern mining.

  1. Knowledge can only be volunteered, it cannot be conscripted.
    The first barrier in engaging academics to share knowledge for pattern mining is getting them to volunteer. By nature, people don’t share complex knowledge unless they know and trust you. Even then, if they’re busy…. This has been known about for a while.
  2. We only know what we know when we need to know it.
    Even if you do get them to volunteer, chances are they won’t be able to give you everything you need to know. You’ll be asking them outside of the context in which they designed or implemented the good practice you’re trying to abstract into a pattern.
  3. In the context of real need few people will withhold their knowledge.
    Pattern mining is almost certainly not going to be in a situation of real need. i.e. those asking aren’t going to need to apply the provided knowledge to solve an immediate problem. We’re talking about abstracting this knowledge into a form someone may need to use at some stage in the future.
  4. Everything is fragmented.
    Patterns may actually be a good match here, depending on the granularity of the pattern and the form used to express it. Patterns are generally fairly small documents.
  5. Tolerated failure imprints learning better than success.
    Patterns attempt to capture good practice, which violates this adage. The idea of anti-patterns may be more useful here, though not without its own problems.
  6. The way we know things is not the way we report we know things.
    Even if you are given a very nice, structured explanation as part of pattern mining, chances are that’s not how the design decisions were made. This principle has interesting implications for how/if academics might harness patterns to design e-learning. If the patterns become “embedded” in the academics’ “pattern matching” process, it might just succeed. But that’s a big if.
  7. We always know more than we can say, and we will always say more than we can write down.
    The processes used to pattern mine would have to be well designed to get around this limitation.

Limitations of the technologists’ alliance

Technology adoption life-cycle - Moore's chasm

Given that pattern mining directly with coal-face academics is difficult for the above reasons, a common solution is to use the “Technologists’ Alliance” (Geoghegan, 1994), i.e. the collection of really keen and innovative academics and the associated learning designers and other folk who fit into the left-hand two categories of the technology adoption life cycle: those to the left of Moore’s chasm.

The problem with this is that the folk on the left of Moore’s chasm are very different to the folk on the right (the majority of academic staff). What the lefties think appropriate is not likely to match what the righties are interested in.

Geoghegan (1994) goes so far as to claim that the “alliance”, and the difference between them and the righties, has been the major negative influence on the adoption of instructional technology.

Patterns developed by the lefties are likely to be in a language not understood by the righties, and to solve problems that the righties aren’t interested in and probably weren’t even aware existed. Which isn’t going to contribute positively to adoption.

People aren’t rational decision makers

The basic idea of gathering patterns is that coal face academics will be so attracted to the idea of design patterns as an easy and effective way to design their courses that they will actually use the resulting pattern language to design their courses. This ignores the way the human mind makes decisions.

People aren’t rational. Most academics are not going to follow a structured approach to the design of their courses. Most aren’t going to quickly adopt a radically different approach to learning and teaching. Not because they’re recalcitrant mongrels more interested in research (or doing nothing), but because they have the same biases and ways of thinking as the rest of us.

I’ve talked about some of the cognitive biases or limitations on how we think in previous posts including:

In this audio snippet (mp3) Dave Snowden argues that any assumption of rational, objective decision making that entails examining all available data and examining all possible alternate solutions is fighting against thousands of years of evolution.

Much of the above applies directly to learning and teaching, where the experience of most academics is that they aren’t valued or promoted on the basis of their teaching. It’s their research that is of prime concern to the organisation, as long as they can demonstrate a modicum of acceptable teaching ability (i.e. there aren’t large numbers of complaints or other events out of the ordinary).

In this environment with these objectives, is it any surprise that they aren’t all that interested in spending vast amounts of time to overcome their cognitive biases and limitations to adopt radically different approaches to learning and teaching?

Design patterns anyone?

It’s just a theory

Gravity, just a theory

Remember what I said above, this is just a theory, a thought, a proposition. Your mileage may vary. One of these days, when I have the time and if I have the inclination I’d love to read some more and maybe do some research around this “theory”.

I have another feeling that some of the above have significant negative implications for much of the practice of e-learning and attempts to improve learning and teaching in general. In particular, other approaches that attempt to improve the design processes used by academics by coming up with new abstractions. For example, learning design and tools like LAMS. To some extent some of the above might partially explain why learning objects (in the formal sense) never took off.

Please, prove me wrong. Can you point to an institution of higher education where the vast majority of teaching staff have adopted an innovative approach to the design or implementation of learning? I’m talking at least 60/70%.

If I were setting the bar really high, I would ask for proof that they weren’t simply being seen to comply with the innovative approach, but were actively engaging with it and embedding it into their everyday thinking about teaching.

What are the solutions?

Based on my current limited understanding and the prejudices I’ve formed during my PhD, I believe that what I currently understand about TPACK offers some promise. Once I read some more I’ll be more certain. There is a chance that it may suffer many of the same problems, but my initial impressions are positive.


Geoghegan, W. (1994). Whatever happened to instructional technology? 22nd Annual Conferences of the International Business Schools Computing Association, Baltimore, MD, IBM.

Jones, D. and S. Stewart (1999). The case for patterns in online learning. Proceedings of WebNet’99, P. De Bar and J. Legget (eds), Association for the Advancement of Computing in Education, Honolulu, Hawaii, Oct 24-30, pp. 592-597.

Jones, D., S. Stewart and L. Power (1999). Patterns: using proven experience to develop online learning. Proceedings of ASCILITE’99, Responding to Diversity, Brisbane: QUT, pp. 155-162.

Getting half-baked ideas out there: improving research and the academy

In a previous post examining one reason folk don’t take to e-learning I included the following quote from a book by Carolyn Marvin

the introduction of new media is a special historical occasion when patterns anchored in older media that have provided the stable currency for social exchange are reexamined, challenged, and defended.

In that previous post I applied this idea to e-learning. In this post I’d like to apply this idea to academic research.

Half-baked ideas

In this post Jon Udell talks about the dissonance between the nature of blogs, the narrative form he recommends for blogs and the practices of academics. In it he quotes an academic’s response to his ideas for writing blogs as

I wouldn’t want to publish a half-baked idea.

Jon closes the blog post with the following paragraph

That outcome left me wondering again about the tradeoffs between academia’s longer cycles and the blogosphere’s shorter ones. Granting that these are complementary modes, does blogging exemplify agile methods — advance in small increments, test continuously, release early and often — that academia could use more of? That’s my half-baked thought for today.

I think this perspective sums it up nicely. The patterns of use around the old/current media for academic research (conference and journal papers) are similar to heavyweight software development methodologies: they rely on a lot of up-front analysis and design to ensure that the solution is 100% okay. The patterns of use in the blogosphere are much more like those of agile development methods: small changes, get it working, get it out, and learn from that experience to inform the next small change.

Update: This post talks a bit more about Udell’s views in light of a talk he gave at an EDUCAUSE conference. There is a podcast of the presentation.

There are many other examples of this, just two include:

Essentially, the standard practices associated with research projects in academia prevent many folk from getting their “half-baked ideas” out into the blogosphere. There are a number of reasons, but most come back to not wanting to look like a fool. I’ve seen this many times with colleagues wanting to spend vast amounts of time completing a blog post.

As a strong proponent and promoter of ateleological design processes, I’m interested in how this could be incorporated into research. Yesterday, in discussions with a colleague, I think we decided to give it a go.

What we’re doing and what is the problem?

For varying reasons, Col and I are involved, in different ways, with a project going under the title of the indicators project. However, at the core of our interest is the question:

How do you data mine/evaluate usage statistics from the logs and databases of a learning management system to draw useful conclusions about student learning, or the success or otherwise of these systems?

This is not a new set of questions. The data mining of such logs is quite a common practice with its own collection of approaches and publications. So, the questions for us become:

  • How can we contribute or do something different than what already exists?
  • How can we ensure that what we do is interesting and correct?
  • How do we effectively identify the limitations and holes underpinning existing work and our own work?

The traditional approach would be for us (or at least Col) to go away, read all the literature, do a lot of thinking and come up with some ideas that are tested. The drawback of this approach is that there is limited input from other people with different perspectives. A few friends and colleagues of Col’s might get involved during the process; however, most of the feedback comes at the end when he’s published (or trying to publish) the work.

This might be too late. Is there a way to get more feedback earlier? To implement Udell’s idea of release early and release often?

Safe-fail probes as a basis for research

The nature of the indicators project is that there will be a lot of exploration to see if there are interesting metrics/analyses that can be done on the logs to establish useful KPIs, measurements etc. Some will work, some won’t and some will be fundamentally flawed from a statistical, learning or some other perspective.
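To give a concrete (and entirely hypothetical) flavour of the kind of indicator involved: something as simple as average LMS hits per enrolled student, per course, computed from the activity log. The field names and data below are invented; real LMS logs differ.

```python
# Hypothetical sketch of a simple indicator: average number of LMS hits per
# enrolled student for each course, from a simplified activity log.
from collections import defaultdict

def hits_per_student(log, enrolments):
    """log: iterable of (course, student) hit records.
    enrolments: dict mapping course -> number of enrolled students."""
    hits = defaultdict(int)
    for course, student in log:
        hits[course] += 1
    # skip courses with no recorded enrolments to avoid division by zero
    return {course: hits[course] / n for course, n in enrolments.items() if n}

# invented sample data
log = [("ACCT1001", "s1"), ("ACCT1001", "s2"), ("ACCT1001", "s1"),
       ("MGMT2002", "s3")]
enrolments = {"ACCT1001": 2, "MGMT2002": 1}
indicator = hits_per_student(log, enrolments)
```

Even a metric this trivial raises exactly the questions the project needs outside perspectives on: is a raw hit count statistically meaningful, and does it say anything at all about learning?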

So rather than do all this “internally” I suggested to Col that we blog any and all of the indicators we try and then encourage a broad array of folk to examine and discuss what was found. Hopefully generate some input that will take the project in new and interesting directions.

Col’s already started this process with the latest post on his blog.

In thinking about this I can come up with at least two major problems to overcome:

  • How to encourage a sufficient number and diversity of people to read the blog posts and contribute?
    People are busy. Especially where we are. My initial suggestion is that it would be best if the people commenting on these posts included expertise in: statistics; instructional design (or associated areas); a couple of “coal-face” academics of varying backgrounds, approaches and disciplines; a senior manager or two; and some other researchers within this area. Not an easy group to get together!
  • How to enable that diversity of folk to understand what we’re doing and for us to understand what they’re getting at?
    By its nature this type of work draws on a range of different expertise. Each expert will bring a different set of perspectives and will typically assume everyone is aware of them. We won’t be. How do you keep all this at a level that everyone can effectively share their perspectives?

    For example, I’m not sure I fully understand all of the details of the couple of metrics Col has talked about in his recent post. This makes it very difficult to comment on the metrics and re-create them.

Overcoming these problems, in itself, is probably a worthwhile activity. It could establish a broader network of contacts that may prove useful in the longer term. It would also require that the people sharing perspectives on the indicators would gain experience in crafting their writing in a way that maximises understandability by others.

If we’re able to overcome these two problems it should produce a lot of discussion and ideas that contributes to new approaches to this type of work and also to publications.


Outstanding questions include:

  • What are the potential drawbacks of this idea?
    The main fear, I guess, is that someone not directly involved in the discussion steals the ideas and publishes them, unattributed, before we can publish. There’s probably also a chance that we’ll look like fools.
  • How do you attribute ideas and handle authorship of publications?
    If a bunch of folk contribute good ideas which we incorporate and then publish, should they be co-authors, simply referenced appropriately, or something else? Should it be a case by case basis with a lot of up-front discussion?
  • How should it be done?
    Should we simply post to our blogs and invite people to participate and comment on the blogs? Should we make use of some of the ideas Col has identified around learning networks? For example, agree on common tags for blog posts and del.icio.us etc. Provide a central point to bring all this together?


Introna, L. (1996). “Notes on ateleological information systems development.” Information Technology & People 9(4): 20-39.