Daily Archives: August 8, 2010

Wicked problems and the need to engage with differing perspectives

In writing the last post, I had the opportunity to re-read the Wikipedia article on wicked problems. This quote struck a chord with me:

Rittel and Webber coined the term in the context of problems of social policy, an arena in which a purely scientific-rational approach cannot be applied because of the lack of a clear problem definition and differing perspectives of stakeholders.

My experience with writing BIM from late last year through early this year is a good example of this. BIM is based on an earlier tool called BAM that was written as part of an institution specific system called Webfuse. As of early this year, the institution was dropping Webfuse and moving to Moodle. If BAM were to be continued, it had to be ported to Moodle. And the perspectives begin.

Stakeholders and their perspectives

There are at least three sets of stakeholders in the BIM process:

  • Academic staff wanting BIM written to use in their teaching.
  • Myself, the developer/researcher wanting to write BIM because of an interest in the approach.
  • The IT folk responsible for the Moodle transition project and supporting staff.

The academic staff who wanted BIM created wanted it because it enabled a pedagogical practice that had previously been successful, at least from their perspective. They didn’t really care a great deal about how; they just wanted to use that pedagogy again.

I wanted to work on BIM because I believed that both the pedagogy it enabled and the model of e-learning systems it embodied were worthwhile and potentially very important for future practice.

The IT folk didn’t want BIM written. They had limited resources for the project, and anything that was not core Moodle was not very attractive to them. Consequently, they spent a lot of time and effort proposing methods by which the pedagogy enabled by BAM could be enabled through various combinations of core Moodle tools. There were also quite a few political shenanigans undertaken to prevent BIM being written.

Effective collaboration to enable an efficient implementation of the required pedagogy was not high on the agenda.

The winner writes the history

Obviously, the above is my perspective of what happened. I’m quite sure others involved might provide different perspectives, especially now that BIM has been somewhat successful, at least in terms of one other institution using it and various people in the broader Moodle community saying nice things. I now begin to wonder what stories will be told about the history of BIM.

I no longer work for the original institution, and am fairly confident that if BIM continues to enjoy some success, the IT folk within the institution will take some credit for an environment that enabled the development of BIM. After all, the development of BIM proves the rhetoric about the value of adopting an open source LMS like Moodle: the institution was able to develop a Moodle module that serves an effective pedagogical purpose and is being adopted by others.

From my perspective, the writing of BIM was achieved in spite of the institutional environment. Due to the difficulties of that environment, I had to do most of the work on holidays, had to fight individuals who actively worked against the development of BIM, and faced a range of other problems not indicative of an environment conducive to innovation.

But, now that I’ve left the organisation, it shall be interesting to hear what stories those that remain tell of BIM, its development, and their role within it.

The main point is that difference exists

Now, all of that probably sounds a bit one-sided and biased. Others might suggest a different version of events and suggest that it wasn’t so bad. They are free to contest it. Which version of events is more correct isn’t the point I’m trying to make here.

The point I’m trying to make is that as a wicked problem, improving learning and teaching within a university is going to have a large number of very different perspectives. The attempt to develop “the correct” perspective – which is the aim of engineering or planning approaches to solving these problems – misses the point. To establish an arbitrary and singular “correct” perspective of the problem and its solution, such a process must ignore and continually suppress alternative perspectives. This wastes energy on the suppression, and worse, closes off more fruitful solutions that arise from actively engaging with the diversity.

30% of information about task performance

Over on the Remote Learner blog, Jason Cole has posted some information about a keynote by Dr Richard Clark at one of the US MoodleMoots. I want to focus on one key quote from that talk and its implications for Australian higher education and current trends to “improve” learning and teaching and adopt open source LMS (like Moodle).

It’s my argument that this quote, and the research behind it, has implications for the way these projects are conceptualised and run; i.e. they are missing out on a huge amount of potential.

Task analysis and the 30%

The quote from the presentation is

In task analysis, top experts only provide 30% of information about how they perform tasks.

It’s claimed that all the points made by Clark in his presentation are supported by research. It appears likely that the support for this claim comes from Sullivan et al (2008). This paper addresses the problem of trying to develop the procedural skills necessary for professions such as surgery.

The above quote arises due to the problems experts have in describing what they do. Sullivan et al (2008) offer various descriptions and references of this problem in the introduction:

This is often difficult because as physicians gain expertise their skills become automated and the steps of the skill blend together [2]. Automated knowledge is achieved by years of practice and experience, wherein the basic elements of the task are performed largely without conscious awareness [3]. This causes experts to omit specific steps when trying to describe a procedure because this information is no longer accessible to conscious processes [2]

Then later, when describing the findings of their research, they write:

The fact that the experts were not able to articulate all of the steps and decisions of the task is consistent with the expertise literature that shows that expertise is highly automated [2,3,5] and that experts make errors when trying to describe how they complete a task [3,6,7]. In essence, as the experts developed expertise, their knowledge of the task changed from declarative to procedural knowledge. Declarative knowledge is knowing facts, events, and objects and is found in our conscious working memory [2]. Procedural knowledge is knowing how to perform a task and includes both motor and cognitive skills [2]. Procedural knowledge is automated and operates outside of conscious awareness [2,3]. Once a skill becomes automated, it is fine-tuned to run on autopilot and executes much faster than conscious processes [2,8]. This causes the expert to omit steps and decision points while teaching a procedure because they have literally lost access to the behaviors and cognitive decisions that are made during skill execution [2,5].

The link to analysis and design

A large number of universities within Australia are either:

  1. Changing their LMS to an open source LMS (e.g. Moodle or Sakai), and using this as an opportunity to “renew” their online learning; and/or
  2. Busy on broader interventions to “renew” their online learning due to changes in government policies such as quality assurance, graduate attributes and a move to demand funding for university places.

The common process being adopted by most of these projects is from the planning school of process: you undertake analysis to identify all relevant, objective information, and then design the solution on that basis. You then employ a project team to ensure that the design gets implemented, and finally you put in a skeleton team that maintains the design. This applies both to information systems (e.g. the selection, implementation and support of an LMS) and to broader organisational change (e.g. strategic plans).

The problem is that the “expert problem” Clark refers to above means that it is difficult to gather all the necessary information. It’s difficult to get the people with the knowledge to tell all that they know.

A related example.

The StaffMyCQU Example

Some colleagues and I, over a period of almost 10 years, designed, supported, and evolved an information system called Staff MyCQU. An early part of its evolution is described in the “Student Records” section of this paper. It was a fairly simple web application that provided university staff with access to student records and a range of related services. Over its life cycle, a range of new and different features were added and existing features tweaked, all in response to interactions with the system’s users.

Importantly, the system’s developers were also generally the people handling user queries and problems on the “helpdesk”. Quite often, those queries would result in tweaks and changes to the system. Rather than being designed up front, the system grew and changed with people using it.

The technology used to implement Staff MyCQU is now deemed ancient and, even more importantly, the system and what it represents is now politically tainted within the organisation. Hence, for the last year or so, the information technology folk at the institution have been working on replacement systems. Just recently, there have been some concrete outcomes of that work, with systems being shown to folk, including some of those who had used Staff MyCQU. On being shown a particular feature of the new system, it soon became obvious that the system didn’t include a fairly common extension of the feature, an extension that had actually been within Staff MyCQU from the start.

The designers of the new system, with little or no direct connection to actual users doing actual work, don’t have the knowledge about user needs to design a system that is equivalent to what already exists. This is a perfect example of why the strict separation of analysis, design, implementation and use/maintenance that is explicit in most IT projects and divisions is a significant problem.

The need for growing knowledge

Sullivan et al (2008) suggest cognitive task analysis as a way of better “getting at” the knowledge held by the expert, and there’s a place for that. However, I also think there is a need to recognise that the engineering/planning method is simply not appropriate for some contexts. In those contexts, you need more of a growing/gardening approach, or at least to include more of the growing/gardening approach in your engineering method.

Rather than seeking to gather and analyse all knowledge separately from practice and prior to implementation, implementation needs to be designed to pay close attention to the knowledge that is generated during implementation, and to build the ability to act upon that knowledge.

Especially for wicked problems and complex systems

Trying to improve learning and teaching within a university is a wicked problem. There are many different stakeholders or groups of stakeholders, each with a different frame of reference, which leads to different understandings of how to solve the problem. Simple techno-rational solutions to wicked problems rely on the adoption of one of those frames of reference and ignore the remainder.

For example, the implementation of a new LMS is seen as an information technology problem and treated as such. Consequently, success is measured by uptime and on-time project delivery, not by the quality of learning and teaching that results.

In addition, as you solve wicked problems, you and all of the stakeholders learn more about the problem. The multiple frames of reference change and consequently the appropriate solutions change. This is getting into the area of complex adaptive systems. Dave Snowden has a recent post about why human complex adaptive systems are different.

Prediction

Universities that lean too heavily on engineering/planning approaches to improving learning and teaching will fail. However, they are likely to appear to succeed due to the types of indicators they choose to adopt as measurements of success, and the capability of actors to game those indicators.

Universities that adopt more of a gardening approach will have greater levels of success, but will have a messier time of it during their projects. These universities will be where the really innovative stuff comes from.

References

Sullivan, M., A. Ortega, et al. (2008). “Assessing the teaching of procedural skills: can cognitive task analysis add to our traditional teaching methods?” The American Journal of Surgery 195(1): 20-23.