Nobody likes a do-gooder – another reason for e-learning not mainstreaming?

Came across the article, “Nobody likes a do-gooder: Study confirms selfless behaviour is alienating” from the Daily Mail via Morgaine’s amplify. I’m wondering if there’s a connection between this and the chasm in the adoption of instructional technology identified by Geoghegan (1994).

The chasm

Back in 1994, Geoghegan drew on Moore’s Crossing the Chasm to explain why instructional technology wasn’t being adopted by the majority of university academics. The suggestion is that there is a significant difference between the early adopters of instructional technology and the early majority: what works for one group doesn’t work for the other. There is a chasm. Geoghegan (1994) also suggested that the “technologists’ alliance” – vendors of instructional technology and the university folk charged with supporting instructional technology – adopts approaches that work for the early adopters, not the early majority.

Nobody likes do-gooders

The Daily Mail article reports on some psychological research that draws some conclusions about how “do-gooders” are seen by the majority:

Researchers say do-gooders come to be resented because they ‘raise the bar’ for what is expected of everyone.

This resonates with my experience as an early adopter and more broadly with observations of higher education. The early adopters, those really keen on learning and teaching, are seen a bit differently by those who aren’t keen. I wonder if the “raise the bar” issue applies? I’d imagine this could be quite common in a higher education environment where research retains its primacy, but universities are under increasing pressure to improve their learning and teaching – and, more importantly, to show everyone that they have improved.

The complete study is outlined in a journal article.

References

Geoghegan, W. (1994). Whatever happened to instructional technology? Paper presented at the 22nd Annual Conference of the International Business Schools Computing Association, Baltimore, MD.

University e-learning systems: the need for new product and process models and some examples

I’m in the midst of the horrible task of trying to abstract what I think I know about implementing e-learning information systems within universities into the formal “language” required of an information systems design theory and a PhD thesis. This post is a welcome break from that, but is still connected in that it builds on what is perhaps the fundamental difference between what most universities are currently doing and what I think is a more effective approach. In particular, it highlights some more recent developments that are arguably a step towards what I’m thinking.

As it turns out, this post is also an attempt to crystallise some early thinking about what goes into the ISDT. So some of the following is a bit rough. Actually, writing this has identified one perspective that I hadn’t thought of, which is potentially important.

Edu 2.0

The post arises from having listened to this interview with Graham Glass, the guy behind Edu 2.0, which is essentially a cloud-based LMS. It’s probably one of a growing number out there. What I found interesting was his description of the product and the process behind Edu 2.0.

In terms of product (i.e. the technology used to provide the e-learning services), the suggestion was that because Edu 2.0 is based in the cloud – in this case Amazon’s S3 service – it could be updated much more quickly than more traditional institutionally hosted LMSs. There’s some connection here with Google’s approach to on-going modifications to live software.
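
To make that connection a little more concrete, here’s a minimal sketch of a feature flag – one common technique for making on-going modifications to live, centrally hosted software. This is purely illustrative; it isn’t how Edu 2.0 or Google actually implement things, and every name in it is hypothetical.

```python
# Minimal feature-flag sketch - one common way to evolve live, hosted
# software. Not how Edu 2.0 or Google actually do it; all names here
# are hypothetical.
import hashlib

# Fraction of users who should see each in-progress feature.
FLAGS = {
    "new_gradebook": 0.10,  # roll out to 10% of users first
}

def is_enabled(flag: str, user_id: str) -> bool:
    """Deterministically bucket a user so their experience is stable
    across requests, rather than flickering between old and new."""
    digest = hashlib.md5(f"{flag}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < FLAGS.get(flag, 0.0) * 100

def gradebook_view(user_id: str) -> str:
    # The hosted code base serves both versions at once; widening the
    # rollout is a one-line config change, not an institutional upgrade.
    if is_enabled("new_gradebook", user_id):
        return f"new gradebook for {user_id}"
    return f"old gradebook for {user_id}"
```

The contrast with the institutionally hosted LMS is that the change reaches users as soon as the vendor flips the flag, with no local IT department scheduling an upgrade.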

Coupled with this product flexibility was a process (i.e. the process through which users were supported and the system evolved) that very much focused on the Edu 2.0 developers interacting with the users of the product: for example, releasing proposals and screenshots of new features within discussion forums populated with users and getting feedback, and also responding quickly to requests for fixes or extensions from users. This happens to such an extent that Glass reports users of Edu 2.0 feeling like it is “their EDU 2.0” because it responds so quickly to them and their needs.

The traditional Uni/LMS approach is broken

In the thesis I argue that when you look at how universities are currently implementing e-learning information systems (i.e. selecting and implementing an LMS), the product (the enterprise LMS, the one ring to rule them all) and the process they use are not a very good match for the requirements of effectively supporting learning and teaching. In a nutshell, the product and the process are aimed at reducing diversity and the ability to learn, while diversity is a key characteristic of learning and teaching at a university. Not to mention that when it comes to e-learning within universities, it’s still very early days, and it is essential that any systemic approach to e-learning have the ability to learn from its implementation and make changes.

I attempted to expand on this argument in the presentation I gave at the EDUCAUSE’2009 conference in Denver last year.

What is needed

The alternative I’m trying to propose within the formal language of the ISDT is that e-learning within universities should seek to use a product (i.e. a specific collection of technologies) that is incredibly flexible. The product must, as much as possible, enable rapid, on-going, and sometimes quite significant changes.

To harness this flexibility, the support and development process for e-learning should focus less on top-down, quality assurance type processes and more on closely observing what is being done with the system and using those lessons to modify the product to better suit the diversity of local needs. In particular, the process needs to be adopter focused, which Surry and Farquhar (1997) describe as seeing the individual choosing to adopt the innovation as the primary force for change.

To some extent, this ability to respond to the local social context can be hard to achieve with a software product that has to be used in multiple different contexts, e.g. an LMS used in different institutions.

Slow evolution but not there yet

Not all university e-learning implementations are the same. There has been a gentle evolution away from less flexible products towards more flexible ones, e.g.

  1. Commercial LMS, hosted on institutional servers.
    Incredibly inflexible. You have to wait for the commercial vendor to see the cost/benefit argument to implement a change in the code base, and then you have to wait until your local IT department can schedule the upgrade to the product.
  2. Open source LMS, hosted on institutional servers.
    Less inflexible. You still have to wait for a developer to see your change as an interesting itch to scratch. This can be quite quick, but it can also be slow. It can be especially quick if your institution has good developers, but good developers cost big money. Even if a developer scratches your itch, the change has to be accepted into the open source code base, which can take some time if it’s a major change. Then, finally, after the code base is changed, you have to wait for your local IT shop to schedule the upgrade.
  3. Open source LMS, with hosting outsourced.
    This can be a bit quicker than the institutionally hosted version, mainly because the hosting company may well have some decent developers and significant knowledge of upgrading the LMS. However, it’s still going to cost a bit, and it’s not going to be real quick.

The cloud-based approach used by EDU 2.0 does offer a product that is potentially more flexible than existing LMS models. However, even setting aside the general slowness of updating, a change that is very specific to an individual institution is going to cause significant problems, regardless of the product model.

Some alternative product models

The EDU 2.0 model doesn’t help the customisation problem. In fact, it probably makes it a bit worse as the same code base is being used by hundreds of institutions from across the globe. The model being adopted by Moodle (and probably others), having plugins you can add, is a step in the right direction in that institutions can choose to have different plugins installed.
However, this model typically assumes that all the plugins have to use the same API, language, or framework. If they don’t, they can’t be installed on the local server and integrated into the LMS.

This requirement is necessary because there is an assumption for many (but not all) plugins that they provide the entire functionality and must run on the local server. So there is a need for a tighter coupling between the plugin and the LMS, and consequently less local flexibility.

A plugin like BIM is a little different. There is a wrapper that is tightly integrated into Moodle to provide some features. However, the majority of the functionality is provided by software (in this case blog engines) chosen by the individual students. Here the flexibility is provided by the loose coupling between blog engine and Moodle.
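
As a rough illustration of that loose coupling – a sketch only, since BIM itself is a Moodle/PHP module and the names and URLs below are hypothetical – the institution-side code needs little more than a feed aggregator, because each student can nominate whatever blog engine they like as long as it produces RSS or Atom:

```python
# Sketch of the loose-coupling idea: the LMS side only consumes
# standard feeds, so students can use any blog engine they like.
# Not BIM's actual code; names and URLs are hypothetical.
# Assumes the third-party `feedparser` library.
import feedparser

# Hypothetical registrations: student id -> the feed they nominated.
student_feeds = {
    "s0012345": "https://example.wordpress.com/feed/",
    "s0067890": "https://example.blogspot.com/feeds/posts/default",
}

def aggregate_posts(feeds):
    """Fetch each student's feed and return their recent posts."""
    posts = {}
    for student, url in feeds.items():
        feed = feedparser.parse(url)  # handles RSS and Atom alike
        posts[student] = [
            {"title": entry.get("title", ""), "link": entry.get("link", "")}
            for entry in feed.entries
        ]
    return posts

if __name__ == "__main__":
    for student, entries in aggregate_posts(student_feeds).items():
        print(f"{student}: {len(entries)} posts")
```

The Moodle-side wrapper can then mark, display, or archive those posts without caring which engine produced them, which is where the local flexibility comes from.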

Mm, still need some more work on this.

References

Surry, D., & Farquhar, J. (1997). Diffusion Theory and Instructional Technology. e-Journal of Instructional Science and Technology, 2(1), 269-278.

How people learn and implications for academic development

While I’m traveling this week I am reading How people learn. This is a fairly well known book that arose out of a US National Academy of Sciences project to examine recent research about how people learn and then generate insights for teaching. I’ll be reading it through the lens of my thesis and some broader thinking about “academic development” (one of the terms applied to trying to help improve learning and teaching at universities).

Increasingly, I’ve been thinking that “academic development” is essentially “teaching the teacher”, though it would be better phrased as creating an environment in which academics can learn how to be better at enabling student learning. Hand in hand with this thought is the observation, and increasing worry, that much of what passes for academic development and management action around improving learning and teaching is not conducive to creating this learning environment. The aim of reading this book is to think about ways in which this situation might be improved.

The last part of this summary of the first chapter connects with the point I’m trying to make about academic development within universities.

(As it turns out, I only read the first chapter while traveling; the remaining chapters come now.)

Key findings for learning

The first chapter of the book provides three key (but not exhaustive) findings about learning:

  1. Learners arrive with their own preconceptions about how the world works.
    As part of this, if the early stages of learning do not engage with the learner’s understanding of the world, then the learner will either not get it, or will get it just enough to pass the test, but then revert to their existing understanding.
  2. Competence in a field of inquiry arises from three building blocks:
    1. a deep foundation of factual knowledge;
    2. an understanding of those facts and ideas within a conceptual framework; and
    3. the organisation of knowledge in ways that enable retrieval and application.

    A primary idea here is that experts aren’t simply “smart” people, but they do have conceptual frameworks that help them apply and understand knowledge much more quickly than others.

  3. An approach to teaching that enables students to implement meta-cognitive strategies can help them take control of their learning and monitor their progress.
    Meta-cognitive strategies aren’t context or subject independent.

Implications for teaching

The suggestion is that the above findings about learning have significant implications for teaching. These are:

  1. Teachers have to draw out and work with pre-existing student understandings.
    This implies lots more formative assessment that focuses on demonstrating understanding.
  2. In teaching a subject area, important concepts must be taught in depth.
    The superficial coverage of concepts (to fit it all in) needs to be avoided, with more of a focus on those important subject concepts.
  3. The teaching of meta-cognitive skills needs to be integrated into the curriculum of a variety of subjects.

Four attributes of learning environments

A later chapter expands on a framework for designing and evaluating learning environments. It includes four interrelated attributes of these environments:

  1. They must be learner centered;
    i.e. a focus on the understandings and progress of individual students.
  2. The environment should be knowledge centered, with attention given to what is taught, why it is taught, and what competence or mastery looks like.
    Suggests too many curricula fail to support learning because the knowledge is disconnected and assessment encourages memorisation rather than learning. A knowledge-centered environment “provides the necessary depth of study, assessing student understanding rather than factual memory and incorporates the teaching of meta-cognitive strategies”.

    There’s an interesting point here about engagement, that I’ll save for another time.

  3. Formative assessments
    The aim is for assessments that help both students and teachers monitor progress.
  4. Develop norms within the course, and connection with the outside world, that support core learning values.
    i.e. pay attention to activities, assessments etc within the course that promote collaboration and camaraderie.

Application to professional learning

In the final section of the chapter, the authors state that these principles apply equally well to adults as they do to children. They explain that

This point is particularly important because incorporating the principles in this volume into educational practice will require a good deal of adult learning.

i.e. if you want to improve learning and teaching within a university based on these principles, then the teaching staff will have to undergo a fair bit of learning. This is very troubling because the authors argue that “approaches to teaching adults consistently violate principles for optimizing learning”. In particular, they suggest that professional development programs for teachers frequently:

  • Are not learner centered.
    Rather than ask what help is required, teachers are expected to attend pre-arranged workshops.
  • Are not knowledge centered.
    i.e. these workshops introduce the principles of a new technique with little time spent on the more complex integration of the new technique with the other “knowledge” (e.g. the TPACK framework) associated with the course.
  • Are not assessment centered.
    i.e. when learning these new techniques, the “learners” (teaching staff) aren’t given opportunities to try them out, get feedback, or develop the skills to know whether or not they’ve implemented the new technique effectively.
  • Are not community centered.
    Professional development consists more of ad hoc, separate events with little opportunity for a community of teachers to develop connections for on-going support.

Here’s a challenge. Is there any university out there where academic development doesn’t suffer from these flaws? How has that been judged?

The McNamara Fallacy and pass rates, academic analytics, and engagement

In some reading for the thesis today I came across the concept of McNamara’s fallacy. I hadn’t heard this before. This is somewhat surprising as it points out another common problem with some of the more simplistic approaches to improving learning and teaching that are going around at the moment. It’s also likely to be a problem with any simplistic implementation of academic analytics.

What is it?

The quote I saw describes McNamara’s fallacy as

The first step is to measure whatever can be easily measured. This is ok as far as it goes. The second step is to disregard that which can’t be easily measured or to give it an arbitrary quantitative value. This is artificial and misleading. The third step is to presume that what can’t be measured easily really isn’t important. This is blindness. The fourth step is to say that what can’t be easily measured really doesn’t exist. This is suicide.

The Wikipedia page on the McNamara fallacy ties it to Robert McNamara – the US Secretary of Defense from 1961 through 1968 – and puts the USA’s failure in Vietnam down to his focus on quantifying success through simple indicators, such as enemy body count, while ignoring other, more important factors. Factors that were more difficult to measure.

The PhD thesis in which I saw the above quote ascribes it to Yankelovich (1972), a sociologist. Wikipedia ascribes it to Charles Handy’s “The Empty Raincoat”. Perhaps this indicates that the quote is from McNamara himself, just presented in different places.

Pass rates

Within higher education it is easy to see “pass rates” as an example of McNamara’s fallacy. Much of the quality assurance within higher education institutions is focused on checking the number of students who do (or don’t) pass a course. If the pass rate for a course isn’t too low, everything is okay. It is much easier to measure this than the quality of the student learning experience, the learning theory which informs the course design, or the impact the experience has on the student, now and into the future. This sort of unquestioning application of McNamara’s fallacy sometimes makes me think we’re losing the learning and teaching “war” within universities.

What are the more important, more difficult to measure indicators that provide a better and deeper insight into the quality of learning and teaching?

Analytics and engagement

Student engagement is one of the buzzwords on the rise in recent years; it’s been presented as one of the ways/measures to improve student learning. After all, if students are more engaged, obviously they must have a better learning experience. Engagement has become an indicator of institutional teaching quality. Col did a project last year in which he looked more closely at engagement; the write-up of that project gives a good introduction to student engagement. It includes the following quote

Most of the research into measuring student engagement prior to the widespread adoption of online, or web based classes, has concentrated on the simple measure of attendance (Douglas & Alemanne, 2007). While class attendance is a crude measure, in that it is only ever indicative of participation and does not necessarily consider the quality of the participation, it has nevertheless been found to be an important variable in determining student success (Douglas, 2008)

Sounds a bit like a case of McNamara’s fallacy to me. A point Col makes when he says “it could be said that class attendance is used as a metric for engagement, simply because it is one of the few indicators of engagement that are visible”.

With the move to the LMS, it was always going to happen that academic analytics would be used to develop measures of student engagement (and other indicators). Indeed, that’s the aim of Col’s project. However, I do think that academic analytics is going to run the danger of McNamara’s fallacy: so busy focusing on what we can measure easily, we miss the more important stuff that we can’t.
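
To see how easily the fallacy creeps in, here’s a deliberately naive sketch of the kind of “engagement” measure that LMS data makes easy. The log format and field names are hypothetical; the point is that the score captures clicks, not the quality of anyone’s learning.

```python
# A deliberately naive "engagement" measure of the kind LMS data makes
# easy. The log format and the "student_id" field are hypothetical.
from collections import Counter
import csv

def engagement_scores(log_path):
    """Count LMS activity-log events per student - easy to measure,
    but, like class attendance, only ever indicative of participation."""
    scores = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            scores[row["student_id"]] += 1  # every click counts the same
    return scores

# A forum post that sparks a week of thinking and an idle page refresh
# each add exactly 1 to the score - step one of McNamara's fallacy.
```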

30% of information about task performance

Over on the Remote Learner blog, Jason Cole has posted some information about a keynote by Dr Richard Clark at one of the US MoodleMoots. I want to focus on one key quote from that talk and its implications for Australian higher education and current trends to “improve” learning and teaching and adopt open source LMS (like Moodle).

It’s my argument that this quote, and the research behind it, has implications for the way these projects are conceptualised and run. i.e. they are missing out on a huge amount of potential.

Task analysis and the 30%

The quote from the presentation is

In task analysis, top experts only provide 30% of information about how they perform tasks.

It’s claimed that all the points made by Clark in his presentation are supported by research. It appears likely that the support for this claim comes from Sullivan et al (2008). This paper addresses the problem of trying to develop the procedural skills necessary for professions such as surgery.

The above quote arises due to the problems experts have in describing what they do. Sullivan et al (2008) offer various descriptions and references of this problem in the introduction

This is often difficult because as physicians gain expertise their skills become automated and the steps of the skill blend together [2]. Automated knowledge is achieved by years of practice and experience, wherein the basic elements of the task are performed largely without conscious awareness [3]. This causes experts to omit specific steps when trying to describe a procedure because this information is no longer accessible to conscious processes [2]

Then later, when describing the findings of their research they write

The fact that the experts were not able to articulate all of the steps and decisions of the task is consistent with the expertise literature that shows that expertise is highly automated [2,3,5] and that experts make errors when trying to describe how they complete a task [3,6,7]. In essence, as the experts developed expertise, their knowledge of the task changed from declarative to procedural knowledge. Declarative knowledge is knowing facts, events, and objects and is found in our conscious working memory [2]. Procedural knowledge is knowing how to perform a task and includes both motor and cognitive skills [2]. Procedural knowledge is automated and operates outside of conscious awareness [2,3]. Once a skill becomes automated, it is fine-tuned to run on autopilot and executes much faster than conscious processes [2,8]. This causes the expert to omit steps and decision points while teaching a procedure because they have literally lost access to the behaviors and cognitive decisions that are made during skill execution [2,5].

The link to analysis and design

A large number of universities within Australia are either:

  1. Changing their LMS to an open source LMS (e.g. Moodle or Sakai), and using this as an opportunity to “renew” their online learning; and/or
  2. Busy on broader interventions to “renew” their online learning due to changes in government policies such as quality assurance, graduate attributes, and a move to demand-driven funding of university places.

The common process being adopted by most of these projects is from the planning school of process: you undertake analysis to identify all relevant, objective information and then design the solution on that basis. You then employ a project team to ensure that the design gets implemented, and finally you put in a skeleton team that maintains the design. This applies both to information systems (e.g. the selection, implementation, and support of an LMS) and to broader organisational change (e.g. strategic plans).

The problem is that the “expert problem” Clark refers to above means that it is difficult to gather all the necessary information. It’s difficult to get the people with the knowledge to tell all that they know.

A related example.

The StaffMyCQU Example

Some colleagues and I – over a period of almost 10 years – designed, supported, and evolved an information system called Staff MyCQU. An early part of its evolution is described in the “Student Records” section of this paper. It was a fairly simple web application that provided university staff with access to student records and a range of related services. Over its life cycle, a range of new and different features were added and existing features tweaked, all in response to interactions with the system’s users.

Importantly, the system’s developers were also generally the people handling user queries and problems on the “helpdesk”. Quite often, those queries and problems would result in tweaks and changes to the system. Rather than being designed up front, the system grew and changed with the people using it.

The technology used to implement Staff MyCQU is now deemed ancient and, even more importantly, the system and what it represents is now politically tainted within the organisation. Hence, for the last year or so, the information technology folk at the institution have been working on replacement systems. Just recently, there have been some concrete outcomes of that work, which has resulted in systems being shown to folk, including some of the folk who had used Staff MyCQU. On being shown a particular feature of the new system, it soon became obvious that the system didn’t include a fairly common extension of the feature – an extension that had actually been within Staff MyCQU from the start.

The designers of the new system, with little or no direct connection with actual users doing actual work, don’t have the knowledge of user needs required to design a system that is equivalent to what already exists. A perfect example of why the strict separation of analysis, design, implementation, and use/maintenance that is explicit in most IT projects and divisions is a significant problem.

The need for growing knowledge

Sullivan et al (2008) suggest cognitive task analysis as a way of better “getting at” the knowledge held by the expert, and there’s a place for that. However, I also think there is a need for recognition that the engineering/planning method is just not appropriate for some contexts. In some contexts you need more of a growing/gardening approach; in others you need to include more of the growing/gardening approach within your engineering method.

Rather than seeking to gather and analyse all knowledge separately from practice and prior to implementation, implementation needs to be designed to pay close attention to the knowledge that is generated during implementation, and to provide the ability to act upon that knowledge.

Especially for wicked problems and complex systems

Trying to improve learning and teaching within a university is a wicked problem. There are many different stakeholders, or groups of stakeholders, each with a different frame of reference, which leads to different understandings of how to solve the problem. Simple techno-rational solutions to wicked problems rely on adopting one of those frames of reference and ignoring the remainder.

For example, implementation of a new LMS is seen as an information technology problem and treated as such. Consequently, success is measured by uptime and successful project implementation, not by the quality of learning and teaching that results.

In addition, as you solve wicked problems, you and all of the stakeholders learn more about the problem. The multiple frames of reference change and consequently the appropriate solutions change. This is getting into the area of complex adaptive systems. Dave Snowden has a recent post about why human complex adaptive systems are different.

Prediction

Universities that lean too heavily on engineering/planning approaches to improving learning and teaching will fail. However, they are likely to appear to succeed due to the types of indicators they choose to adopt as measurements of success and the capability of actors to game those indicators.

Universities that adopt more of a gardening approach will have greater levels of success, but will have a messier time of it during their projects. These universities will be where the really innovative stuff comes from.

References

Sullivan, M., Ortega, A., et al. (2008). Assessing the teaching of procedural skills: Can cognitive task analysis add to our traditional teaching methods? The American Journal of Surgery, 195(1), 20-23.

No messiness here thanks – SNAFU all the way

Dave Cormier has presented 8 hypotheses summarising what he’s learned about open learning. I’ve been meaning to post about it for a day or so; there is some really good stuff here. Some really important insights that formal education institutions need to think about, especially those wishing to ride the online learning revolution into the greener fields of growing student markets.

I’m posting this now because Stephen Downes has picked up on a couple of sentences from Dave’s post that I think summarise exactly why it will be very difficult for some formal educational institutions to even consider learning from Dave’s hypotheses. Here’s the quote from Dave’s post

One of the biggest advantages of openness (and it is one of the things that connects all of the points here) is the messiness of the process. It allows for the unexpected good and bad, to pop into a learning scenario. It approximates real life in a way that is sadly lacking from the majority of our educational encounters

“Strategic” management and its problem with messiness

For a long time (I have seen quotes going back to the early 1900s) universities have been exhorted to be run more like commercial businesses. This is typically code for having strong, strategic leadership. In turn this translates into having a firmly accepted vision of where the institution is going and how it is going to get there. It is efficient, nothing is wasted on extraneous activities that do not contribute to pre-identified missions and methods. Everything is consistent. It is tidy.

In some/most cases this drive for consistency – perhaps better phrased as “an absence of messiness” – results in defensive routines and the SNAFU principle. A consequence of the SNAFU principle is the “progressive disconnection of decision-makers from reality”.

This could be phrased alternatively as: the further you progress up the organisational ladder, the cleaner the world seems. Any messiness is to be kept out because it makes life “difficult”.

Hypothesis #1

Perhaps it’s time to start proposing some hypotheses of my own, in this case about how to improve L&T within a university. Here’s #1.

Hypothesis 1 – Embrace the messiness
Open learning is messy. Innovation and chance are messy. Learning and teaching within a complex setting are messy. To implement widespread change in learning and teaching, a university has to actively embrace and engage with messiness.

Mmm, this approach could actually produce something a bit more positive and actually contribute to the thesis. Speaking of which…