PhD Update – Week #4 – Frustration and progress

This week is turning out to be perhaps the most frustrating, not due to lack of progress, but instead due to connections between what I’m reading/writing and what I’m seeing in my local context. As per last week’s update, the aim this week was to complete sections of chapter 2 related to the Ps Framework. The first section I targeted was “Past Experience” and this has been the source of the frustration.

The more I read, synthesize and write about the history of learning and teaching in universities, especially e-learning, the more frustrated I get. Mainly because I keep seeing the same mistakes being made again and again, especially locally.

The frustration means I’ve bitten the bullet and am writing this update a bit earlier than normal.

What I’ve done this week

Here’s a summary of what I said I’d do last week and what has actually happened:

  • Complete as many sections of the Ps Framework (chapter 2) as possible and have most put onto the blog.

    I posted a first draft of the “introduction to the Ps framework” section. I’ve made some significant progress in structuring most of the 7 sections associated with components of the Ps Framework. Most progress has been made on the “Past Experience” section.
  • Need to complete reading the theory building paper and provide feedback.
    I’ve done nothing on this one. Sorry Shirley.
  • Need to tidy up a bit of the other outstanding literature I have gathered.
    This has been the other task I’ve done this week. Trouble is that there has been a minimum of tidying and a maximum of finding more literature that needs tidying up. That said, the new literature is good stuff and will help – but it’s still frustrating (and always will be) to find new insights that help inform what you’re doing.

In terms of PhD-related blog posts, I’ve done the following this week:

  • First draft of the “introduction to the Ps framework” section.
  • A post railing against the “technology will change teaching” mantra that I’m seeing all the time these days, even though past experience suggests it’s nowhere near that simple.
  • A post drawing on some insights from Alavi and Leidner (2001) about organisational implementation of e-learning.
  • A first post taking a couple of lessons from history and applying them to LMS implementation.
  • A summary of a paper that applies some insights from information systems to e-learning implementation.

What’s the aim for next week?

I’m hoping by this time next week that I’ll have:

  • Completed at least 2 sections of the Ps Framework for Chapter 2 – probably “Past Experience” and “People”. If I’m motivated, perhaps add “Product”.
  • Cleaned up a lot of the literature I’ve found in the last week.

Of course, next week is shaping up to be a particularly frustrating week from other perspectives, so it will be interesting to see if any of the above gets done.

Technology will *not* change the way we teach – an example why we’re an amnesiac field

I’m currently working on chapter 2 of my thesis, which is an explication of some of what is known about “e-learning” through the lens of the Ps Framework.

Last night I posted a draft of the section of the chapter that introduces the Ps Framework. Today, I’ve been working on getting a first draft of the “Past Experience” section of the chapter onto the blog.

I’ve already mined some of this work for a previous post about how the lessons of the past can inform the present. That post even used the “doomed to repeat” quote.

Today, I’ve come across a new quote, specifically about the learning technology field, that was going to serve as a basis for this post. Then, as fate would have it (pause for breath), I came across this post from George Siemens, which mentions an article in “The Wired Campus”, which in turn references this essay from the current issue of Academic Commons.

The topic is e-portfolios. Something of which I am a self-confessed skeptic.

In this post I’m going to try to justify a link between the comments in the article from “The Wired Campus” and an important lesson that we haven’t learned from history. In particular, I’d like to create another couple of data points to support this claim from Oliver (2003):

Learning technology often seems an amnesiac field

What’s claimed for e-portfolios

The article from “The Wired Campus” starts off with the claim

If we truly want to advance from a focus on teaching to a focus on student learning, then a strategy involving something like electronic student portfolios, or ePortfolios, is essential.

The article ends with

At the moment, ePortfolios represent perhaps the most promising strategy for responding to calls for accountability and at the same time nurturing a culture of experimentation with new forms of learning.

In between it suggests four fundamental features of ePortfolios:

  1. Integrate student learning in an expanded range of media, literacies and viable intellectual work.
  2. Enable students to link together diverse parts of their learning including the formal and informal curriculum.
  3. Engage students with their learning.
  4. Offer colleges a meaningful mechanism for accessing and organising the evidence of student learning.

Wow! The solution has been found

What’s been forgotten?

Zemsky and Massey (2004) claim

One of the more hopeful assumptions guiding the push for e-learning was the belief that the use of electronic technologies would force a change in how university students are taught.

Along similar lines, Littlejohn and Peacock (2003) say

There was, in many, a false assumption that exposure to computers and CAL packages was sufficient to drive the development of new forms of teaching with technology

Conole (2003) weighs in from a different track, but still somewhat related (at least in my PhD-ravaged mind – my emphasis added)

Politics is a very strong theme that runs across all learning technology research. This in part relates to the over hyping which occurs, leading to an over expectation of what is possible. It is also partly due to different local agendas and associated in-fighting as well as the major impact that technologies can have.

One example of this over-hyping is Suppes (1966) – a Stanford professor writing in Scientific American in the 60s about computer-assisted learning.

the processing and the uses of information are undergoing an unprecedented technological revolution… One can predict that in a few more years millions of school-children will have access to what Philip of Macedon’s son Alexander enjoyed as a royal prerogative: the personal services of a tutor as well-informed and responsive as Aristotle.

Well, it’s 43 years later – have you got your personal Aristotle?

Do you get any sense that this has a connection with what’s happening with e-portfolios (or in some contexts open source learning management systems) at the moment?

How do you change teaching?

I don’t think technology is going to change teaching. If anything, the history of e-learning (and other innovations) offers strong evidence that new technology will get used as “horseless carriages”: doing the old stuff with the new tools.

If you want to change teaching, and subsequently student learning, I currently subscribe to some of the work of Trigwell (2001) (and others) as represented by the following figure. I.e. if you want to change teaching, you have to change the strategies used by teachers.

Zemsky and Massey (2004) again

Elearning will become pervasive only when faculty change how they teach—not before.

I believe the same applies for e-portfolios.

Trigwell's model of teaching

At best, a new technology will offer a small change in the outer onion skin in the above figure – the teaching and learning context. A new technology, like eportfolios, with some affordances that actively support better teaching strategies will certainly make it possible for teaching to change. I just don’t think the addition of a new technology will make it likely (or certain) that teaching will change.

There are too many other complicating factors within the teaching and learning context (I’m assuming a university context) that are likely to overwhelm the addition of the new technology. Not to mention the complexity of the interactions between changes in the teaching and learning context and all the other onion skins. For example, the change in teaching will, to some extent, rely on students seeing the value in the change and adapting to it. This is not always a given.

Why do we continue to focus on the technology?

(Where I’m defining technology as more than just the eportfolio system: it also includes all the learning designs and other resources that exist to help staff use the system.)

Back to Littlejohn and Peacock (2003)

This is because technological issues have in the main been easier to solve than the more complex, social, cultural and organisational issues involved in mainstreaming technology in learning and teaching.

It’s easy to introduce an e-portfolio system and offer training sessions in how to use it. This is all an example of “level 2” knowledge about how to improve learning and teaching (see this post introducing reflective alignment).

One of the related reasons (at least for me) is the “technologists alliance” (Geoghegan, 1994) that is talked about in my original comments on eportfolios.

Conclusions

No form of technology will, on its own, improve learning and teaching. Actually, I think the authors of the Wired Campus article and most of the people who read this will know this. So am I simply creating a strawman argument? Am I simply stating the obvious?

The reason I bother pointing out the obvious is that I continue to see this happening in universities. It’s happening in my current university right now. While the folk who are deeply into learning technology understand that technology will not change learning and teaching, many of the folk who take on the task of improving learning and teaching use this rhetoric to justify their task. In many cases, some of these folk do believe it.

For example, I’d love to do a survey of all the universities in the world and find out how many started the process of adopting an e-portfolio after someone in the institution’s senior leadership read the Wired Campus article or the Academic Commons paper on eportfolios and thought that it sounded like a really good idea – regardless of the current state or requirements of the local institution, and almost certainly without a detailed knowledge of the factors at play in the Trigwell figure above.

References

Conole, G. (2003). Understanding enthusiasm and implementation: E-learning research questions and methodological issues. Learning Technology in Transition: From Individual Enthusiasm to Institutional Implementation. J. K. Seale. Lisse, Netherlands, Swets & Zeitlinger: 129-146.

Geoghegan, W. (1994). Whatever happened to instructional technology? 22nd Annual Conference of the International Business Schools Computing Association, Baltimore, MD, IBM.

Littlejohn, A. and S. Peacock (2003). From pioneers to partners: The changing voices of staff developers. Learning Technology in Transition: From Individual Enthusiasm to Institutional Implementation. J. K. Seale. Lisse, Netherlands, Swets & Zeitlinger: 77-89.

Suppes, P. (1966). The Uses of Computers in Education. Scientific American: 207-220.

Trigwell, K. (2001). “Judging university teaching.” The International Journal for Academic Development 6(1): 65-73.

Zemsky, R. and W. F. Massey. (2004). “Thwarted innovation: What happened to e-learning and why.” Retrieved 1st July, 2004, from http://www.thelearningalliance.info/WeatherStation.html.

Coordination, support and knowledge sharing associated with e-learning – where does your organisation fit?

A recent post summarised a paper that took some insights from the information systems discipline and applied them to the implementation of an LMS/VLE. This post draws on some insights from Alavi and Leidner (2001), an influential paper (208 citations on Google Scholar) from the information systems discipline – a paper that calls for IS researchers to focus more on technology-mediated learning, i.e. e-learning.

Of the many ways the paper suggests IS researchers can make a contribution, the following is the focus of this post.

Lastly, at the organizational level of analysis, IS scholars might have insights to provide to the question of how to encourage instructors (in the role of end-users) to incorporate IT tools to improve their instructional design, development, and delivery.

Based on a review of the literature, the authors suggest a simple matrix – the figure below – to summarise four common approaches universities take to the coordination, support and knowledge sharing around e-learning at the organisational level.

Coordination, support and knowledge sharing

The four quadrants can be described as:

  1. Acquisition of technology and their support is uncoordinated. Sharing of knowledge is random – generally limited to ad hoc social networks.
  2. Technology acquisition and support remains uncoordinated, but some facilitation of knowledge sharing occurs.
  3. Technology acquisition and support is now coordinated across the institution, however, knowledge sharing is random.
  4. Technology acquisition and support is coordinated and knowledge sharing is facilitated.

Alavi and Leidner suggest that most universities are in quadrant #1. That may have been true in North America in 2000/2001, and it might continue to be true there. At this point in time, in Australia, I believe most universities could probably be said to be in quadrants 3 and 4. Though, in reality, there are probably aspects of practice that dip into the other quadrants. Where is your university?

The quadrant I’m most familiar with is probably quadrant 3, though at times we might have touched on 4. Facilitation of knowledge sharing is by far the most difficult of the three tasks, and one I’m not sure anyone has really grappled with effectively. Especially because facilitation of knowledge sharing is not separate from acquisition and support, though it is often treated as separate.

Acquisition and support can easily be allocated to the information technology folk, which means that facilitation of knowledge sharing can occur around how to use the technology. But leveraging the knowledge sharing to inform the modification of existing, or adoption of new, technology has to battle across the disciplinary gulf that separates learning and teaching from IT. Not only that, it also has to battle the problem of resource starvation, where funding/resourcing for L&T IT gets starved of attention due to the “size” of the problem outside of L&T.

References

Alavi, M. and D. E. Leidner (2001). “Research commentary: technology-mediated learning – a call for greater depth and breadth of research.” Information Systems Research 12(1): 1-10.

The Ps framework

The following is a section of my thesis – chapter 2. As I get first drafts of this stuff done, I’m going to post it to the blog – where appropriate. This is the first.

This section is the first major part of chapter 2 – the literature review. It explains the background of the Ps Framework which will be used to structure the rest of the chapter.

The Ps Framework

This chapter aims to illustrate knowledge of the extant literature and associated worthy research issues around the problem of designing and implementing e-learning and the supporting information systems within universities. The development of this understanding and its description in the remainder of this chapter has been achieved through the formulation of the Ps Framework as a theory to enable analysis, understanding and description of that extant literature. Elsewhere (Jones 2008; Jones, Vallack et al. 2008) the Ps Framework has been used to illustrate how the framework can be a useful tool for helping diverse stakeholders to effectively share and negotiate their various perspectives and, consequently, make sound and pragmatic decisions around e-learning. In this chapter, the Ps Framework helps illustrate that the literature survey is “constructively analytical rather than merely descriptive” (Perry 1998), and its components form the main section headings of this chapter.

The first part of this section (Why the Ps Framework?) provides a brief description of why the Ps Framework is necessary. Next (Components of the Ps framework), the individual components of the Ps Framework and their graphical representation are explained. The next section (hopefully online later this week) begins the use of the components of the Ps Framework, in this case “Past Experience”, to describe one aspect of what is currently known about e-learning.

Why the Ps Framework?

The focus of this work is the development of an Information Systems Design Theory (ISDT) for e-learning. The aim is to develop insight into appropriate approaches to the design and implementation of e-learning. Consequently, the research in this thesis can be seen as a design problem. There is growing interest in design, design research and design theory in fields such as management (Boland 2002; van Aken 2004; van Aken 2005), information systems (Walls, Widmeyer et al. 1992; Hevner, March et al. 2004; Walls, Widmeyer et al. 2004; Gregor and Jones 2007), and education (Brown 1992; Collins 1992; Shavelson, Phillips et al. 2003). Design is the core of all professional training (Simon 1996). Design can be seen as a transformation from some known situation (the initial state), which is deemed to be problematic by some interested parties, into a target state (Jarvinen 2001). The formulation of the initial state into an effective representation is crucial to finding an effective design solution (Weber 2003). Representation has a profound impact on design work (Hevner, March et al. 2004), particularly on the way in which tasks and problems are conceived (Boland 2002).

The organisational selection, adoption and use of educational technology by universities is increasingly seen as an information systems implementation project (Jones, Vallack et al. 2008). How such projects are conceptualised significantly influences the design of the resulting system. Jamieson and Hyland (2006) suggest that there are relationships between decisions made in the pre-implementation phase of an information systems project, the factors considered in those decisions and the degree of success of the project outcomes. During the pre-implementation phase, decisions involve a high volume of information, are incredibly complex, and are associated with a high degree of uncertainty (Jamieson and Hyland 2006). There remains some distance to travel until there is a complete understanding of the complexity of innovation and change around university implementation of e-learning (Cousin, Deepwell et al. 2004).

Bannister and Remenyi (1999) contend that, given such difficult decisions, both individual and corporate decision makers will more than likely base their decisions on instinct. Given the non-linear nature of e-learning implementation, it becomes more complex to handle and there is a need for meaning-makers or mediators between the different elements of the “implementation ecology” (Cousin, Deepwell et al. 2004). How a design problem is conceptualised by the members of an organization influences what they see as valid solutions to that problem, and impacts directly on the quality of the decisions they make about projects. Different members of an organization will, as a result of their different experiences, have varying perspectives on a design problem. Too often, the full diversity of experience is so difficult to capture, compare and contrast that decision-making processes, both consciously and unconsciously, avoid the attempt.

Frameworks offer new ways of looking at phenomena and provide information on which to base sound, pragmatic decisions (Mishra and Koehler 2006). Gregor (2006) defines taxonomies, models, classification schema and frameworks as theories for analysing, understanding and describing the salient attributes of phenomena and the relationships therein. The development of taxonomies, models and frameworks to aid understanding is common in most disciplines. Examples from the educational technology field include:

  • the 4Es conceptual model (Collis, Peters, & Pals, 2001);
    This is a model to predict the acceptance of ICT innovations by an individual within an educational context. It proposes that an individual’s acceptance of educational ICT innovations is based upon four concepts: environment, effectiveness, ease of use and engagement.
  • the ACTIONS model (Bates, 2005); and
    This framework provides guidance to the process of selecting a particular educational technology by drawing on 7 components: Access, Costs, Teaching and learning, Interactivity and user-friendliness, Organisational issues, Novelty and Speed.
  • E-learning ecology elements (Cousin, Deepwell et al. 2004).
    Four elements or domains are identified as requiring consideration during the implementation of e-learning within universities: pedagogical, technological, cultural and organisational.

Components of the Ps Framework

The Information Technology (IT) artifact is often taken for granted or assumed to be unproblematic, which often results in narrow conceptualisations of what technology is, how it has effects, and how and why it is implicated in social change (Orlikowski and Iacono 2001). Such limited conceptualisations often view IT as fixed, neutral and independent of its context of use. The position taken in this thesis, and demonstrated in this chapter through the use of the Ps framework, is that IT is one of a number of components of an emergent process of change where the outcomes are indeterminate because they are situationally and dynamically contingent (Markus and Robey 1988). On-going change is not solely “technology led” or solely “organisational/agency driven”; instead, change arises from a complex interaction between technology, people and the organization (Marshall and Gregor 2002). This view of the Ps framework and the IT artifact connects with Orlikowski and Iacono’s (2001) ensemble view of the IT artifact, where technology is seen to be embedded within the conditions of its use.

The Ps Framework consists of 7 components, only one of which – Product – specifically encompasses technology. The remaining six seek to describe and understand the parts of the complex and dynamic social context within which e-learning is applied. The seven components of the Ps framework are (as work progresses and I post additional sections of the chapter, I’ll link to them from the following):

  1. The problem and purpose;
    What is the purpose or reason for the organization in adopting e-learning or changing how it currently implements e-learning? What does the organization hope to achieve? How does the organization conceptualise e-learning?
  2. Place;
    What is the nature of the organization in which e-learning will be implemented? What is the social and political context within which it is placed?
  3. People;
    What types of people and roles exist within the organization? Management, professional and academic staff, students. What are their beliefs, biases and cultures?
  4. Pedagogy;
    What are the conceptualisations about learning and teaching which the people within the place bring to e-learning? What practices are being used to learn and teach? What practices might they like to adopt?
  5. Past experience;
    What has gone on before with e-learning, both within and outside of this particular place? What worked and what didn’t?
  6. Product; and
    What system has been chosen or designed to implement e-learning? Here, system is used in the broadest possible sense to include the hardware, software and support roles.
  7. Process.
    What are the characteristics of the process used to choose how or what will be implemented and what process will be used to implement the chosen approach?

One explanation, of potentially many, of the relationship between the seven components starts with purpose. Some event, problem or factor arises that will require the organization to change the way in which it supports e-learning. This becomes the purpose underlying a process used by the organization to determine how (process) and what (product) it will change. This change will be influenced by a range of factors including: characteristics of the organization and its context (place); the nature and conceptions of the individuals and cultures within it (people); the conceptualisations of learning and teaching (pedagogy) held by the people and the organization; and the historical precedents within and outside the organisation (past experience).

This is not to suggest that there exists a simple linear, or even hierarchical, relationship between the components of the Ps Framework. The context of implementing educational technology within a university is too complex for such a simple reductionist view (Jones, Vallack et al. 2008). As stated above, the perspective underpinning the Ps Framework is one where the technology is one of seven components of an emergent process of change where the outcomes are indeterminate because they are situationally and dynamically contingent (Markus and Robey 1988).

Figure 1 provides a representation of the 7 components of the Ps Framework for E-Learning. The situationally contingent nature of these components is represented by the Place component encapsulating the remaining six. The dynamically contingent nature of these components is represented by the messiness of their representation. It is also intended that each component be connected in some way to every other component, as a representation that each component can influence, and be influenced by, every other.

The Ps Framework: a messy version

“Blame the teacher” and its negative impact on learning and e-learning

The following post is sparked by reading Findlow (2008) as part of my PhD work. I’m about halfway through it and finding it very interesting. In particular, this post is sparked by the following paragraph from the paper

The institutional counterpoint to this was the feeling expressed, implicitly or explicitly, by all the administrative staff I talked to that, in the words of one mid-ranking administrator, ‘No offence, but some academics need a kick up the bum’. Only five survey respondents cited constraints not explicitly linked to aspects of ‘the system’, singling out academics’ ‘attitudes’, which they elaborated as: ‘lack of imagination’ and/or ‘a reluctance to take risks’

Blame the teacher – level 1

In promulgating the idea of reflective alignment I borrowed and reworked Biggs’ (Biggs and Tang, 2007) idea of constructive alignment, taking it from looking at how individual academics could design teaching to looking at how a university could improve the design of teaching performed by its academics.

The sentiments outlined in the above quote from Findlow (2008) are a perfect example of what I framed as level 1 knowledge about improving teaching: the “blame the teacher” level. The level at which management can feel consoled that the on-going problems with the quality of teaching are not the fault of the system they manage. It’s the fault of those horrible academics. It would all be better if the academics simply got a “kick up the bum”.

Escalation into “accountability” – level 2

The immediate problem that arises from level 1 knowledge about improving teaching is that management very quickly want to provide that “kick up the bum”. This is typically done by introducing “accountability”. As Findlow (2008) writes

Key to the general mismatch seemed to be the ways in which the environment – both institution and scheme – demanded subscription to a view of accountability that impeded real innovation; that is, the sort of accountability that is modelled on classic audit: ‘conducted by remote agencies of control’ (Power 1994, 43), presuming an absence of trust, and valuing standardisation according to a priori standards.

This approach fits nicely into level 2 knowledge about improving teaching – i.e. it is a focus on what management does. The solution here is that management spend their time setting up strategic directions against which all must be evaluated. They then set up “accountability courts” (i.e. “remote agencies of control”) to evaluate everything that is being done, to ensure that it contributes to the achievement of those strategic directions.

This can be seen in such examples as IT governance or panels that evaluate applications for learning and teaching innovation grants. A small select group sits in positions of power as accountability judges to ensure that all is okay.

Once the directions are set and the “accountability courts” are set up, management play their role within those courts, especially in terms of “kicking butt” when appropriate.

Mismatch and inappropriate

There is an argument to be made that such approaches are anathema to higher education. For example, Findlow (2008) makes this point

New managerialism approaches knowledge as a finished product, packaged, positive, objective, externally verifiable and therefore located outside the knower. By contrast, an ‘academic exceptionalist’ (Kogan and Hanney 2000, 242) view of knowledge places it in the minds of knowledgeable individuals, with the holder of the knowledge also the main agent in its transmission (Brew 2003). This kind of expert or ‘professional knowing’, closely related to conventionally acquired ‘wisdom’ (Clegg 2005, 418), is produced through an organic process between people in a culture of nurturing new ideas. The process is allowed to take as long as it takes, and knowledge is not seen as a finished product.

There are arguments back and forth here. I’m going to ignore them as beyond scope for this post.

I will say that I have no time for many of the academics who, at this stage, will generally trot out the “academic freedom” defense against “accountability courts”. Accountability, of an appropriate sort, is a key component of being an academic – peer review, anyone? Findlow (2008) has this to say

accountability is intrinsic to academia: the sort of accountability that is about honesty and responsibility, about making decisions on the basis of sound rationales, on the understanding that you may be called to account at any point. Strathern (2000a, 3) suggests that ‘audit is almost impossible to criticise in principle – after all, it advances values that academics generally hold dear, such as responsibility, openness about outcomes’.

Academics should be open and clear about what tasks they perform and why. Hiding behind “academic freedom” is too often an excuse to avoid being “called to account”. (That said, there are always power issues that complicate this.)

My argument against “accountability courts” is not on the grounds of principle, but on pragmatic grounds: it doesn’t work.

It doesn’t work

Remember, we’re talking here about improving the design of courses across an institution. To some extent this involves innovation – the topic of Findlow (2008), who makes the following point about innovation (emphasis added)

The nature of innovation … is change via problematisation and risk. In order to push the boundaries of what we know, and break down dogma, problems have to be identified and resolved (McLean and Blackwell 1997, 96). Entering uncharted territory implies risk, which requires acceptance by all stakeholders.

This last point is where the problems with “accountability courts” arise. It starts with the SNAFU principle, which in turn leads to task corruption.

SNAFU principle

Believed to have arisen in the US army in World War II, the phrase SNAFU is commonly known as an acronym that expands to Situation Normal, All Fouled Up – where “Fouled” is generally replaced with a more colloquial term. Interestingly, and as a pause in this diatribe, here’s a YouTube video of Private Snafu – a cartoon series made by the US armed services during World War II to educate the troops about important issues. You may recognise Mel Blanc’s voice.

However, the SNAFU principle gets closer to the problem. The principle is defined as

“True communication is possible only between equals, because inferiors are more consistently rewarded for telling their superiors pleasant lies than for telling the truth.”

This is illustrated nicely by the fable on this SNAFU principle page.

Can this be applied to innovation in higher education? Surely it wouldn’t happen? Findlow (2008) again

My own experience as a funded innovator, and the prevailing experience of my respondents, was that participation in a funded ‘scheme’ made authentic problematisation, and honest description of risk, difficult. Problematisation was inhibited by the necessary consideration given to funding body and institutional agendas in defining parameters for approval. Audit can be seen as a response to fear of risk, and audit-managerially governed schemes require parameters pre-determined, expected outcomes and costs known in advance. Respondents in this case related the reluctance of the scheme to provide for unanticipated needs as they arose, without which effective innovation was much harder.

Task corruption

Task corruption can be defined as

is where either an institution or individual, consciously or unconsciously, adopts a process or an approach to a primary task that either avoids or destroys the task.

It can arise when the nature of the system encourages people to comply through a number of different mechanisms. Findlow (2008) reports on one as applied to innovation

The discussion groups of new academics unanimously recounted a feeling of implicit pressure not to acknowledge problems. They all said they had quickly learned to avoid mention of ‘problems’, that if necessary the word ‘issues’ was preferable, but that these ‘issues’ should be presented concisely and as if they had already been dealt with. While their formal institutional training programme emphasised the importance of honestly addressing shortcomings, their informal exposure to management culture conveyed a very different message. They had learned, they said, that to get on in academia you had to protect yourself and the institution, separate rhetoric from reality, strategy from truth – that authentic problematisation was non-productive and potentially dangerous.

Findlow goes on to reference Trowler’s (1998) term “coping strategies” and the phrase “work to rule”. Findlow gives examples, such as innovators having to lie about a particular aspect of their innovation in the formal documents required by an “accountability court” in order to fulfill requirements – even though the rationale was accepted by senior administrators.

Academics start to work the system. This creates a less than stellar confidence in the nature of the system and subsequently reduces the chances of innovation. Findlow (2008) again

Allen’s (2003) study of institutional change found that innovation was facilitated by the confidence that comes with secure working environments. Where change was judged by staff to be successful, it tended to emerge from university environments where holistic and humanistic views of scholarship and systems of implicit trust were embedded. These gave academics the confidence to take risks. Allen found that insecure environments created what Power (1994, 13) describes as ‘the very distrust that [they were] meant to address’, removed the expectation and obligation for genuinely responsible academic accountability (Giri 2000, 174), and made staff reluctant to devote time, signpost problems or try something that might not work and could reflect negatively on their career portfolios.

A solution?

The last quote from Findlow (2008) seems to provide a possible suggestion in “university environments where holistic and humanistic views of scholarship and systems of implicit trust were embedded”. Perhaps such an environment would embody level 3 knowledge of how to improve the design of courses. Such an environment might allow academics to engage in Reflective Problematisation.

Such an environment might focus on some of the features of this process.

References

Biggs, J. and C. Tang (2007). Teaching for Quality Learning. Maidenhead, England, Open University Press.

Findlow, S. (2008). “Accountability and innovation in higher education: a disabling tension?” Studies in Higher Education 33(3): 313-329.

Comparing VLEs/LMS to the past: flaws and implications for development models

George Santayana, a Spanish American philosopher and writer

I’m working on chapter 2 of the thesis and, in particular, on the “Past Experience” section. As part of the Ps Framework, “Past Experience” is meant to talk about

What has gone on before with e-learning, both within and outside of this particular place? What worked and what didn’t? What other aspects of previous experience at this particular institution will impact upon current plans?

So it’s fairly obvious that at some stage I’m going to use the following quote from George Santayana

Progress, far from consisting in change, depends on retentiveness. When change is absolute there remains no being to improve and no direction is set for possible improvement: and when experience is not retained, as among savages, infancy is perpetual. Those who cannot remember the past are condemned to repeat it.

Early and new insights

The thesis is aimed at e-learning and, in particular, web-based education. Based on my experience and resulting perspectives I’ve had quite a few ideas about what might come out in this section (remember the aim of chapter 2 of the thesis is to illustrate I know the area and also to identify some problems). The point of this post is to summarise a couple of new perspectives that have been brought to bear by my first reading in the area.

The reading is Jesse Heines’ 2004 chapter on “Technology for Teaching: Past Masters Versus Present Practices” (Heines, 2004). This chapter goes back into the history of technology use for learning and compares what was known and possible with systems from last century with what is possible with more modern technology. Given the tone of this post, I doubt it’s any surprise that his is not a favourable comparison for the modern systems.

The two insights that have been highlighted for me are:

  1. VLEs/LMSes/CMSes are not informed by best practice.
  2. The commercial model of these systems constrains the ability to be informed by best practice.

Consideration of these points raised a question for me about the open source systems and whether they suffer from the same problem – which I kind of think they do. More on this in the last section of the post.

VLEs/LMSes/CMSes are not informed by best practice

In the early noughties the vendors of course management systems (CMSes) caught on to the growing adoption of enterprise resource planning systems within universities. They knew that this trend created all sorts of advantages for them in convincing universities to fork out big money on their systems. So they started labeling their systems as “enterprise” systems.

Now one of the assumed features of “enterprise” systems is that their design is meant to be informed by and encapsulate “best practice” (Jones et al, 2004). This is used as one excuse why the organisation should adapt its processes to suit the new system, because it encapsulates “best practice”.

One of the more common features of a CMS that academics use is the quiz facility. Heines (2004) describes much of the history of work around programmed instruction – i.e. automated testing using technology. He relates this story

In a recent conversation with a representative of one of the leading CMS vendors about their testing subsystem, I asked about the system’s capability to analyze the data it stored and present it to teachers. [CMS stands for “course management system,” another new term applied to a capability that’s been around for years.] I was told that the system can show the teacher each student’s response to every question. OK, I responded, but can a busy teacher see a summary of that data so that s/he can see trends and identify widespread class misunderstandings? The representative didn’t know. He said something about computing an average, but he was not familiar with the terms “item analysis,” “difficulty index,” “discrimination index,” and “standard deviation.” (Sigh.)

He then proceeds to highlight some additional limitations

  • Many CMSs don’t even store the data necessary to do item analysis and other features available in much earlier systems (a rough sketch of what item analysis involves is below).
  • Facilities to construct test banks don’t enforce “even the most basic, long-established rules of good test construction”.
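To make the item analysis terminology concrete, here’s a minimal sketch of the kind of statistics Heines is talking about: a difficulty index (the proportion of students answering an item correctly) and a simple upper/lower group discrimination index. This is my own illustrative Python, not code from Heines or any CMS, and the data is invented.

```python
# Illustrative only: a toy item analysis over a matrix of quiz responses,
# where 1 = correct and 0 = incorrect. All names and data are made up.

def item_analysis(responses, group_fraction=0.27):
    """responses: one list of 0/1 scores per student, one column per item."""
    n_students = len(responses)
    n_items = len(responses[0])

    # Rank students by total score to form the upper and lower groups
    # used by the classic discrimination index.
    totals = [sum(student) for student in responses]
    ranked = sorted(range(n_students), key=lambda s: totals[s], reverse=True)
    group_size = max(1, round(n_students * group_fraction))
    upper, lower = ranked[:group_size], ranked[-group_size:]

    results = []
    for item in range(n_items):
        correct = sum(responses[s][item] for s in range(n_students))
        difficulty = correct / n_students                   # proportion correct
        p_upper = sum(responses[s][item] for s in upper) / group_size
        p_lower = sum(responses[s][item] for s in lower) / group_size
        results.append({
            "item": item + 1,
            "difficulty": round(difficulty, 2),
            "discrimination": round(p_upper - p_lower, 2),  # upper minus lower
        })
    return results

# Made-up data: 6 students, 3 quiz items.
scores = [
    [1, 1, 0],
    [1, 1, 1],
    [1, 0, 0],
    [0, 1, 0],
    [1, 0, 1],
    [0, 0, 0],
]
for row in item_analysis(scores):
    print(row)
```

A teacher-facing summary along these lines – flagging items that are too easy, too hard, or that fail to discriminate between stronger and weaker students – is exactly the kind of reporting Heines found missing from the CMS he asked about.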

Questions:

  • Is this still true of more recent versions of these systems?
  • Is this true of the open source alternatives – e.g. Moodle, Sakai, etc.?

The commercial model causes this

Exterior of Pressey Testing Machine, patent dates 1928 and 1930.

Heines (2004) then makes the point that the economic and commercial system used to produce these systems may be somewhat to blame. He starts by offering this quote from Sidney Pressey (who developed the “teaching machine” in the image to the left in 1928)

The writer has found from bitter experience that one person alone can accomplish very little.

I.e. he funded much of the development of his machine himself and, without commercial support, had difficulty.

Heines (2004) then suggests that you need “product commercialization” to have a real impact on the education system. But he also suggests a flaw in this approach

the cost of developing and marketing commercial products today is so huge that they must often cater to the lowest common denominator in an effort to appeal to the widest possible audience

If this is true, and I do think it is fairly well accepted, then what does it say about the assumption that “enterprise” systems embody “best practice”?

Is open source the solution?

Heines (2004) suggests that showing the vendors how to expand their product capabilities is the solution. Funny that. Just last week I saw someone from an Australian university asking about a basic function within the most recent, commercial version of Blackboard. Apparently, Blackboard had been told that this basic function was necessary quite some time ago, but still hadn’t included it.

This basic function was a simple “reporting” problem, i.e. how information was being displayed. It wasn’t something as difficult as storing additional data about student performance on quizzes and implementing known algorithms for item analysis. But even that hadn’t been done yet. And this is for a function that was reported through vendor-initiated “user dialogue”.

So, of course, open source must be the answer. That seems to be what the latest fad sweeping higher education might suggest. As that previous post suggests, I have my doubts.

One simple empirical test might be to look at the testing engines within existing open source CMSes and see if they suffer the same flaw. My quick look at Moodle suggests that it does.

Do you know better?

Limitations

Okay, complicated quiz reporting systems may not be the best example of modern pedagogy. Consequently, it may not be the best test. But I’m sure you could find similar things in terms of discussion forums, student/staff interaction etc. There’s probably an interesting paper in this.

How do you solve it?

So, if both the commercial and the open source “enterprise” systems suffer this same flaw, how do you solve this problem?

Heines (2004) suggests that a “plug-in” approach might be possible. The reality of this, however, may be a little more complex. Some of the features that need changing may be “core” to the system, something a plug-in couldn’t change. Being able to change the “core” also raises some problems.

If I can’t give an answer to how you would do it, I can at least describe a situation that would not solve it. That’s the old “implement vanilla” approach to enterprise systems – the situation where the local organisation actively decides not to make any changes.

For me this approach ignores the messiness of information systems.

References

Jones, D., S. Behrens, et al. (2004). The rise and fall of a shadow system: Lessons for enterprise system implementation. Managing New Wave Information Systems: Enterprise, Government and Society, Proceedings of the 15th Australasian Conference on Information Systems, Hobart, Tasmania.

Heines, J. (2004). Technology for Teaching: Past Masters Versus Present Practices. Online Learning: Personal Reflections on the Transformation of Education. G. Kearsley, Educational Technology Publications: 144-162.

Virtual learning environments: three implementation perspectives

The aim of this post is to summarise my current reading – Keller (2005). I believe it will have some connection with the thesis.

Aside: Using the United Kingdom term – virtual learning environment (VLE) – as a synonym for the more common (in Australia and elsewhere) – learning management system (LMS). Have to say I still prefer course management system (CMS) as a more appropriate label.

Apart from the PhD, this article has contextual implications as my existing institution is currently adopting Moodle as mentioned elsewhere.

Summary

This is an interesting conceptual paper – there are no empirical data – that is a bit light on detail (e.g. how the CoP approach can be used to improve a VLE implementation is very abstract and repetitive). This may also be true for the innovation and acceptance perspectives; I am more familiar with those, so I may be automatically inserting my own experiences.

For me it provides a reference for the complementary value of adding an IS perspective to VLE implementation.

It does open up some possibilities for some interesting empirical work examining what is being done and how within VLE implementation within institutions.

Abstract

Seems to suggest that the common theoretical framework for implementing VLEs – instructional design – can be complemented by 3 different perspectives of VLE implementation from information systems implementation research and organisation theory. Would appear to suggest that these perspectives have important things to say about the successful use and implementation of VLEs. The three complementary perspectives are

  • technology acceptance;
    Sees the VLE as a new technology that will be accepted or rejected by users.
  • diffusion of innovations; and
    VLE implementation is seen as the effort to diffuse the VLE within the user community.
  • learning process.

Introduction

Some typical stuff about the impact of ICTs and e-learning on students and institutions.

Refers to an earlier publication of the authors (Keller & Cernerud, 2002) in an attempt to justify why there is a “strong need for closer study of models of implementation and an exploration of their underlying theoretical frameworks”. But I haven’t got it yet – Is it too late on a Friday afternoon to be starting this?

The basic point seems to be that factors such as age, gender, learning style, degree programme and previous knowledge of computers have been assumed to influence students’ perceptions of e-learning and the implementation strategy adopted. They reference (Mitra et al., 2000; Nachmias & Shany, 2002) for this point. However, they claim that the authors’ earlier study found that these factors only exert a minor influence.

Sounds weak to me.

The rest of the introduction is broken up into sections:

  • Virtual learning environments;
    Mentions that e-learning encompasses an awful lot. Narrows things down to VLEs and defines what they are.
  • Concept of implementation;
    This is the bit that explains that the instructional design view of VLE implementation is somewhat narrow – has some references. Argues for a view of VLE implementation that sees the VLE as an information system and the university as an organisation. Outlines a number of different theoretical views of information systems implementation – most around initiation, development and implementation/termination. Includes mention of Lewin’s freeze/unfreeze model, but makes this point

    A six-phase view of the information systems implementation process compared to Lewin’s model of organizational change (adapted from Kwon & Zmud, 1987). Within these models, implementation is seen as a continuous process. This is in accordance with Mintzberg and Quinn’s (1996) view of implementation as being intertwined with formulation of organizational goals in a complex interactive process.

    This is an important distinction and salves my problems with this view somewhat, however, I don’t think it goes far enough. I think the reality in most organisations and most information systems demands that evolution and termination receive specific treatment – but that’s an argument for elsewhere.

  • Three perspectives of implementation
    Briefly names the three implementation perspectives to be examined and explains that the point is to see what these perspectives can tell us about VLE implementation. The next 3 sections introduce each perspective.

The concept of implementation section refers to a figure like the following to show the linkage between Lewin’s model of organisational change and the stages of an implementation model.

IS Implementation Phases and Lewin's organisational change

There seems to be some connection with George Siemens’ IRIS model – some similarities and some differences. I’ve expressed some reservations about both the IRIS model and Lewin’s model. A few things have come together that mean I do need to revisit these.

Implementation as technology acceptance

Explains how technology acceptance is seen as one of the most mature research streams in IS. In fact, some see it as one of the few original contributions that IS has made. Others see it as an example of a flawed adoption of a perspective from another discipline (psychology and sociology mentioned in this paper) that has failed to keep up with the improved understandings of that original discipline.

Keller explains

The models focus on explaining individual decisions of accepting and using a new technology. The factors influencing these decisions are seen as variables measured at a specific point. Relationships between the variables are identified by statistical correlation analysis. Among the most influential models of this research stream are technology acceptance model (TAM) and social cognitive theory (SCT) (Venkatesh et al., 2003).

And then goes on to examine each of these.

Technology acceptance model (TAM)

Some friends and I have used the TAM in a couple of previous papers. One for an e-learning audience and another for an information systems audience. But we used an older version with an emphasis on perceived usefulness and perceived ease of use.

Keller uses a later version from Venkatesh & Davis (2000) that adds subjective norm and behavioural intention. See the following figure.

TAM with the extension of subjective norm
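As an aside, the “statistical correlation analysis” mentioned above is usually nothing more exotic than relating survey scores for the TAM constructs to each other. The following is a rough, self-contained sketch of what that looks like for perceived usefulness (PU), perceived ease of use (PEOU) and behavioural intention (BI). The numbers and names are invented for illustration – this is not data or code from Keller or Venkatesh & Davis.

```python
# Toy illustration of TAM-style correlation analysis. All values are invented.
import statistics

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

# Hypothetical 7-point Likert scale averages for five respondents.
pu   = [6.2, 4.5, 5.8, 3.9, 6.7]   # perceived usefulness
peou = [5.9, 4.1, 6.0, 3.5, 6.4]   # perceived ease of use
bi   = [6.5, 4.0, 5.5, 3.8, 6.9]   # behavioural intention to use

print("PU   vs BI:", round(pearson(pu, bi), 2))
print("PEOU vs BI:", round(pearson(peou, bi), 2))
print("PEOU vs PU:", round(pearson(peou, pu), 2))
```

Real TAM/UTAUT studies use validated multi-item scales and regression or structural equation modelling rather than simple pairwise correlations, but the basic idea is the same: variables measured at a point in time, related statistically.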

Social cognitive theory (SCT)

A different theory, developed by other researchers drawing on Bandura’s work. Includes elements such as computer self-efficacy, outcome expectations (performance), outcome expectations (personal), affect, anxiety, and usage. Won’t go into detail, because Keller mentions UTAUT next.

Unified theory of acceptance and use of technology (UTAUT)

This is where 8 of the influential models of user acceptance have been integrated into a theory that has been found to explain 70% of the variance in users’ acceptance and use of information systems.

UTAUT

Implementation as diffusion of innovations

Mostly drawing on Rogers’ diffusion of innovations work – we gave a summary of this in this paper. Though I did like this quote, particularly the last part

Innovation research indicates that there is a significant positive relationship between participation in innovation decisions and rate of adoption of innovations (Jahre & Sannes, 1991) and that internally induced innovations are more likely to be accepted than those induced externally (Marcus & Weber, 2000).

Uses the following structure to suggest how the innovation process occurs within an organisation

  1. Initiation – information gathering, conceptualizing and planning of the innovation adoption
    1. Agenda-setting – define the problem or need for the innovation. Identify a performance gap.
    2. Matching – tailor the innovation to fill the gap.
  2. Implementation – all events, actions and decisions involved in putting an innovation into use.
    1. Redefining/restructuring – modify innovation to accommodate org needs more closely.
    2. Clarifying – meaning of innovation becomes more clear to members of the organisation and use broadens
    3. Routinizing – the innovation becomes a part of the organisation and ceases to be an innovation.

Damn, that’s a teleological view of diffusion. No surprise in guessing I don’t like that characterisation. But I guess that is how it is likely to be used within an organisation.

Implementation as a process of learning

Suggests that “learning in organisations” can be studied from different perspectives including:

  • action theory
  • organisational learning
  • knowledge management
  • communities of practice

The emphasis here is on community of practice because of

its capability to describe social learning, but also interactions that occur between man (communities of practice) and technology (boundary objects).

CoP arises from situated learning and is based on two basic premises

  1. activity-based nature of knowledge (practice)
  2. group-based character of organisational activities (communities)

Talks about work done by Hislop (2003) examining innovation in IT from the perspective of CoP. The finding is that CoPs and innovation implementation are mutually dependent. Innovation creates new communities and changes the knowledge distribution within the organisation. The CoPs affect how the innovation is supported.

CoPs connect through boundary objects. A VLE connects student and teacher communities – it is a boundary object. Four characteristics enable artefacts to be boundary objects:

  1. Modularity – different users, different views
  2. abstraction – distinguishing certain important features of described concepts in the system
  3. accommodation – different functions to support different activities
  4. standardisation – functions can be organised in the same way?? this sounds somewhat funny.

Suggests Wenger wants information systems to be designed to facilitate participation, rather than to facilitate use. Connected with Brown and Duguid’s (1998) statement that technology aimed at supporting knowledge distribution should support informal communication between communities and deal with reach and reciprocity.

Conclusions

The guts of this appear to be summarised in the two tables. The first summarises the differences between the perspectives; the second derives implications for the implementation of VLEs.

A comparison of 3 implementation perspectives (adapted from Keller, 2005)

  • Basic concepts
    Technology acceptance: variables influencing decisions of acceptance or rejection by individual users at specific points.
    Diffusion of innovations: the individual decision process of adopting an innovation; the diffusion process of innovations in organizations.
    Learning process: the learning process of different communities of practice within an organization.
  • Regards the VLE as
    Technology acceptance: a new technology to be accepted or rejected by users.
    Diffusion of innovations: an innovation to be diffused in an organization.
    Learning process: a boundary object connecting different communities of practice.
  • Regards the users of the VLE as
    Technology acceptance: individual users making personal decisions of accepting or rejecting a technology.
    Diffusion of innovations: individuals making personal decisions of adopting or rejecting an innovation; an organization adopting or rejecting an innovation.
    Learning process: different communities of practice interacting through a boundary object.
  • Considers the different roles of teachers and students
    Technology acceptance: no. Diffusion of innovations: no. Learning process: yes.

I have some disagreements with the above

  • Both UTAUT (TAM) and diffusion theory include consideration of the social system in adoption. While the group is perhaps not as central as with CoP etc., it is still a consideration. To some extent it would depend on how the approaches were implemented. To say for certain that they see the users as individuals making decisions… is not entirely true.
  • Similarly, to suggest that diffusion theory and TAM don’t consider the students is not necessarily entirely correct. In applying diffusion theory to the implementation of online assignment submission we have employed it to encourage use by both students and staff. Yes, CoP may well support the notion of boundary objects to connect students and staff. But a lot of CoP work doesn’t necessarily take that on board, just like a lot of TAM/DoI work doesn’t – they all can, but don’t have to.

Implications for the use and implementation of VLEs

  • Successful use
    • Technology acceptance: the VLE should enhance the resolving of educational tasks; be easy to use; improve the user’s self-efficacy.
    • Diffusion of innovations: the VLE should fill a performance gap; create positive visible outcomes; be consistent with existing beliefs; be less complex to use.
    • Learning process: the VLE should provide modularity, abstraction, accommodation and standardization; support informal communication; be designed for participation.
  • Successful implementation
    • Technology acceptance: the implementation process should be supported by formal and informal leaders and a reliable technological infrastructure.
    • Diffusion of innovations: the implementation process should be internally induced; be based on a consensus decision; provide possibilities of trying the VLE beforehand.
    • Learning process: the implementation process should allow peripheral participation; consider the impact of the VLE on different communities of practice.

Again, some potential points of disagreement:

  • This one is probably more a matter of definition. “Educational tasks” in the technology acceptance entry for successful use could be interpreted as mostly emphasising the instructional design perspective, i.e. aimed directly and only at learning and teaching. In some, perhaps many, contexts the administrative tasks associated with education (results processing, assignment submission etc.) are of more interest to the academics, especially if the context is problematic. That’s the point we made in the papers applying TAM to online assignment submission: the main reason academics perceived it as useful was that it solved a range of workload issues. Learning perspectives (e.g. rapid return of feedback to students) came a distant last.
  • I’m not sure the DoI approach necessarily excludes input from leaders. In fact, appropriate change agents (informal leaders?) can be important.

Implications

One obvious application of this work would be to apply the different lenses to understanding what is happening in the actual implementation of a new LMS at a university. Is the project being sold from any one of these three perspectives and, if so, is the implementation plan following the associated guidelines? What, if any, impact will this have on adoption and use?

Or is the implementation taking a different perspective entirely? For example, the “build it and they will come” approach: simply implementing the technology and assuming people will use it.

Are these the only perspectives that could inform VLE implementation, or are there others? If so, what are they?

As with most discussion of this sort of thing, I don’t think this paper pays sufficient attention to more ateleological processes. All three perspectives assume that the VLE will be used; the only question is what processes we surround the VLE with. i.e. is it technologically determinist?

It’s a very conceptual paper. The obvious thing would be to use this type of approach to examine an actual implementation.

Different performance gaps and their impacts

The diffusion perspective requires that “decisions of implementing a VLE must be based on a performance gap, and hence create a visible and tangible positive outcome for the university”. What happens if the performance gap is perceived differently by different folk? For example, at my institution I can see the following performance gaps being discussed:

  • A prime performance gap seen by IT and management was cost. The institution was seen to have two LMSes; one would be cheaper. We were also paying licence fees to a commercial VLE vendor. A single, open source LMS solves this performance gap nicely.
  • Some “educational determinists” see the performance gap as the vast majority of online courses at our institution not following a particular type of “good pedagogy”. The new LMS is seen as a solution to this. As a different system it will enable, support or perhaps require “good” pedagogy.
  • Some pragmatic folk see the current systems as old, out of date and in need of updating. The new LMS is seen as a way to be more modern.

Perhaps talking to folk and observing documents and meetings would be a way of surfacing additional performance gaps that the new LMS is seen to solve.

The presence of different perceived gaps raises some other questions:

  • How will a single implementation project plan cater for all these different performance gaps? i.e. how you solve the problem of licence fees is very different to how you solve the problem of “good” pedagogy or modern features. Is the plan focusing on solving a particular performance gap at the expense of others?
  • Which gap is the most important? Are some gaps just silly? How do you handle this?

References

Keller, C. (2005). “Virtual learning environments: three implementation perspectives.” Learning, Media and Technology 30(3): 299-311.

PhD update – week #3

A new record: a renewed interest in the PhD has lasted 3 weeks. I’ve even made a comeback from the weak second album problem and probably had the most fulfilling week so far. I could have been more productive, though; perhaps that’s an aim for next week.

This week I did cross a lot of things off the PhD to do list, but I also added a fair few.

7 March to 13 March

All that said, I claimed that I would aim to complete the following this week:

  • Complete first draft of at least 1 Ps component section for chapter 2 – let’s start with “Past Experience”. Not done, not even started.
  • Complete reading and give feedback on Shirley’s DESRIST paper.
  • Finalise a structure with rough content for chapter 3. Done and sent to the supervisor for feedback.

The other main work on the thesis this week included:

  • Gathering and a bit of reading of additional literature for both chapters 2 and 3.
  • A few blog posts on the PhD or ideas arising from it.
    Last week’s output includes:
    • The biggest flaw in university L&T/e-learning – is connected to thinking associated with chapter 2 and the Ps Framework. In particular, a big problem any approach to e-learning within a university has to address.
    • How to improve L&T and e-learning at universities – provides one perspective on the “solutions” that have arisen from the thesis work to the problem outlined in the previous post.
    • Moving from scarcity to abundance changes things – music – draws on a very recent example to illustrate how a scarce resource becoming abundant changes many of the fundamental assumptions of prior practice. A key part of my thesis work is the suggestion that e-learning and information systems are having to face this paradigm change with the rise of the internet, social media and more. The assumption of scarcity is one of the major flaws of many current approaches to e-learning and organisational information systems.

There is a double-edged sword with the blog posts. They take time away from writing the PhD; however, they also help deal with the need for a quick sense of completion and encourage me to get ideas down in writing – writing that I should, theoretically, be able to re-use in the thesis. I need to keep an eye on this.

Next week

For the next week I’d like to:

  • Complete as many sections of the Ps Framework (chapter 2) as possible and have most put onto the blog.
  • Need to complete reading the theory building paper and provide feedback.
  • Need to tidy up a bit of the other outstanding literature I have gathered.

Moving from scarcity to abundance changes things – music

Growing up in Rockhampton in the 70s and 80s, I just didn’t have access to music that wasn’t pop or C&W. Different types of music were scarce. In these days of iTunes, peer-to-peer etc., that has radically changed: there is an abundance.

The impact of this change is difficult to overstate, and difficult to illustrate. This YouTube video and what it embodies do a really good job.

This post has more about it. This is the core of it for my point:

Israeli musician Kutiman has taken hundreds of YouTube samples – often non-musical ones – and turned them into an album that’s awesome on so many levels that it leaves you stunned. First of all, the music is good; really good, especially if you’re a fan of Ninja Tune’s catalog. Secondly, it’s amazing to see all those unrelated YouTube bits and pieces fit together so perfectly

The question is…

The same migration from scarcity to abundance is happening in learning and teaching and e-learning at universities. How is the practice of those tasks going to change?

Another perspective for the indicators project

The indicators project is seeking to mine data in the system logs of a learning management system (LMS) in order to generate useful information. One of the major problems the project is facing is how to turn the mountains of data into something useful. This post outlines another potential track based on some findings from Lee et al (2007).

The abstract from Lee et al (2007) includes the following summary

Sample data were collected online from 3713 students….The proposed model was supported by the empirical data, and the findings revealed that factors influencing learner satisfaction toward e-learning were, from greatest to least effect, organisation and clarity of digital content, breadth of digital content’s coverage, learner control, instructor rapport, enthusiasm, perceived learning value and group interaction.

Emphasis on learner satisfaction???

This research seeks to establish factors which impact on learner satisfaction – not on the actual quality of the learning itself, but on how satisfied students are with it. For some folk, this emphasis on student satisfaction is not necessarily a good thing and is at best only a small part of the equation, mainly because it’s possible for students to be really happy with a course but to have learnt absolutely nothing from it.

However, given that most evaluation of learning at individual Australian universities, and within the entire sector, relies almost entirely on “smile sheets” (i.e. low level surveys that test student satisfaction), an emphasis on improving student satisfaction may well be a pragmatically effective pastime.

How might it be done?

The following uses essentially the same process used in a previous post that described another method for informing the indicators project’s use of the mountains of data. At least that suggested approach had a bit more of an emphasis on the quality of learning.

The process is basically (a small sketch of this pipeline follows the list):

  • Identify a framework that claims to illustrate some causality between staff/institutional actions and good outcomes.
  • Identify the individual factors.
  • Identify data mining that can help test the presence or absence of those factors.
  • Make the results available to folk.
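
To make the shape of that process a little more concrete, here’s a minimal sketch in Python. This is not the indicators project’s actual code; the function names and the idea of passing per-course data around are placeholders of my own, purely for illustration.

```python
# A sketch of the "framework -> factors -> data mining -> report" pipeline.
# Each factor from the chosen framework is paired with a function that looks
# for evidence of that factor in whatever data the LMS logs actually provide.

def make_report(factor_measures, course_data):
    """factor_measures: dict mapping a factor name to a function that scores
    that factor for one course's data. Returns factor -> score for the course."""
    return {factor: measure(course_data)
            for factor, measure in factor_measures.items()}
```

The interesting (and hard) work is obviously in the individual measure functions, not in this scaffolding.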

In this case, the framework is the empirical testing performed by the authors to identify factors that contribute to increased student satisfaction with e-learning. The individual factors they’ve identified are:

  • organisation and clarity of digital content;
  • breadth of digital content’s coverage;
  • learner control;
  • instructor rapport;
  • enthusiasm;
  • perceived learning value; and
  • group interaction.

Now some of these can’t be tested for by the indicators project, but some can. For example (a rough sketch of how a couple of these might be computed follows the list):

  • Organisation of digital content
    Usually put into a hierarchical structure (weeks/modules and then resources), is the hierarchy balanced?
  • Breadth of content coverage
    In my experience, it’s not unusual for the amount of content to significantly reduce as the term progresses. If breadth is more even and complete, greater student satisfaction?
  • Group interaction – participation in discussion forums.
  • Instructor rapport – participation in discussion forums and presence in the online course.
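
As a purely hypothetical illustration of how a couple of these proxies might be computed, the sketch below assumes the LMS logs can be reduced to simple activity rows (dicts with user, role, week, date and action fields). Those field names, and the functions themselves, are assumptions made for the sake of the example rather than anything the indicators project currently does.

```python
from statistics import pstdev

def content_breadth_spread(rows, num_weeks):
    """Breadth of content coverage: how evenly resources are added across weeks.
    A lower spread suggests coverage doesn't tail off as the term progresses."""
    counts = [0] * num_weeks
    for r in rows:
        if r["action"] == "resource_added":
            counts[r["week"] - 1] += 1
    return pstdev(counts)

def forum_participation_rate(rows):
    """Group interaction: proportion of students who have posted to a forum."""
    students = {r["user"] for r in rows if r["role"] == "student"}
    posters = {r["user"] for r in rows
               if r["role"] == "student" and r["action"] == "forum_post"}
    return len(posters) / len(students) if students else 0.0

def instructor_presence_days(rows):
    """Instructor rapport proxy: distinct days on which staff were active."""
    return len({r["date"] for r in rows if r["role"] == "staff"})
```

Each of these could be plugged into the factor-to-measure mapping sketched earlier, with the obvious caveat that they are crude proxies for factors Lee et al (2007) established through student surveys.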

Questions

I wonder if the perception of there being a lot of course content for the entire course is sufficient. Are students happy enough that the material is there? Does whether or not they use it become academic?

References

Lee, Y., Tseng, S., et al. (2007). “Antecedents of Learner Satisfaction toward E-learning.” Journal of American Academy of Business 11(2): 161-168.