Reflections on a 12-year-old course site – It would be harder now

For a variety of reasons, mostly due to some current study, I’ve retrieved from the rubbish tip of history a course website I helped design and teach back in 1999. What follows here are some reflections on what has and hasn’t changed since 1999. At the end there is a bit of speculation that the current context within universities would make it harder to generate this type of course site.

What’s changed or not

Bleeding edge is now standard?

This course site was designed and implemented with a couple of unusual undergraduate project students – unusual in the sense that they were mature-age students starting their second careers. One of them had been a multimedia designer in their previous career, which is why the look of the site is so distinctive: it’s based on the cover of the textbook.

The course design included:

There weren’t many other course websites in 1999, within this institution or elsewhere, that approached this level of use of the web.

But since then universities have spent significantly greater funds on implementing “enterprise” level e-learning systems and processes. After all that money has been spent, surely there must have been a significant increase in the number of course sites that approach what this course site did 12 years ago? Surely all that money has led to the development of systems and processes that actually automate, or at least significantly reduce the effort required to achieve, these tasks.

Well, let’s look at the four courses I’m studying this year. Even if I focus on the fairly simple task of having a well-integrated electronic version of the study guide, none of my current courses match this 12-year-old course. Two of them have a study guide page, but it remains separate from the study schedule, i.e. the study schedule doesn’t have links to the appropriate PDF. The other two use ad hoc approaches with either Word documents or Moodle HTML resources.

And that’s without looking for good instructional design or attempts to modify standard practice to match the capabilities of the new technologies.

Barriers to sharing

This course was implemented using Webfuse, the system that formed the basis of my thesis. Webfuse course sites, including this one, were completely open by default. The only password-protected area on this course site was the Staff section, which we used to share assignment solutions and discuss some issues. In addition, Webfuse was a web publishing system, i.e. it generated static web pages, mainly to minimise load on under-strength servers.

Even with it producing unprotected web pages, it isn’t a straightforward process to share/repurpose this course site. Webfuse made various assumptions that meant its web pages aren’t plain web pages. In order to share this course site, I had to manually update the pages to make them usable outside of Webfuse.

I didn’t complete this process on all of the pages in the course site. If you come across a page with broken images or a “This is an old course site” message, you’ve entered areas that haven’t been fixed up.

Using a system, any system, reduces in some way the ease with which the content can be re-used. Even using standards limits reuse to contexts which support those standards.

Some of these course sites were once available via the Wayback Machine, but at some stage in the last 5 years the robots.txt file on the server was changed, which meant that the Wayback Machine stopped making the sites available. An earlier iteration of this site is available on the Wayback Machine.

LMS enforced “quality through consistency”

This course site design would not be possible at the host institution, because it is now using Moodle. Moodle simply couldn’t support this design.

Moodle (like most LMSs) is generally used by institutions to achieve quality by making everything look the same. At a minimum, the look and feel of this course, which connects directly to the textbook and looks okay (especially for 1999), would have to be sacrificed to fit within the ugly constraints of the institutional Moodle template.

While “quality through consistency” helps bring those below some minimal standard up to it, it also constrains those who want to move beyond it back to that same minimal standard. And that’s before considering the question of responding to the diversity inherent in teaching.

Beyond that, the structure and interface, which move beyond just a study schedule, would have to be lost (or at least significantly modified) to fit within the constraints of Moodle.

As would the open nature of the course and the content. As implemented by the institution, the Moodle course site would be available only to students enrolled in the course. As a consequence, the value of the course and its resources doesn’t get known. A bit of Googling can find a range of folk using the animations/lectures produced as part of this course: William Stallings, one of the authors of the *standard* OS texts; a reuse of one of the lectures (I’m not sure of the context); a good recommendation of the animations in the comments of a blog post; etc.

External factors on teaching

If I were responsible for this course today, there is no chance in this era of ERA (Excellence in Research for Australia) that I would be investing the time necessary to implement a course like this.

Space, bandwidth and Dropbox

Back in 1999 we were concerned with bandwidth, so online lectures were audio only, and we also produced a CD-ROM mirror of the website. The site is no longer available via the institution that offered the course, for various reasons including saving space.

The site is now hosted on my free Dropbox account, which gives me 2GB of space. The site is about 300MB in size.

And this is one simple indication of just how much better the technology has gotten in 12 years. Not to mention how online learning has gone from a novelty to be feared to an expectation.

Conclusions

If I had to teach this course today, in the current university context, I would be hamstrung by institutional e-learning policies and technologies, as well as by the focus on research. I would have to spend far more of my time trying to work around limiting institutional factors than I had to 12 years ago.

And that’s despite online technology being more broadly available, widely accepted and of a significantly better quality than it was 12 years ago.

Now, it is possible to describe this course as an example of a “lone ranger”/fred-in-the-shed doing his thing, an approach that doesn’t fit well with institutional systems and approaches. But it is also fair to say (I think) that for the majority of courses, the institutional systems and approaches are failing to provide something approaching a minimal acceptable standard (which is what they claim to do).

[Images: “Lone ranger doing his thing”; “Sledge to a computer”]

Academics, course websites and power laws

Last week I was thinking that academics shouldn’t manually create course sites. That arose out of the process of writing up the why/what behind what we did with Webfuse from 1999 through 2004. Today, I’ve been continuing that and looking at the usage statistics from that period.

The following focuses on statistics about how often an academic modified a course site. The Webfuse model was to automatically create a default course site for every course offered by the faculty. Academics could then modify that site as much, or as little, as they wanted. The following two graphs (first for 1999 and then for 2005) show how many staff were editing Webfuse course sites and how many times they made page updates. Staff are ordered by increasing number of updates. What is striking to me is the similarity of the curves – both look like a “power law”. Some rambling on implications follows below.

[Graph: Webfuse course site page updates per staff member, 1999]

[Graph: Webfuse course site page updates per staff member, 2005]

In terms of the 2005 data, the top 22 academic staff performed 21,298 updates on course sites. That’s nearly 8,000 more updates than the remaining 127 academic staff performed in total (13,472 updates). In other words, about 15% of the academic staff performed just over 60% of the updates of course websites.
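To make the arithmetic concrete, here’s a minimal sketch (in Python, not the original Webfuse analysis code) of the kind of concentration calculation behind those figures. The update counts below are made up for illustration; real numbers would come from the Webfuse logs.

```python
def update_concentration(updates_per_staff, top_n):
    """Share of staff and share of updates accounted for by the top_n editors."""
    counts = sorted(updates_per_staff, reverse=True)
    return top_n / len(counts), sum(counts[:top_n]) / sum(counts)

# Hypothetical, Zipf-like update counts for 149 staff (not the real data).
counts = [2000 // rank for rank in range(1, 150)]
staff_share, update_share = update_concentration(counts, top_n=22)
print(f"{staff_share:.0%} of staff made {update_share:.0%} of updates")
```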

Implications

With the Indicators Project we’ve been using three questions to frame investigations of LMS usage:

  1. What?
    What is actually going on within LMS usage? What are the patterns that can be identified?
  2. Why?
    Why do these patterns exist? Can we identify why this pattern has arisen?
  3. How?
    How can this insight be used to improve practice?

What?

The Wikipedia page on “power law” states that “power-law relations characterize a staggering number of naturally occurring phenomena”. The Webfuse data shown above is spread over a 6-year period during which there were significant changes, which gives a fair indication that this pattern might be leaning towards a “natural phenomenon”. It would be interesting to perform a similar analysis on more recent data and on more “traditional” LMSs to see if this might represent a more widespread, “natural” phenomenon.
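For what it’s worth, here is a rough sketch of how such a check might start: plot per-staff update counts against rank on log-log axes and look for a roughly straight line. The data below is simulated; a rigorous test would need real log data and something like the `powerlaw` package rather than this simple least-squares fit.

```python
import numpy as np
import matplotlib.pyplot as plt

# Simulated per-staff update counts; substitute real Webfuse/LMS log data here.
rng = np.random.default_rng(42)
counts = np.sort(rng.zipf(a=2.0, size=149))[::-1].astype(float)
ranks = np.arange(1, len(counts) + 1)

# A straight(ish) line on log-log axes is the classic power-law signature.
slope, _ = np.polyfit(np.log(ranks), np.log(counts), 1)
plt.loglog(ranks, counts, "o", markersize=3)
plt.xlabel("Staff rank by number of updates")
plt.ylabel("Number of updates")
plt.title(f"Rank vs updates (log-log), fitted slope = {slope:.2f}")
plt.show()
```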

Based on my experience, and without looking at the data, I suspect that this type of pattern is likely to exist in most universities around use of their LMS.

Why?

So, why do you think this pattern exists? Suggestions?

My suspicion is based on Geoghegan’s (1994) identification of a chasm – a la Moore (2002) – in the adoption of instructional technology. That is, there is a chasm/difference between two groups of academics – the innovators/early adopters and the pragmatists. It’s the innovators/early adopters that are the big users of instructional technology.

So, one interpretation of the above figures is that the majority of academics are pragmatists. This is not necessarily a negative. They want to do a reasonable job of teaching (as measured by the institution, themselves and their students) but aren’t going to allow other work (mostly research) to suffer. My suspicion is that this “pragmatic” perspective is the dominant one amongst academics. It’s the type of perspective that the environment within universities encourages.

How?

So, if you found support for this perspective, how might it be used to improve learning and teaching?

If it is the university teaching environment that creates this “pragmatic” approach, perhaps it needs to be changed.

If a majority of academics aren’t editing course sites, this suggests that these course sites aren’t that great. Perhaps it also suggests that the quality of the student learning experience isn’t all that great. If this is the case, then continuing the practice of academics having to create course sites within an LMS may not be the way to go. Perhaps it is time to investigate alternatives, ranging from the evolutionary – providing a default course site for academics to build upon – to the revolutionary – such as PLEs.

Postscript – Implications for LAMS?

LAMS (the Learning Activity Management System) has for quite some time been positioned as a better alternative to the LMS model. From the “About” section on the LAMS site:

LAMS is a revolutionary new tool for designing, managing and delivering online collaborative learning activities. It provides teachers with a highly intuitive visual authoring environment for creating sequences of learning activities. These activities can include a range of individual tasks, small group work and whole class activities based on both content and collaboration.

LAMS must be good – it has won a gold medal.

Does LAMS usage – within institutions that have adopted it – follow the same “power law”?

The question of how to do an apples-versus-apples comparison between LAMS and an LMS would be interesting, as they follow very different models.

If this could be done appropriately, then my prediction is that yes, in a university environment LAMS usage would follow this pattern – possibly even more pronounced, because LAMS is that much more different from academics’ past practice than the LMS is.

Also, from the perspective of a typical teaching academic (and perhaps even students?), there’s a lot more to an “online course” than learning activity design, most of which LAMS doesn’t support directly – hence the need to integrate it with LMSs.

References

Geoghegan, W. (1994). Whatever happened to instructional technology? 22nd Annual Conference of the International Business Schools Computing Association, Baltimore, MD, IBM.

Moore, G. A. (2002). Crossing the Chasm. New York, Harper Collins.

Choosing your indicators – why, how and what

The unit I work with is undertaking a project called Blackboard Indicators. Essentially, it is the development of a tool that will perform some automated checks on our institution’s Blackboard course sites and show some indicators which might identify potential problems or areas for improvement.

The current status is that we’re starting to develop a slightly better idea of what people are currently doing, through the literature and some professional networks (e.g. the Australasian Council on Open, Distance and E-learning), and we have an initial prototype running.

Our current problem is how to choose what the indicators should be. What are the types of problems you might see? What is a “good” course website?

Where are we up to?

Our initial development work has focused on three categories: course content, coordinator presence and all interactions. There is some more detail in this previous post.

Colin Beer has contributed some additional thinking about some potential indicators in a recent post on his blog.

Col and I have talked about using our blogs and other locations to talk through what we’re thinking, develop a concrete record of our thoughts and hopefully generate some interest from other folk.

Col’s list includes

  • Learner.
  • Instructor.
  • Content.
  • Interactions: learner/learner, learner/instructor, learner/content, instructor/content

Why and what?

In identifying a list of indicators, as when trying to evaluate anything, it’s probably a good idea to start with a clear definition of why you are starting on this – what you are trying to achieve.

The stated purpose of this project is to help us develop a better understanding of how, and how well, staff are using the Blackboard course sites. In particular, we want to know about any potential problems (e.g. a course site not being available to students) that might cause a large amount of “helpdesk activity”. We would also like to know about trends across the board which might indicate the need for some staff development, improvements in the tools or some support resources to improve the experience of both staff and students.

There are many other aims which might apply, but this is the one I feel most comfortable with, at the moment.

Some of the other aims include

  • Providing academic staff with a tool that can aid them during course site creation by checking their work and offering guidance on what might be missing.
  • Providing management with a tool to “check on” course sites they are responsible for.
  • Identifying correlations between characteristics of a course website and success.

The constraints we need to work within include

  • Little or no resources – implication being that manual, human checking of course sites is not currently a possibility.
  • Difficult organisational context due to on-going restructure – which makes it hard to get engagement from staff in a task that is seen as additional to existing practice and also suggests a need to be helping staff deal with existing problems more so than creating more work. A need to be seen to be working with staff to improve and change, rather than being seen as inflicting change upon them.
  • LMS will be changing – come 2010 we’ll be using a new LMS, so whatever we’re doing has to be transportable.

How?

From one perspective there are two types of process which can be used in a project like this

  1. Teleological or idealist.
    A group of experts get together, decide and design what is going to happen and then explain to everyone else why they should use it and seek to maintain obedience to that original design.
  2. Ateleological or naturalist.
    A group of folk, including significant numbers of folk doing real work, collaborate to look at the current state of the local context and undertake a lot of small-scale experiments to figure out what makes sense. They examine and reflect on those small-scale experiments, chuck out the ones that didn’t work and build on the ones that did.

(For more on this check out: this presentation video or this presentation video or this paper or this one.)

From the biased way I explained the choices I think it’s fairly obvious which approach I prefer. A preference for the ateleological approach also means that I’m not likely to want to spend vast amounts of time evaluating and designing criteria based on my perspectives. It’s more important to get a set of useful indicators up and going, in a form that can be accessed by folk, and to have a range of processes by which discussion and debate are encouraged and then fed back into the improvement of the design.

The on-going discussion about the project is more likely to generate something more useful and contextually important than large up-front analysis.

What next then?

As a first step, we have to get something useful (for both us and others) up and going in a form that is usable and meaningful. We then have to engage with the folk using it and find out what they think and where they’d like to take it next. In parallel with this is the idea of finding out, in more detail, what other institutions are doing and seeing what we can learn.

The engagement is likely going to need to be aimed at a number of different communities including

  • Quality assurance folk: most Australian universities have quality assurance folk charged with helping the university be seen by AUQA as being good.
    This will almost certainly, eventually, require identifying what effective/good outcomes for a course website are, as outcomes are a main aim for the next AUQA round.
  • Management folk: the managers/supervisors at CQU who are responsible for the quality of learning and teaching at CQU.
  • Teaching staff: the people responsible for creating these artifacts.
  • Students: for their insights.

Initially, the indicators we develop should match our stated aim – to identify problems with course sites and become more aware of how they are being used. To a large extent this means not worrying about potential indicators of good outcomes and whether or not there is a causal link.

I think we’ll start discussing/describing the indicators we’re using and thinking about on a project page and we’ll see where we go from there.

Getting started on Blackboard indicators

The unit I work for is responsible for providing assistance to CQUniversity staff and students in their use of e-learning, which at CQUni currently means mostly the use of Blackboard.

The current model built into our use of Blackboard is that the academic in charge of the course (or their nominee) is responsible for the design and creation of the course site. In most instances, staff are provided with an empty course site for a new term, at which stage they copy over the content from the previous offering, make some modifications and make the site available to students.

Not surprisingly – given the cruftiness of the Blackboard interface, the lack of time many staff have and a range of other reasons – there are usually some fairly common, recurrent errors. Errors which create workload for us when students or staff have problems. In many cases it may be even worse than this, as students become frustrated and don’t even complain – they suffer in agony.

Most of these problems, though not all, are fairly simple mistakes – things that could be picked up automatically if we had some sort of automated system performing checks on course sites. The fact that Blackboard doesn’t provide this type of functionality says something about the assumptions underlying the design of this version of Blackboard – a strong focus on the teaching academic, not so much on the support side.

Developing this sort of system is what the Blackboard Indicators project is all about. It’s still early days but we’ve made some progress. Two main steps

  • Developed an initial proof of concept.
  • Started a literature, web and colleague search.

Initial proof of concept

We currently have a web application up and running that, given a term, will display a list of all the courses that are meant to have Blackboard course sites and generate a number between 0 and 100 summarising how well a site has met a particular indicator.

Currently, the only indicator working is the “Content Indicator”. This is meant to perform some objective tests on what is broadly defined as the content of the course. Currently the checks include the following (a rough sketch of the scoring logic appears after the list)

  • Is the course actually available to students?
    The score automatically becomes 0 if it isn’t.
  • Does the site contain a link to the course profile?
    20 is taken off the score if there isn’t one.
  • Is the course profile link for the right term?
    50 is taken off if it’s wrong.
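Here is the rough sketch promised above: a minimal Python rendering of that scoring logic. The dictionary fields are hypothetical stand-ins for whatever the real tool reads out of Blackboard; only the checks and weights come from the list above.

```python
def content_indicator(site):
    """Score a course site between 0 and 100 on the content checks."""
    if not site["available_to_students"]:
        return 0  # unavailable sites automatically score 0
    score = 100
    if not site["has_course_profile_link"]:
        score -= 20  # no course profile link at all
    elif site["profile_link_term"] != site["current_term"]:
        score -= 50  # profile link exists but is for the wrong term
    return score

# Hypothetical example: site is available but links to last term's profile.
example = {
    "available_to_students": True,
    "has_course_profile_link": True,
    "profile_link_term": "2008T1",
    "current_term": "2008T2",
}
print(content_indicator(example))  # 50
```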

At the moment, we’re planning to put in place three indicators, the content indicator plus

  • “Coordinator Presence”
    How present is the coordinator of the course? Have they posted any announcements? Are they reading the discussion forum? Posting to it? What activity have they done on the site in the last two weeks?
  • “All interactions”
    What percentage of students and staff are using the site? How often? What are they using?
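Since these two indicators are still at the question stage, the following is only a speculative sketch of how a “Coordinator Presence” score might be computed from activity logs. Nothing here is the actual tool: the event format, the field names and the weights are all invented for illustration.

```python
from datetime import datetime, timedelta

def coordinator_presence(events, coordinator_id, now):
    """Crude 0-100 score from (user_id, action, timestamp) event records."""
    recent = now - timedelta(weeks=2)
    mine = [e for e in events if e[0] == coordinator_id]
    score = 0
    if any(action == "announcement" for _, action, _ in mine):
        score += 30  # has posted announcements
    if any(action == "forum_read" for _, action, _ in mine):
        score += 20  # is reading the discussion forum
    if any(action == "forum_post" for _, action, _ in mine):
        score += 30  # is posting to the forum
    if any(when >= recent for _, _, when in mine):
        score += 20  # some activity on the site in the last two weeks
    return score

# Hypothetical log: two coordinator actions, one within the last fortnight.
events = [("u123", "announcement", datetime(2008, 7, 30)),
          ("u123", "forum_read", datetime(2008, 7, 20))]
print(coordinator_presence(events, "u123", now=datetime(2008, 8, 1)))  # 70
```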

It’s still early days and there remain a lot of questions, which we hope will be answered by our searching and some reflection.

Literature, web and colleague search

We’ve started looking in the literature, doing Google searches and asking colleagues what they are doing. We have some interesting information already.

What we do find will be discussed in our blogs, bookmarked on del.icio.us (tag: blackboardIndicators) and talked about on the project page.

What do students find useful?

In a growing category of blog posts I’m expanding and attempting to apply my interest in diffusion theory and related theories to increase the use of course websites. A major requirement in achieving this, as outlined in the previous post, is an understanding of what students find useful.

In this post, I’m trying to bring together some research that I’m aware of which seeks to answer this question by actually asking the students. If you know of any additional research, let me know.

Accessing the student voice

Accessing the student voice is the final report from a project which analysed 280,000 comments on Course Experience Questionnaires from 90,000 students at 14 Australian universities. The final report has 142 pages (and is available from the web page). Obviously the following is a selective synopsis of an aspect of it.

The report summarises (pp. 7 and 8) the 12 sub-domains which attracted the highest percentage of mentions, which it equates with those that are important to students. In rank order they are:

  1. Course Design: learning methods (14.2% share of the 285,000 hits)
    There were 60 different methods identified as the best aspect of students’ studies, falling into 5 clusters:

    • 16 face-to-face methods that focused on interactive rather than passive learning
    • 7 independent study and negotiated learning methods
    • 20 practice-oriented and problem-based learning methods
    • 6 methods associated with simulated environments and laboratory methods
    • 11 ICT enabled learning methods
  2. Staff: quality and attitude (10.8%)
  3. Staff: accessibility (8.2%)
  4. Course Design: flexibility & responsiveness (8.2%)
  5. Course Design: structure & expectations (6.7%)
  6. Course Design: practical theory links (5.9%)
  7. Course Design: relevance (5.6%)
  8. Staff: teaching skills (5.4%)
  9. Support: social affinity (3.8%)
  10. Outcomes: knowledge/skills (3.8%)
  11. Support: learning resources (3.5%)
  12. Support: infrastructure & learning environment (3.4%)

Aggregating the sub-domains into domain totals:

  • Course design – 40.6%
  • Staff – 24.4%
  • Support – 10.7%
  • Outcomes: knowledge/skills – 3.8%

Link to the 7 principles

A quote from the report

The analysis revealed that practice-oriented and interactive, face-to-face learning methods attracted by far the largest number of ‘best aspect’ comments.

Of the 7 Principles for Good Practice in Education mentioned in the last post, #3 is “encourages active learning”.

What about CQU students

In late 2007 we asked CQU’s distance education students three questions

  1. What did you like or find useful?
  2. What caused you problems?
  3. What would you like to see?

Students were asked to post their answers to an anonymous discussion forum. This means they could see each other’s posts and respond.

An initial summary of the responses is available and CQU staff can actually view a copy of the discussion forum containing the original student comments.

A simple analysis revealed the following top 10 mentions

  1. 106 – Some form of eLecture – video, audio etc.
  2. 86 – Quick, effective and polite responses to study queries.
  3. 66 – Clear and consistent information about the expectations of the course and assignments e.g. making past assignments and exams available.
  4. 55 – Study guides.
  5. 53 – Good quality and fast feedback on assignments.
  6. 33 – For resources that are essentially print documents to be distributed as print documents.
  7. 30 – A responsive discussion board.
  8. 27 – Online submission and return of assignments.
  9. 25 – More information about exams, including more detailed information on how students went on exams.
  10. 21 – Having all material ready by the start of term.

CQU Students – 1996

Back in 1996 CQU staff undertook a range of focus groups with CQU distance education students aimed at identifying issues related to improving distance education course quality. This work is described in more detail in a paper (Purnell, Cuskelly and Danaher, 1996) from the Journal of Distance Education.

Arising from this work were six interrelated areas of issues. These issues were used to group the suggested improvements from the DE students; the improvements are explained in detail in the paper and summarised below.

  1. student contact with lecturers/tutors,
    • Easy access to people with relevant expert knowledge and skills (usually the lecturer).
    • Flexible hours for such access.
    • Some personal contacts through telephone and, where possible, some face-to-face contact.
    • Additional learning resources, such as audio- and videotapes to provide more of a personal touch.
  2. assessment tasks,
    • Detailed feedback (approximately one written page) on completed assessment tasks indicating how to improve achievement.
    • Timely feedback so that students can utilize feedback in future assessment tasks in the unit.
    • A one-page criteria and standards sheet showing specific criteria to be used in each assessment task and the standards associated with each criterion (statements of the achievement required for a high grade, etc.).
    • Clear advice on assessment tasks in the unit resource materials and in other contacts such as teleconferences.
    • Where possible, the provision of exemplar responses to similar assessment tasks be provided in the study materials.
    • Lecturers to be mindful of the differences in resources available to rural students compared to those in larger urban areas when setting and marking assessment tasks.
  3. flexibility,
    • Non-compulsory residential schools available at various locations of no more than three days’ duration and incorporating use of facilities such as libraries.
    • Greater consideration of the complexities of lives of distance education students by encouraging, for example, more self-paced learning.
    • Access to accredited study outside traditional semester times.
    • Lecturers/tutors to consider more fully the needs of isolated students in rural areas in support provided.
  4. study materials,
    • Ensure study materials arrive on time (preferably in the week prior to the commencement of a semester).
    • Efficient communications with students – particularly with the written materials provided in addition to the study materials.
    • Ensure each unit’s study guide matches other resources used in a unit, such as a textbook.
    • Lecturers should be mindful of extra costs for students to complete a unit in which, for example, specialized computer software might be needed; if a textbook must be purchased, it should be used sufficiently to justify its purchase.
    • Lecturers should cater to the range of students they have, especially from rural areas, with the study requirements for each unit (many participants reported that self-contained study materials in which there was little or no need to secure other resources to achieve high grades were valued).
  5. mentors, and
    • Having access to mentors is desirable but should be optional for students.
    • Issues about the role of a mentor need to be clarified.
  6. educational technology.
    • Continue to use and make more effective use of technologies familiar to students, such as the telephone and audio- and videocassettes.
    • Examine ways of minimizing access costs to the Internet for students, especially in rural areas.
    • Provide appropriate technical support for students to be able to access and use the Internet.
    • Provide professional development for staff to meet individual needs for using educational technologies involving, for example, interactive television, audio graphics, CD-ROM, e-mail, and the World Wide Web.

The commonalities between this list, from 1996, and the list generated in 2007 are not small.

Creating quality course websites – the pragmatic approach

In a previous post I laid out some rationale for an organisational approach to increase the usage of course websites. In this post I provide more detail on the rationale behind the pragmatic approach, which was described this way in that previous post.

  • Pragmatic – ad hoc changes to particular aspects of a course website.
    Most academic staff make these changes in an unguided way. I’ll suggest that you are likely to obtain greater success if those ad hoc changes are guided by theories and frameworks such as the Technology Acceptance Model (TAM) and related work (Davis, 1989), Diffusion Theory (Rogers, 1995) and the 7 Principles for Good Practice in Education (Chickering and Gamson, 1987).

I’ll describe each of the three theories that form the foundation of this idea. In a later post, I’ll try and take up the idea of how this could be used in the design of a course website.

The fundamental idea is that these three theories become the basis for guiding the design of a standard course website which becomes the minimal online course presences for an institution. These theories are applied with close attention to the local context and consequently there will be differences between contexts, perhaps even between disciplines or types of courses.

Technology acceptance model

The Technology Acceptance Model (TAM) suggests that there are two main factors which will influence how and when people use an information system (the assumption is that a course website is an information system):

  1. Perceived usefulness.
    “The degree to which a person believes that using a particular system would enhance his or her job performance” (Davis 1989).
  2. Perceived ease of use.
    “The degree to which a person believes that using a particular system would be free from effort” (Davis 1989).

Some more discussion about the use of TAM within e-learning can be found in Behrens, Jamieson et al. (2005).

The questions that arise from this idea for the design of a standard course website might include:

  1. What do the students currently find useful?
  2. What additions might they find useful?
  3. The same questions applied to all staff, both teaching and support.
  4. How can these requirements be fulfilled in a way that is easy to use?
  5. Just how do you determine that?

Diffusion theory

Diffusion Theory (Rogers 1995) encompasses a number of related theories explaining why people adopt (or don’t adopt) innovations. The best known of these is the theory of perceived attributes.

The idea is that how a potential adopter perceives an innovation’s attributes influences whether or not they will adopt it. The perceived attributes that have the biggest influence on adoption are:

  • Relative advantage.
    The degree to which an innovation is perceived as better than the idea it supersedes.
  • Compatibility.
    The degree to which an innovation is perceived as being consistent with the existing values, past experiences, and needs of potential adopters.
  • Complexity.
    The degree to which an innovation is perceived as difficult to understand and use.

If you want students to make use of an online course presence then they must perceive the services offered by that course presence to be useful (relative advantage), easy to use (complexity) and something that meets their expectations of a university experience (compatibility).

The questions which arise from this include

  • What do students expect from their university learning experience?
  • What are their capabilities when it comes to technology and online learning?
  • What do they find useful?

This is one theoretical explanation for why you would expect online lectures, especially if implemented with a YouTube-like interface, to be seen as a positive thing by students – in particular because most students still see lectures as a core component of a university education. They expect to have lectures.

This prediction is backed up by the findings of the Carrick Project examining the impact of web-based lecture technologies. You can find particular mention of this in the project’s publications.

Jones, Jamieson and Clark (2003) talk more about the use of diffusion theory for choosing potential web-based educational innovations.

That paper moves beyond the perceived-attributes component of diffusion theory. The other components of diffusion theory offer a range of insights and potential advice for other aspects of this type of project. For example:

  • Innovation-decision – whether the decision to adopt a particular innovation is an optional, collective or authority decision.
    Authority decisions enable the fastest adoption, but may be circumvented.
  • Communication channels – the nature of how information is communicated to people impacts on the level of adoption.

The 7 principles

The 7 principles for good practice in undergraduate education were drawn from research on good teaching and learning and were intended to help academics address problems including apathetic students.

The 7 principles are that good practice in education:

  1. encourages contact between students and faculty,
  2. develops reciprocity and cooperation among students,
  3. encourages active learning,
  4. gives prompt feedback,
  5. emphasizes time on task,
  6. communicates high expectations, and
  7. respects diverse talents and ways of learning.

It could be argued that the 7 principles are very specific, research-informed advice about how to design activities and resources which students perceive to be useful and which provide them with relative advantage – which has obvious connections with diffusion theory and TAM.

For example, principle 4 is “gives prompt feedback”. A design feature derived from that might be to return marked assignments to all students within 2 days. Based on my experience with students, they would perceive this as very useful and believe they gain an advantage from it.

This connection suggests that appropriate use of the 7 principles could significantly increase the use of an online course presence.

Implementation considerations – what about the staff?

The insights from diffusion theory and TAM also apply to the teaching staff and even the organisation. Teaching staff are critical to learning and teaching. If they aren’t positively involved it won’t work well. From an organisational perspective, anything that is planned needs to be doable within the resource constraints and also needs to be compatible with the organisation’s current structure.

Creating quality course websites

CQUni has an interest in increasing the quality of the course websites, as part of a broader push to improve the quality of learning and teaching. This post is an attempt to engage with some of the issues and develop some suggestions for moving forward.

There are many arguments why this particular focus on course websites might be somewhat misguided. For example, there is a growing argument (under various titles, including Personal Learning Environments) for the need to move away from this sort of institutional focus to a much greater focus on the learning environment of individual students. While those discussions are important, as an institution we do need to have a pragmatic focus and improve what we do and what our students experience.

So, this post and any subsequent project ignores those broader questions. However, that’s not to say that CQU isn’t engaging in those broader questions, we are. For example, the PLEs@CQUni project is aiming to examine some of the questions raised by notions of PLEs and social software and what impact they might/should have on institutional practice.

What is quality and success?

Before embarking on any sort of attempt to “improve quality” you should probably seek to define what quality is. When do you know that you have succeeded?

There have been any number of attempts to determine quality standards for course websites. Many of these draw on guidelines from the educational research literature or from Human-Computer Interaction, Information Architecture and other web/online-related disciplines. I’m from an information systems background so, not surprisingly, I’ll draw on that background.

Within the information systems research literature, how to determine success and consequently replicate it has received a great deal of attention. One of the problems this attention has established is that the notion of success is extremely subjective. An IT department will label something successful while a user of the same system may disagree strongly. The finance division may have yet another perspective. For this and other reasons the IS literature has moved on to using system use as a measure of success (Behrens, Jamieson et al. 2005).

That is, a system or a tool is successful if there is large and sustained usage of it by people – hopefully the people you intended. As you might imagine, there has been subsequent research talking about the quality of that use. However, the level of use has been established as a fairly reliable measure of success.

I’m going to suggest that the level of student and staff use of a course website is a useful benchmark for the success, and even quality, of an online course. You certainly cannot impact on learning outcomes or students’ perceptions and experience through an online course presence if they don’t make use of it.
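As one concrete, hedged illustration of what “level of use” might mean in practice, the sketch below computes the proportion of enrolled students who used a course site each week. The log format and field names are hypothetical; real figures would come from the LMS activity logs.

```python
from collections import defaultdict

def weekly_usage_rate(hits, enrolled):
    """hits: iterable of (student_id, week_number); enrolled: set of student ids."""
    active = defaultdict(set)
    for student, week in hits:
        if student in enrolled:
            active[week].add(student)
    return {week: len(students) / len(enrolled)
            for week, students in sorted(active.items())}

# Hypothetical data: three enrolled students, two of whom used the site.
print(weekly_usage_rate([("s1", 1), ("s2", 1), ("s1", 2)], {"s1", "s2", "s3"}))
# {1: 0.666..., 2: 0.333...}
```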

If you accept this, then the question becomes what can we do to encourage use, to encourage success.

If you don’t accept it – and many people might not – then the rest of this argument becomes almost pointless, and I would like to hear the arguments why it doesn’t make sense.

Encouraging use

Drawing on some of the ideas from a previous post, I’m going to suggest that there are two main approaches you can take to increase the use of a course website

  1. Pragmatic – ad hoc changes to particular aspects of a course website.
    Most academic staff make these changes in an unguided way. I’ll suggest that you are likely to obtain greater success if those ad hoc changes are guided by theories and frameworks such as the Technology Acceptance Model (TAM) and related work (Davis, 1989), Diffusion Theory (Rogers, 1995) and the 7 Principles for Good Practice in Education (Chickering and Gamson, 1987).
  2. Re-design – where the entire course (and consequently course website) are re-designed by a project that returns to first principles.
    Again, most academic staff I’m familiar with do this type of re-design in a fairly unguided way. I’ll suggest that re-design approaches informed by appropriate educational theories or frameworks are more likely to succeed. I’ll suggest that the approach already being used at CQU, which uses Constructive Alignment (Biggs and Tang, 2007) and the 7 Principles (Chickering and Gamson, 1987), works well.
The argument is that a well-implemented approach drawing on either of these options has the capability to improve the use of a course website. However, I will propose that the cost and level of success of each approach is different, as shown in the following table. The table suggests some potential characteristics of the two approaches as applied to an institutional setting, i.e. not an individual course, but a large collection of courses within a program, faculty or university. Obviously these are broad predictions of outcomes abstracted away from any particular context.

| Characteristic | Pragmatic | Re-design |
| --- | --- | --- |
| Level of use | A significant increase in use is possible | A really significant increase in use is possible |
| Quality of use/outcomes | Some increase in the quality of use and student outcomes, but still largely reliant on the individual student’s capabilities rather than the actual course website | Potentially huge increases in the quality of use and student outcomes |
| Difficulty/cost of implementation | Somewhat difficult but not all that expensive; large-scale change in a university is always difficult | Extremely difficult and expensive; typically requires the academic staff associated with the courses to radically rethink their conceptualisations of learning and teaching, which is not easy |

The suggestion

I’m hoping to expand on this further in subsequent posts, as an attempt to outline a project by which an institution like CQU could significantly improve the use of its course websites and subsequently improve the learning experience of its students.

In summary, the proposal involves the following

  1. A broad-scale push for pragmatic improvement to course sites.
    A project to ensure that all course sites are informed by diffusion theory, TAM and the 7 principles in a way that is “automatic” and simple. Essentially, (almost) all course websites should at least exhibit these characteristics.
  2. A process of complete re-design targeting courses with the largest numbers of students.
  3. An on-going process by which the lessons and outcomes of the re-design are fed into the broad-scale process of pragmatic improvement.