Product models – LMS, BoB and alternatives

The following completes the “alternate models” section of the Product component started in a previous post. It’s a bit rough and ready, but hopefully good enough.

Product models

The ERP market was one of the fastest growing and most profitable areas of the software industry during the last three years of the 1990s (Sprott 2000) and has tended to dominate the IT field (Light, Holland et al. 2001). It was at this same time – the late 1990s – that the availability of commercial LMS and their use within universities became increasingly prevalent. Perhaps then, it is not surprising that, in terms of the underlying product model, an LMS appears to be very close to a single-vendor Enterprise Resource Planning (ERP) system. In both cases, all the required functionality is provided in one, integrated package sourced from a single provider. In comparing the literature it is possible to see significant commonality between the advantages and disadvantages of an LMS and those of an ERP system. The aim of this section is not to repeat the advantages and disadvantages of LMS – covered somewhat in the “LMS characteristics and limitations” section in Section 2.1.2 – or ERPs – covered in more detail in the relevant literature (Kallinikos 2004; Light 2005). It is instead to establish the existence of other potential product models and compare these with the ERP model. In addition, towards the end of this section the recent complicating factor of user-owned technology is raised.

There are two approaches to the design of an LMS (Weller, Pegler et al. 2005):

  1. monolithic or integrated approach; and
    All common tools are provided in a single software package supplied and supported by one vendor. This is the predominant approach.
  2. best of breed approach.
    An alternative approach, also termed a component or hybrid architecture, which aims to provide the same level of integration while allowing selection of the components that best suit the local context.

The same two approaches can be identified in the broader provision of enterprise information systems. It is possible to identify a reasonable spread of literature (Dewan, Seidmann et al. 1995; Geishecker 1999; Light, Holland et al. 2001; Hyvonen 2003; MacKinnon, Grant et al. 2008; Burke, Yu et al. 2009) examining various questions arising out of the difference between a monolithic ERP product model and the best of breed (BoB) model. This may not be all that surprising, as such discussions have been billed as the “long-running debate” with the pendulum swinging from one view to the other and back again (Geishecker 1999). It is a debate encompassed by an even longer-standing debate over the centralisation or decentralisation of computing, its focus on efficiency versus effectiveness, and the supposed rational attempts at optimising the trade-off (King 1983). That debate appears unresolvable because the issues actually driving it are the politics of organisation and resources, especially the apparently central issue of control (King 1983).

ERP adoption involves a centralised organisation of processes and a tendency to reduce autonomy and increase rigidity (Lowe and Locke 2008). Centralisation of control preserves top management prerogatives in most decisions, whereas decentralisation allows lower level managers discretion in choosing among options (King 1983). A BoB approach allows each department to select its own solution (Dewan, Seidmann et al. 1995). Light, Holland and Wills (2001) perform a comparative analysis of the ERP (monolithic or integrated) and best of breed (BoB) approaches to enterprise information systems, which is summarised in Table 2.3.

Table 2.3 – Comparison of major differences between ERP and BoB (adapted from Light, Holland et al. 2001)
Best of breed | Single vendor ERP
Organisation requirements and accommodations determine functionality | The vendor of the ERP system determines functionality
A context sympathetic approach to BPR is taken | A clean slate approach to BPR is taken
Good flexibility in process re-design due to a variety in component availability | Limited flexibility in process re-design, as only one business process map is available as a starting point
Reliance on numerous vendors distributes risk as provision is made to accommodate change | Reliance on one vendor may increase risk
The IT department may require multiple skills sets due to the presence of applications, and possibly platforms, from different sources | A single skills set is required by the IT department as applications and platforms are common
Detrimental impact of IT on competitiveness can be dealt with, as individualism is possible through the use of unique combinations of packages and custom components | Single vendor approaches are common and result in common business process maps throughout industries. Distinctive capabilities may be impacted on
The need for flexibility and competitiveness is acknowledged at the beginning of the implementation. Best in class applications aim to ensure quality | Flexibility and competitiveness may be constrained due to the absence or tardiness of upgrades and the quality of these when they arrive
Integration of applications is time consuming and needs to be managed when changes are made to components | Integration of applications is pre-coded into the system and is maintained via upgrades

Even in 1983, over twenty-five years ago, it was recognized that the terrain in which to decide between centralized and decentralized computing was continually changing (King 1983). This change is driven in no small part by the changing nature of technology from mainframes to personal computers to managed operating environments. Similarly, the smaller discussion between ERP and BoB has also been influenced by changes in technology. In the early to mid-1980s, the mainframe-dominant market automatically defaulted to an integrated ERP approach (Geishecker 1999). Most recently, integration technologies like web services and service-oriented architectures (SOA) are seen to be enabling the adoption of BoB approaches (Chen, Chen et al. 2003). Such approaches are having an impact within the LMS field, with attempts at implementing a BoB LMS being enabled by the development of service-oriented architectures such as that by JISC (Weller, Pegler et al. 2005). Such an approach may allow a more post-industrial approach to the LMS, allowing the taking of parts that are needed, when they are needed, and granting control where it is needed (Dron 2006). Bailetti et al (2005) report on an early system that uses web services to implement a BoB approach.

In general, however, discussion about and comparison between ERP and BoB approaches to enterprise systems suffer the same limitation as the discussion of procurement strategies in the previous section. They are still based on the assumption that it is the responsibility of the institution, and its information technology department, to select, own and maintain all of the information systems required by users. Web 2.0, e-learning 2.0 (Downes 2005) and the rise of social software require that the organization of e-learning move beyond centralized and integrated LMS and towards a variety of separate tools which are used and managed by the students in relation to their self-governed work (Dalsgaard 2006). Stiles (2007) argues that in the future organizational needs will be best met by a BoB approach, while student-initiated processes will be done using their choice of tools and services, an approach that provides students with a tool-box of loosely joined small pieces (Ryberg 2008).


Bailetti, T., M. Weiss, et al. (2005). An open platform for customized learning environments. International Conference on Management of Technology (IAMOT).

Burke, D., F. Yu, et al. (2009). "Best of Breed Strategies: Hospital characteristics associated with organizational HIT strategy." Journal of Healthcare Information Management 23(2): 46-51.

Chen, M., A. Chen, et al. (2003). "The implications and impacts of web services to electronic commerce research and practices." Journal of Electronic Commerce Research 4(4): 128-139.

Dalsgaard, C. (2006). "Social software: E-learning beyond learning management systems." European Journal of Open, Distance and E-Learning.

Dewan, R., A. Seidmann, et al. (1995). Strategic choices in IS infrastructure: Corporate standards versus "Best of Breed" Systems. ICIS’1995.

Downes, S. (2005). "E-learning 2.0." eLearn 2005(10).

Dron, J. (2006). Any color you like, as long as it’s Blackboard. World Conference on E-Learning in Corporate, Government, Healthcare and Higher Education, Honolulu, Hawaii, USA, AACE.

Geishecker, L. (1999). "ERP vs. best-of-breed." Strategic Finance 80(9): 62-67.

Hyvonen, T. (2003). "Management accounting and information systems: ERP versus BoB." European Accounting Review 12(1): 155-173.

Kallinikos, J. (2004). "Deconstructing information packages: Organizational and behavioural implications of ERP systems." Information Technology & People 17(1): 8-30.

King, J. L. (1983). "Centralized versus decentralized computing: organizational considerations and management options." ACM Computing Surveys 15(4): 319-349.

Light, B. (2005). "Potential pitfalls in packaged software adoption." Communications of the ACM 48(5): 119-121.

Light, B., C. Holland, et al. (2001). "ERP and best of breed: a comparative analysis." Business Process Management Journal 7(3): 216-224.

Lowe, A. and J. Locke (2008). "Enterprise resource planning and the post bureaucratic organization." Information Technology & People 21(4): 375-400.

MacKinnon, W., G. Grant, et al. (2008). Enterprise information systems and strategic flexibility. 41st Annual Hawaii International Conference on System Sciences, Waikoloa, Hawaii.

Ryberg, T. (2008). Challenges and potentials for institutional and technological infrastructures in adopting social media. 6th International Conference on Networked Learning, Halkidiki, Greece.

Sprott, D. (2000). "Componentizing the enterprise application packages." Communications of the ACM 43(4): 63-69.

Stiles, M. (2007). "Death of the VLE? A challenge to a new orthodoxy." Serials 20(1): 31-36.

Weller, M., C. Pegler, et al. (2005). "Students’ experience of component versus integrated virtual learning environments." Journal of Computer Assisted Learning 21(4): 253-259.

Procurement and software: alternate models for e-learning

And here’s the next bit of the Products component for chapter 2 of my thesis. The aim of this section is basically to argue that the LMS approach to e-learning embodies one view of how to procure software and one software model. I eventually aim to argue that both of these predominant models are essentially bad matches for the nature of e-learning within a university. The following is intended more to identify that there are alternatives than to argue for the inappropriateness. That’s for later. But I doubt I’ve stopped it coming through.

This section focuses on procurement, I hope to have the product section up later today.

Procurement and software: alternate models for e-learning

As has been noted previously, within higher education the selection and purchase of an LMS has become the almost ubiquitous and unquestioned technical solution to the provision of e-learning. This singular approach can be said to embody a single approach to the procurement of software – “buy” – and a standard software model – the integrated, enterprise system. This section is based on the assumption that there are alternatives to both these models. There are different approaches to software procurement and different software product models that may be more appropriate for e-learning within universities, especially in light of recent changes within the broader information technology market place.

Procurement strategies for information systems

There is recognition that the choice of IS procurement strategy is critical for company operations and that different kinds of systems require different kinds of resources, and consequently different procurement strategies are applicable (Hallikainen and Chen 2005). Alignment between information technology and business is seen by scholars as an important principle for the success of IT deployment and implementation (Beukers, Versendaal et al. 2006). Saarinen and Vepsalainen (1994) propose the Procurement Principle as a prescriptive model for information systems investments. The principle is based on the assumption that optimal decisions about procurement are made when there is alignment between three choices: what type of system, what procurement strategy, and what type of organisational requirements (Wild and Sobernig 2007).

The Procurement Principle is based on transaction cost economics and draws on two inherent factors – specificity of system design and uncertainty of requirements – to develop three generic types of organisational requirements (Saarinen and Vepsalainen 1994):

  1. routine;
    Common to many or most organizations with stable requirements and low uncertainty.
  2. standard; and
    Common to a group of organizations, possibly within a given domain (Wild and Sobernig 2007), with some variety and uncertainty in requirements.
  3. speculative.
    Highly specific to one company and involve high uncertainty in terms of functionality, user interfaces and the competitiveness of the organisation.

In terms of the two inherent factors – specificity of design and requirements uncertainty – the above generic types represent systems on the diagonal. Saarinen and Vepsalainen (1994) recognise that other types of systems exist, suggest that they may be difficult to deal with, and recommend either modifying requirements to fit the three identified types or postponing such systems.

Saarinen and Vepsalainen (1994) identify generic types of developers that fit with these generic types of requirements. The three types are:

  1. implementers;
    Employed by an external software development company, these developers have high levels of product-specific knowledge but only limited, common knowledge about the user organisation.
  2. analysts; and
    Commissioned by the client, these staff are responsible for specifying user requirements and improving system solutions by drawing on their ability to solve generic problems and specify complex integrated systems.
  3. innovators.
    Usually employed by the user organisation, these developers have specialised knowledge about the user organisation, its users and information systems. They can communicate easily with the users and can specify and create new innovative solutions.

The appropriate matching of the type of requirements and the type of developer is then used to identify three efficient and generic procurement strategies. In large projects, these generic strategies will have to be combined and redefined in practice (Saarinen and Vepsalainen 1994). The three generic strategies are (Saarinen and Vepsalainen 1994):

  1. Routine systems can be best implemented by acquiring software packages from implementers.
  2. Standard applications require software contracting by analysts and possibly other outside resources for implementation.
  3. Speculative investments are best left for internal development by innovators.
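The alignment prescribed by the Procurement Principle can be sketched as a simple lookup. This is only an illustrative sketch of the mapping described above; the function and dictionary names are my own, not taken from Saarinen and Vepsalainen.

```python
# A minimal sketch of the Procurement Principle's alignment between
# requirement types, developer types and procurement strategies
# (Saarinen and Vepsalainen 1994). Names are illustrative only.

ALIGNMENT = {
    # requirement type: (developer type, procurement strategy)
    "routine":     ("implementers", "purchase a software package"),
    "standard":    ("analysts",     "contract customised development"),
    "speculative": ("innovators",   "develop in-house"),
}

def recommend(requirement_type: str) -> str:
    """Return the aligned strategy and developer type for a generic requirement type."""
    developer, strategy = ALIGNMENT[requirement_type]
    return f"{strategy} (via {developer})"

print(recommend("speculative"))  # develop in-house (via innovators)
```

The point of the sketch is the diagonal nature of the principle: requirements off the diagonal (e.g. highly specific but low uncertainty) have no entry, which mirrors Saarinen and Vepsalainen's suggestion that such systems are difficult to deal with.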

These three generic strategies correspond to the three major approaches to information systems development: software product purchase, contractual customized development with outside vendors, and in-house development (Heiskanen, Newman et al. 2000). The selection and implementation of an LMS within a university represents software product purchase with some limited integration work. Increasingly few institutions adopt the other approaches, either individually or in combination.

The over-emphasis on the software product purchase approach contributes to an increasingly techno-centric view. Due to the cost involved in modifying a complex software package, most commercial systems require the institution to modify its practices to accommodate the system (Dodds 2007). So, rather than using IT to foster a culture of innovation by taking the point of view of the individual (Dodds 2007), or even the organisation, the focus is on the technology and its capabilities. As early as 1982 an alternate evolutionary approach, which appears much closer to in-house development, was recommended by Kerr and Hiltz (1982) for computer-mediated communication and found to be common with interactive systems that provide cognitive support. Kerr and Hiltz (1982) suggested that because the technology was so new, the possibilities for alternative functions and capabilities so numerous, and users unable to adequately understand what they might do with a new technology until they had an opportunity to experience it, an approach of feedback, evaluation and incremental implementation of new features was desirable.

The reasons identified by Kerr and Hiltz (1982) seem to fit two (requirements identity and requirements volatility) of the three categories of risks associated with requirements development identified by Tuunanen et al (2007) and shown in Table 2.2. If this observation remains appropriate for current practices around e-learning it would appear to question the alignment between the LMS procurement approach and the types of requirements that would make that approach the most efficient as identified by the Procurement Principle.

Table 2.2 – Requirements development risks (adapted from Tuunanen, Rossi et al. 2007)
Risks | Definition
Requirements identity | The availability of requirements; high identity risk indicates requirements are unknown or indistinguishable
Requirements volatility | The stability of requirements; high volatility risk indicates requirements easily change as a result of environmental dynamics or individual learning
Requirements complexity | The understandability of requirements; high complexity risk indicates requirements are difficult to understand, specify, and communicate

In addition, both the nature of the LMS and the procurement model assume that it is necessary for the organisation to provide all of the components of the information system. In recent years the functionality and usability of technology available to individuals has been outstripping that of technology provided centrally by institutions (Johnson and Liber 2008). Increasingly, university students and staff are using a collection of tools and systems they choose, rather than tools and systems selected, owned and maintained by the university (Jones 2008).


Beukers, M., J. Versendaal, et al. (2006). "The procurement alignment framework construction and application." Wirtschaftsinformatik 48(5): 323-330.

Hallikainen, P. and L. Chen (2005). "A holistic framework on information systems evaluation with a case analysis." The Electronic Journal Information Systems Evaluation 9(2): 57-64.

Johnson, M. and O. Liber (2008). "The Personal Learning Environment and the human condition: from theory to teaching practice." Interactive Learning Environments 16(1): 3-15.

Jones, D. (2008). PLES: framing one future for lifelong learning, e-learning and universities. Lifelong Learning: reflecting on successes and framing futures. Keynote and refereed papers from the 5th International Lifelong Learning Conference, Rockhampton, CQU Press.

Saarinen, T. and A. Vepsalainen (1994). "Procurement strategies for information systems." Journal of Management Information Systems 11(2): 187-208.

Wild, F. and S. Sobernig (2007). Learning tools in higher education: Products, characteristics, procurement. Second Conference on Technology Enhanced Learning. Crete, Greece.

Learning Tools in Higher Education: Products, Characteristics, Procurement

Back to the PhD today, probably will do a couple of summaries of papers I’m reading. The focus is on the product models and procurement strategies used by Universities to solve the technical problem of e-learning. I start with a paper with the title “Learning Tools in Higher Education: Products, Characteristics, Procurement” (Wild and Sobernig, 2007)


Uses interviews of 100 European universities from 27 countries to identify the tools they use to facilitate learning, how intensively they are used and what procurement strategies are used.

Gives some rough figures of types of systems used. Gives a longitudinal feel to some previous studies.

Seems to indicate that European institutions find it "very important to have an institutional platform run by the institutions themselves, however, with strong connections to the open-source world".

I wonder if the results would be the same in the US or Australia where commercial LMS adoption has been more predominant – though changing somewhat.

The reporting of the findings is, to me at least, somewhat confusing.

The greatest value for me is pointing me to the literature (Saarinen et al, 1994; Heiskanen et al, 2000) that proposes an optimal relationship between types of requirements, types of system and types of procurement strategy. I’ll be using this in the PhD and potentially some papers.


Most unis are using some sort of LMS. 250 commercial software providers, 40 open source products – large and heterogeneous products. Some evidence (Pituch and Lee, 2006) that functionality and interactivity drive usage.

What tools are being used today?

Products in the market

Participants report

  • 182 distinct tools occurred 290 times: LMS, content management, collaboration tools
  • Moodle most used – 44 instances, but only 15 of these not running in parallel with others.
  • WebCT – 14 installations.
  • 15 pure content management systems in 20 installations
  • 18 pure admin information systems – 19x.
  • 22 different authoring tools
  • 14 learning object repositories
  • 10 different assessment tools
  • 32 different collaboration tools with 51 installations
  • Most heavily used systems identified by highest active number of users – WebCT (twice), .LRN (once), CampusNet (once), Blackboard (once) and eLSe (once).

References a couple of other similar investigations of tools

Since one of those earlier investigations, five systems have vanished.

Portfolio characteristics

What activities did the tools support:

  • text-based communication – 87 (out of 100)
  • Assessments – 81
  • Quality assurance and evaluation – 53
  • Collaborative publishing – 52
  • Individual publishing – 44
  • social networking – 34
  • Authoring learning designs – 31
  • Audio/video conferencing – 31
  • Audio/video broadcasting – 25
  • User portfolio management – 23
  • simulations/online labs – 21

Text-oriented predominant. Multimedia lacking support

Following table compares reports of courses sites from two previous studies and this one – some issues in comparison.

Categories | Paulsen (1999) | Paulsen (2003) | Wild and Sobernig (2007)
Up to 15 courses | 68% | 38% | 22%
More than 15 | 25% | 50% | 56%

This study also found that 36% had more than 100 course sites and 5% more than 1000.

Tool usage: 49/100 delivery and 54/100 course management.

Report on problems with calculating the number of users because of various difficulties.

Procurement strategies

Procurement decisions based on 3 types of requirements

  1. Speculative requirements – organisationally unique or involve uncertainty.
  2. Standard requirements – common to organisations of a particular domain.
  3. Routine requirements – invariant across domain boundaries.

Literature suggests that in optimal cases, organisational choices are driven by these requirements, and that this choice represents a combination of

  • Software type – custom developed, packaged and off-the-shelf
  • Procurement strategy – in-house development (internal procurement), contracting and acquisition (both external procurement).

Same literature suggests an alignment between requirement types and organisational choices:

  • Predominantly speculative – internal development of custom software.
  • Standard requirements – customised, packaged software where customisation is externally contracted.
  • Routine requirements – off-the-shelf software.

At this stage, the explanation of the findings from the survey is really hard to follow – at least for me. I would’ve thought this should be easy. Keep that in mind when you read the following.

  • 40% follow procurement configurations considered optimal
  • 44% reported mixed configurations of requirements and procurement strategy
  • 5% report external procurement from external contractors
  • External procurement, when it does occur, predominantly with speculative requirements.
  • Internal development equally distributed across requirements – 21% speculative, 19% mixed, 18% standard
  • There are other percentages reported, but I can’t follow it and/or make sense of it with the ones I’ve summarised above


Heiskanen, A., M. Newman, et al. (2000). “The social dynamics of software development.” Accounting, Management & Information Technology 10(1): 1-32.

Paulsen, M. F. (2000). Online Education: An International Analysis of Web-based Education and Strategic Recommendations for Decision Makers. Bekkestua, Norway, NKI Forlaget.

Paulsen, M. F. (2003). “Experiences with Learning Management Systems in 113 European Institutions.” Educational Technology & Society 6(4): 134-148.

Pituch, K. and Y. Lee (2006). “The influence of system characteristics on e-learning use.” Computers & Education 47(2): 222-244.

Saarinen, T. and A. Vepsalainen (1994). “Procurement strategies for information systems.” Journal of Management Information Systems 11(2): 187-208.

Wild, F. and S. Sobernig (2007). Learning tools in higher education: Products, characteristics, procurement. Second Conference on Technology Enhanced Learning. Crete, Greece.

Comparisons between LMS – the need for system independence

Some colleagues and I are putting the finishing touches on a paper that has arisen out of the indicators project. The paper is an exploratory paper, seeking to find interesting patterns that might indicate good or bad things about the use of LMS (learning management systems, aka course management systems, virtual learning environments etc) that might help improve decision-making by all participants (students through management). I hope to post the paper in coming days.

This post is about one aspect of the paper: the section where we compare feature adoption between two different LMS that have been used side-by-side at our institution, Blackboard and Webfuse. (Important: I don’t believe Webfuse is an LMS and will argue that in my PhD (Webfuse is the topic of my thesis). But it’s easier to go with the flow). This is one of the apparent holes in the literature: we haven’t found any publications analysing and comparing system logs from different LMS, especially within the one institution over the same long time frame. In our case we went from 2005 through the first half of 2009.

The aim of this post is to identify the need and argue for the benefits in developing a LMS independent means of analysing and comparing the usage logs of different LMS at different institutions.

Anyone interested?

The following gives a bit of the background, reports on some initial findings and out of that identifies the need for additional work.

Our first step

Blackboard and Webfuse have a number of significant differences. All LMS have somewhat different assumptions, designs and names. Webfuse is significantly different, but that’s another story. The differences make comparisons between LMS more difficult. How do you compare apples with apples?

The only published approach we’re aware of that attempts to make a first step towards a solution to this problem is the paper by Malikowski, Thompson and Theis (2007) for which the abstract makes the following claims

…This article recommends a model for CMS research that equally considers technical features and research about how people learn… This model should also ease the process of synthesizing research in CMSs created by different vendors, which contain similar features but label them differently.

I’ve talked about and used the model previously (first, second and other places). For the purposes of the paper we produced a different representation of the Malikowski et al (2007) model.

Reworked Malikowski model

From my perspective there are three contributions the model makes

  1. Provides an argument for 5 categories of features an LMS might have, gives them a common title and specifies which common features fit where.
  2. Draws on existing literature to give some initial benchmarks for the level of adoption (specified by the percentage of courses with a feature) to be expected, grouped into three levels.
    I must admit that Malikowski et al don’t specify the percentages directly, these are taken from the examples they list in tables.
  3. Suggests a model where features are adopted sequentially over time as academics become more comfortable with existing features.

Blackboard versus Webfuse – 2005 to 2009

The benefit the model has provided us is the ability to group the different features of Webfuse and Blackboard into the five categories and then compare the levels of feature adoption between the two systems and with the benchmarks identified in the Malikowski et al (2007) paper. The following summarises what we found.
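As a rough sketch, the grouping step amounts to mapping each LMS's own feature names onto the five common categories and then computing per-category adoption as the percentage of course sites using at least one feature in that category. The feature names and figures below are hypothetical illustrations, not the actual Blackboard or Webfuse data.

```python
# A hypothetical sketch of the comparison approach: map LMS-specific
# feature names onto the Malikowski et al (2007) categories, then compute
# the percentage of course sites adopting each category.

CATEGORY_MAP = {
    # Blackboard-style names (illustrative)
    "announcements": "transmit content",
    "content_upload": "transmit content",
    "discussion_board": "class interaction",
    "assignment_dropbox": "evaluate students",
    "course_survey": "evaluate course",
    # Webfuse-style names (illustrative)
    "updates": "transmit content",
    "web_discussion": "class interaction",
    "online_assignment": "evaluate students",
    "barometer": "evaluate course",
}

def adoption_by_category(courses):
    """courses: list of sets of LMS-specific feature names used per course site.
    Returns {category: % of courses using at least one feature in it}."""
    counts = {}
    for features in courses:
        categories = {CATEGORY_MAP[f] for f in features if f in CATEGORY_MAP}
        for c in categories:
            counts[c] = counts.get(c, 0) + 1
    return {c: 100.0 * n / len(courses) for c, n in counts.items()}

# Three invented course sites from two different LMS
courses = [{"announcements", "discussion_board"},
           {"updates", "online_assignment"},
           {"content_upload"}]
print(adoption_by_category(courses))
```

Once both systems' logs are reduced to the same category percentages, the numbers can be compared directly with each other and with the benchmark regions, regardless of what each vendor calls its features.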

Transmitting content

Malikowski et al (2007) define this to include announcements, uploaded files and the use of the gradebook to share grades (but not assignment submission etc.). The following graph shows the percentage of course sites for Blackboard (black continuous line) and Webfuse (black dashed line), along with the “benchmark region” identified in Malikowski et al. In the case of transmitting content the “benchmark region” is between 50 and 100%.

Feature adoption - Transmit Content - Wf vs Bb

This shows that both Blackboard and Webfuse are in the “benchmark region”. Not surprising given the predominant use of LMSs for content transmission. What may be surprising is that Webfuse only averages around 60-75%. This is due to one of those differences between LMS. Webfuse is designed to automatically create default course sites that contain a range of content. Also, it’s quite common for the announcements facility in a Webfuse course site to be used by faculty management to disseminate administrative announcements to students.

So, in reality 100% of Webfuse courses transmit content. The percentages shown represent those courses where the academics have uploaded additional content or made announcements themselves.

Class interactions

Class interactions covers chat rooms, email, discussion forums, mailing lists etc. Anything that gets folk in a course talking.

Feature adoption - Class Interaction- Wf vs Bb

Both Blackboard and Webfuse are, to varying extents, outside of the “benchmark area”: Webfuse quite considerably, reaching levels near 100% in recent years, while Blackboard has only just crept over. This creeping over by Blackboard may be an indicator that the “benchmark area” is out of date. It was created drawing on 2004 and earlier literature. If feature adoption increases over time, the “benchmark area” has probably moved up.

Evaluating students

Online assignment submission, quizzes and use of other tools to assess/evaluate students.

Feature adoption - Evaluating Students - Bb vs Wf

Over recent years Webfuse has seen double the adoption of these features compared to Blackboard, growing outside the “benchmark area”. Most of this is online assignment submission; in fact, some of the courses using the Webfuse online assignment submission system are actually Blackboard courses.

Evaluating course/instructor

The last category we covered was evaluating the course/instructor through survey tools etc. We didn’t cover computer-based instruction as very few Blackboard courses use it and Webfuse doesn’t provide the facility.

Which raises an interesting question. I clearly remember a non-Webfuse person being quite critical that Webfuse did not offer computer-based instruction functionality – we could have added it, but no-one ever asked. What is better: paying for features few people ever use, or not having features that a few people will use?

Feature adoption: evaluating Courses Bb versus Wf

First, it should be pointed out that for “rarely used” features like course evaluation there is an absence of percentages in Malikowski et al (2007). I’ve specified 20% as the upper limit for this “benchmark area” because “moderately used” was 20% or higher. So it’s probably unfair to describe the Blackboard adoption level as being at the bottom of the range. On the other hand, Webfuse is streets ahead. Near 100%, dropping to just less than 40%. More on this below.

Work to do

Generating the above has identified a need or value in the following future work:

  • Do an up-to-date literature review and establish a new “benchmark area”.
    Malikowski et al (2007) rely on literature from 2004 and before. Levels of adoption have probably gone up since then.
  • Refine the list of features per category through the same literature review.
    In recent years LMS have added blogs, wikis, social networking etc. Where do they fit?
  • Refine the definition of “adoption”.
    Malikowski and his co-authors have used at least two very different definitions of adoption. There is apparently no work checking that the papers used to illustrate the model in Malikowski et al (2007) use a common definition of adoption.
  • Develop feature specific LMS independent usage descriptions.
    In their first paper Malikowski et al (2006) count adoption as the presence of a feature, regardless of how broadly it is used. This causes problems. For example, the course evaluation figure for Webfuse is near 100% because for a number of years a course barometer (Jones, 2002) was a standard part of a Webfuse default site, i.e. every course had one. Just doing a quick check, only 23% of Webfuse courses in 2006 had a barometer in which a student made a comment.

    Malikowski (2008) adopted a new measure for adoption. Course use of a particular feature had to be above the 25th percentile of use for that feature in order to be counted. I don’t find this a good measure. Just 1 student comment on a barometer could be a potentially essential use of the feature.

    There appears to be a need for being able to judge the level of use of a feature in a way that is sensitive to the feature. 1 entry in a gradebook for a course of 500 students is probably an error and can be ignored. 1 comment on a barometer for that same course that points out an important issue probably shouldn’t be ignored.

  • Attempt to automate comparison between LMS.
    In order to enable broader and quicker comparison between different LMS, whether between institutions or within institutions, there appears to be a need to automate the process, to make it quicker and more straightforward.

    One approach might be to design an LMS-independent database schema for the extended Malikowski et al (2007) model. Such a schema would enable people to write “conversion scripts” that take usage logs from Blackboard, Moodle, WebCT or whatever LMS and automatically insert them into the schema. Once someone has written the conversion script for an LMS, no-one else would have to. The LMS-independent schema could then be analysed and used to compare and contrast different systems and different institutions without the apples and oranges problem.
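As a very rough sketch of what such a schema and a conversion script might look like (the table layout, category mapping and course name below are my assumptions for illustration, not a proposed standard):

```python
# Sketch of an LMS-independent schema for the extended Malikowski
# categories, plus the shape a per-LMS "conversion script" might take.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE feature_use (
    institution TEXT,
    lms         TEXT,    -- 'Blackboard', 'Webfuse', 'Moodle', ...
    course      TEXT,
    year        INTEGER,
    category    TEXT,    -- Malikowski category
    feature     TEXT,    -- LMS-specific feature name
    uses        INTEGER  -- count of uses, however the LMS logs them
);
""")

def load_webfuse_log(conn, rows):
    """A 'conversion script': map (course, year, feature, count) tuples
    from one LMS's usage logs into the shared schema."""
    category_of = {  # LMS feature -> Malikowski category (illustrative)
        "course barometer": "evaluating course/instructor",
        "assignment submission": "evaluating students",
        "discussion forum": "class interactions",
    }
    for course, year, feature, count in rows:
        conn.execute("INSERT INTO feature_use VALUES (?,?,?,?,?,?,?)",
                     ("CQU", "Webfuse", course, year,
                      category_of.get(feature, "transmitting content"),
                      feature, count))

# Invented usage data for two features of one hypothetical course.
load_webfuse_log(conn, [("EXAMPLE101", 2006, "course barometer", 1),
                        ("EXAMPLE101", 2006, "discussion forum", 42)])

# Once a conversion script exists for each LMS, cross-LMS comparison
# becomes a single query over the shared schema.
for category, total in conn.execute(
        "SELECT category, SUM(uses) FROM feature_use GROUP BY category"):
    print(category, total)
```

Because each conversion script owns the messy, LMS-specific mapping, the analysis side never has to know which LMS the data came from.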


Jones, D. (2002). Student Feedback, Anonymity, Observable Change and Course Barometers. World Conference on Educational Multimedia, Hypermedia and Telecommunications, Denver, Colorado, AACE.

Malikowski, S., M. Thompson, et al. (2006). “External factors associated with adopting a CMS in resident college courses.” Internet and Higher Education 9(3): 163-174.

Malikowski, S., M. Thompson, et al. (2007). “A model for research into course management systems: bridging technology and learning theory.” Journal of Educational Computing Research 36(2): 149-173.

PhD Update #22 – one day active, but some movement

The last week has seen only one day spent on the PhD. Mainly due to working on a conference paper. The good news is that the paper is connected to the PhD. It looks at mining LMS usage logs to generate indicators of patterns which may be interesting. The paper includes a comparison of LMS feature adoption between Blackboard (CQU’s existing institutional LMS) and Webfuse – the topic of the PhD. Webfuse comes out favourably from a couple of perspectives. More on this later.

What I did

The intent expressed in the last PhD update was to complete the Product section and make a good start on the pedagogy section.

In the day and a bit I spent on the thesis I made some progress on the Product section.

So, one and a bit sections are left to complete Product.

What I’ll do next week

The plan is to complete Product and hopefully complete pedagogy.

At this stage, I should have a fair number of days to work on the PhD, so I might get somewhere close.

E-Learning 2.0 and reliability of external services

BAM is a little project of mine playing at the edges of post-industrial e-learning. Since 2006 it’s been relying on students creating and using blogs provided by external service providers – mostly one particular free provider.

This reliance on external service providers has been one of the “problems” raised by folk who don’t like the idea. They fear that because the university doesn’t own the technology or have a contract with the service provider there is no certainty about the quality of service. That the students and the staff will be left high and dry as the service is yanked in the middle of term.

Those fears are not unfounded. There have been stories of Web 2.0 services disappearing in the middle of a course. However, my argument has always been that if you pick the right service providers and design systems to allow safe failure you can achieve generally better outcomes (for a cheaper price) than the mandate and purchase approach traditionally taken by institutions.

This post shares a recent experience that supports this argument and ruminates on some alternate explanations for why this approach might be better.

The story

Yesterday I received an email from one of the teaching staff involved in a course that is using BAM. The course has 170+ students spread across 5+ campuses using BAM with their posts being tracked and marked by 10 staff. Three of the students for this teacher are reporting that they can’t access their blogs.

While BAM allows students to create and use a blog on any service provider, we have found it useful to suggest providers we find reliable. Originally this was Blogger; in the last year or so we’ve recommended a single provider which, based on our experience, we found more usable and reliable. I should point out, though, that the institution I work for does not have a formal agreement with that provider. The students create free blogs there like the thousands of other folk who do so each week. I’ll pick up on this later.

After looking at the reported problem it was apparent that the blogs for the three students had been suspended because they apparently had contravened the terms of service (ToS). This meant that the students couldn’t post to their blogs and no-one could see any of the content posted to them. While it seemed unlikely that the students would have done anything to deserve this, it’s amazing what can happen. So the question was what had they done?

A key part of BAM is that it is designed to enable safe failure. If, as in this case, a student’s blog disappears – for whatever reason – it doesn’t matter. BAM keeps a mirror of the blog’s RSS/Atom feed on a university server. So while I couldn’t see the posts on the blogs themselves, I could see the content in BAM. Nothing there looked like it would contravene the ToS.
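A minimal sketch of the mirroring idea, using only the standard library and a hypothetical cache directory (this is not BAM’s actual code, just the shape of the safe-failure design):

```python
# Fetch a student's RSS/Atom feed and keep a local copy so that
# tracking and marking can continue even if the hosted blog disappears.
import urllib.request
from pathlib import Path

def mirror_feed(feed_url, cache_dir="bam_mirror"):
    """Fetch feed_url and store the raw XML under cache_dir.
    On network failure, fall back to the last cached copy."""
    cache = Path(cache_dir)
    cache.mkdir(exist_ok=True)
    # Derive a flat filename from the URL for the cached copy.
    target = cache / (feed_url.replace("://", "_").replace("/", "_") + ".xml")
    try:
        with urllib.request.urlopen(feed_url, timeout=10) as resp:
            target.write_bytes(resp.read())
    except OSError:
        if not target.exists():
            raise  # no live feed and no mirror: genuinely unavailable
    return target.read_bytes()  # readers always work off the mirror
```

The point is that the university-side copy, refreshed whenever the feed is reachable, is the thing staff mark against; the external service vanishing only stops *new* posts arriving.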

So the only way forward was to ask the provider why they did this. This is where the fear of failure arises. I’ve seen any number of examples of technical support being horrible. Including instances where the institution has paid companies significant amounts of money for support only to receive slow responses that do little more than avoid the question or report “it looks alright from here”. If you get this sort of “service” from a supplier you pay, what sort of response was I going to get from a free provider?

Remember, these blogs are not my blogs. The blogs belong to students who attend the university I work for. A university the provider is not likely to know anything about, and one it certainly doesn’t have any relationship with. In fact, it’s a university that appears to favour a competitor. Both the IT division and our Vice-Chancellor have blogs hosted by Blogger.

For these reasons, I was somewhat pessimistic about the response I would get. I was fearful that this experience would provide support for the naysayers. How wrong I was.

12 hours after I contacted the provider about this issue, I received an email response which basically said “Oops, sorry, it looked like the profiles matched spammers. Our mistake. The blogs are back.”.

12 hours might seem like a long time if you’re picky. But I’m more than happy with that. It’s streets ahead of the response times I’ve seen from vendors who are being paid big money. It’s orders of magnitude better in terms of effectiveness.

Do one thing and do it well

It’s my proposition that if you choose a good Web 2.0 service provider, rather than being more risky than purchasing, installing and owning your own institutional version of the service, it is actually less risky, less expensive and results in better quality on a number of fronts. This is because a good Web 2.0 service provider has scale and is doing one thing and doing it well.

Unlike an integrated system (e.g. an LMS), a dedicated blog provider only has to concentrate on its blog engine. So its blog service is always going to be streets ahead of that provided by the LMS. Even if the LMS is open source.

A commercial LMS vendor is going to have to weigh the requirements of huge numbers of very different clients wanting different things, and consequently won’t be able to provide exactly the service the institution needs. Not to mention that they will be spread really thin covering all those clients.

An open source LMS generally has really good support. But the institution needs to have smart people who know about the system in order to properly engage with that support and be flexible with the system.

There’s more to draw out here, but I don’t have time. Have a paper to write.

Learning requires willingness to suffer injury to one’s self-esteem

Over recent weeks I have ignored Twitter; it was consuming too much time and I have to focus on writing the PhD. There is a cost involved in doing this: you miss out on some good insights.

Aside: The quality of the insights you gather from Twitter is directly correlated with the quality of the people you follow. Listening to this podcast yesterday I heard the following description of the difference between Facebook and Twitter: Facebook is for the people you already know, Twitter is for those you don’t.

This morning I gave in and started up Nambu and came across the following very fitting quote:

“Every act of conscious learning requires the willingness to suffer an injury to one’s self-esteem. That is why young children, before they are aware of their own self-importance, learn so easily; and why older persons, especially if vain or important, cannot learn at all.” — Thomas Szasz, 1973

I plan to use this quote to argue that current approaches within universities – or at least those I’m familiar with – prevent learning.


I came across this quote via a tweet by Gardner Campbell pointing to the first lecture by Michael Wesch. The quote is the lead in to the lecture.

Thomas Szasz is a somewhat controversial figure, so perhaps not the perfect source for a quote. But the quote does capture what I see as a key aspect of learning – and one that I personally struggle with.

Learning means being wrong

Szasz suggests you have to be willing to suffer an injury to your self-esteem in order to learn. To get it wrong. This connects with many of the other insights, quotes and perspectives on learning that I’ve seen and discussed on the blog. I’m sure there are many more.

Additional support for this idea comes from confirmation bias, the Tolstoy syndrome and pattern entrainment, not to mention the Golden Hammer law and status quo adherence. All summed up nicely by a quote from Tolstoy:

The most difficult subjects can be explained to the most slow-witted man if he has not formed any idea of them already; but the simplest thing cannot be made clear to the most intelligent man if he is firmly persuaded that he knows already, without a shadow of doubt, what is laid before him.

In order to learn something new you have to be prepared to think anew, critically examine what you currently take for granted and hold it up to the light of new insights to see if it is found wanting. While learning something new, you will make mistakes. In fact, there are any number of quotes around innovation that posit the importance of failure:

If you’re not failing every now and again, it’s a sign you’re not doing anything very innovative. — Woody Allen


The essential part of creativity is not being afraid to fail. — Edwin Land


Success is on the far side of failure. — Thomas Watson Sr

Fear of failure is embedded in academia

Jon Udell has argued that academia is heavily focused on not being seen to make mistakes. Researchers only release ideas that are fully baked; half-baked ideas are discouraged.

As Gardner Campbell observes in this article:

For an academic, “failure” is often synonymous with “looking stupid in front of someone.” For many faculty, and maybe for me back in the 1980s, computers mean the possibility of “pulling a Charlie Gordon,” as the narrator poignantly terms it in Daniel Keyes’s Flowers for Algernon.

Fear of failure is made worse by managerialism

For quite some time I have been arguing that teleological approaches to online learning – and I now expand that to broader styles of management – within higher education are ill-suited to the challenge (Jones, Luck, McConachie and Danaher, 2005; Jones and Muldoon, 2007). Approaches to leadership and management that are driven by the current over-emphasis on efficiency and accountability are based heavily on teleological assumptions and, because of the mismatch, end up damaging universities.

But worse, at least from the perspective of learning, such approaches to leadership – at least as often practiced – are hugely fearful of failure. They seek to avoid it as much as possible. The SNAFU principle is a humorous explanation of this tendency for authoritarian hierarchies to screw up.

Of course there is also much written in the management and organisational research about this tendency. This post covers a small sample of it and includes the following quote from Argyris and Schon (1978, p116):

In a Model 1 behavioral world, the discovery of uncorrectable errors is a source of personal and organisational vulnerability. The response to vulnerability is unilateral self-protection, which can take several forms. Uncorrectable errors, and the processes that lead to them, can be hidden, disguised, or denied (all of which we call ‘camouflage’); and individuals and groups can protect themselves further by sealing themselves off from blame, should camouflage fail.


Jones, D., J. Luck, et al. (2005). The teleological brake on ICTs in open and distance learning. Conference of the Open and Distance Learning Association of Australia 2005, Adelaide.

Jones, D. and N. Muldoon (2007). The teleological reason why ICTs limit choice for university learners and learning. ICT: Providing choices for learners and learning. Proceedings ASCILITE Singapore 2007, Singapore.