Comparisons between LMS – the need for system independence

Some colleagues and I are putting the finishing touches on a paper that has arisen out of the indicators project. It is an exploratory paper, seeking interesting patterns that might indicate good or bad things about the use of LMS (learning management systems, aka course management systems, virtual learning environments etc) and that might help improve decision-making by all participants (students through to management). I hope to post the paper in coming days.

This post is about one aspect of the paper: the section where we compare feature adoption between two different LMS that have been used side-by-side at our institution, Blackboard and Webfuse. (Important: I don’t believe Webfuse is an LMS and will argue that in my PhD, for which Webfuse is the topic, but it’s easier to go with the flow.) This appears to be one of the holes in the literature: we haven’t found any publications analysing and comparing system logs from different LMS, especially within the one institution over the same long time frame. In our case the data runs from 2005 through the first half of 2009.

The aim of this post is to identify the need for, and argue the benefits of, developing an LMS-independent means of analysing and comparing the usage logs of different LMS at different institutions.

Anyone interested?

The following gives a bit of the background, reports on some initial findings and, out of that, identifies the need for additional work.

Our first step

Blackboard and Webfuse have a number of significant differences. All LMS have somewhat different assumptions, designs and names. Webfuse is significantly different, but that’s another story. The differences make comparisons between LMS more difficult. How do you compare apples with apples?

The only published approach we’re aware of that attempts to make a first step towards a solution to this problem is the paper by Malikowski, Thompson and Theis (2007), whose abstract makes the following claims:

…This article recommends a model for CMS research that equally considers technical features and research about how people learn…..This model should also ease the process of synthesizing research in CMSs created by different vendors, which contain similar features but label them differently.

I’ve talked about and used the model previously (first, second and other places). For the purposes of the paper we produced a different representation of the Malikowski et al (2007) model.

Reworked Malikowski model

From my perspective there are three contributions the model makes:

  1. Provides an argument for 5 categories of features an LMS might have, gives each a common title and specifies which common features fit where.
  2. Draws on existing literature to give some initial benchmarks for the level of adoption to be expected (specified as the percentage of courses with a feature), grouped into three levels (see the sketch after this list).
    I must admit that Malikowski et al don’t specify the percentages directly; these are taken from the examples they list in tables.
  3. Suggests a model where features are adopted sequentially over time as academics become more comfortable with existing features.
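
To make the reworked model a little more concrete, the following is a minimal Python sketch of the categories and benchmark ranges as data. It is only a sketch under the assumptions stated in the comments: just the two ranges discussed in this post (transmitting content and evaluating course/instructor) are filled in, and the feature lists are indicative rather than exhaustive.

    # A minimal sketch of the reworked Malikowski et al (2007) model as data.
    # Only the benchmark ranges mentioned in this post are filled in; the
    # others are left as None -- the paper's tables are the place to get them.
    MALIKOWSKI_MODEL = {
        "transmitting_content": {
            "features": ["announcements", "uploaded_files", "gradebook_display"],
            "benchmark_range": (50, 100),   # percent of courses ("heavily used")
        },
        "class_interactions": {
            "features": ["discussion_forum", "chat", "email_list"],
            "benchmark_range": None,
        },
        "evaluating_students": {
            "features": ["assignment_submission", "quiz"],
            "benchmark_range": None,
        },
        "evaluating_course_instructor": {
            "features": ["survey", "course_barometer"],
            "benchmark_range": (0, 20),     # "rarely used" -- below 20 percent
        },
        "computer_based_instruction": {
            "features": ["computer_based_lessons"],
            "benchmark_range": None,        # not covered in this comparison
        },
    }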

Blackboard versus Webfuse – 2005 to 2009

The benefit the model has provided us is the ability to group the different features of Webfuse and Blackboard into the five categories and then compare the levels of feature adoption between the two systems, and with the benchmarks identified in the Malikowski et al (2007) paper. The following summarises what we found.
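
As an aside before the summary, here is a hedged illustration of the grouping step. The sketch below maps system-specific feature names onto the five categories; the names on the left are made up for illustration, not the actual Blackboard tool names or Webfuse page types.

    # Hypothetical mapping from system-specific feature names to the five
    # Malikowski categories. The names on the left are illustrative only.
    FEATURE_TO_CATEGORY = {
        # Blackboard-style tools (illustrative names)
        "announcement": "transmitting_content",
        "content_item": "transmitting_content",
        "discussion_board": "class_interactions",
        "assignment": "evaluating_students",
        "survey": "evaluating_course_instructor",
        # Webfuse-style page types (illustrative names)
        "notice_board": "transmitting_content",
        "web_bbs": "class_interactions",
        "online_assignment_submission": "evaluating_students",
        "course_barometer": "evaluating_course_instructor",
    }

    def categorise(feature_name):
        """Return the Malikowski category for a system-specific feature name."""
        return FEATURE_TO_CATEGORY.get(feature_name, "uncategorised")

With a mapping like this in hand for each system, the same analysis can be run over both sets of logs.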

Transmitting content

Malikowski et al (2007) define this to include announcements, uploaded files and the use of the gradebook to share grades (but not assignment submission etc.). The following graph shows the percentage of course sites in Blackboard (black continuous line) and Webfuse (black dashed line), along with the “benchmark region” identified in Malikowski et al. In the case of transmitting content the “benchmark region” is between 50 and 100%.
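
To be explicit about the measure behind these graphs: it is simply the percentage of course sites in a period that show at least one feature from a category. A rough sketch of that calculation, assuming the logs have already been reduced to (course, feature) pairs for the period (the extraction itself is LMS-specific):

    def adoption_percentage(courses, usage_pairs, category, feature_to_category):
        """Percentage of `courses` using at least one feature from `category`.

        `courses` is the set of course ids offered in the period of interest;
        `usage_pairs` is an iterable of (course_id, feature_name) pairs taken
        from the LMS usage logs for the same period.
        """
        adopting = {course for course, feature in usage_pairs
                    if course in courses
                    and feature_to_category.get(feature) == category}
        return 100.0 * len(adopting) / len(courses) if courses else 0.0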

Feature adoption - Transmit Content - Wf vs Bb

This shows that both Blackboard and Webfuse are in the “benchmark region”. Not surprising, given the predominant use of LMSs for content transmission. What may be surprising is that Webfuse only averages around 60-75%. This is due to one of those differences between LMS: Webfuse is designed to automatically create default course sites that contain a range of content. Also, it’s quite common for the announcements facility in a Webfuse course site to be used by faculty management to disseminate administrative announcements to students.

So, in reality, 100% of Webfuse courses transmit content. The percentages shown are for those courses where the academics have uploaded additional content or made announcements themselves.
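
One way to express that distinction when counting adoption is to ignore anything the default course site machinery did and count only staff-initiated actions. A hedged sketch, assuming each logged action records who (or what) performed it; the real Webfuse and Blackboard logs record this differently:

    def staff_initiated(actions):
        """Keep only actions performed by people, not the default site builder.

        Assumes each action is a dict with an 'actor' field; 'system' is a
        made-up marker for automatically created default content.
        """
        return [a for a in actions if a.get("actor") != "system"]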

Class interactions

Class interactions covers chat rooms, email, discussion forums, mailing lists etc. Anything that gets folk in a course talking.

Feature adoption - Class Interaction- Wf vs Bb

Both Blackboard and Webfuse are, to varying extents, outside of the “benchmark area”: Webfuse quite considerably, reaching levels near 100% in recent years, while Blackboard has only just crept over it. Blackboard’s creeping over may be an indicator that the “benchmark area” is out of date. It was created drawing on 2004 and earlier literature; if feature adoption increases over time, the “benchmark area” has probably moved up.

Evaluating students

Online assignment submission, quizzes and use of other tools to assess/evaluate students.

Feature adoption - Evaluating Students - Bb vs Wf

Over recent years Webfuse has seen double the adoption of these features compared to Blackboard, and has grown outside the “benchmark area”. Most of this is online assignment submission; in fact, some of the courses using the Webfuse online assignment submission system are actually Blackboard courses.

Evaluating course/instructor

The last category we covered was evaluating the course/instructor through survey tools etc. We didn’t cover computer-based instruction as very few Blackboard courses use it and Webfuse doesn’t provide the facility.

Which raises an interesting question. I clearly remember a non-Webfuse person being quite critical that Webfuse did not offer computer-based instruction functionality; we could have added it, but no-one ever asked. Which is better: paying for features few people ever use, or not having features that a few people will use?

Feature adoption: evaluating Courses Bb versus Wf

First, it should be pointed out that for “rarely used” features like course evaluation there is an absence of percentages in Malikowski et al (2007). I’ve specified 20% as the upper limit for this “benchmark area” because “moderately used” was 20% or higher. So it’s probably unfair to describe the Blackboard adoption level as being at the bottom of the range. On the other hand, Webfuse is streets ahead: near 100%, dropping to just less than 40%. More on this below.

Work to do

Generating the above has identified a need for, or at least value in, the following future work:

  • Do an up-to-date literature review and establish a new “benchmark area”.
    Malikowski et al (2007) rely on literature from 2004 and before. Levels of adoption have probably gone up since then.
  • Refine the list of features per category through the same literature review.
    In recent years LMS have added blogs, wikis, social networking etc. Where do they fit?
  • Refine the definition of “adoption”.
    Malikowski and his co-authors have used at least two very different definitions of adoption. There is apparently no work to check that the papers used to illustrate the model in Malikowski et al (2007) use a common definition of adoption.
  • Develop feature-specific, LMS-independent usage descriptions.
    In their first paper Malikowski et al (2006) count adoption as the presence of a feature, regardless of how broadly it is used. This causes problems. For example, the course evaluation figure for Webfuse is near 100% because, for a number of years, a course barometer (Jones, 2002) was a standard part of a Webfuse default site, i.e. every course had one. A quick check shows that only 23% of Webfuse courses in 2006 had a barometer in which a student made a comment.

    Malikowski (2008) adopted a new measure for adoption: course use of a particular feature had to be above the 25th percentile of use for that feature in order to be counted. I don’t find this a good measure; just one student comment on a barometer could be a potentially essential use of the feature.

    There appears to be a need to be able to judge the level of use of a feature in a way that is sensitive to the feature. One entry in a gradebook for a course of 500 students is probably an error and can be ignored; one comment on a barometer for that same course that points out an important issue probably shouldn’t be ignored.

  • Attempt to automate comparison between LMS.
    In order to enable broader and quicker comparison between different LMS, whether between institutions or within institutions, there appears to be a need to automate the process, making it quicker and more straightforward.

    One approach might be to design an LMS-independent database schema for the extended Malikowski et al (2007) model. Such a schema would enable people to write “conversion scripts” that take usage logs from Blackboard, Moodle, WebCT or whatever LMS and automatically insert them into the schema. Once someone has written the conversion script for an LMS, no-one else would have to. The LMS-independent schema could then be analysed and used to compare and contrast different systems and different institutions without the apples and oranges problem. A rough sketch of what such a schema might look like follows below.
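
To make that last suggestion slightly more concrete, here is a very rough sketch of a minimal LMS-independent schema, expressed as Python/SQLite for convenience. All table and column names are hypothetical. The idea is that a per-LMS conversion script populates these tables from the native logs and every subsequent analysis works only against them; the per-feature min_use column is one possible home for the feature-sensitive notion of adoption discussed above.

    import sqlite3

    # A very rough sketch of an LMS-independent usage schema (all names are
    # hypothetical). Per-LMS "conversion scripts" would read native
    # Blackboard/Webfuse/Moodle logs and insert rows here; analysis and
    # comparison then work only against these tables.
    SCHEMA = """
    CREATE TABLE IF NOT EXISTS course (
        course_id   TEXT PRIMARY KEY,
        institution TEXT,
        lms         TEXT,    -- e.g. 'Blackboard', 'Webfuse', 'Moodle'
        period      TEXT     -- e.g. '2006 Term 1'
    );
    CREATE TABLE IF NOT EXISTS feature (
        feature_id  TEXT PRIMARY KEY,
        category    TEXT,    -- one of the five Malikowski categories
        min_use     INTEGER  -- feature-specific threshold for 'real' use
    );
    CREATE TABLE IF NOT EXISTS feature_use (
        course_id   TEXT REFERENCES course(course_id),
        feature_id  TEXT REFERENCES feature(feature_id),
        use_count   INTEGER  -- how often the feature was actually used
    );
    """

    def create_usage_db(path="lms_usage.db"):
        """Create the (hypothetical) LMS-independent usage database."""
        conn = sqlite3.connect(path)
        conn.executescript(SCHEMA)
        conn.commit()
        return conn

A conversion script for a given LMS would then amount to mapping its native identifiers onto feature_id values and filling in feature_use.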

References

Jones, D. (2002). Student Feedback, Anonymity, Observable Change and Course Barometers. World Conference on Educational Multimedia, Hypermedia and Telecommunications, Denver, Colorado, AACE.

Malikowski, S., M. Thompson, et al. (2006). “External factors associated with adopting a CMS in resident college courses.” Internet and Higher Education 9(3): 163-174.

Malikowski, S., M. Thompson, et al. (2007). “A model for research into course management systems: bridging technology and learning theory.” Journal of Educational Computing Research 36(2): 149-173.
