Call for participation: Getting the real stories of LMS evaluations?

The following is a call for participation from folk interested in writing a paper or two that will tell some of the real stories arising from LMS evaluations.

Alternatively, if you are aware of some existing research or publications along these lines, please let me know.

LMSs and their evaluation

I think it’s safe to say that the idea of a Learning Management System (LMS) – aka Course Management System (CMS) or Virtual Learning Environment (VLE) – is now just about the universal solution to e-learning for institutions of higher education. A couple of quotes to support that proposition:

The almost universal approach to the adoption of e-learning at universities has been the implementation of Learning Management Systems (LMS) such as Blackboard, WebCT, Moodle or Sakai (Jones and Muldoon 2007).

LMS have become perhaps the most widely used educational technologies within universities, behind only the Internet and common office software (West, Waddoups et al. 2006).

Harrington, Gordon et al. (2004) suggest that higher education has seen no other innovation result in such rapid and widespread use as the LMS. Almost every university is planning to make use of an LMS (Salmon, 2005).

The speed with which the LMS strategy has spread through universities is surprising (West, Waddoups, & Graham, 2006).

Even more surprising is the almost universal adoption of just two commercial LMSes, both now owned by the same company, by Australia’s 39 universities, a sector which has traditionally aimed for diversity and innovation (Coates, James, & Baldwin, 2005).

Oblinger and Kidwell (2000) comment that the movement by universities to online learning was to some extent based on an almost herd-like mentality.

I also believe that, increasingly, most universities are on their 2nd or perhaps 3rd LMS. My current institution could be said to be on its 3rd enterprise LMS. Each time there is a need for change, the organisation has to evaluate the available LMSes and select one. This is not a simple task, so it’s not surprising to see a growing collection of LMS evaluations and associated literature being made available and shared. Last month, Mark Smithers and the readers of his blog did a good job of collecting links to many of these openly available evaluations through a blog post and its comments.

LMS evaluations, rationality and objectivity

The assumption is that LMS evaluations are performed in a rational and objective way: that the organisation demonstrates its rationality by objectively evaluating each available LMS and making an informed decision about which is most appropriate for it.
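To make concrete what that assumed rationality usually looks like in practice, here is a minimal sketch of the archetypal weighted-criteria scoring matrix that such evaluations tend to produce. Every criterion, weight and score below is invented purely for illustration; real evaluations will have their own.

```python
# A minimal sketch of the archetypal "rational" LMS evaluation: a
# weighted-criteria scoring matrix. All criteria, weights and
# scores are invented purely for illustration.

CRITERIA_WEIGHTS = {
    "pedagogical features": 0.30,
    "total cost of ownership": 0.25,
    "integration with existing systems": 0.20,
    "vendor/community viability": 0.15,
    "usability": 0.10,
}

# Hypothetical panel scores (0-10) for two candidate systems.
SCORES = {
    "LMS A": {
        "pedagogical features": 7,
        "total cost of ownership": 5,
        "integration with existing systems": 8,
        "vendor/community viability": 6,
        "usability": 7,
    },
    "LMS B": {
        "pedagogical features": 6,
        "total cost of ownership": 8,
        "integration with existing systems": 6,
        "vendor/community viability": 7,
        "usability": 8,
    },
}

def weighted_total(lms_scores, weights):
    """Return the weighted sum of a candidate's scores."""
    return sum(lms_scores[criterion] * weight
               for criterion, weight in weights.items())

for lms, lms_scores in SCORES.items():
    print(f"{lms}: {weighted_total(lms_scores, CRITERIA_WEIGHTS):.2f}")
```

The tidy number at the end is where the trouble hides: the choice of criteria, the weights and the panel scores are all places where the biases and manipulations discussed below can enter while the result still looks objective.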

In the last 10 years I’ve been able to observe, participate in and hear stories about numerous LMS evaluations at a diverse collection of institutions. When no-one is listening, many of those stories turn to the unspoken limitations of such evaluations: the inherent biases of participants, the cognitive limitations of those involved, and the outright manipulations that shape the outcome. These are stories that rarely, if ever, see the light of day in research publications. In addition, there is a lot of literature from various fields suggesting that such selection processes are often not all that rational. A colleague of mine did his PhD thesis (Jamieson, 2007) looking at these sorts of issues.

Generally, at least in my experience, when the story of an institutional LMS evaluation is told, it is told by the people who ran the evaluation (e.g. Sturgess and Nouwens, 2004). There is nothing inherently wrong with such folk writing papers; the knowledge embodied in their papers is generally worthwhile. My worry is that if they are the only folk writing papers, there will be a growing hole in the literature about such evaluations. The set of perspectives and stories being told about LMS evaluations will not be complete.

The proposal

For years, some colleagues and I have regularly told ourselves that we should write some papers about the real stories behind various LMS evaluations. However, we could never do it, because most of our stories came from a small set (often n=1) of institutions. The stories, and the people involved, could be identified simply by association, and such identification may not always be beneficial to the long-term career aspirations of the authors. There are also various problems that arise from such a small sample size.

Are you interested in helping solve these problems and contribute to the knowledge about LMS evaluations (and perhaps long term use)?

How might it work?

There are any number of approaches I can think of; which one works best might depend on who (if anyone) responds to this. If there’s interest, we can figure it out from there.

References

Coates, H., R. James, et al. (2005). “A Critical Examination of the Effects of Learning Management Systems on University Teaching and Learning.” Tertiary Education and Management 11(1): 19-36.

Harrington, C., S. Gordon, et al. (2004). “Course Management System Utilization and Implications for Practice: A National Survey of Department Chairpersons.” Online Journal of Distance Learning Administration 7(4).

Jamieson, B. (2007). Information systems decision making: factors affecting decision makers and outcomes. Faculty of Business and Informatics. Rockhampton, Central Queensland University. PhD.

Jones, D. and N. Muldoon (2007). The teleological reason why ICTs limit choice for university learners and learning. ICT: Providing choices for learners and learning. Proceedings ASCILITE Singapore 2007, Singapore.

Oblinger, D. and J. Kidwell (2000). “Distance learning: Are we being realistic?” EDUCAUSE Review 35(3): 30-39.

Salmon, G. (2005). “Flying not flapping: a strategic framework for e-learning and pedagogical innovation in higher education institutions.” ALT-J, Research in Learning Technology 13(3): 201-218.

Sturgess, P. and F. Nouwens (2004). “Evaluation of online learning management systems.” Turkish Online Journal of Distance Education 5(3).

West, R., G. Waddoups, et al. (2006). “Understanding the experience of instructors as they adopt a course management system.” Educational Technology Research and Development.

7 thoughts on “Call for participation: Getting the real stories of LMS evaluations?”

  1. I’m replying to my own post to expand upon a tweet response and perhaps add a more concrete example of what I’m getting at above.

    Via Twitter, Nate Angell pointed to Marist’s evaluation of Sakai as reported in this article. The article was written by Josh Barron who, according to the article bio, is both

    Director of Academic Technology and eLearning at Marist College in Poughkeepsie, New York and Chair of the Sakai Foundation Board.

    The point I’m trying to make is that there is a chance (a fairly good chance, if you go by the research literature on IS selection, decision making, rationality etc.) that the published account of Marist’s evaluation and selection of Sakai may not tell the whole story.

    In part this is a call to avoid self-reporting – not the best basis for evaluation – but it’s also a call to capture the full complexity of what is a very difficult process.

    That said, I’m really glad to have the reference.

  2. ixmati

    I see what you’re getting at better now that I’ve actually read your post ;) As you can see, I’m more likely to fire off something on twitter than do any actual research (hence my ABD status ;)

    As per your tweet back to me, a major part of the forest I’ve seen missing is aligning IT selection with actual institutional mission and values (if those have even been distilled). Too often, learning technologies are viewed as mere tools and the process gets bogged down in the false hope of valid statistics from bakeoffs, the minutiae of widgets, or the usability preferences of different folks on the selection committees.

    With this part of the forest in view, I strongly believe open source technologies are typically far more likely to align with the missions and values of educational institutions than proprietary products.

    Or, one can view it in purely economic terms: In the community source model (such as the Sakai Project), institutions combine their resources to build their own technologies. Is it always cheaper in Year 1? Probably not, but there will be savings across the community over time, along with better alignment with their educational goals.
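    To put rough numbers on that argument, here is a toy sketch with entirely invented figures, showing how community source can cost more in Year 1 but less over time as development costs are shared:

    ```python
    # A toy sketch of the economic argument above, with entirely
    # invented figures: community source can cost more in Year 1
    # but less over time as development costs are shared.

    YEARS = 5
    PROPRIETARY_ANNUAL_LICENCE = 100_000  # hypothetical flat annual licence fee
    COMMUNITY_YEAR1_BUILD = 150_000       # hypothetical initial contribution
    COMMUNITY_LATER_ANNUAL = 60_000       # hypothetical ongoing maintenance share

    proprietary_total = PROPRIETARY_ANNUAL_LICENCE * YEARS
    community_total = COMMUNITY_YEAR1_BUILD + COMMUNITY_LATER_ANNUAL * (YEARS - 1)

    print(f"Proprietary over {YEARS} years:      ${proprietary_total:,}")
    print(f"Community source over {YEARS} years: ${community_total:,}")
    # With these numbers: dearer in Year 1 ($150k vs $100k), but
    # cheaper over five years ($390k vs $500k).
    ```

    Obviously the real comparison depends entirely on the actual licence fees, staffing and community arrangements involved.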

    1. Let’s not start talking about theses… I’m getting close to finishing, but it’s been a long time….

      While the theory most organisations are based on does require alignment between institutional goals and the information systems they choose, I’m yet to see a university (admittedly, my breadth of experience is limited) that has successfully developed institutional goals that are applied in a disciplined way.

      I’ve actually argued in numerous places, most recently in this presentation, that the top-down approach isn’t appropriate for L&T.

      Similarly, while I agree with the sentiment that open source provides a better basis than proprietary software, I have the same question that Col asked in another comment: how many open source LMS installations are implemented “vanilla”? I’m guessing the folk participating in the Sakai Foundation don’t fit in that category.

      I’ve actually suggested in an earlier post that open source LMS are the next fad in e-learning.

  3. Another follow-up to my own post, also inspired by tweets from Nate.

    I’ll use the concrete example of Josh Barron and the Marist evaluation above.

    The suggestion here is not that Josh was “bad”. The suggestion is that evaluating which LMS is most suitable for a university is hugely complex. No matter what you do, you will miss aspects; no person or project group can hope to know everything.

    If you can accept that, then perhaps the following makes sense.

    If the only perspectives being shared about LMS evaluations are from the people leading the evaluations, then the knowledge embedded in those publications will be missing things. Some of those things might be important.

    Perhaps asking people who weren’t the leaders or implementers of the evaluation will generate some insight into what (if anything) is missing. Perhaps being aware of those things can help improve the practice of LMS evaluations, perhaps not. You don’t know if you don’t ask.

    It has to be recognised that the perspectives of the “others” are also likely to miss things.

  4. beerc

    I’m wondering how many institutions that have adopted an open source LMS have radically changed the vanilla product to fit their context? It’s often said by folk espousing the virtues of open source LMSes that a key strength is their ability to be adapted to the local context. This seems contrary to the mission of IT departments, who like centralized, stable systems that rarely change and would most likely resist such adaptation. An indication of an organization’s disposition towards customization would be the number of hoops and hurdles one has to navigate to have a change made.

  5. mweisburgh

    I’m doing a paper on trends in LMSes and I’d love to find out more about 1) how schools evaluate whether (and which) LMS to get, and 2) what needs are changing the way LMSes are or should be used.

    I’m on skype and my email is mitch [dot] weisburgh [at] academicbiz [dot] com.
