Lectures, alternatives, poll everywhere and unexpected events

This Wednesday I’m involved with an experiment and presentation that is seeking to test out some alternatives for lectures/presentations. As it happens, the last week has brought a couple of events that are (so far) helping the case for the experiment. These are described below.

And now for a word from our sponsors…

The aim of the experiment is to break out of the geographic limitations of participation in lectures/presentations. Anyone with a web browser can participate (a Twitter account and a mobile phone will increase your ability to participate, but aren’t necessary). The more people who use these media, the better. So you are invited.

More detail on the experiment/presentation page.

We return now to your regularly scheduled program

Being bumped

I work at CQUniversity. The university has 4/5 regional campuses spread across a fairly broad geographic area. A significant number of courses are offered across all of those campuses. A common approach for some years has been for lectures for these courses to be given from one campus and broadcast across the other campuses via the Interactive System-wide Learning (ISL) system. Essentially a video-conference system with specially built rooms at each of the campuses.

This approach is becoming embedded in the operations of the institution, to such an extent that the ISL rooms are becoming a resourcing bottleneck. Apart from teaching, these rooms are also used for research presentations and meetings. It’s getting to the stage that trying to book one of these rooms during term is simply impossible.

Originally, the experiment was scheduled to use one room on each of the campuses:

  • Rockhampton – 33/G.14
  • Bundaberg – 1/1.12
  • Gladstone – MHB 1.09
  • Mackay – 1/1.01

On Friday I was told that we’ve been bumped from the Mackay room. Apparently someone senior needs the Mackay room for an ISL session that is more important than my experiment.

Normally, this would have meant Mackay staff would miss out on the live presentation. They’d have to rely on the recorded presentation.

Not now. Theoretically, they should be able to participate the same as people off campus. I’m actually happy about this, it gives me a practical story to tell about why this approach might be useful. It will be interesting to see what problems arise.

PollEverywhere Polls and results

Over the weekend, while avoiding work on the presentation, I came across this post from Wes Fryer. It describes how they used PollEverywhere in a conference presentation. PollEverywhere is essentially a commercial version of Votapedia, which I plan to use on Wednesday.

Some things I found interesting:

  • The graphs.
    The PollEverywhere graphs look much nicer than Votapedia’s (minor point).
  • A comment that students like this approach because it is a legitimate use of their mobile phones in class.
  • The idea that this type of experiment was an “a-ha” moment for some.

The bureaucratic model and the grammar and future of universities

Last week I attended a presentation by a colleague at CQUniversity titled The Bureaucratic Model of Adult Instructional Design. The stated purpose of the presentation was

present and explore the Bureaucratic Model as a narrative that we must understand if we are to influence the direction of adult education.

The talk resonated with me, as much of my current struggle/work involves trying to make folk aware of a range of unstated assumptions that guide their thinking about learning and teaching within a university context. As Jay says, we have to understand those assumptions before we can think of influencing the future of learning and teaching – and somewhere in that, universities.

Since Jay’s talk I’ve come across and/or been reminded of a range of related work. Please feel free to add more here.

A vision for the future

Tony Bates has recently posted the second of his blog posts, titled Using technology to improve the cost-effectiveness of the academy: Part 2, in which he gives his vision for the future of universities.

A number of his implications seek to remove many of the basic assumptions that underpin university operation (e.g. semesters, fixed exams). However, a number of them show connections with an existing orthodoxy (e.g. all PhD students will have 6 months training in L&T).

That’s one of the problems I have with visioning. Too often it excludes interesting possibilities because it is held back by the background, preferences, ideas and prejudices of the people doing the visioning. My preference would be to let it emerge through an institution/setting that is flexible, open and questioning. I think much more interesting things can emerge from that situation than can ever happen because of the visioning of experts.

That’s because, no matter who you are, you have unstated assumptions that define what you can think of. Often this is addressed by having lots of different people do the visioning, but too often such attempts use approaches that too quickly focus on a particular vision, closing out future possibilities.

The grammar of school

In this post I mentioned a 1995 article by Seymour Papert on Why school reform is impossible. In this article Papert draws on Tyack and Cuban’s (1995) idea of the “grammar of school”

The structure of School is so deeply rooted that one reacts to deviations from it as one would to a grammatically deviant utterance: Both feel wrong on a level deeper than one’s ability to formulate reasons. This phenomenon is related to “assimilation blindness” insofar as it refers to a mechanism of mental closure to foreign ideas. I would make the relation even closer by noting that when one is not paying careful attention, one often actually hears the deviant utterance as the “nearest” grammatical utterance, a transformation that might bring drastic change in meaning.

This sounds very much like what is happening in Jay’s bureaucratic model.

The need for experiments

A lot of the current debate about the future of universities is built on the comparison with print media. i.e. look, newspapers are a long-running institution that is dying. Look, universities are a long-running institution, so they must be dying also.

Clay Shirky has written a long blog post titled “Newspapers and Thinking the Unthinkable”. A major point that he makes in his post seems to apply directly to the future of universities and the limitations of attempts at visioning like those of Tony Bates. In particular, this:

Revolutions create a curious inversion of perception. In ordinary times, people who do no more than describe the world around them are seen as pragmatists, while those who imagine fabulous alternative futures are viewed as radicals. The last couple of decades haven’t been ordinary, however. Inside the papers, the pragmatists were the ones simply looking out the window and noticing that the real world was increasingly resembling the unthinkable scenario. These people were treated as if they were barking mad. Meanwhile the people spinning visions of popular walled gardens and enthusiastic micropayment adoption, visions unsupported by reality, were regarded not as charlatans but saviors.

He then draws on the development of the printing press to talk about revolutions

That is what real revolutions are like. The old stuff gets broken faster than the new stuff is put in its place. The importance of any given experiment isn’t apparent at the moment it appears; big changes stall, small changes spread. Even the revolutionaries can’t predict what will happen.

Dede’s metaphors of learning

Lastly, the following recording is of a talk by Professor Chris Dede on some metaphors of learning. The assumption of consistency in the delivery of learning that underpins much of what universities are currently doing is my biggest bugbear. It’s what is contributing to university learning and teaching approaching what Dede describes as “the worst of fast food”.

Chris Dede: Human behaviours and metaphors for learning

Participation, impact, collecting data and connecting people

A couple of colleagues and I are trying to kickstart a little thing we call the Indicators project. We’ve developed a “tag line” for the project which sums up the core of the project.

Enabling comparisons of LMS usage across institutions, platforms and time

The project is seeking to enable different people at different institutions to analyse what is being done with their institution’s learning management system (LMS, VLE, CMS) and compare and contrast it with what is happening at other institutions with different LMSs.

To some extent this project is about improving the quality of the data available to decision makers (which we define to include students, teaching staff, support staff and management). In part this is about addressing the problem identified by David Wiley

The data that we, educators, gather and utilize is all but garbage.

But it’s not just about the data. While the data might be useful, it’s only going to be as useful as the people who are seeing it, using it and talking about it. David Warlick makes this point about what’s happening in schools at the moment

not to mention that the only people who can make much use of it are the data dudes that school systems have been hiring over the past few years.

And then this morning George Siemens tweeted the following

“Collecting data less valuable that connecting people” http://bit.ly/3SMJCT agree?

If it’s an either/or question, then I agree. But with the indicators project I see this as a both/and question. For me, the indicators project is/should be collecting data in order to connect people.

What follows is an attempt to map out an example.

The link between LMS activity and grades

There is an established pattern within the literature around data mining LMS usage logs. That pattern is essentially

the higher the grade, the greater the usage of the LMS

The order is reversible, as I don’t think anyone has firmly established a causal link; it’s just a pattern. My belief (yet to be tested) is that, mostly, good students do everything they can to get good grades, including using the LMS.

With our early work on the indicators project we have found some evidence of this pattern. See the two following graphs (click on them to see bigger versions).

The X axis in both graphs is student final grade at our current institution. From best to worst the grades are high distinction (HD), distinction (D), credit (C), pass (P), and fail (F).

In the first graph the Y axis is the average number of hits on either the course website or the course discussion forum. Hopefully you can see the pattern: students with better grades average a higher number of hits.

Average student hits on course site/discussion forum for high staff participation courses

In the next graph, the Y axis is the average number of posts (starting a discussion thread) and the average number of replies (responding to an existing discussion thread) in the course discussion forum. So far, the number of replies is always greater than the number of posts. As you can see, the pattern is still there, but it is somewhat less evident for replies.

Average student posts/replies on discussion forums for high staff participation courses
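As an aside, for anyone wondering how averages like these might be produced from raw usage logs, here is a minimal sketch. It is not the indicators project’s actual code; it assumes a hypothetical CSV export with one row per student enrolment and made-up column names:

```python
# Minimal sketch, not the indicators project's actual code. Assumes a
# hypothetical CSV with one row per student enrolment and made-up columns:
#   grade, site_hits, forum_hits, posts, replies
import pandas as pd

GRADE_ORDER = ["HD", "D", "C", "P", "F"]  # best to worst

activity = pd.read_csv("student_activity.csv")  # hypothetical export of LMS logs

# Average activity per final grade, ordered from best grade to worst.
averages = (activity
            .groupby("grade")[["site_hits", "forum_hits", "posts", "replies"]]
            .mean()
            .reindex(GRADE_ORDER))
print(averages)
```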

Importance of staff participation

Fresen (2007) identified the level of interaction or facilitation by teaching staff as a critical success factor for web-supported learning. We thought we would test this out using the data from the project by dividing courses up into categories based on the level of staff participation.

The previous two graphs are actually for the 678 courses (the high staff participation courses) for which teaching staff had greater than 3000 hits on the course website during the term. The following two graphs show the same data, but for the super-low staff participation courses (n=849). A super-low course is one where teaching staff had less than 100 hits on the course website during term.
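To make that categorisation concrete, here’s a rough sketch of how courses might be split using those thresholds. The data layout and column names are my assumptions, not the project’s actual schema:

```python
# Rough sketch only: split courses into staff-participation categories using
# the thresholds described above (>3000 staff hits = high, <100 = super-low).
# Assumes a hypothetical CSV with one row per course offering: course_id, staff_hits
import pandas as pd

courses = pd.read_csv("course_staff_hits.csv")

def participation_category(staff_hits: int) -> str:
    if staff_hits > 3000:
        return "high"
    if staff_hits < 100:
        return "super-low"
    return "middle"

courses["staff_participation"] = courses["staff_hits"].apply(participation_category)
print(courses["staff_participation"].value_counts())
```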

What do you notice about the pattern between grade and LMS usage?

First, the hits on the course site and the course discussion forum

Average student hits on course site/discussion forum for super low staff participation courses

Now, the average number of posts and replies in the course discussion forum

Average student posts/replies on discussion forums for super low staff participation courses

For me, the pattern is not there. The HD students appear to have decided there’s no value in the course website and that they need to rely upon themselves. They’ve still been able to get an HD in spite of the super-low staff participation. More work needs to be done.

I’m also interested in what the students in these super low courses might be talking about and what networks they are forming. The SNAPP tool/work at Wollongong might be useful here.

How to bring people together

My fear is that this type of finding will be used to “bring people together” in a way that is liable to be more destructive than anything. i.e. something like this:

  • The data mining dweebs (I do recognise that this probably includes my colleagues and me) will bring it to the attention of university management.
    After all, at least at my institution it’s increasingly management that have access to the dashboards, not the academic staff.
  • The data mining dweebs and management will tell stories about these recalcitrant “super-low” academics and their silliness.
  • A policy will be formulated, probably as part of “minimum standards” (aka maximum requirements), that academics must average at least X (probably 3000 or more) hits on their course website in a term.
  • As with any such approach task corruption will reign supreme.

While the indicators project is a research project focused on trying to generate some data, we also have to give some thought and be vocal about how the data could be used appropriately. Here are some initial thoughts on some steps that might help:

  • Make it visible.
    To some extent making this information visible will get people talking. But that visibility can’t be limited to management or even teaching staff. All participants need to be able to see it. We need to give some thought about how to do this.
  • Make it collaborative.
    If we can encourage as many people as possible to be interested in examining this data, thinking about it and working on ways to harness it to improve practice, then perhaps we can move away from the blame game.
  • Be vocal and critical about the blame game.
    While publicising the project and the resulting data, we need to continuously, loudly and effectively criticise the silliness of the “blame game”/policy approach to responding to the findings.
  • Emphasise the incompleteness and limitation of the data.
    The type of indicators/data we gather through the LMS is limited and from some perspectives flawed. An average doesn’t mean a great deal. You can’t make decisions with a great deal of certainty solely on this data. You need to dig deeper, use other methods and look closer at the specifics to get a picture of the real diversity in approaches. There may be some cases where a super-low staff participation approach makes a lot of sense.

References

Fresen, J. (2007). A taxonomy of factors to promote quality web-supported learning. International Journal on E-Learning, 6(3), 351-362.

Alternate ways to get the real story in organisations

I’ve just been to a meeting with a strangely optimistic group of people who are trying to gather “real stories” about what is going on within an organisation through focus groups. They are attempting to present this information to senior management in an attempt to get them to understand what staff are experiencing, to indicate that something different might need to be done.

We were asked to suggest other things they could be doing. For quite some time I’ve wanted to apply some of the approaches of Dave Snowden to tasks like this. The following mp3 audio is an excerpt from this recording of Dave explaining the results of one approach they have used. I recommend the entire recording or any of the others that are there.

Why do we shit under trees?

Imagine this type of approach applied to students undertaking courses at a university as a real alternative to flawed smile sheets.

Choosing a research publication outlet

I’m reluctant to post this. It’s part of a pragmatic approach to figuring out where, as an Australian academic, I should try and target publications. It seeks to identify publications in the higher education and educational technology areas that would be “best”.

I’m well aware of the questionable aspects of this approach, but if this is the game… Especially when your institution is starting to discuss definitions of research-active staff – the implication being that if you aren’t research active you don’t get time to do research – that include requirements for fixed numbers of A and A* journal publications within a 3-year period.

My mitigation strategy against this type of pragmatism is that I am fairly open when it comes to my research. Much of it gets an airing here first. It’s not much, but better than nothing (or at least that’s what I keep telling myself).

For my immediate purposes, it looks like AJET is a good fit, and it’s a journal that is open access.

Work to do

  • Find out how much value is placed on the difference between A and A* journals.
  • Check the final lists from the government to see if rankings have changed.

What’s your suggestion?

What’s the “best” publication outlet?

I’m assuming that when it comes to writing a paper based on that research, the first step is to choose the outlet. Which journal or conference are you aiming the paper at? I think you need to answer this question first, as part of the writing process has to respond to the specifics of the outlet (e.g. addressing the theme of a conference).
In answering this question, I can think of at least the following dimensions to consider:

  1. Quality.
    There are two common strategies I’ve heard: top down or bottom up. Bottom up folk go for the “worst” journal in the hope that their poor article will get accepted. Top down folk suggest starting at the top because you never know, you might get lucky, and if you don’t, you will at least get good feedback to improve the paper before preparing it for submission to outlet #2.
  2. Fit.
    i.e. the one which best fits the topic or point of your paper. That fit may be a desire to visit Hawaii (a conference) or a topic match (the paper “Gerbil preferences in social software” might be a good fit for the journal “Studies in Gerbil Selection of Social Software”).
  3. Speed of review.
    How quickly will the journal accept and publish your paper?
  4. Openness.
    Are the papers published in a closed or open manner? Can you circulate copies? Is the journal open access?

The rankings approach that is increasingly prevalent tends to suggest that “Quality” is the first choice. The following focuses on the quality dimension, however, in operation there needs to be an appropriate balance with the other factors.

How to judge the top quality publication?

The “top quality publication” dimension begs the question, “How do you know what is the top quality publication?”. In some disciplines this is a clear cut thing. You can’t be a researcher within a field without knowing. The trouble is that in some other fields, it’s not so clear. Especially if you’re new to the field.

Those wonderful folk in the Australian government, following the lead of their British colleagues, are making it easier for us poor Australian academics. As part of this work they are developing “discipline-specific tiered outlet rankings”. i.e. if you want to play the game, you follow their rankings – while trying to balance the other dimensions.

While the Oz government lists are still under development, John Lamp is providing a nice interface to view the rankings as part of his broader site. You can browse by field of research or search. This is provided for two lists from the Australian Research Council – an early draft one and a more recent one. The recent one isn’t yet fully integrated into the database – so the following information is a bit out of date, but it gives an indication.

In the following I’ve selected those journals of potentially most interest to me – I could be mistaken and have left some important ones out – but it’s a start. I’ve added a link to the journal home page and made some comments from my look at their online information.

My main interests are in educational technology within higher education, so that’s the focus. Suggestions and comments welcome.

One of the outstanding tasks I have, is to determine how much of a difference folk are making between A and A* journals.

Higher education

Most of these are selected from this list

  • A* – Higher Education Research and Development: max 7000 words; closed access; 6 issues a year.
  • A* – Studies in Higher Education: max 7000 words; closed access; 8 issues a year.
  • A – Higher Education Quarterly: associated with the Society for Research in Higher Education; closed.
  • A – Higher Education Review: 5K to 10K words; copyright is assigned to Tyrrell Burgess Associates with a fee? to cover all rights; authors are allowed to circulate copies with acknowledgement. This is interesting:

    HIGHER EDUCATION REVIEW is committed to a problem-based epistemology. In all countries there is an urgent need to formulate the problems of post school education, to propose alternative solutions and to test them. The policy and practice of governments and institutions require constant scrutiny. New policies and ideas are needed in all forms of post school education as new challenges arise.

  • A – International Journal of Teaching and Learning in Higher Education: open; review process ~3 months; 4K to 7K words; 3 types of article: research, instructional (designed to explain and clarify innovative higher education teaching methods) and review. From the journal:

    The specific emphasis of IJTLHE is the dissemination of knowledge for improving higher education pedagogy.

  • A – Journal of Higher Education: 6 issues a year; paper-based submission!!!; max 30 pages, double-spaced; usually 12 months from submission to publication; closed.
  • A – Teaching in Higher Education: one aim of the journal is that it “identifies new agendas for research”; 3K to 6K words; 6 issues a year; closed.

Coming out of that table, the International Journal of Teaching and Learning in Higher Education sounds interesting, at least for me. It’s open access, has shortish review times and a promise of good feedback, accepts a couple of types of articles and is related to the scholarship of learning and teaching, which connects to an aspect of my current position.

Educational technology journals

Most of these came from this list

  • A* – British Journal of Educational Technology: closed; various suggestions it’s the top journal in this sort of field; only 4000 words; not clear about hosting your own sites; 6 issues a year.
  • A* – Computers & Education: closed; 8 issues a year; impact factor higher than BJET?; apparently horrible restrictions on reuse; authors suggest reviewers!; no maximum length.
  • A – ALT-J: 3 issues a year; basically closed; 5K words.
  • A – Australasian Journal of Educational Technology: open access; 5K to 8K words, with occasional flexibility.
  • A – Australian Educational Computing: 2 issues a year; closed.
  • A – Educational Technology & Society: open access; 7K words; about 4 issues a year.
  • A – Educational Technology Research and Development: closed; claimed two-month review process; 5K to 8K words.
  • A – Journal of Computer Assisted Learning: closed; 3K to 7K words.
  • A – Technology, Pedagogy and Education: closed; 3 issues a year.
  • B – International Journal on E-Learning: closed; an AACE journal.
  • B – Internet and Higher Education: closed; 10 to 30 pages, double-spaced.
  • C – Studies in Learning, Evaluation, Innovation and Development: open; 3K to 6K words; disclaimer: I’m associated with this journal.

Discipline specific and curriculum

Sometimes I do work with discipline folk, so some of the following might be interesting. There are more of these journals here. I’ve only included links for these.

  • A* – Management Learning
  • A* – Nursing Outlook
  • A* – Science Education
  • A – Computer Science Education
  • A – Journal of Engineering Education

Podcast for presentations at the PLEs & PLNs symposium

The following basically tells the rationale and approach used to create an (audio) podcast of the presentations from the Personal Learning Environments & Personal Learning Networks Online symposium on learning-centric technology.

I don’t know if anyone else has already done this, but just in case I will share.

If you don’t want to be bored by the background, this is the link for the podcast.

Rationale

I’ve hated the idea of the LMS for quite some time. I even had the chance to briefly lead a project investigating how PLEs could be grown and used within a university, at least before the organisational restructure came. In its short life the project produced a symposium, a number of publications, various presentations and a little bit of software.

Given that background, I had significant interest in the symposium being organised by George Siemens and Stephen Downes. However, due to other responsibilities, odd times (given my geographical location) for the Elluminate presentations and the low speed of my home Internet connection, I knew I was unlikely to actively engage. Some of these factors have already prevented my ongoing engagement with CCK09.

I probably would have left it there, however, over the last 24 hours two separate folk have mentioned the symposium and almost/sort of guilted me into following up. The one thing I can do at the moment, due to a fitness kick involving a great deal of walking, is listen to mp3s. So, I wanted an easy way to get the mp3s. A podcast sounds ideal for my current practices.

The podcast

Last night I did a quick google and found this page that seems to provide a collection of links to video and audio recordings of presentations associated with the CCK09 course, including some mp3s from the presentations at the PLEs & PLNs symposium.

Rather than download and play silly buggers with iTunes I decided to recreate an approach we used on our first “Web 2.0 course site”. Using del.icio.us the students and staff in the course could tag audio/video for inclusion in a podcast created by Feedburner.
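For what it’s worth, if you’d rather not rely on del.icio.us and Feedburner, much the same end result can be hand-rolled. The following is only a rough sketch of generating a podcast RSS feed from a list of mp3 links; the titles and URLs are placeholders, not the actual symposium recordings:

```python
# Rough sketch: build a minimal podcast RSS feed from a hand-made list of mp3
# links. Placeholder URLs/titles only; not the actual symposium recordings.
import xml.etree.ElementTree as ET

episodes = [
    {"title": "Example presentation 1", "url": "http://example.com/talk1.mp3"},
    {"title": "Example presentation 2", "url": "http://example.com/talk2.mp3"},
]

rss = ET.Element("rss", version="2.0")
channel = ET.SubElement(rss, "channel")
ET.SubElement(channel, "title").text = "PLEs & PLNs symposium (unofficial)"
ET.SubElement(channel, "link").text = "http://example.com/"
ET.SubElement(channel, "description").text = "Hand-rolled feed of symposium mp3s"

for ep in episodes:
    item = ET.SubElement(channel, "item")
    ET.SubElement(item, "title").text = ep["title"]
    # The enclosure element is what podcast clients use to find the audio file.
    ET.SubElement(item, "enclosure", url=ep["url"], type="audio/mpeg", length="0")

ET.ElementTree(rss).write("podcast.xml", encoding="utf-8", xml_declaration=True)
```

Pointing a podcast client at the resulting podcast.xml should then pick up the mp3s as episodes.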

So I followed the same process for these:

I just hope now that I have the time to reflect and write about what I listen to.

Thank you Deidre and Maijann for the encouragement to engage with the symposium. Thanks to those organising the symposium and CCK09 for the resources.

Thoughts about the next steps for the indicators project

This post is an attempt to capture some ad hoc, overnight thoughts about how the indicators project might move forward.

Context

Currently the indicators project is an emerging research project at CQUniversity. There are currently three researchers involved and we’re all fairly new to this type of project. I’d characterise the project as being at the stage where we’ve laid a fair bit of the groundwork, done some initial work, identified some interesting holes in the literature around analytics/LMS evaluation and made the observation that there are a lot of different ways to go.

The basic aim is to turn the data gathered in Learning Management System (LMS, aka CMS, VLE) usage logs into something that can help students, teaching staff, support staff, management and researchers using/interested in e-learning make sense of what is going on so they can do something useful. We’re particularly interested in doing this in a way that enables comparisons between different institutions and different LMSs.

The process

A traditional approach to this problem would be big up front design (BUFD). The idea is that we spend – or at least report that we spent – lots of time in analysis of the data, requirements and the literature before designing the complete solution. The assumption is that, like gods, we can learn everything we will ever need to know during the analysis phase and that implementation is just a straightforward translation process.

Frankly, I think that approach works only in the most simplistic of cases, and generally not even then because people are far from gods. The indicators project is a research project. We’re aiming to learn new things.

For me this means that we have to adopt a more emergent, agile or ateleological approach. Lots of small steps where we are learning through doing something meaningful.

Release small patterns, release often

So, rather than attempt to design a complete LMS- and institution-independent data schema and associated scripts to leverage that data, let’s start small, focus on one or two interesting aspects, take them through to something final and then reflect. i.e. focus on a depth-first approach, rather than a breadth-first one.

As part of this we should take the release early, release often approach. Going breadth first is going to take some time. Going depth first, we should be able to have something useful that we can release and share. That something will/should also be fairly easy for someone else to experiment with. This will be important if we want to encourage other folk, from other institutions, to participate.

We should also aim to build on what we have already done and on what other people have done. I think that the impact on LMS usage by various external factors might be a good fit.

External factors and LMS usage

First, this is a line of work in which others have published. Malikowski, Thompson & Theis (2006) investigated what effect class size, level of class and the college in which a course was offered had on feature adoption (only class size had a significant impact). Hornik et al (2008) have put courses into high and low paradigm-development categories and seen how this, plus the level of the course, has impacted on outcomes in web-based courses. There are some limitations of this work we might be able to address. For example, Malikowski et al (2006) manually checked course sites and because of this are limited to observations from a single term.

Second, we’ve already done some work in this area in our first paper.

This sort of examination of external factors and their impact on LMS usage is useful as it helps identify areas of interest in terms of further research and also potential insights for course design. It’s also (IMHO) somewhat useful in its own right without any need for additional research. So it’s something relatively easy for us to do, but also should be fairly easy for others to experiment with.

Abstracting this work up a bit

The first step in examining this might be an attempt to abstract out the basic principles and components of this sort of work. If we can establish some sort of pattern/abstraction this can guide us in the type of work required and some sort of move towards a more rigorous process. The following is my initial attempt.

There are two main approaches we’ve taken in the first paper:

  1. Impacts on student performance.
  2. Impacts on LMS feature adoption.

Impacts on student performance

An example is the impact of an instructional designer. The following graph compares the level of student participation mapped against final result for courses designed with an instructional designer versus all other courses.

Instructional Designer Designed Courses vs Overall Average

In this type of example, we’ve tended to use three main components:

  1. A measure of LMS usage.
    So far we have concentrated on:
    • the average number of hits by the student on the course website and discussion forum; and
    • the average number of posts and replies by the student on the discussion forum.
  2. A measure of student performance.
    Limited to grade achieved in the course, at the moment.
  3. A way to group students.
    This has been done on the basis of mode of delivery/type of student (i.e. a distance education student, an Australian on-campus student, an international student) or by different types of courses.

Having identified these three components we can actively search for alternatives. What alternatives to student performance might there be?
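One way to make the abstraction concrete: any analysis of this type is just a particular choice of those three components. A hypothetical sketch (the column names are my assumptions, not our actual data):

```python
# Sketch of the abstraction: any "impact on student performance" analysis is a
# choice of usage measure, performance measure and grouping. Column names are
# hypothetical, not the project's real schema.
import pandas as pd

def performance_comparison(data: pd.DataFrame,
                           usage_measure: str,        # e.g. "site_hits"
                           performance_measure: str,  # e.g. "grade"
                           grouping: str) -> pd.DataFrame:  # e.g. "designer_involved"
    """Average usage per performance level, split by the chosen grouping."""
    return (data
            .groupby([grouping, performance_measure])[usage_measure]
            .mean()
            .unstack(performance_measure))

# e.g. performance_comparison(enrolments, "site_hits", "grade", "mode_of_delivery")
```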

For example, in the paper we use Fresen’s (2007) taxonomy of factors to promote quality web-supported learning as a way to group students. Staff participation, for instance, should promote quality; hence, is there any difference between courses with differing levels of staff participation?

Are there other theoretical insights which could guide this work?

Impacts on LMS feature adoption

We’ve used the LMS-independent framework for LMS features developed by Malikowski et al (2007) to examine to what extent different features are used within courses. We’ve looked at this over time and between different LMSs. The following shows the evolution of feature adoption over time within the Blackboard LMS used at CQU.

Blackboard Feature Adoption
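To give a sense of the sort of calculation behind a graph like that, here is a rough sketch of computing yearly adoption percentages, assuming a hypothetical mapping from raw feature names to the Malikowski et al categories (both the mapping and the column names are illustrative guesses, not our actual data):

```python
# Rough sketch only: percentage of courses adopting each Malikowski-style
# category per year. Mapping and column names are illustrative guesses.
import pandas as pd

# Hypothetical mapping from raw LMS feature names to framework categories.
CATEGORY = {
    "content_page": "transmitting content",
    "discussion_forum": "class interactions",
    "quiz": "evaluating students",
    "gradebook": "evaluating students",
    "survey": "evaluating course",
}

usage = pd.read_csv("feature_usage.csv")  # hypothetical: year, course_id, feature, hits
usage["category"] = usage["feature"].map(CATEGORY)

# A course "adopts" a category in a year if it records any hits on a feature in it.
adopted = (usage[usage["hits"] > 0]
           .groupby(["year", "category"])["course_id"]
           .nunique())
total_courses = usage.groupby("year")["course_id"].nunique()

# Percentage of that year's courses adopting each category.
adoption_pct = adopted.div(total_courses, level="year").mul(100).round(1)
print(adoption_pct)
```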

Under this model, the components could be described as:

  • Framework for grouping LMS features.
  • Definition of adoption.

A mixture of the two?

I wonder if there’s any value in using the level of feature adoption as another way of grouping courses to identify if there’s any connection with student outcome. e.g. do courses with just content distribution have different student outcomes/usage than courses with everything?

Next steps

Some quick ideas:

  • Look at improving the two abstractions above and identifying alternatives for their components.
  • Look at developing a platform-independent database schema to enable the cross-LMS and cross-institutional comparison of the above two abstractions (a rough sketch of what this might look like follows this list).
    This would include:
    • the database schema;
    • some scripts to convert various LMS logs into that database format;
    • some tools to automate interesting graphs.
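To make the schema idea concrete, here’s a minimal sketch of what a platform-independent starting point might look like; the table and column names are purely illustrative assumptions, not a settled design:

```python
# Purely illustrative sketch of a cross-LMS schema; table and column names are
# assumptions, not the project's agreed design.
import sqlite3

SCHEMA = """
CREATE TABLE IF NOT EXISTS course_offering (
    offering_id  INTEGER PRIMARY KEY,
    institution  TEXT,   -- which university the data came from
    lms          TEXT,   -- e.g. 'Blackboard', 'Moodle'
    course_code  TEXT,
    term         TEXT
);
CREATE TABLE IF NOT EXISTS activity (
    offering_id  INTEGER REFERENCES course_offering(offering_id),
    actor_role   TEXT,   -- 'student' or 'staff'
    actor_id     TEXT,   -- anonymised identifier
    action       TEXT,   -- 'hit', 'post', 'reply'
    feature      TEXT,   -- e.g. 'content', 'forum', normalised across LMSs
    occurred_at  TEXT    -- ISO 8601 timestamp
);
"""

conn = sqlite3.connect("indicators.db")
conn.executescript(SCHEMA)
conn.commit()
conn.close()
```

The conversion scripts would then just be per-LMS extractors that write into tables like these, which would keep any graphing tools LMS-agnostic.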

References

Fresen, J. (2007). “A taxonomy of factors to promote quality web-supported learning.” International Journal on E-Learning 6(3): 351-362.

Hornik, S., C. S. Saunders, et al. (2008). “The impact of paradigm development and course level on performance in technology-mediated learning environments.” Informing Science 11: 35-58.

Malikowski, S., M. Thompson, et al. (2006). “External factors associated with adopting a CMS in resident college courses.” Internet and Higher Education 9(3): 163-174.

Malikowski, S., M. Thompson, et al. (2007). “A model for research into course management systems: bridging technology and learning theory.” Journal of Educational Computing Research 36(2): 149-173.