Plans for an AJET paper around the Indicators project

The following is an attempt to make concrete some ideas and tasks associated with writing a journal paper for the Australasian Journal of Educational Technology (AJET), building on the ASCILITE’09 paper that arose from the work of the Indicators project. The paper will be co-authored by a group, and the aim of this post is to start discussion about the content of the paper and the tasks we need to do to get it written.

The purpose/abstract of the paper

One argument is that, as a group of authors, we should have a common sense of the purpose of the paper: an elevator pitch, a 30-second spiel that summarises what we’re trying to achieve/show. The following is a first attempt. Any improvements?

The use of Learning Management Systems (LMS) is almost ubiquitous within universities. However, few institutions are actively using the data generated by student and staff use of these systems to guide future work. This paper seeks to:

  • Highlight some limitations of existing attempts to leverage LMS usage data.
  • Illustrate knowledge about e-learning within universities that, because of these limitations, appears not to be as certain or clear as the existing literature or common sense might suggest.
  • Indicate how the work within this initial paper can be used to design further work that improves our understanding of what is happening with e-learning, why it is happening and how it can be improved.

This might become a useful tag line for the paper:

  • What is happening.
    Refers to the patterns we find in the usage data.
  • Why is it happening.
    Refers to additional research with different methods and theories that would be required to suggest and test reasons why these patterns might exist.
  • How to change practice.
    Refers to further research and insight that seeks to combine the what and why information with other theory to change learning/teaching practice.

Ideas for titles

The title of a paper is, from one perspective, a summary of the elevator pitch. It should attract the reader and inform them what they might find out. What follows are some initial ideas.

  • The indicators project: Improving the what, why and how of e-learning.
  • Student LMS activity, grades, and external factors: Identifying the need for cross-platform, cross-institutional and longitudinal analysis of LMS usage.
    This title is based on a narrowing down of the “what” to be shown in the paper to just the linkage between LMS activity, grades and external factors. i.e. exclude the feature adoption stuff.

The structure of the paper

At this stage, the structure of the paper for me is heavily based around the three aims of the paper outlined in the purpose/abstract section above. The basic structure would be:

  • General introduction.
    Explain the background/setting of the paper: the widespread use of the LMS as the implementation of e-learning, the emergence of academic analytics, the identification of some known patterns, the importance of knowing what is going on and how to improve it, and the diversity of universities, i.e. one size might not fit all and different universities might have different experiences.
  • Limitations of existing work.
    Seek to provide some sort of framework to understand the work that has gone before in terms of academic analytics and/or LMS usage log analysis.
  • Different perspectives on what is happening.
    Examine and explain how we’ve found patterns which seem to differ or provide alternative insights to what has already been found. The idea here is to establish at least 2 groupings of patterns that illustrate some differences between what we have found and what has been reported in the literature. Each of the groupings could have multiple patterns/findings but there would be some commonality. More on this below.
  • Further work.
    Argue that these differing findings suggest that there is value in further work that:
    • addresses the limitations identified in the 2nd section; and
      i.e. cross-platform, cross-institutional and longitudinal.
    • expands upon the findings from the 3rd section.
      i.e. moves from just examining the “what” into the “why” and “how”.
  • Conclusions.

The work to be done

Now it’s time to identify the work that needs to be done.

Limitations of existing work

The basic aim of this section is to expand and formalise/abstract the knowledge/opinion about the existing literature around LMS usage analysis expressed in the ASCILITE paper.

The draft ASCILITE paper – prior to compaction due to space limitations – was working on a framework for understanding the literature based on the following dimensions:

  • # of institutions;
  • # of LMS;
  • time period;
  • method.

Work to do:

  • Ensure that we have covered/gathered as much of the relevant literature as possible.
  • Examine that literature to see how it fits within the framework.
  • Identify from the literature any additional dimensions that might be useful.
  • Identify any findings that support or contradict the findings we want to introduce in the next section.

Different perspectives on what is happening

This is the section in which we draw on the data from CQU to identify interesting and/or different patterns from those found in the established literature. The biggest question I have about this section is, “What patterns/groupings do we use?”. The main alternatives I’m aware of are:

  1. Exactly what we did in the ASCILITE paper.
  2. The slight modification we used in the ASCILITE presentation.
  3. Drop the feature adoption stuff entirely and focus solely on the correlation between student activity, grades and external factors. Perhaps with the addition of some analysis from Webfuse courses.

Whichever way we go, we’ll need to:

  1. Identify and define the patterns we’re using
    e.g. The correlation between level of participation, the grade achieved and some external factors.
  2. Identify literature/references summarising what is currently known about the pattern.
    e.g. The correlation that suggests the greater the level of participation in an LMS, the better the grade.
  3. Identify ways in which the pattern can be measured (see below).
  4. Use the measure to examine the data at CQU.
    e.g. many of the graphs in the ASCILITE paper/presentation.
  5. Look for any differences between expected and what we see.
    e.g. LMS usage by HD students is lower than that of other students for AIC students and “super low” courses.
  6. Establish that the differences are statistically significant.
  7. Perhaps generate some initial suggestions why this might be the case.

The patterns

The patterns we’ve been using so far seem to fit into one of two categories. Each of these categories has a definition of the pattern and how we’re actually measuring it. This is also an area of difference, i.e. there could be different ways of measuring.

The patterns we’ve used so far:

  1. % of courses that have adopted different LMS features.
    This is the comparison of feature usage between Blackboard and Webfuse during the period of interest. It shows that different systems and different assumptions do modify outcomes.

    • Current measurement – Malikowski et al
    • Alternative measurement – Rankine et al (2009)
  2. The link between LMS activity, student grades and various external factors.
    We’re currently measuring this by
    • # of hits/visits on course site and discussion forum.
    • # of posts and replies to discussion forum.

    The external factors we’ve used in papers and presentations are:

    • mode of delivery: flex, AIC, CQ.
    • Level of staff participation.
    • Different staff academic background.
    • Impact of input from instructional designer.
    • Age for FLEX students.

    I think there is a range of alternative measures we could use; we need to think more about these.

We need to come to some consensus about the patterns we should use.

Statistical analysis

In the meantime, however, I think that we will end up using the activity/grades correlation at least for:

  • Mode of delivery.
  • Level of staff participation.
  • Age for FLEX students.

I would suggest that having the statistical analysis done and written up for these three would be a good first step. At least while we talk about the other stuff.
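To make this a bit more concrete, here is a very rough sketch of the sort of summary table the statistical analysis might start from: average hits per student, broken down by grade and mode of delivery. It is not a description of what our stats colleague will actually do, and the CSV file and its column names (mode, grade, hits) are simply assumptions for the purpose of the sketch.

import csv
from collections import defaultdict

# Hypothetical extract: one row per student per course offering.
# Assumed columns: mode (FLEX/AIC/CQ), grade (HD/D/C/P/F), hits.
totals = defaultdict(lambda: [0, 0])   # (mode, grade) -> [sum of hits, students]

with open("student_activity_2005.csv", newline="") as f:
    for row in csv.DictReader(f):
        key = (row["mode"], row["grade"])
        totals[key][0] += int(row["hits"])
        totals[key][1] += 1

print("mode,grade,students,avg_hits")
for (mode, grade), (hits, n) in sorted(totals.items()):
    print(f"{mode},{grade},{n},{hits / n:.1f}")

Establishing whether any differences in a table like this are statistically significant is exactly the bit we need the statistical expertise for.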

Further work

In terms of the further work section of the paper I think we need to:

  • Summarise the recommendations from the literature.
  • Identify where we might disagree and why.
  • Identify what we think should be done.

In terms of further work, my suggestions would be:

  • What
    • Testing the existing patterns in cross-platform, cross-institutional and longitudinal ways.
    • Establishing and testing alternate and additional patterns, e.g. the SPAN work.
    • Establishing and testing alternate measurements for these patterns.
    • Testing and developing alternate methods for generating these patterns and enabling different institutions to use them.
    • Identifying theories that would suggest other patterns which might be useful.
    • How to make these patterns available to front-line teaching staff and academics.
  • Why
    • Lots of work seeking to explain the differences in patterns found.
    • Identifying theories that help explain the patterns.
  • How
    • How to make these patterns available to staff and students in a way that encourages and enables improvement.
    • How to encourage management not to use these patterns as a stick.

Work to do

What follows is a list of tasks, by no means complete:

  • Everyone
    • Read and comment on the elevator pitch. Does it sound right? Can it be made better? Is there a different approach?
    • Does the proposed structure work? Should there be something more?
    • Suggestions for a title.
    • Thoughts about what patterns we use in the paper.
  • Stats guy
    • Do the analysis on the “mode of delivery”, “level of staff participation” and “age for FLEX students” patterns.
    • Be able to help us write a blog post that summarises/explains the analysis for each pattern in a way that would be suitable for the journal paper.
  • Non-stats guys
    • Combine the existing literature around LMS usage analysis into a single bibliography and actively try to fill the holes.
    • Analyse the literature to help develop the framework for understanding and comparing the different approaches.
    • Pull out any interesting patterns, measures or findings that either support or contradict what we’ve found.
    • Writing
      • Introduction
      • Limitations of existing work
      • Future work.

Call for participation: Getting the real stories of LMS evaluations?

The following is a call for participation from folk interested in writing a paper or two that will tell some real stories arising from LMS evaluations.

Alternatively, if you are aware of some existing research or publications along these lines, please let me know.

LMSs and their evaluation

I think it’s safe to say that the idea of a Learning Management System (LMS) – aka Course Management System (CMS), Virtual Learning Environment (VLE) – is now just about the universal solution to e-learning for institutions of higher education. Some quotes to support that proposition:

The almost universal approach to the adoption of e-learning at universities has been the implementation of Learning Management Systems (LMS) such as Blackboard, WebCT, Moodle or Sakai (Jones and Muldoon 2007).

LMS have become perhaps the most widely used educational technologies within universities, behind only the Internet and common office software (West, Waddoups et al. 2006).

Harrington, Gordon et al (2004) suggest that higher education has seen no other innovation result in such rapid and widespread use as the LMS. Almost every university is planning to make use of an LMS (Salmon, 2005).

The speed with which the LMS strategy has spread through universities is surprising (West, Waddoups, & Graham, 2006).

Even more surprising is the almost universal adoption of just two commercial LMSes, both now owned by the same company, by Australia’s 39 universities, a sector which has traditionally aimed for diversity and innovation (Coates, James, & Baldwin, 2005).

Oblinger and Kidwell (2000) comment that the movement by universities to online learning was to some extent based on an almost herd-like mentality.

I also believe that increasingly most universities are going to be on their 2nd or perhaps 3rd LMS. My current institution could be said to be on its 3rd enterprise LMS. Each time there is a need for a change, the organisation has to do an evaluation of the available LMS and select one. This is not a simple task. So it’s not surprising to see a growing collection of LMS evaluations and associated literature being made available and shared. Last month, Mark Smithers and the readers of his blog did a good job of collecting links to many of these openly available evaluations through a blog post and comments.

LMS evaluations, rationality and objectivity

The assumption is that LMS evaluations are performed in a rational and objective way: that the organisation demonstrates its rationality by objectively evaluating each available LMS and making an informed decision about which is most appropriate for it.

In the last 10 years I’ve been able to observe, participate in and hear stories about numerous LMS evaluations from a diverse collection of institutions. When no-one is listening, many of those stories turn to the unspoken limitations of such evaluations. They share the inherent biases of participants, the cognitive limitations and the outright manipulations that shaped the process. These are stories that rarely, if ever, see the light of day in research publications. In addition, there is a lot of literature from various fields suggesting that such selection processes are often not all that rational. A colleague of mine did his PhD thesis (Jamieson, 2007) looking at these sorts of issues.

Generally, at least in my experience, when the story of an institutional LMS evaluation process is told, it is told by the people who ran the evaluation (e.g. Sturgess and Nouwens, 2004). There is nothing inherently wrong with such folk writing papers. The knowledge embodied in their papers is, generally, worthwhile. My worry is that if these are the only folk writing papers, then there will be a growing hole in the knowledge about such evaluations within the literature. The set of perspectives and stories being told about LMS evaluations will not be complete.

The proposal

For years, some colleagues and I have regularly told ourselves that we should write some papers about the real stories behind various LMS evaluations. However, we could never do it because most of our stories only came from a small set (often n=1) of institutions. The stories and the people involved could be identified simply by association. Such identification may not always be beneficial to the long-term career aspirations of the authors. There are also various problems that arise from a small sample size.

Are you interested in helping solve these problems and contribute to the knowledge about LMS evaluations (and perhaps long term use)?

How might it work?

There are any number of approaches I can think of; which one works best might depend on who (if anyone) responds to this. If there’s interest, we can figure it out from there.

References

Coates, H., R. James, et al. (2005). “A Critical Examination of the Effects of Learning Management Systems on University Teaching and Learning.” Tertiary Education and Management 11(1): 19-36.

Harrington, C., S. Gordon, et al. (2004). “Course Management System Utilization and Implications for Practice: A National Survey of Department Chairpersons.” Online Journal of Distance Learning Administration 7(4).

Jamieson, B. (2007). Information systems decision making: factors affecting decision makers and outcomes. Faculty of Business and Informatics. Rockhampton, Central Queensland University. PhD.

Jones, D. and N. Muldoon (2007). The teleological reason why ICTs limit choice for university learners and learning. ICT: Providing choices for learners and learning. Proceedings ASCILITE Singapore 2007, Singapore.

Oblinger, D. and J. Kidwell (2000). “Distance learning: Are we being realistic?” EDUCAUSE Review 35(3): 30-39.

Salmon, G. (2005). “Flying not flapping: a strategic framework for e-learning and pedagogical innovation in higher education institutions.” ALT-J, Research in Learning Technology 13(3): 201-218.

Sturgess, P. and F. Nouwens (2004). “Evaluation of online learning management systems.” Turkish Online Journal of Distance Education 5(3).

West, R., G. Waddoups, et al. (2006). “Understanding the experience of instructors as they adopt a course management system.” Educational Technology Research and Development.

How do you develop a cross-LMS usage comparison?

I recently posted about the need to develop an approach that allows for the simple and consistent comparison of usage and feature adoption between different Learning Management Systems (aka LMS, Virtual Learning Environments – VLEs – see What is an LMS?). That post didn’t really establish the need, however. The aim of this post is to explain the need and make some first steps in identifying how you might go about enabling this sort of comparison.

The main aim is to get my colleagues in this project thinking and writing about what they think we should do and how we might do it.

What are you talking about?

Just to be clear, what I’m trying to get at is a simple method by which University X can compare how its staff and students are using its LMS with usage at University Y. The LMS at University Y might be different to that at University X. It might be the same.

They might find out that more students use discussion forums at University X. More courses at University Y might use quizzes. They could compare the number of times students visit course sites, or whether there is a correlation between contributions to a discussion forum and final grade.

Why?

The main reason is so that the university, its management, staff, students and stakeholders have some idea about how the system is being used, especially in comparison with other universities or LMSes. This information could be used to guide decision making, to identify areas for further investigation, as input into professional development programs or curriculum design projects, in comparison and selection processes for a new LMS, and in many other decisions.

There is a research project coming out of Portugal that has some additional questions that are somewhat related.

A more immediate reason is that there currently appears to be no simple, effective method for comparing LMS usage between systems and institutions. The different assumptions, terms and models used by systems and institutions get in the way of appropriate comparisons.

How might it work?

At the moment, I am thinking that you need the following:

  • a model;
    A cross-platform representation of the data required to do the comparison. In the last post the model by Malikowski et al (2007) was mentioned. It’s a good start, but it doesn’t cover everything.

    As a first crack the model might include the following sets of information:

    • LMS usage data;
      Information about the visits, downloads, posts, replies, quiz attempts etc. This would have to be identified by tool, because what you do with a file is different from what you do with a discussion forum or a quiz.
    • course site data;
      For each course, how many files, is there a discussion forum, what discipline is the course, who are the staff, how many students etc.
    • student characteristics data;
      How were they studying (distance education, on-campus)? How old were they?
  • a format;
    The model has to be in an electronic format that can be manipulated by software. The format would have to enable all the comparisons and analysis desired but maintain anonymity of the individuals and the courses.
  • conversion scripts; and
    i.e. an automated way to take institutional and LMS data and put it into the format. Conversion scripts are likely to be based around the LMS and perhaps the student records system, e.g. a Moodle conversion script could be used by all the institutions using Moodle.
  • comparison/analysis scripts/code.
    Whatever code/systems are required to take the information in the format and generate reports etc. that help inform decision making.

Format

I can hear some IT folk crying out for a data warehouse to be used as the format. The trouble is that there are different data warehouses and not all institutions would have one. I believe you’d want to initially aim for a lowest common denominator, have the data in that, and then allow further customisation if desired.

When it comes to the storage, manipulation and retrieval of this sort of data, I’m assuming that a relational database is the most appropriate lowest common denominator. This suggests that the initial “format” would be an SQL schema.
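To make the idea slightly more concrete, here is a minimal sketch of what such a schema might look like, using SQLite as the lowest common denominator. All of the table and column names are illustrative guesses rather than a proposal; the real model would have to come out of the literature review and systems analysis described below.

import sqlite3

conn = sqlite3.connect("lms_usage.db")
conn.executescript("""
-- course site data: one (anonymised) row per course offering
CREATE TABLE course_offering (
    offering_id  INTEGER PRIMARY KEY,
    institution  TEXT,      -- anonymised label, e.g. 'Institution A'
    lms          TEXT,      -- e.g. 'Blackboard 6.3', 'Webfuse', 'Moodle'
    period       TEXT,      -- e.g. '2005 T1'
    discipline   TEXT,
    num_students INTEGER,
    num_staff    INTEGER
);

-- student characteristics: anonymised, one row per enrolment
CREATE TABLE enrolment (
    enrolment_id INTEGER PRIMARY KEY,
    offering_id  INTEGER REFERENCES course_offering(offering_id),
    mode         TEXT,      -- e.g. 'FLEX', 'AIC', 'CQ'
    age_band     TEXT,      -- banded, to help maintain anonymity
    grade        TEXT
);

-- LMS usage data: per enrolment, per tool
CREATE TABLE tool_usage (
    enrolment_id INTEGER REFERENCES enrolment(enrolment_id),
    tool         TEXT,      -- e.g. 'discussion', 'quiz', 'file'
    hits         INTEGER,   -- visits/downloads
    posts        INTEGER,   -- contributions, where the tool supports them
    replies      INTEGER
);
""")
conn.commit()
conn.close()

The point of the sketch is simply that the three sets of information above map fairly naturally onto a handful of tables that say nothing about any particular LMS.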

How would you do it?

There are two basic approaches to developing something like this:

  • big up front design; or
    Spend years analysing everything you might want to include, spend more time designing the perfect system and finally get it ready for use. Commonly used in most information technology projects and I personally think it’s only appropriate for a very small subset of projects.
  • agile/emergent development.
    Identify the smallest bit of meaningful work you can do. Do that in a way that is flexible and easy to change. Get people using it. Learn from both doing it and using it to inform the next iteration.

In our case, we’ve already done some work from two different systems for two different needs. I think discussion forums are shaping up as the next space we both need to look at, again for different reasons. So, my suggestion would be focus on discussion forums and try the following process:

  • literature review;
    Gather the literature and systems that have been written analysing discussion forums. Both L&T and external. Establish what data they require to perform their analysis.
  • systems analysis;
    Look at the various discussion forum systems we have access to and identify what data they store.
  • synthesize;
    Combine all the requirements from the first two steps into some meaningful collection.
  • peer review;
    If possible get people who know something to look at it.
  • design a database;
    Take the “model” and turn it into a “format”.
  • populate the database;
    Write some conversion scripts that will take data from the existing LMSes we’re examining and populate the database.
  • do some analysis;
    Draw on the literature review to identify the types of analysis/comparison that would be meaningful. Write scripts to perform that role.
  • reflect on what worked and repeat;
    Tweak the above on the basis of what we’ve learned.
  • publish;
    Get what we’ve done out in the literature/blogosphere for further comment and criticism.
  • attempt to gather partners.
    While we can compare two or three different LMSes within the one institution, the next obvious step would be to work with some other institutions and see what insights they can share.

The knowledge and experience gained from doing this for discussion forums could then be used to move on to other aspects.

What next?

We probably need to look at the following:

  • See if we can generate some outside interest.
  • Tweak the above ideas to get something usable.
  • Gather and share a bibliography of papers/work around analysing discussion forum participation.
  • Examine the discussion forum data/schema for Blackboard 6.3 and Webfuse.

That’s probably enough to be getting on with.

References

Malikowski, S., M. Thompson, et al. (2007). “A model for research into course management systems: bridging technology and learning theory.” Journal of Educational Computing Research 36(2): 149-173.

Identifying file distribution on Webfuse course sites

As part of the thesis I’ve been engaging with some of the literature around LMS feature usage to evaluate usage of Webfuse. A good first stab at this was reported in an earlier post. There were a number of limitations to that work, so it’s time to expand a bit on it, to some extent for the PhD and to some extent because of a paper.

As with some of the other posts this one is essentially a journal or a log of what I’m doing and why I’m doing it. A permanent record of my thinking so I can come back later, if needed.

There’s even an unexpected connection with power law distributions towards the end.

Content distribution

In that previous post I did not include a graph/any figures around the use of Webfuse course sites to distribute content or files. This is because Webfuse had a concept of a default course site. i.e. every course would have the same basic default site created automatically. Since about 2001 this meant that every course site performed some aspect of information distribution including: the course synopsis on the home page, details about the course assessment, details about course resources including textbook details and a link to the course profile, and details about the teaching staff.

Beyond this, staff were able to upload files and other content as they desired, i.e. moving beyond the default course site was optional and left entirely up to the teaching staff. Some of us perhaps went overboard. Other staff may have been more minimal. The aim here is to develop metrics that illustrate that variability.

Malikowski et al (2007) have a category of LMS usage called Transmitting Content. The LMS features they include in this category include:

  • Files uploaded into the LMS.
  • Announcements posted to the course site.

So, in keeping with the idea of building on existing literature, I’ll aim to generate data around those figures. Translating those into Webfuse terms should be fairly straightforward; my thinking includes:

  • Files uploaded into the LMS.
    Malikowski et al (2007) include both HTML files and other file types. For Webfuse and its default course sites I believe I’ll need to treat these a little differently:
    • HTML files.
      The default course sites produce HTML. I’ll need to exclude these standard HTML files.
    • Other files.
      Should be able to simply count them.
    • Real course sites.
      Webfuse also had the idea of a real course site. i.e. an empty directory into which the course coordinator could upload their own course website. This was usually used by academics teaching multimedia, but also some others, who knew what they wanted to do and didn’t like the limitations of Webfuse.
  • Announcements.
    The default course site has an RSS based announcements facility. However, some of the announcements are made by “management”, i.e. not the academics teaching the course but the middle managers responsible for a group of courses. These announcements are more administrative and apply to all students (so they get repeated in every course). In some courses they may be the only updates. These announcements are usually posted by the “webmaster”, so I’ll need to exclude those.

Implementation

I’ll treat each of these as somewhat separate.

  • Calculate # non-HTML files.
  • Calculate # of announcements – both webmaster and not.
  • Calculate # HTML files beyond default course site (I’ll postpone doing this one until later)

Calculate # non-HTML files.

Webfuse created/managed websites, so all of the files uploaded by staff exist within a traditional file system, not in a database. With a bit of UNIX command line magic it’s easy to extract the name of every file within a course site and remove those that aren’t of interest. The resulting list of files is the main data source that can then be manipulated.

The command to generate the main data source goes like this:

find T1 T2 T3 -type f |                  # get all the files for the given terms
  grep -v '.htm$' | grep -v '.html$' |   # remove the HTML files
  grep -v 'CONTENT$' |                   # remove the Webfuse data files
  grep -v .htaccess |                    # remove the Apache access restriction file
  grep -v 'updates.rss$' |               # remove the RSS file used for announcements
  grep -v '.ctb$' | grep -v '.ttl$' |
  grep -v '/Boards/[^/]*$' | grep -v '/Members/[^/]*$' |
  grep -v '/Messages/[^/]*$' | grep -v '/Variables/[^/]*$' |
  grep -v 'Settings.pl' |                # remove files created by the discussion forum
  sed -e '1,$s/.gz$//'

The sed command at the end removes the gzip extension that has been added to all the files in old course sites that have been archived and compressed.

The output of this command is the following

T1/COIT11133/Assessment/Assignment_2/small2.exe
T1/COIT11133/Assessment/Weekly_Tests/Results/QuizResults.xls
T1/COIT11133/Resources/ass2.exe

The next aim is to generate a file that contains the number of files for each course offering. From there the number of courses with 0 files can be identified, as can some other information. The command to do this is:

sed -e '1,$s/^\(T.\/.........\/\).*$/\1/' all.Course.Files | sort | uniq -c | sort -r -n > count.Course.Files

After deleting a few entries for backup or temp directories, we have our list. Time to manipulate the data, turn it into a CSV file and pull it into Excel. The graph below shows a fairly significant disparity in the number of files per course site; the type of curve looks very familiar, though.

Number of uploaded files per Webfuse course site for 2005

In total, for 2005 there were 178 course sites that had files. That’s out of 299 – so 59.5%. This compares to the 50% that Col found for the Blackboard course sites in the same year.

Calculate # of Announcements

The UNIX command line alone will not solve this problem. Actually, think again, it might. What I have to do is:

  • For each updates.rss
    • count the number of posts by webmaster
    • count the number of posts by non-webmaster
    • output – courseOffering,#webmaster,#non-webmaster

Yep, a simple shell script will do it

echo COURSE,ALL,webmaster
for name in `find T1 -name updates.rss`
do
  # total number of announcement items in the RSS file
  # (assumes each <item> element starts on its own line)
  all=`grep '<item>' $name | wc -l`
  # announcements posted by the webmaster account
  webmaster=`grep 'webmaster' $name | wc -l`
  echo "$name,$all,$webmaster"
done

Let’s have a look at the 2005 data. Remove some dummy data, remove extra whitespace. 100% of the courses had updates. 166 (55%) had no updates from the teaching staff, 133 (45%) did. That compares to 77% in Blackboard. Wonder if the Blackboard updates also included “webmaster” type updates?

In terms of the number of announcements contributed by the teaching staff, the following graph shows the distribution. The largest number for a single offering was 34; based on a 12 week CQU teaching term, that’s, on average, almost 3 announcements a week.

Number of coordinator announcements - Webfuse 2005

Power laws and LMS usage?

The two graphs above look very much like a power law distribution. Clay Shirky has been writing and talking about power law distributions for some time. Given that there appears to be a power law distribution going on here with usage of these two LMS features, and potentially that the same power law distribution might exist with other LMS features, what can Shirky and other theoretical writings around power law distributions tell us about LMS usage?
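As a quick and dirty check on whether a power law is even a plausible description, the following sketch takes the per-course file counts generated earlier (assuming the count.Course.Files format produced by the uniq -c command above, i.e. a count followed by the course path) and prints a rank versus count table that can be graphed on log-log axes. A roughly straight line on that graph would hint at a heavy-tailed, power-law-like distribution; anything more rigorous would need proper fitting and testing.

import math

# count.Course.Files lines look something like: "  42 T1/COIT11133/"
counts = []
with open("count.Course.Files") as f:
    for line in f:
        fields = line.split()
        if fields:
            counts.append(int(fields[0]))

counts.sort(reverse=True)

# Rank vs count; a straight-ish line on log-log axes hints at a power law.
print("rank,count,log_rank,log_count")
for rank, count in enumerate(counts, start=1):
    print(f"{rank},{count},{math.log10(rank):.3f},{math.log10(count):.3f}")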

References

Malikowski, S., Thompson, M., & Theis, J. (2007). A model for research into course management systems: bridging technology and learning theory. Journal of Educational Computing Research, 36(2), 149-173.

External factors associated with CMS adoption

This post follows on from a previous post and continues an examination of some papers written by Malikowski and colleagues examining the adoption of features of an LMS/VLE/CMS. This one focuses on the 2006 paper.

External factors associated with CMS adoption

The abstract of this paper (Malikowski, Thompson and Theis, 2006) is:

Course management systems (CMSs) have become a common resource for resident courses at colleges and universities. Researchers have analyzed which CMS features faculty members use most primarily by asking them which features are used. The study described builds on previous research by counting the number of CMS features a faculty member used and by analyzing how three external factors are related to the use of CMS features. The external factors are (a) the college in which a course was offered, (b) class size, and (c) the level of a class—such as 100 or 200. The only external factor showing a statistically significant relationship to the use of CMS features was the college in which a course was offered. Another finding was that CMSs are primarily used to transmit information to students. Implications are described for using external factors to increase effective use of more complex CMS features.

Implication: repeat this analysis with the Webfuse and Blackboard courses at CQU. We can do this automatically for a range of external factors beyond those.

Point of the research

Echoing what was said in the 2008 paper, and one reason I am interested in this work:

Faculty members often receive help in using a CMS (Arabasz, Pirani, & Fawcett, 2003; Grant, 2004). This help typically comes from professionals who focus on instructional design or technology. Information about which features are used most could provide these professionals with a starting point for promoting effective use of more complex CMS features. Information about how external factors influence use could identify situations in which more complex features can be successfully promoted.

Prior research

Points of difference between this research and prior work are listed as:

  • Few studies have focused on the use of the CMS in resident courses.
  • Prior findings were generated from surveys.
  • Morgan’s suggestion that faculty use more features over time may only be partially correct.
  • Only one study used a statistical analysis.
  • Previous studies analyse usage for all staff or for a broad array of staff – focusing on a few factors might be a contribution.
  • Lastly, it adds to the research by including an examination of how people learn.

    Currently, research into CMS use has considered CMS features, opinions from teachers about these features, and student satisfaction with CMS features. Gagné, Briggs, and Wager summarize the importance of considering both learning psychology and technology, which they refer to as “media.” They emphasize “the primacy of selecting media based on their effectiveness in supporting the learning process. Every neglect of the consideration of how learning takes place may be expected to result in a weaker procedure and poorer learning results.” (Gagné, Briggs, & Wager, 1992, p. 221). For decades, researchers have studied how teaching methods affect learning outcomes. Several recent publications describe seminal research findings, research that has built on these findings, and learning theories that have emerged from this research (Driscoll, 2005; Gagné et al., 1992; Jonassen & Association for Educational Communications and Technology, 2004; Reigeluth, 1999).

    That is as may be. But given my suspicion that most academics don’t really make rational judgements about how they teach based on the educational literature, would such an analysis be misleading and pointless?

They argue that the model from Malikowski, Thompson and Theis (2007) is what they use here and that it combines both features and theory and can be used to synthesise research.

Methodology

Interestingly, they have a spiel about causation and relationship

An important point to clarify is that the method applied in this study was not intended to determine if external factors caused the use of CMS features. Identifying causation is an important but particularly challenging research goal (Fraenkel &Wallen, 1990). Instead, the current method and study only sought to determine if significant relationships existed between external factors and the adoption of specific CMS features.

Looks like basically the same methodology and perhaps same data set as the 2008 paper. They do note some problems with the manual checking of course sites

This analysis was a labor intensive process of viewing a D2LWeb site for a particular course and completing a copy of the data collection form, by counting how often features in D2L were used. In some cases, members of the research team counted thousands of items.

Results

The definition of adoption used is different from that in the 2008 paper:

In this study, a faculty member was considered to have adopted a feature if at least one instance of the feature was present. For example, if a faculty member had one quiz question for students, that faculty member was considered as having adopted the quiz feature.

Only 3 of the 13 LMS features available were used by more than half the faculty: grade book, news/announcements and content files. These were also the only 3 features where the percentage of adoption was greater than the standard deviation.

Implication: A comparison of Webfuse usage using different definitions of adoption could be interesting as part of a way to explore what would make sense as a definition of adoption.

In some cases the standard deviation was twice as large as the percentage of faculty members using a feature.

They include the following pie chart that is meant to use the model from Malikowski et al (2007). But I can’t, for the life of me, figure out how they get to it.

Categories of CMS Features

They found that the college (discipline) was the only external factor that was a significant predictor of feature usage.

Discussion

The discussion raises the question of norms and traditions within disciplines driving CMS feature adoption. I’m amazed more isn’t made of these being residential courses; this might play a role.

Implication: It might be argued that norms and tradition are more than just discipline based. I would argue that at CQU, when it comes to online learning, there were three main traditions based on faculty structures from the late 1990s through to the early noughties:

  1. Business and Law – some courses with AIC students, very different approach to distance education and also online learning. Had a very strong faculty-based set of support folk around L&T and IT.
  2. Infocom – similar to Business and Law in terms of AIC courses and distance education. But infected by Webfuse and similar to BusLaw had a strong faculty-based set of support folk around L&T and IT.
  3. Others – essentially education, science, engineering and health. Next to no AIC students. Some had no distance education. No strong set of faculty-based support folk around IT and L&T, though education did have some.

Would be interesting to follow/investigate these norms and traditions and how that translated to e-learning. Especially since the faculty restructure around 2004/2005 meant there was a mixing of the cultures. BusLaw and large parts of Infocom merged. Parts of Infocom merged with education and arts….

Limitations

The study involved 81 faculty members, as opposed to 862, 730, 192 and 191 in other studies. The argument is that those other studies used surveys, not the more resource-intensive approach used by this work.

They recognise the problem of change:

The current study analyzed CMS Web sites when they were on a live server. The limitation in this case is that a faculty member can change a Web site while it is being analyzed. Fortunately, the university at which this study occurred has faculty members create a different CMS Web site each time a course is offered.

References

Malikowski, S., Thompson, M., & Theis, J. (2006). External factors associated with adopting a CMS in resident college courses. Internet and Higher Education, 9(3), 163-174.

Malikowski, S., Thompson, M., & Theis, J. (2007). A model for research into course management systems: bridging technology and learning theory. Journal of Educational Computing Research, 36(2), 149-173.

Malikowski, S. (2008). Factors related to breadth of use in course management systems. Internet and Higher Education, 11(2), 81-86.

Automating calculation of LMS/CMS/VLE feature usage – a project?

I’m in the midst of looking at the work of Malikowski et al in evaluating the usage of VLE features. The aim of that work is to provide information that can help those who help academics use VLEs. The following is an idea to address some of the problems with that work and arrive at something that might be useful for cross-institutional comparisons.

Given the widespread adoption of the LMS/VLE, I’d be kind of surprised if someone hasn’t given some thought to what I’ve suggested, but I haven’t heard anything.

Do you know of a project that covers some of this?

Interested in engaging in something like this?

Their contribution

An important contribution they’ve made is to provide a useful framework for comparing feature usage between different systems, and to summarise the basic level of usage across the different parts of the framework. The framework is shown in the following image.

Malikowski Flow Chart

Limitations

However, there remain two important questions/problems:

  1. How do you generate the statistics to fill in the framework?
    Malikowski et al suggest that prior studies relied primarily on asking academics what they did with the LMS. They then point out that this approach is somewhat less than reliable. They adopt a better approach by visiting each course site and manually counting feature usage.

    This is not much of an improvement, because of the workload involved but also because of the possibility of errors due to them missing usage. For example, the role in the LMS of the user visiting each course site may mean they are not able to see everything. Alternatively, when they visit the site may change what they see, e.g. an academic who deletes a particular function before term ends.

  2. What does it mean to adopt a feature?
    In Malikowski (2008) adoption is defined as using a feature at or above the 25th percentile. This, I believe, is open to some problems as well.

Implications

Those limitations mean that, even with their framework, it is unlikely that a lot of organisations are going to engage in this sort of evaluation. It’s too difficult. This means less data can be compared between institutions and systems, which in turn limits reflection and knowledge.

Given the amount of money being spent on the LMS within higher education, it seems there is a need to address this problem.

One approach

The aims of the following suggestion are:

  • Automate the calculation of feature usage of LMS.
  • Enable comparison across different LMS.
  • Perhaps, include some external data.

One approach to this might be to use the model/framework from Malikowski et al as the basis for the design of a set of database tables that are LMS independent.

Then, as need arises, write a series of filters/wrappers that retrieve data from a specific LMS and insert it into the “independent” database.

Write another series of scripts that generate useful information.
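A rough sketch of how one of those filters and one of the reporting scripts might hang together is below. It assumes the LMS-independent database has a feature_use table (course, category, uses) loosely based on the Malikowski categories, and that the specific LMS can export a per-course, per-tool activity count as a CSV file. Both the table and the CSV layout are assumptions made purely for illustration; a real filter would more likely query the LMS database directly.

import csv
import sqlite3

# Assumed mapping from LMS-specific tool names to Malikowski-style categories.
CATEGORY = {
    "file": "Transmitting content",
    "announcement": "Transmitting content",
    "forum": "Class interactions",
    "quiz": "Evaluating students",
    "grades": "Evaluating students",
}

conn = sqlite3.connect("lms_usage.db")
conn.execute("""CREATE TABLE IF NOT EXISTS feature_use
                (course TEXT, category TEXT, uses INTEGER)""")

# The "filter": load a hypothetical per-course, per-tool export (course,tool,uses).
with open("moodle_tool_counts.csv", newline="") as f:
    for row in csv.DictReader(f):
        category = CATEGORY.get(row["tool"])
        if category:
            conn.execute("INSERT INTO feature_use VALUES (?, ?, ?)",
                         (row["course"], category, int(row["uses"])))
conn.commit()

# One of the "useful information" scripts: % of courses using each category.
# (Assumes the export contained at least one course.)
total = conn.execute("SELECT COUNT(DISTINCT course) FROM feature_use").fetchone()[0]
for category, n in conn.execute("""SELECT category, COUNT(DISTINCT course)
                                   FROM feature_use WHERE uses > 0
                                   GROUP BY category"""):
    print(f"{category}: {100 * n / total:.0f}% of courses")

The report at the end would work unchanged regardless of which LMS the filter happened to come from, which is really the whole point of the exercise.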

Work with a number of institutions to feed their data into the system to allow appropriate cross institutional/cross LMS comparisons.

Something I forgot: also work on defining a definition of adoption that improves upon those used by Malikowski.

Start small

We could start something like this at CQU. We have at least two historically used “LMS/VLEs” and one new one. Not to mention that Col has already made progress on specific aspects of the above.

The logical next step would be to expand to other institutions. Within Australia? ALTC?

Factors related to the breadth of use of LMS/VLE features

As a step towards thinking about how you judge the success of an LMS/VLE, this post looks at some work done by Steven Malikowski. Why his work? Well, he is co-author on three journal papers that provide one perspective on the usage of features of an LMS, including one that proposes a model for research into course management systems. A list of the papers is in the references section.

This post focuses on looking at the 2008 paper. On the whole, there seems to be a fair bit of space for research to extend and improve on this work.

Factors related to breadth of use

The abstract of this paper (Malikowski, 2008) is

A unique resource in course management systems (CMSs) is that they offer faculty members convenient access to a variety of integrated features. Some features allow faculty members to provide information to students, and others allow students to interact with each other or a computer. This diverse set of features can be used to help meet the variety of learning goals that are part of college classes. Currently, most CMS research has analyzed how and why individual CMS features are used, instead of analyzing how and why multiple features are used. The study described here reports how and why faculty members use multiple CMS features, in resident college classes. Results show that nearly half of faculty members use one feature or less. Those who use multiple features are significantly more likely to have experience with interactive technologies. Implications for using and encouraging the use of multiple CMS features are provided.

The paper suggests that cognitive psychology is the theoretical framework used, in particular the idea that there are discrete categories of learning goals ranging from simple to complex, and that learners who don’t master the simple ones first will have difficulties if they attempt the more complex. An analogy is made with the use of a CMS: there are simple features that need to be learned before using complex features.

In explaining previous research on adoption of features of an LMS (mostly his own quantitative evaluations) the author reports that the college/discipline an academic is in explains most variation.

How to use these findings

The point is made that a CMS is used to transmit information more than twice as much as it is used for anything else. Also, that there are cheaper and better ways to transmit information.

The suggestion is then made that

Instructional designers, researchers, and others interested in increasing effective CMS use can use the research just summarized to emphasize factors that are related to the use of uncommon CMS features and deemphasize factors that are not related to increased use.

But the best advice that is presented is that if you wish to promote use of feature X, then encourage it in discipline Y first, since they have shown interest in related features. Then, after generating insight, seek to take it elsewhere?

Use of multiple features

Only a small number of studies have focused on use of multiple features. Most were achieved by asking academics how they use the CMS. The paper suggests that a second way is to visit course sites and observe which features are used, and that observing behaviour is more accurate than asking people how they behave.

IMPLICATION: the approach Col and Ken are using for Blackboard and what I’m using for Webfuse is automated. Not manual. A point of departure.

Methodology

Three bits of data were used

  1. Usage of 6 common CMS features
    • A random sample of 200 staff at a US institution using D2L were asked to participate – 81 chose to participate.
    • 154 D2L sites were analysed, as staff may teach more than one course a semester.
    • 2 research team members visited and manually analysed each course site – repeating until there were no discrepancies.
  2. External factors: class size, the college/discipline and class level (1st, 2nd year etc)
    Gathered manually from the course site.
  3. 10 internal factors focused primarily on the faculty members’ previous experience with technology.
    Gathered by surveying staff.

Limitation: I wonder if D2L has any adaptive release mechanisms like Blackboard. Potentially, if the team member visiting each course site has an incorrectly configured user account, they may not be able to see everything within the site.

The purpose was to determine if internal or external factors were related to adoption of multiple CMS features. This was established using a regression analysis with the dependent variable being the number of features adopted and the independent variables being the 3 external and 10 internal factors.

What is adoption?

This is a problem Col and I have talked about and which I’ve mentioned in some early posts looking at Webfuse usage. The definition Malikowski used in this study was

In this study, adopting a feature was defined as a situation where a D2L Web site contained enough instances of a feature so this use was at or above the 25th percentile, for a particular feature. For example, if a faculty member created a D2L Web site with 10 grade book entries, the grade book feature would have been adopted in this Web site, since the 25th percentile rank for the grade book feature is 7.00. However, if the same Web site contained 10 quiz questions, the quiz feature would not have been adopted since the 25th percentile rank for quiz questions is 12.25

I find this approach troubling. Excluding a course from adopting the quiz feature because it has only 10 questions seems harsh. What if the 10 questions were used for an important in-class test and were a key component of the course? What if a few courses have added all of the quiz questions provided with the textbook into the system, pushing the percentile up for everyone else?

Implication: There’s an opportunity to develop and argue for a different – better – approach to defining adoption.
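To see how much the definition matters, here is a small sketch that applies two definitions of adoption to some made-up per-course counts of quiz questions: Malikowski’s “at or above the 25th percentile” rule versus the simpler “at least one instance” rule from the 2006 paper. The numbers are invented purely for illustration.

# Made-up per-course counts of quiz questions (0 = feature not present at all).
quiz_questions = [0, 0, 0, 3, 5, 10, 10, 12, 15, 20, 40, 120]

# 25th percentile of the non-zero counts, interpolating between ranks.
# (Whether the percentile should be taken over all sites, or only those with
# any use at all, is itself a judgement call the paper doesn't really settle.)
present = sorted(n for n in quiz_questions if n > 0)
pos = 0.25 * (len(present) - 1)
lower = present[int(pos)]
upper = present[min(int(pos) + 1, len(present) - 1)]
threshold = lower + (pos - int(pos)) * (upper - lower)

at_least_one = sum(1 for n in quiz_questions if n >= 1)
above_percentile = sum(1 for n in quiz_questions if n >= threshold)

print(f"25th percentile threshold: {threshold:.2f}")
print(f"Adopted ('at least one'): {at_least_one} of {len(quiz_questions)} courses")
print(f"Adopted ('>= 25th percentile'): {above_percentile} of {len(quiz_questions)} courses")

Even on toy data like this the two definitions disagree about a couple of courses, and that sensitivity is part of what makes the choice of definition worth arguing about.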

Sample of results

  • 36% of sites used only 1 feature
  • 72% of sites used 2 or fewer features
  • 0% of sites used all 6 features
  • Only four of the external/internal factors could be used to predict the number of CMS features adopted
    1. Using quizzes
    2. College of social science
    3. Using asynchronous discussions
    4. Using presentation software (negative correlation)

Discussion

Suggests that the factors found to predict multiple feature use can be used to guide instructional designers to work with these faculty to determine what works before going to the others.

Limitation: I don’t find this a very convincing argument. I start to think of the technologists’ alliance and the difference between early adopters and the majority. The folk using multiple LMS features are likely to be very different from those not using many. Focusing too much on those already using many might lead to the development of insight that is inappropriate for the other category of user.

Implication: There seem to be some research opportunities that focus on identifying the differences between these groups of users by actually asking them, i.e. break academics into groups based on feature usage and talk with them or ask them questions designed to bring out differences. Perhaps to test whether they are early adopters or not.

References

Malikowski, S., Thompson, M., & Theis, J. (2006). External factors associated with adopting a CMS in resident college courses. Internet and Higher Education, 9(3), 163-174.

Malikowski, S., Thompson, M., & Theis, J. (2007). A model for research into course management systems: bridging technology and learning theory. Journal of Educational Computing Research, 36(2), 149-173.

Malikowski, S. (2008). Factors related to breadth of use in course management systems. Internet and Higher Education, 11(2), 81-86.

How do you measure success with institutional use of an LMS/VLE?

My PhD is essentially arguing that most institutional approaches to e-learning within higher education (i.e. the adoption and long term use of an LMS) have some significant flaws. The thesis will/does describe one attempt to formulate an approach that is better. (Aside: I will not claim that the approach is the best; in fact I’ll argue that the notion of there being “one best way” to support e-learning within a university is false.) The idea of “better” raises an interesting/important question, “How do you measure success with institutional use of an LMS?” How do you know if one approach is better than another?

These questions are important for other reasons. For example, my current institution is currently implementing Moodle as its new LMS. During the selection and implementation of Moodle there have been all sorts of claims about its impact on learning and teaching. During this implementation process, management have also been making all sorts of decisions about how Moodle should be used and supported (many of which I disagree strongly with). How will we know if those claims are fulfilled? How will we know if those plans have worked? How will we know if we have to try something different? In the absence of any decent information about how the institutional use of the LMS is going, how can an organisation and its management make informed decisions?

This question is of increasing interest to me for a variety of reasons, but the main one is the PhD. I have to argue in the PhD and resulting publications that the approach described in my thesis is in some way better than other approaches. Other reasons include the work Col and Ken are doing on the indicators project and obviously my beliefs about what the institution is doing. Arguably, it’s within the responsibilities of my current role to engage in some thinking about this.

This post, and potentially a sequence of posts after, is an attempt to start thinking about this question. To flag an interest and start sharing thoughts.

At the moment, I plan to engage with the following bits of literature:

  • Malikowski et al and related CMS literature.
    See the references section below for more information. But there is an existing collection of literature specific to the usage of course management systems.
  • Information systems success literature.
    My original discipline of information systems has, not surprisingly a big collection of literature on how to evaluate the success of information systems. Some colleagues and I have used bits of this literature in some publications (see references).
  • Broader education and general evaluation literature.
    The previous two bodies of literature tend to focus on “system use” as the main indicator of success. There is a lot of literature around the evaluation of learning and teaching, including some arising from work done at CQU. This will need to be looked at.

Any suggestions for other places to look? Other sources of inspiration?

Why the focus on use?

Two of the three areas of literature mentioned above draw heavily on the level of use of a system in order to judge its success. Obviously, this is not the only measure of success and may not even be the best one. Though the notion of “best” is very subjective and depends on purpose.

The advantage that use brings is that it can, to a large extent, be automated. It can be easy to generate information about levels of “success” that are at least, to some extent, better than having nothing.

At the moment, most universities have nothing to guide their decision making. Changing this is going to be difficult, but not because of the data: providing the information is reasonably straightforward. Changing the mindset and processes at an institution so that these results are taken into account when making decisions is another matter.

Choosing a simple first step, recognising its limitations and then hopefully adding better measures as time progresses is a much more effective and efficient approach. It enables learning to occur during the process and also means that if priorities or the context change, you lose less because you haven’t invested the same level of resources.

In line with this is that the combination of Col’s and Ken’s work on the indicators project and my work associated with my PhD provides us with the opportunity to do some comparisons of two different systems/approaches within the same university. This sounds like a good chance to leverage existing work into new opportunities and develop some propositions about what works around the use of an LMS and what doesn’t.

Lastly, there are some good references that suggest that looking at use of these systems is a good first start. e.g. Coates et al (2005) suggest that it is the uptake and use of features, rather than their provision, that really determines their educational value.

References

Behrens, S., Jamieson, K., Jones, D., & Cranston, M. (2005). Predicting system success using the Technology Acceptance Model: A case study. Paper presented at the Australasian Conference on Information Systems’2005, Sydney.

Coates, H., James, R., & Baldwin, G. (2005). A Critical Examination of the Effects of Learning Management Systems on University Teaching and Learning. Tertiary Education and Management, 11(1), 19-36.

Jones, D., Cranston, M., Behrens, S., & Jamieson, K. (2005). What makes ICT implementation successful: A case study of online assignment submission. Paper presented at the ODLAA’2005, Adelaide.

Malikowski, S., Thompson, M., & Theis, J. (2006). External factors associated with adopting a CMS in resident college courses. Internet and Higher Education, 9(3), 163-174.

Malikowski, S., Thompson, M., & Theis, J. (2007). A model for research into course management systems: bridging technology and learning theory. Journal of Educational Computing Research, 36(2), 149-173.

Malikowski, S. (2008). Factors related to breadth of use in course management systems. Internet and Higher Education, 11(2), 81-86.

What can history tell us about e-learning and its future?

The following contains some initial thoughts about what might turn into a paper for ASCILITE’09. It’s likely that I’ll co-author this with Col Beer.

Origins

The idea of this paper has arisen out of a combination of local factors, including:

  • The adoption of Moodle as the new LMS for our institution.
  • The indicators project Col is working on with Ken.
    Both Col and I used to support staff use of Blackboard. This project aims to do some data mining on the Blackboard system logs to better understand whether, and how, people were using Blackboard.
  • Some of the ideas that arose from writing the past experience section of my thesis.

Abstract and premise

The premise of the paper starts with the Santayana quote

Progress, far from consisting in change, depends on retentiveness. When change is absolute there remains no being to improve and no direction is set for possible improvement: and when experience is not retained, as among savages, infancy is perpetual. Those who cannot remember the past are condemned to repeat it.

The idea is that there is a long history of attempting to improve learning and teaching through technology. There is a history of universities moving to new learning management systems and of staff within those universities using them. In fact, our institution has over 10 years’ experience using learning management systems. Surely there are some lessons within that experience that can help inform the transition to Moodle at our institution?

The aim of the paper will be, at least, to examine that history, both broadly and specifically at our institution, and seek to identify those lessons. Perhaps the paper might evaluate the transition to Moodle at our institution and, based on that past experience, seek to suggest what some possible outcomes might be.

As you might guess from some of the following, and from what I’ve written in the past experience section of my thesis, I have a feeling that as we explore this question we are likely to find that our institution has failed to heed Santayana’s advice on retentiveness and that the institution may be repeating the past.

Given that some of the folk directly involved in our institution’s transition to Moodle read this blog and we’ll be talking about this paper within the institution, perhaps we can play a role in avoiding that. Or perhaps, as we dig deeper, the transition is progressing better than I currently perceive.

In reality, I think we’ll avoid making specific comments on what is happening in our institution. The transition to Moodle is being run as a very traditional teleological process, which means that any activity not seen as directly contributing to the achievement of the purpose (i.e. anything not deemed critical) will be treated as something to be curtailed.

Connection with conference themes?

The paper should try to connect with the themes of the conference, hopefully in a meaningful way, though a surface connection would suffice. The theme for the conference is “Same places, different spaces” and includes the following sub-themes (I’ve included the bits that might be relevant to this paper idea):

  • Blended space
    What makes blended learning effective, why, how, when and where?
  • Virtual space
    What is the impact, what are the implications and how can the potential of this emergent area be realistically assessed?
  • Social space
    What Web 2.0 technologies are teachers and students using? How well do they work, how do you know, and what can be done to improve and enhance their use?
  • Mobile space
  • Work space

Not a great fit with the sub-themes, but I think there is a connection with the theme in a roundabout way. Perhaps the title could be “E-learning and history: different spaces, same approaches” or something along those lines. This might have to emerge once we’ve done some work.

Potential structure and content

What follows is an attempt to develop a structure for the paper and fill in some indicative content and/or work we have to do. It assumes an introduction that will position e-learning as an amnesiac field. This suggestion will be built around the following and similar quotes:

Learning technology often seems an amnesiac field, reluctant to cite anything ‘out of date’; it is only recently that there has been a move to review previous practice, setting current developments within an historical context…many lessons learnt when studying related innovations seem lost to current researchers and practitioners. (Oliver, 2003)

I should note that the following is a first draft, an attempt to get my ideas down so Col and I can discuss it and see if we can come up with better ideas. Feel free to suggest improvements.

History of technology mediated learning and hype cycles

The aim of this section is to examine the broader history of technology-mediated learning going back to the early 1900s and drawing a small amount of content from ????.

The main aim, however, is to attempt to identify a hype cycle associated with these technologies that generally results in little or no change in the practice of learning and teaching. It will draw on some of the ideas and content from here. It will also draw on related hype cycle literature, including Birnbaum’s fad cycle and Gartner’s hype cycle.

E-learning usage: quantity and quality

This section will provide a summary of what we know from the literature, and from the local institution, about the quantity and quality of past usage of e-learning, with a particular focus on the LMS.

Col’s indicators project has generated some interesting and depressing results from the local system. For example, our institution has a large distance education student cohort: a group of students who rarely, if ever, set foot on a campus and who study almost entirely by print-based distance education and e-learning. Recently, Col has found that 68% of those distance education students have never posted to a course discussion forum.
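
For what it’s worth, a figure like that can be derived with very little code. The sketch below shows one plausible way it might be calculated, assuming hypothetical enrolment and forum-post extracts; the file and column names are mine, not the actual indicators project data.

```python
# Hypothetical calculation of the proportion of distance education students
# who have never posted to a course discussion forum. The extracts and their
# columns are assumptions for illustration only.
import pandas as pd

enrolments = pd.read_csv("enrolments.csv")    # columns: student_id, mode
forum_posts = pd.read_csv("forum_posts.csv")  # columns: student_id, course_id, post_id

de_students = set(enrolments.loc[enrolments["mode"] == "distance", "student_id"])
posters = set(forum_posts["student_id"])

never_posted = de_students - posters
pct = 100 * len(never_posted) / len(de_students)
print(f"{pct:.0f}% of distance education students have never posted to a forum")
```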

Paradigms of e-learning and growing abundance

The aim of this section would be to suggest that the focus on the LMS is itself rather short-sighted and does not recognise the ongoing evolution of e-learning, i.e. that we’re not going to be stuck in the LMS rut for the long term, and perhaps the institution should be looking at that change and how it can harness it.

This section will draw on the paradigms of e-learning. It may also draw on some of the ideas contained in this TED talk by Chris Anderson around the four key stages of technology and related work.

Thinking about this brings up some memories of the 90s. I remember when friends of mine in the local area would enroll at the university in order to get Internet access and an email address. I remember when the university had to discourage students from using outside email accounts (e.g. hotmail) because they didn’t provide enough disk space.

This was because email and Internet access inside universities were more abundant than outside. Those days are long gone. External email providers like hotmail and gmail provide larger disk quotas for email than institutions do. For many people it’s increasingly cheap to get Internet access at home; at least, it’s cheaper to pay for it than to pay for a university education you don’t need.

Diffusion, chasms and task corruption

Perhaps this section could be titled “Lessons”.

The idea behind this suggested section is starting to move a little beyond the historical emphasis. It’s more literature and/or idea based. So I’m not sure of its place. Perhaps it’s the history of ideas around technology. Perhaps it can fit.

The idea would be to include a list of ideas associated with e-learning, such as diffusion of innovations, the chasm, and task corruption.

Predictions and suggestions

This is getting to the sections that are currently more up in the air. Will it be an evaluation of the transition, or will it simply be a list of more generic advice? The generic advice might be safer institutionally, a better fit with the conference themes, and more generally useful.

An initial list:

  • The adoption of Moodle will decrease the quality of learning and teaching at our institution, at least in the short term.
  • Longer term, unless there is significant activity to change the conception of learning and teaching held by the academics, the quantity and quality of use of Moodle will be somewhat similar, possibly a little better (at least quantity) than that of previous systems.
    Idea: Col, can we get some of those global figures you showed me broken down by year to see what the trend is? i.e. does it get better or worse over time? (A rough sketch of how such a breakdown might be done follows this list.)
  • Strategic specification of standards or innovation will have little or no impact on quantity and quality, will perhaps contribute to a lowest common denominator, and will likely encourage task corruption, work-arounds and shadow systems.
  • Increasingly, the more engaged academics will start to use external services to supplement the features provided by the LMS.
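
The year-by-year breakdown flagged in the list above might look something like the following sketch. It assumes a hypothetical per-offering summary file; the column names are illustrative, and the real indicators data would obviously differ.

```python
# A rough sketch of a year-by-year trend in average LMS use per student.
# The summary file and its columns (course_id, year, enrolled_students,
# total_hits, forum_posts) are hypothetical.
import pandas as pd

offerings = pd.read_csv("course_offering_summary.csv")

offerings["hits_per_student"] = offerings["total_hits"] / offerings["enrolled_students"]
offerings["posts_per_student"] = offerings["forum_posts"] / offerings["enrolled_students"]

# Average per-student activity for each year: does it rise, fall, or stay flat?
trend = offerings.groupby("year")[["hits_per_student", "posts_per_student"]].mean()
print(trend.round(2))
```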

I’m often criticised as being negative. Which is true: I believe all of my ideas have flaws, so imagine what I think of the ideas of others! So, perhaps the paper should include some suggestions:

  • Focus more on contextual factors that are holding back interest in learning and teaching by academics. (See technology gravity)
  • Recognise the instructional technology chasm and take steps to design use of Moodle to engage with the pragmatists.
  • Others??

References

Oliver, M. (2003). Looking backwards, looking forwards: an Overview, some conclusions and an agenda. Learning Technology in Transition: From Individual Enthusiasm to Institutional Implementation. J. K. Seale. Lisse, Netherlands, Swets & Zeitlinger: 147-160.

Measuring the design process – implications for learning design, e-learning and university teaching

I came across the discussion underpinning these thoughts via a post by Scott Stevenson. His post was titled “Measuring the design process”. It is his take on a post titled “Goodbye Google” by Doug Bowman. Bowman was the “Visual Design Lead” at Google and has recently moved to Twitter as Creative Director.

My take on the heart of the discussion is that there is a mismatch between the design and engineering cultures. Design is comfortable relying on experience and intuition as the basis for a decision, while the engineering culture wants everything measured, tested and backed up by data.

In particular, Bowman suggests that the reason for this data-driven reliance is

a company eventually runs out of reasons for design decisions. With every new design decision, critics cry foul. Without conviction, doubt creeps in. Instincts fail. “Is this the right move?”

The doubt, the lack of a reason, purpose, or vision for a change creates a vacuum that needs to be filled. There needs to be some reason to point to for the decision.

When a company is filled with engineers, it turns to engineering to solve problems. Reduce each decision to a simple logic problem. Remove all subjectivity and just look at the data. Data in your favor? Ok, launch it. Data shows negative effects? Back to the drawing board.

There can’t be anything wrong with that, can there? If you’re rational and have data to back you up, then you can’t be blamed. Bowman suggests that there is a problem:

And that data eventually becomes a crutch for every decision, paralyzing the company and preventing it from making any daring design decisions.

He goes on to illustrate the point: the focus shifts to small questions (should a border be 3, 4 or 5 pixels wide?) while the big questions, the important questions that can make a design distinctive, are ignored. This happens because hard problems are hard, and it is almost certainly impossible to gather objective data for them.

Stevenson makes this point

Visual design is often the polar opposite of engineering: trading hard edges for subjective decisions based on gut feelings and personal experiences. It’s messy, unpredictable, and notoriously hard to measure.

Learning design, e-learning and university teaching

This same problem arises in universities around learning design, e-learning and university teaching. The design of university teaching and learning has some strong connections with visual design. It involves subjective and contextual decisions, it’s messy, unpredictable and hard to measure.

The inherently subjective and messy nature of university teaching brings it into direct tension with two increasingly important and powerful cultures within the modern university:

  1. Corporate management; and
    Since sometime in the 90s, at least within Australia, corporate managerialism has been on the rise within universities. Newton (2003) has a nice section on some of the external factors that have contributed to this rise; I’ve summarised Newton here. Further underpinning this rise has been what Birnbaum (2000) calls “education’s Second Management Revolution” from around 1960, which “marks the ascendance of rationality in academic management”.
  2. Information technology.
    With the rise of e-learning and other enterprise systems, the corporate IT culture within universities is increasingly strong, particularly, from my cynical perspective, when it can talk the same “rational” talk as the management culture, back this up with reams of data (regardless of validity), and always resort to techno-babble to confuse management.

Both these cultures put an emphasis on rationality, on having data to support decisions and on being able to quantify things.

Symptoms of this problem

Just taking the last couple of years, I’ve seen the following symptoms of this:

  • The desire to have a fixed, up-front estimate of how long it takes to re-design a course.
    I want you to re-design 4 courses. How long will it take?
  • The attempt to achieve quality through consistency.
    This is such a fundamentally flawed idea, but it is still around, and sometimes it is proposed by people who should know better. The idea that a single course design, Word template or educational theory is suitable for all courses at an institution, let alone all learners, sounds good but doesn’t work.
  • Reports indicating that the re-design and conversion of courses to a new LMS are XX% complete.
    Heard about this just recently. If you are re-designing a raft of different courses, taught by different people, in different disciplines, using different approaches, and then porting them to a new LMS, how can you say it is XX% complete? The variety in courses means you can’t reliably quantify how long the whole job will take. You might have 5 of 10 courses completed, but that doesn’t mean you’re 50% complete; the last 5 courses might take much longer.
  • The use of a checklist to evaluate LMSes.
    This has to be the ultimate: using a checklist to reduce the performance of an LMS to a single number!
  • Designing innovation by going out to ask people what they want.
    For example, let’s go and ask students or staff how they want to use Web 2.0 tools in their learning and teaching. Even that old “fordist”, Henry Ford, the archetypal example of rationalism, knew better than this:

    “If I had asked people what they wanted, they would have said faster horses.”

The scary thing is, because design is messy and hard, the rational folk don’t want to deal with it. Much easier to deal with the data and quantifiable problems.

Of course, the trouble with this is summarised by a sign that used to hang in Einstein’s office at Princeton (apparently)

Not everything that counts can be counted, and not everything that can be counted counts.

Results

This mismatch between rationality and the nature of learning and teaching leads, from my perspective, to most of the problems facing universities around teaching. Task corruption and a reliance on “blame the teacher”/prescription approaches to improving teaching arise from this mismatch.

This mismatch arises, I believe, for much the same reason as Bowman described in his post about Google. The IT and management folk don’t have any convictions or understanding about teaching or, perhaps, about leading academics. Consequently, they fall back onto the age-old (and disproved) management/rational techniques, as they give the appearance of rationality.

References

Birnbaum, R. (2000). Management Fads in Higher Education: Where They Come From, What They Do, Why They Fail. San Francisco, Jossey-Bass.