The VLE model and the wrong level of abstraction

I’m currently trying to formalise the information systems design theory for e-learning that is meant to be the contribution of my thesis, i.e. the model, set of principles etc. that I think are important.

As it happens, I recently came across this post on models around the use of VLEs/LMS from Barking College. It’s part of a discussion involving a group of folk (James Clay, David Sugden, and Louise Jakobsen) talking about models for getting academics to use the complete functionality of the VLE/LMS.

This is interesting for two reasons. First, it helps me see what others are thinking in this sphere. Second, it provides a spark for me to think about my “model”. As an example, there is an interesting point made in the Barking College post that I want to pick up on in the following.

The basic idea is that the functionality of a bog-standard VLE/LMS – like Moodle – embodies the wrong level of abstraction, at least in terms of encouraging and enabling effective use of the VLE/LMS by academics within a university. A traditional VLE model is at a very low level of abstraction, which means lots of flexibility, but also lots of problems. I think there is some value (and also some dangers) in moving the level of abstraction up a few notches.

Moodle document management and the wrong level of abstraction

The post from Barking College makes the point that uploading and maintaining document content on Moodle “is one of the most round-about and time consuming things anyone can do”. I agree. But it doesn’t end there. Even when academics understand and engage in the uploading process, there are more problems.

Then we teach staff to upload files to the VLE … and in no time they have transferred their heap and bad habits onto the file areas of VLE courses. If, in addition to their own unique ways of filing things, they are not the only editing teacher on a course then total chaos is almost guaranteed.

There are some exceptions. There are folk who are well organised and anally retentive enough that they have their files organised into well thought out directory structures with meaningful file names. But most academics don’t. As pointed out, this causes enough problems when it is just the originating academic having to deal with the resulting chaos, but when a course is taught by multiple academics….

This highlights one of the major flaws I see in most VLEs/LMS. They are designed at the wrong level of abstraction. The file upload capability is very basic: it simply provides the ability to manage files. It provides no additional cognitive support for structuring content and certainly none that connects with the act of learning and teaching. The file management capability can’t tell the difference between a course synopsis, a tutorial sheet, a trial exam, a set of solutions or a set of lectures. Most can’t even tell you if it’s a PDF file, a PowerPoint file or a Word document.

This low level of abstraction is necessary to enable the broadest possible flexibility in use. The more abstraction you build into a system, the more specific you make it.

The CQU eStudyGuides

One example of what I mean here is the CQU eStudyGuide Moodle block I played with last year. Some brief context:

  • CQU has a long history of being a print-based distance education provider.
  • A print-based study guide was a standard component of the package of materials traditionally provided to distance education students.
  • The study guide was written by CQU academics and intended to guide the distance education student through the learning materials and activities they were meant to complete.
  • The production of the study guides was a formal process involving a central “desktop publishing unit” which would produce “professional” quality documents that were then printed.
  • The “desktop publishing” group is part of the unit I work with.
  • Back in 2007/2008 the unit modernised the study guide production process so that it improved the quality of the final document and produced both a print and electronic version.

The electronic version was long overdue. For a long time academic staff had wanted to provide electronic versions of the course study guide on the course website. Some had done this via an ad hoc, manual process, but it was time for something better.

Producing the electronic version of the study guide was only the first step. We also had to produce an automated process that would allow academics to place the eStudyGuide on their course website. Requiring academics to do this manually was inefficient and likely to result in less than professional outcomes. This connects with the observation about file management made in the Barking College post.

So, we implemented the automatic generation of an eStudyGuide web page for every course. The following image is one example.

CQU eStudyGuide web page

The eStudyGuide page for a course was produced by a script. The script was aimed at a much higher level of abstraction. The script knew about CQU courses, it knew about the format we used for the eStudyGuide and it was able to use that knowledge to produce the page, e.g. it pulled the title of each part of the eStudyGuide from the guide itself.

The CQU eStudyGuide Moodle block

By 2010 CQU had moved to Moodle as its LMS. As part of learning Moodle development I played with the creation of a Moodle eStudyGuide block: a block that would embody a greater level of knowledge about CQU’s eStudyGuides than a more general Moodle block, consequently significantly simplifying the process of uploading an eStudyGuide to a Moodle course site. The following compares and contrasts the eStudyGuide block approach with the more traditional manual approach.

Using the eStudyGuide block:

  1. Login to Moodle and go to course site.
  2. Turn editing on.
  3. Choose the “eStudyGuide” block from the “Add a block” menu.
  4. Position it where you want.

Using the standard Moodle file upload:

  1. Get a zip file containing the eStudyGuide from somewhere.
  2. Login to Moodle and go to course site.
  3. Go to the “Files” link under administration.
  4. Create a directory for the eStudyGuide.
  5. Upload the zip file containing the eStudyGuide.
  6. Unzip it.
  7. Return to the course site.
  8. Turn editing on.
  9. For each of the chapters (usually 10 to 12) of the eStudyGuide:
    • Manually add a link to the chapter.
    • To make sure each link is meaningful you may have to open the chapter PDF to remember what the title of the chapter was.
  10. Add another link to the PDF containing the entire eStudyGuide.
  11. Add the blurb about how to use PDF files and where to get a copy of Acrobat or other PDF viewer.

The trade-off

There isn’t a perfect solution. Both low and high levels of abstraction involve a tradeoff between different strengths and weaknesses.

A low level of abstraction means the solution is more flexible, can be used in more institutions and for unexpected uses (good and bad). It also means that the user needs to have greater knowledge. If they do, then good things happen. If they don’t, it’s not so good. It also means that the user has to spend a lot more time doing manual activities, which increases the likelihood of human error.

A high level of abstraction, especially one that connects with practices at a specific institution, reduces workload on the part of the users, reduces the chance of errors and perhaps allows users to focus on other, more important tasks. But it also limits portability of the practice, i.e. the CQU eStudyGuide process probably wouldn’t work elsewhere, which means it requires additional resources to implement and maintain.

A greater level of abstraction also removes some flexibility from what the user can do. The simple solution to this is not to mandate the higher level of abstraction. e.g. at CQU we provided the automated eStudyGuide page, but academics weren’t required to use it. They could do their own thing if they wanted to. Most didn’t.

Other examples

Providing a high level of abstraction to the VLE is almost certainly going to be a component of my ISDT, of my model. This is exactly what the default course site approach attempted to do: provide a much higher level of abstraction on top of the VLE.

Into the future, it’s also a key part of what I’m interested in investigating. I think the addition of a “default course site” approach to Moodle, especially one that increases the level of abstraction but at the same time can be used at multiple institutions, is especially interesting.

Webfuse feature adoption – 1997 through 2009

The following presents details of feature adoption by courses using the Webfuse system from 1997 through 2009. It presents the data generated by the work described in the last post.

No real analysis or propositions, mostly just the data. Analysis is the next step.

How features are grouped

To provide some level of Webfuse independence I’ve used the model developed by Malikowski et al (2007) and represented in the following image.

Reworked Malikowski model

Essentially, each Webfuse feature was assigned to one of four categories:

  1. Transmitting content.
  2. Interactions.
  3. Evaluating students.
  4. Evaluating courses.

Feature adoption

The following table shows the results. The adoption rate is shown as a percentage of Webfuse courses that used a particular feature category.

Year # course sites Transmit content Interactions Student eval Course eval
1997 109 34.9 1.8 0.9 9.2
1998 138 38.4 48.6 1.4 0.7
1999 189 46.0 9.0 2.1 9.5
2000 174 46.6 43.7 24.7 6.9
2001 244 51.6 32.4 47.1 28.3
2002 312 69.6 63.8 57.7 44.2
2003 302 69.2 68.5 93.7 37.7
2004 328 61.3 61.9 91.8 35.7
2005 299 64.2 69.2 93.6 39.8
2006 297 70.0 68.7 105.1 31.6
2007 251 68.5 102.0 168.1 33.1
2008 225 72.9 110.7 192.0 51.6
2009 211 69.2 105.7 211.4 42.7

When is a course a Webfuse course?

The most surprising aspect of the previous table is that some of the percentages are greater than 100%. How do you get more than 100% of the Webfuse courses adopting a feature?

Due to the nature of Webfuse, its features, and the political context it is possible that a course could use a Webfuse feature without having a Webfuse course website. Webfuse was never the official institutional LMS, that honour fell upon WebCT and then Blackboard. However, a number of the features provided by Webfuse were still of use to folk using WebCT or Blackboard.

For the purpose of the above, a Webfuse course is a course that has a Webfuse course site.

Discussion

Each of the following sections shows a graph for one feature category and offers some brief notes and initial propositions based on the data.


Transmit content

Adoption of content transmission features Webfuse 1997 through 2009

Background

  • 100% of Webfuse course sites had content transmission.
  • From the 2nd half of 2001 these were automatically created.
  • Feature adoption above only includes courses where teaching academics have placed additional content onto the course site.
  • Main Webfuse features in this category are: course web pages, uploading various files and using the course announcements/update features.

Observations and propositions:

  • The introduction of the automated and expanded default course sites in 2001 appears correlated with an increase in use. However, that could simply reflect broader acceptance of the Web.
  • Even at its most accepted, almost 30% of staff teaching courses did not place additional content on the course site. This could be seen as bad (30% didn’t do anything) or good (30% could focus on other tasks).

Interactions

Adoption of interaction features Webfuse 1997 through 2009

Background:

  • Percentage adoption from 1997 through 1999 is probably higher than shown, as significant numbers of courses used Internet mailing lists. However, records of these aren’t tightly integrated with Webfuse course sites.
  • The 1998 archives have been found, so the table above shows 48.6% of courses having mailing lists.
  • From 2000 onwards Webfuse default sites included a web-based mail archive of mailing lists.
  • Features include: interactive chat rooms, web-based discussion forums, email-merge, BAM and web-based archives of mailing lists.
  • The push over 100% in 2007 onwards comes from a combination of more widespread use of mailing lists/discussion forums in default course sites and broader adoption of the email merge facility by non-Webfuse courses.

Evaluate students

Adoption of student evaluation features Webfuse 1997 through 2009

Background:

  • Main traditional features are online quizzes and online assignment submission and management.
  • Other non-traditional student evaluation features include an academic misconduct application, assignment extension system, informal review of grade system etc.
  • From about 2005 onwards some of the non-traditional features became institutional systems.

Evaluate courses

Adoption of course evaluation features Webfuse 1997 through 2009

Background:

  • In 2000 and before, the primary evaluation features were web-based forms with a bit of course barometer usage.
  • Post 2001, the course barometer became standard in all Webfuse courses.
  • But not all courses have contributions. The percentages only include a barometer feature if someone has posted a comment to it. If measured simply as having a course barometer, the figure would be 100% from 2001 through 2004/5.
  • The spike in usage in 2008 comes from a small institutional project using the barometer in non-Webfuse courses.
  • A similar spike in 2002 comes from active encouragement of the barometer idea.

Examining feature adoption – slightly better approach

I’m in the throes of finalising the second last bit of data analysis for the thesis. For this I’m trying to examine the level of feature adoption within courses supported by the Webfuse system (the main product for the thesis). The following describes an attempt to formalise the process for this evaluation.

This has been under construction for almost a week. It’s now complete. The following just documents what was done; probably not all that interesting. I’ll present some results in the next post.

The main outcome of the below is that I now have a database that abstracts Webfuse feature adoption data from 1997 through 2009.

Rationale

There are three reasons to do this:

  1. Improve the process;
    So far, I’ve been doing this with a collection of UNIX scripts and commands, text files and the odd bit of a database. It works, but is not pretty.
  2. Record what I do; and
    I need to document what I’m doing so that I can re-create/check it later on. I could do this in a Word document but this way I can share what I’m doing.
  3. Move the Indicators project along a bit.
    Given contextual reasons, not sure how much further the project might go, but this might help a little.

The problem

The aim is to understand what features of an e-learning system are being used, i.e. how many courses are using the discussion forum, the quiz system etc. The aim is not just to understand this in the context of a single term, single institution or a single e-learning system. The idea is to examine feature adoption across systems, time and institutions in order to see if there are interesting patterns that need further investigation. This is the underlying aim of the Indicators project and, more immediately important for me, what I have to do for my thesis around the Webfuse system.

So, I need to gather all the information about Webfuse feature adoption and turn it into a form that can be compared with other systems. I’ve done this before. It was first blogged about and then became part of an ASCILITE paper (Beer et al., 2009).

But since that work, I’ve gotten some additional Webfuse data and also had the opportunity to revisit the design and implementation of Webfuse through writing this second last chapter. I’ve also come up with a slightly different way to interpret the data. This means I need to revisit this usage data with some new insights.

One of the problems is that the original calculations in the ASCILITE paper did not draw on the full set of Webfuse features that fit into the Malikowski et al (2007) categories (represented in the diagram below). I need to add a bit more in and that means trawling a range of data sources. I need to have this done through a single script.

Reworked Malikowski model

In some ways, this need to have a “single script” encapsulates a key component of what the Indicators project needs: an LMS-independent computer representation of feature adoption in e-learning systems. A representation that can be queried and analysed quickly and easily.

What follows is my first attempt. I believe I’ll learn just by doing this. Hopefully, this means that when/if the indicators project does this in anger, it will be better informed.

The plan

I’m essentially going to create a couple of very simple database tables:

  • courses: period, year, course, lms
    Which courses were offered in which period by which LMS. I’m using a very CQU centric period/year combination as I’m not going to waste my time and cognition establishing some sort of general schema. That’s for the next step, if it ever comes. I want to solve my problem first.
  • feature_adoption: period, year, course, category, feature
    Which features (in terms of specific feature and the Malikowski feature category) have been used in which courses.

It’s neither pretty, complex nor technically correct (from a relational database design perspective), but it should be simple and effective for my needs.

To populate this set of tables I am going to write a collection of scripts that parse various databases, course website archives and Apache system logs to generate feature adoption figures.

Once populated, I should be able to write other scripts to generate graphs, CSV files and various forms of analysis to suit my purpose or that of others.

The rest of this post documents the implementation of these plans.

Create the tables

courses

Simple (simplistic?) and straightforward. I’ve used an enum for the lms column and included the LMS I’m likely to deal with at my current institution.

create table courses
(
  period char(2) not null,
  year int(4) not null,
  course varchar(10) not null,
  lms enum ( 'webfuse', 'blackboard', 'moodle' ),
  index offering (period,year,course)
)

feature_adoption

Another simple one; the category values “match” the five categories proposed by Malikowski et al (2007).

create table feature_adoption
(
  period char(2) not null,
  year int(4) not null,
  course varchar(10) not null,
  category enum ( 'transmitContent', 'interactions', 'evaluateStudents', 'evaluateCourses', 'cbi' ) not null,
  feature varchar(20) not null,
  index offering (period,year,course)
)

Fill the database tables

With the database tables in place it is now time to fill them with data representing feature adoption within courses using Webfuse. I’m going to do this via Perl scripts: Webfuse is written in Perl, so I’m very comfortable with it, and Webfuse provides some classes that will help make this process a bit simpler. I’m going to work through this Malikowski category by category, leaving out the computer-based instruction category as Webfuse never provided this feature. But first, I have to populate the courses table.
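
As a rough indication of the shape these scripts take, here’s a minimal sketch of the shared scaffolding, assuming a local MySQL database holding the two tables above. The database name, credentials, course code and helper name are made up for illustration; the real scripts also lean on the Webfuse classes.

#!/usr/bin/perl
# Minimal sketch of the scaffolding shared by the feature adoption scripts.
# The database name, credentials and the example course code are hypothetical.
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect( "DBI:mysql:database=webfuse_adoption;host=localhost",
                        "someuser", "somepassword", { RaiseError => 1 } );

# record that a course offering made use of a particular feature
sub record_feature {
    my ( $period, $year, $course, $category, $feature ) = @_;
    my $sth = $dbh->prepare(
        "insert into feature_adoption ( period, year, course, category, feature )
         values ( ?, ?, ?, ?, ? )" );
    $sth->execute( $period, $year, $course, $category, $feature );
}

# e.g. a hypothetical course that used a discussion forum in term 1, 2005
record_feature( "T1", 2005, "COIT11133", "interactions", "WebBBS" );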

courses

There is an existing database table that tracks Webfuse course sites, however, I’m going to use a slightly different track based on some existing text files I’ve generated for earlier analysis. These text files contain lists of Webfuse course sites per year. I’m simply going to use vi to turn them into SQL commands and insert them into the database. This took three commands in vi and is working.

Done.

Interactions

For each of the following sections, the process is:

  • Identify the features within Webfuse that fit this category.
    Webfuse provides features via two means: page types and Wf applications. Calculating usage of each is somewhat different, so they’ll need to be considered separately.
    • Page Types – WebBBS, YaBB, Etutes, CourseGroup, EwgieChat, WWWBoard. DONE
      For some years, splitting page types into categories has already been done. Just a matter of doing the vi translation into SQL.
    • Wf applications – EmailMerge : DONE
      The problem with email merge is that while it generally originates from a specific course, it is implemented via a list of student ids. This makes it hard to associate EmailMerge usage with a course. An attempt to find a solution to this is described below.
  • Identify percentage adoption of these features per year.
  • Stick it in the database

The attempt to associate use of EmailMerge with a course used the following steps:

  • Look at referer in Apache log
    This gives a range of courses that have used email merge. So, some data could be retrieved. There’s also mention of JOBID – i.e. mail merge stores information about jobs in a database table.
  • Look at email merge database tables;
    One has the username of the staff member associated with the job and the date created. This could be used to extract the course, but a bit ad hoc.

The solution is to parse out the referers that mention course/period/year and convert them into SQL for insertion. This should capture some of the uses of EmailMerge, but won’t get them all.
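
As an illustration, here is a minimal sketch of that parsing, assuming the Apache combined log format and a hypothetical referer query string that carries the course, period and year. The actual EmailMerge URLs, log locations and database details will differ.

#!/usr/bin/perl
# Sketch: extract courses that used EmailMerge from Apache log referers.
# Assumes the combined log format and a hypothetical referer query string
# like ...?course=COIT11133&period=T1&year=2007 -- the real URLs will differ.
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect( "DBI:mysql:database=webfuse_adoption;host=localhost",
                        "someuser", "somepassword", { RaiseError => 1 } );
my $sth = $dbh->prepare(
    "insert into feature_adoption ( period, year, course, category, feature )
     values ( ?, ?, ?, 'interactions', 'EmailMerge' )" );

my %seen;    # record each course offering only once
while ( my $line = <> ) {
    # the referer is the second last quoted field in the combined log format
    next unless $line =~ /"([^"]*)" "[^"]*"\s*$/;
    my $referer = $1;
    next unless $referer =~ /course=(\w+).*period=(\w\d).*year=(\d{4})/;
    my ( $course, $period, $year ) = ( $1, $2, $3 );
    next if $seen{"$course $period $year"}++;
    $sth->execute( $period, $year, $course );
}

Run over the log files (e.g. ./emailmerge_referers.pl access_log*), this only captures referers that happen to mention the course, which is consistent with not getting them all.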

Evaluating students

  • page types DONE
    All done using the
  • Wf applications DONE
    Need to write a script to extract info from various databases and update the stats. The additional Wf applications are:
    • BAM – EAST:BAM_CONFIGURE
    • IROG – DATA:IROG
    • AES – EAST:REQUEST
    • AMD – Plagiarism:PlagCase

Evaluating courses

  • page types: UnitFeedback, FormMail and Survey
  • Wf applications: Barometer

Transmitting content

This category is a bit more difficult. All Webfuse course sites transmit content; there’s a basic level incorporated into all sites. What I need to calculate here is the percentage of courses that have additional content, beyond the default, added by the teaching staff. Evidence of this could include use of the following by teaching staff (not support staff):

  • course updates DONE
    This generates an RSS file, which I think is mostly put into the CONTENT file of the course site. Each element has a dc:creator tag with the name of the user.

    One approach would be to find all updates content files, grep for creator (including course/period), and remove creators that are support staff (a rough sketch of this follows the list). From 2002, this is done in a separate RSS file, but all good.

  • fm DONE
    This is recorded in the apache logs. Will need to parse that.
  • page update
    Again, parsing of apache log files
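
Here is a minimal sketch of the course updates approach mentioned above, assuming a hypothetical directory layout (year/period/course) for the course site archives and an illustrative list of support staff accounts; none of these names are the real Webfuse values.

#!/usr/bin/perl
# Sketch: record courses where someone other than support staff has posted
# a course update. The directory layout and the support staff accounts are
# illustrative guesses, not the real Webfuse values.
use strict;
use warnings;
use File::Find;
use DBI;

my %support_staff = map { $_ => 1 } qw( webmaster support );   # hypothetical
my $dbh = DBI->connect( "DBI:mysql:database=webfuse_adoption;host=localhost",
                        "someuser", "somepassword", { RaiseError => 1 } );
my $sth = $dbh->prepare(
    "insert into feature_adoption ( period, year, course, category, feature )
     values ( ?, ?, ?, 'transmitContent', 'Updates' )" );

my %seen;
find( sub {
    return unless /\.rss$/;
    # assume archives are laid out as .../YYYY/PP/COURSE/...
    return unless $File::Find::name =~ m{/(\d{4})/(\w\d)/(\w+)/};
    my ( $year, $period, $course ) = ( $1, $2, $3 );
    return if $seen{"$course $period $year"};

    open my $fh, '<', $_ or return;
    local $/;                        # slurp the whole RSS file
    my $rss = <$fh>;
    close $fh;

    # count the course if any dc:creator is not a support staff account
    while ( $rss =~ m{<dc:creator>([^<]+)</dc:creator>}g ) {
        next if $support_staff{ $1 };
        $seen{"$course $period $year"}++;
        $sth->execute( $period, $year, $course );
        last;
    }
}, '/path/to/course/site/archives' );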

Calculating adoption

Once the data is in the database, the next step is to calculate the adoption rate, which is essentially (a rough sketch of the calculation follows the list):

  • Get the total # of courses in a year.
  • For each Malikowski category
    • calculate percentage of courses adopting features in the category
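
As a rough sketch of that calculation, assuming the two tables above have been populated, something like the following would spit out a CSV of adoption percentages per category per year (database details are as hypothetical as in the earlier sketches):

#!/usr/bin/perl
# Sketch: adoption rate per Malikowski category per year, as a percentage
# of the Webfuse course sites recorded in the courses table for that year.
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect( "DBI:mysql:database=webfuse_adoption;host=localhost",
                        "someuser", "somepassword", { RaiseError => 1 } );

# total number of Webfuse course sites per year
my %total;
my $totals = $dbh->selectall_arrayref(
    "select year, count(*) from courses where lms = 'webfuse' group by year" );
$total{ $_->[0] } = $_->[1] for @$totals;

# distinct course offerings per year using at least one feature in each category
my $counts = $dbh->selectall_arrayref(
    "select year, category, count(distinct course, period)
     from feature_adoption group by year, category" );

# note: percentages can exceed 100 because non-Webfuse courses also used features
print "year,category,adoption\n";
for my $row ( @$counts ) {
    my ( $year, $category, $count ) = @$row;
    next unless $total{$year};
    printf "%d,%s,%.1f\n", $year, $category, 100 * $count / $total{$year};
}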

I’ll show the results in the next post.

References

Beer, C., Jones, D., & Clark, K. (2009). The indicators project identifying effective learning: adoption, activity, grades and external factors. Proceedings of ASCILITE 2009, Auckland, NZ.

Malikowski, S., Thompson, M., et al. (2007). A model for research into course management systems: bridging technology and learning theory. Journal of Educational Computing Research, 36(2), 149-173.