Another perspective for the indicators project

The indicators project is seeking to mine data in the system logs of a learning management system (LMS) in order to generate useful information. One of the major problems the project faces is how to turn the mountains of data into something useful. This post outlines another potential track, based on some findings from Lee et al (2007).

The abstract from Lee et al (2007) includes the following summary

Sample data were collected online from 3713 students….The proposed model was supported by the empirical data, and the findings revealed that factors influencing learner satisfaction toward e-learning were, from greatest to least effect, organisation and clarity of digital content, breadth of digital content’s coverage, learner control, instructor rapport, enthusiasm, perceived learning value and group interaction.

Emphasis on learner satisfaction???

This research seeks to establish factors that impact on learner satisfaction: not the actual quality of the learning itself, but how satisfied students are with it. For some folk, this emphasis on student satisfaction is not necessarily a good thing and at best is only a small part of the equation, mainly because it’s possible for students to be really happy with a course but to have learnt absolutely nothing from it.

However, given that most evaluation of learning at individual Australian universities, and within the entire sector, relies almost entirely on “smile sheets” (i.e. low-level surveys that measure student satisfaction), an emphasis on improving student satisfaction may well be a pragmatically effective pastime.

How might it be done?

The following uses essentially the same process used in a previous post that described another method for informing the indicators project’s use of the mountains of data. At least that suggested approach had a bit more of an emphasis on the quality of learning.

The process is basically:

  • Identify a framework that claims to illustrate some causality between staff/institutional actions and good outcomes.
  • Identify the individual factors.
  • Identify data mining that can help test the presence or absence of those factors.
  • Make the results available to folk.

In this case, the framework is the empirical testing performed by the authors to identify factors that contribute to increased student satisfaction with e-learning. The individual factors they’ve identified are:

  • organisation and clarity of digital content;
  • breadth of digital content’s coverage;
  • learner control;
  • instructor rapport;
  • enthusiasm;
  • perceived learning value; and
  • group interaction.

Now some of these can’t be tested for by the indicators project. But some can. For example,

  • Organisation of digital content
    Course content is usually put into a hierarchical structure (weeks/modules and then resources). Is the hierarchy balanced?
  • Breadth of content coverage
    In my experience, it’s not unusual for the amount of content to reduce significantly as the term progresses. If breadth is more even and complete, does that lead to greater student satisfaction? (A rough sketch of this check appears below.)
  • Group interaction
    Participation in discussion forums.
  • Instructor rapport
    Participation in discussion forums and presence in the online course.
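
The following is a minimal sketch of how two of these might be checked, assuming the relevant data (content items per week/module, discussion forum posts with the poster’s role) can already be extracted from the LMS database. The data structures and field names are hypothetical, not part of the indicators project’s current code.

```python
from statistics import mean, pstdev

def content_balance(items_per_week):
    """Rough 'evenness' of content across weeks: values near 1.0 mean the
    content is spread evenly, values near 0 mean it is bunched into a few
    weeks. items_per_week is a hypothetical {week: item_count} mapping."""
    counts = list(items_per_week.values())
    if not counts or mean(counts) == 0:
        return 0.0
    cv = pstdev(counts) / mean(counts)   # coefficient of variation
    return max(0.0, 1.0 - cv)

def instructor_forum_share(posts):
    """Proportion of forum posts made by teaching staff -- a crude proxy for
    instructor rapport/presence. posts is a hypothetical list of dicts."""
    if not posts:
        return 0.0
    staff_posts = sum(1 for p in posts if p["role"] == "staff")
    return staff_posts / len(posts)

# Made-up example: content tails off badly after week 3.
weeks = {1: 12, 2: 10, 3: 9, 4: 3, 5: 2, 6: 1}
print(round(content_balance(weeks), 2))          # well below 1.0, i.e. uneven

posts = [{"role": "staff"}, {"role": "student"}, {"role": "student"}]
print(round(instructor_forum_share(posts), 2))   # 0.33
```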

Questions

I wonder if the perception of there being a lot of course content for the entire course is sufficient. Are students happy enough that the material is there? Does whether or not they use it become academic?

References

Lee, Y., Tseng, S., et al. (2007). “Antecedents of Learner Satisfaction toward E-learning.” Journal of American Academy of Business 11(2): 161-168.

Cooked course feeds – An approach to bringing the PLEs@CQUni, BAM and Indicators projects together?

The following is floating an idea that might be useful in my local context.

The idea

The idea is to implement a “cooked feed” for a CQUniversity course: an RSS or OPML feed that students, staff or both can subscribe to in order to receive a range of automated information about their course. Since some of this information would be private to the course or the individuals involved, the feed would be password protected and could differ depending on the identity of the person pulling it.

For example, a student of the course would receive generic information about the course (e.g. any recent posts to the discussion forums, details of resources uploaded to the course site) as well as information specific to them (e.g. that their assignment has been marked, or that someone has responded to one of their discussion posts). A staff member could receive similar generic and specific information. Since CQU courses are often offered across multiple campuses, staff and student information could be specific to a campus or to particular sets of students (e.g. a tutor would receive regular updates on their students: have they logged into the course site, etc.).

A staff member might get a set of feeds like this:

  1. Student progress – perhaps containing a collection of feeds. One might be a summary feed that summarises progress (or the lack thereof) for all students, and then there might be one feed per student.
  2. Course site – provides posts related to the course website. For example, posts to discussion forums, usage statistics of resources and features etc.
  3. Tasks and events – updates of when assignments are due, when assignments are meant to be marked, when results need to be uploaded. These updates would not only contain information about what needs to be done, but also provide links and advice about how to perform them.

The “cooked” adjective suggests that the feeds are not simply raw data from the original sources, but that they undergo additional preparation to increase the value of the information they contain. For example, rather than a single post simply listing the students who have (or have not) visited a course site, the post might contain each student’s GPA for previous courses, some indication of how long into a term they normally access a course site, when they added the course (in both date and week-of-term format, i.e. week 2 of term), links back to institutional information systems to see photos and other details of the students, links to an email merge facility to send a private/bulk email to all students in a particular category, a list of which staff are responsible for which students, etc.
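
A minimal sketch of what that cooking step might look like follows, assuming the raw LMS event and the institutional student record are already available as dictionaries. All of the field names (gpa, enrol_week, tutor, profile_url, etc.) are hypothetical placeholders for whatever the institutional systems actually provide.

```python
from xml.sax.saxutils import escape

def cook_item(event, student):
    """Turn a raw 'student has not visited the course site' event into a
    feed item enriched with institutional context."""
    description = (
        f"{student['name']} has not yet visited the {event['course']} site. "
        f"GPA for previous courses: {student['gpa']}. "
        f"Added the course in week {student['enrol_week']} of term. "
        f"Responsible tutor: {student['tutor']}."
    )
    return {
        "title": f"No site visit yet: {student['name']} ({event['course']})",
        "link": student["profile_url"],   # link back into institutional systems
        "description": description,
    }

def as_rss_item(item):
    """Render one cooked item as an RSS <item> element."""
    return (
        "<item>"
        f"<title>{escape(item['title'])}</title>"
        f"<link>{escape(item['link'])}</link>"
        f"<description>{escape(item['description'])}</description>"
        "</item>"
    )
```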

The point is that the “cooking” turns generic LMS information into information that is meaningful for the institution, the course, the staff, and the students. It is this contextual information that will almost always be missing from generic systems, simply because they have to be generic and each institution is going to be different.

Why?

The PLEs@CQUni project already has a couple of related sub-projects doing work in this area – discussion forums and BAM.

Discussion forums. The slideshow below explains how staff can currently access RSS feeds generated from the discussion forums of CQU’s current implementation of Blackboard version 6.3. A similar feature has already been developed for the discussion forum used in the other “LMS” being used at CQU.

The above slideshow uses the idea of the “come to me” web. This meme encompasses one reason why doing this might be a good thing: it saves time and it makes information more visible to staff and students. Information they can draw upon to decide what to do next. Information in a form that allows them to re-purpose and reuse it for tasks that make sense to them, but would never be apparent to a central designer.

BAM. The Blog Aggregation Management (BAM) project now generates an OPML feed unique for each individual staff member to track their students’ blog posts. The slidecast below outlines how they can use it.

The indicators project is seeking to mine usage logs of the LMS to generate information that is useful to staff. I think there is value in this project looking at generating RSS feeds for staff based on the information it generates. The reason why depends on the difference between lag and lead indicators.

I’ve always thought that too much of the data generated at universities consists of lag indicators: indicators that tell you how well or how badly things went. For example, “oh dear, course X had an 80% failure rate”. While having this information is useful, it arrives too late to do anything. You can’t (well, you shouldn’t be able to) change the failure rate after it has happened.

Much more useful are lead indicators: indicators that offer some insight into what is likely to happen. For example, “oh dear, the students all failed that pop quiz about topic X”. If you have some indication that something is starting to go wrong, you may be able to do something about it.

Aside: of course this brings up the problematic way most courses are designed, especially the assessment. They are designed in ways such that there are almost no lead indicators. The staff have no real insight into how the students are going until they hand in an assignment or take an exam, by which time it is too late to do anything.

Having the indicators project generate RSS posts summarising important lead indicators for a course might encourage and help academics to take action to prevent problems developing into outright failure.

This is also encompassed in the idea of BAM generating feeds and the very idea of BAM in the first place. It allows staff to see which students are or are not progressing (lead indicator) and then take action they deem appropriate.

It’s also a part of the ideas behind reflective alignment. That post also has some suggestions about how to implement this sort of thing.

Another spectrum for using indicators to place course websites

This post adds another perspective borrowed from Gonzalez (2009) as a framework to report or evaluate findings from Col and Ken’s indicators project. Col added an update on his work recently. Like a previous post, this one borrows a table of dimensions around conceptions of online learning because it may be helpful.

First the table and then how it might be used.

Dimensions

Dimensions delimiting approaches to online teaching (Gonzalez, 2009: p. 311)

  • Intensity of use
    Informative/individual learning focused: small range of media and tools used to support learning tasks and activities (mainly sources of information, with limited opportunities for interaction and communication).
    Communicative/networked learning focused: wide range of media and tools used to support learning tasks and activities (with an emphasis on interaction and communication).
  • Resources
    Informative/individual learning focused: web pages with information, lecture notes, links to websites.
    Communicative/networked learning focused: web pages with information, lecture notes, links to websites, discussion boards, chat, blogs, spaces for sharing, animations, videos, still images.
  • Role of the lecturer
    Informative/individual learning focused: select and present information.
    Communicative/networked learning focused: design spaces for sharing and communication; support the process.
  • Role of the students
    Informative/individual learning focused: individually study the information provided.
    Communicative/networked learning focused: participate in a process of knowledge building.

How might it be used

The above dimensions could be used to develop “analysis routines” that would place courses within these dimensions. Some potential approaches:

  • Variety and use of tools and media within a course site (Intensity of use and Resources)
    Group the different tools available in the course management system into different types, e.g. those used for information distribution and those used for interaction/communication. Count the number of different types of tools present in a course site and the level of usage. (A rough sketch of one such routine appears after this list.)

    The difficulty here is the increasing use of non-CMS based tools for communication. For example, I know of an increasing number of staff and students who are using external tools such as Messenger to work around the limitations of CMS services.

  • Measure student and staff activity (Role of the lecturer/students)
    I believe Blackboard, the main CMS at our institution, tracks the activity of each course site participant in some detail. If the type of activity can be categorised into groups (e.g. adding information to the site, using information on the site, posting to a discussion forum, responding to a post in a discussion forum, etc.) then an analysis could be run against the activity of all participants. This would identify the type of role the main groups are taking on.
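
The following is a rough sketch of one way such an analysis routine might work, assuming each LMS log entry records the tool used and the user’s role. The tool names and the two categories are placeholders for whatever grouping the project eventually settles on.

```python
# Hypothetical grouping of CMS tools into the two ends of the dimensions.
INFORMATIVE = {"content_page", "file", "external_link"}
COMMUNICATIVE = {"discussion_forum", "chat", "blog", "wiki"}

def classify_course_activity(log_entries):
    """Count hits on informative vs communicative tools, split by role."""
    counts = {
        "informative": {"staff": 0, "student": 0},
        "communicative": {"staff": 0, "student": 0},
    }
    for entry in log_entries:
        if entry["tool"] in INFORMATIVE:
            counts["informative"][entry["role"]] += 1
        elif entry["tool"] in COMMUNICATIVE:
            counts["communicative"][entry["role"]] += 1
    return counts

# A course dominated by student hits on content pages would sit towards the
# informative/individual end; heavy forum/wiki/blog activity from both roles
# suggests the communicative/networked end.
```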

What’s the value of this?

I can hear some thinking, “so what!”. What is the value of this sort of thing? A couple of thoughts.

  • As a framework to help make sense of the data.
    From my perspective it appears that the project is “drowning in data” and could use some sort of reviewed framework with which to organise or structure their investigations. These dimensions might provide it.
  • Enable institutions to get a handle on what is happening.
    Most of it ain’t great. The combination of the dimensions and the data potentially enables institutions, which are spending a lot of money on course management systems, to improve their awareness of what is actually happening. At the very least, some indication of where the online course sites within the institution sit within the dimensions, as imperfect as it will be, might start some conversations about online practice that are actually somewhat informed by the reality of what is going on.
  • As a demonstration of building on the work of others.
    It is possible to argue with the value/validity of the knowledge generated by Gonzalez (2009) – but then it’s possible to argue against the validity of just about any knowledge generated by any research project, depending on your perspective. However, this work is in a fairly prestigious journal, so it comes with a certain stamp of approval. This will help Col and Ken.
  • Perfect opportunity for a publication.
    Building on the last point, complementing the qualitative nature of Gonzalez (2009) with some more quantitative measures from a broad collection of students and courses sounds like a pretty good publication opportunity (or three).

It’s the potential for discussion within the organisation that is, I believe, potentially the most beneficial for the most people.

The potential for publication is probably the most interesting to the project participants and frankly by far the easiest.

Further publications

The publication idea would be strengthened if previous work in this area (e.g. the recent ALTC Learning and teaching performance indicators project report) either doesn’t do something like this or uses a different set of dimensions.

In addition, Gonzalez interviewed only 7 academics within a single discipline at a single institution. Chances are the results and dimensions identified in the paper are going to exhibit some sort of limitation, potentially caused by the nature of the context. Using a different approach in a different context will at least complement/reinforce the findings and potentially identify additional dimensions.

References

Gonzalez, C. (2009). “Conceptions of, and approaches to, teaching online: a study of lecturers teaching postgraduate distance courses.” Higher Education 57(3): 299-314.

Dimensions delimiting conceptions of online teaching – something to guide the indicators and the evaluation of LMS data?

Col Beer has been doing some work around the “indicators” project – an attempt to mine system logs and databases of a course management system (CMS) to generate data of some use.

One of the (many) potential problems with the work, and work of its like, has been attempting to generate some sort of understanding of how you can rank or categorise the type of learning or activity taking place on the CMS.

In the following I wonder if the work on teachers’ conceptions of teaching, particularly that associated with online teaching (e.g. Gonzalez, 2009) might provide a useful solution to this problem.

Research on teachers’ conceptions of teaching

There is a large amount of research, quite a research tradition, around understanding the different conceptions of teaching (and subsequently learning) that academics bring to their experience. Much of this work believes that the quality of student learning is directly influenced and constrained by the conceptions of teaching held by teaching staff. (Following from this is the idea that to improve the quality of student learning you have to target teachers’ conceptions of teaching, but that is another story.)

Teachers’ conceptions of online teaching

Gonzalez (2009) extends the work on teachers’ conceptions of teaching to the online environment. One of the contributions of this work is a set of “dimensions delimiting conceptions of online teaching”. The following table is adapted from Gonzalez (2009) and represents these dimensions. I wonder if these dimensions could be used to guide the indicators project? More on this below.

Dimensions delimiting conceptions of online teaching (Gonzalez, 2009: p. 310)

The three conceptions, from “lower” to “higher”, are: (1) the web for individual access to learning materials and information, and for individual assessment; (2) the web for learning-related communication (asynchronous and/or synchronous); and (3) the web as a medium for networked learning.

  • Teacher: (1) provides structured information / directs students to selected websites; (2) sets up spaces for discussion / facilitates dialogue; (3) sets up spaces for communication, discussion and knowledge building / facilitates and guides the process.
  • Students: (1) individually study the materials provided; (2) participate in online discussions; (3) share and build knowledge.
  • Content: (1) provided by the lecturer; (2) provided by the lecturer, but students can modify and extend it through online discussions; (3) built by students using the space set up by the lecturer.
  • Knowledge: (1) owned by the lecturer; (2) discovered by students within the lecturer’s framework; (3) built by students.

The benefit this provides is an existing framework, with some basis in research about what staff already do, to guide the design of statistics/indicators to be drawn from system logs and databases: statistics that could indicate the conception of online teaching being used by the academics. This could be useful to identify “good” staff using more advanced pedagogy, identify the more traditional ones, use this insight to guide training and interventions, and perhaps form part of a research project establishing connections between the conceptions identified from the system logs and student outcomes in terms of final results.

For example, some potential indicators (a rough sketch of how these might be combined appears after the list):

  • A course where all content is provided by the academics indicates that the staff member is at the “lower” end.
  • The use of tools such as wikis and blogs (tools that encourage contributions from students), where those tools are actively used by students, indicates a staff member/course at the “higher” end.
  • A course site where the site framework is put in place by the academic and can’t be modified by students indicates the low end.
  • A large amount of discussion from students with low levels of interaction indicates someone in the middle; high levels of interaction indicate someone at the higher end.
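
The following is a very rough sketch of how a few of these might be combined to place a course against the three conceptions, assuming the counts have already been mined from the system logs. The field names and thresholds are invented for illustration only.

```python
def conception_of_course(stats):
    """stats: hypothetical counts mined from the logs for one course."""
    student_content = stats["student_wiki_edits"] + stats["student_blog_posts"]
    posts = stats["forum_posts"]
    replies = stats["forum_replies"]
    interaction = replies / posts if posts else 0.0

    if student_content > 0 and interaction > 0.5:
        # students building content plus high interaction
        return "web as a medium for networked learning"
    if posts > 0:
        # discussion is happening, but little co-construction of content
        return "web for learning related communication"
    # content is provided by the lecturer and consumed individually
    return "web for individual access to materials"
```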

Implications and questions

There are probably many more implications than the simple ones outlined below, but it is getting late.

  • There is mention of the role context plays in limiting or influencing teachers’ conceptions (and thus the quality of student learning). Should the nature and affordances of the available technology play a similar role?
    • Do the affordances of a CMS actively get in the way of teachers being able to adopt, or even be aware of, the “networked learning” (the “good”) approach?
    • Do the affordances of a PLE type approach actively encourage a more “networked learning” approach?
  • Can this work help expand/enhance the evaluation of learning and teaching, which is somewhat limited at most universities?
  • Is there a role in a design theory for e-learning for some of these ideas?

References

Gonzalez, C. (2009). “Conceptions of, and approaches to, teaching online: a study of lecturers teaching postgraduate distance courses.” Higher Education 57(3): 299-314.

Choosing your indicators – why, how and what

The unit I work with is undertaking a project called Blackboard Indicators: essentially the development of a tool that will perform some automated checks on our institution’s Blackboard course sites and show some indicators which might identify potential problems or areas for improvement.

The current status is that we’re starting to develop a slightly better idea of what people are currently doing through use of the literature and also some professional networks (e.g. the Australasian Council on Open, Distance and E-learning) and have an initial prototype running.

Our current problem is how do you choose what the indicators should be? What are the types of problems you might see? What is a “good” course website?

Where are we up to?

Our initial development work has focused on three categories of indicator: course content, coordinator presence and all interactions. There is some more detail in this previous post.

Colin Beer has contributed some additional thinking about some potential indicators in a recent post on his blog.

Col and I have talked about using our blogs and other locations to talk through what we’re thinking to develop a concrete record of our thoughts and hopefully generate some interest from other folk.

Col’s list includes

  • Learner.
  • Instructor.
  • Content.
  • Interactions: learner/learner, learner/instructor, learner/content, instructor/content

Why and what?

In identifying a list of indicators, as when trying to evaluate anything, it’s probably a good idea to start with a clear definition of why you are doing it and what you are trying to achieve.

The stated purpose of this project is to help us develop a better understanding of how, and how well, staff are using the Blackboard course sites. In particular, we want to know about any potential problems (e.g. a course site not being available to students) that might cause a large amount of “helpdesk activity”. We would also like to know about trends across the board which might indicate the need for some staff development, improvements in the tools, or some support resources to improve the experience of both staff and students.

There are many other aims which might apply, but this is the one I feel most comfortable with, at the moment.

Some of the other aims include

  • Providing academic staff with a tool that can aid them during course site creation by checking their work and offering guidance on what might be missing.
  • Provide management with a tool to “check on” course sites they are responsible for.
  • Identify correlations between characteristics of a course website and success.

The constraints we need to work within include

  • Little or no resources – implication being that manual, human checking of course sites is not currently a possibility.
  • Difficult organisational context due to on-going restructure – which makes it hard to get engagement from staff in a task that is seen as additional to existing practice and also suggests a need to be helping staff deal with existing problems more so than creating more work. A need to be seen to be working with staff to improve and change, rather than being seen as inflicting change upon them.
  • LMS will be changing – come 2010 we’ll be using a new LMS, whatever we’re doing has to be transportable.

How?

From one perspective there are two types of process which can be used in a project like this

  1. Teleological or idealist.
    A group of experts get together, decide and design what is going to happen and then explain to everyone else why they should use it and seek to maintain obedience to that original design.
  2. Ateleological or naturalist.
    A group of folk, including significant numbers of folk doing real work, collaborate to look at the current state of the local context and undertake a lot of small-scale experiments to figure out what makes sense. They examine and reflect on those small-scale experiments, chuck out the ones that didn’t work, and build on the ones that did.

(For more on this check out: this presentation video or this presentation video or this paper or this one.)

From the biased way I explained the choices I think it’s fairly obvious which approach I prefer. A preference for the ateleological approach also means that I’m not likely to want to spend vast amounts of time evaluating and designing criteria based on my perspectives. It’s more important to get a set of useful indicators up and going, in a form that can be accessed by folk, and to have a range of processes by which discussion and debate are encouraged and then fed back into the improvement of the design.

The on-going discussion about the project is more likely to generate something more useful and contextually important than large up-front analysis.

What next then?

As a first step, we have to get something useful (for both us and others) up and going in a form that is usable and meaningful. We then have to engage with the folk using it, find out what they think and where they’d like to take it next. In parallel with this is the idea of finding out, in more detail, what other institutions are doing and seeing what we can learn.

The engagement is likely going to need to be aimed at a number of different communities including

  • Quality assurance folk: most Australian universities have quality assurance folk charged with helping the university be seen by AUQA as being good.
    This will almost certainly, eventually, require identifying what are effective/good outcomes for a course website as outcomes are a main aim for the next AUQA round.
  • Management folk: the managers/supervisors at CQU who are responsible for the quality of learning and teaching at CQU.
  • Teaching staff: the people responsible for creating these artifacts.
  • Students: for their insights.

Initially, the indicators we develop should match our stated aim – to identify problems with course sites and become more aware of how they are being used. To a large extent this means not worrying about potential indicators of good outcomes and whether or not there is a causal link.

I think we’ll start discussing/describing the indicators we’re using and thinking about on a project page and we’ll see where we go from there.

Getting started on Blackboard indicators

The unit I work for is responsible for providing assistance to CQUniversity staff and students in their use of e-learning, which at CQUni currently means mostly the use of Blackboard.

The current model built into our use of Blackboard is that the academic in charge of the course (or their nominee) is responsible for the design and creation of the course site. In most instances, staff are provided with an empty course site for a new term at which stage they copy over the content from the previous offering, make some modifications and make the site available to students.

Not surprisingly, given the cruftiness of the Blackboard interface, the lack of time many staff have, and a range of other reasons, there are usually some fairly common, recurrent errors: errors which create workload for us when students or staff have problems. In many cases it may even be worse than this, as students become frustrated and don’t even complain; they suffer in agony.

Most of these problems, though not all, are fairly simple mistakes. Things that could be picked up automatically if we had some sort of automated system performing checks on course sites. The fact that Blackboard doesn’t provide this type of functionality says something about the assumptions underlying the design of this version of Blackboard – a very teaching academic focus, not so much on the support side.

Developing this sort of system is what the Blackboard Indicators project is all about. It’s still early days but we’ve made some progress. Two main steps

  • Developed an initial proof of concept.
  • Started a literature, web and colleague search.

Initial proof of concept

We currently have a web application up and running that, given a term, will display a list of all the courses that are meant to have Blackboard course sites and generate a number between 0 and 100 summarising how well a site has met a particular indicator.

Currently, the only indicator working is the “Content Indicator”. This is meant to perform some objective tests on what is broadly defined as the content of the course. Currently this includes the following checks (a rough sketch of the scoring logic appears after the list):

  • Is the course actually available to students?
    The score automatically becomes 0 if it isn’t.
  • Does the site contain a link to the course profile?
    20 is taken off the score if there isn’t one.
  • Is the course profile link for the right term?
    50 is taken off if it’s wrong.
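
A minimal sketch of that scoring logic follows, assuming the individual checks have already been run against the course site and reduced to booleans. The field names are hypothetical; the penalty values are the ones listed above.

```python
def content_indicator(site):
    """Return a 0-100 score for the 'Content Indicator' of one course site."""
    if not site["available_to_students"]:
        return 0                                # unavailable site scores 0 automatically
    score = 100
    if not site["has_course_profile_link"]:
        score -= 20                             # no link to the course profile
    elif not site["profile_link_is_current_term"]:
        score -= 50                             # link points at the wrong term's profile
    return max(score, 0)
```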

At the moment, we’re planning to put in place three indicators, the content indicator plus

  • “Coordinator Presence”
    How present is the coordinator of the course? Have they posted any announcements? Are they reading the discussion forum? Posting to it? What activity have they done on the site in the last two weeks? (A sketch of what such a check might look like appears after this list.)
  • “All interactions”
    What percentage of students and staff are using the site? How often? What are they using?
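
The following is a rough sketch of a “Coordinator Presence” check, assuming the activity log can be filtered by user and date. The field names, action names and the two-week window are assumptions for illustration, not part of the current prototype.

```python
from datetime import datetime, timedelta

def coordinator_presence(log_entries, coordinator_id, now=None):
    """Summarise the coordinator's activity on the course site over the last
    fortnight. log_entries is a hypothetical list of dicts, each with
    user_id, action and timestamp (a datetime) fields."""
    now = now or datetime.now()
    fortnight_ago = now - timedelta(weeks=2)
    recent = [e for e in log_entries
              if e["user_id"] == coordinator_id and e["timestamp"] >= fortnight_ago]
    return {
        "announcements": sum(1 for e in recent if e["action"] == "post_announcement"),
        "forum_posts": sum(1 for e in recent if e["action"] == "post_forum"),
        "forum_reads": sum(1 for e in recent if e["action"] == "read_forum"),
        "any_recent_activity": bool(recent),
    }
```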

It’s still early days and there remain a lot of questions, which we hope will be answered by our searching and some reflection.

Literature, web and colleague search

We’ve started looking in the literature, doing Google searches and asking colleagues what they are doing. We already have some interesting information.

What we do find will be discussed in our blogs, bookmarked on del.icio.us (tag: blackboardIndicators) and talked about on the project page.