Choosing your indicators – why, how and what

The unit I work with is undertaking a project called Blackboard Indicators. Essentially, it involves developing a tool that will perform automated checks on our institution’s Blackboard course sites and display indicators that might identify potential problems or areas for improvement.

The current status is that we’re developing a slightly better idea of what people are currently doing, through the literature and some professional networks (e.g. the Australasian Council on Open, Distance and E-learning), and we have an initial prototype running.

Our current problem is choosing what the indicators should be. What types of problems might you see? What is a “good” course website?

Where are we up to?

Our initial development work has focused on three categories of indicator: course content, coordinator presence and all interactions. There is more detail in this previous post.

Colin Beer has contributed some additional thinking about some potential indicators in a recent post on his blog.

Col and I have agreed to use our blogs and other locations to talk through what we’re thinking, to develop a concrete record of our thoughts and hopefully generate some interest from other folk.

Col’s list includes

  • Learner.
  • Instructor.
  • Content.
  • Interactions: learner/learner, learner/instructor, learner/content, instructor/content

Why and what?

In identifying a list of indicators, as when trying to evaluate anything, it’s probably a good idea to start with a clear definition of why you are doing this and what you are trying to achieve.

The stated purpose of this project is to help us develop a better understanding of how, and how well, staff are using the Blackboard course sites. In particular, we want to know about any potential problems (e.g. a course site not being available to students) that might cause a large amount of “helpdesk activity”. We would also like to know about trends across the board which might indicate the need for some staff development, improvements in the tools or some support resources to improve the experience of both staff and students.

There are many other aims which might apply, but this is the one I feel most comfortable with, at the moment.

Some of the other aims include

  • Providing academic staff with a tool that can aid them during course site creation by checking their work and offering guidance on what might be missing.
  • Providing management with a tool to “check on” course sites they are responsible for.
  • Identifying correlations between characteristics of a course website and success.

The constraints we need to work within include

  • Little or no resources – implication being that manual, human checking of course sites is not currently a possibility.
  • Difficult organisational context due to an on-going restructure – which makes it hard to get engagement from staff in a task seen as additional to existing practice. It also suggests a need to help staff deal with existing problems rather than creating more work, and to be seen to be working with staff to improve and change, rather than inflicting change upon them.
  • LMS will be changing – come 2010 we’ll be using a new LMS, whatever we’re doing has to be transportable.

How?

From one perspective there are two types of process which can be used in a project like this

  1. Teleological or idealist.
    A group of experts get together, decide and design what is going to happen and then explain to everyone else why they should use it and seek to maintain obedience to that original design.
  2. Ateleological or naturalist.
    A group of folk, including significant numbers of folk doing the real work, collaborate to look at the current state of the local context and undertake many small-scale experiments to figure out what makes sense. They examine and reflect on those experiments, chuck out the ones that didn’t work and build on the ones that did.

(For more on this check out: this presentation video or this presentation video or this paper or this one.)

From the biased way I explained the choices, I think it’s fairly obvious which approach I prefer. A preference for the ateleological approach also means I’m not likely to want to spend vast amounts of time evaluating and designing criteria based on my perspectives. It’s more important to get a set of useful indicators up and going, in a form that folk can access, and to have a range of processes by which discussion and debate are encouraged and then fed back into improving the design.

The on-going discussion about the project is more likely to generate something more useful and contextually important than large up-front analysis.

What next then?

As a first step, we have to get something useful (for both us and others) up and going in a form that is usable and meaningful. We then have to engage with people, find out what they think and where they’d like to take it next. In parallel, we want to find out, in more detail, what other institutions are doing and see what we can learn.

The engagement is likely going to need to be aimed at a number of different communities including

  • Quality assurance folk: most Australian universities have quality assurance folk charged with helping the university be seen by AUQA as being good.
    This will almost certainly, eventually, require identifying what are effective/good outcomes for a course website as outcomes are a main aim for the next AUQA round.
  • Management folk: the managers/supervisors at CQU who are responsible for the quality of learning and teaching at CQU.
  • Teaching staff: the people responsible for creating these artifacts.
  • Students: for their insights.

Initially, the indicators we develop should match our stated aim – to identify problems with course sites and become more aware of how they are being used. To a large extent this means not worrying about potential indicators of good outcomes and whether or not there is a causal link.

I think we’ll start discussing/describing the indicators we’re using and thinking about on a project page and we’ll see where we go from there.

Getting started on Blackboard indicators

The unit I work for is responsible for providing assistance to CQUniversity staff and students in their use of e-learning, which at CQUni currently means mostly the use of Blackboard.

The current model built into our use of Blackboard is that the academic in charge of the course (or their nominee) is responsible for the design and creation of the course site. In most instances, staff are provided with an empty course site for a new term at which stage they copy over the content from the previous offering, make some modifications and make the site available to students.

Not surprisingly, given the cruftiness of the Blackboard interface, the lack of time many staff have and a range of other reasons, there are usually some fairly common, recurrent errors. Errors which create workload for us when students or staff have problems. In many cases it may be even worse than this: students become frustrated, don’t complain at all, and simply suffer in silence.

Most of these problems, though not all, are fairly simple mistakes. Things that could be picked up automatically if we had some sort of automated system performing checks on course sites. The fact that Blackboard doesn’t provide this type of functionality says something about the assumptions underlying the design of this version of Blackboard – a strong focus on the teaching academic, not so much on the support side.

Developing this sort of system is what the Blackboard Indicators project is all about. It’s still early days but we’ve made some progress. Two main steps so far:

  • Developed an initial proof of concept.
  • Started a literature, web and colleague search.

Initial proof of concept

We currently have a web application up and running that, given a term, will display a list of all the courses that are meant to have Blackboard course sites and generate a number between 0 and 100 summarising how well each site has met a particular indicator.

Currently, the only indicator working is the “Content Indicator”. This is meant to perform some objective tests on what is broadly defined as the content of the course. Currently this includes

  • Is the course actually available to students?
    The score automatically becomes 0 if it isn’t.
  • Does the site contain a link to the course profile?
    20 is taken off the score if there isn’t one.
  • Is the course profile link for the right term?
    50 is taken off if it’s wrong.
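The deduction logic above can be sketched as a simple scoring function. This is a hypothetical sketch only: the `CourseSite` fields are illustrative assumptions, not the actual Blackboard data model the prototype queries.

```python
# Hypothetical sketch of the content indicator scoring described above.
# The CourseSite fields are assumptions, not the real Blackboard schema.

from dataclasses import dataclass


@dataclass
class CourseSite:
    available: bool          # is the site available to students?
    has_profile_link: bool   # does it link to the course profile?
    profile_term: str        # term the linked course profile belongs to


def content_indicator(site: CourseSite, current_term: str) -> int:
    """Return a 0-100 score for the content indicator."""
    if not site.available:
        return 0  # unavailable sites score 0 automatically
    score = 100
    if not site.has_profile_link:
        score -= 20  # no course profile link at all
    elif site.profile_term != current_term:
        score -= 50  # profile link points at the wrong term
    return score
```

For example, an available site whose profile link points at the wrong term would score 50, while a site hidden from students scores 0 regardless of its content.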

At the moment, we’re planning to put in place three indicators, the content indicator plus

  • “Coordinator Presence”
    How present is the coordinator of the course? Have they posted any announcements? Are they reading the discussion forum? Posting to it? What activity have they done on the site in the last two weeks?
  • “All interactions”
    What percentage of students and staff are using the site? How often? What are they using?
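The two planned indicators above could take a similar 0-100 shape. The functions below are a speculative sketch of how they might be computed; the activity counts, field names and weightings are all assumptions for illustration, not decisions the project has made.

```python
# Speculative sketch of the two planned indicators. The inputs and
# weightings are illustrative assumptions, not the project's design.

def coordinator_presence(activity: dict) -> int:
    """Score coordinator presence out of 100 from simple activity counts."""
    score = 0
    if activity.get("announcements", 0) > 0:
        score += 40  # has posted announcements
    if activity.get("forum_posts", 0) > 0:
        score += 30  # has posted to the discussion forum
    if activity.get("recent_actions", 0) > 0:
        score += 30  # any activity on the site in the last two weeks
    return score


def all_interactions(users_active: int, users_total: int) -> int:
    """Score overall use as the percentage of staff and students using the site."""
    if users_total == 0:
        return 0
    return round(100 * users_active / users_total)
```

Keeping every indicator on the same 0-100 scale means the web application can list them side by side and compare sites without any per-indicator special cases.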

It’s still early days and there remain a lot of questions, which we hope will be answered by our searching and some reflection.

Literature, web and colleague search

We’ve started looking in the literature, doing Google searches and asking colleagues what they are doing. We have some interesting information already.

What we do find will be discussed in our blogs, bookmarked on del.icio.us (tag: blackboardIndicators) and talked about on the project page.

Alternate foundations – the presentation

A previous post outlined the abstract for a presentation I gave last Monday on some alternate foundations for leadership of learning and teaching at CQUniversity. Well, I’ve finally got the video and slides online, so this post reflects on the presentation and gives access to the multimedia resources.

Reflection

It seemed to go over well but there’s significant room for improvement.

The basketball video worked well this time, mainly because the introduction was much better handled.

What was missing

  • Didn’t make the distinction between safe-fail and fail-safe projects.
  • Not enough time on implications, strategies and approaches to work with this alternate foundation.
  • The description of the different parts of the Cynefin Framework was not good.

The second point about strategies of working within this area is important as the thinking outlined in the presentation is hopefully going to inform the PLEs@CQUni project.

The resources

The video of the presentation is on Google Video

The slides are on Slideshare

Some alternate foundations for leadership in L&T at CQUniversity

On Monday the 25th of August I am meant to be giving a talk that attempts to link complexity theory (and related topics) to the practice of leadership of learning and teaching within a university setting. The talk is part of a broader seminar series occurring this year at CQUniversity as part of the institution’s learning and teaching seminars. The leadership in L&T series is being pushed/encouraged by Dr Peter Reaburn.

This, and perhaps a couple of other blog posts, is meant to be a part of a small experiment in the use of social software. The abstract of the talk that goes out to CQUniversity staff will mention this blog post and some related del.icio.us bookmarks. I actually don’t expect it to work all that well as I don’t have the energy to do the necessary preparations.

Enough guff, what follows is the current abstract that will get sent out.

Title

Some alternate foundations for leadership in L&T at CQUniversity

Abstract

Over recent years an increasing interest in improving the quality of university learning and teaching has driven a number of projects such as the ALTC, LTPF and AUQA. One of the more recent areas of interest has been the question of learning and teaching leaders. In 2006 and 2007 ALTC funded 20 projects worth about $3.4M around leadership in learning and teaching. Locally, there has been a series of CQUniversity L&T seminars focusing on the question of leadership in L&T.

This presentation arises from a long-term sense of disquiet about the foundations of much of this work, an on-going attempt to identify the source of this disquiet and find alternate, hopefully better, foundations. The presentation will attempt to illustrate the disquiet and explain how insights from a number of sources (see some references below) might help provide alternate foundations. It will briefly discuss the implications these alternate foundations may have for the practice of L&T at CQUniversity.

This presentation is very much a work in progress and is aimed at generating an on-going discussion about this topic and its application at CQUniversity. Some parts of that discussion and gathering of related resources is already occurring online at
http://cq-pan.cqu.edu.au/david-jones/blog/?p=202
feel free to join in.

References and Resources

Snowden, D. and M. Boone (2007). A leader’s framework for decision making. Harvard Business Review 85(11): 68-76

Lakomski, G. (2005). Managing without Leadership: Towards a Theory of Organizational Functioning, Elsevier Science.

Davis, B. and D. Sumara (2006). Complexity and education: Inquiries into learning, teaching, and research. Mahwah, New Jersey, Lawrence Erlbaum Associates

PLE drivers being considered in the corporate IT world?

The PLEs@CQUni project is being driven, in part, by a range of external factors around the practices, availability and affordances of information technology, especially those associated with Web 2.0 and social software. We’ll be looking at what this means for the use of educational technology within universities, not to mention the practice of learning and teaching.

Obviously these same drivers are going to have some interesting implications for the broader problem of how IT is supported within organisations. There’s sure to be much work in this area, and it will be important to keep an eye on what that work finds and subsequently work out what it means for the PLEs project.

Susan Scrupski drops a few hints about some research she is involved with and provides a link to a video commentary from one of the US news/business programs talking about one aspect of the problem. The problem talked about in this commentary is one that also faces how universities practice e-learning.

Of course, there will be some argument about all this being just the latest fad being beaten up by various academics and commercial consulting firms. But I’m not sure that this is a fad, it seems to be a key shift in IT and one that will need to be addressed. Accompanied, of course, by a good dose of cynicism.

For example, I’m not sure that the broad generalisation of “younger workers” used in the video is broadly true. Do all young workers really want that? Do all CQU students really want that?

The real question becomes: how do you address this problem and provide the type of resources that younger workers expect within the constraints of existing organisations, especially in terms of resources? For example, at my institution there seems to be growing concern about the cost of Internet usage. Usage is growing, it is costing more, and this at a time when minimising cost is important.

Creating a voice thread presentation

The following is step 2 in getting organised for a trial of VoiceThread as part of the PLEs@CQUni project. The background was given in a previous post.

This post tries to summarise what’s been found about creating a presentation in VoiceThread. It’s more a work in progress and a way of saving what I’m finding than anything of particular use for anyone else.

It appears that a presentation (i.e. like Powerpoint) might be one approach to support the development of an online research poster. Much of this information is taken from the VoiceThread Tutorials example on VoiceThread Presentations.

The example is a nice one; it includes the talking head and some doodling.

It’s not that hard a process; apparently it includes the following steps

  • Create the presentation file.
    VoiceThread suggest that the presentation file should be PDF and 1024×768 or larger with a 4:3 aspect ratio. Powerpoint, at least on the Mac, has it as an export option.
  • Upload it to VoiceThread.
    This seems a fairly simple process, need to set the options. The 1 minute VoiceThread tutorial gives a good introduction.
  • Set some options – collaboration.
    Has the ability to invite people, make it public, moderate comments and include the VoiceThread in the public list. Will need to include this in a screencast.

  • Record the narration.
    Using VoiceThread’s five methods of recording.

Commenting on a VoiceThread appears to require a VoiceThread account. This will be a bit of a limitation when it comes to having visitors to the research poster session comment on the posters.

Interestingly, Slideshare does offer a “guest comment” facility that requires you to read a captcha, rather than log in.