Automating a SET leaderboard

End of semester 1 is fast approaching. One of the end of semester tasks is encouraging students to complete the institutional Student Evaluation of Teaching (SET) surveys. Last year I experimented with a “SET leaderboard” (see the following image). It’s a table that lists the response rates on the SET surveys for the current and previous offerings of the course, ranked by percentage response rate.

SET leaderboard

At my institution the SET surveys open quite a few weeks before the end of semester and remain open until just before the release of final results. While the surveys are open, teaching staff cannot see any of the responses. However, we can see the number of responses (Update: at least until this year, when they introduced a new system that appeared to remove this functionality. Update on the update: Nope, looks like the new system does support it, PEBKAC). This was how I was able to update the leaderboard for the current offering every couple of days. The leaderboard was visible to students whenever they visited the course website, making the current response rate visible.

As the above image shows, it seems to have been a fairly successful approach.

Automating the process

That success has generated some interest from others in replicating this approach. The problem is that doing so requires some familiarity with HTML and tables, and the standard GUI HTML editors don’t do a great job of supporting the re-ordering of table rows. To help others adopt this practice, and also to reduce my load a bit, the following explores if and how the process can be slightly automated.

The plan is to write some Javascript that can be included in a Web page that will automatically generate a table like the above (with the correctly ordered rows) based on data from a Google spreadsheet.

The implication being that all I (or some other academic) needs to do is to keep the Google spreadsheet updated and the script will take care of the rest. It also means that I can create multiple copies of the leaderboard in different locations, but only need to modify the data in one source.
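At its core the script only has to do one thing with the spreadsheet data: order the offerings by response rate. A minimal sketch of that logic (the function and field names here are my own for illustration, not part of any library) might look like:

```javascript
// Rank course offerings by SET response rate (highest first).
// Each offering: { year, responses, totalEnrolment, current }
function rankOfferings(offerings) {
  return offerings
    .map(function (o) {
      // Percentage response rate, rounded to one decimal place
      var percentage = Math.round((o.responses / o.totalEnrolment) * 1000) / 10;
      return { year: o.year, responses: o.responses,
               percentage: percentage, current: o.current };
    })
    .sort(function (a, b) { return b.percentage - a.percentage; })
    .map(function (o, i) { o.rank = i + 1; return o; });
}

var ranked = rankOfferings([
  { year: 2013, responses: 30, totalEnrolment: 100, current: false },
  { year: 2015, responses: 60, totalEnrolment: 120, current: true },
  { year: 2014, responses: 20, totalEnrolment: 80, current: false }
]);
```

In practice the percentage calculation lives in the spreadsheet itself and the ordering in the Google query, so the script never has to do this by hand, but the sketch shows what is being automated.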

In a perfect world, the institutional SET survey system would have an API that could be used to extract the data, removing the need for the academic to manually copy the response rate to a Google spreadsheet.

Of course, after implementing all of the work below, it appears that the new institutional system for administering the SET surveys has removed the functionality that allowed course examiners to see how many students had responded so far.

Reading data from a Google spreadsheet

There are a variety of ways this can be done; the Sheetrock library looks like a useful approach.

Got the sample working locally, then connected it to my spreadsheet. The spreadsheet needs to be made public, and the script needs to use the spreadsheet’s own URL, not the “shareable link”, for others to be able to view it.

The neat thing is that the Google query language allows the data to be pre-ordered. So the table is automatically in the right order.

Make it look pretty

Next step is to style it, in particular to highlight the current offering.

Applying the existing styles is a first step, then highlighting the current year. All good.

Make it real

Now update it with the data that I’ll be using this term, and use the Google spreadsheet to auto-calculate the percentage. Simple; the hardest part of this was the manual process for gathering the data to put in the spreadsheet.

Time to test it in Moodle. All seems to work. Here’s a version within a Book chapter. This should work on any web page. Each time the page below is reloaded the Javascript will update the table based on the latest data in the Google spreadsheet.  The current semester is always highlighted, but it will move up and down the ranking based on its response rate.


Automated leaderboard

What’s required

To get this to work, you need to have

  1. A Google spreadsheet that has been made public.
    This allows anyone (including the script) to read the contents, but they can’t change anything.  The spreadsheet should have a row for each offering of the course with the following columns:

    1. Year – of offering
    2. Responses – number of responses
    3. Percentage – the % of total enrolment that has responded
      I’ve implemented this using a spreadsheet formula using an extra column Total Enrolment.
    4. Current – a yes should go in the row that matches the current year.
  2. A link to a modified version of the sheetrock library.
    The modification is a function that generates the table.
  3. A table element that has the id SETleaderboard and has four columns: Rank, Year, Responses, Percentage
  4. The following javascript
    (Which is still a little rough)


var mySpreadsheet = 'some_url_here';
$('#SETleaderboard').sheetrock({
  url: mySpreadsheet,
  query: "select A,B,C,D order by C desc",
  callback: myCallback
});
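The myCallback function is the “modification” mentioned above and isn’t reproduced here. A rough sketch of what such a callback might do (assuming it is handed rows of cell values in the column order Year, Responses, Percentage, Current — an assumption on my part, not Sheetrock’s documented behaviour) is to build the table body as HTML, highlighting the current offering:

```javascript
// Build the <tbody> HTML for the leaderboard from spreadsheet rows.
// rows: arrays of [year, responses, percentage, current], already
// ordered by the "order by C desc" clause in the query.
function buildLeaderboardRows(rows) {
  return rows.map(function (cells, i) {
    // Highlight the current offering with a CSS class
    var cls = cells[3] === 'yes' ? ' class="currentOffering"' : '';
    return '<tr' + cls + '><td>' + (i + 1) + '</td><td>' + cells[0] +
           '</td><td>' + cells[1] + '</td><td>' + cells[2] + '%</td></tr>';
  }).join('\n');
}

var html = buildLeaderboardRows([
  ['2015', '60', '50', 'yes'],
  ['2013', '30', '30', '']
]);
```

In the real script this string would then be inserted into the body of the table with id SETleaderboard.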

Building a CASA for student evaluation of teaching results

I have a problem with my Student Evaluation of Teaching (SET) data!

No. It’s not that the results are terrible. Some are good, some not so much. (see the two images in this post)

Student comments - EDC3100

The problem is that I (and every other academic at my institution) is unable to get access to the data in a form that we can analyse. For example, back in early 2014 I manually extracted the free text comments from the SET data and analysed them using NVIVO to produce the graph to the right. Click on it to see a larger version. Yea, manually.

The following documents the development of what might be called a kludge or a work around to this problem. Though being an academic I prefer to define and use my own term of Context Appropriate Scaffolding Assemblage (CASA). Expect to hear a bit more about that.

The aim is to produce a bit of technology that I can slot into my context that will scaffold my ability to perform a required task in an appropriate way. Rather than the current situation, where performing the task requires jumping through stupid, unnecessary, manual hoops. Not to mention an organisational structure that over many years has been unable to see the need, let alone do something about it.

The following outlines (briefly) the process used to create a Greasemonkey script that, when I visit a web page containing SET data for my courses, automatically converts that data into a CSV file that I can download. From there I can import that data into whichever analysis tool I deem appropriate.

Given all this data has to be stored in a database, it would appear incredibly straightforward for the institution to have already done this. Especially given the emphasis being placed on teaching staff being seen to do something with student feedback. But apparently it’s not that simple.

Perhaps this is where I get into trouble for breaking some policy, protocol, or expectation.

Current situation

The institutional SET system produces a collection of web pages for each offering of a course. Different student cohorts get different pages.

The institutional survey consists of a combination of Likert-type scale questions and free-text questions. In addition, each of the Likert-type scale questions include the option for students to add free-text comments.

The display of the Likert-type scale questions is either in tables or bar graphs, including a comparison against school and university averages. The free-text questions are grouped by question and simply listed. Comments added to the Likert-type scale questions are displayed along with the student’s response to that Likert question.

The problems that arise include:

  • Combining, comparing and analysing data between cohorts is difficult.
  • Analysing relationships between the responses to different questions is impossible.
  • Passing any of the data – especially the free-text comments – into other systems (e.g. Leximancer, NVIVO etc) for further analysis is next to impossible.

Ideas for CASA

  1. Greasemonkey script to parse the web page
  2. Publish to a Google spreadsheet using ideas such as (this or this)
    Could use the name of the course in the web page to add to a different sheet. The spreadsheet could become a single place with all the data.
  3. Perl scripts etc could pull the data from there

A potential idea here for Google spreadsheets to become a broad

Structure of the data

The system provides a number of different views. I’m going to focus on the “print view” which produces a web page that contains all of the information in one page.

The data on that page includes

  • Comparative means;
    Table with various stats from Likert style questions (# ans, response rate, std dev, % positive) and the average for each question for the class, course, school, faculty, campus, and USQ.
  • Frequency of responses;
    For each of the 5 possible responses to a Likert style question the count and percentage when that response was chosen.
  • Free text responses
    For each question (text of question is a heading) where the student could provide a free text response, a list of all the free text responses, including the comment, and if the comment is associated with a Likert style question the response the student chose.

Time to convert that into the HTML elements used.

Comparative means

The table doesn’t have an id.  It’s the first table with the class reportDataTables. The table consists of rows alternately of class reportRow or reportRowAlt. Each row has the following cells

  1. Question id (in a span) and question text
  2. Number of answers
  3. Response rate
  4. Class average
  5. Course average
  6. School average
  7. Faculty average
  8. Campus average
  9. USQ average
  10. Std Dev
  11. % positive

Frequency of responses

The second table with class reportDataTables. Same basic structure. The cells on each row are

  1. Question id and text
  2. Number of answers
  3. Response rate
  4. Number of “1” responses
  5. Percentage of “1” responses
  6. Number of “2” responses
  7. Percentage of “2” responses
  8. Number of “3” responses
  9. Percentage of “3” responses
  10. Number of “4” responses
  11. Percentage of “4” responses
  12. Number of “5” responses
  13. Percentage of “5” responses
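Once the cells of a row have been pulled out of the DOM, mapping them onto this structure is mechanical. A rough sketch for a frequency-of-responses row (the function and field names are mine, invented for illustration):

```javascript
// Convert the 13 cells of a "Frequency of responses" row into an object.
// cells: array of cell text in the column order listed above.
function parseFrequencyRow(cells) {
  var row = { question: cells[0], answers: Number(cells[1]),
              responseRate: cells[2], frequencies: {} };
  // Cells 3..12 alternate: count then percentage, for responses "1".."5"
  for (var r = 1; r <= 5; r++) {
    row.frequencies[r] = { count: Number(cells[1 + 2 * r]),
                           percent: cells[2 + 2 * r] };
  }
  return row;
}

// Illustrative data only, not real survey results
var row = parseFrequencyRow([
  'SEC05 I found the assessment in this course reasonable', '47', '39%',
  '1', '2%', '2', '4%', '5', '11%', '15', '32%', '24', '51%'
]);
```

The comparative means table would get a near-identical treatment with its own 11-cell layout.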

Free text responses

These are contained within a div with id commentCont, which contains a sequence of divs

  1. class reportCommentsQuestionTitle contains the question title
  2. a follow-on div with no class, just a style setting padding-right to 10px, that contains an unordered list where each element has
    1. The text of the student comment (including their response if associated with a Likert style question)
    2. A bit of javascript that allows the display of all of the student’s other responses.
      In theory, this could be used to generate each individual student’s complete survey response.

When the user clicks on the “bit of javascript” some additional content gets added.

Actually it appears that there are a collection of divs (hidden) with ids of the format singleStudentComments7 where 7 seems to be a unique id.  This gives access to all the comments from that student.

Of course, it’s not unique.  With 47 responses there are actually 80+ singleStudentComments# divs. Going to need to filter.
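Filtering those duplicates doesn’t need anything clever. Since the extra divs repeat the same students’ comments, one sketch (assuming the text content of each singleStudentComments# div has been collected into an array — my assumption about where the duplication lies) is to deduplicate by content:

```javascript
// Keep only the first occurrence of each student's set of comments.
// texts: array of the text content of each singleStudentComments# div.
function uniqueComments(texts) {
  var seen = {};
  return texts.filter(function (t) {
    if (seen[t]) { return false; }  // duplicate content, skip it
    seen[t] = true;
    return true;
  });
}

var unique = uniqueComments(['Great course', 'Too much work', 'Great course']);
```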

Extract the data and share

At this stage, I could quite easily write a Perl script that would extract the data. The problem is that I couldn’t easily share that particular CASA (kludge). The aim here is to put in a bit of extra work and develop a CASA that others could (fairly easily) adopt. So Greasemonkey it is.

Using Greasemonkey to extract the data is fairly simple (once I refreshed my memory). Doing something with the data is a little more difficult, but there appear to be solutions such as this one that will allow a Greasemonkey script to generate a text file to download.
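Generating the CSV text itself is straightforward. A sketch of the escaping needed (quoting cells that contain commas, quotes, or newlines, per the usual CSV convention) before handing the text over for download:

```javascript
// Turn an array of cell values into one properly escaped CSV line.
function toCsvLine(cells) {
  return cells.map(function (cell) {
    var s = String(cell);
    // Quote the cell if it contains a comma, quote, or newline,
    // doubling any embedded quotes per the CSV convention
    if (/[",\n]/.test(s)) {
      return '"' + s.replace(/"/g, '""') + '"';
    }
    return s;
  }).join(',');
}

// Illustrative values only
var line = toCsvLine(['SEC05', 47, 'Liked the course, mostly', 'said "great"']);
```

Each parsed row from the tables and comment divs gets run through something like this before the lines are joined and downloaded.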

Use the data

A text file is being produced that contains three sets of data in CSV format. The intent is that this is a simple default format that people can re-purpose into other systems for further analysis.

Time to test it by importing into Excel.  Fix up the delimiting characters and replace some others.

Frequency of responses

And hey presto, it works. The graph to the right is the simplest example of finally being able to analyse this data directly. Of course, for the Likert style questions I still don’t have access to the raw data. But at the very least I can start comparing summary data from different modes and offerings of the same course. More interestingly, I can now easily get access to the student responses to the free-text questions.

But that’s a task for another day. (FYI: SEC05 is the question “I found the assessment in this course reasonable”)

“Me as teacher” – the 2015 focus for NGL

NGL is a course I teach. Participants are asked to spend the semester engaging with networked and global learning as: student, learner, and teacher. They are asked to blog and reflect on this mishmash of experiences throughout the semester. I’m trying to do this as well. Not only to model one version of expectations, but also because I find it a valuable learning process myself. I did “me as learner” last week, this week it’s time for “me as teacher”.

as “meta-“teacher

Last year I wrote this “as teacher” post and it had a more traditional focus: I was thinking about me as the teacher of a couple of formal courses. The directions in NGL try to expand the understanding of “teaching” beyond formal learning to include “as you doing something that helps others learn”. For the rest of this year I’d like to push the boundaries a bit. I’d like to go meta in terms of teaching.

One of my key positions at the moment is that university e-learning is a “bit like teenage sex”. It’s quite horrendous and isn’t dealing well with some difficult problems. Yet another restructure, or a focus on quality standards is not going to help! There’s something more fundamental here. I want to explore what that might be.

In particular, I want to help universities learn how to do e-learning better.

I use that “learn how to” for two purposes. First, to indicate that they need to get better. Second, and much more importantly, to indicate that the way to get better is to treat the problem as a learning problem. The solution isn’t to analyse the situation and identify the solution (and then implement it). The solution is to recognise that the only way to improve the quality of e-learning is to approach it as a never-ending learning problem. The organisation as a whole always needs to be learning. Trying new things, failing, getting better, finding what works, changing it etc.

Me “as teacher” is actually me as meta-teacher.

Back to the questions asked of NGL participants.

What is your role as a teacher? Who are your students? What is the context?

Let’s keep the context narrow and say within my current institution. The “students” are essentially anyone involved with e-learning at the institution. I’m not formally teaching any of them, but perhaps as I engage within the network of the institution there will be some learning.

Taking a very network centered perspective on learning, my “role as teacher” is to help make connections. Borrowing from the “distributed view” the idea is that

the world is complex, dynamic, and consists of interdependent assemblages of diverse actors (human and not) connected via complex networks.

The mindset underpinning university e-learning is SET in its ways. The question is how to change that?

What role does NGL currently play? How can it help?

My argument is that due to the SET mindset, NGL doesn’t play much of a role at all in university e-learning. While network technology is used increasingly within university e-learning, the practices, conceptions, and processes around it are still largely industrial and completely inappropriate. All the attempts to improve e-learning are trying to do so within the confines of this inappropriate mindset.

I think that really grokking a NGL mindset promises to improve the ability of universities to learn how to do e-learning. Largely because it’s a more appropriate model of the learning that needs to take place.

I’m particularly interested in how the idea of Context-Appropriate Scaffolding Assemblages (CASA) might be implemented and subsequently make it easier for universities to learn how to do e-learning in much more interesting and effective ways.

What difficulties might arise?

The largest barrier is that this requires a mind-shift. A major mind-shift. Modern organisations (at least those that still haven’t figured it out) like universities are built on a different mindset. A NGL mindset is radically different. Different mindsets have been a barrier to reform – especially around computing – before. Peddling a different mindset is a good way to set yourself up as “strange”.

Then there’s the question of workload. The selfish reason I’m interested in this problem is because I suffer from it. Too much make-work, not enough capacity to engage in meaningful learning.