Female/male participation rates in IT: an example of what’s easy to log?

There’s an identified problem with learning analytics captured by the quote from Buckingham Shum in the image below (the image is from a presentation I gave yesterday on learning analytics, so my head is in that space). In the case of learning analytics, the data that is easy to capture is generally mouse clicks: the number of times a learner clicks on a website. Computer systems log this information almost by default. It’s the data that exists, so it is what learning analytics analyses. It’s the data which defines the way people think about the problem.

Easy to log

This morning two different resources came across my PLN about the question of female participation in IT-related courses. First was this research project into female participation in ICT in schools. Second was this analysis of female participation in IT and Engineering degrees at university.

This is an important issue, both in general and for me. This semester I’m somewhat involved with a course that is helping Secondary Computing pre-service teachers develop their discipline specific knowledge and identity. Interestingly, at least 60% (n is quite small) of the students in the course are female.

I did some initial exploration of enrolments in IPT (a Queensland senior secondary ICT course) a few years ago. The following graph is one artefact from that. It shows IPT enrolments as a percentage of all OP students (those who will be eligible to apply directly for university enrolment after Year 12). It shows that the percentage of female OP students taking IPT never got above 10%. It also shows that as of 2010, less than 20% of male OP students enrolled in IPT.

Percentage of gender enrolments

Gender is something that appears in most enrolment databases and in most surveys. It’s data that is available. I wonder if there is some “factor Y” (or multiple factors) that isn’t in the data that’s already been gathered?

I imagine the research in this field has already done some thinking about this.

Over the last couple of years (in particular, but not only) parts of the broader computing community/culture haven’t exactly covered themselves in glory when it comes to questions of gender. As a long-term male part of that community/culture I don’t wish this question to be seen as suggesting moving focus away from the question of female participation in computing. There are obviously some important questions to be explored.


Buckingham Shum, S. (2012). Learning Analytics. Moscow. Retrieved from http://iite.unesco.org/pics/publications/en/files/3214711.pdf

Does branding the LMS hurt learning

The LMS used by my institution is Moodle, but the institution has “branded” it as the “Study Desk”. Students and teachers talk about finding X on the “Study Desk”; they don’t talk about finding X on Moodle. The following suggests that this branding of the LMS may actually hurt learning.

Update: Via Twitter, @georgekroner mentioned his post that has some stats on how institutions are branding their LMS.

Google the name (information literacy?)

The biggest course I teach is aimed at helping pre-service teachers develop knowledge and skills around using digital technology to enhance and transform their students’ learning. Early on in the course a primary goal is to help the students develop the skill/literacy to solve their own digital technology problems. The idea is that we can’t train them on all the technologies they might come across (give them fish), we can only help them learn new technologies and solve their own problems (teach them how to fish).

A key part of that process is the “Tech support cheat sheet” from XKCD. A cheat sheet that summarises what “computer experts” tend to do. One of the key steps is

Google the name of the program plus a few words related to what you want to do. Follow any instructions.

How do you “Google the name of the program” if the institution has branded the LMS?

Does branding the LMS mean that students and teachers don’t know “the name of the program”?

Does this prevent them from following the tech cheat sheet?

What impact does this have on their learning?

A brief investigation

Early in the year I noticed that a few students were having problems with “Google the name”, so I set an optional activity that asked them to create a “technology record”, i.e. a record of the names of all the technology they are using. The idea is that having a record of the technology names can help with solving problems. I included in that “technology record” a requirement that they specify the name of the software that provides the “Study Desk”.

There were 40 (out of ~300) responses, including

  • 10 that identified uconnect, the institutional portal;
  • 8 that weren’t sure;
  • 8 that didn’t provide an answer for the Study Desk question;
  • 4 that identified their web browser;
  • 4 that firmly identified Moodle;
  • 3 that identified Moodle but weren’t sure;
  • 2 answered with the URL – http://usqstudydesk.usq.edu.au;

Fewer than 20% of the respondents (7 of the 40) were able to identify Moodle.

These are 3rd-year students. Almost all will have completed at least 16 courses using Moodle. These are students completing an optional activity, perhaps indicating a slightly greater motivation to do well/learn. A quick check reveals that most of these students have a GPA above 5.

They still don’t know the name of the LMS.

I wonder how many teaching staff know the name of the LMS?

Does this hurt learning?

Perhaps if everything works with the LMS then this doesn’t create any problem. But if the students wish to engage with social and information networks beyond the institution, they don’t know the common name for the object they want to talk about. That has to hurt learning.

I imagine that there are librarians and others who can point to research showing that not knowing the correct search term hurts search.

What do you think? Does branding the LMS hurt learning?

Re-using a #moodle course design

This semester I’m course examiner for a new course on Secondary Computing Curriculum and Pedagogy. As the name suggests, the course is intended to help pre-service teachers who are aiming to teach computing in Secondary schools. While I’m the course examiner, the course is being developed and will be largely taught by a couple of practicing and experienced Secondary computing teachers (how’s that for “recency of practice”?).

Two weeks before semester start my institution opens up the course sites for students to become familiar with what’s on offer. There are some minimum expected standards to meet. My task today is to meet those standards and in doing so set up the skeleton of the course site for the rest of the semester. To do this I’m going to reuse the structure from EDC3100, perhaps with a few tweaks. Besides saving me some time, four of the five students currently enrolled in the course have done EDC3100.

This is also a bit of an exploration of the difference between an empty Moodle course site (even one with a standard look and feel) and one with a structure.

What I’ll need to do

Can I list all I need to do?

  • Structure of the site
  • Study Schedule
  • Assessment
  • Teaching team
  • The initial orientation message.


The basic structure is going to match the EDC3100 template: a collection of topics tied directly to each week of semester, with a “jump to” bar at the top of the course site. There will also be a collection of “administrative” topics.

The following image shows the top of the 2012 version of the EDC3100 site. In 2015 the institution has adopted a default course structure that does away with the need for the “Course Content” and “Course Background” boxes.


The one question about this approach is that EDC3100 has quite a bit of content in each week. I’m not sure that EDS4406 will have the same quantity, hence the separate topics for each week may be a bit of overkill.

As it stands, each topic does have a formal title, meaning it’s probably valuable to make use of the macro facility I’m using in EDC3100.

The process for setting this up includes

  1. Copy the “course format” used in EDC3100.
    “Weekly format” with 12 sections: 10 teaching weeks plus an orientation and a “resource” week.
  2. Update the names of each section/topic/week of the course.
    Using the macro facility, the names entered into Moodle are of the form {{{W0_TITLE}}} ({{{W0_PERIOD}}}). A bit of JavaScript will replace the “variables” with appropriate values.
  3. Put in place the Javascript file to do the translation.
    I’ll create a new one for this course: copy the EDC3100 js file across and update the values, week titles first. That’s all done. The problem with week numbers changing because of holidays reared its head again.
  4. Stick in the “jump to”
    Oh, that was nice. Copy the HTML from EDC3100, replace the course id, paste it into the EDS4406 site, and hey presto it all works. Even the tooltips get updated with the new topic names.
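The macro idea above can be sketched roughly as follows. This is a minimal, hypothetical reconstruction, not the actual EDC3100 script: the table of values, the placeholder values, and the function name are all assumptions, and the real script would apply the replacement to the section headings in the page rather than to a bare string.

```javascript
// Hypothetical sketch of the macro facility: a per-course lookup table of
// values and a function that expands {{{NAME}}} placeholders in a string.
// The actual EDC3100 script runs in the browser over the page's headings;
// all names and values here are illustrative assumptions.
const MACROS = {
  W0_TITLE: "Orientation",   // assumed week title
  W0_PERIOD: "Jul 6-10",     // assumed teaching period
};

function expandMacros(text, macros) {
  // Replace each {{{NAME}}} with its value; leave unknown names untouched.
  return text.replace(/\{\{\{(\w+)\}\}\}/g, (match, name) =>
    Object.prototype.hasOwnProperty.call(macros, name) ? macros[name] : match
  );
}

console.log(expandMacros("{{{W0_TITLE}}} ({{{W0_PERIOD}}})", MACROS));
// → Orientation (Jul 6-10)
```

Keeping all the course-specific values in one js file per course is what makes the copy-across-and-update step above so quick: the section names in Moodle never change, only the lookup table does.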

“Administrative” content – Study schedule, assessment, and teaching team

These three sections are important but don’t form part of the students’ weekly learning activities. The institutional default course structure provides default tools for displaying this information, but IMHO they aren’t as useful as using a Moodle book to provide course-specific information.

At this stage the detailed information for these sections isn’t yet written, so I’ll just be putting in the initial skeleton. That involves

  1. Creating a Moodle book resource for each section.
  2. Updating the js file to point the default course structure links to these books.
  3. Putting some basic information into the books.

Again the macro system is nice. Copy and paste some HTML from the EDC3100 book that is using the macro approach, link in the EDS4406 js file, and the content automagically updates to EDS4406 information.

Orientation message

This will need to be a little more than a message. Will work on that tomorrow and update this.

Can the Moodle book module be made open and other enhancements

Next week I’m off to Melbourne for Moodlemoot’AU 2015. This post will hold the details and resources associated with my talk about how the Moodle Book module might be enhanced, including by making it more open. It’s the first public talk about the USQ-funded Moodle book project.

There’s a Moodle site associated with the Moot presentation.



Since 2012 the course EDC3100, ICT and Pedagogy at the University of Southern Queensland has been taken by over 1500 pre-service teachers learning how they can use digital technologies to enhance their own teaching. In that time, the course has evolved from a fairly traditional “textbook, lecture, and 2 tutes” type of online course into an online course reliant on Open Educational Resources (OERs) and individual student blogs. Core to that evolution has been the Moodle book module. Each week’s “learning path” consists of a range of activities and resources with the Book module providing the scaffolding. In the most recent offering, the course included at least 73 Moodle books with 670 chapters. The success of this approach is evident in student evaluations of the course, with comments such as

Learning paths were great. They were informative and interactive

The most helpful aspect of this course was learning paths.
It was clear what to do and how to do it.

This experience has shown that the Book module can be an integral part of an effective course design. However, it has also revealed a number of areas where the Book module could be usefully enhanced. The aim of this session is to start an ongoing conversation about what enhancements could be made to the Moodle Book module and how those might best be made.

As a spark for that conversation the session will outline a range of possible enhancements derived from the experience in EDC3100; discussion with other users of the Book module; and, an analysis of discussions on the Moodle tracker and forums. In particular, the session will outline possible enhancements to the Book module that would provide the option for content managed by the Book module to be transformed into Open Educational Resources (OERs).


Much of this work is being done as part of the Moodle “open” book project funded by a grant from USQ’s Open Textbook Initiative. The project aims to enable Moodle books to be leveraged as OERs and implement more general enhancements to the module. The aim is to ensure that any and all work done is contributed back to the Moodle community.

Initial pre- and post- session discussions will take place in a site associated with the Moot before migrating to a more appropriate location.

Re-building results processing

It’s once again time to process final results for a course and return the final assignment. A process that involves

  1. Checking overall student results for a course, before returning the final assignment.
  2. Identifying all of the students who won’t have final results available by the required date.
  3. Analysing this offering’s performance across the course and comparing it with prior offerings.
  4. Returning the final assignment.
  5. Ensuring the overall results are entered appropriately into the student records system.
  6. Preparing a report for the examiners meeting.

There are three problems driving this

  1. The institutional processes/tools provided to help with these tasks are far from user friendly.

    e.g. the Greasemonkey script I wrote last year to help with task #5 above.

  2. There are no institutional processes/tools for some of these tasks.

    e.g. the only way to get all of the assignment marks into one place is to put them into the Moodle gradebook. But I believe that if I do that, the students can see the marks. Given that I’m still moderating the last assignment’s marks, them seeing the marks is not so good.

  3. The management of online assignment submission has changed this year, so some of the prior workarounds are no longer viable.

What I used to do

Part of this activity is identifying what I’ve already got. It’s around 7 or 8 months since I last had to do this, so I don’t remember. I’m not sure the impact of this temporal distance is something that the designers of institutional systems and processes are fully cognisant of.

The old process appears to have been

  1. Extract individual CSV files for each assignment from EASE.
    The old online assignment system generated 2 separate CSV files. I needed both.
  2. Get another CSV file with Professional Experience results.
    Whether a student has passed their Professional Experience is determined by another part of the institution using a different system.
  3. Run a perl script which would

    1. Extract all the data from the CSV files.
    2. Calculate the final results for each student.
    3. Assign a grade to the student based on circumstances:
      • FNP if no assignments were submitted.
      • F, regardless of mark, if PE was failed.
      • IM if the PE mark has not yet been received.
      • RN if an assignment was submitted but is still being marked.
    4. Generate grade breakdowns for each campus and overall.
    5. Output all that as a CSV file, including comments that I’d added.
  4. Import that into Excel and do some further checking.
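The grade-assignment rules in step 3 can be sketched as a single function. This is a hedged reconstruction in JavaScript rather than the original Perl; the field names, the values used for the PE result, and the pass mark of 50 are all assumptions, not details from the actual script.

```javascript
// Sketch of the grade rules above. All field names ("submitted", "pe",
// "allMarked", "total") and the pass mark are assumptions; a PE mark of -1
// (exempt, mentioned later) would be treated the same as a pass here.
function assignGrade(student) {
  if (student.submitted === 0) return "FNP"; // no assignments submitted
  if (student.pe === "fail") return "F";     // failed PE, regardless of mark
  if (student.pe === "pending") return "IM"; // PE mark not yet received
  if (!student.allMarked) return "RN";       // submitted, still being marked
  return student.total >= 50 ? "P" : "F";    // assumed pass mark of 50
}
```

The point of encoding the rules in one place is that the same function runs for every student, so the grade breakdowns in step 4 are guaranteed to be consistent with the individual grades.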

What I need to do

The process will be largely the same. The main difference is the source and format of the CSV files. The changes now are

  1. Assignment 1, Assignment 2 and PE marks are now in Moodle gradebook.
    They will all export nicely, though the export uses the long version of the USQ student number. I may need a workaround for that.

    What sort of institution is silly enough to have at least 3 versions of the same student number? Long with leading 0s. Long without leading 0s. Short!

  2. Assignment 3 mark will be in Moodle assignment activity.
    That will work. The grade is there.

    And yet, it uses a different version of the student number (long without leading 0s). But both files do have the email address, which might be the candidate for a unique id. Or the script can just pre-pend the leading 0s.
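The pre-pend-leading-0s workaround is simple enough to sketch. This is a hypothetical helper, assuming the long form is 10 digits wide (the width is an assumption, not a known USQ fact); the short form is presumably a different number entirely, so it would need a lookup table rather than padding.

```javascript
// Normalise the two "long" student-number formats to one canonical form:
// the long version with leading zeros. The width of 10 is an assumption.
// The "short" format is a different number and can't be recovered by
// padding; it would need a mapping table.
function canonicalStudentNumber(id, width = 10) {
  const digits = String(id).replace(/\D/g, ""); // keep digits only
  return digits.padStart(width, "0");           // re-add stripped zeros
}
```

Joining the gradebook export and the assignment export on the canonical form (or on email address) then avoids the mismatch.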

The other major difference is that neither of these CSV files includes any mention of the campus or mode the students are using, which prevents breaking results down. But thankfully I do have a database table populated with this information.

Ahh, that’s right. I also have to change the PE mark for some students to -1 to indicate they were exempt. This distinguishes students for whom there is no PE result yet from those who aren’t going on PE and should still pass.

My other problem is that the CSV file for A3 has a “status” field that includes newline characters. My poor little Perl CSV-parsing module doesn’t handle that well.
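For reference, the quoting rule that trips up a naive line-by-line parser: a field wrapped in double quotes may contain commas, escaped quotes (`""`), and newlines. A minimal character-by-character sketch (in JavaScript rather than Perl) that copes with the embedded-newline case:

```javascript
// Minimal CSV parser sketch that handles quoted fields containing commas,
// escaped quotes ("") and newlines. Illustrative only, not production code.
function parseCSV(text) {
  const rows = [[]];
  let field = "";
  let inQuotes = false;
  for (let i = 0; i < text.length; i++) {
    const c = text[i];
    if (inQuotes) {
      if (c === '"' && text[i + 1] === '"') { field += '"'; i++; } // escaped quote
      else if (c === '"') inQuotes = false;                        // closing quote
      else field += c;            // newlines inside quotes stay in the field
    } else if (c === '"') inQuotes = true;
    else if (c === ",") { rows[rows.length - 1].push(field); field = ""; }
    else if (c === "\n") { rows[rows.length - 1].push(field); field = ""; rows.push([]); }
    else if (c !== "\r") field += c;
  }
  rows[rows.length - 1].push(field);
  return rows;
}
```

(In Perl, Text::CSV with the binary option and getline-style reading handles embedded newlines; reading the file line by line and splitting on commas does not.)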

It’s working well enough to help moderate A3 results. Time to do that.

Still to be done

  • Modify gradebook.csv to have -1 for PE for those students who will never go on PE (but what about when the file is updated?)
  • Add the campus calculations back in
  • Figure out how to handle the status field in the Moodle assignment CSV.
  • Have the script produce an Excel file, not a CSV file