Oh Academia

It’s been one of those weeks in academia.

Earlier in the week the “I quit academia” meme went through my Twitter stream. Perhaps the closest this meme came to me was @marksmithers’ “On leaving academia” post.

That was about the day when I had to pull the pin on a grant application. Great idea, something we could do and would probably make a difference, but I didn’t have the skills (or the time) to get it over the line.

As it happened, I was reading Asimov’s “Caves of Steel” this week and came across the following quote about the “Medievalists”, a disaffected part of society:

people sometimes mistake their own shortcomings for those of society and want to fix the Cities because they don’t know how to fix themselves

On Tuesday night I wondered whether you could replace “Cities” with “Universities” and capture some of the drivers behind the “I quit academia” meme.

And then I attended a presentation today titled “Playing the research game well”. All the standard pragmatic tropes – know your H-Index (mine’s only 16), know the impact factor for journals, only publish in journals with an impact factor greater than 3, meta-analyses get cited more, etc.

It is this sort of push for KPIs and objective measures that is being created by the corporatisation of the Australian University sector. The sort of push which makes me skeptical of Mark’s belief

that higher education institutions can and will find their way back to being genuinely positive friendly and enjoyable places to work and study.

If anything these moves are likely to increase the types of experiences Mark reports.

So, I certainly don’t think that the Asimov quote applies. That’s not to say that academics don’t have shortcomings. I have many – the grant application non-submission is indicative of some – but by far the larger looming problem (IMHO) is the changing nature of universities.

That said, it hasn’t been all that bad this week. I did get a phone call from a student in my course. A happy student. Telling stories about how he has been encouraged to experiment with the use of ICTs in his teaching and how he’s found a small group at his work who are collaborating.

Which raises the question, if you’re not going to quit academia (like Leigh commented on Mark’s post, I too am “trapped in wage slavery and servitude”) do you play the game or seek to change it?

Or should we all just take a spoonful?

Processing and Visualizing Data in Complex Learning Environments

The following is a summary and some thinking around

Thompson, K., Ashe, D., Carvalho, L., Goodyear, P., Kelly, N., & Parisio, M. (2013). Processing and Visualizing Data in Complex Learning Environments. American Behavioral Scientist, 57(10), 1401–1420. doi:10.1177/0002764213479368


The ability to capture large amounts of data that describe the interactions of learners becomes useful when one has a framework in which to make sense of the processes of learning in complex learning environments. Through the analysis of such data, one is able to understand what is happening in these networks; however, deciding which elements will be of most interest in a specific learning context and how to process, visualize, and analyze large amounts of data requires the use of analytical tools that adequately support the phases of the research process. In this article, we discuss the selection, processing, visualization, and analysis of multiple elements of learning and learning environments and the links between them. We discuss, using the cases of two learning environments, how structure affects the behavior of learners and, in turn, how that behavior has the potential to affect learning. This approach will allow us to suggest possible ways of improving future designs of learning environments.


Some interesting ideas and frameworks/scaffolding for thinking about learning analytics and understanding educational design. Interesting perspective about “big data” techniques not being limited to small amounts of data about lots of people, but also being useful for large amounts of data about small groups of people. Which is linked to the idea of moving analytics beyond use at the macro level into the “micro”.

I remain unsure that some of the work labeled “learning analytics” is what is commonly called learning analytics.

I wonder whether any thought has been given to the application of analytics techniques going beyond the representation/communication stage and extending into action through integration into the educational design? As yet another tool contributing to co-creation and co-configuration.


Very brief intro to learning analytics – “focused on making sense of ‘big data,’ data usually collected from learning management systems” – used at course and student levels…often to intervene with low/high achieving students…a type of analytics that doesn’t help designers of learning.

educational design defined as “constructing representations of how people should be helped to learn in specific circumstances (Goodyear & Retalis, 2010, p. 10)” and to include the “design of tools, tasks and interactions associated with learning”.

analytics has “mainly focused on the design of courses and analysis on the macro level” and not on “identifying complex patterns of behaviour”. The argument here is to expand “the principles and applications of learning analytics”, based on “processing and visualizing data in two complex learning environments”.


Networked learning

“Networked learning involves people collaborating with the help of technologies in a shared enterprise of knowledge creation” which raises the question for me of “how much ‘help’ do the technologies provide?”.

Focus is exploration of the physical, digital and human elements within learning environments. Objects have properties and intentions, and bring values from choices made during design. Thus objects have “effects on human perception and action”.

A distinction of digital technologies is the capacity to change. i.e. it’s protean.

Aside: interesting that the example they give is of customisation of display and not something deeper.

The objects, perception of them and action with them involve various levels of mental/cognitive effort. Additional complication arises from the combination of objects “Thus, only by analyzing the architecture of networks of objects (the pattern of their relations) can one see how design intentions affect what people do, including what they learn. Research of this nature has implications for design work in education” (p. 1403)

Analytical framework

Four analytics dimensions

  1. set design – the physical stage on which learning activity is situated – tools, artifacts etc.
  2. epistemic design – tasks proposed, knowledge implicated.
  3. social design – roles, divisions of labor.
  4. co-creation and co-configuration – since participants’ activities lead to rearrangement of the learning environment.

Aside: it’s reassuring to see explicit mention of change/modification in the last element.

The analytic framework helps identify and represent key elements of complex learning environments – but there’s a need to “develop methods of analysis that incorporate multiple streams of data to describe multiple tool use across multiple tasks” (p. 1403-1404).

Learner behaviour

The aim is to “reveal process that can inform educational design and student learning processes”. Visualisations of both order and time allow for: “identification of typologies for form” which can help theorize about what works.

Learning analytics

After some common quotes about learning analytics, makes the point that rather than “big data” meaning data from lots of people, it can also be “lots of data” about not many people, e.g. “short episodes of collaborative work can rapidly create hundreds of gigabytes of data” (p. 1405). This raises difficulties. Expands on the tools they developed and related considerations.

“We consider learners’ use of the space as important as what they say and the artifacts they create”.

The following figure offers a summary of how patterns are discovered in data. Most definitions of analytics are based on the use of computational methods. The use of human analysis probably doesn’t fit. But it does capture, I think, what actually happens. Certainly part of the data mining activity.

Discovery of patterns within data by David T Jones, on Flickr

“Finally, the patterns themselves are represented in some way for communication (Figure 1).” (p. 1405)

I have a small problem with the word “finally” in this quote. Not in the context of this paper, but more broadly in learning analytics. At some level representation is enough, but if you do want to make an improvement to educational design, then action is needed. This is the argument we make in the IRAC framework – the R is “Representation”, i.e. communication. The A is affordance for action.

Now moving onto more specific descriptions of what they’ve done

  • focus “on the demonstration of expertise in individual learners as an indicator of successful collaboration”
  • Case #1 is an informal networked learning environment (iSpot)
    The website (through screenshots) is analysed using semiotics and design to examine the elements of the analytical framework.
  • Case #2 – four master’s students working on a collaborative task.
    Video data and transcripts analysed to identify indicators of expertise.

Case #1

Describes iSpot. Mentions earlier analysis (Clow & Makriyannis, 2011).

The focus here is “on the design adopted to make visible a member’s overall level of expertise” – linked somewhat to the outcome of the earlier analysis.

After an introduction to semiotics there is a description of how iSpot represents and calculates the expertise of an individual to illustrate how a design feature is not only a “design element placed in the stage (set design) but in fact encodes a number of underlying meanings, which ultimately reflect a particular way of structuring knowledge (epistemic design) and roles (social design) within the learning network.” (p. 1409).

While this is claimed as drawing on “notions from learning analytics” I’m not sure I see this from the description of what was done. Screenshots of a website followed by semiotic analysis doesn’t quite align with the common definitions of learning analytics I’m familiar with.

Case #2

Four master’s students completing a 5-week task. Face-to-face meetings were captured and analysed. The analytic framework was used to guide the investigation. Automated discourse analysis was used to examine how learners used the tools, interpreted the task and designed their roles. This group achieved the highest grade in the collaborative component – so the authors look for identifiable design elements. A description of how this was done follows.

Through this demonstration, the argument is that the framework has provided added depth to understanding of the co-creation and co-configuration activities in this successful collaboration.


Expands on the options available for further application of learning-analytics techniques, including through the use of a table that draws on components of the figure above as a scaffold.


Learners are influenced by the structure of an environment. The framework here helps identify and theorize about this. Which leads to research/analytics work – at a finer grain. “In so doing, the impacts of design decisions on the behavior of learners can be assessed, and informed redesign work can take place”

I wonder whether the authors are thinking about how environment/tools can be impacted by this “informed redesign”. What if the digital tools’ capability for modification was informed, or even activated, by learning analytics? They seem to lean towards this as they finish with

Understanding the relationship between the design of a learning environment and the behavior and learning that occur may enable the design of more effective learning environments.

Creative Commons, Flickr and presentations: A bit of tinkering

The following is a summary of some tinkering to develop a script that will help me appropriately attribute use of Creative Commons licensed images in presentations. Beyond addressing a long-standing problem of mine, this bit of tinkering is an attempt to feel a bit productive.

The problem

When I give presentations I use Powerpoint (not inherently the problem). I use it in a particular way. Lots of slides, little if any text, and each slide with an interesting photo related to the point I’m trying to make. What follows is an example. (Move beyond the first slide for a feel).

The images are all licensed with a Creative Commons licence and I source them from Flickr via the Creative Commons search. According to this source

All Creative Commons licences require that users of the work attribute the creator. This is also a requirement under Australian copyright law. This means you always have to acknowledge the creator of the CC work you are using, as well as provide any relevant copyright information.

The document continues with “For many users of CC material, attribution is one of the hardest parts of the process”. My current practice is to include the URL of the original image on Flickr on each slide. This has three problems

  1. It adds text to each slide, taking away some of the impact of the image.
  2. It doesn’t fulfil the requirements of the CC licence.
  3. With this style of presentation, most 20-30 minute presentations get close to 100 slides, often with the same number of images to attribute.

The requirements are

you should:

  • Credit the creator;
  • Provide the title of the work;
  • Provide the URL where the work is hosted;
  • Indicate the type of licence it is available under and provide a link to the licence (so others can find out the licence terms); and
  • Keep intact any copyright notice associated with the work.

There are a range of online services that help with attribution. ImageCodr generates HTML, which I use often. flickr storm does a similar task somewhat differently. The Flickr CC helper will generate HTML or text.

To fit with the workflow I use when creating presentations, I’m after something that will

  1. Parse a text file of the format

    http://my.flickr.com/photo
    http://my.flickr.com/photo2

  2. Use the Flickr API to extract the information necessary for an appropriate CC attribution.
  3. Add that to a text/HTML file that will form a “credits” slide at the end of a presentation.

    As per the advice from this source

    Alternatively, you can include a ‘credits’ slide at the end of the show, that lists all the materials used and their attribution details. Again, you should indicate the slide or order so people can find the attribution for a specific work.

  4. Optionally, add a message to the photo on Flickr summarising how/where the photo has been used.
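To make steps 1 and 3 concrete, here’s a rough sketch of the parsing and credit-formatting logic (in Python for illustration – the actual script below is in Perl – with the Flickr API fetch of step 2 stubbed out; all names here are my invention):

```python
import re

# Hypothetical sketch: map "slide,flickr url" lines to a credits listing.
# The photo metadata would normally come from the Flickr API (step 2);
# here it is passed in as a plain dict.

LINE_RE = re.compile(r'^(\d+)\s*,\s*(\S+)$')

def parse_photo_list(text):
    """Parse lines of the form 'slide_number,flickr_photo_url' into
    a mapping of url -> list of slides the image appears on."""
    mapping = {}
    for line in text.strip().splitlines():
        m = LINE_RE.match(line.strip())
        if m:
            mapping.setdefault(m.group(2), []).append(int(m.group(1)))
    return mapping

def format_credit(slides, info):
    """One attribution line: title, creator, URL and licence."""
    return 'Slide %s: "%s" by %s available at %s under %s %s' % (
        ", ".join(str(s) for s in sorted(slides)),
        info["title"], info["owner"], info["page_url"],
        info["licence_name"], info["licence_url"])
```

The two functions bracket the API call: parse the file, fetch each photo’s details, then emit one credit line per image for the credits slide.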

Tinkering process

What follows is the planned/actual tinkering process toward implementation of a solution as a Perl script. The script will use the Flickr API to extract the licence information and hopefully add a comment.

Flickr API working – extracting information

Perl has a range of Flickr-related modules. Flickr::API2 seems to be the current standard.

The flickr.photos.licenses.getInfo method gives a list of all the licences. When you get a photo by id (part of the URL), Flickr returns a licence id with which you can find the URL and name of the licence for the photo.
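In other words, attribution needs a join between a photo’s licence id and the licence table. A minimal sketch of that lookup (Python for illustration, with a hard-coded subset of the licence table standing in for the real flickr.photos.licenses.getInfo response):

```python
# Hypothetical, hard-coded subset of the Flickr licence table.
LICENCES = [
    {"id": 1, "name": "Attribution-NonCommercial-ShareAlike License",
     "url": "http://creativecommons.org/licenses/by-nc-sa/2.0/"},
    {"id": 4, "name": "Attribution License",
     "url": "http://creativecommons.org/licenses/by/2.0/"},
]

def licence_for(photo_licence_id):
    """Return the licence record whose id matches the photo's licence id."""
    for licence in LICENCES:
        if licence["id"] == photo_licence_id:
            return licence
    return None  # not a licence we know about
```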

Some limitations of the information

  • Flickr doesn’t provide the abbreviation for the CC licences.
    These are hard-coded into the script.
  • The url_l method for Flickr::API2 doesn’t seem to be working.
    That’s because it’s not a method – page_url works.
  • The owner_name method for Flickr::API2 doesn’t always reliably return the owner’s name.
    Use the username as a supplement.

Generating credits page

Initially, I was going to copy the format used by the flickr cc attribution helper i.e.

cc licensed ( *ABBR* ) flickr photo by *username*:

But this suggests that the title of the work and a link to the licence is also required (though it does mention flexibility). The format they’re using is

*title* by *name* available at *url*
under a *licence name*
*licence url*

Will do this as simple text, one reference per line. Will also add in the slide number.

After a bit of experimentation the following is what the script is currently generating

Slide 2, 3: “My downhill run!” by Mike Mueller available at http://flickr.com/photos/mike912mueller/6407874723 under Attribution-NonCommercial-ShareAlike License http://creativecommons.org/licenses/by-nc-sa/2.0/

Slide 4: “Question Mark Graffiti” by zeevveez available at http://flickr.com/photos/zeevveez/7095563439 under Attribution License http://creativecommons.org/licenses/by/2.0/

Slide 1: “Greyhound Half Way Station” by Joseph available at http://flickr.com/photos/josepha/4876231714 under Attribution-NonCommercial-ShareAlike License http://creativecommons.org/licenses/by-nc-sa/2.0/

Modified to recognise that I sometimes use an image on multiple slides. I should perhaps add a bit of smarts to the code to order the credits correctly, but time is short.
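That “bit of smarts” could be as simple as sorting the credit entries by the first slide each image appears on – a hypothetical sketch in Python:

```python
# Hypothetical data: flickr photo URL -> slides on which the image appears.
photo_slides = {
    "http://flickr.com/photos/mike912mueller/6407874723": [2, 3],
    "http://flickr.com/photos/zeevveez/7095563439": [4],
    "http://flickr.com/photos/josepha/4876231714": [1],
}

# Order the credits by the earliest slide that uses each image.
ordered = sorted(photo_slides.items(), key=lambda item: min(item[1]))
```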

Adding comment on Flickr

The flickr.photos.comments.addComment method seems to offer what I need. Of course it’s not that simple. To make a comment the script needs to be authenticated with flickr. i.e. as me.

The documentation for Flickr::API2 is not 100% clear on this, and the evolution of Flickr’s authentication means things are moving on, but the following process seems to work

  • Get a “frob”
    use Flickr::API2;
    use Data::Dumper;
    my $api = Flickr::API2->new({
                    'key'    => 'mykey',
                    'secret' => 'mysecret' });
    my $result = $api->execute_method( 'flickr.auth.getFrob' );
    my $frob = $result->{frob}->{_content};
  • Get a special URL to tell Flickr to authorise the script
    my $url = $api->raw->request_auth_url( 'write', $frob );
    print Dumper( $url );
    # wait until I visit the URL and hit enter
  • Get the token
    my $res = $api->execute_method( 'flickr.auth.getToken', { 'frob' => $frob } );
    print Dumper( $res );
  • Copy the token that’s displayed and hard code that into subsequent scripts, including adding a comment using my flickr account.
    my $comment =<<"EOF";
    G'day, This is a test comment.
    EOF
    my $response = $api->execute_method( "flickr.photos.comments.addComment",
                          { photo_id => 3673725336, comment_text => $comment,
                            auth_token => 'the token I got' } );

Put it all together

I’m going to use a small presentation I use in my teaching as a test case. I’ll hardcode the link between image and slide number into the initial script. Longer term the script will rely on there being a text file of the format

1,flickr photo url
2,flickr photo url

(see below for some ideas of how I’ll do this)

It all works. Up above you can see the credit text produced based on a small presentation I use in my teaching. The following is one of the images used in that presentation. If you click on the image you can see the comment that was added by the script.

Greyhound Half Way Station by joseph a, on Flickr (Creative Commons Attribution-Noncommercial-Share Alike 2.0 Generic License)

What follows are various bits of the script, happy to share the file, but I don’t imagine that there’s a lot of folk with Perl installed and configured that would want to use it. There needs to be some more work tidying up and adding in error checking. But it works well enough for now.

The main logic of the script is

use strict;
use warnings;
use Flickr::API2;

# hard-code abbreviations for CC licences based on Flickr id
my %CC = ( 1 => "BY-NC-SA",  2 => "BY-NC",  3 => "BY-NC-ND",
           4 => "BY", 5 => "BY-SA", 6 => "BY-ND" );

my $TOKEN = "my token";
my $auth = {
    'key'    => 'my key',
    'secret' => 'my secret'
};

# which flickr URLs appear on which slides
# flickr photo URL is the key, value is array of slides on which the image appears
my $PHOTO_SLIDES = {
    'http://www.flickr.com/photos/7150652@N02/4876231714/' => [ 1 ],
    'http://www.flickr.com/photos/27933068@N03/6407874723/' => [ 2, 3 ],
    'http://www.flickr.com/photos/zeevveez/7095563439/' => [ 4 ]
};

my $COMMENT =<<"EOF";
--whatever comment I want to add
EOF

my $API = Flickr::API2->new( $auth );
my $credits = generate_credits( $PHOTO_SLIDES, $API );
add_comment( $PHOTO_SLIDES, $COMMENT, $API );
print $credits;

To add the comments (I’m guessing the extraction of the Flickr ID will break eventually)

sub add_comment($$$) {
    my $photo_slides = shift;
    my $comment = shift;
    my $api = shift;

    foreach my $photo_url ( keys %$photo_slides ) {
        if ( $photo_url =~ m#http://www\.flickr\.com/photos/.*/([0-9]*)/# ) {
            my $id = $1;
            my $response = $api->execute_method( "flickr.photos.comments.addComment",
                { photo_id => $id, comment_text => $comment,
                  auth_token => $TOKEN } );
        }
    }
}

And finally generating the attribution information

sub generate_credits( $$ ) {
    my $photo_slides = shift;
    my $api = shift;

    ## Get the licence options
    my $response = $api->execute_method( "flickr.photos.licenses.getInfo" );
    my $licences = $response->{licenses}->{license};

    my $content = "";

    foreach my $photo_url ( keys %$photo_slides ) {
        # extract the id
        if ( $photo_url =~ m#http://www\.flickr\.com/photos/.*/([0-9]*)/# ) {
            my $id = $1;
            my $photo = $api->photos->by_id( $id );

            #  get the licence
            my $info = $photo->info();
            my $licence = getLicence( $info->{photo}->{license}, $licences );
            die "No CC licence found for $photo_url\n"
                if ( ! defined $licence );
            $content .= displayInfo( $licence, $photo, $info, $photo_slides->{$photo_url} );
        }
    }
    return $content;
}

sub displayInfo( $$$$ ) {
    my $licence = shift;
    my $photo = shift;
    my $info = shift;
    my $slides = shift; # array of slide numbers

    my $slide = join ", ", @$slides;

    my $url = $photo->page_url;
    $url =~ s/ //g;
    my $name = $photo->owner_name;
    $name = $info->{photo}->{owner}->{username} if ( $name eq "" );

    return <<"EOF";
Slide $slide: "$photo->{title}" by $name available at $url under $licence->{name}
EOF
}

sub getLicence( $$ ) {
    my $id = shift;
    my $licenses = shift;

    foreach my $licence ( @{$licenses} ) {
        return $licence if ( $id == $licence->{id} );
    }
    return undef;
}

Getting the URLs of images

The final script assumes I have a text file of the format

1,flickr photo url
2,flickr photo url

The question of how to generate this text file remains open. I can see three possible options

  1. Construct the file manually.

    This would be painful and have to wait until after the presentation file is complete. Manual is to be avoided.

  2. Extract it from the Slideshare transcript.

    As well as producing an online version of a presentation, Slideshare also produces a transcript of all the text. This includes flickr photo URLs. This currently works because of my practice of including the URLs on each slide, something I’d like to avoid. As a kludge, I could probably include the URL on each slide but place it behind the image, i.e. make it invisible to the eye but still visible to Slideshare?

  3. Extract it from the pptx file.

    Powerpoint files are now just zip file collections of XML files. I could draw on Perl code like this to extract the URLs. Perhaps the best way is to insert the Flickr URL of the photos used in the notes section (as they too are XML files).

#3 is the long term option. Will use #2 as my first test.

Supporting Action Research with Learning Analytics

The following is a summary and some thoughts on

Dyckhoff, A. L., Lukarov, V., Muslim, A., Chatti, M. A., & Schroeder, U. (2013). Supporting action research with learning analytics. In Proceedings of the Third International Conference on Learning Analytics and Knowledge – LAK ’13 (pp. 220–229). New York, NY, USA: ACM Press. doi:10.1145/2460296.2460340


Bringing in reflection, action research and the idea of learning analytics enabling these reinforces one of my interests. So I’m biased toward this sort of work.

Some good quotes supporting some ideas we’re working on.

Find it interesting that the LA research work tends to talk simply about indicators, i.e. the patterns/correlations that are generated from analysis, rather than on helping users (teachers/learners) actually do something.


My emphasis added.

Learning analytics tools should be useful, i.e., they should be usable and provide the functionality for reaching the goals attributed to learning analytics. This paper seeks to unite learning analytics and action research. Based on this, we investigate how the multitude of questions that arise during technology-enhanced teaching and learning systematically can be mapped to sets of indicators. We examine, which questions are not yet supported and propose concepts of indicators that have a high potential of positively influencing teachers’ didactical considerations. Our investigation shows that many questions of teachers cannot be answered with currently available research tools. Furthermore, few learning analytics studies report about measuring impact. We describe which effects learning analytics should have on teaching and discuss how this could be evaluated.


Starts with the proposition that “teaching is a dynamic activity” where teachers should “constantly analyse, self-reflect, regulate and update their didactical methods and the learning resources they provide to their students”

Of course, learning is also a dynamic activity. Raising the possibility that the same sort of analysis being done here might be done for learners.

Moves onto reflection, its definition and how it can foster learning if “embedded in a cyclical process of active experimentation, where concrete experience forms a basis for observation and reflection”. Action research is positioned as “a method for reflective teaching practice” ending up with learning analytics being able to “initiate and support action research” (AR).

Noting multiple definitions of learning analytics (LA) before offering what they use

learning analytics as the development and exploration of methods and tools for visual analysis and pattern recognition in educational data to permit institutions, teachers, and students to iteratively reflect on learning processes and, thus, call for the optimization of learning designs [39, 40] on the on (sic) hand and aid the improvement of learning on the other [14, 15].

Relationship between LA and AR

  • LA arises from observations made with already collected data.
  • AR starts with a research question arising from teaching practice.
  • AR often uses qualitative methods for a more holistic view; LA is mostly quantitative.

Important point – the creation of indicators from the LA work has “been controlled by and based on the data available in learning environments”. Leading to a focus on indicators arising from what’s available. AR starts with the questions first, before deciding about the methods and sources. Proposes that asking questions without thought to the available data could “improve the design of future LA tools and learning environments”.

Three questions/assumptions

  1. Indicator-question-mapping
    Which teacher questions cannot be mapped to existing indicators? Which indicators could deliver what kind of enlightenment?
  2. Teacher-data-indicators
    Current indicators don’t “explicitly relate teaching and teaching activities to student learning” (p. 220). Are there tools that do this? How should it be done?
  3. Missing impact analysis
    Current LA research “fails to prove the impact of LA tools on stakeholders’ behaviors” (p. 221). How can LA impact teaching? How could it be evaluated?

Paper structure

  • Methods – research procedure and materials
  • Categorisation of indicators
  • Analysis and discussion
  • Conclusion


  1. Results of a qualitative meta-analysis investigating what kind of questions teachers ask while performing AR in TEL – see table below.
  2. Collected publications on LA tools and available indicators.
  3. 2 of the researchers developed a categorisation scheme for the 198 indicators.
  4. Further analysis of LA tools and indicators.
  5. 2 researchers mapped teachers’ questions to sets of available indicators

Teachers’ questions

The questions asked by teachers – summarised in the following table – are taken from

Dyckhoff, A.L. 2011. Implications for Learning Analytics Tools: A Meta-Analysis of Applied Research Questions. IJCISIM. 3, (2011), 594–601.

Must read this to learn more about how these questions came about. They strike me as fairly specific and not necessarily exhaustive. The authors note that some questions fit into more than one category.

(a) Qualitative evaluation

  • How do students like/rate/value specific learning offerings?
  • How difficult/easy is it to use the learning offering?
  • Why do students appreciate the learning offering?

(b) Quantitative measures of use/attendance

  • When and how long are students accessing specific learning offerings (during a day)?
  • How often do students use a learning environment (per week)?
  • Are there specific learning offerings that are NOT used at all?

(c) Differentiation between groups of students

  • By which properties can students be grouped?
  • Do native speakers have fewer problems with learning offerings than non-native speakers?
  • How does the acceptance of specific learning offerings differ according to user properties (e.g. previous knowledge)?

(d) Differentiation between learning offerings

  • Are students using specific learning materials (e.g. lecture recordings) in addition or alternatively to attendance?
  • Will the access of specific learning offerings increase if lectures and exercises on the same topic are scheduled during the same week?
  • How many (percent of the) learning modules are students viewing?

(e) Data consolidation/correlation

  • Which didactical activities facilitate continuous learning?
  • How do learning offerings have to be provided and combined with support to increase usage?

(f) Effects on performance

  • How do low-achieving students profit from continuous learning with e-tests compared to those who have not yet used the e-tests?
  • Is the performance in e-tests somehow related …


Provides a list of tools chosen for analysis. Chosen given presentation in literature as “state-of-the-art LA-tools, which can already be used by their intended target users”.

Categorisation of indicators

Categorisation scheme includes

  • Five perspectives categories – “point of view a user might have on the same data”
    1. individual student

      “inspire an individual student’s self-reflection” on their learning. Also support teachers in monitoring. Sophisticated systems recommend learning activities. Includes a long list of example indicators in this category with references.

    2. group

      As the name suggests, the group.

    3. course
    4. content
    5. teacher

      Only a few found in this category – including sociogram of interaction between teacher and participant.

  • Six data sources categories
    1. student generated data

      Students’ presence online. Clickstreams, but also forum posts etc.

    2. context/local data
    3. academic profile

      Includes grades and demographic data.

    4. evaluation

      Student responses to surveys, ratings, course evaluations.

    5. performance

      Grades etc. from the course; number of assignments submitted.

    6. course meta-data

      course goals, events etc.

Analysis and discussion


Mapped indicators from chosen tool to questions asked by teachers. Missing documentation meant mapping was at times subjective.

“Our analysis showed that current LA implementations still fail to answer several important questions of teachers” (p. 223).

Using categories from the above table

  • Category A – almost all “cannot yet be answered sufficiently”. These deal with questions of student satisfaction and preferences.
  • Category B – most questions can be answered. A few cannot (e.g. use of a service via mobile or at home). Aside: while a question teachers might ask, I’m not sure it’s strongly connected to learning.
  • Category E – generally no. Most systems don’t allow the combination of data that this would require. I would expect in large part because of the research nature of these tools – focused on a particular set of concerns. The paper raises the question of learner privacy issues.
  • Category F – can be difficult depending on access to this information.


“We did not find tools or indicators that explicitly collect and present teacher data” (p. 224). The closest are indicators related to course phases and interactions between teachers and students.

Activity logs contain some teacher data, but other data – e.g. information on lectures – is missing.

If teachers had indicators about their activities and online
presence, they might be inspired and motivated to be more active in the online learning environment. Hence, their presence in discussions might stimulate students likewise to participate more actively and motivate them to share knowledge and ideas.

Authors brainstormed some potential indicators

  • Teacher forum participation indicator.
  • Teacher correspondence indicator

    Tracking personal correspondence, and tracking interventions and impact on student behaviour.

  • Average assignments grading time.

    Would be interesting to see the reaction to enabling this.

    The authors mention privacy issues and suggest only showing the data to the individual teacher.
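As a rough sketch of how such indicators might be computed from an activity log (every field name, record and value here is hypothetical, not from the paper or any real LMS):

```python
from datetime import datetime
from statistics import mean

# Hypothetical activity-log records; field names are illustrative only.
log = [
    {"user": "teacher1", "action": "forum_post", "timestamp": "2013-03-01T09:00"},
    {"user": "teacher1", "action": "forum_post", "timestamp": "2013-03-02T10:30"},
    {"user": "student1", "action": "forum_post", "timestamp": "2013-03-01T11:00"},
]

# Hypothetical (submitted, graded) timestamp pairs, one per assignment.
grading = [
    ("2013-03-01T09:00", "2013-03-03T09:00"),
    ("2013-03-01T10:00", "2013-03-05T10:00"),
]

def forum_participation(log, teacher):
    """Teacher forum participation: count of forum posts by this teacher."""
    return sum(1 for e in log
               if e["user"] == teacher and e["action"] == "forum_post")

def avg_grading_days(pairs):
    """Average assignment grading time, in whole days."""
    fmt = "%Y-%m-%dT%H:%M"
    return mean((datetime.strptime(graded, fmt)
                 - datetime.strptime(submitted, fmt)).days
                for submitted, graded in pairs)

print(forum_participation(log, "teacher1"))  # 2
print(avg_grading_days(grading))             # 3
```

The privacy suggestion maps naturally onto this sketch: the grading-time figure would only ever be shown back to the teacher it describes, not aggregated for managers.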

Missing impact analysis

Provides a table comparing AR and LA.

“very few publications reporting about findings related to the behavioural reactions of teachers and students, i.e. few studies measure the impact of using learning analytics tools” (p. 225). Instead LA research tends to focus on functionality, usability issues and perceived usefulness of specific indicators. … “several projects have not yet published data about conducting reliable case studies or evaluation results at all”.

The authors proceed to offer one approach to measuring the impact of LA tools – an approach that could “be described as design-based research with a focus on uncovering action research activities”.

The steps

  1. Make the tools available to users.

    A representative group of non-expert teachers and students. Need to know about the course and how it’s operating without LA, plus a great deal of information to use as a reference point for comparison later on – including interviews/online surveys with staff and students.

  2. Identify which activities are likely to be improved by LA.

    Hypothesise about the usage and impact of LA.

  3. Interview after use.

Limitations of this approach

  • long time required.
  • significant effort from researchers and participants.
  • analysis of qualitative data prone to personal interpretation.
  • Clear conclusions may not be possible.

Limitations of this study

  • the meta-analysis from which questions were drawn was limited to case studies described in the conference proceedings of a German e-learning conference.
  • identification of indicators was limited to 27 tools; there is other research, especially from EDM. “The challenge is, how to make them usable”.
  • subjectivity of the questions and the indicators – addressed somewhat by using two researchers – but not an easy process.


Learning Analytics tools should be an integral part of TEL. The tools aim at having an impact on teachers and students. But the impact has not been evaluated. The concern we are raising is that LA tools should not only be usable, but also useful in the context of the goals we want to achieve. (p. 227)

  • present indicators focused on answering questions around usage analysis
  • “currently available research tools do not yet answer many questions of teacher”
  • qualitative analysis and correlation between multiple data sources can’t yet be answered.
  • “causes for these shortcomings are insufficient involvement of teachers in the design and development of indicators, absence of rating data/features, non-used student academic profile data, and absence of specific student generated data (mobile, data usage from different devices), as well as missing data correlation and combination from different data sources”.
  • teacher data is not easily visible.
  • future tools will probably have rating features and that data should be used by LA tools.
  • “researchers should actively involve teachers in the design and implementation of indicators”.
  • researchers need to provide guidelines on how indicators can be used and limitations.
  • Need to create evaluation tools to measure impact of LA.


Dyckhoff, A. L., Lukarov, V., Muslim, A., Chatti, M. A., & Schroeder, U. (2013). Supporting action research with learning analytics. In Proceedings of the Third International Conference on Learning Analytics and Knowledge – LAK ’13 (pp. 220–229). New York, NY, USA: ACM Press. doi:10.1145/2460296.2460340

Strategies for curriculum mapping and data collection for assuring learning

The following is a summary of and some reaction to the final report of an OLT funded project titled “Hunters and gatherers: strategies for curriculum mapping and data collection for assuring learning”. This appears to be the project website.

My interest arises from the “Outcomes and Analytics Project” that’s on my list of tasks. That project is taking a look at Moodle’s new outcomes support (and perhaps other tools) that could be leveraged by a Bachelor of Education, and trying to figure out what might be required to gain some benefit from those tools (and whether it’s worth it).


The recommended strategies (holistic, integrated, collaborative, maintainable) could form a good set of principles for some of what I’m thinking.

In terms of gathering data on student performance, assessment and rubrics appear to be the main method. Wonder if analytics and other approaches can supplement this?

It would appear that no-one is doing this stuff very well. The best curriculum mapping tool is a spreadsheet!!! And the data gathering tools are essentially assignment marking tools. Neither set of tools rated well in terms of ease of use.

Lots of good principles and guidelines for implementation, but crappy tools.

Student centered much?

AoL is defined as determining program learning outcomes and standards and then gathering evidence to measure student performance. It is used for curriculum development, continuous improvement and accreditation – but no mention is made of helping students develop eportfolios for employers. Especially in professional programs with national standards, this would seem an obvious overlap.

Interesting given that AoL is meant to be based on student centered learning.

Staff engagement

Is positioned as a difficulty.


An earlier project (on which this was built) was titled “Facilitating staff and student engagement with graduate attribute development”. Apparently limited to helping staff design criteria to assess GAs and students to self-evaluate against those. Seems to be only a small part of the facilitation process.

GAs versus professional standards

I have to admit to being deeply skeptical about the notion of institutional graduate attributes. They always struck me as a myth created by high-priced senior management to justify what was unique about the vision they were creating, and hence with little connection to the reality of teaching at a university. In particular, the standards/attributes set by the professional bodies associated with certain disciplines would always count for more.

Of course institutional GAs have been apparently required by the government since 1992. I wonder if this is like some of those other legal requirements that have been discovered to have never existed, ceased to exist or which were misinterpreted?

Executive summary

Assurance of Learning (AoL) “evaluates how well an institution accomplishes the educational aims at the core of its activities” … AoL provides “qualitative and quantitative indicators for the assessment of the quality of award courses”. It can thus be used for

  • institutional/management ends: strategic directions, priorities, quality assurance, enhancement processes.
  • individual curriculum development.
  • valid evidence to external constituents.

Focus of this project on

  1. mapping program learning outcomes.
  2. Collecting data on student performance against each learning objective.

Aside: interesting that they’ve already used both outcome and objective. Does this mean these are different concepts, or just different labels for the same concept?

Investigation was done via

  • Exploratory interviews with 25 of 39 associate deans L&T from Oz Unis.
  • 8 focus groups with 4 good practice institutions, 2 at each institution – one with a senior leader, the other with teaching staff.
  • Delphi method – but who with?
  • Interviews with experts.
  • online survey.

Recommended good practice strategies

  • holistic – whole of program, to ensure students’ progress and the introduction of GAs prior to demonstration.
  • integrated – GAs must be embedded in the curriculum and linked to assessment.
  • collaborative – developed with teaching staff in an inclusive – not top down – approach to engage staff.
  • maintainable – must not be reliant on particular individuals or extraordinary resources.

They then proceed to mention typical cultural change strategies.

Also did an independent review of existing tools. Interestingly the Blackboard 9.1 goals/standards service gets a mention.

Chapter 1 – Project overview

Identifies 7 key stages in assuring learning from an AACSB White Paper

  1. establishing graduate attributes and measurable learning outcomes for the program;
  2. mapping learning outcomes to suitable units of study in the program (where possible allowing for introduction, further development and then assurance of the outcomes);
  3. aligning relevant assessment tasks to assure learning outcomes;
  4. communicating learning outcomes to students;
  5. collecting data to show student performance for each learning objective;
  6. reporting student performance in the learning outcomes;
  7. reviewing reports to identify areas for program development (‘Closing the Loop’).

Explains the growing requirement for this, lots of acronyms and literature.

Mentions their prior project, which developed the ReView online assessment system to help staff develop criteria that assessed GAs within the set assignments. Students can self-evaluate against those criteria.

Project aims to inform strategy to identify efficient and manageable assurance mechanisms (effective not important?).

Chapter 2 – methodology

Key guiding questions

  1. How is mapping of GAs being done?
  2. How is the collection of GA data being done?
  3. What are the main challenges in mapping and collecting?
  4. Are there identifiable good practice principles?
  5. What are the tools currently being used?

Chapter 3 – Literature review

here come the standards

Standards are defined as “the explicit levels of attainment required of and achieved by students and graduates, individually and collectively, in defined areas of knowledge and skills” (TEQSA, 2011, p. 3) … Academic standards are learning outcomes described in terms of core discipline knowledge and core discipline-specific skills, and expressed as the minimum learning outcomes that a graduate of any given discipline (or program) must have achieved (Ewan, 2010).

TEQSA is apparently requiring academic standards “be expressed as measurable or assessable learning outcomes”.

Determining standards and then collecting data against those is complex. Coates (2010) acknowledges the complexity and suggests a need for cultural change. And there is apparently an urgent need for “new, efficient and effective ways of judging and warranting” (Oliver, 2011, p. 3).

Extant literature

AoL finds its pedagogical basis in student-centered learning.

Curriculum mapping in AoL is the process of embedding learning outcomes related to GAs into units of study where these are introduced, developed and then assured.

AUQA required curriculum mapping, as do most professional accrediting bodies – hence the observation from Barrie et al. (2009) that most Australian universities have some sort of strategic project underway.

The higher ed mapping literature is scant but suggests it’s useful for (all backed up with citations)

  • identifying gaps in a program
  • monitoring course diversity and overlap
  • providing opportunity for reflection and discourse
  • reducing confusion and overlap and increasing coherence

There are more, but there does seem to be some overlap.

Mentions the problem of the compliance culture; other problems include

  • difference between the intended and the enacted curriculum from the students’ perspective
  • how to contextualise GAs into a discipline.
  • mapping seen as threatening, as a course cutting exercise, criticisms of teaching material etc.
  • labour intensive exercise.

Staff engagement is seen as the key and current suggestions for improvement include

  • develop a conceptual framework for developing GAs, including 3 elements
    1. clear statement of purpose for curriculum mapping.
    2. a tool that allows an aggregate view of a course.
    3. a process for use of the tool
  • map GAs using extensive audits of each course.
  • a cyclical process including visual representations to enable a fluid/adaptable curriculum
  • availability of sufficient resources.
  • use of alignment templates (isn’t this a tool?)
  • professional development to integrate and contextualise GAs.
  • having specialists who can teach a particular attribute.
  • whole of program approach, focus on team co-operation and more time spent on design.
  • staff support where workloads increase.
  • linkages between GA development and professional development.

Embedding versus standardised testing

Mention of various standardised tests at the end of study. Talks about plusses and minuses.

Data collection for AoL

Focused on entering student performance outcomes against each learning objective. Need a “systematic method to collect data and explore the achievement levels of students in each of the selected attributes” to inform on-going development.

There are challenges in collecting and providing evidence – highlighting the need for efficiency and streamlining the process.

Assessment rubrics (formative and summative) are key. But there are challenges. Don’t want a “tick list”. Some skills are ill-defined, overlapping and difficult to measure. And the question of standardisation – homogenisation or the pursuit of common goals. Multiple interpretations of criteria.

Rubrics can come to be used for comparison between institutions; assurance of content/process/outcomes across courses.

Continuous improvement/closing the loop

Apparently the “raison d’etre for assessing student learning” and also something that institutions are “most confused about how to go about closing the loop” (Martell, 2007) … “integration of the assessment of learning outcomes into developmental approaches in the classroom has been somewhat intangible” (Taylor et al, 2009).

Curriculum mapping

important features for selecting a CM system to support AoL

  • support an inclusive and participatory process;
  • foster a program-wide approach to produce a mapped overview;
  • map by assessment task;
  • develop student awareness of attributes and their distribution within the program;

The standout tool was a spreadsheet!

Data collection

Important features for a data collection system included

  • implement a consistent criteria for attributes across programs;
  • extract outcome-specific data;
  • embed measurement in the curriculum
  • produce built-in reports;
  • conduct analysis for closing the loop;
  • implement multiple measures of AoL for program wide view.

ReView was seen as the stand out. But then, it arose from the last OLT project. But then that makes this comment interesting

ReView does not rate that well on ‘ease of use without the need for much supplementary professional development’.

Technology-enhanced learning – workloads and costs

The following is a summary and some thoughts on the final report of an OLT funded project titled e-Teaching leadership: planning and implementing a benefits-oriented costs model for technology enhanced learning.

The final report adds “Out of hours” to the title and captures my interest in this area. In particular, I think that the workload for academic staff (and hence the quality of learning and teaching) is being directly impacted by the poor quality of both the institutional tools and how they are being provided. Improving these is where my research interests sit, so I’m hoping this report will provide some insights/quotes to build upon. I also think that the next couple of years will hold “interesting” conversations about workloads and workload models.


The work identifies that

  1. No Australian university really has an idea about workload allocation when it comes to online/blended learning.
  2. Academics are reporting significant increases in workload due to the rise of online/blended learning.

Some of the key recommendations appear to be

  1. “DEEWR in tandem with Universities Australia and other agencies should initiate a multi-level audit of teaching time and WAMs”.
  2. “Define clearly what it means in each program to teach online for staff, learn online for students and manage staff allocation within higher education institutions so that all stakeholders as well as Finance Officers can participate in workload model development.”

Both appear to assume that what currently passes for teaching online is as good as it gets. I’m thinking we still haven’t figured out how to do this well enough. We’re still in the process of recasting what it means to teach online. So, I wonder if putting a lot of effort into workload allocation, prior to figuring out how to do the work, is putting the cart before the horse?

Of course, formulating a workload formula is much easier than recasting the nature of teaching, learning and the institution in which those take place.

Executive summary

Project aims changed due to “lack of consistent sector information on real teaching costs in universities” (p. 2). Also no rigorous cost-accounting protocol is applied to e-teaching. “Unsurprisingly, the study found overload due to e-teaching was a significant factor in staff dissatisfaction” (p.2).

Of course, the conclusion from this is that “Workload models needed to change to accommodate the additional tasks of e-teaching” but you wonder why other factors weren’t considered. e.g. are the tools provided crap? are teaching staff not changing practice based on the changing nature of the task? etc.

Hoping that others will build on this with a sector-wide survey

Four outcomes

  1. Analysis of literature on costs/benefits of online teaching.
  2. Data of workload implications to help in developing workload models.
  3. Four case studies of staff perceptions of workload with TEL.
  4. Recommendations


  • Literature review revealed: lack of reporting and no documentation of the impact on workload when teaching online or in blended modes.
  • 88 interviews across four institutions showed poorly defined policy frameworks for workload allocations and staff didn’t understand those models.
  • The new technologies with new teaching methods have “increased both the number and type of teaching tasks undertaken by staff, with a consequent increase in their work hours” (p. 3)

Part 1 – Project outline and processes

As a result, institutional policies are often guided
more by untested assumptions about reduction of costs per student unit, rather than being evidence-based and teacher-focused, with the result that implementation of new technologies for online teaching intended to reduce costs per student ‘unit’ results in a ‘black hole’ of additional expense (p. 4).

Part 2 – Literature review

Despite predictions otherwise “evidence of productivity gains and cost reduction due to e-teaching/learning is scant”.

Suggests that this project’s focus is on the everyday experience with ICTs, specifically, the workload factors. But I wonder if it will touch on the other factors identified in quotes in the literature review.

Expands on four broad influences: globalisation; technological innovation; macro/micro economic settings; and, renewed cultural emphasis on individualism.

One striking feature of the Gosper et al. study is that 75 per cent of staff had not altered the structure of their unit to incorporate new technologies, despite the clear evidence of Laurillard (2002), Bates (1995) and Twigg (2003) that re-design is crucial in utilising the web.

Part 3 – Project approach

Describes the interviews of 88 academic staff across 4 institutions and the analysis approach.

Part 4 – Aggregated results of interview analysis

Lists each of the questions and summarises results

76 out of 88 did not think workload matched actual work.

Types of online learning

  • 73 / 88 – discussions.
  • 63 – traditional learning resources.
  • 51 – podcasts
  • 42 – Assessment (what does this mean when..)
  • 23 – Assessment quizzes
  • 11 – Assessment submission and marking

Dissenting views of institutional e-learning

The following two quotes are talking about the e-learning context at the same institution at about the same time (2009 through about 2011).

The great

No [name of institution removed] interviewees commented on the impact of technology. It is probable that since the institution had undergone a large review and renewal of technology in the learning management system where processes to support academics were put in place and where academics were included in decision making and empowered to change and upskill, negative attitudes towards the general impact of technology were not an issue for staff. One can hypothesise that these issues were principally resolved.

The not so great

During training sessions … several people made suggestions and raised issues with the structure and use of Moodle. As these suggestions and issues were not recorded and the trainers did not feed them back to the programmers … This resulted in frustration for academic staff when teaching with Moodle for the first time as the problems were not fixed before teaching started … [t]he longer the communication chain, the less likely it was that academic users’ concerns would be communicated correctly to the people who could fix the problems

Seems to be a problem of communication somewhere in there.

I wonder which view was closer to the truth (whatever that is)? Given that the first quote is from a nationally funded research project (the second from a peer-reviewed journal publication), I wonder what implications this has for the practice of institutional e-learning? Or, what it is that institutions say about their practice of e-learning?