TAM, #moodle, online assignment submission and strategic implementation

The following expands on a mention in the last post of one set of thoughts about why and how we might extend, reuse, or build upon some prior research using the Technology Acceptance Model (TAM) and its application to understanding the use (or not) of Online Assignment Submission and Management (OASM) in higher education.

In summary,

  1. In 2005, we published a couple of papers (Behrens et al, 2005; Jones et al, 2005) in which the Technology Acceptance Model (TAM) (Davis, 1989; Venkatesh et al, 2003; Venkatesh & Bala, 2008) was used to explore why an increasingly widely used online assignment submission and management tool (OASIS) was successful.
  2. In 2009/2010, that system was replaced as part of a project to adopt a single LMS (Moodle) that was designed to provide “appropriate support for staff and students to access and use ICT effectively in learning and teaching” (Tickle et al, 2009, p. 1040).
  3. There have been mixed messages about the success of that project. For example, Tynan et al (2009) suggest this

    It is probable that since the institution had undergone a large review and renewal of technology in the learning management system where processes to support academics were put in place and where academics were included in decision making and empowered to change and upskill, negative attitudes towards the general impact of technology were not an issue for staff. One can hypothesise that these issues were principally resolved.

    Rossi and Luck (2011), on the other hand, report on a range of issues with the transition, including a significant loss of functionality in terms of online assignment submission and management.

    There have also been some significant anecdotal comments about issues with the Moodle OASM functionality with large classes.

  4. It’s now three or four years since the implementation of Moodle. Now would appear to be a good time to explore the usage of the Moodle OASM functionality and the perceptions of the teaching staff. This would enable some comparison with the earlier findings from the 2005 work, especially given findings from this work

    The study concludes that staff perceptions have indeed changed and whilst more staff are using online systems for assignment submission, marking and feedback, many do not have a positive attitude towards it. This could be explained by the increased prevalence of available systems and tools alongside their mandated presence.

    But at the same time, students were “wholeheartedly in support” of OASM.

There’s a bunch of stuff to unpack here; an initial start includes

  • The original OASM system was entirely optional. There was no perception that OASM was compulsory or mandatory in the early noughties. As Huber (2013) suggests there is a growing trend toward the expectation or the explicit mandating of OASM. TAM research suggests that optional and mandatory adoption decisions have different impacts/factors.

    Is OASM now seen as mandatory?

  • What is the actual use of OASM?

    We can use “learning analytics” to examine the trends in adoption and use. We did this in 2009 (Beer et al, 2009). The image below (click on it to see it larger) compares the use of “evaluating students” features between the institution’s prior LMSes: Blackboard and Webfuse. “Evaluating students” includes both OASM and quizzes. The purple and green lines indicate the max/min adoption rates expected for this feature from Malikowski et al (2007).

  • The impact of prior use.

    As Huber (2013) suggests, OASM is increasingly widespread. Obviously, prior experience with OASM will influence perceptions (versions of TAM suggest this as well). But will there be a difference between people who have used the Webfuse OASM system and those who have used other systems?

  • What are the factors that impact perceptions?

    The free text responses we focused on in earlier papers are useful for identifying what it is that people like (or don’t) about these systems. This could be interesting.

  • How specific do we get?

    Huber (2013) draws on surveys that are much more explicit in exploring different uses of OASM. The TAM survey is more generic and open ended. What’s the right mix?
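Returning to the “actual use of OASM” point above: the adoption-trend comparison described there (feature usage rates checked against an expected band such as the max/min lines from Malikowski et al, 2007) is mechanically simple. A minimal sketch; all course identifiers and numbers here are made up, not the actual institutional data.

```python
def adoption_rate(courses_in_year, courses_using_feature):
    """Fraction of a year's courses that used the feature category
    (e.g. "evaluating students": OASM plus quizzes)."""
    if not courses_in_year:
        return 0.0
    return len(courses_in_year & courses_using_feature) / len(courses_in_year)

def within_expected_band(rate, low, high):
    """Is the observed rate inside the expected adoption band?"""
    return low <= rate <= high

# Hypothetical year: 200 courses, 30 of which used the feature.
courses_2008 = {f"course-{i}" for i in range(200)}
using_2008 = {f"course-{i}" for i in range(30)}
rate = adoption_rate(courses_2008, using_2008)  # 0.15
```

With per-year course and usage sets extracted from LMS logs, the same two functions plot the trend lines and the Malikowski-style band.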

The adoption and acceptance of learning analytics

Much earlier this year I was invited to participate with some folk much cleverer than I around the question of the adoption of learning analytics and a project to explore this using the Technology Acceptance Model (TAM). Going by the date embedded in the URL of this post, that was way back in August. It’s December and I’m now trying to get back to this post to capture some of my thinking.

If I had to summarise my thinking now, prior to completing the post below, it would consist of

  1. Based on the experience with business intelligence systems in the broader business world and the LMS/e-learning within universities, adoption of learning analytics is likely to be problematic in terms of both quantity and quality.
  2. The centrality in TAM of an individual’s perceptions of the usefulness and ease of use of an IT innovation on adoption panders to my beliefs and prejudices.
  3. I have some qualms (from both the literature but also my limitations) about the value of research based on TAM and surveys of intention to use.

And now some random thoughts.

Deja vu all over again

Based on my current observations, my fear is that learning analytics as implemented by universities is going to suffer similar problems to most prior applications of ICTs into university learning. For example, Geoghegan’s (1994) identification of the chasm as it applied to instructional technology, the findings 10+ years later that usage of the LMS by academics was limited in terms of both quantity and quality, and more recent reports that understanding the information provided by learning analytics is really hard.

The Technology Acceptance Model

For better or worse, the current research is looking to leverage the Technology Acceptance Model (TAM) for exploring the likely acceptance of learning analytics. TAM is one of the “big theories” associated with the Information Systems discipline and has been widely used. TAM provides an instrument through which predictions can be made about whether or not some new technological tool is going to be adopted within a particular group or organisation. The idea is that based on the beliefs about the tool held by the individuals within that group, you can make predictions about whether or not the tool will be used. The particular beliefs that tend to be at the core are perceived usefulness (often the most influential) and perceived ease of use.
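To make the survey mechanics concrete: TAM studies typically measure each core belief with several Likert items whose ratings are averaged into a construct score, and those scores then feed a model of intention to use. A minimal sketch of the scoring step; the item keys here are hypothetical placeholders, since validated TAM instruments define the actual items.

```python
def construct_score(responses, items):
    """Average a respondent's Likert ratings (e.g. 1-7) over the
    items that make up one TAM construct."""
    ratings = [responses[i] for i in items if i in responses]
    return sum(ratings) / len(ratings) if ratings else None

# Hypothetical item keys, not the actual validated instrument items.
PU_ITEMS = ["pu1", "pu2", "pu3"]          # perceived usefulness
PEOU_ITEMS = ["peou1", "peou2", "peou3"]  # perceived ease of use

respondent = {"pu1": 6, "pu2": 7, "pu3": 5,
              "peou1": 4, "peou2": 3, "peou3": 5}
pu = construct_score(respondent, PU_ITEMS)      # 6.0
peou = construct_score(respondent, PEOU_ITEMS)  # 4.0
```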

TAM is not without its critics, including Bagozzi (2007). It has evolved somewhat and is currently at TAM3 (Venkatesh & Bala, 2008). One of the criticisms of TAM has been that it doesn’t provide practitioners with “actionable guidance”, i.e. how do you increase the likelihood of adoption?

TAM work is traditionally survey based. Venkatesh and Bala (2008) identify three broad areas of TAM research

  1. Replication and testing of the constructs.
  2. Development of theoretical underpinnings for TAM constructs.
  3. The addition of new constructs as determinants of TAM constructs.

    Leading to four different types of determinant: individual differences, system characteristics, social influence, facilitating conditions.

The determinants above arose in the development of TAM2. In developing TAM3, Venkatesh and Bala (2008) suggested the following additions:

  • Perceived usefulness

    • Subjective norm
    • Image
    • Job relevance
    • Output quality
    • Result demonstrability
  • Perceived ease of use
    • Computer self-efficacy
    • Perceptions of external control
    • Computer anxiety
    • Computer playfulness
    • Perceived enjoyment
    • Objective usability

With experience and voluntariness as potential moderator factors. Perhaps the above illustrates Bagozzi’s (2007) suggestion that

On the other hand, recent extensions of TAM (e.g., the UTAUT) have been a patchwork of many largely unintegrated and uncoordinated abridgements

Bagozzi (2007) points out that there can be a “potentially infinite list of such moderators”, which results in making the broadenings of TAM “both unwieldy and conceptually impoverished”. The advice is that the introduction of these moderating variables should be theory based.


As it happens, Ali et al (2012) have taken TAM and done some work around learning analytics, described as

While many approaches and tools for learning analytics have been proposed, there is limited empirical insights of the factors influencing potential adoption of this new technology. To address this research gap, we propose and empirically validate a Learning Analytics Acceptance Model (LAAM), which we report in this paper, to start building research understanding how the analytics provided in a learning analytics tool affect educators’ adoption beliefs. (p. 131)

Factors examined

  1. Pedagogical knowledge and information design skills
  2. Perceived utility of a learning analytics tool
  3. Educators’ perceived ease-of-use of a learning analytics tool

Identifying what influences usefulness and ease of use

Back in 2005 a group of us used TAM to explore perceptions of an online assignment submission system (e.g. Behrens et al, 2005). However, rather than trying to predict levels of usage of a new system, this work explored perceptions of a system that was already being used. The intent was to explore what was making this particular system successful. TAM1 was used in a survey, which also included free-text responses for respondents to explain what influenced their perceptions.

Having re-read this, there’s probably some value in exploring this research again, especially given that the institution has moved on to using another system.

Some thoughts on TAM and learning analytics

I see the need for identifying and exploring the factors that will make learning analytics tools likely to be used. I’m not sure TAM or its variants are the right approach. Some reasons follow.

Are there large groups of people actually using learning analytics?

How do you measure individual perceptions of something that many people haven’t used yet?

Ali et al (2012) got a group of educators together and had them experiment with a particular tool.

This approach raises a problem

Is there any commonality between learning analytics tools?

If the aim is to test this at different institutions, is each institution using the same set of learning analytics tools? I think not; currently most are doing their own thing.

Running TAM surveys on different tools would generate other problems.

Identifying the factors beforehand

The survey approach is based on the assumption that you can identify the model beforehand. i.e. you figure out what factors will influence adoption, incorporate them into a model (in this case integrating with TAM) and then test it. Ali et al (2012) included pedagogical knowledge and information design skills of educators.

You might be able to argue that given the relative novelty (which itself is arguable) of learning analytics that you might want to explore these a bit more.

I think this comes back to my humble nature/stupidity and not thinking I can know everything up-front. Hence my preference for emergent/agile development.

Doesn’t offer tool developers/organisations guidance for intervention

There was a quote from the literature identifying this as a weakness of TAM. But as a wannabe developer of learning analytics enhanced tools, TAM appears to be of fairly limited use for another reason. As mentioned above, TAM is focused on internal beliefs, attitudes and intentions. Do you think this tool is easy to use? Do you think it’s useful? Or, picking up on Ali et al (2012), what is your level of pedagogical knowledge or information design skill?

This doesn’t seem to provide me with any insight about how to make the learning analytics useful or easy to use. Or at least not insight that I couldn’t gain from a bit of user-centered design. As a tool developer, how do I change users’ perceptions of computer self-efficacy or anxiety? An organisation might think it can do this via training etc., but I have my doubts.

Teacher conceptions of teaching and learning

If a factor were to be added when applying TAM to learning analytics, I do think that the conceptions of teaching and learning work would be a strong candidate. In fact, the introduction to Steel (2009) cites some research to indicate that “teacher beliefs about the value of technology use are a significant factor in predicting usage”.

Where to now?

Not sure and time to go home. More thinking and reading to do.


Ali, L., Asadi, M., Gašević, D., Jovanović, J., & Hatala, M. (2012). Factors influencing beliefs for adoption of a learning analytics tool: An empirical study. Computers & Education, 62, 130–148.

Bagozzi, R. (2007). The Legacy of the Technology Acceptance Model and a Proposal for a Paradigm Shift. Journal of the Association for Information Systems, 8(4), 244–254.

Behrens, S., Jamieson, K., Jones, D., & Cranston, M. (2005). Predicting system success using the Technology Acceptance Model: A case study. In 16th Australasian Conference on Information Systems. Sydney.

Geoghegan, W. (1994). Whatever happened to instructional technology? In S. Bapna, A. Emdad, & J. Zaveri (Eds.), (pp. 438–447). Baltimore, MD: IBM.

Venkatesh, V., & Bala, H. (2008). Technology acceptance model 3 and a research agenda on interventions. Decision Sciences, 39(2), 273–315. doi:10.1111/j.1540-5915.2008.00192.x

Blogs, learning analytics, IRAC and BIM

In 2014 I am hoping to make some changes to BIM that will enhance the course I’ll be teaching. The hope is to leverage various learning analytics to enhance student learning. The following is an attempt to use the IRAC framework to think about what might be done. Essentially a bit of brainstorming about possible future development.

Each of the headings below link to the IRAC framework. First off the content and the purpose of this use of learning analytics is described. Then each of the four components of the IRAC framework – Information, Representation, Affordances and Change – are considered.

I’ve just learnt about the proceedings from the 3rd Workshop on Awareness and Reflection in Technology-Enhanced Learning, will need to read through that content for any additional insights.


The course is a 3rd year course in a Bachelor of Education. It’s taken by folk hoping to become teachers at every level from prep, through Grade 12 and into the VET sector. The focus is on the students being able to use Information and Communication Technologies to enhance/transform the learning of their students. During the course the students complete a three week practical in a school setting. The course is offered twice a year. The first offering averages around 300 students spread over three campuses and online. The second offering averages around 100 students, all online. The students in the course are not necessarily all that ICT literate.

The students are required to maintain an individual blog that they use as a learning journal. The learning journal is intended to be used for capturing experiences, feelings and reflections. The learning journal contributes 15% of the final mark. There is no formal marking of blog posts. Marking is done on the basis of the number of posts per week, the average word count, and the number of links to both external resources and blog posts from other students.
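Since the marking criteria are all counts, the mark can be computed mechanically from the archived posts. A sketch of that calculation; the targets and equal weighting here are hypothetical illustrations, not the course’s actual rules.

```python
def journal_stats(posts, weeks):
    """Summarise a student's blog against the counting criteria.
    `posts` is a list of dicts with 'words' and 'links' counts."""
    n = len(posts)
    return {
        "posts_per_week": n / weeks if weeks else 0.0,
        "avg_words": sum(p["words"] for p in posts) / n if n else 0.0,
        "total_links": sum(p["links"] for p in posts),
    }

def journal_mark(stats, max_mark=15):
    """Scale each criterion against a target and average the results.
    Targets (3 posts/week, 100 words, 20 links) are made up."""
    parts = [
        min(stats["posts_per_week"] / 3, 1.0),
        min(stats["avg_words"] / 100, 1.0),
        min(stats["total_links"] / 20, 1.0),
    ]
    return round(max_mark * sum(parts) / len(parts), 1)

# A student with 30 posts over 10 weeks, 120 words and 1 link each.
stats = journal_stats([{"words": 120, "links": 1}] * 30, weeks=10)
```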

2013 was the first year the learning journal assessment was used. All 2013 student blog posts are archived in two instances of BIM. The plan is to use learning analytics to explore this data and test out various approaches that could be integrated into BIM and the course’s operation in 2014.


At a high level, the aim is to improve student learning while keeping staff workload appropriate. Briefly, the pedagogy in the course is trying to encourage students’ self-regulation, reflection, and building a PLN/making connections. I want the students to take ownership of their learning around ICTs and pedagogy, and to create and share a range of artefacts, insights and perhaps knowledge. The role that learning analytics can play in this is helping both the students achieve this and the teaching staff support it.

Some high level aims for harnessing learning analytics.

  1. Provide students with some idea of how they are going and perhaps more importantly how to improve.
  2. Increase the diversity, quantity and quality of the connections between students and their posts and blogs.
  3. Allow teaching staff to identify who is struggling, who is doing well and who is in between and then help support staff in engaging appropriately.

A quick skim of the 2013 course evaluation responses reveals some comments (emphasis added) from the semester 1 offering

The blog is a good idea to ensure the students are trying new ICTs during the course however the assessment was pointless. There was no real reason for us to be writing a certain amount of blogs per week. I found it a nuisance to maintain (on-campus student)

Probably the blogging was unnecessary but I still didn’t mind that. (on-campus student)

The blogs were very time consuming – and considering they were marked without being reviewed/marked then I am concerned that we could have done what ever we wanted! (on-campus student)

I also found blogging to be very beneficial in building my PLN (online student – most effective aspects of the course)

being forced to blog was actually great as it brought online students together as we shared resources and got to know each other. (online student – most effective aspects of the course)

the blog we had to keep, it had no purpose to it (online student – least effective aspect of the course)

the amount of blogs expected (online student – least effective aspect of the course)

The blogging, although I can see why we had to do it, I found it was hard to keep to the time frames as an online student (online student – least effective aspect of the course)

I don’t believe that I gained from blogging 3 times per week. I would rather have been assess on the quality of 1 blog per week and professional feedback that I could have provided to another student (rather than just links in the 3 blogs) (online student – least effective aspect of the course)

There were other comments on the blogs; common themes so far seem to be

  • The purpose of the blogs was non-existent to some, especially given that they weren’t marked based on quality by a human.
  • Blogs were potentially seen as more problematic in Semester 1 because of other issues with the course.


Change is actually the last part of the IRAC acronym, but I’ll put it first, mainly because it is the IRAC component that is (IMHO) least considered in learning analytics related projects and the one that I think is the most important.

In this case, I can see three types of change needing to be considered: going outside of Moodle, using features inside of Moodle, and inside BIM.

Outside Moodle

In short, thinking about, designing and implementing the type of changes to BIM and pedagogy outlined below is inherently a learning experience. I’m not smart enough to predict what is going to happen prior to implementation. I always gain insight when engaged in these activities that I want to leverage straight away into new approaches and new technological capabilities, i.e. I want to be able to make changes to BIM during the semester.

That’s not something I can do with the standard processes used for supporting a university’s institutional LMS. Hence the need to look at how I can make changes to BIM outside of Moodle and the institutional installation. In 2013, I did this via a kludge: essentially some Perl scripts and a version of Moodle/BIM running on my laptop.

Beyond the constraints of the institutional LMS processes, there’s the question of information and resources other than what is typically available to a Moodle module. Some examples include

  • Activity completion.

    Currently a small part of the 15% for the learning journal assessment in this course is based on students completing the activities for set weeks. This is in Moodle, but a module like BIM will typically not be able/expected to access this information.

    QUESTION: Can a module access information from other parts of a Moodle course site, and if so, how?

  • Student demographic and academic data.

    e.g. the GPA of a student, or how many times they’ve taken the course, might be used to help identify those at risk. This is typically not information held in Moodle.

  • Student dispositions.

    Data about students’ dispositions and self-regulation may be useful (see below) in providing advice. This would have to be gathered via surveys and would not normally be in Moodle.

  • Computationally heavy analytics.

    It is likely that a range of natural language processing and other potentially computationally heavy algorithms could be used to analyse student posts. Most enterprise IT folk are not going to want to run these algorithms on the same server as the institutional LMS.

All of this combined, means I’ll likely explore the use of LTI mentioned in this post from earlier in the year. i.e. use LTI to enable the version of BIM used in the course to be hosted on another server. A server only used for BIM in this course so that change can happen more rapidly.

In addition, that other server is also likely to run a range of other software for the computationally heavy analytics – rather than try and shoe-horn it into a Moodle module.

Inside Moodle

There’s a line of thought – with which I agree – that learning analytics are most useful when supporting a specific learning design. The more specific, the more useful. This is in tension with the tendency of LMS tools toward being generic. For example, much of what I’m talking about here moves BIM away from its original pedagogy of students answering questions to be marked by markers toward a more connectivist approach. Becoming more specific may limit the people who can use BIM. Not a big worry at the moment, but a consideration.

Moodle 2.0 has evolved somewhat in its ability to support change. For example, the introduction of renderers separates the representation of BIM from the data and allows different themes to override a renderer, in theory allowing other people to modify what is shown. However, the connection with a theme is potentially a bit limiting.

Task: Explore the concept of renderers more fully.

Inside BIM

There is much that could be done to the structure of BIM to enable and support rapid development. e.g. Moodle is now supporting unit tests, BIM needs to move toward supporting this.


To scaffold this look at the information that could be drawn upon, I’ll use the DAI acronym. i.e. the information to be used in learning analytics can be listed as

  • Data – raw data that is the starting point (e.g. blog posts for BIM).
  • Analysis – what method/algorithm is going to be used to analyse and transform the source information into….
  • Insight or perhaps Information – something that potentially reveals something new (e.g. how good is the reflection in this blog post)


Information we currently have access to

  • All student blog posts from 2013.

    As part of the BIM database tables in the Moodle database.

  • The date and time when posts were made.
  • Student performance on assignments and in the course.

    Currently in a database in another, non-Moodle assignment submission system. I’m pondering whether this needs to move to the Moodle assignment submission system, and thus the Moodle gradebook, which raises a question.

    Question: Can/how would a module like BIM get access to Moodle gradebook data in the same course?

  • Some student demographic data.

    Currently as a CSV file manually downloaded from Peoplesoft by someone else. Includes age, postcode, sector, GPA.

  • Course and institution related dates.

    e.g. assignment due dates, semester start and end dates etc.

Information that we don’t have access to, but which might be useful

  • Comments on student blog posts.

    There’s no real standard across different blogging engines for tracking and archiving the comments made on blog posts, so we don’t record those. Anecdotal observations suggest that many of the “connections” between students occur as comments. EduFeedr did some work around this.

  • Student perceptions of the learning journal assessment.

    Might be some mention in the 2013 course evaluation results.

    TASK: Take a look at the 2013 course evaluation results and see what mention is made.

  • Student dispositions and mindsets – e.g. this work.


A very limited list of possible forms of analysis on the information we currently have

  • Link and social network analysis etc.

    Who is linking to who? etc.

  • Natural language processing, computational linguistics etc – which might open up possibilities such as

Combining the above with student demographic information and dispositions could also reveal interesting correlations and relationships.

I need to become more aware of what possible forms of analysis might exist. At the same time, the list of affordances (see below) may also suggest forms of analysis that are required.
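As a minimal example of the link and social network analysis mentioned above: once the links in each post are extracted, in-degree (how often each student’s blog is linked to by others) falls out directly. The data and names below are made up.

```python
from collections import Counter

def link_in_degree(posts):
    """posts: iterable of (author, linked_authors) pairs extracted
    from blog posts. Returns how often each student's blog is
    linked to by someone else (self-links ignored)."""
    indegree = Counter()
    for author, targets in posts:
        for target in targets:
            if target != author:
                indegree[target] += 1
    return indegree

# Made-up data: who linked to whom in their posts.
posts = [
    ("alice", ["bob", "carol"]),
    ("bob", ["carol"]),
    ("carol", ["carol", "alice"]),  # self-link, ignored
]
indegree = link_in_degree(posts)  # carol: 2, bob: 1, alice: 1
```

The same (author, targets) pairs also feed straight into a graph library for the network diagrams discussed under representation.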


Early suggestions for representation might include

  • Social network diagrams of various types.

    For students and teachers to see the structure and evolution of the social network of posts/blogs. e.g. this EduFeedr scenario

  • “My progress”

    Allow students to see a collection of stats about their blog and to see it in connection with others.

  • Student posting

The work reported in this paper on using badges gives one possibility for representation, and also in terms of affordances for students to compare what they’re doing with others.


The actual definition of affordances in the IRAC framework – like the IRAC framework itself – is still in the early days of refinement. Here I’m going to use affordances as functionality that BIM might provide, obviously influenced by the purpose from above.

  • Help students find interesting and relevant posts from other students.
  • Help students find interesting and relevant external links.
  • Allow students to see how “good” their blog is.
  • Show students how their blog compares to other students.

    There are reservations about this.

  • Allow all participants to get some idea of the important topics being discussed each week and over other time periods.
  • Show staff a progress bar/heat map/visualisation of some sort of student progress against expected milestones/questions.

    The EduFeedr progress visualisation below (click on it to see it bigger) is an inspiration.

  • Help staff to intervene and track interventions with all students.
  • Support staff in creating auto-marking approaches.

EduFeedr Progress

Measuring impact and improvement

If we ever get around to doing something in 2014, how will we know what’s changed? Alternatively, what might be useful to learn about the use of the student blogs in 2013?

Some possibilities

  • When did students post?

    Students were expected to have a number of posts each week, however, it was only assessed over a 3 or 4 week period.

    • How many students posted consistently each week and how many did the mad dash toward the end of the 3 or 4 week period?
    • Was there any correlation between when posts were made and the content of the posts, the students’ performance in the course, their GPA or anything else?
  • How (if at all) did student posts change over the semester?
    • Is it possible to tell when holidays, professional experience, other assignments were due etc. from the student posts?
    • Did the emotions in posts change over semester?

      The course is quite heavy going, especially in the first few weeks. I would expect some great gnashing of teeth in the early weeks and perhaps in the lead-up to assessment.

    • How did the connections between posts/students change over the semester?
  • Is it possible to develop indicators that might identify certain types of students/posts?
    • Indicators to identify students who are about to drop out?
    • Indicators to identify popular posts?
    • Indicators of students at all levels?

      e.g. what does a “good” student write about that an “ok” student doesn’t?

  • What were the most mentioned concepts during the semester?
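The “when did students post” questions above are straightforward once post timestamps are pulled from BIM’s tables. A sketch, with a hypothetical definition of the “mad dash” (more than half the posts landing in the final assessed week); all dates are made up.

```python
from collections import Counter
from datetime import date

def posts_per_week(post_dates, start):
    """Bucket post dates into week numbers from the period's start."""
    weeks = Counter()
    for d in post_dates:
        weeks[(d - start).days // 7] += 1
    return weeks

def mad_dash(weeks, period_weeks, threshold=0.5):
    """Hypothetical definition: more than `threshold` of the posts
    land in the final week of the assessed period."""
    total = sum(weeks.values())
    return total > 0 and weeks.get(period_weeks - 1, 0) / total > threshold

# Made-up dates over a 3-week assessed period starting 1 July 2013:
# two posts early, five crammed into the final week.
dates = [date(2013, 7, 1), date(2013, 7, 3),
         date(2013, 7, 15), date(2013, 7, 16), date(2013, 7, 17),
         date(2013, 7, 18), date(2013, 7, 19)]
weeks = posts_per_week(dates, start=date(2013, 7, 1))  # {0: 2, 2: 5}
```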

To do

Some tasks left to do include, in no particular order

  • 2013 blog posts
    • Do some analysis of the 2013 blog posts.
    • Test out some of the planned analytics on these posts.
  • BIM
    • Explore the transition to renderers.
    • Explore unit tests.
    • Explore the “Moodle way” for assignments, marking, rubrics, outcomes etc.
    • Develop the “automated” marking feature.
    • Explore how the select “analytics” features will be identified.
  • LTI
    • Identify a good external hosting service.
    • Confirm that an LTI version of BIM will work with the course.
  • Purpose
    • Clarify exactly what pedagogical aims are going to be valuable.
    • Explore the self-regulated learning literature.
    • Look at the course evaluation responses from 2013 and see if there’s anything important to address.
    • Eventually identify a specific set of outcomes I want to work toward.
  • Information
    • Explore the various analysis methods that could be useful.
    • Explore how the analysis is best done with BIM, Moodle and PHP.
  • Representation
    • Explore how/if badges might be a possibility with the USQ Moodle version and its capabilities.
    • What PHP support is there for visualising social network diagrams?
  • Affordances
    • Get more into the literature around affordances, especially any work people have done on how to design affordances for learning/teaching.

#moodle, blogs, Moodle blogs and #bim

BIM is a Moodle activity module that I developed and use. BIM evolved from an earlier system called Blog Aggregation Management (BAM). BIM’s acronym is BAM Into Moodle. As the name suggests, BIM is essentially a port of all of BAM’s functionality into Moodle. Both BAM and BIM are designed to help with the task of managing students in a course writing and reflecting on their own individual web blogs. In particular, it was designed to do this for courses with hundreds of students.

The aim of this post is to explore and explain a comment that often arises when BIM is first mentioned. i.e. doesn’t Moodle already offer blog integration? The following tweet from @tim_hunt is an example of this.

The aim here is to answer the question, “What does BIM offer that Moodle’s existing blog integration doesn’t already provide?”

In short,

  • Blogs in Moodle are focused at providing a way for authors to create a blog inside of a Moodle instance.
  • BIM is focused on supporting teaching staff in managing a course where all students are expected to write on their own externally hosted blog.

Blogs in Moodle

Each user in Moodle has their own blog. i.e. the user’s (student, teacher or other) blog resides in Moodle. The functionality used to create and edit blog posts is provided by Moodle.

Each user’s blog can have an RSS feed if configured (by default this is turned off). However, standard advice appears to be to have RSS feeds secured (i.e. only people who can login to Moodle can access the feed).

There is support for “course tags” which allow particular posts to be associated with a course. Posts associated with courses in this way are still visible elsewhere.

If the Moodle administrators have enabled it, users can register their external blog with their Moodle blog. For example, if I registered this blog with a Moodle blog, then anything I post to this blog would also appear in my Moodle blog. Posts from an external blog can be deleted from a Moodle blog, but can’t be edited.


Moodle’s blog functionality is focused on helping users create and maintain a blog that sits within a Moodle instance.

It is user-focused, not course-focused. e.g. it appears to offer no functionality for teaching staff to find out which students have or haven’t blogged, and no functionality to mark blog posts.

The problem here (at least for some) is that

Reflective learning journals are demanding and time-consuming for both students and staff (Thorpe, 2004, p. 339)

Blogs with BIM

BIM doesn’t provide any functionality for students or teachers to create a blog. Instead, BIM relies on the author creating a blog on their choice of blogging platform (e.g. I always recommend WordPress.com). This means that the students’ blogs (it’s almost always student blogs that BIM works with) are hosted externally to the LMS. Each student’s blog is their individual blog.

What BIM does is

  • Make a copy within the LMS of all the posts students make on their blogs, just in case the dog eats it.
  • Provide a couple of aggregated views that show who has blogged, how much they’ve blogged and how recently they’ve blogged.
  • Allow different teaching staff to see these aggregated views for the students they are responsible for (while the “in charge” teacher can see all).
  • Show which students haven’t registered their blogs yet and provide a mail merge facility to remind them to do it.
  • Provide an interface so students can check what BIM knows about their posts.
  • If you really want to, allow you to mark student posts.

    This is done by specifying a set of questions that student posts should respond to, and by providing a marking and moderation interface. The marks integrate with the Moodle gradebook.
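The aggregation at the heart of the above can be sketched roughly as follows. This is a hypothetical, minimal illustration only — BIM itself is PHP inside Moodle, and all names and data here are invented — showing how a registered RSS feed might be parsed to answer the "who has blogged, how much, how recently" questions:

```python
import xml.etree.ElementTree as ET
from datetime import datetime

# Invented sample feed standing in for a student's externally hosted blog.
SAMPLE_FEED = """<rss version="2.0"><channel>
  <title>Alice's reflective journal</title>
  <item><title>Week 1 reflection</title><pubDate>Mon, 04 Mar 2013 10:00:00 GMT</pubDate></item>
  <item><title>Week 2 reflection</title><pubDate>Mon, 11 Mar 2013 09:30:00 GMT</pubDate></item>
</channel></rss>"""

def summarise_feed(student, feed_xml):
    """Summarise one student's feed: how many posts, and how recent."""
    root = ET.fromstring(feed_xml)
    items = root.findall("./channel/item")
    dates = [
        datetime.strptime(i.findtext("pubDate"), "%a, %d %b %Y %H:%M:%S %Z")
        for i in items
    ]
    return {
        "student": student,
        "num_posts": len(items),
        "most_recent": max(dates).date().isoformat() if dates else None,
    }

print(summarise_feed("alice", SAMPLE_FEED))
# {'student': 'alice', 'num_posts': 2, 'most_recent': '2013-03-11'}
```

Run over every registered feed, summaries like this are what would feed the aggregated per-course views described above.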


BIM functionality is focused on managing (and marking) of student blog posts. It aims to reduce the time-consuming nature of reflective journals implemented using blogs.

The functionality BIM provides for this task remains essentially what was designed into BAM in 2007. I’m hoping 2014 will see some long overdue evolution in that functionality.

Moodle blogs and BIM?

The Moodle blog functionality is all about helping authors produce blogs. BIM is currently all about helping teachers manage and mark the student use of blogs. It is possible to argue that neither does an overly fantastic job.

This means that it should be possible for the two to work together. i.e. a student could register their Moodle blog with BIM, rather than using WordPress or some other external service. Indeed it is. I’ve just successfully registered a Moodle user blog in BIM.

This is of potential interest in situations where what the students are reflecting on might raise privacy concerns (e.g. nursing students – or just about any other profession – reflecting on their placement experiences). In this situation, the students could create their blog within Moodle and register the RSS feed with BIM.

However, the privacy of this approach depends on the blog visibility settings within Moodle and their impact on the generation of the RSS file. There appear to be three relevant settings for “blog visibility” in Moodle

  • “The world can read entries set to be world-accessible”
  • “All site users can see all blog entries”
  • “Users can only see their own blog”

The question is what effect this visibility setting will have on the RSS file required by BIM. i.e. If visibility is set at “Users can only see their own blog” will this stop generation of the RSS file? A quick test seems to suggest that the RSS file is still generated.

This raises another question about privacy. The “security” or “privacy” of the RSS file generated by a Moodle blog is an example of security through obscurity. i.e. if you know the URL for the RSS file, you can view it. The “security” arises because the URL includes a long string of hexadecimal digits that makes it hard to guess.
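The general idea behind such hard-to-guess URLs can be sketched as follows. This is only an illustration of the principle — Moodle’s actual RSS token generation is different, and the URL shape below is invented:

```python
import secrets

# Illustration of "security through obscurity" via an unguessable URL token.
# 16 random bytes -> 32 hex digits -> 2**128 possible values, so the URL is
# effectively impossible to guess, but anyone who has it can fetch the feed.
token = secrets.token_hex(16)
feed_url = f"https://example.edu/rss/file.php/{token}/blog/user/42/rss.xml"
print(feed_url)
```

Note that nothing stops the URL being shared or leaked: possession of the link is the only credential.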


Thorpe, K. (2004). Reflective learning journals : From concept to practice. Reflective practice: International and Multidisciplinary Perspectives, 5(3), 327–343.

The IRAC framework: Locating the performance zone for learning analytics #ascilite

The following is a draft version of the presentation I’ll be giving at ASCILITE tomorrow. Hopefully this will go close to fitting within the 12 minute time frame.

Other resources

The paper on which this presentation is based is available. As is @cfellows’ insightful and interesting annotated response (an accessory every #ascilite paper should come with).

The slides are also available on Slideshare.

The presentation

Click on the images below to see a larger version.


The aim of this talk is to give an introduction and rationale for the IRAC framework. In short, the rationale is that use of the IRAC framework – especially once it’s more complete – provides a useful lens to improve the likelihood that learning analytics interventions will actually be used by learners, teachers and others, and subsequently be more likely to improve learning.


The motivation for this work is our observation that most of what universities are doing to implement learning analytics relates metaphorically to the steaming pile in this image. It’s also based on our beliefs and observations that the literature around learning analytics has some areas of over and under emphasis.


We’ve been especially annoyed/frustrated with the influx of business intelligence folk and vendors into the learning analytics area. In no small part because they haven’t really been able to get business intelligence working effectively in its much simpler and older applications. If they couldn’t get it to work there, they certainly won’t be able to get it to work in an area as complex and different as learning and teaching.

But I wouldn’t want to limit my criticism to these folk. I don’t think that many of the folk responsible for the transformational improvements to the quality of learning and teaching from the wildly successful application of technologies such as the LMS, lecture capture and copy detection within universities are likely to do significantly better with learning analytics.

And finally, I have some significant cognitive limitations. I need some help thinking about how to design learning analytics interventions.


Which leads to the question of how we can do this better. How do you do it better?

This is where the IRAC framework comes in. We think that there is value in having a framework that can be used to scaffold the analysis, evaluation and design of learning analytics interventions. In fact, we found ourselves wanting and needing such a framework. Especially one which didn’t necessarily suffer some of the limitations of existing frameworks.

The IRAC framework is our early attempt to achieve this.


Because it is still early days, we are still questioning much of the IRAC framework and hope to hear from you what your questions might be about it.


So, rather than define learning analytics, I’m going to start by defining what we think the purpose of learning analytics is.

For us learning analytics is yet another tool in our arsenal to help improve learning. There is no point in learning analytics unless it can help improve learning.

In order for this to happen, the learning analytics interventions we design have to be integrated into the practice of learning and teaching. If the students, teachers and others involved aren’t using these learning analytics interventions, then there is no way the interventions can help improve learning.

To be clear, like much of institutional e-learning, we don’t think most institutional learning analytics interventions are destined to be used to any great level of quantity or quality.


There are already a range of models looking at various aspects of learning analytics. In fact, the Atif paper at this conference adds a nifty little conceptual analysis of one part.

The model we’re going to use here is one by George Siemens from his journal article earlier in the year. Throughout the presentation, as we introduce the IRAC framework, we’ll show how it relates to the Siemens model. This will help make one of our points about the apparent over-emphasis on certain topics that perhaps aren’t as important as others.


To illustrate the value of the IRAC framework we’re going to use it to analyse an example learning analytics intervention.

We currently think the IRAC framework is best applied by starting with a particular context in mind – a nuanced appreciation of context is the main defence against adoption of faddish IT innovations – and a particular purpose.

The context we’re going to consider is CQUniversity and the purpose is identifying at risk students. A purpose that is close to the heart of most institutions at the moment.


Of course, in keeping with my identified purpose of learning analytics, I think the purpose should be rephrased as helping at risk students. Identifying them is pointless if nothing is done to help them.


This is the SSI system. This particular view is what the course coordinator – the person in charge of a single course or unit – would see. It lists all of the students in their course ranked from lowest to highest on the basis of an “Estimation of Success” indicator.

I imagine your institution has or is developing something that will fulfill a similar purpose. The Atif paper also at this conference actually looked at three such tools ECU’s C4S, UNE’s AWE and OUA’s PASS. Everyone is doing it.

So how do we use IRAC to analyse and think about SSI?


Let’s start at the beginning. IRAC is an acronym. The first part of the acronym is I for information. What is the information you are drawing on, how are you analysing it, and what considerations around ethics/privacy etc exist.

As you can see from this representation at least three-quarters of the Siemens’ model is focused on Information. No great surprise that we think that this is not helpful. It’s a necessary first step but it is by no means a sufficient step if your purpose for learning analytics is to improve learning.


In fact, given that most of the people involved in the origins of learning analytics come from a data mining, business intelligence or computer science background, this emphasis on information is no great surprise. It’s a perfect example of Kaplan’s law of the instrument. i.e. if all you have is a business intelligence system, then every task looks like…..


The IRAC framework is still very much under development and this is the first public run of the “Information – SAO” model. The idea is that when you’re pondering the Information component of a learning analytics intervention a useful way of thinking about this might be the combination of Source, Analysis and Output – SAO. Let’s demonstrate with SSI.

The source information from which the EOS is calculated includes demographic and enrolment information about the student (e.g. # of courses enrolled, # passed, their GPA etc) and their activity in the LMS. This is a fairly standard combination of basic data used in most similar systems.

The Analysis – how the raw information is transformed into the output – is by way of a formula. Essentially, points are “awarded” based on the value for each of the bits of information. i.e. if you’ve failed 100% of courses and are enrolled in 8 courses you are going to get a very large negative number in terms of points.

The output is a single number. Which eventually is represented (the second component of IRAC) as a traffic light.
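The SAO pipeline just described — demographic/activity sources, a points formula, a single number rendered as a traffic light — can be made concrete with a toy sketch. To be clear, the real EOS formula, weights and thresholds are not public; everything below is invented purely to illustrate the shape of this kind of analysis:

```python
# Toy EOS-style formula: award/deduct points based on enrolment and
# activity data, then represent the single number as a traffic light.
# All weights and thresholds here are invented, not SSI's actual values.

def estimate_of_success(gpa, fail_rate, courses_enrolled, lms_clicks):
    points = 0.0
    points += gpa * 10                          # stronger GPA earns points
    points -= fail_rate * 50                    # failing everything costs 50
    points -= max(courses_enrolled - 4, 0) * 5  # heavy enrolment load penalised
    points += min(lms_clicks, 200) / 10         # capped LMS activity bonus
    return points

def traffic_light(points):
    """Represent the single EOS number as a traffic light (made-up cutoffs)."""
    if points < 0:
        return "red"
    if points < 30:
        return "yellow"
    return "green"

score = estimate_of_success(gpa=5.5, fail_rate=0.0, courses_enrolled=4, lms_clicks=150)
print(score, traffic_light(score))  # 70.0 green
```

A fixed formula like this is exactly what the “predict-o-mania” criticism discussed later is aimed at: the same weights apply to every course regardless of context.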


The idea here is that you can start to compare other systems. So drawing on the Atif paper from yesterday it’s possible to say that the AWE and PASS systems (similar to SSI) draw on discussion forum and social media pages as additional sources of information. Similar comparisons can be done with the other parts of the SAO model and of the IRAC framework.


It’s possible to make other observations about SSI. The proxy it uses for learner behaviour/activity is clickstream data. How many times they’ve clicked on the course website. This focus on the clickstream as the source of information about the learner is problematic for a number of reasons that have been explained in the literature.


We are doing work that extends beyond the clickstream but due to the constraints of a 12 minute presentation, I’m not getting into details here. So very quickly. I’m the author of a Moodle module called BIM – find out more at the URL on the slide. BIM aggregates, mirrors and manages students blogging. The data BIM focuses on is not clickstreams, but what students are writing. The students’ blog posts. It is through this that BIM allows moving away from simple behaviour – which is all you can get from the clickstream – into information that has some cognitive component.


In Sunday’s keynote at the A-LASI workshop Dragan Gasevic used the term “predict-o-mania” to label the practice of using the same predictive model for all situations with a complete ignorance of context. He also then showed a range of research to show just how silly such a practice is.

This is a significant weakness of SSI. The Estimation of Success is calculated using the same formula for all courses regardless of the context. The formula has been tested with past data at the institution and is somewhat generalisable, but this is still a significant limit.

This limitation has been identified by the Desire2Learn folk and their system – alongside other tools like the Moodle engagement block – provides support for more context-specific variety in the models being used.


Another observation from the Desire2Learn folk identifies a limitation with the EOS calculation. i.e. that it is a black box or, in the words of Judy Kay from Sunday, it’s not scrutable. In fact, Colin reports that there have been academics using SSI who first asked to see the detail of the EOS analysis/formula so that they could understand what it is telling them.


From this we can start populating the IRAC framework for SSI. The idea is that if you did this for different learning analytics interventions you could start making judgements about how well an intervention may suit your context, and where you might like to make improvements if you’re the developer of such a system. You could also use this type of analysis to compare different learning analytics interventions and draw conclusions about their appropriateness for your context and purpose.


The R in IRAC is representation. You’ve gathered your information and analysed it, now you have to represent it so that people can understand and act upon it. The trouble is that most institutional implementations of learning analytics pay too little attention to representation. Though the research literature does have some people doing some interesting things.

This is perhaps the first adoption barrier for learning analytics. If the representation is hard to understand or hard to access (or inappropriate) it won’t be used.


One of the advantages SSI provides is that the representation of the data does effectively integrate a range of information that was previously not available in one place. Too often the information silos are a result of different institutional systems (and their support and control processes) that never talk to each other. The value of bringing this information together in a form that is easily accessible to teaching staff is not to be under-estimated.

The representation in this case is tabular. It hasn’t made significant use of advanced visualisation techniques. This might be seen as a problem.


So that adds integrated and tabular to the IRAC framework for SSI. A tick here indicates presence in the tool not that the particular feature is good or bad.


A good time to mention my opinion that “dashboards suck”.

Dashboards are the other focus for institutional learning analytics projects and they suck. But they also illustrate the other limitation of much of the learning analytics work. It stops at representation.

i.e. a dashboard represents the finding. A dashboard doesn’t help you do anything. They tend to have little or no affordances for action.


As mentioned above, we believe that learning analytics is only useful if it leads to changes in learning and teaching. It has to lead to action. The A in IRAC is affordances. Or what sort of actions does the learning analytics application afford? What does it help people do in response to the insight it represents?


Just quickly the theoretical foundations of the IRAC framework arise from Don Norman’s work around cognitive artefacts. In particular, how the literature around Electronic Performance Support Systems (EPSS) has used these principles to develop the idea of the performance zone. This is talked about more in the paper, if you’re interested.

Affordances are not a new topic for the ASCILITE crowd. In short, the idea here is that a learning analytics tool should help or make easy certain actions on the part of the appropriate individuals. If action isn’t afforded, then it is likely that nothing will get done. If nothing gets done, then how does learning improve?

With recent literature highlighting the increasing workload associated with online/blended delivery in university learning and teaching the idea of systems that make the right tasks easier sounds good. Though it does raise a range of questions about what is right and many, many more.


So what affordances for action does SSI provide? One is the idea of interventions. There’s an intervention log that allows course coordinators to record details of various types of interventions and to show when those interventions occurred in relation to the students involved. It also provides a mail merge facility that makes it easier to provide apparently personal messages to groups of students.


After selecting students from the SSI interface the mail merge allows the course coordinator to frame an email message. The message can include variables – picked from several provided lists – that will be replaced with student specific values when the email is sent. Experience shows that many students currently see these emails as personal.
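The variable-substitution step of such a mail merge is simple to sketch. The field names and messages below are entirely invented — SSI’s actual variable lists are not shown here — this just illustrates how one template becomes an apparently personal message per student:

```python
from string import Template

# Sketch of an SSI-style mail merge: $variables in the message body are
# replaced with student-specific values before sending. All field names
# and student data here are invented for illustration.

message = Template(
    "Dear $firstname,\n"
    "I noticed you haven't accessed the $course site since $last_access.\n"
    "Please get in touch if you need a hand."
)

students = [
    {"firstname": "Alice", "course": "EDED11448", "last_access": "12 March"},
    {"firstname": "Bob", "course": "EDED11448", "last_access": "1 March"},
]

for s in students:
    print(message.substitute(s))
    print("---")
```

The same mechanism works whether the values are picked from enrolment data, LMS activity or the EOS output.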


Affordances are not a given. What is afforded by an artifact depends on the people doing the perceiving. In addition, exaptation – the use of the artefact for unintended purposes – may play a role.

For example, experience has shown that up to 30% of the messages being sent through the SSI mail merge are not related to at risk students. Instead, course coordinators are using it to distribute information to students. Obviously, there is something about the affordance offered by the SSI email merge tool that is missing from the range of other available tools.


There are other less obvious affordances built into SSI. One is the default presentation of the information. The students at the top of the table are those at risk. And we know that those at the bottom of the list are less likely to receive attention. SSI affords a focus on those students at risk. This may be a good thing, but it also means that those students in the middle, or those who are doing very well, are likely to receive less attention, if any at all.


SSI’s email merge facility is arguably an example of an important type of affordance for these systems that I’ve labeled “CRM”. CRM as in Customer Relationship Management. It’s not a great name but links with the closest common functionality that many in higher education are currently familiar with. The idea of something that scaffolds appropriate communication.

The idea of affordances for action is somewhat under-represented in work around learning analytics, but it’s coming. There’s much work to be done identifying what affordances might be useful in a range of contexts and explore what might make sense within something called “CRM” functionality.


Arguably related to the idea of affordances is the proposal from Lockyer et al (2013) of checkpoint and process analytics. Analytics that are specific to particular learning designs. Obviously something that a system like SSI does not provide. But when such analytics are integrated into tools that support specific types of learning designs, this opens up the possibility of specific affordances. I’m particularly interested in the affordances learning analytics offers to a tool like BIM and its intent to encourage students to engage in reflection and to construct a PLN.


The PassNote app is another example of what “CRM” affordances might include. PassNote is from the folk at Purdue University who produce Course Signals. Course Signals is perhaps the most famous SSI-like learning analytics intervention – especially in recent times for perhaps not the best of reasons.

Course Signals uses a formula to identify students as being “red”, “yellow” or “green” based on their level of “at-riskness”. Pass Note is designed to help the teacher frame the messages to send to the student(s).


PassNote provides a range of potential messages that can be chosen on the basis of the students’ “colour” and the topic of the message. The content of these messages has apparently been designed on the basis of research findings.

This page shows the range of possible messages that a teacher might select for a student showing up as “red” if the topic of concern is “Attendance”. It appears that at this stage the teacher must copy and paste the suggested message content into whatever communication mechanism they are using. It would appear that a merger between this functionality and the email functionality of SSI might be useful.

The four labels added to this page are summaries of some of the principles underpinning the content of these messages and are taken from the Pass Note page.


Perhaps stretching a bit, but PassNote is encroaching onto the idea of “pedagogic scaffolding”. i.e. the system – like PassNote – draws on a range of theories or research findings to improve and scaffold any action the person might take. SSI doesn’t provide this affordance.


Just quickly as another example of “pedagogic advice”. This is a screen shot from a project led by Dan Meyer to help students use mathematics to model a situation and make predictions. All of the activity takes place within this environment and based on what the student does, the system offers some pedagogical scaffolding to the teacher in the form of questions they may wish to ask particular students.


The C in IRAC stands for change, and the loop in George’s model captures this nicely. However, in my experience, the people involved with university e-learning pay almost no attention whatsoever to the need to respond productively to on-going change – even with all the rhetoric within the university sector about the constancy of change. In fact, my ASCILITE paper from last year argues that the conceptions of product (e.g. the LMS as an enterprise system) and process (big up-front design) that are endemic to university e-learning are completely and utterly unsuitable for the nature of the task.

For us, it is fundamental that any learning analytics intervention have built into it the ability to change – and to change everything: the information it collects, the analysis methods it uses, how it represents the insight and the affordances it provides. It’s also important how quickly it can change, how fine-grained that change can be and who can change it.


PassNote also offers an example of one of the main rationales for change.

The PassNote app wasn’t originally part of the Course Signals project. PassNote arose out of the experience with Course Signals. Especially the observation that the actions being taken by teaching staff in response to the Course Signals information were less than optimal.

The experience of using Course Signals generated new requirements that had to be addressed.


This is not a new trend. HCI research even has a name for this type of situation. It’s called the task-artifact cycle.

The task of identifying at risk students generated some requirements that led to the development of the Course Signals artifact. The use of Course Signals created some new possibilities and tasks. i.e. the need to communicate effectively with the at risk students to ensure something helpful was done. This in turn generated a new set of requirements that led to the development of PassNote.

The important point here is that the cycle doesn’t end with PassNote.

What happens when 30%, 50% or 100% of the teaching staff at a University start using Course Signals and PassNote? What possibilities might this create? What new tasks? What new requirements?


So the task-artifact cycle gets added under change.

One of the strengths of SSI is that the designers are very much aware of this need. SSI is not a traditional Information Systems project where the assumption is that the smart people running the project can predict what will be needed before the project is underway. The “identification of at risk students” initial purpose was mostly a label to get buy in from senior management because everyone else is doing it. The actual intent of the project is much more ambitious.

For this reason, the project is not implemented within a heavy-weight, enterprise level IT infrastructure because such infrastructures may be reliable, but they are also incredibly static. They can’t respond to change.

i.e. the SSI Project has rejected the conceptions of process and product that infect institutional e-learning.


The need for change is further supported by 30 odd years of research into Decision Support Systems (DSS) – of which data warehouses are a part, and I assume that learning analytics has a close relationship. That research has established some fundamental design principles including evolutionary development.

Arguably the ability to change is more important than all of the other components of the IRAC framework. After all, how can you learn if you are unable to change?


The need for change has also been identified in the learning analytics literature.

One of the big dangers an inability to change brings to learning analytics is that much of what learning analytics is based on – e.g. the clickstream – is data that is simple to log. Data that comes from the systems and pedagogical practices we currently have. Pedagogical practices that are perhaps not all that they should be, now or into the future. If we are not to be caught in the morass of historical and crappy processes, then evolutionary development of learning analytics – and all of e-learning – is essential, and for us it is largely not present.

What I think will be most interesting about what Colin and Damien are doing at CQUniversity is that they have embraced evolutionary development – and, for now, the institution is somewhat allowing them to. The technologies and approaches they are adopting allow them to evolve SSI, MAV and the other systems they are working on much more quickly, and more in step with the needs of the students and teachers at CQUniversity, than other approaches I’ve seen. It’s this that is going to give them a much greater chance of getting their learning analytics interventions integrated into practice.

It’s not a question of how badly (or how well) you start, it’s a question of how quickly you can get better.


One illustration of this ability to change is a recent addition to SSI. This column in the SSI output summarises the total number of clicks the student has made on the Moodle course site for this course for the entire semester. This total has always been there, but what hasn’t been there is the link. Each number is a link.


If you click on that link you get taken to the Moodle course site for the course. But not the standard Moodle course site like what you see here.


Instead, SSI uses another system under development at CQU (MAV – Moodle Activity Viewer) that modifies the entire Moodle course site to generate a heat map of clicks. In this case, the heat map MAV generates shows where a particular student has clicked and how many times. This particular student hasn’t clicked many times, so you can’t actually see much difference in the heat map, i.e. there are no red areas indicating large numbers of clicks.

Now this still relies on the clickstream. i.e. behavioural and not cognitive data. However, it is a representation that can help the teacher make more informed decisions than simply having the number of clicks. An example of learning analytics augmenting human decision making rather than replacing it. In this case, helping the teacher draw on their knowledge of the course structure and what is happening to leverage the limited value of the clickstream.
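The essence of a click heat map is mapping each link’s click count onto a colour intensity. A minimal sketch of that count-to-colour idea follows — MAV itself rewrites the live Moodle page in the browser and its actual colour scheme may differ; the function and data here are invented for illustration:

```python
# Minimal sketch of heat-map colouring: map click counts onto a
# white-to-red scale. MAV's real implementation modifies the live
# Moodle course page; this only shows the count-to-colour mapping.

def heat_colour(clicks, max_clicks):
    """Return an RGB hex colour: white for 0 clicks, pure red for the maximum."""
    intensity = clicks / max_clicks if max_clicks else 0
    green_blue = int(255 * (1 - intensity))  # fade green/blue as clicks rise
    return f"#ff{green_blue:02x}{green_blue:02x}"

# Invented per-link click counts for one course site.
clicks_per_link = {"Week 1 notes": 120, "Assignment 1": 40, "Forum": 0}
peak = max(clicks_per_link.values())
for link, clicks in clicks_per_link.items():
    print(f"{link}: {heat_colour(clicks, peak)}")
```

Applied to every link on the page, this is what produces the “red areas indicating large numbers of clicks” mentioned above.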


This adds the idea of “In place” representation. This also provides a link back to the Electronic Performance Support Systems literature that underpins IRAC and the idea that we should be providing help to folk at the point they need it most. Not separate to the learning environment, but embedded within it. Not a dashboard implemented in some data warehouse that means I need another application to access it, but embedded into the learning environment. By providing the information as part of the learning environment – rather than somewhere else – the cognitive load is reduced and the existing knowledge of the student/teacher can be more easily leveraged.


MAV was actually developed separately from SSI. Its original purpose was to allow teaching staff to generate a heat map of the clicks on their course site for all students, groups of students or individual students. This particular representation is of MAV working for all students on a different course site. In this case, showing the number of students rather than the number of clicks.

Since the work at CQUni recognises the importance of change, it was possible for them to quickly combine these two systems.


Time to finish – perhaps past time. Our hypothesis here is that the IRAC framework – especially in its completed state – offers insights that can help in the analysis, design and implementation of learning analytics interventions. We think that this analysis, done with a particular context and purpose in mind, will lead to learning analytics interventions that are more likely to be used and thus more likely to actually improve learning.


And in short, provide a way in which university learning analytics interventions are less like a stinking pile of …..


Still very early days. We’ve got lots of work still to do and lots of questions to ask. It would be great to start with your questions, questions?

Some of the short-term work we have planned includes an analysis of the learning analytics literature, for two reasons:

  1. Identify a range of topic lists, frameworks, models and examples that fit under each of the IRAC components, and
  2. Explore what, if any, components of the IRAC framework are under-represented in the literature.

We’re also keen to undertake some design-based research using the IRAC framework to design modifications to a range of learning analytics interventions.

Of course, this doesn’t capture the full scope of the potential questions of interest in all of the above.




Image attribution

Slide 5, 49: “Question Everything / Nullius in verba / Take nobody’s word for it” by Duncan Hull available at http://flickr.com/photos/dullhunk/202872717 under Attribution License http://creativecommons.org/licenses/by/2.0/

Slide 53: “University of Michigan Library Card Catalog” by David Fulmer available at http://flickr.com/photos/dfulmer/4350629792 under Attribution License http://creativecommons.org/licenses/by/2.0/

Slide 3: “Warehouse” by Michele Ursino available at http://flickr.com/photos/micurs/6118627854 under Attribution-ShareAlike License http://creativecommons.org/licenses/by-sa/2.0/

Slide 15: “Stream” by coniferconifer available at http://flickr.com/photos/coniferconifer/9535872266 under Attribution License http://creativecommons.org/licenses/by/2.0/

Slide 50, 51, 52: “The British Library” by Steve Cadman available at http://flickr.com/photos/stevecadman/486263551 under Attribution-ShareAlike License http://creativecommons.org/licenses/by-sa/2.0/

Slide 17: “Lawyer Crystal Ball” by CALI – Center for Computer-Assisted Legal Instruction available at http://flickr.com/photos/cali.org/6150105185 under Attribution-NonCommercial-ShareAlike License http://creativecommons.org/licenses/by-nc-sa/2.0/

Slide 23: “Dashboard” by Marko Vallius available at http://flickr.com/photos/markvall/3892112410 under Attribution-NonCommercial-ShareAlike License http://creativecommons.org/licenses/by-nc-sa/2.0/

Slide 4: “framework” by kaz k available at http://flickr.com/photos/kazk/198640938 under Attribution License http://creativecommons.org/licenses/by/2.0/

Slide 47: “25.365” by romana klee available at http://flickr.com/photos/romanaklee/5391995939 under Attribution-ShareAlike License http://creativecommons.org/licenses/by-sa/2.0/

Slide 18: “The Internet” by Martin Deutsch available at http://flickr.com/photos/MartinDeutsch/3190769121 under Attribution-NonCommercial-NoDerivs License http://creativecommons.org/licenses/by-nc-nd/2.0/

Slide 8, 9: “day 140” by mjtmail (tiggy) available at http://flickr.com/photos/mjtmail(tiggy)/2518317362 under Attribution License http://creativecommons.org/licenses/by/2.0/

Slide 6: “Purpose” by Seth Sawyers available at http://flickr.com/photos/sidewalkflying/3534131757 under Attribution License http://creativecommons.org/licenses/by/2.0/

Slide 12: “Making Omelettes” by PhotoGraham available at http://flickr.com/photos/PhotoGraham/260939952 under Attribution-NonCommercial-ShareAlike License http://creativecommons.org/licenses/by-nc-sa/2.0/

Slide 2, 48: “Smoking pile of sh*t” by David Jones available at http://flickr.com/photos/DavidTJones/3626888438 under Attribution-NonCommercial-ShareAlike License http://creativecommons.org/licenses/by-nc-sa/2.0/

Slide 40: “Change Allley sign” by Matt Brown available at http://flickr.com/photos/MattFromLondon/3163571645 under Attribution License http://creativecommons.org/licenses/by/2.0/

Reviewing the past to imagine the future of elearning #ascilite

Cathy Gunn, Reviewing the past to imagine the future of elearning

The technologies that make a difference aren’t those that are hyped.

Comment: But the example given – access to online journals – is an example of the Web, the information superhighway. Perhaps even Vannevar Bush. It was/is hyped, but it wasn’t hyped as being specific to education.

1993 – the idea that “the fundamental nature of teaching and learning is shifting”. Link to constructivist etc. Hypertext, multimedia. The start of massification/diversification.

2013 – shift still happening. But now Connectivism – shifting to – collectivism and peer learning.

From review/recommend to collaborate/critique

Specialised use to mass engagement e.g. ride on a plane and see the use.

Comment: But the type of “mass” engagement is still very unique/individual. The platform enables self-customisation. Not something that applies to institutional systems – is that a factor in their limited use?

50 years of learning technology research required to find that “the type of media has no reliable effect on learning”

So, what should we study and what methods should be used?

Links to Gunn & Steel (2012)

Sticky problems in 2013?

  • Most teachers still don’t make use of the potential.
  • Push to standardise, secure and control at odds with open access, free tools & experimentation
  • Research methods are not established enough to move field forward
  • Funding models don’t provide for sustainable development


  • LA + ability to exploring learning design intent will add a missing link to methodology
  • Discoverable OERs, MOOCs etc will realise the dream of a learning object economy
  • Imperatives for change & the affordances of technology will synergise developments.
  • Dominant designs will emerge and re-engineer IT industries
  • Power of collective consciousness will transform education and knowledge creation.

Enhancing learning analytics by understanding the needs of teachers #ascilite

Linda Corrin, Gregor Kennedy, Raoul Mulder, Enhancing learning analytics by understanding the needs of teachers

Looking at the needs of lecturers.

LA is still new and emerging.

Research focus is on tools or specific problems.

Note: How does that link to the IRAC idea of affordances being limited.

All this research is being fed down to the teachers. This research is trying to go the other way.

Based on committee work at Uni Melbourne. What do lecturers really need to know?

  1. What are the key L&T problems/situations that teachers face for which learning analytics could be useful?
  2. What data could be used to address these problems?
  3. ???

Ran focus groups with selected samples of undergraduate degrees. ADL&T and Program Coordinators nominate “important” teachers.


  • student performance – at risk students, attendance, access to learning resources, participation in communication in class settings, performance in assessment

    A lot of staff wanted to see what the data would say about the combination of student performance + engagement. Leading to the concept of the “ideal student”.

    Different groups had different thoughts on whether students should have access to data

    Provision of feedback – combining performance and engagement. How do/can the students interpret the feedback.

  • student engagement –
  • the learning experience – greater understanding of how students develop knowledge; track prior knowledge and its development through learning activities. Data??
  • quality of teaching and the curriculum – automated textual analysis of messages students sending to student support services/discussion forums; formative and summative assessment to identify areas for review; access to support resources
  • administrative functions associated with L&T – assessment of consistency of student placements; enrolment and profiling tutorial groups; tracking safety requirements for field trips; student selection of units


  • needs not currently met by available presentations – level of detail; timing; multiple data sources
  • Skills and time to interpret
  • How to measure learning
  • privacy/ethics
  • Impact on curriculum design – i.e. management saying you must use tool X so we can measure


  • professional development on LA
  • policy guidelines


Interesting findings, but limited by the constraints of the “requirements analysis” process, i.e. the assumption that people can identify all the relevant factors in a situation where they haven’t had much experience with a system, or indeed aren’t in the process of using one. Especially given the early comments that the learning analytics field itself is still at an early stage of development.

A window into lecturers’ conversations #ascilite

Negin Mirriahi – A window into lecturers’ conversations: With whom are they speaking about technology and why does it matter?

How can HE institutions enhance technology adoption?

How can the top down initiatives be improved. How do we engage academics in using the technologies we want them to?

Comment: Well choosing it for them is perhaps not a good start. (Though unavoidable).

Interviews with 23 lecturers in foreign language teaching.

How did they hear about the LMS? A large proportion from colleagues. Aim is to drill down on what those discussions were about.

Questionnaires used to identify a range of things, including who they are talking with. Network maps showing connections – who is talking to whom.
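A toy sketch of the kind of who-talks-to-whom map this enables. The names, edges, and the degree measure here are all invented for illustration; they are not from the study:

```python
# Toy sketch of a who-talks-to-whom network built from questionnaire
# responses. All names and edges are hypothetical.
from collections import defaultdict

# Each pair means "lecturer A reported talking with lecturer B about the LMS".
reported_conversations = [("ana", "ben"), ("ana", "caro"), ("ben", "caro"), ("dee", "ana")]

adjacency = defaultdict(set)
for a, b in reported_conversations:
    adjacency[a].add(b)
    adjacency[b].add(a)  # treat a reported conversation as undirected

# Degree (number of distinct conversation partners) is one simple way to
# see who sits at the centre of the informal network.
degree = {person: len(partners) for person, partners in adjacency.items()}
```

From a structure like this you can spot both the well-connected lecturers (potential mentors) and the isolates who report no conversations at all.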

So what types of conversations are they having?

Informal conversations –
Formal conversations – formal meetings for a project team, formal discipline networks

Finding (not sure how this was concluded): it’s the formal and informal conversations that make the difference, i.e. not the workshops etc. that are put in place.

What about those who don’t have conversations? Some quotes

We already have the tools, I explore it myself, I’m good at learning computer things

Seems to indicate that these folk might be good mentors – how to do that?

What can we do?

Provide opportunities for

  • informal conversations -shared offices, e-learning showcase, conferences
  • formal conversations – regular meetings, mentorship, CoP
  • other – workshops, educational technologists support, online resources

Question: What about the informal conversations at the moment of need? i.e. a helpdesk process that hasn’t had the informal chat nature abstracted away by the adoption of IT enterprise helpdesk system. Helping connections between the different people across an organisation be created.

Using the e-learning Maturity Model to Identify Good Practice in E-Learning #ascilite

Live blogging from a talk by Stephen Marshall – Using the e-learning Maturity Model to Identify Good Practice in E-Learning

Different ways of talking about quality as

  • perfection
  • exception – surpassing of standards
  • functionality – degree of utility.
  • adequate return – cost benefit.

Comment: Quality as a big stick.

Focus here is quality as sensemaking. Not as ranking, ordering etc.

We shall never be able to escape the ultimate dilemma that all our knowledge is about the past, and all our decisions are about the future.

Wilson (2000), From scenario thinking to strategic action

Brief description of maturity models – assumes “continuous improvement” – optimising is the ultimate

Comment: Does this model actually apply in a dynamic environment? Can an organisation ever know

Showing the reports for Oz Unis against the eMM.

Universities not strong on self-criticism. Focused on looking good in public.

Comment: Surprise, surprise.

Without these conversations – acknowledging things need to be improved – limits way forward.

eMM based on heuristics and the idea that we don’t know yet how – as institutions – to do e-learning well.

Now using various elements of the eMM to illustrate examples of good practice at various Oz universities.


Perhaps the most useful application and perspective on quality and eMM that I’ve seen. Of course, when most senior management think about quality, sensemaking is perhaps the last thing they are thinking of. Especially given the observation that Universities aren’t good at being self-critical.

Not to mention “most universities don’t measure what they do” and the comment that this sort of work goes around in cycles as accountability becomes more important.

Sensemaking – #ascilite

Live blogging of workshop run by Associate Professor Gregor Kennedy – early work from MM mentioning audit trail. Something that Reeves and Hedberg (2003) criticise as being hard to impossible without the students themselves explaining.

Talked about as early skepticism which disappeared with the arrival of big data.

Comment: But perhaps the skepticism has just been swamped by the fad.

A fair bit of time on workshop activities

How are learning analytics used

In order of prevalence

  • Detect at risk students – majority here
  • Teaching and learning research and evaluation
  • Student feedback for adaptive learning
  • Track students’ skills development within curricula

Sensemaking – fundamental issues

Process of analytics: measure, parse, analyse, interpret, report

Note: Sticking to the analytics as simply information, not a foundation for action as suggested in IRAC.

Each of the steps require decisions to be made: metric selection, granularity of analysis, analysis sophistication, meaning making behaviour != cognition, timely representation, provision to multiple audiences
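A minimal sketch of how those five steps might be chained. The step names come from the talk; everything else (the functions, the log format, the at-risk threshold) is a hypothetical stand-in to make the decision points concrete:

```python
# Illustrative sketch of the measure -> parse -> analyse -> interpret -> report
# pipeline. All data formats and thresholds here are invented.

def measure(log_lines):
    """Measure: capture raw events (here, lines from a hypothetical LMS log)."""
    return [line.strip() for line in log_lines if line.strip()]

def parse(events):
    """Parse: turn raw events into structured records."""
    records = []
    for event in events:
        student, action = event.split(",", 1)
        records.append({"student": student, "action": action})
    return records

def analyse(records):
    """Analyse: e.g. count actions per student (a metric-selection decision)."""
    counts = {}
    for r in records:
        counts[r["student"]] = counts.get(r["student"], 0) + 1
    return counts

def interpret(counts, at_risk_threshold=2):
    """Interpret: attach meaning, e.g. flag low-activity students as 'at risk'.
    Note the behaviour != cognition caveat: few clicks need not mean little learning."""
    return {s: ("at risk" if n < at_risk_threshold else "ok") for s, n in counts.items()}

def report(flags):
    """Report: a timely representation for a chosen audience."""
    return "\n".join(f"{s}: {status}" for s, status in sorted(flags.items()))

log = ["s1,view_forum", "s1,submit_quiz", "s2,view_forum"]
print(report(interpret(analyse(parse(measure(log))))))
```

The point of the sketch is that every function embeds one of the decisions listed above: which metric, what granularity, what counts as "at risk", and who sees the report.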

Behaviour != cognition

Basic-level analytics data record the behavioural responses of users.

Some data – e.g. free-text responses – can have a cognitive component, but for the most part the cognitive component is absent.

Thus it’s easy to answer what, but not why.

metric selection

Dashboard views provide aggregated student or class view. Done in a way that is known or not known.

Typical metrics

  • How many times did they do something
  • How much time did they spend.
  • Some sort of standardised score – assessment etc.


At what level do you collect and analyse data

  • Every click
  • Key components of a task – particular aspects specific to a learning task
  • Key aspects of your online subject
  • Key aspects of your online course
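As a toy illustration of the typical metrics above (counts, time spent, standardised scores) computed from a click log: the log format, field names, and data here are all invented, and the time-spent measure is deliberately crude:

```python
# Hypothetical click log: (student, timestamp, item). Purely illustrative.
from datetime import datetime

clicks = [
    ("s1", "2013-12-02 09:00", "forum"),
    ("s1", "2013-12-02 09:30", "quiz"),
    ("s2", "2013-12-02 10:00", "forum"),
]
scores = {"s1": 72, "s2": 58}  # raw assessment scores (invented)

def click_count(student):
    """How many times did they do something?"""
    return sum(1 for s, _, _ in clicks if s == student)

def minutes_online(student):
    """How much time did they spend? (first click to last click, crudely)"""
    times = [datetime.strptime(t, "%Y-%m-%d %H:%M") for s, t, _ in clicks if s == student]
    return (max(times) - min(times)).total_seconds() / 60 if len(times) > 1 else 0

def standardised(student):
    """Some sort of standardised score: here, a z-score against the class mean."""
    vals = list(scores.values())
    mean = sum(vals) / len(vals)
    sd = (sum((v - mean) ** 2 for v in vals) / len(vals)) ** 0.5
    return (scores[student] - mean) / sd
```

Filtering `clicks` by item, task, or course unit is where the granularity decision above shows up: the same three metrics can be computed per click, per task, or per course.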

Top down and bottom up

Computer science – bottom up – data mining for meaningful patterns.

L&T folk – top down – pedagogical models and specific learning designs

Hard to do it only one way; usually a combination of both is required.

The IMS white paper on learning measurement for analytics identified as an example of someone starting to do both.

Note: this model might be useful for the 2009 extension paper.

The affordances of the tool also influence the analysis.

Has an iterative model of analysis, from the macro level down to the specific.