Your experience of organisational digital technology?

What is your experience of the digital technologies provided by the organisations for which you work?

If you’d like to share your experience, please complete the poll below. More detail about the poll follows.

About the poll

The poll is a semi-serious attempt to gather perceptions of organisational digital technologies. The idea (and the text for the two poll options) comes from this conference paper. The presentation will be on Friday 30th September, with additional presentation resources coming to this blog soon.

Exploring Moodle Book usage – Part 7 – When are they used?

The last post in this series looked briefly at the contents of Moodle Book resources. This post is going to look at when the book resources are used, including:

  • What time of day are the books used?
  • When in the semester are they used?

At the end I spend a bit of time exploring the usage of the Book resources in the course I teach.

What time of day are they used?

This is a fairly simple, perhaps useless, exploration of when during the day the books are used. It is more out of general interest, and lays the groundwork for the code for the next question.

Given the huge disparity in the number of views versus prints versus updates, there will be separate graphs for each, meaning 3 graphs per year. For my own interest and for the sake of comparison, I’ve included a fourth graph showing the same analysis for the big 2015 offering of the course I teach. This is the course that perhaps makes the largest use of the Book, and also the offering in which I did lots of updates.

The graphs below show the number of events that occurred in each hour of the day: midnight to 1am, 1am to 2am, and so on. Click on the graphs to see expanded versions.
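The per-hour counts behind these graphs can be sketched in a few lines of Python. This is an illustrative reconstruction, not the actual analysis code; the shape of the event data and the action names are my assumptions.

```python
from collections import Counter
from datetime import datetime

def events_per_hour(events, action="viewed"):
    """Count events of the given action in each hour of the day.

    events: iterable of (datetime, action) pairs, one per logged event.
    Returns a 24-element list indexed by hour (0 = midnight to 1am).
    """
    counts = Counter(when.hour for when, act in events if act == action)
    return [counts.get(hour, 0) for hour in range(24)]

# Illustrative data: two views in the 11am hour, one print in the evening
sample = [
    (datetime(2015, 3, 9, 11, 15), "viewed"),
    (datetime(2015, 3, 9, 11, 40), "viewed"),
    (datetime(2015, 3, 9, 20, 5), "printed"),
]
print(events_per_hour(sample)[11])  # -> 2
```

With the full log extract in that form, one call per event type (view, print, update) per year produces the data for each graph.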

There is no graph for prints per hour for 2012 as there were none in the database. This appears likely to be a bug that needs to be addressed.

Overall findings from time of day

Growth – The maximum number of events has grown each year (as expected given earlier indications of growth).

  • max views per hour: 2012 just less than 35K to 2015 over 150K
  • max prints per hour: 2013 just over 400 to 2015 over 1500
  • max updates per hour: 2012 just over 500 to 2015 over 6000.

Similarity – The overall shapes of the graphs stay the same, suggesting a consistent pattern of interaction.

This is especially the case for the viewing events: starting with a low number from midnight to 1am, an on-going drop in events until 5am, then growth until the maximum per hour between 11am and midday. Then there is a general drop away until 7pm to 8pm, when it grows again before dropping away after 9pm.

Views per hour each year

2012 views per hour

2013 views per hour

2014 views per hour


2015 views per hour

EDC3100 2015 S1

EDC3100 2015 1 views per hour

Prints per hour each year


2012 prints per hour


2013 prints per hour


2014 prints per hour


2015 prints per hour

EDC3100 2015 S1

EDC3100 2015 1 prints per hour

Updates per hour each year


2012 updates per hour


2013 updates per hour


2014 updates per hour


2015 updates per hour

EDC3100 2015 S1

EDC3100 2015 1 updates per hour

Calendar Heatmaps

A calendar heatmap is a fairly common method of representing “how much of something” is happening each day of the year. The following aims to generate calendar heatmaps using the same data shown in the above graphs. The plan is to use the method/code outlined on this page.

It requires the generation of a two-column CSV file: the first column the date in YYYYMMDD format, the second column the “how much of something” for that day. See the example data on the blog post. It looks like it might be smart enough to figure out the dates involved. Let’s see.
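Producing that two-column file from a list of event timestamps is straightforward. A minimal sketch, assuming the events have already been extracted from the Moodle logs (the header line and file name are my assumptions; the heatmap code may expect the file without a header):

```python
import csv
from collections import Counter
from datetime import datetime

def write_heatmap_csv(event_times, path):
    """Write the two-column CSV: YYYYMMDD date, then that day's event count.

    event_times: iterable of datetime objects, one per logged Book event.
    """
    per_day = Counter(when.strftime("%Y%m%d") for when in event_times)
    with open(path, "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(["date", "count"])  # header line is an assumption
        for day in sorted(per_day):
            writer.writerow([day, per_day[day]])

write_heatmap_csv(
    [datetime(2014, 3, 3, 9), datetime(2014, 3, 3, 14), datetime(2014, 3, 4, 10)],
    "book_views.csv",
)
```

Days with no events are simply absent from the file, which is consistent with the heatmap working out the date range itself.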

It is, but doing all of the years together doesn’t work all that well, given the significant increase in the number of courses using the Book as time progresses and the requirement for the heatmap to use the same scale for all years. As a result the 2012 usage doesn’t show up all that well. Hence each year gets its own heatmap.

The following calendar heatmaps show how often the Book resources were viewed on each day. The events counted are only those for Book resources from courses offered in the given year. In 2012, 2013 and 2014 this means that there is a smattering of views of the books early in the following year (semester 3 stretches from Nov to Feb). There is no similar usage for the 2015 books because the data does not include any 2016 events.

The darker the colour the greater the use. In the 2012 image below you should be able to see a tool tip showing a value of 81 (out of 100) that is quite dark, but not the darkest.


The 2012 map seems to establish the pattern.  Heavy use at the start of semester with a gradual reduction through semester. A few upticks during semester and toward the end of semester.

I no longer have easy access to specific dates for 2012 and 2013. The 2014 heatmap has some specific dates which should broadly apply to these earlier years.
2012 Book usage


2013 Book usage - calendar heatmap


The institution maintains a web page showing the important dates for 2014, which include:

  • March 3 – Semester 1 starts.
    Course websites open 2 weeks before this date – 17th Feb
  • June 16 – Semester 1 exams start.
  • July 21 – Semester 2 starts
    Course websites open 2 weeks prior – 7th July.
  • November 3 – Semester 2 exams start.
  • November 17 – Semester 3 starts.



The semester 1 2015 offering of my course had the following due dates for its 3 assignments:

  1. 30th March – which appears to coincide with a heavy usage day.
  2. 4th May – also a slightly heavy usage day, but not as heavy.
  3. 15th June – two somewhat heavy usage days before and on this date.

This raises the question of what the heatmap for that course might look like – see below.


EDC3100 – S1, 2015

Focusing just on my course, the increase in usage just before the due date for the assignments is more obvious. One of the reasons for this is that all the assessment information for the course is included in a Moodle Book resource.
EDC3100 S1 2015 book usage - calendar heatmap
Other time periods relevant to this course are:

  • April 6 to 17 – the two-week mid-semester break; and,
    Which corresponds to two of the lightest periods of usage of book resources.
  • May 18 to June 5 – a three-week period when most of the students are on Professional Experience within schools.
    Which also corresponds to a light period of usage.

The two heaviest days of usage are the 9th and 10th of March. The start of Week 2 of semester. It’s a time when the pressure is on to get a blog created and registered and start completing learning paths.

After the peak of the first three weeks, usage of the Book resources drops to around 50% per day.

Questions to arise from this

  • Does the learning journal assessment item for EDC3100 change when students interact with the course site?
  • Is the pattern of usage (down to 50% a day) indicative of students turning off, or becoming more familiar with the approach?
  • Does the high level of usage indicate

It also raises the question of whether particular offerings of the course show any differences.

2012 – S2

The 2012 S2 pattern is quite a bit different. It is more uneven and appears to continue well after the semester is finished. This is because it was the first semester the course used the Book module, and also because a semester 3 offering of the course for a few students used the same resources.
EDC3100 2012 2 - Book usage

The 2012 heatmap also shows a trend that continues in later years: usage of the Book resources continues well past the end of semester. It’s not heavy usage, but it is still there.

Question: is that just me, or does it include students?

2013 – S1

2013 S1 is a bit different as well. Lighter use at the start of semester. A bit heavier usage around assignment due dates. My guess is that this was still early in the evolution of how the Book was being used.

EDC3100 2013 S1 - Book usage

2013 – S2

This map seems to be evolving toward the heavy use at the start of semester.
EDC3100 2013 S2 - Book usage

2014 – S1

And now the pattern is established. Heavy use at the start of semester and in the lead up to Assignment 1. A slight uptick then for Assignments 2 and 3. With the light usage around Professional Experience evident.

EDC3100 2014 S1 - Book usage

2014 – S2

EDC3100 2014 S2 - Book usage

2015 – S2

  EDC3100 2015 S2 - Book usage
What about just the students?

The following shows just the student usage for the 2013 S1 offering. There is not a huge difference from the “all roles” version above, suggesting that it is students who are doing most of the viewing. But it does confirm that the on-going usage of the Book resources past the end of the semester is by students who appear to have found some value in the information after the course.

EDC3100 2013 1 - Just students

Which comes first? Pedagogy or technology?

Miranda picks up on a common point around the combination of technology and pedagogy with this post titled Pedagogy First then Technology. I disagree. If you have to think in simple sequential terms, then I think pedagogy should be the last consideration, not the first. The broader problem, though, is our tendency to want to limit ourselves to the sequential.

Here’s why.

The world and how we think isn’t sequential

The learning and teaching literature is replete with sequential processes such as ADDIE, Backwards Design, Constructive Alignment etc. It’s replete with such models because that’s what academics and experts tend to do. Develop models. The problem is that all models are wrong, but some of them are useful in certain situations for certain purposes.

Such models attempt to distill what is important from a situation to allow us to focus on that and achieve something useful. The only trouble is that the act of distillation throws something away. It’s an approach that suffers from a problem identified by Sir Samuel Vimes in Feet of Clay by the late Terry Pratchett

What arrogance! What an insult to the rich and chaotic variety of the human experience.

Very few, if any, human beings engage in anything complex or creative (such as designing learning) by following a sequential process.  We are not machines. In a complex task within a complex environment you learn as much, if not more, by engaging in the process as you do planning what you will do beforehand.

Sure, if the task you are thinking about is quite simple, or if it is quite complicated and you have a lot of experience and expertise around that task, then you can perhaps follow a sequential process. However, if you are a teacher pondering how to transform learning through the use of digital technology (or using something else), then your task is neither simple, nor is it complicated, nor is it something you likely have experience or expertise with.

A sequential process to explain why technology first

Technologies for Children is the title of a book that is designed to help teachers develop the ability to help learners engage with the Australian Curriculum – Technologies learning area. A curriculum that defines two subjects: Design and Technologies, and Digital Technologies. In the second chapter (Fleer, 2016) the author shares details of how one year 4/5 teacher integrates this learning area into her class. It includes examples of “a number of key statements that reflected the technological processes and production skills” (Fleer, 2016, p. 37) that are then turned into learner produced wall charts. The following example wall chart is included in Fleer (2016, p. 37). Take note of the first step.

When we evaluate, investigate, generate designs, generate project plans, and make/produce we:

  1. Collaboratively play (investigate) with the materials.
  2. Evaluate the materials and think about how they could be used.
  3. Generate designs and create a project plan for making the item.
  4. Produce or make the item.
  5. Evaluate the item.
  6. Write about the item and talk with others.
  7. Display the item.

Before you can figure out what you are going to do with a digital technology, you need to be fully aware of how the technology works: what it can do, what it can’t, and what the costs of using it are. Once you’ve got a good handle on what the digital technology can do, then you can figure out interesting and effective ways to transform learning using the technology. That is, pedagogy is the last consideration.

This is not to suggest that pedagogy is less important because it comes last. Pedagogy is the ultimate goal.

But all models are wrong

But of course all models are wrong. This model is (arguably) only appropriate if you are not familiar with digital technology. If you know all about digital technology, or the specific digital technology you are considering, then your need to play with the digital technology first is lessened. Maybe you can leap straight to pedagogy.

The trouble is that most teachers that I know have fairly limited knowledge of digital technologies. In fact, I think many of the supposed IT experts within our institutions and the broader institution have somewhat limited understandings of the true nature of digital technologies. I’ve argued that this limited understanding is directly impacting the quality of the use of digital technology for learning and teaching.

The broader problem with this “technology first” model – as with the “pedagogy first” model – is the assumption that we engage in any complex task using a simple, sequential process. Even the 7-step sequential process above is unlikely to capture “the rich and chaotic variety” of how we evaluate, investigate and generate designs for using digital technology for learning and teaching. A teacher is just as likely to “play (investigate)” with a new digital technology by trying it out in a small, safe-to-fail experiment to see how it plays out. Perhaps this is repeated over a few cycles until the teacher is more comfortable with how the digital technology works in the specific context, with the specific learners.


Fleer, M. (2016). Key ideas in the technologies curriculum. In Technologies for Children (pp. 35–70). Cambridge University Press.

Making course activity more transparent: A proposed use of MAV

As part of the USQ Technology Demonstrator Project (a bit more here) we’ll soon be able to play with the Moodle Activity Viewer. As described by the VC, the Technology Demonstrator Project entails

The demonstrator process is 90 days and is a trial of a product that will improve an educator’s professional practice and ultimately motivate and provide significant enhancement to the student learning journey,

The process develops a case study, which is then evaluated by the institution to determine if there is sufficient value to continue or perhaps scale up the project. As part of the process I need to “articulate what it is you hope to achieve/demonstrate by using MAV”.

The following provides some background/rationale/aim on the project and MAV. It concludes with an initial suggestion for how MAV might be used.

Rationale and aim

In short, it’s difficult to form a good understanding of which resources and activities students are engaging with (or not) on a Moodle course site. In particular, it’s difficult to form a good understanding of how they are engaging within those resources and activities. Making it easier for teaching staff to visualise and explore student engagement with resources and activities will help improve their understanding of student engagement. This improved understanding could lead to re-thinking course and activity design. It could enhance the “student learning journey”.

It’s hard to visualise what’s happening

Digital technologies are opaque. Turkle (1995) talks about how what is going on within these technologies is hidden from the user. This is a problem that confronts university teaching staff using a Learning Management System. Identifying which resources and activities within a course website students are engaging with, which they are not, and which students are engaging can take a significant amount of time.

For example, testing at USQ in 2014 (for this presentation) found that, once you knew which reports to run on Moodle, you had to step through a number of different reports. Many of these reports involve waiting for minutes (in 2016 the speed is better) with a blank page while the server responds to the request. After that delay, you can’t actually focus only on student activity (staff activity is included) and it won’t work for all modules. In addition, the visualisation provided is limited to tabular data – like the following.

EDC3100 2016 S1 - Week 0 activity

The standard reports also cannot:

  • Identify how many students, rather than clicks, have accessed each resource/activity.
  • Identify which students have/haven’t accessed each resource/activity.
  • Generate the same report within an activity/resource to understand how students have engaged within it.

Michael de Raadt has developed the Heatmap block for Moodle (inspired by MAV) which addresses many of the limitations of the standard Moodle reports. However, it does not (yet) enable the generation of an activity report within an activity/resource.
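The clicks-versus-students distinction is easy to illustrate. The sketch below uses made-up (userid, resourceid) pairs rather than the real Moodle logstore schema:

```python
from collections import defaultdict

def clicks_and_students(log):
    """For each resource, return (total clicks, distinct students).

    log: iterable of (userid, resourceid) pairs, one per click.
    """
    clicks = defaultdict(int)
    students = defaultdict(set)
    for userid, resourceid in log:
        clicks[resourceid] += 1
        students[resourceid].add(userid)
    return {rid: (clicks[rid], len(students[rid])) for rid in clicks}

# One student clicking a resource three times looks "busy" by clicks,
# but only counts once by students.
log = [(1, 101), (1, 101), (1, 101), (2, 102)]
print(clicks_and_students(log))  # -> {101: (3, 1), 102: (1, 1)}
```

The same resource can look heavily used by one measure and barely used by the other, which is why a report that only counts clicks can mislead.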

The alternative – Moodle Activity Viewer (MAV)

This particular project will introduce and scaffold the use of the Moodle Activity Viewer (MAV) by USQ staff. The following illustrates MAV’s advantages.

MAV modifies any standard Moodle page by overlaying a heat map on it. The following image shows part of a 2013 course site of mine with the addition of MAV’s heatmap. The “hotter” (more red) a link is coloured, the more times it has been clicked upon. In addition, the number of clicks on any link is added in brackets.

A switch of a MAV option will modify the heatmap to show the number of students, rather than clicks. If you visit this page, you will see an image of the entire course site with a MAV heatmap showing the number of students.
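The heat mapping itself is just a scaling of each link’s count against the busiest link on the page. A hypothetical sketch of that colour calculation (MAV’s actual implementation will differ):

```python
def heat_colour(count, max_count):
    """Map a link's count onto a white-to-red scale as an (R, G, B) tuple."""
    if max_count == 0:
        return (255, 255, 255)  # no activity anywhere: everything stays white
    cool = int(255 * (1 - count / max_count))
    return (255, cool, cool)  # hotter links keep less green and blue

print(heat_colour(0, 150))    # -> (255, 255, 255), an unvisited link
print(heat_colour(150, 150))  # -> (255, 0, 0), the hottest link
```

Switching between clicks and students only changes which count is fed in; the colouring stays the same.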

EDC3100 S2, 2013 - heat map

The current major advantage of MAV is that the heatmap will work on any standard Moodle links that appear on any Moodle page. Meaning you can view a specific resource (e.g. a Moodle Book resource) or an activity (e.g. a discussion forum) and use the MAV heatmap to understand student engagement with that activity.

The following image (click on it to see a larger version) shows the MAV heatmap on a discussion forum from the 2013 course site above. This forum is the “introduce yourself” activity for the course. It shows that the most visited forum post was my introduction, visited by 87 students. Most of the other introductions were visited by significantly fewer students.

This illustrates a potential failure of this activity design: students aren’t reading many of the other introductions, perhaps suggesting a need to redesign this activity.
Forum students

Using MAV

At CQU, MAV is installed and teaching staff can choose to use it, or not. I’m unaware of how much shared discussion occurs around what MAV reveals. However, given that I’ve co-authored a paper titled “TPACK as shared practice: Toward a research agenda” (Jones, Heffernan, & Albion, 2015), I am interested in exploring whether MAV can be leveraged in a way that is more situated, social and distributed. Hence the following approach, which is all very tentative and initial. Suggestions welcome.

The approach is influenced by the Visitor and Resident Mapping approach developed by Dave White and others. We (I believe I can speak for my co-authors) found using an adapted version of the mapping process for this paper to be very useful.

  1. Identify a group of teaching staff and have them identify courses of interest.
    Staff from within a program or other related group of courses would be one approach. But a diverse group of courses might help challenge assumptions.
  2. Prepare colour print outs of their course sites, both with and without the MAV heatmap.
  3. Gather them in a room/time and ask them to bring along laptops (or run it in a computer lab)
  4. Ask them to mark up the clear (no MAV heatmap) print out of their course site to represent their current thoughts on student engagement.
    This could include

    • Introducing them to the ideas of heatmaps and engagement.
    • Some group discussion about why and what students might engage with.
    • Development of shared predictions.
    • A show and tell of their highlighted maps.
  5. Handout the MAV heatmap versions of their course site and ask them to analyse and compare.
    Perhaps including:

    • Specific tasks for them to respond to
      1. How closely aligned is the MAV map and your prediction?
      2. What are the major differences?
      3. Why do you think that might be?
      4. What else would you like to know to better explain?
    • Show and tell of the answers
  6. Show the use of MAV live on a course site

    1. changing between # of clicks or # students
    2. focus on specific groups of students
    3. generating heatmaps on particular activities/resources and what that might reveal
  7. Based on this capability, engage in some group generation of questions that MAV might be able to help answer.
  8. Walk through the process of installing MAV on their computer(s) (if required)
  9. Allow time for them to start using MAV to answer questions that interest them.
  10. What did you find?
    Group discussion around what people found, what worked, what didn’t etc.  Including discussion of what might need to be changed about their course/learning design.
  11. Final reflections and evaluation

University digital technology: problems, causes, and suggested solutions

The level of support provided by digital technologies to broad learning and teaching tasks within my little part of my current institution is extremely limited. The following is one explanation why this is the case, and one set of suggestions for what might be done, both immediately and longer term.

The problems and a cause

There are lots of possible explanations for the poor level of support offered by institutional digital technologies. The one I’m using here goes like this:

  1. Activities that are easy to do get done, activities that are hard to do are not apt to get done.
  2. Learning, teaching and the activities that support learning and teaching are situated – context matters.
    For example, the most effective ways for 3rd year pre-service teachers to develop their abilities as teachers, are not likely to work effectively for 1st year mechanical engineers. The activities that someone teaching pre-service teachers wants to engage in, will not be entirely the same as someone teaching engineers, nurses, accountants, musicians etc.
  3. The implementation of institutional digital technologies explicitly de-values context and specificity.
    For example, a fundamental principle of Enterprise information technology architecture is (emphasis added) “provide maximum benefit to the enterprise as a whole“. Here’s that principle expressed by a UK university, and where it is (see principle #2) mentioned in “The Open Group Architecture Framework”. Principle #5 adds this “Development of applications used across the enterprise is preferred over the development of similar or duplicative applications which are only provided to a particular organization”. Check your organisation’s enterprise architecture framework, you may well see a copy and paste of those principles.

While there is a logic behind those principles, these principles also create at least two problems:

  • Lowest common denominator, or the “if all you have is a hammer” problem; and,
  • Starvation.

Lowest common denominator

If you work for my institution and you need to create a website for some purpose then you have two options: Moodle or Sitecore. Moodle is the LMS and Sitecore is the content experience (really sitecore, experience?) management system used by marketing to manage the corporate website. This is what we have, so every request to have a website must use one of these.

This has led to a huge number of Moodle course sites being created for purposes far removed from the intent of Moodle (or Sitecore). Not surprisingly, these sites tend to be largely inactive, because Moodle (or Sitecore) does not make it easy to complete the sorts of activities the purpose requires. Those activities become too hard, so they don’t get done. They work as well as using a hammer to open a boiled egg.


The focus on the whole organisation means that enterprise IT suffers from a version of the reusability paradox. As they focus more and more on making a digital technology reusable across the entire organisation, they must remove from it anything that provides value only within specific contexts. Anything that helps pre-service teachers learn gets removed, because pre-service teachers don’t represent the whole organisation.


Starvation

Any attempt to develop/adopt/use a digital technology that is not common across the whole organisation (i.e. a digital technology that actually provides value) suffers from starvation. The resources to develop/approve a digital technology within an organisation are limited. Priorities have to be applied. A digital technology of value to a subset of the organisation is always going to be placed at a lower priority than a digital technology of value to the entire organisation. It will always be starved of resources.

This starvation is made worse by the observation that the people charged with supporting the use of digital technology within organisations tend to become experts in, and are even employed explicitly to support, specific digital technologies. Whenever a requirement is raised, it can only ever be understood and responded to within the context of existing organisational digital technologies, thus returning to the “hammer” problem.

Enterprise IT has become too much about how “we can help you use the digital technologies we already have” and not enough about “what is important to you and how can we make it easier for you to do it well”.

Context specific solutions

Based on the above, if we want to actually add real value to what we do, then we have to figure out how to adopt/develop/use digital technologies that make it easy for us to do what is important. We have to figure out how to adopt/develop/use digital technologies that are more contextually specific.

The following suggests a “simple” two-step process:

  1. Identify the activities that are important to us and are currently too hard.
  2. Figure out how we can adopt/develop/use digital technologies that will help make those activities easy.

What follows is an attempt to illustrate what this might look like. It will have limitations due to my limited knowledge of both the activities and the digital technologies.

This two-step process and the suggestions below open up all sorts of research opportunities.

Important, but difficult activities

What follows is a list of potentially important, but currently difficult to accomplish activities around Initial Teacher Education (ITE) at my institution. Some or all of them could be arguable, and there are likely far more important activities.

  1. Program-level activities: Ensuring that students in our ITE programs
    • successfully complete specific tasks (e.g. have a valid Blue Card);
    • have a space to socialise with others within the programs;
    • start to develop their sense of professional identity as a teacher;
    • identify information about learners, courses etc at program level.
  2. Professional Experience: All aspects of organising and supporting the placement of pre-service teachers on Professional Experience.
  3. Know thy students: Have some idea of how and what our students are doing during semester, in our own courses and beyond, and be able to respond appropriately based on what we know.
  4. Learning and teaching: Like most university e-learning, our courses do not include widespread effective use of digital technology to amplify and transform student learning (not at all surprising if we’re using generic tools).
  5. Standards, portfolios and outcomes: Understand how well our students and their learning map against the APSTs.
  6. Program and course development: Plan and manage the development of the proposed new programs and the raft of new courses those programs will require. Support the on-going evolution and management of those courses. For example, being able to see and query due dates or other details across the program.
  7. Teacher specific activities: Teachers (and thus our pre-service teachers) have to develop and demonstrate capabilities around teacher specific activities (e.g. lesson and unit planning). Increasingly these activities should (but generally aren’t) actively supported by digital technologies (a Word template for lesson planning is not active support by digital technologies).

Below there are some initial suggestions about how each of the above might be addressed.

How we might support these activities

Important: The addition of digital technology will not magically help make these activities easier. It’s only when the digital technology is integrated effectively into how we do things, that the magic will happen. Achieving that goal is not easy. The following are not magic silver bullets.

There are three broad strategies that can be used

  1. Make use of existing organisational processes and technologies and push them further.
    e.g. the ICT Technology Demonstrators project, digging deeper into the capabilities of Moodle for learning and teaching.
  2. Complement, workaround, or replace existing organisational processes and technologies.
    e.g. existing use of cloud-based technologies (Google docs etc) and other forms of digital technology modification. (Jones, Albion & Heffernan, 2016).
  3. Explore how and if digital technologies used by teachers, related organisations, and beyond can be leveraged.
    20 years ago universities provided banks of dial-up modems to provide Internet access to staff and students. We don’t need to do this anymore. Increasingly there are more and better digital technologies in society than in universities – not only in broader society, but also in teaching. For example, Scootle, the Australian Curriculum site, AITSL, and The Learning Place are used to varying levels. If we wish to better prepare our pre-service teachers for the profession, then using the technologies used by teachers and broader society is important.

Personally, I believe the best outcomes will arise if we’re able to creatively intermingle all three of these strategies. The problems will arise if we try to follow one or the other.

Existing processes and technologies and push it further

Moodle now has support for outcomes. It is possible that these could be used to map student activities and assessment against APSTs and contribute toward Standards, portfolios and outcomes. If the program(s) wanted to take a more coordinated approach, there might be some value in this.

In terms of program-level activities and, in particular, students, one solution might be to request BEdu/BECH specific functionality in UConnect. UConnect is the portal which students use to gain access to USQ and its various other systems. UConnect is implemented using Drupal. Drupal is a content management system, so it should be technically possible to modify it to present a BEdu/BECH specific view. Such a view could be used to present specific information (e.g. expiry date of the Blue Card) and other functionality.

There are a lot of smart people in institutional IT (and elsewhere). Bringing that knowledge closer to use and our needs could result in lots of interesting ideas. Hence, something like a hackathon could be useful.

The current ICT Technology Demonstrators project is one existing process that can be leveraged to produce more specific outcomes. We should be more aware of, and leverage, existing work from this project, and also more actively identify work that would be important for our part of the organisation.

For example (know thy students), I’m currently involved in a demonstrator project that should be bringing MAV to USQ for at least a short time. Using MAV to explore how students are engaging with course Study Desks could be beneficial. This use of MAV is connected to the Digital QILTers project, which arose out of the school 2015 planning day.

Related to this would be engagement with Hazel’s PhD study, which would help leverage existing capabilities within Moodle to know thy students.

Also related to analytics and MAV is the potential introduction of CQUni’s EASI system at USQ. EASI would help both know thy students and program-level activities.

Existing enterprise IT has yet to fully grasp, let alone respond to, the changing nature of digital technologies. Yoo et al (2012) give one view of the changing nature of digital technologies, which they label as pervasive digital technologies. Organisations and their IT departments are still operating from the perspective of digital technologies being scarce, not pervasive. Yoo et al (2012) identify three traits of pervasive digital technologies:

  1. the importance of digital technology platforms;
    i.e. “the proliferation of digital tools or digital components allows firms to build a platform not just of products but of digital capabilities used throughout the organization to support its different functions” (p. 1400)
  2. the emergence of distributed innovations; and,
    i.e. “Not only are innovations increasingly moving toward the periphery of an organization, but the distributed innovation spurred by pervasive digital technology increases the heterogeneity of knowledge resources needed in order to innovate” (p. 1401)
  3. the prevalence of combinatorial innovation.
    i.e. “Increasingly, firms are creating new products or services by combining existing modules with embedded digital capabilities. Arthur (2009) notes that the nearly limitless recombination of digital artifacts has become a new source of innovation” (p. 1402)

Our institution has yet to even think of developing a university platform that would support distributed innovations and combinatorial innovation. It is distributed innovations that offer the potential to solve the dual problems of lowest common denominator and starvation.

The MAV and “more student details” projects mentioned below are primitive first steps in developing an institutional (perhaps even teacher education) digital platform upon which to build truly interesting ideas. For a number of years Universities have been developing applications programming interfaces (APIs) that are made available to students, teachers and others. This is one list of related resources. Here’s a description from a US student titled “How personal APIs let students design their universities”.

Pushing the institution out of its comfort zone into this area is important longer term and might actually help the institution develop the digital acumen that is seen as “a critical enabler” (CAUDIT, 2016).

Complement, workaround, replace org systems

In terms of Program and course development, which at some level is a project management task, a tool like Trello might be a good match. It allows groups of people to collaboratively visualise and manage tasks and progress. Using it in conjunction with Google Drive or similar could offer a way to manage the development of the new programs. Not to mention, Trello is also being used in education (schools) in a variety of different ways.

In terms of Program-level activities and promoting social connections amongst students, a system like UCROO potentially offers functionality more in line with social media (think Facebook) than current approaches that rely on the LMS.

In this paper (Jones et al, 2016), Peter, Amanda and I share a range of different digital modification strategies we’ve undertaken to make it easier to do what we need to do as teachers. A project that actively identifies what others are doing, shares that work, and then seeks how we can distribute those practices across the school’s courses would be interesting.

The “more student details” workarounds I use could potentially be expanded and customised to other courses.  Especially if MAV sticks around (it’s based on the same technology and infrastructure).

As mentioned above, MAV and “more student details” are primitive steps toward providing a platform that enables distributed innovation. The platform offers the chance to move beyond generic tools to specific tools. Pedagogical skins are an idea that seeks to put context and value back into the LMS, to increase its pedagogical value and thus improve the quality of learning and teaching.

Integrate with teacher digital technologies and beyond

Perhaps the most immediate example of this comes from the Standards, portfolios and outcomes activity. Currently students are encouraged to have a USQ-hosted e-portfolio. This is a hackneyed approach of which I’ve long been critical. A more contemporary approach is offered by the Domain of One’s Own (DoOO) project from UMW (see here for some background or here for a broader view). It’s an approach that is spreading across multiple institutions in the US, and Charles Sturt has started to play.

Beyond more general technologies, there is the idea of working more closely with teacher specific digital technologies such as Scootle etc. One possibility might be to develop processes by which our students are engaging with renewable assessments (more here).

It might mean integrating a lesson/unit planning tool that actively integrates with the Australian Curriculum.


CAUDIT. (2016). CAUDIT 2016 Top Ten Issues. Retrieved from library/Resources and Files/Strategic Initiatives/CAUDIT Top Ten Report 2016 WEB.pdf

Jones, D., Albion, P., & Heffernan, A. (2016). Mapping the digital practices of teacher educators: Implications for teacher education in changing digital landscapes. In Proceedings of Society for Information Technology & Teacher Education International Conference 2016 (pp. 2878–2886). Chesapeake, VA: Association for the Advancement of Computing in Education.

Yoo, Y., Boland, R. J., Lyytinen, K., & Majchrzak, A. (2012). Organizing for Innovation in the Digitized World. Organization Science, 23(5), 1398–1408.

Exploring Moodle Book usage – Part 6 – What do they contain?

Part 6 of this series diverges a bit from the last post and moves away from what people are doing with the Book resources to focus on the contents of the Book resources themselves.  Questions I’m hoping to explore in this post include:

  • How long are the Book resources?
    Measured perhaps in number of chapters, bytes, and perhaps textual word count.
  • Are the Book resources web or print documents?
    Do they include links? To other books in the course? To external sites? Which sites? Do they include multimedia?
  • What does one book with 500+ links actually link to?
  • How readable is the text?

NOTE: Click on the graphs below to see larger versions.

How long are the Book resources

A Moodle Book resource is a collection of “chapters” and “sub-chapters”, which are essentially web pages. The following starts looking in more detail at these chapters and their contents.

Where did they come from – import or create?

Looking more closely at the chapters provides an opportunity to find out how they were created.

Each chapter has a field importsrc, which specifies the name of a file from which the content was imported, indicating that the chapter was created by uploading an already written file rather than by using the Book’s online editing interface.

Analysis shows that only

  • 9.8% (2397 out of 24408) of chapters are imported;
  • these belong to 10.2% (287 out of 2801) of books; and,
  • 11.8% (44 out of 374) of courses.

i.e. ~90% of chapters, books and courses are created using the online Book interface. Not a great way to create content.
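For the record, here’s a minimal sketch of how those percentages can be derived. The actual analysis was done in Perl against the Moodle database; this Python version, and the (course_id, book_id, importsrc) row shape standing in for rows from the chapters table, are assumptions for illustration only:

```python
def import_stats(chapters):
    """chapters: iterable of (course_id, book_id, importsrc) tuples,
    one per chapter; importsrc is non-empty when the chapter was imported.
    Returns (imported, total) pairs at chapter, book and course level."""
    imported = [c for c in chapters if c[2]]
    books = {(c[0], c[1]) for c in chapters}
    courses = {c[0] for c in chapters}
    imp_books = {(c[0], c[1]) for c in imported}
    imp_courses = {c[0] for c in imported}
    return {
        "chapters": (len(imported), len(chapters)),
        "books": (len(imp_books), len(books)),
        "courses": (len(imp_courses), len(courses)),
    }
```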

How many chapters per book?

The next step is to have a look at how long each book is, based on the number of chapters. This isn’t a great indication of length because each chapter is simply a web page: it could be quite short or quite long.

The following graph shows the number of chapters in every book grouped by year. Overall the number of chapters stays pretty much the same.  However, there are a couple of strange outliers tending toward 100 chapters in a book. The median number of chapters per book has increased from 6 in 2012 to 8 in 2015.

chapters per book per year

The total number of books shown in the above graph for each year is a bit out from earlier data. I will need to come back to these analyses and nail down which courses/books are counted in each analysis.
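A hedged sketch of the chapters-per-book calculation above. Again, the real analysis used Perl against the Moodle database; the (year, book_id) row shape below, one row per chapter, is an assumption:

```python
from statistics import median

def median_chapters_per_year(chapter_rows):
    """chapter_rows: iterable of (year, book_id) pairs, one per chapter.
    Returns {year: median number of chapters per book}."""
    counts = {}
    for year, book in chapter_rows:
        books = counts.setdefault(year, {})
        books[book] = books.get(book, 0) + 1
    return {year: median(books.values()) for year, books in counts.items()}
```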

How many words in each book?

To get a better idea of the size of books the aim here is to convert the chapter content to plain text and do some analysis of the text.  This is where the beauty of Perl (confirmation bias) comes to the fore.  There’s a module for that.
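I won’t reproduce the Perl here, but a rough standard-library Python equivalent of the convert-to-text-and-count step might look something like this (a sketch, not the module actually used):

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect the visible text of a chapter, skipping script/style."""
    def __init__(self):
        super().__init__()
        self.chunks = []
        self._skip = 0
    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1
    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1
    def handle_data(self, data):
        if not self._skip:
            self.chunks.append(data)

def word_count(html):
    parser = TextExtractor()
    parser.feed(html)
    return len(" ".join(parser.chunks).split())
```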

The following graph maps the number of words for each book by year. It shows that in 2014 and 2015 books were certainly getting longer. The median went from 1157 words per book to 1718 (with a dip in 2013 back to 1004). The upper limit moved from 5282 words per book to 6930. Scarily, there are outlier books that are approaching (and in some cases passing) 60,000 words in length.

To give you some idea of read time, I’ll use Medium’s method for calculating read time (ignoring images) to convert the numbers into minutes to read:

  • Around the median word count – 1700 words – equates to about 6.1 minutes.
  • The maximum upper word count – 6930 words – equates to about 25.2 minutes.
  • The outliers – around 60,000 words – equates to about 218.2 minutes, which is approaching 4 hours.
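Medium’s calculation (ignoring images, as here) boils down to dividing the word count by the stated average reading speed:

```python
AVERAGE_WPM = 275  # Medium's stated average adult reading speed

def read_time_minutes(words, wpm=AVERAGE_WPM):
    """Estimated minutes to read `words` words, ignoring images."""
    return words / wpm
```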

Adding to this, I’m not sure the typography and design of your typical Moodle Book is going to match what you might expect on Medium. Not to mention that Medium doesn’t say whether their average adult reading speed (275 words per minute) applies to print or screen.

words per book per year


The module that calculates words also does readability tests, including the Flesch reading-ease test. The following graph shows the results on that test for each of the books grouped by year.

Grain of salt – The graph excludes a number of books that achieved negative results on the test. Initially, it appears that this may be due to the text-only conversion not handling some special characters, which worsens the readability score. (Apparently it is possible to get a negative value on the test.) This may also be decreasing the “reading ease” of other books. This will be examined more closely later. But then again, quoting Wikipedia:

While Amazon calculates the text of Moby Dick as 57.9,[9] one particularly long sentence about sharks in chapter 64 has a readability score of −146.77.

The median moves between 43.7 and 47.3, which is apparently around the 45 that Florida law requires for life insurance policies (thank you again Wikipedia). However, the lower bound loiters around 5, suggesting text that is very difficult to read. Wikipedia suggests 30 to 50 as the range for “college”, being difficult to read.

flesch per book per year
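For reference, the standard Flesch reading-ease formula the module implements is 206.835 − 1.015 × (words/sentences) − 84.6 × (syllables/words), which is how a single very long sentence can drive the score negative. A sketch:

```python
def flesch_reading_ease(words, sentences, syllables):
    """Standard Flesch reading-ease score; lower = harder to read.
    Very long sentences (large words/sentences ratio) can make it negative."""
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)
```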

And my books?

Which has me wondering about mine. I suspect I’ve developed a tendency toward difficult-to-read writing. The following graph shows the distribution for the latest offering of my main course contained in the data set.

That’s a nice-ish surprise. Median at 60. Worst is 40 and best is 77. Better than 75% of the books are above 50, the lower bound of the 10th to 12th grade band.

However, I believe these results may be a little padded by the fact that I write most of my books in straight HTML, meaning there’s no artificial increase in complexity from the difficulty of converting them into clean text.
EDC3100 S2 2015 readability
Which has me wondering about the evolution of readability. The following graph shows the results from all offerings of the course that used the Book. A bit of a dip at the start with a small upward trend over time. Not bad – but then of limited use given the limitations of this type of measure.

edc3100 readability through the ages

What about links – links per book?

One of the questions I’d like to answer is whether the people using the Book are using it as a poor man’s replacement for a collection of paper documents, or as a collection of web pages. A first exploration of this question is a rough indicator: how many links per book?

The following graph shows the number of links per Book per year. “Link” is defined here as any type of link, excluding a link to a style sheet. That means links to images, youtube videos etc are all counted as links.

As the graph shows there are a large number of books that have no links. The median number of links is increasing each year, starting at 11 in 2012 and moving through 13, 14, and finally 17 in 2015. As the graph shows there are some major outliers, with some Books having hundreds of links, including some with over 500. These might include some of the very long books mentioned above, but they might also include other books that simply contain huge numbers of links.

In terms of books with very few links: in 2012, 15.4% of the books had fewer than 3 links (remember that includes images, embedded videos etc), with 2014 having 16.1% and 2015 having 15.3%.

num links per book per year
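A sketch of the link counting described above (in Python rather than the Perl actually used; the set of tags treated as links is my reading of the definition, not the exact code):

```python
from html.parser import HTMLParser

class LinkCounter(HTMLParser):
    """Count links as defined in the post: any link-like tag except a
    stylesheet <link>. Images and embedded media all count as links."""
    LINK_TAGS = {"a", "img", "iframe", "embed", "object", "link"}

    def __init__(self):
        super().__init__()
        self.count = 0

    def handle_starttag(self, tag, attrs):
        if tag not in self.LINK_TAGS:
            return
        if tag == "link" and dict(attrs).get("rel") == "stylesheet":
            return  # CSS links are excluded from the count
        self.count += 1

def count_links(html):
    counter = LinkCounter()
    counter.feed(html)
    return counter.count
```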

Links per book in EDC3100?

For a quick comparison, the following graph shows the number of links per Book for EDC3100 (the main course in which I use the Book). Over time I have been explicitly trying to think of the Book resources as collections of web pages.

The median # of links per book for all courses moved from 11 to 17. In EDC3100, the median has moved from 14 at its lowest (2013 S2 – a bad semester for links) up to 30 in 2015 (both semesters).  Similarly, the upper range for all courses ranged from 46 to 74 (driven by some truly large link numbers), for EDC3100 the upper range went from 43 in (2013 S2) up to 111 in 2015.

EDC3100 books links

Exploring types of links a bit more

The above couple of link graphs are limited because I haven’t yet explored the diversity of link types included. I had removed CSS links, but not script links. I also haven’t split apart the different types of links – an examination which might shed some light on those strange books with 500+ links. Time then to explore.

The plan: identify the different types of links and generate stats for all the types, but when counting links, limit the count to the more standard types (img/a).

Types of link to exclude from the count: iframe, embed, object, meta – and handle the link element better.

The presence of a <meta name="generator" ...> tag looks like being one way of identifying chapters that came from Word.

Cleaning up the links does bring the numbers down a bit, e.g. the median for 2015 goes from 17 to 15, but the other medians stay the same. The upper bound for 2013 onward comes down by 1 to 3.

What about the 500+ books? What are those links?

I’m interested in the books that have 500+ links.  What are they linking to?

One book with 517 links has 510 <a links and 7 <img links. What are those 510 <a links?

Lots of internal links and all sorts of other links – other book chapters, readings. Looks like it might be a large book, is it?

29 chapters and 32,871 words – so a big, all-in-one book.

Exploring Moodle Book usage – Part 5 – more staff and student use

Continuing the exploration of how the Moodle Book module is being used, this post picks up from the last and will

  • Revisit who is updating/creating books, including data from the second half of 2015.
  • Explore the balance of all actions (print/view/update) by staff.
  • Explore the balance of all actions by students.

Who is updating/creating books

The last post included a graph that showed generally (apart from two course offerings) that the core teaching staff appear to be doing the creation of books.  That graph had a few problems, including

  • Limited data from the 2nd half of 2015.
    Due to the switch in how Moodle logged events.  Need to handle the new log format.
  • Didn’t handle all roles.
    Appears there are some non-standard Moodle roles that the previous query didn’t handle.
  • Handling deleted books and chapters.
    I believe this is an issue for the new logging process, which has connections back into the book and book chapters tables. This works nicely until books/chapters are deleted.

With those changes fixed, the following graph emerges showing how many times each of the roles updated a Book resource across all courses. The changes between the following graph and the same graph in the last post include:

  • Significant increase in the number of updates for most roles (e.g. examiner up from 21968 to 31343; assistant examiner has almost doubled from 5144 to 10708)
  • Addition of the UNKNOWN role not in the previous graph

It should be noted that the following graphs do not include ~20K updates that I did in one course in one semester.

All book updates by role

And I thought it would be interesting to break down the updates by year to see if there was any growth. Given the growth in the number of courses using the Book (17 in 2012 to 152 in 2015), some growth was always to be expected.

Book updates by role by year

The graph above shows examiners making 2152 updates in 2012 and 13649 in 2015. That’s a 6.3 times growth in the number of updates against a 12.6 times growth in the number of examiners making them. Or, alternatively, in 2012 a course examiner (on average) made 179 updates. In 2015 a course examiner (on average) made 90 updates.

This suggests that examiners are making fewer updates each, perhaps farming out the updating to other staff. The growth in edits by the moderator and assistant examiner roles in 2014 and 2015 suggests as much. But more exploration is required.

Role balance of actions

Updating/creating is not the only action that can be performed on a Book; you can also view and print parts or all of a Book resource. This step aims to explore the balance of actions each of the roles is involved with.

For this purpose I’ve grouped log events into the following actions someone can perform on a Book

  • view – view a chapter or the entire book online
  • print – print a chapter or entire book
  • modify – delete or update a chapter/book
  • create – create or add a chapter or book
  • export – use the export to IMS option

The above updating/creating graphs include both modify and create actions.
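As a sketch of this grouping – note the event/action names below are hypothetical stand-ins, since Moodle’s actual logged action strings differ between the old and new logging formats:

```python
# Hypothetical mapping from logged event names to the five action groups
# used in this analysis; real Moodle log action strings may differ.
ACTION_GROUPS = {
    "view": "view", "view all": "view",
    "print": "print", "print chapter": "print",
    "update": "modify", "update chapter": "modify", "delete chapter": "modify",
    "add": "create", "add chapter": "create",
    "exportimscp": "export",
}

def group_action(logged_action):
    """Map a raw logged action to its analysis group, or 'unknown'."""
    return ACTION_GROUPS.get(logged_action, "unknown")
```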

The table below shows the total events on all books by all roles from 2012 through 2015. It shows that viewing the book is by far the most prevalent action, accounting for 97.6% of all actions.

Interestingly, at least for me, the percentage of modifications (1.1%) exceeds the percentage for printing (0.9%). I assumed this was due to my outlier behaviour in 2015 in modifying a huge number of chapters. Indeed it is: the numbers in brackets in the table show a recalculation with that outlier taken out.

Action   # actions       %
View     5040285         97.6 (98)
Print    46162           0.9 (0.9)
Modify   56754 (35867)   1.1 (0.7)
Create   18537           0.4 (0.4)
Export   1               ~0.00002

Given the preponderance of viewing, the by-role graphs tend to be a little less than useful. But the following sections look at usage by students and examiners.


Student usage

The graph below shows the spread of actions by students with the books. It shows that the most common action performed by students is viewing books. The table following the graph provides the raw data for the graph.

Student actions by year

Both this table and the one below for examiners show no print actions for 2012. This suggests a bug in the analysis.

Another interesting point is the dip in printing between 2014 and 2015. Even though the number of courses using books, and the number of views by students on books, increased from 2014 to 2015, the number of print actions dropped. I wonder if this has anything to do with the large number of modify/create actions by students in 2015. Were the books created by students less likely to be printed?

Year   View      Print   Modify   Create
2012   386101    -       41       2
2013   812133    4487    -        -
2014   1447190   20310   -        -
2015   1967047   15198   1335     28


Examiner usage

The graph below shows the spread of actions by examiners with the books. The table following the graph provides the raw data for the graph.

The relative increase of modify/create actions by examiners between 2014 and 2015 is another indication of the 20000 updates I performed in 2015.

Examiner actions by year

Views and prints by examiners both drop between 2014 and 2015.

Year   View    Print   Modify   Create
2012   7193    -       2072     80
2013   26774   105     4850     495
2014   35855   647     8364     1833
2015   35185   452     26790    7746


Further questions to explore

  • What are the UNKNOWN roles?
  • How are the updates and other actions shown above distributed between users? Are a small number of users making up the lion’s share of the actions (e.g. me and updates in 2015; and the one or two courses that had students updating books)?
  • How many chapters do each student read? What about printing? Do they print and read online?
  • What is happening with print actions in 2012? Was there really no-one printing books?
  • Were the books created by students less likely to be printed? Did this account for the drop in print actions by students between 2014 and 2015? If not, what did?
  • Remove my 2015 outlier actions from the examiner actions graph and see what changes are made.

Exploring frameworks to understand OER/OEP

Some colleagues and I are re-starting an exploration of OEP in Initial Teacher Education (ITE). A first task is an attempt to get a handle on what has been done and what is known about OEP/OER. Yes, we’re looking for spectrums/frameworks/models etc that help map out what might be done with OEP/OER. We’re interested in using this to understand what’s been done around OEP within ITE and also what we’ve already done.

The following is a summary of a quick lit review. No real structure and includes a range of strange notes.

OER adoption: a continuum for practice

Stagg (2014) offers the following continuum of practice

The proposed model seeks to acknowledge the complexity of applied knowledge required to fulsomely engage with open education by examining practitioner behaviours and the necessary supporting mechanisms. This conceptual model aims to be of use to both practitioners and also those responsible for designing professional development in an educational setting.

A continuum of practice - OEP

A Google Scholar search reveals some use of this continuum.

Including Falconer et al (2016), which includes

We view our fourth category, enhancing pedagogy, as fundamentally different to that of producing high quality materials efficiently or cost effectively, in that it is underpinned by altruistic positions rather than a business model approach. It puts its emphasis on the value of the OER development process, rather than on the value of the OER content produced. (p. 99)

Through our analysis, some fundamental tensions have become apparent that will need to be resolved if the purposes of OER release are to be realised. (p. 101)

The limits imposed by a reputation-building motive are exacerbated at present as higher education institutions are encouraged to become increasingly competitive, elevating the importance of brand recognition. The consequence is a move away from risk-taking, towards a demand for predictable quality outcomes. This discourages innovation unless direct benefits can be proven in terms of new markets, student numbers, or shared costs of development and teaching. The benefits of OER in terms of institutional showcasing and attracting potential students, may prove attractive to institutional managers and gain institutional support for OER, but unless culture changes, they place inherent limitations on efficiency gains and the adoption of more open practices which are ultimately founded on a commitment to academic commons. (p. 102)

And develops some frameworks/continuums

Framework for assessing OER implementation strategies


A continuum of openness

Assessing the potential for openness

Stagg (2014) is also cited by Judith and Bull (2016)

While this literature has been significant in driving forward the open agenda, there has been relatively little published about the practicalities of implementing openly licensed materials in higher education courses (p. 2)

which raises the question of just how much more difficult implementing open educational practices is going to be, i.e. if just sharing materials is hard enough.

OER engagement ladder

Masterman and Wild (2013) bring in the OER engagement ladder, which is talked about more in this blog post. (Interestingly, the institutional repository URL for the full research report is now broken, but blog posts and Slideshare resources remain.)

OER engagement ladder


Falconer, I., Littlejohn, A., McGill, L., & Beetham, H. (2016). Motives and tensions in the release of Open Educational Resources: the JISC UKOER programme. Australasian Journal of Educational Technology, 32(4), 92–105. doi:10.14742/ajet.2258

Judith, K., & Bull, D. (2016). Assessing the potential for openness: A framework for examining course-level OER implementation in higher education. Education Policy Analysis Archives, 24(42). doi:10.14507/epaa.24.1931

Masterman, L., & Wild, J. (2013). Reflections on the evolving landscape of OER use. Paper presented at OER13: creating a virtuous circle, Nottingham, UK

Stagg, A. (2014). OER adoption: a continuum for practice. Universities and Knowledge Society Journal, 11(3), 151 – 164. doi:10.7238/rusc.v11i3.2102

Exploring Moodle book usage – part 4 – students and initial use

Yesterday’s part 3 in this series of posts continued the higher level examination of book usage. i.e. what types of courses use the Moodle Book module (the Book). This post is going to continue that a little and then start to make some forays into looking more closely at how resources produced using the Book are actually used. In particular, it’s going to look at the following:

  • Compare the number of online students in courses that use the Book, versus those that don’t use the Book.
  • Who is actually creating and revising the Book resources?

At this stage, I’m not sure if I can answer these questions with the data I have to hand.

Yep, that worked. Still a fair bit to do; the next post(s) will:

  • Revisit the staff usage of the Books to include more recent data and fix some of the other limitations of what’s below.
  • Start exploring how (if?) students are using the Books.

Identifying type of students in courses

The last post identified that the Book is generally used in larger courses. A possible implication of this is that the Book is more likely to be used if the course has distance education/online students. The thinking here is that such courses have historically had print-based study guides, which could be converted into the Book module, and that on-campus courses more typically rely on lectures and tutorials as the primary teaching method. This links directly back to the idea of horsey, horseless carriage thinking.

To explore this further I need to identify whether or not the current data set will allow me to identify the types of students… it turns out group allocation allows this.

Plotting the number of online students enrolled in courses using the Book gives the following graph. It shows that the number of online students in courses using the Book was initially quite low. For example, in 2012, 50% of courses using the Book had fewer than 4 online students, and many of that 50% had none. In fact, the only courses using the Book in the first half of 2012 had no online students.

However, over time the number of online students in courses using the Book increased. In 2015, though, there remained a large number of these courses that had few if any online students.

online Students

Rather than focusing on the number of online students in courses using the Book, the following graph focuses on the percentage of online students in those same courses. It shows that in 2015 there was a significant increase in courses with higher percentages of online students starting to use the Book. Before that, a majority of courses using the Book had less than 20% online students. 2012 appears to have included only 1 course that had online students – the big outlier with 100%.

Online percentage students

For me this raises a couple of interesting questions

  • How and why did the courses with 0% online students use the Book?
    The use of the Book by these courses challenges my assumption.
  • Why does 2015 appear to have been a turning point for using the Book by courses with higher percentage of online students?
    My current guess is that this correlates with the cessation of the previous method for placing traditional print-based study guides online. When that tool stopped, courses had to look for an alternative.

Who is creating the book resources?

My experience is that creating resources using the Book module is not necessarily a straightforward process. I’ve kludged together various tools and practices to reduce the difficulty, but I’ve heard other staff give up on using the Book because they couldn’t overcome it. This has me wondering who has created the Book resources in other courses, and how.

Answering this question requires taking a closer look at who is doing what with the Book resources, which requires a bit of work. It’s also the foundation for most of the subsequent interesting analysis.

As a result of getting this working, some interesting questions suggested themselves:

  • For each course, how many “events” happen around the books in those courses?
  • What percentage of events for the whole course, do those book events represent?
  • What about breaking those events down into read, change, and print?

The first rough cut at answering the question is given in the following graph. It shows the number of update events associated with Book resources grouped by each user role. It apparently shows that the core teaching staff (examiner, moderator and assistant examiner) are making most of the updates.  Interestingly, the student role is next in line in terms of number of updates. But there are some insights/limits/caveats to this graph.
Book updates by role - 2012 to 2015
The insights/limits/caveats include

  • The 1376 updates made by students are from two courses only, with one course accounting for 1335 (~97%) of the updates – indicating a specific pedagogical choice in that course.
  • There was one course offering where the idiot examiner (i.e. me) almost doubled the number of updates by examiners. This offering has been excluded from the above graph.
  • The notion of an “update” event doesn’t provide any indication of how much was updated/created. It might be as simple as deleting a character, or as large as importing a whole new book.
  • The above data (so far) does not include data from second half of 2015 when the new Moodle event logging was implemented.
  • The mapping between old and new style logging needs to be smoothed out
  • There are events that aren’t logged for the book (e.g. this tracker item).
  • The mapping of logged events to changes to the book needs to be rechecked.


On the value or otherwise of SAMR, RAT etc.

Updated 30 August, 2016: Added mention of @downes’ pointers to peer-reviewed literature using SAMR. This evolved into a small section.

There definitely seems to be a common problem when it comes to thinking about evaluating the use of digital technology in learning and teaching. Actually, there are quite a few, but the one I’m interested in here is how people (mostly teachers, but students as well – and perhaps organisations too) perceive what they are doing with digital technology.

This is a topic that's been picked up recently by some NGL folk as the course has pointed them to the SAMR model (originally), but now to the RAT model. Both are acronyms/models originally intended to help people introducing digital technology into teaching self-assess what they've planned: to actively think about how the introduction of digital technology might change (or not) what learners and teachers are doing. The initial value of these models is to help people and organisations avoid falling into this pitfall when applying digital technology to learning and teaching.

SAMR has a problem

SAMR has received a lot of positive attention online, but there are also some negative reactions coming to the fore. One example is this open letter written to the SAMR creator that expresses a range of concerns. The open letter is also picked up in this blog post titled SAMR: A model without evidence. Both these posts and the comments on them suggest that SAMR appears to have been based on, or informed by, the work of Hughes, Thomas and Scharber (2006) on the RAT framework/model.

A key problem people have with SAMR is the absence of a theoretical basis and peer-reviewed literature, something the RAT model does have. This is one of the reasons I've moved away from using SAMR toward using the RAT model. It's also the reason why I'll ignore SAMR and focus on the RAT model.

SAMR and literature

Update: @downes points to a collection of literature that includes the SAMR model. This addresses the question of whether there is peer-reviewed literature using SAMR, but it's less clear whether it addresses the perceived (and arguable) need for a "theoretical basis" to underpin SAMR. Most of the literature I looked at used the SAMR model for the same purpose I've used it, the RAT model and the Computer Practice Framework (CPF): as a method for evaluating what was done.

A related Google Scholar search (samr Puentadura) reveals a range of additional sources, but that search also reveals the problem of misspelling the SAMR author's surname. A better search (samr Puentedura) reveals material from the author and their related citations. However, this search also reveals the weakness identified in the open letter mentioned above: the work developing/sharing the SAMR model by Puentedura is only visible on his website, not in peer-reviewed publications.

Whether this is a critical weakness is arguable. For me, it's sufficient to prompt a search for something that does a similar job without that weakness.

What is the RAT model for?

The “model without evidence” post includes the following:

SAMR is not a model of learning. There is no inherent progression in the integration of technology in learning within SAMR. Using SAMR as a model for planning learning and the progression of learning activities is just plain wrong.

The same could be said for the RAT model, but then the RAT model (and I believe SAMR) were never intended to be used as such. On her #ratmodel page Hughes offers this:

The original purpose of the RAT framework was to introduce it as a self-assessment for preservice and inservice teachers to increase critical technological decision-making.

The intended purpose was for an educator to think about how they've used digital technologies in a learning activity they've just designed. It's a way for them to think about whether or not they've used digital technologies in ways that echo the cartoon above. It's a self-reflection tool, a way to think about the use of digital technologies in learning.

It's not hard to find talk of schools or school systems using SAMR as an evaluation framework for what teachers are doing. I'm troubled by that practice; it extends these models beyond self-reflection. In particular, such use breaks the "best practices and underlying assumptions for using the R.A.T model" from Hughes (emphasis added):

  1. The R.A.T. categories are not meant to connote a linear path to technology integration, such as teaching teachers to start with R activities, then move to A and ultimately T. Rather, my research shows that teachers will have an array of R, A, and T technology integration practices in their teaching. However, T practices seem more elusive.
  2. The key to Transformative technology integration is opportunities for teachers to learn about technology in close connection to subject matter content. For example, supporting subject-area teachers learning in a PLC across a year to explore subject area problems of practice and exploration of digital technology as possible solutions.
  3. Discrete digital technologies (e.g., Powerpoint, an ELMO, GIS software) can not be assessed alone using the R.A.T. model. One needs rich instructional information about the context of a digital technology’s use in teaching and learning to begin a RAT assessment. Such rich information is only known by the practitioner (teacher) and explains why the model supports teacher self-assessment. For use in research, the RAT model typically requires observations and conversations with teachers to support robust assessment.

It’s not the technology, but how you use it

Hughes' third point above (the one about discrete digital technologies) is why I've grown to dislike aspects of diagrams like the Padagogy Wheel pointed to by Miranda.

Whether you are replacing, amplifying, or transforming (RAT model), or remembering, analysing, creating, understanding etc. (Bloom's Taxonomy), does not arise from the technology. It arises from how the technology is used by those involved; it's what they are doing that matters.

For example, one version of the padagogy wheel suggests that Facebook helps “improve the user’s ability to judge material or methods based on criteria set by themselves or external sources” and thus belongs to the Evaluate level of Bloom's taxonomy. It can certainly be used that way, but whether or not how I've used it in my first lesson from today meets those criteria is another matter entirely.

The problem with transformation

Transformation is really, really hard. For two reasons.

The first is understanding the difference between amplification and transformation. Forget about learning; it appears difficult for people to conceive of transformation in any context. I try to help a bit by comparing the print-based encyclopedia with Encarta (replacement) and Wikipedia (transformation). Both Encarta and Wikipedia use digital technologies to provide an “encyclopedia”; however, only Wikipedia challenges and transforms some of the fundamental assumptions of “encyclopedia”.

The second is related to the horsey horseless carriage problem. The more familiar you are with something, the harder it is to challenge the underlying unwritten assumptions of that practice. I'd suggest that the more involved you were with print-based encyclopedias, the harder it was to see value in Wikipedia.

It's made that much harder if you don't really understand the source of transformation. People who aren't both highly digitally literate and deeply knowledgeable about learning/teaching/context will find it hard to conceive of how digital technologies can transform learning and teaching.

What do you compare it against?

To decide if your plan for using digital technologies for learning is an example of replacement, amplification or transformation, most people will compare it against something. But what?

In my undergraduate course, I ask folk to think about what the learning activity might look like/be possible if there wasn’t any digital technology involved. But I wonder whether this is helpful, especially into the future.

Given the growing prevalence of digital technologies, at what stage does it make sense to think of a learning activity as not involving some form of digital technology?

I wonder whether this is part of the reason why Angela lists the use of the Internet for research as Substitution.

Amplification, in the eye of the beholder?

Brigitte connects to Angela's post and mentions a recent presentation she attended where SAMR (and, I believe, the Technology Acceptance Model) were used to assess/understand e-portfolios created by student teachers. Brigitte reports that how students perceived their own technical skills influenced their self-evaluation against the SAMR model.

For example, a student with low technical skills might place themselves at the Substitution level in terms of creating an e-portfolio, yet what they produced might be classified as sitting at the Modification or even Redefinition level when viewed by the assessors. Conversely, a student might classify themselves at Redefinition, but their overconfidence with the tool, rather than their skill level, meant they produced something only at the Substitution level.

I wonder how Brigitte’s identification of her use of a blog for reflecting/sharing as being substitution connects with this?

Focus on the affordances

Brigitte identifies her blog-based reflective practice as being substitution. Typically she would have been using other digital technologies (email, discussion boards) and face-to-face discussions to do this, and for her there is no apparent difference.

However, I would argue differently. I would point to particular advantages/differences of the blog that offer at least some advantage, but also potentially change exactly what is being done.

A blog – as used in this case – is owned by the author. It's not hosted by an institution etc. Potentially a blog can help create a greater sense of identity, ownership etc. Perhaps that greater sense of ownership creates more personal and engaged reflections. It also offers one way to react to the concerns over learning analytics and privacy Brigitte has raised elsewhere.

The blog is also open. Discussion boards, email, and face-to-face discussions are limited in space and time to those people allowed in. The blog is open in both space and time (maybe). There's no limit on how, why and who can connect with the ideas.

But this brings up an important notion of an affordance. Goodyear, Carvalho and Dohn (2014) offer the following on affordances:

An assemblage of things does not have affordances per se; rather, it has affordances in relation to the capabilities of the people who use them. These evolve over time as people become better at working with the assemblage. Affordance and skill must be understood, not as pre-given, but as co-evolving, emergent and partly co-constitutive (Dohn, 2009). (p. 142)

Just because I might see these affordances/advantages, it doesn’t mean that Brigitte (or anyone else) will.

Does that mean I'm right and Brigitte is wrong? Does it mean that I've failed in the design of the NGL course to provide the context/experiences that would help Brigitte see those affordances? Does this mean that there is no right answer when evaluating a practice with something like the RAT model?

Should you be doing it at all?

Of course, the RAT (or SAMR) models don’t ask the bigger question about whether or not you (or the learners) should really be doing what you’re doing (whether with or without digital technologies).

A good current example would appear to be the push within Australia to put NAPLAN online. The folk pushing it have clearly identified what they think are the benefits of doing NAPLAN with digital technologies, rather than old-school pen(cil) and paper. As such it is an example (using the RAT model) of amplification: there are perceived benefits.

But when it comes to standardised testing – like NAPLAN – there are big questions about the practice itself. Just one example is the question of how comparable the data is across schools and years. The question about comparability is especially interesting given research that apparently shows:

The results from our randomised experiment suggest that computer devices have a substantial negative effect on academic performance



Goodyear, P., Carvalho, L., & Dohn, N. B. (2014). Design for networked learning: framing relations between participants’ activities and the physical setting. In S. Bayne, M. de Laat, T. Ryberg, & C. Sinclair (Eds.), Ninth International Conference on Networked Learning 2014 (pp. 137–144). Edinburgh, Scotland. Retrieved from

Hughes, J., Thomas, R., & Scharber, C. (2006). Assessing Technology Integration: The RAT – Replacement, Amplification, and Transformation – Framework. In C. Crawford, R. Carlsen, K. McFerrin, J. Price, R. Weber, & D. A. Willis (Eds.), Society for Information Technology & Teacher Education International Conference 2006 (pp. 1616–1620). Orlando, Florida: AACE. Retrieved from