Exploring “post adoptive usage” of the #moodle Book module – a draft proposal

For quite some time I’ve believed, based on experience, that the way universities implement digital learning has issues that contribute to perceived problems with the quality of that learning and its associated teaching. The following is an outline of an exploratory research project intended to confirm (or not) aspects of this belief.

The following is also thinking out loud and a work in progress. Criticisms and suggestions welcome. Fire away.

The topic of interest

Like most higher education institutions across the globe, Australian universities have undertaken significant investments in corporate educational technologies (Holt et al., 2013). If there is to be any return on any investment in information technology (IT), then it is essential that the technologies are utilised effectively (Burton-Jones & Hubona, 2006). Jasperson, Carter and Zmud (2005) suggest that the potential of most information systems is underutilised and that most “users apply a narrow band of features, operate at low levels of feature use, and rarely initiate extensions of available features” (p. 525).

While Jasperson et al (2005) are talking broadly about information systems, it’s an observation that is supported by my experience and is likely to resonate with a lot of people involved in university digital/e-learning. It certainly seems to echo the quote from Prof Mark Brown I’ve been (over) using recently about e-learning

E-learning is a bit like teenage sex. Everyone says they’re doing it but not many people really are and those that are doing it are doing it very poorly (Laxon, 2013)

Which begs the question, “Why?”.

Jasperson et al (2005) suggest that without a rich understanding of what people are doing with these information systems at “a feature level of analysis (as well as the outcomes associated with those behaviours)” after the adoption of those systems, then “it is unlikely that organizations will realize significant improvements in their capability to manage the post-adoptive life cycle” (p. 549). I’m not convinced that the capability of universities to manage the post-adoptive life cycle is as good as it could be.

My experience of digital learning within Universities is that the focus is almost entirely on adoption of the technology. A lot of effort is placed into deciding which system (e.g. LMS) should be adopted. Once that decision is made that system is implemented. The focus is then on ensuring people are able to use the adopted system appropriately through the provision of documentation, training, and support. The assumption is that the system is appropriate (after all it wouldn’t have been adopted if it had any limitations) and that people just need to have the knowledge (or the compulsion) to use the system.

There are only two main types of changes made to these systems. First are upgrades: when a new version of the adopted system is released, the institution upgrades to maintain currency. Second are strategic changes: senior management wants to achieve X, the system doesn’t do X, so the system is modified to do X.

It’s my suggestion that changes to specific features of a system (e.g. an LMS) that would benefit end users are either

  1. simply not known about; or,
    Due to the organisation’s lack of any ability to understand what people are experiencing and doing with the features of the system.
  2. starved of attention.
    These are complex systems, so changing them is expensive. Thus only strategic changes can be made. Changes that fix features used by small subsets of people can never be seen as passing the cost/benefit analysis.

I’m interested in developing a rich understanding of the post-adoptive behaviours and experiences of university teachers using digital learning technologies. I’m working on this because I want to identify what is being done with the features of these technologies and understand what is working and what is not. It is hoped that this will reveal something interesting about the ability of universities to manage digital technologies in ways that enable effective utilization and perhaps identify areas for improvement and further exploration.

Research Questions

From that, the following research questions arise.

  1. How do people make use of a particular feature of the LMS?
    Seeking to measure what they actually did when using the LMS for actual learning/teaching. Not what they say they did, or what they intend to do.
  2. In their experience, what are the strengths and weaknesses of a particular feature?
    Seeking to identify what they thought the system did to help them achieve their goal and what the system made harder.

Following on from Jasperson et al (2005) the aim is to explore these questions at a feature level. Not with the system as a whole but with how people are using a specific feature of the system. For example, what is their experience of using the Moodle Assignment module, or the Moodle Book module?

Thinking about the method(s)

So how do you answer those two questions?

Question 1 – Use

The aim is to analyse how people are actually using the feature. Not how they report their use, but how they actually use it. This suggests at least two methods

  1. Usability studies; or,
    People are asked to complete activities using a system within a controlled environment that captures their every move, including tracking the movement of their eyes.

    On the plus side, this captures very rich data. On the negative side, I don’t have access to a usability lab. There’s also the potential for this sort of testing to be removed from context. First, the test occurs in a lab, a different location from where the user typically works. Second, in order to enable between-user comparisons it can rely on “dummy” tasks (e.g. the same empty course site).

  2. Learning analytics.
    Analysing data gathered by the LMS about how people are using the system.

    On the plus side, I can probably get access to this data and there are a range of tools and advice on how to analyse it. On the negative side, the richness of the data is reduced. In particular, the user can’t be queried to discover why they performed a particular task. (A rough sketch of this option follows the list.)
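To make the learning analytics option concrete, here is a minimal sketch (not the project’s actual analysis) of the kind of query involved. It assumes a MySQL-backed Moodle 2.7+ install with the standard mdl_ table prefix and the standard log store; the connection details are placeholders.

```python
# Sketch: per-course, per-month counts of Book module events from
# Moodle's standard log store. Connection details are placeholders.
import pymysql

QUERY = """
SELECT courseid,
       FROM_UNIXTIME(timecreated, '%Y-%m') AS month,
       action,
       COUNT(*) AS events
FROM mdl_logstore_standard_log
WHERE component = 'mod_book'
GROUP BY courseid, month, action
ORDER BY courseid, month
"""

def book_usage(connection):
    """Return rows of (courseid, month, action, event count)."""
    with connection.cursor() as cursor:
        cursor.execute(QUERY)
        return cursor.fetchall()

if __name__ == "__main__":
    conn = pymysql.connect(host="localhost", user="reader",
                           password="...", db="moodle")
    for row in book_usage(conn):
        print(row)
```

Each row is a count of a particular action (created, updated, viewed etc.) on the Book within a course for a month, which is enough to start asking who is creating and maintaining Books, and when.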

Question 2 – Strengths and Weaknesses

This is where the user voice enters the picture. The aim here is to find what worked for them and what didn’t within their experience.

There appear to be three main methods

  1. Interviews;
    On the plus side, rich data. On the negative side, “expensive” to implement and scale to largish numbers and a large geographic area.
  2. Surveys with largely open-ended questions; or,
    On the plus side, cheaper, easier to scale to largish numbers and a large geographic area etc. On the negative side, more work on the part of the respondents (having to type their responses) and less ability to follow up on responses and potentially dig deeper.
  3. LMS/system community spaces.
    An open source LMS like Moodle has openly available community spaces in which users/developers of the system interact. Some of the Moodle features have discussion forums where people using the feature can discuss. Content analysis of the relevant forum might reveal patterns.
    The actual source code for Moodle as well as plans and discussion about the development of Moodle occur in systems that can also be analysed.
    On the plus side, there is a fair bit of content in these spaces and there are established methods for analysing them. Is there a negative side?
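As a hedged illustration of the third option, the sketch below pulls discussion titles from a moodle.org forum page and counts frequent terms. The forum id and the HTML selector are assumptions that would need checking against the live pages; this is a starting point for content analysis, not an established instrument.

```python
# Sketch: frequent terms in discussion titles from a moodle.org forum.
# FORUM_ID and the CSS selector are assumptions, not verified values.
from collections import Counter

import requests
from bs4 import BeautifulSoup

FORUM_URL = "https://moodle.org/mod/forum/view.php?id=FORUM_ID"

def discussion_titles(url):
    """Scrape the discussion titles from one page of a forum listing."""
    html = requests.get(url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    # Assumes titles are links within the discussion-list table cells.
    return [a.get_text(strip=True) for a in soup.select("td.topic a")]

def term_counts(titles):
    """Count words longer than three characters across all titles."""
    counts = Counter()
    for title in titles:
        counts.update(word.lower() for word in title.split() if len(word) > 3)
    return counts

if __name__ == "__main__":
    print(term_counts(discussion_titles(FORUM_URL)).most_common(20))
```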

What’s currently planned

Which translates into an initial project that is going to examine usage of the Moodle Book module (Book). This particular feature was chosen because of this current project. If anything interesting comes of this, the next plan is to repeat a similar process for the Moodle Assignment module.

Three sources of data to be analysed initially

  1. The Moodle database at my current institution.
    Analysed to explore if and how teaching staff are using (creating, maintaining etc) the Book. What is the nature of the artefacts produced using the Book? How are learners interacting with the artefact produced using the Book?
  2. Responses from staff at my institution to a simple survey.
    Aim being to explore relationships between the analytics and user responses.
  3. Responses from the broader Moodle user community to essentially the same survey.
    Aim being to compare/contrast with the broader Moodle user community’s experiences with the experiences of those within the institution.

Specifics of analysis and survey

The analysis of the Book module will be exploratory. The aim is to develop analysis that is specific to the nature of the Book.
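By way of example only, one exploratory starting point might be simple structural metrics for each Book: how many chapters and sub-chapters, and how large the chapters are. The sketch below assumes Moodle’s standard mdl_book and mdl_book_chapters tables and placeholder connection details.

```python
# Sketch: structural metrics for each Book artefact.
import pandas as pd
import pymysql

METRICS_SQL = """
SELECT b.course, b.name,
       COUNT(c.id)            AS chapters,
       SUM(c.subchapter)      AS subchapters,
       AVG(LENGTH(c.content)) AS avg_chapter_bytes
FROM mdl_book b
JOIN mdl_book_chapters c ON c.bookid = b.id
GROUP BY b.id, b.course, b.name
"""

conn = pymysql.connect(host="localhost", user="reader",
                       password="...", db="moodle")
books = pd.read_sql(METRICS_SQL, conn)
print(books.describe())  # distribution of Book sizes across courses
```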

The aim of the survey is to generate textual descriptions of the users’ experience with the Book. Initial thought was given to using the Critical Incident Technique in a way similar to Islam (2014).

Currently the plan is to use a similar approach more explicitly based on the Technology Acceptance Model (TAM). The idea is that the survey will consist of a minimal number of closed questions mostly to provide demographic data. The main source of data from the survey will come from four open-ended questions, currently worded as

  1. Drawing on your use, please share anything (events, resources, needs, people or other factors) that has made the Moodle Book module more useful in your teaching.
  2. Drawing on your use, please share anything (events, resources, needs, people or other factors) that has made the Moodle Book module less useful in your teaching.
  3. Drawing on your use, please share anything (events, resources, needs, people or other factors) that has made the Moodle Book module easier to use in your teaching.
  4. Drawing on your use, please share anything (events, resources, needs, people or other factors) that has made the Moodle Book module harder to use in your teaching.

Future extensions

The analysis of Moodle usage might be usefully supplemented with interviews with particular people to explore interesting patterns of usage.

It’s also likely that a content analysis of the Moodle community discussion forum around the Book will be completed. That’s dependent upon time and may need to wait.

The Moodle source code repository and the issue tracker may also be usefully analysed. However, the focus at the moment is more on the user’s experience. The information within the repository and the tracker is likely to be a little too far away from most users of the LMS.

It would be interesting to repeat the institutionally specific analytics and survey at other institutions to further explore the impact of specific institutional actions (and just the broader contextual differences) on post-adoptive behaviour.


Burton-Jones, A., & Hubona, G. (2006). The mediation of external variables in the technology acceptance model. Information & Management, 43(6), 706–717. doi:10.1016/j.im.2006.03.007

Holt, D., Palmer, S., Munro, J., Solomonides, I., Gosper, M., Hicks, M., … Hollenbeck, R. (2013). Leading the quality management of online learning environments in Australian higher education. Australasian Journal of Educational Technology, 29(3), 387–402. Retrieved from http://www.ascilite.org.au/ajet/submission/index.php/AJET/article/view/84

Islam, A. K. M. N. (2014). Sources of satisfaction and dissatisfaction with a learning management system in post-adoption stage: A critical incident technique approach. Computers in Human Behavior, 30, 249–261. doi:10.1016/j.chb.2013.09.010

Jasperson, S., Carter, P. E., & Zmud, R. W. (2005). A Comprehensive Conceptualization of Post-Adoptive Behaviors Associated with Information Technology Enabled Work Systems. MIS Quarterly, 29(3), 525–557.

Laxon, A. (2013, September 14). Exams go online for university students. The New Zealand Herald.

Anyone capturing users’ post-adoptive behaviours for the LMS? Implications?

Jasperson, Carter & Zmud (2005)

advocate that organizations strongly consider capturing users’ post-adoptive behaviors, over time, at a feature level of analysis (as well as the outcomes associated with these behaviors). It is only through analyzing a community’s usage patterns at a level of detail sufficient to enable individual learning (regarding both the IT application and work system) to be exposed, along with the outcomes associated with this learning, that the expectation gaps required to devise and direct interventions can themselves be exposed. Without such richness in available data, it is unlikely that organizations will realize significant improvements in their capability to manage the post-adoptive life cycle (p. 549)

Are there any universities “capturing users’ post-adoptive behaviours” for the LMS? Or any other educational system?

There’s lots of learning analytics research (e.g. interesting stuff from Gasevic et al, 2015) going on, but most of that is focused on learning and learners. This is important stuff and there should be more of it.

But Jasperson et al (2005) are Information Systems researchers publishing in one of the premier IS journals. Are there University IT departments that are achieving the “richness in available data…(that) will realize significant improvements in their capability to manage the post-adoptive life cycle”?

If there is, what does that look like? How do they do it? What “expectation gaps” have they identified? What “direct interventions” have they implemented? How?

My experience suggests that this work is limited. I wonder what implications that has for the quality of system use and thus the quality of learning and teaching?

What “expectation gaps” are being ignored? What impact does that have on learning and teaching?

Jasperson et al (2005) develop a “Conceptual model of post-adoptive behaviour”, shown in the image below. Post-adoptive behaviours can include the decision not to use, or to change how to use. A gap in expectations that is never filled is not likely to encourage ongoing use.

They also identify that there is an “insufficient understanding of the technology sensemaking process” (p. 544). The model suggests that technology sensemaking is a pre-cursor to “user-initiated learning interventions”, examples of which include: formal or informal training opportunities; accessing external documentation; observing others; and, experimenting with IT application features.

Perhaps this offers a possible explanation for complaints about academics not using the provided training/documentation for institutional digital learning systems? Perhaps this might offer some insight into the apparent “low digital fluency of faculty” problem.

conceptual model of post-adoptive behaviours


Gašević, D., Dawson, S., Rogers, T., & Gasevic, D. (2015). Learning analytics should not promote one size fits all: The effects of instructional conditions in predicting learning success. The Internet and Higher Education, 28, 68–84. doi:10.1016/j.iheduc.2015.10.002

Jasperson, S., Carter, P. E., & Zmud, R. W. (2005). A Comprehensive Conceptualization of Post-Adoptive Behaviors Associated with Information Technology Enabled Work Systems. MIS Quarterly, 29(3), 525–557.

The CSCW view of Knowledge Management

Earlier this week I attended a session given by the research ethics folk at my institution. One of the observations was that they’d run training sessions but almost no-one came. I’ve heard similar observations from L&T folk, librarians, and just about anyone else aiming to help academics develop new skills – especially those who have spent time and effort developing yet another you-beaut website or booklet that provides everything one would want to know about a topic. There’s also the broader trope developing about academics/teachers being digitally illiterate, which I’m increasingly seeing as unhelpful and perhaps even damaging.

Hence my interest when I stumbled across Ackerman et al (2013), a paper titled “Sharing knowledge and expertise: The CSCW View” with the abstract

Knowledge Management (KM) is a diffuse and controversial term, which has been used by a large number of research disciplines. CSCW, over the last 20 years, has taken a critical stance towards most of these approaches, and instead, CSCW shifted the focus towards a practice-based perspective. This paper surveys CSCW researchers’ viewpoints on what has become called ‘knowledge sharing’ and ‘expertise sharing’. These are based in an understanding of the social contexts of knowledge work and practices, as well as in an emphasis on communication among knowledgeable humans. The paper provides a summary and overview of the two strands of knowledge and expertise sharing in CSCW, which, from an analytical standpoint, roughly represent ‘generations’ of research: an ‘object-centric’ and a ‘people-centric’ view. We also survey the challenges and opportunities ahead.

What follows is a summary and some thoughts on the paper.

Thoughts? Possibilities?

The paper’s useful in that it appears to give a good overview of the work from CSCW on this topic. It is relevant to some of the problems being faced around digital learning.

All this is especially interesting to me due to my interest in exploring the design and impact of distributed means of sharing knowledge about digital learning

Look at Cabitza and Simone (2012) – two levels of information, and affording mechanisms – as informing design. Their work on knowledge artifacts (Cabitza et al, 2008) might also be interesting.

Brown and Duguid’s (2000) Network of Practice is a better fit for what I’m thinking here.

CSCW has a tendency to precede development with ethnographic studies.

Learning object repositories?

Given the fairly scathing findings re: the idea of repositories, what does this say about current University practices around learning object repositories?

Is digitally illiterate a bad place to start?

The “sharing expertise” approach would appear to assume that the people you’re trying to help have knowledge to share. Labeling teachers as digitally illiterate would appear to mean you couldn’t even conceptualise this as a possibility. Is this a core problem here?

The shift from system to individual practice

At some level the shift in the CSCW work illustrates a shift from focusing on IT systems to a focus on individual practices. The V&R mapping process illustrates some of this.

Context and embedding is important

Findings reinforce the contextual and situated nature of knowledge (is that a bias from the assumptions of these researchers?). Does this explain many of the problems currently being faced? i.e. what’s being done at the moment is neither contextual nor situated? Would addressing this improve outcomes?


A topic dealt with by different research communities (Information Systems, CSCL, Computer Science), each with their particular focus and limitations. e.g. CS has developed interesting algorithms but “Empirical explorations into the practice of knowledge-intense work have been typically lacking in this discourse” (p. 532).

The CSCW strength has been “to have explored the relationship between innovative computational artifacts and knowledge work – from a micro-perspective” (p. 532)

Uses two different terms that “connote CSCW’s spin on the problem” i.e.

that knowledge is situated in people and in location, and that the social is an essential part of using any knowledge…far more useful systems can be developed if they are grounded in an analysis of work practices and do not ignore the social aspects of knowledge sharing. (p. 532)

  1. Knowledge sharing – knowledge is externalised so that it can be captured/manipulated/shared by technology.
  2. Expertise sharing – where the capability/expertise to do work is “based on discussions among knowledgeable actors and less significantly supported by a priori externalizations”

Speak of generations of knowledge management

  1. Repository models of information and knowledge.
    Ignoring the social nature of knowledge, focused on externalising knowledge.
  2. Sharing expertise
    Tying communication among people into knowledge work, either through identifying how best to “find” who has the knowledge, or through creating online communities that allow people to share their knowledge – expertise finders, recommenders, and collaborative help systems.
    Work later scaled to Internet-size systems and communities – collectives, inter-organisational networks etc.

Repository model

started with attempts “to build vast repositories of what they knew” (p. 533).

it should be noted that CSCW never really accepted that this model would work in practice (p. 534)…Reducing the richness of collective memory to specific information artifacts was utopian (p. 537)

Findings from various CSCW repository studies

  • Standard issues with repository systems
    particularly difficulty with motivating users to author and organize the material and to maintain the information and its navigation

  • Context is important.

    Some systems tackled the problem of context by trying to channel people to expertise that was as local as possible based on the assumption that “people nearby an asker would know more about local context and might be better at explaining than might experts”.

    Other research found “difficulties of reuse and the organisation of the information into repositories over time, especially when context changed…showed that no organisational memory per se existed; the perfect repository was a myth” (p. 534)

  • Need to embed.

    such a memory could be constructed and used, but the researchers also found they needed to embed both the system and the information in both practice and in the organizational context

  • situated and social.

    CSCW in general has assumed that understanding situated use was critical to producing useful, and usable, systems (Suchman 1987; Suchman and Wynn 1984) and that usability and usefulness are social and collaborative in nature (p. 537)

  • deviations seen as useful

    Exceptions in organizational activities, instead of being assumed to be deviations from correct procedures, were held to be ‘normal’ in organizational life (Suchman 1983) and to be examined for what they said about organizational activity, including information handling (Randall et al. 2007; Schmidt 1999) (p. 537)

  • issues in social creation, use, and reuse of information.

    • issues of motivation,
      Getting information is hard. Aligning reward structures a constant problem. The idea of capturing all knowledge clashed with a range of factors, especially in competitive organisational settings.
    • context in reuse,
      “processes of decontextualisation and recontextualisation loomed over the repository model” (p. 538). “This is difficult to achieve, and even harder to achieve for complex problems” (p. 539).
    • assessments of reliability and authoritativeness,
      de/recontextualisation is social/situated. Information is assessed based on: expertise of the author, reliability, authoritativeness, quality, understandability, the provisional/final nature of the information, obsolescence and completeness, is it officially vetted?
    • organizational politics, maintenance, and
      “knowledge sharing has politics” (p. 539). Who is and can author/change information impacts use. Categories/meta data of/about data has politics.
    • reification
      “repository systems promote an objectified view of knowledge” (p. 540)

Repository work has since been commercialised.

Some of this work is being re-examined/done due to new methods: machine learning and crowd-sourcing.

Boundary objects – “critical to knowledge sharing. Because of their plasticity of meaning boundary objects serve as translation mechanisms for ideas, viewpoints, and values across otherwise difficult to traverse social boundaries. Boundary objects are bridges between different communities of practice (Wenger 1998) or social worlds (Strauss 1993).” (p. 541)

“information objects that have meaning on both sides of an intra-organisational or inter-organisational boundary”.

CSCW tended to focus on “tractable information processing objects” (p. 542) – forms etc. – easier to implement but “over-emphasis on boundary objects as material artifact, which can limit the analytical power that boundary objects bring to understanding negotiation and mediation in routine work”

Example – T-Matrix – supporting production of a tire and innovation.

Cabitza and Simone (2012) identify two levels of information

  1. awareness promoting information – current state of the activity
  2. knowledge evoking information – triggering previously acquired knowledge or triggering/supporting learning and innovation

Also suggest “affording mechanisms”

Other terms

  1. “boundary negotiating” objects
    Less structured ideas of boundary objects suggested
  2. knowledge artifacts – from Cabitza et al (2013)

    a physical, i.e., material but not necessarily tangible, inscribed artifact that is collaboratively created, maintained and used to support knowledge-oriented social processes (among which knowledge creation and exploitation, collaborative problem solving and decision making) within or across communities of practice…. (p. 35)

    These are inherently local, remain open for modification. Can stimulate socialisation and internalisation of knowledge.

common information spaces – a common central archive (repository?) used by distributed folk. Open and malleable by nature. A repository is closed/finalised, a CIS isn’t. Various work to make the distinction – e.g. degrees of distribution; kinds of articulation work and artifacts required; the means of communication; and the differences in frames of participant reference.

Various points made as to the usefulness of this abstraction.


  • Assembly – “denote an organised collection of information objects”
  • Assemblages – “would include the surrounding practices and culture around an object or collection” (p. 545)

How assemblies are put together and their impacts is of interest.

Sharing expertise

Emphasis on interpersonal communications over externalisation in IT artifacts. “ascribed a more crucial role to the practices of individuals” (p. 547). A focus on sharing tacit knowledge – including contextual knowledge.

tacit/explicit – Nonaka’s mistake – explicit mention of the misinterpretation of Polanyi’s idea of tacit knowledge. The mistaken assumption/focus was on making tacit knowledge explicit, whereas Polanyi used tacit to describe knowledge that is very hard, if not impossible, to make explicit.

Tacit knowledge can be learned only through common experiences, and therefore, contact with others, in some form, is required for full use of the information. (p. 547)

Community of practice “roughly be defined as a group that works together in a certain domain and whose members share a common practice”.

Network of practice (from Brown and Duguid, 2000) – members do not necessarily work together, but work on similar issues in a similar way.

Community of Interest – defined by common interests, not common practice. Diversity is a source of creativity and innovation.

I like this critique of the evolution of use of CoP

Intrinsically based in their view of ‘tacit knowledge,’ the Knowledge Management community appropriated CoP in an interventionist manner. CoPs were to be cultivated or even created (Wenger et al. 2002), and they became fashionable as ‘the killer application for knowledge management practitioners’ (Su and Wilensky 2011, p. 10) with supposedly beneficial effects on knowledge exchange within groups. (p. 547)

CSCW didn’t use CoPs in an interventionist way – instead as an analytical lens.

Social capital – from Bourdieu – “refers to the collective abilities derived from social networks”. Views sharing “in the relational and empathic dimension of social networks” (p. 548).

Nahapiet and Ghoshal (1998) suggest it consists of 3 dimensions

  1. Structural opportunity (‘who’ shares and ‘how’);
    Which is where the technical enters the picture.
  2. Cognitive ability (‘what’ is shared);
  3. Relational motivation (‘why’ and ‘when’ people engage)

Latter 2 dimensions not often considered by system designers.

The sharing approach places emphasis on “finding-out” work. Where knowledge is found by knowing/asking others and in finding the source, de-contextualising and then re-contextualising. Often involves “local knowledge” – which tends to have an emergent nature. What’s important is only known in the situation at hand and who holds it evolves within a concrete situation.

People finding and expertise location

Move from focusing on representations of data to the interactions between people – trying to produce and modify them. Tackling technical, organisational and social issues simultaneously.

Techniques include: information retrieval, network analysis, topics of interest, expertise determination.

Profile construction can be contentious – privacy, identification of expertise. Especially given “big data” approaches to analysing and identification.

Expertise finding’s 3 stages: identification, selection, escalation.

Need to promote awareness of individual expertise and their availability – “based in ‘seeing’ others’ activities” (p. 551)

“people prefer others with whom they share a social connection to complete strangers” (p. 553) – no surprise there – but people known directly weren’t chosen as they were deemed not likely to have any greater expertise. Often people who were 2 or 3 degrees of separation away.

Profiles also found by one study to be often out of date. Explored “peripheral awareness” as a solution.
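For a sense of what the “identification” stage can look like computationally, here is a toy sketch (names and documents invented) that ranks people against a question using TF-IDF profiles built from text they have written – one of the simpler information retrieval approaches mentioned above.

```python
# Toy sketch: rank candidate experts against a question using TF-IDF
# similarity over text each person has written. All data is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

profiles = {
    "alice": "moodle book module chapters import export print html",
    "bob": "gradebook categories scales aggregation peoplesoft",
}

def rank_experts(question, profiles):
    """Return (name, similarity) pairs, best match first."""
    names = list(profiles)
    vectoriser = TfidfVectorizer()
    matrix = vectoriser.fit_transform([question] + [profiles[n] for n in names])
    scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()
    return sorted(zip(names, scores), key=lambda pair: -pair[1])

print(rank_experts("how do I import html into the book module", profiles))
```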

Open issues

  • Development of personal profiles.
  • Privacy and control.
  • Accuracy.

Finding others

Lots of work outside CSCW.

CoIs in the form of web Q&A communities have arisen on the Internet. With research that has studied question classification, answer quality, user satisfaction, motivation and reputation.


  • more money = more answers, but not necessarily better quality.
  • charitable contributions increased credibility of answers “in a nuanced way”?
  • Altruism and reputation building two important motivations

Recent research looking at “social Q&A” – how people use social media to answer – two lines of research (echoing above)

  1. social analysis of existing systems;
    Looking at: impact of tie strength on answer quality, org setting, response rates when asking strangers – especially with quick, non-personal answers, community size and contact rate.
  2. technical development of new systems

Future directions

Interconnected practices: expertise infrastructures

Increasing inter-connectedness

  • may cause “experts” to become anonymous.
  • propel new types of interactions via micro-activities – microtasking environments make it easy/convenient to help
  • Collaboratively constructed information spaces – Wikipedia – numerous papers examine how it was constructed, including work looking more broadly at wikis
  • Other research looked at github, mozilla bug reports etc.
  • And work looking at social media, microblogging etc and its use.


Ackerman, M. S., Dachtera, J., Pipek, V., & Wulf, V. (2013). Sharing Knowledge and Expertise: The CSCW View of Knowledge Management. Computer Supported Cooperative Work (CSCW), 22(4-6), 531–573. doi:10.1007/s10606-013-9192-8

Re-purposing V&R mapping to explore modification of digital learning spaces


Apparently there is a digital literacy/fluency problem with teachers. The 2014 Horizon Report for Higher Education identified the “Low Digital Fluency of Faculty” as the number 1 “significant challenge impeding higher education technology adoption”. In the 2015 Horizon Report for Higher Education this morphs into “Improving Digital Literacy” being the #2 significant challenge. While the 2015 K-12 Horizon Report has “Integrating Technology in Teacher Education” as the #2 significant challenge.

But focusing solely on the literacy of the teaching staff seems a bit short sighted. @palbion, @chalkhands and I are teacher educators working in a digitally rich learning environment (i.e. a large percentage of our students are online only students). We are also fairly digitally fluent/literate. In a paper last year we explored how a distributive view of knowledge sharing helped us “overcome the limitations of organisational practices and technologies that were not always well suited to our context and aims”.

Our digital literacy isn’t a problem; we’re able to, and believe we have to, overcome the limitations of the environment in which we teach. Increasingly the digital tools we are provided by the institution do not match the needs of our learning designs, and consequently we make various types of changes.

Often these changes are seen as bad. At best these changes are invisible to other people within our institution. At worst they are labelled as duplication, inefficient, unsafe, and feral. They are seen as shadow systems. Systems and changes that are undesirable and should be rooted out.


Rather than continue this negative perspective, @palbion, @chalkhands and I have just finished a rough paper that set out to explore if there was anything valuable or interesting to learn from the changes we made to our digital learning spaces. Our process for this paper was

  1. Generate a list of stories of the changes we made to our digital learning/teaching spaces.
    Using a Google doc and a simple story format (descriptive title; what change was made; why; and, outcomes) each of us generated a list of stories of where we’d changed the digital tools/spaces we use for our teaching.
  2. Map those stories using a modified Visitor and Resident mapping approach.
    The stories needed to be analysed in some way. The Visitors & Residents approach offered a number of advantages – more detail below.
  3. Reflect upon what that analysis showed and about potential future applications of this approach.

What follows is some reflection on the approach, a description of the original V&R map, and a description and example of our modified V&R map.

Reflection on the approach

In short, we (I think I can say we) found the whole approach interesting and could see some potential for broader use. In particular, the potential benefits of the approach include:

  1. Great way to start discussions and share knowledge.
    Gathering stories and analysing them using the V&R process appear to be very useful ways for starting discussions and sharing knowledge. Not the least because it starts with people sharing what they are doing (trying to do) now, rather than some mythical ideal future state.
    Reports from others using the original V&R mapping process suggest this is a strength of the V&R mapping approach. Our experience seems to suggest this might continue with the modified map we used.
  2. Doesn’t start by assuming that people are illiterate.
    Neither @palbion or I think we’re digitally illiterate. We have formal qualifications in Information Technology (IT). @chalkhands doesn’t have formal qualifications in IT. Early on in this process she was questioning whether or not she had anything to add. She wasn’t as “literate” as @palbion and I. However, as we started sharing stories and mapping them that questioning went away.
    The V&R approach is very much based on the idea of focusing on what people do, rather than who they are or what they know (or don’t). It doesn’t assume teaching staff are digitally illiterate; it is just interested in what people do. I think this is a much more valuable starting point for engaging in this space. It appears likely to provide a method for helping universities follow observations from the 2015 Horizon Report: that solving the “digital literacy problem” requires “individual scaffolding and support along with helping learners as they manage conflict between practice and different contexts”; that “Understanding how to use technologies is a key first step, but being able to leverage them for innovation is vital to fostering real transformation in higher education”; and “that programs with one-size-fits-all training approaches that assume all faculty are at the same level of digital literacy pose a higher risk of failure.”
  3. It accepts that the ability for people to change digital technologies is not only ok, it is necessary and unavoidable.
    Worthen (2007) makes the point that those in charge of institutional IT (including digital learning spaces) want to prevent change while the people using digital systems want the technology to change
    Users want IT to be responsive to their individual needs and to make them more productive. CIOs want IT to be reliable, secure, scalable, and compliant with an ever increasing number of government regulations

    Since the CIOs are in charge of the technology (they have the power), the practice of changing digital systems without having gone through the approved governance processes is deemed bad and something to be avoided. This is a problem because change is inherent to teaching, especially if you accept Shulman’s (1987) identification of the “knowledge base of teaching” as lying (emphasis added)

    at the intersection of content and pedagogy, in the capacity of a teacher to transform the content knowledge he or she possesses into forms that are pedagogically powerful and yet adaptive to the variations in ability and background presented by the students (p. 15)

The original V&R map

The original V&R map (example in the image below) is a Cartesian graph with two axes. The X-axis ranges from Visitor to Resident and describes how you perceive and use digital technologies. A visitor sees a collection of disparate tools that are fit for specific purposes. When something has to be done, the visitor selects the tool, gets the job done, and leaves the digital space leaving no social trace. A resident, on the other hand, sees a digital space where they can connect and socialise with others. The Y-axis ranges from Institutional to Personal and describes whether use of a digital technology is professional or personal.

The following map shows someone for whom LinkedIn is only used for professional purposes. So it’s located toward the “Institutional” end of the Y-axis. Since LinkedIn is about leaving a public social trace for others to link to, it’s located toward the “Resident” end of the X-axis.

Our modified V&R map

Our purpose was to map stories about how we had changed digital technologies within our role as teacher educators. Thus the normal Institutional/Personal scale for the Y-axis doesn’t work: we’re only considering activities that are institutional in purpose. In addition, we’re focusing on activities that changed digital technologies, and we’re interested in understanding the types of changes that were made. As a result we adopted a “change scale” as the Y-axis. The scale was adapted from software engineering/information systems research and is summarised in the following list.

  • Use – tool used with no change. Example: add an element to a Moodle site.
  • Internal configuration – change the operation of a tool using the configuration options of the tool. Example: change the appearance of a Moodle site with course settings.
  • External configuration – change the operation of a tool using means external to the tool. Example: inject CSS or Javascript into a Moodle site to change its operation.
  • Customization – change the tool by modifying its code. Example: modify the Moodle source code, or install a new plugin.
  • Supplement – use another tool(s) to offer functionality not provided by existing tools. Example: implement course-level social bookmarking by requiring use of Diigo.
  • Replacement – use another tool to replace/enhance functionality provided by existing tools. Example: require students to use external blog engines, rather than the Moodle blog engine.

Since we were new to the V&R mapping process and were trying to do this work quickly without being able to meet, some additional scaffolding was placed on the X-axis (Visitor–Resident). This provided a common level of understanding of the scale and was based on a specific (and fairly limited) definition of “social trace”. The lowest level of the scale was “tools used by teachers”, which meant no social trace. The scale gradually increased the number of people involved in the activities mediated by the digital technology: from “subsets of students in a course” to “all students in a course” and right on up to “anyone on the open web”. (A sketch of how stories might be placed on such a map follows.)
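To illustrate (this is an after-the-fact sketch, not the tool we used – the actual maps were drawn by hand), a story could be encoded with its position on each ordinal scale and plotted:

```python
# Sketch: place change stories on the modified V&R map.
# The scales mirror the list and scaffolding above; stories are examples.
import matplotlib.pyplot as plt

CHANGE = ["Use", "Internal configuration", "External configuration",
          "Customization", "Supplement", "Replacement"]          # Y-axis
TRACE = ["Tools used by teachers", "Subsets of students in a course",
         "All students in a course", "Anyone on the open web"]   # X-axis

stories = [
    ("Know thy student", "Tools used by teachers", "Replacement"),
    ("Links in blog comments", "All students in a course",
     "Internal configuration"),
]

fig, ax = plt.subplots()
for title, trace, change in stories:
    x, y = TRACE.index(trace), CHANGE.index(change)
    ax.scatter(x, y)
    ax.annotate(title, (x, y), textcoords="offset points", xytext=(5, 5))
ax.set_xticks(range(len(TRACE)))
ax.set_xticklabels(TRACE, rotation=20, ha="right")
ax.set_yticks(range(len(CHANGE)))
ax.set_yticklabels(CHANGE)
ax.set_xlabel("Visitor → Resident (social trace)")
ax.set_ylabel("Change scale")
plt.tight_layout()
plt.show()
```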

The following image is the “template” map that each of used to map out our stories of changing digital technologies.

Modified V&R map template

An example map and stories

The following image is the outcome of mapping my stories of change. A couple of example stories are included after the image.

My V&R change map

Know thy student

This story involves replacing/supplementing existing digital tools, but is something that only I use. Hence Visitor/Replacement.

What? A collection of Greasemonkey scripts, web scraping, and a local database/server designed to help me know my students and what they are doing on the Study Desk. Wherever there is a Moodle user profile link, the script will add a [ details ] link that is specific to each user. If I click on that link I see a popup window with a range of information about the student.

Why? Because finding out this information about a student would normally take 10+ minutes and require the use of multiple different web pages in two different systems. Many of these pages don’t exactly make it easy to see the information. Knowing the students better is a core part of improving my teaching.

Outcomes? It’s been a godsend. Saving time and enabling me to be more aware of student progress.
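Purely as an illustration of the architecture (not the actual implementation), the “local database/server” half of this story could be as small as an endpoint that the Greasemonkey [ details ] link opens. Flask, the SQLite schema, and the URL below are all assumptions.

```python
# Hypothetical sketch of the local server behind the [ details ] popup.
# The students.db schema is invented; a scraper would populate it.
import sqlite3

from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/details/<int:user_id>")
def details(user_id):
    conn = sqlite3.connect("students.db")
    row = conn.execute(
        "SELECT name, program, last_login FROM students WHERE id = ?",
        (user_id,)).fetchone()
    conn.close()
    if row is None:
        return jsonify(error="unknown student"), 404
    name, program, last_login = row
    return jsonify(name=name, program=program, last_login=last_login)

if __name__ == "__main__":
    app.run(port=8000)  # the userscript would open /details/<id> in a popup
```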

Using links in student blog posts

A fairly minor example of change. There’s a question of whether it’s just “use” or “internal configuration”? After all, it’s just using an editor on a web page to create some HTML. It was bumped up to “internal configuration” because of an observation that hyperlinks were not often used by many teachers. Something I’m hoping that @beerc will test empirically.

What? Some comments I write on student blog posts will make use of links to offer pointers to relevant resources.

Why? It’s more useful and easier for the students to have the direct link, and hence they’re more likely to make use of the suggestion.

Outcomes? Minor anecdotal positive comments. Not really known

Early indications and reflection

The change scale worked okay but could use some additional reflection. In particular we raised some questions about whether many of the “replacement” examples of change (including those in my map above) are actually examples of supplement.

On reflecting on all this we made some initial observations, including

  1. Regardless of perceived levels of digital literacy we all engaged in a range of changes to digital technologies.
  2. Not surprisingly, the breadth/complexity of those changes increased with greater digital literacy.
  3. In the end very few of our changes were “replacement”. Almost all were focused more on overcoming perceived shortcomings with the provided tools, rather than duplicating their functionality.
  4. Most of the changes tended to congregate towards the “visitor” end of the X-axis. Not surprising, given that none of the digital technologies provided by the institution are on the open web.
  5. Almost all of the stories that involved “replacement” were based on moving out onto the “open web”. i.e. they were all located toward the “resident” end of the X-axis.
  6. Changes were being made due to two main reasons: improving the efficiency of institutional systems or practices; or, customising digital technologies to fit the specific learning activities we wanted to implement.

Are our institutions digital visitors? What are the impacts on learning and teaching?

As it happens, we’ve been talking and thinking about the Visitor/Resident typology (White & Cornu, 2011) the last couple of weeks. The network gods have been kind, because overnight a post titled “The resident web and its impact on the academy” (Lanclos & White, 2015) floated across my Twitter stream. Much food for thought.

It has me wondering

Are universities digital visitors? If so, what impact is this having on learning and teaching?

Update: more reading and thinking has led to the addition of a section “Branding pushing out social traces”.

Residents and visitors

White & Cornu (2011) describe visitors as those that

understand the Web as akin to an untidy garden tool shed. They have defined a goal or task and go into the shed to select an appropriate tool which they use to attain their goal…Visitors are unlikely to have any form of persistent profile online which projects their identity into the digital space

White & Cornu (2011) describe residents as those that

see the Web as a place, perhaps like a park or a building in which there are clusters of friends and colleagues whom they can approach and with whom they can share information about their life and work. A proportion of their lives is actually lived out online where the distinction between online and off–line is increasingly blurred. Residents are happy to go online simply to spend time with others and they are likely to consider that they ‘belong’ to a community which is located in the virtual…To Residents, the Web is a place to express opinions, a place in which relationships can be formed and extended.

How Universities think about digital learning spaces

@damoclarky and I argued that institutional digital learning is informed by the SET mindset. A mindset that approaches any large, complex problem (like digital learning) with a tree-like approach. That is, it employs logical decomposition to break the large problem up into smaller and smaller problems until there is a collection of solvable problems that can be allocated to individual units. The units then solve the problems (largely) independently, and each of the small solutions are joined back up together and consequently (hopefully) solve the original big problem.

You can see evidence of this tree-like perspective all over our institutions and the digital learning spaces they produce.

The institutions themselves are divided into hierarchical organisational structures.

What the institution teaches is divided up into a hierarchical structure consisting of programs (degrees), majors, courses, semesters, weeks, lectures, and tutorials.

And more relevant to this argument, the institutional, digital learning space is divided up into separate tools.

At my institution those separate tools include, but are not limited to:

  • the staff/student portal;
  • the Learning Management System;
    In the case of my institution that’s Moodle. Moodle (like many of these systems) is structured into a tree-like collection of modules. The “M” in Moodle stands for Modular.
  • the eportfolio system;
  • the learning object repository system;
  • the library system;
  • the gradebook (Peoplesoft); etc….

Each tool is designed to serve a particular goal, to help complete a specific task.

Hence the tendency for people to see these digital learning spaces “as akin to an untidy garden tool shed” where when they want to do something they “go into the shed to select an appropriate tool which they use to attain their goal” (White & Cornu, 2011).

This collection of separate tools is not likely to be seen as a “place, perhaps like a park or a building in which there are clusters of friends and colleagues whom they can approach and with whom they can” (White & Cornu, 2011) learn.

Of course, there is some awareness of this problem, which leads to a solution.

Brand as unifying solution

Increasingly, the one solution that the corporate university seems able to provide for this “untidy garden tool shed” problem is branding. The idea being that if all the tools use the same, approved, corporate brand then all will be ok. It will be seen as an institutional learning space. With the emphasis explicitly on the institution. It is the institution’s brand that is used to cover the learning space, not the learners and not the teachers. With which I see some problems.

First, is the observation made by Lanclos and White (2015) in the context of the resident web and the academy

scholars will gain a form of currency by becoming perceived as “human” (the extent to which ‘humanness’ must be honest self-expression or could be fabricated is an interesting question here) rather than cloaked by the deliberately de-humanised unemotive academic voice.

In this context the problem isn’t so much the “de-humanised unemotive academic voice” as it is the stultifying, stripping of individuality on the altar of the institutional identity. It doesn’t matter whether you’re learning engineering, accounting, teaching or anything else. It’s the institution and how it wishes to project itself that matters.

Which creates the second problem for which one of my institution’s documents around a large institutional digital learning project provides a wonderful exemplar.

Can you have a digital learning experience that is consistent, brand enhancing, and optimal for each student? I tend to think not. Especially in light of arguments that the diversification and massification of the student body has led universities to shift their education rhetoric from a notion of “one size fits all” to a concept of tailored, flexible learning (Lewis, Marginson et al. 2005).

My current experience is that instead of getting digital learning spaces that support tailored and flexible learning, institutions are more likely to create learning spaces that “have less variety in approach than a low-end fast-food restaurant” (Dede, 2008, p. 58).

Brand pushing out social traces

The visitors/residents typology (White and Cornu, 2011) is particularly interested in whether or not people are leaving social traces of themselves online as they interact with digital learning spaces (well, they are actually focused on the participatory web, but I’ll narrow it a bit). Does the “consistent..brand enhancing” approach to institutional digital learning spaces limit the likelihood of social traces being left? Can institutional digital learning spaces be seen as places people will want to reside within when it’s branded?

It would seem obvious that such a branded space couldn’t be seen as “my space”, especially for students. But what about the impact on teachers? Many teachers – for better or worse – like to customise the learning space, not only for the needs of the students but also to project their personality. Can this be done in a branded digital space?

Impact on learning?

The above points to an institutionally provided (and sometimes mandated) digital learning space that is more likely to resemble a consistently branded, untidy garden tool shed. A space that is unlikely to be perceived by learners and teachers as one they would wish to inhabit. Instead, it’s more likely to encourage them to see the learning space as a place to visit, complete a task, and leave ASAP. Which would appear likely to negatively impact engagement and learning.

It would also appear to be a perception that is not going to help institutions address a pressure identified by Lanclos and White (2015)

The academy can no longer simply serve its own communities in the context of the networked Web, and it is under increasing cultural pressure to reach out and appear relevant.


Dede, C. (2008). Theoretical perspectives influencing the use of information technology in teaching and learning. In J. Voogt & G. Knezek (Eds.), International Handbook of Information Technology in Primary and Secondary Education (pp. 43–62). New York: Springer.

Lewis, T., S. Marginson, et al. (2005). “The network university? Technology, culture and organisational complexity in contemporary higher education.” Higher Education Quarterly 59(1): 56-75.

White, D., & Le Cornu, A. (2011). Visitors and Residents: A new typology for online engagement. First Monday, 16(9). doi:10.5210/fm.v16i9.3171

What is “netgl” and how might it apply to my problem

At least a couple of the students in a course I help out with are struggling a little with Assignment 2 which asks them “to develop a theory-informed plan for using NGL to transform your teaching (very broadly defined) practice”.

The following is a collection of bits of advice that will hopefully help. Littered throughout are also some examples from my own practice.

NGL != social media

Network and Global Learning (NGL/netgl) should not be interpreted to mean use of social media. In the course we use blogs, Diigo, feed readers etc as the primary form of NGL practice and in the past this has led folk to think that NGL equates to use of social media.

Just because we used blogs, Diigo, and feed readers, that doesn’t mean you should. You should use whatever is appropriate to your problem and your context.

What is NGL?

Which begs the question: if NGL is not just social media, what is it?

As I hope was demonstrated in the first two-thirds of the course there is no one definition of NGL. There are many different views from many different perspectives.

The first week’s material had a section on networked learning that included a few broad definitions. I particularly like the Goodyear et al (2014) quote that includes

learning networks now consist of heterogeneous assemblages of tasks, activities, people, roles, rules, places, tools, artefacts and other resources, distributed in complex configurations across time and space and involving digital, non-digital and hybrid entities.

The course material also covers more specific conceptions of NGL. e.g. connectivism gets a mention in week 1, as does public click pedagogy.

Week 3 mentions groups, networks, collectives and communities; the idea of network learning as a 3rd generation of pedagogy; and some historical origins of network learning.

What’s your problem?

It’s all overwhelming, is a common refrain I’m hearing. Understanding that there is a range of different views of NGL probably isn’t going to help. That’s one of the reasons why Assignment 2 is intended to use a design-based research approach, i.e. (emphasis added)

a particular approach to research that seeks to address practical problems by using theories and other knowledge to develop and enhance new practices, tools and theories.

At some level DBR can help narrow your focus by asking you to focus on a practical problem. A problem near and dear to your heart and practice.

Of course, the nature of “problems” in and around education are themselves likely to be complex and overwhelming. The example I give from my own practice – described initially as “university e-learning tends to be so bad” or “a bit like teenage sex” – is a big complex problem with lots of perspectives.

How do you reduce the big overwhelming problem to something that you can meaningfully address?

This is where the literature and theory(ies) enter the picture.

What might “theory informed” mean?

First, go and read a short post titled What is theory and why use theories?.

Adopting this broad and pragmatic view of theory, there are many ideas and concepts littered throughout this course (and many, many more outside) including, but not limited to: connectivism; connected learning; communities of practice; group, networks, collectives, and communities; threshold concepts etc. In understanding your problem, you are liable to draw upon a range more.

As per the short post theories are meant to be useful to you in understanding a situation or problem and then as an aid in formulating action.

Combining theories from NGL and your “problem”

The theories for assignment 2 aren’t limited just to theories from NGL. You should also use theories that are relevant to your problem.

You look around for how other people have conceptualised the problem and the approaches and theories that they have used. Do any of those resonate with you? Can you see any problems or limitations with the approaches used? Are there other theoretical lenses or just simple ways of understanding the problem that help narrow down useful avenues for action?

In terms of my problem with the perceived quality limitations of university e-learning, I’ve been using the TPACK framework for a while as one theoretical lens. TPACK is a fairly recent and broadly used theory for understanding the knowledge teachers require to design technology-based learning experiences. (Since all models are wrong, it has its limitations.)

Drawing on TPACK I wonder if the reason why university e-learning is so bad is because the TPACK (knowledge) being used to design, implement, and support it is insufficient. It needs to be improved.

Not an earth-shatteringly insightful or novel suggestion. But focusing on TPACK does suggest that perhaps I concentrate my attention for potential solutions within the TPACK-related literature, rather than elsewhere. Almost always there is more literature than anybody (especially in the context of a few weeks) can get their head around. So, for better or worse, you need to start drawing boundaries.

Now with a focus on TPACK it’s time to combine my personal experience with the theory and associated literature. My personal experience and context may also help focus my exploration. e.g. if I were working in a TAFE/VET context, I might start looking at the literature for mentions of TPACK in the TAFE/VET context (or just at TAFE/VET literature). Again, narrowing down the focus.

I might find that there’s nothing in the TAFE/VET context that mentions TPACK in conjunction with e-learning. This might highlight an opportunity to learn lessons from other contexts and test them out in the TAFE/VET context. Or there might already be some TPACK/TAFE/VET/e-learning literature that I can learn from.
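Purely as an illustration of the narrowing process described above, here is a minimal sketch (assuming Python, the requests library, and the public Crossref REST API – none of which are part of the course) of one crude way to do a first scan for literature mentioning TPACK in the TAFE/VET context.

```python
# A rough sketch only: uses the public Crossref REST API (via the
# requests library) to do a first, crude scan of the literature.
import requests

def search_crossref(query, rows=5):
    """Return (title, DOI) pairs for works matching the query keywords."""
    response = requests.get(
        "https://api.crossref.org/works",
        params={"query": query, "rows": rows},
        timeout=30,
    )
    response.raise_for_status()
    items = response.json()["message"]["items"]
    return [((item.get("title") or ["(untitled)"])[0], item.get("DOI"))
            for item in items]

# Progressively narrow the focus, as described above.
for query in ["TPACK", "TPACK TAFE", "TPACK VET e-learning"]:
    print(query)
    for title, doi in search_crossref(query):
        print("  -", title, doi)
```

If the narrowest query returns little or nothing, that is itself useful information: it points to the sort of gap, and opportunity, described in the previous paragraph.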

In my case, as someone with relatively high TPACK, I get really annoyed when people think the main challenge is “low digital fluency of faculty” (i.e. teaching staff). This gets me thinking that perhaps the problem isn’t going to be solved by focusing on developing the knowledge of teaching staff. i.e. requiring teaching staff to have a formal teaching qualification isn’t (I believe) going to solve the problem, so what is?

You want digitally fluent faculty?

This is potentially interesting because a fair chunk of existing practice assumes that formal teaching qualifications or the “right” professional development opportunities will help teaching staff develop the right TPACK and thus university e-learning will be fantastic. Being able to mount a counter to a prevailing orthodoxy might be interesting and useful. It might make a contribution. It might also identify a fundamental misunderstanding of a problem and a need to read and consider further.

In my case that led to an interest in (seeing a connection with) another theoretical idea, i.e. the distributive view of learning and knowledge. I do recommend Putnam & Borko (2000) as a good place to start learning about how the distributive view of knowledge and thinking can help situate teacher learning.

The combination of TPACK and the distributive view of learning appears to be useful. So we ended up using it in this paper to explore our experience with university e-learning. That work led to questions such as

  • How can institutional learning and teaching support engage with the situated nature of TPACK and its development?
  • How can University-based systems and teaching practices be closer to, or better situated in, the teaching contexts experienced by pre-service educators?
  • How can the development of TPACK by teacher educators be made more social?
  • How can TPACK be shared with other teacher educators and their students?
  • Can the outputs of digital renovation practices by individual staff be shared?
  • How can institutions encourage innovation through digital renovation?
  • What are the challenges and benefits involved in encouraging digital renovation?

Most of these are questions that could be good candidates for a design-based research project. i.e. can you use these and other theories to design an intervention or change in practice?

Designing an intervention

This recent post is my attempt to answer at least this question from above:

How can institutional learning and teaching support engage with the situated nature of TPACK and its development?

It takes the distributive view of TPACK and the BAD mindset, and tries to envision some changes in practice/technology that might embody the principles from those theoretical ideas.

The idea is that being guided by those theoretical ideas makes it more likely that I can predict what can/should happen. I can justify the design of the intervention. I might be wrong, but it will hopefully be a better reason for the specific design approach than “because I wanted to”.

The ultimate aim of a DBR approach is to design, implement, and then test this design to see if it does achieve what I think it might.

Don’t forget the context. Don’t focus on the technology

My example above is very heavy in terms of technology and requires a fair amount of technical expertise. That’s because it is something that I’ve designed for my specific context. It makes sense (hopefully) within that context.

If I were someone else working (with less technical knowledge) in a different context (e.g. an outback school with no Internet connection), then the solution I would design would be different.

Putnam and Borko (2000) give a range of examples around teacher learning that aren’t heavily technology based. If there is no Internet connection, there might still be a high prevalence of mobile phones. If not, I might need to become a little more creative about using lower levels of digital technology.

In fact, if I were in a very low technology environment, I’d be actively searching the literature for insight and ideas about how other people have dealt with this problem. Almost certainly I wouldn’t be the first in the world.


Putnam, R. T., & Borko, H. (2000). What do new views of knowledge and thinking have to say about research on teacher learning? Educational Researcher, 29(1), 4–15.

What is theory and why use theories?

The following is an edited version of something used in a course I teach that’s currently hidden away in the LMS. I’m adding it here because I’m using it with another group of students.

It’s a quick attempt to fill what I perceive to be a reasonable hole for many education students. i.e. what exactly is a theory and why the hell would I want to use one? My impression is that not many of them have developed an answer to these questions that they are comfortable with.

This is a complex and deeply contested pair of questions. I’m assuming that if you lined up 50 academics you’d get at least 50 different sets of answers. My hope is that this is a useful start for some. Feel free to add your own pointers and answers to these questions.

If you want a more detailed look into the nature of theory then I can recommend Gregor (2006).

What is theory?

I take an inclusive and pragmatic view of theory.

An inclusive view, because there is a huge array of very different ideas that can be labelled theories. A pragmatic view, because the reason we use theories in this course is to make it easier to do something: to understand a particular situation, or, for most people reading this, to figure out how to design some use of digital technology to enhance or transform student learning.

Hirst (2012, p. 3) describes educational theory as

A domain of practical theory, concerned with formulating and justifying principles of action for a range of practical activities.

i.e. educational theory should help you teach and help your learners learn.

In the context of this particular course we touch on various ideas such as: the Computer Practice Framework, TPACK, Backwards Design, the RAT framework, the SAMR model, the TIP Model, constructivism, and many more. For the purposes of this course, we’ll call these things theories. They help with “formulating and justifying principles of action”.

There is huge variability in the purpose, validity, and approaches used to formulate and describe these objects called theories. A theory isn’t inherently useful, important, or even appropriate. That’s a judgement that you need to make.

A theory is just a model, and “all models are wrong, but some are useful” (Box, 1979).

Why use theories?

Thomas (1997, p. 78) cites Mouly (1978)

Theory is a convenience – a necessity, really – organizing a whole slough of facts, laws, concepts, constructs, principles into a meaningful and manageable form

These theories are useful because they help you understand, formulate and justify how and what to do. In this course, these theories will help you plan, implement, and evaluate/reflect upon the use of digital technologies to improve your teaching and your students’ learning.

Learning and teaching are difficult enough. When you add digital technologies to the mix, even more complexity arises. The theories we introduce in this course should hopefully help you make sense of this complexity, and guide you in understanding, planning, implementing, and evaluating your use of ICTs.


Gregor, S. (2006). The nature of theory in information systems. MIS Quarterly, 30(3), 611–642.

Hirst, P. H. (2012). Educational theory. In P. H. Hirst (Ed.), Educational Theory and Its Foundation Disciplines (pp. 3-29). Milton Park, UK: Routledge.

Thomas, G. (1997). What’s the use of theory? Harvard Educational Review, 67(1), 75–105.