Anyone capturing users’ post-adoptive behaviours for the LMS? Implications?

Jasperson, Carter & Zmud (2005)

advocate that organizations strongly consider capturing users’ post-adoptive behaviors, over time, at a feature level of analysis (as well as the outcomes associated with these behaviors). It is only through analyzing a community’s usage patterns at a level of detail sufficient to enable individual learning (regarding both the IT application and work system) to be exposed, along with the outcomes associated with this learning, that the expectation gaps required to devise and direct interventions can themselves be exposed. Without such richness in available data, it is unlikely that organizations will realize significant improvements in their capability to manage the post-adoptive life cycle (p. 549)

Are there any universities “capturing users’ post-adoptive behaviours” for the LMS? Or any other educational system?

There’s lots of learning analytics research (e.g. interesting stuff from Gasevic et al, 2015) going on, but most of that is focused on learning and learners. This is important stuff and there should be more of it.

But Jasperson et al (2005) are Information Systems researchers publishing in one of the premier IS journals. Are there University IT departments that are achieving the “richness in available data…(that) will realize significant improvements in their capability to manage the post-adoptive life cycle”?

If there is, what does that look like? How do they do it? What “expectation gaps” have they identified? What “direct interventions” have they implemented? How?

My experience suggests that this work is limited. I wonder what implications that has for the quality of system use, and thus the quality of learning and teaching?

What “expectation gaps” are going ignored? What impact does that have on learning and teaching?

Jasperson et al (2005) develop a “Conceptual model of post-adoptive behaviour” shown in the image below. Post-adoptive behaviours can include the decision not to use, or to change how to use. A gap in expectations that is never filled is not likely to encourage on-going use.

They also identify that there is an “insufficient understanding of the technology sensemaking process” (p. 544). The model suggests that technology sensemaking is a precursor to “user-initiated learning interventions”, examples of which include: formal or informal training opportunities; accessing external documentation; observing others; and, experimenting with IT application features.

Perhaps this offers a possible explanation for complaints about academics not using the provided training/documentation for institutional digital learning systems? Perhaps this might offer some insight into the apparent “low digital fluency of faculty” problem.

conceptual model of post-adoptive behaviours

References

Gašević, D., Dawson, S., Rogers, T., & Gasevic, D. (2015). Learning analytics should not promote one size fits all: The effects of instructional conditions in predicting learning success. The Internet and Higher Education, 28, 68–84. doi:10.1016/j.iheduc.2015.10.002

Jasperson, S., Carter, P. E., & Zmud, R. W. (2005). A Comprehensive Conceptualization of Post-Adoptive Behaviors Associated with Information Technology Enabled Work Systems. MIS Quarterly, 29(3), 525–557.

The CSCW view of Knowledge Management

Earlier this week I attended a session given by the research ethics folk at my institution. One of the observations was that they’d run training sessions but almost no-one came. I’ve heard similar observations from L&T folk, librarians, and just about anyone else aiming to help academics develop new skills. Especially when people spend time and effort developing yet another you beaut website or booklet that provides everything one would want to know about a topic. There’s also the broader trope developing about academics/teachers being digitally illiterate, which I’m increasingly seeing as unhelpful and perhaps even damaging.

Hence my interest when I stumbled across Ackerman et al (2013), a paper titled “Sharing knowledge and expertise: The CSCW View”, with the abstract

Knowledge Management (KM) is a diffuse and controversial term, which has been used by a large number of research disciplines. CSCW, over the last 20 years, has taken a critical stance towards most of these approaches, and instead, CSCW shifted the focus towards a practice-based perspective. This paper surveys CSCW researchers’ viewpoints on what has become called ‘knowledge sharing’ and ‘expertise sharing’. These are based in an understanding of the social contexts of knowledge work and practices, as well as in an emphasis on communication among knowledgeable humans. The paper provides a summary and overview of the two strands of knowledge and expertise sharing in CSCW, which, from an analytical standpoint, roughly represent ‘generations’ of research: an ‘object-centric’ and a ‘people-centric’ view. We also survey the challenges and opportunities ahead.

What follows are a summary and some thoughts on the paper.

Thoughts? Possibilities?

The paper’s useful in that it appears to give a good overview of the work from CSCW on this topic. Relevant to some of the problems being faced around digital learning.

All this is especially interesting to me due to my interest in exploring the design and impact of distributed means of sharing knowledge about digital learning.

Look at Cabitza and Simone (2012) – two levels of information, and affording mechanisms – as informing design. Their work on knowledge artifacts (Cabitza et al, 2008) might also be interesting.

Brown and Duguid’s (2000) Network of Practice is a better fit for what I’m thinking here.

CSCW has a tendency to precede development with ethnographic studies.

Learning object repositories?

Given the fairly scathing findings re: the idea of repositories, what does this say about current University practices around learning object repositories?

Is digitally illiterate a bad place to start?

The “sharing expertise” approach would appear to assume that the people you’re trying to help have knowledge to share. Labeling teachers as digitally illiterate would appear to mean you couldn’t even conceptualise this as a possibility. Is this a core problem here?

The shift from system to individual practice

At some level the shift in the CSCW work illustrates a shift from focusing on IT systems to a focus on individual practices. The V&R mapping process illustrates some of this.

Context and embedding is important

Findings reinforce the contextual and situated nature of knowledge (is that a bias from the assumptions of these researchers?). Does this explain many of the problems currently being faced? i.e. what’s being done at the moment is neither contextual nor situated? Would addressing this improve outcomes?

Summary

A topic dealt with by different research communities (Information Systems, CSCL, Computer Science) each with their particular focus and limitations. e.g. CS has developed interesting algorithms but “Empirical explorations into the practice of knowledge-intense work have been typically lacking in this discourse” (p. 532).

The CSCW strength has been “to have explored the relationship between innovative computational artifacts and knowledge work – from a micro-perspective” (p. 532)

Uses two different terms that “connote CSCW’s spin on the problem” i.e.

that knowledge is situated in people and in location, and that the social is an essential part of using any knowledge…far more useful systems can be developed if they are grounded in an analysis of work practices and do not ignore the social aspects of knowledge sharing. (p. 532)

  1. Knowledge sharing – knowledge is externalised so that it can be captured/manipulated/shared by technology.
  2. Expertise sharing – where the capability/expertise to do work is “based on discussions among knowledgeable actors and less significantly supported by a priori externalizations”

Speak of generations of knowledge management

  1. Repository models of information and knowledge.
    Ignoring the social nature of knowledge, focused on externalising knowledge.
  2. Sharing expertise
    Tying communication among people into knowledge work. Either through identifying how best to “find” who has the knowledge or through creating online communities to allow people to share their knowledge – expertise finders, recommenders, and collaborative help systems.
    Work later scaled to Internet size systems and communities – collectives, inter-organisational networks etc.

Repository model

started with attempts “to build vast repositories of what they knew” (p. 533).

it should be noted that CSCW never really accepted that this model would work in practice (p. 534)…Reducing the richness of collective memory to specific information artifacts was utopian (p. 537)

Findings from various CSCW repository studies

  • Standard issues with repository systems

    particularly difficulty with motivating users to author and organize the material and to maintain the information and its navigation

  • Context is important.

    Some systems tackled the problem of context by trying to channel people to expertise that was as local as possible based on the assumption that “people nearby an asker would know more about local context and might be better at explaining than might experts”.

    Other research found “difficulties of reuse and the organisation of the information into repositories over time, especially when context changed…showed that no organisational memory per se existed; the perfect repository was a myth” (p. 534)

  • Need to embed.

    such a memory could be constructed and used, but the researchers also found they needed to embed both the system and the information in both practice and in the organizational context

  • situated and social.

    CSCW in general has assumed that understanding situated use was critical to producing useful, and usable, systems (Suchman 1987; Suchman and Wynn 1984) and that usability and usefulness are social and collaborative in nature (p. 537)

  • deviations seen as useful

    Exceptions in organizational activities, instead of being assumed to be deviations from correct procedures, were held to be ‘normal’ in organizational life (Suchman 1983) and to be examined for what they said about organizational activity, including information handling (Randall et al. 2007; Schmidt 1999) (p. 537)

  • issues in social creation, use, and reuse of information.
    Including:

    • issues of motivation,
      Getting information is hard. Aligning reward structures a constant problem. The idea of capturing all knowledge clashed with a range of factors, especially in competitive organisational settings.
    • context in reuse,
      “processes of decontextualisation and recontextualisation loomed over the repository model” (p. 538). “This is difficult to achieve, and even harder to achieve for complex problems” (p. 539).
    • assessments of reliability and authoritativeness,
      de/recontextualisation is social/situated. Information is assessed based on: expertise of the author, reliability, authoritativeness, quality, understandability, the provisional/final nature of the information, obsolescence and completeness, is it officially vetted?
    • organizational politics, maintenance, and
      “knowledge sharing has politics” (p. 539). Who is and can author/change information impacts use. Categories/metadata of/about data have politics.
    • reification
      “repository systems promote an objectified view of knowledge” (p. 540)

Repository work has since been commercialised.

Some of this work is being re-examined/done due to new methods: machine learning and crowd-sourcing.

Boundary objects – “critical to knowledge sharing. Because of their plasticity of meaning boundary objects serve as translation mechanisms for ideas, viewpoints, and values across otherwise difficult to traverse social boundaries. Boundary objects are bridges between different communities of practice (Wenger 1998) or social worlds (Strauss 1993).” (p. 541)

“information objects that have meaning on both sides of an intra-organisational or inter-organisational boundary”.

CSCW tended to focus on “tractable information processing objects” (p. 542) – forms etc. – easier to implement but “over-emphasis on boundary objects as material artifact, which can limit the analytical power that boundary objects bring to understanding negotiation and mediation in routine work”

Example – T-Matrix – supporting production of a tire and innovation.

Cabitza and Simone (2012) identify two levels of information

  1. awareness promoting information – current state of the activity
  2. knowledge evoking information – triggering previously acquired knowledge or triggering/supporting learning and innovation

Also suggest “affording mechanisms”

Other terms

  1. “boundary negotiating” objects
    Less structured ideas of boundary objects suggested
  2. knowledge artifacts – from Cabitza et al (2013)

    a physical, i.e., material but not necessarily tangible, inscribed artifact that is collaboratively created, maintained and used to support knowledge-oriented social processes (among which knowledge creation and exploitation, collaborative problem solving and decision making) within or across communities of practice…. (p. 35)

    These are inherently local, remain open for modification. Can stimulate socialisation and internalisation of knowledge.

common information spaces – common central archive (repository?) used by distributed folk. Open and malleable by nature. A repository is closed/finalised, a CIS isn’t. Various work to make the distinction – e.g. degrees of distribution; kinds of articulation work and artifacts required, the means of communication, and the differences in frames of participant reference.

Various points made as to the usefulness of this abstraction.

Assemblies

  • Assembly – “denote an organised collection of information objects”
  • Assemblages – “would include the surrounding practices and culture around an object or collection” (p. 545)

How assemblies are put together and their impacts is of interest.

Sharing expertise

Emphasis on interpersonal communications over externalisation in IT artifacts. “ascribed a more crucial role to the practices of individuals” (p. 547). A focus on sharing tacit knowledge – including contextual knowledge.

tacit/explicit – Nonaka’s mistake – explicit mention of the misinterpretation of Polanyi’s idea of tacit knowledge. The mistaken assumption/focus was on making tacit knowledge explicit, when Polanyi used tacit to describe knowledge that is very hard, if not impossible, to make explicit.

Tacit knowledge can be learned only through common experiences, and therefore, contact with others, in some form, is required for full use of the information. (p. 547)

Community of practice – can “roughly be defined as a group that works together in a certain domain and whose members share a common practice”.

Network of practice (from Brown and Duguid, 2000) – members do not necessarily work together, but work on similar issues in a similar way.

Community of Interest – defined by common interests, not common practice. Diversity is a source of creativity and innovation.

I like this critique of the evolution of use of CoP

Intrinsically based in their view of ‘tacit knowledge,’ the Knowledge Management community appropriated CoP in an interventionist manner. CoPs were to be cultivated or even created (Wenger et al. 2002), and they became fashionable as ‘the killer application for knowledge management practitioners’ (Su and Wilensky 2011, p. 10) with supposedly beneficial effects on knowledge exchange within groups. (p. 547)

CSCW didn’t use CoPs in an interventionist way – instead as an analytical lens.

Social capital – from Bourdieu – “refers to the collective abilities derived from social networks”. Views sharing “in the relational and empathic dimension of social networks” (p. 548).

Nahapiet and Ghoshal (1998) suggest it consists of 3 dimensions

  1. Structural opportunity (‘who’ shares and ‘how’);
    Which is where the technical enters the picture.
  2. Cognitive ability (‘what’ is shared);
  3. Relational motivation (‘why’ and ‘when’ people engage)

Latter 2 dimensions not often considered by system designers.

The sharing approach places emphasis on “finding-out” work. Where knowledge is found by knowing/asking others and in finding the source, de-contextualising and then re-contextualising. Often involves “local knowledge” – which tends to have an emergent nature. What’s important is only known in the situation at hand and who holds it evolves within a concrete situation.

People finding and expertise location

Move from focusing on representations of data to the interactions between people – trying to produce and modify them. Tackling technical, organisational and social issues simultaneously.

Techniques include: information retrieval, network analysis, topics of interest, expertise determination.

Profile construction can be contentious – privacy, identification of expertise. Especially given “big data” approaches to analysing and identification.

Expertise finding’s 3 stages: identification, selection, escalation.

Need to promote awareness of individual expertise and their availability – “based in ‘seeing’ others’ activities” (p. 551)

“people prefer others with whom they share a social connection to complete strangers” (p. 553) – no surprise there – but people known directly weren’t chosen as they were deemed not likely to have any greater expertise. Often people who were 2 or 3 degrees of separation away.

Profiles also found by one study to be often out of date. Explored “peripheral awareness” as a solution.

Open issues

  • Development of personal profiles.
  • Privacy and control.
  • Accuracy.

Finding others

Lots of work outside CSCW.

CoIs in the form of web Q&A communities have arisen on the Internet. With research that has studied question classification, answer quality, user satisfaction, motivation and reputation.

Motivation

  • more money = more answers, but not necessarily better quality.
  • charitable contributions increased credibility of answers “in a nuanced way”?
  • Altruism and reputation building two important motivations

Recent research looking at “social Q&A” – how people use social media to answer – two lines of research (echoing above)

  1. social analysis of existing systems;
    Looking at: impact of tie strength on answer quality, org setting, response rates when asking strangers – especially with quick, non-personal answers, community size and contact rate.
  2. technical development of new systems

Future directions

Interconnected practices: expertise infrastructures

Increasing inter-connectedness

  • may cause “experts” to become anonymous.
  • propel new types of interactions via micro-activities – microtasking environments make it easy/convenient to help
  • Collaboratively constructed information spaces – Wikipedia – numerous papers examine how it was constructed, including work looking more broadly at wikis
  • Other research looked at github, mozilla bug reports etc.
  • And work looking at social media, microblogging etc and its use.

References

Ackerman, M. S., Dachtera, J., Pipek, V., & Wulf, V. (2013). Sharing Knowledge and Expertise: The CSCW View of Knowledge Management. Computer Supported Cooperative Work (CSCW), 22(4-6), 531–573. doi:10.1007/s10606-013-9192-8

Re-purposing V&R mapping to explore modification of digital learning spaces

Why?

Apparently there is a digital literacy/fluency problem with teachers. The 2014 Horizon Report for Higher Education identified the “Low Digital Fluency of Faculty” as the number 1 “significant challenge impeding higher education technology adoption”. In the 2015 Horizon Report for Higher Education this morphs into “Improving Digital Literacy” being the #2 significant challenge. While the 2015 K-12 Horizon Report has “Integrating Technology in Teacher Education” as the #2 significant challenge.

But focusing solely on the literacy of the teaching staff seems a bit short sighted. @palbion, @chalkhands and I are teacher educators working in a digitally rich learning environment (i.e. a large percentage of our students are online only students). We are also fairly digitally fluent/literate. In a paper last year we explored how a distributive view of knowledge sharing helped us “overcome the limitations of organisational practices and technologies that were not always well suited to our context and aims”.

Our digital literacy isn’t a problem; we’re able to, and believe we have to, overcome the limitations of the environment in which we teach. Increasingly the digital tools we are provided by the institution do not match the needs of our learning designs and consequently we make various types of changes.

Often these changes are seen as bad. At best these changes are invisible to other people within our institution. At worst they are labelled as duplication, inefficient, unsafe, and feral. They are seen as shadow systems. Systems and changes that are undesirable and should be rooted out.

What?

Rather than continue this negative perspective, @palbion, @chalkhands and I have just finished a rough paper that set out to explore if there was anything valuable or interesting to learn from the changes we made to our digital learning spaces. Our process for this paper was

  1. Generate a list of stories of the changes we made to our digital learning/teaching spaces.
    Using a Google doc and a simple story format (descriptive title; what change was made; why; and, outcomes) each of us generated a list of stories of where we’d changed the digital tools/spaces we use for our teaching.
  2. Map those stories using a modified Visitor and Resident mapping approach.
    The stories needed to be analysed in some way. The Visitors & Residents approach offered a number of advantages – more detail below.
  3. Reflect upon what that analysis showed and about potential future applications of this approach.

What follows is some reflection on the approach, a description of the original V&R map, and a description and example of our modified V&R map.

Reflection on the approach

In short, we (I think I can say we) found the whole approach interesting and could see some potential for broader use. In particular, the potential benefits of the approach include:

  1. Great way to start discussions and share knowledge.
    Gathering stories and analysing them using the V&R process appear to be very useful ways for starting discussions and sharing knowledge. Not the least because it starts with people sharing what they are doing (trying to do) now, rather than some mythical ideal future state.
    Reports from others using the original V&R mapping process suggest this is a strength of the V&R mapping approach. Our experience seems to suggest this might continue with the modified map we used.
  2. Doesn’t start by assuming that people are illiterate.
    Neither @palbion nor I think we’re digitally illiterate. We have formal qualifications in Information Technology (IT). @chalkhands doesn’t have formal qualifications in IT. Early on in this process she was questioning whether or not she had anything to add. She wasn’t as “literate” as @palbion and I. However, as we started sharing stories and mapping them that questioning went away.
    The V&R approach is very much based on the idea of focusing on what people do, rather than who they are or what they know (or don’t). It doesn’t assume teaching staff are digitally illiterate and is just interested in what people do. I think this is a much more valuable starting point for engaging in this space. It appears likely to provide a method for helping universities follow observations from the 2015 Horizon Report: that solving the “digital literacy problem” requires “individual scaffolding and support along with helping learners as they manage conflict between practice and different contexts”; that “Understanding how to use technologies is a key first step, but being able to leverage them for innovation is vital to fostering real transformation in higher education”; and “that programs with one-size-fits-all training approaches that assume all faculty are at the same level of digital literacy pose a higher risk of failure.”
  3. It accepts that the ability for people to change digital technologies is not only ok, it is necessary and unavoidable.
    Worthen (2007) makes the point that those in charge of institutional IT (including digital learning spaces) want to prevent change while the people using digital systems want the technology to change

    Users want IT to be responsive to their individual needs and to make them more productive. CIOs want IT to be reliable, secure, scalable, and compliant with an ever increasing number of government regulations

    Since the CIOs are in charge of the technology (they have the power) the practice of changing digital systems (without having gone through the approved governance processes) is deemed bad and something to be avoided. This is a problem, especially in learning and teaching, if you accept Shulman’s (1987) identification of the “knowledge base of teaching” as lying (emphasis added)

    at the intersection of content and pedagogy, in the capacity of a teacher to transform the content knowledge he or she possesses into forms that are pedagogically powerful and yet adaptive to the variations in ability and background presented by the students (p. 15)

The original V&R map

The original V&R map (example in the image below) is a Cartesian graph with two axes. The X-axis ranges from visitor to resident and describes how you perceive and use digital technologies. A visitor sees a collection of disparate tools that are fit for specific purposes. When something has to be done the visitor selects the tool, gets the job done, and leaves the digital space leaving no social trace. A resident, on the other hand, sees a digital space where they can connect and socialise with others. The Y-axis ranges from Institutional to Personal and describes where use of digital technologies sits on a professional or personal scale.

The following map shows someone for whom LinkedIn is only used for professional purposes. So it’s located toward the “Institutional” end of the Y-axis. Since LinkedIn is about leaving a public social trace for others to link to, it’s located toward the “Resident” end of the X-axis.

Our modified V&R map

Our purpose was to map stories about how we had changed digital technologies within our role as teacher educators. Thus the normal Institutional/Personal scale for the Y-axis doesn’t work. We’re only considering activities that are institutional in purpose. In addition, we’re focusing on activities that changed digital technologies. We’re interested in understanding the types of changes that were made. As a result we adopted a “change scale” as the Y-axis. The scale was adapted from software engineering/information systems research and is summarised in the following table.

| Item | Description | Example |
| --- | --- | --- |
| Use | Tool used with no change | Add an element to a Moodle site |
| Internal configuration | Change the operation of a tool using the configuration options of the tool | Change the appearance of the Moodle site with course settings |
| External configuration | Change the operation of a tool using means external to the tool | Inject CSS or JavaScript into a Moodle site to change its operation |
| Customization | Change the tool by modifying its code | Modify the Moodle source code, or install a new plugin |
| Supplement | Use another tool(s) to offer functionality not provided by existing tools | Implement course-level social bookmarking by requiring use of Diigo |
| Replacement | Use another tool to replace/enhance functionality provided by existing tools | Require students to use external blog engines, rather than the Moodle blog engine |
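The “external configuration” level is perhaps the least familiar. As a rough sketch of what it involves (the selector and style rule below are invented for illustration; Moodle’s real class names vary by theme and version), a user script might inject CSS into a Moodle page along these lines:

```javascript
// A hedged sketch of "external configuration": changing a Moodle page's
// appearance from outside the tool (e.g. via a user script), rather than
// through the tool's own settings.
function buildInjectedCss(rules) {
  // rules: { selector: { property: value, ... } } -> a CSS string
  return Object.entries(rules)
    .map(([selector, props]) =>
      `${selector} { ` +
      Object.entries(props).map(([p, v]) => `${p}: ${v};`).join(" ") +
      " }")
    .join("\n");
}

// Hypothetical rule: constrain the width of course sections for readability.
const css = buildInjectedCss({
  ".course-content .section": { "max-width": "60em" },
});

// In a browser/user script this would be appended to the page
// (guarded so the sketch also runs outside a browser).
if (typeof document !== "undefined") {
  const style = document.createElement("style");
  style.textContent = css;
  document.head.appendChild(style);
}
```

The point is that nothing in the tool itself changes; the modification lives entirely outside it, which is what makes this level of change invisible to the institution.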

Since we were new to the V&R mapping process and were trying to quickly do this work without being able to meet, some additional scaffolding was placed on the X-axis (visitor-resident). This provided some common level of understanding of the scale and was based on a specific (and fairly limited) definition of “social trace”. The lowest level of the scale was “tools used by teachers” which meant no social trace. The scale gradually increased the number of people involved in the activities mediated by the digital technology. “Subsets of students in a course” to “All students in a course” and right on up to “Anyone on the open web”.
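Pulling the two axes together, a minimal sketch in JavaScript may make the mapping concrete (the array values paraphrase the change scale and audience scaffolding described here; the field names and function are my own invention, not part of our process):

```javascript
// The Y-axis "change scale" and the scaffolded X-axis as data
// (paraphrased from the post; not code we actually used).
const CHANGE_LEVELS = [
  "use", "internal configuration", "external configuration",
  "customization", "supplement", "replacement",
];

const AUDIENCE = [ // visitor -> resident, by who the activity touches
  "tools used by teachers",
  "subsets of students in a course",
  "all students in a course",
  "anyone on the open web",
];

// Map a story (title plus the two judgements made about it) to an
// (x, y) point on the modified V&R map.
function mapStory(story) {
  const y = CHANGE_LEVELS.indexOf(story.change);
  const x = AUDIENCE.indexOf(story.audience);
  if (x < 0 || y < 0) throw new Error("unknown change level or audience");
  return { title: story.title, x, y };
}

// e.g. a story only the teacher uses, built by replacing existing tools,
// sits at the visitor end of X and the "replacement" end of Y.
const example = mapStory({
  title: "Know thy student",
  change: "replacement",
  audience: "tools used by teachers",
});
```

Each of us effectively did this by hand: one judgement per axis per story, then a dot on the shared map.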

The following image is the “template” map that each of us used to map out our stories of changing digital technologies.

Modified V&R map template

An example map and stories

The following image is the outcome of mapping my stories of change. A couple of example stories are included after the image.

My V&R change map

Know thy student

This story involves replacing/supplementing existing digital tools, but is something that only I use. Hence Visitor/Replacement.

What? A collection of Greasemonkey scripts, web scraping, and a local database/server designed to help me know my students and what they were doing in the Study Desk. Wherever there is a Moodle user profile link in Moodle, the script will add a [ details ] link that is specific to each user. If I click on that link I see a popup window with a range of information about the student.

Why? Because finding out this information about a student would normally take 10+ minutes and require the use of multiple different web pages in two different systems. Many of these pages don’t exactly make it easy to see the information. Knowing the students better is a core part of improving my teaching.

Outcomes? It’s been a godsend. Saving time and enabling me to be more aware of student progress.
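For the curious, a rough sketch of how such a script might work. The local server URL is hypothetical, and the profile URL pattern is an assumption about how Moodle links to user profiles (`.../user/view.php?id=NNN` in the versions I’ve seen); the real scripts do considerably more than this:

```javascript
// Hypothetical aggregation server holding the scraped student information.
const DETAILS_SERVER = "http://localhost:8080/student";

// Pull the user id out of a Moodle profile URL, or return null if the
// link isn't a profile link.
function detailsUrlFor(profileHref) {
  const match = profileHref.match(/\/user\/view\.php\?.*\bid=(\d+)/);
  return match ? `${DETAILS_SERVER}?id=${match[1]}` : null;
}

// In the browser (a no-op outside one), decorate every profile link on
// the page with a "[ details ]" link pointing at the local server.
if (typeof document !== "undefined") {
  for (const a of document.querySelectorAll("a[href*='/user/view.php']")) {
    const url = detailsUrlFor(a.href);
    if (!url) continue;
    const details = document.createElement("a");
    details.href = url;
    details.textContent = " [ details ] ";
    details.target = "_blank"; // open the student summary separately
    a.after(details);
  }
}
```

The heavy lifting sits in the local server, which stitches together what would otherwise require visits to multiple pages across two systems.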

Using links in student blog posts

A fairly minor example of change. There’s a question of whether it’s just “use” or “internal configuration”? After all, it’s just using an editor on a web page to create some HTML. It was bumped up to “internal configuration” because of an observation that hyperlinks were not often used by many teachers. Something I’m hoping that @beerc will test empirically.

What? Some comments I write on student blog posts will make use of links to offer pointers to relevant resources.

Why? It’s more useful/easy to the students to have the direct link. Hence more likely to make use of the suggestion.

Outcomes? Minor anecdotal positive comments. Not really known

Early indications and reflection

The change scale worked okay but could use some additional reflection. In particular we raised some questions about whether many of the “replacement” examples of change (including those in my map above) are actually examples of supplement.

On reflecting on all this we made some initial observations, including

  1. Regardless of perceived levels of digital literacy we all engaged in a range of changes to digital technologies.
  2. Not surprisingly, the breadth/complexity of those changes increased with greater digital literacy.
  3. In the end very few of our changes were “replacement”. Almost all were focused more on overcoming perceived shortcomings with the provided tools, rather than duplicating their functionality.
  4. Most of the changes tended to congregate towards the “visitor” end of the X-axis. Not surprising given that none of the digital technologies provided by the institution are on the open web.
  5. Almost all of the stories that involved “replacement” were based on moving out onto the “open web”. i.e. they were all located toward the “resident” end of the X-axis.
  6. Changes were being made due to two main reasons: improving the efficiency of institutional systems or practices; or, customising digital technologies to fit the specific learning activities we wanted to implement.

Are our institutions digital visitors? What are the impacts on learning and teaching?

As it happens, we’ve been talking and thinking about the Visitor/Resident typology (White & Cornu, 2011) the last couple of weeks. The network gods have been kind, because overnight a post titled “The resident web and its impact on the academy” (Lanclos & White, 2015) floated across my Twitter stream. Much food for thought.

It has me wondering

Are universities digital visitors? If so, what impact is this having on learning and teaching?

Update: more reading and thinking has led to the addition of a section “Branding pushing out social traces”.

Residents and visitors

White & Cornu (2011) describe visitors as those that

understand the Web as akin to an untidy garden tool shed. They have defined a goal or task and go into the shed to select an appropriate tool which they use to attain their goal…Visitors are unlikely to have any form of persistent profile online which projects their identity into the digital space

White & Cornu (2011) describe residents as those that

see the Web as a place, perhaps like a park or a building in which there are clusters of friends and colleagues whom they can approach and with whom they can share information about their life and work. A proportion of their lives is actually lived out online where the distinction between online and off–line is increasingly blurred. Residents are happy to go online simply to spend time with others and they are likely to consider that they ‘belong’ to a community which is located in the virtual…To Residents, the Web is a place to express opinions, a place in which relationships can be formed and extended.

How Universities think about digital learning spaces

@damoclarky and I argued that institutional digital learning is informed by the SET mindset. A mindset that approaches any large, complex problem (like digital learning) with a Tree-like approach. That is, it employs logical decomposition to break the large problem into smaller and smaller problems until there is a collection of solvable problems that can be allocated to individual units. The units then solve the problems (largely) independently, and each of the small solutions is joined back together to (hopefully) solve the original big problem.

You can see evidence of this tree-like perspective all over our institutions and the digital learning spaces they produce.

The institutions themselves are divided into hierarchical organisational structures.

What the institution teaches is divided up into a hierarchical structure consisting of programs (degrees), majors, courses, semesters, weeks, lectures, and tutorials.

And more relevant to this argument, the institutional, digital learning space is divided up into separate tools.

At my institution those separate tools include, but are not limited to:

  • the staff/student portal;
  • the Learning Management System;
    In the case of my institution that’s Moodle. Moodle (like many of these systems) is structured into a tree-like collection of modules. The “M” in Moodle stands for Modular.
  • the eportfolio system;
  • the learning object repository system;
  • the library system;
  • the gradebook (Peoplesoft); etc….

Each tool is designed to serve a particular goal, to help complete a specific task.

Hence the tendency for people to see these digital learning spaces “as akin to an untidy garden tool shed” where when they want to do something they “go into the shed to select an appropriate tool which they use to attain their goal” (White & Cornu, 2011).

This collection of separate tools is not likely to be seen as a “place, perhaps like a park or a building in which there are clusters of friends and colleagues whom they can approach and with whom they can” (White & Cornu, 2011) learn.

Of course, there is some awareness of this problem, which leads to a solution.

Brand as unifying solution

Increasingly, the one solution that the corporate university seems able to provide for this “untidy garden tool shed” problem is branding. The idea being that if all the tools use the same, approved, corporate brand then all will be ok. It will be seen as an institutional learning space. With the emphasis explicitly on the institution. It is the institution’s brand that is used to cover the learning space, not the learners’ and not the teachers’. I see some problems with this.

First, is the observation made by Lanclos and White (2015) in the context of the resident web and the academy

scholars will gain a form of currency by becoming perceived as “human” (the extent to which ‘humanness’ must be honest self-expression or could be fabricated is an interesting question here) rather than cloaked by the deliberately de-humanised unemotive academic voice.

In this context the problem isn’t so much the “de-humanised unemotive academic voice” as it is the stultifying stripping of individuality on the altar of institutional identity. It doesn’t matter whether you’re learning engineering, accounting, teaching or anything else. It’s the institution and how it wishes to project itself that matters.

Which creates the second problem for which one of my institution’s documents around a large institutional digital learning project provides a wonderful exemplar.

Can you have a digital learning experience that is consistent, brand enhancing, and optimal for each student? I tend to think not. Especially in light of arguments that the diversification and massification of the student body has led universities to shift their education rhetoric from a notion of “one size fits all” to a concept of tailored, flexible learning (Lewis, Marginson et al. 2005).

My current experience is that instead of getting digital learning spaces that support tailored and flexible learning, institutions are more likely to create learning spaces that “have less variety in approach than a low-end fast-food restaurant” (Dede, 2008, p. 58).

Brand pushing out social traces

The visitors/residents typology (White and Cornu, 2011) is particularly interested in whether or not people are leaving social traces of themselves online as they interact with digital learning spaces (well, it is actually focused on the participatory web, but I’ll narrow it a bit). Does the “consistent…brand enhancing” approach to institutional digital learning spaces limit the likelihood of social traces being left? Can an institutional digital learning space be seen as a place people will want to reside within when it’s branded?

It would seem obvious that such a branded space couldn’t be seen as “my space”, especially for students. But what about the impact on teachers? Many teachers – for better or worse – like to customise the learning space, not only for the needs of the students but also to project their personality. Can this be done in a branded digital space?

Impact on learning?

The above points to an institutionally provided (and sometimes mandated) digital learning space that is more likely to resemble a consistently branded, untidy garden tool shed. A space that learners and teachers are unlikely to perceive as one they would wish to inhabit. Instead, it’s more likely to encourage them to see the learning space as a place to visit, complete a task, and leave ASAP. Which would appear likely to negatively impact engagement and learning.

It would also appear likely to be a perception that is not going to help institutions address a pressure identified by Lanclos and White (2015)

The academy can no longer simply serve its own communities in the context of the networked Web, and it is under increasing cultural pressure to reach out and appear relevant.

References

Dede, C. (2008). Theoretical perspectives influencing the use of information technology in teaching and learning. In J. Voogt & G. Knezek (Eds.), International Handbook of Information Technology in Primary and Secondary Education (pp. 43–62). New York: Springer.

Lewis, T., S. Marginson, et al. (2005). “The network university? Technology, culture and organisational complexity in contemporary higher education.” Higher Education Quarterly 59(1): 56-75.

White, D., & Le Cornu, A. (2011). Visitors and Residents: A new typology for online engagement. First Monday, 16(9). doi:10.5210/fm.v16i9.3171

What is “netgl” and how might it apply to my problem

At least a couple of the students in a course I help out with are struggling a little with Assignment 2 which asks them “to develop a theory-informed plan for using NGL to transform your teaching (very broadly defined) practice”.

The following is a collection of bits of advice that will hopefully help. Littered throughout are also some examples from my own practice.

NGL != social media

Networked and Global Learning (NGL/netgl) should not be interpreted to mean the use of social media. In the course we use blogs, Diigo, feed readers etc. as the primary form of NGL practice, and in the past this has led folk to think that NGL equates to the use of social media.

Just because we used blogs, Diigo, and feed readers, that doesn’t mean you should. You should use whatever is appropriate to your problem and your context.

What is NGL?

Which raises the question: if NGL is not just social media, what is it?

As I hope was demonstrated in the first two-thirds of the course there is no one definition of NGL. There are many different views from many different perspectives.

The first week’s material had a section on networked learning that included a few broad definitions. I particularly like the Goodyear et al (2014) quote that includes

learning networks now consist of heterogeneous assemblages of tasks, activities, people, roles, rules, places, tools, artefacts and other resources, distributed in complex configurations across time and space and involving digital, non-digital and hybrid entities.

The course material also covers more specific conceptions of NGL. e.g. connectivism gets a mention in week 1, as does public click pedagogy.

Week 3 mentions groups, networks, collectives and communities; the idea of network learning as a 3rd generation of pedagogy; and some historical origins of network learning.

What’s your problem?

“It’s all overwhelming” is a common refrain I’m hearing. Understanding that there is a range of different views of NGL probably isn’t going to help. That’s one of the reasons why Assignment 2 is intended to use a design-based research (DBR) approach, i.e. (emphasis added)

a particular approach to research that seeks to address practical problems by using theories and other knowledge to develop and enhance new practices, tools and theories.

At some level DBR can help narrow your focus by asking you to focus on a practical problem. A problem near and dear to your heart and practice.

Of course, “problems” in and around education are themselves likely to be complex and overwhelming. The example I give from my own practice – described initially as “university e-learning tends to be so bad” or “a bit like teenage sex” – is a big complex problem with lots of perspectives.

How do you reduce the big overwhelming problem to something that you can meaningfully address?

This is where the literature and theory(ies) enter the picture.

What might “theory informed” mean?

First, go and read a short post titled What is theory and why use theories?.

Adopting this broad and pragmatic view of theory, there are many ideas and concepts littered throughout this course (and many, many more outside) including, but not limited to: connectivism; connected learning; communities of practice; group, networks, collectives, and communities; threshold concepts etc. In understanding your problem, you are liable to draw upon a range more.

As per the short post theories are meant to be useful to you in understanding a situation or problem and then as an aid in formulating action.

Combining theories from NGL and your “problem”

The theories for assignment 2 aren’t limited just to theories from NGL. You should also use theories that are relevant to your problem.

You look around for how other people have conceptualised the problem and the approaches and theories that they have used. Do any of those resonate with you? Can you see any problems or limitations with the approaches used? Are there other theoretical lenses or just simple ways of understanding the problem that help narrow down useful avenues for action?

In terms of my problem with the perceived quality limitations of university e-learning, I’ve been using the TPACK framework for a while as one theoretical lens. TPACK is a fairly recent and broadly used theory for understanding the knowledge teachers require to design technology-based learning experiences. (Since all models are wrong, it has its limitations.)

Drawing on TPACK I wonder if the reason why university e-learning is so bad is because the TPACK (knowledge) being used to design, implement, and support it is insufficient. It needs to be improved.

Not an earth-shatteringly insightful or novel suggestion. But by focusing on TPACK it does suggest that perhaps I focus my attention for potential solutions within the TPACK-related literature, rather than elsewhere. Almost always there is more literature than anybody (especially in the context of a few weeks) can get their head around. So, for better or worse, you need to start drawing boundaries.

Now with a focus on TPACK it’s time to combine my personal experience with the theory and associated literature. My personal experience and context may also help focus my exploration. e.g. if I were working in a TAFE/VET context, I might start looking at the literature for mentions of TPACK in the TAFE/VET context (or just at TAFE/VET literature). Again, narrowing down the focus.

I might find that there’s nothing in the TAFE/VET context that mentions TPACK in conjunction with e-learning. This might highlight an opportunity to learn lessons from other contexts and test them out in the TAFE/VET context. Or there might already be some TPACK/TAFE/VET/e-learning literature that I can learn from.

In my case, as someone with relatively high TPACK, I get really annoyed when people think the main challenge is the “low digital fluency of faculty” (i.e. teaching staff). This gets me thinking that perhaps the problem isn’t going to be solved by focusing on developing the knowledge of teaching staff, i.e. requiring teaching staff to have formal teaching qualifications isn’t (I believe) going to solve the problem, so what is?

You want digitally fluent faculty?

This is potentially interesting because a fair chunk of existing practice assumes that formal teaching qualifications or the “right” professional development opportunities will help teaching staff develop the right TPACK and thus university e-learning will be fantastic. Being able to mount a counter to a prevailing orthodoxy might be interesting and useful. It might make a contribution. It might also identify a fundamental misunderstanding of a problem and a need to read and consider further.

In my case that led to an interest in (seeing a connection with) another theoretical idea, i.e. the distributive view of learning and knowledge. I do recommend Putnam & Borko (2000) as a good place to start learning about how the distributive view of knowledge and thinking can help situate teacher learning.

The combination of TPACK and the distributive view of learning appears to be useful, so we ended up using it in this paper to explore our experience with university e-learning. That work led to questions such as

  • How can institutional learning and teaching support engage with the situated nature of TPACK and its development?
  • How can University-based systems and teaching practices be closer to, or better situated in, the teaching contexts experienced by pre-service educators?
  • How can the development of TPACK by teacher educators be made more social?
  • How can TPACK be shared with other teacher educators and their students?
  • Can the outputs of digital renovation practices by individual staff be shared?
  • How can institutions encourage innovation through digital renovation?
  • What are the challenges and benefits involved in encouraging digital renovation?

Most of these are questions that could be good candidates for a design-based research project. i.e. can you use these and other theories to design an intervention or change in practice?

Designing an intervention

This recent post is my attempt to answer at least this question from above

How can institutional learning and teaching support engage with the situated nature of TPACK and its development?

It takes the distributed view of TPACK, the BAD mindset, and tries to envision some changes in practice/technology that might embody the principles from those theoretical ideas.

The idea is that being guided by those theoretical ideas makes it more likely that I can predict what can/should happen. I can justify the design of the intervention. I might be wrong, but it will hopefully be a better reason for the specific design approach than “because I wanted to”.

The ultimate aim of a DBR approach is to design, implement, and then test this design to see if it does achieve what I think it might.

Don’t forget the context. Don’t focus on the technology

My example above is very heavy in terms of technology and requires fairly substantial technical expertise. That’s because it is something that I’ve designed for my specific context. It makes sense (hopefully) within that context.

If I were someone else working (with less technical knowledge) in a different context (e.g. an outback school with no Internet connection), then the solution I would design would be different.

Putnam and Borko (2000) give a range of examples around teacher learning that aren’t heavily technology based. If there is no Internet connection, there might be a high prevalence of mobile phones. If not, I might need to become a little more creative about using low levels of digital technologies.

In fact, if I were in a very low technology environment, I’d be actively searching the literature for insight and ideas about how other people have dealt with this problem. Almost certainly I wouldn’t be the first in the world.

References

Putnam, R. T., & Borko, H. (2000). What do new views of knowledge and thinking have to say about research on teacher learning? Educational Researcher, 29(1), 4-15.

What is theory and why use theories?

The following is an edited version of something used in a course I teach that’s currently hidden away in the LMS. I’m adding it here because I’m using it with another group of students.

It’s a quick attempt to cover what I perceive to be a reasonable hole for many education students, i.e. what exactly is a theory and why the hell would I want to use one? My impression is that not many of them have developed an answer to these questions that they are comfortable with.

This is a complex and deeply contested pair of questions. I’m assuming that if you lined up 50 academics you’d get at least 50 different sets of answers. My hope is that this is a useful start for some. Feel free to add your own pointers and answers to these questions.

If you want a more detailed look into the nature of theory then I can recommend Gregor (2006).

What is theory?

I take an inclusive and pragmatic view of theory.

An inclusive view, because there is a huge array of very different ideas that can be labelled theories. A pragmatic view, because the reason we use theories in this course is to make it easier to do something: to understand a particular situation or, for most people reading this, to figure out how to design some use of digital technology to enhance or transform student learning.

Hirst (2012, p. 3) describes educational theory as

A domain of practical theory, concerned with formulating and justifying principles of action for a range of practical activities.

i.e. educational theory should help you teach and help your learners learn.

In the context of this particular course we touch on various ideas such as: the Computer Practice Framework, TPACK, Backwards Design, the RAT framework, the SAMR model, the TIP Model, constructivism, and many more. For the purposes of this course, we’ll call these things theories. They help with “formulating and justifying principles of action”.

There is huge variability in the purpose, validity, and approaches used to formulate and describe these objects called theories. A theory isn’t inherently useful, important, or even appropriate. That’s a judgement that you need to make.

A theory is just a model, and “all models are wrong, but some are useful” (Box, 1979).

Why use theories?

Thomas (1997, p. 78) cites Mouly (1978)

Theory is a convenience – a necessity, really – organizing a whole slough of facts, laws, concepts, constructs, principles into a meaningful and manageable form

These theories are useful because they help you understand, formulate and justify how and what to do. In this course, these theories will help you plan, implement, and evaluate/reflect upon the use of digital technologies to improve your teaching and your students’ learning.

Learning and teaching are difficult enough. When you add digital technologies to the mix, even more complexity arises. The theories we introduce in this course should hopefully help you make sense of this complexity and guide you in understanding, planning, implementing and evaluating your use of ICTs.

References

Gregor, S. (2006). The nature of theory in information systems. MIS Quarterly, 30(3), 611–642.

Hirst, P. H. (2012). Educational theory. In P. H. Hirst (Ed.), Educational Theory and Its Foundation Disciplines (pp. 3-29). Milton Park, UK: Routledge.

Thomas, G. (1997). What’s the use of theory? Harvard Educational Review, 67(1), 75-105.

Technology required by teachers to customise technology-enhanced units

This is the 2nd post (first here) looking at Instructional Science 43(2) on the topic of “Teachers as designers of technology enhanced learning”. This post looks at Matuk et al (2015).

In summary

  1. The claim is that the ability for teachers to customise is positive for learning.

    Teachers’ involvement in curriculum design is essential for sustaining the relevance of technology-enhanced learning materials. Customizing – making small adjustments to tailor given materials to particular situations and settings – is one design activity in which busy teachers can feasibly engage. Research indicates that customizations based in evidence from student work lead to improved learning outcomes (p. 229)

  2. Customisations by four middle/high school teachers are examined to see how these customisations were afforded
  3. Identified 4 types of customisations (the abstract says 3, but then proceeds to list these 4):
    • “devising timely instructional interventions to provide individualised guidance”
    • “planning activities and adjusting milestones to align with students’ progress”
    • “modifying existing materials to better integrate content into overall curriculum plans”
    • incorporating scaffolds to better address students’ needs
  4. Identified 3 technology features that support customisations
    • A system that logs student work for teachers’ inspection;
    • tools for conducting dynamic, formative assessment; and,
    • an authoring environment that supports re-design of units at multiple levels of granularity

In this paper, we argue that teachers’ effectiveness in customizing TEL materials also relies on the affordances of the tools available to them, particularly in their ability to make students’ ideas visible (p. 232)

Preliminary design principles “for flexibly adaptive curriculum materials based on the premise of making student work visible as evidence to inform teachers’ customizations” (p. 250)

  1. Provide an interface for browsing logged responses;
    i.e. display responses and revisions “and give teachers a persistent record of their students’ thinking”.
  2. Integrate scaffolds that make student thinking explicit;
    i.e. make students’ thinking processes visible to teachers to enable formative advice. Strong link here with learning process analytics (Lockyer et al, 2013)
  3. Provide technologies to monitor real-time progress;
  4. Offer flexible, accessible authoring tools that support testing and refinement.
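As a purely illustrative sketch of the first of these principles – a persistent, browsable log of student responses and their revisions – something like the following could sit underneath such an interface. The class and method names here are my own invention, not drawn from WISE or the paper:

```python
from dataclasses import dataclass
from datetime import datetime


@dataclass
class Response:
    """One logged student response to a unit step."""
    student: str
    step: str
    text: str
    timestamp: datetime


class ResponseLog:
    """Persistent record of student responses, browsable by student or step.

    Every revision is kept, so teachers can see how thinking changed over time.
    """

    def __init__(self):
        self._responses = []

    def record(self, student, step, text):
        """Log a response (or a revision of an earlier response)."""
        self._responses.append(Response(student, step, text, datetime.now()))

    def by_student(self, student):
        """All responses, including revisions, made by one student."""
        return [r for r in self._responses if r.student == student]

    def by_step(self, step):
        """Each student's latest response to a given step, for quick scanning."""
        latest = {}
        for r in self._responses:
            if r.step == step:
                latest[r.student] = r  # later records overwrite earlier ones
        return latest
```

A real system would persist this to a database and wrap it in a browsing interface, but the design point is simply that keeping every revision (rather than only the latest response) is what gives teachers “a persistent record of their students’ thinking”.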

Challenges for future technologies: a research and design agenda

  1. How do we design interfaces and real-time displays that make students’ logged data both accessible to, and usable by teachers?
  2. How can we make the underlying instructional framework transparent such that the curriculum materials themselves guide teachers’ customizations?
  3. How can authoring tools be designed that both take advantage of teachers’ expertise and respect their time?

Some of the findings echo some of the ideas from learning analytics, but more directly from a teacher perspective.

Questions

  1. The participants and context for this study were fairly limited. What types of customisations, and features to support customisations, might be identified by examining the work of other teachers in other contexts? Especially contexts that make significantly greater use of digital technologies (e.g. largely online university courses)?
  2. This paper appears to focus on teachers redesigning technology-enhanced “curriculum materials”, almost a content focus. What differences do you have to consider if you see digital technology as part of the learning space? As the environment in which learning occurs, not just the curriculum?
  3. The idea of educative curriculum materials – “curriculum materials with additional tools and resources to aid teachers in attending to changing classroom dynamics, reflecting on their practices, and seeking new approaches to solving problems” (p. 233) – resonates with the idea of Context Appropriate Scaffolding Assemblages (CASA), including the idea of a CASA that allows course designers (teacher educators) to annotate their digital learning spaces (course sites) with explanations and rationalisations behind the designs. Perhaps something useful for other teacher educators, but also for pre-service teachers (links to an idea that @palbion has previously mentioned).
  4. How does this paper’s purpose/context

    existing research establishes that technology can support teachers’ customizations. It also characterizes broad categories of the kinds of customizations teachers make. Still, little is known about the specific ways by which technology enables customizations, especially those based in students’ ideas.

    link and inform the purpose/context of the paper(s) we’re thinking of?
    Links somewhat back to questions #1 and #2. A different context and a broader notion of digital technologies. Also perhaps a focus more on the type of digital knowledge required of teachers. “Affordance” as in “affordance of a technology for customisation” is a relational term. It’s dependent on the functionality of the technology and the teacher’s capability to perceive and perform tasks with that functionality.

  5. The technology-enhanced units being customised here can be customised “without the need for programming skills” (p. 234). Might this not limit the type of customisations that teachers can undertake? Might not teachers with programming skills want to make different customisations, and thus require different affordances from the systems? The customisations identified in this paper are very dependent on the nature of the system and the affordances it offered. Would a more open system, combined with a teacher with programming skills, identify more and different customisations/technology features?
    Something that the authors identify later

    Our findings raise questions for future research about how teachers’ different prior knowledge of their students and of the subject matter, their individual skills with technology, and their personal orientations toward their roles as teachers and designers, influence their interpretations and responses to their students’ work. They also raise questions about how these interactions are manifested in teachers’ customizations. (p. 250)

  6. Is this observation

    A recent review of 30 technology-based inquiry-learning environments identified only eight, including WISE, that support teachers’ customizations (Donnelly et al. 2014) (pp. 234-235)

    indicative of a broader problem around digital technologies? i.e. they are generally not designed to be modified by teachers. There’s an aspect of that around the LMS, what about more broadly? How does this fit in with various perspectives about the (de-)professionalisation of teachers?

It’s all about putting the context back in

Reading the 4 types of customisation that were identified puts me in mind of the reusability paradox described as the tension between these two observations

  • “The more context a learning object has, the more (and the more easily) a learner can learn from it.”
  • “To make learning objects maximally reusable, learning objects should contain as little context as possible.”

And my current pet argument that the mindset underpinning the design and implementation of digital technologies for learning and teaching has a (strong) tendency to remove context and hence reduce pedagogical value.


What strikes me about the four customisations is that they are all about modifying the “technology-enhanced units” to insert more context, e.g. providing individualised guidance, aligning with students’ progress, better integrating content into overall curriculum plans, and better addressing students’ needs. All of these involve teachers modifying the “technology” to better respond to context.

Which resonates strongly with Shulman’s (1987) suggestion that

the key to distinguishing the knowledge base of teaching lies at the intersection of content and pedagogy, in the capacity of a teacher to transform the content knowledge he or she possesses into forms that are pedagogically powerful and yet adaptive to the variations in ability and background presented by the students (p. 15)

And also picks up a quote from this paper

The relationship between teachers and curriculum has been characterized as one between designers and their tools (Brown 2009). In designing curriculum, teachers combine available materials with their own knowledge and expertise to craft instructional experiences (Brown and Edelson 2003). (p. 232)

Animated gif of reusability paradox showing a trend to putting more context into the object

Summary

Introduction

The authors argue that

materials that yield to teachers’ modifications better respond to the classroom’s changing needs, constraints, and resources…research finds that teachers who attend to students’ ideas design more effective instruction and formative feedback (Black and Wiliam 2010) (p. 230)

But the various constraints of the classroom setting mean that

their customization decisions tend to be driven by issues of practicality and feasibility (Boschman et al. 2014) rather than by evidence from students’ ideas

Reasons why materials are changed and how are outlined with some supporting references. Labelled as curriculum customizations (Brown and Edelson, 2003). Largely guided by experience, practicalities etc.

Customisation may be a process of differentiation leading to learning gains. “This process demands a degree of expertise” (p. 231). “Customisations based in students’ ideas have been shown to lead to improved learning outcomes (Ruiz-Primo and Furtak, 2007)…how teachers understand their students’ thinking also influences the kinds of customizations they make” (p. 232)

The role of technology in supporting customisation

The relationship between teachers and curriculum has been characterized as one between designers and their tools (Brown 2009)…Thus, by understanding how teachers use tools to aid their practice, we can further define their facilitating roles. (p. 232)

Apparently Schwartz et al (1999) make a point about the need to provide flexibly adaptive materials that can support teacher customisation without losing integrity, which brings up the interesting point

because whereas teachers’ adaptations of materials to local conditions can sometimes lead to improved student learning, it is also possible that they deviate from the intended value of the innovation (p. 232)

TEL materials can afford/guide customisations. Many examples of TEL curriculum materials have done this. Also mentions educative curriculum materials as materials with additional tools and resources – e.g. annotations on documents viewable by a teacher that offer suggestions for implementation and describe the rationale behind these designs.

The context

Case studies arise from use of the Web-based Inquiry Science Environment (WISE), a system used by 9900+ teachers and 80,000+ students, with 8,000 different customised WISE units (at the time of writing). Up-to-date statistics are available from the web site.

Essentially it appears to be a collection of established units in the form of web pages, animations etc., supported by various functions (e.g. concept maps). It does have an authoring environment that “allows users to copy and modify existing units without the need for programming skills” (p. 234).

This is interesting

A recent review of 30 technology-based inquiry-learning environments identified only eight, including WISE, that support teachers’ customizations (Donnelly et al. 2014) (pp. 234-235)

Cases of customisation and the role of technology

Much detailed description. Explaining how and why the four teachers customised the WISE units in response to their students. Shows the origins of the four types of customisation.

Discussion

Teachers used different tools based on a range of factors:

students’ differing needs; the conceptual and linguistic challenges most prominent in teachers’ regard; teachers’ own instructional goals; and teachers’ orientations toward technology, pedagogy and their roles as designers with respect to the curriculum materials (p. 248)

There was variability in modes of customisation – i.e. variability in the level of digital changes made.

These differences in customization mode might be explained by teachers’ familiarity with, and orientations toward technology; as well as to the support available for using that technology (Inan and Lowther 2009; Koehler and Mishra 2008; Zhao et al. 2002)…If teachers did indeed vary in their facilities and familiarities with technology, then with consistent amounts of training, their customization strategies would come to more closely resemble one another. But another explanation for teachers’ differences is their perceptions of themselves as designers (Cviko et al. 2013) and as research participants in curriculum development projects such as WISE. (p. 249)

The last point is perhaps interesting.

How can technologies offload the effort involved in giving individualised guidance?

“logistic constraints of the classroom can limit what teachers can do” (p. 253). The paper mainly talks about automation as the tactic here. It’s a fairly limited discussion, and something a lot of machine intelligence researchers are working on.

References

Lockyer, L., Heathcote, E., & Dawson, S. (2013). Informing Pedagogical Action: Aligning Learning Analytics With Learning Design. American Behavioral Scientist, 57(10), 1439–1459. doi:10.1177/0002764213479367

Matuk, C. F., Linn, M. C., & Eylon, B.-S. (2015). Technology to support teachers using evidence from student work to customize technology-enhanced inquiry units. Instructional Science, 43, 229–257. doi:10.1007/s11251-014-9338-1

Shulman, L. (1987). Knowledge and teaching: Foundations of the new reform. Harvard Educational Review, 57(1), 1–21. Retrieved from http://her.hepg.org/index/J463W79R56455411.pdf