Preparing my digital “learning space”

The following documents the (hopefully) last bit of extra work I have to undertake to prepare the digital “learning space” for EDC3100, ICT and Pedagogy. It’s work that has taken most of my working day. At a time when I can’t really afford it.  But it’s time I have to spend if I want to engage effectively in one of the most fundamental activities in teaching – know thy student.

End result

The work I’ve done today allows me to easily access from within the main digital learning space for EDC3100 (the Moodle course site) three different types of additional information about individual students.

It’s also an example of how the BAD mindset is able to work around the significant constraints caused by the SET mindset and in the process create shadow systems, which in turn illustrates the presence of a gap (i.e. yawning chasm) between what is provided and what is required.

The shadow system gap. Adapted from Behrens and Sedera (2004)

What are they studying? What have they done before?

This student is studying Early Childhood education. They’ve completed 21 prior courses, but 5 of those were exemptions. I can see their GPA (blurred out below). They are studying via the online mode and are located in Queensland.

[Screenshot: student background details]

How many of the course activities they’ve completed and when

This particular student is about halfway through the first week’s material. They made that progress about 5 days ago. Looks like the “sharing, reflecting and connecting” resource took a while for them to complete – more so than the others, almost two hours.

[Screenshot: activity completion details]

What they’ve written on their blog and how they are “feeling”

This student has written two blog posts. Both are fairly positive in the sentiment they express, though the second is a little less positive in outlook.

[Screenshot: blog posts and sentiment details]

Reasons for the post

There are a number of reasons for this post:

  1. Reinforce the point about the value of an API infrastructure for sharing information between systems (and one that’s open to users).
  2. Document the huge gap that exists between the digital learning spaces universities are providing and what is actually required to implement useful pedagogies – especially when it comes to what Goodyear and Dimitriadis (2013) call “design for orchestration” – providing support for the teacher’s work at learn time.
  3. Make sure I document the process to reduce the amount of work I have to do next time around.
  4. Demonstrate to the EDC3100 participants some of the possibilities with digital technologies, make them aware of some of what happens in the background of the course, and illustrate the benefits that can come from manipulating digital technologies for pedagogical purposes.
  5. Discover all the nasty little breaks in the routine caused by external changes (further illustrating the unstable nature of digital technologies).

What will I be doing

I’ll be duplicating a range of institutional data sources (student records and Moodle) so that I can implement a range of additional pedagogical supports, including the three types of information shown above.

Hopefully, I’ll be able to follow the process vaguely outlined from prior offerings. (Yep, that’s right. I have to repeat this process for every course offering; it would be nice to automate it.)

Create new local Moodle course

I have a version of Moodle running on my laptop. I need to create a new course on that Moodle which will be the local store for information about the students in my course.

Need to identify:

  • USQ Moodle course id – 8036
  • local course id – 15
    Create the course in Moodle and get the id
  • group id – 176
    Create the group in the course
  • context id – 1635
    select * from mdl_context where instanceid=local_course_id  and contextlevel=50
  • course label – EDC3100_2016_S1
    One of the values defined when creating the course.
  • Update MoodleUsers::TRANSLATE_PARAMETERS
  • Update ActivityMapping::TRANSLATE_PARAMETERS
  • enrolid – 37
    select * from mdl_enrol where courseid=local_course_id and enrol='manual';
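To cut down the manual lookups next time, the two SQL queries above could be wrapped in a small helper script. A minimal sketch, assuming the local Moodle database is reachable via Perl’s DBI (the DSN and credentials are placeholders):

    #!/usr/bin/perl
    # Sketch: look up the ids needed when setting up a new offering.
    # DSN and credentials are placeholders for the local Moodle database.
    use strict;
    use warnings;
    use DBI;

    my $dbh = DBI->connect( 'dbi:mysql:database=moodle;host=localhost',
                            'moodle_user', 'secret', { RaiseError => 1 } );

    my $local_course_id = 15;    # the local course id identified above

    # context id for the course (contextlevel 50 == course context)
    my ($context_id) = $dbh->selectrow_array(
        'SELECT id FROM mdl_context WHERE instanceid = ? AND contextlevel = 50',
        undef, $local_course_id );

    # id of the manual enrolment instance for the course
    my ($enrol_id) = $dbh->selectrow_array(
        "SELECT id FROM mdl_enrol WHERE courseid = ? AND enrol = 'manual'",
        undef, $local_course_id );

    print "context id: $context_id\nenrol id: $enrol_id\n";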

Create BIM activity in new course

Need to identify:

  • bim id – 9

Enrol students in the course

Ahh, returning to Webfuse scripts, the sad, depleted remnants of my PhD.

~/webfuse/lib/BAM/3100/3100_support/participants/parse.pl is a script that will parse the Moodle participants web page, extract data about the enrolled users, and insert them appropriately into the database for my local Moodle course.
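For the record, a minimal sketch of the sort of scraping parse.pl does, assuming the saved participants page has a table that HTML::TableExtract can locate via its column headings (the real script, and the page’s actual column layout, will differ):

    #!/usr/bin/perl
    # Sketch: pull enrolled users out of a saved Moodle participants page.
    # The column headings are assumptions about the page's layout.
    use strict;
    use warnings;
    use HTML::TableExtract;

    my $te = HTML::TableExtract->new(
        headers => [ 'First name / Surname', 'Email address' ] );
    $te->parse_file('participants.html');

    for my $row ( $te->rows ) {
        my ( $name, $email ) = @$row;
        next unless defined $email && $email =~ /\@/;    # skip header/blank rows
        print "$name <$email>\n";    # real script inserts into the local Moodle DB
    }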

Initial test: no-one showing up as a participant. But I added myself as teacher.

  1. Figure out that the “show all participants” option is hidden down the very bottom of the page.
  2. Save the page to my laptop
  3. Edit the parse.pl script to update course details
  4. Test that it parses the HTML file (in case changes have been made by the institution or by the new version of Moodle) – looking good.
  5. The finding of old students appears to be working.
    Oh nice, easy way to identify repeating students.  Need to save that data.
  6. Run the script
  7. Fix the errors
    • Duplicate key inserting into groups
    • missing required parameter COURSE_ID 111
      Complaint from MoodleUsers class – need to update TRANSLATE_PARAMETERS above
    • Participants still not appearing, something missing — have to update the script. Done.

Took a while, but that should further automate the process for next time.

Add some extras

The above step only adds some basic information about each student (USQ Moodle ID, email address). To be useful I need to be able to know the sector/specialisation of the student, their postal code, etc.

This information comes from a spreadsheet generated from the student records, with the data added into a “special” table in the Moodle database. This year I’m using a different method to obtain the spreadsheet, meaning that the format is slightly different. The new process was going to be automated to update each night, but that doesn’t appear to be working yet. But I have a version, so I’ll start with that.

  1. Compare the new spreadsheet content
    Some new fields: transferred_units, acad_load. Missing phone number.
  2. Add columns to extras table.
  3. Update the parsing of the file

Seems to be working
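For next time, a rough sketch of what that parsing step amounts to, assuming the spreadsheet is saved as CSV and loaded into a hypothetical extras table (the table name and column list are placeholders, not the real schema):

    #!/usr/bin/perl
    # Sketch: load the student records spreadsheet (saved as CSV) into a
    # local "extras" table. Table and column names are placeholders.
    use strict;
    use warnings;
    use Text::CSV;
    use DBI;

    my $csv = Text::CSV->new( { binary => 1, auto_diag => 1 } );
    open my $fh, '<:encoding(utf8)', 'student_records.csv' or die $!;
    $csv->column_names( @{ $csv->getline($fh) } );    # first row names the columns

    my $dbh = DBI->connect( 'dbi:mysql:database=moodle', 'moodle_user', 'secret',
                            { RaiseError => 1 } );
    my $sth = $dbh->prepare(
        'INSERT INTO extras (username, specialisation, postcode,
                             acad_load, transferred_units)
         VALUES (?, ?, ?, ?, ?)' );

    while ( my $row = $csv->getline_hr($fh) ) {
        $sth->execute( @{$row}{qw(username specialisation postcode
                                  acad_load transferred_units)} );
    }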

Activity data

This is to identify what activities are actually on the study desk.

Another script that parses a Moodle web page to extract data. I’m currently re-writing some of the activities and wondered how that would work. Actually, I seem to have designed for it: the script does a replace of the list, not an update.

~/activities/parseActivity.pl

  1. Add in the course id for the new course
  2. Maybe update the script to handle the parameterised section titles

Seems to be working
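The “replace, not an update” behaviour mentioned above can be as simple as deleting the course’s existing activity rows and re-inserting the freshly scraped list inside one transaction. A sketch, with activity_mapping as a placeholder for whatever table the script actually uses:

    #!/usr/bin/perl
    # Sketch: replace (rather than update) the stored activity list.
    # The activity_mapping table and its columns are placeholders.
    use strict;
    use warnings;
    use DBI;

    my $dbh = DBI->connect( 'dbi:mysql:database=moodle', 'moodle_user', 'secret',
                            { RaiseError => 1 } );
    my $local_course_id = 15;
    my @scraped = (    # would come from parsing the course page
        { moduleid => 101, name => 'Sharing, reflecting and connecting' },
    );

    $dbh->begin_work;    # all-or-nothing swap of the activity list
    $dbh->do( 'DELETE FROM activity_mapping WHERE courseid = ?',
              undef, $local_course_id );
    my $insert = $dbh->prepare(
        'INSERT INTO activity_mapping (courseid, moduleid, name)
         VALUES (?, ?, ?)' );
    $insert->execute( $local_course_id, $_->{moduleid}, $_->{name} )
        for @scraped;
    $dbh->commit;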

Activity completion data

Now to find out which activities each student has completed. Another script, this time parsing a CSV file produced by Moodle.

~/activities/parseCompletion.pl

  1. Update the script with new course data
  2. Unable to find course id – update ActivityMapping.pm
  3. Having problems again with matching activity names
    1. EDC3100 Springfield resources
      it shouldn’t be there. Turn off activity completion and get new CSV file
    2. For “.”?
      The first field is a “.” when it should be empty. May need to watch this.
  4. Parses okay – try checkStudents
    Getting a collection of missing students.

    1. Are they in the local database at all? – no
    2. Have they withdrawn, but still in activity completion – yes.
  5. Seems to have worked
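The checkStudents step boils down to comparing the identities in the completion CSV with those already mirrored locally. A rough sketch of that comparison (using email as the key, and the column heading below, are assumptions; the real script may key on something else):

    #!/usr/bin/perl
    # Sketch: flag students in the completion CSV missing from the local mirror
    # (typically withdrawn students). Column name and key are assumptions.
    use strict;
    use warnings;
    use Text::CSV;
    use DBI;

    my $dbh = DBI->connect( 'dbi:mysql:database=moodle', 'moodle_user', 'secret',
                            { RaiseError => 1 } );
    my %known = map { $_->[0] => 1 }
        @{ $dbh->selectall_arrayref('SELECT email FROM mdl_user') };

    my $csv = Text::CSV->new( { binary => 1, auto_diag => 1 } );
    open my $fh, '<:encoding(utf8)', 'completion.csv' or die $!;
    $csv->column_names( @{ $csv->getline($fh) } );

    while ( my $row = $csv->getline_hr($fh) ) {
        my $email = $row->{'Email address'} or next;
        print "missing: $email\n" unless $known{$email};
    }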

Student blog data

Yet another scraping of a Moodle web page: ~/BIM/parseBIM.pl

  1. Update the config
  2. Check the parsing of the file
    1. Only showing a single student – the last one in the list
      For some reason, the table rows are missing a class; only the last row has a class. Given I wrote the BIM code, this might be me. The parsing code assumes no class means it’s the header row. But it seems to work.
  3. Check the conversion process
    1. Crashed and burned on me – no Moodle id – hard-coded my exclusion
  4. Check insertion
  5. Do insertion
  6. Check BIM activity
  7. Check mirror for individual student – done
  8. Run them all – looks like there might be a proxy problem with the cron version. Will have to do this at home, or at least wait until it finishes.

Greasemonkey script

This is the user interface end of the equation: the part that transforms all of the above into something useful.

/usr/local/www/mav

  • gmdocs/moreStudentDetails.user.js
    • Add the Moodle course id – line 331
  • phpdocs/api/getUserDetails.php
    • map the USQ and local Moodle ids
    • map USQ course id to BIM
    • add in the hard coded week data
    • Modify the module mapping (hard coded to the current course) — actually probably don’t need to do this.
  • Download the modified version of the greasemonkey client – http://localhost:8080/fred/mav/moreStudentDetails.user.js
  • Test it
    • Page is being updated with details link
    • Personal details being displayed
    • Activity completion not showing anything
      • Check server
        • Getting called – yes
        • Activity completion string is being produced
        • But the completion HTML is empty – problem in displayActivityStructure
        • That’s because the structure to display (from updateActivityStructure) is empty – which is actually from getActivityMapping
        • getActivityMapping
          • **** course id entered incorrectly
    • Blog posts showing error message
      Problem with the type of the course id
  • Can I add in the extra bits of information – load, transferred courses
    • Client

Sentiment analysis

This is the new one: run the blog posts through the indico sentiment analysis API.

~/BIM/sentiment.pl

  • update the BIM id
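For the record, a minimal sketch of the kind of request sentiment.pl makes, assuming indico’s v2 REST endpoint and an API key in the environment (both are assumptions; the real script’s details may differ):

    #!/usr/bin/perl
    # Sketch: score one blog post with the indico sentiment API.
    # Endpoint/payload assume indico's v2 REST API; INDICO_API_KEY is a placeholder.
    use strict;
    use warnings;
    use HTTP::Tiny;
    use JSON::PP qw(encode_json decode_json);

    my $response = HTTP::Tiny->new->post(
        'https://apiv2.indico.io/sentiment',
        {   headers => { 'Content-Type' => 'application/json' },
            content => encode_json(
                {   data    => 'What a wonderful learning tool!',
                    api_key => $ENV{INDICO_API_KEY},
                } ),
        } );
    die "indico call failed\n" unless $response->{success};

    # the result is a single number between 0 (negative) and 1 (positive)
    my $sentiment = decode_json( $response->{content} )->{results};
    print "sentiment: $sentiment\n";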


References

Behrens, S., & Sedera, W. (2004). Why do shadow systems exist after an ERP implementation? Lessons from a case study. In C.-P. Wei (Ed.), Proceedings of the 8th Pacific Asia Conference on Information Systems. Shanghai, China.


Sentiment analysis of student blog posts

In June last year I started an exploration into the value of sentiment analysis of student blog posts. This morning I’ve actually gotten it to work. There may be some value, but further exploration is required. Here’s the visible representation of what I’ve done.

The following is a screen shot of the modified “know thy student” kludge I’ve implemented for my course. The window shows some details for an individual student from second semester last year (I’ve blurred out identifying elements). The current focus is on the blog posts the student has written.
Sentiment analysis of blog posts

Each row in the above corresponds to an individual blog post. It used to show how long ago the post was written and the post’s title, and provide a link to the blog post. The modified version changes the cell’s background colour to represent the sentiment of the blog post’s content. A red background indicates a negative post, a green background indicates a positive post, and a yellow background indicates somewhere in the middle.

The number between 0 and 1 shown next to the post title is the result provided by the Indico sentiment analysis function, the method used to perform the sentiment analysis.
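The mapping from score to colour is deliberately simple. A sketch of the idea; the 0.4/0.7 cut-offs here are assumptions, since the actual thresholds aren’t given above:

    #!/usr/bin/perl
    # Sketch: map an indico sentiment score (0..1) to a cell background colour.
    # Thresholds are assumptions; the post only fixes the red/yellow/green meanings.
    use strict;
    use warnings;

    sub sentiment_colour {
        my ($score) = @_;
        return 'red'    if $score < 0.4;    # negative post
        return 'yellow' if $score < 0.7;    # somewhere in the middle
        return 'green';                     # positive post
    }

    print sentiment_colour(0.85), "\n";    # green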

Does this help?

Does this provide any help? Can it be useful?

An initial quick skim of posts from different students seemed to indicate mostly green. Was the sentiment analysis revealing anything useful? Was it working?

In the following I examine what is revealed by the sentiment analysis by paying close attention to an individual student, the one shown in the image above.

Red blog post – reveal target for intervention?

The “red” blog post from the image above included words like “epic fail”. It tells the story of how the student had problems getting the new software for the course working. It shows as the third post the student made in the semester. The start of this course can be frustrating for students due to technical problems. This particular student didn’t report any of these problems on the course discussion forums.

Given that the course is totally online and there are ~100 students in this offering, there’s little chance I would have known about these problems otherwise. Had the sentiment analysis been in place during the offering, and had it been represented effectively, I might have been able to respond, and that response might have been helpful.

Yellow blog post – a problem to address?

The yellow post above is a reflection on the student’s experience on Professional Experience: in a school, in front of a classroom, actually teaching. It is a reflection on how the student went through an emotional roller coaster on prac (not unusual), how her mentor really helped (also not unusual, but a little less so), but also how the various exemptions she received contributed to her problems.

Very positive blog posts – loved resources?

A number of the posts from this student are as positive as they can get – 1.0. Interestingly, almost all of them are descriptions of useful resources and include phrases like

what a wonderful learning tool …lovely resource…wonderful resource for teachers

What’s next?

It appears that the following are required or might be useful:

  1. Explore different representations and analysis
    So far I’ve only looked at the student-by-student representation. Other forms of analysis/representation would seem potentially useful. Are there differences/patterns across the semester, between students that are the same/different on certain characteristics, between different offerings of the course, etc.?

    How can and should this representation be made visible to the students?

  2. Set this in place for Semester 1.
    In a couple of weeks the 300+ student version of this course runs. Having the sentiment analysis working live during that semester could be useful.
  3. Explore useful affordances.
    One of the points of the PIRAC framework is that this form of learning analytics is only as useful as the affordances for action that it supports. What functionality can be added to this to help me and the students take action in response?

Reflection

I’ve been thinking about doing this for quite some time. But the business of academic life has contributed to a delay.  Getting this to work actually only required three hours of free time. But perhaps more importantly, it required the breathing space to get it done. That said, I still did the work on a Sunday morning and probably would not have had the time to do it within traditional work time.


What if our digital technologies were protean? Implications for computational thinking, learning, and teaching

David Jones, Elke Schneider

To be presented at ACCE’2016; an extension of Albion et al. (2016).

Abstract

Not for the first time, the transformation of global society through digital technologies is driving an increased interest in the use of such technologies in both curriculum and pedagogy. Historically, the translation of such interest into widespread and effective change in learning experiences has been less than successful. This paper explores what might happen to the translation of this interest if the digital technologies within our educational institutions were protean. What if the digital technologies in schools were flexible and adaptable by and to specific learners, teachers, and learning experiences? To provide initial, possible answers to this question, the stories of digital technology modification by a teacher educator and a novice high school teacher are analysed. Analysis reveals that the modification of digital technologies in two very different contexts was driven by the desire to improve learning and/or teaching by: filling holes in the provided digital technologies; modelling effective practice with digital technologies to students; and, better mirroring real-world digital technologies. A range of initial implications and questions for practitioners, policy makers, and researchers are drawn from these experiences. It is suggested that recognising and responding to the inherently protean nature of digital technologies may be a key enabler of attempts to harness and integrate digital technologies into both curriculum and pedagogy.

Introduction

Coding or computational thinking is the new black. Reasons given for this increased interest include the need to fill the perceived shortage of ICT-skilled employees, the belief that coding will help students “to understand today’s digitalised society and foster 21st century skills like problem solving, creativity and logical thinking” (Balanskat & Engelhardt, 2015, p. 6), and that computational thinking is “a fundamental skill for everyone” (Wing, 2006, p. 33). Computational thinking is seen as “a universal competence, which should be added to every child’s analytical ability as a vital ingredient of their school learning” (Voogt, Fisser, Good, Mishra, & Yadav, 2015, p. 715). Consequently, there is growing worldwide interest in integrating coding or computational thinking into the school curriculum. One example of this is the Queensland Government’s #codingcounts discussion paper (Department of Education and Training, 2015) which commits the government “to making sure that every student will learn the new digital literacy of coding” (p. 9). It appears that students also recognise the growing importance of coding. The #codingcounts discussion paper (Department of Education and Training, 2015) cites a Microsoft Asia Pacific survey (Microsoft APAC News Centre, 2015) that suggests 75% of students (under 24) in the Asia Pacific “wish that coding could be offered as a core subject in their schools” (n.p.). While not all are convinced of the value of making coding a core part of the curriculum, it appears that it is going to happen. Balanskat & Engelhardt (2015) report that 16 of the 21 Ministries of Education surveyed already had coding integrated into the curriculum, and that it was a main priority for 10 of them. Within Australia, the recently approved Technologies learning area of the Australian Curriculum includes a focus on computational thinking combined with design and systems thinking as part of the Digital Technologies subject. This is the subject that is the focus of the Queensland government’s #codingcounts plan and it has been argued that it may also “provide a framework upon which female participation in computing can be addressed” (Zagami, Boden, Keane, Moreton, & Schulz, 2016, p. 13). The question appears to have shifted from whether coding or computational thinking should be integrated into the curriculum toward how, and if, it can be done effectively in a way that scales for all learners.

These types of questions are especially relevant given the observation that despite extensive efforts over the last 30+ years to eliminate known barriers, the majority of teachers do not yet use digital technologies to enhance learning (Ertmer & Ottenbreit-Leftwich, 2013). It appears that the majority of teachers still do not have the knowledge, skills, resources, and environment in which to effectively use digital technologies to enhance and transform student learning. The introduction of computational thinking – “solving problems, designing systems, and understanding human behaviour, by drawing on the concepts fundamental to computer science” (Wing, 2006, p. 33) – into the curriculum requires teachers to move beyond use of digital technologies into practices that involve the design and modification of digital technologies. In recognition of the difficulty of this move, proponents of integrating computational thinking are planning a range of strategies to aid teachers. One problem, however, is that many of these strategies seem to echo the extensive efforts undertaken to encourage the use of digital technologies for learning and teaching that have yet to prove widely successful. At this early stage, the evaluation of and research into the integration of computational thinking into the curriculum remain scarce, with a limited amount of “evidence as to how far teachers really manage to integrate coding effectively into their teaching and the problems they face” (Balanskat & Engelhardt, 2015, p. 15).

However, attempts to integrate coding or computational thinking into the curriculum are not new. Grover and Pea (2013) identify the long history of computational thinking, tracing it back to recommendations for college students in the 1960s and to Papert’s work with Logo in K12 education in the 1980s. By the mid-1990s, Maddux and Lamont Johnson (1997) write of “a steady waning of interest in student use of the Logo computer language in schools” (p. 2) and examine a range of reasons for this. In the late 1990s, the dotcom boom helped increase interest, but it did not last. By the 2000s the overall participation rate in IT education within Australia had declined, with an even greater decline in enrolments in software development subjects, and especially in female participation (Rowan & Lynch, 2011). The research literature has identified a range of factors for this decline, including the finding that “Students in every participating school joined in a chorus defining the subject as ‘boring’” (Rowan & Lynch, 2011, p. 88). More recently the rise of interest in computational thinking has led to the identification of a range of issues to be confronted, including: “defining what we mean when we speak of computational thinking, to what the core concepts/attributes are and their relationship to programming knowledge; how computational thinking can be integrated into the curriculum; and the kind of research that needs to be done to further the computational thinking agenda in education” (Voogt et al., 2015, p. 716). In this paper, we are interested in exploring the related issue of how and if widespread common perceptions of digital technologies may be hindering attempts to harness and integrate digital technologies into both curriculum and pedagogy.

What if the digital technology environments within education institutions do not mirror the environments in contemporary and future digitalised societies? What if our experience within these limited digital technology environments is negatively impacting our thinking about how to harness and integrate digital technologies into curriculum and pedagogy? What if thinking about digital technology has not effectively understood and responded to the inherent protean nature of digital technologies? What if the digital technologies provided to educators were protean? Might this have an impact on attempts to harness and integrate digital technologies into curriculum and pedagogy? It is these and related questions that this paper seeks to explore.

The paper starts by drawing on a range of literature to explore different conceptions of digital technologies. In particular, it focuses on the 40+ year old idea that digital technologies are the most protean of media. Next, the paper explains how stories of digital technology modification by a high school teacher and a teacher educator were collected and analysed to offer insights into what might happen if our digital technologies were protean. Analysis of these stories is then discussed and used to develop an initial set of implications for practice, policy, and research for attempts to harness and integrate digital technologies into curriculum and pedagogy. The paper suggests that an educational environment that is rich with protean digital technologies appears likely to have a range of positive impacts on attempts to harness and integrate digital technologies into curriculum and pedagogy. However, such an environment requires radically different mindsets than currently used within educational institutions, and is thus likely to be extremely challenging to create and maintain.

Digital technology: A protean meta-medium, or not?

The commonplace notions of digital technologies that underpin both everyday life and research have a tendency to see them “as relatively stable, discrete, independent, and fixed” (Orlikowski & Iacono, 2001, p. 121). Digital technologies are seen as hard technologies, technologies where what can be done is fixed in advance either by embedding it in the technology or “in inflexible human processes, rules and procedures needed for the technology’s operation” (Dron, 2013, p. 35). As noted by Selwyn and Bulfin (2015), “Schools are highly regulated sites of digital technology use” (p. 1) where digital technologies are often seen as a tool that is: used when and where permitted; standardised and preconfigured; conforms to institutional rather than individual needs; and, a directed activity. Rushkoff (2010) argues that one of the problems with this established view of digital technologies is that “instead of optimizing our machines for humanity – or even the benefit of some particular group – we are optimizing humans for machinery” (p. 15). This hard view of digital technologies perhaps also contributes to the problem identified by Selwyn (2016) where, in spite of the rhetoric of efficiency and flexibility surrounding digital technologies, “few of these technology practices serve to advantage the people who are actually doing the work” (p. 5). Digital technologies have not always been perceived as hard technologies.

Seymour Papert in his book Mindstorms (Papert, 1980) describes the computer as “the Proteus of machines” (p. viii) since the essence of a computer is its “universality, its power to simulate. Because it can take on a thousand forms and can serve a thousand functions, it can appeal to a thousand tastes” (p. viii). This is a view echoed by Alan Kay (1984) and his discussion of the “protean nature of the computer” (p. 59) as “the first metamedium, and as such has degrees of freedom and expression never before encountered” (p. 59). In describing the design of the first personal computer, Kay and Goldberg (1977) address the challenge of producing a computer that is useful for everyone. Given the huge diversity of potential users they conclude “any attempt to specifically anticipate their needs in the design of the Dynabook would end in a disastrous feature-laden hodgepodge which would not be really suitable for anyone” (Kay & Goldberg, 1977, p. 40). To address this problem they aimed to provide a foundation technology and sufficient general tools to allow “ordinary users to casually and easily describe their desires for a specific tool” (Kay & Goldberg, 1977, p. 41). They aimed to create a digital environment that opens up the ability to create computational tools to every user, including children. For Kay (1984) it is a must that people using digital technologies should be able to tailor those technologies to suit their wants, since “Anything less would be as absurd as requiring essays to be formed out of paragraphs that have already been written” (p. 57). For Stallman (2014) the question is more fundamental, “To make computing democratic, the users must control the software that does their computing!” (n.p.).

This perceived 40-year-old need for individuals to use protean digital technologies to make their own tools in order to fulfil personal desires resonates strongly with the contemporary Maker movement. The movement is driven by a combination of new technologies that increase the ease of creation and a cultural shift toward do-it-yourself practices, and is seeing people increasingly engaged in creating and customising physical and virtual artefacts. Martinez and Stager (2013) make this link explicit by labelling Seymour Papert as the “Father of the Maker Movement” (n.p.). Similarly, Resnick and Rosenbaum (2013) note the resonance between the Maker movement and a tradition within the field of education that stretches from Dewey’s progressivism to Papert’s constructionism. Resnick and Rosenbaum (2013) see tinkering “as a playful style of designing and making, where you constantly experiment” (p. 165) for which digital technologies – due to their association with logic and precision – may not always appear suitable. This perception was reinforced by the evolution of digital technologies after the work of Kay and Goldberg in the 1970s.

The work of Kay, Goldberg, and others at Xerox PARC on the Dynabook directly and heavily influenced Apple and Microsoft, and shaped contemporary computing. However, Kay and Goldberg’s conception of computers as a protean medium where tool creation was open to every user did not play a part in that shaping (Wardrip-Fruin & Montfort, 2003). In fact, there’s evidence that digital technologies are getting less modifiable by the end-user. Writing about how our relationship with computers is changing, Turkle (1995) argues that we “have become accustomed to opaque technology” (p. 23). Where early computer systems encouraged, even required, people to understand the mechanism of the computer, the rise of the GUI interface hides the mechanism behind the simulation of a desktop or other metaphor, limiting users to clicking prepared icons and menus. Desktop personal computers once had an architecture that enabled enhancement and upgrading, while mobile devices are increasingly “not designed to be upgraded, serviced or even opened, just used and discarded” (Traxler, 2010, p. 5). The decision by Apple to prevent the creation of executable files on the iPad means “that you can’t make anything that may be used elsewhere. The most powerful form of computing, programming, is verboten” (Stager, 2013, n.p.). But it’s not just the design of technology that hardens digital technologies.

As noted above, Dron (2013) argues that technology can be hardened by embedding it “in inflexible human processes, rules and procedures” (p. 35). Resnick and Rosenbaum (2013) make the point that designing contexts that allow for tinkerability is as important as designing technologies for tinkerability. The affordance of a digital technology to be protean is not solely a feature of the technology. An affordance to be protean arises from the on-going relationship between a digital technology, the people using it, and the environment in which it is used. Being able to code does not always mean you are able to modify a digital technology. Selwyn and Bulfin’s (2015) positioning of schools as “highly regulated sites of digital technology use” (p. 1) suggests that they are often not contexts designed for tinkerability through the provision of protean digital technologies.

Even though the context may not provide protean digital technologies, this hasn’t stopped educators modifying digital technologies. Jones, Albion and Heffernan (2016) examine and map stories of digital technology modification by three teacher educators through the traces left in the digital landscape and the levels of modification. Table 1 provides an overview of the levels of digital technology modification used by Jones et al. (2016). It ranges from simply using a digital technology as is, through changing its operation via configuration options (internal and external) and modifying the operation of a digital technology by combining or supplementing it with other digital technologies, to coding. Table 1 suggests that digital technologies can be modified via configuration, combination, and coding.

Table 1: Levels of digital technology modification (Albion et al., 2016)

Type of change | Description | Example
Use | Tool used with no change | Add an element to a Moodle site
Internal configuration | Change operation of a tool using the configuration options of the tool | Change the appearance of the Moodle site by changing Moodle course settings
External configuration | Change operations of a tool using means outside of the tool | Inject CSS or Javascript into a Moodle site to change its appearance or operation
Customization | Change the tool by modifying its code | Modify the Moodle source code, or create/install a new plugin
Supplement | Use another tool to offer functionality not provided by existing tool | Implement course level social bookmarking through Diigo
Replacement | Use another tool to replace/enhance functionality provided by existing tool | Require students to use external blog engines, rather than the Moodle blog engine


Methodology

This paper uses a qualitative case study to describe and explore the potential value, impact, and issues faced by educators when they seek to treat digital technologies as protean. The aim is to offer some initial responses to the question “what if our digital technologies were protean?” As this is an attempt to understand a particular social phenomenon as it occurs in real life, it is well-suited to the case study method (Aaltio & Heilmann, 2010). Data for this case study is drawn from the authors’ own experiences as educators. For Jones this draws on his experiences as a teacher educator at the University of Southern Queensland from commencement in 2012 through 2015. During this time his main teaching responsibility was for a large – over 300 students split evenly between on-campus and online students – 3rd year course within the Bachelor of Education. For Schneider, this draws on her experience as a teacher at secondary schools (neither her current school) within south-east Queensland in 2014 and 2015 teaching grades 7 to 12 in IT and Business subjects.

The authors’ experiences provide a number of advantages for the purpose of exploring the potential impact of protean digital technologies. Both authors have: formal tertiary education in fields related to the development of Information Technology; undertaken professional work within Information Technology; and, later trained as Secondary IPT teachers. Consequently, both authors see digital technologies as more inherently protean than do those without an IT background, and have the knowledge and skills necessary to modify existing, somewhat less than protean, digital technologies. While not an activity currently broadly available to all educators, the authors’ experience and knowledge provide an indication of what might be possible if digital technologies available to educators were more protean. At the same time, the authors have different cultural backgrounds (Australia and Canada). The case also explores the impact of protean digital technologies within two very different educational contexts: tertiary and secondary education. The tertiary education context involves a large course with hundreds of students in both on-campus and online modes. This large and diverse student cohort means that there is significant use of digital technologies, with online students learning solely via digital technologies. The secondary education context involves a greater number of smaller student cohorts, with digital adoption in a state of flux and teaching and assessment still primarily delivered with traditional, non-digital means.

The authors engaged in an iterative and cyclical process of gathering, sharing, discussing, and analysing stories of how, why, and what digital technologies they had modified while teaching. Both authors drew on personal records and writings in the form of tweets, blog posts, email archives, and other documents to generate a list of such stories. These stories (Jones: 16, Schneider: 10) were written up using a common format, shared via a Google document, generated on-going discussion, and led to an iterative process of analysis to identify patterns and implications. A major part of the analysis was grouping the stories of digital technology modification via: the purpose (e.g. improve administration, model good practice, teaching, or learning); cause (e.g. inefficient systems, non-existent systems, missing functionality); impact (e.g. save time, improve learning); and, the type of change (as per Table 1). From this analysis a number of themes were extracted; these are described in the next section.

Themes evident in stories of protean technologies

Upon reading each other’s stories, both authors were immediately struck by the level of commonality between the stories both had told. Not so surprising was that all stories told of attempts to improve learning, teaching, or both. However, even though these stories were taking place in very different types of educational institutions there were three common themes prevalent in stories from both authors. The three themes were: filling holes (14 stories); modelling effective practice (12 stories); and, mirroring the real world (7 stories). There were, however, significant differences in the amount of coding required for these stories and the levels of digital technology modification undertaken.

In terms of coding, none of Schneider’s ten stories ultimately involved the use of coding. Two of her stories did initially involve coding (Yahoo Pipes and Java), but she subsequently implemented other modifications that did not require coding. Seven of Jones’ sixteen stories involved coding using Perl, PHP, or jQuery/Javascript. This suggests that digital technologies can be modified without necessarily being able to code. However, it does raise questions about the reasons behind the greater prevalence of coding in Jones’ stories. Is it due to the greater reliance on digital technologies within the specific context? Is it his longer work history within higher education? Was Jones less fearful of getting in trouble for wandering away from officially mandated practices? Is it his longer engagement with modifying digital technologies for learning and teaching? Or, are there other factors at play?

Figure 1 describes the level of digital technology modification (as per Table 1) evident in the stories from each author (some stories involved more than one level of modification). All but one of Schneider’s stories involved supplementing or replacing digital technologies provided by the school. This suggests some significant perceived limitations with the school digital technology environment. Jones’ stories were almost evenly balanced between configuring provided digital technologies, or supplementing/replacing them with different digital technologies.

Figure 1: Number of stories per author for each level of digital technology modification

Four of Schneider’s stories and ten of Jones’ stories of digital technology modification were designed to fill holes in the functionality provided by institutional technologies. In her very first story (Digital grading using Excel) Schneider outlines her use of Excel spreadsheets to supplement the school’s requirement that teachers update paper-based student profiles located within a dedicated physical folder kept in the head-of-department’s office. Her use of Excel spreadsheets to supplement the required practice provided necessary support for teacher tasks such as maintaining student progress records and discussing progress with individual students – practices that the school’s required process did not support: the hole to be filled. In the story “Web scraping to contact not submits” Jones describes a similar hole in an institutionally provided technology. In this story, the University’s online assignment management system provides no mechanism by which students who have not submitted an assignment and have not received an extension can be identified and contacted. Instead, Jones had to use a combination of Perl scripts, regular expressions, manual copying and pasting, and an email client to fill the hole. The value and difficulty of making this particular modification is illustrated by the following quote from a third-year student who was contacted via this modification.

Thank you for contacting me in regards to the submission. You’re the first staff member to ever do that so I appreciate this a lot.

Six of Schneider’s stories and six of Jones’ stories of digital technology modification were intended to improve student learning. These were all driven by a combination of modelling the effective use of digital technologies and/or adopting enhanced pedagogical practices. In “Moviemaker to introduce teacher and topics” Schneider describes how her production of a movie trailer for her subject is intended to model the use of digital technologies to visually present information, but also to engage students. In “Course barometers via Google forms” Jones describes how functionality provided by the University LMS is replaced with Google forms as a way to more effectively gather student feedback, but also to model a technology that may be used by students in their own practice. That both authors primarily teach in subjects related to the use of digital technologies would appear to suggest that the prevalence of the modelling theme may be reduced for teachers of other areas.

Four of Schneider’s stories and three of Jones’ stories suggest that the institutionally provided digital technologies do not always appropriately mirror the capabilities of real-world technologies and subsequently negatively impact learning and teaching. Both authors share stories about how the visual and content capabilities of institutional learning management systems fail to mirror the diversity, quality, and capabilities of available online technologies, including social networking software. Consequently, both authors tell stories of creating teaching related websites on external blog engines. In “Creating a teaching website with Edublogs” Schneider outlines the visual and functional limitations of the official Learning Management System (LMS) and how use of Edublogs saved teacher time, was more visually appealing, and provided students with a more authentic experience of services they are likely to encounter in the real world. Schneider also tells stories where the computer hardware and network bandwidth provided by the school to students are supplemented through use of personal resources from both students and herself. The story “Encourage student use of phone hot-spots” tells of how the widespread inability of school Internet connections to fulfil learning needs was addressed by encouraging those students with access to use their mobile phone hot spots.

In general, the modification of institutional digital technologies does not come without problems, risks, or costs. Both authors make mention of the additional workload required to implement the changes described, especially when such changes aren’t directly supported or encouraged by the institution. Such costs can be assuaged through on-going use of the changes and the benefits they generate. However, these types of changes can challenge institutional policies and be frowned upon by management. In “Hacking look and feel” Jones describes how an institutionally mandated, default look and feel for course websites was modified to avoid a decrease in functionality. A story that also describes how the author had to respond to a “please explain” message from the institutional hierarchy and was for a time seen as “hacking” the institution’s online presence. Similarly, in “Encouraged students to hot-spot with their phones to connect to the web” Schneider describes one digital technology modification that broke institutional policy but also enhanced student learning. It is not hard to foresee situations where the outcomes of these stories may well have been considerably more negative for those involved.

What if? Discussion, implications and questions

The perception of digital technologies as protean does not appear widespread within educational institutions. What if our digital technologies were protean? Since designing the context for tinkerability is important (Resnick & Rosenbaum, 2013), what if the context within educational institutions were designed to enable, encourage, and support all teachers and learners in the modification of digital technologies to create the tools they see as necessary to best support their learning and teaching? Understanding and correctly predicting the potential implications and outcomes of such a radical transformation of the complex environment of an education institution is difficult. Hence the following are presented as a tentative exploration of some possible future states and are seen more as questions for exploration and confirmation than firm predictions. The assumption underpinning the following implications and questions is that the experience of the authors described above can be used to generate some indications of what might happen if our digital technologies were protean.

Filling holes – bricolage

One of the reviewers of this paper made the following observation

Some of the tinkerability/evidence of protean behaviour sound rather like the old idea of a kludge – a ‘quick and dirty’ workaround for some computer processes

As noted earlier in the paper, almost 40 years ago, Kay and Goldberg (1977) recognised that any digital technology that attempted to anticipate the needs of a diverse user population would end up as “a disastrous feature-laden hodgepodge which would not be really suitable for anyone” (p. 40). Over recent years the digital technologies used within educational institutions have increasingly been enterprise information systems: systems – such as Learning Management Systems – intended to fulfil the needs of the entire institution, and perhaps more likely to fulfil the prediction of Kay and Goldberg. Jones, Heffernan and Albion (2015) offer a range of additional examples of how institutionally mandated digital technologies are often not suited to specific educational aims and contexts and thus generate the need for ‘digital renovation’. This is an example of Koopman and Hoffman’s (2003) description of how some “work-arounds are necessary because the computer or software as originally designed simply doesn’t address the problem or task at hand” (p. 72). Koopman and Hoffman (2003) argue that workarounds should not be seen as users departing from officially condoned uses of technology (illustrated above by the increased chance of organisational censure the authors’ digital renovation risked), but rather as the legitimate practice of adaptive design where the users are helping finish the design of the digital technologies.

This perspective is mirrored by Turvey (2012), who argues that the construction of pedagogical tools does not end with production, but instead such tools continue to be refined through “use within a complex ecology of mediating influences, as teachers exercise agency over the development of their professional practice” (p. 114). It is further echoed by the argument of Mishra and Koehler (2006) that “there is no single technological solution that applies for every teacher, every course, or every view of teaching” (p. 1029) and that instead quality teaching “requires developing a nuanced understanding of the complex relationships between technology, content, and pedagogy, and using this understanding to develop appropriate context-specific strategies and representations” (p. 1029). Jones, Heffernan and Albion (2015) describe how the protean possibilities of existing digital technologies can be used to engage in ‘digital renovation’ and thus create educational possibilities specific to particular teaching contexts.

Would digital technologies that are protean better support teachers engaging in digital renovation activities that “fill the holes” between those digital technologies and the context-specific requirements of learner and teacher? Would teacher engagement in context-appropriate digital renovation activities lead to improvements in the quality of teaching and learning? If existing digital technologies are largely not protean, what is the nature of the “holes” that are currently experienced by learners and teachers? What impact does an inability to “fill these holes” have on teachers and their workload, sense of agency, their perception of digital technologies, their learners etc.?

Modelling the effective use of digital technologies

The digital technologies subject from the technologies area of the Australian Curriculum defines computational thinking as “A problem solving method that involves various techniques and strategies in order to solve problems that can be implemented by digital systems” (ACARA, 2014). Workarounds, kludges, and digital renovation are examples of the application of computational thinking by users to solve problems that they face. Engaging in digital renovation allowed Schneider to model the application of computational thinking for her secondary computing students. With the incorporation of the Australian Curriculum’s digital technologies subject into the compulsory curricula, the advantage of being able to do this now extends to a majority of teachers. However, as noted above there is the question of whether or not this broader sample of teachers has the experience, knowledge and skills to take advantage of this opportunity. To address this problem a range of professional development opportunities are being made available to teachers.

In the context of ‘technologising literacy education’, Lankshear and Bigum (1999) develop and describe four principles for “guiding further developments in technologizing classrooms” (p. 445) and then show how those principles are seen differently by an ‘insider’ mindset and an ‘outsider-newcomer’ mindset. The first of these principles is ‘Teachers first’. This principle recommends that teachers must first be aided in “making use of new technologies to enhance their personal work before learning to use them in their teaching” (p. 453). The argument is that in order for teachers to be able to make appropriate pedagogical decisions around new technologies “they must first know how to use those technologies (and any benefits of doing so) for their own purposes” (p. 453). Lankshear and Bigum (1999) argue that the intent of this principle is “easy to subvert” (p. 460) by practices “designed to put teachers into classrooms with improved technological skills and understandings, but within the confines of the newcomer-outsider world view” (p. 460). On the other hand, an insider world view focuses both on the importance of addressing teachers’ on-going needs, but also on developing new alliances and articulations around learning, teaching, and the new technologies. Professional development alone is not likely to be sufficient to allow teachers to model computational thinking. Protean digital technologies would seem to be at least a catalyst, if not a pre-requisite, for teachers and others to be able to begin modelling computational thinking in the context of the requirements of the digital technologies subject.

Would the widespread availability of protean digital technologies better enable teachers to develop and model computational thinking? What impact would this have on student learning? Will the absence of protean digital technologies hinder teachers’ ability to develop and refine their computational thinking abilities? Can protean digital technologies help support the creation of new alliances and articulations around learning, teaching, and digital technologies within schools? What other types of support and changes would be required to develop such alliances and articulations? What new alliances and articulations would or should be developed?

Mirror the real world

The introduction of the digital technologies subject into core curricula is being done to ensure that students leave school with the skills necessary to engage in a digital world. It has been suggested that within Australia the introduction of the “compulsory Digital Technologies curriculum may provide a framework upon which female participation in computing can be addressed” (Zagami, Boden, Keane, Moreton, & Schulz, 2016, p. 13). On the other hand, in critiquing school mathematics Lockhart (2009) suggests that “there is surely no more reliable way to kill enthusiasm and interest in a subject than to make it a mandatory part of the school curriculum” (p. 36). A major part of Lockhart’s (2009) critique of school mathematics is a complaint about “the lack of mathematics in our mathematics classes” (p. 29). A problem that arises from a complex set of factors including “that nobody has the faintest idea what it is that mathematicians do” (p. 22) which leads to “forced and contrived” (p. 38) attempts to explain how what happens in mathematics classes is relevant to daily life. Margolis, Estrella, Goode et al. (2008) found that classroom practices associated with the teaching of computer science in American schools “can be disconnected from students’ lives, seemingly devoid of real-life relevance” (p. 102). Echoes of the limited relevance problem were found by Rowan & Lynch (2011) in post-compulsory information technology secondary courses in Australia. Margolis et al. (2008) argue that it is important that teachers be able to demonstrate to students the relevance and significance of computer science to students’ lived experience, but identify that typically teachers have not received any support in developing approaches that meet this need.

The renewed interest in computational thinking and digital technologies arises from visions of the future, such as that seen by the Queensland Government where digital technologies are “fundamentally transforming the world of work and generating new ways of doing business on a global scale” (Department of Education and Training, 2015, p. 11). This is a vision of a future real world that is very different from the experience learners and teachers have of digital technologies within schools. As identified by Selwyn and Bulfin (2015), it is an experience heavy on regulation, standardisation, pre-configuration, directed activity, and on institutional rather than individual needs. This suggests that the prevalent school digital environment is unlikely to help prepare learners and teachers well for the future, fundamentally transformed world. It also suggests that the teaching of computational thinking within schools may fall into the same trap as the type of school-based mathematics critiqued by Lockhart.

Are current, school-based digital environments suitable for preparing learners and teachers “to understand today’s digitalised society and foster 21st century skills like problem solving, creativity and logical thinking” (Balanskat & Engelhardt, 2015, p. 6)? Would an environment with the widespread availability of protean digital technologies better mirror this future world? What challenges exist in making school-based digital environments better mirror a future world that has been fundamentally transformed by digital technologies?

Discussion and Conclusions

This paper has posed the question “What if our digital technologies were protean?” To provide some initial responses to this question the paper has explored what is meant by protean digital technologies and analysed stories of digital technology modification from a high-school teacher and a teacher educator. Analysis of these stories revealed that these educators were driven to modify the available digital technologies while attempting to improve aspects of learning and/or teaching. These attempts at improvement aimed to: fill holes in the functionality provided by the digital technologies; model effective practice with digital technologies; or, better mirror real world digital technologies. Only seven of twenty-six stories of digital technology modification required the use of coding. The majority of digital technology modification stories involved the configuration or combination of digital technologies, often to replace digital technologies provided by the organisation. Using this experience as a foundation, the paper has used a range of literature to develop some initial suggestions for what might happen more broadly within education if our digital technologies were more protean. Given the complex nature of education and the difficulty of predicting the future, these suggestions are framed as questions for further exploration and confirmation, rather than predictions. However, the authors do suspect that the impact of more protean digital technology within education would be positive, both for the teaching of computational thinking, and more broadly for the use of digital technology to enhance learning and teaching.

Actually exploring whether or not this is the case will be quite a challenge, not least because the idea of protean digital technologies is diametrically opposed to the existing digital technology environment within most educational institutions, and indeed broader society. Enabling more protean digital technologies within education would need to engage with existing widely held perspectives and practices around difficult issues such as accountability, efficiency, resourcing, risk management, and student safety. This task is made more difficult by the question of whether those engaged with such discussions bring – as identified by Lankshear and Bigum (1999) – an ‘insider’ or ‘outsider-newcomer’ mindset. An ‘outsider-newcomer’ sees “the world as the same, but just more technologised” where the insider sees how pervasive and protean digital technologies mean that the world – and subsequently educational institutions – “is radically different” (Lankshear & Bigum, 1999, p. 458). The insider view appears more in line with the espoused rationale behind the rise of computational thinking and coding in schools. However, there remain questions about how much of the rhetoric around digital technology-enabled transformation of society will actually eventuate. More pragmatically, there is the question of how to provide protean digital technologies within education institutions. This question might be answered by drawing on research on creating computationally rich environments for learners, such as Grover and Pea’s (2013) potential principles: low floor, high ceiling; support for the “use-modify-create” progression; scaffolding; enable transfer; support equity; and, be systemic and sustainable. Principles that might fruitfully be used to break education out of its traditional norms and structures and allow us to finally explore the question “What IF schools were not encumbered by traditional norms and structures, and technology, social capital and pedagogies were used to their true realisation or potential?”

References

Aaltio, I., & Heilmann, P. (2010). Case Study as a Methodological Approach. In A. J. Mills, G. Durepos, & E. Wiebe (Eds.), Encyclopedia of Case Study Research. (pp. 67–78). Thousand Oaks, CA: Sage Publications.

ACARA. (2014). Computational thinking – Glossary term. Retrieved 2 July 2016.

Balanskat, A., & Engelhardt, K. (2015). Computing our future: Computer programming and coding – Priorities, school curricula and initiatives across Europe. Brussels. Retrieved from http://www.eun.org/c/document_library/get_file?uuid=3596b121-941c-4296-a760-0f4e4795d6fa&groupId=43887

Department of Education and Training. (2015). #codingcounts: A discussion paper on coding and robotics in Queensland schools. Brisbane, Australia. Retrieved from http://advancingeducation.qld.gov.au/SiteCollectionDocuments/Coding-and-robotics-booklet.pdf

Dron, J. (2013). Soft is hard and hard is easy: learning technologies and social media. Form@ Re-Open Journal per La Formazione in Rete, 13(1), 32–43.

Ertmer, P. A., & Ottenbreit-Leftwich, A. (2013). Removing obstacles to the pedagogical changes required by Jonassen’s vision of authentic technology-enabled learning. Computers & Education, 64, 175–182.

Grover, S., & Pea, R. (2013). Computational Thinking in K-12: A Review of the State of the Field. Educational Researcher, 42(1), 38–43.

Jones, D., Albion, P., & Heffernan, A. (2016). Mapping the digital practices of teacher educators: Implications for teacher education in changing digital landscapes. In Proceedings of Society for Information Technology & Teacher Education International Conference 2016 (pp. 2878–2886). Chesapeake, VA: Association for the Advancement of Computing in Education.

Jones, D., Heffernan, A., & Albion, P. (2015). TPACK as Shared Practice: Toward a Research Agenda. In L. Liu & D. Gibson (Eds.), Research Highlights in Technology and Teacher Education 2015 (pp. 13–20). Waynesville, NC: AACE.

Kay, A. (1984). Computer Software. Scientific American, 251(3), 53–59.

Kay, A., & Goldberg, A. (1977). Personal Dynamic Media. Computer, 10(3), 31–41.

Koopman, P., & Hoffman, R. (2003). Work-arounds, make-work and kludges. Intelligent Systems, IEEE, 18(6), 70–75.

Lankshear, C., & Bigum, C. (1999). Literacies and new technologies in school settings. Pedagogy, Culture & Society, 7(3), 445–465.

Lockhart, P. (2009). A Mathematician’s Lament: How school cheats us out of our most fascinating and imaginative art forms. New York: Bellevue Literary Press.

Maddux, C. D., & Lamont Johnson, D. (1997). Logo: A retrospective. Computers in the Schools, 14(1/2), 1–8.

Margolis, J., Estrella, R., Goode, J., Jellison Holme, J., & Nao, K. (2010). Stuck in the shallow end: Education, race, and computing. Cambridge, MA: MIT Press.

Martinez, S. L., & Stager, G. (2013). Invent to learn: Making, tinkering, and engineering in the classroom. Torrance, CA: Constructing Modern Knowledge Press.

Microsoft APAC News Centre. (2015). Three out of four students in Asia Pacific want coding as a core subject in school, reveals Microsoft study | Asia News Center. Retrieved January 20, 2016, from https://news.microsoft.com/apac/2015/03/23/three-out-of-four-students-in-asia-pacific-want-coding-as-a-core-subject-in-school-reveals-microsoft-study/

Mishra, P., & Koehler, M. (2006). Technological pedagogical content knowledge: A framework for teacher knowledge. Teachers College Record, 108(6), 1017–1054.

Orlikowski, W., & Iacono, C. S. (2001). Research commentary: desperately seeking the IT in IT research – a call to theorizing the IT artifact. Information Systems Research, 12(2), 121–134.

Papert, S. (1980). Mindstorms: children, computers, and powerful ideas. New York: Basic Books.

Resnick, M., & Rosenbaum, E. (2013). Designing for tinkerability. In Design, Make, Play: Growing the Next Generation of STEM Innovators (pp. 163–181). New York: Routledge.

Rowan, L., & Lynch, J. (2011). The continued underrepresentation of girls in post-compulsory information technology courses: a direct challenge to teacher education. Asia-Pacific Journal of Teacher Education, 39(2), 83–95.

Rushkoff, D. (2010). Program or be programmed: Ten commands for a digital age. New York: OR Books.

Selwyn, N. (2016). The digital labor of digital learning: notes on the technological reconstitution of education work. Retrieved January 25, 2016, from http://newmediaresearch.educ.monash.edu.au/lnm/the-digital-labor-of-digital-learning/

Selwyn, N., & Bulfin, S. (2015). Exploring school regulation of students’ technology use – rules that are made to be broken? Educational Review, 1911(October), 1–17.

Stager, G. (2013). For the love of laptops. Administr@tor Magazine. Retrieved January 30, 2016, from http://www.scholastic.com/browse/article.jsp?id=3757848

Stallman, R. (2014). Comment on “We can code IT! Why computer literacy is key to winning the 21st century.” Mother Jones. Retrieved January 26, 2016, from http://www.motherjones.com/media/2014/06/computer-science-programming-code-diversity-sexism-education#comment-1437791881

Traxler, J. (2010). Will student devices deliver innovation, inclusion, and transformation? Journal of the Research Centre for Educational Technology, 6(1), 3–15.

Turkle, S. (1995). Life on the screen: Identity in the age of the Internet. New York: Simon & Schuster.

Turvey, K. (2012). Constructing narrative ecologies as a site for teachers’ professional learning with new technologies and media in primary education. E-Learning and Digital Media, 9(1), 113–126.

Voogt, J., Fisser, P., Good, J., Mishra, P., & Yadav, A. (2015). Computational thinking in compulsory education: Towards an agenda for research and practice. Education and Information Technologies, 20(4), 715–728.

Wardrip-Fruin, N., & Montfort, N. (Eds.). (2003). The New Media Reader. Cambridge, MA: MIT Press.

Wing, J. M. (2006). Computational thinking. Communications of the ACM, 49(3), 33–35.

Zagami, J., Boden, M., Keane, T., Moreton, B., & Schulz, K. (2016). Female participation in school computing: reversing the trend. Retrieved from http://digitalcareers.edu.au/wp-content/uploads/2015/04/Female-Participation.pdf

Exploring “post adoptive usage” of the #moodle Book module – a draft proposal

For quite some time I’ve experienced and believed that how universities are implementing digital learning has issues that contribute to perceived problems with the quality of such learning and its associated teaching. The following is an outline of an exploratory research project intended to confirm (or not) aspects of this belief.

The following is also thinking out loud and a work in progress. Criticisms and suggestions welcome. Fire away.

The topic of interest

Like most higher education institutions across the globe, Australian universities have undertaken significant investments in corporate educational technologies (Holt et al., 2013). If there is to be any return on any investment in information technology (IT), then it is essential that the technologies are utilised effectively (Burton-Jones & Hubona, 2006). Jasperson, Carter and Zmud (2005) suggest that the potential of most information systems is underutilised and that most “users apply a narrow band of features, operate at low levels of feature use, and rarely initiate extensions of available features” (p. 525).

While Jasperson et al (2005) are talking broadly about information systems, it’s an observation that is supported by my experience and is likely to resonate with a lot of people involved in university digital/e-learning. It certainly seems to echo the quote from Prof Mark Brown I’ve been (over) using recently about e-learning

E-learning is a bit like teenage sex. Everyone says they’re doing it but not many people really are and those that are doing it are doing it very poorly (Laxon, 2013)

Which begs the question, “Why?”.

Jasperson et al (2005) suggest that without a rich understanding of what people are doing with these information systems at “a feature level of analysis (as well as the outcomes associated with those behaviours)” after the adoption of those systems, then “it is unlikely that organizations will realize significant improvements in their capability to manage the post-adoptive life cycle” (p. 549). I’m not convinced that the capability of universities to manage the post-adoptive life cycle is as good as it could be.

My experience of digital learning within Universities is that the focus is almost entirely on adoption of the technology. A lot of effort is placed into deciding which system (e.g. LMS) should be adopted. Once that decision is made that system is implemented. The focus is then on ensuring people are able to use the adopted system appropriately through the provision of documentation, training, and support. The assumption is that the system is appropriate (after all it wouldn’t have been adopted if it had any limitations) and that people just need to have the knowledge (or the compulsion) to use the system.

There are only two main types of changes made to these systems. First are upgrades: when a new version of the adopted system is released, the institution upgrades to maintain currency. Second are strategic changes: senior management want to achieve X, the system doesn’t do X, so the system is modified to do X.

It’s my suggestion that changes to specific features of a system (e.g. LMS) that would benefit end users are either

  1. simply not known about; or,
    Due to the organisation’s lack of any ability to understand what people are experiencing and doing with the features of the system.
  2. starved of attention.
    Since these are complex systems, changing them is expensive. Thus only strategic changes get made. Changes to fix features used by small subsets of people are never seen as passing the cost/benefit analysis.

I’m interested in developing a rich understanding of the post-adoptive behaviours and experiences of university teachers using digital learning technologies. I’m working on this because I want to identify what is being done with the features of these technologies and understand what is working and what is not. It is hoped that this will reveal something interesting about the ability of universities to manage digital technologies in ways that enable effective utilization and perhaps identify areas for improvement and further exploration.

Research Questions

From that, the following research questions arise.

  1. How do people make use of a particular feature of the LMS?
    Seeking to measure what they actually did when using the LMS for actual learning/teaching – not what they report doing, or what they intended to do.
  2. In their experience, what are the strengths and weaknesses of a particular feature?
    Seeking to identify what they thought the system did to help them achieve their goal and what the system made harder.

Following on from Jasperson et al (2005) the aim is to explore these questions at a feature level. Not with the system as a whole but with how people are using a specific feature of the system. For example, what is their experience of using the Moodle Assignment module, or the Moodle Book module?

Thinking about the method(s)

So how do you answer those two questions?

Question 1 – Use

The aim is to analyse how people are actually using the feature. Not how they report their use, but how they actually use it. This suggests at least two methods

  1. Usability studies; or,
    People are asked to complete activities using a system within a controlled environment that captures their every move, including tracking the movement of their eyes. On the plus side, this captures very rich data. On the negative side, I don’t have access to a usability lab. There’s also the potential for this sort of testing to be removed from context. First, the test occurs in the lab, a different location than the one the user typically works in. Second, in order to get between-user comparisons it can rely on “dummy” tasks (e.g. the same empty course site).
  2. Learning analytics.
    Analysing data gathered by the LMS about how people are using the system. On the plus side, I can probably get access to this data and there are a range of tools and advice on how to analyse it. On the negative side, the richness of the data is reduced. In particular, the user can’t be queried to discover why they performed a particular task. (A rough sketch of this sort of analysis follows this list.)
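
To make the learning analytics option concrete, here’s a minimal sketch of the sort of feature-level query involved. It assumes a Moodle 2.7+ standard log store and read-only access to the Moodle database; the table, column and component names follow standard Moodle conventions but would need checking against the institutional install.

    # Sketch: feature-level usage of the Moodle Book module from the
    # standard log store. Assumes Moodle 2.7+ and read-only DB access;
    # names would need checking against the local install.
    import mysql.connector

    conn = mysql.connector.connect(
        host="localhost", user="readonly", password="...", database="moodle")
    cur = conn.cursor()

    # Count each type of Book event (chapter viewed, created, updated...)
    # per course: a first cut at separating reading from authoring/maintaining.
    cur.execute("""
        SELECT courseid, eventname, COUNT(*) AS n
        FROM mdl_logstore_standard_log
        WHERE component = 'mod_book'
        GROUP BY courseid, eventname
        ORDER BY courseid, n DESC
    """)
    for courseid, eventname, n in cur:
        print(courseid, eventname.split("\\")[-1], n)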

Question 2 – Strengths and Weaknesses

This is where the user voice enters the picture. The aim here is to find what worked for them and what didn’t within their experience.

There appear to be three main methods

  1. Interviews;
    On the plus side, rich data. On the negative side, “expensive” to implement and scale to largish numbers and a large geographic area.
  2. Surveys with largely open-ended questions; or,
    On the plus side, cheaper, easier to scale to largish numbers and a large geographic area etc. On the negative side, more work on the part of the respondents (having to type their responses) and less ability to follow up on responses and potentially dig deeper.
  3. LMS/system community spaces.
    An open source LMS like Moodle has openly available community spaces in which users/developers of the system interact. Some of the Moodle features have discussion forums where people using the feature can discuss it. Content analysis of the relevant forum might reveal patterns.
    The actual source code for Moodle, as well as plans and discussion about the development of Moodle, live in systems that can also be analysed.
    On the plus side, there is a fair bit of content in these spaces and there are established methods for analysing them. Is there a negative side? (A sketch of collecting such forum data follows this list.)
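
As an illustration of the third option, a rough sketch of gathering the raw material for a content analysis. The forum id in the URL is a hypothetical placeholder (the Book module’s forum would need to be looked up on moodle.org) and the CSS selector is an assumption about the page markup.

    # Sketch: collect discussion titles/links from a moodle.org community
    # forum for later content analysis. The forum id is a placeholder and
    # the selector is an assumption about the page markup.
    import requests
    from bs4 import BeautifulSoup

    FORUM_URL = "https://moodle.org/mod/forum/view.php?id=NNN"  # hypothetical id

    soup = BeautifulSoup(requests.get(FORUM_URL).text, "html.parser")

    # Forum index pages link each discussion through discuss.php
    for a in soup.select("a[href*='discuss.php']"):
        print(a.get_text(strip=True), a["href"])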

What’s currently planned

Which translates into an initial project that is going to examine usage of the Moodle Book module (Book). This particular feature was chosen because of this current project. If anything interesting comes of this, the next plan is to repeat a similar process for the Moodle Assignment module.

Three sources of data to be analysed initially

  1. The Moodle database at my current institution.
    Analysed to explore if and how teaching staff are using (creating, maintaining etc) the Book. What is the nature of the artefacts produced using the Book? How are learners interacting with the artefact produced using the Book?
  2. Responses from staff at my institution to a simple survey.
    Aim being to explore relationships between the analytics and user responses.
  3. Responses from the broader Moodle user community to essentially the same survey.
    Aim being to compare/contrast with the broader Moodle user community’s experiences with the experiences of those within the institution.

Specifics of analysis and survey

The analysis of the Book module will be exploratory. The aim is to develop analysis that is specific to the nature of the Book.
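
As a hint of what “specific to the nature of the Book” might mean, a sketch profiling the Book artefacts themselves: how many Books per course, how many chapters, and when they were last touched. The mdl_book and mdl_book_chapters tables follow standard Moodle naming, but treat this as an assumption to verify against the local schema.

    # Sketch: profile the artefacts produced with the Book module.
    # Table names (mdl_book, mdl_book_chapters) are standard Moodle
    # conventions but should be verified against the local schema.
    import mysql.connector

    conn = mysql.connector.connect(
        host="localhost", user="readonly", password="...", database="moodle")
    cur = conn.cursor()

    cur.execute("""
        SELECT b.course,
               COUNT(DISTINCT b.id)               AS books,
               COUNT(c.id)                        AS chapters,
               FROM_UNIXTIME(MAX(b.timemodified)) AS last_update
        FROM mdl_book b
        LEFT JOIN mdl_book_chapters c ON c.bookid = b.id
        GROUP BY b.course
        ORDER BY books DESC
    """)
    for course, books, chapters, last_update in cur:
        print(course, books, chapters, last_update)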

The aim of the survey is to generate textual descriptions of the users’ experience with the Book. Initial thought was given to using the Critical Incident Technique in a way similar to Islam (2014).

Currently the plan is to use a similar approach more explicitly based on the Technology Acceptance Model (TAM). The idea is that the survey will consist of a minimal number of closed questions mostly to provide demographic data. The main source of data from the survey will come from four open-ended questions, currently worded as

  1. Drawing on your use, please share anything (events, resources, needs, people or other factors) that has made the Moodle Book module more useful in your teaching.
  2. Drawing on your use, please share anything (events, resources, needs, people or other factors) that has made the Moodle Book module less useful in your teaching.
  3. Drawing on your use, please share anything (events, resources, needs, people or other factors) that has made the Moodle Book module easier to use in your teaching.
  4. Drawing on your use, please share anything (events, resources, needs, people or other factors) that has made the Moodle Book module harder to use in your teaching.

Future extensions

The analysis of Moodle usage might be usefully supplemented with interviews with particular people to explore interesting patterns of usage.

It’s also likely that the content analysis of the Moodle community discussion forum around the Book will be completed. That’s dependent upon time and may need to wait.

The Moodle source code repository and the tracker may also be usefully analysed. However, the focus at the moment is more on the user’s experience. The information within the repository and the tracker is likely to be a little too far away from most users of the LMS.

It would be interesting to repeat the institutionally specific analytics and survey at other institutions to further explore the impact of specific institutional actions (and just the broader contextual differences) on post-adoptive behaviour.

References

Burton-Jones, A., & Hubona, G. (2006). The mediation of external variables in the technology acceptance model. Information & Management, 43(6), 706–717. doi:10.1016/j.im.2006.03.007

Holt, D., Palmer, S., Munro, J., Solomonides, I., Gosper, M., Hicks, M., … Hollenbeck, R. (2013). Leading the quality management of online learning environments in Australian higher education. Australasian Journal of Educational Technology, 29(3), 387–402. Retrieved from http://www.ascilite.org.au/ajet/submission/index.php/AJET/article/view/84

Islam, A. K. M. N. (2014). Sources of satisfaction and dissatisfaction with a learning management system in post-adoption stage: A critical incident technique approach. Computers in Human Behavior, 30, 249–261. doi:10.1016/j.chb.2013.09.010

Jasperson, S., Carter, P. E., & Zmud, R. W. (2005). A Comprehensive Conceptualization of Post-Adoptive Behaviors Associated with Information Technology Enabled Work Systems. MIS Quarterly, 29(3), 525–557.

Laxon, A. (2013, September 14). Exams go online for university students. The New Zealand Herald.

Anyone capturing users’ post-adoptive behaviours for the LMS? Implications?

Jasperson, Carter & Zmud (2005)

advocate that organizations strongly consider capturing users’ post-adoptive behaviors, over time, at a feature level of analysis (as well as the outcomes associated with these behaviors). It is only through analyzing a community’s usage patterns at a level of detail sufficient to enable individual learning (regarding both the IT application and work system) to be exposed, along with the outcomes associated with this learning, that the expectation gaps required to devise and direct interventions can themselves be exposed. Without such richness in available data, it is unlikely that organizations will realize significant improvements in their capability to manage the post-adoptive life cycle (p. 549)

Are there any universities “capturing users’ post-adoptive behaviours” for the LMS? Or any other educational system?

There’s lots of learning analytics research (e.g. interesting stuff from Gasevic et al, 2015) going on, but most of that is focused on learning and learners. This is important stuff and there should be more of it.

But Jasperson et al (2005) are Information Systems researchers publishing in one of the premier IS journals. Are there University IT departments that are achieving the “richness in available data…(that) will realize significant improvements in their capability to manage the post-adoptive life cycle”?

If there is, what does that look like? How do they do it? What “expectation gaps” have they identified? What “direct interventions” have they implemented? How?

My experience suggests that this work is limited. I wonder what implications that has for the quality of system use, and thus the quality of learning and teaching?

What “expectation gaps” are going ignored? What impact does that have on learning and teaching?

Jasperson et al (2005) develop a “Conceptual model of post-adoptive behaviour” shown in the image below. Post-adoptive behaviours can include the decision not to use, or to change how to use. A gap in expectations that is never filled is not likely to encourage on-going use.

They also identify that there is an “insufficient understanding of the technology sensemaking process” (p. 544). The model suggests that technology sensemaking is a pre-cursor to “user-initiated learning interventions”, examples of which include: formal or informal training opportunities; accessing external documentation; observing others; and, experimenting with IT application features.

Perhaps this offers a possible explanation for complaints about academics not using the provided training/documentation for institutional digital learning systems? Perhaps this might offer some insight into the apparent “low digital fluency of faculty” problem.

conceptual model of post-adoptive behaviours

References

Gašević, D., Dawson, S., Rogers, T., & Gasevic, D. (2015). Learning analytics should not promote one size fits all: The effects of instructional conditions in predicting learning success. The Internet and Higher Education, 28, 68–84. doi:10.1016/j.iheduc.2015.10.002

Jasperson, S., Carter, P. E., & Zmud, R. W. (2005). A Comprehensive Conceptualization of Post-Adoptive Behaviors Associated with Information Technology Enabled Work Systems. MIS Quarterly, 29(3), 525–557.

The CSCW view of Knowledge Management

Earlier this week I attended a session given by the research ethics folk at my institution. One of the observations was that they’d run training sessions but almost no-one came. I’ve heard similar observations from L&T folk, librarians, and just about anyone else aiming to help academics develop new skills. Especially when people spend time and effort developing yet another you beaut website or booklet that provides everything one would want to know about a topic. There’s also the broader trope developing about academics/teachers being digitally illiterate, which I’m increasingly seeing as unhelpful and perhaps even damaging.

Hence my interest when I stumbled across Ackerman et al (2013), a paper titled “Sharing knowledge and expertise: The CSCW View”, with the abstract

Knowledge Management (KM) is a diffuse and controversial term, which has been used by a large number of research disciplines. CSCW, over the last 20 years, has taken a critical stance towards most of these approaches, and instead, CSCW shifted the focus towards a practice-based perspective. This paper surveys CSCW researchers’ viewpoints on what has become called ‘knowledge sharing’ and ‘expertise sharing’. These are based in an understanding of the social contexts of knowledge work and practices, as well as in an emphasis on communication among knowledgeable humans. The paper provides a summary and overview of the two strands of knowledge and expertise sharing in CSCW, which, from an analytical standpoint, roughly represent ‘generations’ of research: an ‘object-centric’ and a ‘people-centric’ view. We also survey the challenges and opportunities ahead.

What follows are a summary and some thoughts on the paper.

Thoughts? Possibilities?

The paper’s useful in that it appears to give a good overview of the work from CSCW on this topic. Relevant to some of the problems being faced around digital learning.

All this is especially interesting to me due to my interest in exploring the design and impact of distributed means of sharing knowledge about digital learning.

Look at Cabitza and Simone (2012) – two levels of information, and affording mechanisms – as informing design. Their work on knowledge artifacts (Cabitza et al, 2008) might also be interesting.

Brown and Duguid’s (2000) Network of Practice is a better fit for what I’m thinking here.

CSCW has a tendency to precede development with ethnographic studies.

Learning object repositories?

Given the fairly scathing findings re: the idea of repositories, what does this say about current University practices around learning object repositories?

Is digitally illiterate a bad place to start?

The “sharing expertise” approach would appear to assume that the people you’re trying to help have knowledge to share. Labeling teachers as digitally illiterate would appear to mean you couldn’t even conceptualise this as a possibility. Is this a core problem here?

The shift from system to individual practice

At some level the shift in the CSCW work illustrates a shift from focusing on IT systems to a focus on individual practices. The V&R mapping process illustrates some of this.

Context and embedding is important

Findings reinforce the contextual and situated nature of knowledge (is that a bias from the assumptions of these researchers?). Does this explain many of the problems currently being faced? i.e. what’s being done at the moment is neither contextual nor situated? Would addressing this improve outcomes?

Summary

A topic dealt with by different research communities (Information Systems, CSCL, Computer Science), each with their particular focus and limitations. e.g. CS has developed interesting algorithms but “Empirical explorations into the practice of knowledge-intense work have been typically lacking in this discourse” (p. 532).

The CSCW strength has been “to have explored the relationship between innovative computational artifacts and knowledge work – from a micro-perspective” (p. 532)

Uses two different terms that “connote CSCW’s spin on the problem” i.e.

that knowledge is situated in people and in location, and that the social is an essential part of using any knowledge…far more useful systems can be developed if they are grounded in an analysis of work practices and do not ignore the social aspects of knowledge sharing. (p. 532)

  1. Knowledge sharing – knowledge is externalised so that it can be captured/manipulated/shared by technology.
  2. Expertise sharing – where the capability/expertise to do work is “based on discussions among knowledgeable actors and less significantly supported by a priori externalizations”

Speak of generations of knowledge management

  1. Repository models of information and knowledge.
    Ignoring the social nature of knowledge, focused on externalising knowledge.
  2. Sharing expertise
    Tying communication among people into knowledge work. Either through identifying how best to “find” who has the knowledge or on creating online communities to allow people to share their knowledge. – expertise finders, recommenders, and collaborative help systems.
    Work later scaled to Internet size systems and communities – collectives, inter-organisational networks etc.

Repository model

started with attempts “to build vast repositories of what they knew” (p. 533).

it should be noted that CSCW never really accepted that this model would work in practice (p. 534)…Reducing the richness of collective memory to specific information artifacts was utopian (p. 537)

Findings from various CSCW repository studies

  • Standard issues with repository systems

    particular difficulty with motivating users to author and organize the material and to maintain the information and its navigation

  • Context is important.

    Some systems tackled the problem of context by trying to channel people to expertise that was as local as possible based on the assumption that “people nearby an asker would know more about local context and might be better at explaining than might experts”.

    Other research found “difficulties of reuse and the organisation of the information into repositories over time, especially when context changed…showed that no organisational memory per se existed; the perfect repository was a myth” (p. 534)

  • Need to embed.

    such a memory could be constructed and used, but the researchers also found they needed to embed both the system and the information in both practice and in the organizational context

  • situated and social.

    CSCW in general has assumed that understanding situated use was critical to producing useful, and usable, systems (Suchman 1987; Suchman and Wynn 1984) and that usability and usefulness are social and collaborative in nature (p. 537)

  • deviations seen as useful

    Exceptions in organizational activities, instead of being assumed to be deviations from correct procedures, were held to be ‘normal’ in organizational life (Suchman 1983) and to be examined for what they said about organizational activity, including information handling (Randall et al. 2007;Schmidt 1999) (p. 537)

  • issues in social creation, use, and reuse of information.
    Including:

    • issues of motivation,
      Getting information is hard. Aligning reward structures a constant problem. The idea of capturing all knowledge clashed with a range of factors, especially in competitive organisational settings.
    • context in reuse,
      “processes of decontextualisation and recontextualisation loomed over the repository model” (p. 538). “This is difficult to achieve, and even harder to achieve for complex problems” (p. 539).
    • assessments of reliability and authoritativeness,
      de/recontextualisation is social/situated. Information is assessed based on: expertise of the author, reliability, authoritativeness, quality, understandability, the provisional/final nature of the information, obsolescence and completeness, is it officially vetted?
    • organizational politics, maintenance, and
      “knowledge sharing has politics” (p. 539). Who can author/change information impacts use. Categories/metadata of/about data have politics.
    • reification
      “repository systems promote an objectified view of knowledge” (p. 540)

Repository work has since been commercialised.

Some of this work is being re-examined/done due to new methods: machine learning and crowd-sourcing.

Boundary objects – “critical to knowledge sharing. Because of their plasticity of meaning boundary objects serve as translation mechanisms for ideas, viewpoints, and values across otherwise difficult to traverse social boundaries. Boundary objects are bridges between different communities of practice (Wenger 1998) or social worlds (Strauss 1993).” (p. 541)

“information objects that have meaning on both sides of an intra-organisational or inter-organisational boundary”.

CSCW tended to focus on “tractable information processing objects” (p. 542) – forms etc. – easier to implement but “over-emphasis on boundary objects as material artifact, which can limit the analytical power that boundary objects bring to understanding negotiation and mediation in routine work”

Example – T-Matrix – supporting production of a tire and innovation.

Cabitza and Simone (2012) identify two levels of information

  1. awareness promoting information – current state of the activity
  2. knowledge evoking information – triggering previously acquired knowledge or triggering/supporting learning and innovation

Also suggest “affording mechanisms”

Other terms

  1. “boundary negotiating” objects
    Less structured ideas of boundary objects suggested
  2. knowledge artifacts – from Cabitza et al (2013)

    a physical, i.e., material but not necessarily tangible, inscribed artifact that is collaboratively created, maintained and used to support knowledge-oriented social processes (among which knowledge creation and exploitation, collaborative problem solving and decision making) within or across communities of practice…. (p. 35)

    These are inherently local, remain open for modification. Can stimulate socialisation and internalisation of knowledge.

common information spaces – a common central archive (repository?) used by distributed folk. Open and malleable by nature. A repository is closed/finalised, a CIS isn’t. Various work to make the distinction – e.g. degrees of distribution; kinds of articulation work and artifacts required; the means of communication; and the differences in frames of participant reference.

Various points made as to the usefulness of this abstraction.

Assemblies

  • Assembly – “denote an organised collection of information objects”
  • Assemblages – “would include the surrounding practices and culture around an object or collection” (p. 545)

How assemblies are put together and their impacts is of interest.

Sharing expertise

Emphasis on interpersonal communications over externalisation in IT artifacts. “ascribed a more crucial role to the practices of individuals” (p. 547). A focus on sharing tacit knowledge – including contextual knowledge.

tacit/explicit – Nonaka’s mistake – explicit mention of the misinterpretation of Polanyi’s idea of tacit knowledge. The mistaken assumption/focus was on making tacit knowledge explicit, when Polanyi used tacit to describe knowledge that is very hard, if not impossible, to make explicit.

Tacit knowledge can be learned only through common experiences, and therefore, contact with others, in some form, is required for full use of the information. (p. 547)

Community of practice can “roughly be defined as a group that works together in a certain domain and whose members share a common practice”.

Network of practice (from Brown and Duguid, 2000) – members do not necessarily work together, but work on similar issues in a similar way.

Community of Interest – defined by common interests, not common practice. Diversity is a source of creativity and innovation.

I like this critique of the evolution of use of CoP

Intrinsically based in their view of ‘tacit knowledge,’ the Knowledge Management community appropriated CoP in an interventionist manner. CoPs were to be cultivated or even created (Wenger et al. 2002), and they became fashionable as ‘the killer application for knowledge management practitioners’ (Su and Wilensky 2011, p. 10) with supposedly beneficial effects on knowledge exchange within groups. (p. 547)

CSCW didn’t use CoPs in an interventionist way – instead as an analytical lens.

Social capital – from Bourdieu – “refers to the collective abilities derived from social networks”. Views sharing “in the relational and empathic dimension of social networks” (p. 548).

Nahapiet and Ghoshal (1998) suggest it consists of 3 dimensions

  1. Structural opportunity (‘who’ shares and ‘how’);
    Which is where the technical enters the picture.
  2. Cognitive ability (‘what’ is shared);
  3. Relational motivation (‘why’ and ‘when’ people engage)

Latter 2 dimensions not often considered by system designers.

The sharing approach places emphasis on “finding-out” work. Where knowledge is found by knowing/asking others and in finding the source, de-contextualising and then re-contextualising. Often involves “local knowledge” – which tends to have an emergent nature. What’s important is only known in the situation at hand and who holds it evolves within a concrete situation.

People finding and expertise location

Move from focusing on representations of data to the interactions between people – trying to produce and modify them. Tackling technical, organisational and social issues simultaneously.

Techniques include: information retrieval, network analysis, topics of interest, expertise determination.

Profile construction can be contentious – privacy, identification of expertise. Especially given “big data” approaches to analysing and identification.

Expertise finding’s 3 stages: identification, selection, escalation.

Need to promote awareness of individual expertise and their availability – “based in ‘seeing’ others’ activities” (p. 551)

“people prefer others with whom they share a social connection to complete strangers” (p. 553) – no surprise there – but people known directly weren’t chosen as they were deemed not likely to have any greater expertise. Often people who were 2 or 3 degrees of separation away.

Profiles also found by one study to be often out of date. Explored “peripheral awareness” as a solution.

Open issues

  • Development of personal profiles.
  • Privacy and control.
  • Accuracy.

Finding others – lots of work outside CSCW.

CoI in the form of web Q&A communities have arisen on the Internet, with research that has studied question classification, answer quality, user satisfaction, motivation and reputation.

Motivation

  • more money = more answers, but not necessarily better quality.
  • charitable contributions increased credibility of answers “in a nuanced way”?
  • Altruism and reputation building two important motivations

Recent research looking at “social Q&A” – how people use social media to answer – two lines of research (echoing above)

  1. social analysis of existing systems;
    Looking at: impact of tie strength on answer quality, org setting, response rates when asking strangers – especially with quick, non-personal answers, community size and contact rate.
  2. technical development of new systems

Future directions

Interconnected practices: expertise infrastructures

Increasing inter-connectedness

  • may cause “experts” to become anonymous.
  • propel new types of interactions via micro-activities – microtasking environments make it easy/convenient to help
  • Collaboratively constructed information spaces – wikipedia – numerous papers examine how it was constructed, including work looking more broadly at Wikis
  • Other research looked at github, mozilla bug reports etc.
  • And work looking at social media, microblogging etc and its use.

References

Ackerman, M. S., Dachtera, J., Pipek, V., & Wulf, V. (2013). Sharing Knowledge and Expertise: The CSCW View of Knowledge Management. Computer Supported Cooperative Work (CSCW), 22(4-6), 531–573. doi:10.1007/s10606-013-9192-8

Re-purposing V&R mapping to explore modification of digital learning spaces

Why?

Apparently there is a digital literacy/fluency problem with teachers. The 2014 Horizon Report for Higher Education identified the “Low Digital Fluency of Faculty” as the number 1 “significant challenge impeding higher education technology adoption”. In the 2015 Horizon Report for Higher Education this morphs into “Improving Digital Literacy” being the #2 significant challenge. While the 2015 K-12 Horizon Report has “Integrating Technology in Teacher Education” as the #2 significant challenge.

But focusing solely on the literacy of the teaching staff seems a bit short sighted. @palbion, @chalkhands and I are teacher educators working in a digitally rich learning environment (i.e. a large percentage of our students are online only students). We are also fairly digitally fluent/literate. In a paper last year we explored how a distributive view of knowledge sharing helped us “overcome the limitations of organisational practices and technologies that were not always well suited to our context and aims”.

Our digital literacy isn’t a problem, we’re able and believe we have to overcome the limitations of the environment in which we teach. Increasingly the digital tools we are provided by the institution do not match the needs we have for our learning designs and consequently we make various types of changes.

Often these changes are seen as bad. At best these changes are invisible to other people within our institution. At worst they are labelled as duplication, inefficient, unsafe, and feral. They are seen as shadow systems. Systems and changes that are undesirable and should be rooted out.

What?

Rather than continue this negative perspective, @palbion, @chalkhands and I have just finished a rough paper that set out to explore if there was anything valuable or interesting to learn from the changes we made to our digital learning spaces. Our process for this paper was

  1. Generate a list of stories of the changes we made to our digital learning/teaching spaces.
    Using a Google doc and a simple story format (descriptive title; what change was made; why; and, outcomes) each of us generated a list of stories of where we’d changed the digital tools/spaces we use for our teaching.
  2. Map those stories using a modified Visitor and Resident mapping approach.
    The stories needed to be analysed in some way. The Visitors & Residents approach offered a number of advantages – more detail below.
  3. Reflect upon what that analysis showed and about potential future applications of this approach.

What follows is some reflection on the approach, a description of the original V&R map, and a description and example of our modified V&R map.

Reflection on the approach

In short, we (I think I can say we) found the whole approach interesting and could see some potential for broader use. In particular, the potential benefits of the approach include:

  1. Great way to start discussions and share knowledge.
    Gathering stories and analysing them using the V&R process appear to be very useful ways for starting discussions and sharing knowledge. Not the least because it starts with people sharing what they are doing (trying to do) now, rather than some mythical ideal future state.
    Reports from others using the original V&R mapping process suggest this is a strength of the V&R mapping approach. Our experience seems to suggest this might continue with the modified map we used.
  2. Doesn’t start by assuming that people are illiterate.
    Neither @palbion nor I think we’re digitally illiterate. We have formal qualifications in Information Technology (IT). @chalkhands doesn’t have formal qualifications in IT. Early on in this process she was questioning whether or not she had anything to add; she wasn’t as “literate” as @palbion and I. However, as we started sharing stories and mapping them that questioning went away.
    The V&R approach is very much based on the idea of focusing on what people do, rather than who they are or what they know (or don’t). It doesn’t assume teaching staff are digitally illiterate; it is just interested in what people do. I think this is a much more valuable starting point for engaging in this space. It appears likely to provide a method for helping universities follow observations from the 2015 Horizon Report: that solving the “digital literacy problem” requires “individual scaffolding and support along with helping learners as they manage conflict between practice and different contexts”; that “Understanding how to use technologies is a key first step, but being able to leverage them for innovation is vital to fostering real transformation in higher education”; and that “programs with one-size-fits-all training approaches that assume all faculty are at the same level of digital literacy pose a higher risk of failure.”
  3. It accepts that the ability for people to change digital technologies is not only ok, it is necessary and unavoidable.
    Worthen (2007) makes the point that those in charge of institutional IT (including digital learning spaces) want to prevent change while the people using digital systems want the technology to change

    Users want IT to be responsive to their individual needs and to make them more productive. CIOs want IT to be reliable, secure, scalable, and compliant with an ever increasing number of government regulations

    Since the CIOs are in charge of the technology (they have the power), the practice of changing digital systems (without having gone through the approved governance processes) is deemed bad and something to be avoided. This is a problem for learning and teaching, which is all about change and adaptation – especially if you accept Shulman’s (1987) identification of the “knowledge base of teaching” as lying (emphasis added)

    at the intersection of content and pedagogy, in the capacity of a teacher to transform the content knowledge he or she possesses into forms that are pedagogically powerful and yet adaptive to the variations in ability and background presented by the students (p. 15)

The original V&R map

The original V&R map (example in the image below) is a Cartesian graph with two axes. The X-axis ranges from visitor to resident and describes how you perceive and use digital technologies. A visitor sees a collection of disparate tools that are fit for specific purposes. When something has to be done the visitor selects the tool, gets the job done, and leaves the digital space without leaving a social trace. A resident on the other hand sees a digital space where they can connect and socialise with others. The Y-axis ranges from Institutional to Personal and describes where use of digital technologies fits on a professional or personal scale.

The following map shows someone for whom LinkedIn is only used for professional purposes. So it’s located toward the “Institutional” end of the Y-axis. Since LinkedIn is about leaving a public social trace for others to link to, it’s located toward the “Resident” end of the X-axis.

Our modified V&R map

Our purpose was to map stories about how we had changed digital technologies within our role as teacher educators. Thus the normal Institutional/Personal scale for the Y-axis doesn’t work: we’re only considering activities that are institutional in purpose. In addition, we’re focusing on activities that changed digital technologies, and we’re interested in understanding the types of changes that were made. As a result we adopted a “change scale” as the Y-axis. The scale was adapted from software engineering/information systems research and is summarised in the following list.

  • Use: the tool is used with no change. Example: add an element to a Moodle site.
  • Internal configuration: change the operation of a tool using the configuration options of the tool. Example: change the appearance of a Moodle site with course settings.
  • External configuration: change the operation of a tool using means external to the tool. Example: inject CSS or Javascript into a Moodle site to change its operation.
  • Customization: change the tool by modifying its code. Example: modify the Moodle source code, or install a new plugin.
  • Supplement: use another tool(s) to offer functionality not provided by existing tools. Example: implement course-level social bookmarking by requiring use of Diigo.
  • Replacement: use another tool to replace/enhance functionality provided by existing tools. Example: require students to use external blog engines, rather than the Moodle blog engine.

Since we were new to the V&R mapping process and were trying to quickly do this work without being able to meet, some additional scaffolding was placed on the X-axis (visitor-resident). This provided some common level of understanding of the scale and was based on a specific (and fairly limited) definition of “social trace”. The lowest level of the scale was “tools used by teachers”, which meant no social trace. The scale gradually increased the number of people involved in the activities mediated by the digital technology: “subsets of students in a course” to “all students in a course” and right on up to “anyone on the open web”.

The following image is the “template” map that each of us used to map out our stories of changing digital technologies. (A rough sketch of how such a map might be drawn follows the image.)

Modified V&R map template
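
For the curious, a minimal sketch of how a map like this template might be drawn. The numeric visitor-resident scale and the example story coordinates are assumptions for illustration; our actual maps were produced by hand.

    # Sketch: draw a modified V&R "change map" with matplotlib.
    # The numeric scales and example coordinates are assumptions.
    import matplotlib.pyplot as plt

    CHANGE_LEVELS = ["Use", "Internal config", "External config",
                     "Customization", "Supplement", "Replacement"]

    fig, ax = plt.subplots(figsize=(8, 6))
    ax.set_xlim(0, 10)
    ax.set_ylim(-0.5, len(CHANGE_LEVELS) - 0.5)
    ax.set_xlabel("Visitor  <-------------->  Resident")
    ax.set_yticks(range(len(CHANGE_LEVELS)))
    ax.set_yticklabels(CHANGE_LEVELS)

    # Hypothetical stories plotted at (visitor-resident, change level)
    stories = {"Know thy student": (1, 5), "Links in blog comments": (4, 1)}
    for label, (x, y) in stories.items():
        ax.plot(x, y, "o")
        ax.annotate(label, (x, y), textcoords="offset points", xytext=(5, 5))

    plt.tight_layout()
    plt.savefig("vr_change_map.png")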

An example map and stories

The following image is the outcome of mapping my stories of change. A couple of example stories are included after the image.

My V&R change map

Know thy student

This story involves replacing/supplementing existing digital tools, but is something that only I use. Hence Visitor/Replacement.

What? A collection of Greasemonkey scripts, web scraping, and a local database/server designed to help me know my students and what they were doing in the Study Desk. Wherever a Moodle user profile link appears, the script will add a link [ details ] that is specific to each user. If I click on that link I see a popup window with a range of information about the student.

Why? Because finding out this information about a student would normally take 10+ minutes and require the use of multiple different web pages in two different systems. Many of these pages don’t exactly make it easy to see the information. Knowing the students better is a core part of improving my teaching.

Outcomes? It’s been a godsend, saving time and enabling me to be more aware of student progress. (A sketch of the local server side of this follows.)
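
A minimal sketch of the local server side of something like this: the Greasemonkey script’s [ details ] link would point at a service of this kind. The route, database and schema here are hypothetical stand-ins for the real web-scraped local store.

    # Sketch: the local "details" service the Greasemonkey [ details ] link
    # could point at. Route, database name, and schema are hypothetical --
    # the real system scrapes student records and Moodle into a local store.
    import json
    import sqlite3
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.parse import urlparse, parse_qs

    class DetailsHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            query = parse_qs(urlparse(self.path).query)
            student_id = query.get("id", [""])[0]
            conn = sqlite3.connect("students.db")  # hypothetical local store
            row = conn.execute(
                "SELECT name, program, mode, progress "
                "FROM students WHERE id = ?", (student_id,)).fetchone()
            conn.close()
            body = json.dumps(dict(zip(
                ("name", "program", "mode", "progress"), row or ()))).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("localhost", 8080), DetailsHandler).serve_forever()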

Using links in student blog posts

A fairly minor example of change. There’s a question of whether it’s just “use” or “internal configuration”. After all, it’s just using an editor on a web page to create some HTML. It was bumped up to “internal configuration” because of an observation that hyperlinks are not often used by many teachers. Something I’m hoping that @beerc will test empirically.

What? Some comments I write on student blog posts will make use of links to offer pointers to relevant resources.

Why? It’s more useful and easier for the students to have the direct link, and hence they are more likely to make use of the suggestion.

Outcomes? Minor anecdotal positive comments. Not really known.

Early indications and reflection

The change scale worked okay but could use some additional reflection. In particular we raised some questions about whether many of the “replacement” examples of change (including those in my map above) are actually examples of supplement.

On reflecting on all this we made some initial observations, including

  1. Regardless of perceived levels of digital literacy we all engaged in a range of changes to digital technologies.
  2. Not surprisingly, the breadth/complexity of those changes increased with greater digital literacy.
  3. In the end very few of our changes were “replacement”. Almost all were focused more on overcoming perceived shortcomings with the provided tools, rather than duplicating their functionality.
  4. Most of the changes tended to congregate towards the “visitor” end of the X-axis. Not surprising given that none of the digital technologies provided by the institution are on the open web.
  5. Almost all of the stories that involved “replacement” were based on moving out onto the “open web”. i.e. they were all located toward the “resident” end of the X-axis.
  6. Changes were being made due to two main reasons: improving the efficiency of institutional systems or practices; or, customising digital technologies to fit the specific learning activities we wanted to implement.