Planning changes to EDC3100 assignment 1

In the first half of the year there was a new assignment in EDC3100 designed both to enhance student learning and to experiment with making the data produced by students and markers as part of the assessment process more accessible to manipulation by software, i.e. the students and markers entered data into a spreadsheet.

It’s a new semester, time to reflect on that initial use and see what changes should and can be made.

Student results

Let’s start with student results. (Note: this is all a bit rough and ready)

Overall, the average mark for the assignment was 13.8 out of 19 (72%), with a standard deviation of around 3.  But that’s for both parts of the assignment.

Given the current practice of using Word documents as assignment cover sheets, extracting the specific marks for the checklist/spreadsheet assignment is difficult. But I do have an Excel spreadsheet, so I can run a script to get that data.

For the checklist/spreadsheet part, the average mark is about 9.5 out of 14 (68%), with a standard deviation of around 2.
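
As a rough illustration of the sort of script involved, the following is a minimal sketch. It assumes the marks for the three criteria have been collated into a single Excel file with one row per student; the file name and column names are made up for the example.

```python
# A minimal sketch, assuming the checklist marks have been collated into one
# Excel file with one row per student. File and column names are hypothetical.
import pandas as pd

marks = pd.read_excel("a1_checklist_marks.xlsx")

# Total for the checklist/spreadsheet part only (out of 14)
checklist = marks["Mark"] + marks["Acceptable use"] + marks["RAT"]

print("Average: {:.1f} / 14 ({:.0f}%)".format(
    checklist.mean(), 100 * checklist.mean() / 14))
print("Std dev: {:.1f}".format(checklist.std()))
```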

Let’s dig a bit deeper into the three criteria that made up that mark. The three criteria were

  1. Mark – students use a checklist to evaluate a lesson plan and its use of ICT and pedagogy.
  2. Acceptable use – focused on students’ ability to identify a lesson plan they can use wrt copyright.
  3. RAT – students use the RAT model to evaluate the use of ICT and pedagogy in the course.

The following table compares cohort performance on the criteria and overall.

Criteria         Average %   Stdev %
Overall          68          15.8
Mark             75.2        17.2
Acceptable Use   63.2        16.7
RAT              59.3        17.8

The RAT question was where students were least successful.  It’s also (arguably) the most difficult question. The checklist (Mark criterion) attracted the highest marks.  Acceptable use is also quite low and needs some work.

Those last two are where the focus will go for now.

Other thoughts and experiences

Student feedback

Student feedback included the following comments related to the assignment

Some of the items we were required to assess in Assignment One could have been better explained

more guidance was required for Assignment 1. I didn’t like the use of the Excel document

The last point was connected to the issue of not being able to justify the interpretation, which links back to points raised elsewhere. The first point is one to ponder, though the results above suggest that’s not where the real need lies.

Marker feedback

Feedback from markers included

  • Students identifying use of an IWB when in fact it was just being used as a data projector.
  • Little understanding of what constitutes an authentic problem, or connections beyond the classroom.
  • Some surprise that even with 50 assignments to mark, there were few double-ups of lesson plans.
  • Another liked the format, in that it gave students a better handle on what to look for in an ICT-rich lesson, and found the RAT model useful for framing an evaluation.
  • The wording and nature of the statements for the acceptable use and RAT questions need to be clarified – too confusing (for marker and student).

One aspect of the assignment that troubled one of the markers was that the lesson chosen by the student only had to include some form of ICT.  It didn’t need to be a rich or effective use of ICT. This was actually one of the aims of the assignment: to allow students to develop some appreciation for the breadth of what is possible and just how narrow use often is.

Questions asked during semester

  • Struggles to find a CC-licensed lesson plan.
  • Clarity about what makes an acceptable lesson plan
    • e.g. Can an American lesson be used?
    • Linked to concerns about Q10 and distinguishing between an appropriate lesson plan and whether or not you can use it due to copyright
  • Questions re: terms of use and uploading
  • What if I can’t find any information about copyright?
  • How can/should the lesson plan be put online?
  • The distinction between when a student is using an ICT and when the teacher is using it
  • Explanation of how the checklist questions are marked – e.g. those that don’t apply
  • Reporting bugs in the formatting of the cells

Personal thoughts

Early reflections on the semester included

The spreadsheet worked reasonably well. The checklist within the spreadsheet requires some refinement, as do some aspects of the rubric. The duplication of a Word-based coversheet needs to be removed.

 Other thoughts during the semester included:

  • Students had a tendency to treat the free text questions as requiring an essay.
  • The “pretend” context for the task wasn’t clear enough.
  • In particular, a problem about the exact legal status of ACME’s server, links and making copies of files.
  • Issues with specific questions and the checklist
    • The “web applications” option under “What is used” caused confusion about overlap with the “web browser” question
    • Q16 includes mention of print material around ICT
    • Q26 mentions embedded hardware, raising questions about it and its connection with the IWB
    • There appear to be strong connections between Q22 and A46
    • The purpose of Q10 is not clear enough, confusion with matching curriculum etc.
    • A feeling that there are too many questions and perhaps some overlap
    • The criteria for the RAT question aren’t clear enough about the quality of the response
      • e.g. not mentioning all uses of ICT and Pedagogy
      • Missing out on themes
      • Incorrectly identifying something as belonging to a theme
    • Suggestion for a drop down box around linkage of ICT to objectives: not related, somewhat related, essential, extends, transforms
  • More explicit scaffolding/basic activities around the evaluation questions
    • e.g. Is ICT being used to Question, Scaffold, Lecture, in an authentic task

Random suggestions

Due to institutional constraints (not to mention time) none of the changes to be made can be radical.  In keeping with that, some initial suggested changes to explore include:

  1. Pre-submission checks
    1. What pre-submission checks should I run? (a rough sketch follows this list)
    2. Can they be run? How does that integrate with the Moodle assignment activity workflow?
  2. Remove the cover sheet entirely, just use the spreadsheet
    1. Need to include the learning journal mark into the spreadsheet
    2. Would be nice to do this automagically
  3. Tweaking the marking
    1. The criteria for Acceptable use and RAT questions need to be improved
    2. Look closely at each of the points about the questions
  4. Student preparation
    1. Make clear the need not to write essays for the free text questions
    2. Finding CC licensed lesson plans
      1. Great difficulty in finding those that are CC licensed
      2. Provide a list of prior sites people have used
      3. Generate some sort of activity to test understanding of CC with a specific example
    3. RAT Model
      1. More activities in learning paths
      2. Better labelling on the spreadsheet
    4. More questions/activities around specific terms and concepts within the checklist
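
On point 1 (pre-submission checks), the following is a very rough sketch of the sort of thing I have in mind. It assumes openpyxl is available and uses entirely made-up worksheet names and cell locations; how (or whether) something like this could hook into the Moodle assignment activity workflow is still an open question.

```python
# A minimal sketch of a pre-submission check, assuming openpyxl is available.
# Worksheet names and cell locations below are made up for illustration.
from openpyxl import load_workbook

REQUIRED = {
    "Checklist": ["C10", "C16", "C22", "C26"],   # hypothetical checklist answers
    "RAT": ["B5", "B6", "B7"],                   # hypothetical RAT responses
}

def check_submission(path):
    """Return a list of human-readable problems found in the spreadsheet."""
    wb = load_workbook(path, data_only=True)
    problems = []
    for sheet_name, cells in REQUIRED.items():
        if sheet_name not in wb.sheetnames:
            problems.append("Missing worksheet: " + sheet_name)
            continue
        ws = wb[sheet_name]
        for cell in cells:
            if ws[cell].value in (None, ""):
                problems.append("{}!{} has not been completed".format(sheet_name, cell))
    return problems

if __name__ == "__main__":
    for problem in check_submission("assignment1.xlsx"):
        print(problem)
```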


Planning an EDC3100 “installfest”

The following documents the planning of an “installfest” for the course EDC3100. Implementation and reflection will come later.

Rationale

The course encourages/requires students to modify their learning process in the course to engage in Jarche’s seek/sense/share framework using a combination of a personal blog, Diigo, and the Feedly feed reader.

This is a radical departure and a challenge for most students. It results in a lot of time expended at the start of semester. For example, a past student shared her experience

I spent a lot of time trying to work out blogging, Diigo and Feedly and to be honest I am still only using the bare minimum with blogging

Not a good outcome, and apparently what has been used previously doesn’t work. So an alternative is required.

As it happens, the same student also suggested a possible solution

My thoughts on changes or additions to the course that I would have found useful, would have been to come to a workshop near the start.

I’ve been pondering this suggestion and how it might work with the next offering of the course, which has around 100 online students. Being of a certain age I remember installfests and have been wondering if that might be a useful model. Leading to questions such as:

Can something like an installfest be run in an online video-conference space? Will students participate? Will it help? How to organise it within existing constraints?

Design thoughts

Linux Installfest HOWTO

Interestingly, I came across the Linux Documentation Project’s Linux Installfest HOWTO; the following starts from that document.

The location will be virtual, not physical. So advice about preparing the physical location doesn’t quite apply. However, the features of the Zoom service will need to be considered.

Consideration: Might the “other room” feature of Zoom be useful for organising people at different stages?

That brings up the major constraint: there’s likely to be only me to fulfil the various suggested roles. With more time I might have been able to organise additional help, but let’s not talk about there being only one week between semester 1 and semester 2.

Consideration: Can the session structure be informed by the identified roles? e.g. the receptionist role could be covered by the initial part of the session, which focuses on welcoming people to the space. It might also be useful to explicitly ask for volunteers who are a little further ahead than others, volunteers who might take on a Tier 1 support role.

Consideration: Can a Google document/sheet be used to get an idea of people’s knowledge, experience and comfort level with the various tools? Is completing this sheet part of the entry process? Perhaps something based on the data sheet?

Consideration: Have a space at the end for reflection? Perhaps people could do this, in part, on their blog? It might even be a good exercise to start them making connections etc., and to see all the tools working together.

Fit with the course requirements

Course requirements to consider include

  • Blog
    • Which blog?
    • Posts and their model.
    • Feeds
  • Trying to help students develop an appreciation of the value of developing conceptual models of how a technology works, moving beyond recipe following.
  • Challenge of explaining how these three tools fit together.
  • What about seek/sense/share, and what that means for how they learn.
    Question: Do the why first? Too abstract.  Leave it until the end? Then they don’t know why, and it’s perhaps too late and they’re too tired from everything else.  Perhaps show them how it all looks at the end?
  • Identity
    • anonymous or not
    • Professional identity
    • Not being an egg
  • How to demonstrate to people the process
    Select a volunteer and I help guide them through the process using some sort of scaffold (e.g. the slides or study desk)
  • How to give people the time to try it by themselves and perhaps get support
  • How to encourage/enable reuse of sections of the video to integrate into the learning paths

Questions to ask (form/spreadsheet)

  • Name
  • Are you willing to volunteer to be guided?
  • Blog
    • Do you have one set up?
    • Rate your knowledge of the blog.
    • Have you written a blog post?
    • Have you customised your blog?
  • Diigo
    • Do you have a Diigo account?
    • Do you have a Diigo extension installed?
    • Have you bookmarked something using Diigo?
    • Have you shared it to the EDC3100 Diigo group?
  • Feedly
    • Have you logged into Feedly?
    • Have you imported the EDC3100 OPML files?
    • Have you tried following anyone else?


Initial design

Welcome

First 5+ minutes focus on welcoming everyone and asking them to fill out the form.

Outline the purpose of the session.

Outline the structure

  • Welcome
  • Where are we up to, where are we going
  • Doing it
    • Diigo
    • Feedly
    • Blog
  • Pulling it all together

Where are we up to? Where are we going?

Explain the three tools and the seek/sense/share approach to learning; touch only briefly on the why, and focus on a concrete illustration showing my use of the tools. Link this to professional identity and the idea of being anonymous. Which tools need to be anonymous?

We want you to be able to do this by the end of the session.

Show the sheet behind the form – link to an idea they can use, mention the Google spreadsheet links in the EDC3100 Diigo group.  Find out where people are up to, think about approaches, ask for volunteers to be Tier 1 support – perhaps on the chat?  Or perhaps in a breakout room.

Outline structure (easy first, to more difficult)

  • Feedly
  • Diigo
  • Blog

Diigo

  1. Sign-up for account.
    Make sure they go to “learn more”. Username and email (which email – personal or USQ?)
  2. Join the EDC3100 group.
  3. Show the emails I get and the approval process
  4. Install a Diigo tool
    Recommend Diigo extension – but Diigolet will do
  5. Bookmark a page for yourself
  6. Bookmark a page to the group????
  7. Do minute paper

Feedly

  1. Which account – link to professional identity
    1. Umail if only for University use – this is okay because it’s not visible.
    2. Facebook or other account if using for personal
  2. Visit Feedly – Hit the get started button – login
  3. Import the OPML files.
  4. Add some content – get them to search in Feedly for something they are interested in
  5. Make the point that they’re not reading the actual page but a copy; show how to access the actual page
  6. Minute paper

Blog

  1. Which blog service
  2. Which identity – anonymous etc.
  3. Go to your choice of blog provider
  4. Hit the equivalent of “Create website”
  5. Follow the process
  6. Choose your configuration
  7. Write your first blog post — maybe suggest it should be linked to this post and reflect upon it.  Work in some ideas about reflection.
  8. Register the blog on the Study Desk — probably shouldn’t show this in Zoom.
  9. Talk about the WordPress reader and its relationship with Diigo
  10. Minute paper

Pulling it all together

  1. Can I get them to download the OPML file and import it into Feedly?
  2. Come back to the seek/sense/share process
    1. Seek – Start with Feedly
      1. See discussion forum posts
      2. See posts from other students
    2. Sense – on blog
    3. Share – on blog and Diigo
  3. Another minute paper???

Tasks

  1. Powerpoint scaffold for the session
  2. Google forms
    1. Where are you up to?
    2. Minute papers
      1. Feedly
      2. Diigo
      3. Blog
  3. Set up data system for EDC3100 S2
    1. Blog registration counter
    2. Creating OPML files
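
For the OPML creation task, the core is not much more than the following sketch. It assumes the registered blog feed URLs can be pulled from wherever the blog registration data ends up; here they’re just a hard-coded, made-up list.

```python
# A minimal sketch of generating an OPML file for Feedly import, assuming the
# registered student blog feeds are already available as a list (faked here).
from xml.sax.saxutils import escape

feeds = [
    ("Example student blog", "https://example.wordpress.com/feed/"),
    ("Another student blog", "https://another.blogspot.com/feeds/posts/default"),
]

outlines = "\n".join(
    '    <outline type="rss" text="{0}" title="{0}" xmlUrl="{1}"/>'.format(
        escape(title, {'"': "&quot;"}), escape(url, {'"': "&quot;"}))
    for title, url in feeds)

opml = """<?xml version="1.0" encoding="UTF-8"?>
<opml version="1.0">
  <head><title>EDC3100 student blogs</title></head>
  <body>
{}
  </body>
</opml>""".format(outlines)

with open("edc3100_blogs.opml", "w") as out:
    out.write(opml)
```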


Any pointers to an old, ancient game?

Way back in 1986 I started studying undergraduate computer science at the University of Queensland. One of our first year programming assignments was to use the fancy, new Macintosh computers to add some code to a game.  I’m looking for pointers to the name of the game and any online resources about it. A working version on some contemporary platform would be great.

Any help more than welcome.

The game

The game was played on a grid, typically a 4 by 4 grid that looked something like this.

Grid 001

The idea is that there were random mirrors hidden throughout the grid. The aim of the game was to figure out what type of mirrors were located where within the grid. To do this you had a flashlight that you could shine through one of the holes on the outside. The light would exit the grid at another location, depending on the location and type of mirrors it would encounter. A bit like this
grid 002

There were three types of mirrors: two diagonal mirrors, / and \, and an X mirror.  The diagonal mirrors would change the direction of the light depending on how the light struck the mirror. The X mirror would direct the light back the way it came.

The following image shows one potential layout of mirrors to explain how the light behaved in the above image.

Grid 003

The light travels straight ahead until it hits the first diagonal mirror. This mirror causes the light to change direction, heading directly up, where it immediately hits another diagonal mirror which sends the light travelling right again until it exits the grid.
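
For what it’s worth, the behaviour described above is simple enough to sketch in a few lines of code. This is a rough simulation of the light, not the original game, and the grid layout is made up to match the example above.

```python
# A rough sketch of the light behaviour described above; not the original game.
# '/' and '\' are diagonal mirrors, 'X' sends the light straight back, '.' is empty.
GRID = [
    list("...."),
    list("../."),
    list("../."),
    list("...."),
]

def shine(row, col, drow, dcol):
    """Trace a beam entering at (row, col) heading (drow, dcol).
    Returns the (row, col) just outside the grid where the light exits."""
    size = len(GRID)
    while 0 <= row < size and 0 <= col < size:
        cell = GRID[row][col]
        if cell == "/":            # right<->up, left<->down
            drow, dcol = -dcol, -drow
        elif cell == "\\":         # right<->down, left<->up
            drow, dcol = dcol, drow
        elif cell == "X":          # straight back the way it came
            drow, dcol = -drow, -dcol
        row, col = row + drow, col + dcol
    return row, col

# Shine the flashlight into the left side of row 2, heading right.
print(shine(2, 0, 0, 1))   # -> (1, 4): exits on the right-hand side, one row up
```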

Nature of digital technology? Part 2 – expansion

@damoclarky has commented on yesterday’s Part 2 post. A comment that’s sparked a bit of thinking. I’ve moved my lengthy response into this post, rather than leaving it as a reply to the comment.

What is it? Stable or unstable?

@damoclarky writes

There also appears (at least to me) to be an irony in your blog post. On the one hand, we have technology as unstable, with constant change occurring such as Apple iOS/Phone updates, or 6monthly Moodle releases. Then on the other, we have:

“… commonplace notions of digital technologies that underpin both everyday life and research have a tendency to see them “as relatively stable, discrete, independent, and fixed” (Orlikowski & Iacono, 2001, p. 121).”

Part of the argument I’m working toward is that how people/organisations conceptualise and then act with digital technology doesn’t align or leverage the nature of digital technology. This lack of alignment causes problems and/or lost opportunities.  This is related to the argument that Orlikowski & Iacono make as they identify 5 different views of technology, illustrate the differences and argue for the importance of theorising the “IT artifact”.

The “relatively stable, discrete, independent, and fixed” view of technology is one of the views Orlikowski & Iacono describe – the tool view. There are other views, and what I’m working on here is a somewhat different representation.  I’m actually arguing against that tool view.  The discrepancy between the “relatively stable, discrete, independent, and fixed” view of digital technology and the unstable and protean nature of digital technology is evidence (for me) of the problem I’m trying to identify.

Actually, as I’m writing this and re-reading Orlikowski and Iacono it appears likely that the other nature of digital technology described in the part 2 post – opaque – contributes to the tool view. Orlikowski and Iacono draw on Latour to describe the tool view as seeing technologies as “black boxes”. Which aligns with the idea of digital technologies as being increasingly opaque.

Stable but unstable

For most people the tools they use are black boxes.  They can’t change them. They have to live with what those tools can or can’t do. But at the same time they face the problem of those tools changing (upgrades of Moodle, Microsoft Office etc), of the tools being unstable. But even though the tools change, the tools still remain opaque to them, they still remain black boxes.  Black boxes that the person has to make do with; they can’t change them, they just have to figure out how to get on.

Perceptions of protean

Is it just perception that technology is not protean? There is a power differential at play. Who owns technology? Do you really “own” your iPhone? What about the software on your iPhone? What controls or restriction exist when you purchase something? What about your organisation’s OSS LMS software? It is very opaque, but who has permissions to change it?

Later in the series the idea of affordances will enter the picture. This will talk a bit more about how the perception of a digital technology being protean (or anything else) or not does indeed depend on the actor and the environment, not just the nature of the digital technology.

But there’s also the question of whether or not the tool itself is protean. Apple is a good example. Turkle actually talks about the rise of the GUI and the belief of Jobs at Apple in controlling the entire experience as major factors in the increasing opacity of digital technology. While reprogrammability is a fundamental property of digital technology, the developers of digital technology can decide to limit who can leverage that property. The developers of digital technology can limit the protean nature of digital technology.

In turn the organisational gate keepers of digital technology can further limit the protean nature of digital technology. For example, the trend toward standard course sites within university-run LMS, as talked about by Mark Smithers.

But as you and I know, no matter how hard they try they can’t remove it entirely. Witness the long history of shadow systems, workarounds, make-work and kludges (Koopman & Hoffman, 2003) spread through the use of digital technologies (and probably beyond). For example, my work at doing something with the concrete lounges in my institution’s LMS. But at this stage we’re starting to enter the area of affordances etc.

The point I’m trying to make is that digital technologies can be protean. At the moment, most of the digital technologies within formal education are not. This is contributing to formal education’s inability to effectively leverage digital technology.

Blackboxes, complexity and abstraction

Part of the black box approach to technology is to deal with complexity. Not in terms of complexity theory, but in terms of breaking big things into smaller things, thus making them easier to understand. This is a typical human approach to problem solving. If we were to alter the opacity of technological black boxes, how much complexity can we expect educators to cope with in then being able to leverage their own changes?

When I read Turkle in more detail for the first time yesterday, this was one of the questions that sprang to mind. She is talking about being able to perceive the bare technology as transparent, but even as she does this she mentions

When people say that they used to be able to “see” what was “inside” their first personal computers, it is important to keep in mind that for most of them there still remained many intermediate levels of software between them and the bare machine. But their computer system encouraged them to represent their understanding of the technology as knowledge of what lay beneath the screen surface. They were encouraged to think of understanding as looking beyond the magic of the mechanism (p. 23).

She then goes on to argue how the rise of the GUI – especially in the Macintosh – encouraged people to stay on the surface. To see the menus, windows and icons and interact with those.  To understand that clicking this icon, that menu, and selecting this option led to this outcome, without understanding how this actually worked.

The problem I’m suggesting here isn’t that people should know the details of the hardware, or the code that implements their digital technology. But that they should go beyond the interface to understand the model used by the digital technology.

The example I’ll use in the talk (I think) will be the Moodle assignment activity. I have a feeling (which could be explored with research) that most teachers (and perhaps learners) are stuck at the interface. They have eventually learned which buttons to push to achieve their task. But they have no idea of the model used by the Moodle assignment activity because the training they receive and the opaque nature of the interface to the Moodle assignment activity doesn’t help them understand the model.

How many teaching staff using the Moodle assignment activity could define and explain the connections between availability, submission types, feedback types, submission settings, notifications, and grade? How many could develop an appropriate mental model of how it works?  How many can then successfully translate what they would like to do into how the Moodle assignment activity should be configured to help them achieve those goals?

What about the home page for a Moodle course site? How much of the really poorly designed Moodle course home pages is due to the fact that the teachers have been unable to develop an effective mental model of how Moodle works because of the opaque nature of the technology?

How many interactive white boards are sitting unused in school classrooms because the teacher doesn’t have a mental model of how it works and thus can’t identify the simple fix required to get it working again?

I imagine that the more computational thinking a teacher/learner is capable of, the more likely it is that they have actively tried to construct the model behind that tool, and subsequently the more able they are to leverage the Moodle assignment activity to fit their needs.  The more someone sees a digital technology as not opaque and as protean, the more likely I think that they will actively try to grok the model underpinning the digital technology.

This isn’t about delving down in the depths of the abstraction layer. It’s just trying to see beyond the opaque interface.

Another interesting research project might be to explore if modifying the interface of a digital technology to make it less opaque – to make the model underpinning the digital technology clearer to the user – would make it easier to use and eventually improve the quality of the task they wish to complete?  e.g. would it improve the quality of learning and teaching with digital technology?

Can you do anything? How?

Without sounding too dramatic (or cynical), without industry-wide changes to how digital technology is viewed, are attempts to address the issues outlined in your blog post futile?

How do you bring about industry-wide change in attitude and thinking?

The funny thing is that significant parts of the digital technology industry are already moving toward ideas related to this. Increasingly, what software developers – especially within organisations – are doing is informed by the nature of digital technologies outlined here. But that hasn’t quite translated into formal education institutions. It is also unclear just how much of this thinking on the part of software developers has informed how they think about what the users of their products can do. But in some cases, the changes they are making to help them leverage the nature of digital technologies are making it more difficult, if not impossible, to prevent their users from making use of it.

For example, both you and I know that the improvements in HTML have made it much easier to engage in screen scraping. The rise of jQuery has also made it much easier to make changes to web pages in tools like Moodle. But at the same time you get moves to limit this (e.g. the TinyMCE editor on Moodle actively looking to hobble javascript).

This is something that will get picked up more in later posts in this series.

So it’s going to happen. It’s not going to be easy, but I do think it’s going to get easier.

References

Koopman, P., & Hoffman, R. (2003). Work-arounds, make-work and kludges. Intelligent Systems, IEEE, 18(6), 70-75.

The nature of digital technology? Part 2

This is a followup to yesterday’s Part 1 post and a continuation of an attempt to describe the nature of digital technology and to think about what this might reveal about how and what is being done by formal education as it attempts to use digital technology for learning and teaching. This post moves from the fundamental properties of digital technologies (yesterday’s focus) to what some suggest is the nature of digital technologies.

Note: this is not the end of this series. There’s a fair bit more to go (e.g. this is all still focused on a single black box/digital technology, it hasn’t touched on what happens when digital technology becomes pervasive). I’m not entirely comfortable with the use of “nature” at this level, but the authors I’m drawing on use that phrase.

Recap and revision

Yesterday’s post aimed to open up the black box of digital technology a touch by explaining the two fundamental properties (data homogenization and reprogrammability) of digital technology proposed by Yoo, Boland, Lyytinen, and Majchrzak (2012).  This was originally represented using this image.

Fundamental Properties

I don’t think the image makes the point that these are fundamental properties of the black box, the digital technology. Hence the following revised image. The idea is that data homogenization and reprogrammability are properties that are “baked into” digital technology.  Identifying these properties has opened up the black box a little. This is going to be useful as I attempt to develop the model of digital technology further.
Fundamental Properties embedded

Nature of digital technologies

The aim here is to move up a bit from the fundamental properties to look at the “nature” of digital technologies.  As mentioned above, I’m not entirely happy with the use of the phrase “nature” at this level, but I don’t have a better term at the moment, and I’m drawing on Koehler and Mishra (2009) here who argued (emphasis added)

By their very nature, newer digital technologies, which are protean, unstable, and opaque, present new challenges to teachers who are struggling to use more technology in their teaching. (p. 61)

As they argue the combination of protean, unstable, and opaque makes the use of digital technology by teachers (and others) difficult. The following seeks to expand and explore that a bit more.

The following representation (I’m not a designer by any stretch of the imagination) is attempting to illustrate that this “nature” of digital technology sits above (or perhaps builds upon, or becomes possible due to) the fundamental properties introduced in the last post.

Nature of Digital Technology

Unstable

In this context, Koehler and Mishra (2009) define unstable as “rapidly changing” (p. 61). Which version of the iPhone (insert your preference) do you have? The combination of data homogenization and reprogrammability means that digital technologies can be changed, and other external factors tend to make sure that they do. Commercial pressures mean that consumer digital technologies keep changing. Other digital technologies change to improve their functionality.

But beyond that is the argument that digital technology shows exponential growth. Bigum (2012) writes

To most, the notion of an exponential is something that belongs in a mathematic’s classroom or perhaps may somehow be related to home loan repayments. Exponential change is not something with which we have had to become familiar, despite the fact of Moore’s Law and other Laws that map the growth of various digital technologies and which tell us that the price of various digital technologies is halving roughly every 18 months to 2 years and that their performance is doubling on about the same time scale….The fact is that the various digital technologies that end up in laptop computers, mobile phones, and an increasing number of things that we tend not to associate with computers, are still doubling their performance and halving their cost in fixed time periods, i.e. we are seeing exponential growth. (p. 32-33)

Opaque

Koehler and Mishra (2009) draw on Turkle (1995) to define opaque as “the inner workings are hidden from users”. Turkle (1995) talks about people having “become accustomed to opaque technology”, meaning that as the power of digital technologies has increased we no longer see the inner workings of the technology. She suggests that computers of the 1970s “presented themselves as open, ‘transparent’, potentially reducible to the underlying mechanisms”. Perhaps more importantly she argues that

their computer systems encouraged them to represent their understanding of the technology as knowledge of what lay beneath the screen surface. They were encouraged to think of understanding as looking beyond the magic to the mechanism. (p. 23)

Earlier this year, as part of an introductory activity, I asked students to find and share an image (or other form of multimedia) that captured how they felt about digital technologies. The following shows just some of the images shared, and captures a fairly widespread consensus of how these pre-service educators felt about digital technology. I’m guessing that it resonates with quite a few people.
Perceptions of computers
The increasingly opaque nature of digital technology, combined with our increasing reliance on digital technologies in most parts of our everyday life, would seem to have something to do with this sense of frustration. Ben-Ari and Yeshno (2006) found that people with appropriate conceptual models of digital technologies were better able to analyse and solve problems, while learners without appropriate conceptual models were limited to aimless trial and error. I suggest that it is this aimless trial and error, due to an inappropriate conceptual model of how a digital technology works, that creates the feelings of frustration illustrated by the above image.

Protean

This is the characteristic that I’ve written the most about.  The following two paragraphs are from the first version of Jones and Schneider (2016).

The commonplace notions of digital technologies that underpin both everyday life and research have a tendency to see them “as relatively stable, discrete, independent, and fixed” (Orlikowski & Iacono, 2001, p. 121). Digital technologies are seen as hard technologies, technologies where what can be done is fixed in advance either by embedding it in the technology or “in inflexible human processes, rules and procedures needed for the technology’s operation” (Dron, 2013, p. 35). As noted by Selwyn and Bulfin (2015) “Schools are highly regulated sites of digital technology use” (p. 1) where digital technologies are often seen as a tool that is: used when and where permitted; standardised and preconfigured; conforms to institutional rather than individual needs; and, a directed activity. Rushkoff (2010) argues that one of the problems with this established view of digital technologies is that “instead of optimizing our machines for humanity – or even the benefit of some particular group – we are optimizing humans for machinery” (p. 15). This hard view of digital technologies perhaps also contributes to the problem identified by Selwyn (2016) where in spite of the efficiency and flexibility rhetorics surrounding digital technologies, “few of these technologies practices serve to advantage the people who are actually doing the work” (p. 5). Digital technologies have not always been perceived as hard technologies.

Seymour Papert in his book Mindstorms (Papert, 1993) describes the computer as “the Proteus of machines” (p. xxi) since the essence of a computer is its “universality, its power to simulate. Because it can take on a thousand forms and can serve a thousand functions, it can appeal to a thousand tastes” (p. xxi). This is a view echoed by Alan Kay (1984) and his discussion of the “protean nature of the computer” (p. 59) as “the first metamedium, and as such has degrees of freedom and expression never before encountered” (p. 59). In describing the design of the first personal computer, Kay and Goldberg (1977) address the challenge of producing a computer that is useful for everyone. Given the huge diversity of potential users they conclude “any attempt to specifically anticipate their needs in the design of the Dynabook would end in a disastrous feature-laden hodgepodge which would not be really suitable for anyone” (Kay & Goldberg, 1977, p. 40). To address this problem they aimed to provide a foundation technology and sufficient general tools to allow “ordinary users to casually and easily describe their desires for a specific tool” (Kay & Goldberg, 1977, p. 41). They aim to create a digital environment that opens up the ability to create computational tools to every user, including children. For Kay (1984) it is a must that people using digital technologies should be able to tailor those technologies to suit their wants, since “Anything less would be as absurd as requiring essays to be formed out of paragraphs that have already been written” (p. 57). For Richard Stallman (2014) the question is more fundamental, “To make computing democratic, the users must control the software that does their computing!” (n.p.).

Implications for formal education

The above – at least for me – opens up a range of questions about how formal education uses digital technology for learning and teaching. A small and rough list follows.

Unstable changes everything

If digital technologies are fundamentally different and if they are unstable (rapidly – even exponentially – changing) then everything will change.  Bigum (2012) writes

Taken together and without attempting to anticipate how any of these technologies will play out, it is nevertheless patently clear that doing school the way school has always been done or tweaking it around the edges will not prepare young people who will grow up in this world (p. 34)

Bigum (2012) then draws on this from Lincoln

The dogmas of the quiet past, are inadequate to the stormy present. The occasion is piled high with difficulty, and we must rise – with the occasion. As our case is new, so we must think anew, and act anew. We must disenthrall ourselves, and then we shall save our country

The increasing neo-liberal/corporatisation fetish within formal education on efficiency etc. appears to be placing an emphasis on refining what we already do. Dropping the dogmas of the quiet past would mean admitting that people had it wrong, etc.  It’s difficult to see how such change will happen.

Moving beyond recipe followers?

Since digital technology is increasingly opaque, it is increasingly difficult for people to develop conceptual models of how digital technology works. As a result, many people have developed recipes that they follow when using digital technology. i.e. they know that if they press this button, select that menu, and check that box this will happen. They don’t know why, they just know the recipe.

Increasingly, a lot of the training and documentation provided to help users use digital technologies are recipes. They are step-by-step examples (with added screenshots) of the recipe to follow to achieve a specific goal. If they don’t have the recipe, or the recipe doesn’t work, then they are stuck. They don’t have the conceptual models necessary to analyse and solve problems.

What can be done to digital technologies and the methods used to support them to help people develop better conceptual models? If you do that, does that improve the quality of learning and teaching with digital technology?

If your documentation and training is a collection of recipes, why aren’t you automating those recipes and building them into the technology? i.e. making use of the protean nature of digital technology?

Who or what drives the change? What is the impact?

My institution has adopted Moodle, an open source LMS. One of the benefits of open source is that it is meant to be more protean. It can change. The Moodle release calendar shows the aim of releasing a major upgrade of Moodle every six months. It appears that my institution aims to keep reasonably up to date with that cycle. This means that every 6 months a change process kicks in to make staff and students aware that a change is coming. It means that every 6 months or so it is possible that staff and students will find changes in how the system works. Changes they didn’t see the need for.

To make matters worse, since most people are recipe followers, even the most minor of changes cause confusion and frustration. Emotions that make people question why this change has been inflicted upon them. An outcome not likely to enhance acceptance and equanimity.

Perhaps if more of the changes being made responded to the experiences and needs of those involved, change might be more widely accepted. The problem is that because most institutional digital technologies aren’t that protean, changes can only be made by a small number of specific people who are in turn constrained by a hierarchical governance process. A situation that might lead to a problem of starvation, where priority is given to large-scale, institutional-level changes, rather than changes beneficial to small numbers of specific situations.

Would mapping who makes changes to the digital technologies, and why, reveal this starvation? How can institutional digital technologies be made more protean and more able to respond to the needs of individuals? What impact would that have on learning and teaching? Is this sort of change necessary to respond to exponential growth?

Opaque technology creates consumers, not producers

Kafai et al (2014) talk about the trend within schools of transforming “computer class” into the study of how to use applications such as  word processors and spreadsheets. Approaches which they argue

These technology classes promote an understanding of computers and software as black boxes where the inner workings are hidden to users. (p 536)

In contrast they argue that

working with e-textiles gives students the opportunity to grapple with the messiness of technology; taking things apart, putting them back together, and experimenting with the purposes and functions of technology make computers accessible to students

Which importantly has the effect of

by engaging learners in designing e-textiles, educators can encourage student agency in problem solving and designing with technologies. This work can disrupt the trend that puts students on the sidelines as consumers rather than producers of technology

Currently, most digital learning environments within formal education tend to lean towards being opaque and not protean. Does this contribute toward a cadre of learners and teachers who see themselves as consumers (victims?) of digital technologies for learning and teaching? Would the provision of a digital learning environment that is transparent and protean help encourage learner and teacher agency? Would this transform their role from consumer to producer? Would this improve the use of digital technology for learning and teaching within formal education?

References

Ben-Ari, M., & Yeshno, T. (2006). Conceptual Models of Software Artifacts. Interacting with Computers, 18(6), 1336–1350. doi:10.1016/j.intcom.2006.03.005

Kafai, Y. B., Fields, D. A., & Searle, K. A. (2014). Electronic Textiles as Disruptive Designs: Supporting and Challenging Maker Activities in Schools. Harvard Educational Review, 84(4), 532–556,563–565. doi:10.17763/haer.84.4.46m7372370214783

Koehler, M., & Mishra, P. (2009). What is Technological Pedagogical Content Knowledge (TPACK)? Contemporary Issues in Technology and Teacher Education, 9(1), 60–70. Retrieved from http://www.editlib.org/p/29544/

Yoo, Y., Boland, R. J., Lyytinen, K., & Majchrzak, A. (2012). Organizing for Innovation in the Digitized World. Organization Science, 23(5), 1398–1408.

What is the nature of digital technology? Part 1

Formal education in most of its forms is still struggling to effectively harness digital technology to enhance and transform learning and teaching, even with a 40+ year history of various attempts. The reasons for this are numerous and diverse. The following is an attempt to look at one of the reasons. A reason which, at least to me, seems to have been somewhat ignored.

The technology. Does digital technology have a unique nature/set of capabilities/affordances that sets it apart from other types of technology? If so, what is it? What might understanding the nature of digital technology have to say about how formal education is attempting to use it to transform learning and teaching?

The following is a first attempt to frame some thinking that is moving towards a presentation I’ll be giving in a couple of weeks.  This is only the first step, there’ll be follow up posts over the coming week or two. These posts will aim to develop my own understanding of a model that aims to capture the nature of pervasive digital technology. It’s a model that will draw largely on the work of Yoo, Boland, Lyytinen, and Majchrzak (2012) combined with a few others (e.g. Papert, 1980; Kay, 1984; Mishra & Koehler, 2006). That model will then be used to look at current attempts within formal education to use digital technology for learning and teaching.

Views of Digital Technology

For most people digital technology is a black box. Regardless of what type of digital technology, it’s a black box.

DT black box
Orlikowski and Iacono (2001) label this the tool view of technology which

represents the common, received wisdom about what technology is and means. Technology, from this view is the engineered artifact, expected to do what its designers intend it to do. (p. 123)

They go on to cite work by Kling and Latour to describe this view and its limitations before going on to examine 4 other views of the IT artifact. The motivation for their work is that “The IT artifact itself tends to disappear from view, be taken for granted, or is presumed to be unproblematic once it is built and installed” (Orlikowski & Iacono, 2001 p. 121). They proceed to describe 4 additional “broad metacategories” of the IT artifact, “each representing a common set of assumptions about and treatments of information technology in IS research” (Orlikowski & Iacono, 2001 p. 123). Metacategories or views of technology that draw on a range of perspectives outside of their discipline, such as Actor-Network Theory.

My attempt here at opening up the black box of digital technology perhaps best fits with Orlikowski & Iacono’s (2001) fourth view of technology – the computational view – where the interest is “primarily in the capabilities of the technology to represent, manipulate, store, retrieve, and transmit information, thereby supporting, processing, modeling, or simulating aspects of the world” (Orlikowski & Iacono, 2001 p. 127). My focus here is on trying to explore the unique nature of digital technology. Not as an end in itself, but as a starting point that will draw on (at least) the other four views of technology suggested by Orlikowski & Iacono (2001) in attempting to understand and improve the use of digital technology within formal education.

Fundamental properties of digital technology

Yoo, Boland, Lyytinen, and Majchrzak (2012) argue that the “fundamental properties of digital technology are reprogrammability and data homogenization” (p. 1398)
Fundamental Properties

Data homogenization

Whether a digital technology is allowing you to talk to friends via Skype (or smartphone or…); capture images of snow monkeys; listen to Charlie Parker; measure the temperature; analyse the social interactions in a discussion forum; or, put your students to sleep as you read from your powerpoint slides (which they’re viewing via some lecture capture system), all of the data is represented as a combination of 0s and 1s. All the data is digital. Since all digital technologies deal with 0s and 1s, in theory at least, all digital technologies can handle all data. The content has been separated from the medium (Yoo, Henfridsson & Lyytinen, 2010).
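
A trivial sketch makes the point concrete: whatever the medium, once the content is digitised it is just sequences of 0s and 1s (the values below are made up for illustration).

```python
# A trivial sketch: very different "media" all end up as the same kind of thing,
# sequences of bytes, i.e. 0s and 1s. All values are made up.
import struct

text = "Born in the USA".encode("utf-8")     # a song title as text
sample = struct.pack("<h", -12345)           # one 16-bit audio sample
pixel = bytes([255, 128, 0])                 # one RGB image pixel
temperature = struct.pack("<f", 23.5)        # a temperature reading

for name, data in [("text", text), ("audio sample", sample),
                   ("pixel", pixel), ("temperature", temperature)]:
    bits = " ".join(format(byte, "08b") for byte in data)
    print("{:12} -> {}".format(name, bits))
```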

Analog technologies, on the other hand, have a tight coupling between content and medium. If you had bought “Born in the USA” on a record, to play it on your Walkman you had to record it onto a cassette tape. Adding it as background to that video you recorded with your video camera involves another translation of the content from one medium to another.

Data homogenization is the primary reason why you – as per the standard meme – can now carry all of the following in your pocket.

convergence.jpg


Reprogrammability

It’s not just the content that is represented digitally with digital technology. Digital technology also stores digitally the instructions that tell it how and what to do. Digital technologies have a processing unit that will decode these digital instructions and perform the task they specify. More importantly, those instructions can – in the right situations – be changed. A digital technology is reprogrammable. What a digital technology offers to the user does not need to be limited by its current function.
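
A toy sketch of the same idea: because the instructions are themselves just data, the same “device” can be given new behaviour without touching the hardware. The example is mine and purely illustrative.

```python
# A toy sketch of reprogrammability: the instructions are themselves data,
# so the same "device" can be given new behaviour. Purely illustrative.
def as_alarm(time):
    return "BEEP BEEP it's {}".format(time)

def as_weather_station(time):
    return "At {} the temperature is 23.5C".format(time)

device = {"display": as_alarm}           # the "firmware" currently installed
print(device["display"]("6:00am"))

device["display"] = as_weather_station   # reprogram the same device
print(device["display"]("6:00am"))
```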

Questions for formal education?

The above is but the first step in building a layered model for the nature of digital technology. The intent is that each layer should include a couple of questions related to how formal education is using digital technology. The following are a rough and fairly weak initial set. Really just thinking out loud.

Where is the convergence?

If data homogenisation is a fundamental property of digital technology, then why isn’t there more convergence within formal education’s digital technologies? Why is the information necessary for learning and teaching kept siloed in different systems?

When I’m answering a student question in the LMS, why do I need to spend 20 minutes heading out into the horrendous Peoplesoft web-interface to find out in which state of Australia the student is based?

Should we buy? Should we build?

I wonder if there is a large educational institution anywhere in the world that hasn’t at some stage, somewhere within the organisation, had the discussion about whether they should buy OR build their digital technology? I wonder if there’s a large educational institution anywhere in the world that hasn’t felt it appropriate to lean heavily toward the buy (and NOT build) solution?

What is gained and/or lost by ignoring a fundamental property of digital technology?

References

Orlikowski, W., & Iacono, C. S. (2001). Research commentary: desperately seeking the IT in IT research a call to theorizing the IT artifact. Information Systems Research, 12(2), 121–134.

Yoo, Y., Henfridsson, O., & Lyytinen, K. (2010). The new organizing logic of digital innovation: An agenda for information systems research. Information Systems Research, 21(4), 724–735. doi:10.1287/isre.1100.0322

Yoo, Y., Boland, R. J., Lyytinen, K., & Majchrzak, A. (2012). Organizing for Innovation in the Digitized World. Organization Science, 23(5), 1398–1408.

Learn to code for data analysis – step 1

An attempt to start another MOOC: Learn to code for data analysis from FutureLearn/OUUK.  Interested in this one to perhaps start the migration from Perl to Python as my main vehicle for data munging, and also to check out the use of Jupyter notebooks as a learning environment.

Reflections

  • The approach – not unexpectedly – resonates. Very much like the approach I use in my courses, but done much better.
  • The Jupyter notebooks work well for learning, and could be useful in other contexts.  A good example of the move toward a platform
  • The bit of Python I’ve seen so far looks good. The question is whether or not I have the time to come up to speed.

Getting started

Intro video from a BBC journalist and now the software.  Following a sequential approach, pared down interface, quite different from the standard, institutional Moodle interface. It does have a very visible and simple “Mark as complete” interface for the information.  Similar to, but perhaps better than the Moodle book approach from EDC3100.

Option to install the software locally (using Anaconda) or use the cloud (SageMathCloud).  Longer term, local installation would suit me better, but I’m interested in the cloud approach.  The instructions are not part of the course; they seem to be generic instructions used by the OUUK.

SageMathCloud

Intro using a video, which on my connection was a bit laggy. SageMathCloud allows connection with existing accounts, up and going.  Lots of warnings about this being a free service with degraded performance, and the start up process for the project is illustrating that nicely.  Offline might be the better option. Looks like the video is set up for the course.

The test notebook loads and runs. That’s nice.  As I expected, it will be interesting to see how it works in “anger”.

Python 3 is the go for this course, apparently.

Anaconda

Worried a little about installing another version of python.  Hoping it won’t trash what I have installed, looks like it might not.  Looks like the download is going to take a long time – 30 min+.  Go the NBN!

Course design

Two notebooks a week: exercise and project.  Encouraged to extend project. Exercises based on data from WHO, World Bank etc.  Quizzes to check knowledge and use of glossaries.  Comments/discussions on each page.  Again embedded in the interface, unlike Moodle.  Discussion threads expand into RHS of page.

Course content

Week 1

Start with a question – point about data analysis illustrated with a personal story. Has prompts to expand and share related to that story.  Encouraging connections.

Ahh, now the challenge of how to segue into first steps in programming while supporting the wide array of prior knowledge there must be. Variables and assignment, and a bit of Jupyter syntax.  Wonder how the addition of Jupyter impacts cognitive load?

Variable naming and also starting to talk about syntax, errors etc. camelCase is the go apparently.
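
For example, the sort of first steps being introduced (the variable names and figures below are mine for illustration, not the course’s):

```python
# Variables and assignment, using the camelCase naming style the course recommends.
# The figures are made up for illustration.
deathsInPortugal = 2214
populationOfPortugal = 10374822

deathsPerMillion = deathsInPortugal / (populationOfPortugal / 1000000)
print(deathsPerMillion)
```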

And now for some coding. Mmm, the video is using Anaconda.  Could see that causing some problems for some learners. And the discussion seems to illustrate aspects of that.  Seems installing Anaconda was more of a problem. Hence the advantages of a cloud service, if it is available.

Mmm, notebooks consist of cells. These can be edited and run. Useful possibilities.

Expressions.  Again Jupyter adds its own little behavioural wrinkle that could prove interesting.  If the last line in a cell is an expression, its value will be output.  Can see that being a practice people try when writing stand-alone Python code.

Functions. Using established functions.
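
A quick illustration of both of those wrinkles (the last-line expression behaviour and calling built-in functions), with made-up values:

```python
# In a Jupyter cell, if the last line is an expression its value is displayed
# without an explicit print. In a stand-alone script you still need print().
deathsPerYear = [1214, 1150, 1083, 990]         # made-up values

print(max(deathsPerYear))                       # works anywhere
round(sum(deathsPerYear) / len(deathsPerYear))  # as a cell's last line, Jupyter shows the result
```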

Onto a quiz.  Comments on given answers include an avatar of the teaching staff.

Values and units.  With some discussion to connect to real examples.

Pandas. The transition to working with large amounts of data. And another quiz, connected to the notebook.  That’s a nice connection.  Works well.

Range of pages and exercises looking at the pandas module.  Some nice stuff here.
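
To give a flavour (a sketch using a tiny made-up data set rather than the actual WHO data):

```python
# A small taste of pandas, using a tiny made-up data set rather than the WHO data.
import pandas as pd

data = pd.DataFrame({
    "Country": ["Portugal", "Australia", "Brazil"],
    "TB deaths": [140, 45, 4400],
    "Population (millions)": [10.4, 23.1, 200.4],
})

print(data["TB deaths"].sum())        # total across the three countries
print(data.sort_values("TB deaths"))  # rows ordered by the deaths column
```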

Do I bother with the practice project?  Not now.  But nice to see the notebooks can be exported.

Week 2 – Cleaning up our act

The BBC journalist giving an intro and doing an interview. Nodding head and all.

Ahh weather data.  Becoming part of the lefty conspiracy that is climate change?  :)

Comparison operators, with the addition of data frames.  Which appears to be a very useful abstraction.

Bitwise operators. I’ve always called these logical or boolean operators.  Boolean isn’t given a lot of introduction yet.
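
e.g. the sort of thing being introduced, again with made-up data:

```python
# Comparison operators give a column of True/False values; the "bitwise"
# operators & and | combine them to filter the rows of a data frame.
import pandas as pd

weather = pd.DataFrame({
    "Month": ["Jan", "Feb", "Mar", "Apr"],
    "Max temp": [31.2, 30.8, 29.5, 26.1],   # made-up figures
    "Rainfall": [120.0, 95.5, 60.2, 30.1],
})

hot = weather["Max temp"] > 30      # a Series of booleans
wet = weather["Rainfall"] > 100

print(weather[hot & wet])           # months that are hot AND wet
print(weather[hot | wet])           # months that are hot OR wet
```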

Ahh, the first bit of “don’t worry about the syntax, just use it as a template” advice. Looks like it’s using the equivalent of a hash that hasn’t yet been covered.