Helping teachers “know thy students”

The first key takeaway from Motz, Teague and Shepard (2015) is

Learner-centered approaches to higher education require that instructors have insight into their students’ characteristics, but instructors often prepare their courses long before they have an opportunity to meet the students.

The following illustrates one of the problems teaching staff (at least in my institution) face when trying to “know thy student”. It asks whether learner experience design (LX design) plus learning analytics (LA) might help, shows one example of what I’m currently doing to address this problem, and ponders some future directions for development.

The problem

One of the problems I identified in this talk was what it took for me to “know thy student” during semester. For example, the following is a question asked by a student on my course website earlier this year (in an offering that included 300+ students).

Question on a forum

To answer this question, it would be useful to “know thy student” in the following terms:

  1. Where is the student located?
    My students are distributed throughout Australia and the world. For this assignment they should be using curriculum documents specific to their location. It’s useful to know if the student is using the correct curriculum documents.
  2. What specialisation is the student working on?
    As a core course in the Bachelor of Education degree, my course includes all types of pre-service teachers, ranging from students studying to be Early Childhood teachers, Primary school teachers, and Secondary teachers, through to some looking to be VET teachers/trainers.
  3. What activities and resources has the student engaged with on the course site?
    The activities and resources on the site are designed to help students learn. There is an activity focused on this question: has this student completed it? When did they complete it?
  4. What else has the student written and asked about?
    In this course, students are asked to maintain their own blog for reflection. What the student has written on that blog might help provide more insight. Ditto for other forum posts.

To “know thy student” in the terms outlined above and limited to the tools provided by my institution requires:

  • the use of three different systems;
  • the use of a number of different reports/services within those systems; and,
  • at least 10 minutes to click through all of these.
Norman on affordances

Given Norman’s (1993) observations, is it any wonder that I might not spend 10 minutes on that task every time I respond to a question from one of the 300+ students?

Can learner experience (LX) design help?

Yesterday, Joyce (@catspyjamasnz) and I spent some time exploring if and how learner experience design (Joyce’s expertise) and learning analytics (my interest) might be combined.

As I’m currently working on a proposal to help make it easier for teachers to “know thy students”, this was uppermost in my mind. And, as Joyce pointed out, “know the students” is a key step in LX design. And, as Motz et al (2015) illustrate, there appears to be some value in using learning analytics to help teachers “know thy students”. And, beyond Motz et al’s (2015) focus on planning, learning analytics has been suggested to help with the orchestration of learning in the form of process analytics (Lockyer et al, 2013) – a link I was thinking about before our talk.

Out of all this come a few questions:

  1. Can LX design practices be married with learning analytics in ways that enhance and transform the approach used by Motz et al (2015)?
  2. Learning analytics can be critiqued as being driven more by the available data and the algorithms available to analyse it (the expertise of the “data scientists”) than by educational purposes. Some LA work is driven by educational theories/ideas. Does LX design offer a different set of “purposes” to inform the development of LA applications?
  3. Can LX design practices + learning analytics be used to translate what Motz et al (2015) see as “relatively rare and special” into more common practice?

    Exceptionally thoughtful, reflective instructors do exist, who customize and adapt their course after the start of the semester, but it’s our experience that these instructors are relatively rare and special, and these efforts at learning about students requires substantial time investment.

  4. Can this type of practice be done in a way that doesn’t require “data analysts responsible for developing and distributing” (Motz et al, 2015) the information?
  5. What type of affordances can and should such an approach provide?
  6. What ethical/privacy issues would need to be addressed?
  7. What additional data should be gathered and how?

    e.g. in the past I’ve used the course barometer idea to gather student experience during a course. Might something like this be added usefully?

More student details

“More student details” is the kludge that I’ve put in place to solve the problem at the top of this post. I couldn’t live with the current systems and had to scratch that itch.

The technical implementation of this scratch involves

  1. Extracting data from various institutional systems via manually produced reports and screen scraping and placing that data into a database on my laptop.
  2. Adapting the MAV architecture to create a Greasemonkey script that talks to a server on my laptop, which in turn extracts data from the database (a rough sketch of that sort of endpoint follows this list).
  3. Installing the Greasemonkey script in the browser I use on my laptop.
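To make step 2 a little more concrete, the following is a rough sketch of the sort of endpoint the Greasemonkey script might query. It is not the actual MAV-derived code; the file name, table and column names, and the use of SQLite are all hypothetical.

<?php
// details.php - minimal sketch of a local endpoint the Greasemonkey script
// could call (e.g. details.php?userid=1234) to fetch extra student details.
// Table and column names are hypothetical; the database is the one populated
// by the manual extracts in step 1.
header('Content-Type: application/json');

$userid = isset($_GET['userid']) ? (int) $_GET['userid'] : 0;

$db = new PDO('sqlite:/Users/david/student_details.sqlite');
$db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$stmt = $db->prepare(
    'SELECT name, email, specialisation, campus, gpa, location
       FROM students WHERE moodle_userid = :userid');
$stmt->execute(array(':userid' => $userid));

echo json_encode($stmt->fetch(PDO::FETCH_ASSOC));

The script injects the [details] links and, when one is clicked, uses the JSON returned by an endpoint like this to build the popup described below.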

As a result, when I use that browser to view the forum post at the top of this post, I actually see the following. The red arrows have been added to the image to highlight what’s changed: the addition of [details] links.

Forum post + more student details

Whenever the Greasemonkey script sees a Moodle user profile link, it adds a [details] link, regardless of which page of my Moodle course sites I’m on. The following image shows an excerpt from the results page for a quiz; it has the [details] links as well.

Quiz results + more student details

It’s not beautiful, but it’s something only I currently use and I was after utility.

Clicking on a [details] link results in a popup window appearing; a window that helps me “know thy student”. The window has three tabs. The first is labelled “Personal Details” and is visible below. It provides information from the institutional student records system, including name, email address, age, specialisation, which campus or mode the student is enrolled in, the number of prior units they’ve completed, their GPA, and their location and phone numbers.

Student background

The second tab on “more student details” shows details of the student’s activity completion. This is a Moodle feature that tracks if and when a student has completed an activity or resource. My course site is designed as a collection of weekly “learning paths”. Each path is a series of activities and resources designed to help the student learn. Each week belongs to one of three modules.

The following image shows part of the “Activity Completion” tab for “more student details”. It shows that Module 2 starts with week 4 (Effective planning: a first step) and week 5 (Developing your learning plan). Each week has a series of activities and resources.

For each activity the student has completed, it shows when they completed that activity. This student completed “Welcome to Module 2” 2 months ago. If I hold the mouse over “2 months ago” it will display the exact time and date it was completed.

I did mention above that it’s useful, rather than beautiful.

Student activity completion

The “blog posts” tab shows details about all the posts the student has written on their blog for this course. Each blog post includes a link to that post and shows how long ago the post was made.

Student blog posts

With this tool available, when I answer a question on a discussion forum I can quickly refresh what I know about the student and their progress before answering. When I consider a request for an assignment extension, I can check on the student’s progress so far. Without spending 10+ minutes doing so.

API implementation and flexibility

As currently implemented, this tool relies on a number of manual steps and my personal technology infrastructure. To scale this approach will require addressing these problems.

The traditional approach might involve modifying Moodle to add this functionality directly. I think this is the wrong way to do it. It’s too heavyweight, largely because Moodle is a complex bit of software used by huge numbers of people across the world, and because most of the really useful information here is going to be unique to different courses. For example, not many courses at my institution currently use activity completion in the way my course does. Almost none of the courses at my institution use BIM and student blogs the way my course does. Beyond this, the type of information required to “know thy student” extends beyond what is available in Moodle.

To “know thy student”, especially when thinking of process analytics that are unique to the specific learning design used, it will be important that any solution be flexible. It should allow individual courses to adapt and modify the data required to fit the specifics of the course and its learning design.

Which is why I plan to continue the use of augmented browsing as the primary mechanism, and why I’ve started exploring Moodle’s API. It appears to provide a way to develop a flexible and customisable approach that allows “know thy student” to respond to the full diversity of learning and teaching.

Now, I wonder how LX design might help?

What might a project combining LX Design and Analytics look like?

In a bit more than an hour I’ll be talking to @catspyjamasnz trying to nut out some ideas for a project around LX Design and Learning Analytics. The following is me thinking out loud and working through “my issues”.

What is LX Design?

I’ve got some vague ideas which I need to work on. Obviously start with a Google search.

Oh dear, the top result is for Learning Experience Design™, which is apparently

a synthesis of instructional design, educational pedagogy, neuroscience, social sciences, design thinking, and UI/UX—is critical for any organization looking to compete in the modern educational marketplace.

While I won’t dwell on this particular approach, it does link to some of my vague qualms about LX design. First, there’s a danger of it becoming another collection of meaningless buzzwords used to label the same old practice as conforming to the latest trend, mainly because the people adopting it don’t fully understand it and fail to transform their practice. Old wine, new bottles.

Second, there’s the problem of the “product focus” in learning, where the focus is on building the best product, which troubles me. Perhaps this says more about my biases, but I worry that LX design will become just another tool (perhaps a very good tool) applied within the dominant SET mindset within institutional e-learning (which is my context). Not surprisingly, this is also one of my concerns about the direction of learning analytics.

And talking about old wine in new bottles, this post suggests that

Although LXD is a relatively new term in the field of design, there are some established best practices emerging as applied to creating online learning interfaces:

Mmm, not much there that I’d class as something that LXD has provided to the world; see, for example, Donald Clark’s current sequence of “10” posts, including “10 essential rules on use of GRAPHICS in online learning”.

Needs and wants of the user?

This overview of User Experience Design (UX Design) – the foundation on which LX design is built – suggests

The term “user experience” was coined by Dr. Donald Norman, a cognitive science researcher who was also the first to describe the importance of user-centered design (the notion that design decisions should be based on the needs and wants of users).

As I wrote last week I’m not convinced that the “needs and wants of users” is always the best approach. Especially if we’re talking about something very new that the user doesn’t yet understand.

Which begs the question:

Who is the user in a learning experience?

The obvious answer from an LX design perspective is that the user is the learner. That the focus should be on the learner has been broadly accepted in higher education for some time now. But then, all models are wrong, though some are useful. In critiquing the rise of the term Technology Enhanced Learning, Bayne (2014) draws on a range of publications by Biesta to critique the focus on learning and learners. I’ve just skimmed this argument for this post, but there is potentially something interesting and useful here.

Beyond this more theoretical question about the value of a “learner focus”, I’d also like to mention something a little closer to home. The context in which I’m framing this post is higher education’s practice of formal learning, a practice that currently still assumes that there is some value in having a teacher involved in the learning experience. Here “teacher” may not be a single individual, but a small team with diverse roles. Which leads me to the proposition that the “teacher” is also a user within a learning experience.

As I’m employed as a teacher within higher education, I can speak to the negative impact of the blindingly obvious, almost complete lack of user experience design in the tools and systems teachers are required to engage with around learning and teaching. Given the low quality of those tools, it’s no surprise to me that most learning in higher education has some flaws.

This is one of the reasons behind the 4 paths for learning analytics focusing on the teacher (as designer of learning, if you must) and not the learner.

Increasingly, I wonder if the focus on being learner-centred is arising from a frustration with the perceived lack of quality of the learning experiences produced by teachers, combined with a deficit model of teachers. Which brings me to this quote from Bayne (2014)

points us toward a need to move beyond anthropocentrism and the focus on the individual, toward a greater concern with the networks, ecologies and sociomaterial contexts of our engagement with education and technology.

Impact of LX design for teachers?

What would happen to the quality of learning overall, if LX design were applied to the systems and processes that teachers use to design, implement, support, and revise learning and teaching? Would this help teachers learn more about how to teach better?

Learning analytics

I assume the link between LX design and learning analytics is that learning analytics can provide the data to better inform LX design. In particular, what Lockyer et al (2013) call “process analytics” would be useful

These data and analyses provide direct insight into learner information processing and knowledge application (Elias, 2011) within the tasks that the student completes as part of a learning design. (p. 1448)

One of the problems @beerc and I have with learning analytics is that it really only ever focuses on two bits of the PIRAC framework, i.e. information and representation. It hardly ever does anything about affordances or change. This is why dashboards suck and are a broken metaphor. A dashboard without the ability to do anything to control the car is of no value whatsoever.

My questions about LXD

  1. Just another fad? Old wine in new bottles?
  2. Another tool reinforcing the SET mindset? Especially the product focus.
  3. Does LX design have a problem because it doesn’t include complex adaptive systems theory? It appears to treat learner experience design as a complicated problem, rather than a complex problem.
  4. The “meta-learning” problem – can it be applied to teachers learning how to teach?
  5. Where does it fit on the spectrum of: sage on the stage, guide on the side, and meddler in the middle?
  6. How to make it useful for the majority of teachers and learners?
  7. What type of affordances can/should analytics provide LX design to help all involved?

References

Bayne, S. (2014). What’s the matter with Technology Enhanced Learning? Learning, Media and Technology, 40(1), 5–20. doi:10.1080/17439884.2014.915851

Exploring Moodle’s API

API-centric architecture is a coming thing in technology circles. It’s the way vendors and central IT folk will build systems. It is also going to be manna from heaven for institutionalised people who are breaking a little BAD.

Moodle has a growing web services API. The following documents some initial exploration of how and if you can “break BAD” with those APIs.

Background

Web services API

Moodle has a capability for plugins to define a Web services API. The question is how many plugins provide this and how much of Moodle core exposes such APIs. It’s likely to be quite large, given APIs are increasingly used for mobile devices.

A quick check of my basic Moodle 2.9 install reveals

dj:moodle david$ find . -name services.php
./admin/mnet/services.php
./enrol/manual/db/services.php
./enrol/self/db/services.php
./lib/db/services.php
./message/output/airnotifier/db/services.php
./mod/assign/db/services.php
./mod/forum/db/services.php
./mod/lti/services.php

Not a huge number, but at least enough to start playing with (assign and forum are likely to be particularly useful) and there may well be more.

Of course, I should be looking to add a Web services API to BIM. This page will apparently help with that.

That page also includes a template with a test client. Could be useful later on.
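To get a feel for what that would involve, the following is a rough sketch of how a BIM web service function might be declared in a db/services.php file. It is based on my reading of the Moodle developer documentation; the function name, external class, and service details are all hypothetical and yet to be written.

<?php
// mod/bim/db/services.php - hypothetical sketch of declaring a BIM web service.
// The external class and its get_feeds() method would still need to be
// implemented in mod/bim/externallib.php (extending external_api).
$functions = array(
    'mod_bim_get_feeds' => array(
        'classname'   => 'mod_bim_external',
        'methodname'  => 'get_feeds',
        'classpath'   => 'mod/bim/externallib.php',
        'description' => 'Return the registered blog feeds for students in a BIM activity.',
        'type'        => 'read',
    ),
);

$services = array(
    'BIM web services' => array(
        'functions'       => array('mod_bim_get_feeds'),
        'restrictedusers' => 0,
        'enabled'         => 1,
    ),
);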

What about the Core APIs?

Moodle defines a number of Core APIs that are used within Moodle. Are these available via Web services? Some (all?) wouldn’t make sense, but maybe…

External functions API

The external functions API apparently “allows you to create fully parameterised methods that can be accessed by external programs (such as Web services API)”. Searching for evidence of that in my Moodle install is a little more heartening

dj:moodle david$ find . -name externallib.php
./calendar/externallib.php
./cohort/externallib.php
./course/externallib.php
./enrol/externallib.php
./enrol/manual/externallib.php
./enrol/self/externallib.php
./files/externallib.php
./grade/externallib.php
./group/externallib.php
./lib/external/externallib.php
./lib/externallib.php
./message/externallib.php
./message/output/airnotifier/externallib.php
./mod/assign/externallib.php
./mod/forum/externallib.php
./notes/externallib.php
./user/externallib.php
./webservice/externallib.php

Just have to figure out if the presence of these implies connections with a Web services API and the ability to access them from a client.

Web Services

Which brings me to the Web Services category page. There’s also a web services forum and a related FAQ, which includes:

Security

External services security outlines various ways services can be called and how security is handled.

Using web services on my Moodle instance

As per these instructions and elsewhere

  1. Enabling web services.
  2. Enabling protocols

    Appears REST is enabled by default (I don’t think I did this earlier).

Explore – Site administration / Plugins / Web Services – and its range of options

  1. Overview.
    Includes directions on steps for enabling web services for mobile devices and for external systems to control Moodle.
  2. User.
    Need to allocate permission to use web services to specified users.
  3. Add services to be used.
    Which web services can the user use. In this case, a range of “built-in services” were already enabled for “all users” (assuming they have the required capabilities). This might be interesting to test and explore. Includes a broad array of interesting functionality (mod_assign_get_???) but not overly long.

    Adding a service requires specification of the functions to be enabled.

  4. Each service can be configured to a particular user or multiple users.
  5. Create a token – select a user and the service.
  6. And then there’s a test client embedded in Moodle.
    Which only allows testing of a small subset. Looks like having to write a client will be required.
    Tried a function via the test API. Got a security error. Added it to the functions in the service I’d set up, and hey presto it worked.

Writing a client

There’s a github repo with sample-ws-clients. I’ll use the PHP-REST code (https://github.com/moodlehq/sample-ws-clients/tree/master/PHP-REST).

  1. Clone the repository
  2. Modify the token, URL, etc.
  3. Use the API documentation to figure out the correct format for the request.
    Which was quite straightforward

    $functionname = 'moodle_user_get_users_by_id';
    $restformat = 'xml'; 
    $userids = array( 489, 2 );
    $params = array('userids' => $userids);
    
  4. Change the format to json and it works just as well, though of course the data is returned in a different format.

The JSON option (from 2.2 onwards) means that planned use within the browser should work fine.
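Putting those pieces together, the following is a minimal sketch of the REST call the adapted client ends up making. The token, domain, and user ids are placeholders, and the sample client’s curl wrapper has been replaced here with PHP’s own curl functions.

<?php
// Minimal sketch of a Moodle REST web service call, based on the
// moodlehq sample-ws-clients PHP-REST example. Token, domain and user ids
// are placeholders for illustration only.
$token        = 'YOUR_WS_TOKEN';              // token created via Site administration / Plugins / Web services
$domainname   = 'http://localhost/moodle';    // base URL of the Moodle instance
$functionname = 'moodle_user_get_users_by_id';
$restformat   = 'json';                       // or 'xml'

$params = array('userids' => array(489, 2));

$serverurl = $domainname . '/webservice/rest/server.php' .
             '?wstoken=' . $token .
             '&wsfunction=' . $functionname .
             '&moodlewsrestformat=' . $restformat;

$ch = curl_init($serverurl);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query($params));
$response = curl_exec($ch);
curl_close($ch);

// With $restformat set to 'json' the response decodes straight into PHP structures.
print_r(json_decode($response));

With json as the format, the same decoded structures should be just as easy to consume from an augmented-browsing (Greasemonkey) client.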

Exploring functions of interest

In the short term, I’m particularly interested in whether there are existing functions for the following tasks:

  • Get all enrolled users (and perhaps just students) in a course.
    course_enrol_get_enrolled_users( $course_id )
    Also accepts:

    options => array( 
        withcapability => string, 
        groupid => int,
        onlyactive => int,
        userfields => Array( string, string..),
        limitfrom => int,
        limitnumber => int )
    
  • Get a user’s activity completion details.
    Appears to be implemented in 2.9. Will update my version and see if it appears. Yes.
    core_completion_get_course_completion_status( int courseid )
    Returns a list of statuses including: course module id (cmid), activity module name (modname), instance ID (instance), state (0 incomplete, 1 complete, 2 complete pass, 3 complete fail), timecompleted, and tracking (0 none, 1 manual, 2 automatic).
  • Get information about the status and results of assignments (a rough sketch using one of these functions follows the list).
    • mod_assign_get_assignments( array of course ids )
      Returns a list of courses, but also a list of assignment details.
    • mod_assign_get_grades( array of assignment ids )
      Returns a list of assignments and a list of grades for each assignment. Grades include the userid, attemptnumber, timecreated, timemodified, grader and grade.
    • mod_assign_get_submissions( array of assignment ids )
      Similar to get grades but includes status, also submission plugin, list of files and additional information
    • mod_assign_get_user_flags( array of assignment ids )
      Flags include workflowstate, allocated marker, and extension date.
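As promised above, here is a rough sketch of pointing the same client pattern at one of these assignment functions. The token, domain and course id are placeholders, and it assumes mod_assign_get_assignments has been added to the web service the token belongs to.

<?php
// Sketch: list the assignments in a course via mod_assign_get_assignments.
// Token, domain and course id below are placeholders for illustration only.
$token        = 'YOUR_WS_TOKEN';
$domainname   = 'http://localhost/moodle';
$functionname = 'mod_assign_get_assignments';
$params       = array('courseids' => array(12345));

$serverurl = $domainname . '/webservice/rest/server.php' .
             '?wstoken=' . $token .
             '&wsfunction=' . $functionname .
             '&moodlewsrestformat=json';

$ch = curl_init($serverurl);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query($params));
$result = json_decode(curl_exec($ch));
curl_close($ch);

// The response groups assignments by course.
foreach ($result->courses as $course) {
    foreach ($course->assignments as $assignment) {
        echo $assignment->name, ' due ', date('Y-m-d', $assignment->duedate), "\n";
    }
}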

Some longer term services

Longer term, some other areas of interest might include

  • Adding web services to BIM.

    A job for me at a later date.

  • core_message – list of services around the messaging services, perhaps as a way to intervene?

What type of “digital knowledge” does a teacher need?

Apparently teacher education has a technology knowledge problem.

The 2015 Horizon Report for K-12 lists as its second “Solvable Challenge” (defined as “Those that we understand and know how to solve”) the problem of “Integrating Technology in Teacher Education”.

It includes statements such as

Teacher training still does not acknowledge the fact that digital media literacy continues its rise in importance as a key skill in every discipline and profession….training in the digital-supported teaching methods is still too uncommon in teacher education and in the preparation of teachers….the most important finding is that the level of a teacher’s digital competence directly correlates with students’ learning outcomes when technology is used

Given that teacher education typically happens within higher education, a mention should also be given to the 2014 Horizon Report for Higher Education, which identified “Low digital fluency of faculty” as its number 1 “Solvable Challenge” and has some obvious connections

Faculty training still does not acknowledge the fact that digital media literacy continues its rise in importance as a key skill in every discipline and profession…training in the supporting skills and techniques is rare in teacher education and non-existent in the preparation of faculty

The 2015 Horizon Report for Higher Education picks up this theme with “Improving Digital Literacy” as its number 2 “Solvable Challenge” and amongst other statements includes the following

Lack of consensus on what comprises digital literacy is impeding many colleges and universities from formulating adequate policies and programs that address this challenge.

This is the problem the following tries to engage with.

Aside: The Horizon Reports organise problems into three categories: solvable, difficult (“those we understand but for which solutions are elusive”), and wicked (“those that are complex to even define, much less address”). I have some significant reservations about the categorisation of these types of problems as solvable. If these problems are solvable, why is there still a “lack of consensus on what comprises digital literacy”, let alone so few examples of institutions that have successfully solved the problem?

Our problem

I work in teacher education. I teach pre-service teachers a course titled “ICT and Pedagogy”. At the moment, my colleagues and I are engaged in the process of re-designing our 4-year Bachelor’s program in Education. It would seem an appropriate time to address the above “significant challenges”.

Different types of “digital knowledge” and “digital knower”

The literature is overflowing with labels and ideas about how to identify the type of “digital knowledge” and “digital knowers” that we’re trying to develop. It involves labels such as: digital native/digital immigrant; digital resident/digital visitor/digital tourist; digital literacy; digital fluency; multiliteracies; and, computational thinking.

Fragile by bb_matt, on Flickr (Creative Commons Attribution 2.0 Generic License)

As the 2015 Horizon Report suggests, there is an apparent “lack of consensus on what comprises digital literacy”. It goes on further to suggest (emphasis added)

definitions are broad and ambiguous. Compounding this issue is the notion that digital literacy encompasses skills that differ for educators and learners, as teaching with technology is inherently different from learning with it.

Personally, I tend to see the influence of Maslow’s Hammer. People from a literacy background approach the question of the type of knowledge required in terms of communication and representation, limiting what you can do with digital technologies to multimodal presentations. People from a coding background see computational thinking as the core. Librarians see digital literacies as involving the ability to “find, evaluate, create, and communicate information”.

Beyond that you have people who may not exactly live and breathe in the new digital world making pronouncements on the importance or otherwise of various aspects of digital knowledge. For example, a recent review of the Australian Curriculum contained some reservations about the proposed “digital technologies” learning area that generated this response from one professional association. Not to mention some recent comments from the Australian Prime Minister.

Initial thoughts on coding in schools

Of course, there are also people who are engaged with the digital world who are questioning the value of coding to school children. For example, Bron Stuckey is left with two big questions around teaching coding in schools

Where should coding be positioned in the already overcrowded curriculum? And bottom line, where do we get the teachers with the knowledge and passion to teach it?

Rather than get drawn into the debate about whether students should be taught coding in school, the focus here is on what type of digital knowledge teachers should have in order to teach effectively.

A metaphoric typology of place and tool

Back in 2011 I asked “Residents and visitors, are builders the forgotten category?”, a question sparked by thinking about the Visitors and Residents typology proposed by White & Cornu (2011) (a paper I need to read again) for “individuals’ engagement with the web”. As a teacher who regularly used coding to enable the design of learning experiences, I wondered whether “builders” should be added. In a comment @palbion wondered whether there was a place for “renovator/handyman/DIY enthusiast”.

The aim here is to see if expanding the visitor/resident typology offers any value in understanding the breadth of “digital knowledge” and in turn identifying whether or not that offers any assistance in thinking about the type of “digital knowledge” that would be required and useful for a teacher. Especially if that teacher engages primarily in a digital learning space.

What follows is an initial attempt at expanding the White & Cornu (2011) visitor/resident typology. It has flaws, not the least of which is whether the “roles” added to this typology are defined in ways that fit with White & Cornu’s original thinking. In particular, their comments on “technical aptitude”

we do not consider the Visitor to be necessarily any less technically adept than the Resident. The concept of ‘technical’ aptitude should be viewed as more than simply an ability to manipulate hardware and software.

There are also questions to ask about whether these roles are distinct. Can you be a resident and not a decorator? Can you be a decorator and either a visitor or resident? And many more.

And importantly I’ll echo White and Cornu’s (2011) sentiment that this “typology should be understood as a continuum”.

Excluded visitor

There remain people who are disconnected or excluded from participation in digital spaces, especially online digital spaces. While the proportion is shrinking, some people will remain excluded visitors.

A potentially troubling factor in this is the balkanisation of the Internet, which apparently also goes under the name of the splinternet. Increasingly “online spaces” are not freely open spaces that anyone can wander through. The online spaces used by many formal educational institutions have boundaries which exclude people. Sometimes the people that are excluded were once residents.

We already have this at Universities with the LMS. @timklapdor inspired the following from @s_palm

And it’s not just the LMS. There’s Elke’s comment about what she misses most about studying at University: “Access to all of those journal articles!”.

Visitor

White & Cornu (2011) define visitors as being those that

understand the Web as akin to an untidy garden tool shed. They have defined a goal or task and go into the shed to select an appropriate tool which they use to attain their goal. Task over, the tool is returned to the shed. It may not have been perfect for the task, but they are happy to make do so long as some progress is made. This is important, since Visitors need to see some concrete benefit resulting from their use of the platform. Significantly, Visitors are unlikely to have any form of persistent profile online which projects their identity into the digital space.

When it comes to institutional learning and teaching tools such as the LMS, can anyone ever be more than a visitor? Is it simply a place to visit, complete a task, and then exit? Is this part of the problem facing digital learning?

Resident

White & Cornu (2011) describe residents as those that

see the Web as a place, perhaps like a park or a building in which there are clusters of friends and colleagues whom they can approach and with whom they can share information about their life and work. A proportion of their lives is actually lived out online where the distinction between online and off–line is increasingly blurred. Residents are happy to go online simply to spend time with others and they are likely to consider that they ‘belong’ to a community which is located in the virtual. They have a profile in social networking platforms such as Facebook or Twitter and are comfortable expressing their persona in these online spaces. To Residents, the Web is a place to express opinions, a place in which relationships can be formed and extended.

Can you be a resident in an LMS? How does owning the space impact your sense of residency and ownership of that space? Should everyone own their own space, their own personal cyberinfrastructure (and this)?

Decorator

Decorators might be seen as residents that wish to extend their sense of belonging to/project more of their identity into a digital space by decorating that space. You can change the colour scheme, re-arrange the furniture, hang art on the wall, and put up new curtains.

The ability to decorate a space is more than simply having the knowledge to do so.

First, you need to have the permission and right to do this. You probably can’t decorate a public space. If you’re renting a space, the rental agreement probably limits what decoration you can undertake (no nails in the wall).

Concrete Lounge

Second, the space needs to offer the affordances necessary for decoration. For example, you’re probably not going to be able to relocate the concrete seating in the image to the right.

Does the amount of decoration (e.g. customising their profile) someone performs in an LMS give an indication of their sense of belonging to the space? Does it say anything about the perceived affordances of that space?

An example of decoration as a teacher might be what I’ve done with my digital course learning space. Thanks to the institution’s standard look and feel it looks like the following, including pre-defined locations for all the furniture; e.g. the “Assessment” furniture is all located in a specific location (URL) and the “Assessment” item in the left-hand menu is the hallway to that institutionally defined location.

And that location sucks. As a space it provides far less than what I’d like to provide. So I redecorated.

I used jQuery to change where the “Assessment” item in the left-hand menu pointed to. It now points to a much more useful space for Assessment.

tooltip

Renovator

In the words of @hapgood, the concept of a digital renovator

captures that idea – no one is just a resident of the digital world. We co-create the digital environment with others. We evolve with the environment in a never-ending cycle

And perhaps extends this co-creation beyond just using Facebook to share content or customising our Twitter home page to actually making changes to the environment. White & Cornu (2011) talk about both digital visitors and residents as using “‘tools’ such as online banking and shopping systems”, but the distinction they make is that residents “also use the Web to maintain and develop a digital identity”. A digital renovator may well be a tool user and maintain a digital identity, but they also use tools to significantly change the digital space.

An example of this would be @palbion’s creation of a Greasemonkey script to add functionality to the Moodle assignment submission activity. In particular, to enable the comparison of results from different markers. A script

that runs over Firefox works on the Moodle assignment system page that lists submissions. It extracts names of markers and marks awarded and calculates means and standard deviations of marks overall and for each marker. It then formats those statistics in a table and injects that into the page.

A digital renovator is quite happy to put up a new set of shelves, knock down a wall, revamp the kitchen, and generally make changes to the digital space so that it better suits their purpose.

(Owner) Builder

The distinction between digital renovator and digital builder may become increasingly blurred. It’s a distinction that might be made based on at least two different criteria:

  1. the complexity or novelty of what’s being built; or,
    Tweaking an existing space (e.g. @palbion’s Greasemonkey script above) is renovation, whereas building is constructing a new space. This distinction is perhaps even more meaningless in the digital world, where everything is linked.
  2. who they are building for.
    This is where the idea of “owner builder” might be a better metaphor for teachers and learners in a digital space. They “own” the space but are adding something significant to it for their own purposes, whereas a builder is arguably a professional employed to construct spaces that will be used by others.

The type of work that @cogdog does with ds106 and elsewhere is probably the best example of a “teacher” builder, though one who largely works outside the staid digital spaces of formal education.

Which is the space in which my “builder” work tends to occur. The main example is perhaps building the BIM module for Moodle that I use in my own teaching.

Considerations and limitations

What follows are a few extra considerations/limitations around this typology.

The problem with typologies

White and Cornu (2011) mention a number of disadvantages with typologies

disadvantages focus principally on the inflexibility of types, as well as the tendency to box individuals into one type or another, overlooking contradictory evidence. Theories of learning styles favour typologies of this sort, as do certain theories of human development, and many struggle to allow individuals the space simultaneously to exhibit traits characteristic of different types.

But they also point out that there are advantages

benefits are that these categories allow others to use this new knowledge to augment the learning experience

Avoid the teacher deficit model

Another problem with the above typology and the question that framed this whole post – What “digital knowledge” does a teacher need? – is that it appears to suggest that whatever deficit of knowledge exists, it is a deficit on the part of the teacher. It’s the teacher that is lacking the necessary knowledge, that this is the problem to fix, and that this is obviously done by training them more and better.

This is a very limited view of knowledge. As suggested by various types of distributive views of knowledge (e.g. Jones, Heffernan and Albion (2015)), knowledge isn’t just within the head. It arises from the networks of people, tools, processes, policies etc. surrounding the teacher. The inability to “move up” the typology isn’t just about the teacher’s lack of knowledge, and it won’t be solved simply by more and better training.

Questions

All this is still a work in progress and has generated additional questions for me. These are listed below.

What questions or problems has it generated for you?

My current questions

  • Is there really a single type suitable for all teachers? (Of course, no).
  • How and what can pre-service teacher education do to help build this knowledge?
    Is training enough? Walk the walk?
  • Is it all about the formal teaching? What about the environment?

    e.g. the idea that “branding the LMS” hurts learning/digital literacy.

  • Beyond training, is there benefit in creating institutional, digital learning spaces that feel more like places you want to live, than hovels you wish to escape as quickly as possible?
  • How do teachers and students perceive current digital spaces? What impact does this have on their self-perceived place in the typology?
  • What would be the characteristics of a digital learning space where people wish to reside, rather than leave?
  • Is there literature and research about teachers, teaching, and physical spaces that can help inform the space/tool metaphor that underpins this typology?
  • Is it possible to map existing forms of “digital knowledge” (e.g. digital literacy, digital fluency, computational thinking) onto the above typology? Is that helpful?
  • What are the distinguishing types of digital knowledge for each role in the typology?

References

White, D., & Le Cornu, A. (2011). Visitors and Residents: A new typology for online engagement. First Monday, 16(9). http://firstmonday.org/article/view/3171/3049

Requirements, solutions, design, and who should decide

22 years ago I helped a group of undergraduate Information Technology students set up CQ-PAN – the Central Queensland Public Access Network – an early attempt to allow CQ residents to get on the Internet. CQ-PAN got used by a range of people for a range of tasks. In 1994 it started hosting mailing lists for a range of purposes, including for courses being taught by my then employer, the Department of Mathematics and Computing (who were funding the hardware).

Thus began my life of producing feral or shadow IT systems. IT systems that were “feral” because they were not produced by the central IT group officially charged by the organisation to support its strategic goals. IT systems that I needed to write because central IT never seemed to know about, and/or be able to deliver, the type of IT-based functionality that was needed to improve learning and teaching.

Here we are decades later, living in the new age of digital learning and still suffering from the same problem. For example, a week or so ago Jon Dron wrote about A waste of time, in which he talks about IT systems that

requires individuals to do the work of a machine. For instance, leave-reporting systems that require you to calculate how much leave you have left, how many hours there are in a day, or which days are public holidays

Personally, I’ve spent the last 3+ years since starting at my new institution engaging in various forms of bricolage to develop kludges that fill the gap between the concrete lounges provided by the institution and what’s required to be effective. Just like I did at my old institution.

Which brings me to the question…

Who should design the functionality of an IT system? Particularly one used for learning and teaching?

It’s stupid to ask the user

A few days ago @EdwardTufte re-tweeted (not necessarily a sign of approval) the following partial answer to this question provided by @MrAlanCooper (two gentlemen with a lot of runs on the board around these types of topics)

The idea that asking users what they might need is stupid generated a few responses similar to this one.

Cooper expanded with

Leave it to central IT and/or L&T

In a University context this generally means that folk from central IT or learning and teaching (though in the worst cases it’s some senior manager who saw something in an airline magazine) are given this responsibility. However, if the experiences of Dron, myself, and countless others in Universities are anything to go by, then the track record hasn’t been all that good.

Why is this the case?

No deep understanding

In a post titled “Never delegate understanding” Tim Kastelle describes the in-depth process used by Charles Eames and Eero Saarinen to win a home furnishings design competition and links it to organisational design. Kastelle’s conclusion is

When we try to design better organisations and better outcomes for people, there are no shortcuts. We have to start with building a deep understanding of how they are now and operate within that framework.

I’ve never worked in a University where central IT and learning and teaching people have a “deep understanding” of how students and teachers engage with the daily process of learning and teaching. Given my recent experience, I’d have to say I’m not sure that the people responsible for designing systems have a deep understanding of administrative tasks like processing final results or booking travel.

Dron identifies one reason that may contribute to this lack of understanding

This is one of those tragedies of hierarchically managed systems. Our ICT department has been set the task of saving money and its managers only control their own staff and systems, so the only place they can make ‘savings’ is in getting rid of the support burden of making and managing cogs.

Rather than develop the deep understanding to design something effective, the focus is on saving money on the “staff and systems” within the budget reporting line of that particular department.

What are the alternatives?

Better design approaches – UX/LX design

Of course design could always be done better. It could be focused on developing the type of “deep understanding” required to effectively design a change. User experience design and its offshoot learning experience design offer a range of techniques and processes for doing this.

The question is whether or not these approaches can battle the “tragedy of hierarchically managed systems” and the other factors that are contributing to the long-term problem with University IT systems. After all, I’ve known a number of people who have worked in central IT and learning and teaching (including myself) who didn’t want to design crap systems.

I also wonder whether or not learning experience design (LX design) will have a broader problem caused by the problems of hierarchy. As I understand it, LX design focuses on the design of learning experiences. In a university context, learning experiences typically take place within courses, which in turn are located within programs, which in turn are the responsibility of specific units within the university. For me, it looks like the tragedy of hierarchical systems all over again, only worse.

Learning from Steve Jobs

For better or for worse, Apple remain a reasonable benchmark – if perhaps a somewhat extreme example – for designing quality artefacts. Can anything be learned from Apple and Jobs?

One of Jobs’ more famous quotes echoes the point made by Cooper at the start

it’s really hard to design products by focus groups. A lot of times, people don’t know what they want until you show it to them.

And if you have a company that includes world-class designers and an explicit focus on producing the highest quality products, then you might be able to use this as a template for success.

Personally, I find a couple of other Jobs quotes more broadly useful. For example, in a 1985 interview with Playboy he’s quoted as saying

We built [the Mac] for ourselves. We were the group of people who were going to judge whether it was great or not. We weren’t going to go out and do market research.

This speaks to me of the importance of the people designing systems also being people who use the system. People specifying and developing a system should – in Taleb’s phrase – have “skin in the game”. In Antifragile, Taleb cites the Hammurabi code and quotes the following

If a builder builds a house and the house collapses and causes the death of the owner of the house – the builder shall be put to death

Poor software for learning and teaching (hopefully) won’t cause death, but some sort of “skin in the game” might be useful. Of course, with large organisational software projects it would be unfair to apply this to the developers. The project board might be a better recipient.

End-user development is the other form of “skin in the game”, and that’s where all my (and most) “feral” systems come from: people who actually have to use these systems and know they suck, so they build work-arounds. But it’s not enough to allow developers, even end-user developers, to work alone.

It’s not a quote, but this article on “How Twitter users can generate better ideas” describes Jobs’ instructions to the architects of the Pixar headquarters as

to design physical space that encouraged staff to get out of their offices and mingle, particularly with those with whom they normally wouldn’t interact. Jobs believed that serendipitous exchanges fueled innovation.

Increasingly, designing physical spaces that allow “serendipitous exchanges” isn’t possible in Australian multi-campus universities, because staff in such institutions aren’t physically co-located. That leaves at least two possibilities for encouraging the production of this type of innovation fuel.

First is the practice of hackfests and similar: get disparate people together from all over the organisation (and beyond?) and share problems and desires. Group people with problems and desires with people who can implement them, and then have a crack at implementing them.

Second, recognise that people are increasingly using the same digital space. Design that space so that “serendipitous exchanges” can take place within it. That’s one of the aims of this project.

Both these approaches break the “tragedy of hierarchically managed systems”. Pair them with approaches like UX and LX design and you might have something.

But for these approaches to work, they will rely on what Dron describes as “tools that make cog production fast and simple”. Traditional monolithic enterprise systems are not such tools. The technologies that make up the current move toward API-centric architectures are such tools.

One of the problems with enterprise systems is that “decisions” about the design of the system are essentially forever. Once they are made, they are very, very hard to change. “Tools that make cog production fast and simple” allow for decisions to be made, tested, and re-made when and if they fail. They allow for learning.

How NGL can inform my role as teacher

The students in a course I’m teaching are asked to reflect on their own work and respond to the question embedded in the title to this post. What follows is mostly a test post to illustrate how it will all work, but also captures some of my views.

My own learning outside of the institution

This is currently the primary influence. To teach I need to learn and the vast majority of my learning is enabled through various aspects of NGL. This occurs largely outside of the organisation. The better I adopt NGL practices, the better my learning, and the better my teaching.

Limited steps (and barriers) around my learning within the institution

Principles and practices from NGL are almost entirely absent from institutional approaches to staff development and teacher learning. These practices are still largely stuck in objectivist and cognitivist approaches to learning.

Whatever attempts are made toward adopting NGL are constrained by the underlying hierarchical mindset that infects the institution. This includes the fundamental organisational structure; how technology is selected, supported, and configured; and how learning is organised into hierarchical structures consisting of faculties, schools, programs, and courses.

The institution does have a Yammer group that appears essentially dead. Recently some of the more technically minded staff have started using Slack and that appears to be a little more active. Whether that’s the initial novelty or something more about those involved being much more familiar with the type of NGL practices Slack encourages, it’s too soon to tell.

My last post before this one outlines a “system” that I’d like to implement that I think is more likely to embed NGL principles into how teachers learn how to teach. Whether it does or not is the $64K question. It is this “system” that would form my focus for Assignment 2 in the NGL course.

My students’ learning

Students in the courses I’m involved with are required and supported to engage more in NGL than in almost any other course I’m aware of. This is only possible because I’ve developed software to support it; leveraged externally available tools for student learning; and ignored institutional policy around where courses and their content may reside. And with all of that, the engagement of these students in NGL is still only scratching the surface of the possibilities.

I can see some glimmer of possibilities for the “system” outlined above for helping with this. But time will tell.

Educational technology: deja vu all over again

I must be getting old and have spent far too long in universities and educational technology. I keep seeing the wheel being reinvented all over again.

Today, spam from Facebook lured me there to unsubscribe from notifications, once again. Whilst there I was lured into @kierenjamieson’s page/wall (whatever) by his unique collection of cat videos and food porn. But then there came a link to this opinion article titled “Reduced scope of the Office of Teaching and Learning should focus us on what works”.

It takes the announcement of changes to the OLT and turns it into some suggestions about what might work better, suggestions that include investing in “platforms that support collaboration in teaching and learning”. There’s also the observation that “biology professors all over Australia” teach the same content, from the same history, using the same books, and the question: “what if all of them could deliver a learning experience that is currently achieved by the top 10 per cent of their peers – an experience that engages, challenges and stimulates their students”.

It then points to BEST network as an example of this that is currently working. Apparently a world first, a “teaching network run by academics for academics” powered by technology from Smart Sparrow. It’s a cloud-based system that

supports a sustainable model of academic crowd-sourcing that frees teachers from the constraints of their institutional silos, to the benefit of student, teacher and institution alike

I’ve heard a fair few rumblings about Smart Sparrow over recent years, all indicating that apparently it’s quite good. Plus the description of the BEST network makes it sound wonderful. Time to go have a look, and interesting to see whether I could get access. The site looks good and I was successful in loading up a lesson and engaging with it. At no stage did I need to log in. That’s good.

But this is when the problems set in.

The “lesson” I engaged with was a multimedia page-turning exercise with the (questionable) advantage of poorly constructed multiple-choice questions for interaction – just like the various truly crappy HR/Quality/Legal “lessons” I’ve been required to complete by organisations over the years.

Granted, the Smart Sparrow technology has been used to implement something that is very well integrated and has good graphic design. But, knowing little about the content area, I was able to successfully work through the lesson answering questions based on common sense guessing, or simply eliminating the nonsensical options. The folk behind this particular lesson need to read “The 10 stupid mistakes in design of Multiple Choice questions”.

Now this might have been just a one-off. Maybe I picked the one bad one. Maybe there are some really well designed examples. But in the end it appears that the technology is reproducing the same old multimedia/MCQ lessons that have been around for yonks. Sure, I imagine that the “adaptive” part means that there are some good algorithms behind the scenes that allow you to manage, branch and direct learners in good ways.

But those will still rely on the design team being able to leverage those algorithms by designing the knowledge base, which is the really hard work.

Then there’s the question about these lessons being reliant on proprietary software. Can anyone say “loss leader” and “lock-in”?

If I’m the biology lecturer at ACME University, how do I organise for my class of 1000 biology students (and their data) to get into the various systems required for them and me to effectively use the wonderful, adaptive lesson I’ve found? Are central IT and L&T at ACME University going to be able to help?

Is this Bates’ Lone Ranger problem all over again?

I had a quick look through the BEST network, and it wasn’t very explicit about the licensing. Certainly no clear statement of the content being openly licensed.

On the plus side, it does appear that people are using the site to share resources. But then again this isn’t all that new. Google “learning object repository”. Or you might want to visit Merlot. Even I – as someone skeptical about the value of specific repositories – added something to Merlot in 2007 and Merlot started 10 years before that.

And to give Merlot its due, at least its website is still operational almost 20 years later. It is arguably the exception that proves the rule that these types of specific community sites/repositories for sharing University learning and teaching resources/tips will disappear within 5 years of initiation, as many ALTC/OLT projects can attest.

Just because we have a new technology, it doesn’t fundamentally change the practice. It seems educational technology is destined to make a better wheel, or perhaps just build horseless carriages.

How about we try for some transformation?

Disclaimers

I’m in a skeptical frame of mind (more so than usual). I’ve only had a quick skim of the BEST network. I only looked at one “lesson” from it.

The perceived uselessness of the Technology Acceptance Model (TAM) for e-learning

Below you will find the slides, abstract, and references for a talk given to folk from the University of South Australia on 1 October, 2015. A later blog post outlines core parts of the argument.

Slides

Abstract

In a newspaper article (Laxon, 2013), Professor Mark Brown described e-learning as

a bit like teenage sex. Everyone says they’re doing it but not many people are and those that are doing it are doing it very poorly.

This is not a new problem; a long litany of publications spread over decades has bemoaned the limited adoption of new technology-based pedagogical practices (e-learning). The dominant theoretical model used in research seeking to understand the adoption decisions of both staff and students has been the Technology Acceptance Model (TAM) (Šumak, Heričko, & Pušnik, 2011). TAM views an individual’s intention to adopt a particular digital technology as being most heavily influenced by two factors: perceived usefulness, and perceived ease of use. This presentation will explore and illustrate the perceived uselessness of TAM for understanding and responding to e-learning’s “teenage sex” problem using the BAD/SET mindsets (Jones & Clark, 2014) and experience from four years of teaching large, e-learning “rich” courses. The presentation will also seek to offer initial suggestions and ideas for addressing e-learning’s “teenage sex” problem.
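A minimal sketch, using simulated data, of how survey-based adoption studies typically operationalise TAM: Likert items are averaged into perceived usefulness (PU) and perceived ease of use (PEOU) scales, and behavioural intention (BI) is regressed on both. Nothing below comes from any real study; it only shows the shape of the analysis.

```python
# Minimal sketch of the standard TAM analysis: BI ~ PU + PEOU.
# All numbers are simulated; nothing here comes from a real study.
import numpy as np

rng = np.random.default_rng(42)
n = 200                                   # pretend survey respondents
peou = rng.normal(3.5, 0.8, n)            # perceived ease of use (1-5 Likert scale)
pu = 0.5 * peou + rng.normal(1.5, 0.7, n) # perceived usefulness, partly driven by PEOU
bi = 0.6 * pu + 0.2 * peou + rng.normal(0.5, 0.5, n)  # behavioural intention

# Ordinary least squares for the core TAM relationship
X = np.column_stack([np.ones(n), pu, peou])
coefs, *_ = np.linalg.lstsq(X, bi, rcond=None)
print(dict(zip(["intercept", "PU", "PEOU"], np.round(coefs, 2))))
```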

References

Bichsel, J. (2012). Analytics in Higher Education: Benefits, Barriers, Progress and Recommendations. Louisville, CO. Retrieved from http://net.educause.edu/ir/library/pdf/ERS1207/ers1207.pdf

Box, G. E. P. (1979). Robustness in the Strategy of Scientific Model Building. In R. Launer & G. Wilkinson (Eds.), Robustness in Statistics (pp. 201–236). Academic Press. ISBN 0-12-438150-2

Burton-Jones, A., & Hubona, G. (2006). The mediation of external variables in the technology acceptance model. Information & Management, 43(6), 706–717. doi:10.1016/j.im.2006.03.007

Ciborra, C. (1992). From thinking to tinkering: The grassroots of strategic information systems. The Information Society, 8(4), 297–309.

Corrin, L., Kennedy, G., & Mulder, R. (2013). Enhancing learning analytics by understanding the needs of teachers. In Electric Dreams. Proceedings ascilite 2013 (pp. 201–205).

Davis, F. D. (1986). A Technology Acceptance Model for empirically testing new end-user information systems: Theory and results. MIT.

Davis, F. D. (1989). Perceived usefulness, perceived ease of use and user acceptance of information technology. MIS Quarterly, 13(3), 319.

Davis, F. D., Bagozzi, R. P., & Warshaw, P. R. (1989). User acceptance of computer technology: A comparison of two theoretical models. Management Science, 35(8), 982–1003.

Dawson, S., & McWilliam, E. (2008). Investigating the application of IT generated data as an indicator of learning and teaching performance. Canberra: Australian Learning and Teaching Council. Retrieved from http://moourl.com/hpds8

Ferguson, R., Clow, D., Macfadyen, L., Essa, A., Dawson, S., & Alexander, S. (2014). Setting Learning Analytics in Context: Overcoming the Barriers to Large-Scale Adoption. Journal of Learning Analytics, 1(3), 120–144. doi:10.1145/2567574.2567592

Hannafin, M., McCarthy, J., Hannafin, K., & Radtke, P. (2001). Scaffolding performance in EPSSs: Bridging theory and practice. In World Conference on Educational Multimedia, Hypermedia and Telecommunications (pp. 658–663). Retrieved from http://www.editlib.org/INDEX.CFM?fuseaction=Reader.ViewAbstract&paper_id=8792

Holt, D., Palmer, S., Munro, J., Solomonides, I., Gosper, M., Hicks, M., … Hollenbeck, R. (2013). Leading the quality management of online learning environments in Australian higher education. Australasian Journal of Educational Technology, 29(3), 387–402. Retrieved from http://www.ascilite.org.au/ajet/submission/index.php/AJET/article/view/84

Introna, L. (2013). Epilogue: Performativity and the Becoming of Sociomaterial Assemblages. In F.-X. de Vaujany & N. Mitev (Eds.), Materiality and Space: Organizations, Artefacts and Practices (pp. 330–342). Palgrave Macmillan.

Jasperson, S., Carter, P. E., & Zmud, R. W. (2005). A Comprehensive Conceptualization of Post-Adaptive Behaviors Associated with Information Technology Enabled Work Systems. MIS Quarterly, 29(3), 525–557.

Jones, D., & Clark, D. (2014). Breaking BAD to bridge the reality/rhetoric chasm. In B. Hegarty, J. McDonald, & S. Loke (Eds.), Rhetoric and Reality: Critical perspectives on educational technology. Proceedings ascilite Dunedin 2014 (pp. 262–272). Dunedin.

Kay, A. (1984). Computer Software. Scientific American, 251(3), 53–59.

Kunin, V., Goldovsky, L., Darzentas, N., & Ouzounis, C. A. (2005). The net of life: Reconstructing the microbial phylogenetic network. Genome Research, 15(7), 954–959. doi:10.1101/gr.3666505

Laxon, A. (2013, September 14). Exams go online for university students. The New Zealand Herald.

Lee, Y., Kozar, K. A., & Larsen, K. R. T. (2003). The Technology Acceptance Model: Past, Present, and Future. Communications of the AIS, 12. Retrieved from http://aisel.aisnet.org/cais/vol12/iss1/50

Lockyer, L., Heathcote, E., & Dawson, S. (2013). Informing Pedagogical Action: Aligning Learning Analytics With Learning Design. American Behavioral Scientist, 57(10), 1439–1459. doi:10.1177/0002764213479367

Müller, M. (2015). Assemblages and Actor-networks: Rethinking Socio-material Power, Politics and Space. Geography Compass, 9(1), 27–41. doi:10.1111/gec3.12192

Najmul Islam, A. K. M. (2014). Sources of satisfaction and dissatisfaction with a learning management system in post-adoption stage: A critical incident technique approach. Computers in Human Behavior, 30, 249–261. doi:10.1016/j.chb.2013.09.010

Nistor, N. (2014). When technology acceptance models won’t work: Non-significant intention-behavior effects. Computers in Human Behavior, pp. 299–300. Elsevier Ltd. doi:10.1016/j.chb.2014.02.052

Stead, D. R. (2005). A review of the one-minute paper. Active Learning in Higher Education, 6(2), 118–131. doi:10.1177/1469787405054237

Sturgess, P., & Nouwens, F. (2004). Evaluation of online learning management systems. Turkish Online Journal of Distance Education, 5(3). Retrieved from http://tojde.anadolu.edu.tr/tojde15/articles/sturgess.htm

Šumak, B., Heričko, M., & Pušnik, M. (2011). A meta-analysis of e-learning technology acceptance: The role of user types and e-learning technology types. Computers in Human Behavior, 27(6), 2067–2077. doi:10.1016/j.chb.2011.08.005

Venkatesh, V., & Bala, H. (2008). Technology acceptance model 3 and a research agenda on interventions. Decision Sciences, 39(2), 273–315. doi:10.1111/j.1540-5915.2008.00192.x

Venkatesh, V., & Davis, F. D. (2000). A theoretical extension of the Technology Acceptance Model: Four longitudinal field studies. Management Science, 46(2), 186–204.

Venkatesh, V., Morris, M., Davis, G. B., & Davis, F. D. (2003). User acceptance of information technology: Toward a unified view. MIS Quarterly, 27(3), 425–478.

“All models are wrong, but some are useful” and its application to e-learning

In a section with the heading “ALL MODELS ARE WRONG BUT SOME ARE USEFUL”, Box (1979) wrote

Now it would be very remarkable if any system existing in the real world could be exactly represented by any simple model. (p. 202)

Over recent weeks I’ve been increasingly interested in the application of this aphorism to the practice of institutional e-learning and why it is so bad.

Everything in e-learning is a model

For definition’s sake, the OECD (2005) defines e-learning as the use of information and communications technology (ICT) to support and enhance learning and teaching.

As the heading suggests, I’d like to propose that everything in institutional e-learning is a model. Borrowing from the Wikipedia page on this aphorism you get the definition of model as “a simplification or approximation of reality and hence will not reflect all of reality” (Burnham & Anderson, 2002).

The software that enables e-learning is a model. At some stage, some smart people sat down, generated and analysed a set of requirements, and then developed a model (in the form of the software) that aims to fulfill those requirements.

Instructional design and teaching are essentially the creation of models intended to enable learning. At some stage, some smart people sat down, generated and analysed a set of requirements, and then developed a model to achieve some learning outcome.

Organisational structures are models. At some stage, some smart people sat down, generated and analysed a set of requirements, and then developed a model to achieve some operational and strategic requirements. That same set of smart people probably also worked on developing a range of models in the form of organisational policies and processes. Some of which may have been influenced by the software models that are available.

The theories, tools, and schema used in the generation of the above models, are in turn models.

And following Box, all models are wrong.

But it gets worse.

In e-learning, everyone is an expert model builder

E-learning within an institution – by its nature – must bring together a range of different disciplines, including (but not limited to): senior leadership, middle management, quality assurance (boo) and related; researchers; librarians; instructional designers, staff developers and related learning and teaching experts; various forms of technology experts (software developers, network and systems administrators, user support etc); various forms of content development experts (editors, illustrators, video and various multimedia developers); and, of course the teachers/subject matter experts. I’ll make special mention of the folk from marketing who are the experts of the institutional brand.

All of these people are – or at least should be – expert model builders. Experts at building and maintaining the types of models mentioned above. Even the institutional brand is a type of model.

This brings problems.

Each of these expert model builders suffers from expertise bias.

What do you mean you can’t traverse the byzantine mess of links from the staff intranet and find the support documentation? Here, you just click here, here, here, here, here, here, here, and here. See, obvious……

And each of these experts thinks that the key to improving the quality of e-learning at the institution can be found in the institution doing a much better job at their model. Can you guess which group of experts is most likely to suggest the following?

The quality of learning and teaching at our institution can be improved by:

  • requiring every academic to have a teaching qualification.
  • ensuring we only employ quality researchers who are leaders in their field.
  • adopting the latest version of ITIL, i.e. the full straight-jacket.
  • requiring all courses to meet the 30-page checklist of quality criteria.
  • redesigning all courses using constructive alignment.
  • re-writing all our systems using an API-centric architecture.
  • adopting my latest theory on situated cognition, self-regulated learning and maturation.

What’s common to most of these suggestions is the assumption that it will all be better if we just adopt this new, better model. All of the problems we’ve faced previously are due to the fact that we’ve used the wrong model. This model is better. It will solve it.

Some recent examples

I’ve seen a few examples of this recently.

Ben Werdmuller had an article on Medium titled “What would it take to save #EdTech?” Ben’s suggested model solution was an open startup.

Mark Smithers blogged recently reflecting on 20 years in e-learning. In it Mark suggests a new model for course development teams as one solution.

Then there is this post on Medium titled “Is Slack the new LMS?”. As the title suggests, the new model here is that embodied by Slack.

Tomorrow I’ll be attending a panel session titled “The role of Openness in Creating New Futures in higher education” (being streamed live). Indicative of how the “open” model is seen as yet another solution to the problem of institutional e-learning.

And going back a bit further Holt et al (2011) report on the strategic contributions of teaching and learning centres in Australian higher education and observe that

These centres remain in a state of flux, with seemingly endless reconfiguration. The drivers for such change appear to lie in decision makers’ search for their centres to add more strategic value to organisational teaching, learning and the student experience (p. 5)

i.e. every senior manager worth their salt does the same stupid thing senior managers have always done: change the model that underpins the structure of the organisation.

Changing the model like this is seen as suggesting you know what you are doing and it can sometimes be made to appear logical.

And of course in the complex adaptive system that is institutional e-learning it is also completely and utterly wrong and destined to fail.

A new model is not a solution

This is because any model is “a simplification or approximation of reality and hence will not reflect all of reality” (Burnham & Anderson, 2002) and “it would be very remarkable if any system existing in the real world could be exactly represented by any simple model” (Box, 1979, p. 202).

As Box suggested, this is not to say you should ignore all models. After all, all models are wrong, but some are useful. You can achieve some benefits from moving to a new model.

But a new model can never be “the” solution, especially as the size of the model’s impact grows. A new organisational structure for the entire university is never going to be the solution; it will only be really, really costly.

There are always problems

This is my 25th year working in Universities. I’ve spent my entire 25 years identifying and fixing the problems that exist with whatever model the institution has used. Almost my entire research career has been built around this. A selection of the titles from my publications illustrates the point

  1. Computing by Distance Education: Problems and Solutions
  2. Solving some problems of University Education: A Case Study
  3. Solving some problems with University Education: Part II
  4. How to live with ERP systems and thrive.
  5. The rise and fall of a shadow system: Lessons for Enterprise System Implementation
  6. Limits in developing innovative pedagogy with Moodle: The story of BIM
  7. The life and death of Webfuse: principles for learning and learning into the future
  8. Breaking BAD to bridge the reality/rhetoric chasm.

And I’m not alone. Scratch the surface at any University and you will find numerous examples of individuals or small groups of academics identifying and fixing problems with whatever models the institution has adopted. e.g. A workshop at CSU earlier this year included academics from CSU presenting a raft of systems they’ve had to develop to solve problems with the institutional models.

The problem is knowing how to combine the multitude of models

The TPACK (Technological Pedagogical Content Knowledge) framework provides one way to conceptualise what is required for quality learning and teaching with technology. In proposing the TPACK framework, Mishra and Koehler (2006) argue that

Quality teaching requires developing a nuanced understanding of the complex relationships between technology, content, and pedagogy, and using this understanding to develop appropriate, context-specific strategies and representations. Productive technology integration in teaching needs to consider all three issues not in isolation, but rather within the complex relationships in the system defined by the three key elements (p. 1029).

i.e. good quality teaching requires the development of “appropriate, context-specific” combinations of all of the models involved with e-learning.
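One (admittedly mechanical) way to see what the framework asks for is to enumerate the knowledge constructs it names: every non-empty combination of the three knowledge bases, with the three-way combination (TPCK, now usually written TPACK) as the target. A quick sketch:

```python
# Enumerate the seven TPACK constructs: every non-empty combination of
# Technological, Pedagogical and Content knowledge (TK, PK, CK, TPK, TCK, PCK, TPCK).
from itertools import combinations

bases = ["Technological", "Pedagogical", "Content"]
for size in range(1, len(bases) + 1):
    for combo in combinations(bases, size):
        label = "".join(name[0] for name in combo) + "K"   # e.g. "TPK", "TPCK"
        print(f"{label}: {' + '.join(combo)} knowledge")
```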

The reason “all models are wrong” is that when you get down to the individual course (remember, I’m focusing on university e-learning) you are getting much closer to the reality of learning. That reality is hidden from the senior manager developing policy, the QA person deciding on standards for the entire institution, the software developer working on a system (open source or not), and so on. They are all removed from the context. They are all removed from the reality.

The task of the teacher (or the course design team depending on your model) is captured somewhat by Shulman (1987)

to transform the content knowledge he or she possesses into forms that are pedagogically powerful and yet adaptive to the variations in ability and background presented by the students (p. 15)

The task is to mix all those models together and produce the most effective learning experience for these particular students in this particular context. The better you can do that, the more pedagogical value. The better the learning.

All of the work outlined in my publications listed above has been attempts to mix the various models available into a form that has greater pedagogical value within the context which I was teaching.

A new model means a need to create a new mix

When a new LMS, a new organisational structure, a new QA process, or some other new model replaces the old model it doesn’t automatically bring an enhancement in the overall experience of e-learning. That enhancement is really only maximised by each of the teachers/course design teams having to go back and re-do all the work they’d previously done to get the mix of models right for their context.

This is where (I think) the “technology dip” comes from. As Underwood and Dillon (2011) put it

Introducing new technologies into the classroom does not automatically bring about new forms of teaching and learning. There is a significant discontinuity between the introduction of ICT into any educational setting and the emergence of measurable impacts on pedagogy and learning outcomes (p. 320)

Instead the quality of learning and teaching dips after the introduction of new technologies (new models) as teachers struggle to work out the new mix of models that are most appropriate for their context.

It’s not how bad you start, it’s how quickly you get better

In reply to my comment on his post, Mark asks the obvious question

What other model is there?

Given the argument that “all models are wrong”, how do I propose a model that is correct?

I’m not going to expand on this very much, but I will point you to Dave Snowden’s recent series of posts, including this one titled “Towards a new theory of change” and his general argument

that we need to stop talking about how things should be, and start changing things in the here and now

For me this means: stop focusing on your new model of the ideal future (e.g. “if only we used Slack for the LMS”). Instead:

  • develop an on-going capacity to know in detail what is going on now (learner experience design is one enabler here);
  • enable anyone and everyone in the organisation to remix all of the models (the fact that most universities make horrendously poor use of network technology to promote connections between people currently prevents this);
  • make it easy for people to know about and re-use the mixtures developed by others (too much of the re-mixing currently done is manual);
  • find out what works and promote it (this relies on doing a really good job of the first point, not on course evaluation questionnaires); and,
  • find out what doesn’t work and kill it off.

This doesn’t mean doing away with strategic projects, it just means scaling them back a bit and focusing more on helping all the members of the organisation learn more about the unique collection of model mixtures that work best in the multitude of contexts that make up the organisation.

My suggestion is that there needs to be a more fruitful combination of the BAD and SET frameworks and a particular focus on developing the organisation’s distributed capacity to develop its TPACK.

References

Box, G. E. P. (1979). Robustness in the Strategy of Scientific Model Building. In R. Launer & G. Wilkinson (Eds.), Robustness in Statistics (pp. 201–236). Academic Press. ISBN 0-12-438150-2

Burnham, K. P., & Anderson, D. R. (2002). Model Selection and Multimodel Inference: A Practical Information-Theoretic Approach (2nd ed.). New York: Springer.

Holt, D., Palmer, S., & Challis, D. (2011). Changing perspectives: Teaching and Learning Centres’ strategic contributions to academic development in Australian higher education. International Journal for Academic Development, 16(1), 5–17. Retrieved from http://www.tandfonline.com/doi/abs/10.1080/1360144X.2011.546211

Mishra, P., & Koehler, M. J. (2006). Technological Pedagogical Content Knowledge: A framework for teacher knowledge. Teachers College Record, 108(6), 1017–1054.

OECD. (2005). E-Learning in Tertiary Education: Where do we stand? Paris, France: Centre for Educational Research and Innovation, Organisation for Economic Co-operation and Development. Retrieved from http://www.oecd-ilibrary.org/education/e-learning-in-tertiary-education_9789264009219-en

Shulman, L. S. (1987). Knowledge and teaching: Foundations of the new reform. Harvard Educational Review, 57(1), 1–22.

Underwood, J., & Dillon, G. (2011). Chasing dreams and recognising realities: teachers’ responses to ICT. Technology, Pedagogy and Education, 20(3), 317–330. doi:10.1080/1475939X.2011.610932

Homogeneity: the inevitable result of a strategic approach?

Is homogeneity an inevitable end result of a strategic approach to deciding what gets done?

The following presents some evidence to suggest a potentially strong correlation.

What is the strategic approach?

In Jones and Clark (2014) we suggested that contemporary universities (along with most other organisations) increasingly use a strategic approach to decide what work gets done. We described strategy as

following a global plan intended to achieve a pre-identified desired future state.

It’s where a bunch of really smart people get together. They analyse the current situation, identify the requirements and the challenges, and then decide that the entire institution should do X. Where X might include: a particular strategic vision; a single set of graduate attributes for the entire organisation; a particular approach to branding and marketing; the selection of a particular information system etc.

Once the strategic decision is made, the entire organisation becomes focused on moving toward the various institutionally approved strategic goals. Doing anything else is seen as inefficient, inappropriate, and is to be rooted out.

The underlying aim of the strategic approach is differentiation. To set the institution apart from the other institutions. To give various stakeholders/customers/clients a reason to go to this institution first.

How does that work out for them?

It’s Hard to Differentiate One Higher-Ed Brand From Another

This page reports on a study of 50 US-based higher education institutions and includes quotes such as (emphasis added)

found that the mission, purpose or vision statements of more than 50 higher education institutions share striking similarities, regardless of institution size, public or private status, land-grant status or religious affiliation, or for-profit or not-for-profit status….
statements may accurately represent the broad views and aspirations of education leaders and their institutions. And they probably differentiate the institutions from financial service and retail companies

Interestingly, the suggested solution to this problem is that “a strong organizational identity only starts with establishing and committing to a clear and differentiated purpose, brand and culture”. i.e. yet another strategic approach.

The sameness of graduate attributes

For a few years now there’s been a fetish that has required each Australian University to develop their own set of graduate attributes. These are meant to indicate the unique attributes of a graduate of that institution. To demonstrate the unique value that the educational experiences offered by the institution add to the development of their customer student. Surely this must be the most obvious place for differentiation and distinction. Something that truly captures what is unique about each university.

Oliver (2011) does a scan of the literature and practice around graduate attributes and identifies that

Universities’ most common generic attributes, apart from knowledge outcomes, appear to cluster in seven broad areas:

  1. Written and oral communication
  2. Critical and analytical (and sometimes creative and reflective) thinking
  3. Problem-solving (including generating ideas and innovative solutions)
  4. Information literacy, often associated with technology
  5. Learning and working independently
  6. Learning and working collaboratively
  7. Ethical and inclusive engagement with communities, cultures and nations.

(p. 2)

Strategic Information Systems

And the other fad over recent years has been the adoption of Strategic Information Systems such as ERPs and LMSs. The thinking goes: if the institution adopts the same system and works effectively together to leverage its capabilities, it will gain a competitive advantage over the opposition. Well, no.

Over 20 years ago, Ciborra (1992) argued

Tapping standard models of strategy analysis and data sources for industry analysis will lead to similar systems and enhance, rather than decrease, imitation (p. 297)

Which is why e-learning within Universities is increasingly infected by LMS-based courses using institutional standard course site designs, a digital repository, a lecture capture system, an e-portfolio, and a couple of other standard systems offering the same broken experience. Whether your LMS is open source or not typically doesn’t make a difference.

The solution

Ciborra (1992) suggested

How then should “true” SISs be developed? In order to avoid easy imitation, they should emerge from the grass roots of the organization, out of end-user hacking, computing, and tinkering. In this way the innovative SIS is going to be highly entrenched with the specific culture of the firm. Top management needs to appreciate local fluctuations in practices as a repository of unique innovations and commit adequate resources to their development, even if they fly in the face of traditional approaches. Rather than looking for standard models in the business strategy literature, SISs should be looked for in the theory and practice of organizational learning and innovation, both incremental and radical. (p. 297)

Or as we argued in Jones and Clark (2014)

Perhaps universities need to break a little BAD?

Instead, universities like most organisations, are attempting to solve the problems of the strategic approach by doing the strategic approach again (but we’ll do it better this time, promise).

Image: “Insanity by Albert Einstein” by Mimsen, on Flickr (Creative Commons Attribution-Share Alike 2.0 Generic licence).

References

Ciborra, C. (1992). From thinking to tinkering: The grassroots of strategic information systems. The Information Society, 8(4), 297–309.

Jones, D., & Clark, D. (2014). Breaking BAD to bridge the reality/rhetoric chasm. In B. Hegarty, J. McDonald, & S. Loke (Eds.), Rhetoric and Reality: Critical perspectives on educational technology. Proceedings ascilite Dunedin 2014 (pp. 262–272). Dunedin.