Breaking BAD to bridge the e-learning reality/rhetoric chasm

@damoclarky and I got a bit lucky. Our ASCILITE paper has been accepted with revisions. Apparently the first reviewer hated the “theoretical construct” we were using to make our argument. The following is what we originally wrote, sharing it here to hopefully spark some critique and improvement (and also not to entirely waste the writing when I gut it and start again).

Start with the problem and then the “construct”, both adapted from the paper.

Problem

In a newspaper article (Laxon, 2013), Professor Mark Brown makes the following comment on the quality of contemporary university e-learning:

E-learning’s a bit like teenage sex. Everyone says they’re doing it but not many people really are and those that are doing it are doing it very poorly. (n.p.)

E-learning – defined by the OECD (2005) as the use of information and communications technology (ICT) to support and enhance learning and teaching – has been around for so long that there have been numerous debates about replacing it with other phrases. Regardless of the term used there “has been a long-standing tendency in education for digital technologies to eventually fall short of the exaggerated expectations” (Selwyn, 2012, n.p.). Writing in the early 1990s, Geoghegan (1994) seeks to understand why a three decade long “vision of a pedagogical utopia” (n.p.) promised by instructional technologies has failed to eventuate. Ten years on, Salmon (2005) notes that e-learning within universities is still struggling to move beyond projects driven by innovators and to engage a significant percentage of students and staff. Even more recently, concerns remain about how much technology is being used to effectively enhance student learning (Kirkwood & Price, 2013). Given that “Australian universities have made very large investments in corporate educational technologies” (Holt et al., 2013, p. 388) it is increasingly important to understand and address the rhetoric/reality chasm around e-learning.

Not surprisingly the literature provides a variety of answers to this complex problem. Weimer (2007) observes that academics come to the task of teaching with immense amounts of content knowledge, but little or no knowledge of teaching and learning, beyond perhaps their personal experience. This situation may not change significantly given that academics are expected to engage equally in research and teaching and yet work towards promotion criteria that are perceived to primarily value achievements in research (Zellweger, 2005). It has been argued that the limitations of the Learning Management System (LMS) – the most common university e-learning tool – make the LMS less than suitable for more effective learner-centered approaches and are contributing to growing educator dissatisfaction (Rahman & Dron, 2012). It’s also been argued that the “limited digital fluency of lecturers and professors is a great challenge” (Johnson, Adams Becker, Cummins, & Estrada, 2014, p. 3) for the creative leveraging of emerging technologies. Another contributing factor is likely to be Selwyn’s (2008) suggestion that educational technologists have failed to be cognisant of “the more critical analyses of technology that have come to the fore in other social science and humanities disciplines” (p. 83). Of particular interest here is the observation of Goodyear et al (2014) that the “influence of the physical setting (digital and material) on learning activity is often important, but is under-researched and under-theorised: it is often taken for granted” (p. 138).

Our argument is that the set of implicit assumptions that underpin the practice of institutional e-learning within universities (which we’ll summarise under the acronym SET) leads to a digital and material environment that contributes significantly to the reality/rhetoric chasm. While this mindset continues to underpin how universities go about the task of institutional e-learning, they won’t be able to bridge the chasm.

Instead, we argue that another mindset needs to play a larger role in institutional practice. How much larger we don’t know. We’ll summarise this mindset under the acronym “BAD”. Yep, we think institutional e-learning needs to break BAD.

Breaking BAD versus SET in your ways

The following table contrasts the two frameworks and expands their acronyms. A slightly more detailed examination of the two frameworks follows.

Table 1: The BAD and SET frameworks for e-learning implementation
Component | BAD | SET
How work gets done | Bricolage – concrete problems are solved through creative recombination of existing resources | Strategy – a desired future state is identified; all resources required to achieve that state in the most efficient way are identified and provided
How ICT is perceived | Affordances – ICT is protean. It can be modified to enhance and transform current practice, and to make it easier for the users | Established – ICT is fixed and implemented vanilla. Processes change to fit and users are trained to use the provided functionality
How you see the world | Distributed – the world is complex, dynamic and unpredictable | Tree-like – the world is relatively stable and predictable. It can be understood through logical decomposition into a hierarchy

How work gets done

(this was originally titled “How stuff happens” but was probably what one reviewer described as “inappropriately colloquial”. Need a better label for this. The idea is that the organisation only recognises work of a particular type. It’s the only way it conceives of anything interesting/important happening. Not sure the following explains this well enough)

It would be an unusual contemporary Australian university that was not – at least at the level of rhetoric – following a strategic approach to its operations. Numerous environmental challenges and influences have led to universities being treated as businesses with an increasing prevalence of managers using “strategic control and a focus on outputs which can be quantified and compared” (Reid, 2009, p. 575) to manage academic activities. In line with this has been the increasing strategic approach to learning and teaching. The requirement that Australian universities have institutional learning and teaching strategic plans publicly available on their websites prior to accessing a government learning and teaching fund (Inglis, 2007) is just one example of how university teaching has become an object of policy, with learning and teaching excellence necessarily including the specification of goals (Clegg & Smith, 2008). The perceived importance of strategic approaches to institutional e-learning is illustrated by Carter et al’s (2011) identification of the importance of ensuring “Technology alignment with goals of the organization” (p. 207). The strategic or planning-by-objectives (e.g. learning outcomes, graduate attributes) approach also underpins how course design is largely assumed to occur, with Visscher-Voerman and Gustafson (2004) finding that it underpins “a majority of the instructional design models in the literature” (p. 77). These approaches to understanding “how stuff happens” are so ingrained that it is often forgotten that these ideas have not always existed (Kezar, 2001) and that there is an alternate perspective.

(An example comparing bricolage and engineering approaches might be useful, might actually be a better structure for this section)

An example of this alternate perspective can be found in the idea of bricolage or “the art of creating with what is at hand” (Scribner, 2005, p. 297). Bricolage involves the manipulation and creative repurposing of existing, and often unlikely, resources into new arrangements to solve a concrete problem. A bricoleur (someone who engages in bricolage), when faced with a project, does not analyse what resources may be required to fulfill that project (a more strategic approach); instead they ask how the project can be achieved with the resources already available (Hatton, 1989). Hatton (1989) used bricolage to understand the work of teachers, though somewhat negatively, while Scribner (2005) takes a more positive view. In terms of developing strategic applications of ICT, Ciborra (1992) argues that the “capability of integrating unique ideas and practical design solutions at the end-user level” (p. 299) (bricolage) is more important than strategic approaches.

As argued by Jones et al (2005) there are risky extremes inherent in both the strategic and bricolage approaches to process. The suggestion here, within the context of university e-learning, is that it would be fruitful to explore a dynamic and flexible interplay between the strategic and bricolage approaches. The problem is that at the moment the strategic is crowding out the bricolage. As Groom and Lamb (2014) observe, the cost of supporting an enterprise learning tool (e.g. the LMS) limits resources for user-driven innovation, in part because it draws “attention and users away” from the strategic tool. The demands of sustaining the large, complex and strategic tool dominate priorities and lead to “IT organizations…defined by what’s necessary rather than what’s possible” (Groom & Lamb, 2014, n.p). The established view of Information and Communication Technologies (ICT) in part arises from the predominance of the strategic view of how work happens.

How ICT is perceived: Affordances or Established

Widely accepted best practice within the IT industry is that large integrated systems – like an LMS – should be implemented in their “vanilla” form as they are too expensive to customise (Robey, Ross, & Boudreau, 2002). This way of perceiving ICTs assumes that the functionality provided by technology is established and cannot be changed. This perception of an LMS encourages the adoption of only those pedagogical designs that are supported by the existing LMS functionality and precludes the exploration of contextually specific learning designs (Jones, 2012). Perceiving and implementing the LMS as an established product simplifies and reduces the cost of training and support, but increases the difficulty of adoption as teaching staff attempt to use a standardised system to support hugely diverse disciplines, teaching philosophies and instructional styles (Black, Beck, Dawson, Jinks, & DiPietro, 2007). Perhaps in no small way the established view of ICT in e-learning contributes to Dede’s (2008) observation that “widely used instructional technology applications have less variety in approach than a low-end fast-food restaurant” (p. 58). This perception of ICT challenges Kay’s (1984) discussion of the “protean nature of the computer” (p. 59) as “the first metamedium, and as such has degrees of freedom and expression never before encountered” (p. 59). However, this perception of ICT is closely linked with the techno-rational assumptions of the strategic view, an approach that is increasingly seen as a naïve view of ICT, technology and organisations.

(Remove some of the quotes and tell a better story).

Goodyear et al (2014) argue that in thinking about design for networked learning it is vital to acknowledge “the likelihood of slippage between the task as set and the actual activity” (p. 139). Hannon (2013) describes a case where “meso-level practitioners – teaching academics, learning technologists, and academic developers” (p. 175) undertake “hidden effort” (p. 175) to deal with the gap between technology and pedagogy that arises from the application of centralised technologies. Rather than stick with the established functionality provided by an information system, increasingly technically literate users draw upon increasingly available technologies to develop systems that bridge the gaps between their needs and the established information system. While often seen as dangerous and inefficient, such systems can provide a resource of creativity and innovation that helps organisations survive in a competitive environment (Behrens, 2009). Such systems arise because ICT is not seen as established, but rather as one of a number of components of an emergent process of change where the outcomes are indeterminate because they are contingent on the specifics of the context and the situation (Markus & Robey, 1988). In particular, they arise due to an on-going process – not unlike bricolage – where users are exploring how the affordances of ICT can be leveraged to address concrete problems. The term affordances is used here as defined by Goodyear et al (2014) “not as pre-given, but as co-evolving, emergent and partly co-constitutive” (p. 142) and as a way of exploring how what is actually done with e-learning systems is “influenced by the qualities of the place in which they are working” (p. 137). Our view is that it is necessary for the implementation of e-learning systems to be perceived as an on-going and emergent exploration of the affordances that could be the most useful for the students and teachers within a given context. This echoes Johri’s (2011) observation that bricolage shifts focus away from the established “design of an artefact towards emergent design of technology-in-use, particularly by the users” (p. 212).

(that can certainly be improved upon)

How you see the world: Distributed or Tree-like

Techno-rational methods such as strategic planning and software development perceive the world (or at least act as if they perceive it) as a hierarchy, as being tree-like. These methods use analysis and logical decomposition to reduce larger wholes into smaller, more easily understood and manageable parts (Truex, Baskerville, & Travis, 2000). This approach is problematic because the isolation of components is largely imaginary and their separation leads to a loss of the rich interdependencies between components (Truex et al., 2000). Enterprise systems are informed heavily by these tree-like conceptions and this is reflected in university e-learning environments and their poor fit with the heterarchical and self-organised potential of contemporary technologies and educational practices (Hannon, Ryberg, & Riddle, 2014). Goodyear et al (2014) argue “that the dominant images of the object of our research do not yet reflect the extent to which learning networks now consist of heterogenous assemblages of tasks, activities, people, roles, rules, places, tools, artefacts and other resources, distributed in complex configurations across time and space and involving digital, non-digital and hybrid entities” (p. 140). We suggest that the same applies to the dominant conceptions underpinning the implementation of institutional e-learning systems.

The limitations of tree-like models and a preference for distributed models are evident in a number of sources. Holt et al (2013) argue for the importance of distributed leadership in institutional e-learning due to the growing complexity of e-learning, meaning that no one leader at the top of a hierarchical tree has the knowledge to “possibly contend with the complexity of issues” (p. 389). The trend towards distribution is obviously evident in connectivism and its “thesis that knowledge is distributed across a network of connections and therefore learning consists of the ability to construct and traverse those networks” (Downes, 2011, n.p). Siemens’ (2008) list of some of the concepts from which connectivism arises – such as activity theory, distributed and embodied cognition, complexity and network theory – illustrates the breadth of this move to distributed understandings. The socio-material approaches to studying and understanding networked learning (and technology embedded practices more broadly) mentioned by both Hannon (2013) and Goodyear et al (2014) echo a distributed view and underpin the emergent view of technology mentioned in the previous section. It also links with the idea of bricolage as paying close attention to what occurs within the distributed network and responding to context-specific problems by experimenting with the affordances perceived by the components of a network/assemblage to reduce the chasm between rhetoric and reality.

Bringing the LMS into the network – Experiment # 1 – Activity completion

The following is the first step in an attempt to modify the Moodle Activity Viewer (or at least a local instance). I’d like a modified version of MAV to allow me to

  1. find out how students are progressing with activity completion.

    Rather than use clicks (as MAV does currently) to track student usage, use activity completion. I use this in EDC3100; however, activity completion isn’t even turned on at the Moodle level at the other institution.

  2. Easily display additional information about students as a roll-over/popup for any links to a student profile page.

The following is an initial exploration of how MAV works and what changes I’ll need to make to kludge it to work within the local constraints.

It starts with a description of how this is going to work, follows with some initial explorations getting MAV to work within my browser, an exploration of how MAV actually does this and some initial explorations of the changes I’ll have to make. It finishes with some suggestions for the next step.

All as a local instance

First constraint is that this is all being done as a local instance. I’ll be the only one who can see it when I’m using my laptop. It will work something like this

  • I have MAV installed on a version of the Firefox browser.
  • When I visit one of my institutional Moodle course sites MAV will recognise this and as a result will
    • Send a query to a web server running on my laptop asking for activity completion (and other) data for all, some or one student.
    • The web server on my laptop will query a database on my laptop that contains a copy of the activity completion data for my courses and send a reply back to Firefox/MAV.
    • On receipt of the reply Firefox/MAV will update the display of my course site to colour code the activities based on how many student(s) have completed the activity.

The reliance on my laptop and a local database is due to the difficulty of making connections to the institutional servers/data.
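
To make that flow concrete, the conversation between Firefox/MAV and the local server might look something like the following sketch. It’s an approximation only – the endpoint URL, the JSON shapes and the field names below are my assumptions, not MAV’s actual API.

    // Sketch of the Firefox/MAV (GreaseMonkey) side of the conversation.
    // Requires @grant GM_xmlhttpRequest in the userscript header.
    // ASSUMPTIONS: the endpoint URL and the JSON shapes are illustrative only.
    var request = {
      courselink: "http://usqstudydesk.usq.edu.au/m2/course/view.php?id=1234", // current course page
      links: ["/mod/forum/view.php?id=12345", "/mod/book/view.php?id=263678"], // activity links found on the page
      scope: "all" // all students, a group, or a single student
    };

    GM_xmlhttpRequest({
      method: "POST",
      url: "http://localhost/mav/api/getActivity.php", // web server running on my laptop
      headers: { "Content-Type": "application/json" },
      data: JSON.stringify(request),
      onload: function(response) {
        // Assumed reply: each link mapped to a count (e.g. # students who completed it)
        var counts = JSON.parse(response.responseText);
        // ... colour code the matching links on the course page based on the counts
      }
    });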

From a “theoretical” perspective, this is part of our argument that the LMS is not a fully fledged member of networked learning. It’s too hard to make new connections to the LMS to enable new learning. MAV and local databases are an attempt to make it easier to connect to the LMS and its large number of individual parts. The theory is that by making these connections easier, it becomes easier to innovate and to encourage the development of more interesting approaches to learning that see more use.


MAV recognising institutional LMS page

First some documentation on MAV and how it works

The local MAV code is in /usr/local/www/mav and /usr/local/www/smarty

MAV has two separate servers it knows about

  • balmiServer – this will be my local laptop
  • Moodle Server – this is the institutional LMS

    POINT: Would be interesting to see if this could be multiple servers? e.g. when I want it to work on both my local Moodle server and the institution one

These are set in ~/mav/gmdocs. Set the Moodle server to http://usqstudydesk.usq.edu.au/m2

Go to this link http://localhost/fred/mav/gm/moodleActivityViewer.user.js and install the updated version of MAV

That seems to be working. Getting at least some information dumped into the console. Seems to be breaking on a call to balmi.getLoggedInUserIDNumber() — moodleActivityViewer.user.js line 1199

Ahh, seems the USQ study desk has an extra bread crumb in the list that breaks the code. Modify the code in balmi.user.js and all is good.

getMoodleLinks

balmi.user.js has a function getMoodleLinks that extracts all the Moodle type links from the page. This includes setting up some regular expressions to do the extraction.

Change: the RE needs to be updated for my institutional Moodle, as does another RE replacement a little further down for the link name.
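
Something like the following is what I have in mind. The actual regular expressions live in balmi.user.js and aren’t reproduced here, so the patterns below are illustrative placeholders only (the USQ base URL comes from above; the “old” institution URL is made up).

    // Illustrative only – the real REs are in balmi.user.js.
    // The change is essentially swapping the Moodle base URL in the pattern.
    var oldLinkRe = /^https?:\/\/moodle\.example\.edu\.au\/mod\/(\w+)\/(view\.php\?id=\d+)/; // placeholder for the old institution
    var newLinkRe = /^https?:\/\/usqstudydesk\.usq\.edu\.au\/m2\/mod\/(\w+)\/(view\.php\?id=\d+)/;

    // getMoodleLinks-style extraction: find anchors whose href matches the RE and
    // record the module type and the view.php fragment (the format used later on).
    function getMoodleLinksSketch(doc) {
      var links = {};
      var anchors = doc.getElementsByTagName("a");
      for (var i = 0; i < anchors.length; i++) {
        var m = newLinkRe.exec(anchors[i].href);
        if (m) {
          links["/mod/" + m[1] + "/" + m[2]] = [m[1], m[2]];
        }
      }
      return links;
    }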

How does MAV work?

Try to nut out the process MAV uses and identify what possible changes I’ll need to make for both the activity completion and also the student information idea.

The client runs the GreaseMonkey script ~/mav/gmdocs/moodleActivityViewer.user.js installed on Firefox. It starts off and calls:

  • balmi.getCoursePageLink – Will only run MAV if the page is a valid Moodle page.

    Looks for the Moodle breadcrumbs and extracts the course id.

    CHANGE: this is where I could hard code the detection of my courses and also do the translation between the course ID on the USQ Moodle server and the course ID on the Moodle server on my laptop.

    If the page is not what MAV is looking for (getCoursePageLink returns NULL), MAV exits.

  • moodleActivityViewer.user.js – does a range of set up prep, adding the MAV interface etc.

    CHANGE: Some of these will need to change based on what I want to be able to do.

  • Adds the mavUpdatePage function as a listener for the page load event – i.e. this is what updates the page.
  • mavUpdatePage does some debug stuff and then calls
  • generateJSONRequest – generate the particular request to send to the MAV server in JSON
    • balmi.user.js – balmi.getCoursePageLink() – A duplicate call
    • balmi.user.js – balmi.getMoodleLinks

      gets all the links that are part of a Moodle course page. This is for the activity tracking.

      returns data of the form

      "/mod/forum/view.php?id=12345": ["forum","view.php?id=12345"]

      CHANGE: For activity completion the aim here will be to return the links only for valid Moodle activities.

      CHANGE: For the user details option, looking at returning the links to user details.

    • Filters out a range of links that shouldn’t be included
    • calls requestData – actually makes the request
  • updatePage – takes the data returned from the MAV server and updates the links. Either through increasing the font size or changing the background colour of the links.

    CHANGE: the activity completion will be closest to a version of the number of students. Rather than the number of students who clicked on the link, it will be the number of students who completed the activity.

    Has a loop that goes through all the links in the page. If a link matches something that’s come back from the MAV server, then make the change (see the sketch after this list).
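
As a rough sketch (not MAV’s actual code), the update step boils down to something like the following, assuming the server’s reply maps each link to the number of students who completed the corresponding activity and also includes a total.

    // Sketch of the updatePage step – illustrative only.
    // Assumed reply shape: { total: 120, links: { "/mod/forum/view.php?id=12345": 87, ... } }
    function mavUpdatePageSketch(reply) {
      var anchors = document.getElementsByTagName("a");
      for (var i = 0; i < anchors.length; i++) {
        var href = anchors[i].href || "";
        var idx = href.indexOf("/mod/");
        if (idx === -1) {
          continue; // not an activity link
        }
        var key = href.substring(idx); // normalise to the key format used by the server
        if (reply.links.hasOwnProperty(key)) {
          var pct = reply.links[key] / reply.total;
          // colour code: green = most students have completed it, red = few have
          anchors[i].style.backgroundColor = pct > 0.75 ? "#c8e6c9"
                                           : pct > 0.25 ? "#fff9c4"
                                           : "#ffcdd2";
        }
      }
    }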

The server is implemented using ~/mav/phpdocs/api/getActivity.php, which processes the request:

  • decodes and logs the request
  • getCourseIdFromCourseHomePageLink – extracts the course id which is used to query the Moodle database
  • SQL to count # students in the course

    CHANGE: Not needed for the student ID stuff.

  • Checks to see if the user wants # clicks or # students and whether just for an individual student, a group(s) or all.

    CHANGE: Again not needed for student details.

  • Calls generateSQLQuery from ~/mav/lib/generateSQLQuery – just a wrapper around a fairly standard PHP template for dynamically generated SQL.

    The template is in ~/mav/lib/getActivityQueryTemplate.php This uses a range of PHP code to generate the appropriate SQL query to extract the stats per link

    CHANGE: the activity completion modifications could be implemented in here. Fairly similar to the existing approach, but using activity completion rather than the Moodle log tables (see the SQL sketch after this list).

  • Processes the query for each link, placing the results into a data structure
  • Constructs the JSON object to send back to the browser.

    CHANGE: This is where my kludge will have to translate the student and activity ids returned by the SQL into the values that are being used on the USQ Moodle server and are thus what the browser will find embedded in the HTML.
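
For the activity completion change, the sort of query the modified template might need to generate could look like the following. This is a sketch only – the real template is PHP (getActivityQueryTemplate.php), and the table/column names assume Moodle’s standard schema and default mdl_ prefix, which I’d need to check against the local copy of the data.

    // Sketch only: the real template lives in ~/mav/lib/getActivityQueryTemplate.php (PHP).
    // Shown here as a JavaScript string for consistency with the other sketches.
    var courseId = 42; // local course id (after translating from the USQ course id)
    var completionSql =
      "SELECT cm.id AS coursemoduleid, " +
      "       COUNT(DISTINCT cmc.userid) AS completed " +
      "  FROM mdl_course_modules cm " +
      "  JOIN mdl_course_modules_completion cmc ON cmc.coursemoduleid = cm.id " +
      " WHERE cm.course = " + courseId +
      "   AND cmc.completionstate IN (1, 2) " + // complete, or complete with pass (assumed states)
      " GROUP BY cm.id";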

Approach for changes

Separate clients and servers for the two approaches. Perhaps modify the existing code for activity completion, but still do this separately from the existing MAV stuff so I have a clean copy? Definitely have to put this under git.

Questions

  1. What’s the format for links to the student profile? Does it use the Moodle user id?

    Basically a link to the script user/view.php with the user’s id and the course id as parameters.

    <a href="~/user/view.php?id=USERID&course=COURSEID">Fred Nerf</a>
    
  2. How do you distinguish activity links from other links in Moodle?

    Looks like a list element with a class of activity is a good first start. If it in turn contains a span of class autocompletion that’s another good sign.

    On top of that, for activity completion you’re only looking for stuff within the course-content div, or below that, the weeks unordered list.

    <li class="activity book modtype_book " id="module-263678">
      <div>
        <div class="mod-indent-outer"><div class="mod-indent"></div>
          <div>
            <div class="activityinstance">
               <a class="" onclick="" href="..mod/book/view.php?id=263678"><img src="" class="iconlarge activityicon" alt=" " role="presentation" />
                  <span class="instancename">Setting up your tools: Diigo, a blog and Twitter<span class="accesshide " > Book</span></span>
               </a>
            </div>
            <span class="actions">
               <span class="autocompletion"><img title="Completed: Setting up your tools: Diigo, a blog and Twitter" alt="Completed: Setting up your tools: Diigo, a blog and Twitter" class="smallicon" src="" /></span>
            </span>
          </div>
        </div>
      </div>
    </li>
    
  3. How am I going to map the USQ Moodle activity and user ids to the ids used on my local server?

    A simple script to parse the HTML file for the course home page should be able to extract the ids for each of the activities on the USQ server and also the associated name. The above HTML shows that the id is in the id of the list element. Already have the names of the activities with a hard coded sequential id in the local database. Can do the mapping that way (a rough sketch of this extraction is below).
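
A rough sketch of that extraction, based on the HTML fragment above (the selectors are assumptions to verify against the real page; this could be run in the browser console, or as a small script over the saved HTML):

    // Sketch: pull each activity's USQ id and name out of the course home page,
    // to build the mapping to the sequential ids in the local database.
    function extractActivities(doc) {
      var result = [];
      var items = doc.querySelectorAll("div.course-content li.activity");
      for (var i = 0; i < items.length; i++) {
        var li = items[i];
        var idMatch = /^module-(\d+)$/.exec(li.id); // e.g. id="module-263678"
        var name = li.querySelector("span.instancename");
        var completion = li.querySelector("span.autocompletion"); // present if completion is tracked
        if (idMatch && name) {
          result.push({
            usqId: idMatch[1],
            name: name.childNodes[0] ? name.childNodes[0].textContent.trim() : name.textContent.trim(),
            completionTracked: completion !== null
          });
        }
      }
      return result;
    }

    // e.g. run on the (saved) course home page and eyeball the result
    // console.table(extractActivities(document));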

To do

Misc tasks to do

  • Think through how this kludge is going to be done. Likely possibilities include
    1. Separate javascript plugins and servers for the activity completion and the user details.
    2. Modify the existing plugins and servers to handle the additional requests.
    3. Integrate activity completion into the existing MAV, but have user details separate

      Mainly because activity completion is largely the same as the existing display work that MAV does.

  • User detail display
    • Investigate what’s the best way to pass the data back to the browser – just data with the HTML generated by the browser, or as HTML generated by the server and simply inserted by the browser.
    • Chat with Rolley and see whether the rollover/popup idea can be implemented with the same HTML stuff used by the rest of MAV.
  • Extract the activity id data from the USQ server.
  • Find out if there is a report Moodle will generate listing all the users in a course, so I can extract user ids from the USQ Moodle server to create a mapping to the local user ids.

    The activity participation report will generate a list of all students with a link that includes user id and their name.

  • Get the MAV code base into git.
  • Implementing the separate user details version of MAV might be the first major change to do.

From thinking to tinkering: The grassroots of strategic information systems

What follows is a long overdue summary of Ciborra (1992). I think it will have a lot of insight for how universities implement e-learning. The abstract for Ciborra (1992) is

When building a Strategic Information System (SIS), it may not be economically sound for a firm to be an innovator through the strategic deployment of information technology. The decreasing costs of the technology and the power of imitation may quickly curtail any competitive advantage acquired through an SIS. On the other hand, the iron law of market competition prescribes that those who do not imitate superior solutions are driven out of business. This means that any successful SIS becomes a competitive necessity for every player in the industry. Tapping standard models of strategy analysis and data sources for industry analysis will lead to similar systems and enhance, rather than decrease, imitation. How then should “true” SISs be developed? In order to avoid easy imitation, they should emerge from the grass roots of the organization, out of end-user hacking, computing, and tinkering. In this way the innovative SIS is going to be highly entrenched with the specific culture of the firm. Top management needs to appreciate local fluctuations in practices as a repository of unique innovations and commit adequate resources to their development, even if they fly in the face of traditional approaches. Rather than looking for standard models in the business strategy literature, SISs should be looked for in the theory and practice of organizational learning and innovation, both incremental and radical.

My final thoughts

The connection with e-learning

Learning and teaching is the core business of a university. For the 20+ years I’ve worked in Australian Higher Education there have been calls for universities to become more distinct. It would then seem logical that the information systems used to support, enhance and transform (as if there are many that do that) learning and teaching (I’ll use e-learning systems in the following) should be seen as Strategic Information Systems.

Since the late 1990s the implementation of e-learning systems has been strongly influenced by the traditional approaches to strategic and operational management. The adoption of ERP systems is in no small way a major contributor to this. This recent article (HT: @katemfd) shows the lengths to which universities are going when they select an LMS (sadly, for many e-learning == LMS).

I wonder how much of the process is seen as being for strategic advantage. Part, or perhaps all, of Ciborra’s argument for tinkering is on the basis of generating strategic advantage. The question remains whether universities see e-learning as a source of strategic advantage (anymore). Perhaps they don’t see selection of the LMS as a strategic advantage, but given the lemming-like rush toward “we have to have a MOOC” of many VCs, it would seem that technology enhanced learning (apologies to @sthcrft) is still seen as a potential “disruptor”/strategic advantage.

For me this approach embodies the rational analytic theme to strategy that Ciborra critiques. The tinkering approach is what is missing from university e-learning and its absence is (IMHO) the reason much of it is less than stellar.

Ciborra argues that strategic advantage comes from systems where development is treated as an innovation process. Where innovation is defined as creating new knowledge “about resources, goals, tasks, markets, products and processes” (p. 304). To me this is the same as saying to treat the development of these systems as a learning process. Perhaps more appropriately a constructionist learning process. Not only does such a process provide institutional strategic advantage, it should improve the quality of e-learning.

The current rhetoric/reality gap in e-learning arises from not only an absence, but active prevention and rooting out, of tinkering and bricolage. An absence of learning.

The deficit model problem

Underpinning Ciborra’s approach is that the existing skills and competencies within an organisation provide both the source and the constraint on innovation/learning.

A problem with university e-learning is the deficit model of most existing staff. i.e. most senior management, central L&T and middle managers (e.g. ADL&T) have a deficit model of academic staff. They aren’t good enough. They don’t know enough. They have to complete a formal teaching qualification before they can be effective teachers. We have to nail down systems so they don’t do anything different.

Consequently, existing skills and competencies are only seen as a constraint on innovation/learning. They are never seen as a source.

Ironically, the same problem arises in the view of students held by the teaching academics that are disparaged by central L&T etc.

The difficulties

The very notion of something being “unanalyzable” would be very difficult for many involved in University management and information technology to accept. Let alone deciding to use it as a foundation for the design of systems.

Summary of the paper

Introduction

Traditional approaches for designing information systems are based on “a set of guidelines” about how best to use IT in a competitive environment and “a planning and implementation strategy” (p. 297).

However, the “wealth of ‘how to build an SIS’ recipes” during the 1990s failed to “yield a commensurate number of successful cases” at least not measured against the rise of systems in the 1980s. Reviewing the literature suggests a number of reasons, including

  • The theoretical literature emphasises rational assessment by top management as the means for strategy formulation, ignoring alternative conceptions from the innovation literature that value learning more than thinking, and experimentation as a means for revealing new directions.
  • Examining precedent-setting SISs suggests that serendipity, reinvention and other factors were important in their creation. These are missing from the rational approach.

So there are empirical and theoretical grounds for a new kind of guidelines for SIS design.

Organisations should ask

  1. Does it pay to be innovative?
  2. Are SISs offering competitive advantage or are they competitive necessity?
  3. How can a firm implement systems that are not easily copied and thus generate returns?

In terms of e-learning this applies

the paradox of micro-economics: competition tends to force standardization of solutions and equalization of production and coordination costs among participants.

i.e. the pressures to standardise.

The argument is that an SIS must be based on new practical and conceptual foundations

  • Basing an SIS on something that can’t be analysed, like organisational culture, will help avoid easy imitation. Leveraging the unique sources of practice and know-how at the firm and industry level can be the source of sustained advantage.
  • SIS development should be closer to prototyping and engaging with end-users’ ingenuity than has been realised.

    The capability of integrating unique ideas and practical design solutions at the end-user level turns out to be more important than the adoption of structured approaches to systems development or industry analysis (Schoen 1979; Ciborra and Lanzara, 1990)

Questionable advantage

During the 1980s a range of early adopters of strategic information systems (SISs) – think old style airline reservation systems – arose. These systems brought benefits to some organisations and bankruptcy to those that didn’t adopt. This gave rise to a range of frameworks for identifying SISs.

I’m guessing some of these contributed to the rise of ERP systems.

But the history of those cited success stories suggests that SISs only provide an ephemeral advantage before being copied. One study suggests 92% of systems followed industry-wide trends. Only three were original.

I imagine the percentage in university e-learning would be significantly higher. i.e. you can’t get fired if you implement an LMS (or an eportfolio).

To avoid the imitation problem there are suggestions to figure out the lead time for competitors to copy. But that doesn’t avoid the problem, especially given the rise of consultants and services to help competitors overcome it.

After all, if every university can throw millions of dollars at Accenture etc they’ll all end up with the same crappy systems.

Shifts in model of strategic thinking and competition

This is where the traditional approaches to strategy formulation get questioned.

i.e. “management should first engage in a purely cognitive process” that involves

  1. appraise the environment (e.g. SWOT analysis)
  2. identify success factors/distinctive competencies
  3. translate those into a range of competitive strategy alternatives
  4. select the optimal strategy
  5. plan it in sufficient detail
  6. implement

At this stage I would add “fail to respond to how much the requirements have changed” and start over again as you employ new senior leadership

This model is seen in most SIS models.

Suggests that in reality actual strategy formulation involves incrementalism, muddling through, myopic and evolutionary decision making. “Structures tend to influence strategy formulation before they can be impacted by the new vision” (p. 300)

References Mintzberg (1990) to question this school of thought in 3 ways

  1. Assumes that the environment is highly predictable and events unfold in predicted sequences, when in fact implementation surprises happen, resulting in a clash between inflexible plans and the need for revision.
  2. Assumes that the strategist is an objective decision maker not influenced by “frames of reference, cultural biases, or ingrained, routinized ways of action” (p. 301). Contrary to a raft of research.
  3. Strategy is seen as an intentional design process rather than as learning “the continuous acquisition of knowledge in various forms”. Quotes a range of folk to argue that strategy must be based on effective adaptation and learning involving both “incremental, trial-and-error learning, and radical second-order learning” (p. 301)

The models of competition implicit in SIS frameworks tend to rely on theories of business strategy from industrial organisation economics. i.e. returns are determined by industry structure. To generate advantage a firm must change the structural characteristics by “creating barriers to entry, product differentiation, links with suppliers” (p. 301).

There are alternative models

  • Chamberlin’s (1933) theory of monopolistic competition

    Firms are heterogeneous and compete on resource and asset differences – “technical know-how, reputation, ability for teamwork, organisational culture and skills, and other ‘invisible assets’ (Itami, 1987)” (p. 301)

    Differences enable high return strategies. You compete by cultivating unique strengths and capabilities and defending against imitation.

  • Schumpeter’s take based on innovation in product, market or technology

    Innovation arises from creative destruction, not strategic planning. The ability to guess, learn and luck appear to be the competitive factors.

Links these with Mintzberg’s critique of rational analytic approaches and identifies two themes in business strategy

  1. Rational analytic

    Formulate strategy in advance based on industry analysis. Plan and then implement. Gains advantage relative to firms in the same industry structure.

  2. Tinkering (my use of the phrase)

    Strategy difficult to plan before the fact. Advantage arises from exploiting unique characteristics of the firm and unleashing its innovating capabilities

Reconsidering the empirical evidence

Turns to an examination of four well-known SISs based on the two themes and other considerations from above. These “cases emphasize the discrepancy between ideal plans for an SIS and the realities of implementation” (p. 302), i.e.

… by one of the business units. The system was not developed according to a company-wide strategic plan; rather, it was the outcome of an evolutionary, piecemeal process that included the ingenious tactical use of systems already available.

i.e. bricolage. And even more revealing

the conventional MIS unit was responsible not only for initial neglect of the new strategic applications within McKesson, but also, subsequently, for the slow pace of company-wide learning about McKesson’s new information systems

Another system “was supposed to address an internal inefficiency” (p. 303) not some grand strategic goal.

And further

The most frequently cited SIS successes of the 1980s, then, tell the same story. Innovative SISs are not fully designed top-down or introduced in one shot; rather, they are tried out through prototyping and tinkering. In contrast, strategy formulation and design take place in pre-existing cognitive frames and organizational contexts that usually prevent designers and sponsors from seeing and exploiting the potential for innovation. (p. 303)

New foundations for SIS design

SIS development must be treated as an innovation process. The skills/competencies in an organisation are both a source and a constraint on innovation. The aim is to create knowledge.

New knowledge can be created in two non-exclusive ways

  1. Tinkering.

    Rely on local information and routine behaviour (learning by doing, incremental decision making and muddling through).

    Accessing more diverse and distant information, when an adequate level of competence is not present, would instead lead to errors and further divergence from optimal performance (Heiner, 1983) (p. 304)

    People close to the operational level have to be able to tinker to solve new problems. “local cues from a situation are trusted and exploited in a somewhat unreflective way, aiming at ad hoc solutions by heuristics rather than high theory”

    The value of this approach is to keep development of an SIS close to the competencies of the organisation and ongoing fluctuations.

  2. Radical learning

    “entails restructuring the cognitive and organisational backgrounds that give meaning to the practices, routines and skills at hand” (p. 304). It requires more than analysis and requirements specifications. Aims at restructuring the context of both business policy and systems development. Requires “intervening in situations and designing-in-action”.

    The change in context allows new ways of looking at the capabilities and devising new strategies. The sheer difference becomes difficult to imitate.

SIS planning by oxymorons

Time to translate those theoretical observations into practical guidelines.

Argues that the way to develop an SIS is to proceed by oxymoron. Fusing “opposites in practice and being exposed to the mismatches that bound to occur” (p. 305). Defines 7:

  • 4 to bolster incremental learning
    1. Value bricolage strategically
    2. Design tinkering

      This is important

      Activities, settings, and systems have to be arranged so that invention and prototyping by end-users can flourish, together with open experimentation (p. 305)

      Set up the organisation to favour local innovation, e.g. ad hoc project teams, ethnographic studies.

    3. Establish systematic serendipity

      Open experimentation results in largely incomplete designs, the constant intermingling of implementation and refinement, concurrent or simultaneous conception and execution – NOT sequential

      An ideal context for serendipity to emerge and lead to unexpected solutions.

    4. Thrive on gradual breakthroughs.

      In a fluctuating environment the ideas that arise are likely to include those that don’t align with established organisational routines. The raw material for innovation. “management should appreciate and learn about such emerging practices”

  • Radical learning and innovation
    1. Practice unskilled learning

      Radically innovative approaches may be seen as incompetent when judged by old routines and norms. Management should value this behaviour as an attempt to unlearn old ways of thinking and doing. It’s where new perspectives arise.

    2. Strive for failure

      Going for excellence suggests doing better what you already do, which generates routinized and efficient systems – the competency trap. Creative reflection over failures can suggest novel ideas and designs, along with the recognition of discontinuities and flex points.

    3. Achieve collaborative inimitability

      Don’t be afraid to collaborate with competitors. Expose the org to new cultures and ideas.

These seven oxymorons can represent a new “systematic” approach for the establishment of an organizational environment where new information—and thus new systems can be generated. Precisely because they are paradoxical, they can unfreeze existing routines, cognitive frames and behaviors; they favor learning over monitoring and innovation. (p. 306)

References

Ciborra, C. (1992). From thinking to tinkering: The grassroots of strategic information systems. The Information Society, 8(4), 297–309.

Needed updates to cc_attrib.pl

The following is a list of updates I need to make to a perl script I wrote last year that helps me properly attribute the Creative Commons licenced Flickr photos I use in presentations. This list arises from preparing the welcome video for this year’s course. Most, if not all, of the updates are to make it easier to use, prevent the chance of “spam”-like behaviour, and deal with apparent reliability issues with the Flickr API.

Parse the slides file – ignore comments

As I discover images I want to use in a presentation, I maintain a text file with the details as follows

1,http://www.flickr.com/photos/rameshng/5930493923/  # Welcome picture Welcome.jpg
2,http://www.flickr.com/photos/rameshng/5930493923/  # Welcome picture Welcome.jpg

The script doesn’t parse this file yet. Also, the “comments” approach is a new thing that appears useful for tracking; the script should ignore the comments when parsing.
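
The parsing itself should be trivial – something like the following sketch. cc_attrib.pl is Perl; this is just an illustration of the intended behaviour (shown in JavaScript for consistency with the MAV sketches above).

    // Sketch of parsing the slides file: "slideNumber,flickrURL  # optional comment"
    var fs = require("fs");

    function parseSlidesFile(path) {
      return fs.readFileSync(path, "utf8")
        .split("\n")
        .map(function (line) { return line.replace(/#.*$/, "").trim(); }) // strip comments
        .filter(function (line) { return line.length > 0; })              // skip blank lines
        .map(function (line) {
          var parts = line.split(",");
          return { slide: Number(parts[0].trim()), url: parts[1].trim() };
        });
    }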

Track successful comments

One of the main tasks of the script is to post an acknowledgement comment to the Flickr page for an image. This morning the Flickr API would successfully post these comments to some pages, and not others. This meant a manual check to see which worked and which didn’t, removing those that did work from the script, and trying again. Had to do this 3 times.

Would be useful if the script tracked which comments were successfully made and didn’t try to make another comment on those. Don’t want to start spamming.
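
One possible approach is to keep a small local file of the photo pages that have already been commented on, and skip those on later runs. A sketch (the file name and format are my assumptions, not something the script currently does):

    // Sketch: remember which Flickr photo pages already have an acknowledgement
    // comment, so re-runs after partial API failures don't post duplicates.
    var fs = require("fs");
    var DONE_FILE = "commented.json"; // assumed tracking file

    function loadDone() {
      if (!fs.existsSync(DONE_FILE)) {
        return [];
      }
      return JSON.parse(fs.readFileSync(DONE_FILE, "utf8"));
    }

    function markDone(done, url) {
      if (done.indexOf(url) === -1) {
        done.push(url);
        fs.writeFileSync(DONE_FILE, JSON.stringify(done, null, 2));
      }
    }

    // Usage: skip any photo URL already in loadDone(); call markDone() only after
    // the Flickr API reports the comment was posted successfully.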

Track all images used

Following on from that, I’m wondering whether the script should track all images ever run through the script. There’s a good chance I might use an image in more than one presentation associated with a course, not sure I’d want to make the same comment again. Perhaps I should. If I used the image in a research presentation – very different from the course – perhaps I should make a new comment.