Leadership as defining what’s successful

After spending a few days visiting friends and family in Central Queensland – not to mention enjoying the beach – a long 7+ hour drive home provided an opportunity for some thinking. I’ve long had significant qualms about the notion of leadership, especially as it is increasingly being understood and defined by the ongoing corporatisation of universities and schools. The rhetoric is increasingly strong in the school sector, with the current fashion for assuming that Principals can be the saviours of schools that have broken free from the evils of bureaucracy. I even work within an institution where a leadership research group is quite active amongst the education faculty.

On the whole, my experience of leadership in organisations has been negative. At best, the institution bumbles along in spite of bad leadership. I’m wondering whether questioning this notion of leadership might form an interesting future research agenda. The following is an attempt to make concrete some thinking from the drive home, spark some comments, and set me up for some more (re-)reading. It’s an ill-informed mind dump, sparked somewhat by some early experiences on return from leave.

[Image: Fisherman’s beach by David T Jones, on Flickr]

In the current complex organisational environment, I’m thinking that “leadership” is essentially the power to define what success is, both prior to and after the fact. I wonder whether any apparent success attributed to the “great leader” is solely down to how they have defined success? I’m also wondering how much of that success is due to less than ethical or logical definitions of success?

The definition of success prior to the fact is embodied in the model of process currently assumed by leaders, i.e. teleological processes, where the great leader must define some ideal future state (e.g. adoption of Moodle, Peoplesoft, or some other system; an organisational restructure that creates “one university”; or, perhaps even worse, a new 5-year strategic plan) behind which the weight of the institution will then be thrown. All roads and work must lead to the defined point of success.

This is the Dave Snowden idea of giving up the evolutionary potential of the present for the promise of some ideal future state. A point he’ll often illustrate with this quote from Seneca

The greatest loss of time is delay and expectation, which depend upon the future. We let go the present, which we have in our power, and look forward to that which depends upon chance, and so relinquish a certainty for an uncertainty.

Snowden’s use of this quote comes from the observation that some systems/situations are examples of Complex Adaptive Systems (CAS). These are systems where traditional expectations of cause and effect don’t hold. When you intervene in such systems you cannot predict what will happen, only observe it in retrospect. In such systems the idea that you can specify up front where you want to go is little more than wishful thinking. So defining success – in these systems – prior to the fact is a little silly. It calls into question the assumptions of such leadership, including the assumption that leaders can make a difference.

So when the Executive Dean of a Faculty – that includes programs in information technology and information systems – is awarded “ICT Educator of the Year” for the state because of the huge growth in student numbers, is it because of the changes he’s made? Or is it because he was lucky enough to be in power at (or just after) the peak of the IT boom? The assumption is that this leader (or perhaps his predecessor) made logical contributions and changes to the organisation to achieve this boom in student numbers. Or perhaps they made changes simply to enable the organisation to be better placed to handle and respond to the explosion in demand created by external changes.

But perhaps, rather than this single reason for success (great leadership), there were instead simply a large number of small factors – with no central driving intelligence or purpose – that enabled this particular institution to achieve what it achieved. Similarly, when a few years later the same group of IT-related programs had few if any students, it wasn’t because this “ICT Educator of the Year” had failed. Nor was it because of any other single factor, but instead hundreds and thousands of small factors, both internal and external (some larger than others).

The idea that there can be a single cause (or a single leader) for anything in a complex organisational environment seems to be faulty. But because it is demanded of them, leaders must spend more time attempting to define success and convince people of it. In essence then, successful leadership becomes more about your ability to define success and promulgate wide acceptance of that definition.

KPIs and accountability galloping to help

This need to define and promulgate success is aided considerably by simple numeric measures. The number of student applications; DFW rates; numeric responses on student evaluation of courses – did you get 4.3?; journal impact factors and article citation metrics; and many, many more. These simple figures make it easy for leaders to define specific perspectives on success. This is problematic, and its many problems are well known. For example,

  • Goodhart’s law – “When a measure becomes a target, it ceases to be a good measure.”
  • Campbell’s law – “The more any quantitative social indicator (or even some qualitative indicator) is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor.”
  • the Lucas critique.

For example, you have the problem identified by Tutty et al. (2008) where, rather than improve teaching, institutional quality measures “actually encourage inferior teaching approaches” (p. 182). It’s why you have the LMS migration project receiving an institutional award for quality etc., even though for the first few weeks of the first semester it was largely unavailable to students due to dumb technical decisions by the project team and required a large additional investment in consultants to fix.

Would this project have received the award if a senior leader in the institution (and the institution itself) weren’t heavily reliant upon the project being seen as a success?

Would the people involved in giving the project the award have reasonable reasons for thinking it award winning? Is success of the project and of leadership all about who defines what perspective is important?

Some other quick questions

Some questions for me to consider.

  • Where does this perspective sit within the plethora of literature on leadership and organisational studies? Especially within the education literature? How much of this is influenced by my earlier reading of “Managing without Leadership: Towards a Theory of Organizational Functioning”?
  • Given the limited likelihood of changing how leadership is practiced within the current organisational and societal context, how do you act upon any insights this perspective might provide? i.e. how the hell do I live (and heaven forbid thrive) in such a context?

References

Tutty, J., Sheard, J., & Avram, C. (2008). Teaching in the current higher education environment: perceptions of IT academics. Computer Science Education, 18(3), 171–185.

Oh Academia

It’s been one of those weeks in academia.

Earlier in the week the “I quit academia” meme went through my Twitter stream. Perhaps the closest this meme came to me was @marksmithers “On leaving academia” post.

That was about the day when I had to pull the pin on a grant application. Great idea, something we could do and would probably make a difference, but I didn’t have the skills (or the time) to get it over the line.

As it happened, I was reading Asimov’s “Caves of Steel” this week and came across the following quote about the “Medievalists”, a disaffected part of society:

people sometimes mistake their own shortcomings for those of society and want to fix the Cities because they don’t know how to fix themselves

On Tuesday night I was wondering if you could replace “Cities” with “Universities” and capture some of the drivers behind the “I quit academia” meme.

And then I attended a presentation today titled “Playing the research game well”. All the standard pragmatic tropes – know your H-Index (mine’s only 16), know the impact factor for journals, only publish in journals with an impact factor greater than 3, meta-analyses get cited more, etc.

It is this sort of push for KPIs and objective measures that is being created by the corporatisation of the Australian University sector. The sort of push which makes me skeptical of Mark’s belief

that higher education institutions can and will find their way back to being genuinely positive friendly and enjoyable places to work and study.

If anything these moves are likely to increase the types of experiences Mark reports.

So, I certainly don’t think that the Asimov quote applies. That’s not to say that academics don’t have shortcomings. I have many – the grant application non-submission is indicative of some – but by far the larger looming problem (IMHO) is the changing nature of universities.

That said, it hasn’t been all that bad this week. I did get a phone call from a student in my course. A happy student. Telling stories about how he has been encouraged to experiment with the use of ICTs in his teaching and how he’s found a small group at his work who are collaborating.

Which raises the question, if you’re not going to quit academia (like Leigh commented on Mark’s post, I too am “trapped in wage slavery and servitude”) do you play the game or seek to change it?

Or should we all just take a spoonful?

IRAC – Four questions for learning analytics interventions

The following is an early description of work arising out of The Indicators Project, an ongoing attempt to think about learning analytics. With IRAC (Information, Representation, Affordances and Change), Colin Beer, Damien Clark and I are trying to develop a set of questions that can guide the use of learning analytics to improve learning and teaching. The following briefly describes:

  • why we’re doing this;
  • some of our assumptions;
  • the origins of IRAC;
  • the four questions; and,
  • a very early and rough attempt to use the four questions to think about existing approaches to learning analytics.

Why?

The spark for this work comes from observations made in a presentation from last year. In summary, the argument is that learning analytics has become a management fashion/fad in higher education, which generally means most implementations of learning analytics are not likely to be very mindful and, in turn, are very likely to be limited in their impact on learning and teaching. This has much in common with the raft of expenditure on data warehouses some years ago, let alone examples such as graduate attributes, eportfolios, the LMS, open learning, learning objects etc. It would be nice to avoid this yet again.

There are characteristics of learning analytics that make it difficult to develop appropriate innovations that move beyond the faddish adoption of analytics. One of the major contributors is that the use of learning analytics encompasses many different bodies of literature, both within and outside learning and teaching. Many of these different bodies of literature have developed important insights that can directly help inform the use of learning analytics to improve learning and teaching. What’s worse is that early indications are that – not surprisingly – most institutional projects around learning analytics are apparently ignorant of the insights and lessons gained from this prior work.

In formulating IRAC – our four questions for learning analytics interventions – we’re attempting to help institutions consider the insights from this earlier work and thus enhance the quality of their learning analytics interventions. We’re also hoping that these four questions will inform our attempts to explore the effective use of learning analytics to improve learning and teaching. For me personally, I’m hoping this work can provide me with the tools and insights necessary to make my own teaching manageable, enjoyable and effective.

Assumptions

Perhaps the largest assumption underpinning the four questions is that the aim of learning analytics interventions is to encourage and enable action by a range of stakeholders. If no action (use) results from a learning analytics project, then there can’t be any improvement to learning and teaching. This is similar to the argument by Clow (2012) that the key to learning analytics is action in the form of appropriate interventions. Also, Elias (2011) describes two steps that are necessary for the advancement of learning analytics:

(1) the development of new processes and tools aimed at improving learning and teaching for individual students and instructors, and (2) the integration of these tools and processes into the practice of teaching and learning (p. 5)

Earlier work has found this integration into practice difficult. For example, Dawson & McWilliam (2008) identify as a significant challenge for learning analytics the ability “to readily and accurately interpret the data and translate such findings into practice” (p. 12). Adding further complexity is the observation from Harmelen & Workman (2012) that learning analytics are part of a socio-technical system where success relies as much on “human decision-making and consequent action…as the technical components” (p. 4). The four questions proposed here aim to aid in the design of learning analytics interventions that are integrated into the practice of learning and teaching.

Audrey Watters’ Friday night rant offers a slightly similar perspective more succinctly and effectively.

Foundations

In thinking about the importance of action and of learning analytics tools being designed to aid action we were led to the notion of Electronic Performance Support Systems (EPSS). EPSS embody a “perspective on designing systems that support learning and/or performing” (Hannafin et al., 2001, p. 658). EPSS are computer-based systems that “provide workers with the help they need to perform certain job tasks, at the time they need that help, and in a form that will be most helpful” (Reiser, 2001. p. 63).

All well and good. In reading about EPSS we came across the notion of the performance zone. In framing the original definition of an EPSS, Gery (1991) identifies the need for people to enter the performance zone, defined as the metaphorical area where all of the necessary information, skills, dispositions, etc. come together to ensure successful completion of a task (Gery, 1991). For Villachica, Stone & Endicott (2006) the performance zone “emerges with the intersection of representations appropriate to the task, appropriate to the person, and containing critical features of the real world” (p. 550).

This definition of the performance zone is a restatement of Dickelman’s (1995) three design principles for cognitive artifacts drawn from Norman’s (1993) book “Things that make us smart”. In this book, Norman (1993) argues “that technology can make us smart” (p. 3) through our ability to create artifacts that expand our capabilities. At the same time, however, Norman (1993) argues that the “machine-centered view of the design of machines and, for that matter, the understanding of people” (p. 9) results in artifacts that “more often interferes and confuses than aids and clarifies” (p. 9).

Given our recent experience with institutional e-learning systems this view resonates quite strongly as a decent way of approaching the problem.

While the notions of EPSS, the performance zone and Norman’s (1993) insights into the design of cognitive artifacts form the scaffolding for the four questions, additional insight and support for each question arises from a range of other bodies of literature. The description of the four questions given below includes very brief descriptions of some of this literature. There are significantly more useful insights to be gained, and extending this will form a part of our ongoing work.

Our proposition is that effective consideration of these four questions with respect to a particular context, task and intervention will help focus attention on factors that will improve the implementation of a learning analytics intervention. In particular, it will increase the chances that the intervention will be integrated into practice and subsequently have a positive impact on the quality of the learning experience.

IRAC – the four questions

The following table summarises the four questions with a bit more of an expansion below.

  • Information – Is all the relevant information, and only the relevant information, available and being used appropriately?
  • Representation – Does the representation of this information aid the task being undertaken?
  • Affordances – Are there appropriate affordances for action?
  • Change – How will the information, representation and the affordances be changed?
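To make this a little more concrete, the following is a minimal sketch (in Python, purely illustrative and not part of the IRAC work itself) of how a design team might record their consideration of each question for a proposed intervention. All names and example data are hypothetical.

```python
# Illustrative only: the four IRAC questions as a checklist a design team
# might fill in for a proposed learning analytics intervention.
from dataclasses import dataclass, field

IRAC_QUESTIONS = {
    "Information": "Is all the relevant information and only the relevant "
                   "information available and being used appropriately?",
    "Representation": "Does the representation of this information aid the "
                      "task being undertaken?",
    "Affordances": "Are there appropriate affordances for action?",
    "Change": "How will the information, representation and the affordances "
              "be changed?",
}

@dataclass
class InterventionReview:
    """Record of how a proposed intervention addresses each IRAC question."""
    name: str
    answers: dict = field(default_factory=dict)

    def consider(self, label: str, notes: str) -> None:
        """Record the team's notes against one of the four questions."""
        if label not in IRAC_QUESTIONS:
            raise ValueError(f"Unknown IRAC question: {label}")
        self.answers[label] = notes

    def unaddressed(self) -> list:
        """Return the IRAC questions not yet considered by the design."""
        return [q for q in IRAC_QUESTIONS if q not in self.answers]

# Hypothetical example: a data warehouse dashboard that has considered
# information and representation, but not affordances or change.
review = InterventionReview("Data warehouse dashboard")
review.consider("Information", "Integrates LMS activity and student records.")
review.consider("Representation", "Dashboard sits outside the LMS context.")
print(review.unaddressed())  # ['Affordances', 'Change']
```

The point of the sketch is simply that the four questions are a checklist to be worked through, not a score to be computed.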

Information

While there is an “information explosion”, the information we collect is usually about “those things that are easiest to identify and count or measure” but which may have “little or no connection with those factors of greatest importance” (Norman, 1993, p. 13). This leads to Verhulst’s observation (cited in Bollier & Firestone, 2010) that “big data is driven more by storage capabilities than by superior ways to ascertain useful knowledge” (p. 14). Potential considerations include: Is the required information technically and ethically available for use? How is the information cleaned, analysed and manipulated during use? Is the information sufficient to fulfill the needs of the task? (And many, many more.)

Representation

A bad representation will turn a problem into a reflective challenge, while an appropriate representation can transform the same problem into a simple, straightforward task (Norman, 1993). Representation has a profound impact on design work (Hevner et al., 2004), particularly on the way in which tasks and problems are conceived (Boland, 2002). How information is represented can make a dramatic difference in the ease of a task (Norman, 1993). In order to maintain performance, it is necessary for people to be “able to learn, use, and reference access necessary information within a single context and without breaks in the natural flow of performing their jobs.” (Villachica, Stone, & Endicott, 2006, p. 540). Considerations here include: How easy is it for people to understand and analyse the implications of the findings from learning analytics? (And many, many more.)

Affordances

A poorly designed or constructed artifact can greatly hinder its use (Norman, 1993). For an application of information technology to have a positive impact on individual performance it must be utilised and be a good fit for the task it supports (Goodhue & Thompson, 1995). Human beings tend to use objects in “ways suggested by the most salient perceived affordances, not in ways that are difficult to discover” (Norman, 1993, p. 106). The nature of such affordances is not inherent to the artifact, but is instead co-determined by the properties of the artifact in relation to the properties of the individual, including the goals of that individual (Young et al., 2000). Glassey (1998) observes that through the provision of “the wrong end-user tools and failing to engage and enable end users” even the best implemented data warehouses “sit abandoned” (p. 62). The consideration here is whether or not the tool provides support for action that is appropriate to the context, the individuals and the task.

Change

The idea of evolutionary development has been central to the theory of decision support systems (DSS) since its inception in the early 1970s (Arnott & Pervan, 2005). Rather than being implemented in a linear or parallel fashion, development occurs through continuous action cycles involving significant user participation (Arnott & Pervan, 2005). Beyond the need for the systems or tools to undergo change, there is a need for the information being captured to change. Buckingham Shum (2012) identifies the risk that research and development based on data already being gathered will tend to perpetuate the existing dominant approaches through which the data was generated. Another factor is Bollier and Firestone’s (2010) observation that once “people know there is an automated system in place, they may deliberately try to game it” (p. 6). Finally, there is the observation that universities are complex systems (Beer et al., 2012). Complex systems require reflective and adaptive approaches that seek to identify and respond to emergent behaviour in order to stimulate increased interaction and communication (Boustaini, 2010). Potential considerations here include: Who is able to implement change? Which of the three prior questions can be changed? How radical can those changes be? Is a diversity of change possible?

Using the four questions

It is not uncommon for Australian Universities to rely on a data warehouse system to support learning analytics interventions. This is in part due to the observation that data warehouses enable significant consideration of the information (question 1). This is not surprising given that the origins and purpose of data warehouses were to provide an integrated set of databases to provide information to decision makers (Arnott & Pervan, 2005). Data warehouses provide the foundation for learning analytics. However, the development of data warehouses can be dominated by IT departments with little experience with decision support (Arnott & Pervan, 2005) and a tendency to focus on technical implementation issues at the expense of user experience (Glassey, 1998).

In terms of consideration of the representation (question 2), the data warehouse generally provides reports and dashboards for ad hoc analysis and standard business measurements (van Dyk, 2008). In a learning analytics context, dashboards from a data warehouse will typically sit outside of the context in which learning and teaching occurs (e.g. the LMS). For a learner or teacher to consult the data warehouse requires the individual to break away from the LMS, open up another application and expend cognitive effort in connecting the dashboard representation with activity from the LMS. Data warehouses also provide a range of query tools that offer a swathe of options and filters for the information they hold. While such power potentially offers good support for change (question 4), that power comes with an increase in difficulty. At least one institution mandates the completion of training sessions to assure competence with the technology and ensure the information is not misinterpreted. This necessity could be interpreted as evidence of limited consideration of representation (question 2) and affordances (question 3). The source of at least some of these limitations arises from the origins of data warehouse tools in the management of businesses, rather than learning and teaching.

Harmelen and Workman (2012) use Purdue University’s Course Signals and Desire2Learn’s Student Success System (S3) as two examples of the more advanced learning analytics applications. The advances offered by these systems arise from greater consideration being given to the four questions. In particular, both tools provide a range of affordances (question 3) for action on the part of teaching staff. S3 goes so far as to provide a “basic case management tool for managing interventions” (Harmelen & Workman, 2012, p. 12), with future intentions of using this feature to measure intervention effectiveness. Course Signals offers advances in terms of information (question 1) and representation (question 2) by moving beyond simple tabular reporting of statistics, toward a traffic lights system based on an algorithm drawing on 44 different indicators from a range of sources to predict student risk status. While this algorithm has a history of development, Essa and Ayad (2012) argue that the reliance on a single algorithm contains “potential sources of bias” (n.p.) as it is based on the assumptions of a particular course model from a particular institution. Essa and Ayad (2012) go on to describe S3’s advances such as an ensemble modeling strategy that supports model tuning (information and change); inclusion of social network analysis (information); and a range of different visualisations, including interactive visualisations allowing comparisons (representation, affordance and change).
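As an aside, the “traffic lights” idea is easy to illustrate with a toy example. The following sketch is purely hypothetical – it is not Purdue’s actual algorithm (whose 44 indicators are not reproduced here) – and the indicator names, weights and thresholds are all invented for illustration.

```python
# A toy "traffic light" risk indicator in the spirit of Course Signals.
# Indicators are normalised so 0 means fine and 1 means at risk.

def risk_light(indicators, weights=None):
    """Combine normalised indicators into a green/amber/red status."""
    weights = weights or {name: 1.0 for name in indicators}
    score = (sum(indicators[name] * weights[name] for name in indicators)
             / sum(weights.values()))
    if score < 0.33:
        return "green"
    elif score < 0.66:
        return "amber"
    return "red"

# Hypothetical, normalised indicators for one student.
student = {
    "low_lms_logins": 0.8,      # well below class average activity
    "missed_assessment": 1.0,   # missed the first assignment
    "prior_results_risk": 0.0,  # strong prior academic results
}
print(risk_light(student))  # amber (score = 1.8 / 3 = 0.6)
```

Even this toy version makes Essa and Ayad’s (2012) point visible: the weights and thresholds embed assumptions about a particular course model, which is exactly the potential source of bias they identify in relying on a single algorithm.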

References

Arnott, D., & Pervan, G. (2005). A critical analysis of decision support systems research. Journal of Information Technology, 20(2), 67–87. doi:10.1057/palgrave.jit.2000035

Beer, C., Jones, D., & Clark, D. (2012). Analytics and complexity : Learning and leading for the future. In M. Brown, M. Hartnett, & T. Stewart (Eds.), Future Challenges, Sustainable Futures. Proceedings of ascilite Wellington 2012 (pp. 78–87). Wellington, NZ.

Boland, R. J. (2002). Design in the punctuation of management action. In R. Boland (Ed.), . Weatherhead School of Management.

Bollier, D., & Firestone, C. (2010). The promise and peril of big data. Washington DC: The Aspen Institute.

Buckingham Shum, S. (2012). Learning Analytics. Moscow: UNESCO. http://iite.unesco.org/pics/publications/en/files/3214711.pdf

Clow, D. (2012). The learning analytics cycle. Proceedings of the 2nd International Conference on Learning Analytics and Knowledge – LAK’12, 134–138. doi:10.1145/2330601.2330636

Dawson, S., & McWilliam, E. (2008). Investigating the application of IT generated data as an indicator of learning and teaching performance. Canberra: Australian Learning and Teaching Council.

Elias, T. (2011). Learning Analytics: Definitions, Processes and Potential. http://learninganalytics.net/LearningAnalyticsDefinitionsProcessesPotential.pdf.

Essa, A., & Ayad, H. (2012). Student success system: risk analytics and data visualization using ensembles of predictive models. In Proceedings of the 2nd International Conference on Learning Analytics and Knowledge – LAK’12 (pp. 2–5). Vancouver: ACM Press.

Glassey, K. (1998). Seducing the End User. Communications of the ACM, 41(9), 62–69.

Goodhue, D., & Thompson, R. (1995). Task-technology fit and individual performance. MIS quarterly, 19(2), 213. doi:10.2307/249689

Harmelen, M. van, & Workman, D. (2012). Analytics for Learning and Teaching. http://publications.cetis.ac.uk/2012/516

Hevner, A., March, S., Park, J., & Ram, S. (2004). Design science in information systems research. MIS Quarterly, 28(1), 75–105.

Norman, D. A. (1993). Things that make us smart: Defending human attributes in the age of the machine. Reading, MA: Addison Wesley.

Van Dyk, L. (2008). A data warehouse model for micro-level decision making in higher education. The Electronic Journal of e-Learning, 6(3), 235–244.

Villachica, S., Stone, D., & Endicott, J. (2006). Performance Support Systems. In J. Pershing (Ed.), Handbook of Human Performance Technology (3rd ed., pp. 539–566). San Francisco, CA: John Wiley & Sons.

Schools and computers: Tales of a digital romance

It’s the last week of semester, EDC3100 ICTs and Pedagogy is drawing to a close and I’m putting together the last bit of activities/resources for the students in the course. Most are focused on the last assignment and in particular a final essay that asks them to evaluate their use of ICTs while on their three week Professional Experience where they were in schools and other locations teaching. Perhaps the most challenging activity I’d like them to engage in is questioning their assumptions around learning, teaching and the application of ICTs. A particularly challenging activity given that much of what passes for the use of ICTs – including much of my own work – in formal education hasn’t been very effective at questioning assumptions.

As one of the scaffolds for this activity I am planning to point the students toward Bigum (2012) as one strategy to illustrate questioning of assumptions. The following is a summary of my attempt to extract some messages from Bigum (2012) that I think are particularly interesting in the context of EDC3100. It also tracks some meanderings around related areas of knowledge.

Background

The rapid pace of change in computing is illustrated with some stats from Google’s CEO – every two days the world produces more information “than had been produced in total from the origin of the species to 2003” (p. 16).

Yet, if you go back 30 years, schools had more computers than the general community – a situation that is now reversed. Later in the paper, Finger and Lee (2010) are cited as finding:

For the class of 30 children the total home expenditure for computing and related technologies was $438,200. The expenditure for the classroom was $24,680. Even allowing for the sharing in families, the difference between the two locations is clearly significant.

Rather than transform or revolutionise the processes and outcomes of schooling, “it is hard to suggest that anything even remotely revolutionary has actually taken place”.

But once schools adjusted to these initial perturbations, schooling continued on much as it always had. More than this, schools learnt how to domesticate new technologies (Bigum 2002), or as Tyack and Cuban (1995, p. 126) put it, “computers meet classrooms, classrooms win.”

This observation fits with the expressed view that

schools have consistently attempted to make sense of “new” technologies by locating them within the logics and ways of doing things with which schools were familiar. (p. 17)

and with the broader notion of the “grammar of school” and some of Papert’s observations – in particular, the interpretation of the computer/ICTs as a “teaching machine” rather than other interpretations (in Papert’s case, constructionist ones).

(Side note: in revisiting Papert’s “Why School Reform is Impossible” I’ve become more aware of this distinction Papert made

“Reform” and “change” are not synonymous. Tyack and Cuban clinched my belief that the prospects really are indeed bleak for deep change coming from deliberate attempts to impose a specific new form on education. However, some changes, arguably the most important ones in social cultural spheres, come about by evolution rather than by deliberate design — by what I am inspired by Dan Dennett (1994) to call “Darwinian design.”

This has some significant implications for my own thinking that I need to revisit.)

Budding romance

The entry of micro-computers into schools around the 80s was in part enabled by their similarity to calculators that had been used since the mid 1970s.

The similarities allowed teachers to imagine how to use the new technologies in ways consistent with the old… for a technology to find acceptance it has to generate uses.

which led to the development of applications for teaching and administrative work.

This led to the rise of vendors selling applications and the marketing of computers as “an unavoidable part of the educational landscape of the future”. At this stage, computers may have become like television, radio and video players – other devices already in classrooms (connecting somewhat here with Papert’s “computers as teaching machine” comment above). But a point of difference arose from the increasing spread of computers into other parts of society as solutions to a range of problems. ICTs were increasingly linked “with such seemingly desirable characteristics as ‘improvement’, ‘efficiency’ and, by extension, educational status” (p. 19).

Perhaps the strongest current indicator of this linkage (at least for EDC3100 students) is the presence of the ICT Capability in the Australian Curriculum. Not something that has happened with the other “teaching machines”.

Hence it became increasingly rational/obvious that schools had to have computers. What was happening with computers outside schools became an “evidence surrogate” for schools, i.e.

if ICTs are doing so much for banking, newspapers, or the military, it stands to reason that they are or can do good things in schooling. (p. 20)

This leads to comparison studies; each new wave of ICTs (e.g. iPads) comes hand in hand with a new raft of comparison studies. Studies that are “like comparing oranges with orangutans”.

However, despite the oft-cited “schools + computers = improvement” claim, what computers are used for in schools is always constrained by dominant beliefs about how schools should work. (p. 20)

Domestic harmony

This is where the “grammar of school” or the schema perspective comes in.

Seeing new things in terms of what we know is how humans initially make sense of the new. When cars first appeared they were talked about as horseless carriages. The first motion pictures were made by filming actors on a stage and so on.

School leaders and teachers make decisions about which technologies fit within schools’ current routines and structures. If there is no fit, then ban it. Not to mention that “the more popular a particular technology is with students the greater the chance it will be banned”.

While the adoption of ICTs into schools begins with an aim of improvement, it often ends up with “integrating them into existing routines, deploying them to meet existing goals and, generally, failing to engage with technologies in ways consistent with the world beyond the classroom” (p. 22).

Summarising the pattern

Schools enter a cycle of identifying, buying and domesticating the “next best thing” on the assumption that there will be improvements to learning. With the increasing time/cost of staying in this game, there are more attempts to measure the improvement. Factors that are not measurable get swept under the carpet.

The folly of looking for improvement

The focus on improvement “reduces much debate about computers in schools to the level of right/wrong; good/bad; improved/not improved”.

Beyond this is the idea that “ICTs change things”. Sproull and Kiesler’s (1991) research

clearly demonstrates that when you introduce a technology, a new way of doing things into a setting, things change and that seeking to “assess” the change or compare the new way of doing things with the old makes little sense

An approach that is holistic, that does not separate the social and the technological, allows a shift from looking at what has improved to looking to see what has changed. Changes that “may have very little to do with what was hoped for or imagined”.

Three different mindsets

This type of approach allows the two mindsets informing current debates/practice to be questioned. Those mindsets are:

  1. Embrace ICTs to improve schools

    This mindset sees schools doing well in preparing students for the future. The curriculum is focused on getting the right answer and teaching is focused on how to achieve this. Research here performs comparison studies looking for improvement, and the complexities of teaching with ICTs are embodied in concepts such as TPACK.

    This is the mindset that underpins much of what is in EDC3100.

  2. Schools cannot be improved, by ICTs or any other means.

    The idea that ICTs herald a change as significant as movable type. This connects with the de-schooling movement: schools, being based on a broadcast logic, will face the same difficulties facing newspapers, record companies etc. A mindset in which improving schools is a waste of time.

Bigum (2012) proposes a third, different mindset, summarised as:

  • Schools face real challenges and need to change.
  • Rather than replace the current single solution with another, there is a need to “encourage a proliferation of thinking about and doing school differently”.
  • There is a need to focus on change and not measurement, on the social and not just the technical.
  • That this can help disrupt traditional relationships including those between: schools and knowledge, knowledge and children, children and teachers, and learners and communities.

References

Bigum, C. (2012). Schools and computers: Tales of a digital romance. In L. Rowan & C. Bigum (Eds.), Transformative Approaches to New Technologies and student diversity in futures oriented classrooms: Future Proofing Education (pp. 15–28). London: Springer.

One example of industrial e-learning as “on the web” not “of the web”

The following arises from some recent experiences with the idea of “minimum course sites” and this observation from @cogdog in this blog post

I have no idea if this is off base, but frankly it is a major (to me) difference of doing things ON the web (e.g. putting stuff inside LMSes) and doing things OF the web.

It’s also a query to see if anyone knows of an institution that has implemented a search engine across its institutional e-learning systems in a way that effectively allows users to search for resources in a course-centric way.

The symptom

There’s a push on from my current institution’s central L&T folk to develop a minimum course site standard. Some minimum set of services, buttons etc. that will achieve the nirvana of consistency. Everything will be the same.

The main espoused reason as to why this is a good thing is that the students have been asking for it. There has been consistent feedback from students that none of the course sites are the same.

The problem

Of course, the real problem isn’t that students want everything to be the same. The real problem is that they can’t find what they are looking for. Sure, if everything was the same then they might have some ideas about where to find things, but that has problems including:

  • The idea that every course site at a university can be structured the same is a misunderstanding of the diversity inherent in courses. Especially as people try to move away from traditional models such as lecture/tutorial etc.
  • The idea that one particular structure will be understandable/appropriate to all people also is questionable.
  • Even if all the sites are consistent and this works, it won’t solve the problem of when the student is working on a question about “universal design” and wants to find where that was mentioned amongst the many artefacts in the course site.

The solution

The idea that the solution to this problem is to waste huge amounts of resources in the forlorn attempt to achieve some vaguely acceptable minimum standards that is broadly applicable seems to be a perfect example of “doing things ON the web, rather than doing things OF the web”.

I can’t remember the last time I visited a large website and attempted to find some important information by navigating through the site structure. Generally, I – like I expect most people – come to a large site almost directly to the content I am interested in either through a link provided by someone or via a search engine.
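As a rough illustration of what “of the web” might mean here, a tiny inverted index over a course site’s artefacts is enough to answer the “where was universal design mentioned?” question above. This is a sketch only – it is not an existing Moodle feature, and the artefacts and names are invented.

```python
# A minimal sketch of course-centric search: a tiny inverted index over
# the artefacts in a single course site. Illustrative only.
import re
from collections import defaultdict

def tokenise(text):
    """Lowercase word tokens; a real system would also stem and stop-word."""
    return re.findall(r"[a-z0-9]+", text.lower())

def build_index(artefacts):
    """Map each token to the set of artefact titles containing it."""
    index = defaultdict(set)
    for title, body in artefacts.items():
        for token in tokenise(body):
            index[token].add(title)
    return index

def search(index, query):
    """Return artefacts containing every query token (a simple AND search)."""
    matches = [index.get(token, set()) for token in tokenise(query)]
    return set.intersection(*matches) if matches else set()

# Hypothetical artefacts from one course site.
artefacts = {
    "Week 3 lecture notes": "Universal design for learning and ICTs ...",
    "Assignment 1 rubric": "Criteria include pedagogy and universal design.",
    "Week 5 forum summary": "Discussion of TPACK and lesson planning.",
}

index = build_index(artefacts)
print(search(index, "universal design"))
# {'Week 3 lecture notes', 'Assignment 1 rubric'}
```

The hard part in an institutional setting isn’t the indexing; it’s getting access to the artefacts across the LMS and associated systems in the first place – which is rather the point of this post.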

Broader implications

To me the idea of solving this problem through minimum standards is a rather large indication of the shortcomings of industrial e-learning. Industrial e-learning is the label I’ve applied to the currently common paradigm of e-learning adopted by most universities. It’s techno-rational in its foundations and involves the planned management of large enterprise systems (be they open source or not). I propose that “industrial e-learning” is capable of, and concerned primarily with, “doing things ON the web, rather than doing things OF the web”.

Some potential contributing factors might include:

  1. Existing mindsets.
    At this institution, many of the central L&T folk come from a tradition of print-based distance education where consistency of appearance was a huge consideration. Many of these folk are perhaps not “of the web”.
  2. Limitations of the tools.
    It doesn’t appear that Moodle has a decent search engine, which is not surprising given the inspiration of its design and its stated intent of not being an information repository.
  3. The nature of industrial e-learning, its product and process.
    A key characteristic of industrial e-learning is a process that goes something like this
    1. Spend a long time objectively selecting an appropriate tool.
    2. Use that tool for a long time to recoup the cost of moving to the new tool.
    3. Aim to keep the tool as vanilla as possible to reduce problems with upgrades from the vendor.
      This applies to open source systems as much as proprietary systems.
    4. Employ people to help others learn how to best use the system to achieve their ends.
      Importantly, the staff employed are generally not there to help others learn how to “best achieve their ends”; the focus definitely tends to be on how to “best use the system to achieve their ends”.
    5. Any changes to the system have to be requested through a large-scale process that involves consensus amongst most people and the approval of the people employed in point 4.

    This means that industrial e-learning is set up to do things the way the chosen systems work. If you have to do something that isn’t directly supported by the system, it’s very, very hard. e.g. add a search engine to Moodle.

All of these make it very hard for industrial e-learning to be “doing things OF the web”.

Lessons for the meta-level of networked learning?

This semester I’m teaching EDU8117, Networked and Global Learning, one of the Masters level courses here at USQ. It’s been an interesting experience because I’m essentially supporting the design – a very detailed “constructive alignment” design – prepared by someone else. The following is a belated start of my plan to engage in the course at some level like a student. The requirement was to use one of a few provided quotes attempting to define either networked learning or global learning and link it to personal experience. A first step in developing a research article on the topic.

Networked learning

As a nascent/closet connectivist, networked learning is the term in this pair that is of most interest – though both are increasingly relevant to my current practice. All three of the quotes around networked learning spoke to various aspects of my experience; however, the Bonzo and Parchoma (2010, p. 912) quote really resonated, especially this part (my emphasis added):

that social media is a collection of ideas about community, openness, flexibility, collaboration, transformation and it is all user-centred. If education and educational institutions can understand and adopt these principles, perhaps there is a chance for significant change in how we teach and learn in formal and informal settings. The challenge is to discover how to facilitate this change.

At the moment I have yet to read the rest of the article – it is somewhat ironic that I am focusing on networked learning, whilst struggling with limited network access due to the limitations of a local telecommunications company – so I will have to assume that Bonzo and Parchoma are using this collection of ideas from social media as important ideas for networked learning.

What strikes me about this quote is that I think the majority of what passes for institutional support for networked learning – in my context I am talking about Australian Universities (though I believe there are significant similarities in universities across the world) – is failing, or at least struggling mightily, “to discover how to facilitate this change”.

This perspective comes from two main sources:

  1. my PhD thesis; and,
    The thesis argued that how universities tend to implement e-learning is completely wrong for the nature of e-learning and formulated an alternate design theory. Interestingly, a primary difference between the “wrong” (how they are doing it now) and the “right” (my design theory) way is how well they match (or don’t) Bonzo and Parchoma’s (2010) collection of ideas from social media.
  2. my recent experience starting work as a teaching academic at a new university.
    In my prior roles – through most of the noughties – I was in an environment where I had significant technical knowledge and access. This meant that when I taught I was able to engage in an awful lot of bricolage1. In the main because the “LMS” I was using was one that I had designed to be user-centered, flexible and open, and I still had the access to make changes.

    On arriving at my new institution, I am now just a normal academic user of the institutional LMS, which means I’m stuck with what I’m given. What I’ve been given – the “LMS” and other systems – are missing great swathes of functionality and there is no way I can engage in bricolage to transform an aspect of the system into something more useful or interesting.

Meta-networked learning

Which brings me to a way in which I’m interested in extending this “definition” of networked learning to a community. Typically networked learning – at least within an institutional setting – is focused on how the students and the teachers are engaging in networked learning. More specifically, how they are using the LMS and associated institutional systems (because you can get in trouble for using something different). Whilst this level of networked learning is important, and something I need to engage in as a teaching academic within an institution, I feel what I can do at this level is being significantly constrained because the meta-level of networked learning is broken.

I’m defining the meta-level of networked learning as how the network of people (teaching staff, support staff, management, students), communities, technologies, policies, and processes within an institution learn about how to implement networked learning. How the network of all these elements work (or not) together to enable the other level of networked learning.

Perhaps the major problem I see with the meta-level of networked learning is that it isn’t thought of as a learning process. Instead it is widely seen as the roll-out of an institutional, enterprise software system under the auspices of some senior member of staff. A conception that does not allow much space for being about “community, openness, flexibility, collaboration, transformation and it is all user-centred” (Bonzo and Parchoma, 2010, p. 912). Subsequently, I wonder “If education and educational institutions can understand and adopt these principles” (Bonzo and Parchoma, 2010, p. 912) and apply them to the meta-level of networked learning, then “perhaps there is a chance for significant change in how we teach and learn in formal and informal settings” (Bonzo and Parchoma, 2010, p. 912). As always, “The challenge is to discover how to facilitate this change” (Bonzo and Parchoma, 2010, p. 912). Beyond that, I wonder what impact such a change might have on the experience of the institution’s learners, teachers and other staff. Indeed, what impact it might have on the institutions.

References

Bonzo, J., & Parchoma, G. (2010). The Paradox of Social Media and Higher Education Institutions. Networked Learning: Seventh International Conference (pp. 912–918). Retrieved from http://lancaster.academia.edu/GaleParchoma/Papers/301035/The_Paradox_of_Social_Media_and_Higher_Education_Institutions

Hovorka, D., & Germonprez, M. (2009). Tinkering, tailoring and bricolage: Implications for theories of design. AMCIS’2009. Retrieved from http://aisel.aisnet.org/amcis2009/488

1 Hovorka and Germonprez (2009) cite Gabriel (2002) and Ciborra (2002) as describing bricolage as “as a way of describing modes of use characterized by tinkering, improvisation, and the resulting serendipitous, unexpected outcomes”.

Schemata and the source of dissonance?

The following is intended to be an illustration of one of the potential origins of the gap between learning technologists and educators. It picks up on the idea of schemata from this week’s study in one course and connects to my point about the dissonance between how educational technology is implemented in universities and what we know about how people learn.

I’m sure folk who have been around the education discipline longer than I will have seen this already. But it is a nice little activity and not one that I’d seen previously.

An experiment

Read the following paragraph and fill in the blanks. If you’re really keen add a comment below with what you got. Actually, gathering a collection of responses from a range of people would be really interesting.

The questions that p________ face as they raise ch________ from in_________ to adult are not easy to an _________. Both f______ and m________ can become concerned when health problems such as co_________ arise anytime after the e____ stage to later life. Experts recommend that young ch____ should have plenty of s________ and nutritious food for healthy growth. B___ and g____ should not share the same b______ or even be in the same r______. They may be afraid of the d_____.

Now, take a look at the original version of this paragraph.

Is there any difference between it and what you got? Certainly was for me.

Schemata

This exercise was introduced in a week that was looking at Piaget and other theories about how folk learn. In particular, it was used as an example of the role schemata play in how people perceive and process the world and what is happening within it.

I am a father of three wonderful kids. So, over the last 10+ years I’ve developed some significant schemata around raising kids. When I read the above paragraph, the words that filled the blanks for me were: parents, children, infant, answer, fathers, mothers,… and it was here that I first paused. None of my children really suffered from colic, so that didn’t spring to mind, but I started actively searching for ways I could make this paragraph fit the schemata that I had activated. i.e. I was thinking “parent”, so I was trying to make these things fit.

Schemata are mental representations of an associated set of perceptions, etc. They influence how you see what is going on.

I’m somewhat interested in seeing what words others have gotten from the above exercise, especially those without (recent) experience of parental responsibilities.

A difference of schemata

Learning technologists (or just plain innovative teachers) have significantly different schemata than your plain, everyday academic. Especially those that haven’t had much experience of online learning, constructivist learning, *insert “good” teaching practice of your choice*. Even within the population of learning technologists there is a vast difference in schemata.

Different schemata means that these folk see the world in very different ways.

A triumph of assimilation over accommodation

The on-going tendency of folk to say things like “online is no substitute for face-to-face teaching” (as in an article from the Australian newspaper’s higher education section) says something about their schemata and (to extend the (naive/simplistic) application of Piaget) the triumph of assimilation over accommodation.

For Piaget, people are driven to maintain an equilibrium between what they know and what they observe in the outside world. When they perceive something new in the world they enter a state of disequilibrium and are driven to return to balance. For Piaget, there are two ways this is done:

  1. Assimilation – where the new insight is fitted into existing schemata.
  2. Accommodation – where schemata are changed (either old ones are modified or new ones are created) to account for the new insights.

I’d suggest that for a majority of academic staff (and senior management), when it comes to new approaches to learning and teaching, their primary coping mechanism has been assimilation: forcing those new approaches into the schemata they already have, i.e. the Moodle course site is a great place to upload all my handouts and have online lectures.

As I’ve argued before I believe this is because the approaches used to introduce new learning approaches in universities have had more in common with behaviourism than constructivism. Consequently the approaches have not been all that successful in changing schemata.

Some stories from teaching awards

This particular post tells some personal stories about teaching awards within Australian higher education. It’s inspired by a tweet or two from @jonpowles

Some personal success

For my sins, I was the “recipient of the Vice-Chancellor’s Award for Quality Teaching for the Year 2000”. The citation includes

in recognition of demonstrated outstanding practices in teaching and learning at…., and in recognition of his contribution to the development of online learning and web-based teaching within the University and beyond

I remain a little annoyed that this was pre-ALTC. The potential extra funds from a national citation would have helped the professional development fund. But the real problem with this award was the message I received from this experience about the value of teaching to the institution. Here’s a quick summary. (BTW, the institutional teaching awards had been going for at least 2 or 3 years before 2000; this was not the first time they’d done this.)

Jumping through hoops

As part of the application process, I had to create evidence to justify that my teaching was good quality. That’s a fairly standard process.

What troubled me then, and troubles me to this day, is that the institution had no way of knowing. Its core business is learning and teaching, and it had no mechanisms in place that could identify the good and the bad teachers.

In fact, at that stage the institution didn’t have a teaching evaluation system. One of my “contributions to the development of online learning” was developing a web-based survey mechanism that I used in my own publications. This publication reports response rates of between 29-41% in one of my courses.

It is my understanding that the 2010 institutional evaluation system still dreams about reaching percentages that high.

Copy editing as a notification mechanism

Want to know how I found out I’d won the award? It was when an admin assistant from the L&T division rang me up and asked me to approve the wording of the citation.

Apparently, the Vice-Chancellor had been busy and/or away and hadn’t yet officially signed off on the result, so I couldn’t be officially notified. However, the date for the graduation ceremony at which the award was to be given was fast approaching. In order to get the citation printed, framed and physically available at the ceremony, the folk responsible for implementation had to go ahead and ask me to check the copy.

Seeing the other applications

I actually don’t remember exactly how this happened. I believe it was part of checking the copy of the citation; however it happened, I ended up with a package that contained the submissions from all of the other applicants.

Double dipping

The award brought with it some financial reward, both at the faculty level (winning the faculty award was the first step) and the university level. The trouble was that even this part of the process was flawed. Though it was flawed in my favour. I got paid twice!

The money went into a professional development fund that was used for conference travel, equipment etc. Imagine my surprise and delight when my professional development fund received the reward, twice.

You didn’t make a difference

A significant part of the reason for the reward was my work in online learning and, in particular, the development of the Webfuse e-learning system. Parts of which are still in use at the institution and the story is told in more detail in my thesis.

About 4 years after receiving this award recognising the contribution, a new Dean told me not to worry about working on Webfuse anymore; it had made no significant difference to learning and teaching within the faculty.

Mixed messages and other errors

Can you see how the above experience might make someone a touch cynical about the value of teaching awards? It certainly didn’t appear to me that the recognition of quality teaching was so essential to the institution’s operations that they had efficient and effective processes. Instead it felt that the teaching award was just some add on. Not to mention a very subjective add on at that.

But the mixed messages didn’t stop there. They continued on with the rise of the ALTC. Some additional observed “errors”.

Invest at the tail end

With the rise of the ALTC it became increasingly important that an institution and its staff be seen to receive teaching citations. The number of ALTC teaching citations received became a KPI on management plans. Resources started to be assigned to ensuring the awarding of ALTC citations.

Obviously those resources were invested at the input stage of the process – into the teaching environment, to encourage and enable university staff to engage in quality learning and teaching. No.

Instead it was invested in hiring part-time staff to assist in the writing of ALTC citation applications. It was invested in performing additional teaching evaluations for the institutional teaching award winners, to cover up the shortcomings (read: absence) of an effective broad-scale teaching evaluation system. It was invested in bringing ALTC winners onto campus to give “rah-rah” speeches about the value of teaching quality and “how I did it” pointers.

Reward the individual, not the team

Later in my career I was briefly – in between organisational restructures – responsible for the curriculum design and development unit at the institution. During that time, a very talented curriculum designer worked very hard and very well with a keen and talented accounting academic to entirely re-design an Accounting course. The re-design incorporated all the right educational buzzwords – “cognitive apprenticeship” – and the current ed-tech fads – Second Life – and was a great success. Within a year or two the accounting academic received an institutional award and then an ALTC citation.

The problem was that the work the citation was for could never have been completed by the academic alone. Without the curriculum designer involved – and the sheer amount of effort she invested in the project – the work would never have happened. Not surprisingly, the curriculum designer was somewhat miffed.

But it goes deeper than that. The work would also not have been possible without the efforts of a range of staff within the curriculum design unit, not to mention a whole range of other teaching staff (this course often has tens of teaching staff at multiple campuses).

I know there are some ALTC citations that have been awarded to teams, but most go to individuals, and this is certainly one example where a team missed out.

Attempt to repeat the success and fail to recognise diversity

The problems go deeper still. The work for this course was not planned. It did not result from senior management developing a strategic plan that was translated into a management plan that informed the decision making of some group that decided to invest X resources in Y projects to achieve Z goals.

It was all happenstance. The right people were in the right place at the right time, and they were encouraged and enabled to run with their ideas. Some of the ideas were a bit silly and had to be worked around, manipulated, and cut back, but it was through a messy process of context-sensitive collaboration between talented people that this good work arose.

Ignoring this, some folk then mistakenly tried to transplant the approach taken in this course into other courses. They failed to recognise that “lightning doesn’t strike twice”. You couldn’t transplant a successful approach from one course context into another. What you really had to do was start another messy process of context-sensitive collaboration between talented people.

Quality teaching has to be embedded

This brings me back to some of the points I made about the demise of the ALTC. Quality teaching doesn’t arise from external bodies and their actions; it arises from conditions within a university that enable and encourage messy processes of context-sensitive collaboration between talented people.

Situated shared practice, curriculum design and academic development

Am currently reading Faegri et al. (2010) as part of developing the justificatory knowledge for the final ISDT for e-learning that is meant to be the contribution of the thesis. The principle from the ISDT that this paper connects with is the idea of a “Multi-skilled, integrated development and support team” (the name is a work in progress). The following is simply a placeholder for a quote from the paper and a brief connection with the ISDT and what I think it means for curriculum design and academic development.

The quote

The paper itself reports on an action research project where job rotation was introduced into a software development firm with the aim of increasing the quality of the knowledge held by software developers. The basic finding was that, in this case, there were some benefits; however, the problems outweighed them. I haven’t read all the way through – I’m currently working through the literature review. The following quote is from the review.

Key enabling factors for knowledge creation is knowledge sharing and integration [36,54]. Research in organizational learning has emphasized the value of practice; people acquire and share knowledge in socially situated work. Learning in the organization occurs in the interplay between tacit and explicit knowledge while it crosses boundaries of groups, departments, and organizations as people participate in work [17,54]. The process should be situated in shared practice with a joint, collective purpose [12,14,15].

Another related quote

The following is from a bit more related reading, in particular Seely Brown & Duguid (1991) – emphasis added

The source of the oppositions perceived between working, learning, and innovating lies primarily in the gulf between precepts and practice. Formal descriptions of work (e.g., “office procedures”) and of learning (e.g., “subject matter”) are abstracted from actual practice. They inevitably and intentionally omit the details. In a society that attaches particular value to “abstract knowledge,” the details of practice have come to be seen as nonessential, unimportant, and easily developed once the relevant abstractions have been grasped. Thus education, training, and technology design generally focus on abstract representations to the detriment, if not exclusion of actual practice. We, by contrast, suggest that practice is central to understanding work. Abstractions detached from practice distort or obscure intricacies of that practice. Without a clear understanding of those intricacies and the role they play, the practice itself cannot be well understood, engendered (through training), or enhanced (through innovation).

Relevance?

I see this as highly relevant to the question of how to improve learning and teaching in universities, especially in terms of the practice of e-learning, curriculum design and academic development. It’s my suggestion that the common approaches to these tasks in most universities ignore the key enabling factors mentioned in the above quote.

For example, the e-learning designers/developers, curriculum designers, and academic developers are generally not directly involved with the everyday practice of learning and teaching within the institution. As a result, the teaching academics and these other support staff don’t get the benefit of shared practice.

A further impediment to shared practice is the divisions between e-learning support staff, curriculum designers and academic developers that are introduced by organisational hierarchies. At one stage, I worked at a university where the e-learning support people reported to the IT division, the academic staff developers reported to the HR division, the curriculum designers reported to the library, and teaching academics were organised into faculties. There wasn’t a common shared practice amongst these folk.

Instead, any sharing that did occur happened either in high-level project or management boards and committees, or in design projects prior to implementation. This separation reduced the ability to combine, share, and create new knowledge about what was possible.

The resulting problem

The following quote is from Seely Brown and Duguid (1991)

Because this corporation’s training programs follow a similar downskilling approach, the reps regard them as generally unhelpful. As a result, a wedge is driven between the corporation and its reps: the corporation assumes the reps are untrainable, uncooperative, and unskilled; whereas the reps view the overly simplistic training programs as a reflection of the corporation’s low estimation of their worth and skills. In fact, their valuation is a testament to the depth of the rep’s insight. They recognize the superficiality of the training because they are conscious of the full complexity of the technology and what it takes to keep it running. The corporation, on the other hand, blinkered by its implicit faith in formal training and canonical practice and its misinterpretation of the rep’s behavior, is unable to appreciate either aspect of their insight.

It resonates strongly with some recent experience of mine at an institution rolling out a new LMS. The training programs around the new LMS, the view of management, and the subsequent response from the academics showed some very strong resemblances to the situation described above.

An alternative

One alternative is what I’m proposing in the ISDT for e-learning. The following is an initial description of the roles/purpose of the “Multi-skilled, integrated development and support team”. Without too much effort you could probably translate this into broader learning and teaching, not just e-learning. Heaven forbid, you could even use it for “blended learning”.

An emergent university e-learning information system should have a team of people that:

  • is responsible for performing the necessary training, development, helpdesk, and other support tasks required by system use within the institution;
  • contains an appropriate combination of technical, training, media design and production, institutional, and learning and teaching skills and knowledge;
  • is integrated, through the performance of its allocated tasks, into the everyday practice of learning and teaching within the institution, and cultivates relationships with system users, especially teaching staff;
  • is integrated into the one organisational unit, and as much as possible, co-located;
  • can perform small scale changes to the system in response to problems, observations, and lessons learned during system support and training tasks rapidly without needing formal governance approval;
  • actively examines and reflects on system use and non-use – with a particular emphasis on identifying and examining what early innovators are doing – in order to identify areas for system improvement and extension;
  • is able to identify and to raise the need for large scale changes to the system with an appropriate governance process; and
  • is trusted by organisational leadership to translate organisational goals into changes within the system, its support and use.

References

Faegri, T. E., Dyba, T., & Dingsoyr, T. (2010). Introducing knowledge redundancy practice in software development: Experiences with job rotation in support work. Information and Software Technology, 52(10), 1118-1132.

Seely Brown, J., & Duguid, P. (1991). Organizational learning and communities-of-practice: Toward a unified view of working, learning, and innovation. Organization Science, 2(1), 40-57.

The rider, elephant, and shaping the path

Listened to this interview with Chip Heath, a Stanford Professor in Organizational Behaviour, about his co-authored book Switch: How to change things when change is hard. My particular interest in this arises from figuring out how to improve learning and teaching in universities. From the interview/podcast, this seems to be another in a line of “popular science” books aimed at making clear what science/research knows about the topic.

The basic summary of the findings seems to be: if you wish to make change more likely, then your approach has to (metaphorically):

  • direct the rider;
    The rider represents the rational/analytical decision making capability of an individual. This capability needs to be appropriately directed.
  • engage the elephant; and
    The elephant represents the individual’s emotional/instinctive decision making approach. From the interview, the elephant/rider metaphor has the express purpose of showing that the elephant is far stronger than the rider. In typical situations, the elephant is going to win, unless there’s some engagement.
  • shape the path.
    This represents the physical and related environment in which the change is going to take place. My recollection is that the shaping has to support the first two components, but also be designed to make it easier to traverse the path and get to the goal.

There are two parts of the discussion that stuck with me as I think they connect with the task of improving learning and teaching within universities.

  1. The over-rationalisation of experts.
  2. Small scale wins.

Over-rationalisation of experts

The connection between organisational change and losing weight seems increasingly common; it’s one I’ve used, and it’s mentioned in the interview. One example from the interview shows how a major problem with change is that it is driven by experts – experts who have significantly larger “riders” (i.e. rational/analytical knowledge) of the problem area/target of change than the people they are trying to change. This overly large rider leads to change mechanisms that over-complicate things.

The example they use is the recently modified food pyramid from the United States, which makes suggestions something like, “For a balanced diet you should consume X tablespoons of Y a day”. While this makes sense to the experts, a normal person has no idea how many tablespoons of Y are in their daily diet. In order to achieve the desired change, the individual needs to develop all sorts of additional knowledge and expertise. That is just not likely.

They compare this with a US-based populariser of weight loss who proposes much simpler suggestions, e.g. “Don’t eat anything that comes through your car window”. It’s a more evocative suggestion that appears to be easier for the rider to understand and helps engage the elephant somewhat.

I can see the equivalent of this within learning and teaching in higher education. Change processes are typically conceived and managed by experts – experts who over-rationalise.

Small scale wins

Related to the above is the idea that change always involves barriers or steps that have to be overcome. Change is difficult. The suggestion is that when shaping the path you want to design it in such a way that the elephant can almost just walk over the barrier. The interviewer gives the example of never being able to get her teenage sons to stop taking towels out of the bathroom and into their bedroom. Eventually what worked was “shaping the path”: storing the sons’ underwear in the bathroom, not their bedroom.

When it comes to improving learning and teaching in universities, I don’t think enough attention is paid to “shaping the path” like this. I think this is in part due to the process being driven by the experts, so they simply don’t see the need. But it is also, increasingly, due to the fact that the people involved can’t shape the path. Some of the reasons the path can’t be shaped include:

  • Changing the “research is what gets me promoted” culture in higher education is very, very difficult and not likely to happen effectively if just one institution does it.
  • The L&T “path” (e.g. the LMS product model or the physical infrastructure of a campus) is not exactly set up to enable “shaping”.
  • The people involved at a university, especially in e-learning, don’t have the skills or the organisational structure to enable “shaping”.