Category Archives: herding cats

BIM and BAD

This post arises from two events today

  1. The ASCILITE’2014 call for papers came out today and I’m thinking about a paper I might submit.
  2. The first #edc3100 assignment is due today and my use of BIM has struck a unique problem that I need to solve.

There's also a third factor: I'm a touch fried from answering queries about "I submitted my assignment the wrong way" (the main problem with these queries is that they mean I have to engage with a horrible online assignment submission system) and need to engage with something else for a while.

The paper

The working title for the paper I’m thinking of is “The story of BIM: Being BAD as a way to bridge rhetoric and reality”.

BAD is an acronym that captures what I think is missing from the institutional approach to university e-learning

  1. Bricolage – the LMS as an enterprise system doesn't allow or cater for bricolage.
  2. Affordances – resulting in an inability to leverage the affordances of technology to improve learning and teaching.
  3. Distribution – the idea that knowledge about how to improve L&T is distributed and the implications that has for the institutional practice of e-learning.

    i.e. current methods rely on the single, unified view of learning and teaching. A view that is expressed most concretely in the form of the LMS.

    This component will draw on a range of related “network” type theoretical perspectives including connectivism, Complex Adaptive Systems, embodied cognition and ANT – to name but a few.

The idea is that if institutional e-learning is to get better, it needs to be BAD (more BAD?).

The following is an example of how the reality of using BIM in action supports the idea that it needs to be BAD. Or at least it's a very small step. It captures the messiness (the distribution) of e-learning in a typical university course. A messiness that isn't captured properly by PRINCE2 and other methodologies, hierarchical organisational structures, appropriately total quality assured forms and processes, and "theory" based abstractions like adopter categories.

And yes, there is some strong connection (repetition) with earlier perspectives/frameworks of mine. If at first you don’t succeed, try, try again.

Background

300+ students in EDC3100 are currently using their own blogs to reflect on their learning journey (to varying levels of engagement). Their blog posts contribute almost 15% of their final result in the course.

BIM is being used to keep track of what they are sharing. The students create their blogs on WordPress.com (or anywhere they like) and register them with BIM. BIM then keeps a copy of all their posts.

But BIM's capabilities don't match the learning design I'm using in this course. BIM was originally designed to have student posts made in response to specific prompts/questions and then have the posts marked manually by human markers. In EDC3100, the students blog about anything they want. We offer some prompts, but they can ignore them. Student posts aren't marked; instead, students have to post a certain number of posts (on average) every week, the posts have to average a certain word count, and a certain number have to contain links to online resources and the blog posts of other students.

This analysis of student posts and the subsequent mark they get is done by a program I wrote. A bit of bricolage that takes bits and pieces of information extracted (with some difficulty) from various institutional systems and makes use of them in a way that solves my problem.

With a bit of bricolage each of the 300+ students has received an email recently telling them what the system knows about their progress. This gives them some time to tick all the boxes prior to the first assignment being due (yep, still have due dates, haven't travelled too far from the well-worn paths).
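For the curious, the heart of that bricolage can be sketched in a few lines of Python. This is not the actual program (which is messier and tied to institutional systems); the post records, the week count and the criteria are all hypothetical illustrations of the checks described above.

    from datetime import date

    # Hypothetical: one record per post for a single student, as
    # extracted from BIM's copy of their blog
    posts = [
        {"date": date(2014, 3, 3), "words": 250, "links_to_students": 1},
        {"date": date(2014, 3, 10), "words": 180, "links_to_students": 0},
    ]

    WEEKS_SO_FAR = 4  # hypothetical: teaching weeks elapsed so far

    def progress_summary(posts, weeks):
        """Summarise a student's blogging against the course criteria:
        average posts per week, average word count, and how many posts
        link to the posts of other students."""
        return {
            "posts_per_week": len(posts) / weeks,
            "average_words": sum(p["words"] for p in posts) / max(len(posts), 1),
            "posts_linking_to_students": sum(
                1 for p in posts if p["links_to_students"] > 0
            ),
        }

    print(progress_summary(posts, WEEKS_SO_FAR))

The real work, of course, is in populating those records from the various institutional systems and then emailing each student the result.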

The problem

One student has reported a problem with what the system knows about their blog. The system says that only one of the posts links to another student post, but the student’s blog actually has two posts that link to other student posts. This is confirmed.

But no-one else is reporting the problem. There’s something unique about this student’s blog that has picked up a bug in the system.

The uniqueness of this bug strikes me as an example of the problems associated with the failure of institutional systems to deal with the Distribution aspect of BAD. In a complex, distributed knowledge network there is no one view. But the traditional approach can only ever respond to one view. This argument needs a bit of work.

Typically this problem arises because the author of the post has used a link other than the known URL for the other student's blog. The program I wrote knows the URLs for all the student blogs and checks all the links in a post against that list.
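Roughly speaking, that check amounts to something like the following sketch (hypothetical Python again, not BIM's actual code; the blog URLs are invented):

    import re

    # Hypothetical: the known URLs of all registered student blogs
    STUDENT_BLOGS = [
        "https://studentone.wordpress.com/",
        "https://studenttwo.wordpress.com/",
    ]

    HREF_RE = re.compile(r'href="([^"]+)"')

    def count_student_links(post_html):
        """Count the links in a post's HTML that point at a known
        student blog. Note the literal prefix match: a link that
        reaches the right post via any other URL won't be counted."""
        links = HREF_RE.findall(post_html)
        return sum(
            1 for link in links
            if any(link.startswith(blog) for blog in STUDENT_BLOGS)
        )

That literal string match turns out to be important.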

I've visually checked the student's blog posts in BIM and they are showing valid links to student blogs.

Argghh.

The solution – Chrome is too smart – it’s distribution

This is what I see when I look at the blog post using the Chrome browser

Chrome by David T Jones, on Flickr

The link that is shown is to the blog of another student. The program should pick this up and count it as a link.

Here’s what I see when I view it under the Firefox browser

Firefox by David T Jones, on Flickr

See the difference?

The student appears to have used some form of URL shortener – it looks like a WordPress tool. While this shortened URL does point to the post of another student, my little system doesn't know how to convert a shortened URL into a full URL, so it doesn't count it.

It appears that I must have a plugin installed on Chrome (or perhaps Chrome is smart enough on its own) to automatically expand out the wp.me shortened URL into the full link and change what is shown to the user.

I, as the user, am ignorant of this change happening.

Not a bad example of Distribution: how cognition/smarts/learning is distributed amongst all of the tools. Change one bit of the network and the outcome changes.
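For what it's worth, the likely fix is for the program to expand short URLs before doing the comparison. A hedged sketch, assuming the Python requests library (the fix I actually apply may well differ):

    import requests

    def expand_url(url, timeout=5.0):
        """Follow redirects so a shortened URL (e.g. a wp.me link)
        resolves to the full URL of its target."""
        try:
            # A HEAD request with allow_redirects follows the 301/302
            # chain the shortener issues, without fetching the page body
            response = requests.head(url, allow_redirects=True, timeout=timeout)
            return response.url
        except requests.RequestException:
            # Dead or misbehaving links stay as-is; they simply won't match
            return url

Each extracted link would then be passed through expand_url() before the prefix comparison against the known student blog URLs.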

Ateleological travels in a teleological world: Past and future journeys around ICTs in education

In my previous academic life, I never really saw the point of book chapters as a publication form. For a variety of reasons, however, my next phase in academia appears likely to involve an increasing number of book chapters. The need for the first such chapter has arisen this week and the first draft is due by February next year, a timeline that gives me just a little pause for thought. (There is a chance that this book might end up as a special edition of a journal.)

What's your perception of book chapters as a form of academic publication? I'm particularly interested in the view from the education field.

What follows is a first stab at an abstract for the book chapter. The title for the book/special edition is “Meanings for in and of education research”. The current working title for my contribution is the title to this post: “Ateleological travels in a teleological world: Past and future journeys around ICTs in education”.

Abstract

The Australian Federal Government is just one of a gaggle of global stakeholders suggesting that Information and Communication Technologies are contributing to the creation of a brave, new, digital world. Such a digital world is seen as being radically different to what has gone before and consequently demanding a radically different education system to prepare the next generation of learners. A task that is easier said than done. This chapter argues that the difficulties associated with this task arise because the meanings underpinning the design of education systems for the digital world are decidedly inappropriate and ill-suited to the nature of the digital world. The chapter draws upon 15+ years of research formulating an Information Systems Design Theory for emergent e-learning systems for universities to critically examine these commonly accepted meanings, suggest alternate and more appropriate meanings, and discuss the potential implications that these alternate meanings hold for the practice of education and education research.

The plan

The plan is that this chapter/paper will reflect on the primary focus of my research over recent years and encourage me to think of future research directions and approaches. Obviously it will draw on the PhD research and in particular the Ps Framework and the presentation I gave at EdMedia a couple of years ago. It will also draw on the presentation I gave analysing the Digital Education Revolution as part of my GDLT studies this year.

Alan Kay and some reasons why the educational technology revolution hasn’t happened

While reading a recent post from Gardner Campbell I was taken by a quote from Alan Kay

The computer is simply an instrument whose music is ideas

A Google search later and I came across this interview with Kay for the Scholastic Administrator magazine. The article is titled "Alan Kay still waiting for the revolution" and there are some, for me, interesting perspectives. A smattering below.

The difficult part is helping the helpers

Kay identifies the greatest obstacle to his work as being "helping the helpers", i.e. the teachers. In talking about Logo, Kay identifies a key failure as being that the second and third waves of teachers were not interested in Logo and didn't have the math skills to teach well with it.

I see this as the biggest problem around e-learning (or blended, flexible, personal etc learning if that’s your buzz word of the moment) within universities, helping the helpers.

The tokenism of computers

On computers and tokenism

But I think the big problem is that schools have very few ideas about what to do with the computers once the kids have them. It’s basically just tokenism, and schools just won’t face up to what the actual problems of education are, whether you have technology or not.

Again there's some resonance with universities. For a lot of senior and IT management in universities there's an idea that we must have an LMS, but there's not always a good idea of what the organisation should do with it once it has it. The most important part of that "idea" is being able to identify what about the policies and practices of the institution needs to change to best achieve it.

For example, with the LMS the institution can increase interaction between staff and students via discussion forums, e-portfolios etc. But we won’t change the workload or funding model for teaching, or recognise the need to change the timetable to remove the traditional 2 hour lecture, 2 hour tutorial model.

The difference between music and instruments

In talking about some of the limits or potential problems associated with the trend to one-to-one computing, Kay says

Think about it: How many books do schools have—and how well are children doing at reading? How many pencils do schools have—and how well are kids doing at math? It's like missing the difference between music and instruments. You can put a piano in every classroom, but that won't give you a developed music culture, because the music culture is embodied in people … The important thing here is that the music is not in the piano. And knowledge and edification is not in the computer. The computer is simply an instrument whose music is ideas.

The provision of the LMS or some other "instrument" is the simple task. Helping people figure out what to do with it and how to do it well is the hard part.

Helping everyone find their inner musician

Why hasn't educational computing lived up to its potential?

So computers are actually irrelevant at this level of discussion—they are just musical instruments. The real question is this: What is the prospect of turning every elementary school teacher in America into a musician? That’s what we’re talking about here. Afterward we can worry about the instruments.

How do you encourage and enable university academics to become musicians? I don't think you can forget about computers, e-learning or the LMS. They are already in universities. There's a need to look at how you can change how academics experience these technologies so that they can start developing their musical ability. Sending them to "band camp" (e.g. a Grad Cert in Higher Education) isn't enough if they return to a non-musical family. The environment they live in has to be musical in every aspect.

Nobody likes a do-gooder – another reason for e-learning not mainstreaming?

I came across the article "Nobody likes a do-gooder: Study confirms selfless behaviour is alienating" from the Daily Mail via Morgaine's amplify. I'm wondering if there's a connection between this and the chasm in the adoption of instructional technology identified by Geoghegan (1994).

The chasm

Back in 1994, Geoghegan drew on Moore's Crossing the Chasm to explain why instructional technology wasn't being adopted by the majority of university academics. The suggestion is that there is a significant difference between the early adopters of instructional technology and the early majority. What works for one group doesn't work for the other. There is a chasm. Geoghegan (1994) also suggested that the "technologists' alliance" – vendors of instructional technology and the university folk charged with supporting instructional technology – adopts approaches that work for the early adopters, not the early majority.

Nobody likes do-gooders

The Daily Mail article reports on some psychological research that draws some conclusions about how “do-gooders” are seen by the majority

Researchers say do-gooders come to be resented because they ‘raise the bar’ for what is expected of everyone.

This resonates with my experience as an early adopter and more broadly with observations of higher education. The early adopters, those really keen on learning and teaching, are seen a bit differently by those who aren't as keen. I wonder if the "raise the bar" issue applies? I'd imagine this could be quite common in a higher education environment where research retains its primacy, but universities are under increasing pressure to improve their learning and teaching. And, more importantly, to show everyone that they have improved.

The complete study is outlined in a journal article.

References

Geoghegan, W. (1994). Whatever happened to instructional technology? Paper presented at the 22nd Annual Conference of the International Business Schools Computing Association, Baltimore, MD.

How people learn and implications for academic development

While I'm traveling this week I am reading How people learn. This is a fairly well known book that arose out of a US National Academy of Science project to look at recent insights from research about how people learn and then generate insights for teaching. I'll be reading it through the lens of my thesis and some broader thinking about "academic development" (one of the terms applied to trying to help improve the teaching and learning of university academics).

Increasingly, I've been thinking that "academic development" is essentially "teaching the teacher", though it would be better phrased as creating an environment in which academics can learn how to be better at enabling student learning. Hand in hand with this thought is the observation and increasing worry that much of what passes for academic development and management action around improving learning and teaching is not conducive to creating this learning environment. The aim of reading this book is to think about ways in which this situation might be improved.

The last part of this summary of the first chapter connects with the point I’m trying to make about academic development within universities.

(As it turns out I only read the first chapter while traveling, remaining chapters come now).

Key findings for learning

The first chapter of the book provides three key (but not exhaustive) findings about learning:

  1. Learners arrive with their own preconceptions about how the world works.
    As part of this, if the early stages of learning do not engage with the learner's understanding of the world, then the learner will either not get it, or will get it enough to pass the test but then revert to their existing understanding.
  2. Competence in a field of inquiry arises from three building blocks
    1. a deep foundation of factual knowledge;
    2. an understanding of these facts and ideas within a conceptual framework;
    3. an organisation of knowledge in ways that enable retrieval and application.

    A primary idea here is that experts aren't "smart" people. But they do have conceptual frameworks that help them apply/understand much more quickly than others.

  3. An approach to teaching that enables students to implement meta-cognitive strategies can help them take control of their learning and monitor their progress.
    Meta-cognitive strategies aren’t context or subject independent.

Implications for teaching

The suggestion is that the above findings around learning have significant implications for teaching; these are:

  1. Teachers have to draw out and work with pre-existing student understandings.
    This implies lots more formative assessment that focuses on demonstrating understanding.
  2. In teaching a subject area, important concepts must be taught in-depth.
    The superficial coverage of concepts (to fit it all in) needs to be avoided, with more of a focus on those important subject concepts.
  3. The teaching of meta-cognitive skills needs to be integrated into the curriculum of a variety of subjects.

Four attributes of learning environments

A later chapter expands on a framework to design and evaluate learning environments. It includes four interrelated attributes of these environments:

  1. They must be learner centered;
    i.e. a focus on the understandings and progress of individual students.
  2. The environment should be knowledge centered, with attention given to what is taught, why it is taught and what competence or mastery looks like.
    The authors suggest too many curricula fail to support learning because the knowledge is disconnected and assessment encourages memorisation rather than learning. A knowledge-centered environment "provides the necessary depth of study, assessing student understanding rather than factual memory and incorporates the teaching of meta-cognitive strategies".

    There’s an interesting point here about engagement, that I’ll save for another time.

  3. Formative assessments
    The aim is for assessments that help both students and teachers monitor progress.
  4. Develop norms within the course, and connections with the outside world, that support core learning values.
    i.e. pay attention to activities, assessments etc within the course that promote collaboration and camaraderie.

Application to professional learning

In the final section of the chapter, the authors state that these principles apply as well to adults as they do to children. They explain that

This point is particularly important because incorporating the principles in this volume into educational practice will require a good deal of adult learning.

i.e. if you want to improve learning and teaching within a university based on these principles, then the teaching staff will have to undergo a fair bit of learning. This is very troubling because the authors argue that “approaches to teaching adults consistently violate principles for optimizing learning”. In particular, they suggest that professional development programs for teachers frequently:

  • Are not learner centered.
    Rather than ask what help is required, teachers are expected to attend pre-arranged workshops.
  • Are not knowledge centered.
    i.e. these workshops introduce the principles of a new technique with little time spent on the more complex integration of the new technique with the other "knowledge" (e.g. the TPACK framework) associated with the course.
  • Are not assessment centered.
    i.e. when learning these new techniques, the "learners" (teaching staff) aren't given opportunities to try them out, get feedback, and develop the skills to know whether or not they've implemented the new technique effectively.
  • Are not community centered.
    Professional development consists more of ad hoc, separate events with little opportunity for a community of teachers to develop connections for on-going support.

Here's a challenge. Is there any university out there where academic development doesn't suffer from these flaws? How has that been judged?

The McNamara Fallacy and pass rates, academic analytics, and engagement

In some reading for the thesis today I came across the concept of McNamara’s fallacy. I hadn’t heard this before. This is somewhat surprising as it points out another common problem with some of the more simplistic approaches to improving learning and teaching that are going around at the moment. It’s also likely to be a problem with any simplistic implementation of academic analytics.

What is it?

The quote I saw describes McNamara’s fallacy as

The first step is to measure whatever can be easily measured. This is ok as far as it goes. The second step is to disregard that which can’t be easily measured or to give it an arbitrary quantitative value. This is artificial and misleading. The third step is to presume that what can’t be measured easily really isn’t important. This is blindness. The fourth step is to say that what can’t be easily measured really doesn’t exist. This is suicide.

The Wikipedia page on the McNamara fallacy describes it with reference to Robert McNamara – the US Secretary of Defense from 1961 through 1968 – and the putting of the USA's failure in Vietnam down to his focus on quantifying success through simple indicators, such as enemy body count, while at the same time ignoring other more important factors. Factors that were more difficult to measure.

The PhD thesis in which I saw the above quote ascribes it to Yankelovich (1972), a sociologist. Wikipedia ascribes it to Charles Handy's "The Empty Raincoat". Perhaps this indicates that the quote is from McNamara himself, just presented in different places.

Pass rates

Within higher education it is easy to see "pass rates" as an example of McNamara's fallacy. Much of the quality assurance within higher education institutions is focused on checking the number of students who do (or don't) pass a course. If the pass rate for a course isn't too low, everything is okay. Much easier to measure this than the quality of the student learning experience, the learning theory which informs the course design, or the impact the experience has on the student, now and into the future. This sort of unquestioning application of McNamara's fallacy sometimes makes me think we're losing the learning and teaching "war" within universities.

What are the more important, more difficult to measure indicators that provide a better and deeper insight into the quality of learning and teaching?

Analytics and engagement

Student engagement is one of the buzz words on the rise in recent years; it's been presented as one of the ways/measures to improve student learning. After all, if they are more engaged, obviously they must have a better learning experience. Engagement has become an indication of institutional teaching quality. Col did a project last year in which he looked more closely at engagement; the write-up of that project gives a good introduction to student engagement. It includes the following quote

Most of the research into measuring student engagement prior to the widespread adoption of online, or web based classes, has concentrated on the simple measure of attendance (Douglas & Alemanne, 2007). While class attendance is a crude measure, in that it is only ever indicative of participation and does not necessarily consider the quality of the participation, it has nevertheless been found to be an important variable in determining student success (Douglas, 2008)

Sounds a bit like a case of McNamara’s fallacy to me. A point Col makes when he says “it could be said that class attendance is used as a metric for engagement, simply because it is one of the few indicators of engagement that are visible”.

With the move to the LMS, it was always going to happen that academic analytics would be used to develop measures of student engagement (and other indicators). Indeed, that's the aim of Col's project. However, I do think that academic analytics is going to run the danger of McNamara's fallacy. So busy focusing on what we can measure easily, we miss the more important stuff that we can't.

The grammar of school, psychological dissonance and all professors are rather ludditical

Yesterday, via a tweet from @marksmithers, I read this post from the author of the DIYU book titled "Vast Majority of Professors Are Rather Ludditical". This is somewhat typical of the deficit model of academics, which is fairly prevalent and rather pointless. It's pointless for a number of reasons, but the main one is that it is not a helpful starting point for bringing about change, as it ignores the broader problem, and consequently most solutions that arise from a deficit model won't work.

One of the major problems this approach tends to ignore is the broader impact of the grammar of school (first from Tyack and Cuban and then Papert). I’m currently reading The nature of technology (more on this later) by W. Brian Arthur. The following is a summary and a little bit of reflection upon a section titled “Lock-in and Adaptive Stretch”, which seems to connect closely with the grammar of school idea.

Psychological dissonance and adaptive stretch

Arthur offers the following quote from the sociologist Diane Vaughan around psychological dissonance

[In the situations we deal with as humans, we use] a frame of reference constructed from integrated sets of assumptions, expectations and experiences. Everything is perceived on the basis of this framework. The framework becomes self-confirming because, whenever we can, we tend to impose it on experiences and events, creating incidents and relationships that conform to it. And we tend to ignore, misperceive, or deny events that do not fit it. As a consequence, it generally leads us to what we are looking for. This frame of reference is not easily altered or dismantled, because the way we tend to see the world is intimately linked to how we see and define ourselves in relation to the world. Thus, we have a vested interest in maintaining consistency because our own identity is at risk.

Arthur goes on to suggest that "the greater the distance between a novel solution and the accepted one, the larger is this lock-in to previous tradition". He then defines this lock-in to the older approach as adaptive stretch. This is the situation where it is easier to reach for the old approach and adapt it to the new circumstances through stretching.

Hence professors are ludditical

But haven't I just made the case? This is exactly what happens with the vast majority of academic practice around e-learning. If they are using e-learning at all – and not simply sticking with face-to-face teaching – most teaching academics are still using lectures, printed notes and other relics of the past that they have stretched into the new context.

They don't have the knowledge to move on, so we have to make them non-ludditical. This is when management and leadership at universities roll into action and identify plans and projects that will help generate non-ludditical academics.

The pot calling the kettle black

My argument is that if you step back a bit further, the approaches being recommended and adopted by researchers and senior management; the way those approaches are implemented; and the way they are evaluated for success, are themselves suffering from psychological dissonance and adaptive stretch. The approaches almost without exception borrow from a traditional project management approach and go something like:

  • Small group of important people identify the problem and the best solution.
  • Hand it over to a project group to implement.
  • The project group tick the important project boxes:
    • Develop a detailed project plan with specific KPIs and deadlines.
    • Demonstrate importance of project by wheeling out senior managers to say how important the project is.
    • Implement a marketing push involving regular updates, newsletters, posters, coffee mugs and presentations.
    • Develop compulsory training sessions which all must attend.
    • Downplay any negative experiences and explain them away.
    • Ensure correct implementation.
    • Get an evaluation done by people paid by, and reporting to, the senior managers who have been visibly associated with the project.
    • Explain how successful the project was.
  • Complain about how the ludditical academics have ruined the project through adaptive stretching.

Frames of reference and coffee mugs

One of the fundamental problems with these approaches to projects within higher education is that they effectively ignore the frames of reference that academics bring to the problem. Rather than start with the existing frames of reference and build on those, this approach to projects is all about moving people straight into a new frame of reference. In doing this, there is always incredible dissonance between how the project people think an action will be interpreted and how it actually is interpreted.

For example, a few years ago the institution I used to work for (at least as of CoB today) adopted Chickering and Gamson's (1987) 7 principles for good practice in undergraduate education as a foundation for the new learning and teaching management plan. The project around this decision basically followed the above process. As part of the marketing push, all academics (and perhaps all staff) received a coffee mug and a little palm card with the 7 principles in nice text and a link to the project website. The intent of the project was to increase academics' awareness of the 7 principles and how important they were to the institution.

The problem was that at around this time the institution was going through yet more restructures and there were grave misgivings from senior management about how much money the institution didn't have. The institution was having to save money and this was being felt by the academics in terms of limits on conference travel, marking support etc. It is with this frame of reference that the academics saw the institution spending a fair amount of money on coffee mugs and palm cards. Just a touch of dissonance.

What’s worse, a number of academics were able to look at the 7 principles and see principle #4 “gives prompt feedback” and relate that to the difficulty of giving prompt feedback because there’s no money for marking support. Not to mention the push from some senior managers about how important research is to future career progression.

So, the solution is?

I return to a quote from Cavallo (2004) that I’ve used before

As we see it, real change is inherently a kind of learning. For people to change the way they think about and practice education, rather than merely being told what to do differently, we believe that practitioners must have experiences that enable appropriation of new modes of teaching and learning that enable them to reconsider and restructure their thinking and practice.

Rather than tell academics what to do, you need to create contextualised experiences for academics that enable appropriation of new models of teaching and learning. What most senior managers at universities and many of the commentators don’t see, is that the environment at most universities is preventing academics from having these experiences and then preventing them from appropriating the new models of teaching.

The policies, processes, systems and expectations senior managers create within universities are preventing academics from becoming “non-ludditical”. You can implement all the “projects” you want, but if you don’t work on the policies, processes, systems and expectations in ways that connect with the frames of reference of the academics within the institution, you won’t get growth.

References

Cavallo, D. (2004). Models of growth – Towards fundamental change in learning environments. BT Technology Journal, 22(4), 96-112.

Chickering, A. W., & Gamson, Z. F. (1987). Seven principles for good practice in undergraduate education. AAHE Bulletin, 39(7), 3-7.

McGuffins, learning, teaching and universities

D’Arcy Norman suggests that Edupunk is a McGuffin. I like the metaphor. But I think it breaks down a bit, at least in the context I’m interested in.

Wikipedia uses a definition of a McGuffin that suggests it is “a plot element that catches the viewers’ attention or drives the plot of a work of fiction”. Wikipedia suggests that the defining characteristic of a McGuffin is

the major players in the story are (at least initially) willing to do and sacrifice almost anything to obtain it, regardless of what the MacGuffin actually is.


Importantly, as Wikipedia suggests

the specific nature of the MacGuffin may be ambiguous, undefined, generic, left open to interpretation or otherwise completely unimportant to the plot.

What is important is not the details or nature of edupunk, top-down quality assurance, problem-based learning, teacher-of-the-year awards, or anything else. What is important is what happens as a result of the characters wanting to obtain the McGuffin. In movies, what's important is a good plot.

I work in a university context. In that context, I think what’s important is improving the quality of learning and teaching. I don’t see enough of that happening. To a large extent I think this is due to the absence of appropriate McGuffins. The current McGuffins within a university context aren’t driving the majority of academics to improve the quality of learning and teaching.

Edupunk is the right McGuffin for some. But I’m not sure how widespread that is. The folk interested in Edupunk are generally not the ones that need a McGuffin.

So, what is the McGuffin for improving L&T within a university? Does it make sense for there to be one, or even a small number of McGuffins?

Functional fixedness, analytics, and the LMS

A blog post on the website of the Gilfus Education Group (apparently a "network of independent education experts") picks up on the Indicators project and its take on academic analytics. The post seems to be largely in agreement with what we're doing and the reasons behind it.

The following seeks to pick up on a point made in the Gilfus post about the problem arising from ownership of the data, and some of the other barriers that have been proposed. The argument I develop in the following is that functional fixedness is a major barrier to the effective appropriation of academic analytics to help improve learning and teaching.

But first, an experiment

Imagine, if you will, that we're in a room together. I'm going to set you a task. Here's some matches, a box of tacks and a candle (see the image below). Your task is to attach the candle to a cork board on the wall in a way that means that wax from the candle does not drip onto the table that is underneath the cork board.

Candle problem set up

How do you do it?

The solution is given in an image at the end of this post.

Apparently, if I rephrase the problem statement a little to the following, it might improve your chances of success.

Here's some matches, a box, some tacks and a candle (see the image below). Your task is to attach the candle to a cork board on the wall in a way that means that wax from the candle does not drip onto the table that is underneath the cork board.

Functional fixedness

If you're anything like my brother-in-law, on whom I tested this out in person, you did not arrive at the solution quickly, if at all. This experiment is called the candle problem and has been used to demonstrate the problem of functional fixedness.

Functional fixedness suggests that you have fixated on the design function of the object – i.e. the box of tacks is designed to hold the tacks – so much that you cannot see how it might be put to a different use to solve this problem. To put it in the words of German and Barrett (2005)

Problem solving can be inefficient when the solution requires subjects to generate an atypical function for an object and the object’s typical function has been primed

In other words, the problem description above had the box’s typical function primed as holding the tacks, hindering your ability to see another use for the box.

Academic analytics, the LMS and functional fixedness

For most universities there is an existing set of information systems. There’s the learning management system (LMS) in which learning takes place, and there is the data warehouse and associated business intelligence tools for providing reports and information. The people within these organisations, especially those already supporting (the IT folk) and using (management) the data warehouse, have been primed to see a typical use for these systems. They are fixated on using the LMS and data warehouse in a particular way.

Add into this mix the typical under-resourcing/inefficient management of IT, and the typical top-down, techno-rational approach to management, and it is simply too difficult for organisational members to see the case for moving aspects of academic analytics into the LMS.

It doesn’t help that it’s messy

The matter isn't helped much by the fact that the benefits of moving aspects of academic analytics into the LMS are somewhat uncertain and messy. Being uncertain and messy aren't characteristics of an approach likely to overcome functional fixedness. Especially in organisational environments where being efficient (defined as doing what we already do or have strategically planned to do) is the main intermediate goal. But then this is why innovation is hard in organisations; innovation is messy.

References

German, T. and H. C. Barrett (2005). “Functional fixedness in a technologically sparse culture.” Psychological Science 16(1): 1-5.

Solution

The solution to the Candle problem is represented in the following image.

Candle problem solution

The confusion of small and big changes

Over the last couple of days I’ve enjoyed a small discussion that has arisen out of some comments Kevin has made on my blog. This post is an attempt to partially engage with the most recent comment. I echo Kevin’s conclusion, I’d love to hear anyone else’s take on this.

The unanswered question

The main point I'd like to discuss is the question of small versus big changes. I have an opinion on this, but there's not enough evidence to suggest that it's an answer. The basic question might be phrased as: How do you achieve significant improvement in the quality of L&T in universities? You could make this much more general, along the lines of "How do you change organisational practices?", but I'm going to stick with the specific.

I’m familiar with two broad responses:

  • Revolutionary (usually top-down) change; and
    This is where the necessary change is identified by someone who eventually gets agreement/the ability to implement the change through some sort of change management process. This usually involves some big change, e.g. the adoption of a new LMS for a university, trashing the LMS and adopting WPMU for L&T, adopting university-wide graduate attributes, requiring every academic to have a formal teaching qualification etc. Or, even more radical, the death of universities and their replacement by something else.
  • Evolutionary (usually bottom-up) change.
    Small-scale changes in practice, usually at the local level.

Kevin’s comment gives a good summary of the common problem with the evolutionary change approach

In my experience, especially at a large institution, taking the “small changes” route is the road to perdition. For me, this means I have to engage in a million little negotiations to get the small to accumulate to something significant. At the rate I’m going it will take me two lifetimes to bring about real change in the English Department.

As I mentioned above and indicate by the heading for this section, I don’t have what I would call an answer. I have an argument for the approach I would take and some evidence to support it, but I don’t think I can call it “the answer” (yet).

What I think is the answer

Last year I gave a presentation called Herding cats, losing weight and how to improve learning and teaching (slides and video are available). In that presentation, the analogy used is that revolutionary change is like herding cats and that evolutionary change is like losing weight. Using this analogy I argue that the herding cats approach to improving the quality of teaching at a university has not worked empirically and that there is significant theory to explain why it will never work. That same theory suggests that an evolutionary approach, informed by lessons learned from weight loss, is much more promising.

The general solution I suggest is on slide 200 or so (it was only a 60 minute presentation). It goes under the title "reflective alignment" and can be summarised as

All aspects of the learning and teaching environment are aligned to enable and encourage academic staff to reflect on their teaching with the aim of achieving 3rd order change.

Framed another way, the teaching environment at a university should encourage and enable academics to change their thinking and practice of teaching. That is, they essentially do what they do now – make small changes each time they teach a course – but make changes that are not constrained by the same ways of thinking about teaching.

Having academics continually making these sorts of 3rd order changes (within an institution that encourages and enables them to make those 3rd order changes) will result (I think) in radically different and significantly improved learning and teaching.

When small changes won’t work

Like Kevin, I think that universities relying on small changes to improve learning and teaching will not work. Mostly because the university environment does not encourage nor enable the type of small scale changes that are required.

In the herding cats presentation a large part of the time was spent listing the parts of the university teaching environment that actively prevent the type of 3rd order change that is necessary. In fact, much of the bleating in posts on this blog is complaining about these limitations. Some examples include:

  • Rewards that favour research, not teaching.
    No matter how many formal teaching qualifications an academic is forced to acquire, if they get promoted (both at their current and other universities) through the quality of their research, then they will focus on research, not teaching.
  • Pressures arising from quality assurance and simplistic KPIs.
    Since the mid-1990s I've observed that it is only the courses with large failure rates or student complaints that get any attention from university management. Students, like most people, get scared when their expectations aren't met. That means if you try something innovative, students will complain. In addition, if you try something innovative you might have problems, which management hate. If you try something different, you are more likely to have to waste time responding to "management concerns". The presentation references research showing that this is preventing academics from trying innovative work.

    With the rise of quality assurance and corporate approaches to management, this trend is getting worse.

  • Removal of autonomy;
    As I've argued in a couple of posts, top-down management is removing academic autonomy, and perhaps purpose, and subsequently reducing academic motivation.
  • Constraining systems;
    Increasingly universities are using information systems to deliver learning and teaching. Those systems are designed on particular assumptions that limit the ability to change. The most obvious example is the LMS (be it open or closed source). This recent post includes discussion of this point around the LMS.

    The people, processes and policies within universities are being set up to use these systems. If you use something different, you are being inefficient.

  • Simplistic understandings of innovation.
    When it comes to understanding innovations (e.g. something as simple as a new LMS), universities have naive perspectives of the adoption process. As recognised by Bigum and Rowan (2004) this naive perspective assumes that the innovation passes through the adoption process largely unchanged, which means that the social must conform with the innovation.

    i.e. As the institution starts to adopt Moodle across all its courses, Moodle can and should stay exactly the same. You only need to show people how to use Moodle, nothing more. If what they want to do is not supported by Moodle, then they need to conform to what Moodle does, regardless of the ramifications.

My argument is that all of this and other factors within a university environment actively prevent small changes having broad outcomes. The university environment is actively discouraging 3rd order change and isn’t even very good at achieving 2nd order change.

But small change can’t make a big difference

Ignoring all that, people still get stuck on the idea of lots of small change creating really big change. They are wrong.

To justify that, first let me draw on people recognised as being much smarter and more important than I (Weick and Quinn, 1999)

The distinctive quality of continuous change is the idea that small continuous adjustments, created simultaneously across units, can cumulate and create substantial change.

The main reason people have trouble with this idea (I think) is that they think that the world is ordered and predictable. That the world is an ordered system. If you make a small change, you get a small effect. However, when you’re talking about a complex system, small changes can create radical outcomes.

I don't have time to expand on this here; it's talked about in the presentation I mentioned above. Anyway, Dave Snowden and any number of other people make this point better than I.

Big and small change in the wrong place

Here’s a new idea. One of the reasons why I think most universities are failing to improve the quality of their teaching is that they are focusing on big and small change in the wrong places.

In my experience, most universities are trying to make big improvements in teaching by introducing big changes in what academics do. Use a different system, use a different pedagogy, radically change your teaching so you are constructively aligned, get a teaching qualification etc. But at the same time, there is no radical change in how the teaching environment works. There are no solutions to the above problems with the environment.

What I am suggesting is that there should be big changes in the environment to enable small changes on the part of the academic. In fact, in the presentation I argue that the aim is to help the academics do what good teaching academics have always done (Common, 1989)

Master teachers are not born; they become. They become primarily by developing a habit of mind, a way of looking critically at the work they do by developing the courage to recognise faults, and struggling to improve.

References

Bigum, C. and L. Rowan (2004). “Flexible learning in teacher education: myths, muddles and models.” Asia-Pacific Journal of Teacher Education 32(3): 213-226.

Common, D. (1989). “Master teachers in higher education: A matter of settings.” The Review of Higher Education 12(4): 375-387.

Weick, K. and R. Quinn (1999). “Organizational change and development.” Annual Review of Psychology 50: 361-386.