Leadership as defining what’s successful

After spending a few days visiting friends and family in Central Queensland – not to mention enjoying the beach – a long 7+ hour drive home provided an opportunity for some thinking. I’ve long had significant qualms about the notion of leadership, especially as it is increasingly being understood and defined by the current corporatisation of universities and schools. The rhetoric is increasingly strong amongst schools with the current fashion for assuming that Principals can be the saviour of schools that have broken free from the evils of bureaucracy. I even work within an institution where a leadership research group is quite active amongst the education faculty.

On the whole, my experience of leadership in organisations has been negative. At best, the institution bumbles along under bad leadership. I’m wondering whether questioning this notion of leadership might form an interesting future research agenda. The following is an attempt to make concrete some thinking from the drive home, spark some comments, and set me up for some more (re-)reading. It’s an ill-informed mind dump, sparked somewhat by some early experiences on return from leave.

Fisherman’s beach by David T Jones, on Flickr

In the current complex organisational environment, I’m thinking that “leadership” is essentially the power to define what success is, both prior to and after the fact. I wonder whether any apparent success attributed to the “great leader” is solely down to how they have defined success? I’m also wondering how much of that success is due to less than ethical or logical definitions of success?

The definition of success prior to the fact is embodied in the model of process currently assumed by leaders, i.e. teleological processes, where the great leader must define some ideal future state (e.g. adoption of Moodle, Peoplesoft, or some other system; an organisational restructure that creates “one university”; or, perhaps even worse, a new 5-year strategic plan) behind which the weight of the institution will then be thrown. All roads and work must lead to the defined point of success.

This is the Dave Snowden idea of giving up the evolutionary potential of the present for the promise of some ideal future state, a point he’ll often illustrate with this quote from Seneca:

The greatest loss of time is delay and expectation, which depend upon the future. We let go the present, which we have in our power, and look forward to that which depends upon chance, and so relinquish a certainty for an uncertainty.

Snowden’s use of this quote comes from the observation that some systems/situations are examples of Complex Adaptive Systems (CAS). These are systems where traditional expectations of cause and effect don’t hold. When you intervene in such systems you cannot predict what will happen, only observe it in retrospect. In such systems the idea that you can specify up front where you want to go is little more than wishful thinking, so defining success – in these systems – prior to the fact is a little silly. This questions the assumptions underpinning such leadership, including the assumption that leaders can make a difference.

So when the Executive Dean of a Faculty – that includes programs in information technology and information systems – is awarded “ICT Educator of the Year” for the state because of the huge growth in student numbers, is it because of the changes he’s made? Or is it because he was lucky enough to be in power at (or just after) the peak of the IT boom? The assumption is that this leader (or perhaps his predecessor) made logical contributions and changes to the organisation to achieve this boom in student numbers. Or perhaps they made changes simply to enable the organisation to be better placed to handle and respond to the explosion in demand created by external changes.

But perhaps rather than this single reason for success (great leadership), it was instead that a large number of small factors – with no central driving intelligence or purpose – enabled this particular institution to achieve what it achieved. Similarly, when a few years later the same group of IT related programs had few if any students, it wasn’t because this “ICT Educator of the Year” had failed. Nor was it because of any other single factor, but instead because of hundreds and thousands of small factors, both internal and external (some larger than others).

The idea that there can be a single cause (or a single leader) for anything in a complex organisational environment seems to be faulty. But because it is demanded of them, leaders must spend more time attempting to define success and convince people of it. In essence, then, successful leadership becomes more about your ability to define success and to promulgate wide acceptance of that definition.

KPIs and accountability galloping to help

This need to define and promulgate success is aided considerably by simple numeric measures: the number of student applications; DFW rates; numeric responses on student evaluation of courses – did you get 4.3?; journal impact factors and article citation metrics; and many, many more. These simple figures make it easy for leaders to define specific perspectives on success. This is problematic, and its many problems are well known. For example,

  • Goodhart’s law – “When a measure becomes a target, it ceases to be a good measure.”
  • Campbell’s law – “The more any quantitative social indicator (or even some qualitative indicator) is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor.”
  • the Lucas critique.

For example, you have the problem identified by Tutty et al (2008) where, rather than improve teaching, institutional quality measures “actually encourage inferior teaching approaches” (p. 182). It’s why you have the LMS migration project receiving an institutional award for quality, even though for the first few weeks of the first semester it was largely unavailable to students due to dumb technical decisions by the project team, and required a large additional investment in consultants to fix.

Would this project have received the award if a senior leader in the institution (and the institution itself) had not been heavily reliant upon the project being seen as a success?

Would the people involved in giving the project the award have reasonable reasons for thinking it award winning? Is success of the project and of leadership all about who defines what perspective is important?

Some other quick questions

Some questions for me to consider.

  • Where does this perspective sit within the plethora of literature on leadership and organisational studies, especially within the education literature? How much of this is influenced by my earlier reading of “Managing without Leadership: Towards a Theory of Organizational Functioning”?
  • Given the limited likelihood of changing how leadership is practiced within the current organisational and societal context, how do you act upon any insights this perspective might provide? i.e. how the hell do I live (and heaven forbid thrive) in such a context?

References

Tutty, J., Sheard, J., & Avram, C. (2008). Teaching in the current higher education environment: perceptions of IT academics. Computer Science Education, 18(3), 171–185.

Learning analytics and complexity

Another day and another #ascilite12 paper to think about. This is the one where my periodic engagement is potentially driving my co-author slightly crazy. I’m sure this contribution will add further to that.

The idea

The basic idea of the paper is to

  1. Draw on a few more insights/patterns from the data gathered as part of the Indicators project.
    This includes usage data from a single Australian university from 3 different “Learning Management Systems” over a 10+ year period.
  2. Use the lens of complex adaptive systems to
    1. Identify potential problems with existing trends in the application of learning analytics.
    2. Identify some potential alternative applications of learning analytics.

The idea builds on some of Col’s earlier thinking in this area and is likely to inform some of the next steps we take in this area.

Potential problems

The problems that we seem to have identified so far are

  1. The hidden complexity behind the simple patterns.
    1. Abstraction losing detail.
    2. Decomposition preventing action.
  2. It’s not a causal system
    1. Correlation, causation confusion.
    2. Overweening conceit of causality.

There must be more of these problems. I do wonder if a closer reading of some of the CAS literature would provide more insights.

For each of these problems we’re hoping to

  • Illustrate the nature of the problem with evidence from the data.
  • Offer insight into why this is a problem from Complex Adaptive Systems theory.
    Morrison (2006) gives a good overview of CAS, some of its applications to education, and some limitations.
  • Suggest a better approach based on the insights from CAS.

Hidden complexity – abstraction losing detail

This in part picks up and illustrates the message Gardner Campbell delivered in his LAK’12 presentation “Here I stand”, i.e. that the nature of learning analytics, and its reliance on abstracting patterns or relationships from data, has a tendency to hide the complexity of reality. Especially when used for decision making by people who are not directly engaged in that reality.

Col has illustrated this in this post using the traditional relationship between LMS use and grades (more use == better grades). The nice trend gets interrupted when you start looking at the complexity behind it. For example, one student who achieved an HD in every course had widely varying numbers of posts/replies across different courses. This is similar to Ken’s discoveries when looking at his own teaching: the same academic, in a few different courses, had widely varying practices.
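To make the abstraction problem concrete, here’s a minimal sketch (in Python, with entirely made-up numbers) of the kind of aggregation that produces the nice trend while throwing away the per-student variance behind it. The dataframe, column names and values are hypothetical, not the Indicators project’s actual data or schema.

```python
import pandas as pd

# Hypothetical per-student, per-course LMS activity (not real Indicators data).
activity = pd.DataFrame({
    "student": ["s1", "s1", "s1", "s2", "s2", "s3"],
    "grade":   ["HD", "HD", "HD", "P",  "P",  "F"],
    "posts":   [42,    3,    17,   8,    6,    1],
})

# The usual "more use == better grades" pattern: average posts per grade band.
print(activity.groupby("grade")["posts"].mean())

# The complexity the average hides: the same HD student ranges from 3 to 42
# posts depending on the course.
print(activity.groupby(["grade", "student"])["posts"].agg(["min", "max"]))
```

The grade-level averages tell a tidy story; the per-student spread is where the messy reality lives.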

A concrete example is management passing a rule that every course must have a course website that includes a discussion forum, even when for entirely appropriate reasons an entirely on-campus course decides it’s not appropriate.

Decomposition preventing action

The structure and organisation of universities are based on top-down decomposition. All of the various required functions are identified and divided into various parts: HR, IT, the teaching academics in faculties, and so on. With each decomposition there is a loss of the whole; each sub-component starts to focus on its own bit. This is where you get IT departments focusing on uptime, security and risk regardless of the effects this has on the quality of learning.

You can see the effect of this in the learning analytics literature: ALTC grants having to take a browser/client-based approach to tool development because the IT department won’t provide access to the database. It’s one of the reasons why the Indicators project is a little further ahead than most, even though we are a very small group. Through a series of accidents we had access to the data and the skills necessary to do something with it.

The effect is also visible in the location of data. Student results are in the student records system, LMS activity is in the LMS, and so on. This is one reason why “dashboards” have become the organisational solution to learning analytics: they bring the data into a single system maintained by the data mining folk within the institution, and it’s what those folk already do, so why not do it for learning analytics? The only trouble is that the real value of the patterns revealed by learning analytics lies within the learning environment (the LMS for most), not in yet another system. This is a contributing factor to some of the limitations of the tools and the difficulty staff and students have using them.

The major difficulty for learning analytics is that action in response to learning analytics takes place at the teaching/learning coal-face, not in the dashboard system or the other places inhabited by senior management and support staff.

It’s not a causal system

University senior management assume that they can manipulate the behaviour of people. For example, there is the lovely quote I often use from an LMS working group, where one of the technical people suggested “that we should change people’s behaviour because information technology systems are difficult to change”. In a complex system, that sort of causality simply isn’t there.

For example, when Moodle was introduced at the institution in question there was grave concern about how few Blackboard course sites actually contained discussion forums. The solution was the implementation of “minimum course standards”, accompanied by a checklist that was ticked by the academic and double-checked by the moderator to assure certain standards (e.g. a discussion forum) were implemented. Subsequent data reveals that while all courses may have had a discussion forum, a significant proportion of those forums have fewer than 5 posts. This is the mistaken “overweening conceit of causality”.
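As a rough illustration of how you might check whether the checklist changed anything beyond the checklist, here’s a hedged sketch. The CSV file and column names are assumptions for the sake of illustration, not the real LMS schema or the actual figures.

```python
import pandas as pd

# Hypothetical export of per-course forum activity; the file and column names
# are illustrative assumptions, not the actual LMS tables.
forums = pd.read_csv("course_forum_posts.csv")  # columns: course_id, posts

total = len(forums)
token_forums = (forums["posts"] < 5).sum()

# Every course "complies" with the minimum standard (a forum exists), but a
# large share of those forums are effectively unused.
print(f"{token_forums}/{total} courses ({token_forums / total:.0%}) "
      "have a discussion forum with fewer than 5 posts")
```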

Then there is the obvious confusion between correlation and causation, i.e. simply because HD students average more use of the LMS, this doesn’t mean that if all students use the LMS more they’ll get better marks.
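A small simulated example (Python, entirely artificial numbers) of why that inference fails: if some unobserved factor drives both LMS use and marks, the two will correlate strongly even though pushing use up does nothing to marks.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5000

# Hypothetical data-generating process: an unobserved factor ("engagement")
# drives both LMS activity and marks. Nothing here is real institutional data.
engagement = rng.normal(size=n)
lms_use = 10 + 3 * engagement + rng.normal(size=n)
marks = 60 + 8 * engagement + rng.normal(size=n)

# Strong correlation between LMS use and marks...
print(np.corrcoef(lms_use, marks)[0, 1])

# ...yet marks are generated without any reference to lms_use, so mandating
# more LMS use would change nothing.
```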

Some alternatives

Okay, so given these problems, what might you do differently? A few initial suggestions:

  • Put a focus on the technology to aid sense-making and action especially to aid academics and students.
    The technology can’t know enough about the context to make decisions. It can, however, help the people make decisions.
  • Put this into the learning environment currently used by these folk (i.e. the LMS).
    It has to be a part of the environment. If it’s separate, it won’t be used.
  • Break-down the silos.
    Currently, much of learning analytics is within a course or across an institution, or perhaps focused on a specific student. Academics within a program need to be sharing their insights and actions. Students need to be able to see how they are going against others…

This is not meant to represent a new direction for the practice of learning analytics. Rather one interesting avenue for further research.

References

Morrison, K. (2006). Complexity theory and education. APERA Conference, Hong Kong (pp. 1-12). Retrieved from http://edisdat.ied.edu.hk/pubarch/b15907314/full_paper/SYMPO-000004_Keith Morrison.pdf

Measuring the design process – implications for learning design, e-learning and university teaching

I came across the discussion underpinning these thoughts via a post by Scott Stevenson. His post was titled “Measuring the design process”. It is his take on a post titled “Goodbye Google” by Doug Bowman. Bowman was the “Visual Design Lead” at Google and has recently moved to Twitter as Creative Director.

My take on the heart of the discussion is the mismatch between the design and engineering cultures. Design is okay with relying on experience and intuition as the basis for a decision, while the engineering culture wants everything measured, tested and backed up by data.

In particular, Bowman suggests that the reason for this data-driven reliance is

a company eventually runs out of reasons for design decisions. With every new design decision, critics cry foul. Without conviction, doubt creeps in. Instincts fail. “Is this the right move?”

The doubt, the lack of a reason, purpose, or vision for a change creates a vacuum that needs to be filled. There needs to be some reason to point to for the decision.

When a company is filled with engineers, it turns to engineering to solve problems. Reduce each decision to a simple logic problem. Remove all subjectivity and just look at the data. Data in your favor? Ok, launch it. Data shows negative effects? Back to the drawing board.

There can’t be anything wrong with that, can there? If you’re rational and have data to back you up then you can’t be blamed. Bowman suggests that there is a problem:

And that data eventually becomes a crutch for every decision, paralyzing the company and preventing it from making any daring design decisions.

He goes on to illustrate the point, where the focus goes to small questions – should a border be 3, 4 or 5 pixels wide – while the big questions, the important questions that can make a design distinctive become ignored. This happens because hard problems are hard and almost certainly impossible to gather objective data for.

Stevenson makes this point

Visual design is often the polar opposite of engineering: trading hard edges for subjective decisions based on gut feelings and personal experiences. It’s messy, unpredictable, and notoriously hard to measure.

Learning design, e-learning and university teaching

This same problem arises in universities around learning design, e-learning and university teaching. The design of university teaching and learning has some strong connections with visual design. It involves subjective and contextual decisions, it’s messy, unpredictable and hard to measure.

The inherently subjective and messy nature of university teaching brings it into direct tension with two increasingly important and powerful cultures within the modern university:

  1. Corporate management; and
    Since sometime in the 90s, at least within Australia, corporate managerialism has been on the rise within universities. Newton (2003) has a nice section on some of the external factors that have contributed to this rise; I’ve summarised Newton here. Further underpinning this rise has been what Birnbaum (2000) calls “education’s Second Management Revolution” from around 1960, which “marks the ascendance of rationality in academic management”.
  2. Information technology.
    With the rise of e-learning and other enterprise systems, the corporate IT culture within universities is increasingly strong. In particular, from my cynical perspective, this is because they can talk the same “rational” talk as the management culture, back it up with reams of data (regardless of validity), and can always resort to techno-babble to confuse management.

Both these cultures put an emphasis on rationality, on having data to support decisions and on being able to quantify things.

Symptoms of this problem

Just taking the last couple of years, I’ve seen the following symptoms of this:

  • The desire to have a fixed, up-front estimate of how long it takes to re-design a course.
    I want you to re-design 4 courses. How long will it take?
  • The attempt to achieve quality through consistency.
    This is such a fundamentally flawed idea, but it is still around. Sometimes it is proposed by people who should know better. The idea that a single course design, word template or educational theory is suitable for all courses at an institution, let alone all learners, sounds good, but doesn’t work.
  • Reports indicating that the re-design and conversion of courses to a new LMS are XX% complete.
    Heard about this just recently. If you are re-designing a raft of different courses, taught by different people, in different disciplines, using different approaches, and then porting them to a new LMS, how can you say it is XX% complete? The variety in courses means that you can’t quantify how long it will take. You might have 5 or 10 courses completed, but that doesn’t mean you’re 50% complete. The last 5 courses might take much longer.
  • The use of a checklist to evaluate LMSes.
    This has to be the ultimate: using a checklist to reduce the performance of an LMS to a single number!!
  • Designing innovation by going out to ask people what they want.
    For example, let’s go and ask students or staff how they want to use Web 2.0 tools in their learning and teaching. Even that old “Fordist”, Henry Ford, the archetypal example of rationalism, knew better than this:

    “If I had asked people what they wanted, they would have said faster horses.”

The scary thing is, because design is messy and hard, the rational folk don’t want to deal with it. Much easier to deal with the data and quantifiable problems.

Of course, the trouble with this is summarised by a sign that (apparently) used to hang in Einstein’s office at Princeton:

Not everything that counts can be counted, and not everything that can be counted counts.

Results

This mismatch between rationality and the nature of learning and teaching leads, from my perspective, to most of the problems facing universities around teaching. Task corruption and a reliance on “blame the teacher”/prescription approaches to improving teaching arise from this mismatch.

This mismatch arises, I believe, for much the same reason as Bowman identified in his post about Google. The IT and management folk don’t have any convictions or understanding about teaching or, perhaps, about leading academics. Consequently, they fall back onto the age-old (and disproved) management/rational techniques, as they give the appearance of rationality.

References

Birnbaum, R. (2000). Management Fads in Higher Education: Where They Come From, What They Do, Why They Fail. San Francisco, Jossey-Bass.

The Dreyfus Model – From Novice to Expert

This presentation by Dave Thomas talks about the Dreyfus Model of Skill Acquisition and how it applies to software development. However, the ideas and insights seem to apply to a number of other contexts, in particular learning and teaching at universities. I certainly found a lot of points that resonated.

The content in this presentation is expanded upon in this book which is also available here.

Choosing your indicators – why, how and what

The unit I work with is undertaking a project called Blackboard Indicators. Essentially, it’s the development of a tool that will perform some automated checks on our institution’s Blackboard course sites and show some indicators which might identify potential problems or areas for improvement.

The current status is that we’re starting to develop a slightly better idea of what people are currently doing through use of the literature and also some professional networks (e.g. the Australasian Council on Open, Distance and E-learning) and have an initial prototype running.

Our current problem is how do you choose what the indicators should be? What are the types of problems you might see? What is a “good” course website?

Where are we up to?

Our initial development work has focused on three groupings of indicators: course content, coordinator presence and all interactions. There’s some more detail in this previous post.
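To make those groupings a little more concrete, here’s a minimal sketch of how the automated checks might hang together. All of the function names, fields and thresholds are hypothetical, a way of structuring the idea rather than the actual Blackboard Indicators code.

```python
# A hypothetical shape for the Blackboard Indicators checks; the fields and
# thresholds are illustrative assumptions only.

def check_content(site):
    """Course content: is the site available and is there anything in it?"""
    return {
        "site_available": site.get("available", False),
        "has_content_items": len(site.get("content_items", [])) > 0,
    }

def check_coordinator_presence(site):
    """Coordinator presence: any sign the coordinator is actually there?"""
    return {
        "coordinator_has_posted": site.get("coordinator_posts", 0) > 0,
        "recent_coordinator_login": site.get("days_since_coordinator_login", 999) <= 14,
    }

def check_interactions(site):
    """All interactions: raw counts of activity by staff and students."""
    return {
        "forum_posts": site.get("forum_posts", 0),
        "student_visits": site.get("student_visits", 0),
    }

def indicators(site):
    """Run all checks for one course site and group the results."""
    return {
        "course_content": check_content(site),
        "coordinator_presence": check_coordinator_presence(site),
        "all_interactions": check_interactions(site),
    }

# Example with a made-up course site record.
example = {"available": True, "content_items": ["outline.pdf"],
           "coordinator_posts": 0, "days_since_coordinator_login": 30,
           "forum_posts": 2, "student_visits": 150}
print(indicators(example))
```

The point of the structure is simply that each grouping can grow its own checks over time, without the tool pretending there is a single score for a “good” course site.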

Colin Beer has contributed some additional thinking about some potential indicators in a recent post on his blog.

Col and I have talked about using our blogs and other locations to talk through what we’re thinking to develop a concrete record of our thoughts and hopefully generate some interest from other folk.

Col’s list includes

  • Learner.
  • Instructor.
  • Content.
  • Interactions: learner/learner, learner/instructor, learner/content, instructor/content

Why and what?

In identifying a list of indicators, as when trying to evaluate anything, it’s probably a good idea to start with a clear definition of why you are doing it and what you are trying to achieve.

The stated purpose of this project is to help us develop a better understanding of how, and how well, staff are using their Blackboard course sites. In particular, we want to know about any potential problems (e.g. a course site not being available to students) that might cause a large amount of “helpdesk activity”. We would also like to know about trends across the board which might indicate the need for some staff development, improvements in the tools, or some support resources to improve the experience of both staff and students.

There are many other aims which might apply, but this is the one I feel most comfortable with, at the moment.

Some of the other aims include

  • Providing academic staff with a tool that can aid them during course site creation by checking their work and offering guidance on what might be missing.
  • Providing management with a tool to “check on” course sites they are responsible for.
  • Identifying correlations between characteristics of a course website and success.

The constraints we need to work within include

  • Little or no resources – implication being that manual, human checking of course sites is not currently a possibility.
  • Difficult organisational context due to on-going restructure – which makes it hard to get engagement from staff in a task that is seen as additional to existing practice and also suggests a need to be helping staff deal with existing problems more so than creating more work. A need to be seen to be working with staff to improve and change, rather than being seen as inflicting change upon them.
  • LMS will be changing – come 2010 we’ll be using a new LMS, whatever we’re doing has to be transportable.

How?

From one perspective there are two types of process which can be used in a project like this

  1. Teleological or idealist.
    A group of experts get together, decide and design what is going to happen and then explain to everyone else why they should use it and seek to maintain obedience to that original design.
  2. Ateleological or naturalist.
    A group of folk, including significant numbers of folk doing real work, collaborate to look at the current state of the local context and undertake a lot of small scale experiments to figure out if anything makes sense. They examine and reflect on those small scale experiments, chuck out the ones that didn’t work, and build on the ones that did.

(For more on this check out: this presentation video or this presentation video or this paper or this one.)

From the biased way I explained the choices, I think it’s fairly obvious which approach I prefer. A preference for the ateleological approach also means that I’m not likely to want to spend vast amounts of time evaluating and designing criteria based on my perspectives. It’s more important to get a set of useful indicators up and going, in a form that can be accessed by folk, and to have a range of processes by which discussion and debate is encouraged and then fed back into the improvement of the design.

The on-going discussion about the project is more likely to generate something more useful and contextually important than large up-front analysis.

What next then?

As a first step, we have to get something useful (for both us and others) up and going in a form that is usable and meaningful. We then have to engage with folk, find out what they think and where they’d like to take it next. In parallel with this is the idea of finding out, in more detail, what other institutions are doing and seeing what we can learn.

The engagement is likely going to need to be aimed at a number of different communities including

  • Quality assurance folk: most Australian universities have quality assurance folk charged with helping the university be seen by AUQA as being good.
    This will almost certainly, eventually, require identifying what are effective/good outcomes for a course website as outcomes are a main aim for the next AUQA round.
  • Management folk: the managers/supervisors at CQU who are responsible for the quality of learning and teaching at CQU.
  • Teaching staff: the people responsible for creating these artifacts.
  • Students: for their insights.

Initially, the indicators we develop should match our stated aim: to identify problems with course sites and become more aware of how they are being used. To a large extent this means not worrying about potential indicators of good outcomes and whether or not there is a causal link.

I think we’ll start discussing/describing the indicators we’re using and thinking about on a project page and we’ll see where we go from there.

Alternate foundations – the presentation

A previous post outlined the abstract for a presentation I gave last Monday on some alternate foundations for leadership of learning and teaching at CQUniversity. Well, I’ve finally got the video and slides online, so this post reflects on the presentation and gives access to the multimedia resources.

Reflection

It seemed to go over well but there’s significant room for improvement.

The basketball video worked well this time, mainly because the introduction was much better handled.

What was missing

  • Didn’t make the distinction between safe-fail and fail-safe projects.
  • Not enough time on implications, strategies and approaches to work with this alternate foundation.
  • The description of the different parts of the Cynefin Framework was not good.

The second point about strategies for working within this area is important, as the thinking outlined in the presentation is hopefully going to inform the PLEs@CQUni project.

The resources

The video of the presentation is on Google Video

The slides are on Slideshare

Some alternate foundations for leadership in L&T at CQUniversity

On Monday the 25th of August I am meant to be giving a talk that attempts to link complexity theory (and related topics) to the practice of leadership of learning and teaching within a university setting. The talk is part of a broader seminar series occurring this year at CQUniversity as part of the institution’s learning and teaching seminars. The leadership in L&T series is being pushed/encouraged by Dr Peter Reaburn.

This, and perhaps a couple of other blog posts, is meant to be a part of a small experiment in the use of social software. The abstract of the talk that goes out to CQUniversity staff will mention this blog post and some related del.icio.us bookmarks. I actually don’t expect it to work all that well, as I don’t have the energy to do the necessary preparations.

Enough guff, what follows is the current abstract that will get sent out.

Title

Some alternate foundations for leadership in L&T at CQUniversity

Abstract

Over recent years an increasing interest in improving the quality of university learning and teaching has driven a number of initiatives such as the ALTC, LTPF and AUQA. One of the more recent areas of interest has been the question of learning and teaching leaders. In 2006 and 2007 the ALTC funded 20 projects worth about $3.4M around leadership in learning and teaching. Locally, there has been a series of CQUniversity L&T seminars focusing on the question of leadership in L&T.

This presentation arises from a long-term sense of disquiet about the foundations of much of this work, an on-going attempt to identify the source of this disquiet and find alternate, hopefully better, foundations. The presentation will attempt to illustrate the disquiet and explain how insights from a number of sources (see some references below) might help provide alternate foundations. It will briefly discuss the implications these alternate foundations may have for the practice of L&T at CQUniversity.

This presentation is very much a work in progress and is aimed at generating an on-going discussion about this topic and its application at CQUniversity. Some parts of that discussion, and a gathering of related resources, are already occurring online at
http://cq-pan.cqu.edu.au/david-jones/blog/?p=202
feel free to join in.

References and Resources

Snowden, D. and M. Boone (2007). A leader’s framework for decision making. Harvard Business Review 85(11): 68-76

Lakomski, G. (2005). Managing without Leadership: Towards a Theory of Organizational Functioning, Elsevier Science.

Davis, B. and D. Sumara (2006). Complexity and education: Inquiries into learning, teaching, and research. Mahwah, New Jersey, Lawrence Erlbaum Associates