Eduhacking – a better use for (part of) academic conferences?

In short, can we get an Eduhack-style event running at ASCILITE’12? Want to help? If you want, skip to the point.

 

Possibly the most productive conference I’ve ever attended was the 1996 ITiCSE Conference in Barcelona. (It seems the conference has since evolved from “Integrating Technology into CS Education” to “Innovation and Technology in CS Education”.) Apart from being my first trip to Spain, the conference introduced me to something different in terms of conferences: the working groups.

We were the first set of working groups and at that stage it worked a bit like this:

  • Someone came up with a topic – in our case “World Wide Web as an Interactive Teaching Resource”.
  • They called for participants.
  • We started collaborating ahead of the conference.
  • During the conference we (based on my vague recollection of 16 years ago)
    • Worked for a day or two before the conference proper started.
    • Did some work during the conference, including presenting a “poster” on our current progress. (apparently shown in the image below)
    • Did some final work at the end of/after the conference.
  • Produced a final document.

Poster of working group

The benefit

The biggest benefit that flowed from that event was meeting the co-author of the book we wrote, which (even with its limitations) remains the most cited of my publications. Without being a member of the working group with my co-author, the book would never have been written.

Having to work with folk at a conference on a specific project, rather than sit and listen or sit and network over drinks, provides additional insights and connections. It can also be a bit more challenging, but nothing in life is free.

The wasted opportunity

This type of approach seems to address the wasted opportunity that is most conferences. You have all these talented, diverse and skilled folk in the one location, but limit their interaction to presentations, panels and socialising. Nothing, at least in my experience, works to bring those diverse perspectives together to produce something.

For a long time, I’ve been wondering if something different is possible.

Looking for alternatives

The ITiCSE working group approach was okay, but fairly traditional; it aimed to produce an academic paper. I was only involved with the first one, so it would be interesting to see how they’ve evolved and changed based on the experience.

The REACT project tried to ensure that planned innovations in L&T benefited from diverse perspectives before implementation. But, like the working groups, it used an academic paper as the main artifact. REACT never really went anywhere.

And then there is Eduhacking in the style used by @sthcrft and @stuffy65 at UNE, and in particular @sthcrft’s call

do we need a cross-institution eduhack event? From my point of view, anything that’s about collaborating on ideas and possibilities has got to be better than yet another show and tell event. Who’s in?

I’m thinking: Yes and me. The question is where to now?

How might it work?

Education Hack Day describes the aim this way

The mission was simple: listen to problems sourced by teachers from around the world, pick a dozen or so to tackle, and form teams around those problems that would each come up with and execute a creative solution to solve them.

This seems to have been based on the older/broader idea, from the developer world, of a hackathon. As with the UNE experiment, the focus here wouldn’t necessarily be on software developers, but on a broader cross-section of people.

So a process might be:

  • Pick a conference, one that has a good cross section of educational technology type folk.
    For example, ASCILITE’12.
  • Run an Eduhack day just before the conference proper starts, probably as a workshop.
  • Actively encourage a diverse collection of folk to come along.
  • Distribute a call for problems prior to the conference.
  • Ensure that the setting for the Eduhack is appropriate (i.e. not the normal conference breakout room).
  • Have a loose process to select the problems to be worked on and let folk go.
  • Have some of the outcomes presented as posters during the conference.
  • Encourage/enable groups to keep working on the problems post-conference, perhaps for presentation as traditional papers at the next conference?

I’m sure there are improvements to be made. Who’s interested?

The biggest flaw in university L&T/e-learning?

Welcome folk from UHI. Hope you find this interesting. Your e-learning portal is here. Good luck with it all.

Over recent years I’ve been employed in a position meant to help improve the quality of learning and teaching (and e-learning) at a university. If all goes according to plan, I may well have a related position for the next few years, at least. This post is intended to identify, and provide some early insights into, what I think is the biggest flaw in my own practice, and in the practice of most universities, when it comes to learning and teaching and attempts to improve it.

The adjective “biggest” is not intended to indicate certainty; there may be bigger flaws, and there are certainly other flaws. But at this point in time, given my current thinking, this is what I think is the biggest flaw. The term “flaw” could also be replaced by “hurdle” and/or “barrier”.

The biggest flaw?

Over the last couple of years, as I’ve had a less than positive experience, I’ve increasingly become convinced that the biggest flaw in individual and organisational attempts to improve learning and teaching is quite simply that there is no widely accepted measure of what is good or bad learning and teaching. There are two main problems with the approaches that are used:

  1. They don’t work.
  2. They are not widely accepted as valuable.

This absence of an effective measure leads to what I’ve talked about in a recent post – task corruption and the observation that task corruption occurs most frequently with tasks where it is difficult to define or measure the quality of service. Learning and teaching within a university, for me at least and especially when applied to institutions that I’m familiar with, suffers from just this flaw.

Most, if not all, of the problems, debates, struggles and political fire-storms around learning and teaching within universities can be tracked down to the uncertainty about what is quality learning and teaching.

They don’t work

At this point in time I am pretty certain that the following methods don’t work (at least not by themselves, and probably not even when complemented by other methods):

  • Student results – given the realities of university learning and teaching I don’t believe (a belief backed by the published research of others) that these are a good indication of student learning. Certainly not for comparison purposes between offerings of courses, especially those taught by different staff or across disciplines.
  • Level 1 smile sheets – i.e. the majority of what passes for learning and teaching “evaluation” at universities in Australia. Surveys of students at the end of courses or programs asking how they felt. This is broken.

Absence of wide acceptance

Now there may be methods to measure the quality of learning and teaching that do work. You may know of some, feel free to share them. But the point is that when it comes to the complexity and diversity inherent in the organisational practice of learning and teaching within higher education, there is no method that is broadly accepted.

The absence of this broad acceptance, and of the widespread, disciplined use that would follow from it, voids whatever validity the evaluation method may have. Unless senior management, middle management, coal-face practitioners and all other stakeholders see the value of the measure, it doesn’t matter whether it works.

Teaching is not rocket science

This lack of acceptance is not unexpected, as teaching is a wicked design problem – a point made by the quote from Prof Richard Elmore in the attached photo. Many of the defining characteristics of wicked design problems make it very difficult to get wide acceptance of a solution. For example, from the Wikipedia page:

  • There is no definitive formulation of a wicked problem.
    i.e. everyone will have their own understanding of the problem, which implies their own beliefs about what the solution should be.
  • There is no immediate and no ultimate test of a solution to a wicked problem.
    “No ultimate test of a solution”, makes it somewhat hard to evaluate and measure.

Impacts on improving learning and teaching?

In the absence of any measure of quality learning and teaching, I can’t see how you can possibly implement any improvements to learning and teaching within a university in any meaningful way. If you can’t measure it and get broad acceptance of the value, then whatever you do is likely not to be accepted and will eventually be replaced.

Over the 19 years I’ve been involved with learning and teaching at Universities I’ve seen the negative ramifications of this again and again. Some examples include:

  • Resting on their laurels (a foundation built of sand).
    I’ve heard any number of academics proudly claim that they are brilliant teachers or that their courses are fantastic – only to take those courses as a student, hear from other students, or take over the courses and discover the reality. In the absence of any effective and accepted measure of teaching quality it’s possible to defend any practice, including doing nothing to improve.
  • Fearing change and reverting to past practice.
    People hate change. When there are different measures of value/outcome, it’s possible to ignore something good, especially when it is different. I’ve seen courses re-designed by talented teachers or instructional designers get thrown in the bin.
  • Task corruption.
    In some cases the “good design” hasn’t been trashed, it has been “corrupted” – as in task corruption. For example, an approach based on reflective journals has its questions modified so they don’t encourage deeper reflection (such questions are easier to come up with and easier to mark), and the steps necessary to support and encourage students to reflect are dropped. So the reflective journal is still “there”, but its use has been corrupted.

Disclaimer and request for insights and case studies

Perhaps all of the above is due to the limitations of my experience and knowledge. If you know better, please feel free to share.

What next?

This is not new. So why talk about it? Well, it is a problem that will have to be addressed in some way. So this post is an attempt to think about the problem, identify its outcomes and start me thinking about how/if it can be solved.

Reflective problematisation – description of reflection in “reflective alignment”?

Thinking about reflective alignment, I came across the following quote in Booth and Anderberg (2005). Thought it might be useful so am saving it here.

the equally important notion of reflective problematization – deliberately distancing oneself from the familiar, deliberately avoiding the taken-for-granted and considering the alternatives that might be at hand, relating to theories and experience and reaching an analytical insight into productive change.

The connection with “reflective alignment” is that this is a pretty good description of the type of reflection which I observe in the “good” teachers. It’s the type of reflection “reflective alignment” would seek to encourage and enable.

References

Booth, S. and E. Anderberg (2005). “Academic development for knowledge capabilities: Learning, reflecting and developing.” Higher Education Research & Development 24(4): 373-386.

Patterns for e-learning – a lost opportunity or destined to fail

In the following I reflect on my aborted and half-baked attempts at harnessing design patterns within the practice of e-learning at universities and wonder whether it was a lost opportunity and/or a project that was destined to fail. This is written in the light shed by the work of a number of other folk (Google “patterns for e-learning”), including the current JISC-emerge project and, I believe, the related Pattern Language Network.

I think I’ll end up contending that it was destined to fail and hope I can provide some justification for that. Or at least that’s what I currently think, before writing the following. Any such suggestion will be very tentative.

Context

Way back in 1999, as a young, naive guy at the crossroads of software development and e-learning, I was wondering why more academics weren’t being innovative. Actually, the biggest and most troubling question was much simpler: “Why were they repeating the same mistakes I and others had made previously?”. For example, I lost count of the number of folk who tried to use email for online assignment submission in courses with more than 10 or 20 students, even though many folk had tried it before them, had problems, and talked about the additional workload it creates.

At the same time I was looking at how to improve the design of Webfuse, the e-learning system I was working upon, and object-oriented programming seemed like a good answer (it was). Adopting OOP also brought me into contact with the design patterns community within the broader OOP community. Design patterns within OOP were aimed at solving many of the same problems I was facing with e-learning.

Or perhaps this was an example of Kaplan’s law of the instrument, i.e. patterns were the hammer and the issues around e-learning looked like a nail.

Whatever the reason, some colleagues and I tried to start up a patterns project for online learning (I’m somewhat amazed that the website is still operating). The “why page” for the project explains the rationale. We wrote a couple of papers explaining the project (Jones and Stewart, 1999; Jones, Stewart and Power, 1999), gave a presentation (the audio for the presentation is there in RealAudio format, which shows how old this stuff is) and ran an initial workshop with some folk at CQU. One of the publications was also featured in ERIC and on OLDaily.

The project did produce a few patterns before dying out:

There’s also one that was proposed but nothing concrete was produced – “The Disneyland Approach”. This was based on the idea of adapting ideas from how Disney designs their theme parks to online learning.

I can’t even remember what all the reasons were. Though I did get married a few months afterwards and that probably impacted my interest in doing additional work. Not to mention that my chief partner in crime also left the university for the paradise of private enterprise around the same time. That was a big loss.

One explanation and a “warning” for other patterns projects?

At the moment I have a feeling (it needs to be discussed and tested to become more than that) that these types of patterns projects are likely to be very difficult to get to work within the e-learning environment, especially if the aim is to get a broad array of academics to, at least, read and use the patterns. If the aim is to get a broad array of academics to contribute patterns, then I think it becomes almost impossible. This feeling/belief is based on three “perspectives” that I’ve come to draw upon recently:

  1. seven principles for knowledge management that suggest pattern mining will be difficult;
  2. the limitations of using the Technologists’ Alliance to bridge the gap; and
  3. people (and academics) aren’t rational, which is why they won’t use patterns when designing e-learning.

7 Principles – difficulty of mining patterns

Developing patterns is essentially an attempt at knowledge management. Pattern mining is an attempt to capture what is known about a solution and its implementation and distill it into a form that is suitable for others to access and read. To abstract that knowledge.

Consequently, I think the 7 principles for knowledge management proposed by Dave Snowden apply directly to pattern mining. To illustrate the potential barriers here’s my quick summary of the connection between these 7 principles and pattern mining.

  1. Knowledge can only be volunteered; it cannot be conscripted.
    The first barrier to engaging academics in sharing knowledge to aid pattern mining is getting them to volunteer. By nature, people don’t share complex knowledge unless they know and trust you. Even then, if they’re busy…. This has been known about for a while.
  2. We only know what we know when we need to know it.
    Even if you get them to volunteer, then chances are they won’t be able to give you everything you need to know. You’ll be asking them out of the context when they designed or implemented the good practice you’re trying to abstract for a pattern.
  3. In the context of real need few people will withhold their knowledge.
    Pattern mining is almost certainly not going to be in a situation of real need. i.e. those asking aren’t going to need to apply the provided knowledge to solve an immediate problem. We’re talking about abstracting this knowledge into a form someone may need to use at some stage in the future.
  4. Everything is fragmented.
    Patterns may actually be a good match here, depending on the granularity of the pattern and the form used to express it. Patterns are generally fairly small documents.
  5. Tolerated failure imprints learning better than success.
    Patterns attempt to capture good practice, which violates this adage. The idea of anti-patterns may be more useful, though not without its own problems.
  6. The way we know things is not the way we report we know things.
    Even if you are given a very nice, structured explanation as part of pattern mining, chances are that’s not how the design decisions were made. This principle has interesting implications for how/if academics might harness patterns to design e-learning. If the patterns become “embedded” in academics’ own “pattern matching” processes, it might just succeed. But that’s a big if.
  7. We always know more than we can say, and we will always say more than we can write down.
    The processes used to pattern mine would have to be well designed to get around this limitation.

Limitations of the technologists’ alliance

Technology adoption life-cycle - Moore's chasm

Given that pattern mining directly with coal-face academics is difficult for the above reasons, a common solution is to use the “Technologists’ Alliance” (Geoghegan, 1994), i.e. the collection of really keen and innovative academics, and the associated learning designers and other folk, who fit into the two left-hand categories of the technology adoption life cycle – those to the left of Moore’s chasm.

The problem with this is that the folk on the left of Moore’s chasm are very different to the folk on the right (the majority of academic staff). What the lefties think appropriate is not likely to match what the righties are interested in.

Geoghegan (1994) goes so far as to claim that the “alliance”, and the difference between it and the righties, has been the major negative influence on the adoption of instructional technology.

Patterns developed by the lefties are likely to be in a language not understood by the righties, and to solve problems that the righties aren’t interested in and probably weren’t even aware existed. Which isn’t going to contribute positively to adoption.

People aren’t rational decision makers

The basic idea of gathering patterns is that coal face academics will be so attracted to the idea of design patterns as an easy and effective way to design their courses that they will actually use the resulting pattern language to design their courses. This ignores the way the human mind makes decisions.

People aren’t rational. Most academics are not going to follow a structured approach to the design of their courses. Most aren’t going to quickly adopt a radically different approach to learning and teaching. Not because they’re recalcitrant mongrels more interested in research (or doing nothing), but because they have the same biases and ways of thinking as the rest of us.

I’ve talked about some of the cognitive biases or limitations on how we think in previous posts including:

In this audio snippet (mp3) Dave Snowden argues that any assumption of rational, objective decision making that entails examining all available data and examining all possible alternate solutions is fighting against thousands of years of evolution.

Much of the above applies directly to learning and teaching where the experience of most academics is that they aren’t valued or promoted on the value of their teaching. It’s their research that is of prime concern to the organisation, as long as they can demonstrate a modicum of acceptable teaching ability (i.e. there aren’t great amounts of complaints or other events out of the ordinary).

In this environment with these objectives, is it any surprise that they aren’t all that interested in spending vast amounts of time to overcome their cognitive biases and limitations to adopt radically different approaches to learning and teaching?

Design patterns anyone?

It’s just a theory

Gravity, just a theory

Remember what I said above, this is just a theory, a thought, a proposition. Your mileage may vary. One of these days, when I have the time and if I have the inclination I’d love to read some more and maybe do some research around this “theory”.

I have another feeling that some of the above has significant negative implications for much of the practice of e-learning and for attempts to improve learning and teaching in general – in particular, other approaches that attempt to improve the design processes used by academics by coming up with new abstractions, for example learning design and tools like LAMS. To some extent the above might partially explain why learning objects (in the formal sense) never took off.

Please, prove me wrong. Can you point to an institution of higher education where the vast majority of teaching staff have adopted an innovative approach to the design or implementation of learning? I’m talking at least 60/70%.

If I were setting the bar really high, I would ask for proof that they weren’t simply being seen to comply with the innovative approach, but were actively engaging with it and embedding it into their everyday thinking about teaching.

What are the solutions?

Based on my current limited understanding and the prejudices I’ve formed during my PhD, I believe that what I currently understand about TPACK offers some promise. Once I read some more I’ll be more certain. There is a chance that it may suffer many of the same problems, but my initial impressions are positive.

References

Geoghegan, W. (1994). Whatever happened to instructional technology? 22nd Annual Conferences of the International Business Schools Computing Association, Baltimore, MD, IBM.

Jones, D. and S. Stewart (1999). “The case for patterns in online learning.” Proceedings of Webnet’99, De Bar, P. & Legget, J. (eds), Association for the Advancement of Computing in Education, Honolulu, Hawaii, Oct 24-30, pp 592-597.

Jones, D., S. Stewart and L. Power (1999). “Patterns: Using proven experience to develop online learning.” Proceedings of ASCILITE’99, Responding to Diversity, Brisbane: QUT, pp 155-162.

Getting half-baked ideas out there: improving research and the academy

In a previous post examining one reason folk don’t take to e-learning I included the following quote from a book by Carolyn Marvin

the introduction of new media is a special historical occasion when patterns anchored in older media that have provided the stable currency for social exchange are reexamined, challenged, and defended.

In that previous post I applied this idea to e-learning. In this post I’d like to apply this idea to academic research.

Half-baked ideas

In this post Jon Udell talks about the dissonance between the nature of blogs, the narrative form he recommends for blogs and the practices of academics. In it he quotes an academic’s response to his ideas for writing blogs as

I wouldn’t want to publish a half-baked idea.

Jon closes the blog post with the following paragraph

That outcome left me wondering again about the tradeoffs between academia’s longer cycles and the blogosphere’s shorter ones. Granting that these are complementary modes, does blogging exemplify agile methods — advance in small increments, test continuously, release early and often — that academia could use more of? That’s my half-baked thought for today.

I think this perspective sums it up nicely. The patterns of use around the old/current media for academic research (conference and journal papers) are similar to heavyweight software development methodologies: they rely on a lot of up-front analysis and design to ensure that the solution is 100% okay. The patterns of use of the blogosphere, by contrast, are much more like agile development methods: make small changes, get it working, get it out, and learn from that experience to inform the next small change.

Update: This post talks a bit more about Udell’s views in light of a talk he gave at an EDUCAUSE conference. There is a podcast of the presentation.

There are many other examples of this, just two include:

Essentially, the standard practices associated with research projects in academia prevent many folk from getting “half-baked ideas” out into the blogosphere. There are a number of reasons, but most come back to not wanting to look like a fool. I’ve seen this many times with colleagues wanting to spend vast amounts of time polishing a blog post before publishing it.

As a strong proponent and promoter of ateleological design processes, I’m interested in how this could be incorporated into research. Yesterday, in discussions with a colleague, I think we decided to give it a go.

What we’re doing and what is the problem?

For varying reasons, Col and I are involved, in different ways, with a project going under the title of the indicators project. However, at the core of our interest is the question

How do you data mine/evaluate usage statistics from the logs and databases of a learning management system to draw useful conclusions about student learning, or the success or otherwise of these systems?

This is not a new set of questions. The data mining of such logs is quite a common practice, with its own collection of approaches and publications. So, the questions for us become:

  • How can we contribute or do something different than what already exists?
  • How can we ensure that what we do is interesting and correct?
  • How do we effectively identify the limitations and holes underpinning existing work and our own work?

The traditional approach would be for us (or at least Col) to go away, read all the literature, do a lot of thinking and come up with some ideas that are tested. The drawback of this approach is that there is limited input from other people with different perspectives. A few friends and colleagues of Col’s might get involved during the process, however, most of the feedback comes at the end when he’s published (or trying to publish) the work.

This might be too late. Is there a way to get more feedback earlier? To implement Udell’s idea of release early and release often?

Safe-fail probes as a basis for research

The nature of the indicators project is that there will be a lot of exploration to see if there are interesting metrics/analyses that can be done on the logs to establish useful KPIs, measurements etc. Some will work, some won’t and some will be fundamentally flawed from a statistical, learning or some other perspective.

So rather than do all this “internally” I suggested to Col that we blog any and all of the indicators we try and then encourage a broad array of folk to examine and discuss what was found. Hopefully generate some input that will take the project in new and interesting directions.

Col’s already started this process with the latest post on his blog.

In thinking about this I can come up with at least two major problems to overcome:

  • How to encourage a sufficient number and diversity of people to read the blog posts and contribute?
    People are busy. Especially where we are. My initial suggestion is that it would be best if the people commenting on these posts included expertise in: statistics; instructional design (or associated areas); a couple of “coal-face” academics of varying backgrounds, approaches and disciplines; a senior manager or two; and some other researchers within this area. Not an easy group to get together!
  • How to enable that diversity of folk to understand what we’re doing and for us to understand what they’re getting at?
    By its nature this type of work draws on a range of different expertise. Each expert will bring a different set of perspectives and will typically assume everyone is aware of them. We won’t be. How do you keep all this at a level that everyone can effectively share their perspectives?

    For example, I’m not sure I fully understand all of the details of the couple of metrics Col has talked about in his recent post. This makes it very difficult to comment on the metrics and re-create them.

Overcoming these problems, in itself, is probably a worthwhile activity. It could establish a broader network of contacts that may prove useful in the longer term. It would also require that the people sharing perspectives on the indicators would gain experience in crafting their writing in a way that maximises understandability by others.

If we’re able to overcome these two problems it should produce a lot of discussion and ideas that contributes to new approaches to this type of work and also to publications.

Questions

Outstanding questions include:

  • What are the potential drawbacks of this idea?
    The main fear, I guess, is that someone not directly involved in the discussion steals the ideas and publishes them, unattributed, before we can publish. There’s probably also a chance that we’ll look like fools.
  • How do you attribute ideas and handle authorship of publications?
    If a bunch of folk contribute good ideas which we incorporate and then publish, should they be co-authors, simply referenced appropriately, or something else? Should it be a case by case basis with a lot of up-front discussion?
  • How should it be done?
    Should we simply post to our blogs and invite people to participate and comment on the blogs? Should we make use of some of the ideas Col has identified around learning networks? For example, agree on common tags for blog posts and del.icio.us etc. Provide a central point to bring all this together?
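The “common tags” idea in that last question could be mechanised quite simply: each participating blog tags relevant posts with an agreed tag, and a small script pulls the matching items from everyone’s RSS feed into one place. A minimal sketch follows; the feed content and the tag name “indicators” are invented for illustration, and a real version would fetch each feed over HTTP.

```python
# A sketch of the "agreed tag" aggregation idea: collect items from
# participants' RSS feeds that carry a shared tag. Feed content and
# the tag name are hypothetical.
import xml.etree.ElementTree as ET

SHARED_TAG = "indicators"

# In practice each string would be fetched from a participant's feed URL
feeds = ["""
<rss><channel>
  <item><title>First metric</title><category>indicators</category></item>
  <item><title>Holiday photos</title><category>travel</category></item>
</channel></rss>
"""]

def tagged_items(feed_xml, tag):
    """Return titles of feed items whose <category> matches the tag."""
    root = ET.fromstring(feed_xml)
    return [
        item.findtext("title")
        for item in root.iter("item")
        if any(c.text == tag for c in item.findall("category"))
    ]

matched = [title for feed in feeds for title in tagged_items(feed, SHARED_TAG)]
print(matched)  # only the posts carrying the agreed tag
```

The point of the design is that the “central point to bring all this together” needs nothing more than one agreed tag and a tiny aggregator; no shared platform or accounts are required.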

References

Lucas Introna. (1996) Notes on ateleological information systems development, Information Technology & People. 9(4): 20-39

The Dreyfus Model – From Novice to Expert

This presentation by Dave Thomas talks about the Dreyfus Model of Skill Acquisition and how it applies to software development. However, the ideas and insights seem to apply in a number of other contexts, in particular learning and teaching at universities. I certainly found a lot of points that resonated.

The content in this presentation is expanded upon in this book which is also available here.

Extending the classroom: Open content and open teaching: Summary and reflection

Yesterday I attended a session by George Siemens with the title “Extending the classroom: Open content and open teaching”. The presentation was video-conferenced from the Sunshine Coast to a range of locations across Australia. I was in a room at CQUniversity in Rockhampton with quite a number of folk. What follows are my reflections on that talk.

Overview

A good talk for starting discussion and thinking amongst folk who may not have come across these ideas previously. Also a good talk for identifying a couple of resources, perspectives and quotes for those who have heard some of it before. My recollection is that the talk had three main sections, separated by Q&A from around the sites. The three sections were

  1. What do we know about learning?
    This was summarised in 6 points. Good learning is

    • Social
    • Situated
    • Incorporates prior learning/existing knowledge
    • Requires reflection
    • Is multi-faceted, multi-dimensional – considers the “whole person”
    • Is distributed between people, minds and tools.

    George references the Cambridge Handbook of Learning Sciences (Amazon and Google books) as the source of much of this.

  2. What is technology’s role in learning?
    Another 6 points

    • Access
    • Presence
    • Expression
    • Creation
    • Interaction/co-creation
    • Aggregation

    These were illustrated with examples of existing Web 2.0 tools.

  3. Openness, in its various forms
    With the groundwork established, the next step was to talk about openness. At this stage there were a number of examples of different types of openness; I didn’t write them down, so I’m sure I missed at least one. What I remember included

    • Content
    • Teaching
    • Accreditation

    Very quickly at the end there were four steps/suggestions for starting to be more open

    • Use Open Educational Resources (OER)
    • Share resources
    • Work collaboratively to produce OERs
    • Experiment with open/transparent teaching

Reflection

Various somewhat related reflections or thoughts from the talk follow.

Importance of context, of place

A number of times the presenter emphasised the importance of context: how the staff, the students, the content and the institution within which you are going to apply some idea should play a major role in the considerations you go through when planning an implementation. What you do in your context might not be the same as what someone else has done.

This resonated strongly with me as it’s a foundation of my practice. In the Ps Framework stuff, “place” is an important, if not the most important, consideration when looking at the application of educational technology.

Who’s installed elgg or some other e-portfolio?

The importance of place, and how it is regularly ignored, was reinforced for me in the discussion of e-portfolios. George was very positive about the benefits of e-portfolios. He mentioned that elgg is perhaps the best/most popular e-portfolio tool to have adopted a social media approach.

I’m now waiting for someone at the session, or someone they talk to, to ask about or attempt to install elgg and think about using it in their course. This is one example of how the “product” (in terms of the Ps Framework) overwhelms consideration of “place”, of context, and of how fads and fashions arise in educational technology and in organisational/management practice in general.

This is not to suggest that elgg or e-portfolios are a bad idea; there is some value in them, if they are appropriate for the organisational context, and if they are adopted because of a real organisational need and not because someone heard or read something positive about the idea.

Systemic constraints on innovation

A regular theme throughout the talk was that the “systemic structure is a drag on innovation”, i.e. that organisational policies and structures are a major limiting factor on what can be done in terms of innovation.

Technologies embody a perspective

Another common point, but one not well recognised or understood in terms of its impacts within institutions is the idea that all technologies embody a particular perspective. They tend to serve the purposes of the designer. When it comes to learning management systems Col has written about some of the limitations of the purpose they embody.

Teaching is a learning process and its implication for staff development/curriculum design

Part of my occasional responsibility is improving the quality of learning and teaching at a university. In relation to this task, George’s talk, and in particular the 6 things we know about learning, led me to reflect on teaching being learning.

Traditionally, at most institutions teaching is seen as something different from learning. In particular, it usually embodies the idea that the academic teacher knows it all and has the task of sharing his/her expertise with the learners. The learners learn and the teacher teaches.

All the best teachers I know, have treated teaching as a learning process. As they are “teaching” they are also learning about what is working and what isn’t. Learning about the content they are teaching and what pedagogy works best and what technology works best within the place in which they operate for the people they are “teaching”.

Assuming then that it would be a good idea to encourage this practice of teaching as learning then the organisational units responsible for helping academics teach should be aiming to help them learn how to do it better.

Which brings us back to the 6 things we know about learning and how well they apply to the practice of helping teaching staff learn.

  • Social – How much of the design, development and delivery of a course is social? How much of it depends upon and actively uses social connections and activities between multiple academics?
  • Situated – How much of this occurs during the act of teaching? Not just at the start or the end, but during the act? How much support do staff receive during term?
  • Incorporates prior learning/existing knowledge – How much of the “support” is tailored to the differing capabilities and backgrounds of the learners? How much of it is delivered at a consistent level, assuming that all are starting from the same place?
  • Requires reflection – How, if at all, is reflection by teaching staff encouraged and assisted in a way that is both situated and social?
  • Is multi-faceted, multi-dimensional – considers the “whole person” – How much of this support moves beyond purely efficiency and effectiveness approaches and looks at the affective?
  • Is distributed between people, minds and tools – What examples/supports/tools/processes are teaching staff provided with that encourage and enable distributed cognition about how they practice L&T? How much of it is locked away within the heads of individuals?

Questions

  • Could the presence of the above 6 be used to evaluate/identify good courses/teaching/teachers?
  • If the above 6 were used as the basis for designing processes for helping teaching staff teach, what would it look like?

The importance of diversity to improving learning and teaching

For some time I have thought that one of the major barriers to improvement and innovation in learning and teaching has been the consistency of practice and mindset held by discipline-based groups. Now I’ve got some suggestion of a research basis for this view. This post attempts to explain my view, outline the research basis and draw implications for the practice of learning and teaching at universities.

The problems with discipline-based groups

Almost without exception, academic staff at universities are organised into discipline-based groups. All the computer scientists are in one unit, the management folk in another, and yet another holds the historians. These discipline groups generally share a fairly common perspective on research and on learning and teaching. They tend to teach based on methods they’ve experienced, and all the members within a discipline group tend to have experienced the same methods.

Anything outside of that experience is seen as strange and in the absence of outside knowledge they aren’t even aware that there are alternatives.

For example, way back when, I was a member of an information technology group that was thrown together into an organisational unit with journalists, cultural studies and other decidedly non-technical disciplines. At some later stage I was responsible for supporting the learning and teaching of these different groups. Those staff from a more “human communications” based discipline, almost without exception, placed a great deal of emphasis on face-to-face tutorials with a heavy emphasis on student/student and student/teacher discussion, which made it very difficult to come up with approaches for distance education students. The IT folk, without that history, didn’t have the same problem. Neither group, without interactions with the other, would normally have thought of the other’s approach to teaching.

Discipline-based groups tend to exclude awareness of alternatives; by emphasising the importance of shared experience, they make it difficult to even be aware that alternatives exist.

This is particularly problematic because, almost without exception, the major projects that attempt to improve and/or innovate around learning and teaching are discipline-based. This fundamental assumption has, in my mind, always limited the chances of true improvement or innovation. It limits the chance of them escaping the past.

Related to this practice is the suggestion that instructional/learning/curriculum designers should be physically located within faculties or departments, i.e. that they should work predominantly with folk from within a particular discipline (or set of disciplines). Over time, because of the nature of the work (e.g. these instructional designers will start to publish more within the literature of a particular discipline), I believe this practice is likely to further constrain innovation.

What others have said

In a recent blog post Dave Snowden has suggested that perspective shift is one of the necessary (but not sufficient) conditions for innovation. Discipline-based attempts at improving learning and teaching make this very difficult, as they generally only involve people who have very similar perspectives.

Dave’s blog post mentioned above includes a link to an MP3 file of a talk he gave in Melbourne. I fully recommend people listen to it, even though it is disappointing to have missed out on the end of the talk due to flat batteries. This blog post, which gives one summary of Dave’s talk, offers some related insights.

This morning’s post from the Tomorrow’s Professor Mailing List was titled “Do Faculty Interactions Among Diverse Students Enhance Intellectual Development”. It was looking at the practice in the USA of having racially mixed classes and its effect on intellectual development. While it may be a leap (a leap too far?) to apply some of the findings to improving teaching, I certainly see some connection and value in doing so.

The post was an excerpt from

Chapter 4, Accounting for Diversity Within the Teaching and Learning Paradigm, in the book: Driving Change through Diversity and Globalization, Transformative Leadership in the Academy, by James A. Anderson, professor of psychology and Vice President for Student Success and the Vice Provost for Institutional Assessment and Diversity at the University of Albany

One of the foundation pieces of evidence the post is based upon is from the following paper

Anthony Lising Antonio, Mitchell J. Chang, Kenji Hakuta, David A. Kenny, Shana Levin and Jeffrey F. Milem, Effects of Racial Diversity on Complex Thinking in College Students, Psychological Science, 15(8): 507-510.

The paper aims to examine the effects of diversity on integrative complexity (IC), the degree to which a cognitive style involves differentiation and integration of multiple perspectives. The idea is that the level of IC has the following effects:

  • Low integrative complexity – take a less complicated approach to reasoning, decision making and evaluating information.
  • High integrative complexity – evaluation is reflective, involves various perspectives, solutions and discussions.

The paper’s findings included

  • Racial diversity in a group of white students led to a greater level of cognitive complexity.
  • Racial diversity of a student’s friends had a greater impact on integrative complexity than the diversity of the group.

Some of the other points in the paper

  • Groupthink.
    The cohesiveness and solidarity that arise from a common group are a foundation for unanimity of opinion, which results in poor decision making.
  • Minority influence.
    The presence of group members who hold divergent opinions leads to increased divergent thinking and perspective taking. Interaction with the minority enhances the integrative complexity of the members of the group who hold the majority opinion.

Implications for learning and teaching

In summary, at a high level

  • Homogeneous groups considered harmful.
    Any approach to improving learning and teaching which uses homogeneous groups will limit, possibly even prevent, innovation and improvement, as the group will get bogged down in groupthink. Given that the majority of such projects within universities involve homogeneous groups, this questions some of the fundamental operations of universities.
  • Actively design projects to encourage positive interactions amongst people with diverse backgrounds.
    The positive flip side is that projects should actively seek diversity in their membership and engage in processes that enable positive interactions between these diverse members, i.e. not an attempt to encourage groupthink or cohesion amongst the diverse members, but instead to leverage the diversity for something truly innovative.

    Hopefully those who are familiar with the unit I currently work with can see why I value and encourage the diversity of the unit, and why I think any attempt to encourage uniformity of background and thinking is a hugely negative thing.

From these two observations a number of potential implications can be drawn

  • Discipline-based innovation in L&T will be less than successful.
  • Top-down innovations in L&T will be less than successful (as they embed an assumption that a very small, generally similar, group can make decisions and get everyone to buy into the groupthink).
  • Any committee or group that contains members that have the same discipline or organisational experience (e.g. everyone has been at University X for 10+ years) will generate sub-optimal outcomes.
  • The best and most innovative teachers will have the most diverse set of teaching influences and experiences. (The diffusion of innovations literature backs this up).
  • Organisational units (e.g. teaching and learning or academic staff development units) which all have pretty much the same background (e.g. all graduated with Masters in Instructional Design) or same experience (all publish in the same set of conferences or journals) will be less innovative than they could be.
  • An L&T support unit that doesn’t regularly, actively and deeply engage with the L&T context of its organisation is destined to do things that are less innovative and appropriate.

Of course I believe this; over the last 5 years I’ve occasionally attempted to get the REACT process off the ground as an approach to improving learning and teaching. A key aim of that project was

opening up the design of teaching to enable collaboration with and input from a diverse set of peers;

What is research? How do you do it?

A previous post announced that a group of folk at CQUniversity are about to embark on a project/exercise with the aim of helping people develop ideas for research and turn them into publications.

Any such process should probably talk about answers to two questions (amongst many others)

  1. What is research?
  2. How do you do it?

The following provides some simple answers to those questions that will form the basis for the react2008 process.

Disclaimer

This is not to suggest that these are the only answers, or potentially even good answers. However, the claim is that they are sufficiently useful for the purpose of react2008 and reasonably defensible.

The aim is to ensure that the workshop participants have some sort of common understanding of answers to these questions that they can use as a starting point for conversation. Some level of common understanding is important.

What is research?

Generally, research aims to address important problems, or provide answers to difficult questions through the application of a disciplined process that generates new and useful knowledge. That knowledge is expressed in the form of theory.

The question of what is theory is left to later.

How do you do it?

Answers to these questions often come down to battles between research paradigms. Are you a positivist or an interpretivist (or some other sort of “ist”)? The answer to this question governs how you do research, what you think it is, etc.

Mingers (2001) identifies three perspectives on how to handle the question of paradigms. These include:

  1. Isolationism – where paradigms are seen to be based on mutually exclusive and contradictory assumptions and where individual researchers should or do follow a single paradigm.
  2. Complementarist – where no paradigm is superior, but different approaches are more or less suitable for particular problems or questions, and there should be a process of choice.
  3. Multi-method – where paradigms are seen to focus on different aspects of reality and that a richer understanding of a research problem can be gained by combining several methods, particularly from different paradigms.

The approach for react2008 will be somewhere between/inclusive of the complementarist and multi-method approaches. In short, it ignores (and probably regards as unimportant) questions of “ists”. This view has close similarities to the view suggested by Sandy’s paper (Behrens, 2008) on the use of Vaihinger’s theory of fictions as a basis for looking at information systems research, where it is used in an attempt to unite the usually battling factions of positivists and interpretivists.

Instead, it goes for a simple process of doing research (which is not seen as simply sequential), including the following steps

  1. What is the research problem and/or the research question that is of interest? Why is it important?
  2. What type of theory is most appropriate for the type of knowledge you need to answer the question?
    What types of theories there are will be outlined below.
  3. What is the most appropriate process(es) to use to develop this type of theory?

Types of theory

Arising out of the second question is the notion of what theory is, what types of theory there are and what structure they should take. These are questions taken up by Gregor (2007), which I draw on briefly and poorly below, especially in the following summary of the 5 types of theory identified by Gregor.

  • Analysis – “What is”. Analyses and describes a phenomenon, but makes no causal claims and generates no predictions.
  • Explanation – “What is”, “how”, “why”, “when”, “where”. Aims to explain, but not to predict with any precision. No testable propositions.
  • Prediction – “What is” and “what will be”. Provides predictions and has testable propositions, but does not have a well-developed justificatory causal explanation.
  • Explanation and prediction – “What is”, “how”, “why”, “when”, “where” and “what will be”. Provides predictions and has both testable propositions and causal explanations.
  • Design and action – “How to do something”.

Gregor (2007) was writing within the information systems discipline. Consequently, this appropriation into the “e-learning” field may not be entirely appropriate. However, I would argue that there is a great deal of overlap between the two disciplines.

References

Behrens 2008
Gregor 2007

REACT 2008 – An exercise in scholarship?

We’re about to embark on a little experiment in the scholarship of learning and teaching going under the tag react2008. The fundamental aim is to help improve the quality of papers we will write that are targeted for publication at EdMedia’2009. The experiment is going by the names of either react2008 or writers’ workshop. The project has a central website.

Other aims of the project include

  • Helping some of the less experienced researchers develop some insight into one way of developing and writing papers – of performing research.
  • Help the participants gain some appreciation of the differences of perspective within the group and how those differences complement each other.
  • Increasing the quantity and quality of research outputs of CDDU and the PLEs@CQUni project.

This post is intended to give a brief description of how the whole thing might work.

Subsequent posts will start talking about implementation issues and what’s next. After that each of the steps will be expanded and tasks allocated.

Principles and Background

The ideas underpinning this exercise are much informed by the Reflection, Evaluation and Collaboration in Teaching (REACT) project. The react2008 project will have a very different aim and process; however, it will be based on/informed by many of the same principles and foundations as the original REACT process.

In particular, react2008 will draw on the ideas of Shulman (1993) around the scholarship of L&T, in particular, the importance of

  • Communication and community
    react2008 will use face-to-face discussions and social network software to create a community of researchers who will have to communicate with each other about their process of writing and research.
  • Creation of an artifact
    react2008 participants are creating a paper as their final artifact. There will also be other artifacts produced along the way in terms of wiki pages and presentations.
  • Peer review
    A key part of the react2008 process is that there will be peer review throughout the process of the discussion and artifacts produced by the participants.

How will it work?

react2008 will use a simple process with a small number of steps as the framework through which people will develop, implement and write about their research ideas. No claim is made that this is the only process, the best process, a sequential process, or one broadly suitable to all people.

However, there is a claim that it is a fairly generic process that provides enough structure to provide participants with a common language to talk about their process and enough freedom to do their own thing in terms of process and research perspective. Usefulness is seen as more important than ideological adherence.

Most steps in the process will require the production of an artifact and the submission of that artifact to a semi-structured peer-review process. The participants will be expected both to produce their own artifacts and to comment on the artifacts of others. The artifacts and the communication/review will be done primarily through blogs and a wiki.

The current suggested list of steps is summarised below. A later post will expand in more detail on these steps.

The steps are:

  1. What’s the problem? What’s the question?
    The aim of research is, in this context, always seen as an attempt to solve a problem or answer a question. The first step is to develop a statement about what the research problem or question is. (Remember, this isn’t a sequential process; this step will be revisited numerous times.)
  2. What type of knowledge are you going to contribute?
    This work assumes that there are different types of knowledge (or theory) produced in research. Depending on the type of knowledge you wish to generate/contribute, you will use different types of method.
  3. How are you going to develop that knowledge?
    You have to use some sort of appropriate process to develop the knowledge you want to write about. There are multiple choices; you need to be clear about which process you will use and why.
  4. How does this knowledge add to and fit with existing knowledge?
    It’s important that you know how this knowledge you will generate fits with existing knowledge in the area. You need to be able to explain why it is valuable.
  5. What do you need to do to make it fit with the outlet?
    Different publication outlets have different perspectives, and different approaches are required to get accepted. You need to become familiar with the outlet.
  6. Do the work.
    At this stage you need to generate the knowledge.
  7. Give a presentation.
    Once you’ve generated the knowledge you need to begin work on presenting it to folk in some sort of finished form. A presentation will precede writing of the paper.
  8. Write the paper.
    The actual process of turning the knowledge into an appropriate form for the publication outlet.
  9. Respond to reviewer comments.
    An art form in itself.
  10. Present the paper.
    If we’re talking about a conference, then another presentation is required to further refine the initial presentation.
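Purely as an illustration, the bookkeeping implied by the steps above, with each participant producing artifacts and receiving peer review on them, can be sketched as a tiny data model. All of the names here (REACT2008_STEPS, Participant, and so on) are hypothetical and not part of the actual project; a real workshop would just use its blogs and wiki.

```python
# Hypothetical sketch of tracking react2008 participants through the
# ten-step process. Step names and structures are illustrative only.

REACT2008_STEPS = [
    "Problem/question",
    "Type of knowledge",
    "How to develop it",
    "Fit with existing knowledge",
    "Fit with the outlet",
    "Do the work",
    "Give a presentation",
    "Write the paper",
    "Respond to reviewers",
    "Present the paper",
]

class Participant:
    """Tracks which steps a participant has submitted an artifact for,
    and the peer-review comments each artifact has received."""

    def __init__(self, name):
        self.name = name
        self.artifacts = {}  # step name -> list of peer-review comments

    def submit(self, step):
        """Record an artifact for a step; steps can be revisited."""
        if step not in REACT2008_STEPS:
            raise ValueError(f"unknown step: {step}")
        self.artifacts.setdefault(step, [])

    def review(self, step, comment):
        """Attach a peer-review comment to a submitted artifact."""
        self.artifacts[step].append(comment)

    def progress(self):
        """Fraction of the ten steps with a submitted artifact."""
        return len(self.artifacts) / len(REACT2008_STEPS)

p = Participant("sandy")
p.submit("Problem/question")
p.review("Problem/question", "Is the question too broad?")
print(p.progress())  # → 0.1
```

Because the process is explicitly non-sequential, `submit` allows any step at any time and simply re-opens it for further review if it has been visited before.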