
BIM and BAD

This post arises from three events today:

  1. The ASCILITE’2014 call for papers came out today and I’m thinking about a paper I might submit.
  2. The first #edc3100 assignment is due today and my use of BIM has struck a unique problem that I need to solve.

The third is that I’m a touch fried from answering queries along the lines of “I submitted my assignment the wrong way” (the main problem with these queries is that they mean I have to engage with a horrible online assignment submission system) and need to engage with something else for a while.

The paper

The working title for the paper I’m thinking of is “The story of BIM: Being BAD as a way to bridge rhetoric and reality”.

BAD is an acronym that captures what I think is missing from the institutional approach to university e-learning:

  1. Bricolage – the LMS as an Enterprise System doesn’t allow or cater for bricolage.
  2. Affordances – resulting in an inability to leverage the affordances of technology to improve learning and teaching.
  3. Distribution – the idea that knowledge about how to improve L&T is distributed and the implications that has for the institutional practice of e-learning.

    i.e. current methods rely on the single, unified view of learning and teaching. A view that is expressed most concretely in the form of the LMS.

    This component will draw on a range of related “network” type theoretical perspectives including connectivism, Complex Adaptive Systems, embodied cognition and ANT – to name but a few.

The idea is that if institutional e-learning is to get better, it needs to be BAD (more BAD?).

The following is an example of how the reality of using BIM in action supports the idea that it needs to be BAD. Or at least it’s a very small step. It captures the messiness (the distribution) of e-learning in a typical university course. A messiness that isn’t captured properly by PRINCE2 and other methodologies, hierarchical organisational structures, appropriately total-quality-assured forms and processes, and “theory”-based abstractions like adopter categories.

And yes, there is some strong connection (repetition) with earlier perspectives/frameworks of mine. If at first you don’t succeed, try, try again.

Background

300+ students in EDC3100 are currently using their own blogs to reflect on their learning journey (to varying levels of engagement). Their blog posts contribute almost 15% of their final result in the course.

BIM is being used to keep track of what they are sharing. The students create their blogs on WordPress.com (or anywhere they like) and register them with BIM. BIM then keeps a copy of all their posts.
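Under the hood this is essentially just feed aggregation. Here’s a rough sketch of the idea, assuming Python and the feedparser library (this isn’t BIM’s actual code, and mirror_posts is my name for it):

    import feedparser

    def mirror_posts(feed_url):
        """Fetch a student's blog feed and return the posts to keep a copy of."""
        feed = feedparser.parse(feed_url)
        return [
            {
                "title": entry.get("title", ""),
                "link": entry.get("link", ""),
                "published": entry.get("published", ""),
                "content": entry.get("summary", ""),
            }
            for entry in feed.entries
        ]

    # Once a student registers their blog, something like this gets polled:
    # posts = mirror_posts("http://astudentblog.wordpress.com/feed/")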

But BIM’s capabilities don’t match the learning design I’m using in this course. BIM was originally designed to have student posts made in response to specific prompts/questions and then have the posts marked manually by human markers. In EDC3100, the students blog about anything they want. We offer some prompts, but they can ignore them. Student posts aren’t marked; instead, students have to post a certain number of posts (on average) every week, the posts have to average a certain word count, and a certain number have to contain links to online resources and to the blog posts of other students.

This analysis of student posts, and the subsequent mark the students get, is done by a program I wrote. A bit of bricolage that takes bits and pieces of information extracted (with some difficulty) from various institutional systems and makes use of them in a way that solves my problem.

With a bit more bricolage, each of the 300+ students has recently received an email telling them what the system knows about their progress. This gives them some time to tick all the boxes prior to the first assignment being due (yep, still have due dates, haven’t travelled too far from the well-worn paths).
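To make the bricolage a little more concrete, here’s an illustrative sketch of the sort of checks the program performs. None of the names or thresholds below are the real ones; they’re stand-ins for this post, again in Python:

    from dataclasses import dataclass, field

    @dataclass
    class Post:
        """One mirrored blog post (illustrative structure only)."""
        word_count: int
        links: list = field(default_factory=list)  # URLs found in the post body

    def check_progress(posts, weeks_so_far, student_blog_urls,
                       posts_per_week=2.0, min_avg_words=100, min_student_links=2):
        """Check one student's mirrored posts against the marking criteria."""
        avg_posts = len(posts) / weeks_so_far
        avg_words = sum(p.word_count for p in posts) / max(len(posts), 1)

        # A link "counts" as a link to another student only if it starts with
        # one of the known student blog URLs. This check is at the heart of
        # the problem described below.
        posts_linking_to_students = sum(
            1 for p in posts
            if any(link.startswith(blog)
                   for link in p.links for blog in student_blog_urls))

        return {
            "enough_posts": avg_posts >= posts_per_week,
            "enough_words": avg_words >= min_avg_words,
            "enough_student_links": posts_linking_to_students >= min_student_links,
        }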

The problem

One student has reported a problem with what the system knows about their blog. The system says that only one of their posts links to another student’s post, but the student’s blog actually has two posts that link to other students’ posts. This is confirmed.

But no-one else is reporting the problem. There’s something unique about this student’s blog that has exposed a bug in the system.

The uniqueness of this bug appears to me as one of the problems associated with the failure of institutional systems to deal with the Distribution aspect of BAD. In a complex, distributed knowledge network there is no one view. But the traditional approach can only ever respond to one view. This argument needs a bit of work.

Typically this problem arises because the author of the post has used a link that doesn’t actually point to the other student’s blog. The program I wrote knows the URLs for all the student blogs. It checks all the links in a post against those known blogs.

I’ve visually checked the student’s blog posts in BIM and they are showing valid links to student blogs.

Argghh.

The solution – Chrome is too smart – it’s distribution

This is what I see when I look at the blog post using the Chrome browser

[Screenshot: “Chrome” by David T Jones, on Flickr]

The link that is shown is to the blog of another student. The program should pick this up and count it as a link.

Here’s what I see when I view it under the Firefox browser

[Screenshot: “Firefox” by David T Jones, on Flickr]

See the difference?

The student appears to have used some form of URL shortener (it looks like a WordPress tool). While this shortened URL does point to the post of another student, my little system doesn’t know how to convert a shortened URL into a full URL, so it doesn’t count it.
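An obvious fix is for the program to expand any shortened URL before doing the comparison. A minimal sketch, again assuming Python and the requests library (resolve_url is my name for it, not anything in BIM):

    import requests

    def resolve_url(url, timeout=5):
        """Follow redirects so a shortened URL (e.g. a wp.me link) ends up
        as the full URL the browser would finally land on."""
        try:
            # A HEAD request is enough: we only want the final URL, not the page.
            response = requests.head(url, allow_redirects=True, timeout=timeout)
            return response.url
        except requests.RequestException:
            # If the link can't be resolved, fall back to the URL as given.
            return url

    # resolve_url("http://wp.me/xxxx") should return the full
    # http://someblog.wordpress.com/... URL, which the check against
    # the known student blogs can then match.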

It appears that I must have a plugin installed on Chrome (or perhaps Chrome is smart enough on its own) that automatically expands the wp.me shortened URL into the full link and changes what is shown to the user.

I, as the user, am ignorant of this change happening.

Not a bad example of Distribution. How cognition/smarts/learning is distributed amongst all of the tools. Change one bit of the network and the outcome changes.

Ateleological travels in a teleological world: Past and future journeys around ICTs in education

In my previous academic life, I never really saw the point of book chapters as a publication form. For a variety of reasons, however, my next phase in academia appears likely to involve an increasing number of book chapters. The need for the first such chapter has arisen this week and the first draft is due by February next year, a timeline that gives me just a little pause for thought. (There is a chance that this book might end up as a special edition of a journal.)

What’s your perception of book chapters as a form of academic publication? I’m particularly interested in the view from the education field.

What follows is a first stab at an abstract for the book chapter. The title for the book/special edition is “Meanings for in and of education research”. The current working title for my contribution is the title to this post: “Ateleological travels in a teleological world: Past and future journeys around ICTs in education”.

Abstract

The Australian Federal Government is just one of a gaggle of global stakeholders suggesting that Information and Communication Technologies are contributing to the creation of a brave, new, digital world. Such a digital world is seen as being radically different to what has gone before and consequently demanding a radically different education system to prepare the next generation of learners. A task that is easier said than done. This chapter argues that the difficulties associated with this task arise because the meanings underpinning the design of education systems for the digital world are decidedly inappropriate and ill-suited to the nature of the digital world. The chapter draws upon 15+ years of research formulating an Information Systems Design Theory for emergent e-learning systems for universities to critically examine these commonly accepted meanings, suggest alternate and more appropriate meanings, and discuss the potential implications that these alternate meanings hold for the practice of education and education research.

The plan

The plan is that this chapter/paper will reflect on the primary focus of my research over recent years and encourage me to think of future research directions and approaches. Obviously it will draw on the PhD research and in particular the Ps Framework and the presentation I gave at EdMedia a couple of years ago. It will also draw on the presentation I gave analysing the Digital Education Revolution as part of my GDLT studies this year.

Alan Kay and some reasons why the educational technology revolution hasn’t happened

While reading a recent post from Gardner Campbell I was taken by a quote from Alan Kay

The computer is simply an instrument whose music is ideas

A Google search later and I came across this interview with Kay for Scholastic Administrator magazine. The article is titled “Alan Kay still waiting for the revolution” and there are some, for me, interesting perspectives. A smattering below.

The difficult part is helping the helpers

Kay identifies the greatest obstacle to his work as being “helping the helpers”, i.e. the teachers. In talking about Logo, Kay identifies a key failure as being that the second and third waves of teachers were not interested in Logo and didn’t have the math skills to teach well with it.

I see this as the biggest problem around e-learning (or blended, flexible, personal etc learning if that’s your buzz word of the moment) within universities: helping the helpers.

The tokenism of computers

On computers and tokenism

But I think the big problem is that schools have very few ideas about what to do with the computers once the kids have them. It’s basically just tokenism, and schools just won’t face up to what the actual problems of education are, whether you have technology or not.

Again there’s some resonance with universities. For a lot of senior and IT management in universities there’s an idea that we must have an LMS, but there’s not always a good idea of what the organisation should do with it once it has it. The most important part of that “idea” is being able to identify what about the policies and practices of the institution needs to change to best achieve it.

For example, with the LMS the institution can increase interaction between staff and students via discussion forums, e-portfolios etc. But we won’t change the workload or funding model for teaching, or recognise the need to change the timetable to remove the traditional 2 hour lecture, 2 hour tutorial model.

The difference between music and instruments

In talking about some of the limits or potential problems associated with the trend to one-to-one computing

Think about it: How many books do schools have—and how well are children doing at reading? How many pencils do schools have—and how well are kids doing at math? It’s like missing the difference between music and instruments. You can put a piano in every classroom, but that won’t give you a developed music culture, because the music culture is embodied in people… The important thing here is that the music is not in the piano. And knowledge and edification is not in the computer. The computer is simply an instrument whose music is ideas.

The provision of the LMS or some other “instrument” is the simple task. Helping people figure out what to do with it, and how to do it well, is the hard part.

Helping everyone find their inner musician

Why hasn’t educational computing lived up to its potential?

So computers are actually irrelevant at this level of discussion—they are just musical instruments. The real question is this: What is the prospect of turning every elementary school teacher in America into a musician? That’s what we’re talking about here. Afterward we can worry about the instruments.

How do you encourage and enable university academics to become musicians? I don’t think you can forget about computers, e-learning or the LMS. They are already in universities. There’s a need to look at how you can change how academics experience these technologies so that they can start developing their musical ability. Sending them to “band camp” (e.g. a Grad Cert in Higher Education) isn’t enough if they return to a non-musical family. The environment they live in has to be musical in every aspect.

Nobody likes a do-gooder – another reason for e-learning not mainstreaming?

Came across the article “Nobody likes a do-gooder: Study confirms selfless behaviour is alienating” from the Daily Mail via Morgaine’s amplify. I’m wondering if there’s a connection between this and the chasm in the adoption of instructional technology identified by Geoghegan (1994).

The chasm

Back in 1994, Geoghegan drew on Moore’s Crossing the Chasm to explain why instructional technology wasn’t being adopted by the majority of university academics. The suggestion is that there is a significant difference between the early adopters of instructional technology and the early majority. What works for one group doesn’t work for the other. There is a chasm. Geoghegan (1994) also suggested that the “technologists’ alliance” – vendors of instructional technology and the university folk charged with supporting instructional technology – adopts approaches that work for the early adopters, not the early majority.

Nobody likes do-gooders

The Daily Mail article reports on some psychological research that draws some conclusions about how “do-gooders” are seen by the majority

Researchers say do-gooders come to be resented because they ‘raise the bar’ for what is expected of everyone.

This resonates with my experience as an early adopter and more broadly with observations of higher education. The early adopters, those really keen on learning and teaching, are seen a bit differently by those that aren’t keen. I wonder if the “raise the bar” issue applies? I’d imagine this could be quite common in a higher education environment where research retains its primacy, but universities are under increasing pressure to improve their learning and teaching. And, more importantly, to show everyone that they have improved.

The complete study is outlined in a journal article.

References

Geoghegan, W. (1994). Whatever happened to instructional technology? Paper presented at the 22nd Annual Conference of the International Business Schools Computing Association, Baltimore, MD.

How people learn and implications for academic development

While I’m traveling this week I am reading How people learn. This is a fairly well known book that arose out of a US National Academy of Sciences project to look at recent insights from research about how people learn and then generate insights for teaching. I’ll be reading it through the lens of my thesis and some broader thinking about “academic development” (one of the terms applied to trying to help improve the teaching and learning of university academics).

Increasingly, I’ve been thinking that “academic development” is essentially “teaching the teachers”, though it would be better phrased as creating an environment in which academics can learn how to be better at enabling student learning. Hand in hand with this thought is the observation, and increasing worry, that much of what passes for academic development and management action around improving learning and teaching is not conducive to creating this learning environment. The aim of reading this book is to think about ways in which this situation might be improved.

The last part of this summary of the first chapter connects with the point I’m trying to make about academic development within universities.

(As it turns out, I only read the first chapter while traveling; the remaining chapters come now.)

Key findings for learning

The first chapter of the book provides three key (but not exhaustive) findings about learning:

  1. Learners arrive with their own preconceptions about how the world works.
    As part of this, if the early stages of learning do not engage with the learner’s understanding of the world, then the learner will either not get it, or will get it enough to pass the test but then revert to their existing understanding.
  2. Competence in a field of inquiry arises from three building blocks:
    1. a deep foundation of factual knowledge;
    2. an understanding of these facts and ideas within a conceptual framework;
    3. the organisation of knowledge in ways that enable retrieval and application.

    A primary idea here is that experts aren’t simply “smart” people. But they do have conceptual frameworks that help them apply and understand knowledge much more quickly than others.

  3. An approach to teaching that enables students to implement meta-cognitive strategies can help them take control of their learning and monitor their progress.
    Meta-cognitive strategies aren’t context or subject independent.

Implications for teaching

The suggestion is that the above findings about learning have significant implications for teaching:

  1. Teachers have to draw out and work with pre-existing student understandings.
    This implies lots more formative assessment that focuses on demonstrating understanding.
  2. In teaching a subject area, important concepts must be taught in-depth.
    The superficial coverage of concepts (to fit it all in) needs to be avoided, with more of a focus on those important subject concepts.
  3. The teaching of meta-cognitive skills needs to be integrated into the curriculum of a variety of subjects.

Four attributes of learning environments

A later chapter expands on a framework to design and evaluate learning environments. It includes four interrelated attributes of these environments:

  1. They must be learner centered;
    i.e. a focus on the understandings and progress of individual students.
  2. The environment should be knowledge centered, with attention given to what is taught, why it is taught, and what competence or mastery looks like.
    The suggestion is that too many curricula fail to support learning because the knowledge is disconnected and assessment encourages memorisation rather than learning. A knowledge-centered environment “provides the necessary depth of study, assessing student understanding rather than factual memory and incorporates the teaching of meta-cognitive strategies”.

    There’s an interesting point here about engagement, that I’ll save for another time.

  3. Formative assessments
    The aim is for assessments that help both students and teachers monitor progress.
  4. Develop norms within the course, and connection with the outside world, that support core learning values.
    i.e. pay attention to activities, assessments etc within the course that promote collaboration and camaraderie.

Application to professional learning

In the final section of the chapter, the authors state that these principles apply equally well to adults as they do to children. They explain that

This point is particularly important because incorporating the principles in this volume into educational practice will require a good deal of adult learning.

i.e. if you want to improve learning and teaching within a university based on these principles, then the teaching staff will have to undergo a fair bit of learning. This is very troubling because the authors argue that “approaches to teaching adults consistently violate principles for optimizing learning”. In particular, they suggest that professional development programs for teachers frequently:

  • Are not learner centered.
    Rather than ask what help is required, teachers are expected to attend pre-arranged workshops.
  • Are not knowledge centered.
    i.e. these workshops introduce the principles of a new technique with little time spent on the more complex integration of the new technique with the other “knowledge” (e.g. the TPACK framework) associated with the course.
  • Are not assessment centered.
    i.e. when learning these new techniques, the “learners” (teaching staff) aren’t given opportunities to try them out, get feedback, and develop the skills to know whether or not they’ve implemented the new technique effectively.
  • Are not community centered.
    Professional development consists more of ad hoc, separate events with little opportunity for a community of teachers to develop connections for on-going support.

Here’s a challenge. Is there any university out there where academic development doesn’t suffer from these flaws? How has that been judged?

The McNamara Fallacy and pass rates, academic analytics, and engagement

In some reading for the thesis today I came across the concept of McNamara’s fallacy. I hadn’t heard of it before, which is somewhat surprising as it points out another common problem with some of the more simplistic approaches to improving learning and teaching that are going around at the moment. It’s also likely to be a problem with any simplistic implementation of academic analytics.

What is it?

The quote I saw describes McNamara’s fallacy as

The first step is to measure whatever can be easily measured. This is ok as far as it goes. The second step is to disregard that which can’t be easily measured or to give it an arbitrary quantitative value. This is artificial and misleading. The third step is to presume that what can’t be measured easily really isn’t important. This is blindness. The fourth step is to say that what can’t be easily measured really doesn’t exist. This is suicide.

The Wikipedia page on the McNamara fallacy describes it as referring to Robert McNamara – the US Secretary of Defense from 1961 through 1968 – and his putting the USA’s failure in Vietnam down to a focus on quantifying success through simple indicators such as enemy body count, while at the same time ignoring other more important factors. Factors that were more difficult to measure.

The PhD thesis in which I saw the above quote ascribes it to Yankelovich (1972), a sociologist. Wikipedia ascribes it to Charles Handy’s “The Empty Raincoat”. Perhaps this indicates that the quote is from McNamara himself, just presented in different places.

Pass rates

Within higher education it is easy to see “pass rates” as an example of McNamara’s fallacy. Much of the quality assurance within higher education institutions is focused on checking the number of students who do (or don’t) pass a course. If the number of students who don’t pass a course isn’t too high, everything is okay. Much easier to measure this than the quality of the student learning experience, the learning theory which informs the course design, or the impact the experience has on the student, now and into the future. This sort of unquestioning application of McNamara’s fallacy sometimes makes me think we’re losing the learning and teaching “war” within universities.

What are the more important, more difficult to measure indicators that provide a better and deeper insight into the quality of learning and teaching?

Analytics and engagement

Student engagement is one of the buzz words on the rise in recent years; it’s been presented as one of the ways/measures to improve student learning. After all, if students are more engaged, obviously they must have a better learning experience. Engagement has become an indicator of institutional teaching quality. Col did a project last year in which he looked more closely at engagement; the write-up of that project gives a good introduction to student engagement. It includes the following quote

Most of the research into measuring student engagement prior to the widespread adoption of online, or web based classes, has concentrated on the simple measure of attendance (Douglas & Alemanne, 2007). While class attendance is a crude measure, in that it is only ever indicative of participation and does not necessarily consider the quality of the participation, it has nevertheless been found to be an important variable in determining student success (Douglas, 2008)

Sounds a bit like a case of McNamara’s fallacy to me. A point Col makes when he says “it could be said that class attendance is used as a metric for engagement, simply because it is one of the few indicators of engagement that are visible”.

With the move to the LMS, it was always going to happen that academic analytics would be used to develop measures of student engagement (and other indicators). Indeed, that’s the aim of Col’s project. However, I do think that academic analytics runs the danger of McNamara’s fallacy. So busy focusing on what we can measure easily, we miss the more important stuff that we can’t.

The grammar of school, psychological dissonance and all professors are rather ludditical

Yesterday, via a tweet from @marksmithers, I read this post from the author of the DIYU book titled “Vast Majority of Professors Are Rather Ludditical”. This is somewhat typical of the deficit model of academics, which is fairly prevalent and rather pointless. It’s pointless for a number of reasons, but the main one is that it is not a helpful starting point for bringing about change, as it ignores the broader problem; consequently most solutions that arise from a deficit model won’t work.

One of the major problems this approach tends to ignore is the broader impact of the grammar of school (first from Tyack and Cuban and then Papert). I’m currently reading The nature of technology (more on this later) by W. Brian Arthur. The following is a summary and a little bit of reflection upon a section titled “Lock-in and Adaptive Stretch”, which seems to connect closely with the grammar of school idea.

Psychological dissonance and adaptive stretch

Arthur offers the following quote from the sociologist Diane Vaughan around psychological dissonance

[In the situations we deal with as humans, we use] a frame of reference constructed from integrated sets of assumptions, expectations and experiences. Everything is perceived on the basis of this framework. The framework becomes self-confirming because, whenever we can, we tend to impose it on experiences and events, creating incidents and relationships that conform to it. And we tend to ignore, misperceive, or deny events that do not fit it. As a consequence, it generally leads us to what we are looking for. This frame of reference is not easily altered or dismantled, because the way we tend to see the world is intimately linked to how we see and define ourselves in relation to the world. Thus, we have a vested interest in maintaining consistency because our own identity is at risk.

Arthur goes on to suggest that “the greater the distance between a novel solution and the accepted one, the larger is this lock-in to previous tradition”. He then defines this lock-in to the older approach as adaptive stretch: the situation where it is easier to reach for the old approach and adapt it to the new circumstances through stretching.

Hence professors are ludditical

But haven’t I just made the case? This is exactly what happens with the vast majority of academic practice around e-learning. If they are using e-learning at all – and not simply sticking with face-to-face teaching – most teaching academics are still using lectures, printed notes and other relics of the past that they have stretched into the new context.

They don’t have the knowledge to move on, so we have to make them non-ludditical. This is when management and leadership at universities roll into action and identify plans and projects that will help generate non-ludditical academics.

The pot calling the kettle black

My argument is that if you step back a bit further, the approaches being recommended and adopted by researchers and senior management, the way those approaches are implemented, and the way they are evaluated for success are themselves suffering from psychological dissonance and adaptive stretch. The approaches almost without exception borrow from a traditional project management approach and go something like this:

  • Small group of important people identify the problem and the best solution.
  • Hand it over to a project group to implement.
  • The project group tick the important project boxes:
    • Develop a detailed project plan with specific KPIs and deadlines.
    • Demonstrate importance of project by wheeling out senior managers to say how important the project is.
    • Implement a marketing push involving regular updates, newsletters, posters, coffee mugs and presentations.
    • Develop compulsory training sessions which all must attend.
    • Downplay any negative experiences and explain them away.
    • Ensure correct implementation.
    • Get an evaluation done by people paid for and reporting to the senior managers who have been visibly associated with the project.
    • Explain how successful the project was.
  • Complain about how the ludditical academics have ruined the project through adaptive stretching.

Frames of reference and coffee mugs

One of the fundamental problems with these approaches to projects within higher education is that they effectively ignore the frames of reference that academics bring to the problem. Rather than start with the existing frames of reference and build on those, this approach to projects is all about moving people straight into a new frame of reference. In doing this, there is always incredible dissonance between how the project people think an action will be interpreted and how it actually is interpreted.

For example, a few years ago the institution I used to work for (at least as of COB today) adopted Chickering and Gamson’s (1987) 7 principles for good practice in undergraduate education as a foundation for the new learning and teaching management plan. The project around this decision basically followed the above process. As part of the marketing push, all academics (and perhaps all staff) received a coffee mug and a little palm card with the 7 principles in nice text and a link to the project website. The intent of the project was to increase academics’ awareness of the 7 principles and how important they were to the institution.

The problem was that, at around this time, the institution was going through yet more restructures and there were grave misgivings from senior management about how much money the institution didn’t have. The institution was having to save money, and this was being felt by the academics in terms of limits on conference travel, marking support etc. It is with this frame of reference that the academics saw the institution spending a fair amount of money on coffee mugs and palm cards. Just a touch of dissonance.

What’s worse, a number of academics were able to look at the 7 principles, see principle #4 “gives prompt feedback”, and relate that to the difficulty of giving prompt feedback when there’s no money for marking support. Not to mention the push from some senior managers about how important research is to future career progression.

So, the solution is?

I return to a quote from Cavallo (2004) that I’ve used before

As we see it, real change is inherently a kind of learning. For people to change the way they think about and practice education, rather than merely being told what to do differently, we believe that practitioners must have experiences that enable appropriation of new modes of teaching and learning that enable them to reconsider and restructure their thinking and practice.

Rather than tell academics what to do, you need to create contextualised experiences for academics that enable appropriation of new models of teaching and learning. What most senior managers at universities and many of the commentators don’t see is that the environment at most universities is preventing academics from having these experiences, and then preventing them from appropriating the new models of teaching.

The policies, processes, systems and expectations senior managers create within universities are preventing academics from becoming “non-ludditical”. You can implement all the “projects” you want, but if you don’t work on the policies, processes, systems and expectations in ways that connect with the frames of reference of the academics within the institution, you won’t get growth.

References

Cavallo, D. (2004). Models of growth – Towards fundamental change in learning environments. BT Technology Journal, 22(4), 96-112.

Chickering, A. W., & Gamson, Z. F. (1987). Seven principles for good practice in undergraduate education. AAHE Bulletin, 39(7), 3-7.