Category Archives: design theory

Documenting the gap between “state of the art” and “state of the actual”

Came across Perrotta and Evans (2013) in my morning random ramblings through my PLN and was particularly struck by this

a rising awareness of a gap between ‘state of art’ experimental studies on learning and technology and the ‘state of the actual’ (Selwyn, 2011), that is, the messy realities of schooling where compromise, pragmatism and politics take centre stage, and where the technological transformation promised by enthusiasts over the last three decades failed to materialize. (pp. 261-262)

For my own selfish reasons (i.e. I have to work within the “state of the actual”) my research interests are in understanding and figuring out how to improve the “state of the actual”. My Moodlemoot’AU 2013 presentation next week is an attempt to establish the rationale and map out one set of interventions I’m hoping to undertake. This post is an attempt to make explicit some on-going thinking about this and related work. In particular, I’m trying to come up with a research project to document the “state of the actual” with the aim of figuring out how to intervene, but also, hopefully, of informing policy makers.

Some questions I need to think about

  1. What literature do I need to look at that documents the reality of working with current generation university information systems?
  2. What’s a good research method – especially data capture – to get the detail of the state of the actual?

Why this is important

A few observations can and have been made about the quality of institutional learning and teaching, especially university e-learning. These are

  1. It’s not that good.

    This is the core problem. It needs to be better.

  2. The current practices being adopted to remedy these problems aren’t working.

    Doing more of the same isn’t going to fix this problem. It’s time to look elsewhere.

  3. The workload for teaching staff is high and increasing.

    This is my personal problem, but I also think it’s indicative of a broader issue. i.e. many of the current practices aimed at improving quality assume a “blame the teacher” approach. Sure, there are some pretty poor academics, but most of the teachers I know are trying the best they can.

My proposition

Good TPACK == Good learning and teaching

Good quality learning and teaching requires good TPACK – Technological Pedagogical and Content Knowledge. The quote I use in the abstract for the Moodlemoot presentation offers a good summary (emphasis added)

Quality teaching requires developing a nuanced understanding of the complex relationships between technology, content, and pedagogy, and using this understanding to develop appropriate, context-specific strategies and representations. Productive technology integration in teaching needs to consider all three issues not in isolation, but rather within the complex relationships in the system defined by the three key elements. (Mishra & Koehler, 2006, p. 1029)

For some people the above is obvious. You can’t have quality teaching without a nuanced and context-specific understanding of the complex relationships between technology, pedagogy and content. Beyond this simple statement there are a lot of different perspectives on the nature of this understanding, the nature of the three components and their relationships. For now, I’m not going to engage with those. Instead, I’m simply arguing that

the better the quality of the TPACK, then the better the quality of the learning and teaching

Knowledge is not found (just) in the teacher

The current organisational responses to improving the quality of learning and teaching are almost entirely focused on increasing the level of TPACK held by the teacher. This is done by a variety of means

  1. Require formal teaching qualifications for all teachers.

    Because obviously, if you have a teaching qualification then you have better TPACK and the quality of your teaching will be better. Which is obviously why the online courses taught by folk from the Education disciplines are the best.

  2. Running training sessions introducing new tools.
  3. “Scaffolding” staff by requiring them to follow minimum standards and other policies.

This is where I quote Loveless (2011)

Our theoretical understandings of pedagogy have developed beyond Shulman’s early characteristics of teacher knowledge as static and located in the individual. They now incorporate understandings of the construction of knowledge through distributed cognition, design, interaction, integration, context, complexity, dialogue, conversation, concepts and relationships. (p. 304)

Better tools == Better TPACK == Better quality learning and teaching

TPACK isn’t just found in the head of the academic. It’s also found in the tools, the interactions etc. they engage in. The problem that interests me is that the quality of the tools found in the “state of the actual” of university e-learning is incredibly bad, especially in terms of helping generate TPACK.

Norman (1993) argues “that technology can make us smart” (p. 3) through our ability to create artifacts that expand our capabilities. Due, however, to the “machine-centered view of the design of machines and, for that matter, the understanding of people” (Norman, 1993, p. 9) our artifacts, rather than aiding cognition, “more often interferes and confuses than aids and clarifies” (p. 9). Without appropriately designed artifacts “human beings perform poorly or cannot perform at all” (Dickelman, 1995, p. 24). Norman (1993) identifies the long history of tool/artifact making amongst human beings and suggests that

The technology of artifacts is essential for the growth in human knowledge and mental capabilities (p. 5)

Documenting the “state of the actual”

So, one of the questions I’m interested in is just how well are the current artifacts being used in institutional e-learning helping “the growth in human knowledge and mental capabilities”?

For a long time, I’ve talked with a range of people about a research project that would aim to capture the experiences of those at the coal face to answer this question. The hoops I’m currently having to jump through in trying to bring together a raft of disparate information systems to finalise results for 300+ students have really got me thinking about this process.

As a first step, I’m thinking I’ll take the time to document this process. Not to mention my next task which is the creation/modification of three course sites for the courses I’m teaching next semester. The combination of both these tasks at the same time could be quite revealing.

References

Norman, D. A. (1993). Things that make us smart: Defending human attributes in the age of the machine. Reading, MA: Addison-Wesley.

Mishra, P., & Koehler, M. (2006). Technological pedagogical content knowledge: A framework for teacher knowledge. Teachers College Record, 108(6), 1017–1054.

Perrotta, C., & Evans, M. A. (2013). Instructional design or school politics? A discussion of “orchestration” in TEL research. Journal of Computer Assisted Learning, 29(3), 260–269. doi:10.1111/j.1365-2729.2012.00494.x

Does institutional e-learning have a TPACK problem?

The following is the first attempt to expand upon an idea that’s been bubbling along for the last few weeks. It arises from a combination of recent experiences, including

  • Working through the institutional processes to get BIM installed on the institutional Moodle.
  • Using BIM in my own teaching and the resulting changes that will be made (and maybe something along these lines).
  • Talking about TPACK to students in the ICTs and Pedagogy course.
  • On-going observations of what passes for institutional e-learning within some Australian Universities (and which is likely fairly common across the sector).

Note: the focus here is on the practice of e-learning within Universities and the institutionally provided systems and processes.

The problem(s)

A couple of problems that spark this thinking

  1. How people and institutions identify the tools available/required.
  2. How the tools provide appropriate support, especially pedagogical support, to the people using them.

Which tools?

One of the questions I was asked to address in my presentation asking for BIM to be installed on the institutional LMS was something along the lines of “Why would other people want to use this tool? We can’t install a tool just for one person.”

Well, one answer was that a quick Google search of the institution’s course specifications revealed 30+ courses in 2012 using reflective journals of varying types. BIM is a tool designed primarily to support the use of reflective learning journals by students via individual blogs.

I was quite surprised to find 30+ courses already doing this. This generated some questions

  • How are they managing the workload and the limitations of traditional approaches?
    The origins of BIM go back to when I took over a course that was using a reflective journal assessment task, implemented by students keeping journals as Word documents and submitting them at the end of semester. There were problems.
  • I wonder how many of the IT and central L&T people knew that there were 30+ courses already using this approach?
    In this context, it would be quite easy to draw the conclusion that the IT and central L&T folk are there to help people with the existing tools and keep their own workload to a minimum by controlling what new tools are added to the mix, rather than looking for opportunities for innovation within the institution. Which leads to…
  • I wonder why the institution wasn’t already actively looking for tools to help these folk?
    Especially given that reflective learning journals (diaries etc.) are “recognised as a significant tool in promoting active learning” (Thorpe, 2004, p. 327) but at the same time are also “demanding and time-consuming for both students and educators” (Thorpe, 2004, p. 339).

A combination of those questions/factors seems to contribute to recent findings about the workloads faced by academics in terms of e-learning (Tynan et al, 2012)

have increased both the number and type of teaching tasks undertaken by staff, with a consequent increase in their work hours

and (Bright, 2012, n.p.)

Lecturers who move into the online learning environment often discover that the workload involved not only changes, but can be overwhelming as they cope with using digital technologies. Questions arise, given the dissatisfaction of lecturers with lowering morale and increasing workload, whether future expansion of this teaching component in tertiary institutions is sustainable.

How the tools provide support?

One of the problems I’m facing with BIM is that the pedagogical approach I originally used, and which drove the design of BIM, is not the pedagogical approach I’m using now. The features and functions currently in BIM don’t match what I want to do pedagogically. I’m lucky, I can change the system. But not many folk are in that boat.

And this isn’t the first time we’ve faced this problem. Reaburn et al (2009) used BIM’s predecessor in a “work integrated learning” course where the students were working in a professional context. They got by, but this pedagogical approach had yet another set of different requirements.

TPACK

“Technological Pedagogical Content Knowledge (TPACK) is a framework that identifies the knowledge teachers need to teach effectively with technology” (Koehler, n.d.). i.e. it identifies a range of different types of knowledge that are useful, perhaps required, for the effective use of technology in teaching and learning. While it has its detractors, I believe that TPACK can provide a useful lens for examining the problems with institutional e-learning and perhaps identify some suggestions for how institutional e-learning (and e-learning tools) can be better designed.

To start, TPACK proposes that successful e-learning (I’m going to use that as short-hand for the use of technology in learning and teaching) requires the following types of knowledge (with my very brief descriptions)

  • Technological knowledge (TK) – how to use technologies.
  • Pedagogical knowledge (PK) – how to teach.
  • Content knowledge (CK) – knowledge of what the students are meant to be learning.

Within institutional e-learning you can see this separation in organisational structures and also the assumptions of some of the folk involved. i.e.

  • Technological knowledge – is housed in the institutional IT division.
  • Pedagogical knowledge – is housed in the central L&T division.
  • Content knowledge – academics and faculties are the silos of content knowledge.

Obviously there is overlap. Most academics have some form of TK, PK and CK. But when it comes to the source of expertise around TK, it’s the IT division, and so on.

TPACK proposes that there are combinations of these three types of knowledge that offer important insights

  • Pedagogical Content Knowledge (PCK) – the idea that certain types of content are best taught using certain types of pedagogy.
  • Technological Pedagogical Knowledge (TPK) – the knowledge that certain types of technologies work well with certain types of pedagogy (e.g. teaching critical analysis using a calculator probably isn’t a good combination)
  • Technological Content Knowledge (TCK) – that content areas draw on technologies in unique ways (e.g. mathematicians use certain types of technologies that aren’t used by historians)

Lastly, TPACK suggests that there is a type of knowledge in which all of the above are combined, and that when this is used effectively the best examples of e-learning arise. i.e. TPACK – Technological, Pedagogical and Content Knowledge.

The problem I see is that institutional e-learning, its tools, its processes and its organisational structures are getting in the way of allowing the generation and application of effective TPACK.

Some Implications

Running out of time, so some quick implications that I take from the above and want to explore some more. These are going to be framed mostly around my work with BIM, but there are potentially some implications for broader institutional e-learning systems which I’ll briefly touch on.

BIM’s evolution is best when I’m teaching with it

Assuming that I have the time, the best insights for the future development of BIM have arisen when I’m using BIM in my teaching, when I’m able to apply the TPACK that I have to identify ways the tool can help me. When I’m not using BIM in my teaching I don’t have the same insights.

At this very moment, however, I’m only really able to apply this TPACK because I’m running BIM on my laptop (and using a bit of data munging to bridge the gap between it and the institutional systems). This means I am able to modify BIM in response to a need, test it out and use it almost immediately. When/if I begin using BIM on the institutional version of Moodle, I won’t have this ability. At best, I might hope for the opportunity for a new version of BIM to be installed at the end of the semester.
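As an aside, the “bit of data munging” mentioned above is itself a small example of the gap-bridging work discussed later in this post. Here’s a minimal sketch of its flavour in TypeScript, with entirely hypothetical file names and column layouts (i.e. not the actual BIM or institutional formats):

```typescript
import { readFileSync, writeFileSync } from "node:fs";

// Local BIM/Moodle export (hypothetical layout):
//   student_number,post_count,mark
const rows = readFileSync("bim_marks_export.csv", "utf8")
  .trim()
  .split("\n")
  .slice(1) // drop the header row
  .map((line) => line.split(",")); // naive split: assumes no quoted commas

// Institutional gradebook upload (hypothetical layout):
//   STUDENT_ID,ASSESSMENT_CODE,RESULT
const out = ["STUDENT_ID,ASSESSMENT_CODE,RESULT"];
for (const [studentNumber, , mark] of rows) {
  out.push(`${studentNumber},A1_JOURNAL,${mark}`);
}
writeFileSync("gradebook_upload.csv", out.join("\n") + "\n");
```

Trivial as it looks, multiply this sort of glue work by every tool, every course and every semester and it becomes a real workload cost, one that is largely invisible to the institution.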

There are reasons why institutional systems have these constraints. The problem is that these constraints get in the way of generating and applying TPACK and thus limit the quality of the institutional e-learning.

I also wonder if there’s a connection between this and the adoption of Web 2.0 and other non-institutional tools by academics. i.e. do they find it easier to generate and apply TPACK to these external tools because they don’t have the same problems and constraints as the institutional e-learning tools?

BIM and multiple pedagogies

Arising from the above point is the recognition that BIM needs to be able to support multiple pedagogical approaches. i.e. the PK around reflective learning journals reveals many different pedagogical approaches. If BIM as an e-learning tool is going to effectively support these pedagogies then new forms of TPK need to be produced. i.e. BIM itself needs to know about and support the different reflective journal pedagogies.

There’s a lot of talk about how various systems are designed to support a particular pedagogical approach. However, I wonder just how many of these systems actually provide real TPK assistance? For example, the design of Moodle “is guided by a ‘social constructionist pedagogy'” but it’s pretty easy to see examples of how it’s not used that way when course sites are designed.

There are a range of reasons for this. Not the least of which is that teachers and academics creating course sites are often focused on more pragmatic tasks. But part of the problem is also, I propose, the level of TPK provided by Moodle, i.e. the level of technological support it provides for people to recognise, understand and apply that pedagogical approach.

There’s a two-edged sword here. Providing more TPK may help people adopt this approach, but it can also close off opportunities for different approaches. Scaffolding can quickly become a cage. Too much focus on a particular approach also closes off opportunities for adoption.

But on the other hand, the limited amount of specific TPK provided by the e-learning tools is, I propose, a major contributing factor to the workload issues around institutional e-learning. The tools aren’t providing enough direct support for what teachers want to achieve. So the people have to bridge the gap. They have to do more work.

BIM and distributed cognition – generating TPACK

One of the concerns raised in the committee that had to approve the adoption of BIM was about the level of support. How is the institution going to support academics who want to use BIM? The assumption being that we can’t provide the tool without some level of support and training.

This is a valid concern. But I believe there are two assumptions underpinning it which I’d like to question, and for which I’d like to explore alternatives. The assumptions are

  1. You can’t learn how to use the tool simply by using the tool.
    If you buy a good computer/console game, you don’t need to read the instructions. Stick it in and play. The games are designed to scaffold your entry into the game. I haven’t yet met an institutional e-learning tool that can claim the same. Some of this arises, I believe, from the limited amount of TPK most tools provide. But it’s also how the tool is designed. How can BIM be designed to support this?
  2. The introduction of anything new has to be accompanied by professional development and other forms of formal support.
    This arises from the previous point but it’s also connected to a previous post titled “Professional development is created, not provided”. In part, this is because the IT folk and the central L&T folk see their job as providing professional development sessions (and some have their effectiveness measured by the number of sessions provided or the number of helpdesk calls processed).

It’s difficult to generate TPACK

I believe that the current practices, processes and tools used by institutional e-learning systems make it difficult for the individuals and organisations involved to develop TPACK. Consequently the quality of institutional e-learning suffers. This contributes to the poor quality of most institutional e-learning, the limited adoption of features beyond content distribution and forums, and is part of the reason behind the perceptions of increasing workload around e-learning.

If this is the case, then can it be addressed? How?

References

Bright, S. (2012). eLearning lecturer workload: Working smarter or working harder? In M. Brown, M. Hartnett, & T. Stewart (Eds.), Future challenges, sustainable futures. Proceedings ascilite Wellington 2012. Wellington, NZ.

Reaburn, P., Muldoon, N., & Bookallil, C. (2009). Blended spaces, work based learning and constructive alignment: Impacts on student engagement. In Same places, different spaces. Proceedings ascilite Auckland 2009 (pp. 820–831). Auckland, NZ.

Thorpe, K. (2004). Reflective learning journals: From concept to practice. Reflective Practice: International and Multidisciplinary Perspectives, 5(3), 327–343.

Tynan, B., Ryan, Y., Hinton, L., & Mills, L. (2012). Out of hours: Final report of the project e-Teaching leadership: Planning and implementing a benefits-oriented costs model for technology-enhanced learning. Strawberry Hills, Australia.

Professional development is created, not provided

Over recent weeks I’ve been so busy that I’ve largely ignored Twitter. To my detriment. A quick return to it this afternoon found me following two links via tweets from @palbion. The two links were

  1. How effective is the professional development undertaken by teachers?, and
  2. Removing the lids of learning.

The first is a blog post outlining the many limitations of professional development as practiced in schools and many other locations (e.g. the L&T PD at universities) and suggesting how it can be fixed to become both “useful and cost effective”. This post troubled me greatly. I agree that much of Professional Development is essentially worthless, but at least two aspects of the post bothered me.

The assumption that impact on student learning outcomes is the only true measure of the value of Professional Development worries me significantly. It’s simplistic in that it reduces the complexity of schools, teaching and teachers to a single measure. The practice of such abstraction is always going to lose something. But worse, if you focus everything on one particular measure and it becomes a target, then, as Goodhart’s law suggests, it ceases to be a good measure.

But what really bugged me was that the solution to the woes of Professional Development was better Professional Development. I disagree. I think you have to get rid of Professional Development and replace it with learning. i.e. the teachers (and academics) essentially have to continue learning. Here’s my provocative proposal

Professional development is mostly a solution provided by management due to flaws in the system that management preside over.

i.e. the education (or university) system – in its broadest understanding – is set up to make it difficult for the members of that system to learn and more importantly make changes based on what they learn.

The post actually makes the point itself when it says

Fortunately there have been a raft of reports (e.g. from EPPI and from Ofsted, among many others) that tell us exactly what to look for, and the good news is that great teacher learning is a remarkably similar beast to the great pupil learning.

Slide 19 of the Removing the lids of learning presentation by Dean Shareski contains the following quote from Stephen Downes

We need to move beyond the idea that an education is something that is provided for us, and toward the idea that an education is something that we create for ourselves.

I suggest that you can replace “education” with “professional development” and as a result you identify the solution to the problem of Professional Development.

Understanding management students’ reflective practice through blogging

The following is a summary and perhaps some reflection upon Osman and Koh (2013). It’s part of the thinking and reading behind the re-design of the ICTs and pedagogy course I help teach to pre-service teachers.

Abstract

65 business (MBA/Egyptian) students participated in collaborative blogging over 5 weeks. Analysis (content analysis for critical thinking and theory/practice links) supports the potential of blogs as a tool “for reflection and learning in practitioner-oriented courses”. Implications for the design of blogging tasks are discussed.

Thoughts and todo

Provides some empirical evidence for the use of blogs for reflection and connecting theory and practice. Though the findings are generally what people would expect.

The task for these students was somewhat like a forced connection: you must post one topic and comment on two others. I wonder whether, if this had been more open/flexible/student-controlled, more contributions would have arisen? Perhaps only if appropriate support/connections were made.

To do

  • Look at Ho and Richards (1993) for a framework specific to journals of student teachers.
  • Look at framework of Greenlaw and DeLoach (2003) and also Osman & Duffy (2010)
  • Look at Osman & Duffy (2010) for the idea that theories are not actively taken up by students and remain detached areas of knowledge, not integrated into decision-making.
  • Look at Loving et al (2007) for another framework for evaluation.

Introduction

Problems in practitioner courses in combining theory and practice. Need to encourage reflection etc.

Reflection has some plusses. Uses Moon’s (1999, p. 155) definition

a mental process with purpose and/or outcome in which manipulation of meaning is applied to relatively complicated or unstructured ideas in learning or to problems for which there is no obvious solution

Blogs are a recent tool for this; benefits of blogs from the literature are listed and referenced

  • empowering students by giving a voice and venue for self-expression.
  • increasing sense of ownership, engagement and interest in learning.
  • may facilitate enriched opportunities for communication, challenge, cognitive conflict, deeper thinking and knowledge construction.

But there is a scarcity of studies that investigate “empirically”, with many relying on self-report data or anecdotal evidence. Few studies critically examine the quality of students’ reflection, especially in management education. Few provide explanations and thus limit the guidelines that support the design of blogging tasks to facilitate reflection and learning.

Literature review

Starts with references to the problem MBA programs have in combining academic rigor and practical application. A problem that teaching programs have had for a long time. Critical reflection is seen as a way to bridge this.

The rest is broken into the following sections

  1. Blogging and reflection.
    Individual journals are a common approach. Privacy provides a sheltered place but limits sharing/collaboration etc. Blogging provides some affordances that address this. Value is accepted by enthusiasts, but there is limited analysis. Some studies are mentioned, including a few using coding frameworks. One shows blogging has a positive impact on reflection, but peer comments have a negative impact.
  2. Critical thinking through blogging.
    Critical thinking is defined as “development of a habit of continuous reflection and questioning”. Few studies of blogging look at critical thinking.
  3. Fostering theory-practice linkages in management education.
    Explains the use of Kolb’s learning cycle in this study.

Research questions

  1. How critically evaluative were the reflections of graduate business students when they engaged in blogging?
  2. In their reflections, to what extent did these students link theory and practice? What phases of Kolb’s experiential learning cycle did these students focus on?

Methods

Students blogged for the last 5 weeks of a 10-week term, with 20% of assessment for the task. Guidelines were kept to a minimum. Graded on the quantity and quality of postings. Students were introduced to reflective practice and Kolb’s cycle beforehand.

Blogging groups (max 8) were self-assigned and access to the blogs was limited to the group. During the first week the instructor moderated blog posts. This was discontinued as a threat to student ownership.

Students had the opportunity to opt out of having their blog contributions analysed for the research. 54 provided signed consent.

Blog archives coded by two independent coders.

Results

The assessment task required students to initiate one topic and comment on two posts submitted by other groups per week, so the 65 students were expected to make 325 posts and 650 comments. In the end there were 144 topics and 399 comments, a total of 543 contributions, 44% less than the 975 anticipated.

RQ #1 – how critically evaluative were posts

Posts peaked at simplistic alternatives/argument (30%) and basic analysis (26%). About 14% reached theoretical inference, i.e. building arguments around theory. Apparently this was the expected level.

No significant differences between posts and comments.

RQ #2 – To what extent did students link theory and practice

Focused on higher level critical thinking posts. Used a “Kolb-based framework”.

Significant differences between posts in types of reflection. Students seemed more comfortable considering theory with experience, observation or experimentation.

Discussion

Results support use of blogging as a tool to encourage reflection. Mmm, not sure it’s innate to the technology, though the affordance is there.

Few posts were off task – but I think that’s probably a result of students asking those questions in other areas. The authors compare this with discussion forums, where content-oriented posts were only 40–50% of posts. Again, possibly the design of blogs in this context suggested it’s not the place to raise non-content questions. The authors do point out that this was a blended context, while the discussion forum references were totally online. And they pick up my point.

A surprising number of students reflected on their learning via blogs. Mostly positive, but a prominent concern was a desire for feedback, especially from the instructor. The authors suggest some reasons: the novelty of reflection requiring reassurance; a by-product of culture.

Some suggestions around student confusion, with reasons from the literature: what to write in a blog post; low self-efficacy re: the worthiness of their contribution; difficulty generating topics.

In this study students wanted the instructor to post discussion questions, i.e. the instructor needs to be more active in scaffolding struggling students.

Guidelines for designing blogging tasks

The article closes with the following list of guidelines (p. 30)

  1. Explain the importance of reflection as a vehicle for learning and continued professional development.
  2. Provide different forms of scaffolding. Many students are new to reflection and critical thinking as a more formal activity. In addition to giving them a framework and guidelines to inform their reflections, examples that illustrate quality reflection and critical thinking might be necessary. Students in this context seemed to especially need help with building theory based arguments, evaluating theories, and addressing ethical concerns for business issues.
  3. Give prompts to encourage reflections. Some students are often apprehensive about initiating reflections.
  4. Promote reflection and critical thinking over longer durations. A reflection task that extends for part of the semester might not be sufficient to adequately develop students’ reflective and critical thinking skills.
  5. Relate students’ reflections to class topic so that students see the value of reflection as an integral and legitimate ingredient of learning.
  6. Provide technical orientation at the beginning of the session. Although we assume that our students are tech savvy, they might not be.

Nothing too surprising there; it’s what I’ve done in the past and will aim to do next year.

References

Osman, G., & Koh, J. H. L. (2013). Understanding management students’ reflective practice through blogging. The Internet and Higher Education, 16, 23–31. doi:10.1016/j.iheduc.2012.07.001

Ho, B., & Richards, J. C. (1993). Reflective thinking through teacher journal writing: Myths and realities. Prospect, 8, 7–24.

Greenlaw, S. A., & DeLoach, S. B. (2003). Teaching critical thinking with electronic discussion. The Journal of Economic Education, 34(1), 36–52.

Loving, C. C., Schroeder, C., Kang, R., Shimek, C., & Herbert, B. (2007). Blogs: Enhancing links in a professional learning community of science and mathematics teachers. Contemporary Issues in Technology and Teacher Education, 7(3), 178–198.

Osman, G., & Duffy, T. (2010). Scaffolding critical discourse in online problem-based scenarios: The role of articulation and evaluative feedback. In M. B. Nunes, & M. McPherson (Eds.), IADIS International Conference e-Learning 2010: Vol 1 (pp. 156–160). International Association for Development of the Information Society.

Can/will learning analytics challenge the current QA mentality of university teaching

Ensuring the quality of the student learning experience has become an increasingly important task for Australian universities. Experience over the last 10 years and some recent reading suggests there are some limitations to how this is currently being done. New innovations/fashions like learning analytics appear likely to reinforce these limitations, rather than actually make significant progress. I’m wondering whether the current paradigm/mindset that underpins university quality assurance (QA) processes can be challenged by learning analytics.

The black box approach to QA

In their presentation at ascilite’2012, Melinda Lewis and Jason Lodge included the following slide.

[Image: slide from Lodge & Lewis, ascilite’2012]

The point I took from this image and the associated discussion was that the Quality Assurance approach used by universities treats the students as a black box. I’d go a step further and suggest that it is the course (or unit, or subject) as the aggregation of student opinion, satisfaction and results that is treated as the black box.

For example, I know of an academic organisational unit (faculty, school, department, not sure what it’s currently called) that provides additional funding to the teaching staff of a course if they achieve a certain minimum response rate on end of term course evaluations and exceed a particular mean response on 3 Likert-scale questions. The quality of the course, and the subsequent reward, is being based on a hugely flawed measure of quality: a measure that doesn’t care or know what happens within a course, just what students say at the end of it. Grade distribution (i.e. you don’t have too many fails or too many top results) is the other black box measure.

If you perform particularly badly on these indicators then you and your course will be scheduled for revision: a situation where a bunch of experts work with you to redesign the course curriculum, learning experiences etc. to help you produce the brand new, perfect black course box. These experts will have no knowledge of what went on in prior offerings of the course and they will disappear long before the course is offered again.

Increasingly institutions are expected to be able to demonstrate that they are paying attention to the quality of the student learning experience. This pressure has led to the creation of organisational structures, institutional leaders and experts, policies and processes that all enshrine this black box approach to QA. It creates a paradigm, a certain way of looking at the world that de-values alternatives. It creates inertia.

Learning analytics reinforcing the black box

Lodge and Lewis (2012, p. 561) suggest

The real power and potential of learning analytics is not just to save “at risk” students but also to lead to tangible improvements in the quality of the student learning experience.


The problem is that almost every university in Australia is currently embarking on a Learning Analytics project. Almost without exception those projects have “at risk” students as their focus. Attrition and retention is the concern. Some of these projects have multi-million dollar budgets. Given changing funding models and the Australian Government’s push to increase the diversity and percentage of Australians with higher education qualifications, this focus is not surprising.

It’s also not surprising that many of these projects appear to be reinforcing the current black box approach to quality assurance. Data warehouses are being built to enable people and divisions not directly involved with actually teaching the courses to identify “at risk” students and implement policies and processes that keep them around.

At best these projects will not impact on the actual learning experience. The interventions will occur outside of the course context. At worst, these projects will negatively impact the learning experience as already overworked teaching staff are made to jump through additional hoops to respond to the insights gained by the “at risk” learning analytics.

How to change this?

The argument we put forward in a recent presentation was that the institutional implementation of learning analytics needs to focus on “doing it with academics/students” rather than on doing it “for” and “to” academics/students. The argument here is that the “for” and “to” paths for learning analytics continues the tradition of treating the course as a black box. On the other hand, the “with” path requires direct engagement with academics within the course context to explore and learn how and with what impacts learning analytics can help improve the quality of the student learning experience.

In the presentation Trigwell’s (2001) model of factors that impact upon the learning of a student was used to illustrate the difference. The following is a representation of that model.

[Image: Trigwell’s model of teaching]

Do it to the academics/students

In terms of learning analytics, this path will involve some people within the institution developing systems, processes and policies that identify problems and define how those problems are to be addressed. For example, a data warehouse and its dashboards will highlight those students at risk. Another group at the institution will contact the students or perhaps their teachers. i.e. there will be changes at the institutional context level that essentially bypass the thinking and planning of the teacher and go direct to the teaching context. It’s done to them.

[Image: Doing it to]

The course level is largely ignored and if it is considered then courses are treated as black boxes.
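To make the “to” path concrete, here’s a minimal sketch of the kind of rule such a data warehouse dashboard might apply. The data shape and thresholds are hypothetical, and real projects use more elaborate models, but the point stands: the rule knows nothing about what is actually happening inside the course.

```typescript
// Hypothetical activity data pulled from a data warehouse.
interface StudentActivity {
  id: string;
  loginsLastFortnight: number;
  assessmentsSubmitted: number;
}

// Flag "at risk" students with a blunt, course-agnostic rule.
function flagAtRisk(students: StudentActivity[]): StudentActivity[] {
  return students.filter(
    (s) => s.loginsLastFortnight < 3 || s.assessmentsSubmitted === 0
  );
}

// Example: the first student is flagged, the second is not. Neither the rule
// nor the people acting on it know whether the course even required logins
// or submissions in that fortnight.
const flagged = flagAtRisk([
  { id: "s123", loginsLastFortnight: 1, assessmentsSubmitted: 0 },
  { id: "s456", loginsLastFortnight: 12, assessmentsSubmitted: 2 },
]);
console.log(flagged.map((s) => s.id)); // ["s123"]
```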

Do it for the academics/students

In this model a group – perhaps the IT division or the central L&T folk – will make changes to the context by selecting some tools for the LMS, some dashboards in the data warehouse etc. that are deemed to be useful for the academics and students. They might even run some professional development activities, perhaps even invite a big name in the field to come and give a talk about learning analytics and learning design. i.e. the changes are done for the academics/students in the hope that this will change their thinking and planning.

[Image: Doing it for]

The trouble is that this approach is typically informed by a rose-coloured view of how teaching/learning occurs in a course (e.g. very, very few academics actively engage in learning design in developing their courses); ignores the diversity of academics, students and learning; and forgets that we don’t really know how learning analytics can be used to understand student learning and how we might intervene.

The course is still treated as a black box.

Do it with the academics/students

[Image: Doing it with]

In this model, a group of people (including academics/students) work together to explore and learn how learning analytics can be applied. It starts with the situated context and looks for ways in which what we know can be harnessed effectively by academics within that context. It assumes that we don’t currently know how to do this and that by working within the specifics of the course context we can learn how and identify interesting directions.

The course is treated as an open box.

This is the approach our failed OLT application was trying to engage in. We’re thinking about going around again; if you’re interested then let me know.

The challenge of analytics to strategy

This post was actually sparked today by reading this article titled “Does analytics make us smart or stupid?” in which someone from an analytics vendor uses McLuhan’s Tetrad to analyse the possible changes that arise from analytics. In particular, it was this proposition

With access to comprehensive data sets and an ability to leave no stone unturned, execution becomes the most troublesome business uncertainty. Successful adaptation to changing conditions will drive competitive advantage more than superior planning. While not disappearing altogether, strategy is likely to combine with execution to become a single business function.

This seems to resonate with the idea that the black box approach to the course might be challenged by learning analytics. The “to” and “for” paths are much more closely tied to traditional views of QA, which are in turn largely based on the idea of strategy and top-down management practices. Perhaps learning analytics can be the spark that turns this QA approach away from the black box toward one focused more on execution, on what happens within the course.

I’m not holding my breath.

References

Lodge, J., & Lewis, M. (2012). Pigeon pecks and mouse clicks : Putting the learning back into learning analytics. In M. Brown, M. Hartnett, & T. Stewart (Eds.), Future challenges, sustainable futures. Proceedings ascilite Wellington 2012 (pp. 560–564). Wellington, NZ.

Trigwell, K. (2001). Judging university teaching. The International Journal for Academic Development, 6(1), 65–73.

Tertiary course design is very poor, and we solve it by “blame the teacher”

The following is inspired by a tweet which links to a newspaper article titled “Tertiary course design ‘very poor’”. An article certain to get a rise out of me because it continues the “blame the teacher” refrain common to certain types of central L&T people

After 33 years of working in higher education in all parts in NZ, the US and UK, the one thing we’ve become very clear about in curriculum design is that our people in higher education need to actually be educated as educators to work at that level

This seems to imply then that all of the courses taught by those with teaching qualifications should be beacons of quality learning experiences. My observations of courses at a number of universities taught by graduates of higher education teaching certificates and by those in Faculties of Education would seem to indicate otherwise. Not to mention reports of “ticking the box” from colleagues at top universities required to complete graduate certificates in higher education teaching. i.e. they have to complete the certificate to have a job, so they complete it. They are successful products of formal education, they know how to successfully jump through the required hoops.

This is not to suggest there is no value in these courses. But it’s not the solution to the problem. It’s not even the best way to build knowledge of teaching and learning amongst academics.

The following figure is from Richardson (2005)

[Image: Integrated model of teachers’ approaches to teaching]

The finding from this research is that there can be significant differences between the espoused theories informing teaching and learning and the theories in use (Leveson, 2004). Teachers can know all the “best” learning theory but not use it in their teaching. While teachers may hold higher-level views of teaching, other contextual factors may prevent the use of those conceptions (Leveson, 2004). Environmental, institutional, or other issues may impel teachers to teach in a way that is against their preferred approach (Samuelowicz & Bain, 2001). Prosser and Trigwell (1997) found that teachers with a student-focused approach were more likely to report that their departments valued teaching, that their class sizes were not too large, and that they had control over what was taught and how it was taught. In examining conceptions of e-learning held by academic staff, Gonzalez (2009) found that institutional factors and the nature of the students were the most relevant contextual factors influencing teaching.

Now, consider the world of Australian (and New Zealand?) Universities as we move into 2013. Do you think the environmental factors have gotten any better in terms of enabling teachers to teach in the ways they want? An increasing focus on retention, an increasingly diverse intake of students, decreasing funding, increasing use of e-learning, decreasing quality of institutional e-learning systems, increasing casualisation of the academic work-force, research versus teaching, increasing managerialisation and increasingly rapid rounds of restructuring… are any of these factors destined to encourage quality approaches to teaching and learning?

My argument is that given this environment, even if you could get every academic at a university to have a formal qualification in learning and teaching, there wouldn’t be any significant increase in the quality of student learning because the environment would limit any chance of action and only encourage academics to “tick” the qualifications box.

On the other hand, if the teaching and learning environment at a university wasn’t focused on the efficient performance of a set of plans (which limit learning) and instead focused on encouraging and enabling academics and the system to learn more about teaching and learning within their specific context…

References

Leveson, L. (2004). Encouraging better learning through better teaching: a study of approaches to teaching in accounting. Accounting Education, 13(4), 529–549.

Prosser, M., & Trigwell, K. (1997). Relations between perceptions of the teaching environment and approaches to teaching. British Journal of Educational Psychology, 67(1), 25–35.

Richardson, J. (2005). Students’ approaches to learning and teachers’ approaches to teaching in higher education. Educational Psychology, 25(6), 673–680.

Samuelowicz, K., & Bain, J. (2001). Revisiting academics’ beliefs about teaching and learning. Higher Education, 41(3), 299–325.

#ascilite2012 technical support and the tail wagging the dog

I’m slowly recovering from a week at conferences. First, ASCILITE’2012 (#ascilite2012) and second the SoLAR Southern Flare Conference (#FlareAus). I was going to spend the week before preparing, but marking and other tasks intervened. This meant I spent much of the week preparing presentations which meant a couple of late nights and limited social collaboration. Add in a couple of early flights and I’m a little tired and frustrated. This may come through in the following.

Perhaps the biggest frustration this week was the audio-visual support at #ascilite2012. This is summed up nicely by the following quote from the “Information for Presenters” page from the conference website

All authors are required to email their final PowerPoint presentation (with all embedded images and videos in the same folder) by no later than 20 November 2012.

Just to be clear on the point, the conference started on the 25th of November. That’s right, the expectation was that we’d have our presentations completed 5 days before the conference started.

This probably wasn’t going to happen for most people, so a follow-up option was provided (from the same page)

We would prefer that presenters use the equipment we provide in the venue as each venue will have mac and pc capability, therefore we ask for your presentation before hand, or at least 5 hours before your presentation at the event.

According to an overheard comment from one of the people organising the presentation support, this is how all conferences work.

Sorry, but no.

Tail wagging the dog

To me this is a perfect example of the tail represented by technology and the technologists wagging the dog.

[Image: Edu Doggy]

For at least the last 10 years I’ve been taking laptops to conferences. For me – and many others I know – the process is to work on presentations until the very last minute, due to two main factors. First, we’re busy. I didn’t get a chance to work directly on my presentation for #ascilite2012 until I left home to travel to Wellington. I didn’t really get into my #FlareAus presentation until the night before. Second, we like to incorporate insights, comments and events from the conference. In the hour or so before my #ascilite2012 presentation, the conference chair introduced the idea of FOMO to describe MOOCs and other hypes, and Neil Selwyn decried the absence of a focus on the present in educational technology research. Both points resonated strongly with my presentation. I had to work these into the presentation.

Theoretically, this necessary change was not possible.

Which is somewhat ironic given that the aim of the presentation, the paper and my thesis was to argue that university e-learning suffers from exactly the same problem.

Especially when the #ascilite2012 call for papers is talking about

Recent waves of global uncertainty coupled with local crises and government reforms are reshaping the tertiary education landscape.

Doing it with academics not possible

An extension to this proposition is that since the people, processes and products of university e-learning are inflexible, university e-learning is by definition done either “to” or “for” the academics. i.e. the tail wags the dog. The practice of e-learning is constrained by the people, process and product. This prevents university e-learning being done “with” the academics, i.e. as a learning process. This was the theme picked up in our #FlareAus presentation.

The proposal being that learning analytics within universities will largely be done “to” and “for” academics, rather than “with”. Subsequent to this will be a whole range of pitfalls and eventually the likely end result that learning analytics will become yet another fashion, fad or band-wagon.

Evidence of workarounds

Just as I chose to ignore the requirements of the audio-visual folk at #ascilite2012, there was evidence at #FlareAus of people working around the requirements/constraints of university e-learning.

The presentation from Abelardo Pardo used the client-side (browser) approach to working around the inflexibility of the LMS (Moodle). i.e. staff install a browser plugin that identifies when a particular LMS web page arrives in the browser and adds something useful to the page.
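For the curious, here’s a minimal sketch of that client-side style as a Greasemonkey/Tampermonkey-like userscript (written here in TypeScript). Everything in it (the URL, the selectors, the “something useful”) is hypothetical; it is not Pardo’s actual plugin, just the shape of the approach:

```typescript
// ==UserScript==
// @name   Hypothetical LMS page augmenter
// @match  https://moodle.example.edu/mod/assign/view.php*
// ==/UserScript==

// When a matching LMS page loads, add something useful to it. Here: append a
// link next to each student name in a (hypothetical) submissions table. The
// selectors would need to match the real Moodle markup.
window.addEventListener("load", () => {
  const rows = document.querySelectorAll<HTMLTableRowElement>("table.submissions tbody tr");
  rows.forEach((row) => {
    const nameCell = row.querySelector<HTMLTableCellElement>("td.username");
    if (!nameCell) return; // skip header/empty rows
    const link = document.createElement("a");
    const student = encodeURIComponent(nameCell.textContent ?? "");
    link.href = `https://example.edu/extra-info?student=${student}`;
    link.textContent = " [more info]";
    nameCell.appendChild(link);
  });
});
```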

Susan Tull presented on the University of Canterbury’s LearnTrak system (more detail in this EDUCAUSE Review article). LearnTrak is a customised version of GISMO, a “Graphical Interactive Student Monitoring Tool for Moodle”. Susan’s presentation was before mine at #FlareAus. I liked the idea because they were working with their academics to provide a system that worked for them, one that responded to local needs. At least that was the impression.

GISMO apparently takes the Moodle plugin approach, but it appears to break away from Moodle’s interface fairly quickly in order to present a fairly detailed collection of reports, mostly charts.

Both these approaches have their limitations. But I am now wondering if there is a vein (rich or otherwise) of research opportunities in developing better and different approaches to breaking the inflexibility of the product and the process of university e-learning. This might become a theme.