Category Archives: design theory

Statistics in Education – Week 1

Have signed up for another MOOC – Statistics in Education for Mere Mortals – to fill a hole. What follows is the diary of the first week.

According to the syllabus, I’m already a bit behind.

So the instructor is doing some research around participation – one of the motivations for offering the course. Google map of participants. Seems I’m #2 for Queensland. Concentration seems to be Eastern US.

Canvas

The MOOC is being run using the Canvas LMS. Second time using the system. Am finding it interesting that there seems to be in-built support for the idea of a learning path. The series of activities/resources is sequential and the system seems to support that. The lack of support for this type of functionality in Moodle is something I’ve missed. Finding the ability to step through the activities sequentially appealingly efficient.

Research questions

Will be interesting to see the research that comes of this. Have to admit to some of the questions leaving me a little underwhelmed.

First presentation

And the content begins. A 20-minute video. Lecture with a talking head in the bottom left-hand corner, which disappears when the slides start. No annotation of the slides during the lecture; it might have helped in places.

:) A “wii play station” as a type of video game console.

Good quote

Measurement is limiting the data … so that those data may be interpreted and, ultimately, compared to an acceptable qualitative or quantitative standard

Data limited by: measurement construct; instrument capability; amount of raw information we are prepared to deal with

Need to think about how this applies to analytics. Data mining has approaches to get around this limitation of statistical approaches.

Metaphor

Isolating meaningful data when conducting most research studies is like …. filling a tea cup with a fire hose

Four scales of measurement

Different scales suggest different operations are possible.

  1. Nominal scale – Frequency distribution

    Nominal == name. Numbers are used as a name, not as a quantity. Doing arithmetic on these numbers is nonsensical.

  2. Ordinal scale – median and percentiles

    Ordinal == order/ranking. e.g. ranking preferred candidates.

  3. Interval scale – add or subtract, mean, standard deviation, standard error of the mean
    • Has equal units of measurement.
    • Zero point established arbitrarily.

      e.g. temperature and 0 degrees.

    • Can determine mean, standard deviation, and product moment correlation.
    • Can apply inferential statistical analysis.
  4. Ratio scale – ratio
    • Equal measurement units.
    • Has an absolute zero point.
    • Expresses values in terms of multiples and fractional parts and the ratios are true ratios (e.g. ruler )
    • Can determine geometric mean and percentage variation
    • Can conduct virtually any inferential statistical analysis

Measuring temperature is given as an example of an interval scale, where you can’t say 40° is twice as hot as 20°. It’s just an interval, not a ratio, whereas length is a ratio scale.
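
A quick worked illustration (mine, not from the lecture): converting Celsius to Fahrenheit destroys the apparent 2:1 relationship because the zero point is arbitrary, while converting centimetres to inches preserves it.

```python
# Interval vs ratio: a "2:1" pair of Celsius temperatures doesn't survive a
# change of units, but a 2:1 pair of lengths does.

def c_to_f(celsius):
    return celsius * 9 / 5 + 32

def cm_to_inches(cm):
    return cm / 2.54

print(c_to_f(40) / c_to_f(20))              # 104/68 = ~1.53, not 2 -> interval scale
print(cm_to_inches(40) / cm_to_inches(20))  # 2.0 -> ratio scale
```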

Types of Statistics

Presents two

  1. Measures of central tendency – first module
  2. Measures of variability – second module

Measures of central tendency

  • Mean – average of a set of numbers
  • Median – the number at the midpoint of a set of numbers.
  • Mode – the most frequently occurring number.

All are the same in a normal distribution. But not in a skewed distribution.

So the statistics in this course assume a normal distribution. Seems limiting.

In passing, central tendency gets defined as the number that best represents a group of numbers. The explanation of median/mean would have been better illustrated visually, rather than by narration of a text-based PowerPoint slide.
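
A minimal sketch (mine, not from the course) of the three measures on a small, deliberately skewed set of scores; in a symmetric distribution they coincide, while a long tail drags the mean away from the median and mode.

```python
from statistics import mean, median, mode

scores = [2, 3, 3, 3, 4, 5, 20]  # one extreme score skews the set

print(mean(scores))    # ~5.7 -> pulled toward the outlier
print(median(scores))  # 3
print(mode(scores))    # 3
```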

Normal distribution

Woo hoo. Narrated lecture + graphics tablet.

As the description of the normal distribution proceeds, I’m wondering how on earth I would ever be doing anything that would have data in a normal distribution? But perhaps just indicates the value of “central tendency”.

The Galton Machine

A bit of fun. Link to Java applet.
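
In case the applet link rots, here’s a rough Python stand-in: each ball bounces left or right at every row, and counting where the balls land gives a binomial distribution that approximates the normal curve.

```python
import random
from collections import Counter

def galton(balls=1000, rows=10):
    """Simulate a Galton board: each ball makes `rows` random left/right bounces."""
    return Counter(sum(random.randint(0, 1) for _ in range(rows)) for _ in range(balls))

# Crude text histogram of where the balls land.
bins = galton()
for position in range(11):
    print(f"{position:2d} {'*' * (bins[position] // 10)}")
```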

Computing the mean of a set of scores

It appears that Excel will be the statistical software of choice. Perhaps including some auto marking of student work. The first is a simple task to test this out. Apparently going to take an hour to do. 11 minutes in, not sure how it could drag on that long.

It will be interesting to see how many questions arise from simple technical issues – like using different versions of Excel. Shall also be interesting to see how the difficulty of the activities grows.

Interesting that the quiz self evaluation asks for results in tenths, but the quiz system wants to add a couple of zeros to the end. I can see that throwing a few people off.

Woo hoo. 100%.

The next page after the quiz is a discussion forum for general help. There are a few folk reporting problems. Especially with the second and third questions. I’m guessing this arises from this combination of factors

  1. The video on creating the spreadsheet only rounded off one set of averages, and not the averages that would be used for the second and third questions. It didn’t need to round those, because with the video’s data they already came out to a neat tenth.
  2. The new data doesn’t produce averages that come out to a neat tenth.
  3. The quiz question asks for results rounded to the tenth.

In entering the new data, I added the rounding. But I imagine others didn’t.

Appears that other folk didn’t modify their existing spreadsheet.
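
My guess at the mechanics, as a sketch with made-up numbers: the mean of the new data isn’t already a clean tenth, so unless the spreadsheet (or the student) explicitly rounds it, the transcribed answer won’t match what the quiz expects.

```python
# Hypothetical scores standing in for the quiz data (not the actual values).
scores = [7, 8, 8, 9, 6, 7, 9]

raw_mean = sum(scores) / len(scores)
print(raw_mean)            # 7.7142857... -> what an unrounded cell displays
print(round(raw_mean, 1))  # 7.7 -> what the quiz wants (rounded to a tenth)
```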

Ahh, other folk from China and Pakistan reporting being unable to access the YouTube videos.

Descriptive statistics – Standard Deviation

Onward and upward.

Statistic has two meanings: a description or an estimate about a population.

Mm, the video didn’t do a great job of clearly defining the difference between sample and population. It used an example, but didn’t clearly define it. Google is my friend.

And here comes the terminology

  • Population – “a set of entities concerning which statistical inferences are to be drawn”.
  • Sample – a subset of the population.

The questions

  1. Collecting data on a population?

    Describe the population using a parameter.

  2. Want to know about a population, but can only collect data on a sample?

    Sampling gives the sample and then we do a description using a statistic, which is then used to make an inference about the parameter for the population.

  3. Only collecting data on a sample?

Mmm, seems Broad is cleaning up in Durham.

Here comes the maths and symbols.

The population mean is denoted mu (μ). The sample mean is X bar (an X with a bar over the top), computed by summing the X values and dividing by the number of scores.

So, we did central tendency above. That’s one type of descriptive statistic. Now it’s the other main type of descriptive statistics: measures of variability. Kurtosis – the shape of the curve, its peakedness.

Three methods to measure variability

  1. Range – Difference between high and low scores.

    Only tells the difference between two scores. It ignores all the others.

  2. Deviation scores – Compute the difference between each score and the grand mean.

    Now take the average. Well, the average of these is always 0.

    This is described verbally; it could have been much better with a visual.

  3. Standard deviation

    Square the deviation scores first (this gets rid of the negatives), average the squares, then take the square root.

[Image by kxp130 on Flickr – Creative Commons Attribution-Noncommercial 2.0 Generic License]

When estimating the population parameter from a sample, the formula uses N-1 rather than N. Why? Apparently a rule of thumb developed over time; I imagine there must be some research behind it. The instructor admits it’s not a good explanation. It’s a problem area.
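
As I understand it, this is the usual population-versus-sample distinction (Bessel’s correction); a minimal sketch of both variants using Python’s statistics module rather than Excel.

```python
from statistics import mean, pstdev, stdev

scores = [10, 12, 23, 23, 16, 23, 21, 16]  # made-up scores

m = mean(scores)
deviations = [x - m for x in scores]
print(sum(deviations))  # 0.0: the average deviation score is always zero

print(pstdev(scores))   # divides by N   -> describing the whole population
print(stdev(scores))    # divides by N-1 -> estimating a population from a sample
```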

At this point, it’s interesting that there hasn’t been much explanation of why we’re doing this.

Another example with this table where some visualisation accompanying the verbal explanation would have helped.

And now the difficult stuff. What does a SD of 8.66 mean?

If all sold the same then SD = 0. If all sold about the same, then SD should be small. But what is considered small?

And now another Excel exercise, apparently 26 minutes of video == 90 minutes worth of activity. I don’t think so….really dragging now. Small win. Picked up an Excel tip.

2nd quiz done. I think I’ll stop there for the night. Time to watch some cricket and read a book.

Identifying and filling some TPACK holes

The following post started over the weekend. I’m adding this little preface as a result of the wasted hours I spent yesterday battling badly designed systems and the subsequent stories I’ve heard from others today. One of those stories revolved around how the shrinking available time and the poorly designed systems are driving one academic to make a change to her course that she knows is pedagogically inappropriate, but which is necessary due to the constraints of these systems.

And today (after a comment from Mark Brown in his Moodlemoot’AU 2013 keynote last week) I came across this blog post from Larry Cuban titled “Blaming Doctors and Teachers for Underuse of High-tech tools”. It includes the following quote

For many doctors, IT-designed digital record-keeping is a Rube Goldberg designed system.

which sums up nicely my perspective of the systems I’ve just had to deal with.

Cuban’s post finishes with three suggested reasons why he thinks doctors and teachers get blamed for resisting technology. Personally, I think he’s missed the impact of “enterprise” IT projects, including

  • Can’t make the boss look bad.

    Increasingly, IT projects around e-learning have become “enterprise”, i.e. big. As big projects, the best practice manual requires that the project be visibly led by someone in the upper echelons of senior management. When large IT projects fail to deliver the goods, you can’t make this senior leader look bad. So someone else has to be blamed.

  • The upgrade boat.

    When you implement a large IT project, it has to evolve and change. Most large systems – including open source systems like Moodle – do this by having a vendor-driven upgrade process. So every year or so the system will be upgraded. An organisation can’t fall behind versions of a system, because eventually they are no longer supported. So, significant resources have to be invested in regularly upgrading the system. Those resources contribute to the inertia of change. You can’t change the system to suit local requirements as all the resources are invested in the upgrade boat. Plus, if you did make a change, then you’d miss the boat.

  • The technology dip.

    The upgrade boat creates another problem, the technology dip. Underwood and Dillon (2011) talk about the technology dip as a dip in educational outcomes that arises after the introduction of technological change. As the teachers and students grapple with the changes in technology they have less time and energy to expend on learning and teaching. When you have an upgrade boat coming every 12 months, then the technology dip becomes a regular part of life.

The weekend start to this post

Back from Moodlemoot’AU 2013 and time to finalise results and prepare course sites for next semester. Both are due by Monday. The argument from my presentation at the Moot was that the presence of “TPACK holes” (or misalignment) causes problems. The following is a slide from the talk which illustrates the point.

[Slide 14 from the Moodlemoot presentation]

I’d be surprised if anyone thought this was an earth-shattering insight. It’s kind of obvious. But if it really were treated as obvious, you wouldn’t expect institutional e-learning to be replete with examples of such holes. The following is an attempt to document some of the TPACK holes I’m experiencing in the tasks I have to complete this weekend. It’s also an example of recording the gap outlined in this post.

Those who haven’t submitted

Of the 300+ students in my course there are some that have had extensions, but haven’t submitted their final assignment. They are most likely failing the course. I’d like to contact them and double check that all is ok. I’m not alone in this; I know most people do it. All of my assignments are submitted via an online submission system, but there is no direct support in this system for this task.

The assignment system will give me a spreadsheet of those who haven’t submitted. But it doesn’t provide an email address for those students, nor does it connect with other information about the students. For example, those who have dropped the course or have failed other core requirements. Focusing on those students with extensions works around that requirement. But I do have to get the email addresses.

Warning markers about late submissions

The markers for the course have done a stellar job. But there are still a few late assignments to arrive. In thanking the markers I want to warn them of the assignments still to come, but even with fewer than 10 assignments to come this is more difficult than it sounds, for the following reasons

  • The online assignment submission system treats “not yet submitted” assignments as different from submitted assignments, and submitted assignments are the only place where you can allocate students to markers. You can’t allocate before submission.
  • The online assignment submission system doesn’t know about all the different types of students. e.g. overseas students studying with a university partner are listed as “Toowoomba, web” by the system. I have to go check the student records system (or some other system) to determine the answer.
  • The single sign-on for the student records system doesn’t work with the Chrome browser (at least in my context) and I have to open up Safari to get into the student records system.

Contacting students in a course

I’d like to send a welcome message to students in a course prior to the Moodle site being made available.

The institution’s version of Peoplesoft provides such a notify method (working in Chrome, not Safari) but doesn’t allow the attachment of any files to the notification.

I can copy the email addresses of students from that Peoplesoft system, but Peoplesoft uses commas to separate the email addresses meaning I can’t copy and paste the list into the Outlook client (it expects semi-colons as the separator).
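
The workaround is trivial but annoying to do by hand every time; a throwaway sketch, assuming the copied addresses really are just comma separated.

```python
# Addresses as copied from Peoplesoft (hypothetical example).
peoplesoft_list = "a.student@example.edu, b.student@example.edu, c.student@example.edu"

# Outlook's To: field wants semicolons as the separator.
outlook_list = "; ".join(address.strip() for address in peoplesoft_list.split(","))
print(outlook_list)
```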

Changing dates in a study schedule

Paint me as old school, but personally, I believe there remains a value to students of having a study schedule that maps out the semester. A Moodle site home page doesn’t cut it. I’ve got a reasonable one set up for the course from last semester, but new semester means new dates. So I’m having to manually change the dates, something that could be automated.
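
The automation could be as simple as shifting every date by the offset between the two semesters’ start dates; a minimal sketch, with made-up dates, of generating a new weekly schedule.

```python
from datetime import date, timedelta

# Offset between last semester's start and this semester's start (made-up dates).
offset = date(2013, 7, 15) - date(2013, 2, 25)

old_week_starts = [date(2013, 2, 25) + timedelta(weeks=week) for week in range(15)]
new_week_starts = [start + offset for start in old_week_starts]

for week, start in enumerate(new_week_starts, start=1):
    print(f"Week {week}: {start.strftime('%d %b %Y')}")
```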

Processing final results

As someone in charge of a course, part of my responsibilities is to check the overall results for students, ensure that it’s all okay as per formal policy and then put them through the formal approval processes. The trouble is that none of the systems provided by the institution support this. I can’t see all student results in a single system in a form that allows me to examine and analyse the results.

All the results will eventually end up in a Peoplesoft gradebook system, in which the results are broken up based on the students’ “mode” of learning, i.e. one category for each of the 3 different campuses and another for online students. But I cannot actually get any of that information out in a usable form. It is only available in a range of different web pages. If the Peoplesoft web interface was halfway decent this wouldn’t be such a problem, but dealing with it is incredibly time consuming. Especially in a course with 300+ students.

I need to get all the information into a spreadsheet so that I can examine, compare etc. I think I’m going to need

  • Student name, number and email address (just in case contact is needed), campus/online.

    Traditionally, this will come from Peoplesoft. Might be some of it in EASE (online assignment submission).

  • Mark for each assignment and their Professional Experience.

    The assignment marks are in EASE. The PE mark is in the Moodle gradebook.

    There is a question as to whether or not the Moodle gradebook will have an indication of whether they have an exemption for PE.

EASE provides the following spreadsheets, and I’m probably not the only one to wonder why these two spreadsheets weren’t combined into one.

  1. name, number, submission details, grades, marker.
  2. name, number, campus, mode, extension date, status.

Moodle gradebook will provide a spreadsheet with

  • firstname, surname, number…..email address, Professional Experience result

Looks like the process will have to be

  1. Download Moodle gradebook spreadsheet.
  2. Download EASE spreadsheet #1 and #2 (see above) for Assignment 1.
  3. Download EASE spreadsheet #1 and #2 (see above) for Assignment 2.
  4. Download EASE spreadsheet #1 and #2 (see above) for Assignment 3.
  5. Bring these together into a spreadsheet.

    One option would be to use Excel. Another simpler method (for me) might be to use Perl. I know Perl much better than Excel and frankly it will be more automated with Perl than it would be with Excel (I believe).

    Perl script to extract data from the CSV files, stick it in a database for safe keeping and then generate an Excel spreadsheet with all the information? Perhaps. A rough sketch of the merging idea follows the spreadsheet layout below.

Final spreadsheet might be

  • Student number, name, email address, campus/mode,
  • marker would be good, but there’ll be different markers for each assignment.
  • a1 mark, a2 mark, a3 mark, PE mark, total, grade
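
A rough sketch of what the merging could look like – here in Python rather than the Perl I’d actually use – with invented file names and column headings, since the real EASE and Moodle exports will differ. It joins the exports on student number and writes one combined spreadsheet.

```python
import csv
from collections import defaultdict

# Hypothetical file names and column headings; the real exports will differ.
sources = {
    "a1": "ease_assignment1.csv",
    "a2": "ease_assignment2.csv",
    "a3": "ease_assignment3.csv",
    "pe": "moodle_gradebook.csv",
}

students = defaultdict(dict)  # student number -> merged details

for column, filename in sources.items():
    with open(filename, newline="") as source:
        for row in csv.DictReader(source):
            record = students[row["student_number"]]
            record.setdefault("name", row.get("name", ""))
            record[column] = row.get("grade", "")

with open("final_results.csv", "w", newline="") as output:
    writer = csv.writer(output)
    writer.writerow(["number", "name", "a1", "a2", "a3", "pe", "total"])
    for number, record in students.items():
        marks = [float(record.get(key) or 0) for key in ("a1", "a2", "a3", "pe")]
        writer.writerow([number, record.get("name", ""), *marks, sum(marks)])
```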

An obvious extension would be to highlight students who are in situations that I need to look more closely at.

A further extension would be to have the Perl script do comparisons of marking between markers, results between campuses, generate statistics etc.

Also, it would probably be better to have the Perl script download the spreadsheets directly, rather than doing it manually. But that’s a process I haven’t tried yet. Actually, over the last week I did try this, but the institution uses a single sign-on method that involves JavaScript, which breaks the traditional Perl approaches. There is a potential method involving Selenium, but that’s apparently a little flaky – a task for later.

Slumming it with Peoplesoft

I got the spreadsheet process working. It helped a lot. But in the end I still had to deal with the Peoplesoft gradebook and the kludged connection between it and the online assignment submission system. Even though the spreadsheet helped reduce a bit of work, it didn’t cover all of the significant cracks. In the absence of better systems, these are cracks that have to be covered over by human beings completing tasks for which evolution has poorly equipped them. Lots of repetitive, manual copying of information from one computer application to another. Not a process destined to be completed without human error.

Documenting the gap between “state of the art” and “state of the actual”

Came across Perrotta and Evans (2013) in my morning random ramblings through my PLN and was particularly struck by this

a rising awareness of a gap between ‘state of art’ experimental studies on learning and technology and the ‘state of the actual’ (Selwyn, 2011), that is, the messy realities of schooling where compromise, pragmatism and politics take centre stage, and where the technological transformation promised by enthusiasts over the last three decades failed to materialize. (pp. 261-262)

For my own selfish reasons (i.e. I have to work within the “state of the actual”) my research interests are in understanding and figuring out how to improve the “state of the actual”. My Moodlemoot’AU 2013 presentation next week is an attempt to establish the rationale and map out one set of interventions I’m hoping to undertake. This post is an attempt to make explicit some on-going thinking about this and related work. In particular, I’m trying to come up with a research project to document the “state of the actual” with the aim of trying to figure out how to intervene, but also, hopefully, to inform policy makers.

Some questions I need to think about

  1. What literature do I need to look at that documents the reality of working with current generation university information systems?
  2. What’s a good research method – especially data capture – to get the detail of the state of the actual?

Why this is important

A few observations can and have been made about the quality of institutional learning and teaching, especially university e-learning. These are

  1. It’s not that good.

    This is the core problem. It needs to be better.

  2. The current practices being adopted to remedy these problems aren’t working.

    Doing more of the same isn’t going to fix this problem. It’s time to look elsewhere.

  3. The workload for teaching staff is high and increasing.

    This is my personal problem, but I also think it’s indicative of a broader issue, i.e. many of the current practices aimed at improving quality assume a “blame the teacher” approach. Sure, there are some pretty poor academics, but most of the teachers I know are trying the best they can.

My proposition

Good TPACK == Good learning and teaching

Good quality learning and teaching requires good TPACK – Technological Pedagogical and Content Knowledge. The quote I use in the abstract for the Moodlemoot presentation offers a good summary (emphasis added)

Quality teaching requires developing a nuanced understanding of the complex relationships between technology, content, and pedagogy, and using this understanding to develop appropriate, context-specific strategies and representations. Productive technology integration in teaching needs to consider all three issues not in isolation, but rather within the complex relationships in the system defined by the three key elements. (Mishra & Koehler, 2006, p. 1029)

For some people the above is obvious. You can’t have quality teaching without a nuanced and context-specific understanding of the complex relationships between technology, pedagogy and content. Beyond this simple statement there are a lot of different perspectives on the nature of this understanding, the nature of the three components and their relationships. For now, I’m not getting engaged in those. Instead, I’m simply arguing that

the better the quality of the TPACK, then the better the quality of the learning and teaching

Knowledge is not found (just) in the teacher

The current organisational responses to improving the quality of learning and teaching are almost entirely focused on increasing the level of TPACK held by the teacher. This is done by a variety of means

  1. Require formal teaching qualifications for all teachers.

    Because obviously, if you have a teaching qualification then you have better TPACK and the quality of your teaching will be better. Which is obviously why the online courses taught by folk from the Education disciplines are the best.

  2. Running training sessions introducing new tools.
  3. “Scaffolding” staff by requiring them to follow minimum standards and other policies.

This is where I quote Loveless (2011)

Our theoretical understandings of pedagogy have developed beyond Shulman’s early characteristics of teacher knowledge as static and located in the individual. They now incorporate understandings of the construction of knowledge through distributed cognition, design, interaction, integration, context, complexity, dialogue, conversation, concepts and relationships. (p. 304)

Better tools == Better TPACK == Better quality learning and teaching

TPACK isn’t just found in the head of the academic. It’s found in the tools, the interaction etc they engage in. The problem that interests me is that the quality of the tools etc found in the “state of the actual” within university e-learning is incredibly bad. Especially in terms of helping the generation of TPACK.

Norman (1993) argues “that technology can make us smart” (p. 3) through our ability to create artifacts that expand our capabilities. Due, however, to the “machine-centered view of the design of machines and, for that matter, the understanding of people” (Norman, 1993, p. 9) our artifacts, rather than aiding cognition, “more often interferes and confuses than aids and clarifies” (p. 9). Without appropriately designed artifacts “human beings perform poorly or cannot perform at all” (Dickelman, 1995, p. 24). Norman (1993) identifies the long history of tool/artifact making amongst human beings and suggests that

The technology of artifacts is essential for the growth in human knowledge and mental capabilities (p. 5)

Documenting the “state of the actual”

So, one of the questions I’m interested in is just how well are the current artifacts being used in institutional e-learning helping “the growth in human knowledge and mental capabilities”?

For a long time, I’ve talked with a range of people about a research project that would aim to capture the experiences of those at the coal face to answer this question. The hoops I am having to currently jump through in trying to bring together a raft of disparate information systems to finalise results for 300+ students has really got me thinking about this process.

As a first step, I’m thinking I’ll take the time to document this process. Not to mention my next task which is the creation/modification of three course sites for the courses I’m teaching next semester. The combination of both these tasks at the same time could be quite revealing.

References

Norman, D. A. (1993). Things that make us smart: Defending human attributes in the age of the machine. Reading, MA: Addison-Wesley.

Mishra, P., & Koehler, M. (2006). Technological pedagogical content knowledge: A framework for teacher knowledge. Teachers College Record, 108(6), 1017–1054.

Perrotta, C., & Evans, M. A. (2013). Instructional design or school politics? A discussion of “orchestration” in TEL research. Journal of Computer Assisted Learning, 29(3), 260–269. doi:10.1111/j.1365-2729.2012.00494.x

Does institutional e-learning have a TPACK problem?

The following is the first attempt to expand upon an idea that’s been bubbling along for the last few weeks. It arises from a combination of recent experiences, including

  • Working through the institutional processes to get BIM installed on the institutional Moodle.
  • Using BIM in my own teaching and the resulting changes (and maybe something along these lines) that will be made.
  • Talking about TPACK to students in the ICTs and Pedagogy course.
  • On-going observations of what passes for institutional e-learning within some Australian Universities (and which is likely fairly common across the sector).

Note: the focus here is on the practice of e-learning within Universities and the institutionally provided systems and processes.

The problem(s)

A couple of problems that spark this thinking

  1. How people and institutions identify the tools available/required.
  2. How the tools provide appropriate support, especially pedagogical, to the people using them.

Which tools?

One of the questions I was asked to address in my presentation to ask for BIM to be installed on the institutional LMS was something along the lines of “Why would other people want to use this tool? We can’t install a tool just for one person.”

Well, one answer was that a quick Google search of the institution’s course specifications revealed 30+ 2012 courses using reflective journals of varying types. BIM is a tool designed primarily to support the use of reflective learning journals by students via individual blogs.

I was quite surprised to find 30+ courses already doing this. This generated some questions

  • How are they managing the workload and the limitations of traditional approaches?
    The origins of BIM go back to when I took over a course that was using a reflective journal assessment task. Implemented by students keeping them as Word documents and submitting at the end of semester. There were problems.
  • I wonder how many of the IT and central L&T people knew that there were 30+ courses already using this approach?
    In this context, it would be quite easy to draw the conclusion that the IT and central L&T folk are there to help people with the existing tools and to keep their own workload to a minimum by controlling what new tools are added to the mix, rather than looking for opportunities for innovation within the institution. Which leads to…
  • I wonder why the institution wasn’t already actively looking for tools to help these folk?
    Especially given that reflective learning journals (diaries etc.) are “recognised as a significant tool in promoting active learning” (Thorpe, 2004, p. 327), but at the same time they are also “demanding and time-consuming for both students and educators” (Thorpe, 2004, p. 339)

A combination of those questions/factors seems to contribute to recent findings about the workloads faced by academics in terms of e-learning (Tynan et al, 2012)

have increased both the number and type of teaching tasks undertaken by staff, with a consequent increase in their work hours

and (Bright, 2012, n.p)

Lecturers who move into the online learning environment often discover that the workload involved not only changes, but can be overwhelming as they cope with using digital technologies. Questions arise, given the dissatisfaction of lecturers with lowering morale and increasing workload, whether future expansion of this teaching component in tertiary institutions is sustainable.

How the tools provide support?

One of the problems I’m facing with BIM is that the pedagogical approach I originally used, and which drove the design of BIM, is not the pedagogical approach I’m using now. The features and functions currently in BIM don’t match what I want to do pedagogically. I’m lucky, I can change the system. But not many folk are in this boat.

And this isn’t the first time we’ve faced this problem. Reaburn et al (2009) used BIM’s predecessor in a “work integrated learning” course where the students were working in a professional context. They got by, but this pedagogical approach had yet again different requirements.

TPACK

“Technological Pedagogical Content Knowledge (TPACK) is a framework that identifies the knowledge teachers need to teach effectively with technology” (Koehler, n.d.). i.e. it identifies a range of different types of knowledge that are useful, perhaps required, for the effective use of technology in teaching and learning. While it has its detractors, I believe that TPACK can provide a useful lens for examining the problems with institutional e-learning and perhaps identify some suggestions for how institutional e-learning (and e-learning tools) can be better designed.

To start, TPACK proposes that successful e-learning (I’m going to use that as short-hand for the use of technology in learning and teaching) requires the following types of knowledge (with my very brief descriptions)

  • Technological knowledge (TK) – how to use technologies.
  • Pedagogical knowledge (PK) – how to teach.
  • Content knowledge (CK) – knowledge of what the students are meant to be learning.

Within institutional e-learning you can see this separation in organisational structures and also the assumptions of some of the folk involved. i.e.

  • Technological knowledge – is housed in the institutional IT division.
  • Pedagogical knowledge – is housed in the central L&T division.
  • Content knowledge – academics and faculties are the silos of content knowledge.

Obviously there is overlap. Most academics have some form of TK, PK and CK. But when it comes to the source of expertise around TK, it’s the IT division. etc.

TPACK proposes that there are combinations of these three types of knowledge that offer important insights

  • Pedagogical Content Knowledge (PCK) – the idea that certain types of content are best taught using certain types of pedagogy.
  • Technological Pedagogical Knowledge (TPK) – the knowledge that certain types of technologies work well with certain types of pedagogy (e.g. teaching critical analysis using a calculator probably isn’t a good combination)
  • Technological Content Knowledge (TCK) – that content areas draw on technologies in unique ways (e.g. mathematicians use certain types of technologies that aren’t used by historians)

Lastly, TPACK suggests that there is a type of knowledge in which all of the above is combined and when used effectively this is where the best examples of e-learning arise.  i.e. TPACK – Technological, Pedagogical and Content Knowledge.

The problem I see is that institutional e-learning, its tools, its processes and its organisational structures are getting in the way of allowing the generation and application of effective TPACK.

Some Implications

Running out of time, so some quick implications that I take from the above and want to explore some more. These are going to be framed mostly around my work with BIM, but there are potentially some implications for broader institutional e-learning systems which I’ll briefly touch on.

BIM’s evolution is best when I’m teaching with it

Assuming that I have the time, the best insights for the future development of BIM have arisen when I’m using BIM in my teaching. When I’m able to apply the TPACK that I have to identify ways the tool can help me. When I’m not using BIM in my teaching I don’t have the same experience.

At this very moment, however, I’m only really able to apply this TPACK because I’m running BIM on my laptop (and using a bit of data munging to bridge the gap between it and the institutional systems). This means I am able to modify BIM in response to a need, test it out and use it almost immediately. When/if I begin using BIM on the institutional version of Moodle, I won’t have this ability. At best, I might hope for the opportunity for a new version of BIM to be installed at the end of the semester.

There are reasons why institutional systems have these constraints. The problem is that these constraints get in the way of generating and applying TPACK and thus limit the quality of the institutional e-learning.

I also wonder if there’s a connection here with the adoption of Web 2.0 and other non-institutional tools by academics. i.e. do they find it easier to generate and apply TPACK with these external tools because they don’t have the same problems and constraints as the institutional e-learning tools?

BIM and multiple pedagogies

Arising from the above point is the recognition that BIM needs to be able to support multiple pedagogical approaches. i.e. the PK around reflective learning journals reveals many different pedagogical approaches. If BIM as an e-learning tool is going to effectively support these pedagogies then new forms of TPK need to be produced. i.e. BIM itself needs to know about and support the different reflective journal pedagogies.

There’s a lot of talk about how various systems are designed to support a particular pedagogical approach. However, I wonder just how many of these systems actually provide real TPK assistance? For example, the design of Moodle “is guided by a ‘social constructionist pedagogy’” but it’s pretty easy to see examples of how it’s not used that way when course sites are designed.

There are a range of reasons for this. Not the least of which is that the focus of teachers and academics creating course sites is often focused on more pragmatic tasks. But part of the problem is also, I propose, the level of TPK provided by Moodle. The level of technological support it provides for people to recognise, understand and apply that pedagogical approach.

There’s a two-edged sword here. Providing more TPK may help people adopt this approach, but it can also close off opportunities for different approaches. Scaffolding can quickly become a cage. Too much focus on a particular approach also closes off opportunities for adoption.

But on the other hand, the limited amount of specific TPK provided by the e-learning tools is, I propose, a major contributing factor to the workload issues around institutional e-learning. The tools aren’t providing enough direct support for what teachers want to achieve. So the people have to bridge the gap. They have to do more work.

BIM and distributed cognition – generating TPACK

One of the concerns raised in the committee that had to approve the adoption of BIM was about the level of support. How is the institution going to support academics who want to use BIM? The assumption being that we can’t provide the tool without some level of support and training.

This is a valid concern. But I believe there are two assumptions underpinning it which I’d like to question, and for which I’d like to explore alternatives. The assumptions are

  1. You can’t learn how to use the tool, simply by using the tool.
    If you buy a good computer/console game, you don’t need to read the instructions. Stick it in and play. The games are designed to scaffold your entry into the game. I haven’t yet met an institutional e-learning tool that can claim the same. Some of this arises, I believe, from the limited amount of TPK most tools provide. But it’s also how the tool is designed. How can BIM be designed to support this?
  2. The introduction of anything new has to be accompanied by professional development and other forms of formal support.
    This arises from the previous point, but it is also connected to a previous post titled “Professional development is created, not provided”. In part, this is because the IT folk and the central L&T folk see their job as providing professional development sessions (and some have their effectiveness measured by the number of sessions they provide or the number of helpdesk calls they process).

It’s difficult to generate TPACK

I believe that the current practices, processes and tools used by institutional e-learning systems make it difficult for the individuals and organisations involved to develop TPACK. Consequently the quality of institutional e-learning suffers. This contributes to the poor quality of most institutional e-learning, the limited adoption of features beyond content distribution and forums, and is part of the reason behind the perceptions of increasing workload around e-learning.

If this is the case, then can it be addressed? How?

References

Bright, S. (2012). eLearning lecturer workload: working smarter or working harder? In M. Brown, M. Hartnett, & T. Stewart (Eds.), ASCILITE’2012. Wellington, NZ.

Reaburn, P., Muldoon, N., & Bookallil, C. (2009). Blended spaces, work based learning and constructive alignment: Impacts on student engagement. Same places, different spaces. Proceedings ascilite Auckland 2009 (pp. 820–831). Auckland, NZ.

Thorpe, K. (2004). Reflective learning journals : From concept to practice. Reflective practice: International and Multidisciplinary Perspectives, 5(3), 327–343.

Tynan, B., Ryan, Y., Hinton, L., & Mills, L. (2012). Out of hours Final Report of the project e-Teaching leadership: planning and implementing a benefits-oriented costs model for technology-enhanced learning. Strawberry Hills, Australia.

Professional development is created, not provided

Over recent weeks I’ve been so busy that I’ve largely ignored Twitter. To my detriment. A quick return to it this afternoon found me following two links via tweets from @palbion. The two links were

  1. How effective is the professional development undertaken by teachers?, and
  2. Removing the lids of learning.

The first is a blog post outlining the many limitations of professional development as practiced in schools and many other locations (e.g. the L&T PD at Universities) and suggesting how it can be fixed to become both “useful and cost effective”. This post troubled me greatly. I agree that much of Professional Development is essentially worthless. But at least two aspects of the post troubled me.

The assumption that impact on student learning outcomes is the only true measure of the value of Professional Development worries me significantly. It’s simplistic in that it reduces the complexity of schools, teaching and teachers to a single measure. The practice of such abstraction is always going to lose something. But worse, if you focus everything on one particular measure and it becomes a target, it’s useless, i.e. Goodhart’s law: when a measure becomes a target, it ceases to be a good measure.

But what really bugged me was that the solution to the woes of Professional Development was better Professional Development. I disagree. I think you have to get rid of Professional Development and replace it with learning. i.e. the teachers (and academics) essentially have to continue learning. Here’s my provocative proposal

Professional development is mostly a solution provided by management due to flaws in the system that management preside over.

i.e. the education (or university) system – in its broadest understanding – is set up to make it difficult for the members of that system to learn and more importantly make changes based on what they learn.

The post actually makes the point itself when it says

Fortunately there have been a raft of reports (e.g. from EPPI and from Ofsted, among many others) that tell us exactly what to look for, and the good news is that great teacher learning is a remarkably similar beast to the great pupil learning.

Slide 19 of the Removing the lids of learning presentation by Dean Shareski contains the following quote from Stephen Downes

We need to move beyond the idea that an education is something that is provided for us, and toward the idea that an education is something that we create for ourselves.

I suggest that you can replace “education” with “professional development” and as a result you identify the solution to the problem of Professional Development.

Understanding management students’ reflective practice through blogging

The following is a summary and perhaps some reflection upon Osman and Koh (2013). It’s part of the thinking and reading behind the re-design of the ICTs and pedagogy course I help teach to pre-service teachers.

Abstract

65 business (MBA/Egyptian) students participated in collaborative blogging over 5 weeks. Analysis (content analysis for critical thinking and theory/practice links) supports the potential of blogs as a tool “for reflection and learning in practitioner-oriented courses”. Implications for the design of blogging tasks are discussed.

Thoughts and todo

Provides some empirical evidence for the use of blogs for reflection and connecting theory and practice. Though the findings are generally what people would expect.

The task for these students was somewhat like a forced connection. You must post 1 topic and comment on two others. I wonder whether, if this had been more open/flexible/student controlled, more contributions would have arisen? Perhaps only if appropriate support/connections were made.

To do

  • Look at Ho and Richards (1993) for a framework specific to journals of student teachers.
  • Look at framework of Greenlaw and DeLoach (2003) and also Osman & Duffy (2010)
  • Look at Osman & Duffy (2010) for the idea that theories are not actively taken up by students and remain detached areas of knowledge, not integrated into decision-making.
  • Loving et al (2007) another framework for evaluating evaluation.

Introduction

Problems in practitioner courses in combining theory and practice. Need to encourage reflection etc.

Reflection has some plusses. Uses Moon’s (1999, p. 155) definition

a mental process with purpose and/or outcome in which manipulation of meaning is applied to relatively complicated or unstructured ideas in learning or to problems for which there is no obvious solution

Blogs are a recent tool for this, benefits of blogs from the literature are listed and referenced

  • empowering students by giving a voice and venue for self-expression.
  • increasing sense of ownership, engagement and interest in learning.
  • may facilitate enriched opportunities for communication, challenge, cognitive conflict, deeper thinking and knowledge construction.

But there is a scarcity of studies that investigate this “empirically”. Many rely on self-report data or anecdotal evidence. Few studies critically examine the quality of students’ reflection, especially in management education. Few provide explanations, and this limits the guidelines that support the design of blogging tasks to facilitate reflection and learning.

Literature review

Starts with references to the problem MBA programs have of combining academic rigor and practical application. A problem that teaching programs have had for a long time. Critical reflection is seen as a way to bridge this.

The rest is broken into the following sections

  1. Blogging and reflection.
    Individual journals are a common approach. Privacy provides a sheltered place but limits sharing/collaboration etc. Blogging provides some affordances that address this. The value is accepted by enthusiasts, but there has been limited analysis. Some studies are mentioned. A few using coding frameworks are mentioned. One shows blogging has a positive impact on reflection, but peer comments have a negative impact.
  2. Critical thinking through blogging.
    Critical thinking defined as “development of a habit of continuous reflection and questioning”. Few studies of blogging looking at critical thinking.
  3. Fostering theory-practice linkages in management education.
    Explains the use of Kolb’s learning cycle in this study.

Research questions

  1. How critically evaluative were the reflections of graduate business students when they engaged in blogging?
  2. In their reflections, to what extent did these students link theory and practice? What phases of Kolb’s experiential learning cycle did these students focus on?

Methods

Students blogged for the last 5 weeks of a 10-week term. The task was worth 20% of assessment. Guidelines were kept to a minimum. Graded on the quantity and quality of postings. Students were introduced to reflective practice and Kolb’s cycle beforehand.

Blogging groups (max 8) were self-assigned and access to the blogs was limited to the group. During the first week the instructor moderated blog posts. This was discontinued as a threat to student ownership.

Students had the opportunity to opt out of having their blog contributions analysed for the research. 54 provided signed consent.

Blog archives coded by two independent coders.

Results

The assessment task required students to initiate one topic and comment on two posts submitted by other groups per week. The 65 students were thus expected to make 325 topic posts and 650 comments. In the end there were 144 topics and 399 comments. Students only posted 543 times, about 44% less than the 975 contributions anticipated.

RQ #1 – how critically evaluative were posts

A peak at simplistic alternatives/argument (30%) and basic analysis (26%). About 14% theoretical inference i.e. building arguments around theory. Apparently this was the expected level.

No significant differences between posts and comments.

RQ #2 – To what extent did students link theory and practice

Focused on higher level critical thinking posts. Used a “Kolb-based framework”.

Significant differences between posts in types of reflection. Students seemed more comfortable considering theory with experience, observation or experimentation.

Discussion

Results support use of blogging as a tool to encourage reflection. Mmm, not sure it’s innate to the technology, though the affordance is there.

Few posts were off task – but I think that’s probably a result of students asking those questions in other areas. The authors compare this with content-oriented posts in discussion forums only being 40–50% of posts. Again, possibly the design of blogs in this context suggests it’s not the place to raise non-content questions. The authors do point out that this was a blended context, while the discussion forum references were totally online. And they pick up my point.

Surprising level of students reflecting on their learning via blogs. Mostly positive, but a prominent concern was a desire for feedback, especially from the instructor. Suggests some reasons: novelty of reflection requiring reassurance; a by-product of culture.

Some suggestions around student confusion because of reasons from the literature: what to write in a blog post, low self-efficacy re: the worthiness of their contribution; difficulty generating topics.

In this study students wanted instructor to post discussion questions. i.e. the instructor needs to be more active in scaffolding struggling students.

Guidelines for designing blogging tasks

The article closes with the following list of guidelines (p. 30)

  1. Explain the importance of reflection as a vehicle for learning and continued professional development.
  2. Provide different forms of scaffolding. Many students are new to reflection and critical thinking as a more formal activity. In addition to giving them a framework and guidelines to inform their reflections, examples that illustrate quality reflection and critical thinking might be necessary. Students in this context seemed to especially need help with building theory based arguments, evaluating theories, and addressing ethical concerns for business issues.
  3. Give prompts to encourage reflections. Some students are often apprehensive about initiating reflections.
  4. Promote reflection and critical thinking over longer durations. A reflection task that extends for part of the semester might not be sufficient to adequately develop students’ reflective and critical thinking skills.
  5. Relate students’ reflections to class topic so that students see the value of reflection as an integral and legitimate ingredient of learning.
  6. Provide technical orientation at the beginning of the session. Although we assume that our students are tech savvy, they might not be.

Nothing too surprising there, it’s what I’ve done in the past and will aim to do next year.

References

Osman, G., & Koh, J. H. L. (2013). Understanding management students’ reflective practice through blogging. The Internet and Higher Education, 16, 23–31. doi:10.1016/j.iheduc.2012.07.001

Ho, B., & Richards, J. C. (1993). Reflective thinking through teacher journal writing: Myths and realities. Prospect, 8, 7–24.

Greenlaw, S. A., & DeLoach, S. B. (2003). Teaching critical thinking with electronic discussion. The Journal of Economic Education, 34(1), 36–52.

Loving, C. C., Schroeder, C., Kang, R., Shimek, C., & Herbert, B. (2007). Blogs: Enhancing links in a professional learning community of science and mathematics teachers. Contemporary Issues in Technology and Teacher Education, 7(3), 178–198.

Osman, G., & Duffy, T. (2010). Scaffolding critical discourse in online problem-based scenarios: The role of articulation and evaluative feedback. In M. B. Nunes, & M. McPherson (Eds.), IADIS International Conference e-Learning 2010: Vol 1 (pp. 156–160). International Association for Development of the Information Society.

Can/will learning analytics challenge the current QA mentality of university teaching

Ensuring the quality of the student learning experience has become an increasingly important task for Australian universities. Experience over the last 10 years and some recent reading suggests there are some limitations to how this is currently being done. New innovations/fashions like learning analytics appear likely to reinforce these limitations, rather than actually make significant progress. I’m wondering whether the current paradigm/mindset that underpins university quality assurance (QA) processes can be challenged by learning analytics.

The black box approach to QA

In their presentation at ascilite’2012, Melinda Lewis and Jason Lodge included the following slide.

[Slide from Lodge & Lewis’s ascilite’2012 presentation]

The point I took from this image and the associated discussion was that the Quality Assurance approach used by universities treats the students as a black box. I’d go a step further and suggest that it is the course (or unit, or subject) as the aggregation of student opinion, satisfaction and results that is treated as the black box.

For example, I know of an academic organisational unit (faculty, school, department, not sure what it’s currently called) that provides additional funding to the teaching staff of a course if they achieve a certain minimum response rate on end of term course evaluations and exceed a particular mean level of response on 3 Likert scale questions. The quality of the course, and subsequent reward, is being based on a hugely flawed measure of the quality. A measure of quality that doesn’t care or know what happens within a course, just what students say at the end of the course. Grade distribution (i.e. you don’t have too many fails or too many top results) is the other black box measure.

If you perform particularly badly on these indicators then you and your course will be scheduled for revision. A situation where a bunch of experts work with you to redesign the course curriculum, learning experiences etc. To help you produce the brand new, perfect black course box. These experts will have no knowledge of what went on in prior offerings of the course and they will disappear long before the course is offered again.

Increasingly institutions are expected to be able to demonstrate that they are paying attention to the quality of the student learning experience. This pressure has led to the creation of organisational structures, institutional leaders and experts, policies and processes that all enshrine this black box approach to QA. It creates a paradigm, a certain way of looking at the world that de-values alternatives. It creates inertia.

Learning analytics reinforcing the black box

Lodge and Lewis (2012, p. 561) suggest

The real power and potential of learning analytics is not just to save “at risk” students but also to lead to tangible improvements in the quality of the student learning experience.


The problem is that almost every university in Australia is currently embarking on a Learning Analytics project. Almost without exception, those projects have “at risk” students as their focus. Attrition and retention are the focus. Some of these projects have multi-million dollar budgets. Given changing funding models and the Australian Government’s push to increase the diversity and percentage of Australians with higher education qualifications, this focus is not surprising.

It’s also not surprising that many of these projects appear to be reinforcing the current black box approach to quality assurance. Data warehouses are being built to enable people and divisions not directly involved with actually teaching the courses to identify “at risk” students and implement policies and processes that keep them around.

At best, these projects will not impact on the actual learning experience. The interventions will occur outside of the course context. At worst, these projects will negatively impact the learning experience as already overworked teaching staff are made to jump through additional hoops to respond to the insights gained by the “at risk” learning analytics.

How to change this?

The argument we put forward in a recent presentation was that the institutional implementation of learning analytics needs to focus on “doing it with academics/students” rather than on doing it “for” and “to” academics/students. The argument here is that the “for” and “to” paths for learning analytics continue the tradition of treating the course as a black box. On the other hand, the “with” path requires direct engagement with academics within the course context to explore and learn how and with what impacts learning analytics can help improve the quality of the student learning experience.

In the presentation Trigwell’s (2001) model of factors that impact upon the learning of a student was used to illustrate the difference. The following is a representation of that model.

[Image: Trigwell’s model of teaching]

Do it to the academics/students

In terms of learning analytics, this path will involve some people within the institution developing some systems, processes and policies that identify problems and define how those problems are to be addressed. For example, a data warehouse and its dashboards will highlight those students at risk. Another group at the institution will contact the students or perhaps their teachers. i.e. there will be changes at the institutional context level that essentially bypass the thinking and planning of the teacher and go direct to the teaching context. It’s done to them.

[Diagram: doing it to the academics/students]

The course level is largely ignored and if it is considered then courses are treated as black boxes.

Do it for the academics/students

In this model a group – perhaps the IT division or the central L&T folk – will make changes to the context by selecting some tools for the LMS, some dashboards in the data warehouse etc. that are deemed to be useful for the academics and students. They might even run some professional development activities, perhaps even invite a big name in the field to come and give a talk about learning analytics and learning design. i.e. the changes are done for the academics/students in the hope that this will change their thinking and planning.

[Diagram: doing it for the academics/students]

The trouble is that this approach is typically informed by a rose-coloured view of how teaching/learning occurs in a course (e.g. very, very few academics actively engage in learning design in developing their courses); ignores the diversity of academics, students and learning; and forgets that we don’t really know how learning analytics can be used to understand student learning and how we might intervene.

The course is still treated as a black box.

Do it with the academics/students

[Diagram: doing it with the academics/students]

In this model, a group of people (including academics/students) work together to explore and learn how learning analytics can be applied. It starts with the situated context and looks for ways in which what we know can be harnessed effectively by academics within that context. It assumes that we don’t currently know how to do this and that by working within the specifics of the course context we can learn how and identify interesting directions.

The course is treated as an open box.

This is the approach which our failed OLT application was trying to engage in. We’re thinking about going around again, if you’re interested then let me know.

The challenge of analytics to strategy

This post was actually sparked today by reading this article titled “Does analytics make us smart or stupid?” in which someone from an analytics vendor uses McLuhan’s Tetrad to analyse the possible changes that arise from analytics. In particular, it was this proposition

With access to comprehensive data sets and an ability to leave no stone unturned, execution becomes the most troublesome business uncertainty. Successful adaptation to changing conditions will drive competitive advantage more than superior planning. While not disappearing altogether, strategy is likely to combine with execution to become a single business function.

This seems to resonate with the idea that perhaps the black box approach to the course might be challenged by learning analytics. The “to” and “for” paths are much more closely tied with traditional views of QA, which are in turn largely based on the idea of strategy and top-down management practices. Perhaps learning analytics can be the spark that turns this QA approach away from the black box toward one focused more on execution, on what happens within the course.

I’m not holding my breath.

References

Lodge, J., & Lewis, M. (2012). Pigeon pecks and mouse clicks : Putting the learning back into learning analytics. In M. Brown, M. Hartnett, & T. Stewart (Eds.), Future challenges, sustainable futures. Proceedings ascilite Wellington 2012 (pp. 560–564). Wellington, NZ.

Trigwell, K. (2001). Judging university teaching. The International Journal for Academic Development, 6(1), 65–73.