Missing affordances – A partial explanation for the quality of University L&T

A Friday afternoon rant/story illustrating what I see as a fatal flaw in institutional University L&T systems (at least those I’ve experienced in Australia). This flaw helps explain why the quality of L&T – especially e-learning – leaves something to be desired.

Evaluating teaching and learning

I’m in the midst of thinking about the main course I’m teaching this year. As part of that I’m trying to take a serious look at the responses from last year’s students on the standard course evaluation survey.

Doing this is actually part of my responsibilities as laid out in institutional policy (emphasis added)

All staff who are employed on a continuing (full-time or fractional), fixed-term or casual basis and who have a substantial involvement in teaching are required to:

  1. Evaluate their teaching using the SELT or SERE instrument, as appropriate. In addition, staff may choose to use other, optional instruments such as PELT, short, open-ended written responses, meetings with student representative groups and nonstandard questionnaires.
  2. Reflect on evaluation feedback and, where necessary, determine, implement and communicate to students a timely response that is consistent with the continuous improvement of teaching.

Now there are some widely known limitations to the type of data arising from these surveys. At the same time, there are techniques and strategies that can help address these limitations somewhat. The simplest is sorting comments by respondent (Lewis, 2001); a step beyond that is Wongsurawat’s (2011) approach of using each respondent’s correlation with the mean class ratings. Beyond this, I’m certain the quantitative researchers at the institution could come up with a range of analyses that might be beneficial.
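To make the Wongsurawat-style idea concrete, here’s a minimal sketch (not his exact method) that scores each respondent by how closely their closed-question ratings track the class means – high scorers are the more “typical” voices, which gives one way to order the open comments. Everything here is assumed: the file name, the respondent-ID column and the question columns are hypothetical placeholders for whatever a raw export would actually contain.

```python
# A minimal sketch, assuming a hypothetical CSV export where each row is one
# respondent: an ID column plus one numeric column per closed question.
import pandas as pd


def respondent_alignment(survey: pd.DataFrame, id_col: str = "respondent_id") -> pd.Series:
    """Pearson correlation of each respondent's ratings with the class means."""
    ratings = survey.set_index(id_col)   # rows: respondents, cols: questions
    class_means = ratings.mean(axis=0)   # mean rating per question
    # Correlate each respondent's rating profile with the class-mean profile
    return ratings.apply(lambda row: row.corr(class_means), axis=1)


if __name__ == "__main__":
    survey = pd.read_csv("selt_responses.csv")  # hypothetical raw export
    alignment = respondent_alignment(survey).sort_values(ascending=False)
    print(alignment)  # read open comments in order of how "typical" each respondent is
```

Nothing sophisticated – the point is that this sort of analysis is a few lines of code, once the raw data can actually leave the web interface.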

What does the institution provide?

A simple web interface that provides tables of statistical data and bar graphs for the closed questions, and lists of responses for the open questions. We can choose to see just the closed questions, just the open questions, or both. There’s no apparent way in the interface to get the raw data, so no easy way of importing it into an analysis tool. I can generate a PDF file with the data.

My institution is not alone, as illustrated by this tweet from @s_palm:

I’m required to do this reflection by institutional policy. It’s also a good thing to do, but the institutional ecosystem does a poor job of enabling me to do this effectively.

Affordances

In writing about cognitive artefacts, Norman sums up the problem

…no surprise that those things that the affordances make easy are apt to get done, those things that the affordances make difficult are not apt to get done (Norman, 1993, p. 106)

i.e. standard human behaviour.

If Norman’s argument holds, what does it mean for the institutional requirement to reflect on evaluation feedback when the affordances offered by the institution make this difficult to do? What does this suggest about the impact on the quality of teaching and learning at the institution?

And this isn’t the only situation where there are limited affordances. For example, I can’t easily find the final grade breakdown for students in courses I’ve taught. There’s no way I or my students can search a course website. Apparently the alignment of course learning outcomes with learning content, activities and assessment is good practice, yet what affordances are built into institutional systems to encourage continuous consideration of alignment (i.e. not accreditation-induced mapping projects)?

What might happen if what is deemed important for quality learning and teaching was made easy? What might that do for the quality of learning and teaching at an institution?

Ignorance and the big picture

I’m aware that my current institution has expended significant resources on the design of the institutional course evaluation policy and instruments. It does appear, though, that this effort has starved focus and resources from ensuring that there are appropriate affordances in place to make it easy for staff to fulfil institutional policy. I wonder if the small-picture question of affordances is even visible to those thinking about the institutional big picture.

I often think that decision makers taking the “big picture” view means they are completely ignorant of these smaller-level details.

Be careful what you wish for

Of course, the only thing worse than this affordance not being provided by the institutional infrastructure is the likely affordance that would result from the institution’s attempt to address this problem.

The best solution may not be to purchase a you-beaut enterprise system that comes with all the bells and whistles. It might be simply to provide an export option for the raw data, allowing academics to leverage their respective experience and skills with a range of analysis tools. In addition, it might be a good idea to provide a simple mechanism (integrated into the export function) by which people can share the analyses they do.

Solve the cause, not the symptom

And of course, if (heaven forbid) someone from the “big picture” crew in the institution got wind of this blog post, the most likely outcome would be a focus entirely on student evaluation, all the while missing the fundamental cause: that the structure, policies and practices of the institution are incapable of paying attention to affordances, let alone doing something about them.

Biggs and the reflective institution

There are echoes of this in Biggs’ (2001) idea of the reflective institution and the notion of Quality Feasibility

What can be done to remove the impediments to quality teaching? This is a question that institutions rarely ask, although individual expert teachers continually do. It is one thing to have a model for good teaching, but if there are institutional policies or structures that actually prevent the model from operating effectively, then they need to be removed. (Biggs, 2001, p. 223)

References

Biggs, J. (2001). The Reflective Institution: Assuring and Enhancing the Quality of Teaching and Learning. Higher Education, 41(3), 221–238.

Lewis, K. G. (2001). Making Sense of Student Written Comments. New Directions for Teaching and Learning, 2001(87), 25–32.

Wongsurawat, W. (2011). What’s a comment worth? How to better understand student evaluations of teaching. Quality Assurance in Education, 19(1), 67–83. doi:10.1108/09684881111107762

3 thoughts on “Missing affordances – A partial explanation for the quality of University L&T”

    1. On a similar note, up until last year sometime, we could apply for bonus payments based on student evaluations. If we got a response rate of x% and an average response of y% above some arbitrary number, we would get z hundreds of dollars.

      Putting aside the implications of such an approach, what makes it worse is that they couldn’t even do this efficiently. The institutional ecosystem is so limited that the teaching staff had to gather the evidence and make an application for the money, when it would’ve taken a semi-decent programmer with access to the data half a day (or less) to write a query/script that would automatically generate a list of “winners” (see the sketch after these comments).

  2. Pingback: Evaluating EDC3100 in 2013 – step 1 | The Weblog of (a) David Jones
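As a rough illustration of the first commenter’s point, here is a minimal sketch of the sort of query/script described above: filter for the courses that clear both thresholds. The file name, column names and threshold values are all hypothetical stand-ins; the actual x, y and z were never specified.

```python
# A minimal sketch, assuming a hypothetical per-course CSV with enrolment,
# response counts and mean evaluation ratings. All thresholds are placeholders.
import pandas as pd

MIN_RESPONSE_RATE = 0.50  # stand-in for the unspecified "x%"
MIN_MEAN_RATING = 4.0     # stand-in for "y above some arbitrary number"

courses = pd.read_csv("course_evaluations.csv")  # hypothetical data export
courses["response_rate"] = courses["responses"] / courses["enrolled"]

# "Winners" are courses clearing both the response-rate and rating thresholds
winners = courses[(courses["response_rate"] >= MIN_RESPONSE_RATE)
                  & (courses["mean_rating"] >= MIN_MEAN_RATING)]
print(winners[["course_code", "staff", "response_rate", "mean_rating"]])
```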
