What should be covered in EDC3100?

The following is the next step in thinking about the course I teach. The rough process and background are available in the first post. The first post and the most recent in this series considered student feedback. This post moves on to thinking about the course itself, what it covers, and whether that’s any good.

In the end this post is perhaps more of a description of the current state of play, some newish requirements and some ad hoc observations. More thinking required.

Considerations and assumptions

Some of the underpinnings of this analysis include

Current structure and content

The aim here is to document (and refresh my memory of) the current structure and content of the course, and then to identify issues with the current situation in the next section.

The 15 week course is currently divided into four distinct parts

  • Module 1 – What and why? (Weeks 1-3)

    Tries to encourage the students to look at questions like: What are ICTs? What does ICTs and Pedagogy mean? What does it look like? Why should you use ICTs in your learning and teaching? What might it mean for your practice?

    A major focus is on students increasing their use of ICTs for learning, both Uni and professional.

    Current weekly themes are:

    1. ICTs, PLNs and You
    2. ICTs and Pedagogy: Why and what?
    3. Building your TPACK.

      Introduce TPACK/SAMR models and the ICT general capability. Encourage students to start thinking about how they might use ICTs in their teaching.

  • Module 2 – How do you do it? (Weeks 4, 5 and 8)

    Currently the aim is to “make you effective planners of units of work that have effectively integrated ICTs to enhance student learning”.

    Current weekly themes are:

    1. Effective planning – a first step.
    2. Develop your learning plan.
    3. Finishing your UoW.
  • Module 3 – Issues and implementation (Weeks 9-11 and 15)

    The aim here is to focus explicitly on activities and resources that prepare the students for using ICTs for student learning while on Professional Experience.

    Current weekly themes are:

    1. Professional experience expectations and design.

      Outline requirements/expectations for Professional Experience and how students may integrate ICTs into their teaching (i.e. a bit more focus on lesson planning and implementation).

    2. Digital citizenship.

      Revisiting and expanding on the broad topic of digital citizenship.

    3. Interactive white boards

      As the title suggests, learning about using IWBs. By this stage, students should know where they’re going for PE and what type of IWB they’ll have there.

    4. What happened and what comes next.

      This is a combination of reflection on Professional Experience and on the course, and thinking about the future of ICTs and learning/teaching at both personal and broader scales.

  • Professional experience (Weeks 12-14).

    Where the students get to put what they learned into practice.

What’s required?

AITSL standards

The obvious requirements are these standards

  • 2.6 – Information and Communication Technology (ICT) – Implement teaching strategies for using ICT to expand curriculum learning opportunities for students.

    Demonstrate the ability to use a range of digital resources and tools in ways that enable deep engagement with curriculum and support a range of approaches to learning

  • 3.4 – Select and use resources – Demonstrate knowledge of a range of resources, including ICT, that engage students in their learning.

    Demonstrate knowledge of the use of digital resources and tools to support students in locating, analysing, evaluating and processing information when engaged in learning.

  • 4.5 – Use ICT safely, responsibly and ethically – Demonstrate an understanding of the relevant issues and the strategies available to support the safe, responsible and ethical use of ICT in learning and teaching.

    Demonstrate understanding of safe, legal and ethical use of digital resources
    and tools, including cyber safety practices, respect for copyright, intellectual
    property, and the appropriate documentation of sources.

The remaining standards can perhaps all apply at varying levels depending on what the students do. Some that are particularly relevant to the current structure of EDC3100 include

  • 3.2 – Plan, structure and sequence learning programs – Plan lesson sequences using knowledge of student learning, content and effective teaching strategies.

    Select and sequence digital resources and tools in ways that demonstrate knowledge and understanding of how these can support learning of the content of specific teaching areas and effective teaching strategies.

  • 6.2 – Engage in professional learning and improve practice – Understand the relevant and appropriate sources of professional learning for teachers.

    Understand how to improve professional practice in the effective use of digital resources and tools through means including evaluation and reflection on current research and professional practice on a regular basis, and collaboration with colleagues both nationally and internationally through participation in online learning communities.

  • 7.2 – Comply with legislative, administrative and organisational requirements – Understand the relevant legislative, administrative and organisational policies and processes required for teachers according to school stage.

    Not applicable, apparently.

  • 7.4 – Engage with professional teaching networks and broader communities – Understand the role of external professionals and community representatives in broadening teachers’ professional knowledge and practice.

    Understand the range of opportunities for sharing and enhancing professional practice available through online communication with experts and community representatives, and contribution to professional and community sites, online discussions and forums.

The ICT statements from AITSL have also been added to the appropriate standards above.

Drawing on the ICT statement, the following standards become interesting, or at least stuff we might be expected to touch upon. I’ve emphasised a couple that probably already apply.

  • 1.1 – Physical, social and intellectual development and characteristics of students – Demonstrate knowledge and understanding of physical, social and intellectual development and characteristics of students and how these may affect learning.

    Demonstrate knowledge and understanding of ways that students’ ICT use can influence their physical, social and intellectual development and how this may affect the students’ engagement and learning.

  • 1.2 – Understand how students learn – Demonstrate knowledge and understanding of research into how students learn and the implications for teaching. Assignments 2 and 3

    Demonstrate knowledge and understanding of research into how student engagement and learning can be enhanced through the use of digital resources and tools.

  • 1.3 – Students with diverse linguistic, cultural, religious and socioeconomic backgrounds – Demonstrate knowledge of teaching strategies that are responsive to the learning strengths and needs of students from diverse linguistic, cultural, religious and socioeconomic backgrounds. Assignments 2 and 3

    Demonstrate the ability to match digital resources and tools with teaching strategies in ways that are responsive to students’ diverse backgrounds.

  • 1.5 – Differentiate teaching to meet the specific learning needs of students across the full range of abilities – Demonstrate knowledge and understanding of strategies for differentiating teaching to meet the specific learning needs of students across the full range of abilities. Assignments 2, 3

    Select and use specific digital resources and tools that are matched to teaching strategies designed to meet students’ individual and diverse learning needs.

  • 1.6 – Strategies to support full participation of students with disability – Demonstrate broad knowledge and understanding of legislative requirements and teaching strategies that support participation and learning of students with disability.

    Demonstrate knowledge and understanding of digital resources and tools, including adaptive and assistive technologies, that can support the participation and learning of students with disability.

  • 2.1 – Content and teaching strategies of the teaching area – Demonstrate knowledge and understanding of the concepts, substance and structure of the content and teaching strategies of the teaching area. use of TPACK in places

    Demonstrate knowledge and understanding of ways that the use of digital resources and tools can complement teaching strategies of specific teaching areas.

  • 2.2 – Content selection and organisation – Organise content into an effective learning and teaching sequence. Assignment 2 and PE

    Demonstrate the ability to select and organise appropriate digital content in relation to relevant curriculum.

  • 2.3 – Curriculum, assessment and reporting – Use curriculum, assessment and reporting knowledge to design learning sequences and lesson plans. Assignment 2

    Demonstrate the ability to use digital resources and tools when devising learning sequences and lesson plans designed to meet curriculum, assessment and reporting requirements.

  • 2.4 – Understand and respect Aboriginal and Torres Strait Islander people to promote reconciliation between Indigenous and non-Indigenous Australians – Demonstrate broad knowledge of, understanding of and respect for Aboriginal and Torres Strait Islander histories, cultures and languages.

    Demonstrate broad knowledge and understanding of how digital resources and tools can be used to promote understanding and respect for Aboriginal and Torres Strait Islander histories, cultures and societies.

  • 2.5 – Literacy and numeracy strategies – Know and understand literacy and numeracy teaching strategies and their application in teaching areas.

    Know and understand how teaching and learning with technologies can enable, support and enhance literacy and numeracy development.

  • 3.1 – Establish challenging learning goals – Set learning goals that provide achievable challenges for students of varying abilities and characteristics.

    Demonstrate how to set goals that include the use of digital resources and tools to support differentiated approaches to teaching and learning.

  • 3.3 – Use teaching strategies – Include a range of teaching strategies.

    Demonstrate knowledge and understanding of how to support a range of teaching strategies through the use of digital resources and tools. These ways may include the promotion of creative and innovative thinking and inventiveness, engagement of students by exploring real world issues and solving authentic problems, the promotion of student reflection and promotion of collaborative knowledge construction.

  • 3.4 – Select and use resources – Demonstrate knowledge of a range of resources, including ICT, that engage students in their learning.

    Demonstrate knowledge of the use of digital resources and tools to support students in locating, analysing, evaluating and processing information when engaged in learning.

  • 3.5 – Use effective classroom communication – Demonstrate a range of verbal and non-verbal communication strategies to support student engagement.

    Use a range of digital resources and tools to support effective communication of relevant information and ideas, taking into account individual students’ learning needs and backgrounds, the learning context, and teaching area content.

  • 3.6 – Evaluate and improve teaching programs – Demonstrate broad knowledge of strategies that can be used to evaluate teaching programs to improve student learning.

    Demonstrate the capacity to assess the impact of digital resources and tools on students’ engagement and learning when adapting and modifying teaching programs.

  • 3.7 – Engage parents/carers in the educative process – Describe a broad range of strategies for involving parents/carers in the educative process.

    Describe how digital resources and tools can support innovative ways of communicating and collaborating with parents/carers to engage them in their children’s learning.

  • 4.1 – Support student participation – Identify strategies to support inclusive student participation and engagement in classroom activities.

    Identify strategies that address the diverse needs of learners through learner-centred approaches that are supported by selection and sequencing of available digital resources and tools.

  • 4.2 – Manage classroom activities – Demonstrate the capacity to organise classroom activities and provide clear directions.

    Demonstrate the capacity to provide clear directions and manage student access to digital resources and tools to support student engagement and learning.

  • 4.3 – Manage challenging behaviour – Demonstrate knowledge of practical approaches to manage challenging behaviour.

    Demonstrate knowledge of practical approaches for encouraging responsible social interactions and make use of digital resources and tools, as appropriate to the needs, backgrounds and interests of students, when managing challenging behaviours.

  • 4.4 – Maintain student safety – Describe strategies that support students’ well-being and safety working within school and/or system, curriculum and legislative requirements.

    Demonstrate understanding of risks to students’ well-being and safety while using digital resources and tools. Demonstrate understanding of practices and tools to mitigate these risks.

  • 5.1 – Assess student learning – Demonstrate understanding of assessment strategies, including informal and formal, diagnostic, formative and summative approaches to assess student learning.

    Demonstrate understanding of the educative value of providing students with multiple and varied diagnostic, formative and summative assessments and the application of digital resources and tools in facilitating a range of approaches to assessment.

  • 5.3 – Make consistent and comparable judgements – Demonstrate understanding of assessment moderation and its application to support consistent and comparable judgements of student learning.

    Demonstrate knowledge and understanding of the ways that digital resources and tools can be used to enhance the validity, reliability and efficiency of approaches to assessment and evaluation.

  • 5.4 – Interpret student data – Demonstrate the capacity to interpret student assessment data to evaluate student learning and modify teaching practice.

    Demonstrate the capacity to use digital tools for recording, managing and analysing student assessment data to inform future practice.

  • 5.5 – Report on student achievement – Demonstrate understanding of a range of strategies for reporting to students and parents/carers and the purpose of keeping accurate and reliable records of student achievement.

    Demonstrate knowledge and understanding of current and potential use of digital resources and tools to support reporting to students and parents/carers and for achievement record keeping.

  • 6.1 – Identify and plan professional learning needs – Demonstrate an understanding of the role of the National Professional Standards for Teachers in identifying professional learning needs.

    Demonstrate an ability to use the ICT Statements of the National Professional Standards for Teachers to identify personal goals for professional development.

  • 7.1 – Meet professional ethics and responsibilities – Understand and apply the key principles described in codes of ethics and conduct for the teaching profession.

    Understand and apply ethical and professional practice principles when using digital resources and tools for teaching and learning.

  • 7.3 – Engage with the parents/carers – Understand strategies for working effectively,
    sensitively and confidentially with parents/carers.

    Understand how to use digital resources and tools for communicating effectively, ethically, sensitively and confidentially with parents/carers.

Obviously not all of this should be covered in EDC3100. But just doing the copy and paste into the above (a mindless task) has generated an obvious idea: many of the above appear to be based on common technical knowledge for gathering, storing, analysing and manipulating information using ICTs. At the same time there is common content around topics such as cybersafety/digital citizenship. A bit of coding using NVivo might help with this: identify the common categories, allow better evaluation of the content of the course, and help with communication to the students.

Learning objectives

The competition

EDUC261: Information and Communication Technologies and Education – Macquarie University

  • “..ways in which ICT is changing education…how to successfully select and apply learning technologies to achieve intended learning outcomes; the new literacies that educational technologies create; and appropriate pedagogies for the contemporary global classroom…”
  • Learning outcomes – I like these better than those for the course I teach.
    1. Perform basic contemporary ICT related tasks using computer software/hardware and the internet (for instance, creating accounts, searching for information, uploading files, posting data)
    2. Describe a range of contemporary ICTs and critically evaluate their potentials for educational purposes
    3. Develop ICT-based learning designs based on appropriate selection and use of contemporary educational technologies
    4. Critically evaluate and justify technology selection and design decisions with reference to current scholarly commentary, research and theory in pedagogy regarding ICTs in educational contexts
    5. Articulate strategies for classroom management that promote inclusive education
    6. Explain in a broad sense how ICTs impact on our social, cultural and educational lives
    7. Model positive attitudes and social behaviours relating to the integration of ICTs within teaching and learning, including effective participation in groupwork processes.
  • Assessment is interesting –
    1. Create a wiki page explaining a particular ICT and its potential applications to learning.
    2. Design a LAMS sequence and explain the choices.
    3. As a group develop a module of work in Moodle.
    4. In class exam – essay on a topic.
  • Content much more focused on learning using “technology x” (e.g. virtual worlds, games) – different approach than EDC3100.

Misc thoughts on problems, requirements and possible changes

Various thoughts/comments that arose while preparing the above

  • The “questions” used as titles for the modules need to be improved.

    e.g. Module 3’s title actually needs to be a question. But the other two need to help the students grasp the point of each module.

  • Timing of assignment 2 – needs to be due a week later.
  • The focus on unit planning in Module 2, rather than perhaps on lots more detail on planning ICT use within lessons.
  • The absence of the “explore your own idea” approach from Sem 1, 2013.
  • Is the IWB week too specific on IWB? Is there scope for using this very concrete approach to illustrate some broader points?
  • Add some discussion of the course evaluations to the orientation week.
  • Perhaps the week 5 title/focus needs to be more explicitly focused on learning experiences.
  • Week 9 – Professional Experience prep needs to draw more on the experience of 2013 students. Encourage students to start working on Assignment 3 in terms of planning what to ask etc.
  • Could the assignment 1 artefact be changed to something helping communicate with parents?
  • The idea of ICTs as mindtools.
  • The requirement that PSTs have some capacity for data analysis and perhaps link to something like this
  • More obvious encouragement for students to develop a portfolio of evidence against the AITSL standards and use this more broadly in the course.
  • Reflect more about the potential of resources/ideas like this linked to the standards, not to mention the ICT statements for the graduate standards
  • Enhance the coverage of basic classroom management topics in Module 3 – link to #4.2

A #moodle course site wide “macro” facility?

UPDATE (Feb 2015): Have implemented a version of this using a simple JavaScript approach. Not quite course-site wide, but functional.

In the process of – like an increasing number of Oz academics – updating a course site for a new semester. I’m fairly happy with the site as it stands, and the Moodle copy process does a good job of updating links (i.e. the link to assignment 1 now points to the assignment in the new course site, not the old). The trouble is that the process doesn’t do a good job with dates and other values that may change from semester to semester.

So I’m wondering

  1. How one might implement a course site wide macro facility?
  2. Alternatively, what existing feature of Moodle am I unaware of?

The problem

Throughout the course site there are a range of labels that will exist each semester, but which may have slightly different values. Some examples include:

  • Due dates for assessment.

    Assignment 1 tends to be due at the start of week 5, but week 5 has a different date each semester.

  • Weekly titles.

    The course is structured by week. Each week has a particular question as its title. I like to tweak these titles based on what works (or doesn’t).

Since these “labels” are used in multiple places in the course site, if a label’s value changes I have to find all of its uses and change them manually.

One solution type – a macro facility?

I’d find it useful if there was a central macro facility where I could define a range of variables and their values. For example

  • ASSIGNMENT_1_DUE_DATE = 31st March – start of week 5
  • WEEK_10_TITLE = “Digital citizenship”

On the course site, I would then use the variable (WEEK_10_TITLE), rather than the value. Some technology would replace the variable with the value prior to display to the user.


Really thinking aloud here.

Given the cost of changing the Moodle core – I assume putting something like this into Moodle would require a change to core – that’s probably not an option in the short term. It’s also possible that an idea like this has already been discussed or addressed elsewhere.

The only potential method that springs to mind for implementing something like this is JavaScript, i.e. include some JavaScript in all the Moodle course pages that implements some form of macro substitution on the client’s end.
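As a rough sketch of what that client-side substitution could look like – the `{{VARIABLE}}` syntax, the `MACROS` object and the function name are all my own assumptions for illustration, not the actual implementation or any Moodle API:

```javascript
// A minimal sketch of client-side macro substitution.
// All names here (MACROS, substituteMacros) and the {{...}} placeholder
// syntax are assumptions for illustration, not part of Moodle.

// Central, per-semester definition of the labels and their values.
const MACROS = {
  ASSIGNMENT_1_DUE_DATE: "31st March (start of week 5)",
  WEEK_10_TITLE: "Digital citizenship"
};

// Replace each {{VARIABLE}} occurrence in a string with its value.
// Unknown variables are left untouched so typos remain visible.
function substituteMacros(text, macros) {
  return text.replace(/\{\{(\w+)\}\}/g, (match, name) =>
    Object.prototype.hasOwnProperty.call(macros, name) ? macros[name] : match
  );
}

// In the browser this might run once the page has loaded, e.g.:
//   document.body.innerHTML =
//     substituteMacros(document.body.innerHTML, MACROS);
```

Doing the substitution over `innerHTML` is crude (it re-parses the whole page and can detach event handlers); walking text nodes would be safer, but the above conveys the idea.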

Post script

This is an example of attempting to live within the constraints of the system. A longer-term solution would be to break out of it entirely: give up the notion of fixed weeks, fixed due dates for assignments, the idea of a course remaining so consistent that only a few values need to change each semester, etc. On the last point, there will be more detailed changes made. A feature like this simply gives more time to make those “better” changes by saving time on the more menial requirements.

I also think it’s an interesting example of how institutions that are increasingly moving to standardised, industrialised approaches to supporting learning and teaching are doing so in ways that leave them either unaware of gaps in their systems/processes or unable to conceptualise how those gaps might be addressed.

Leadership as defining what’s successful

After spending a few days visiting friends and family in Central Queensland – not to mention enjoying the beach – a long 7+ hour drive home provided an opportunity for some thinking. I’ve long had significant qualms about the notion of leadership, especially as it is increasingly being understood and defined by the current corporatisation of universities and schools. The rhetoric is increasingly strong amongst schools with the current fashion for assuming that Principals can be the saviour of schools that have broken free from the evils of bureaucracy. I even work within an institution where a leadership research group is quite active amongst the education faculty.

On the whole, my experience of leadership in organisations has been negative. At best, the institution bumbles along in spite of bad leadership. I’m wondering whether questioning this notion of leadership might form an interesting future research agenda. The following is an attempt to make concrete some thinking from the drive home, spark some comments, and set me up for some more (re-)reading. It’s an ill-informed mind dump sparked somewhat by some early experiences on return from leave.

Fisherman’s beach by David T Jones, on Flickr

In the current complex organisational environment, I’m thinking that “leadership” is essentially the power to define what success is, both prior to and after the fact. I wonder whether any apparent success attributed to the “great leader” is solely down to how they have defined success? I’m also wondering how much of that success is due to less than ethical or logical definitions of success?

The definition of success prior to the fact is embodied in the model of process currently assumed by leaders, i.e. teleological processes, where the great leader must define some ideal future state (e.g. adoption of Moodle, PeopleSoft, or some other system; an organisational restructure that creates “one university”; or, perhaps even worse, a new 5-year strategic plan) behind which the weight of the institution will then be thrown. All roads and work must lead to the defined point of success.

This is the Dave Snowden idea of giving up the evolutionary potential of the present for the promise of some ideal future state. A point he’ll often illustrate with this quote from Seneca

The greatest loss of time is delay and expectation, which depend upon the future. We let go the present, which we have in our power, and look forward to that which depends upon chance, and so relinquish a certainty for an uncertainty.

Snowden’s use of this quote comes from the observation that some systems/situations are examples of Complex Adaptive Systems (CAS). These are systems where traditional expectations of cause and effect don’t hold. When you intervene in such systems you cannot predict what will happen, only observe it in retrospect. In such systems the idea that you can specify up front where you want to go is little more than wishful thinking. So defining success – in these systems – prior to the fact is a little silly. It questions the assumptions of such leadership, including the assumption that leaders can make a difference.

So when the Executive Dean of a Faculty – that includes programs in information technology and information systems – is awarded “ICT Educator of the Year” for the state because of the huge growth in student numbers, is it because of the changes he’s made? Or is it because he was lucky enough to be in power at (or just after) the peak of the IT boom? The assumption is that this leader (or perhaps his predecessor) made logical contributions and changes to the organisation to achieve this boom in student numbers. Or perhaps they made changes simply to enable the organisation to be better placed to handle and respond to the explosion in demand created by external changes.

But perhaps, rather than this single reason for success (great leadership), there were instead simply a large number of small factors – with no central driving intelligence or purpose – that enabled this particular institution to achieve what it achieved. Similarly, when a few years later the same group of IT-related programs had few if any students, it wasn’t because this “ICT Educator of the Year” had failed. Nor was it because of any other single factor, but instead because of hundreds and thousands of small factors, both internal and external (some larger than others).

The idea that there can be a single cause (or a single leader) for anything in a complex organisational environment seems faulty. But because it is demanded of them, leaders must spend more time attempting to define success and convince people of it. In essence, then, successful leadership becomes more about your ability to define success and to promulgate wide acceptance of that definition.

KPIs and accountability galloping to help

This need to define and promulgate success is aided considerably by simple numeric measures: the number of student applications; DFW rates; numeric responses on student evaluations of courses – did you get 4.3?; journal impact factors and article citation metrics; and many, many more. These simple figures make it easy for leaders to define specific perspectives on success. This is problematic, and its many problems are well known. For example,

  • Goodhart’s law – “When a measure becomes a target, it ceases to be a good measure.”
  • Campbell’s law – “The more any quantitative social indicator (or even some qualitative indicator) is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor.”
  • the Lucas critique.

For example, there is the problem identified by Tutty et al. (2008) where, rather than improve teaching, institutional quality measures “actually encourage inferior teaching approaches” (p. 182). It’s why you have the LMS migration project receiving an institutional award for quality even though, for the first few weeks of the first semester, it was largely unavailable to students due to dumb technical decisions by the project team and required a large additional investment in consultants to fix.

Would this project have received the award if a senior leader in the institution (and the institution itself) had not been heavily reliant upon the project being seen as a success?

Would the people involved in giving the project the award have reasonable reasons for thinking it award winning? Is success of the project and of leadership all about who defines what perspective is important?

Some other quick questions

Some questions for me to consider.

  • Where does this perspective sit within the plethora of literature on leadership and organisational studies? Especially within the education literature? How much of this is influenced by my earlier reading of “Managing without Leadership: Towards a Theory of Organizational Functioning”?
  • Given the limited likelihood of changing how leadership is practiced within the current organisational and societal context, how do you act upon any insights this perspective might provide? i.e. how the hell do I live (and heaven forbid thrive) in such a context?


Tutty, J., Sheard, J., & Avram, C. (2008). Teaching in the current higher education environment: perceptions of IT academics. Computer Science Education, 18(3), 171–185.

Analysing some course evaluation comments

The following reports on some analysis of students’ responses to open questions on the institutional, end-of-semester course evaluation survey for the course I taught in 2013. This initial post gives some background to the course, the evaluation process (and its limits), links to some of the raw results (including summaries of closed responses), and an outline of the process I’m following.

This post starts with discussion of the results followed by a quick description of the process used.


The results reported below are drawn only from the Semester 2, 2013 offering of the course. I haven’t analysed the Semester 1, 2013 responses in detail. In part this is due to time limitations. However, as explained in the first post, it is also due to the Semester 1, 2013 offering of the course being developed as it was taught. This did not work well and, looking through the student comments, it underpinned much of their perspective on the course. The Semester 2 offering was essentially complete from the start of semester.

The semester 2 offering – as explained in the first post – was also (from one limited perspective) reasonably successful. It could be argued that I’m cherry-picking the results I like.

One significant limitation of using only the Semester 2 responses is that Semester 2 is an online-only offering. There are no on-campus students. In particular, the increase in apparent workload for on-campus students, arising from the apparent duplication between attending on-campus lectures and tutes and having to complete the online activities, is simply not an issue for online students.

The open coding of student comments into categories (see the method section below) was done by only one person – myself. Hence it’s not the only, nor likely the best, categorisation.

As explained in the method section, I’ve also removed the “Teaching staff” category from the results. This category had by far the most positive comments (n=27) and no comments coded as negative or suggestions. It’s excluded because it made it difficult to get value from the chart and given that I’m already perfect there’s no need to consider those comments.


Results

The following chart (click on the image to see a larger version) provides an overview of the analysis results and some description follows. In summary,

  • The overall course in S2, 2013 was well received, as was the Moodle course site.
  • The sample assignments and the assignment descriptions were positively received, though the workload remains an issue.
  • Workload remains perhaps the main issue, especially in the first few weeks. Though the students still appear to have enjoyed the course.

Student comments - EDC3100 by David T Jones, on Flickr

Study desk

The most positive comments were for the Study Desk category and its related child categories (see the method section for a list of all the categories):

  • 4 comments on activity completion, the students liked being able to track what they are up to.

    I have a fear here about what this may encourage.

  • 6 comments on the learning path.
  • 2 comments on Moodle books.
  • 10 general comments on the study desk.

Course comments

Overall course comments are next. All positive comments. So not interesting in this context, but pleasing to the ego.


Content

The Content category has 13 positive comments, but also 1 suggestion (“course materials that link to readings and information” in the context of what can be improved) that I’m not sure I can parse. The same student also gave a negative content comment: “there wasn’t any information/textbooks off the net for information”. This combination, plus the wealth of content pointed to during the course, seems to suggest that this comment is a mistake. But I’m uncertain.

The other negative content comment was “Some times it was difficult to navigate back through all of the moodles to find info previously learn”. This points to the “absence of a search engine” barrow I’ve been pushing.


Assessment

Assessment was the next most positive category with 6 positive comments. One mentioned “All … were so well explained”, a couple mentioned something along the lines of “content directly linked to assessment and content”, a couple also liked “the examples given they helped a lot when completing my own assignment”, and one even “Loved the assessment structure”.

In terms of negatives or suggestions, mention was made of wanting more time prior to assignment 1 (thrice); of the final assignment being due 3 days after Professional Experience; and of the size of the assessment tasks (twice). One negative comment was more fundamental: “assessment tasks changed”.


Workload

Workload also had 6 positive comments, but it is the first category with more negative comments (9) than positive.

The positive comments were generally of the form “So all content was necessary but just way too much!!”.

The negative comments all indicated too much work, with a particular emphasis on the first few weeks (a known problem). For example “This course nearly broke me but I have come a long way”.


PLN

This category included both setting up a PLN and using a range of tools (Diigo, Twitter, blogs) to do that. Only 8 comments overall. A couple of positive ones mentioned the value of the PLN and the tools. The negatives included one complaint about managing multiple accounts for all the tools, one that the tools set up unrealistic expectations for Professional Experience, and one about not being confident enough to learn all those tools early on.

The remaining categories included very few comments.


Method

The analysis was used as an initial exploration of NVivo. The method used was:

  1. Import data from the institutional course evaluation website into NVivo.

    Due to the shortcomings of this website the data has a number of limitations, including: no ability to link responses to closed questions with responses to other open questions; and only being able to include responses from 37 of the 42 responses to the survey.

    I’m particularly displeased about this institutional inability.

  2. Initial coding of student comments (or part thereof) into three categories
    • Suggestion – the comment makes a particular suggestion to change the course.
    • Negative – the comment criticises some aspect of the course.
    • Positive – the comment praises some aspect of the course.
  3. Open coding of student comments into categories based on an aspect of the course.

    A description of the resulting categories is given below. Where needed I’ve added an explanation. This is a hierarchical list. In the chart above, the child categories have been aggregated into their parent categories (i.e. you won’t see “Learning path” in the chart above, all those responses appear as part of the “Study Desk” category). The links below will take you to web pages (produced by NVivo) that show you the comments coded in each category.

    • Assessment (n=12)
    • Blogging (n=2) – any comment around the assessable requirement to write blog posts throughout the semester.
      • Blog updates – throughout the semester I sent out emails summarising students’ blogging progress against what was required for assessment.
    • Content (n=17) – the actual content of the course
    • Course (n=15) – general comments about the course e.g. “bad course”, “best course ever”
    • Misc other (n=1) – for comments that I couldn’t think of an appropriate category for
    • PLN (n=7) – about the requirement for students to engage with social media (blogs, diigo, twitter etc) to build a personal learning network
      • ICTs – the use of the various ICTs (Diigo, Twitter, blogs) as part of the PLN process.
    • Professional Experience (n=3) – as part of the course students spend 3 weeks in schools teaching
    • Study Desk (n=22) – the USQ label for the Moodle course site
      • Activity Completion – a Moodle feature that will display a tick beside activities that the student has completed
      • Learning path – each week on the study desk was designed as a learning path. A series of online resources and activities that all students had to complete
      • Moodle Books – a Moodle feature used to structure the learning path
    • Teaching Staff (n=27)
    • Workload (n=21) – a comment about the level of work required to complete the course
      • Start of course – the workload for this course is quite heavy in the first few weeks.
  4. The use of NVivo’s matrix coding capability to produce the graph above that compares which aspects of the course the students commented on negatively, positively or as suggestions.
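The matrix-coding step above can be sketched in plain Python: given comments coded with a category and a sentiment, roll the child categories up into their parents (as was done for the chart) and cross-tabulate. The category names mirror the list above, but the sample comments, the `PARENT` mapping as code, and the `cross_tab` helper are all invented for illustration; none of this is NVivo itself.

```python
from collections import Counter

# Child category -> parent category, mirroring the hierarchical
# list in the method section (illustrative, not exhaustive).
PARENT = {
    "Learning path": "Study Desk",
    "Activity Completion": "Study Desk",
    "Moodle Books": "Study Desk",
    "ICTs": "PLN",
    "Blog updates": "Blogging",
    "Start of course": "Workload",
}

def cross_tab(coded_comments):
    """coded_comments: iterable of (category, sentiment) pairs.
    Returns a Counter keyed by (parent_category, sentiment)."""
    counts = Counter()
    for category, sentiment in coded_comments:
        counts[(PARENT.get(category, category), sentiment)] += 1
    return counts

# Made-up sample data standing in for the coded student comments.
sample = [
    ("Learning path", "Positive"),
    ("Activity Completion", "Positive"),
    ("Study Desk", "Positive"),
    ("Workload", "Negative"),
    ("Start of course", "Negative"),
]
table = cross_tab(sample)
print(table[("Study Desk", "Positive")])  # 3
print(table[("Workload", "Negative")])    # 2
```

The roll-up is why, for example, “Learning path” comments don’t appear separately in the chart: they are counted under “Study Desk”.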

Getting started with NVivo

What follows is some initial explorations into the use of NVivo for the qualitative analysis of content. I’m going to use the task of analysing student comments from the evaluation of a course as a test case.

Aside: Just been told that “raw” data (i.e. de-identified course evaluation data with responses grouped by student) “is not given out in any form”. So rather than having useful data to import easily, I’ll have to kludge up the importation of some hamstrung data.

Start with the question(s)

As with all good research, I’ll start with the questions I want to answer with this analysis.

  1. What themes do the student comments cover?
  2. What themes are the most prevalent?
  3. Is there any difference between offerings or mode of delivery (on-campus, which campus, online)?

(Remember, the aim here is mostly to learn NVivo, so the limitations of my “research” questions are noted.)

Sourcing assistance

I was going to include both help from NVivo and the broader web, but so far the NVivo documentation is proving sufficient. That may change as I get deeper into it.

On first glance, it appears NVivo comes with some significant help. Help that seems to be well designed (at version 10 of the product you’d hope they’d have built up some expertise). Nice and very early introduction of sample research projects and how you’d go about doing that research. I’ll borrow some of that for the process below.

Key concepts

All systems have some abstractions they use. Nvivo is no different. The nice “understand the key concepts” section seems to list four (only four?)

  1. Sources – the (multimedia) stuff you’re analysing
    • Internals – the sources that can be imported into Nvivo.
    • Externals – those you can’t import.
    • Memos – the place you store your insights about the analysis
    • Framework matrices – to summarise source materials.
  2. Coding and nodes – you code your source material into nodes. Provides access to all the references to that node.

    Appears this can be hierarchical

    Includes “case” node to store attributes. Often used to identify people/places. Classification sheets used to view the values for these nodes.

    Nodes can be organised into folders.

  3. Node classifications – apparently intended for “demographic attributes”.
  4. Source classifications – using case nodes to manage bibliographic data


Sources/data can be organised using folders (recommended for the start), sets (used later to gather sources/nodes from different folders) and search folders (?). Also source classifications to organise/compare based on attributes. e.g. book, journal article, web page.

Create a project

Enough reading. Let’s get going. Some of the following is initial exploration, confirming assumptions drawn from the help material in the actual tool.

Hit the new project button, enter title and description, accept the default filename. Mmm, user actions can go to an event log – a transparency/repeatability measure I assume. Leave it off for this play.

Ok, a window where I can recognise the point of most elements. Let’s see if I can create a node. Yep, but I want it to be a case node and can’t figure out how to change it to that. Ahh, I need to create a classification first. And I also need to take care of the type of node. Mm, might wait on doing this until we think things through a bit.

Import some data

Given the inability to get the data in a manipulable format, I need to kludge something out of the institutional web interface.

To remind myself, the main survey had

  • 8 closed questions; and,
    1. Overall, I was satisfied with the quality of this course.
    2. I had a clear idea of what was expected of me in this course.
    3. My learning was assisted by the way the course was structured.
    4. My learning was supported by the course resources.
    5. I found the assessment in this course reasonable.
    6. I received useful feedback in this course.
    7. The teaching team supported my learning.
    8. Overall, I was satisfied with the quality of teaching in this course.
  • Four open questions
    1. What did you find were the most helpful/effective aspects of this course?
    2. What did you find were the least helpful/effective aspects of this course?
    3. What improvement would you suggest to the course itself?
    4. Please feel free to make any other comments, particularly in relation to your ratings for this course
  • And a couple of optional closed questions. For semester 2 these were (one question I selected from a pre-approved list; the other the institution included):
    1. It was easy to navigate my way around the StudyDesk.
    2. You may have completed previous USQ student surveys in the past. Is this new set of questions an improvement? Please ignore this question if this is the first time you have completed a USQ student survey

For each of the closed questions there was also an option for students to provide some free text responses.

The web interface I have access to will provide the following associations

  • For each closed question, it shows a list of any textual responses for that question (with the numeric response) plus the option to expand out all the other free text responses made by that student.
  • For each open question, a list of the textual responses plus the option to expand out all other free text responses made by the student.

It is theoretically possible to combine these two to get a better picture, but it would be manual and also plagued by the issue that not all students answered all open questions.

So the best option for the purposes of this exercise may be to select one of the open questions (hopefully one most students answered) and copy and paste those responses into a Word document, then rely on NVivo’s data wizards to import and separate responses by question and student. The format would be something like:

  • Chosen question – What did you find were the most helpful/effective aspects of this course?
    • Students response to this question – The quick responses from the teaching team.

      And for each of the other open questions the student responded to.

      • Question text: What did you find were the most helpful/effective aspects of this course? (yes it duplicates)
        • Student response: The quick response from the teaching team.
      • Question text: My learning was supported by the course resources.
        • Student response: Sometimes was frustrating that assessable course material was added after I had completed the weeks work (5/5)

          Note: the numeric response from the student for this comment on one of the closed questions.

Copying and pasting this into Word and playing with paragraph styles might work? Of course the copy and paste is ugly. The manual conversion with macros is doable for this test. The question is whether NVivo will auto-import.
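As a rough sketch of the target structure, the following Python mocks up the one-Heading-1-per-student, one-Heading-2-per-question layout, using plain-text markers in place of Word paragraph styles. The `format_responses` helper and the “H1:”/“H2:” markers are hypothetical; real NVivo auto-coding keys off actual Word heading styles, not text prefixes.

```python
def format_responses(student_id, responses):
    """Lay out one student's survey responses in the structure
    described above: an H1 for the student, an H2 per question,
    body text for each answer. responses: list of
    (question, answer) pairs."""
    lines = [f"H1: {student_id}"]
    for question, answer in responses:
        lines.append(f"H2: {question}")
        lines.append(answer)
    return "\n".join(lines)

doc = format_responses("Student 1", [
    ("What did you find were the most helpful/effective aspects of this course?",
     "The quick responses from the teaching team."),
])
print(doc)
```

The point of the rigid heading structure is that the auto-coder can then split the document by student and by question without any manual coding.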

I have the data file in this format. Create a node for the offering. Import the external data as a source. Do the auto-coding and hey presto. Imported correctly!!! FTW.

Concept proved. Now I need to develop the appropriate strategy to import the sources.

Planning the analysis

The following is an attempt to plan out the process I’ll use. For once, planning it all out prior to starting might be an idea. The process is loosely adapted from the NVivo help docs:

  1. Set up the project – research design, project journal, and make a model.

    Going to leave this until later. This isn’t really a research project.

  2. Import the data, organised by folders.

    The data I’ll import will come in two forms

    1. Word documents for open question responses.

      For each mode I’ll create a Word document by copying and pasting from the web application. The responses will be grouped by student identified by a code and use paragraph formatting so I can use the auto-coding feature.

    2. Spreadsheets for closed question responses and “demographics”.

      Each mode will have its own spreadsheet that contains information about the students. This will include: the year, semester and mode in which they took the course; and their responses to the closed questions. Note: it will only include responses for those closed questions where the student provided a comment (a limitation of the data).

  3. Node structure and coding
    • Does a word frequency query reveal anything?
    • Use an initial list of nodes including: negative, positive. The rest I’ll leave open.
  4. Set up nodes for people, places etc.
  5. Explore the material and code themes.
  6. Run a matrix coding query to see about prevalence of themes.

Prepare the data

Need a test run of the data and its importation. The process is to

  • Word document
    • Select the open response question that has the most responses on the assumption we’ll get the most data.

      A significant problem here is that not all students respond to all open questions. So even picking the open question with the most responses, I may be missing data. God I hate badly designed systems and institutions that don’t recognise the importance of rapid response.

    • copy and paste without formatting into Word.
    • change the paragraph formatting to the following
      • H1 – student identifier
      • H2 – the text of the open question.
      • Body – the student’s response.
    • Replace the student identifier with code – NUMYEARSEMMODE where
      • NUM = unique number starting at 1 and incrementing.
      • YEAR = the year (duh)
      • SEM = 1 or 2 representing the semester
      • MODE = one of the following representing the mode of delivery: Toowoomba, Fraser Coast, Springfield, Online.
  • Spreadsheet
    • Create columns: Student,Semester,Year,Mode, SEC01, SEC02, SEC03, SEC04, SEC05, SEC06, SEC07, SEC08.

      Student will be the id from the Word document. Yes, the id duplicates the next three columns, but I’m assuming this might be needed to allow manipulation by NVivo and ease of understanding by human beings.

      SEC01 etc are the ids used by the institutional system for each of the closed questions. These will be used to hold the student’s response to these questions if it is available from the Word document.

    • For each student add a row from the Word doc.
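As a sanity check on the plan above, the NUMYEARSEMMODE identifier scheme and the spreadsheet layout can be sketched in Python. This is purely illustrative: `make_code` and `write_rows` are hypothetical helpers, not part of NVivo or the institutional system.

```python
import csv
import io

def make_code(num, year, sem, mode):
    """Build the NUMYEARSEMMODE student identifier described above:
    a unique number, the year, the semester (1 or 2) and the mode
    of delivery, concatenated."""
    assert sem in (1, 2)
    return f"{num}{year}{sem}{mode}"

def write_rows(students):
    """Emit CSV with the planned columns. students: list of dicts
    with Student/Semester/Year/Mode and any SEC01..SEC08 responses
    (left blank when the student made no comment on that question)."""
    cols = ["Student", "Semester", "Year", "Mode"] + [f"SEC{i:02d}" for i in range(1, 9)]
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=cols)
    writer.writeheader()
    for s in students:
        writer.writerow({c: s.get(c, "") for c in cols})
    return out.getvalue()

code = make_code(1, 2013, 2, "Online")
print(code)  # 120132Online
rows = write_rows([{"Student": code, "Semester": 2, "Year": 2013,
                    "Mode": "Online", "SEC05": 5}])
```

As noted above, the Student column duplicates the next three columns, but carrying both the code and the separate attributes keeps things easy for NVivo and for human readers.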

Okay, let’s try this.

  • Both files created for the smallest sample of students (n=11)
  • Import the word document into NVivo – no worries.
  • Import the spreadsheet – counts as a data source (perhaps).

    Can choose columns to be classifying or codable fields. In this case, I believe I’m looking at classifying fields.

  • Set up the “respondent” classification.
  • Set up the positive/negative nodes initially, also a node for the specific offering (overkill?).
  • Auto create the nodes from the document – done.
  • Classify the student nodes as “Respondents”.
  • Fill in the attribute values for the student nodes?

    Actually, that happens automatically when importing the spreadsheet. The label for the first column becomes the name of the classification and the value (as I’d hoped) gets linked with the content node.

  • Let’s do a bit of manual coding for positive/negative.

    Figuring out the mechanism for coding was surprisingly more difficult than I expected.

    Didn’t take long looking at the responses to think about “Suggestion” as another category/node. All done, not a bad process.

Current status

The basics of NVivo learnt. Some possible limitations for certain activities identified. But there will be workarounds. More on what the evaluation revealed (not a lot new) tomorrow.

Evaluating EDC3100 in 2013 – step 1

It’s time to reflect on what happened in EDC3100, ICTs and Pedagogy last year. In another couple of months semester will start and in a couple of weeks I’m scheduled to talk with some of the teaching staff. The following is the first of a few posts that will serve as the basis for discussions with other members of the teaching staff, an artefact to show the 2014 students some evidence of thought and rationale behind the course they’re taking, and perhaps a spark for some ideas from you, the reader.

This post


The course in question is taken by 3rd year students undertaking a Bachelor of Education (i.e. pre-service teachers and the vast majority of students in the course) and also a small number of students studying a Bachelor of Vocational Education and Training (the tension between requirements laid down by the accrediting bodies for school teachers and the needs of the VET students is one of the difficulties in this course). The aim of the course is for students to develop the ability to use ICTs to improve their teaching and their students’ learning.

I’ve now taught the course for two years. In 2012 I taught the course as developed by someone else. I wasn’t going to let my ignorance get away from me. For 2013 I was able to redesign the course to something closer to what I prefer. However, a range of issues meant that almost all of the redesign occurred as the first semester 2013 offering progressed. This was far from optimum as is shown below. The second semester 2013 offering was a tweaked version of the redesign and was seemingly significantly more successful.

The course is offered twice a year (semesters 1 and 2). The semester 1 offering is the largest offering. Semester 1, 2013 had approaching 300 students spread amongst 3 different campuses, a Malaysian partner and online. The online student cohort (n=180) was the largest. I was responsible for the online students and those at the Toowoomba campus. Two other teaching staff were responsible for the students at the two other campuses and the Malaysian partner. The semester 2 offering is online only and in 2013 had around 100 students.

The aim for 2014 is to only do minor revisions: to build on what is there and improve it. Workload (we’re only meant to do major revisions every 2-3 years) and expectations around research outcomes are among the reasons for this. But another important reason is to avoid throwing away the knowledge and resources gathered during the 2013 offerings. For example, I now have a collection of very good sample assignments from the 2013 students. These and other insights from 2013 are valuable; a major revision would throw them out. It would also mean a third year in a row where I’m dealing with essentially a new course.

On the other hand, the course synopsis etc. is from before my time and I’m not a big fan (of course I’m not, I didn’t write it), and that, along with the learning outcomes, should perhaps be revisited.

Bias and perceptions

Before I start I’d like to make explicit my current perceptions of the course. These are based on having developed and taught it, having skimmed some of the student feedback, and having talked with other staff and students. I’m trying to make my current perspectives explicit to challenge myself to think differently once I’ve taken a more in-depth look at the course.

Semester 1 bad, Semester 2 better

Semester 1 was problematic, mostly because the assessment and the specific tasks for each week were being developed during the semester. This created confusion amongst the staff and the students. It prevented students from being able to plan ahead. It also created some of the workload issues. Semester 2 was more successful because everything was available up front and the insights generated during semester 1 led to some minor improvements.

Having taught both semesters, this was evident to me. It also shows up in the student evaluations. For example, the image (click on it to see a bigger version) that follows is the first chart from the Semester 2 students (remember these are online only). It shows comparative means for the standard 8 questions (see the PDF complete view for the actual questions) for the class, the course and the faculty. In this image class (online students) and course are the same, as all students are in the online “class”.

Semester 2 EDC3100 Comparative Means by David T Jones, on Flickr

All of the means for this offering are above the Faculty (of Education) mean and all are above 4, most above 4.5.

Compare this with the means from the Semester 1 online students. I’m using the online students from semester 1 as they are the group most comparable to the Semester 2 students (all online). As it happens, the semester 1 online students were also the cohort that liked the course the most.

Semester 1 online students EDC3100 Compa by David T Jones, on Flickr

A very different story. Only 2 of the class means are just above 4. Note that the course means are significantly below the class means; the online students in semester 1 were the happiest of the cohorts (i.e. the on-campus students ranked the course much worse). Most of the class means (online students) were above the faculty mean, but not SEC02 (I had a clear idea of what was expected of me in this course) and SEC05 (I found the assessment in this course reasonable). The “in-semester development” of the course was a problem.

Workload and assignments

Workload for the course remains high for the students, even in semester 2. This is due to a number of factors including the assessment and the weekly activities. It is, however, also related to the students being challenged. i.e. many are not highly experienced with ICTs, come in with a perception of being ICT challenged, and yet the course expects them to make heavy use of ICTs for their learning as well as their teaching. The inclusion of 15 days of Professional Experience – where students are expected to teach – doesn’t help this perception.

Duplication for on-campus students

Part of the workload issue for on-campus students is the apparent need to complete both the learning path and attend on-campus classes. The learning path is the series of activities and resources students need to complete each week (some early origins of this idea described here). This is tracked by the LMS and they get a small percentage of the overall course mark for completing this work.

The trouble is that on-campus students also expect to attend the on-campus classes. They then see the expectation to complete the learning path and, not surprisingly, see that they have to do twice the work they would normally have to do.

They don’t get the blogs or reflection

Students are expected to blog as part of the assessment. Most don’t see the point and see it as part of the high workload. A part of this is the expectation to use the blogs for reflection and building a PLN. The 2013 course didn’t do a good job of scaffolding this.

Absence of a search engine

The learning path doesn’t include a lot of content. Instead, it tends to give some context, point to other resources, and have integrated activities, i.e. more of a study-guide approach. However, even with limited content, students often find themselves later in the semester wanting to revisit a prior topic. Given that the learning path is hosted on the Moodle course site and there is no search engine, this requires a reliance on memory or manual searching. This frustrates.

In writing this, I’m thinking I may not do anything about this. The obvious addition of a search engine would require institutional and technical support. That’s unlikely to happen quickly if at all. I could combine the learning path into a single document that the students could search manually. The problem is that this would lose the activity completion “ticks” which show students what they’ve completed (positive comments on this) and is used for assessment.

The real reason for not doing something is that one of the aims I have for the course is to encourage the students to develop the mindset of solving their own ICT problems. Of seeing ICTs as a source for solutions to their problems, rather than a cause of their problems. Developing this mindset is for me, one of the most important enablers for using ICTs well. So this problem becomes something for them to solve for themselves using what ICT tools they can find.

Limited creativity in using ICTs

Many students tended to base their teaching while on prac on the ICTs we used in the course. This is not surprising (it’s human nature), but suggests that we didn’t make the point strongly enough that context matters. i.e. the ICTs you’d use in a University course with 250 students spread over large geographic distances are not what you’d use in a class of twenty 6-year-olds.

Limited use of theories

A contribution of this course is providing the students with a range of theory, frameworks, literature etc. to guide their use of ICTs in their teaching. With some exceptions this is done far from well, and often as a post-hoc rationalisation (i.e. plan the lesson and then find some literature/theory to explain the design).

The site structure works

There were lots of negative comments about being lost from the Semester 1 students. There are some questions about how much this perception was affected by the material not being available early.

The semester 2 students were more positive. For example

The structure of the Studydesk – David please teach all other lecturers to structure their studydesks like yours!

Student evaluations

The institution has a formal end-of-semester evaluation of teaching process. It’s a web-based process that is available from near the end of teaching to near when results are released. Staff only see the responses after results are released.

The charts above are captures from the output provided by the evaluation system. The following provides some more background on this data. More in-depth analysis will come in later posts.

The following table provides a summary of the end-of-course student evaluations. These take the form of a handful of standardised questions, with some optional standardised questions; both closed and open. Most of the response rates for the different cohorts are quite high in comparison to some I’ve seen.

There is also evidence of the standard student confusion over which course they are commenting on. For example, one of the responses to an open question is the student who found the “Breakdown of how to tackle the Webquest” as one of the most helpful/effective aspects of the course. The trouble is that Webquests are actually a key part of another course that many of the EDC3100 students do at the same time.

I have included links to PDFs of the responses for each student cohort where I am the primary teacher involved. I’m reluctant to provide those where other teaching staff were involved as I’m worried about stepping on toes (I haven’t talked with the other folk about this yet). I will note, however, that during my initial skim of the responses, the only mention of other staff by name is of the form “Feedback from my tutor (name removed) was fantastic (5/5)” or like this “Once again, (name removed) worked hard but was hampered with inadequate help from Toowoomba” – I’m the source of the Toowoomba inadequacy.

Semester 1, 2013

    Cohort         n    %
    Online         62   33%
    Toowoomba      16   27%
    Springfield    24   41%
    Fraser Coast   11   58%

Semester 2, 2013

    Cohort         n    %
    Online         42   42%

Interestingly, according to Table 9.1 from this document (Franklin, 2001), none of the above response rates meets the recommended level given the number of students.
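As a quick check of the tables above, the response rates can be recomputed. The enrolment numbers below are back-calculated from n and the reported percentages (they aren’t stated in the post), so treat them as approximations.

```python
def response_rate(responses, enrolled):
    """Response rate as a whole-number percentage."""
    return round(100 * responses / enrolled)

# (responses, approximate enrolment) per cohort; enrolments are
# reverse-engineered from the reported percentages.
cohorts = {
    "Online (S1)": (62, 188),
    "Toowoomba (S1)": (16, 59),
    "Springfield (S1)": (24, 59),
    "Fraser Coast (S1)": (11, 19),
    "Online (S2)": (42, 100),
}
for name, (n, enrolled) in cohorts.items():
    print(f"{name}: {response_rate(n, enrolled)}%")
```

This reproduces the percentages in the tables, which is all the institutional output provides; without the raw enrolment figures a precise comparison against Franklin’s recommended levels isn’t possible.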

Franklin (2001) also has this interesting tidbit

New or revised courses frequently get lower-than-expected ratings the first time out. This may be very important if you have been active in developing or revising a course for which you do not have abundant ratings. New courses may take time to work the bugs out, and consequently you may have lower ratings than usual

What’s next?

  1. Analyse the student responses to the open questions.

    Due to another research project I need to become familiar with using NVivo. Using Nvivo to analyse the student responses to open questions seems a good way to become familiar with the tool and perhaps reveal some interesting insights. Two birds, one stone.

    Of course a barrier to this is the difficulty of getting access to student responses in a raw format to feed into NVivo.

  2. Revisit what the course should be covering.

    The Bachelor of Education for this course is currently undergoing an accreditation process. With the new national curriculum (and perhaps very soon the re-jigged, new national curriculum) there are some requirements to revisit exactly what is being taught. For example, there was a strong recommendation last year that Interactive White Boards be explicitly covered (what I’m told “should” be taught in the course doesn’t always align with what I think should be taught).

    Beyond this I have some thoughts about what the course is missing and has too much of.

  3. Take a look at the current course.

    I actually need to go back and look through the course, the site, the activities etc and see what is ok and what needs to be improved.

  4. Play with some analytics.

As part of the last point, and also because of another project, I’d like to play a bit more with some analytics to look at what happened in the course and the course site. Something I’ve already started with this post, trying to visualise blog post frequency.

  5. Talk with other teaching staff.

By about this stage, I’m hoping the above work will have given a good foundation for more discussions with other staff.

  6. Generate some ideas.

Obviously I already have some ideas (e.g. doing something with badges could be a good fit, but is unlikely), but hopefully all of this will identify some more.

  7. Make some plans.

    Lastly, the ideas will need to be reduced to something doable.

Now to go learn more about NVivo.


Franklin, J. (2001). Interpreting the Numbers: Using a Narrative to Help Others Read Student Evaluations of Your Teaching Accurately. New Directions for Teaching and Learning, 2001(87), 85–100.

Lewis, K. G. (2001). Making Sense of Student Written Comments. New Directions for Teaching and Learning, 2001(87), 25-32.

Wongsurawat, W. (2011). What’s a comment worth? How to better understand student evaluations of teaching. Quality Assurance in Education, 19(1), 67–83. doi:10.1108/09684881111107762

Missing affordances – A partial explanation for the quality of University L&T

A Friday afternoon rant/story illustrating what I see as a fatal flaw in institutional University L&T systems (at least those I’ve experienced in Australia). This flaw helps explain why the quality of L&T – especially e-learning – leaves something to be desired.

Evaluating teaching and learning

I’m in the midst of thinking about the main course I’m teaching this year. As part of that I’m trying to take a serious look at the responses from last year’s students on the standard course evaluation survey.

Doing this is actually part of my responsibilities as laid out in institutional policy (emphasis added)

All staff who are employed on a continuing (full-time or fractional), fixed-term or casual basis and who have a substantial involvement in teaching are required to:

  1. Evaluate their teaching using the SELT or SERE instrument, as appropriate. In addition, staff may choose to use other, optional instruments such as PELT, short, open-ended written responses, meetings with student representative groups and nonstandard questionnaires.
  2. Reflect on evaluation feedback and, where necessary, determine, implement and communicate to students a timely response that is consistent with the continuous improvement of teaching.

Now there are some widely known limitations of the type of data arising from these surveys. However, at the same time there are a range of techniques and strategies that can be adopted that can help address these limitations somewhat. At the simplest is the idea of sorting comments by respondent (Lewis, 2001) or something like Wongsurawat’s (2011) approach of using each respondent’s correlation with the mean class ratings. Beyond this, I’m certain the quantitative researchers at the institution could come up with a range of analyses that might be beneficial.
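To make that concrete, here is a rough sketch, in Python rather than anything the institution provides, of Wongsurawat’s (2011) idea: score each respondent by how well their closed-question ratings correlate with the class means. The data layout (one list of ratings per respondent) is my own assumption about what a raw export might contain, not an actual format.

```python
# Sketch only: given raw per-respondent ratings on the closed questions,
# compute each respondent's Pearson correlation with the class mean rating
# per question, roughly in the spirit of Wongsurawat (2011).
# The input shape is hypothetical -- one row per respondent, one column per
# closed question.

from statistics import mean

def respondent_correlations(ratings):
    """ratings[i][q] = respondent i's rating on question q.
    Returns one correlation per respondent against the class means."""
    n_q = len(ratings[0])
    class_means = [mean(r[q] for r in ratings) for q in range(n_q)]

    def pearson(xs, ys):
        mx, my = mean(xs), mean(ys)
        num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        den = (sum((x - mx) ** 2 for x in xs) *
               sum((y - my) ** 2 for y in ys)) ** 0.5
        return num / den if den else 0.0  # constant ratings: no information

    return [pearson(r, class_means) for r in ratings]

# Respondents whose ratings track the class means could then have their
# open comments weighted differently from outliers.
scores = respondent_correlations([[5, 4, 5], [4, 4, 4], [1, 5, 1]])
```

Nothing sophisticated, but it is the sort of thing that becomes trivial once the raw data can be exported.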

What does the institution provide?

A simple web interface that provides tables of statistical data and bar graphs for the closed questions and lists of responses for the open questions. We can choose whether to see just the closed questions, just the open questions or both. There’s no apparent way in the interface to get the raw data, so no easy way of importing into an analysis tool. I can generate a PDF file with the data.

My institution is not alone as illustrated by this tweet from @s_palm

I’m required to do this reflection by institutional policy. It’s also a good thing to do, but the institutional ecosystem does a poor job of enabling me to do this effectively


In writing about cognitive artefacts, Norman sums up the problem

..no surprise that those things that the affordances make easy are apt to get done, those things that the affordances make difficult are not apt to get done (Norman, 1993, p. 106)

i.e. standard human behaviour.

If Norman’s argument holds, what does it mean for the institutional requirement to reflect on evaluation feedback that the affordances offered by the institution make this difficult to do? What does this suggest about the impact on the quality of teaching and learning at the institution?

And this isn’t the only situation where there are limited affordances. For example, I can’t easily find the final grade breakdown for students in courses I’ve taught. There’s no way I or my students can search a course website. Apparently the alignment of course learning outcomes with learning content, activities and assessment is good practice. What affordances are built into institutional systems to encourage continuous consideration of alignment (i.e. not accreditation induced mapping projects)?

What might happen if what is deemed important for quality learning and teaching was made easy? What might that do for the quality of learning and teaching at an institution?

Ignorance and the big picture

I’m aware that my current institution has expended significant resources on the design of the institutional course evaluation policy and instruments. It does appear that this effort has starved focus and resources from ensuring that appropriate affordances are in place to make it easy for staff to fulfil institutional policy. I wonder if the small picture question of affordances is visible to those thinking of the institutional big picture.

I often think that decision makers taking the “big picture” view means they are completely ignorant of these smaller level details.

Be careful what you wish for

Of course, the only thing worse than this affordance not being provided by the institutional infrastructure is the likely affordance that would result from the institution’s attempt to address this problem.

The best solution to this may not be to purchase a you beaut enterprise system that comes with all the bells and whistles. It might be just to provide an export option for the raw data, thus allowing academics to leverage their respective experience and skills with a range of analysis tools. In addition, it might be a good idea to provide a simple mechanism (integrated into the export function) by which people can share the analysis they do.

Solve the cause, not the symptom

And of course, if (heaven forbid) someone from the “big picture” crew in the institution got wind of this blog post, the most likely outcome would be a focus entirely on student evaluation. All the while missing the fundamental cause, that the structure, policies and practices of the institution are incapable of paying attention to affordances, let alone doing something about them.

Biggs and the reflective institution

There are echoes of this in Biggs’ (2001) idea of the reflective institution and the notion of Quality Feasibility

What can be done to remove the impediments to quality teaching? This is a question that institutions rarely ask, although individual expert teachers continually do. It is one thing to have a model for good teaching, but if there are institutional policies or structures that actually prevent the model from operating effectively, then they need to be removed. (Biggs, 2001, p. 223)


Biggs, J. (2001). The Reflective Institution: Assuring and Enhancing the Quality of Teaching and Learning. Higher Education, 41(3), 221–238.

Lewis, K. G. (2001). Making Sense of Student Written Comments. New Directions for Teaching and Learning, 2001(87), 25-32.

Norman, D. A. (1993). Things That Make Us Smart: Defending Human Attributes in the Age of the Machine. Reading, MA: Addison-Wesley.

Wongsurawat, W. (2011). What’s a comment worth? How to better understand student evaluations of teaching. Quality Assurance in Education, 19(1), 67–83. doi:10.1108/09684881111107762

Visualising posting frequency: BIM and EDC3100

The following describes an attempt to develop a visualisation of the frequency with which students in the course I teach posted to their blogs during two offerings of the course in 2013. A few reasons for doing this

  1. Confirm my gut feel that some students (for a variety of reasons) treated blogging pragmatically and only posted just before due dates.

    Only some evidence of this.

  2. Explore if there’s any visual correlation between this behaviour and their final grade in the course and other factors.

    Some suggestion of this.

  3. Help think about whether or not a visualisation like this might be something to include in the next version of BIM.

    That might be difficult given time-frames and other constraints, but I’ll be exploring how to create some of these manually and use them during the coming semester.

Cut to the chase (you may have to be patient for these visualisations to come up in your browser – and I can’t be sure they’ll work for everyone)

More description of what and how follows.

The first plan

A table. Columns represent days of the semester. Each row is a student. Colour the cells where a post occurs. Show the due dates by running a line through the table. Group the rows via various external factors including: GPA, result in the course, sector.
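The plan above amounts to building a students × days matrix of post counts. A minimal sketch of that step (in Python rather than the Perl used against the BIM database, with made-up names and data shapes):

```python
# Sketch of the first plan: turn a list of (student, post_date) pairs into a
# students x days matrix of post counts -- the structure the table
# visualisation needs. Names and input shapes are my own, not BIM's.

from collections import defaultdict
from datetime import date, timedelta

def posting_matrix(posts, start, end):
    """posts: iterable of (student_id, date) pairs.
    Returns (students, days, counts) where counts[i][j] is the number of
    posts student i made on day j of the semester."""
    days = []
    d = start
    while d <= end:
        days.append(d)
        d += timedelta(days=1)
    day_index = {d: j for j, d in enumerate(days)}

    per_student = defaultdict(lambda: [0] * len(days))
    for student, when in posts:
        if when in day_index:
            per_student[student][day_index[when]] += 1

    students = sorted(per_student)  # later: group/sort by GPA, grade, sector
    return students, days, [per_student[s] for s in students]

posts = [("s1", date(2013, 7, 1)), ("s1", date(2013, 7, 1)),
         ("s2", date(2013, 7, 2))]
students, days, counts = posting_matrix(posts, date(2013, 7, 1), date(2013, 7, 3))
```

Grouping the rows by external factors is then just a matter of changing the sort key.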

Google Chart’s magic table looks like a good tool for a first run. So here goes.

Test out magic tables

Well, the example code for Data::Google::Visualization::DataTable is truly broken. But, once fixed, it will generate a JavaScript array that will slot straight into the MagicTable visualisation.

Doing it with BIM data

So bring in some of the BIM code, read the database and generate the data structure required by the MagicTable code and you get the following as a first stab (click on it to see a larger version). Each row of this figure represents an individual student blog. Each column represents a day of the semester. Any cell that is shaded a particular colour represents a day when some posts were added to the student’s blog.

The fish-eye view in the above is hovering over one student who definitely seems to have left things to the last minute – 16 posts on the one day. A day that happens to be toward the end of the semester.

The prevalence of green squares earlier in the semester represents days with 6 to 8 posts.
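For what it’s worth, that kind of shading can come from a simple bucketed colour scale like the following sketch (the thresholds and hex colours here are my guesses, not what MagicTable actually uses):

```python
# A guess at a simple bucketed colour scale for shading cells by post count.
# Thresholds and colours are illustrative only.

def cell_colour(post_count):
    """Map a day's post count to a hex colour, darker = more posts."""
    if post_count == 0:
        return "#ffffff"  # no posts: leave the cell white
    if post_count <= 5:
        return "#c8e6c9"  # light green
    if post_count <= 8:
        return "#66bb6a"  # green: days with 6-8 posts
    return "#1b5e20"      # dark green: binge-posting days
```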

If you want to be more interactive, you can see the full working HTML page here. The table itself has a size of 1403×1315.
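Generating the data structure the MagicTable page expects is mostly serialisation. A sketch (in Python, standing in for the Perl actually used, with made-up column labels) that exploits the fact that JSON is valid JavaScript:

```python
# Sketch of emitting the JavaScript array for the MagicTable HTML page: a
# header row followed by one row per student blog. json.dumps does the
# heavy lifting, since JSON is also valid JavaScript. The labels and
# variable name are assumptions, not what the real page uses.

import json
from datetime import date

def magic_table_js(students, days, counts, var_name="data"):
    """Return a JavaScript declaration of the table data, ready to paste
    into the visualisation's HTML page."""
    header = ["Student"] + [d.isoformat() for d in days]
    rows = [[s] + row for s, row in zip(students, counts)]
    return "var %s = %s;" % (var_name, json.dumps([header] + rows))

js = magic_table_js(["s1"], [date(2013, 7, 1)], [[2]])
```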

What’s missing/wrong from this view?

  1. The first few columns don’t seem to be getting set correctly – NaN?
  2. Need more signposting of the various semester dates. e.g. when are the 3 assignments due? When’s the Professional Experience period? When are the semester holidays?

    On the HTML page it’s possible to see a pattern where there are fewer posts later in the semester. As it happens, the last 5 weeks of the teaching period for this semester included 2 weeks of holidays and 3 weeks of Professional Experience (where the students are out teaching in schools).

  3. Include some indication of student results – e.g. final grade, GPA, assignment result or even the result for the learning journal (the marks given for the blog).

    This example of a magic table uses the idea of column/row headers which could be useful in doing this.

Next step

If I move directly to generating the Javascript from Perl for magic table it appears I have a bit more control. I can set a row and column header so that you can see some detail about the column (e.g. the date) and row (e.g. GPA, student name, grade etc). I have that working. Now I need to get some extra student data into the database.

It is moments like this that I detest working in an environment where there is no decent data infrastructure, having to waste time fixing up some of the data I’m working with.

Ok, now let’s sort by grade and here is the finished product. The page itself offers some more description of what is shown.

I don’t have the time to explore what’s going on here in detail. So, some quick observations, none of which tell you very much without further exploration

  • There is an early student with “no grade” (indicating a withdrawal from the course) who apparently posted consistently throughout the semester.
  • All of the other “no grade” students don’t post after the 15th August (when the first assignment is due).
  • There’s a group of 5 students with the lowest passing grade who blogged consistently throughout the semester but started later than other groups of students.
  • There are a few students with the top grades who are quite obviously blogging just before the assignment due dates.

However, it does reinforce an existing opinion that the blogging in this course is not yet introduced sufficiently well for the students to get real value from it.

Semester 1

The semester 1 offering of the course had three times as many students and was the first time the blogs were used in this course. The semester 1 offering also included 3 groups of on-campus students as well as online students (semester 2 was only offered online). I’m thinking that the semester 1 map should show evidence of more “pragmatic” blogging.

Let’s find out.

A rough first run at the semester 1 view.

Some quick comments

  • The magic table widget breaks down with a table this size. It’s difficult to see the row/column labels as the page scrolls.
  • At least for me, you can see the shape of the semester. Assignment 1 due – break. Assignment 2 due – Professional Experience (no blogging), then assignment 3 due.
  • There’s at least one C student who appears to be blogging just about every day and often more than one post. What’s with that?
  • There’s a group of C students who appear to be very pragmatic.
  • There seems to be a general increase in posting regularity and intensity as you get to the HD students.

    At least this is how it looks to me.

It would be interesting to explore further. Does mode of study in semester 1 make a difference? Amongst many more questions.

What’s next?

Wonder how difficult this would be to incorporate into BIM in a usable way?

At this stage, I’m thinking I’ll produce a couple of these visualisations during the coming semester for students to think about where they are with their practice.

It may also prove an interesting indication of how changes in the scaffolding and support of the blogging process work out.

The visual representation is interesting, but needs easier ways to manipulate and explore the data more. e.g. some representation of the size, content, number of links, and quality (?) of the blog posts etc.

Reflective Blogging as part of ICT Professional Development to Support Pedagogical Change

I am planning to do some more work on BIM in preparation for using it in teaching this year, including finishing some analysis of how the blogging went in last year’s two offerings.

As luck would have it, I skimmed one of my feeds and came across Prestridge (2014). What follows is a summary and some thoughts. It’s nice to be reading an open access journal paper after a few very closed off articles.

Aside: I am wondering whether or not, in the new world order, being someone who reads feeds and has students blog is becoming somewhat old fashioned.


The abstract for Prestridge (2014) is

Reflection is considered an inherent part of teacher practice. However, when used within professional development activity, it is fraught with issues associated with teacher confidence and skill in reflective action. Coupled with anxiety generally associated with technological competency and understanding the nature of blogging, constructive reflection is difficult for teachers. This paper focuses on the reflective quality of school teachers’ blogs. It describes teachers’ perceptions and engagement in reflective activity as part of an ICT professional development program. Reflective entries are drawn from a series of blogs that are analysed qualitatively using Hatton and Smith’s (1995) three levels of reflection-on-action. The findings suggest that each level of reflective action plays a different role in enabling teachers to transform their ICT pedagogical beliefs and practices. Each role is defined and illustrated suggesting the value of such activity within ICT professional development, consequently reshaping what constitutes effective professional development in ICT.

This appears to be relevant to what I do as the course I teach is titled “ICTs and Pedagogy” and reflection through blogging is a key foundation to the pedagogy in the course. Of course, this appears to be focused on in-service, rather than pre-service teachers.


In the Australian education context various government policies illustrate that ICTs are important. There’s a move to 1-to-1 student/computer ratios. However, “success with regard to technology integration has been based on how extensive or prominent the use of it has been in schools rather than on whether the teacher has been able to utilize it for ‘new’, ‘better’, or more ‘relevant’ learning outcomes (Moyle, 2010)” (Prestridge, 2014). Suggests a need to “reconceptualise both the intentions and approaches to professional development” if there’s going to be an ROI on this government investment and if we’re to help teachers deal with this.

PD is “an instrument to support change in teacher practice”. A long-held view is that PD should move from “up-skilling in the latest software” to a deeper approach that focuses on pedagogy and context rather than technology; building teachers’ confidence in change; development of teachers’ metacognitive skills; and a philosophical re-visioning of ICT in learning (references attached to each of these). References work by Fisher et al (2006) as requesting “a cultural change in the teaching profession”, the principles of which need to be “activated within ICT professional development if we are going to move from retooling teachers to enabling them to transform their practices”.

Note: I wonder how well this academic call for a cultural change matches the perceptions of teachers and the organisations that employ them? I have a feeling that some/many teachers are likely to be more pragmatic. As for the organisations and the folk that run them….

And now onto the importance/relevance of reflection to this. Schon gets a mention. As does the action research spiral, teacher-as-researcher, inquiry based professional development, reflective action, Dewey. Leading to research suggesting “that reflection brings automatic calls in the improvement of teaching” and other work suggesting there’s a lack of substantive evidence.

This paper aims to investigate “the role of written reflection as a central process in a ‘hearts and mind’ approach to ICT professional development”.

Note: The mix of plural and singular in “hearts and mind” is interesting/a typo in this era of standardised outcomes/curriculum and increasing corporatisation.

Methods to framing the research

Background on a broader ARC funded project that aims to develop “a transformative model of teacher ICT professional development”. With “one or two teachers” volunteering from each school it would appear to suffer the problem of pioneers versus the rest. Teachers engaged in classroom inquiries, in particular the “implementation of an ICT application in regard their pedagogical practices and student learning outcomes”. Supported through a local school leader, outside expert, online discussion forum and personal blogging.

Has a table that lists the inquiry questions of the 8 teachers. Questions range from “How can students be supported when creating an electronic picture book using the theme ‘Program Achieve”?” to “What strategies need to be employed to promote effective/productive ICT practices that encourage intellectual demand and recognise difference and diversity?”

Teachers were encouraged to blog after teaching. Provided with a framework for reflecting after teaching (5R framework). Weekly blog mandatory. School leaders asked to encourage blogging.

This work focuses on

  1. teachers’ perceived value of the reflective activity

    Data from teachers’ final interviews and reports analysed using constant comparative method

  2. the role of written reflection in enabling change in pedagogy

    Blog posts analysed with Hatton and Smith’s (1995) three types of writing: descriptive reflection; dialogic reflection; and, critical reflection.


  • 1 teacher reflected consistently across the year of implementation
  • 4 teachers had spasmodic entries, mostly at the beginning
  • the other teachers’ writing could be seen as simply record keeping

Finds similar results in other use of reflective blogs and suggests that “teachers’ lack of understanding on how to reflect limits their reflective writing abilities”.

Note: Not a great result perhaps, but not entirely unexpected. Might get some idea of this from my students’ posts in 2013 later today.

Perceived value of reflective writing

Not surprisingly, the “consistent reflection” person liked blogging. Others didn’t.

A major theme on the value was “a lack of understanding on how to reflect”.

Note: I have a feeling this may be one factor for my students. Though I wonder how much of a role pragmatism plays, and especially the fact that reflective blogging falls outside the realm of standard practice for many.

“What to write in the reflective blog and then what to do with these reflections were issues raised by the teachers”

Note: Raising the issue of BIM being better at providing “prompts” to students.

Ahh, a quote from a participant brings back the “realm of standard practice” issue

I think because this inquiry thing was such a different way of doing things I’ve ever done before, it took me a while to get fair dinkum about it. I still couldn’t get the blog…..that’s one positive that’s come out of it because if I were asked to do something like this again then I would do it much more readily.

Picks up on the idea of “reflection as description” through a number of quotes. An apparent lack of priority given to analysing what had happened and going beyond description.

This is even though the teachers were given the 5R framework and a range of questions/prompts in project documentation and comments by the outside PD expert.

Note: Given this difficulty in understanding how to write reflectively, what impact does it have on the next part of the paper “examining the role of written reflections to identify how reflection supports teacher change in pedagogy”?

The role of reflective writing

The obvious “solution” is to focus on the 1 teacher who consistently blogged, generating the problem of a sample of 1.

The posts were analysed in chronological order. Emphasis on linking the type of reflection and the role it plays in “improving and or supporting teachers in transforming the beliefs and practices”.

The most common type of reflection is descriptive, which really isn’t reflection. But this descriptive reflection “provides a leverage for dialogic reflection” which may or may not be pursued. As it turns out, generally not chosen. Only when a critical friend provides some additional prompting does it appear.

When it did occur, it helped shape the teacher’s pedagogical beliefs and practices. Descriptive reflection made conscious the connection between pedagogical beliefs and actual practice, but more as a justification.

Only spasmodic evidence of critical reflection.

Suggests that data supports the conclusion that there’s a developmental sequence to reflection. Start with descriptive and then the more demanding forms emerge.

The role played by each type of reflection in transforming pedagogical beliefs and practices

  1. Descriptive reflection – a connector, making conscious the links between pedagogical beliefs, current teaching practices, and student learning outcomes.
  2. Dialogical reflection – the shaper, where the connections were examined and explored, enabling transformation.
  3. Critical reflection – a positioner, placing the role of the teacher in the broader context and critically evaluating that role.


If how to reflect in written form is understood, then “reflective action plays a significant part in enabling them to change their pedagogical beliefs and practices”. Each type of reflection plays a different role.

A lack of guidance and support were found to affect reflective action.


Prestridge, S. J. (2014). Reflective Blogging as part of ICT Professional Development to Support Pedagogical Change. The Australian Journal of Teacher Education, 39(2).

Challenges in employing complex e-learning strategies in campus-based universities

The following is a summary of McNaught et al (2009). This is one of three papers from the same institution, around the LMS, that I’ve looked at recently.

The abstract for the paper is

Despite the existence of a significant number of established interactive e-learning tools and strategies, the overall adoption of e-learning is not high in many universities. It is thus important for us to identify and understand the challenges that face more complex e-learning projects. Using a qualitative method that gathered together the reflections of experienced practitioners in the field, this paper outlines many types of challenges that arise in the planning and development, implementation and evaluation stages of e-learning projects. Some of these challenges are related to human factors and some are associated with external factors such as technological infrastructure, university policy and support and the teaching and learning culture as a whole. A number of models are presented to assist our understanding of this situation – one on understanding the nature of innovation, a grounded model of the challenge factors we have encountered in our own experience and one to show possible future directions.

The paradox of e-learning

Lots of e-learning conferences are full of presentations about digital resources and tools. But the reality of institutional adoption of e-learning is very different. “..this paper was born out of a desire to ‘come clean’ and see if we can advance e-learning from its often mundane position as the repository for lecture notes and PowerPoints” (McNaught et al, 2009, p. 268).

The context of campus-based universities

The cases arise from campus-based universities in Hong Kong, though “we believe our ‘story’ is more generally applicable” (p. 268). The authors do suggest that the “dynamic of distance universities are quite different” given that distance may provide more of an incentive for better e-learning strategies.

Note: I really don’t think that the distance dynamic plays much of a role at an overall level. There is perhaps more thought, but I wonder how much of that translates into action?

Even writing in 2009, the authors suggest that most of the success stories arise from pioneering teachers. The early adopters. References a 1998 paper by Anderson et al as support. Gives some statistics from CUHK from 2004 to show limited use. This draws on some of the previous papers.

More interactive uses of technology are often “development-intensive and/or administrative-intensive. They require teachers to spend a great deal of time in planning and creating the online resources, and then usually sustained significant effort in monitoring and regulating the online strategies while they are in progress”. Cites Weaver’s (2006) challenge to “encourage quality teaching practices…that seamlessly integrates the technical skills with the pedagogical and curricular practices… and does not promote transfer of existing poor teaching practices to the online environment”

Examples of unsuccessful complex e-learning

Appears to draw on literature to give examples of complex e-learning projects that failed in various stages of the development process

  1. During development – “getting it finished” – Cheng et al (2006).
  2. “Getting it used”

A model to show why innovation is challenging

Going to present two ways of “representing the challenges that face innovation and change – in this case we are considering a complex interactive use of e-learning in campus-based universities”.

The first is the J Curve. i.e. things will get worse before they get better “because of the expenses and challenges that occur early on in the innovation cycle”.

Note: But like much of the innovation literature this simplification doesn’t capture the full complexity of life, innovation and e-learning. If innovation is in something that is rapidly changing (e.g. university e-learning) then is there ever an upward swing? Or does the need for a new innovation – and another downward spiral – occur before you get the chance to climb out of the trough? For example, does the regular LMS migration phase in most universities (or the next organisational restructure) prevent any ability to climb up the J curve?

The second is the S curve (a related representation below). i.e. diffusion occurs through innovation, growth and maturity. With the transition from “innovation” to “growth” phase being the most important. And it’s hard

Leading innovation through the bottom of the J-curve or through the transition from ‘innovation’ to ‘growth’ in the S-curve is not easy as this process often requires people to rethink their beliefs and reformulate their ways of working; this is difficult. (p. 271)

Now brings in Lewin’s ideas about conceptual change process as a way of thinking about the challenge of changing beliefs and practices (a model the authors have used previously). This process has three stages

  1. “a process for diagnosing existing conceptual frameworks and revealing them to those involved;”
  2. a period of disequilibrium and conceptual conflict which makes the subject dissatisfied with existing conceptions; and
  3. a reforming or reconstruction phase in which a new conceptual framework is formed” [Kember et al. (2006), p.83]

Note: A few years ago I expressed some reservations about the applicability of Lewin’s model. I think they still apply.

To some extent this quote from the authors gets at some of my reservations about this perspective on encouraging change with e-learning (emphasis added): “The process of demonstrating to teachers that there might be a better way forward with their use of e-learning requires evidence and this is why evaluation is so critical” (p. 271).

The assumption here is that there is a better way for the teacher to teach. We – the smart people who know better – just need to show them. Given the paucity of quality technology available within universities; the diversity of teachers, students and pedagogies; and the challenge from Weaver above I wonder if there is always a better way to demonstrate that is – to employ some Vygotsky – within the Zone of Proximal Development of the particulars of the learning/teaching context?

The authors’ model for understanding the challenges facing e-learning, innovation and change is

  1. An understanding that change is not easy and always meets resistance (J-curve).
  2. An appreciation that there will be no significant gains unless significant numbers of teachers begin to adopt the innovation – in this case, complex e-learning (S-curve).
  3. A suggestion that the process of implementation should model the three stages of the conceptual change process. Evaluation is integral to this process.

Note: Are people all that change averse? Sure, we are/can be creatures of habit. However, when it comes to e-learning and that sort of “innovation” the change is often done to students and staff, rather than with them. i.e. the particular tool (e.g. a new LMS) or pedagogical approach (e.g. MOOC, flipped classroom etc) is chosen centrally and pushed out. Systems that are developed with people to solve problems or provide functions that were identified as needed are different (I think).

Note: I find #1 interesting in that it takes the J-Curve to suggest that there will always be resistance. From their introduction to the J-Curve the point seems to be that innovation brings challenges and expense that mean ROI will drop initially. This doesn’t seem to be about resistance.

Note: #2 is also interesting. The requirement that there be significant levels of adoption prior to significant gains arising from an innovation is a problem if you accept that good quality L&T is about contextually appropriate approaches. The sheer diversity of learners, teachers etc – not to mention the rapid on-going change – suggests that this model of “significant gains == significant levels of adoption” may not fit with the context, or at least may cause some problems.

The study

A qualitative method was used to collect the reflections of practitioners in the field “regarding the challenges in the various stages of development and use of complex e-learning strategies”. Of the 5 authors, 3 were from central L&T and 2 were pioneering teachers.

Note: It would appear that the sample is biased toward the “innovators”; involving other folk may have revealed very different insights.

Three sources of data

  1. Detailed interviews with teachers and programmers, plus analysis of email communication logs for projects that were never implemented.
  2. Publications about the work of one of the authors.
  3. Similar from another author.


Iterations of reflection and discussion led to a table of challenges.

| Stage | Teachers | Students | Supporting staff | Technology/Environment/Culture |
|---|---|---|---|---|
| Planning and development | Limited time and resources; necessity of new skills; miscommunication; different perception of tasks with support team | | Miscommunication; different perception of tasks with teachers; limitation in resources and expertise; idiosyncratic nature of development | Restrictions in university resources and support; technology being inflexible; idiosyncratic nature of development |
| Implementation | New to strategies | Unwillingness to learn differently; new to strategies | | Sustainability |
| Dissemination | Unwillingness to share; unmotivated to learn new technologies; strategies do not match teaching styles | | | Contrary to existing T&L practice |
| Evaluation | Lack of cases | Lack of appreciation | | Question about effectiveness |

These are elaborated more with examples.


Taking the four sources of challenges from the above table, the authors propose the idea of a “mutual comfort zone” (MCZ). An e-learning project needs all of the factors within this MCZ for it to be successful. The paper illustrates this with the obligatory overlapping circle diagram.

Cases of successful complex e-learning strategies, thus, seem to be limited to the instances when all the factors noted in Figure 4 work in unison. It is therefore easy to see why successful cases of complex e-learning are not all that common and are restricted to highly motivated pioneering teachers who are comfortable with innovative technologies and may also be in an innovation-friendly environment. (p. 281)

Note: resonating with the mention of ZPD above.

Becoming more optimistic, the authors suggest the future is promising because:

  • The tools are getting more “e-learning friendly”.
  • LMSs are “now more user-friendly and more flexible”; the authors make mention of open source LMSs like Moodle.

    Note: But doesn’t more flexibility bring complexity?

  • Teachers now have better IT skills and want to use technology.
  • Supporting services are improving based on accumulated experience.

    Note: I wonder about this. Organisational restructures and the movement of people aren’t necessarily helping with this. I can point to a number of situations where it has gone the other way.

  • Institutions are adopting e-learning, so the policy problem is solved.

    Note: Assumes that the policy is done well and actually can and does have an impact on practice. I’m not sure those conditions are met all the time.

Given all this “E-learning might then reach a critical mass and so that e-learning will progress beyond the valley bottom of the J-curve and will start climbing the growth phase in the S-curve”.

I wonder if this is evident? This links very nicely with some of the ideas in my last post.


McNaught, C., Lam, P., Cheng, K., Kennedy, D. M., & Mohan, J. B. (2009). Challenges in employing complex e-learning strategies in campus-based universities. International Journal of Technology Enhanced Learning, 1(4), 266–285.