Evaluating EDC3100 in 2013 – step 1

It’s time to reflect on what happened in EDC3100, ICTs and Pedagogy, last year. Semester starts in another couple of months, and in a couple of weeks I’m scheduled to talk with some of the teaching staff. The following is the first of a few posts that will serve as the basis for discussions with other members of the teaching staff, as an artefact to show the 2014 students some evidence of thought and rationale behind the course they’re taking, and perhaps to spark some ideas from you, the reader.

This post provides some background on the course, makes explicit my current perceptions and biases, and takes a first look at the formal student evaluation data.

Background

The course in question is taken by 3rd year students undertaking a Bachelor of Education (i.e. pre-service teachers, who are the vast majority of students in the course) and also by a small number of students studying a Bachelor of Vocational Education and Training (the tension between the requirements laid down by the accrediting bodies for school teachers and the needs of the VET students is one of the difficulties in this course). The aim of the course is for students to develop the ability to use ICTs to improve their teaching and their students’ learning.

I’ve now taught the course for two years. In 2012 I taught the course as developed by someone else; given my ignorance of the course, I wasn’t going to stray far from that design. For 2013 I was able to redesign the course into something closer to what I prefer. However, a range of issues meant that almost all of the redesign occurred as the first semester 2013 offering progressed. This was far from optimal, as shown below. The second semester 2013 offering was a tweaked version of the redesign and was seemingly significantly more successful.

The course is offered twice a year (semesters 1 and 2), with semester 1 being the larger offering. Semester 1, 2013 had approaching 300 students spread amongst three different campuses, a Malaysian partner, and online. The online cohort (n=180) was the largest. I was responsible for the online students and those at the Toowoomba campus. Two other teaching staff were responsible for the students at the two other campuses and at the Malaysian partner. The semester 2 offering is online only and in 2013 had around 100 students.

The aim for 2014 is to make only minor revisions: to build on what is there and improve it. Workload (we’re only meant to do major revisions every 2-3 years) and expectations around research outcomes are among the reasons for this. But another important reason is to avoid throwing away the knowledge and resources gathered during the 2013 offerings. For example, I now have a collection of very good sample assignments from the 2013 students. These and other insights from 2013 are valuable, and a major revision would throw them out. It would also mean a third year in a row of dealing with what is essentially a new course.

On the other hand, the course synopsis and the like date from before my time and I’m not a big fan (of course I’m not, I didn’t write them). That, along with the learning outcomes, should perhaps be revisited.

Bias and perceptions

Before I start, I’d like to make explicit my current perceptions of the course. These are based on having developed and taught it, having skimmed some of the student feedback, and having talked with other staff and students. I’m making my current perspectives explicit to challenge myself to think differently once I’ve taken a more in-depth look at the course.

Semester 1 bad, Semester 2 better

Semester 1 was problematic, mostly because the assessment and the specific weekly tasks were being developed during the semester. This created confusion amongst staff and students, prevented students from being able to plan ahead, and created some of the workload issues. Semester 2 was more successful because everything was available up front and the insights generated during semester 1 led to some minor improvements.

Having taught both semesters, this was evident to me. It also shows up in the student evaluations. For example, the chart that follows is the first from the Semester 2 students (remember, these are online only). It shows comparative means for the standard 8 questions (see the complete view PDF for the actual questions) for the class, the course, and the faculty. In this chart the class (online students) and course means are the same, because all students are in the online “class”.

[Chart: Semester 2 EDC3100 Comparative Means, by David T Jones, on Flickr]

All of the means for this offering are above the Faculty (of Education) mean; all are above 4, and most are above 4.5.

Compare this with the means from the Semester 1 online students. I’m using the online students from semester 1 as they are the group most comparable to the Semester 2 students (all online). As it happens, the semester 1 online students were also the cohort that liked the course the most.

[Chart: Semester 1 online students EDC3100 Comparative Means, by David T Jones, on Flickr]

A very different story. Only two of the class means are above 4, and only just. Note that the course means are significantly below the class means; the online students in semester 1 were the happiest of the cohorts (i.e. the on-campus students ranked the course much worse). Most of the class means (online students) were above the faculty mean, but not SEC02 (I had a clear idea of what was expected of me in this course) and SEC05 (I found the assessment in this course reasonable). The “in-semester development” of the course was a problem.
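As an aside, this kind of comparison is simple to reproduce outside the evaluation system. The following is a minimal sketch in Python of the type of chart above. All of the means here are made up for illustration (the real values live in the evaluation system’s PDFs), and it assumes the items use a 5-point scale:

```python
# A sketch of the class/course/faculty comparison shown in the charts
# above. All means are hypothetical; the real values come from the
# institutional evaluation system. Assumes a 5-point rating scale.
import matplotlib.pyplot as plt
import numpy as np

items = [f"SEC{i:02d}" for i in range(1, 9)]  # the 8 standard questions
class_means = [4.6, 4.1, 4.7, 4.5, 4.2, 4.6, 4.8, 4.5]   # hypothetical
course_means = class_means                                # online-only: class == course
faculty_means = [4.3, 4.2, 4.4, 4.3, 4.3, 4.4, 4.5, 4.3]  # hypothetical

x = np.arange(len(items))
width = 0.28
plt.bar(x - width, class_means, width, label="Class")
plt.bar(x, course_means, width, label="Course")
plt.bar(x + width, faculty_means, width, label="Faculty")
plt.xticks(x, items)
plt.ylim(0, 5)
plt.ylabel("Mean rating")
plt.legend()
plt.title("Comparative means (illustrative data only)")
plt.show()
```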

Workload and assignments

Workload for the course remains high for the students, even in semester 2. This is due to a number of factors, including the assessment and the weekly activities. It is, however, also related to the students being challenged: many are not highly experienced with ICTs and come in with a perception of being ICT-challenged, and yet the course expects them to make heavy use of ICTs for their learning as well as their teaching. The inclusion of 15 days of Professional Experience – where students are expected to teach – doesn’t help this perception.

Duplication for on-campus students

Part of the workload issue for on-campus students is the apparent need to both complete the learning path and attend on-campus classes. The learning path is the series of activities and resources students need to complete each week (some early origins of this idea are described here). Completion is tracked by the LMS and students get a small percentage of the overall course mark for completing this work.

The trouble is that on-campus students also expect to attend the on-campus classes. They then see the expectation to complete the learning path and, not surprisingly, conclude that they have to do twice the work they would normally have to do.

They don’t get the blogs or reflection

Students are expected to blog as part of the assessment. Most don’t see the point and see it as part of the high workload. Part of this is the expectation to use the blogs for reflection and for building a Personal Learning Network (PLN). The 2013 course didn’t do a good job of scaffolding this.

Absence of a search engine

The learning path doesn’t include a lot of content. Instead, it tends to give some context, point to other resources, and have integrated activities; i.e. more of a study-guide approach. However, even with limited content, students often find themselves later in the semester wanting to revisit a prior topic. Given that the learning path is hosted on the Moodle course site and there is no search engine, this requires reliance on memory or manual searching, which frustrates students.

In writing this, I’m thinking I may not do anything about it. The obvious addition of a search engine would require institutional and technical support, which is unlikely to happen quickly, if at all. I could combine the learning path into a single document that the students could search manually. The problem is that this would lose the activity completion “ticks” that show students what they’ve completed (these drew positive comments) and are used for assessment.

The real reason for not doing something is that one of my aims for the course is to encourage the students to develop the mindset of solving their own ICT problems; of seeing ICTs as a source of solutions to their problems, rather than a cause of them. Developing this mindset is, for me, one of the most important enablers for using ICTs well. So this problem becomes something for them to solve for themselves using whatever ICT tools they can find.
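For completeness, here is a rough sketch of what the “single searchable document” option might look like, though as I said I’m inclined to leave this one to the students. The URLs, the session cookie, and the content region id are placeholders and assumptions about a standard Moodle 2.x page layout, not a tested recipe:

```python
# A sketch of combining learning path pages into one HTML document that
# can be searched manually (Ctrl-F). URLs and the session cookie are
# placeholders; a real Moodle site requires authentication, and the
# page list would come from the course site itself.
import requests
from bs4 import BeautifulSoup

page_urls = [
    "https://moodle.example.edu/mod/page/view.php?id=1001",  # hypothetical
    "https://moodle.example.edu/mod/page/view.php?id=1002",  # hypothetical
]
session = requests.Session()
session.cookies.set("MoodleSession", "PLACEHOLDER")

parts = []
for url in page_urls:
    html = session.get(url).text
    # "region-main" is the main content region in standard Moodle themes
    content = BeautifulSoup(html, "html.parser").find(id="region-main")
    if content:
        parts.append(str(content))

with open("learning_path_combined.html", "w") as f:
    f.write("<html><body>" + "\n<hr>\n".join(parts) + "</body></html>")
```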

Limited creativity in using ICTs

Many students tended to base their teaching while on prac on the ICTs we used in the course. This is not surprising (it’s human nature), but it suggests we didn’t make the point strongly enough that context matters: the ICTs you’d use in a university course with 250 students spread over large geographic distances are not the ICTs you’d use in a class of twenty 6-year-olds.

Limited use of theories

A contribution of this course is providing the students with a range of theories, frameworks, literature etc. to guide their use of ICTs in their teaching. With some exceptions, this is done far from well, and often as post-hoc rationalisation (i.e. plan the lesson and then find some literature/theory to explain the design).

The site structure works

There were lots of negative comments from the Semester 1 students about being lost. There is a question of how much this perception was affected by the material not being available early.

The semester 2 students were more positive. For example:

The structure of the Studydesk – David please teach all other lecturers to structure their studydesks like yours!

Student evaluations

The institution has a formal end-of-semester process for the evaluation of teaching. It’s a web-based process that is available from near the end of teaching to near when results are released. Staff only see the responses after results are released.

The charts above are captures from the output provided by the evaluation system. The following provides some more background on this data. More in-depth analysis will come in later posts.

The following tables provide a summary of the end-of-course student evaluations. These take the form of a handful of standardised questions, plus some optional standardised questions, both closed and open. Most of the response rates for the different cohorts are quite high, in comparison to some I’ve seen.

There is also evidence of the standard student confusion over which course they are commenting on. For example, one response to an open question identified the “Breakdown of how to tackle the Webquest” as one of the most helpful/effective aspects of the course. The trouble is that Webquests are actually a key part of another course that many of the EDC3100 students take at the same time.

I have included links to PDFs of the responses for each student cohort where I am the primary teacher involved. I’m reluctant to provide those where other teaching staff were involved, as I’m worried about stepping on toes (I haven’t talked with the other folk about this yet). I will note, however, that during my initial skim of the responses, the only mentions of other staff by name are of the form “Feedback from my tutor (name removed) was fantastic (5/5)” or “Once again, (name removed) worked hard but was hampered with inadequate help from Toowoomba” – I’m the source of the Toowoomba inadequacy.

Semester 1, 2013

Cohort         n    %
Online         62   33%
Toowoomba      16   27%
Springfield    24   41%
Fraser Coast   11   58%

Semester 2, 2013

Cohort         n    %
Online         42   42%
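As a quick sanity check on those tables, the response rate and n together imply the size of each cohort (approximately, given rounding in the reported percentages). A trivial calculation:

```python
# Implied cohort sizes from the tables above: n responses / response rate.
# Approximate only, since the reported percentages are rounded.
cohorts = {
    "S1 Online": (62, 0.33),
    "S1 Toowoomba": (16, 0.27),
    "S1 Springfield": (24, 0.41),
    "S1 Fraser Coast": (11, 0.58),
    "S2 Online": (42, 0.42),
}
for name, (n, rate) in cohorts.items():
    print(f"{name}: ~{round(n / rate)} students")
# S1 Online comes out at ~188, roughly matching the stated n=180 online
# cohort; S2 Online comes out at 100, matching "around 100 students".
```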

Interestingly, according to Table 9.1 from this document (Franklin, 2001), none of the above response rates meets the recommended level given the number of students.

Franklin (2001) also has this interesting tidbit:

New or revised courses frequently get lower-than-expected ratings the first time out. This may be very important if you have been active in developing or revising a course for which you do not have abundant ratings. New courses may take time to work the bugs out, and consequently you may have lower ratings than usual

What’s next?

  1. Analyse the student responses to the open questions.

    Due to another research project I need to become familiar with NVivo. Using NVivo to analyse the student responses to the open questions seems a good way to become familiar with the tool, and it may reveal some interesting insights. Two birds, one stone.

    Of course, a barrier to this is the difficulty of getting access to the student responses in a raw format to feed into NVivo (a sketch of the sort of wrangling this involves appears after this list).

  2. Revisit what the course should be covering.

    The Bachelor of Education program that contains this course is currently undergoing an accreditation process. With the new national curriculum (and perhaps very soon the re-jigged new national curriculum) there are some requirements to revisit exactly what is being taught. For example, there was a strong recommendation last year that Interactive White Boards be explicitly covered (I’m not saying that what “should” be taught in the course aligns with what I think should be taught).

    Beyond this I have some thoughts about what the course is missing and has too much of.

  3. Take a look at the current course.

    I actually need to go back and look through the course, the site, the activities etc and see what is ok and what needs to be improved.

  4. Play with some analytics.

    As part of the last point, and also because of another project, I’d like to play a bit more with some analytics to look at what happened in the course and on the course site. This is something I’ve already started with this post trying to visualise blog post frequency (a sketch of that sort of analysis appears after this list).

  5. Talk with other teaching staff.

    By about this stage, I’m hoping the above work will give a good foundation for more discussions with other staff.

  6. Generate some ideas.

    Obviously I already have some ideas (e.g. doing something with badges could be a good fit, but is unlikely), but hopefully all of this will identify some more.

  7. Make some plans.

    Lastly, the ideas will need to be reduced to something doable.
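On the analytics in point 4, the following sketches the sort of blog post frequency visualisation I have in mind. It assumes a hypothetical CSV export of the students’ blog posts; the filename and column names are my invention, and in practice the data would be aggregated from the students’ feeds:

```python
# A sketch of visualising how often students posted to their blogs over
# the semester. Assumes a hypothetical CSV with one row per post and a
# "post_date" column; the real data would come from the blog feeds.
import pandas as pd
import matplotlib.pyplot as plt

posts = pd.read_csv("blog_posts.csv", parse_dates=["post_date"])
weekly = posts.set_index("post_date").resample("W").size()

weekly.plot(kind="bar")
plt.ylabel("Blog posts per week")
plt.title("EDC3100 student blog post frequency (sketch)")
plt.tight_layout()
plt.show()
```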

Now to go learn more about NVivo.
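Speaking of which, point 1 will need some data wrangling before NVivo enters the picture. A minimal sketch, assuming the raw responses can be exported as a CSV with question and response columns (an assumption; the real export format will differ) and relying only on NVivo’s ability to import plain-text documents:

```python
# A sketch of preparing open-ended evaluation responses for NVivo.
# Assumes a hypothetical CSV with "question" and "response" columns;
# writes one plain-text document per question, with responses separated
# by blank lines so each can be coded individually in NVivo.
import csv
from collections import defaultdict
from pathlib import Path

by_question = defaultdict(list)
with open("open_responses.csv", newline="") as f:
    for row in csv.DictReader(f):
        by_question[row["question"]].append(row["response"])

out = Path("nvivo_import")
out.mkdir(exist_ok=True)
for question, responses in by_question.items():
    # build a filesystem-safe filename from the question text
    safe = "".join(c if c.isalnum() else "_" for c in question)[:50]
    (out / f"{safe}.txt").write_text("\n\n".join(responses))
```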

References

Franklin, J. (2001). Interpreting the Numbers: Using a Narrative to Help Others Read Student Evaluations of Your Teaching Accurately. New Directions for Teaching and Learning, 2001(87), 85–100.

Lewis, K. G. (2001). Making Sense of Student Written Comments. New Directions for Teaching and Learning, 2001(87), 25–32.

Wongsurawat, W. (2011). What’s a comment worth? How to better understand student evaluations of teaching. Quality Assurance in Education, 19(1), 67–83. doi:10.1108/09684881111107762

5 thoughts on “Evaluating EDC3100 in 2013 – step 1”

  1. Pingback: Getting started with NVivo | The Weblog of (a) David Jones

  2. Pingback: Analysing some course evaluation comments | The Weblog of (a) David Jones

  3. Hi David,
    A comprehensive start. Don’t forget that while the staff were often as mystified as the students in semester 1, speaking for myself, we were a really united and excited team. It’s a much better course than previously provided and challenging students to reach higher peaks is something we should do, and you do, very well.
    I think one of the refinements is to revisit the blog entries. My concern is that we are not analysing the quality of the entries by only assessing the frequency of entries.
    Also, for the on campus tutorials at Springfield, I tried to build them around the Learning Path so it wasn’t doubling up, but rather leading them through it…or following them through it as the case turned out sometimes!
    That’s my start on thinking about 2014.

  4. Pingback: What should be covered in EDC3100? | The Weblog of (a) David Jones

  5. Pingback: The formal course evaluation process | An experiment in Networked & Global Learning
