Competence with digital technology: Teacher or environment?

Apparently there’s a problem with digital skills in Australian schools. Only 52% of Year 10 students achieved a minimum standard of digital competence, and the teachers tasked with helping develop that competence feel they aren’t competent themselves. Closer to home, I’ve previously pointed out that the pre-service teachers I work with are far from digital natives seamlessly harnessing digital technologies to achieve their learning, teaching, and life goals.

Given the perceived importance of digital competence, something must be done. Otherwise “we run the real risk of creating a generation of digitally illiterate students”.

But what?

McLeod and Carabott suggest

explicit teaching of digital competence through professional development for teachers. This is also important in teacher education programs…

digital competence tests should also be required for teacher registration

What do I think of those suggestions?

Well, they certainly have the benefit of being familiar to those involved in formal education, expanding as they do on existing ideas of testing teachers.

But I’m not sure that’s a glowing recommendation. There’s an assumption that those familiar practices are working and should be replicated in other areas.

Limited views of knowledge – blame the teacher

Beyond that, they seem to be based on a fairly limited view of knowledge. Di Blas et al. (2014) talk about the knowledge required to integrate digital technologies into teaching as having

consistently been conceptualized as being a form of knowledge that is resident in the heads of individual teachers (p. 2457)

It’s the type of view that sees the problem of a perceived lack of digital competence as something to be fixed only by filling the heads of teachers with the necessary digital competence, and then testing whether or not those heads have been filled appropriately. If they haven’t, then it tends to be seen as the teacher’s fault.

The limitations of this view mean that I don’t think any approach based on it will be successful. (After all, a deficit model is not a great place to start.)

A distributive view

In this paper (Jones, Heffernan, & Albion, 2015) some colleagues and I draw on a distributive view of learning and knowledge to explore our use, as teacher educators, of digital technologies in our learning and teaching. Borrowing and extending work from Putnam and Borko (2000), we see a distributive view of learning and knowledge focused on digital technologies as involving at least four conceptual themes:

  1. Learning/knowledge is situated in particular physical and social contexts;
  2. It is social in nature;
  3. It is distributed across the individual, other people, and tools; and
  4. Digital technologies are protean.

How does this help with the digital competence of school students, teachers, and teacher educators? It suggests we think about what these themes might reveal about the broader context within which folk are developing and using their digital competence.

Schools and digital technologies

Are schools digitally rich environments? Each year I teach about 400 pre-service teachers who head out into schools on Professional Experience for three weeks. During that time they are expected to use digital technologies to enhance and transform their students’ learning. As they prepare for this scary prospect, the most common question from my students is something like

My school has almost no (working) digital technologies. What am I going to do?

Many schools are not digitally rich environments.

Where schools do have digital technologies, those technologies are often seen in ways that mirror reports from Selwyn and Bulfin (2015):

Schools are highly regulated sites of digital technology use (p. 1)…

…valuing technology as

  1. something used when and where permitted;
  2. something that is standardized and preconfigured;
  3. something that conforms to institutional rather than individual needs;
  4. something that is a directed activity. (p. 15)

As teacher educators with a large percentage of online students, our digital environment is significantly richer in terms of the availability of digital technologies. However, in our 2015 paper (Jones, Heffernan, & Albion, 2015) we report that the digital technologies we use for our teaching match the description from Selwyn and Bulfin. Our experience echoes Rushkoff’s (2010) observation that “instead of optimizing our machines for humanity – or even the benefit of some particular group – we are optimizing humans for machinery” (p. 15). More recently I worked on a paper (Jones and Schneider, in review) with a high school teacher that identified the same problem in schools: digital technologies that were inefficient, got in the way of effective learning and teaching, and failed to mirror the real-world digital technology experience.

How do students and especially teachers learn to value and develop their digital competence in such an environment?

In the recent paper (Jones and Schneider, in review) we wondered what might happen if this environment was modified to actually enable and encourage staff and student agency with digital technologies: allowing people to optimise the technology for what they want to do, rather than optimising what they want to do to suit the technology. If this was done:

  • Would it lead to digital environments that were more effective in terms of learning and teaching?
  • Would it demonstrate the value of digital technologies and computational thinking to teachers in their practice?
  • Would this improve their digital competence?

If you could do it, I think it would positively impact all of these factors. But doing so requires radically rethinking a number of assumptions and practices that underpin most of education and the institutional use of digital technologies.

I’m not holding my breath.

Instead, I wonder how long it will be before there’s a standardised test for that.

References

Di Blas, N., Paolini, P., Sawaya, S., & Mishra, P. (2014). Distributed TPACK: Going beyond knowledge in the head. In Society for Information Technology & Teacher Education International Conference (pp. 2457–2465). Retrieved from http://www.editlib.org/p/131154

Jones, D., Heffernan, A., & Albion, P. (2015). TPACK as shared practice: Toward a research agenda. In L. Liu & D. Gibson (Eds.), Research Highlights in Technology and Teacher Education 2015 (pp. 13–20). Waynesville, NC: AACE. Retrieved from http://www.editlib.org/d/151871

Putnam, R., & Borko, H. (2000). What do new views of knowledge and thinking have to say about research on teacher learning? Educational Researcher, 29(1), 4–15. Retrieved from http://www.jstor.org/stable/1176586

Rushkoff, D. (2010). Program or be programmed: Ten commands for a digital age. New York: OR Books.

Selwyn, N., & Bulfin, S. (2015). Exploring school regulation of students’ technology use – rules that are made to be broken? Educational Review, 1–17. doi:10.1080/00131911.2015.1090401

Some simple analysis of student submissions

The last post outlined the process for extracting data from ~300 student submissions. This one describes the analysis that was actually done on that data.

The analysis has revealed

  • Around 10% of the submissions have an issue with the URL entered.
  • About 16 lesson plans have been evaluated by more than one student.
  • At least 100 students have evaluated a lesson plan found online, with the Australian Curriculum Lessons site being the most popular.
  • An education-based site set up by the industry group Dairy Australia appears to be the most progressive in terms of applying a CC license to resources (apart from the OER Commons site).
  • It’s enabled allocating related assignments to a single marker, but the process for doing so with the Moodle assignment management system is less than stellar.

Time to do some marking.

What?

Having extracted the data, the following tests can/should be done

  1. Is the lesson plan URL readable?
  2. Are there any lesson plans being evaluated by more than one student?
    Might be useful to allocate these to the same marker.
  3. What is the distribution of where lesson plans are sourced from? Did most use their own lesson plan?
  4. Pre-check whether the lesson plan can be used (copyright etc.)?

Is the lesson plan readable?

The code is simple enough, but LWP::Simple::head is having some problems.

Let’s try LWP::UserAgent.  That’s working better.
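
A minimal sketch of that check, assuming @urls holds the URLs extracted from the submissions (the list below is purely illustrative):

#!/usr/bin/perl
use strict;
use warnings;
use LWP::UserAgent;

# Illustrative stand-in for the URLs extracted from the spreadsheets
my @urls = ( 'http://www.australiancurriculumlessons.com.au/' );

# HEAD request only - no need to download the whole lesson plan
my $ua = LWP::UserAgent->new( timeout => 10 );

foreach my $url (@urls) {
    my $response = $ua->head($url);
    print $response->is_success
        ? "OK      $url\n"
        : "FAILED  $url (" . $response->status_line . ")\n";
}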

Seems that if they successfully enter the URL it’s readable.

Three students have used file:/ as a URL – not online.

Distribution of lesson plans

Aim here is to group all the URLs based on the hostname. This will then allow the generation of some statistics about where the lesson plans are being sourced from (a sketch of this grouping follows the counts below). Findings include the following counts for domains:

  • 1 – domain = UNI (a problem)
  • 3 – that don’t have a domain
  • 96 that appear to be using their own site as the source, indicating their own lesson plan
  • 19 domains indicating a lesson planning site
    Accounting for 107 students
  • 32 with some sort of ERROR
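
For reference, a sketch of the sort of hostname grouping described above, using the CPAN URI module (@ARGV stands in here for the extracted URLs):

use strict;
use warnings;
use URI;

my @urls = @ARGV;   # stand-in for the URLs extracted from the submissions

# Count lesson plan URLs by hostname
my %count_for_host;
foreach my $url (@urls) {
    # Mangled URLs (ttp://, file://, empty) won't yield a host
    my $host = eval { URI->new($url)->host } || 'ERROR';
    $count_for_host{$host}++;
}

# Most popular hosts first
foreach my $host ( sort { $count_for_host{$b} <=> $count_for_host{$a} }
                   keys %count_for_host ) {
    printf "%4d %s\n", $count_for_host{$host}, $host;
}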

That’s still not 300. Ahh, we seem to have some problems with entering URLs correctly; common mistakes include:

  • Just leaving off the http:// entirely
  • Mangling bits of http:// (e.g. ttp:// or http//)
  • Using a local file i.e. file:////
  • Having the URL as “To complete”
  • Having the URL empty

Fixed those up as much as possible. Most of these students appear to have put something in the cover sheet – if they have one.
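
A first-pass cleanup along these lines catches most of those cases (a sketch only; the patterns are guesses based on the mistakes listed above):

use strict;
use warnings;

# First-pass cleanup of the common URL mistakes listed above.
# Returns a cleaned URL, or undef if the entry can't be salvaged.
sub clean_url {
    my ($url) = @_;
    return undef if !defined $url || $url =~ /^\s*$/;    # empty
    return undef if $url =~ m{^file:}i;                  # local file
    return undef if $url =~ /to complete/i;              # placeholder text
    $url =~ s{^ttp://}{http://};                         # mangled scheme
    $url =~ s{^http//}{http://};
    $url = "http://$url" unless $url =~ m{^https?://}i;  # scheme left off
    return $url;
}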

Duplicate URLs

There are 16 lesson plans that are used by more than one student: most by two, four by three, one by four, and one by five students.

Identifying these means I can allocate them to the same marker. Would be nice if there was an easier way to do this in Moodle.
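
Finding the duplicates is just a matter of inverting the student-to-URL mapping from the extraction step (a sketch; %url_for_student and its contents are hypothetical names for that mapping):

use strict;
use warnings;

# Hypothetical mapping from the extraction step: student id => lesson plan URL
my %url_for_student = (
    's0001' => 'http://example.com/plan-a',
    's0002' => 'http://example.com/plan-a',
    's0003' => 'http://example.com/plan-b',
);

# Invert the mapping to find plans evaluated by more than one student
my %students_for_url;
while ( my ( $student, $url ) = each %url_for_student ) {
    push @{ $students_for_url{$url} }, $student;
}

foreach my $url ( sort keys %students_for_url ) {
    my @students = @{ $students_for_url{$url} };
    print scalar(@students), " students: $url (@students)\n" if @students > 1;
}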

Pre-check use

At least 107 of the students are using a lesson plan found online. The question is whether or not they can use that lesson plan as per copyright, licensing, etc.

I could manually check each site, but perhaps to short-cut it I should check the spreadsheet for a couple of students, see what argument they’ve mounted for fair use, and then confirm that.

The sites to check are

Interestingly, there is some large variation between people using the same site. Should allocate these to the same marker; it will cut down on time for them.

Setting up the analysis of student submissions

A couple of weeks ago I wrote this post outlining the design of an Excel spreadsheet EDC3100 students were asked to use for their first assignment. They’ll be using it to evaluate an ICT-based lesson plan. The assignment is due Tuesday and ~140 have submitted so far. It’s time to develop the code that’s going to help me analyse the student submissions.

Aim

The aim is to have a script that will extract each student’s responses from the spreadsheet they’ve submitted and place those responses into a database. From there the data can be analysed in a number of ways to help improve the efficiency and effectiveness of the marking process, and to explore some different practices (the earlier post has a few random ideas).

The script I’m working on here will need to

  1. Be given a directory path containing unpacked student assignment submissions.
  2. Parse the list of submitted files and identify all the spreadsheets
  3. Exclude those spreadsheets that have already been placed into the database.
    Eventually this will need to be configurable.
  4. For all the new spreadsheets
    1. Extract the data from the spreadsheet

At this stage, I don’t need to stick the data in a database.

Steps

  1. Code that, when given a directory, will extract the spreadsheet names.
  2. Match the filename to a student id.
  3. Parse an individual Excel sheet (a sketch follows this list)
    1. Rubric
    2. About
    3. What
    4. How
    5. Evaluation
    6. RAT
  4. Mechanism to show the values associated with each question number in the sheet.
    Look at a literal data structure.
  5. Implement a test sheet
  6. See which student files will give me problems.
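
A sketch of step 3, assuming the CPAN Spreadsheet::ParseXLSX module (these posts don’t name the module actually used) and the sheet names listed above; the cell position is an assumption for illustration only:

use strict;
use warnings;
use Spreadsheet::ParseXLSX;

# Parse one submission and pull a sample cell from each expected sheet
die "usage: $0 spreadsheet.xlsx\n" unless @ARGV;
my $file     = shift @ARGV;
my $workbook = Spreadsheet::ParseXLSX->new->parse($file)
    or die "Unable to parse $file\n";

foreach my $name (qw(Rubric About What How Evaluation RAT)) {
    my $sheet = $workbook->worksheet($name);
    if ( !$sheet ) {
        warn "$file: missing sheet '$name'\n";
        next;
    }
    # Assume (for illustration) a response lives in cell B2 (row 1, col 1)
    my $cell = $sheet->get_cell( 1, 1 );
    printf "%-10s %s\n", $name, $cell ? $cell->value : '(empty)';
}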

Extract spreadsheet names

This is where the “interesting” naming scheme used by the institutional system will make things difficult. The format appears to be

SURNAME Firstname_idnumber_assignsubmission_file_whateverTheStudentCalledTheFile.extension

Where

  • SURNAME Firstname
    Matches the name of the student with the provided case (e.g. “JONES David”)
  • idnumber
    Appears to be the id for this particular assignment submission.
  • assignsubmission_file_
    Is a constant, there for all files.
  • whateverTheStudent…
    Is the name of the file the student used on their computer. It appears likely that some students will have been “creative” with their naming schemes. At least one student appears to have a file named something.xlsx.docx
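
A sketch of splitting that scheme apart (the regular expression is a guess based on the format described above):

use strict;
use warnings;
use File::Basename;

# Split the institutional naming scheme:
#   SURNAME Firstname_idnumber_assignsubmission_file_studentFileName.ext
sub parse_submission_filename {
    my ($path) = @_;
    my $file = basename($path);
    if ( $file =~ m/^(.+?)_(\d+)_assignsubmission_file_(.*)$/ ) {
        return { name => $1, submission_id => $2, student_file => $3 };
    }
    return undef;   # doesn't match the expected scheme
}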

Match the filename to a student id

This is probably going to be the biggest problem area. I need to connect the file to an actual unique student id. The problem is that the filename doesn’t contain a unique id that is associated with the student (e.g. the Moodle user id for the student, or the institutional student number).  All it has is the unique id for the submission.

Hence I need to rely on matching the name.  This is going to cause problems if there are students with the same name, or students who have changed their name while the semester is under way. Thankfully it appears we don’t currently have that problem.
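
The matching itself then becomes a simple lookup from the class list (a sketch; @class_list and its fields are hypothetical names for however that data is actually obtained):

use strict;
use warnings;

# Hypothetical class list entries: surname, firstname, and real student id
my @class_list = ( { surname => 'JONES', firstname => 'David', id => 1234567 } );

# Build a lookup from "SURNAME Firstname" (as used in the filenames) to id
my %id_for_name =
    map { ( "$_->{surname} $_->{firstname}" => $_->{id} ) } @class_list;

# For each parsed submission filename
my $name       = 'JONES David';
my $student_id = $id_for_name{$name};
warn "No match for $name\n" unless defined $student_id;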

Test with 299 submitted files

Assignment due this morning – let’s test with the 299 submitted files.

Ahh, issues with people’s names: apostrophes

Problem files

Apparently 18 errors out of 297 files.  Where did the other 2 go?

“Bad” submissions include

  1. 10 with only 1 file submitted;
    All 10 only submitted the checklist. Not the cover sheet or the lesson plan.
  2. 26 with only 2 files submitted (3 total required)
    1. 25 – Didn’t submit the lesson plan
    2. 1 – Didn’t submit the checklist
    3. 0 – Didn’t submit the coversheet
  3. 18 files that appear to have the bad xlsx version problem described below.

That implies that some of the people who submitted 3 files didn’t submit an Excel file?

Oh, quite proud in a nerdy, strange way about this:

# Find submission ids with exactly 3 files where none of them is a
# spreadsheet. Field 2 of each filename is the submission id; uniq -c
# counts files per id, grep ' 3 ' keeps ids with 3 files, and the sed
# strips the count, leaving just the id.
for name in `ls | cut -d_ -f2 | sort | uniq -c | sort -r | grep ' 3 ' | sed -e '1,$s/^.*[0-9] //'`
do
    files=`ls *$name*`
    # grep -q fails (exit status 1) when no .xls/.xlsx file is present
    echo $files | grep -q ".xls"
    if [ $? -eq 1 ]
    then
        echo "found $name"
    fi
done

I’m assuming there will be files that can’t be read. So what are the problems?

Seems they are all down to Microsoft’s “Composite Document File V2 Format”. These files will open in Excel, but challenge the Perl module I’m using.
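
One possible workaround, sketched here on the assumption that the parsing uses the CPAN Spreadsheet::ParseXLSX / Spreadsheet::ParseExcel pair: the Composite Document format is really the old binary .xls layout, so fall back to the older parser when the .xlsx parse fails.

use strict;
use warnings;
use Spreadsheet::ParseXLSX;
use Spreadsheet::ParseExcel;

# Try the XLSX parser first; fall back to the old binary (.xls) parser for
# "Composite Document File V2 Format" files that merely have an .xlsx name
sub parse_workbook {
    my ($file) = @_;
    my $workbook = eval { Spreadsheet::ParseXLSX->new->parse($file) };
    return $workbook if $workbook;
    return Spreadsheet::ParseExcel->new->parse($file);   # undef if this also fails
}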

Out of the 297 submitted so far, 18 have this problem.  Going to leave those for another day.

LATs, OER, TPACK, and GitHub

The following is an attempt to think about the inter-connections between the paper “Open Educational Resources (OERs) for TPACK Development” presented by Mark Hofer and Judi Harris at SITE’2016, the Moodle open book project, and my own teaching.

First is a description of the open courses they’ve developed and what the students do. Second is some early thinking about how this might link to EDC3100 and the Moodle open book project.

Learning Activity Types as OER/open courses

The paper offers a rationale and description of the development of two short, open courses designed to help primary and secondary pre-service teachers use learning activity types (LATs) to develop their TPACK.

Hofer and Harris (2016) describe them this way

The asynchronous, online “short courses” for preservice teachers that we have created are divided into eight brief, sequential modules…Each module begins with an overview and learning goal for the segment, and is presented as video-based content that includes narrated slides, interviews with practicing teachers, imagery, and additional online resources. Each of the videos ranges from 2-8 minutes in length, and includes verbatim closed captioning.

In completing the courses the students

  • Reflect on examples of ICT and pedagogy they’ve previously seen.
  • Select three lesson plans from a curated collection of plans from pre-service teachers.
  • Analyse those lesson plans: objectives, standards, types of learning activities, how learning is assessed, and the use of digital technologies.
  • Practice replacing an ill-fitting activity type in another sample lesson with other activity types that better fit the learning goal.
  • Consider substituting different technologies in the sample plan and discuss the reasoning.
  • Review portions of interviews with an experienced teacher.
  • Use selected plans from before to choose a LAT taxonomy and explore that taxonomy.
  • Think about replacing activity types and technologies and discuss.
  • Create their own lesson plan.
  • Subject their lesson plan to two self-tests called “Is it worth it?”

Hofer and Harris (2016):

We consciously erred on the side of the materials being perhaps too prescriptive and detailed for more experienced and/or advanced learners, since we suspected that it would be easier for other users to remove some of the content than to have to create additional supports.

Moodle open book and my course

In EDC3100 we cover similar ground, and the content of these short courses could be a good fit. However, the model used in my course is a little different in terms of implementation, so the short course content would need to be modified a bit. Something already anticipated by Hofer and Harris (2016):

This is why we have released the courses in a modularized (easier-to-modify) format, along with an invitation to mix, remix, and otherwise customize the materials according to the needs of different groups of teacher-learners and the instructional preferences of their professors. The Creative Commons BY-SA license under which these short courses were released stipulates only that the original authors (and later contributors) are attributed in all succeeding derivatives of the work, and that those derivatives are released under the same BY-SA license

My course is implemented within Moodle and uses the Moodle book module to host the content. The Moodle open book project has connected the Moodle book module with GitHub, the aim being to make it easier to release content in the Moodle book to broader audiences, and to enable the sharing and reuse of OERs, just like these courses.

While the technical side of the project is basically finished (it could use some more polishing before official release), there’s a large gulf between having a tool that shares Moodle book content via GitHub and actually using it to share and reuse OERs, especially OERs that are used in more than one context. The LAT short courses appear to provide a perfect test bed for this.

Hofer and Harris (2016):

For teacher educators who would like to try the course “as is,” we have developed the content as a series of modules within the BlackBoard learning management system and have exported it as a content package file which can be imported into a variety of other systems. With either no changes or minor edits, the short courses in their current forms can be used within existing educational technology and teaching methods courses.

I’m assuming that the content package file will be able to be imported into Moodle, and perhaps even into the Book module.  It would be interesting to explore how well that process works and how immediately usable I (and others) think the content might be in EDC3100.

If I then make changes in response to the context and share them via the Moodle open book and GitHub, it would be interesting to see how useful/usable those changes and GitHub are to others. In particular, how useful/usable the GitHub version would be in comparison to the LMS content package and the current “Weebly” versions of the courses.

I suspect that while GitHub provides enhanced functionality for version control (Weebly offers none), teacher educators may not find that functionality accessible, both in terms of technical knowledge and existing processes and practices around web content, and perhaps due to the contextual changes made. Also, while GitHub handles multiple versions very well, the Moodle open book doesn’t yet support this well.

Putting the LAT courses into the Moodle open book seems to provide the following advantages:

  1. Provide a real test for the Moodle open book that will reveal its shortcomings.
  2. Provide a useful resource (optional for now) for EDC3100 students, and potentially for related courses I’ll need to develop in the future.
  3. Enable the community around LATs and the short courses to experiment with a slightly different format.

I think I’ve convinced myself to try this out with the secondary LAT course as an initial test case. Just have to find the time to do it.

SITE’2016: LATs, OER, and SPLOTs?

SITE’2016 is almost finished, so it’s past time I started sharing some of the finds and thoughts that have arisen. There’s been a (very small) bit of movement around the notion of open. I’ll write about LATs and OER and some possibilities in another post. This post is meant to explore the possibility of adapting some of the TPACK learning activities shared by @Keane_Kelly during her session into SPLOTs.

It’s really only an exploration of what might be involved, what might be possible, and how well that might fit with the perceived needs I have in my course(s), while at the same time making something that breaks out of those confines. I’m particularly interested in Riel and Polin’s idea around the residue of experiences and rich learning environments.

Over time, the residue of these experiences remains available to newcomers in the tools, tales, talk, and traditions of the group. In this way, the newcomers find a rich environment for learning. (p. 18)

As most of my teaching and software development work has had to live within an LMS, I’m also a novice with single web page application technology (and SPLOTs).

What is a SPLOT?

A SPLOT is

Simplest Possible Learning Online Tools. SPLOTs are developed with two key principles in mind: 1) to make sharing cool stuff on the web as simple as possible, and 2) to let users do so without having to create accounts or divulge any personal data whatsoever.

The work by @cogdog builds on WordPress, but I’m wondering if something similar might be achieved using some form of single web page application.

That is, a single web page that anyone could download and start using. No need for an account. Someone teaching a course might include this in a class. Someone with a need to learn a bit more about the topic could just use it and gain some value from it.

TPACK learning activities

Kelly’s presentation introduced four learning activities she uses to help students in an Educational Technology course develop their understanding of the TPACK framework. They are fairly simple, mostly offline, but appear to be quite effective. My question is whether they can be translated into an online form, and an online form that is widely shareable – hence the interest in the SPLOT idea.

Vocabulary target review

In this activity the students are presented with a target (using a Google drawing) and a list of vocabulary related to TPACK (though this could be used for anything). The students then place the vocab words on the target. The more certain they are of a definition, the more “on target” they place the word. This then feeds into discussions.

At some level, through the use of Google drawing it’s already moving toward a SPLOT.

What if the students are entirely online, and especially with a tendency to asynchronous study? How might this be adapted to anyone, anytime, and provide them with access to the residue of experience of previous participants?

One approach might be something like a single web-page application that

  1. Presents that target and a list of vocab words that the user can place as appropriate.
    This list of vocab words could be hard-coded into the application. Or, perhaps the specific application (you could produce different versions for different vocab) could be linked to a Google doc or some other source of JSON data. The application gets the list of vocab words from that source.
  2. Once submitted, the application could allow the user to view the mappings from previous users. This could be filtered in various ways.
    The assumption is that the application is storing the mappings of users somewhere. The representation might highlight other mappings that are related in some way to the user’s map.
  3. View provided definitions.
    Having provided their mapping the user could now gain access to definitions of the terms. There might be multiple definitions. Some put into the system at the start, some contributed by other users (see next step).
  4. Identify the most useful definitions.
    The interface might provide a mechanism by which the user can “like” definitions that help them.
  5. Provide a definition.
    Whether this occurs at this stage or earlier, the user could be asked to provide a definition for one or more terms after/prior to seeing the definitions of others.
  6. Remap their understanding.
    Having finished (more activity could be designed in the above) the user moves the words to represent any change in their understanding of the words. The system could track and display how much change has occurred and compare it with the changes reported by others.

TPACK game

The second activity is a version of the TPACK game (or this video), a game that is already available online, but not as a flexible object that people can manipulate and reuse. My immediate thought is that the following might help make a “more SPLOT” version of the TPACK game

  1. Provide a single web page application that implements a variety of ways to interact with the TPACK game.
    For example,

    • The current version has people trying to identify the third element of TPACK given the other two, which appears to be the version used by @Keane_Kelly.
    • Another version might be to show a full set of three and ask people to reflect on whether or not the combination is a good fit, one they’ve seen before, not a good fit, and why.
  2. Provide the capacity to provide answers to the application that are stored and perhaps reused.
    For example, the two different versions of the game above could be combined so that if someone suggests a particular combination in the first one that has already been “evaluated”, they could be shown what others have thought of it and why.
  3. Provide the capacity to share and modify the values for T, P and C.
    The current online version of the game and @Keane_Kelly each appear to have their own set of values for T, P, and C. Kelly mentioned the need to keep the Technology list updated over time, but there’s broader value in keeping a growing list of values for all. As there is also for customising some: e.g. some technologies won’t always make sense in all environments, and the content in particular might be something to customise, e.g. for a specific curriculum or topic area.

    If it were an online application that used some sort of shared data space, it could be grown through use. It should also be possible to modify which data store is used, to support customisation to a particular context.

What to expect/look for from SITE’2016?

I’m spending this week attending the SITE’2016 conference (SITE = Society for Information Technology and Teacher Education). This is my first SITE, and the following outlines some of my expectations and intent.

It’s big

SITE is one of a raft of conferences run by AACE. I’ve been to two of them previously: EdMedia and E-Learn. These are big conferences: 1000+ delegates, up to and beyond 10 simultaneous sessions, lots of in-crowds and cliques. Lots of times when there is nothing you’re really interested in, and lots of times when there are multiple things you are very interested in. A lot of really good stuff lost in the mass.

Those observations have been borne out by my first glance at the program. Too much to take in and do justice to.

At face value, a fairly traditional large conference, with the same breadth from simple to complex, from repetition to real innovation, from boring to mind-blowing. Probably the same ratio as well.

As I’m far from being an extroverted and expert networker, I instead rely on actively trying to make some connections between what I see and what I’m doing/going to do.

Join a clique?

While our paper didn’t get an overall paper award, it was successful in winning a TPACK SIG Paper Award. Given that our previous paper was also TPACK related and won a paper award, this might suggest a “clique” with which I have some connection, and there are a couple of related papers that sound interesting.

There also appear to be other “cliques” based around computational thinking, design thinking/technologies, and ICT integration by pre-service teachers (more generally than TPACK). All of these are interests of mine; they connect directly to my teaching.

I’m thinking a particular focus for the next few days will be identifying and sharing ideas for using digital technology in school settings with the current EDC3100 crew.

There was one explicit mention of OER in the titles. Pity I can’t get access to the content of the talks yet (thanks to how I was registered and the closed way the organisers treat the proceedings – online now, but only for those registered).

Time to get the presentation finished.