Exploring Moodle Book usage – Part 7 – When are they used?

The last post in this series looked briefly at the contents of Moodle Book resources. This post is going to look at when the book resources are used, including:

  • What time of day are the books used?
  • When in the semester are they used?

It ends with a closer look at the usage of the Book resources in the course I teach.

What time of day are they used?

This is a fairly simple, perhaps useless, exploration of when during the day the books are used. It's done more out of general interest, and to lay the groundwork for the code needed for the next question.

Given the huge disparity in the number of views versus prints versus updates, there will be separate graphs for each, meaning three graphs per year. For my own interest and for the sake of comparison, I’ve included a fourth graph which is the same analysis for the big 2015 offering of the course I teach. This is the course that perhaps makes the largest use of the Book, and also the offering in which I did lots of updates.

The graphs below show the number of events that occurred in each hour of the day: 12am to 1am, 1am to 2am, and so on. Click on the graphs to see expanded versions.

There is no graph for prints per hour for 2012 as there were none in the database. This appears likely to be a bug that needs to be addressed.
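For anyone wanting to replicate this, the per-hour counts behind these graphs can be pulled straight from the Moodle event logs. The following is only a minimal Perl sketch: it assumes the Moodle 2.7+ standard log table (mdl_logstore_standard_log) and placeholder database credentials, and the earlier (2012/2013) events most likely live in the legacy mdl_log table and would need an equivalent query there.

```perl
#!/usr/bin/perl
# Rough sketch only: count Moodle Book events per hour of the day.
# Assumes the Moodle 2.7+ standard log table and placeholder credentials.
use strict;
use warnings;
use DBI;
use POSIX qw(strftime);

my $dbh = DBI->connect( "DBI:mysql:database=moodle;host=localhost",
    "user", "password", { RaiseError => 1 } );

# Pull the timestamp and action for every mod_book event
my $sth = $dbh->prepare(
    "SELECT timecreated, action FROM mdl_logstore_standard_log
      WHERE component = 'mod_book'" );
$sth->execute();

my %per_hour;    # action => hour of day => count
while ( my ( $time, $action ) = $sth->fetchrow_array ) {
    my $hour = strftime( "%H", localtime($time) );
    $per_hour{$action}{$hour}++;
}

# Dump a simple table: one line per action/hour combination
foreach my $action ( sort keys %per_hour ) {
    foreach my $hour ( sort keys %{ $per_hour{$action} } ) {
        print "$action,$hour,$per_hour{$action}{$hour}\n";
    }
}
```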

Overall findings from time of day

Growth – The maximum number of events has grown each year (as expected given earlier indications of growth).

  • max views per hour: 2012 just less than 35K to 2015 over 150K
  • max prints per hour: 2013 just over 400 to 2015 over 1500
  • max updates per hour: 2012 just over 500 to 2015 over 6000.

Similarity – The overall shapes of the graphs stay the same, suggesting a consistent pattern of interaction.

This is especially the case for the viewing events: a low number of events from midnight to 1am, an ongoing drop until 5am, then growth until the maximum per hour between 11am and midday. From there, there is a general drop away until 7pm to 8pm, when activity grows again before dropping away after 9pm.

Views per hour each year

2012
2012 views per hour

2013
2013 views per hour

2014
2014 views per hour

2015

2015 views per hour

EDC3100 2015 S1

EDC3100 2015 1 views per hour

Prints per hour each year

2012

2012 prints per hour

2013

2013 prints per hour

2014

2014 prints per hour

2015

2015 prints per hour

EDC3100 2015 S1

EDC3100 2015 1 prints per hour

Updates per hour each year

2012

2012 updates per hour

2013

2013 updates per hour

2014

2014 updates per hour

2015

2015 updates per hour

EDC3100 2015 S1

EDC3100 2015 1 updates per hour

Calendar Heatmaps

A calendar heatmap is a fairly common method of representing “how much of something” is happening each day of the year. The following aims to generate calendar heatmaps using the same data shown in the above graphs. The plan is to use the method/code outlined on this page.

It requires the generation of a two-column CSV file: the first column is the date in YYYYMMDD format and the second column is the “how much of something” for that day. See the example data on the blog post. It looks like it might be smart enough to figure out the dates involved. Let’s see.

It is, but doing all of the years together doesn’t work all that well, given the significant increase in the number of courses using the Book as time progresses and the requirement for the heatmap to use the same scale for all years. As a result the 2012 usage doesn’t show up all that well. Hence each of the years was mapped on a separate heatmap.
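For reference, generating that per-day CSV from the Moodle logs only takes a few lines of Perl along the following lines. This is a sketch, not the actual script: the table name, the 'viewed' action value, and the credentials are assumptions standing in for the real setup.

```perl
#!/usr/bin/perl
# Sketch: turn per-event Unix timestamps into the two-column CSV the
# calendar heatmap code expects (YYYYMMDD,count).
use strict;
use warnings;
use DBI;
use POSIX qw(strftime);

my $dbh = DBI->connect( "DBI:mysql:database=moodle;host=localhost",
    "user", "password", { RaiseError => 1 } );

# Assumed: Book view events in the standard log
my $sth = $dbh->prepare(
    "SELECT timecreated FROM mdl_logstore_standard_log
      WHERE component = 'mod_book' AND action = 'viewed'" );
$sth->execute();

my %per_day;
while ( my ($time) = $sth->fetchrow_array ) {
    my $day = strftime( "%Y%m%d", localtime($time) );
    $per_day{$day}++;
}

open my $csv, '>', 'book_views.csv' or die "Unable to write CSV: $!";
print {$csv} "$_,$per_day{$_}\n" foreach sort keys %per_day;
close $csv;
```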

The following calendar heatmaps show how often the Book resources were viewed on each day. The events counted are only those for Book resources from courses offered in the given year. In 2012, 2013 and 2014 this means that there is a smattering of views of the books early in the following year (semester 3 stretches from November to February). There is no similar usage for the 2015 books because the data does not include any 2016 events.

The darker the colour the greater the use. In the 2012 image below you should be able to see a tool tip showing a value of 81 (out of 100) that is quite dark, but not the darkest.

2012

The 2012 map seems to establish the pattern.  Heavy use at the start of semester with a gradual reduction through semester. A few upticks during semester and toward the end of semester.

I no longer have easy access to specific dates for 2012 and 2013. The 2014 heatmap has some specific dates which should broadly apply to these earlier years.
2012 Book usage

2013

2013 Book usage - calendar heatmap

2014

The institution maintains a web page that shows the important dates for 2014; it includes:

  • March 3 – Semester 1 starts.
    Course websites open 2 weeks before this date – 17th Feb
  • June 16 – Semester 1 exams start.
  • July 21 – Semester 2 starts
    Course websites open 2 weeks prior – 7th July.
  • November 3 – Semester 2 exams start.
  • November 17 – Semester 3 starts.

2014 Book usage - calendar heatmap

2015

The semester 1, 2015 offering of my course had the following due dates for its three assignments:

  1. 30th March – which appears to coincide with a heavy usage day.
  2. 4th May – also a slightly heavy usage day, but not as heavy.
  3. 15th June – two somewhat heavy usage days before and on this date.

This raises the question of what the heatmap for that course might look like – see below.

2015 Book usage - calendar heatmap

EDC3100 – S1, 2015

Focusing just on my course, the increase in usage just before the due date for each of the assignments is more obvious. One of the reasons for this is that all of the assessment information for the course is included in a Moodle Book resource.
EDC3100 S1 2015 book usage - calendar heatmap
Other time periods relevant to this course are:

  • April 6 to 17 – the two week mid-semester break; and,
    Which correspond to two of the lightest periods of usage of book resources.
  • May 18 to June 5 – a three week period when most of the students are on Professional Experience within schools.
    Which also corresponds to a light period of usage.

The two heaviest days of usage are the 9th and 10th of March. The start of Week 2 of semester. It’s a time when the pressure is on to get a blog created and registered and start completing learning paths.

After the peak of the first three weeks, daily usage of the Book resources drops to around 50% of that peak.

Questions arising from this

  • Does the learning journal assessment item for EDC3100 change when students interact with the course site?
  • Is the pattern of usage (down to 50% a day) indicative of students turning off, or becoming more familiar with the approach?
  • Does the high level of usage indicate…

It also raises the question of whether particular offerings of the course show any differences.

2012 – S2

The 2012 S2 pattern is quite a bit different. It is a bit more uneven and appears to continue well after the semester is finished.  This is due to this being the first semester the course used the Book module and also because there was a semester 3 offering of the course for a few students that used the same resources.
EDC3100 2012 2 - Book usage

The 2012 heatmap also shows a trend that continues in later years: usage of the Book resources continues well past the end of semester. It’s not heavy usage, but it is still there.

Question: is that just me, or does it include students?

2013 – S1

2013 S1 is a bit different as well. Lighter use at the start of semester. A bit heavier usage around assignment due dates. My guess is that this was still early in the evolution of how the Book was being used.

EDC3100 2013 S1 - Book usage

2013 – S2

This map seems to be evolving toward the heavy use at the start of semester.
EDC3100 2013 S2 - Book usage

2014 – S1

And now the pattern is established. Heavy use at the start of semester and in the lead up to Assignment 1. A slight uptick then for Assignments 2 and 3. With the light usage around Professional Experience evident.

EDC3100 2014 S1 - Book usage

2014 – S2

EDC3100 2014 S2 - Book usage

2015 – S2

EDC3100 2015 S2 - Book usage
What about just the students?

The following shows just the student usage for the 2013 S1 offering. There's not a huge difference from the “all roles” version above, suggesting that it is students who are doing most of the viewing. But it does confirm that the ongoing usage of the Book resources past the end of the semester comes from students who appear to have found some value in the information after the course.

EDC3100 2013 1 - Just students

Which comes first? Pedagogy or technology?

Miranda picks up on a common point around the combination of technology and pedagogy with this post titled Pedagogy First then Technology. I disagree. If you have to think in simple sequential terms, then I think pedagogy should be the last consideration, not the first. The broader problem, though, is our tendency to want to limit ourselves to the sequential.

Here’s why.

The world and how we think isn’t sequential

The learning and teaching literature is replete with sequential processes such as ADDIE, Backwards Design, Constructive Alignment etc. It’s replete with such models because that’s what academics and experts tend to do. Develop models. The problem is that all models are wrong, but some of them are useful in certain situations for certain purposes.

Such models attempt to distill what is important from a situation to allow us to focus on that and achieve something useful. The only trouble is that the act of distillation throws something away. It’s an approach that suffers from a problem identified by Sir Samuel Vimes in Feet of Clay by the late Terry Pratchett

What arrogance! What an insult to the rich and chaotic variety of the human experience.

Very few, if any, human beings engage in anything complex or creative (such as designing learning) by following a sequential process.  We are not machines. In a complex task within a complex environment you learn as much, if not more, by engaging in the process as you do planning what you will do beforehand.

Sure, if the task you are thinking about is quite simple, or if it is quite complicated and you have a lot of experience and expertise around that task, then you can perhaps follow a sequential process. However, if you are a teacher pondering how to transform learning through the use of digital technology (or using something else), then your task is neither simple, nor is it complicated, nor is it something you likely have experience or expertise with.

A sequential process to explain why technology first

Technologies for Children is the title of a book that is designed to help teachers develop the ability to help learners engage with the Australian Curriculum – Technologies learning area. A curriculum that defines two subjects: Design and Technologies, and Digital Technologies. In the second chapter (Fleer, 2016) the author shares details of how one year 4/5 teacher integrates this learning area into her class. It includes examples of “a number of key statements that reflected the technological processes and production skills” (Fleer, 2016, p. 37) that are then turned into learner produced wall charts. The following example wall chart is included in Fleer (2016, p. 37). Take note of the first step.

When we evaluate, investigate, generate designs, generate project plans, and make/produce we:

  1. Collaboratively play (investigate) with the materials.
  2. Evaluate the materials and think about how they could be used.
  3. Generate designs and create a project plan for making the item.
  4. Produce or make the item.
  5. Evaluate the item.
  6. Write about the item and talk with others.
  7. Display the item.

Before you can figure out what you are going to do with a digital technology, you need to be fully aware of how the technology works, what it can do, what the costs of doing that are, what it can’t do, and so on. Once you’ve got a good handle on what the digital technology can do, then you can figure out interesting and effective ways to transform learning using the technology. i.e. pedagogy is the last consideration.

This is not to suggest that pedagogy is less important because it comes last. Pedagogy is the ultimate goal.

But all models are wrong

But of course all models are wrong. This model is (arguably) only appropriate if you are not familiar with digital technology. If you know all about digital technology or the specific digital technology you are considering, then  your need to play with the digital technology first is lessened.  Maybe you can leap straight to pedagogy.

The trouble is that most teachers that I know have fairly limited knowledge of digital technologies. In fact, I think many of the supposed IT experts within our institutions and the broader institution have somewhat limited understandings of the true nature of digital technologies. I’ve argued that this limited understanding is directly impacting the quality of the use of digital technology for learning and teaching.

The broader problem with this “technology first” model – as with the “pedagogy first” model – is the assumption that we engage in any complex task using a simple, sequential process. Even the 7 step sequential process above is unlikely to capture “the rich and chaotic variety” of how we evaluate, investigate and generate designs for using digital technology for learning and teaching. A teacher is just as likely to “play (investigate)” with a new digital technology by trying it out in a small, safe-to-fail experiment to see how it plays out. Perhaps this is repeated over a few cycles until the teacher is more comfortable with how the digital technology works in the specific context, with the specific learners.

References

Fleer, M. (2016). Key ideas in the technologies curriculum. In Technologies for Children (pp. 35–70). Cambridge University Press.

On the value or otherwise of SAMR, RAT etc.

Updated 30 August, 2016: Added mention of @downes’ pointers to peer-reviewed literature using SAMR. Evolved into a small section.

There definitely seems to be a common problem when it comes to thinking about evaluating the use of digital technology in learning and teaching. Actually, there are quite a few, but the one I’m interested in here is how people (mostly teachers, but students as well – and perhaps organisations should be thrown in here too) perceive what they are doing with digital technology.

This is a topic that’s been picked up recently by some NGL folk as the course has pointed them to the SAMR model (originally), but now to the RAT model. Both are acronyms/models originally intended to be used by people introducing digital technology into teaching to self-assess what they’ve planned. To actively think about how the introduction of digital technology might change (or not) what learners and teachers are doing. The initial value of these models is to help people and organisations avoid falling into this pitfall when applying digital technology to learning and teaching.

SAMR has a problem

SAMR has received a lot of positive attention online, but there are also some negative reactions coming to the fore. One example is this open letter written to the SAMR creator that expresses a range of concerns. This open letter is also picked up in this blog post titled SAMR: A model without evidence. Both these posts and/or the comments upon them suggest that SAMR appears to have been based on/informed by the work of Hughes, Thomas and Scharber (2006) on the RAT framework/model.

A key problem people have with SAMR is the absence of a theoretical basis and peer-reviewed literature for SAMR, something the RAT model does have. This is one of the reasons I’ve moved away from using SAMR toward using the RAT model. It’s also the reason why I’ll ignore SAMR and focus on the RAT model.

SAMR and literature

Update: @downes points to a collection of literature that includes the SAMR model. This addresses the question of whether or not there is peer-reviewed literature using SAMR, but whether it addresses the perceived (and arguable) need for a “theoretical basis” to underpin SAMR is another question. Most of the literature I looked at made use of the SAMR model for the same purpose I’ve used it, the RAT model and the Computer Practice Framework (CPF): as a method for evaluating what was done.

A related Google Scholar search (samr Puentadura) reveals a range of additional sources. But that search also reveals the problem of misspelling the SAMR author’s surname. A better search would be (samr Puentedura), which reveals material from the author and their related citations. However, this search also reveals the weakness identified in the open letter mentioned above. The work developing/sharing the SAMR model by Puentedura is only visible on his website, not in peer-reviewed publications.

Whether this is a critical weakness is arguable. Personally, I find it sufficient to prompt a search for something that performs a similar job but doesn’t suffer this weakness.

What is the RAT model for?

The “model without evidence” post includes the following

SAMR is not a model of learning. There is no inherent progression in the integration of technology in learning within SAMR. Using SAMR as a model for planning learning and the progression of learning activities is just plan wrong

The same could be said for the RAT model, but then the RAT model (and I believe SAMR) were never intended to be used as such. On her #ratmodel page Hughes offers this

The original purpose of the RAT framework was to introduce it as a self-assessment for preservice and inservice teachers to increase critical technological decision-making.

The intended purpose was for an educator to think about how they’ve used digital technologies in a learning activity they’ve just designed. It’s a way for them to think about whether or not they’ve used digital technologies in ways that echo the above cartoon. It’s a self-reflection tool. A way to think about the use of digital technologies in learning.

It’s not hard to find talk of schools or school systems using SAMR as an evaluation framework for what teachers are doing. I’m troubled by that practice; it extends these models beyond self-reflection. In particular, such use breaks the “best practices and underlying assumptions for using the R.A.T model” from Hughes (emphasis added)

  1. The R.A.T. categories are not meant to connote a linear path to technology integration, such as teaching teachers to start with R activities, then move to A and ultimately T. Rather, my research shows that teachers will have an array of R, A, and T technology integration practices in their teaching. However, T practices seem more elusive.
  2. The key to Transformative technology integration is opportunities for teachers to learn about technology in close connection to subject matter content. For example, supporting subject-area teachers learning in a PLC across a year to explore subject area problems of practice and exploration of digital technology as possible solutions.
  3. Discrete digital technologies (e.g., Powerpoint, an ELMO, GIS software) can not be assessed alone using the R.A.T. model. One needs rich instructional information about the context of a digital technology’s use in teaching and learning to begin a RAT assessment. Such rich information is only known by the practitioner (teacher) and explains why the model supports teacher self-assessment. For use in research, the RAT model typically requires observations and conversations with teachers to support robust assessment.

It’s not the technology, but how you use it

Hughes’ third point from the above (the one about discrete digital technologies) is why I’ve grown to dislike aspects of diagrams like the Padagogy Wheel pointed to by Miranda.

Whether you are replacing, amplifying, or transforming (RAT model), or you are remembering, analysing, creating, understanding etc. (Bloom’s Taxonomy), does not arise from the technology. It arises from how the technology is used by those involved; it’s what they are doing which matters.

For example, one version of the padagogy wheel suggests that Facebook helps “improve the user’s ability to judge material or methods based on criteria set by themselves or external sources” and thus belongs to the Evaluate level of Bloom’s taxonomy. It can certainly be used that way, but whether or not how I’ve used it in my first lesson from today meets that criteria is another matter entirely.

The problem with transformation

Transformation is really, really hard. For two reasons.

The first is understanding the difference between amplification and transformation. Forget about learning; it appears difficult for people to conceive of transformation in any context. I try to help a bit through the use of the print-based encyclopedia versus Encarta (replacement) versus Wikipedia (transformation). Both Encarta and Wikipedia use digital technologies to provide an “encyclopedia”; however, only Wikipedia challenges and transforms some of the fundamental assumptions of “encyclopedia”.

The second is related to the horsey horseless carriage problem. The more familiar you are with something, the harder it is to challenge the underlying, unwritten assumptions of that practice. I’d suggest that the more involved you were with print-based encyclopedias, the harder it was to see value in Wikipedia.

It’s made that much harder if you don’t really understand the source of transformation. It’s hard for people who aren’t both highly digitally literate and highly knowledgeable about learning/teaching/context to conceive of how digital technologies can transform learning and teaching.

What do you compare it against?

To decide if your plan for using digital technologies for learning is an example of replacement, amplification or transformation, most people will compare it against something. But what?

In my undergraduate course, I ask folk to think about what the learning activity might look like, or whether it would even be possible, if there wasn’t any digital technology involved. But I wonder whether this is helpful, especially into the future.

Given the growing prevalence of digital technologies, at what stage does it make sense to think of a learning activity as not involving some form of digital technology?

I wonder whether this is part of the reason why Angela lists the use of the Internet for research as Substitution?

Amplification, in the eye of the beholder?

Brigitte connects to Angela’s post and mentions a recent presentation she attended where SAMR (and the Technology Acceptance Model – I believe) were used to assess/understand e-portfolios created by student teachers. A presentation in which – Brigitte reports – how students perceived themselves in terms of technical skills influenced their self-evaluation against the SAMR model.

For example, a student with low technical skills might place themselves at the Substitution level in terms of creating an e-portfolio, however what they produced might be classified as sitting at the Modification or even Redefinition level when viewed by the assessors. Conversely, a student might classify themselves as at Redefinition, but their overconfidence in using the tool, rather than their actual skill level, meant they produced something only at the Substitution level.

I wonder how Brigitte’s identification of her use of a blog for reflecting/sharing as being substitution connects with this?

Focus on the affordances

Brigitte identifies her blog-based reflective practice as being substitution. Typically she would have been using other digital technologies (email, discussion boards) and face-to-face discussions to do this, and for her there is no apparent difference.

However, I would argue differently. I would point to particular advantages/differences of the blog that offer at least some advantage, but also potentially change exactly what is being done.

A blog – as used in this case – is owned by the author. It’s not hosted by an institution etc. Potentially a blog can help create a greater sense of identity, ownership etc. Perhaps that greater sense of ownership creates more personal and engaged reflections. It also offers one way to react to the concerns over learning analytics and privacy Brigitte has raised elsewhere.

The blog is also open. Discussion boards, email, and face-to-face discussions are limited in space and time to those people allowed in. The blog is open both in space and time (maybe). There’s no limit on how, why and who can connect with the ideas.

But this brings up the important notion of an affordance. Goodyear, Carvalho and Dohn (2014) offer the following on affordances

An assemblage of things does not have affordances per se; rather, it has affordances in relation to the capabilities of the people who use them. These evolve over time as people become better at working with the assemblage. Affordance and skill must be understood, not as pre-given, but as co-evolving, emergent and partly co-constitutive (Dohn, 2009). (p. 142)

Just because I might see these affordances/advantages, it doesn’t mean that Brigitte (or anyone else) will.

Does that mean I’m right and Brigitte is wrong? Does it mean that I’ve failed in the design of the NGL course to provide the context/experiences that would help Brigitte see those affordances? Does this mean that there is no right answer when evaluating a practice with something like the RAT model?

Should you be doing it at all?

Of course, the RAT (or SAMR) models don’t ask the bigger question about whether or not you (or the learners) should really be doing what you’re doing (whether with or without digital technologies).

A good current example would appear to be the push within Australia to put NAPLAN online.  The folk pushing it have clearly identified what they think are the benefits of doing NAPLAN with digital technologies, rather than old-school pen(cil) and paper. As such it is an example (using the RAT model) of amplification. There are perceived benefits.

But when it comes to standardised testing – like NAPLAN – there are big questions about the practice. Just one example is the question of just how comparable the data is across schools and years. The question about comparability is especially interesting given research that apparently shows

The results from our randomised experiment suggest that computer devices have a substantial negative effect on academic performance


References

Goodyear, P., Carvalho, L., & Dohn, N. B. (2014). Design for networked learning: framing relations between participants’ activities and the physical setting. In S. Bayne, M. de Laat, T. Ryberg, & C. Sinclair (Eds.), Ninth International Conference on Networked Learning 2014 (pp. 137–144). Edinburgh, Scotland. Retrieved from http://www.networkedlearningconference.org.uk/abstracts/pdf/goodyear.pdf

Hughes, J., Thomas, R., & Scharber, C. (2006). Assessing Technology Integration: The RAT – Replacement, Amplification, and Transformation – Framework. In C. Crawford, R. Carlsen, K. McFerrin, J. Price, R. Weber, & D. A. Willis (Eds.), Society for Information Technology & Teacher Education International Conference 2006 (pp. 1616–1620). Orlando, Florida: AACE. Retrieved from http://www.editlib.org/p/22293/

Understanding and using the idea of “network learning”

The following seeks to engage with some thoughts shared by Brigitte, bring together some earlier ramblings of my own, and connect this with R&D related work I should be doing over coming months (though it’s historically rare for those plans to come to fruition).

The title of Brigitte’s post is the question “What is networked learning?” This is an important question in the context of the NGL course we’re participating in, because the overall focus is developing your own answer to that question, identifying the principles of your conception of NGL, and then using those principles to design a change to some task you are involved with “as teacher”. Hence if your answer to “What is networked and global learning?” isn’t all that great, the rest of what you do will suffer because of it.

Features of less than great answers

It’s not hard to see less than great answers to this question. The following lists some of the features of those that I’m familiar with.

It’s the technology, isn’t it?

The most common is that the use of networked digital technology (even an LMS) is the key feature of network learning. Or, if you’re really cool, it’s the use of blogs, Diigo, Twitter, Facebook, Instagram, Slack or insert latest sexy networked digital technology. While I’m keen on digital technology – it can be a great enabler of efficiency, or a great catalyst for rethinking and transforming practice – it’s just an (increasingly useful) means to an end.

This post from last year – titled “There’s more to it than the Internet and social software” – picks up a similar refrain and links it to various thoughts from 2015 NGL participants and beyond, including the idea that everything is a network.

It’s groups of people, isn’t it?

Another common less than great answer revolves around groups of people, i.e. multiple people all working toward a common goal. It’s an answer that often suggests that the absence of a common purpose (or some other commonality) means it can’t be what passes for networked learning. And/or, it’s an answer that often assumes that a single person – someone not talking directly to someone else – cannot be engaged in what passes for networked learning.

In this comment on one of my earlier blog posts comparing connected and networked learning, Nick Kelly expands the comparison to include communities of practice – the most common “groups of people” model that comes to most people’s minds. The particular view of network learning Nick uses in that comment is described as

NL emphasises the possibility for technology and design to enable better connections between learners and between learners and resources

Nothing there about common purpose.  It’s a definition that includes the idea of connections between learners and resources.

It’s about students (and teachers), isn’t it?

Another common less than great answer tends to limit network learning to the learners. Or, as I suggest in this post, it might also include the teachers.

Typically networked learning – at least within an institutional setting – is focused on how the students and the teachers are engaging in networked learning. More specifically, how they are using the LMS and associated institutional systems (because you can get in trouble for using something different).

But what about everyone else? If we live in a rapidly changing world where ubiquitous digital technology is transforming the very assumptions upon which we operate, aren’t we all learners who might benefit from network learning? Harking back to Nick’s description above

NL emphasises the possibility for technology and design to enable better connections between learners and between learners and resources

Which is the point I try to make in the earlier post, that network learning shouldn’t just be thought of as what the students and teachers engage in, but as

how the network of people (teaching staff, support staff, management, students), communities, technologies, policies, and processes within an institution learn about how to implement networked learning.

The argument made in this paper is that the use of digital technology to enhance learning and teaching in most formal educational institutions is so terrible because “everything is a network” is only thought to apply (if then) to learning and teaching, not the support and management roles.

Learning and knowledge are people things, aren’t they?

In this paper some colleagues and I draw on what Putnam and Borko (2000) have to say about new views of knowledge. Views of knowledge that certainly do not agree that knowledge is something that is solely in people’s heads. It’s a view that connects…

Better answers

Brigitte draws on the Wikipedia definition of networked learning

Networked learning is a process of developing and maintaining connections with people and information, and communicating in such a way so as to support one another’s learning.

That’s a better answer (IMHO). No explicit mention of technology or common purpose. But of course there are alternatives and this remains a short description that doesn’t offer much detail. What are good and bad ways of developing and maintaining connections? What is a connection? What is its form? How might it be formed?

It’s in answering these types of questions where the variety between different interpretations of NGL enters the picture. Exploring these different interpretations and finding one that works for them is one of the challenges for participants in the NGL course.

Putting it into practice

Formulating and justifying principles for action

I use the following definition of educational theory quite often because it resonates with my pragmatic view of theory. Hirst (2012) describes educational theory as

A domain of practical theory, concerned with formulating and justifying principles of action for a range of practical activities. (p. 3)

And that’s the aim of the NGL course, to encourage participants to draw upon their view of network learning to formulate and justify principles for action. Action that involves them planning some intervention into an act of teaching.

This post seeks to compare two different perspectives on network learning: one titled connected learning (getting a lot of traction and doing interesting stuff in the USA) and a more European view of network learning. What’s interesting is that both appear to formulate principles for action.

The formulation of principles for action that are based on an appropriate perspective of networked learning, and then the use of those principles to design a contextually appropriate intervention, is the main focus of the last task in the course.

Is it worth it?

Adam isn’t alone when he expresses the following related uncertainty

While I myself am a big enthusiast of implementation of ICT in education, I still haven’t convinced myself that online and distance curriculums actually offer learning advantages aside from flexibility and convenience

Indeed, this may be the big question for many people, but whenever people ask the “does it work” question with learning and teaching (with or without digital technologies), I am immediately put in mind of the following quote

That is why ‘what works’ is not the right question in education. Everything works somewhere, and nothing works everywhere. – Dylan Wiliam

A previous offering of NGL included a UK-based university educator teaching one of the sciences. Her definition of “what worked” was, not surprisingly, a very objective one. Either NGL worked, or it didn’t work. And you could only know if it worked if there were a double-blind, randomised, controlled trial. The gold standard for knowing if something works, or doesn’t work.

Along with Wiliam, I think education is much more difficult than that. It’s much more contextual. What works today, may not work tomorrow with the same learners.

Why is e-learning like teenage sex?

I’ve given a presentation that argues that almost all e-learning is like teenage sex. Not because I think that digital technologies cannot have any positive effect. But because I think the way that formal education institutions and the people within them understand and harness digital technologies remains extremely limited.

From this perspective, in this type of context, NGL is rarely going to provide advantages beyond flexibility and convenience. Especially when the mindsets that underpin how formal education institutions do anything are stuck in a very non-NGL view. Which is what we argued with the BAD/SET framework, where the D in BAD stood for Distribution and was defined as

the world is complex, dynamic, and consists of interdependent assemblages of diverse actors (human and not) connected via complex networks.

For me, network learning involves effectively recognising and leveraging that view of the world.

References

Hirst, P. H. (2012). Educational theory. In P. H. Hirst (Ed.), Educational Theory and Its Foundation Disciplines (pp. 3–29). Milton Park, UK: Routledge.

Putnam, R. T., & Borko, H. (2000). What do new views of knowledge and thinking have to say about research on teacher learning? Educational Researcher, 4-15.

Exploring Moodle Book Module usage – part 1 – background and planning

I’m due to have the slides for a Moodlemoot Australia presentation done in a few weeks. Time to get organised. The following is (perhaps) the first of a sequence of posts reporting on progress toward that presentation and the related research.

Background

My interest in research is primarily driven by the observation that most educational usage of digital technology to enhance learning and teaching is fairly bad. Typically the blame for this gets laid at the feet of the teaching staff, who are digitally illiterate, not qualified to teach, or laggards. My belief/argument is that the problem really arises because the environment within formal education institutions just doesn’t understand what is required to make a difference. Much of what they do (e.g. institutional standards for course sites, checklists, training, support documentation, design and support of technologies…) does little to help and tends to make the problem worse.

You want digitally fluent faculty?

A contributing factor to that is that institutional attempts to improve digital learning fail to be based on any insights into how people (in this case teaching staff and all those involved with digital learning) learn. How institutions implement digital learning actually gets in the way of people learning how to do it better.

Schema and the grammar of school

The ideas of schema and the grammar of school offer one example of this failure. This earlier post includes the following quote from Cavallo (2004), which establishes the link

David Tyack and Larry Cuban postulated that there exists a grammar of school, which makes deviation from our embedded popular conception of school feel as nonsensical as an ungrammatical utterance [1]. They describe how reform efforts, whether good or bad, progressive or conservative, eventually are rejected or denatured and assimilated. Reform efforts are not attempted in the abstract, they are situated in a variety of social, cultural and historical contexts. They do not succeed or fail solely on the basis of the merit of the ideas about learning, but rather, they are viewed as successful based upon their effect on the system and culture as a whole. Thus, they also have sociological and institutional components — failure to attend to matters of systemic learning will facilitate the failure of the adoption of the reforms. (p. 96)

The grammar of school problem is linked to the idea of schema, which in turn links to the following quote that I first saw in Arthur (2009) and which is taken from Vaughan (1986, p. 71)

[In the situations we deal with as humans, we use] a frame of reference constructed from integrated sets of assumptions, expectations and experiences. Everything is perceived on the basis of this framework. The framework becomes self-confirming because, whenever we can, we tend to impose it on experiences and events, creating incidents and relationships that conform to it. And we tend to ignore, misperceive, or deny events that do not fit it. As a consequence, it generally leads us to what we are looking for. This frame of reference is not easily altered or dismantled, because the way we tend to see the world is intimately linked to how we see and define ourselves in relation to the world. Thus, we have a vested interest in maintaining consistency because our own identity is at risk.

Evidence of schema in how digital technologies are used

Horsey, Horseless Carriage

The schema idea means that people will perceive and thus use digital technologies in ways that fit with their “integrated sets of assumptions, expectations and experiences”. This is an explanation for the horsey, horseless carriage way people respond to digital technologies. It’s why courses where the majority of students are online students and will never come onto a campus are still designed around the idea of face-to-face lectures and tutorials.

It also explains why, when I finally returned to teaching a course, I adopted the idea of a ramble for the structure of the course. It explains why the implementation of the ramble evolved into using the Moodle Book module the way it does today. The images below (click on them to see larger versions) illustrate the connection between my practice 20 years apart; more detail follows.

1996 – The 85321 "online" book
2016 – Online book

The 1996 image is a page from the study guide (I wonder how many people can still play the .au file containing the Wayne’s World II quote) for the Systems Administration course I taught in 1996. The 2016 image is a page from the “study guide” I developed for an Arts & Technologies C&P course.

I believe/suggest that the influence of schema is also a significant contributor to the practice of other teaching staff as they transition into digital learning. It’s a factor in why most course sites remain dumping grounds for lecture slides, and in the subsequent widespread growth in the use of lecture capture systems.

And it’s not just the teaching staff. Students have developed schema about what it means to be taught, and what it means to be taught at university. A schema developed either through direct experience, or via the experience of others and various media. The typical schema for university education involves large lecture halls and tutorials.


So what?

The above suggests that whenever students and teachers engage with a digital technology (or any change around it) and its use for learning and teaching, there are three main possibilities:

  1. It is seen as nonsensical and rejected.
    e.g. whatever was said doesn’t make sense in terms of the existing grammar rules and is seen as just being wrong.
  2. It sounds like something familiar and is modified to fit within the confines of that familiar practice.
    e.g. whatever was said sounds an awful lot like an existing use of grammar (even though it is different), and thus is interpreted as matching that existing use.
  3. The significant difference is seen as valued and existing practice is modified to make use of that difference.
    e.g. the different use of grammar is both understood as different and the difference is valued, and existing practice is subsequently modified to incorporate the new grammar.

If this is the case, then examining the use (or not) of a digital technology in learning and teaching should reveal evidence of these possibilities. This seems very likely, given the widespread complaints about the use of digital technology to enhance learning and teaching. Complaints that see most practice stuck at possibility #2 (at best).

If this is the case, then perhaps this way of thinking might also identify how to address this.

But first, I’m interested in seeing if use of a particular digital technology matches this prediction.

Use of the Moodle Book module

Due to a 2015 grant from the USQ OpenTextbook Initiative I’m going to explore the use of the Moodle Book module. The plan is to analyse the use of the Moodle Book module (the Book) at USQ to see how both learners and teachers are engaging with it, see if the above expectations are met, and figure out what might be done in terms of the support and development of the Moodle Book module to help improve this.

What follows is an initial map of what I’ll be exploring.

A major aim here is to explore whether a student or teacher using the Book has made the transition from possibility #2 (treating the Book as a print-based book) to possibility #3 (recognising that this is an online book, and using that difference). I’ve highlighted some of the following questions/analysis, which I think will be useful indicators of this transition. The darker the yellow highlight, the more strongly I think it might indicate someone making the leap to an online book.

Question for you: What other practices might indicate use that has moved from #2 to #3?

Which courses use the Book

First step is to explore whether the Book is being used. How many courses are using it? How many books are being produced with the module?

As the abstract for the talk suggests, early analysis revealed a growth in use, but I’m wondering how sound that analysis was. Hence there is a need to

  1. Correctly identify the number of course offerings using the Book each year.
  2. Identify the number of different teaching staff who are responsible for those courses.
    Longer term, it would be useful to ask these staff about their background and reasons for using the Book.
  3. Identify the type of courses using the Book.
  4. How many books are being produced by each course?
  5. How do the books fit into the structure of the course?
    1. Is the structure the same from offering to offering?
    2. How much does the number and content of the books change from offering to offering?
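The first couple of questions above boil down to a few database queries. The following Perl sketch shows one possible first pass, not the actual analysis: the mdl_course and mdl_book tables are standard Moodle, but the idea that the offering year can be pulled out of the course shortname is a hypothetical pattern that will vary between institutions.

```perl
#!/usr/bin/perl
# Sketch of the first step: how many course offerings contain at least
# one Book each year, and how many books do they contain?
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect( "DBI:mysql:database=moodle;host=localhost",
    "user", "password", { RaiseError => 1 } );

# Every course that contains at least one Book resource
my $sth = $dbh->prepare(
    "SELECT c.id, c.shortname, COUNT(b.id) AS books
       FROM mdl_course c JOIN mdl_book b ON b.course = c.id
      GROUP BY c.id, c.shortname" );
$sth->execute();

my ( %courses_per_year, %books_per_year );
while ( my ( $id, $shortname, $books ) = $sth->fetchrow_array ) {
    # Hypothetical shortname pattern: COURSE_YEAR_SEMESTER
    next unless $shortname =~ /_(\d{4})_/;
    my $year = $1;
    $courses_per_year{$year}++;
    $books_per_year{$year} += $books;
}

foreach my $year ( sort keys %courses_per_year ) {
    print "$year: $courses_per_year{$year} courses, ",
          "$books_per_year{$year} books\n";
}
```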

Characteristics of the book content

  1. Statistics around the level of readability of the text (e.g. Flesch-Kincaid etc).
  2. The structure of the book – are sub-chapters used.
  3. Are images, video, Moodle activities included?
  4. What about links? (see the sketch after this list)
    • Are there any links at all?
    • What is linked to?
    • Are links purely to external resources? 
    • How many links connect back to other parts of the course’s Books?
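As flagged in the list, here is a rough sketch of how the link questions might be approached. The mdl_book_chapters table and its content column are standard Moodle, but the crude href regex and the test for /mod/book/view.php URLs are simplifications for illustration rather than a robust parser.

```perl
#!/usr/bin/perl
# Sketch: pull the HTML of each Book chapter and classify its links as
# pointing back into a Book or pointing elsewhere.
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect( "DBI:mysql:database=moodle;host=localhost",
    "user", "password", { RaiseError => 1 } );

my $sth = $dbh->prepare("SELECT bookid, content FROM mdl_book_chapters");
$sth->execute();

my ( $book_links, $other_links ) = ( 0, 0 );
while ( my ( $bookid, $content ) = $sth->fetchrow_array ) {
    # Crude href extraction -- good enough for a first count
    while ( $content =~ /href\s*=\s*["']([^"']+)["']/gi ) {
        my $url = $1;
        # Links back to a Book chapter go through mod/book/view.php
        if ( $url =~ m{/mod/book/view\.php} ) {
            $book_links++;
        }
        else {
            $other_links++;
        }
    }
}

print "Links to Book chapters: $book_links\n";
print "Other links:            $other_links\n";
```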

Patterns in how the books are authored

  1. How are the books authored?
    • From scratch?
      1. Using the web interface?
      2. Via an import process?
    • Copied from previous offerings?
    • ?? other??
  2. How are they edited? 
    My expectation is that a teacher who sees the Book as a replacement for a print book will not be editing the books during semester.

Patterns in how the books are read/used

  1. Are students reading the books online or printing them out? (a rough sketch for this follows the list)
  2. Does printing always happen at the start of semester? Does it continue through semester? Does it drop off?
  3. When are students reading the books?
  4. What is the nature of the paths they take through the books?
    1. Do they read the books and the chapters in order?
    2. How long do they spend on each chapter?
    3. Do they revisit particular books?
  5. How many times do discussion forum posts in a course include links to chapters/sub-chapters within the books?
    • Posts written by teaching staff
    • Posts written by students
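As a starting point for the first question above, the following sketch counts, per user, how many Book 'view' versus 'print' events they generate. The table name and the 'viewed'/'printed' action values follow standard Moodle logging, but they are assumptions to verify against the local database rather than tested code.

```perl
#!/usr/bin/perl
# Sketch: per-user counts of Book view events versus print events,
# as a rough indicator of who reads online and who prints.
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect( "DBI:mysql:database=moodle;host=localhost",
    "user", "password", { RaiseError => 1 } );

my $sth = $dbh->prepare(
    "SELECT userid, action, COUNT(*)
       FROM mdl_logstore_standard_log
      WHERE component = 'mod_book' AND action IN ('viewed','printed')
      GROUP BY userid, action" );
$sth->execute();

my %per_user;    # userid => { viewed => n, printed => n }
while ( my ( $userid, $action, $count ) = $sth->fetchrow_array ) {
    $per_user{$userid}{$action} = $count;
}

# One CSV line per user: userid,views,prints
foreach my $userid ( sort { $a <=> $b } keys %per_user ) {
    my $viewed  = $per_user{$userid}{viewed}  || 0;
    my $printed = $per_user{$userid}{printed} || 0;
    print "$userid,$viewed,$printed\n";
}
```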

References

Arthur, W. B. (2009). The Nature of Technology: what it is and how it evolves. New York, USA: Free Press.

Cavallo, D. (2004). Models of growth – Towards fundamental change in learning environments. BT Technology Journal, 22(4), 96–112.

How many digital devices do you have?

In a couple of the courses I teach I ask students (for slightly different purposes) the question from the title of this post, “How many digital devices do you have?”.  In one of the courses that question takes the form of a quiz and looks something like the following.

Question text

How many different digital technologies do you own?
Select one:
a. 0
b. 1 to 5
c. 6 to 10
d. 11 to 20
e. More than 20

 What answer would you give?

Count them up folks. What answer would you give? I’ll give you some space to think about that before talking about what some other folk have said.

 

 

What others have said

Some of the students in another course (where the question is framed somewhat differently) have offered the type of answers I expected, based on the framing of the question.

Jay identifies 3 devices. Neema lists 2.

Thinking a bit further afield, I can probably count quite a few more than that in my house. I’ll ignore devices personal to other members of my family. This gets me the following list: laptop, 2 smart phones, digital camera, printer, various external drives, Apple TV device, T-Box, X-Box One. That’s 9.

 

 

 But that doesn’t really start to count them

Fleming (2011) writes that it is

estimated that today’s well-equipped automobile uses more than 50 microcontroller units (p. 4)

Wikipedia defines a microcontroller as “a small computer on a single integrated circuit containing a processor core, memory, and programmable input/output peripherals”.

So your car alone potentially has you well into double figures. Remember that Fleming was writing in 2011. If you have recently purchased the latest Mercedes E-Class, chances are the number of microcontroller units in your car goes well beyond 50.

And of course, with your thinking re-calibrated by this example, you can probably quite easily identify additional devices in your house that are likely to contain microcontrollers.

Implications

Digital devices are increasingly ubiquitous. Digital isn’t limited to a separate device like a computer, tablet, or smart phone. It’s wearable and in everything.

I expect most people not to be aware of just how reliant they are on digital technologies in everything they do. Hence it’s uncertain that they understand or are prepared for what this might mean for what they do. For example, I don’t think many people in higher education or education more broadly quite understand the implications this has for how those organisations operate, perform, or exist. I’m not convinced that the patterns they use to make sense of the world are ready yet to deal with these changes effectively.

But then I’m not convinced the technologists are either.

Interesting times ahead.

References

Fleming, B. (2011). Microcontroller units in automobiles. IEEE Vehicular Technology Magazine, 6(3), 4–8. doi:10.1109/MVT.2011.941888


Teacher presence in network learning

A new semester and the Networked and Global Learning course is running again. Apologies to those in the other courses I teach, but this course is consistently the most engaging and interesting. It’s a course in which I typically learn as much as the other participants. However, due to the reasons/excuses outlined in the last post, I haven’t been able to engage as much as I would have liked with the course.

This has me thinking about something Adam wrote, in particular the argument/observation from Rovai (2002) which Adam describes as

This is bringing to light the sense of disconnection students are often experiencing due to physical and psychological separation from teachers, peers and institutions

What follows are some random reactions to this particular quote and an attempt to connect it with my teaching.

Badly designed learning generates bad outcomes

As someone who has been working, learning and teaching online for a long time I am biased and this idea troubles me. In fact, it puts me in mind of the following point made in this recent post around the question of banning laptops in the classroom, because handwriting is better for learning

Those studies about the wonders of handwriting all suffer from the same set of flaws, namely, a) that they don’t actually work with students who have been taught to use their laptops or devices for taking notes. That is, they all hand students devices and tell them to take notes in the same way they would in written form. In some cases those devices don’t have keyboards; in some cases they don’t provide software tools to use (there are some great ones, but doing it in say, Word, isn’t going to maximize the options digital spaces allow), in some cases the devices are not ones the students use themselves and with which they are comfortable. And b) the studies are almost always focused on learning in large lecture classes or classes in which the assessment of success is performance on a standardized (typically multiple-choice) test, not in the ways that many, many classes operate, and not a measure that many of us use in our own classes. And c) they don’t actually attempt to integrate the devices into the classes in question,

In terms of student disconnection, is it arising from there truly being something essential that a physical face-to-face learning experience provides that can’t be provided in an online space?

Or, is it because the types of online learning experiences being examined by Rovai have not been designed appropriately to draw on the affordances offered by an online learning environment? Do the online learning experiences examined by Rovai suffer the same problem that most of the attempts to engage in open education illustrate? i.e. an inability to break out of the “persistent patterns of relations” (Bigum, 2012) that are formed by someone brought up teaching face-to-face?

Given that the abstract for Rovai (2002) includes

Data were collected from 375 students enrolled in 28 different courses, offered for graduate credit via the Blackboard e-learning system by a private university

This indicates that the “persistent patterns of relations” under examination in this paper are from a North American university in 2000/2001, where online learning was limited to the Blackboard LMS. A time and system unlikely to be described by anyone as offering the pinnacle of an online learning experience.

Might the sense of disconnection arise from the poor quality of the learning experience (online or otherwise), rather than the lack of physical presence?

Or is it simply that both teachers and learners have yet to figure out how to leverage the affordances of online learning?

What type of presence should a teacher have?

The following two images represent connections formed between participants in two discussion forums in a large course I teach (these are from first semester 2015). Each dot represents a participant. A red dot is a teacher, a blue dot a student. A connection between two people is formed when one of them replies to a post from the other.

This first image is from the general Question and Answers forum on the course site.

Forum network map

The second image is from the Introduction and welcome forum, where students introduce themselves and say hi to someone the same and someone different.

Introduction and welcome forum network map
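For those curious about how such maps are built, the reply connections can be extracted from the standard Moodle forum tables with something like the following sketch. Restricting it to a single forum would mean joining through mdl_forum_discussions; the credentials and the Gephi-style edge-list output are placeholders rather than the actual process used here.

```perl
#!/usr/bin/perl
# Sketch: each forum post records its parent post, so a self-join on
# mdl_forum_posts yields (replier, original author) pairs.
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect( "DBI:mysql:database=moodle;host=localhost",
    "user", "password", { RaiseError => 1 } );

my $sth = $dbh->prepare(
    "SELECT child.userid, parent.userid
       FROM mdl_forum_posts child
       JOIN mdl_forum_posts parent ON child.parent = parent.id" );
$sth->execute();

# Count how often each pair of participants is connected
my %edges;
while ( my ( $from, $to ) = $sth->fetchrow_array ) {
    $edges{"$from,$to"}++;
}

# Write an edge list a graphing tool can ingest
print "Source,Target,Weight\n";
print "$_,$edges{$_}\n" foreach sort keys %edges;
```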

In the first image, there is one red dot (me) that is strongly the center of all that’s going on. I’m answering questions. In the second image, the red dot that is me is only lightly connected.

Which is better? Of course it depends. Which is scalable in an effective way?

The Equivalency Theorem suggests that as long as one of student-teacher, student-student, or student-content interaction is high, deep formal learning can occur. High levels of more than one and the educational experience will be more satisfying.

So far the NGL course has been suffering from low student-teacher interaction.  I wonder about the other two? Time will tell.

Teacher as meddler in the middle

A couple of years ago I wrote this post as an example of an “as teacher” post – a requirement for the NGL course. Not a lot has changed, and all this talk of interaction and connection has me thinking again of the first question I was interested in two years ago

How I can be a more effective “meddler in the middle”?

In particular, how can I be more aware of the types of interactions students are having in the courses I teach, and subsequently what actions can I take to help strengthen them as necessary? If I do this, what impact will it have on student learning and their experience?

I wonder if it is the paucity of methods for understanding exactly what interactions are occurring that has me refining teaching materials. Materials that students may not be engaging with. I’m hoping that this project will help reveal how and if students are engaging with the content in at least one course. Anecdotally, it appears that for many, interaction with the content is little more than a box to tick. If borne out, this raises the question of how to get students to interact/engage effectively with the content.

There are similar questions to be explored around use of blogs and the connections between students….

References

Bigum, C. (2012). Edges , Exponentials and Education : Disenthralling the Digital. In L. Rowan & C. Bigum (Eds.), Transformative Approaches to New Technologies and student diversity in futures oriented classrooms: Future Proofing Education (pp. 29–43). Springer. doi:10.1007/978-94-007-2642-0

Rovai, A. (2002). Development of an instrument to measure classroom community. The Internet And Higher Education, 5(3), 197-211. http://dx.doi.org/10.1016/s1096-7516(02)00102-1

Valuing the “residue of experience” a bit more

For a while now I have been drawing on the following quote from Riel and Polin (2004)

Over time, the residue of these experiences remains available to newcomers in the tools, tales, talk, and traditions of the group. In this way, the newcomers find a rich environment for learning. (p. 18)

to explain why I encourage/require the use of various types of social media (blogs, social bookmarking, feed readers) in my courses. This 2014 post identifies the problem (what happens in a course site, stays and dies in a course site) and how the social media used in these courses helps address that problem. If you do a Google search for edc3100 blog, you will get another illustration of how at least some of the residue of experience remains available to newcomers in at least one of the courses.

The problem is that this year has revealed that the design of the course doesn’t yet value that residue of experience, at least not in terms of the main value measure for many students – assessment. Students gain marks for writing blog posts that link to posts from other students, but the code that does this marking only recognises currently enrolled students. Linking to the broader residue of experience doesn’t count.

Interestingly, this has only become an issue this year. Only this year have students been asking why they missed out on marks for links to other ("old") student posts. Leaving aside why it has only started this year, this post documents the move to valuing the residue of experience.

After implementing the code below, it appears that at least 28 students (about 25%) this semester have linked to blog posts from students in previous offerings of the course. It would be interesting to explore this further: see how prevalent the practice has been in previous offerings, and update these visualisations to show the connections between offerings.

What I need to do

The process will be

  • Refamiliarising myself with how the “valuing” is currently done.
  • Planning and implementing how to value the residue of experience.
  • Figuring out if/how to check how often the residue of experience has been used.

How it is currently valued

Some Perl code does the work. Details follow.

The BlogStatistics class gathers all the information about the blogs for students in the current course offering. A method generateAllStatistics does some of the grunt work.

This class also creates a MARKING data member for each student, based on the Marking class and its GenerateStats method. The Marking class gets its content from the bim_marking table (i.e. all the posts by the student).

GenerateStats accepts a reference to a hash that contains links to all the other blogs in the course (for the specific offering). It calls DoTheLinks (gotta love the naming) and passes it the hash ref to do the count.
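
As a rough sketch of my understanding of how that counting works (the class, method and data member names are those above; everything else is a guess rather than the actual implementation):

    package Marking;
    # Sketch only - the real DoTheLinks does more than this.
    # Assumes $self->{POSTS} is an array of hashrefs with a "content" key
    # (the post body from bim_marking) and %$course_blogs maps the blog
    # URLs of currently enrolled students to their owners.
    use strict;
    use warnings;

    sub DoTheLinks {
        my ( $self, $course_blogs ) = @_;

        foreach my $post ( @{ $self->{POSTS} } ) {
            # Does the post body mention any current student's blog URL?
            foreach my $url ( keys %$course_blogs ) {
                if ( $post->{content} =~ /\Q$url\E/ ) {
                    $self->{POSTS_WITH_STUDENT_LINKS}++;
                    last;    # count each post at most once
                }
            }
        }
    }

    1;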

One question is how much old data I currently have. It seems only the 2015 and 2016 data is easily accessible.

Planning and implementation

One approach would be (a rough code sketch follows the list)

  • BlogStatistics generates a list of old student blog URLs
    • add BlogStatistics::getOldStudentBlogs that creates %BlogStatistics::OLD_BLOGS DONE
  • BlogStatistics passes this into each call to Marking::GenerateStats  DONE
  • Marking::GenerateStats would pass this onto Marking::DoTheLinks DONE
    • also increment POSTS_WITH_STUDENT_LINKS if a link is to an old student blog DONE
    • increment POSTS_WITH_OLD_STUDENT_LINKS if a link is to an old student blog DONE
  • Modify the report generator to show OLD links DONE
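
A rough sketch of the first step, assuming (purely for illustration) that the old blog URLs can be exported to a one-URL-per-line text file; the real source of the 2015/2016 data will be different:

    package BlogStatistics;
    # Sketch only: build %BlogStatistics::OLD_BLOGS from a file of blog
    # URLs registered in previous offerings (file format is an assumption).
    use strict;
    use warnings;

    our %OLD_BLOGS;

    sub getOldStudentBlogs {
        my ( $self, $file ) = @_;

        open my $fh, '<', $file or die "Unable to open $file: $!";
        while ( my $url = <$fh> ) {
            chomp $url;
            $OLD_BLOGS{$url} = 1 if $url;
        }
        close $fh;
    }

    1;

DoTheLinks then also checks each link against %OLD_BLOGS and, on a match, increments both POSTS_WITH_STUDENT_LINKS and POSTS_WITH_OLD_STUDENT_LINKS.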


Planning changes to EDC3100 assignment 1

In the first half of the year there was a new assignment in EDC3100 designed both to enhance student learning and to experiment with making the data produced by students and markers as part of the assessment process more accessible for manipulation by software, i.e. the students and markers entered data into a spreadsheet.

It’s a new semester, time to reflect on that initial use and see what changes should and can be made.

Student results

Let’s start with student results. (Note: this is all a bit rough and ready)

Overall the average mark for the assignment was 13.8 (72%) out of 19 with a standard deviation of around 3.  But that’s for both parts of the assignment.

Given the current practice of using Word documents as assignment cover sheets, extracting the specific marks for the checklist/spreadsheet part of the assignment is difficult. But I have an Excel spreadsheet and I can run a script to get that data.
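
The sort of script involved might look like the following sketch. The file name, sheet layout and mark column are assumptions for illustration, not the actual spreadsheet.

    #!/usr/bin/perl
    # Sketch only: pull the checklist/spreadsheet marks out of an Excel file
    # and report the average and standard deviation. Assumes one row per
    # student with the mark in column B - the real layout will differ.
    use strict;
    use warnings;
    use Spreadsheet::ParseXLSX;
    use Statistics::Descriptive;

    my $workbook = Spreadsheet::ParseXLSX->new->parse('marks.xlsx')
        or die "Unable to parse marks.xlsx\n";
    my ($sheet) = $workbook->worksheets;
    my ( $first_row, $last_row ) = $sheet->row_range;

    my @marks;
    for my $row ( $first_row + 1 .. $last_row ) {    # skip the header row
        my $cell = $sheet->get_cell( $row, 1 );      # column B
        push @marks, $cell->value
            if defined $cell && $cell->value ne '';
    }

    my $stats = Statistics::Descriptive::Full->new;
    $stats->add_data(@marks);
    printf "n=%d average=%.1f stdev=%.1f\n",
        $stats->count, $stats->mean, $stats->standard_deviation;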

The average mark is about 9.5 (68%) out of 14, with a standard deviation around 2.

Let’s dig a bit deeper into the three criteria that made up that mark. The three criteria were

  1. Mark – students use a checklist to evaluate a lesson plan and its use of ICT and pedagogy.
  2. Acceptable use – focused on students' ability to identify a lesson plan they can use with respect to copyright.
  3. RAT – students use the RAT model to evaluate the use of ICT and pedagogy in the lesson plan.

The following table compares cohort performance on the criteria and overall.

Criteria          Average %   Stdev %
Overall              68.0       15.8
Mark                 75.2       17.2
Acceptable Use       63.2       16.7
RAT                  59.3       17.8

The RAT question was where students were least successful. It's also (arguably) the most difficult question. The checklist (Mark) criterion attracted the highest marks. Acceptable use is also quite low and needs some work.

Those last two criteria are where the focus will go for now.

Other thoughts and experiences

Student feedback

Student feedback included the following comments related to the assignment

Some of the items we were required to assess in Assignment One could have been better explained

more guidance was required for Assignment 1. I didn’t like the use of the Excel document

The last point connects to the issue of students not being able to justify their interpretation, which links back to points raised elsewhere. The first point is one to ponder. The results above suggest that's not where the real need lies.

Marker feedback

Feedback from markers included

  • Identifying use of an IWB, when in fact it’s just being used as a data projector.
  • Little understanding of what constitutes an authentic problem or connections beyond the classroom
  • Some surprise that even with 50 assignments to mark, there were few double ups of lesson plans.
  • Another liked the format in that it gave students a better handle on what to look for in an ICT-rich lesson, and found the RAT model useful for framing an evaluation.
  • The wording and nature of the statements for the acceptable use and RAT questions need to be clarified – too confusing (for marker and student)

One aspect of the assignment that troubled one of the markers was that the lesson plan chosen by the student only had to include some form of ICT. It didn't need to be rich or effective use of ICT. This was actually one of the aims of the assignment: to allow students to develop some appreciation of the breadth of what is possible and just how narrow use often is.

Questions asked during semester

  • Struggles to find a CC-licensed lesson plan.
  • Clarity about what makes an acceptable lesson plan
    • e.g. Can an American lesson be used?
    • Linked to concerns about Q10 and distinguishing between an appropriate lesson plan and whether or not you can use it due to copyright
  • Questions re: terms of use and uploading
  • What if I can’t find any information about copyright?
  • How can/should the lesson plan be put online?
  • The distinction between when a student is using an ICT and when the teacher is using it
  • Explanation of how the checklist questions are marked – e.g. those that don’t apply
  • Reporting bugs in the formatting of the cells

Personal thoughts

Early reflections on the semester included

The spreadsheet worked reasonably well. The checklist within the spreadsheet requires some refinement, as do some aspects of the rubric. The duplication of a Word-based coversheet needs to be removed.

 Other thoughts during the semester included:

  • Students had a tendency to treat the free text questions as requiring an essay.
  • The “pretend” context for the task wasn’t clear enough.
  • In particular, a problem about the exact legal status of ACME’s server, links and making copies of files.
  • Issues with specific questions and the checklist
    • The "web applications" option under "What is used" caused confusion about overlap with the "web browser" question
    • Q16 includes mention of print material around ICT
    • Q26 mentions embedded hardware, raising questions about it and its connection with IWBs
    • There appear to be strong connections between Q22 and A46
    • The purpose of Q10 is not clear enough, confusion with matching curriculum etc.
    • A feeling that there are too many questions and perhaps overlap
    • The criteria for the RAT question aren't clear enough about the quality of the response
      • e.g. not mentioning all uses of ICT and Pedagogy
      • Missing out on themes
      • Incorrectly identifying something as belonging to a theme
    • Suggestion for a drop down box around linkage of ICT to objectives: not related, somewhat related, essential, extends, transforms
  • More explicit scaffolding/basic activities around the evaluation questions
    • e.g. Is ICT being used to Question, Scaffold, Lecture, or in an authentic task?

Random suggestions

Due to institutional constraints (not to mention time) none of the changes to be made can be radical. In keeping with that, some initial suggested changes to explore include:

  1. Pre-submission checks
    1. What pre-submission checks should I run?
    2. Can they be run? How does that integrate with the Moodle assignment activity workflow? (a rough sketch of one such check follows this list)
  2. Remove the cover sheet entirely, just use the spreadsheet
    1. Need to include the learning journal mark into the spreadsheet
    2. Would be nice to do this automagically
  3. Tweaking the marking
    1. The criteria for Acceptable use and RAT questions need to be improved
    2. Look closely at each of the points about the questions
  4. Student preparation
    1. Make clear the need not to write essays for the free text questions
    2. Finding CC licensed lesson plans
      1. Great difficulty in finding those that are CC licensed
      2. Provide a list of prior sites people have used
      3. Generate some sort of activity to test understanding of CC with a specific example
    3. RAT Model
      1. More activities in learning paths
      2. Better labeling on  the spreadsheet
    4. More questions/activities around specific terms and concepts within the checklist
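
To make item 1 a little more concrete, the sketch below shows the kind of pre-submission check I have in mind: open the submitted spreadsheet and complain about required cells that are still empty. The cell locations and labels are invented for illustration; how (or whether) this could hook into the Moodle assignment workflow is the open question.

    #!/usr/bin/perl
    # Sketch only: check that a submitted checklist spreadsheet has some
    # (hypothetical) required cells filled in before it is submitted.
    use strict;
    use warnings;
    use Spreadsheet::ParseXLSX;

    # Cells that must be non-empty. These locations are illustrative guesses.
    my %required = (
        'B2'  => 'student name',
        'B3'  => 'student number',
        'B10' => 'lesson plan URL',
    );

    my $file = shift or die "usage: $0 submission.xlsx\n";
    my $workbook = Spreadsheet::ParseXLSX->new->parse($file)
        or die "Could not parse $file\n";
    my ($sheet) = $workbook->worksheets;

    my @problems;
    for my $ref ( sort keys %required ) {
        my ( $col, $row ) = $ref =~ /^([A-Z])(\d+)$/;
        my $cell = $sheet->get_cell( $row - 1, ord($col) - ord('A') );
        push @problems, "$required{$ref} ($ref) is empty"
            unless defined $cell && $cell->value ne '';
    }

    print @problems
        ? "Not ready to submit:\n" . join( "\n", @problems ) . "\n"
        : "Looks ready to submit\n";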

 

Planning an EDC3100 “installfest”

The following documents the planning of an “installfest” for the course EDC3100. Implementation and reflection will come later.

Rationale

The course encourages/requires students to modify their learning process in the course to engage in Jarche's seek/sense/share framework using a combination of a personal blog, Diigo, and the Feedly feed reader.

This is a radical departure for most students and a challenge for many. It results in a lot of time expended at the start of semester. For example, a past student shared her experience

I spent a lot of time trying to work out blogging, Diigo and Feedly and to be honest I am still only using the bare minimum with blogging

Not a good outcome, and apparently what has been used previously doesn't work, so an alternative is required.

As it happens, the same student also suggested a possible solution

My thoughts on changes or additions to the course that I would have found useful, would have been to come to a workshop near the start.

I've been pondering this suggestion and how it might work with the next offering of the course, which has around 100 online students. Being of a certain age, I remember installfests and have been wondering if that might be a useful model. This leads to questions such as:

Can something like an installfest be run in an online video-conference space? Will students participate? Will it help? How do I organise it within existing constraints?

Design thoughts

Linux Installfest HOWTO

Interestingly, I came across the Linux Documentation Project's Linux Installfest HOWTO; the following starts from that document.

The location will be virtual, not physical, so advice about preparing the physical location doesn't quite apply. However, the features of the Zoom service will need to be considered.

Consideration: Might the "breakout room" feature of Zoom be useful for organising people at different stages?

Bringing up the major constraint: there's likely to be only me to fulfill the various suggested roles. With more time I might have been able to organise additional help, but let's not talk about there being only one week between semester 1 and semester 2.

Consideration: Can the session structure be informed by the identified roles? e.g. a receptionist role could be taken by the initial part of the session which focuses on welcoming people to the space. Might also be useful to explicitly ask for volunteers who are a little further ahead than others, volunteers who might take on a Tier 1 support role.

Consideration: Can a Google document/sheet be used to get an idea of people’s knowledge, experience and comfort level with the various tools? Is completing this sheet part of the entry process? Perhaps something based on the data sheet?

Consideration: Have a space at the end for reflection? Perhaps people could do this, in part, on their blog? It might even be a good exercise to start them making connections etc. and to see all the tools working together.

Fit with the course requirements

Course requirements to consider include

  • Blog
    • Which blog?
    • Posts and their model.
    • Feeds
  • Trying to help students develop an appreciation of the value of developing conceptual models of how a technology works, moving beyond recipe following.
  • Challenge of explaining how these three tools fit together.
  • What about seek/sense/share, and what that means for how they learn.
    Question: Do the "why" first? Too abstract. Leave it until the end? Then they won't know why, and perhaps it will be too late and they'll be too tired from everything else. Perhaps show them how it all looks at the end?
  • Identity
    • anonymous or not
    • Professional identity
    • Not being an egg
  • How to demonstrate to people the process
    Select a volunteer and I help guide them through the process using some sort of scaffold (e.g. the slides or study desk)
  • How to give people the time to try it by themselves and perhaps get support
  • How to encourage/enable reuse of sections of the video to integrate into the learning paths

Questions to ask (form/spreadsheet)

  • Name
  • Are you willing to volunteer to be guided?
  • Blog
    • Do you have one set up?
    • Rate your knowledge about the blog?
    • Have you written a blog post?
    • Have you customised your blog?
  • Diigo
    • Do you have a Diigo account?
    • Do you have a Diigo extension installed?
    • Have you bookmarked something using Diigo?
    • Have you shared a bookmark to the EDC3100 Diigo group?
  • Feedly
    • Have you logged into Feedly?
    • Have you imported the EDC3100 OPML files?
    • Have you tried following anyone else?

 

Initial design

Welcome

The first 5+ minutes will focus on welcoming everyone and asking them to fill out the form.

Outline the purpose of the session.

Outline the structure

  • Welcome
  • Where are we up to, where are we going
  • Doing it
    • Diigo
    • Feedly
    • Blog
  • Pulling it all together

Where are we up to? Where are we going?

Explain the three tools and the seek/sense/share approach to learning. Touch only briefly on the why; focus on a concrete illustration showing my use of the tools. Link this to professional identity and the idea of being anonymous. Which tools need to be anonymous?

We want you to be able to do this by the end of the session.

Show the sheet behind the form – link to an idea they can use, mention the Google spreadsheet links in the EDC3100 Diigo group.  Find out where people are up to, think about approaches, ask for volunteers to be Tier 1 support – perhaps on the chat?  Or perhaps in a breakout room.

Outline the structure (easiest first, moving to more difficult)

  • Feedly
  • Diigo
  • Blog

Diigo

  1. Sign-up for account.
    Make sure to go to "learn more" – username and email (which email: personal or USQ?)
  2. Join the EDC3100 group.
  3. Show the emails I get and the approval process
  4. Install a Diigo tool
    Recommend Diigo extension – but Diigolet will do
  5. Bookmark a page for yourself
  6. Bookmark a page to the group????
  7. Do minute paper

Feedly

  1. Which account – link to professional identity
    1. Umail if only for University use – this is okay because it's not visible.
    2. Facebook or other account if using for personal
  2. Visit Feedly – Hit the get started button – login
  3. Import the OPML files.
  4. Add some content – get them to search in Feedly for something they are interested in
  5. Make the point that Feedly shows a copy, not the actual page; show how to access the actual page
  6. minute paper

Blog

  1. Which blog service
  2. Which identity – anonymous etc.
  3. Go to your choice of blog provider
  4. Hit the equivalent of “Create website”
  5. Follow the process
  6. Choose your configuration
  7. Write your first blog post — maybe suggest it should be linked to this post and reflect upon it.  Work in some ideas about reflection.
  8. Register the blog on the Study Desk — probably shouldn’t show this in Zoom.
  9. Talk about the WordPress reader and its relationship with Diigo
  10. Minute paper

Pulling it all together

  1. Can I get them to download the OPML file and import it into Feedly?
  2. Come back to the seek/sense/share process
    1. Seek – Start with Feedly
      1. See discussion forum posts
      2. See posts from other students
    2. Sense – on blog
    3. Share – on blog and Diigo
  3. Another minute paper???

Tasks

  1. Powerpoint scaffold for the session
  2. Google forms
    1. Where are you up to?
    2. Minute papers
      1. Feedly
      2. Diigo
      3. Blog
  3. Set up data system for EDC3100 S2
    1. Blog registration counter
    2. Creating OPML files (a rough sketch of the generation follows)
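
For the OPML task, a minimal sketch of how the file might be generated, assuming the registered blogs can be exported as name,feed_url lines (the real source would be the blog registration data):

    #!/usr/bin/perl
    # Sketch only: turn "name,feed_url" lines on standard input into an
    # OPML file that Feedly can import. The input format is an assumption.
    use strict;
    use warnings;

    sub xml_escape {
        my ($text) = @_;
        $text =~ s/&/&amp;/g;
        $text =~ s/</&lt;/g;
        $text =~ s/>/&gt;/g;
        $text =~ s/"/&quot;/g;
        return $text;
    }

    print qq{<?xml version="1.0" encoding="UTF-8"?>\n};
    print qq{<opml version="2.0">\n};
    print qq{  <head><title>EDC3100 student blogs</title></head>\n};
    print qq{  <body>\n};

    while ( my $line = <STDIN> ) {
        chomp $line;
        my ( $name, $feed ) = split /,/, $line, 2;
        next unless $feed;
        printf qq{    <outline type="rss" text="%s" xmlUrl="%s" />\n},
            xml_escape($name), xml_escape($feed);
    }

    print qq{  </body>\n</opml>\n};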