bim2 – student and marker fixes

Some more work on bim2, carrying on from last night. Aim here is to attack some of these tasks:

  • Fix problem with mirroring of student feeds.
  • Double check the marker screens.

Mirroring of student feeds

Last night I proposed three possible causes

  1. The caching/operation of the Moodle 2 version of Simplepie and bim.
  2. Left over database entries not cleaned up appropriately during testing.
  3. Errors crept into the mirroring code due to Moodle 2 database API changes.

Is bim2 currently using the Moodle 2 version of the SimplePie library for mirroring, RSS parsing etc.?

require_once( $CFG->libdir.'/simplepie/moodle_simplepie.php' );

Check.
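For reference, a minimal sketch of how the Moodle 2 SimplePie wrapper gets used for mirroring – the feed URL, the error handling and the fields pulled from each item are illustrative, not the actual bim code.

require_once($CFG->libdir . '/simplepie/moodle_simplepie.php');

$feedurl = 'http://example.com/student/feed';   // hypothetical student feed URL
$feed = new moodle_simplepie($feedurl);

if ($feed->error()) {
    // Couldn't fetch or parse the feed - report and bail out.
    debugging('bim: unable to retrieve feed ' . $feedurl . ': ' . $feed->error());
} else {
    foreach ($feed->get_items() as $item) {
        // Each $item is a SimplePie item; link/title/date are what mirroring stores.
        $link  = $item->get_link();
        $title = $item->get_title();
        $date  = $item->get_date('U');
    }
}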

Let’s try a brand new student. Yes, same problem. It’s not mirroring the feed properly; it keeps re-adding all the new posts. Since it’s inserting the same entries into the database each time, perhaps the problem is with the bim2 code.

So time to look at lib/bim_rss.php. Basic process is

  • Create details_link hash with the key being the link to posts in the dbase. – BROKEN
    Is giving an empty hash when there should be 6 entries… bim_get_marking_details is broken. Yep, it hasn’t been moved across to the Moodle 2 dbase API (a sketch of the fix is after this list).
  • Loop through items in the feed
    • if not already in the details_link hash
      • Prepare for entry, including checking if it’s an answer to a question.
      • insert it into the dbase
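A rough sketch of what the Moodle 2 version of bim_get_marking_details (and the hash built from it) needs to look like – the table and field names here are from memory, so treat them as assumptions rather than the final code.

// Sketch only - the bim_marking table/field names are assumptions.
function bim_get_marking_details($bim, $userid) {
    global $DB;   // Moodle 2: all database access goes through the $DB object.

    // The Moodle 1.9 code used get_records_select() with string interpolation.
    return $DB->get_records('bim_marking',
                            array('bim' => $bim, 'userid' => $userid));
}

// lib/bim_rss.php then builds the hash keyed by each post's link.
$details_link = array();
foreach (bim_get_marking_details($bim->id, $userid) as $detail) {
    $details_link[$detail->link] = $detail;
}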

Minor display problems

In fixing this, there are some minor display problems to fix. This is also probably a porting issue.

  • “showquestions” – apparently meant to be details about questions, rather than a link to a label.
    “showquestions” is a link to the page where students can view the questions they have to answer. However, it should have some descriptive text here. It appears that link_to_popup_window is deprecated in Moodle 2 (a sketch of the replacement follows this list).

    Not sure why this was a popup. Make it a normal window and move on…Oh joy, the language files are cached. Need to turn that off.

  • Links after “All posts” heading – descriptive text is a link
    Being caused by an unclosed anchor tag. Where is that shown? Ahh, lib/locallib.php – fixed.
  • Help buttons “TO DO” – all of them are to do.
    Another conversion not fixed. All those text files need to be moved into the lang file. Done.
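For the record, the sort of change involved in dropping link_to_popup_window – the URL, parameters and string name below are indicative only, not necessarily what ends up in bim.

// Moodle 1.9 style (now deprecated):
// link_to_popup_window('/mod/bim/view.php?id='.$cm->id, 'questions', $text);

// Moodle 2 style - just build a normal link with moodle_url and html_writer.
$url = new moodle_url('/mod/bim/view.php',
                      array('id' => $cm->id, 'screen' => 'ViewQuestions'));
echo html_writer::link($url, get_string('showquestions', 'bim'));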

That last task (the help buttons) also helped test the various transitions that a post can go through: unallocated, allocated/submitted, marked, suspended and released.

At this stage, the student part of bim2 is a go.

implode

The problem with get_marking_details above was caused by unfinished porting of how SQL "IN" statements are handled. The new get_in_or_equal method needs to be used.

This needs to be fixed before moving on. Need to search for implode.
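The pattern being replaced looks roughly like this – the table and field names are placeholders, the point is the implode versus get_in_or_equal change.

// Old bim1 approach - build the IN clause by hand:
//   $ids = implode(',', array_keys($details_link));
//   get_records_select('bim_articles', "id IN ( $ids )");

// Moodle 2 approach - get_in_or_equal() builds the SQL fragment and parameters.
global $DB;
list($insql, $inparams) = $DB->get_in_or_equal(array_keys($details_link));
$articles = $DB->get_records_select('bim_articles', "id $insql", $inparams);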

Note: Will need to keep an eye on bim_get_all_marker_stats as it needs to be closely tested.

Marker “screens”

A marker can do the following

  • View student details
    Needed to fix the help popups. Done.

    • View various details about the students – WORKS
    • Download OPML file for their students
      Error in SQL. This is all done in marker/generateOpml.php. Seems the problem is in bim_get_markers_students. Actually the userid isn’t being passed in. That’s fixed (see the sketch after this list).

      Another problem. It’s not actually returning anything for this marker. Actually, a range of problems from the bim1 code. This probably never worked.

      It does now.

    • Register a blog for a student
    • Send an email to all unregistered students – WORKS
  • Mark posts
    • View an overview of marking progress
    • Mark a particular post – which includes a range of changes
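The generateOpml.php fix boiled down to making sure the marker’s userid actually reaches the query as a bound parameter. A sketch of the shape of bim_get_markers_students – the SQL and the join through group allocations are my reconstruction and should be treated as assumptions.

// Sketch only - the schema details (bim_student_feeds, bim_group_allocation,
// and the marker being the userid on the allocation) are assumptions.
function bim_get_markers_students($bim, $userid) {
    global $DB;

    $sql = "SELECT sf.*
              FROM {bim_student_feeds} sf
             WHERE sf.bim = :bim
               AND sf.userid IN (SELECT gm.userid
                                   FROM {groups_members} gm
                                   JOIN {bim_group_allocation} ga ON ga.groupid = gm.groupid
                                  WHERE ga.bim = :bim2 AND ga.userid = :marker)";
    // Named placeholders must be unique, hence :bim and :bim2 for the same value.
    return $DB->get_records_sql($sql,
        array('bim' => $bim, 'bim2' => $bim, 'marker' => $userid));
}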

Will leave this for now.

Next time I need to continue going through the marker interface.

bim2 – status check and what’s next

So it appears that bim2.0 is increasingly needed (if you don’t know what bim is, check this out). The University of Canberra has gone to Moodle 2 and CQU is about to – the only two places I know of where bim is being used. Most importantly, I’m now teaching at a Moodle Uni and am seriously thinking about using bim, and my Uni is about to move to Moodle 2.

In keeping with my practice, this post is another in a list of posts that serve as a development journal (mainly because I’m coding so infrequently now I need to remember how to do this stuff). The aim here is to figure out where the porting is up to and perhaps identify where next.

The current code for bim2 can be found in this branch on github. I’ll try to keep it up to date.

Current status

The last work reported

Most of the basic code for bim2 is working, but the capabilities aren’t, i.e. identifying the type of user and sending them to the right function.

Mmm, all are working, but not the coordinator.

As it stands the coordinator stuff is working. I wonder if that’s because of a hard-coded kludge. Mmm, no. It seems to be coded as required.

I did have earlier problems because of versioning, it appears that’s fixed now.

Let’s test the other user types and make sure they are working

  • Student – working, at least the redirect, the display leaves something to be desired.
  • Marker – working as well.
  • Coordinator (as a teacher, not as admin user) – bugger, that’s not working “error/No capability to access this page”
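For context, the dispatch that the capability checks drive looks roughly like this – the capability names and the show_* function names are my shorthand for what bim does, not verbatim code.

// Sketch of view.php deciding which screen to show based on capabilities.
$context = get_context_instance(CONTEXT_MODULE, $cm->id);  // context_module::instance() from 2.2 on

if (has_capability('mod/bim:coordinator', $context)) {
    show_coordinator($bim, $cm, $course);       // hypothetical signatures
} else if (has_capability('mod/bim:marker', $context)) {
    show_marker($bim, $cm, $course);
} else if (has_capability('mod/bim:student', $context)) {
    show_student($bim, $cm, $course);
} else {
    // Assumed string id - produces the "No capability to access this page" error.
    print_error('nocapability', 'bim');
}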

To do list

Which leads me to this basic to do list

  1. Test out the marker and student views of BIM and find out what’s not working.
  2. Fix what’s not working for marker and student
  3. Figure out why the “coordinating” teacher is not being identified as such.

Not identifying “coordinating” teacher

I can feel this being a bugger to identify where it’s going wrong, mostly because my knowledge of the Moodle capability system (let alone the Moodle 2.x capability system) is close to non-existent.

Ha! Noticed that the teacher account only had the role “Coordinator” set and that the access.php file was not looking for “Coordinator” as a role to treat as a bim “coordinator”. Added “teacher” as a role and it’s working.

Is “coordinator” a standard Moodle 2 role? No, it doesn’t look like it.
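For anyone else porting: in Moodle 2 the mapping from standard roles to a module capability lives in the archetypes entry of db/access.php (it was legacy in 1.9). The capability name below is indicative of bim’s coordinator capability, so treat the details as a sketch.

// mod/bim/db/access.php - giving teacher-type roles the coordinator capability.
$capabilities = array(
    'mod/bim:coordinator' => array(
        'captype'      => 'write',
        'contextlevel' => CONTEXT_MODULE,
        'archetypes'   => array(
            'teacher'        => CAP_ALLOW,
            'editingteacher' => CAP_ALLOW,
            'manager'        => CAP_ALLOW,
        ),
    ),
);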

Amazing what some time away will do for perspective and clarity.

What’s working for the student role?

First, why aren’t the header/footer being displayed properly? Thinking I haven’t added the appropriate stuff in “show_student”. Yep, have to call the Moodle 2 equivalents of the print_header functions. Add that in, fix up the call to print the footer and we’re working.
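In Moodle 2 terms, the “appropriate stuff” is the $PAGE/$OUTPUT pair rather than print_header/print_footer. A minimal sketch, assuming $cm, $course and $bim are already set up by view.php:

// Replaces the old print_header()/print_footer() calls.
$PAGE->set_url('/mod/bim/view.php', array('id' => $cm->id));
$PAGE->set_title(format_string($bim->name));
$PAGE->set_heading(format_string($course->fullname));

echo $OUTPUT->header();
// ... the existing show_student() display code goes here ...
echo $OUTPUT->footer();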

So, what can a student do and what do I need to check

  • No feed registered
    • View bim – WORKS
    • Register invalid feed
      • URL is not a URL – WORKS
      • URL is not accessible – WORKS
      • URL is not a feed – WORKS
    • Register a valid feed – WORKS
  • Feed registered
    • view current details – with no new blog posts added. – WORKS
    • View current details – with new blog post added – Not working so well

Status for now

Most of the student functionality is working. However, every time the student views their bim activity, the number of mirrored posts goes up by 10. Needs to be fixed.

Some potential causes to investigate

  • The caching/operation of the Moodle 2 version of Simplepie and bim.
  • Left over database entries not cleaned up appropriately during testing.
  • Errors crept into the mirroring code due to Moodle 2 database API changes.

Curriculum innovation as an educational technology trend

Came across this post titled “Five Trends to Watch in Education Technology” via Stephen Downes’ OLDaily. I was really drawn to trend #1 – the Curriculum – because it connects with some ideas that have been burbling away for the last week or so, sparked by some questions from a colleague.

Rob Reynolds’ take on Curriculum as a trend includes

Across education, the very notion of curriculum is changing in a number of ways. We are seeing a shift to newer literacies and are even beginning to entertain significant changes to what core content needs to be taught/learned. There is certainly a growing realization that curricula today must be more flexible and open, and that the idea of fixed/static bodies of important information to be taught no longer works.

I’m currently teaching a course that aims to help K12 teachers figure out how they are going to use Information and Communication Technologies in their teaching. It’s a fairly standard University course. It has a set textbook. A weekly schedule. A set curriculum. A couple of large assignments. A course website.

Within that context/those constraints there have been a few interesting innovations, but it’s all still constrained by the curriculum, which is fairly set: it’s week 6, so we must be covering “Topic X”.

I just don’t see this rigidity fitting nicely with the notion of a “more flexible and open” curricula.

Curriculum/student mismatch

It doesn’t help that the current curricula approach doesn’t really fit the needs of the students.

There are almost 300 students in this course this term. 120 of them entirely online. Around the same number are split between three different campuses. The next offering will have 100+ students, all of them online.

These students are split across a number of teaching specialisations, including: Early Childhood, Primary, Secondary (including various disciplinary specialisations), and Vocational Education and Training (VET). What it means to use ICTs in early childhood is entirely different in a VET context.

The students come with very different backgrounds in technology – ranging from ex-IT professionals through to “it breaks if I touch it” – and a broad array of ages. See the following graph that shows the age distribution.

Age distribution

In addition, the course is nominally a 3rd year course. Which suggests you can assume that the students have two years of study toward an education degree under their belt. Of course, this is not the case. With exemptions/bridging etc there are some students for whom this is their first course at University.

Given all this diversity it really isn’t all that possible to design a single path through a set curriculum that is going to be appropriate for all these students.

Double loop learning and constructive alignment

Current accepted practice within higher education courses is something along the lines of constructive alignment. I, as the expert, identify the outcomes the students should achieve. I then design assessments and activities that enable the students to develop and demonstrate those outcomes. As typically implemented this approach is the opposite of a “more flexible and open” curriculum. All students are expected to work towards the same goals, often using the same sequence of activities to get there.

Over recent years the Australian higher education sector – with its growing diversity of multiple campuses and alternate delivery modes – has faced requirements to demonstrate that all students are gaining an equivalent learning experience. The tendency has been for equivalence to be reduced to consistent learning experience. Further driving out any notion of a “more flexible and open” curriculum.

A couple of days ago I blogged about a talk given by Gardner Campbell. In it he references Naughton’s From Gutenberg to Zuckerberg and his discussion of “double-loop” learning

it is not enough for managers to adjust their behaviour in response to feedback on the success of their actions relative to pre-established targets; they also need to reflect on the appropriateness, in the light of unfolding events, of the assumptions (the mental model) used to set up those actions and targets

Substitute “learners” for “managers” and you have some idea of what I’ve been thinking about. Is it possible/plausible/desirable for a University course to have a “more flexible and open” curriculum that seeks to encourage and enable double-loop learning amongst the students?

Is it possible to break university managers etc. out of the viewpoint that “innovation” around teaching and learning is just about doing the old curriculum with the new technology, and towards seeing it as developing new conceptions of what curriculum could be?

What are the really useful analytics?

Following on from recent posts around learning analytics, this post is going to try and drill down a bit further on one of the useful questions around analytics identified by Shelia MacNeil’s summary of a Gardner Campbell talk. The questions

What are the really useful “analytics” which can help me as a learner, teacher, administrator, etc? Which systems provide that data just now ? How can/do these stakeholders access and share the data in meaningful ways? How can we improve and build on these systems in ways which take into account the complexity of learning? Or as Gardner said, how can we start framing systems and questions around wisdom?

I’m going to focus on “What are the really useful “analytics” which can help me as a learner, teacher, administrator, etc?” and in particular on the “useful analytics” – I’ll call them patterns – that the Indicators project has identified so far.

So far, the Indicators project has really only looked for these useful patterns within one institution, though with a fairly large sample. The definition of useful here is something like “potentially of interest to learners, teachers or administrators”. In the short term, I’m hoping we can extend both the sample and also the number of institutions.

A side step into purpose

Before getting into that, a few words about the purpose of learning analytics. Barneveld et al (2012) suggest that there are some common business reasons behind analytics

increasing financial/operational efficiency; expanding local and global impact; establishing new funding models during a changing economic climate; and responding to the demands for greater accountability

which appears to be taken from an IBM white paper.

There follows an argument and some references that suggest analytics can help management a great deal to “cut costs and improve teaching and learning”. So much so that the following sentence repeats the mantra “improving efficiencies to saving money to enhancing student achievement”

While that’s certainly a potential application of analytics, like others I find this a limiting perspective. It almost automatically entails the sort of simplification that Gardner Campbell was arguing against.

Instead, I’m starting to lean more toward the idea of analytics being yet another “blind man” available to people to discover what is going on around e-learning/learning. Like the other blind men of e-learning research (I’ll use that term for want of a better label) – surveys, interviews etc. – it has its limitations (Gardner expressed a big picture limitation) but perhaps combined they can help.

The Blind Men and the Elephant

Perhaps the indicators project is more about helping combine the perspectives of the blind men in order to better understand what is going on. At this stage, just maybe the managers can get their hands on it. But not until we’ve tested the “sight” of the analytics blind man. Which leads nicely into the next section.

The patterns

Much of the initial work of the indicators project has simply been to investigate what the analytics blind man can actually see. What were the patterns in the usage data? Some of this exploration was inspired by the work of others; we were asking, “Does that same pattern hold here?”. We were particularly interested in seeing what patterns emerged from cross-LMS, cross-institutional, and longitudinal data. Here’s a quick list of what we’ve looked at so far, with more explanation following.

First, the list

  • Does LMS feature adoption change over time and between systems?
  • Is there a link between LMS activity and student grades?
  • Is there a link between LMS activity and external factors (e.g. staff participation, staff background, instructional design involvement, student age)?
  • Investigating critical success factors.
  • Differences between LMSs.

Feature adoption

An LMS comes with a host of features. Which features are used? Does feature adoption change over time? Based on the work of Malikowski et al (2006, 2007, 2008) we initially examined feature adoption within a locally grown LMS, Blackboard use at the same institution, and eventually Moodle feature adoption.

Malikowski et al identified five categories of LMS features. Our initial exploration of four predominant features at the one institution from 2005-2009 revealed some widely different results, as summarised in the following graphs. Some explanation of the graphs:

  • The green and purple lines represent the top and bottom ranges found by Malikowski et al.
  • The black continuous line represents feature adoption with this institution’s version of the Blackboard LMS.
  • The black dashed line represents feature adoption with a locally produced “LMS”.

Feature adoption - Transmit Content - Wf vs Bb

The transmit content usage for Wf (locally produced LMS) only includes optional content distribution. Wf automatically produced course websites with a range of information. Academics could optionally add more material.

Feature adoption - Class Interaction- Wf vs Bb

Use of class interaction features were significantly higher in Wf than Blackboard and what was found by Malikowski et al.

Feature adoption - Evaluating Students - Bb vs Wf

Feature adoption - Evaluating Courses - Bb vs Wf

Student activity and grades

Dawson et al (2008) found significant differences between low and high performing students in the number of online sessions, total time online and the amount of active participation in discussion forums, i.e. the more a student used the LMS, the better their grade.

We found that this generally applied for our students, whether “usage” was measured by visits to the LMS, posts to discussion forums, or replies to discussion forums. However, it didn’t apply to all groups of students. The institution had three types of students, and the following graph shows how one group (the dotted line) doesn’t show this pattern. For this group, students who received High Distinctions and Distinctions had lower LMS usage than students who received a Credit.

Staff interaction impact on forum posts and replies

Impact of external factors

We then explored whether other factors influenced this link between student usage and grade.

For example, would online courses taught by the education academics (teacher education) be any different from the other courses? The following graph shows somewhat similar trends, but it also shows that education (EDED) courses have discussion forums that are, on average, visited less than those of other courses.

Hits on course sites and forums

And when looking at the average number of posts/replies by students the expected pattern breaks down. Especially for replies.

Forum posts and replies

We found that staff involvement with course sites made a difference. The following graphs show the usage/grade pattern for courses where staff accessed the course website less than 100 times during the course.

Average student hits on course site/discussion forum for super low staff participation courses

Average student posts/replies on discussion forums for super low staff participation courses

We also found that age changed the pattern and level of usage: older students used the LMS more, younger students less.

Critical success factors

The choice of external factors was informed by Fresen’s (2007) work, which identified a “taxonomy of factors to promote quality web-supported learning”.

Differences between LMSs

Beer et al (2010) compared various patterns between Blackboard (pre-2010) and Moodle (2009 and beyond). Findings included

  • Number of clicks on Moodle course sites somewhat lower than on Blackboard.
  • Average time on site was about the same between the two LMS.
  • The average number of pages per visit on Moodle was less than for Blackboard (11.37 versus 24.51).

References

Barneveld, A. V., Arnold, K., & Campbell, J. P. (2012). Analytics in Higher Education: Establishing a Common Language. Business (pp. 1-11). Retrieved from http://net.educause.edu/ir/library/pdf/ELI3026.pdf

Beer, C., Clark, K., & Jones, D. (2010). Indicators of engagement. Curriculum, technology and transformation for an unknown future. Proceedings of ASCILITE Sydney 2010 (pp. 75-86). Sydney. Retrieved from http://ascilite.org.au/conferences/sydney10/procs/Beer-full.pdf

Dawson, S., McWilliam, E., & Tan, J. P. L. (2008). Teaching smarter: How mining ICT data can inform and improve learning and teaching practice. Melbourne. Retrieved from http://www.ascilite.org.au/conferences/melbourne08/procs/dawson.pdf

Fresen, J. (2007). A taxonomy of factors to promote quality web-supported learning. International Journal on E-Learning, 6(3), 351-362.

Malikowski, S. (2008). Factors related to breadth of use in course management systems. Internet and Higher Education, 11(2), 81-86.

Malikowski, S., Thompson, M., & Theis, J. (2006). External factors associated with adopting a CMS in resident college courses. Internet and Higher Education, 9(3), 163-174.

Malikowski, S., Thompson, M., & Theis, J. (2007). A model for research into course management systems: bridging technology and learning theory. Journal of Educational Computing Research, 36(2), 149-173.

Explorations of narrative research

For a long time I’ve had a vague interest in narrative research, i.e. it’s one of those things I always meant to learn more about. Here are some initial explorations.

Narrative approaches to education research

My google for “narrative research method education” turns up this site from the UK as the #1 hit. I’ll start there.

Connects strongly with me from the start due to this “Human beings are storying creatures. We make sense of the world and the things that happen to us by constructing narratives to explain and interpret events both to ourselves and to other people.”

Dave Snowden often uses Homo Narrans as an alternate label for the species. These folks have an academic reference for something similar

Indeed, somewhat playfully, it has been suggested that there is a case for revising the term homo sapiens to ‘homo fabulans – the tellers and interpreters of narrative’ (Currie, 1998: 2).

Lots of discussion here; I particularly liked this

Bruner has suggested that there are two basic ways in which human beings think about, make sense of, and tell about the world: narrative cognition and logico-scientific paradigmatic cognition (Bruner, 1986). Essentially, logico-scientific cognition is concerned with universals, empiricist reasoning and proof: and narrative cognition, with how the particular and specific contribute to the whole.

Especially the last point about the “particular and specific” having contributions to make to the whole.

Richardson’s (2000) criteria for evaluating narrative papers

  • Substantive contribution.
  • Aesthetic merit.
  • Reflexivity and participatory ethics.
  • Impact.
  • Experience-near.

Deleuze and rhizomes get a mention for a number of things, including structure.

List of narrative approaches

  • Autoethnography.
  • Ethnographic fiction
  • Poetry.
  • Performance ethnography.
  • Mixed genres.
  • Writing as a method of inquiry.
  • Narrative interviewing.

It would appear that autoethnography is the approach currently most appropriate. Resources to follow up with include

  • Sparkes A (2001) Auto-ethnography: self indulgence or something more In: Bochner A and Ellis C (eds) Ethnographically Speaking Alta Mira Press CA
  • Bochner A (2000) Criteria Against Ourselves Qualitative Inquiry, Volume 6 Number 2, pp.266 – 272
  • Denzin, N. (2003) Performing (Auto)Ethnography: The Politics and Pedagogy of Culture (Thousand Oaks, Sage).
  • Ellis C and Bochner A (2000) Auto Ethnography, Personal Narrative, Reflexivity: Researcher As Subject, In Denzin N and Lincoln Y (eds) (2nd Ed) Handbook of Qualitative Research Sage Thousand Oaks
  • Etherington, K. (2004). Becoming a reflexive researcher. London: Jessica Kingsley

Autoethnography

So let’s explore this little thread a bit. Wikipedia is about as good a place as any to start.

Apparently I’m leaning towards analytic autoethnography, rather than evocative autoethnography, as per Ellingson and Ellis (2008, p. 445) – as quoted on Wikipedia

Analytic autoethnographers focus on developing theoretical explanations of broader social phenomena, whereas evocative autoethnographers focus on narrative presentations that open up conversations and evoke emotional responses.

But perhaps not; in some of the other literature there appears to be some disquiet about the rationale of analytic autoethnography.

This captures an aspect/perspective interesting to me (again from Wikipedia)

According to Bochner and Ellis (2006), an autoethnographer is “first and foremost a communicator and a storyteller.” In other words, autoethnography “depicts people struggling to overcome adversity” and shows “people in the process of figuring out what to do, how to live, and the meaning of their struggles” (p. 111).

More resources

  • 35(4), August 2006 of the Journal of Contemporary Ethnography
  • 13(3), Summer 2007 of Culture and Organization
  • Humphreys, M. (2005). Getting Personal: Reflexivity and Autoethnographic Vignettes, Qualitative Inquiry, 11, 840-860
  • Ellingson, Laura. L., & Ellis, Carolyn. (2008). Autoethnography as constructionist project. In J. A. Holstein & J. F. Gubrium (Eds.), Handbook of constructionist research (pp. 445-466). New York: Guilford Press.
  • Maréchal, G. (2010). Autoethnography. In A. J. Mills, G. Durepos & E. Wiebe (Eds.), Encyclopedia of case study research (Vol. 2, pp. 43–45). Thousand Oaks, CA: Sage Publications.

“Here I stand” – Campbell’s concerns on analytics and other stuff

Continuing a re-engagement with analytics I spent some time listening to Gardner Campbell’s talk to the LAK’12 MOOC – Here I Stand and from there followed various links.

Gardner captures one of my major concerns with how analytics may proceed, especially within institutions that are increasingly driven by accountability, efficiency and other concerns – concerns they are responding to with top-down management. Gardner uses the metaphor of the human mind/learning being as complex as M-Theory (actually more complex), with learning analytics as commonly thought of being the equivalent of measuring M-Theory with a simple Cartesian graph.

The end result is that it simplifies learning, and how we treat it, to an extent that is meaningless.

He connects this view of analytics with the LMS approach to e-learning and the traditional nature of curriculum, which are all in the simple domain. Learning analytics just continues this. Lots of imagery of school as a feedlot or a Skinner box.

Wikipedia image of Skinner box

Gardner talks about four strong cautions for analytics:

  1. “Student success”
    Typically defined within analytics as the student passing, which doesn’t mean the same as succeeding in life – e.g. the example given of a high-performing high school student with no idea of what to do next.
  2. Complexity
    A lot on this that resonates, more below.
  3. Points of “Intervention”
    Just one idea is that of an analytics system that, rather than intervening just before the student fails (as most current analytics projects try to do), intervenes just as the student begins to understand.
  4. The “Third wave”

Draws on John Naughton – From Gutenberg to Zuckerberg: What you really need to know about the Internet (2012) – to illustrate “Complexity is the new Reality”

  1. Non-linear
  2. Feedback matters – a lot
  3. Systems demonstrate self-organisation
  4. EMERGENCE – synergies – new phenomena

Naughton also talks about double-loop learning, where success means more than positive outcomes “relative to pre-established targets” (which sounds very much like learning objectives). Instead it means that learners “need to reflect on the appropriateness, in the light of unfolding events, of the assumptions (the mental model) used to set up those actions and targets”.

Gardner’s quote (or close to it somewhere in here) is “A real learning analytics system must be able to learn.”

Also a mention of the Paradox of the Active User.

Other links

Shelia MacNeil offers another summary of Gardner’s talk and points to other work. It was from Shelia’s post that I came across Exploiting activity data in the academic environment which is a somewhat broader example of analytics including some useful insights into privacy etc issues around the data.

Shelia identifies some very useful questions

What are the really useful “analytics” which can help me as a learner, teacher, administrator, etc? Which systems provide that data just now ? How can/do these stakeholders access and share the data in meaningful ways? How can we improve and build on these systems in ways which take into account the complexity of learning? Or as Gardner said, how can we start framing systems and questions around wisdom?

An earlier blog post from Gardner arose out of reading this book (I’m really learning to dislike books that aren’t available on Kindle).

Implications for the indicators project

The types of questions identified are exactly the areas which the Indicators Project was (and is about to start again) attempting to explore. The point about complexity is also timely, as that is the perspective that will underpin our work. Consequently I will be reading a bit more of Naughton.

I especially like the point about double loop learning. For three main reasons

  1. It captures one important distinction between traditional business intelligence approaches and what we hope to do with the indicators project.
  2. It highlights how we’d like to use analytics, i.e. to help university academics engage in double loop learning about how and why they teach.
  3. It frames a concern I have about the outcomes focus of much university education, i.e. we’re measuring students against outcomes we think are important and have established ahead of time, rather than asking them to reflect on their assumptions and mindsets.
    In particular, I’m thinking this might be an interesting point of departure for thinking about how courses I’m responsible for might evolve.

Learning analytics and study behaviour: A pilot study

It looks like we’re going to start playing around with the Indicators project again. So it’s time to start reading up a bit on the learning analytics literature and see where the interesting work is. First up is

Phillips, R., Maor, D., Cumming-Potvin, W., Roberts, P., Herrington, J., Preston, G., Moore, E., et al. (2011). Learning analytics and study behaviour: A pilot study. In G. Williams, P. Statham, N. Brown, & B. Cleland (Eds.), ASCILITE 2011 (pp. 997-1007). Hobart, Tasmania.

The following is a summary/reflection of reading through it.

Abstract

Describes a study that looks closely at the analytics data of four students – interacting mainly (it appears) with lecture capture software – and then interviews them to check the assumptions drawn from the analytics. It found that the analytics had some limitations, which were supplemented nicely by the qualitative data. Makes some suggestions for further research around the methodology.

The aim is to find out how students engage with and study in e-learning environments, but this study focuses on the lecture capture tool.

Background/intro

Makes mention of “Much e-learning research over the years has been based on quantitative data largely derived from the perceptions of students”, which has some limitations. Other work is qualitative around individual contexts. This work seeks to combine descriptive/qualitative data with learning analytics.

Analytics has a history, but the meaning of the data is not always clear. Usage logs record behaviour without explaining why, suggesting care needs to be taken in analysing and interpreting the data.

Prior work

Some mention of the lecture capture work, which is seen to focus on the technology, not the learning environment as a whole.

This was the impetus for our current work, which holistically examines a unit of study, and uses learning analytics to gain a richer understanding of what students actually do in a technology-enhanced learning environment.

The focus being on learning processes used by students, rather than learning outcomes.

Lectopia usage patterns

Summarises prior work around analytics from the lecture capture system, identifying “types” of student users: conscientious, good-intentioned, repentant, bingers, crammers, one-hit wonders, and random users.

This work reports on a pilot study interviewing students with diverse usage patterns to find out what’s going on.

Method

..pragmatic, mixed-methods paradigm of inquiry using a modified design-based research approach..

Data sources: their analysis tool, SNAPP, standard LMS usage reports, assessment results, attendance logs for lectures, interviews with teachers, and semi-structured interviews with students.

Received ethics approval for an approach that included identifying students and handling this appropriately. Some difficulties getting a good sample of students to interview – became a “convenience sample”

Students drawn from a 3rd year sociology of education course. ~150 students on main campus, ~50 students on regional campus, ~100 DE students. 1 hour lecture and 2 hour workshop for internals. Lectopia plus discussion forum activities for DEs. Numerous readings.

109 students accessed lecture recordings

Recording usage appears to match the expected peaks (lead up to assessment) and troughs (other times).

Interviewed students were all internal.

Discussion

Interview data from 4 students is summarised. And shows very different approaches to study.

the four cases reported here start to illustrate some of the complexity of the modern, technology-enhanced learning environment.

A range of limitations of the study are identified and then

two major implications arose from this analysis: a need for the broadening of the mechanisms for identifying student behaviour patterns; and the application of the methodology to other contexts.

Students don’t visit the lecture capture system enough to give useful data. Broader LMS usage needs to be examined.

Suggests the addition of a mid-term survey of students about their perceptions of technology use, and a wider range of data.

Talks about selecting more units to include by reviewing courses for alignment.