Exploring current institutional e-learning usage

The following is a summary of an exploration of the recent literature analysing university LMS usage, and some thinking about further research.

In summary, I think there’s some interesting work to be done using analysis of LMS databases to examine the evolution (or not) of student and staff usage of the LMS over a long period of time, using a range of different indicators and supplementing this with participant observation, surveys, interviews etc.

Background

Way back in 2009 we wrote

Extant literature illustrates that LMS system logs, along with other IT systems data, can be used to inform decision-making. It also suggests that very few institutions are using this data to inform their decisions… When it comes to Learning Management Systems (LMS) within higher education it appears to be a question of everyone having one, but not really knowing what is going on. (Beer et al, 2009, p. 60)

That paper was the first published work from the Indicators Project, which was designed to increase awareness of what is being done with institutional LMSs and, consequently, to help address questions such as: what can and does influence the quantity and quality of LMS usage by students and staff?

Along with many others we thought that “LMS system logs, along with other IT systems data, can be used to inform decision-making” (Beer et al, 2009, p. 60). Since then I’ve seen very little evidence of institutions making use of this data to inform decision making. At the same time the literature has continued to suggest that it is possible. For example, Macfadyen and Dawson (2012, p. 149) argue that

Learning analytics offers higher education valuable insights that can inform strategic decision-making regarding resource allocation for educational excellence.

but even they encountered the socio-technical difficulties that can get in the way of this.

It’s now four years later, I’m at a different university and there remains little evident use of learning analytics to understand what is going on with the LMS and why. It appears time to revisit this work, see what others have done in the meantime and think about what we can do to contribute further to this research. The following outlines an initial exploration and thinking about how and what we might do.

Findings and ideas

In Beer et al (2009) we described the project as

intended to extend prior work and investigate how insights from this data can be identified, distributed and used to improve learning and teaching by students, support staff, academic staff, management and organizations.

We’re probably still at the identification stage: figuring out what can be derived from the available data, and exploring and testing interesting indicators.

Little whole institution, longitudinal analysis

Much of the published work I’ve seen has focused on snapshots or short time frames: a semester, or perhaps a whole year. Some isn’t even at the institutional level, but covers just a handful of courses.

Question: Has anyone seen any research comparing LMS usage over 4/5 years?

Why hasn’t this happened? Perhaps some of these help explain it:

  1. Over a 5-year period most institutions will have changed LMS. Cross-LMS comparisons are difficult.
  2. No-one’s kept the data. Most IT divisions are looking to save disk space (after all, it’s such an expensive resource these days) and have purged the data from a few years ago. Or at least, have it backed up in ways that make it a bit more difficult to get to.
  3. Over a 5-year period, there’s probably been, or is about to be, an organisational restructure brought on by a change of leadership (or other difficulties) that focuses people’s attention away from looking at what’s happened in the past.
  4. Looking at the data might highlight the less than stellar success of some strategies.

What adoption path and why?

It appears that there is a gradual increase in usage of the LMS over time (“usage” is an interesting term to define). I wonder how widely this pattern applies. Is it impacted by various institutional factors? Is the “technology dip” visible?

Mapping the adoption trend over time and exploring factors behind its change could be interesting.
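
As a rough illustration of how that mapping might start, here’s a minimal sketch, assuming pandas and a hypothetical activity-log export (the file names and columns are illustrative, not from any particular LMS), that computes two simple per-term adoption indicators:

```python
# A sketch only: aggregate a hypothetical LMS activity-log export into
# per-term adoption indicators. File names and columns are illustrative.
import pandas as pd

# Assumed columns: user_id, role ('student'/'staff'), course_id, term, timestamp
logs = pd.read_csv("lms_activity_log.csv", parse_dates=["timestamp"])

# Indicator 1: distinct active users per term, split by role.
active_users = (
    logs.groupby(["term", "role"])["user_id"]
        .nunique()
        .unstack(fill_value=0)
)

# Indicator 2: proportion of offered courses with any recorded activity,
# assuming a separate catalogue export listing every offering per term.
catalogue = pd.read_csv("course_catalogue.csv")          # term, course_id
offered = catalogue.groupby("term")["course_id"].nunique()
active = logs.groupby("term")["course_id"].nunique()
adoption_rate = (active / offered).rename("course_adoption_rate")

# A long-run view of both indicators, term by term.
print(active_users.join(adoption_rate))
```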

Student usage of features – adoption measure?

Macfadyen and Dawson (2012, p. 157)

A more detailed understanding of what, exactly, is occupying student time in LMS-supported course sites provides a more meaningful representation of how an LMS is being used, and therefore the degree to which LMS use complements effective pedagogical strategies.

Perhaps adoption of an LMS feature should be measured by the percentage of student time spent using that feature?

Might open up some interesting comparisons between teacher expectations and student practice.

This could be an interesting adaptation of MAV’s heatmaps.

Malm and DeFranco (2011) suggest logins divided by enrolment.
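
To make the “percentage of student time per feature” idea concrete, here is a minimal sketch under some loud assumptions: the clickstream columns are hypothetical, and because raw LMS logs record events rather than durations, time on a tool is estimated from the gap to the student’s next click, capped at an arbitrary 30 minutes of idle time.

```python
# A sketch only: estimate each LMS tool's share of total student time from a
# hypothetical clickstream export (student_id, course_id, tool, timestamp).
import pandas as pd

clicks = pd.read_csv("student_clicks.csv", parse_dates=["timestamp"])
clicks = clicks.sort_values(["student_id", "timestamp"])

# Time "spent" on a click = gap until the same student's next click,
# capped at 30 idle minutes (an arbitrary assumption); last clicks get 1 minute.
next_click = clicks.groupby("student_id")["timestamp"].shift(-1)
minutes = (next_click - clicks["timestamp"]).dt.total_seconds() / 60
clicks["minutes"] = minutes.clip(upper=30).fillna(1)

# Percentage of estimated student time spent in each tool/feature.
time_by_tool = clicks.groupby("tool")["minutes"].sum()
share_by_tool = (time_by_tool / time_by_tool.sum() * 100).round(1)
print(share_by_tool.sort_values(ascending=False))
```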

Other indicators

  • Average user time online (Macfadyen and Dawson, 2012).
  • Student usage of LMS tools by minutes of use time per student enrolled (Macfadyen and Dawson, 2012).
  • Percentage of content by type (Macfadyen and Dawson, 2012).
  • Distribution of average student time per “learning activity category” (Macfadyen and Dawson, 2012), based on earlier work allocating LMS tools to four categories of activities.
  • Correlation between student achievement and tool use frequency (Macfadyen and Dawson, 2012).
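
A few of these indicators are straightforward to compute once the click and time data is in hand. The sketch below is illustrative only: it reuses the per-click minutes estimate from the earlier sketch and assumes hypothetical enrolment and grade exports.

```python
# A sketch only: three of the indicators above, reusing the per-click minutes
# estimate from the previous sketch plus hypothetical enrolment/grade exports.
import pandas as pd

clicks = pd.read_csv("clicks_with_minutes.csv")   # student_id, course_id, tool, minutes
enrolments = pd.read_csv("enrolments.csv")        # student_id, course_id
grades = pd.read_csv("grades.csv")                # student_id, course_id, final_mark

# Average (estimated) time online per student.
avg_time_per_student = clicks.groupby("student_id")["minutes"].sum().mean()

# Minutes of tool use per enrolment, across all courses.
minutes_per_tool = clicks.groupby("tool")["minutes"].sum() / len(enrolments)

# Correlation between final mark and tool-use frequency (clicks per tool).
freq = (clicks.groupby(["student_id", "course_id", "tool"]).size()
              .unstack(fill_value=0)
              .reset_index())
merged = freq.merge(grades, on=["student_id", "course_id"])
correlations = merged.drop(columns=["student_id", "course_id"]).corr()["final_mark"]

print(avg_time_per_student, minutes_per_tool, correlations, sep="\n\n")
```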

Various bits and pieces found

The planned process, and the summary below, goes something like this:

  • Explore citations of Beer et al (2009).
  • Explore citations of Malikowski et al

    Some of the inspiration for our work.

  • Explore existing literature I’ve saved.
  • Do a broader search.
  • Stuff that just came up in the above.

Citations of Beer et al

According to Google Scholar, Beer et al (2009) has been cited by 16, and only 4 or so of those citations are our own publications.

  • Agudo-Peregrina et al (In Press)
    Defines 3 classifications of interactions (by agent, frequency, and mode) and evaluates their relation to academic performance across VLE-supported face-to-face (F2F) and online learning. Empirical study with data from 6 online and 2 VLE-supported courses. A relationship to performance was found only with the online courses, not the VLE-supported ones.

    Beer et al (2009) is mentioned as part of the literature focusing on the relationship between interactions and student performance, and the Indicators project gets a mention. Identifies 6 main areas for future research:

    1. moderating factors of interactions in online courses, e.g. user experience in the use of the VLE;
    2. capture of “PLE” and other non-LMS data;
    3. analysis of interactions based on semantic load;
    4. inclusion of static/semi-static user data to allow customization;
    5. complementary use of data visualisation to help explain and steer the learning process; and
    6. development of recommender systems.
  • Goldsworthy & Rankine (2010)
    Analysis of 72 sites to identify learning design strategies which promote effective collaboration. ASCILITE short paper. Links to work at UWS exploring usage of the LMS. Beer et al (2009) is referenced for the three methodological choices: surveys, mining LMS data, or manually reviewing sites. They reviewed sites.
  • Hartnett (2011)
    Analysis of 2 cases (different courses) to explore relationships between motivation, online participation and achievement. Used a variety of measures including surveys for motivation, analytics etc. “The mixed results point to complex relationships between motivation, online participation, and achievement that are sensitive to situational influences” (Hartnett, 2011, p. 37)
  • Pascual-Miguel et al (2010) (a closed-access paper).
    Exploration of whether interaction is an indicator of performance and whether it differs with mode. Results show partial or no evidence of a link.
  • Greenland (2011)
    Log analysis for 10 courses that differ in learning activity design. The design has a substantial impact on levels of student interaction. Highlights some challenges.

Malikowski citations

  • Alhazmi & Rahman (2012)
    Aims to identify why the LMS has failed to support student learning; mentions a related journal article and identifies 5 failure aspects:

    1. Content management – LMS used as content container
    2. Feature utilization – interactive features left unused
    3. Teaching and learning methods – one way delivery of information, passive learner
    4. Learners’ engagement – low level
    5. Assessment management – inflexible, difficult to use and no alignment between assessment and ILOs
  • Lonn et al (2007)
    Relationship between course ratings and LMS use. Found students do not rate courses more highly when instructors use the LMS, but shows that students value the LMS for different reasons. Combined survey data with analysis of course sites.
  • Lust et al (2013)
    Uses CMS to mean Content Management System but refers to Blackboard/WebCT as examples. Looks at how students regulate their tool use throughout the course by considering the moment tools are used – a temporal dimension missing from earlier studies. “More insight into students’ tool-use is particularly important from an instructional design perspective since research has repeatedly revealed that a learning environment’s effectiveness depends heavily on students’ adaptive tool-use.” 179 students. Only a minority of students used tools in line with the course requirements.

    Draws on 3 phases of learning – novices/disconnected knowledge; organised into meaningful structures; structures are highly integrated and function in an autonomous way. Done in a single course.

  • Naveh et al (2012)
    Through surveys and interviews, proposes 5 critical success factors for increasing student satisfaction with the LMS: content completeness, content currency, ease of navigation, ease of access, and course staff responsiveness.
    Interestingly, draws on institutional theory and the idea of environmental legitimacy outweighing efficiency.
    Developed a survey with 8000+ responses (a 13% response rate). Semi-structured interviews of students from the top- and bottom-ranking courses.

Existing literature

  • Romero et al (2013)
    “This paper compares different data mining techniques for classifying students (predicting final marks obtained in the course) based on student usage data in a Moodle course” (p. 136)
  • Malm and DeFranco (2011)
    “This article describes a student-centered measure of LMS utilization, average number of student logins per student, as a primary tool for policymakers” and illustrates how it can be used in several ways.

    “…most commonly used adoption metrics are faculty-focused and binary in nature” (p. 405). Binary as in used or not.

    Suggests ALPS_i = (total student logins_i) / (total enrolled students_i) as the solution, where i is the class section (a code sketch of this calculation follows this list). The advantages are meant to be:

    • student focused;
    • based on easily available system data and simple to calculate;
    • a simple measure of intensity of use that can be useful for analysing and discussing the role of the LMS on campus;
    • section based.

    Used the measure in various ways, including comparing intensity of site usage by age of faculty member (digital natives are apparently under 35), a difference confirmed by a t-test.
    But it did not find a change over time.

  • A couple of papers by Lam and McNaught (including one referenced in Malm & DeFranco above) look very interesting, but sadly IGI stuff is inaccessible. Some of it is summarised here:
    The “overall findings are that, while adoption of simple strategies (information-based) is increasing, there is little evidence of horizontal and vertical diffusion of the more complex strategies that engage students in interaction.” In terms of four elearning functions (provision of content, online discussion, assignment submission and online quiz), they also found a “slight decline in the use of diverse online strategies” and that “Use of some more complex strategies actually decreased”.
  • University of Kentucky EAD results from Malm and DeFranco (2011): a concerted effort to promote adoption of the LMS.
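
For completeness, here is a minimal sketch of Malm and DeFranco’s (2011) ALPS calculation referenced above (logins per enrolled student, per class section); the export file names and columns are assumptions for illustration.

```python
# A sketch only: Malm and DeFranco's (2011) ALPS measure, i.e. total student
# logins per enrolled student for each class section. Export format is assumed.
import pandas as pd

logins = pd.read_csv("lms_logins.csv")        # section_id, student_id, login_time
enrolments = pd.read_csv("enrolments.csv")    # section_id, student_id

total_logins = logins.groupby("section_id").size()
total_enrolled = enrolments.groupby("section_id")["student_id"].nunique()

# ALPS_i = total student logins_i / total enrolled students_i
alps = (total_logins / total_enrolled).rename("ALPS")
print(alps.sort_values(ascending=False).head())
```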

Broader search

Stuff that came up

Classifications of LMS feature usage

  • Malikowski et al (2007).
  • Dawson et al (2008).
  • Macfadyen & Dawson (2012)

References

Agudo-Peregrina, Á. F., Iglesias-Pradas, S., Conde-González, M. Á., & Hernández-García, Á. (2013). Can we predict success from log data in VLEs? Classification of interactions for learning analytics and their relation with performance in VLE-supported F2F and online learning. Computers in Human Behavior. doi:10.1016/j.chb.2013.05.031

Alhazmi, A. K., & Rahman, A. A. (2012). Why LMS failed to support student learning in higher education institutions. 2012 IEEE Symposium on E-Learning, E-Management and E-Services, 1–5. doi:10.1109/IS3e.2012.6414943

Beer, C., Jones, D., & Clark, K. (2009). The indicators project identifying effective learning, adoption, activity, grades and external factors. In Same places, different spaces. Proceedings ascilite Auckland 2009 (pp. 60–70). Auckland, New Zealand.

Greenland, S. (2011). Using log data to investigate the impact of (a) synchronous learning tools on LMS interaction. In G. Williams, P. Statham, N. Brown, & B. Cleland (Eds.), Changing Demands, Changing Directions. Proceedings ascilite Hobart 2011 (pp. 469–474). Hobart, Australia.

Goldsworthy, K., & Rankine, L. (2010). Learning design strategies for online collaboration: An LMS analysis. In C. H. Steel, M. J. Keppell, P. Gerbic, & S. Housego (Eds.), Curriculum, technology and transformation for an unknown future. Proceedings of ASCILITE Sydney 2010 (pp. 382–386). Sydney.

Hartnett, M. (2011). Relationships Between Online Motivation, Participation, and Achievement: More Complex than You Might Think. Journal of Open, Flexible and Distance Learning, 16(1), 28–41.

Lonn, S., Teasley, S., & Hemphill, L. (2007). What Happens to the Scores? The Effects of Learning Management Systems Use on Students’ Course Evaluations. In Annual Meeting of the American Educational Research Association (pp. 1–15). Chicago.

Lust, G., Elen, J., & Clarebout, G. (2013). Regulation of tool-use within a blended course: Student differences and performance effects. Computers & Education, 60(1), 385–395. doi:10.1016/j.compedu.2012.09.001

Macfadyen, L., & Dawson, S. (2012). Numbers Are Not Enough. Why e-Learning Analytics Failed to Inform an Institutional Strategic Plan. Educational Technology & Society, 15(3), 149–163.

Malikowski, S. (2010). A Three Year Analysis of CMS Use in Resident University Courses. Journal of Educational Technology Systems, 39(1), 65–85.

Naveh, G., Tubin, D., & Pliskin, N. (2012). Student satisfaction with learning management systems: a lens of critical success factors. Technology, Pedagogy and Education, 21(3), 337–350. doi:10.1080/1475939X.2012.720413

Pascual-Miguel, F., Chaparro-Peláez, J., Hernández-García, Á., & Iglesias-Pradas, S. (2010). A Comparative Study on the Influence between Interaction and Performance in Postgraduate In-Class and Distance Learning Courses Based on the Analysis of LMS Logs. In M. D. Lytras, P. Ordóñez de Pablos, D. Avison, J. Sipior, & Q. Jin (Eds.), Technology Enhanced Learning. Quality of Teaching and Educational Reform (pp. 308–315). Springer.

Romero, C., Espejo, P. G., Zafra, A., Romero, J. R., & Ventura, S. (2013). Web usage mining for predicting final marks of students that use Moodle courses. Computer Applications in Engineering Education, 21(1), 135–146. doi:10.1002/cae.20456
