Possible sources of an institution’s e-learning content problems

My current institution has a content problem when it comes to e-learning (insert digital learning, online learning, technology enhanced learning, or just learning if you prefer). The following is an attempt to use my experience teaching at the institution to identify some of the factors contributing to the problem.

In order to appear solutions-focused, I’ll start by re-framing the contributing factors I’ve identified below as suggested partial solutions to the content problem, including:

  1. Implement a search engine.
  2. Implement content authoring tools that fulfill authoring and learning requirements.
  3. Focus on authoring tools that help produce content that is “of” the web, not just “on” the web.
  4. Focus on authoring tools that support both design and bricolage.
  5. Identify, query, and replace conceptions and metaphors from prior modes (e.g. print-based and perhaps face-to-face) of learning.
  6. Develop and provide support for a number of higher-level models of “Course Activity”.
  7. Move away from an information transmission focus toward one based on learner activity.
  8. Develop a range of contextual services that can enhance content and student learning.

Disclaimer: I think improving any aspect of learning and teaching within a university is a wicked problem, i.e. there is no single silver-bullet solution, just better and worse solutions. From my perspective, the solutions that the institution appears to be exploring are not necessarily leaning towards the “better” end of the spectrum. The above may be a little better.

In addition, I don’t think this is a problem restricted to my current institution. I also don’t think that the solutions attempted so far are all that much different from what’s been attempted at other institutions. I’ve observed both the problems and the solutions elsewhere.

Does your institution have a “content problem”? Has it any solutions? Any of them worked?

PS. if we’re having problems with “content”, imagine the problems there must be with creating effective learning activities (IMHO, a much harder and more important problem).

Evidence of the problem

Evidence of the problems comes from two different sources.


The first is at the institutional level. For some time there has been concern expressed at senior levels within the institution that students can’t find information on course sites. This has led to a number of institutional projects and strategies.

The first was the development of a standard look and feel for course sites, publicised as a makeover of the StudyDesk (the institutional brand for Moodle, which potentially causes its own problems) that promises the ability to find “all course information” and “assessment submission in one location”. There is apparently on-going work around this.

Personal observation

From the evidence I see (personally and via my better half, who is currently a student at the institution) there remains some distance until this promise is fulfilled. The Moodle sites at the institution that I see are still largely problematic and still mirror what I found when I took over the course I currently teach: a hodgepodge of PowerPoint and other files interspersed with various bits of HTML (Moodle labels) containing headings or explanatory text. The HTML often illustrates complete ignorance of simple design (e.g. the CRAP design principles) and is often an attempt to explain how everything fits together. This is required because, due to a couple of institution-specific approaches, not all the content can be effectively integrated into appropriate places within the Moodle site.

The ad hoc intermingling of all this content ends up in “the inevitable scroll of death” and the problem that students (and staff) can’t find information.

Even when Moodle courses are well designed, there are times when you can’t find information. I’ll claim that the course site for EDC3100, ICT and Pedagogy (one of the courses I teach) is amongst the most structured of course sites, about as far away from the ad hoc upload approach to site design as you can get. In addition, I have largely been the sole designer and maintainer of the course site, a task I’ve been doing over the last 3+ years and 6+ offerings of the course. I’m also very technically proficient.

And there are times when I still can’t find information quickly on the EDC3100 site!!!

Contributing factors

What is contributing to this problem? What follows are some of the contributing factors, as seen from my perspective.

No search engine

The number one way you find information on the web is via search and yet there is no search engine that works within Moodle at this institution.

Content tools solving institutional requirements, not authoring/learning requirements

The most recent major investment in content tools at the institution has been the implementation and mandated use of an institutional repository. This quite significant investment of funds was not driven by a desire to help improve the authoring or learning processes. It was driven by two separate institutional requirements, which were:

  1. the need to manage and report use of copyrighted materials; and,
  2. the need to address the disk storage problems created by Moodle course sites containing duplicate copies of large content files.

From what I’ve observed, it would be very hard to claim that the implementation of the learning repository has helped address the ability of people to create and find information for Moodle.

“on” the web, not “of” the web

Alan Levine writes (about the open course ds106)

You will hear people talk about their organizations or projects being on the web. but there is more than a shade of difference of ds106 being of the web.

Much of the thinking behind the tools and approaches of the institution are focused on producing content that is placed “on” the web, but is not “of” the web. In fact, some of the tools provided previously had enough trouble being “of” Moodle, let alone “of” the web.

The prime example here is the ICE environment. An environment developed within the institution to enable it to leverage quite significant print-based distance education material (such as Study Guides) by converting them into a Web format. The existing material (typically created using Word) would be run through ICE to produce a collection of HTML files. That collection of HTML files could then be linked to from the course site – via a link labelled “Course Content”.

The very first web browser was also an editor. If you wanted to edit a page, you could do so within the same tool you were using to view it. The ICE approach doesn’t (I believe) work that way: to make a change you have to go back to the Word version, make the change, and then run it through ICE again. Not “of” the web.

A common way to organise a Moodle course site is by topic or week. Each section of the course site is meant to include everything you should do as part of that topic or week. But the ICE “Course Content” link contains all of the content in one place. It’s more difficult to distribute the content into the appropriate weeks or topics. Meaning that you can’t look in the one place for all the relevant information.

There’s some value in enabling the reuse of existing materials, but they have to be leveraged in a way that encourages them to become part of the new medium. Not always held back to the ways of the old.

A focus on design, rather than bricolage

The ICE model and the model used by print-based distance education were based on design, i.e. the process was to spend a lot of time on the design and production of a perfect, final artefact (print-based materials) that was distributed to students. This was because once the materials were sent out, they couldn’t be changed. This created problems, e.g. this from Jones (1996):

inability to respond to errors in study material or the requirements of individual students

Yesterday, one of my students reported some difficulties understanding the requirements for submitting the first assignment. I decided that an example was the best explanation and that I should incorporate that example into the Assignment 1 specification so that other students wouldn’t have the same problem. I can do this because the Assignment 1 specification is a web page on the Study Desk that I can edit.

So I found an example and went to the Assignment 1 page to make the change. Only to discover that I’d already previously modified the page to include (the same) examples. Hence the quick reply back to the student pointing out the examples.

An experience that suggests you can put in all the effort you want around making content findable and understandable, but it may not be enough.

Old metaphors lingering around

It’s not only materials that need to be brought into the new medium. There are other conceptions or metaphors that need to be updated. For example, the makeover of the StudyDesk just undertaken includes a specific page for “Study Schedule”. This was a standard component of print-based distance education packages. But it’s not clear that it belongs in the new Moodle age within which we live.

As mentioned above, a common method for organising Moodle course sites is by week or by topic. The image below is part of the course site for EDC3100. The site is organised by week. The top of the site has skip navigation links (see the next image below) that you can use to take you directly to the week you need to work on. All the activities and resources you need for that week are in that section. As you complete each activity you will get a nice behaviouralist tick indicating that you have completed the activity.

[Screenshot: the EDC3100 course site, S2 2015]

With this structure in place, I question the value of a Study Schedule. Especially when I see the type of information that is contained in many of the Study Schedules on other courses.

My course does include a Study Schedule. It would be interesting to see how often it is used by students.

No higher-level models of “Course Activity”

The makeover of the Study Desk was “sold” to academics (in part) using a line like “we won’t touch ‘Course Activity'”. i.e. the normal Moodle list of activities and resources would remain the sole purview of the academic. The new look and feel was just adding some additional structure (see the left hand menu in the image below) to help students find information.


It was left to academics to organise the “scroll of death” that is a Moodle site, a task that is not straightforward. There have as yet been no attempts to develop and share higher-level models of how the “course activity” section could be structured. I’m assuming that at some stage soon there will be a project at the institution to develop the “one higher level model” for all courses at the institution, because consistency is good.

I’d argue that there’s value in developing multiple contextually appropriate “higher level” models. The approach I use is one “higher level” model. UNE uses a different model that provides enough eye candy to excite some, and there would be other possibilities.

Resource centric understanding of learning

Lastly, and perhaps most scarily, is the apparent on-going resource-centric understanding of learning suggested by the on-going interest in the “Resources” tab in the standard look and feel, captured in the tweet above. It is even more troubling when you combine this significant investment of resources in the “Resources” tab with the apparent lack of focus on “Course Activity”.

At least for me (and a few others I know) this combination speaks of a conception of learning that is focused on the transmission of information, rather than learner activity.

No value added, contextual services

When the screenshot above was taken my mouse was hovering over the 3 in the “Jump to: Week” skip navigation. As a result, a tooltip was being shown by the browser with the words “Building your TPACK – 16-20 Mar”. This is the title I’ve given to the week’s activities, along with the dates of the semester that was week 3.

If you look at the earlier screen shot you will see the titles and dates for two more weekly sets of activities: Orientation and getting ready (Before 2 Mar) and ICT, PLNs and You – 2-6 Mar (Week 1). If you were able to mouse over the 0 and 1 in the skip navigation at the top of the page, the tooltip would display the same title and date information. If you were able to look at the provided Study Schedule, you would see the same title and date information in the Study Schedule.

The same course is being offered this semester. The dates listed above no longer apply in the new semester. Under the current institutional model I would be expected to manually search and replace all of the date information every time the course site is rolled over to a new semester. The same applies to assignment due dates and other contextual information. For example, if I decide that the title for week 3 should change, I’ll need to manually search and replace all occurrences of the old title.

Since doing this manually would be silly, most people don’t do it. Instead of providing context specific information (e.g. dates), generic information is given. It’s just week 1 or theme 1. The problem with this is that it makes it more difficult for the teacher and student. Rather than information (like due dates) being available in the space needed, they have to expend energy and time looking elsewhere for that information.

I’ve implemented a kludge macro system, but Moodle has functionality called filters that could be used to achieve the same end with some advantages.
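To illustrate the macro idea, here is a minimal sketch of how such a substitution might work. It assumes a simple `{{NAME}}` placeholder convention and a per-offering table of values; the names, values, and `expand_macros` function are all hypothetical illustrations, not Moodle’s actual filter API or my institution’s system.

```python
import re

# Offering-specific values, updated once per semester rollover
# (hypothetical example values for a course like EDC3100).
MACROS = {
    "WEEK_3_TITLE": "Building your TPACK",
    "WEEK_3_DATES": "16-20 Mar",
    "A1_DUE": "30 Mar",
}

def expand_macros(html: str, macros: dict) -> str:
    """Replace {{NAME}} placeholders with offering-specific values.

    Unknown placeholders are left untouched so authors notice them."""
    def replace(match: re.Match) -> str:
        return macros.get(match.group(1), match.group(0))
    return re.sub(r"\{\{(\w+)\}\}", replace, html)

page = "<p>Week 3 ({{WEEK_3_DATES}}): {{WEEK_3_TITLE}}. Assignment 1 due {{A1_DUE}}.</p>"
print(expand_macros(page, MACROS))
# → <p>Week 3 (16-20 Mar): Building your TPACK. Assignment 1 due 30 Mar.</p>
```

Because the course content only ever contains the placeholders, a rollover to a new semester means changing one table of values rather than searching and replacing dates scattered across every page.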

However, this particular problem doesn’t appear to be on the radar. Arguably because all of the other “content” problems means that few people are producing content that could work with filters or require this approach.

What do “scale” and “mainstreaming” mean in higher education?

@marksmithers has just written a blog post about a new fund to promote innovation in higher education, making the following point:

I know $5M isn’t a huge amount but the principle just seems so misguided. There is no problem with innovation in higher education. The problem is adopting and mainstreaming innovations across higher ed institutions.

@shaned07 raised a similar question in a recent presentation when he talked about the challenge of scaling learning analytics within an institution.

But the question that troubles me is what do you mean by “scaling” or “mainstreaming” innovations in higher education?

What do you mean by “scaling” and “mainstreaming”?

The stupid definition

This may sound like a typical academic question, but it is important because a naive understanding of what these terms mean quickly leads to stupidity.

For example, if what I’ve experienced at two different institutions and overheard numerous times at a recent conference is anything to go by, then “scaling/mainstreaming” is seen to be the same as: mandated, consistent, or institutional standard. As in, “We’ll mainstream quality e-learning by creating an institutional standard interface for all course websites”, or, “We’ll ensure quality learning at our institution through the development of institutional graduate attributes”. Some group (often of very smart people) get together and decide that there should be an institutionally approved standard (for just about anything), and everyone and every process, policy and tool within the institution will then work toward achieving that standard.

Mainstreaming through standardisation is such a strong underpinning assumption that I know of one university where feedback to senior management is provided through an email address something like 1someuni@someuni.edu and another university where achieving the goal of “one university” received explicit mentions in annual reports and other strategic documents.

The problem with the stupid definition

[Image: “talk to the experts” by Mai Le, on Flickr, Creative Commons Attribution 2.0 Generic License]

The problem with this approach is that it assumes universities and their learning and teaching practice are a complicated system, not a complex system. This way of viewing universities is reinforced because the people charged with making these decisions (senior leaders, consultants, internal leaders on information technology, learning etc.) are all paid to be experts. They are paid to successfully solve complicated problems. That success and expectation means that they expect/believe the same methods they’ve used to solve complicated problems will help them solve a complex problem.

As Larry Cuban writes

Blueprints, technical experts, strategic plans and savvy managers simply are inadequate to get complex systems with thousands of reciprocal ties between people to operate effectively in such constantly changing and unpredictable environments… Know further that reform designs borrowed from complicated systems and imposed from the top in complex systems will hardly make a dent in the daily work of those whose job is to convert policy into action.

Much of the content of the talk titled “Why is e-learning ‘a bit like teenage sex’ and what can be done about it?” that @palbion and I gave focuses on identifying the problems that arise from this naive understanding of “mainstreaming/scaling”.

What’s the solution?

Cuban suggests

At the minimum, know that working in a complex system means adapting to changes, dealing with conflicts, and constant learning. These are natural, not aberrations.

The talk I mentioned builds on two papers (Jones & Clark, 2014; Jones, Heffernan & Albion, 2015) that are starting to explore what might be done. I’m hoping to explore some more specifics soon.

Whatever shape that takes, it certainly will reject the idea of mainstreaming through institutional consistency. In summary, it will probably involve creating an environment that is better able to adapt to change, deal with conflicts, and learn constantly.


Jones, D., Heffernan, A., & Albion, P. R. (2015). TPACK as shared practice: Toward a research agenda. In D. Slykhuis & G. Marks (Eds.), Proceedings of Society for Information Technology & Teacher Education International Conference 2015 (pp. 3287-3294). Las Vegas, NV: AACE. Retrieved from http://www.editlib.org/p/150454/

Jones, D., & Clark, D. (2014). Breaking BAD to bridge the reality/rhetoric chasm. In Rhetoric and Reality: Critical perspectives on educational technology. Proceedings ascilite Dunedin 2014 (pp. 262-272). Dunedin. Retrieved from http://ascilite2014.otago.ac.nz/files/fullpapers/221-Jones.pdf

Predicting System Success using the Technology Acceptance Model: A Case Study

Behrens, S., Jamieson, K., Jones, D., & Cranston, M. (2005). Predicting system success using the Technology Acceptance Model: A case study. In 16th Australasian Conference on Information Systems (Paper 70). Sydney. Retrieved from http://aisel.aisnet.org/acis2005/70/


Determining what makes an Information System (IS) successful is an ongoing concern for researchers and practitioners alike. Arriving at an answer to this problem is compounded by the subjective nature of success, and therefore trying to make judgements of what is and is not a success is problematic. Despite these difficulties, system use has become more accepted as a measure of system success. Following this logic, if a system is accepted it will have a higher likelihood of being used and therefore impact positively on success. The Technology Acceptance Model (TAM) is one of the more widely accepted theoretical frameworks that has been used to measure system acceptance. This paper combines the TAM, as the theoretical framework, with case study research to provide a more holistic account of why a specific IS, an online assignment submission system, has become successful. Initial findings suggest that the TAM measures of perceived usefulness and perceived ease of use are effective predictors of system success.


Measuring success within IS has been a concern for those within the discipline since its inception. Although success is complex and therefore difficult to measure, researchers have made efforts to do so. Traditionally these measurements focus on delivering a functional IS product within certain economic and temporal constraints. Despite this bias, there is evidence to suggest that a more accurate measure of success may lie within the realms of system use. Based on the logic that a system must first be accepted to be used, ensuring acceptance should increase the probability of system success. One of the more popular theoretical frameworks that predicts user acceptance of technology is the Technology Acceptance Model (TAM). We use this model to investigate why a specific IS innovation in use at Central Queensland University (CQU) has become so popular.

Davis et al.’s (1989) work on the TAM Information Systems theory is a user-centred approach which has gained popularity as a measure of technology acceptance. TAM suggests that when users encounter a new IS innovation there are two main factors which will influence how and when they will use it. These are perceived usefulness and perceived ease-of-use. Perceived usefulness is “the degree to which a person believes that using a particular system would enhance his or her job performance” (Davis 1989). Perceived ease-of-use is “the degree to which a person believes that using a particular system would be free of effort” (Davis 1989). Although the model has been extended, e.g. as the Unified Theory of Acceptance and Use of Technology (UTAUT) (Venkatesh and Davis 2000), there currently exists only one study confirming the extended model’s validity and robustness. TAM on the other hand has been tested by many more researchers (Adams et al. 1992, Hendrickson et al. 1989, Segars and Grover 1993, Subramanian 1994, Szajna 1994) with different populations of users and IS innovations.

Due to the testing and support of the model by others, we rely on the original TAM proposed by Davis (1989), rather than the extended model of the UTAUT (also known as TAM2), to measure technology acceptance. In this study we apply the model to a twelve-year-old IS, OASIS (Online Assignment Submission, Infocom System), in use at CQU. We apply the TAM measures of perceived usefulness and perceived ease of use to two different groups: staff non-users and staff users. The non-user group is examined due to their potential to become users and therefore impact on the continued growth of the system’s popularity. The user group is examined due to their importance in maintaining the current level of system use. Preliminary analysis of these two groups through the use of a case study approach reveals that the TAM measures appear to be useful predictors of a successful system. Our study also suggests that the evolutionary development model adopted by the system support team may have impacted positively on user perceptions and beliefs of system usefulness and ease of use.

Research approach

Rationale for using a case study approach

This research focused on determining the significant system acceptance factors that have contributed to the success of OASIS. This level of detail allows us to provide a more complete description and explanation of this particular innovation’s success. We believe that such a holistic focus of success using a proven IS is beneficial for researchers and practitioners alike. Although TAM has been the subject of investigation for much research, many of these studies are limited in several respects. These deficiencies include issues such as the strictly quantitative nature of the research with a focus on the adoption of simplistic technologies such as voicemail, email (Adams et al. 1992, Davis 1989, Segars and Grover 1993), word processing, spreadsheet and graphics software (Adams et al. 1992, Bagozzi et al. 1992). Our study helps to address these limitations in the literature by providing an in-depth qualitative study of a more complex non-mandatory system. For practitioners it provides a unique insight into an authentic and successful IS implementation. Given our main requirement of an in-depth investigation we used the case study approach. This decision is in accord with recommendations from proponents of the case study approach, for example Hamel (1993), Yin (1994), and Stake (1995).

Site selection – unit of analysis

Selection of the research site is the most critical decision in the analytic process of case study research (Hamel 1993). For our investigation, the choice of CQU as our site was a relatively simple matter. This was because CQU met all three of the main selection criteria; ‘no real choice’, suitability and pragmatism (Denscombe 1998). Firstly the site represented a unique opportunity for study. All researchers work at CQU and were able to observe the successful adoption of OASIS, a ‘home grown’ IS used in the support of teaching and learning, as well as various other information system adoption failures. The success of OASIS at CQU was an event that could not be ‘planned or created’ (Denscombe 1998). It therefore represented a ‘one-off chance’ for us as researchers to gain insight into why OASIS has become so successful where other ISs had failed. Consequently there was no real element of choice in ‘deciding’ on CQU as an appropriate site for study. Secondly the site was suitable due to the relevance of the case for testing previous theory. TAM is the theory we believe to be relevant in predicting the success of this system due to its ability to measure user acceptance. Through our own observations of OASIS and a review of the literature we believe that the success of OASIS at CQU contains ‘crucial elements’ (Denscombe 1998) of being a successful IS innovation which can test the theory of TAM. Finally the case is both intrinsically interesting and convenient for investigation. Although these final pragmatic considerations are not enough to choose the site on their own they added to the research experience allowing for a more in-depth study, hopefully appealing to a wider audience (Denscombe 1998).

Case study design type

Our case study is an investigation into why a specific information system, OASIS, is a success. Due to the formation and identification of TAM as the theoretical framework that guided our research, we utilised a descriptive case study design (Berg 2004). Our design follows the advice given by Yin (1994) and includes the five components he deemed necessary; study questions, theoretical framework, identification of the unit of analysis, logical linking of the data to the theory and the criteria for interpreting the findings. The first three (already previously stated) indicated the data we needed to collect and the last two how we would use the data. Although, according to Yin (1994) the last two are the least well developed in case studies we lay the foundations for this analysis early in our research design. Linking the data to the theory was to be done through a “pattern matching” technique (see Donald Campbell 1975 via Yin 1994). This is where several pieces of information from the same case may be related to some theoretical proposition. In our research we would determine whether the data matched or did not match the propositions put forth by the TAM theory through visual examination. The last component of determining the criteria for interpreting the findings is the most difficult of all (Yin 1994). Unfortunately there are very few guidelines on how to arrive at such a
criterion, and all that exists is Yin’s very brief advice on matching rival theories. Given these difficulties we decided to rely on an “all or nothing” approach similar to Yin’s advice: we had to decide whether or not the pattern matched the theoretical propositions. Where the pattern was deemed to hold true, the theory would be tested in the positive; where it did not, the theory would have failed to explain our case.

Methodological tactics – Data collection

A case study is an in-depth investigation that seeks to uncover the various nuances, patterns and latent elements that other research approaches may overlook (Berg 2004). It accordingly makes use of different methods to collect various kinds of empirical data (Hamel 1993). Our case study made use of the more traditional methods of collecting data. These were questionnaires, participant observation and the review of documents. Data collection through the participant observation and document review techniques is representative of the 12 year period. Participant observation was a crucial data gathering technique as all researchers have been involved with OASIS at some point over its lifetime. Documentation used in this research included support logs and records from the OASIS system. To capture current perceptions, questionnaires were sent to all current user and non-user groups of OASIS identified as academic staff.

At the time of the writing of this paper 94 responses (34.9%) had been received from users of OASIS and 18 responses (15.3%) from non-users. The analysis in this paper concentrates on the free text responses to two open questions asking staff about the factors influencing their perceptions of the usefulness and ease of use of OASIS. Academic staff members in charge of a course are responsible for making the OASIS adoption decision. Students are only able to use OASIS after this adoption decision is made. For this reason we have initially focused on academic staff as the users and non-users of OASIS. Subsequent research will investigate student perceptions.

Scientific value and study limitations

The case study is a popular research approach across many disciplines, both basic and applied (Hamel 1993). Despite their popularity, case studies have many strong critics due to the belief that the approach lacks sufficient objectivity, and due to concern over the ability to generalize research results (Berg 2004). We have been careful in our research to maintain objectivity through deliberate construction of a research design. Construction of such a design does much to increase the rigor of a study and counter the claims of “weak research” (Yin 1994). As part of our research design we maintained as much objectivity as possible by having each researcher separately review the evidence as part of the data analysis phase. Our findings have been examined within the context of our chosen theoretical framework, TAM. As far as generalizing the results of this research is concerned, our view is similar to that taken by Berg (2004). He states that “When case studies are properly undertaken, they should not only fit the specific event studied but also generally provide understanding about similar … events. The logic behind this has to do with the fact that few human behaviours are unique, idiosyncratic and spontaneous.” (Berg 2004). Likewise it is our belief, due to the socially constructed nature of IS in general, as well as their reliance on social aspects as determinants of success in particular, that the success of OASIS is not a unique event. Our study should therefore provide an understanding to the wider community of success in similar IS implementations.

Theoretical background

Success of an IS innovation can be determined in a number of ways. However, general organisational measures of success include “on time and on budget” (Standish Group 1995, IT Cortex 2002) with the desired functionality (Mahaney and Lederer 1999). Following the logic inherent within the literature focussing on IS characteristics predictive of success, failure is generally discussed in terms of having the opposite characteristics. For example, Whittaker (1999) described the 1997 KPMG survey on what constituted an IS project failure. The study deemed a project as having failed if it overran its budget by 30%, overran its schedule by 30%, or was cancelled or deferred due to non-delivery of planned benefits. However, Mahaney and Lederer (1999) argue that there are degrees of failure and that a project that overruns budget by 5% is less of a failure than one that overruns by 50%. Some determinants of whether IS innovations might be considered failures include whether they have the ability to evolve and grow with the organization, integrate well with the business environment, possess consistency between the initial requirements and the final solution, and simply make business sense (IT Cortex 2002).

DeLone and McLean (1992) were leaders in moving to a more user-centred approach when trying to judge overall IS success. Their model suggests six interdependent measures of success: system quality, information quality, use, user satisfaction, individual impact and organisational impact. It is important to note that all of these factors should be considered when trying to measure success under the model and that no single measure is intrinsically better than any other. Others have attempted to refine and expand the model (e.g. Seddon et al. 1999), and the original authors have suggested minor refinements themselves (DeLone and McLean 2003). However, as DeLone and McLean (1992 p. 61) themselves point out, “there are nearly as many measures of success as there are studies”.

The more recent study conducted by Iivari (2005) provides further evidence for the applicability of the DeLone and McLean (1992) model in measuring a system’s success. This work contributes to the shift from organisational measures of success to more user-focused measures. Davis et al.’s (1989) work on TAM is a user-centred approach which has gained popularity as a measure of a user’s acceptance of technology. We draw the conclusion that if a system enjoys high user acceptance, this will impact positively on system use. Use of the system is a contributing factor to system success, especially when that system is not mandatory (DeLone and McLean 2003, Iivari 2005). Based on this assumption we use TAM as a theoretical framework to guide our research. Specifically, do the constructs in the model offer a reasonable explanation for why OASIS has enjoyed such exponential growth in its adoption and use?

System background

Online Assignment Submission, Infocom System (OASIS) arose out of early experiments in 1994 by a single academic implementing a system to reduce assignment turnaround for distance students (Jones and Jamieson, 1997). Adoption of OASIS by other academics was limited at this time. Only 13 course offerings made use of the system, with just over 1,900 assignments submitted in the six years to 2000. Since 2000, use of OASIS has increased significantly. From 2000 to 2005, over 77,000 assignments were submitted via OASIS by 6,892 (over 72%) of Infocom students.

Students enrolled in Infocom courses are distributed across a number of campuses as well as being enrolled via distance education. There are five regional Central Queensland (CQ) campuses in Bundaberg, Emerald, Gladstone, Mackay and Rockhampton. Four Australian International Campuses (AICs) in Brisbane, the Gold Coast, Sydney and Melbourne are managed by a commercial partner. Campuses are also located overseas in Fiji, Singapore, Malaysia, Hong Kong and China. Students may also study from any location in the world via distance education (FLEX). Figure 1 provides a summary of Infocom student numbers from 1996 to 2005.


Figure 1. Number and type of students enrolled in Infocom Courses (1996-2005)

Since its inception, Infocom has had a small development team responsible for its online presence. In 2001, partly in response to increasing numbers, this team was expanded and additional effort was placed on providing services to help support Infocom’s teaching operations. Using an agile development methodology (Jones and Gregor, 2004), this group made a range of additions to OASIS in response to direct user feedback to improve its functionality. The combination of increasing student numbers and this on-going development appears to have had an impact on usage of OASIS. Figure 2 shows the percentage of Infocom students, staff and courses using OASIS from 2000 to 2005. Specific staff figures are only available from 2002 onwards, when a markers’ database was added to the system.


Figure 2: OASIS usage in percentages of Infocom students, courses, and staff (* figures as of September 30, 2005)


In this study we applied the TAM measures of perceived usefulness and perceived ease of use to two different staff groups: non-users and users. Non-users’ perceptions concerning the usefulness and ease of use of the system are compared against the actual beliefs of users. The non-user group is examined due to their potential to become users and therefore to impact on the continued growth of the system’s popularity. The user group is examined due to their importance in maintaining the current level of system use. This section reveals the complex underlying belief structures concerning the two constructs of perceived usefulness and perceived ease of use as they pertain to OASIS, information that has been missed in other investigations concerning TAM (Segars and Grover 1993). Of particular interest, it reveals that usefulness and ease of use, at least in our study, seem to be influenced positively by the evolutionary development model adopted by the system support team. This is in line with other research concerning what makes for a successful system, namely the ability of the system to evolve with the business (IT Cortex 2002). The findings in this study are of a preliminary nature only.

How do non-users perceive OASIS?

Overall non-users of OASIS had mainly positive perceptions of the system. These perceptions centred on the belief that the system would benefit the students just as much as it would enhance course management.

Perceived usefulness factors

The main student benefit perceived by non-users was timely turn-around of assignments. One respondent noted that ‘OASIS will eliminate unnecessary delays’ while another believed that ‘It may also help with prompt and efficient grade information requests in/out’. Other respondents saw OASIS as a method for improving courses by being able to more easily analyse the results from assessment. One respondent noted that ‘each question can be analysed for effectiveness at distinguishing between students passing and failing. With individual questions assessed for how well they are answered teaching can be modified to prepare students better in the identified weak area’. Another perceived benefit was being able to track how well students were progressing, with one respondent stating that ‘I envisage OASIS would be useful to gauge student progress/understanding/level of expertise throughout a particular course/subject’.

Non-users also perceived a number of administrative benefits from OASIS. Of particular note was the ability to track assignments and marking, with one respondent stating their belief that OASIS ‘will encompass safe guards for assignment delivery and return, as well as acknowledgement of assignment receipt for students’. This is an important aspect of course administration, especially given the difficulties in distributing, managing and moderating marking over multiple campuses and markers. Another respondent supported this belief, stating that ‘I have used a similar system before and it was quite helpful to my consolidating marks, and not being on campus would probably simplify the marking system’. Respondents believed that OASIS would make this task both possible and easier. Others believed that OASIS would also provide additional benefit by enabling the use of automated plagiarism detection.

While there were few negative perceptions of the usefulness of OASIS, some respondents had beliefs about what types of assessment OASIS was suitable for. One noted that ‘I believe OASIS is suitable for multiple choice questions. But my assignments are essay type with computer program printouts. As of now, I don’t know how I can make OASIS useful for my course’. Another indicated that OASIS didn’t fit in with the way that they currently assessed, stating that ‘I mark all my student’s assignments manually, it is easier for me to sub-edit stories that way’.

However, respondents generally had positive perceptions of OASIS, with one respondent stating that ‘certainly, any online submission technology would be useful to me. And the precedent of other IT systems made available in Infocom suggests that it would be extremely user friendly for people with very limited computer competence/confidence. The nifty acronym is also appealing’. The successful evolutionary development process adopted by the support team had produced a number of successful systems and helped in developing positive perceptions amongst users of new systems.

Perceived ease of use factors

Non-user respondents generally believed that OASIS would be easy to use. They justified this with two groups of belief factors. The first group was the technology-centric belief that, as non-users had used similar systems, they would be able to easily use OASIS. One respondent noted that ‘I have been using computers for many years, including online application/enrolment. These may not be identical with OASIS, but I believe there will be similarities’. Another respondent stated that ‘It should not be difficult for me to learn since I’m computer literate’. Another believed that the system would be just as easy to use as other systems developed by the faculty, stating that ‘my positive experience with other Infocom systems gives me confidence that OASIS would be no different. The systems team have a very good track record that inspires confidence’.

The second group of factors was based on having not heard negative things about ease of use of the system. One respondent noted that ‘nobody seem to complain too much about OASIS being hard to use, or hindering them in their job’. However, this was contradicted by another respondent who stated that ‘I have heard from another tutor that OASIS is a bit time consuming and a little confusing… but [I] have not used it myself’.

How do users perceive OASIS?

Users of OASIS had generally positive perceptions of the system. Again, these perceptions centred on the belief that the system would benefit the students just as much as it would enhance course management. Yet users were pragmatic in their beliefs and many discussed the “trade-offs” associated with using OASIS. However, a new category of perceived usefulness was uncovered concerning the personal benefits of using the system.

Perceived usefulness factors

Users believed that OASIS gave them a greater ability to monitor student progression while also allowing students to track their assessment through the marking process. One respondent noted that ‘submission records for students are useful in monitoring my students’ progress, hence adjust tutorials/support as needed’. Another respondent supported this by stating ‘it is easy (and quicker) to know if a student has submitted work for assessment by checking the relevant section of the web site’. Others saw the advantage in being able to compare and contrast assessment results, highly rating OASIS’s functionality to give teaching staff the ‘ability to compare your student’s results with overall performance’. Many staff also saw the non-repudiation aspects of the assignment management as being advantageous, with one user stating ‘students cannot say that they were NOT late or did submit the assignment (when in fact they did not)’.

Administratively, users discussed a number of factors that they perceived made OASIS useful. Most of these concerned assignment management issues. One issue identified was the ability for the user to track where an assignment was and what actions had been performed on it, with one respondent noting that ‘[OASIS] makes assignment collection simple and easy also [as you] do not have assignments go missing. [OASIS is a] quick and easy way of returning assignments and collect the assignment marks’. This tracking also facilitates moderation processes and, as noted by another respondent, ‘OASIS allows for the moderation process to be carried out in a timely fashion’. OASIS was also seen to support core academic requirements, as exemplified by one user who stated that ‘OASIS is useful in the case of essay and report type assignments as it helps in detecting plagiarism’. This issue is especially important over a multi-campus operation, with another respondent adding that one of the key usefulness factors was the ability of OASIS to ‘perform copy detection, not only within a campus but between campuses’.

The users of OASIS also found a personal usefulness factor in the ability to remotely download assessment to mark and moderate. This gave them the ability to mark, moderate and manage assessment from anywhere in the world. One respondent stated that having ‘assignments on soft-copy [was] a tremendous help [because there was] no need to carry them home’. It was sessional staff who seemed to gain the most benefit from electronic access to assessment. One user explained: ‘I am sessional lecturer and OASIS makes it possible for me to download the assignments for marking. I don’t have to go to the campus to get the submitted assignments’. Another supported this by saying that ‘OASIS is useful because it has enabled me to work from home and pick up students’ assignments outside office hours’.

The most contentious usefulness factor was the benefit OASIS was able to provide in the time taken to mark assessment. While many users found the system fast and efficient to use, others disagreed, but continued to use the system because of other usefulness factors. One user who found that OASIS saved time stated that:

Having also been a marker (both paper-based, and using OASIS), I was stunned by just how much time was saved by no longer needing to handle piles of paper. Virus scanning 100 floppy disks, for example, takes a long time. OASIS provides a neatly formatted, scanned, and correctly-named set of files.

However, several users found the process of dealing with electronic assignments time consuming and cumbersome. One respondent stated that ‘practical experience with many assignments that were to be submitted through the OASIS system indicates that … it takes much more time and effort to mark assignments on line’ while another noted that ‘for assignments that where marking can not be automated it is very time consuming to mark electronic copy, especially when there is significant reading to be done. it is also time consuming to provide feedback’. Many staff made comparisons between OASIS and “hard-copy” marking, with one user stating that:

It is a very good way of submission and collection of assignment. But, the hardest part of it, is adding comments electronically during the marking process. It kills time. I did same types of marking to some other university, and later they decided to take hardcopies and to write comments. We found that saving 50% of the overall marking time.

The general experience of users was that marking with OASIS was initially slow, as one user stated: ‘[marking with OASIS] takes a little longer to generate a rhythm to freely mark assignments in an efficient timeframe’. Many noted that the time to mark assessment was dependent on the type and complexity of the assessment, with one user stating ‘the experience is very dependent on the assessment design’. Even with these negative aspects, users still perceived OASIS as useful, with one summarising: ‘it’s a great system but online marking and commenting takes significantly longer than on hard copy – other than that I like its functionality’.

The only other negative factors that affected perceived usefulness were those concerning support. Some users felt that OASIS was complicated, difficult to understand and lacked support mechanisms. One user’s frustration was evident from the comment ‘just trying to understand how to use [it] is a pain’. Yet other users made particular mention of the support services offered by the web team. This perception of the support services will be discussed later as a factor concerning ease of use.

Perceived ease of use factors

Users generally perceived OASIS as easy to use; however, two factor groups, technology and support, affected these beliefs. Technology affected users in both positive and negative ways. Users who were comfortable with technology believed OASIS was easy to use and made comments such as ‘being an IT professional, I find it very very easy to interact with’. However, users who found technology intimidating focused on this as impeding their use of OASIS, with one user remarking on difficulties with ‘anything computer-mediated’. For the most part, the comments relating to ease of use and technology were focused on difficulties with understanding technology external to OASIS rather than the system itself.

A related group of factors concerned support mechanisms. As previously discussed, the perception of a lack of support mechanisms impacted negatively on the perceived usefulness. However, users of the system were divided on the issue. In particular, those users who had used OASIS over a long period of time and had watched its support mechanisms evolve saw them as a positive influence on ease of use. One respondent supported this by stating that ‘It used to be a problem, but I’ve seen the system and supporting documentation improve to the point that I would consider the system fairly easy to use for new users. Support requests for the system have dropped significantly as it has matured’. Another remarked that ‘OASIS is self explaining, there is not much to learn about it in order to use it’. However, others still regarded the system as difficult to initially learn, with one noting that ‘learning OASIS for the first time is difficult because the instructions are not very clear. However, it is easy once you get the hang of it’. Others noted that the lack of documentation and online help procedures was overcome with support from the web team, with one respondent commenting that ‘learning how to do things in the system is not easy but the tech team offer an excellent support and are to be commended for their efforts’. Most users shared these views believing that once users started using OASIS, the perception of ease of use changed. This was supported by one user who stated that ‘OASIS is no more difficult or easier to use than any other web-based system with online help and hyperlinks to the various relevant parts. I think initially I asked colleagues about its general use as the concept seemed daunting at the time (before I’d actually used it)’.

Discussion and conclusion

In general, users had very positive perceptions surrounding the usefulness of OASIS. As one respondent stated, ‘I find the system professional and bug free. It’s an excellent assignment management tool. It provides a rigid framework for student submissions. Students appear to have little or no problem with the general concept of online submission and its use. In all I find the system very useful’. If system success can be measured in terms of use, then the adoption of this system by the majority of students in the Faculty of Informatics and Communication at CQU, together with its growing use by staff, marks OASIS as a success, with the non-mandatory nature of the system giving even more strength to the motivations behind its adoption. In the search for an explanation of why it has been so successful, we applied the TAM to investigate both staff users and non-users of the system. Both of these groups revealed very positive perceptions and beliefs surrounding the usefulness and ease of use constructs in TAM. On further investigation, these constructs proved complex in nature but seemed to centre more on the administrative benefits that the system could provide rather than the pedagogical benefits originally intended by use of the system.

In examining non-users’ perceptions and users’ beliefs, we have presented evidence that provides an explanation for the continued and growing success of OASIS. Non-users perceive that the system will be useful and easy to use and will not hesitate to use it when the chance arises. Users believe that the system is useful and easy to use, and this explains the stability in growth and continued use. If success can be measured in terms of use, then we believe that the usefulness and ease of use factors within TAM are reasonable predictors of system success. We also believe that the usefulness and ease of use constructs are positively influenced by the successful application of agile development methods employed by the support staff of the system. As shown above, this process has generated a perception amongst staff that the systems produced by this team will be useful and easy to use. We predict that the success of OASIS will continue as long as the beliefs and perceptions concerning the system’s usefulness and ease of use are maintained through activities such as evolutionary development.

As stated in the previous section on methodological tactics, our findings are limited to an initial analysis of two free-text questions; further detailed analysis has yet to be carried out on the remaining questions, which may provide additional information about respondents and their perceptions of the perceived usefulness and perceived ease of use of the system. Although the preliminary results of this study offer a more detailed account of a specific system’s success, the work would benefit from expansion in several areas. Firstly, expanding the study to include students. Secondly, using UTAUT instead of TAM as the theoretical framework. Thirdly, including other cases to see whether the results still hold. Finally, it may be useful to further investigate the complex structure of the perceived ease of use and perceived usefulness constructs of TAM as they apply to other information systems supporting teaching and learning, as well as to other, more complex information system innovations.


Adams, D. A., Nelson, R. R. & Todd, P. A. (1992) Perceived usefulness, ease of use, and usage of information technology: A replication. MIS Quarterly, 16, 227-247.

Bagozzi, R. P., Davis, F. D. & Warshaw, P. R. (1992) Development and Test of a Theory of Technological Learning and Usage. Human Relations, 45.

Berg, B. L. (2004) Qualitative Research Methods for the Social Sciences, Boston, Pearson Education, Inc.

Davis, F. D. (1989) Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13, 319-340.

Davis, F. D., Bagozzi, R. P. & Warshaw, P. R. (1989) User acceptance of computer technology: A comparison of two theoretical models. Management Science, 35, 982-1003.

DeLone, W. H. & McLean, E. R. (1992) Information systems success: The quest for the dependent variable. Information Systems Research, 3, 60-95.

DeLone, W. H. & McLean, E. R. (2003) The DeLone and McLean Model of Information Systems Success: A Ten-Year Update. Journal of Management Information Systems, 19, 9-30.

Denscombe, M. (1998) The Good Research Guide for small-scale social research projects, Philadelphia, Open University Press.

Hamel, J. (1993) Case Study Methods, Newbury Park, SAGE Publications.

Hendrickson, A. R., Massey, P. D. & Warshaw, P. R. (1989) On the test-retest reliability of perceived usefulness and perceived ease of use scales. MIS Quarterly, 17, 227-230.

Iivari, J. (2005) An Empirical Test of the DeLone-McLean Model of Information System Success. The DATA BASE for Advances in Information Systems, 36, 8-18.

IT Cortex (2002) Success assessment: Where is the limit between success and failure? Chapelier, Jean-Pol.

Jones, D. & Behrens, S. (2003) Online Assignment Management: An Evolutionary Tale. 36th Annual Hawaii International Conference on System Sciences. Hawaii, IEEE.

Jones, D. & Buchanan, R. (1996) The design of an integrated online learning environment. In Allan Christie, P. J., Beverley Vaughan (Ed.) Proceedings of ASCILITE’96. Adelaide.

Jones, D. & Jamieson, B. (1997) Three Generations of Online Assignment Management. In Kevill, R., Oliver, R. & Phillips, R. (Eds.) ASCILITE’97. Perth, Australia.

Jones, D., Lynch, T. & Jamieson, K. (2003) Emergent Development of Web-based Education. Proceedings of Informing Science + IT Education. Pori, Finland.

Mahaney, R. C. & Lederer, A. L. (1999) Runaway information systems projects and escalating commitment. Special Interest Group on Computer Personnel Research Annual Conference. New Orleans, Louisiana, USA, ACM Press.

Seddon, P. B., Staples, S., Patnayakuni, R. & Bowtell, M. (1999) Dimensions of information systems success. Communications of the AIS, 2.

Segars, A. H. & Grover, V. (1993) Re-examining perceived ease of use and usefulness: A confirmatory factor analysis. MIS Quarterly, 17, 517-525.

Stake, R. C. (1995) The art of case study research, Thousand Oaks, CA, Sage.

Standish Group (1995) The CHAOS report. The Standish Group International.

Subramanian, G. H. (1994) A replication of perceived usefulness and perceived ease of use measurement. Decision Sciences, 25, 863-873.

Szajna, B. (1994) Software Evaluation and choice: predictive evaluation of the Technology Acceptance Instrument. MIS Quarterly, 18, 319-324.

Whittaker, B. (1999) What went wrong? Unsuccessful information technology projects. Information Management & Computer Security, 7, 23-30.

Yin, R. K. (1994) Case Study Research Design and Methods, Thousand Oaks, SAGE Publications.

Digital learning: It’s déjà vu all over again

Below you will find resources associated with a talk titled “Digital Learning: It’s deja vu all over again”. The slides below are the near final set to be presented at the #dLRN15 conference (abstract available below).

Due to time constraints, a slightly longer version of the slides has been replaced.


The initial steps of my university teaching career commenced in the early 1990s teaching information technology courses to on-campus and distance students. For distance students the learning experience was largely print-based with little or no student-student or student-teacher interaction. Like many academics at that time the increasing availability of the Internet sparked explorations into a range of digital learning innovations designed to overcome the limitations of existing institutional teaching methods (Jones, 1996a, 1996b).

Twenty years later and three years ago – long after digital learning had become the norm in higher education – my teaching career continued at a new institution and in a new discipline. I was now teaching pre-service teachers in a program proudly proclaiming itself to be among the only ones in Australia available entirely online. Once again I found myself teaching both on-campus and “distance” students. Further extending the sense of déjà vu, the last three years have been spent exploring a range of digital learning innovations designed to overcome many of the same limitations of existing institutional teaching methods. Digital learning, it’s like déjà vu all over again.

Using this experience and the BAD/SET framework (Jones & Clark, 2014) the session will argue that the institutional implementation of learning and teaching – be it distance education or digital learning – is underpinned by the SET mindset. A mindset that places more emphasis on reuse and scale than on contextually appropriate pedagogical value and thus creates this sense of déjà vu. The session will seek to illustrate how the combination of both the BAD and SET mindsets can offer useful insights for both research and practice into how digital learning might be harnessed institutionally to achieve appropriate and practical outcomes.


Jones, D. (1996a). Computing by distance education: Problems and solutions. ACM SIGCSE Bulletin, 28(SI), 139–146.

Jones, D. (1996b). Solving Some Problems of University Education: A Case Study. In R. Debreceny & A. Ellis (Eds.), Proceedings of AusWeb’96 (pp. 243–252). Gold Coast, QLD: Southern Cross University Press.

Jones, D., & Clark, D. (2014). Breaking BAD to bridge the reality/rhetoric chasm. In B. Hegarty, J. McDonald, & S. Loke (Eds.), Rhetoric and Reality: Critical perspectives on educational technology. Proceedings ascilite Dunedin 2014 (pp. 262–272). Dunedin. Retrieved from http://ascilite2014.otago.ac.nz/files/fullpapers/221-Jones.pdf

Types of e-learning projects and the problem of starvation

The last assignment for the course EDC3100, ICT and Pedagogy was due to be submitted yesterday. Right now the Moodle assignment activity (a version somewhat modified by my institution) is showing that 193 of 318 enrolled students have submitted assignments.

This is a story of the steps I have to take to respond to what these figures (they’re not as scary as they seem) actually tell.

It’s also a story about the different types of development projects that are required when it comes to institutional e-learning and how the institutional approach to implementing e-learning means that certain types of these projects are inevitably starved of attention.

Assignment overview

Don’t forget the extensions

193 out of 318 submitted suggests that almost 40% of the students in the course haven’t submitted the final assignment. What this doesn’t show is that a large number of extensions have been granted. It would be nice for that information to appear in the summary shown above. To actually identify the number of extensions that have been granted, I need to:

  1. Click on the “View/grade all submissions” link (and wait for a bit).
  2. Select “Download grading worksheet” from a drop down box.
  3. Filter the rows in the worksheet for those containing “Extension granted” (sorting won’t work).

This identifies 78 extensions, suggesting that around 15% (48) of the students appear not to have submitted on time.
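My scripts for this sort of thing are in Perl, but the filtering step is simple enough to sketch in a few lines of Python. The column names and status wording below are assumptions for illustration, not the actual layout of Moodle's grading worksheet export:

```python
import csv
import io

# Stand-in for the downloaded grading worksheet (a CSV file).
# Real Moodle exports have different/more columns; this layout is assumed.
worksheet = io.StringIO(
    "Full name,Email address,Status\n"
    "Alice A,alice@example.com,No submission - Extension granted until 1 July\n"
    "Bob B,bob@example.com,No submission\n"
    "Carol C,carol@example.com,Submitted for grading\n"
)

# Filter (not sort) for rows whose status mentions an extension.
extensions = [
    row for row in csv.DictReader(worksheet)
    if "Extension granted" in row["Status"]
]

print(len(extensions))  # count of students with extensions
```

Against the real worksheet, the same filter is what produces the count of 78.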

Getting in contact with the non-submits

I like to get in contact with these students to see if there’s any problem. If you want some support for this practice the #1 principle of the “7 Principles of Good Practice for Undergraduate Education” is

1. Encourages Contact Between Students and Faculty
Frequent student-faculty contact in and out of classes is the most important factor in student motivation and involvement. Faculty concern helps students get through rough times and keep on working.


Since my students are spread across the world (see the image to the right) and the semester ended last week, a face-to-face chat isn’t going to happen. With 48 students to contact I’m not feeling up to playing phone tag with that number of students. I don’t have easy access to the mobile phone numbers of these students, nor do I have access to any way to send text messages to these students that doesn’t involve the use of my personal phone. An announcement on the course news forum doesn’t provide the type of individual contact I’d prefer and there’s a question about how many students would actually see such an announcement (semester ended last week).

This leaves email as the method I use. The next challenge is getting the email addresses of the students who haven’t submitted AND don’t have extensions.

The Moodle assignment activity provides a range of ways to filter the list of all the students. One of those filters is “Not submitted”. The problem with this filter is that there’s no way (I can see) to exclude those that have been given an extension. In this case, that means I get a list of 126 students. I need to ignore 78 of these and grab the email addresses of 48 of them.

Doing this manually would just be silly. Hence I save the web pages produced by the Moodle assignment activity onto my laptop and run a Perl script that I’ve written which parses the content and displays the names and email addresses of the students without extensions.

Another approach would have been to use the grading worksheet (a CSV file) I used above. But I’ve gone down the HTML parsing route because I’ve already got a collection of Perl scripts parsing Moodle HTML files due to a range of other screen scraping tasks I’ve been doing for other reasons.

Excluding the won’t submits

I now have the list of students who haven’t submitted and don’t have extensions. But wait, there’s more. There are also some students I know who will, for a variety of reasons, never submit. If possible, I’d prefer not to annoy them by sending them an email about them not submitting Assignment 3.

This information is not in any database. It’s mostly a collection of email messages from various sources stored in the massive 2GB of space the institution provides for email. I have to manually search through those to find the “won’t submits”.

Send the email

Now it’s time to send the email. In a perfect world I would send a personalised email message: one that includes the student’s name and perhaps other details about their participation in the course. Moodle doesn’t appear to provide an email merge facility. In theory Office provides some functionality along these lines, but I use a Mac, and Office never seems to work easily on the Mac (and I’m biased against Office).
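
A DIY merge is only a few lines of script. The sketch below shows roughly what building personalised messages from a template could look like; all the names, addresses, and wording here are invented placeholders, not anything Moodle or the institution provides.

```python
# Hedged sketch only: Moodle has no mail-merge facility, so this shows what
# a DIY merge might look like. Sender address and wording are placeholders.
from email.message import EmailMessage
from string import Template

TEMPLATE = Template(
    "Hi $name,\n\n"
    "I noticed you haven't yet submitted Assignment 3. If something is\n"
    "getting in the way, please get in touch.\n"
)

def build_messages(students, sender="teacher@example.edu"):
    """Build one personalised EmailMessage per (name, email) pair."""
    messages = []
    for name, email in students:
        msg = EmailMessage()
        msg["From"] = sender
        msg["To"] = email
        msg["Subject"] = "Assignment 3 submission"
        msg.set_content(TEMPLATE.substitute(name=name))
        messages.append(msg)
    return messages
```

Actually sending would then just loop over the messages with `smtplib.SMTP(...).send_message(msg)`, assuming an SMTP server the institution allows staff to use.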

So I don’t send a personalised email, just the one message to these specific students with generic content. Many students still appear to appreciate the practice. For example, this is a response from a student who received one of these emails for a prior assignment (emphasis added):

Thank you for contacting me in regards to the submission. You’re the first staff member to ever do that so I appreciate this a lot.

Some questions

Which makes me wonder: how many teaching staff do something like this? Why/why not?

Of the staff who don’t do this, is that because

  1. They don’t think it’s important or appropriate for their course?
  2. They’ve never thought of doing it?
  3. It’s too difficult to do?
Norman on affordances

And, if it were easier, would they do it? What impact might this have?

Moodle is an open source project used by huge numbers of institutions across the world. In addition, over the last year or so my institution has spent some time customising the Moodle assignment activity. I’m fairly certain that I’m not the first person using Moodle to have wanted to contact students who haven’t submitted.

So why all the limitations in the affordances of the Assignment activity?

Types of e-learning projects

In some discussions with @beerc and @damoclarky we’ve identified five separate types of e-learning projects that an institution faces.

  1. External/system driven projects.

    Projects that have to be done because of changes in the external environment. e.g. there’s a new version of Moodle and we need to roll it out or we’ll fall behind and be running an unsupported version.

  2. Strategic projects approved by the institution.

    The institution has decided that a project is important and should be funded and used by the entire institution. e.g. the decision my institution made to modify the Moodle assignment activity in order to transition from a locally built system and not lose functionality.

    Note: there’s a line here between these projects and those below. Typically projects above this line are those that will be used by all (or perhaps most) of the institution.

  3. Projects that might scale, but waiting for them to happen creates problems.

    This is where parts of the institution recognise a problem/need (e.g. the story above) that might scale to the entire institution, but the problem/need has not yet made it across the line above into a strategic project. This means there is a period of time when people know they want to do something but can’t: they have to wait for the scarce resources of the institution to be allocated.

    In these situations, a few people don’t wait. They develop workarounds like the above. If the need is particularly important, everyone develops their own workaround, leading to large inefficiencies as the solution is re-created in numerous different ways.

  4. Projects that will only ever be of interest to a program or a particular set of courses.

    For example, all the courses in the Bachelor of Education might benefit from a single page application lesson template that is integrated with the Australian Curriculum. This isn’t something that any other set of courses is going to desire. But it’s possibly of great importance to the courses that do.

  5. Course or pedagogical design specific projects.

    These are projects that are specific to a particular pedagogical design. Perhaps unique to a single course. e.g. the “more student details” Greasemonkey script (see more recent screenshot below) that I’ve implemented for EDC3100. The pedagogical design for this course makes use of both Moodle’s activity completion facility and the BIM module.

    I’m willing to bet large amounts of money that my course is currently the only course that uses this particular combination. This specific version of the tool is unlikely to be valuable to other people. It won’t scale (though the principles behind it might). There’s no point in trying to scale this tool, but it provides real benefit to me and the students in my course.


The problem of starvation

If you ask any University IT Director, they will complain that they don’t have sufficient resources to keep existing systems running and still deal effectively with project types #1 and #2 from the list above. The news is even worse for project types 3, 4 and 5.

#5 projects never get implemented at the institutional level. They only ever get done by “Freds-in-the-shed” like me.

#4 projects might get implemented at the institutional level, but typically only if the group of courses the project is for has the funds. If your degree has small numbers, then you’re probably going to have to do it yourself.

#3 projects might get implemented at the institutional level. But that depends on the institution becoming aware of, and recognising the importance of, the project. This can take a loooong time, if it happens at all, especially if the problem requires changes to a system used by other institutions. If it’s a commercial system it may never happen. But even with an open source system (like Moodle) it can take years. For example, Costello (2014) says the following about a problem with the Quiz system in Moodle (p. 2)

Despite the community reporting the issue to the Moodle developers, giving multiple votes for its resolution and the proposal of a fix, the issue had nonetheless languished for years unfixed.

and (p. 1)

Applying this patch to DCU’s own copy of Moodle was not an option for us however, as the University operated a strict policy of not allowing modifications, or local customisations, to its Moodle installation. Tinkering with a critical piece of the institutional infrastructure, even to fix a problem, was not an option.


I suggest that there are at least four broad results of this starvation of project types 3, 4 and 5:

  1. The quality of e-learning is constrained.

    Without the ability to implement projects specific to their context, people bumble along with systems that are inappropriate. The systems limit the quality of e-learning.

  2. Feral or shadow systems are developed.

    The small number of people who are able to do so develop their own solutions to these projects.

  3. Wasted time.

    The people developing these solutions are typically completing tasks outside their normal responsibilities, effectively wasting their time. In addition, because these systems are designed for their own particular contexts, it is difficult to share them with other people who may wish to use them: either other people don’t know that they exist, or the systems use a unique combination of technologies/practices no-one else uses. This is a particular problem for project types 3 and 4.

  4. Lost opportunity for innovation and strategic advantage.

    Some of these project types have the potential to be of strategic advantage, but due to their feral nature they are never known about or can’t be easily shared.

So what?

My argument is that if institutions want to radically improve the quality of their e-learning, then they have to find ways to increase the capacity of the organisation to support all five project types. It’s necessary to recognise that supporting all five project types can’t be done by using existing processes, technologies and organisational structures.


I sent the email to the students who hadn’t submitted Assignment 3 at 8:51 this morning. It’s just over two hours later. In that time, 8 of the 36 students have responded.


Costello, E. (2014). Participatory Practices in Open Source Educational Software : The Case of the Moodle Bug Tracker Community. University of Dublin. Retrieved from http://www.tara.tcd.ie/handle/2262/71751

Norman, D. A. (1993). Things that make us smart: Defending human attributes in the age of the machine. Reading, MA: Addison Wesley.

Reading – Embracing Big Data in Complex Educational Systems: The Learning Analytics Imperative and the Policy Challenge

The following is a summary and ad hoc thoughts on Macfadyen et al (2014).

There’s much to like in the paper. But the basic premise I see in it is that the fix for the current inappropriate teleological processes used in institutional strategic planning and policy setting is an enhanced/adaptive teleological process. The impression I take from the paper is that it’s still missing the need for institutions to enable actors within them to integrate greater use of ateleological processes (see Clegg, 2002). Of course, Clegg goes on to do the obvious and develop a “dialectical approach to strategy” that merges the two extremes.

Is my characterisation of the adaptive models presented here appropriate?

I can see very strong connections between the arguments made in this paper about institutions and learning analytics and the reasons why I think e-learning is a bit like teenage sex.

But given the problems with “e-learning” (i.e. most of it isn’t much good in pedagogical terms) what does that say about the claim that we’re in an age of “big data” in education. If the pedagogy of most e-learning is questionable, is the data being gathered any use?

Conflating “piecemeal” and “implementation of new tools”

The abstract argues that there must be a shift “from assessment-for-accountability to assessment-for-learning” and suggests that it won’t be achieved “through piecemeal implementation of new tools”.

It seems to me that this conflates two separate ideas:

  1. piecemeal; and,

    i.e. unsystematic or partial measures. The claim is that it can’t happen bit-by-bit; instead it has to proceed at the whole-of-institution level. This is the necessary step in the argument that institutional change is (or must be) involved.

    One of the problems I have with this is that if you think of educational institutions as complex adaptive systems, then they are the type of system where a small (i.e. piecemeal) change could potentially (but not always) have a large impact. In a complex system, a few very small, well-directed changes may have a large impact. Alternatively, picking up on ideas I’ve heard from Dave Snowden, implementing large numbers of very small projects and observing the outcomes may be the only effective way forward. By definition, a complex system is one where being anything but piecemeal may be an exercise in futility, as you can never fully understand a complex system, let alone guess the likely impacts of proposed changes.

    The paper argues that systems of any type are stable and resistant to change. There’s support for this argument. I need to look for dissenting voices and evaluate.

  2. implementation of new tools.

    i.e. the build it and they will come approach won’t work. Which I think is the real problem and is indicative of the sort of simplistic planning processes that the paper argues against.

These are two very different ideas. I’d argue that while these alone won’t enable the change, they are both necessary for it. I’d also argue that institutional change (by itself) is unlikely to achieve the type of cultural change required. The argument presented in seeking to explain “Why e-learning is a bit like teenage sex” is essentially this: institutional attempts to enable and encourage change in learning practice toward e-learning fail because they are too focused on institutional concerns (large-scale strategic change) and not enough on enabling elements of piecemeal growth (i.e. bricolage).

The Reusability Paradox and “at scale”

I also wonder about considerations raised by the reusability paradox in connection with statements like (emphasis added) “learning analytics (LA) offer the possibility of implementing real–time assessment and feedback systems and processes at scale”. Can the “smart algorithms” of LA marry the opposite ends of the spectrum: pedagogical value and large-scale reuse? Can the adaptive planning models bridge that gap?


In the new era of big educational data, learning analytics (LA) offer the possibility of implementing real–time assessment and feedback systems and processes at scale that are focused on improvement of learning, development of self–regulated learning skills, and student success. However, to realize this promise, the necessary shifts in the culture, technological infrastructure, and teaching practices of higher education, from assessment–for–accountability to assessment–for–learning, cannot be achieved through piecemeal implementation of new tools. We propose here that the challenge of successful institutional change for learning analytics implementation is a wicked problem that calls for new adaptive forms of leadership, collaboration, policy development and strategic planning. Higher education institutions are best viewed as complex systems underpinned by policy, and we introduce two policy and planning frameworks developed for complex systems that may offer institutional teams practical guidance in their project of optimizing their educational systems with learning analytics.


First para is a summary of all the arguments for learning analytics

  • awash in data (I’m questioning)
  • now have algorithms/methods that can extract useful stuff from the data
  • using these methods can help make sense of complex environments
  • education is increasingly complex – increasing learner diversity, reducing funding, increasing focus on quality and accountability, increasing competition
  • not using the data is no longer an option

It also includes a quote from a consulting company promoting FOMO/falling behind if you don’t use it. I wonder how many different fads they’ve said that about?

Second para explains what the article is about – “new adaptive policy and planning approaches….comprehensive development and implementation of policies to address LA challenges of learning design, leadership, institutional culture, data access and security, data privacy and ethical dilemmas, technology infrastructure, and a demonstrable gap in institutional LA skills and capacity”.

But this is based on the idea of Universities as complex adaptive systems, and on the claim that “simplistic approaches to policy development are doomed to fail”.

Assessment practices: A wicked problem in a complex system

Assessment is important. Demonstrates impact – positive and negative – of policy. Assessment still seen too much as focused on accountability and not for learning. Diversity of stakeholders and concerns around assessment make substantial change hard.

“Assessment practice will continue to be intricately intertwined both with learning
and with program accreditation and accountability measures.” (p. 18). NCLB used as an example of the problems this creates and mentions Goodhart’s law.

Picks up the on-going focus on “high-stakes snapshot testing” to provide comparative data. Mentions

Wall, Hursh and Rodgers (2014) have argued, on the other hand, that the perception that students, parents and educational leaders can only obtain useful comparative information about learning from systematized assessment is a false one.

But also suggests that learning analytics may offer a better approach – citing (Wiliam, 2010).

Identifies the need to improve assessment practices at the course level. Various references.

Touches on the difficulties in making these changes. Mentions wicked problems and touches on complex systems

As with all complex systems, even a subtle change may be perceived as difficult, and be resisted (Head & Alford, 2013).

But doesn’t pick up the alternate possibility that a subtle change that might not be seen as difficult could have large ramifications.

Learning analytics and assessment-for-learning

This paper is part of a special issue on LA and assessment. Mentions other papers that have shown the contribution LA can make to assessment.

Analytics can add distinct value to teaching and learning practice by providing greater insight into the student learning process to identify the impact of curriculum and learning strategies, while at the same time facilitating individual learner progress (p. 19)

The argument is that LA can help both assessment tasks: quality assurance, and learning improvement.

Technological components of the educational system and support of LA

The assumption is that there is a technological foundation for storing, managing, visualising and processing big educational data. There’s a need for more than just the LMS. Need to mix it all up, and thus “institutions are recognizing the need to re–assess the concept of teaching and learning space to encompass both physical and virtual locations, and adapt learning experiences to this new context (Thomas, 2010)” (p. 20). Add to that the rise of multiple devices etc.

Identifies the following requirements for LA tools (p. 21) – emphasis added

  1. Diverse and flexible data collection schemes: Tools need to adapt to increasing data sources, distributed in location, different in scope, and hosted in any platform.
  2. Simple connection with institutional objectives at different levels: information needs to be understood by stakeholders with no extra effort. Upper management needs insight connected with different organizational aspects than an educator. User–guided design is of the utmost importance in this area.
  3. Simple deployment of effective interventions, and an integrated and sustained overall refinement procedure allowing reflection

Some nice overlaps with the IRAC framework here.

It does raise interesting questions about what institutional objectives actually are. Even more importantly, how easy is it (or isn’t it) to identify what those objectives are and what they mean at the various levels of the institution?

Interventions. An inset talks about the sociotechnical infrastructure for LA. It mentions the requirement for interventions (p. 21):

The third requirement for technology supporting learning analytics is that it can facilitate the deployment of so–called interventions, where intervention may mean any change or personalization introduced in the environment to support student success, and its relevance with respect to the context. This context may range from generic institutional policies, to pedagogical strategy in a course. Interventions at the level of institution have been already studied and deployed to address retention, attrition or graduation rate problems (Ferguson, 2012; Fritz, 2011; Tanes, Arnold, King, & Remnet, 2011). More comprehensive frameworks that widen the scope of interventions and adopt a more formal approach have been recently proposed, but much research is still needed in this area (Wise, 2014).

And then this (pp. 21-22) which contains numerous potential implications (emphasis added)

Educational institutions need technological solutions that are deployed in a context of continuous change, with an increasing variety of data sources, that convey the advantages in a simple way to stakeholders, and allow a connection with the underpinning pedagogical strategies.

But what happens when the pedagogical strategies are very, very limited?

Then makes this point as a segue into the next section (p. 22)

Foremost among these is the question of access to data, which needs must be widespread and open. Careful policy development is also necessary to ensure that assessment and analytics plans reflect the institution’s vision for teaching and strategic needs (and are not simply being embraced in a panic to be seen to be doing something with data), and that LA tools and approaches are embraced as a means of engaging stakeholders in discussion and facilitating change rather than as tools for measuring performance or the status quo.

The challenge: Bringing about institutional change in complex systems

“the real challenges of implementation are significant” (p. 22). The above identifies “only two of the several and interconnected socio-technical domains that need to be addressed by comprehensive institutional policy and strategic planning”

  1. influencing stakeholder understanding of assessment in education
  2. developing the necessary institutional technological infrastructure to support the undertaking

And this has to be done whilst attending to business as usual.

Hence it’s not surprising that education lags other sectors in adopting analytics. Identifies barriers:

  • lack of practical, technical and financial capacity to mine big data

    A statement from the consulting firm who also just happens to be in the market of selling services to help.

  • perceived need for expensive tools

Cites various studies showing education institutions stuck at gathering and basic reporting.

And of course even if you get it right…

There is recognition that even where technological competence and data exist, simple presentation of the facts (the potential power of analytics), no matter how accurate and authoritative, may not be enough to overcome institutional resistance (Macfadyen & Dawson, 2012; Young & Mendizabal, 2009).

Why policy matters for LA

Starts with establishing higher education institutions as a “superb example of complex adaptive systems” but then suggests that (p. 22)

policies are the critical driving forces that underpin complex and systemic institutional problems (Corvalán et al., 1999) and that shape perceptions of the nature of the problem(s) and acceptable solutions.

I struggle a bit with that observation and even more with this argument (p. 22)

we argue that it is therefore only through implementation of planning processes driven by new policies that institutional change can come about.

Expands on the notion of CAS and wicked problems. Makes this interesting point

Like all complex systems, educational systems are very stable, and resistant to change. They are resilient in the face of perturbation, and exist far from equilibrium, requiring a constant input of energy to maintain system organization (see Capra, 1996). As a result, and in spite of being organizations whose business is research and education, simple provision of new information to leaders and stakeholders is typically insufficient to bring about systemic institutional change.

Now talks about the problems more specific to LA and the “lack of data-driven mind-set” among senior management. Links this to an earlier example of institutional research being used to inform institutional change (McIntosh, 1979) and to a paper by Ferguson applying those findings to LA. From there and other places, the factors identified include:

  • academics don’t want to act on findings from other disciplines;
  • disagreements over qualitative vs quantitative approaches;
  • researchers and decision makers speak different languages;
  • lack of familiarity with statistical methods;
  • data not presented/explained to decision makers well enough;
  • researchers tend to hedge and qualify conclusions;
  • valorisation of education/faculty autonomy, and resistance to any administrative efforts perceived to interfere with T&L practice.

Social marketing and change management are drawn upon to suggest that “social and cultural change” isn’t brought about simply by giving access to data: “scientific analyses and technical rationality are insufficient mechanisms for understanding and solving complex problems” (p. 23). Returns to

what is needed are comprehensive policy and planning frameworks to address not simply the perceived shortfalls in technological tools and data management, but the cultural and capacity gaps that are the true strategic issues (Norris & Baer, 2013).

Policy and planning approaches for wicked problems in complex systems

Sets about defining policy. Includes this which resonates with me

Contemporary critics from the planning and design fields argue, however, that these classic, top–down, expert–driven (and mostly corporate) policy and planning models are based on a poor and homogenous representation of social systems mismatched with our contemporary pluralistic societies, and that implementation of such simplistic policy and planning models undermines chances of success (for review, see Head & Alford, 2013).

Draws on wicked problem literature to expand on this. Then onto systems theory.

And this is where the argument about piecemeal growth being insufficient arises (p. 24)

These observations not only illuminate why piecemeal attempts to effect change in educational systems are typically ineffective, but also explains why no one–size–fits–all prescriptive approach to policy and strategy development for educational change is available or even possible.

and perhaps more interestingly

Usable policy frameworks will not be those which offer a to do list of, for example, steps in learning analytics implementation. Instead, successful frameworks will be those which guide leaders and participants in exploring and understanding the structures and many interrelationships within their own complex system, and identifying points where intervention in their own system will be necessary in order to bring about change

One thought is whether this idea is a view that strikes “management” as “researchers hedging their bets”, one of the problems mentioned above.

Moves on to talking about “adaptive management strategies” (Head and Alford, 2013), which offer new means for policy and planning that “can allow institutions to respond flexibly to ever-changing social and institutional contexts and challenges”. These talk about

  • role of cross-institutional collaboration
  • new forms of leadership
  • development of enabling structures and processes (budgeting, finance, HR etc)

Interesting that notions of technology don’t get a mention.

Two “sample policy and planning models” are discussed.

  1. Rapid Outcome Mapping Approach (ROMA) – from international development

    “focused on evidence-based policy change”. An iterative model. I wonder about this

    Importantly, the ROMA process begins with a systematic effort at mapping institutional context (for which these authors offer a range of tools and frameworks) – the people, political structures, policies, institutions and processes that may help or hinder change.

    Perhaps a step up, but isn’t this still big up front design? Assumes you can do this? But then some is better than none?

    Apparently this approach is used more in Ferguson et al (2014)

  2. “cause-effect framework” – DPSEEA framework

    Driving force, Pressure, State, Exposure, Effect (DPSEEA): a way of identifying linkages between the forces underpinning complex systems.

Ferguson et al (2014) apparently find that “apparently successful institutional policy and planning processes have pursued change management approaches that map well to such frameworks”. So practice is not yet informed by the frameworks? Of course, there’s always the question of the people driving those systems reporting on their own work.

I do like this quote (p. 25)

To paraphrase Head and Alford (2013), when it comes to wicked problems in complex systems, there is no one– size–fits–all policy solution, and there is no plan that is not provisional.


Ferguson, R., Clow, D., Macfadyen, L., Essa, A., Dawson, S., & Alexander, S. (2014). Setting Learning Analytics in Context : Overcoming the Barriers to Large-Scale Adoption. Journal of Learning Analytics, 1(3), 120–144. doi:10.1145/2567574.2567592

Macfadyen, L. P., Dawson, S., Pardo, A., & Gasevic, D. (2014). Embracing big data in complex educational systems: The learning analytics imperative and the policy challenge. Research and Practice in Assessment, 9(Winter), 17–28.

Analysing Moodle community forum discussions about the Moodle book module

As part of the “Moodle open book” project I’m hoping to increase my knowledge of what the Moodle community has already discussed about the Book module. The following is a summary of the process I’m using to analyse those discussions.

Not finished, but here’s the story so far. Just over 2400 posts have been extracted from Moodle community forums that appear to mention “book module”. About 250 posts (very roughly) have been coded so far. A very early summary of the features discussed in those posts:

  • 43 – navigation and interface
  • 33 – export and import
  • 15 – printing
  • 13 – integrating activities (mostly quizzes) into the midst of the book.
  • 6 – page visibility
  • 3 – version control

Though a little interesting, I wouldn’t read too much into those figures yet. There are some more statistics on the 2400+ posts below.

Obtain the data

The process for obtaining the data was

  1. Global search for “book module”.

    Use the “Search forum” functionality in the “Moodle in English” community to search for posts that mentioned “book module”. This gave 144 pages of forum posts, which were then saved to my laptop.

  2. Get all the posts from the Book module forum.

    Got a copy of all the forum posts in the Book module forum.

Parse the data

Need to write a Perl script that will extract that information from the HTML files.

The potentially useful data in this set includes

  • Post
    • the subject line for the post (parsed)
    • body of the post (parsed)
    • date string when posted (parsed)
  • Forum
    • link (parsed)
    • name (parsed)
  • Author
    • User id
    • Author name (parsed)
    • link to their profile (parsed)
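
To illustrate the kind of parsing involved, here’s a hedged Python sketch. The CSS classes and page structure in the regular expression are invented for the example; the real Moodle pages would need their own patterns (and the actual scripts are Perl).

```python
# Sketch only: the div/class structure below is made up for illustration.
# Real Moodle forum pages will need patterns matched to their own markup.
import re

POST_RE = re.compile(
    r'<div class="subject">(?P<subject>.*?)</div>\s*'
    r'<div class="author"><a href="(?P<profile>[^"]+)">(?P<author>.*?)</a>\s*-\s*'
    r'(?P<date>[^<]+)</div>\s*'
    r'<div class="content">(?P<body>.*?)</div>', re.S)

def parse_posts(html):
    """Return a list of dicts, one per post, with the fields listed above."""
    return [m.groupdict() for m in POST_RE.finditer(html)]
```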

Stick it in (a) database table(s)

The next step is to have the script stick it all in a database table (moodle_book_forum_posts) to ensure that there are no duplicates.

That appears to be working. Now to get all the forum posts inserted.
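
One way to get the “no duplicates” guarantee is a uniqueness constraint plus INSERT OR IGNORE. The following SQLite sketch uses the table name from the post but an assumed schema; it also shows the kind of GROUP BY query behind the per-forum counts below.

```python
# Sketch of dedupe-on-insert with SQLite. The table name follows the post;
# the columns and uniqueness key are assumptions.
import sqlite3

def make_db():
    db = sqlite3.connect(":memory:")
    db.execute("""CREATE TABLE moodle_book_forum_posts (
                    forum TEXT, author TEXT, subject TEXT, posted TEXT,
                    UNIQUE (forum, author, subject, posted))""")
    return db

def insert_posts(db, posts):
    """INSERT OR IGNORE means re-running the scrape can't create duplicates."""
    db.executemany(
        "INSERT OR IGNORE INTO moodle_book_forum_posts VALUES (?, ?, ?, ?)",
        posts)
    db.commit()

def posts_per_forum(db):
    """The 'quick stats from SQL': post counts per forum, descending."""
    return db.execute(
        "SELECT forum, COUNT(*) FROM moodle_book_forum_posts "
        "GROUP BY forum ORDER BY COUNT(*) DESC").fetchall()
```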

Done, some quick stats from SQL

  1. 2442 forum posts
  2. 870 authors
  3. 146, 71, 41, 41, 41, 41, 36 – the number of posts (in descending order) by the most prolific authors.
  4. the posts are from 40 forums.
    As you would expect, most are in the Book forum.

    • Book – 1774 posts
    • General help – 143
    • General developer – 86
    • Themes – 46
    • General plugins – 38
    • Gradebook – 37

    The presence of the gradebook forum points to potentially the biggest flaw in the data so far: a search for “book module” may return posts that mention “gradebook module” or similar. This will get ironed out in the later analysis.

  5. Content analysis – into NVivo

    The plan is to use NVivo to do a content analysis of the posts. The aim is to identify the nature of the posts about the Book module, i.e. are the posts how-to questions, bug reports, feature requests etc., and as part of that, what types of features have been requested and when.

    The plan was to import the data from the database, but apparently the Mac version of NVivo cannot import data from a database. Meaning I need to go via a spreadsheet/CSV file.

    Sadly, NVivo seems a little constrained, e.g. you can’t add to or change a dataset.

    But at least Perl and WriteExcel provide some flexibility.

    Of course, it appears that I have to load the Excel file produced by Perl into Excel and then save it from Excel before NVivo will import it properly.

    Initial analysis with NVivo

    First run through I think I’ll use these nodes

    • Book or NotBook – to indicate whether a post is related to the book module.
    • NewFeature – indicate something to do with new feature
      • Request – asking for a new feature
      • Announce – Announce a new feature
    • Bug – indicate a bug has been identified
      • Request – asking for help with a bug
      • Announce – announcing a fix for a bug
    • Help – getting help with using book
      • Request – asking for help
      • Announce – answering pleas for help

    Each of the book related nodes will have nodes indicating what is being helped with e.g. export, import, navigation, authoring, permissions, display. Wonder if there’s a list of these already.

    It’s taking a while to do this coding. Pity about the absence of decent keyboard shortcuts in NVivo.

    Will probably need to revisit these categories. There are a few where the distinction is questionable – e.g. export/print, bug/new feature.
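Since the Mac version of NVivo can’t read from a database, the detour above goes database to spreadsheet to NVivo. The actual tooling was Perl and WriteExcel; for illustration, the same export could be sketched with Python’s standard library (same invented columns as before):

```python
import csv
import io
import sqlite3

# Populate a stand-in for the moodle_book_forum_posts table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE moodle_book_forum_posts "
             "(post_id, author, forum, posted, content)")
conn.executemany(
    "INSERT INTO moodle_book_forum_posts VALUES (?, ?, ?, ?, ?)",
    [(1, "alice", "Book", "2015-01-01", "How do I export a book?"),
     (2, "bob", "Gradebook", "2015-01-02", "A gradebook question")])

# Write every post out as CSV, which NVivo can import as a dataset.
buf = io.StringIO()   # a real script would use open("posts.csv", "w", newline="")
writer = csv.writer(buf)
writer.writerow(["post_id", "author", "forum", "posted", "content"])
for row in conn.execute("SELECT post_id, author, forum, posted, content "
                        "FROM moodle_book_forum_posts ORDER BY post_id"):
    writer.writerow(row)

print(buf.getvalue().splitlines()[0])  # the header row
```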

The four paths for implementing learning analytics and enhancing the quality of learning and teaching

The following is a place holder for two presentations that are related. They are:

  1. “Four paths for learning analytics: Moving beyond a management fashion”; and,

    An extension of Beer et al (2014) (e.g. there are four paths now, rather than three) that’s been accepted to Moodlemoot’AU 2015.

  2. “The four paths for implementing learning analytics and enhancing the quality of learning and teaching”;

    A USQ research seminar that is part a warm up of the Moot presentation, but also an early attempt to extend the 4 paths idea beyond learning analytics and into broader institutional attempts to improve learning and teaching.

Eventually the slides and other resources from the presentations will show up here. What follows is the abstract for the second talk.

Slides for the MootAU15 presentation

Only 15 minutes for this talk, so I tried to distill the key messages. Thanks to @catspyjamasnz, the talk was captured on Periscope.

Slides for the USQ talk

Had the luxury of an hour for this talk. Perhaps too verbose.


Baskerville and Myers (2009) define a management fashion as “a relatively transitory belief that a certain management technique leads rational management progress” (p. 647). Maddux and Cummings (2004) observe that “education has always been particularly susceptible to short-lived, fashionable movements that come suddenly into vogue, generate brief but intense enthusiasm and optimism, and fall quickly into disrepute and abandonment” (p. 511). Over recent years learning analytics has been looming as one of the more prominent fashionable movements in educational technology, illustrated by the apparent engagement of every institution and vendor in some project badged with the label learning analytics. If these organisations hope to successfully harness learning analytics to address the challenges facing higher education, then it is important to move beyond the slavish adoption of the latest fashion and aim for more mindful innovation.

Building on an earlier paper (Beer, Tickner, & Jones, 2014) this session will provide a conceptual framework to aid in moving learning analytics projects beyond mere fashion. The session will identify, characterize, and explain the importance of four possible paths for learning analytics: “do it to” teachers; “do it for” teachers; “do it with” teachers; and, teachers “DIY”. Each path will be illustrated with concrete examples of learning analytics projects from a number of universities. Each of these example projects will be analysed using the IRAC framework (Jones, Beer, & Clark, 2013) and other lenses. That analysis will be used to identify the relative strengths, weaknesses, and requirements of each of the four paths. The analysis will also be used to derive implications for the decision-makers, developers, instructional designers, teachers, and other stakeholders involved in both learning analytics, and learning and teaching.

It will be argued that learning analytics projects that follow only one of the four paths are those most likely to be doomed to mere fashion, and that moving a learning analytics project beyond mere fashion will require a much greater focus on the “do it with” and “DIY” paths. An observation that is particularly troubling when almost all organizational learning analytics projects appear focused primarily on either the “do it to” or “do it for” paths.

Lastly, the possibility of connections between this argument and the broader problem of enhancing the quality of learning and teaching will be explored. Which paths are used by institutional attempts to improve learning and teaching? Do the paths used by institutions inherently limit the amount and types of improvements that are possible? What implications might this have for both research and practice?


Baskerville, R. L., & Myers, M. D. (2009). Fashion waves in information systems research and practice. MIS Quarterly, 33(4), 647–662.

Beer, C., Tickner, R., & Jones, D. (2014). Three paths for learning analytics and beyond: Moving from rhetoric to reality. In B. Hegarty, J. McDonald, & S. Loke (Eds.), Rhetoric and Reality: Critical perspectives on educational technology. Proceedings ascilite Dunedin 2014 (pp. 242–250).

Jones, D., Beer, C., & Clark, D. (2013). The IRAC framework: Locating the performance zone for learning analytics. In H. Carter, M. Gosper, & J. Hedberg (Eds.), Electric Dreams. Proceedings ascilite 2013 (pp. 446–450). Sydney, Australia.

Maddux, C., & Cummings, R. (2004). Fad, fashion, and the weak role of theory and research in information technology in education. Journal of Technology and Teacher Education, 12(4), 511–533.

What’s good for “open content” is good for the LMS/virtual learning space?

My tweet stream reminded me this morning that #oer15 is up and running. The following tweet from @courosa was amongst the first I saw.

The tweet draws on the 5Rs framework from David Wiley as a way of defining “open content” as having a license that allows others to engage in the 5R activities:

  • Retain – the right to make, own, and control copies of the content (e.g., download, duplicate, store, and manage)
  • Reuse – the right to use the content in a wide range of ways (e.g., in a class, in a study group, on a website, in a video)
  • Revise – the right to adapt, adjust, modify, or alter the content itself (e.g., translate the content into another language)
  • Remix – the right to combine the original or revised content with other open content to create something new (e.g., incorporate the content into a mashup)
  • Redistribute – the right to share copies of the original content, your revisions, or your remixes with others (e.g., give a copy of the content to a friend)

Why stop at content?

When it comes to learning (and teaching), I wonder about the focus on and definition of “content”. Especially if you take @downes’ perspective that the “content in learning functions as a McGuffin”. At the very least, the content in the courses I design is somewhat important, but it’s not the only thing.

What the learner does with and takes from that content is more important. What the learner does is enabled and constrained by the tools available to them and the affordances those tools offer. Sitting in a lecture, reading a print book, watching an online video, engaging on a blog, engaging in a discussion forum all offer different constraints and enablers.

Regardless of their relative merits, increasingly the learners in my courses are being required to engage with a range of digital technologies in the form of the institutional LMS and other tools. These tools – both institutional and personal – make up their virtual learning space. Complaints about the LMS have been many and regular over the last 10+ years.

Perhaps the most regular complaint from certain circles is that the LMS is not open. Not open in terms of only people enrolled in the course at the institution being able to access it. Not open in terms of losing access once you leave the institution. Not open in terms of not being able to use Google (or, in my case, any search engine) to find material on the LMS.

But recently another trend has been making the LMS even less open. Many institutions are now mandating consistent, minimum standards for all courses hosted in the LMS. At my current institution that has translated into the virtual learning space for a course having to look a specific way and, more troubling, having to use specific locally produced tools (e.g. a particular way of presenting assessment information and a study schedule).

What’s worse is that this mandated consistent set of minimum standards is being seen through the lens of an “established” view of technology. That you can’t and shouldn’t change the technology. In fact, if you do change the technology you are seen as breaking policy and are required to “please explain” (as has happened to me this year).

In some large part this type of thinking to me is an example of this quote from Bret Victor

We’re computer users thinking paper thoughts

The mandating of consistent, minimum standards for all courses in an LMS gives me a strong sense of a deja vu for the bad old days of 2nd generation print-based distance education. The days when all the distance education courses for a University had to use the same style guide, even if it broke all the Prolog code in the Machine Intelligence material. Mandating consistent, minimum standards for all courses in the LMS is “computer users thinking paper thoughts”.

It’s an example of people not understanding what’s really different about computers and digital media. Mike Caufield makes this point

I would argue (along with Alan Kay and so many others) that for digital media the most radical affordance is the remixability of the form (what Kay would call its dynamism). We can represent ideas not as finished publications, but as editable models that can be shared, redefined, and recontextualized. Conversations are transient, publications are fixed.
But digital media can be forever fluid, if we let it.

Universities are missing out on the full benefits of digital media because they are “computer users thinking paper thoughts” that don’t even recognise the “remixability” of digital media and the potential that brings. Instead of leveraging this affordance of the medium and letting it be fluid, institutions are setting it in stone.

Even if you open access to the LMS, will it be open?

Even if an institution opens up access to the LMS and allows anyone into it, I don’t think it can be classed as open, because access is only the first step in being open.

I’m thinking that the LMS – or any other institutional virtual learning space – can’t be truly open until it allows me to:

  1. Retain – to make and control copies of the data, experiences, and perhaps affordances offered by that learning space.
  2. Reuse – the data, experiences, and perhaps affordances in other ways, in this space, and other spaces.

    e.g. download data about learner activity to my laptop to perform analysis not available in the LMS. e.g. take data from the LMS to generate a “learning report” to automatically “mark” learning activities.

  3. Revise – to adapt, adjust, modify or alter the data, experiences, and perhaps affordances in other ways, in this space, and other spaces.

    e.g. I can use jQuery to point the mandated “Assessment” link to assessment information that is presented more appropriately.

  4. Remix – to recombine the original and revised data, experiences, and perhaps affordances in other ways, in this space, and other spaces.

    e.g. take LMS data and data from the student records system to develop a “learning process analytics” tool used in the course.

  5. Redistribute – the data, experiences, and perhaps affordances in other ways, in this space, and other spaces.

    e.g. the idea of a tool that allows the learning material in my course to be re-purposed as an open book.

Should/can the virtual learning spaces be open in terms of the 5Rs? How might this be done? What problems/benefits might accrue?

Personally, these are important and interesting questions. Not least because I’m already doing some of this (see the examples above) via various backdoor methods. And it is helping to make the task of teaching 300+ students somewhat bearable.
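The “Remix” example above – combining LMS data with student records data into a learning process analytics tool – can be sketched in miniature. All the field names and the “at risk” heuristic below are invented for illustration; the real data would come from LMS exports and the student records system:

```python
# Hypothetical extracts: activity counts from the LMS, plus a separate
# extract from the student records system. Field names are invented.
lms_activity = [
    {"student_id": "s1", "clicks": 120},
    {"student_id": "s2", "clicks": 5},
]
student_records = {
    "s1": {"name": "Alice", "gpa": 6.2},
    "s2": {"name": "Bob", "gpa": 3.8},
}

# The "remix": join the two sources into a simple learning-process report.
report = []
for row in lms_activity:
    record = student_records.get(row["student_id"], {})
    report.append({
        "name": record.get("name", "unknown"),
        "gpa": record.get("gpa"),
        "clicks": row["clicks"],
        "flag": row["clicks"] < 10,   # crude "check in on this student" heuristic
    })

print(report[1])  # Bob has very few clicks, so his row is flagged
```

The point isn’t the ten lines of code; it’s that this sort of recombination is only possible if the learning space lets you retain and reuse its data in the first place.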

Concrete lounges and why basketball players aren’t better hockey players

Assignment 1 for my course is due later today. 100+ students have submitted. Time to explore the Moodle assignment submission system and how to allocate assignments to markers.

So what?

Why would that be interesting to anyone not using Moodle? Well because…

Is elearning like teenage sex?

One explanation for the quality of “e-learning” is

We have taken our teachers who have been really effective at face-to-face teaching and asked them to also be experts in online teaching. Get real! That’s like asking a good basket baller to become a good hockey player. Yes it’s sport and yes you have a ball and competitors, but the rules are very different. And yes, if you’re a good sportsperson, chances are you can pick-up on being good at another code, but it will take time and quite a bit of training.

[Image: “Die Schuhe sind zu groß.” (“The shoes are too big”) by Jorbasa, on Flickr. Creative Commons Attribution-No Derivative Works 2.0 Generic License]

That’s certainly part of the problem. But – to extend the analogy – the other part of the problem that I experience day to day is that universities are asking the good basketball players to play hockey with equipment that’s quite a few sizes too small and simply doesn’t help them play hockey, let alone learn how to play hockey.

This is not to say that the provision of the appropriate equipment is easy. It’s not. It’s incredibly difficult. A wicked problem.

The point is that the perspective (from the same post) is – in my experience – not the case at all

We already have all the tools we need to get our students engaged. Sure there will be new ones that come along from time to time that will do things a wee bit better, but for the time being we have plenty to make this happen.

As a teacher engaged with e-learning at a University, most of the technology provided is a concrete lounge.

Assignment submission

My current institution has this year moved away from a locally produced online assignment submission and management (OASM) system toward one embedded within the Moodle LMS. There’s apparently been some customisation of the standard Moodle OASM system, but it’s not clear just how much. I’ve already heard reports from other staff (with courses that have assignments due before mine) that allocation of assignments to markers is less than easy.

The following documents my attempts to do this and seeks to explore if the Moodle assignment submission system will be an example of the wrong size shoes for playing hockey.

I’m a hockey player

Background: I designed and implemented my first OASM system back in 1994/1995. From then through to about 2004/2005 I led the design and implementation of various generations of an OASM system and wrote the odd paper about it. I know a bit about these systems. I’m not a basketball player, I’m a hockey player.

Assigning some assignments – do it myself

[Image: Documentation by mray, on Flickr. Creative Commons Attribution-Share Alike 2.0 Generic License]

First test, can I figure out how to do this via the interface. i.e. don’t read the “manual”.

Assuming the “Assignment administration” menu would offer some insight/affordances.

“Marker assignment quota” seems the most obvious option. A couple of observations

  • Apparently one of the students has somehow been allocated the role of marker; she is appearing in the list of markers.

    My first question is obviously, “How the hell did that happen?”. The user is currently assigned to both the student and “general admin” roles. I don’t remember (even accidentally) making this change. Wonder how it happened?

  • This offers a choice of unlimited or a specific quota, but isn’t pre-populated with data already entered.

    i.e. to employ the markers to do this work, I had to negotiate with them how many they could mark and then specify that in the contract process. Having to re-enter this data in another system is a bit of a pain. I understand why it hasn’t been done – these are two very separate systems managed by very different parts of the institution – but if the shoe were to fit…..

Concrete lounge #1: Having to re-enter data already present in other systems.

View/grade all submissions

Next bet is to try the “View/grade all submissions” which shows a filterable list of all the submitted assignments and allows a number of operations to be done upon them. I’m assuming that “allocate marker” could be one of them.

Yep, “set allocated marker” is an option. Select the student(s) to allocate, select the menu option and hit “Go”. This brings up a page with those students listed and another drop-down menu of markers. You choose the marker and hit “Save Changes”.

Two potential problems with this

  1. Pre-allocation; and

    This does imply that you can only allocate markers to assignments that have already been submitted. I’ve got at least one marker who has a fixed group of assignments to mark. All located at a specific campus. In a perfect world I’d be able to pre-allocate the assignments from those students to the marker. Rather than have to wait until they are submitted and manually allocate them.

  2. Manually selecting individual students.

    Individual allocation is ok, but I would like to see at least two additional options. First, the allocation-by-group option available above. Second, a random (or perhaps specific) allocation of specified numbers. e.g. I have markers who will each mark 50 assignments; I’d like the system to randomly allocate 50 to each of them. I’d rather not have to count to 50.

    Even better, it might be nice to say allocate them 50 assignments, but aim to achieve a balance of ability levels (perhaps based on GPA or some other indicator). Few things are more depressing than having to mark 50 low quality assignments. I assume there would be other allocation schemes people would like to apply.
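To make the wish above concrete, here is what such an allocator might look like. This is purely hypothetical – it is not how Moodle’s auto-allocate works – and it shows random allocation that respects per-marker quotas and pre-allocations (e.g. the campus-based marker):

```python
import random

def allocate(submissions, quotas, fixed=None, seed=42):
    """Randomly allocate submissions to markers, respecting quotas.

    submissions: list of student ids
    quotas:      dict marker -> max number of assignments to mark
    fixed:       dict student -> marker for pre-allocations (e.g. one campus)
    """
    fixed = fixed or {}
    allocation = dict(fixed)
    remaining = [s for s in submissions if s not in fixed]
    random.Random(seed).shuffle(remaining)   # seeded so a "Simulate" is repeatable
    # Build a pool of marker "slots" left over after the fixed allocations.
    slots = []
    for marker, quota in quotas.items():
        used = sum(1 for m in allocation.values() if m == marker)
        slots.extend([marker] * max(0, quota - used))
    # If there are more submissions than slots, the surplus stays unallocated.
    for student, marker in zip(remaining, slots):
        allocation[student] = marker
    return allocation

students = [f"s{i}" for i in range(6)]
alloc = allocate(students, quotas={"m1": 3, "m2": 3}, fixed={"s0": "m1"})
print(len(alloc))  # 6: all six students end up with a marker
```

A GPA-balanced variant would just sort the remaining students by GPA and deal them out round-robin instead of shuffling.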

Are there other options beyond this?

Grading options

Under the “grading option” drop down menu there is a “auto-allocate markers” option. But I wonder if it’s smart enough to handle variety. i.e. I need to ensure that one marker gets all the students at one campus, but can randomly allocate the remainder.

I don’t want to experiment with this option, just in case it goes ahead and auto-allocates straight away. So I’ll do a Google search for documentation. The search results are not that clear.

It appears that Moodle 2.6 added two related features – marking workload; and, marking allocation. These have to be configured into the assignment activity. Did I do this? I did indeed. And this provides the functionality I’ve already identified.

So let’s just suck it and see. Good, it doesn’t do this straight away. It offers the options to:

  1. allocate all unallocated, submitted submissions;
  2. allocate all unallocated submissions (including unsubmitted ones);

    Both these options are done by specifying an “allocation batch size” and then either doing an “Allocate” or a “Simulate”. The simulate is a useful feature.

  3. copy allocations from another assignment.

Nothing here about allocating based on groups.

Filters and options

There are a collection of filters that can be applied, based on

  • # assignments per page;
  • assignment status;
  • marker allocation;
  • workflow status;

    A slight duplication of assignment status, but based on a different approach.

There’s nothing here about filtering based on groups. Is this because I haven’t configured it to be?

Group options in the settings

There is a “Group submission settings” section in the assignment settings. But most of this is based on the idea of students submitting assignments in groups. Not using groups to allocate assignments to markers.

No obvious options

I’m giving up. I can’t see from the system any obvious ways to allocate assignments easily via groups.

At this stage it appears that I will have to

  1. Manually allocate all students at one campus to their marker.
  2. Use the auto-allocate feature for the remaining students.

[Image: Edu Doggy by David T Jones, on Flickr]

In theory, I could negotiate with the first marker to do an auto-allocate. But I think it important that he mark the assignments of his own students. Changing that preference would be a case of the tail wagging the dog.

Use the documentation

Before I do that, let’s see whether the documentation provided by the institution can offer any insight. It appears that this might be the solution

However, if the assignment activity has first been configured into groups, these can be manually assigned to a specified Marker.

I’m not entirely sure what this means. Let’s experiment with a dummy assignment and use the “common module settings” and the groups there.

First, the groupings don’t seem to be appropriate. There’s no option to do it at campus level.

Okay, this appears to have added an additional option to filter which students/assignments are shown based on groups.

This would provide the option I need (a bit of a kludge), but the question is whether or not the setting can be changed on the fly – i.e. after students have started submitting?

The other question is what do (or will) the students actually see. I don’t believe there is actually an easy way for me to test this.

Let’s try making the change. Appears to be no problem with other assignments. I assume Moodle will warn of any horrible consequence? Famous last words? Logically there shouldn’t be a problem, but….

Change made, but there is no difference in the display. The option to select just the students in a particular group does not appear. Perhaps it can’t be changed once submissions have been made.

Concrete Lounge #2: No apparent way to filter assignments/students by groups

Group membership is stored independently of assignment submission in the Moodle database. It should be possible to offer a “Group filter” – perhaps even one dependent on the “grouping” – as a way to modify the viewing of all submissions.

Looks like I’ll have to do this manually.

Documentation at the wrong abstraction layer

Concrete Lounge #3: The local help documentation (like most help documentation) is written at the level of functionality. It describes the interface and what each interface element does. It isn’t organised at the level of “activity type” i.e. the level of the users.

i.e. I have a certain model of how I want to manage the submission, allocation, and marking of assignments. That’s what I know. That’s where I am. Documentation that started at this level by describing a list of different models of using assignment submission and then describing how to configure the Moodle assignment submission system to implement this model would be more useful (and much more difficult to write).

Better yet would be an assignment submission system that presented a list of different models, briefly explained each one, allowed me to choose the one I want, and then set up the system to fulfill that model.

i.e. the system would actually fit what I wanted to do, rather than requiring me to explore how (and if) I could translate the functionality of the system into what I wanted to do.

Sorry, but the tools we have available at the moment aren’t quite ready to help basketball players become better hockey players.


As per the comment below, I missed an option to flick. That’s done and I can see the groups and make use of them. So here’s what I did:

  1. Allocate unsubmitted from campus X to the marker;

    Set filter to the tutorial group I need and filter for “unsubmitted”. This is so that if they submit, they will automatically appear on the marker’s list.

  2. Allocate submitted from campus X to the marker;
  3. Auto-allocate the remaining submitted to markers;

    Priority is given to those submitted.

  4. Drop the allocation for campus X marker

    Problem: the campus X marker was originally allocated 22 students to mark. But one has dropped out, meaning that when I do an auto-allocate (simulation) he gets allocated an extra assignment.

    I also have to make sure that the “student marker” has an allocation of 0.

  5. Do the auto-allocation again.

Now all I need do is to figure out how much advice the markers will need to download, mark, and resubmit their allocated assignments.

What advice is there?

Hard to explore this myself as I don’t know how much my view of the system is the same as what a marker would see.

The “Download all submissions” option gets them all, not just the ones I’ve allocated.

Appears that the “view all”, play with filters, and then download approach is the way to go. I assume that the markers won’t have the “Marker filter” to play with.

I wonder if the organisation has given any advice specific to markers. Of course, the “portal” through which I can access links to various staff support sites won’t let me log in with my current browser. Nor with Chrome.

Oh dear, the “portal” still has a large explicit link to the old assignment submission system.

Now begins the traipsing through various sites to figure out where it is.

A couple of red herrings and finally found the document I had been using (not only was it hidden away in the object repository, I had to login again to access it) and it confirms my suspicion. Downloading will be fairly simple for markers – once they find the right place and buttons to push.

But there doesn’t appear to be any specific file/resource that can be sent to markers. It appears that I’ll have to create my own (just like every other person in charge of a course with multiple markers). Of course, the other option is that I’ve missed the presence of this other document entirely.

It appears a cut-down version of the larger document was circulated. I found this out via personal networks and Twitter, rather than via other means. The smaller document had been circulated earlier via email, but finding it in my Inbox……

The documentation is very generic. I’ll update it and include a direct link to the specific assignment.

What is downloaded?

A zip file with all student submissions in a single directory. Wonder how it works if the students are submitting multiple files? Does it put each student’s submission into separate directories then?

Specifying moderation samples

In terms of moderation, my practice is to specify to the marker at least 3 assignments that they should mark first. These are sent back to me ASAP for moderation and any advice on marking. The aim is that the 3 assignments represent a spectrum of abilities based on GPA: typically a 6+ GPA, a high 4/5 GPA, and a sub-4 GPA.

As in the past, this information isn’t part of the OASM system. So I have to do it manually via a spreadsheet. However, in the past the OASM system did provide a spreadsheet of students allocated to a marker. This enabled the use of Excel formulas to find some samples. That doesn’t appear possible via Moodle.

Luckily the “more student details” popup described here lets me click on a link in the list of students and find out the student’s GPA (amongst other things).

Concrete lounge: Can’t easily allocate sample marking based on student GPA (or other means), in part because I can’t see how to export the students allocated to a marker to a spreadsheet.
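If that export did exist, picking the moderation sample would be trivial to automate. A hypothetical sketch, using the GPA bands above (one student per band, names and GPAs invented):

```python
def moderation_sample(students):
    """Pick one student per GPA band (6+, 4 to <6, <4) if available.

    students: list of (name, gpa) tuples, e.g. rows exported from a
    spreadsheet of the students allocated to one marker.
    """
    bands = {"high": None, "mid": None, "low": None}
    # Walk from highest GPA down, filling each band with its first candidate.
    for name, gpa in sorted(students, key=lambda s: -s[1]):
        if gpa >= 6 and bands["high"] is None:
            bands["high"] = name
        elif 4 <= gpa < 6 and bands["mid"] is None:
            bands["mid"] = name
        elif gpa < 4 and bands["low"] is None:
            bands["low"] = name
    return bands

sample = moderation_sample(
    [("Alice", 6.5), ("Bob", 4.8), ("Cat", 3.2), ("Dan", 5.9)])
print(sample)  # {'high': 'Alice', 'mid': 'Dan', 'low': 'Cat'}
```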

Contacting the non-submits

Another task at this time is to approach the students who have yet to submit the assignment and see what’s going on. Some of these will have extensions, some won’t. Is there an option to show those students who have not submitted, but haven’t received extensions?

Doesn’t appear to be.

Concrete lounge: Can’t see how to list those students who have not yet submitted the assignment, but who haven’t been granted an extension.

The options appear to be scrolling through the list of almost 50 students and manually identifying those without extensions. But even when I do that, what can I do? Can I send those students an email message?

Doesn’t appear to be possible.

Concrete lounge: Unable to send group (or personalised) emails to students who have not yet submitted the assignment.

Wouldn’t be too hard to write a Greasemonkey script that extracts the email addresses of the students without extensions.

  • the name is in cell 2;
  • the email address is in cell 4;
  • an extension is indicated by a div with class extensiondate in cell 5.

But that would require a bit of extra work. I do have some Perl scripts that I use for web scraping that could be more easily converted, but they would not be as shareable. Script written and email sent.
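For the record, the scraping idea can be sketched with Python’s standard library rather than Greasemonkey or Perl. The cell positions follow the notes above (name in cell 2, email in cell 4, an extensiondate div in cell 5); the real Moodle page markup is more complex, so treat this as a sketch with a toy table:

```python
from html.parser import HTMLParser

class SubmissionTable(HTMLParser):
    """Collect each table row's cells and whether it has an extension div."""

    def __init__(self):
        super().__init__()
        self.rows, self.cells, self.cell = [], [], None
        self.has_extension = False

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self.cells, self.has_extension = [], False
        elif tag == "td":
            self.cell = []
        elif tag == "div" and ("class", "extensiondate") in attrs:
            self.has_extension = True   # this row's student has an extension

    def handle_endtag(self, tag):
        if tag == "td" and self.cell is not None:
            self.cells.append("".join(self.cell).strip())
            self.cell = None
        elif tag == "tr" and self.cells:
            self.rows.append((self.cells, self.has_extension))

    def handle_data(self, data):
        if self.cell is not None:
            self.cell.append(data)

# Toy stand-in for the submissions table; the real markup differs.
html = """
<table>
<tr><td></td><td>Alice</td><td></td><td>alice@example.com</td>
    <td><div class="extensiondate">May 1</div></td></tr>
<tr><td></td><td>Bob</td><td></td><td>bob@example.com</td><td></td></tr>
</table>
"""
parser = SubmissionTable()
parser.feed(html)
# Email addresses (cell 4, 0-based index 3) of students WITHOUT an extension.
no_extension = [cells[3] for cells, ext in parser.rows if not ext]
print(no_extension)  # ['bob@example.com']
```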

Concrete lounge: Unable to use filters to identify students who have not submitted the assignment AND do not have an extension

Uploading results and marked files

Uploading the marked files seems fairly straightforward, as long as the same filenames are retained.

Question remains about how to upload the marks. The OASM system won’t be smart enough to extract the results from the uploaded files.

Oh dear, it appears that results need to be added manually for each student. That’s a bugger if you’re a casual marker employed to mark 50 assignments. Beyond the time and workload implications, there’s the problem of human error, especially with a hugely repetitive manual process.

Correction, I need to enable the “offline grading worksheet” option. Yep, that adds “download grading worksheet” (and upload) to the options.