Helping teachers “know thy students”

The first key takeaway from Motz, Teague and Shepard (2015) is

Learner-centered approaches to higher education require that instructors have insight into their students’ characteristics, but instructors often prepare their courses long before they have an opportunity to meet the students.

The following illustrates one of the problems teaching staff (at least in my institution) face when trying to “know thy student”. It ponders whether learner experience design (LX design) plus learning analytics (LA) might help, shows off one example of what I’m currently doing to address the problem, and ponders some future directions for development.

The problem

One of the problems I identified in this talk was what it took for me to “know thy student” during semester. For example, the following is a question asked by a student on my course website earlier this year (in an offering that included 300+ students).

Question on a forum

To answer this question, it would be useful to “know thy student” in the following terms:

  1. Where is the student located?
    My students are distributed throughout Australia and the world. For this assignment they should be using curriculum documents specific to their location. It’s useful to know if the student is using the correct curriculum documents.
  2. What specialisation is the student working on?
    As a core course in the Bachelor of Education degree, my course includes all types of pre-service teachers, ranging from students studying to be Early Childhood, Primary school, and Secondary teachers, to some looking to become VET teachers/trainers.
  3. What activities and resources has the student engaged with on the course site?
    The activities and resources on the site are designed to help students learn. There is an activity focused on this question: has this student completed it, and when?
  4. What else has the student written and asked about?
    In this course, students are asked to maintain their own blog for reflection. What the student has written on that blog might help provide more insight. Ditto for other forum posts.

To “know thy student” in the terms outlined above and limited to the tools provided by my institution requires:

  • the use of three different systems;
  • the use of a number of different reports/services within those systems; and,
  • at least 10 minutes to click through all of these.
Norman on affordances

Given Norman’s (1993) observations, is it any wonder that I might not spend 10 minutes on that task every time I respond to a question from the 300+ students?

Can learner experience (LX) design help?

Yesterday, Joyce (@catspyjamasnz) and I spent some time exploring if and how learner experience design (Joyce’s expertise) and learning analytics (my interest) might be combined.

As I’m currently working on a proposal to help make it easier for teachers to “know thy students”, this was uppermost in my mind. And, as Joyce pointed out, “know the students” is a key step in LX design. And, as Motz et al (2015) illustrate, there appears to be some value in using learning analytics to help teachers “know thy students”. And, beyond Motz et al’s (2015) focus on planning, learning analytics has been suggested to help with the orchestration of learning in the form of process analytics (Lockyer et al, 2013), a link I had been thinking about before our talk.

Out of all this, a few questions:

  1. Can LX design practices be married with learning analytics in ways that enhance and transform the approach used by Motz et al (2015)?
  2. Learning analytics can be critiqued as being driven more by the available data and the algorithms available to analyse it (the expertise of the “data scientists”) than by educational purposes. Some LA work is driven by educational theories/ideas. Does LX design offer a different set of “purposes” to inform the development of LA applications?
  3. Can LX design practices + learning analytics be used to translate what Motz et al (2015) see as “relatively rare and special” into more common practice?

    Exceptionally thoughtful, reflective instructors do exist, who customize and adapt their course after the start of the semester, but it’s our experience that these instructors are relatively rare and special, and these efforts at learning about students requires substantial time investment.

  4. Can this type of practice be done in a way that doesn’t require “data analysts responsible for developing and distributing” (Motz et al, 2015) the information?
  5. What type of affordances can and should such an approach provide?
  6. What ethical/privacy issues would need to be addressed?
  7. What additional data should be gathered and how?

    e.g. in the past I’ve used the course barometer idea to gather student experience during a course. Might something like this be added usefully?

More student details

“More student details” is the kludge that I’ve put in place to solve the problem at the top of this post. I couldn’t live with the current systems and had to scratch that itch.

The technical implementation of this scratch involves:

  1. Extracting data from various institutional systems via manually produced reports and screen scraping and placing that data into a database on my laptop.
  2. Adapting the MAV architecture to create a Greasemonkey script that talks to a server on my laptop that in turn extracts data from the database.
  3. Installing the Greasemonkey script in the browser I use on my laptop.
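
For the technically curious, the server side of step 2 is conceptually simple. The following is only a rough sketch of its shape (the real thing borrows from MAV and is messier); the "students" table and its columns are hypothetical stand-ins for whatever the manual reports and screen scraping produce.

    #!/usr/bin/perl
    # Minimal sketch of the laptop-based server behind "more student details".
    # Not the actual code; assumes a hypothetical "students" table in SQLite.
    use Mojolicious::Lite;
    use DBI;

    my $dbh = DBI->connect( 'dbi:SQLite:dbname=students.db', '', '',
        { RaiseError => 1 } );

    # The Greasemonkey script requests /details/<moodle_user_id> and renders
    # the returned JSON in the [details] popup.
    get '/details/:userid' => sub {
        my $c      = shift;
        my $userid = $c->stash('userid');

        # Hypothetical schema: one row of student record data per user.
        my $student = $dbh->selectrow_hashref(
            'SELECT name, email, specialisation, campus, gpa, location
               FROM students WHERE moodle_id = ?',
            undef, $userid
        );

        return $c->render( json => { error => 'unknown student' }, status => 404 )
            unless $student;
        $c->render( json => $student );
    };

    app->start;    # e.g. perl server.pl daemon -l http://127.0.0.1:3000

The point of this shape is that everything specific to my course lives in the local database and the script, not in Moodle itself.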

As a result, when I use that browser to view the forum post at the top of this post, I actually see the following (click on the image to see a larger version). The red arrows have been added to the image to highlight what’s changed: the addition of [details] links.

Forum post + more student details

Whenever the Greasemonkey script sees a Moodle user profile link, it adds a [details] link, regardless of which page of my Moodle course sites I’m on. The following image shows an excerpt from the results page for a Quiz. It has the [details] links as well.

Quiz results + more student details

It’s not beautiful, but it’s something only I currently use, and I was after utility.

Clicking on a [details] link results in a popup window appearing, a window that helps me “know thy student”. The window has three tabs. The first is labelled “Personal Details” and is visible below. It provides information from the institutional student records system, including name, email address, age, specialisation, which campus or mode the student is enrolled in, the number of prior units they’ve completed, their GPA, and their location and phone numbers.

Student background

The second tab on “more student details” shows details of the student’s activity completion. This is a Moodle feature that tracks if and when a student has completed an activity or resource. My course site is designed as a collection of weekly “learning paths”. Each path is a series of activities and resources designed to help the student learn. Each week belongs to one of three modules.

The following image shows part of the “Activity Completion” tab for “more student details”. It shows that Module 2 starts with week 4 (Effective planning: a first step) and week 5 (Developing your learning plan). Each week has a series of activities and resources.

For each activity the student has completed, it shows when they completed that activity. This student completed the “Welcome to Module 2” activity 2 months ago. If I hold the mouse over “2 months ago”, it will display the exact time and date it was completed.

I did mention above that it’s useful, rather than beautiful.

Student activity completion

The “Blog posts” tab shows details about all the posts the student has written on their blog for this course. Each blog post entry includes a link to that post and shows how long ago the post was made.

Student blog posts

With this tool available, when I answer a question on a discussion forum I can quickly refresh what I know about the student and their progress before answering. When I consider a request for an assignment extension, I can check on the student’s progress so far. Without spending 10+ minutes doing so.

API implementation and flexibility

As currently implemented, this tool relies on a number of manual steps and my personal technology infrastructure. To scale this approach will require addressing these problems.

The traditional approach might involve making modifications to Moodle to add this functionality directly into the LMS. I think this is the wrong way to do it. It’s too heavyweight, largely because Moodle is a complex bit of software used by huge numbers of people across the world, and because most of the really useful information here is going to be unique to different courses. For example, not many courses at my institution currently use activity completion in the way my course does. Almost none of the courses at my institution use BIM and student blogs the way my course does. Beyond this, the type of information required to “know thy student” extends beyond what is available in Moodle.

To “know thy student”, especially when thinking of process analytics that are unique to the specific learning design used, it will be important that any solution be flexible. It should allow individual courses to adapt and modify the data required to fit the specifics of the course and its learning design.

Which is why I plan to continue the use of augmented browsing as the primary mechanism, and why I’ve started exploring Moodle’s API. It appears to provide a way to develop a flexible and customisable approach, one that allows “know thy student” to respond to the full diversity of learning and teaching.
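
To give a flavour of where this might head (a sketch only, not working code I’m using), Moodle’s web services allow an external script to pull some of the “know thy student” information once a web service token has been set up. The function name below is a real Moodle web service function; the URL, token, and ids are placeholders.

    #!/usr/bin/perl
    # Sketch: pulling activity completion data via Moodle's web services API.
    use strict;
    use warnings;
    use LWP::UserAgent;
    use JSON qw(decode_json);

    my $moodle = 'https://moodle.example.edu/webservice/rest/server.php';
    my $token  = 'YOUR_WS_TOKEN';                # assumption: issued by an admin
    my ( $courseid, $userid ) = ( 1234, 5678 );  # hypothetical ids

    my $ua = LWP::UserAgent->new;

    sub ws_call {
        my ( $function, %args ) = @_;
        my $response = $ua->post( $moodle, {
            wstoken            => $token,
            wsfunction         => $function,
            moodlewsrestformat => 'json',
            %args,
        } );
        die $response->status_line unless $response->is_success;
        return decode_json( $response->decoded_content );
    }

    # Which activities has this student completed, and when?
    my $completion = ws_call( 'core_completion_get_activities_completion_status',
        courseid => $courseid, userid => $userid );

    for my $status ( @{ $completion->{statuses} } ) {
        printf "cmid %s state %s completed %s\n", $status->{cmid},
            $status->{state},
            $status->{timecompleted} ? scalar localtime $status->{timecompleted} : '-';
    }

Something like this could replace the manual reports and screen scraping that currently feed the local database.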

Now, I wonder how LX design might help?

The perceived uselessness of the Technology Acceptance Model (TAM) for e-learning

Below you will find the slides, abstract, and references for a talk given to folk from the University of South Australia on 1 October, 2015. A later blog post outlines core parts of the argument.

Slides

Abstract

In a newspaper article (Laxon, 2013), Professor Mark Brown described e-learning as

a bit like teenage sex. Everyone says they’re doing it but not many people are and those that are doing it are doing it very poorly.

This is not a new problem; there is a long litany of publications spread over decades bemoaning the limited adoption of new technology-based pedagogical practices (e-learning). The dominant theoretical model used in research seeking to understand the adoption decisions of both staff and students has been the Technology Acceptance Model (TAM) (Šumak, Heričko, & Pušnik, 2011). TAM views an individual’s intention to adopt a particular digital technology as being most heavily influenced by two factors: perceived usefulness, and perceived ease of use. This presentation will explore and illustrate the perceived uselessness of TAM for understanding and responding to e-learning’s “teenage sex” problem using the BAD/SET mindsets (Jones & Clark, 2014) and experience from four years of teaching large, e-learning “rich” courses. The presentation will also seek to offer initial suggestions and ideas for addressing e-learning’s “teenage sex” problem.

References

Bichsel, J. (2012). Analytics in Higher Education: Benefits, Barriers, Progress and Recommendations. Louisville, CO. Retrieved from http://net.educause.edu/ir/library/pdf/ERS1207/ers1207.pdf

Box, G. E. P. (1979). Robustness in the Strategy of Scientific Model Building. In R. Launer & G. Wilkinson (Eds.), Robustness in Statistics (pp. 201–236). Academic Press.

Burton-Jones, A., & Hubona, G. (2006). The mediation of external variables in the technology acceptance model. Information & Management, 43(6), 706–717. doi:10.1016/j.im.2006.03.007

Ciborra, C. (1992). From thinking to tinkering: The grassroots of strategic information systems. The Information Society, 8(4), 297–309.

Corrin, L., Kennedy, G., & Mulder, R. (2013). Enhancing learning analytics by understanding the needs of teachers. In Electric Dreams. Proceedings ascilite 2013 (pp. 201–205).

Davis, F. D. (1986). A Technology Acceptance Model for empirically testing new end-user information systems: Theory and results. MIT.

Davis, F. D. (1989). Perceived usefulness, perceived ease of use and user acceptance of information technology. MIS Quarterly, 13(3), 319.

Davis, F. D., Bagozzi, R. P., & Warshaw, P. R. (1989). User acceptance of computer technology: A comparison of two theoretical models. Management Science, 35(8), 982–1003.

Dawson, S., & McWilliam, E. (2008). Investigating the application of IT generated data as an indicator of learning and teaching performance. Canberra: Australian Learning and Teaching Council. Retrieved from http://moourl.com/hpds8

Ferguson, R., Clow, D., Macfadyen, L., Essa, A., Dawson, S., & Alexander, S. (2014). Setting Learning Analytics in Context : Overcoming the Barriers to Large-Scale Adoption. Journal of Learning Analytics, 1(3), 120–144. doi:10.1145/2567574.2567592

Hannafin, M., McCarthy, J., Hannafin, K., & Radtke, P. (2001). Scaffolding performance in EPSSs: Bridging theory and practice. In World Conference on Educational Multimedia, Hypermedia and Telecommunications (pp. 658–663). Retrieved from http://www.editlib.org/INDEX.CFM?fuseaction=Reader.ViewAbstract&paper_id=8792

Holt, D., Palmer, S., Munro, J., Solomonides, I., Gosper, M., Hicks, M., … Hollenbeck, R. (2013). Leading the quality management of online learning environments in Australian higher education. Australasian Journal of Educational Technology, 29(3), 387–402. Retrieved from http://www.ascilite.org.au/ajet/submission/index.php/AJET/article/view/84

Introna, L. (2013). Epilogue: Performativity and the Becoming of Sociomaterial Assemblages. In F.-X. de Vaujany & N. Mitev (Eds.), Materiality and Space: Organizations, Artefacts and Practices (pp. 330–342). Palgrave Macmillan.

Jasperson, S., Carter, P. E., & Zmud, R. W. (2005). A Comprehensive Conceptualization of Post-Adaptive Behaviors Associated with Information Technology Enabled Work Systems. MIS Quarterly, 29(3), 525–557.

Jones, D., & Clark, D. (2014). Breaking BAD to bridge the reality/rhetoric chasm. In B. Hegarty, J. McDonald, & S. Loke (Eds.), Rhetoric and Reality: Critical perspectives on educational technology. Proceedings ascilite Dunedin 2014 (pp. 262–272). Dunedin.

Kay, A. (1984). Computer Software. Scientific American, 251(3), 53–59.

Kunin, V., Goldovsky, L., Darzentas, N., & Ouzounis, C. a. (2005). The net of life: Reconstructing the microbial phylogenetic network. Genome Research, 15(7), 954–959. doi:10.1101/gr.3666505

Laxon, A. (2013, September 14). Exams go online for university students. The New Zealand Herald.

Lee, Y., Kozar, K. A., & Larsen, K. R. T. (2003). The Technology Acceptance Model: Past, Present, and Future. Communications of the AIS, 12. Retrieved from http://aisel.aisnet.org/cais/vol12/iss1/50

Lockyer, L., Heathcote, E., & Dawson, S. (2013). Informing Pedagogical Action: Aligning Learning Analytics With Learning Design. American Behavioral Scientist, 57(10), 1439–1459. doi:10.1177/0002764213479367

Müller, M. (2015). Assemblages and Actor-networks: Rethinking Socio-material Power, Politics and Space. Geography Compass, 9(1), 27–41. doi:10.1111/gec3.12192

Najmul Islam, A. K. M. (2014). Sources of satisfaction and dissatisfaction with a learning management system in post-adoption stage: A critical incident technique approach. Computers in Human Behavior, 30, 249–261. doi:10.1016/j.chb.2013.09.010

Nistor, N. (2014). When technology acceptance models won’t work: Non-significant intention-behavior effects. Computers in Human Behavior, pp. 299–300. Elsevier Ltd. doi:10.1016/j.chb.2014.02.052

Stead, D. R. (2005). A review of the one-minute paper. Active Learning in Higher Education, 6(2), 118–131. doi:10.1177/1469787405054237

Sturgess, P., & Nouwens, F. (2004). Evaluation of online learning management systems. Turkish Online Journal of Distance Education, 5(3). Retrieved from http://tojde.anadolu.edu.tr/tojde15/articles/sturgess.htm

Šumak, B., Heričko, M., & Pušnik, M. (2011). A meta-analysis of e-learning technology acceptance: The role of user types and e-learning technology types. Computers in Human Behavior, 27(6), 2067–2077. doi:10.1016/j.chb.2011.08.005

Venkatesh, V., & Bala, H. (2008). Technology acceptance model 3 and a research agenda on interventions. Decision Sciences, 39(2), 273–315. doi:10.1111/j.1540-5915.2008.00192.x

Venkatesh, V., & Davis, F. D. (2000). A theoretical extension of the Technology Acceptance Model: Four longitudinal field studies. Management Science, 46(2), 186–204.

Venkatesh, V., Morris, M., Davis, G. B., & Davis, F. D. (2003). User acceptance of information technology: Toward a unified view. MIS Quarterly, 27(3), 425–478.

It’s not how bad you start, but how quickly you get better

Woods & Hollnagel (2006) start by presenting the Bounded Rationality syllogism

All cognitive systems are finite (people, machines, or combinations).
All finite cognitive systems in uncertain changing situations are fallible.
Therefore, machine cognitive systems (and joint systems across people and machines) are fallible. (p. 2)

From this they suggest that

The question, then, is not fallibility or finite resources of systems, but rather the development of strategies that handle the fundamental tradeoffs produced by the need to act in a finite, dynamic, conflicted, and uncertain world.

The core ideas of Cognitive Systems Engineering (CSE) shift the question from overcoming limits to supporting adaptability and control (p. 2)

Which has obvious links to my last post, “All models are wrong”.

This is why organisations annoy me with their fetish for developing the one correct model (or system) and requiring that everyone should and can follow that one correct model.

Refining a visualisation

Time to refine the visualisation of students by postcodes started earlier this week. I have another set of data to work with.

  1. Remove the identifying data.
  2. Clean the data.
    I had to remind myself of the options for the sort command – losing it. The following provides some idea of the mess.

    :1,$s/"* Sport,Health&PE+Secondry.*"/HPE_Secondary/
    :1,$s/"\* Sport, Health & PE+Secondry.*"/HPE_Secondary/
    :1,$s/Health & PE Secondary/HPE_Secondary/
    :1,$s/\* Secondary.*/Secondary/
    :1,$s/\* Secondry.*/Secondary/
    :1,$s/\* Secondy.*/Secondary/
    :1,$s/Secondary.*/Secondary/
    :1,$s/\* Secdary.*/Secondary/
    :1,$s/\* TechVocEdu.*/TechVocEdu/

  3. Check columns
    Relying on a visual check in Excel – also to get a better feel for the data.

  4. Check other countries
    Unlike the previous visualisation, the plan here is to recognise that we actually have students in other countries. The problem is that the data I’ve been given doesn’t include country information. Hence I have to manually enter that data. For one of the programs, this gives the following.

    4506 Australia
    8 United Kingdom
    3 Vietnam
    3 South Africa
    3 China
    2 Singapore
    2 Qatar
    2 Japan
    2 Hong Kong
    2 Fiji
    2 Canada
    1 United States of America
    1 Taiwan
    1 Sweden
    1 Sri Lanka
    1 Philippines
    1 Papua New Guinea
    1 New Zealand
    1 Kenya
    1 Ireland

And all good.
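
As an aside, rather than half-remembering vim options each time, the clean-up in step 2 could be scripted. The following rough Perl sketch reproduces the substitutions above as a simple filter; the patterns are no more robust than the originals.

    #!/usr/bin/perl
    # Rough sketch: the specialisation clean-up from step 2 as a filter.
    # Usage: perl clean.pl < raw.csv > clean.csv
    use strict;
    use warnings;

    while ( my $line = <STDIN> ) {
        # Collapse the various (mis)spellings into two canonical labels.
        $line =~ s/"\*? ?Sport, ?Health ?& ?PE\+Secondry.*"/HPE_Secondary/;
        $line =~ s/Health & PE Secondary/HPE_Secondary/;
        $line =~ s/\* ?(?:Secondary|Secondry|Secondy|Secdary).*/Secondary/;
        $line =~ s/Secondary.*/Secondary/;
        $line =~ s/\* ?TechVocEdu.*/TechVocEdu/;
        print $line;
    }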

Does learning about teaching in formal education match this?

Riel and Polin (2004) talk about a view of learning that sees learning occurring

through engagement in authentic experiences involving the active manipulation and experimentation with ideas and artifacts – rather than through an accumulation of static knowledge (p. 17)

They cite people such as Bruner and Dewey supporting that observation.

When I read that, I can’t help but reflect on what passes for “learning about teaching” within universities.

Authentic experience

Does such learning about teaching occur “through engagement in authentic experiences”?

No.

Based on my experiences at two institutions, it largely involves

  • Accessing face-to-face and online instructions on how to use a specific technology.
  • Attending sessions talking about different teaching methods or practices.
  • Being told about the new institutionally mandated technology or practice.
  • For a very lucky few, engaging with an expert in instructional design or instructional technology about the design of the next offering of a course.

Little learning actually takes place in the midst of teaching – the ultimate authentic experience.

Active manipulation

Does such learning allow and enable the “active manipulation and experimentation with ideas and artifacts”?

No.

Based on my experience, the processes, policies, and tools used to teach within universities are increasingly set in stone. Clever folk have identified the correct solution and you shall use them as intended.

Active manipulation and experimentation is frowned upon as inefficient and likely to impact equity and equality.

Most of the technological environments (whether they be open source or proprietary) are fixed. Any notion of using some technology that is not officially approved, or modifying an existing technology is frowned upon.

Does this contribute to the limitations of university e-learning?

If learning occurs through authentic experience and active manipulation, and the university approach to learning about teaching (especially with e-learning) doesn’t effectively support either of these requirements, then is it any wonder that the quality of university e-learning is seen as having a few limitations?

References

Riel, M., & Polin, L. (2004). Online learning communities: Common ground and critical differences in designing technical environments. In S. A. Barab, R. Kling, & J. Gray (Eds.), Designing for Virtual Communities in the Service of Learning (pp. 16–50). Cambridge: Cambridge University Press.

What do “scale” and “mainstreaming” mean in higher education?

@marksmithers has just written a blog post about a new fund to promote innovation in higher ed. It makes the following point:

I know $5M isn’t a huge amount but the principle just seems so misguided. There is no problem with innovation in higher education. The problem is adopting and mainstreaming innovations across higher ed institutions.

@shaned07 raised a similar question in a recent presentation when he talked about the challenge of scaling learning analytics within an institution.

But the question that troubles me is what do you mean by “scaling” or “mainstreaming” innovations in higher education?

What do you mean by “scaling” and “mainstreaming”?

The stupid definition

This may sound like a typical academic question, but it is important because a naive understanding of what these terms may mean quickly leads to stupidity.

For example, if what I’ve experienced at two different institutions and overheard numerous times at a recent conference is anything to go by, then “scaling/mainstreaming” is seen to be the same as: mandated, consistent, or institutional standard. As in, “We’ll mainstream quality e-learning by creating an institutional standard interface for all course websites”, or, “We’ll ensure quality learning at our institution through the development of institutional graduate attributes”. Some group (often of very smart people) gets together and decides that there should be an institutionally approved standard (for just about anything), and everyone and every process, policy, and tool within the institution will then work toward achieving that standard.

Mainstreaming through standardisation is such a strong underpinning assumption that I know of one university where feedback to senior management is provided through an email address something like 1someuni@someuni.edu and another university where achieving the goal of “one university” received explicit mentions in annual reports and other strategic documents.

The problem with the stupid definition

“talk to the experts” by Mai Le, on Flickr (Creative Commons Attribution 2.0 Generic)

The problem with this approach is that it assumes universities and their learning and teaching practices are complicated systems, not complex systems. This way of viewing universities is reinforced because the people charged with making these decisions (senior leaders, consultants, internal leaders of information technology, learning, etc.) are all paid to be experts. They are paid to successfully solve complicated problems. That success and expectation means they expect/believe the same methods they’ve used to solve complicated problems will help them solve a complex problem.

As Larry Cuban writes

Blueprints, technical experts, strategic plans and savvy managers simply are inadequate to get complex systems with thousands of reciprocal ties between people to operate effectively in such constantly changing and unpredictable environments… Know further that reform designs borrowed from complicated systems and imposed from the top in complex systems will hardly make a dent in the daily work of those whose job is convert policy into action.

Much of the content of the talk titled “Why is e-learning ‘a bit like teenage sex’ and what can be done about it?” that @palbion and I gave focuses on identifying the problems that arise from this naive understanding of “mainstreaming/scaling”.

What’s the solution?

Cuban suggests

At the minimum, know that working in a complex system means adapting to changes, dealing with conflicts, and constant learning. These are natural, not aberrations.

The talk I mentioned builds on two papers (Jones & Clark, 2014; Jones, Heffernan & Albion, 2015) that are starting to explore what might be done. I’m hoping to explore some more specifics soon.

Whatever shape that takes, it will certainly reject the idea of mainstreaming through institutional consistency. In summary, it will probably involve creating an environment that is better able to adapt to change, deal with conflicts, and support constant learning.

References

Jones, D., Heffernan, A., & Albion, P. R. (2015). TPACK as shared practice: Toward a research agenda. In D. Slykhuis & G. Marks (Eds.), Proceedings of Society for Information Technology & Teacher Education International Conference 2015 (pp. 3287-3294). Las Vegas, NV: AACE. Retrieved from http://www.editlib.org/p/150454/

Jones, D., & Clark, D. (2014). Breaking BAD to bridge the reality/rhetoric chasm. In Rhetoric and Reality: Critical perspectives on educational technology. Proceedings ascilite Dunedin 2014 (pp. 262-272). Dunedin. Retrieved from http://ascilite2014.otago.ac.nz/files/fullpapers/221-Jones.pdf

Revisiting the IRAC framework and looking for insights

The Moodlemoot’AU 2015 conference is running working groups, one of which is looking at assessment analytics: in essence, trying to think about what can be done in the Moodle LMS code to enhance assessment.

As it happens, I’m giving a talk during the Moot titled “Four paths for learning analytics: Moving beyond a management fashion”. The aim of the talk is to provide some insights to help people think about the design and evaluation of learning analytics. The working group seems like a good opportunity to (at some level) “eat my own dogfood”, and it fits with my current task of developing the presentation.

As part of getting ready for the presentation, I need to revisit the IRAC framework, a bit of work from 2013 that we’ve neglected, but which (I’m surprised and happy to say) I think holds much more promise than I remembered. The following explains IRAC and what insights might be drawn from it. A subsequent post will hopefully apply this more directly to the task of Moodle assessment analytics.

(Yes, Col and Damien, I have decided once again to drop the P and stick with IRAC).

The IRAC Framework

The IRAC framework was originally developed to “improve the analysis and design of learning analytics tools and interventions” and, hopefully, to be “a tool to aid the mindful implementation of learning analytics” (Jones, Beer, Clark, 2013). The development of the framework drew upon “bodies of literature including Electronic Performance Support Systems (EPSS) (Gery, 1991), the design of cognitive artefacts (Norman, 1993), and Decision Support Systems (Arnott & Pervan, 2005)”.

This was largely driven by our observation that most of the learning analytics stuff wasn’t that much focused on whether or not it was actually adopted and used, especially by teachers. The EPSS literature was important because an EPSS is meant to embody a “perspective on designing systems that support learning and/or performing” (Hannafin, McCarthy, Hannafin, & Radtke, 2001, p. 658). EPSS are computer-based systems intended to “provide workers with the help they need to perform certain job tasks, at the time they need that help, and in a form that will be most helpful” (Reiser, 2001, p. 63).

Framework is probably not the right label.

IRAC was conceptualised as four questions to ask yourself about the learning analytics tool you were designing or evaluating. As outlined in Jones et al (2013)

The IRAC framework is intended to be applied with a particular context and a particular task in mind. A nuanced appreciation of context is at the heart of mindful innovation with Information Technology (Swanson & Ramiller, 2004). Olmos & Corrin (2012), amongst others, reinforce the importance for learning analytics to start with “a clear understanding of the questions to be answered” (p. 47) or the task to be achieved.

Once you’ve got your particular context and task in mind, then you can start thinking about these four questions:

  1. Is all the relevant Information and only the relevant information available?
  2. How does the Representation of the information aid the task being undertaken?
  3. What Affordances for interventions based on the information are provided?
  4. How will and who can Change the information, representation and the affordances?

The link with the LA literature

Interestingly, not long after we’d submitted the paper for review, Siemens (2013) came out, and that paper included the following Learning Analytics (LA) Model (LAM) (click on the image to see a larger version). The LAM was meant to help move LA from small-scale “bottom-up” approaches into a more systemic and institutional approach. The “data team” was given significant emphasis in this.

Siemens (2013) Learning Analytics Model

Hopefully you can see how the Siemens’ LAM and the IRAC framework, at least on the surface, seem to cover much of the same ground. In case you can’t, the following image (click on it to see a larger version) makes that connection explicit.

IRAC and LAM

Gathering insights from IRAC and LAM

The abstract for the Moot presentation promises insights, so let’s see what insights you might gain from IRAC. The following is an initial list of potential insights. Insights might be too strong a word; provocations or hypotheses might be better suited.

  1. An over-emphasis on Information.

    When overlaying IRAC onto the LAM the most obvious point for me is the large amount of space in the LAM dedicated to Information. This very large focus on the collection, acquisition, storage, cleaning, integration, and analysis of information is not all that surprising. After all that is what big data and analytics bring to the table. The people who developed the field of learning analytics came to it with an interest in information and its analysis. It’s important stuff. But it’s not sufficient to achieve the ultimate goal of learning analytics, which is captured in the following broadly used definition (emphasis added)

    Learning analytics is the measurement, collection, analysis and reporting of data about learners and their contexts, for purposes of understanding and optimising learning, and the environments in which it occurs.

    The point of learning analytics is to find out more about learning and the learning environment and change it for the better. That requires action: action on the part of the learner, the teacher, or perhaps the institution or other actors. There’s a long list of literature that strongly argues that simply providing information to people is not sufficient for action.

  2. Most of the information currently available is of limited value.

    In not a few cases, “big data is driven more by storage capabilities than by superior ways to ascertain useful knowledge” (Bollier & Firestone, 2010, p. 14). There have been questions asked about how much the information currently captured by LMSes and other systems can actually “contribute to the understanding of student learning in a complex social context such as higher education” (Lodge & Lewis, 2012, p. 563). Click streams reveal a lot about when and how people traverse e-learning environments, but not why and with what impacts. Beyond that is the problem raised by observations that most courses do not make particularly heavy or well-designed use of the e-learning environment.

  3. Don’t stop at a dashboard (Representation).

    It appears that most people think that if you’ve generated a report or (perhaps worse) a dashboard you have done your job when it comes to learning analytics. This fails on two counts.

    First, these are bad representations. Reports and many dashboards are often pretty crappy at helping people understand what is going on. Worse, they are typically presented outside of the space where the action happens, breaking the goal of an information system/EPSS, i.e. to “provide workers with the help they need to perform certain job tasks, at the time they need that help, and in a form that will be most helpful” (Reiser, 2001, p. 63).

    Second, just providing data in a pretty form is not sufficient. You want people to do something with the information. Otherwise, what’s the point? That’s why you have to consider the affordances question.

  4. Change is never considered.

    At the moment, most “learning analytics” projects involve installing a system, be it stand-alone or part of the LMS etc. Once it’s installed, it’s all just a matter of ensuring people are using it. There’s actually no capacity to change the system or the answers to the I, R, or A questions of IRAC that the system provides. This is a problem on so many levels.

    In the original IRAC paper we mentioned: how development through continuous action cycles involving significant user participation was a core of the theory of decision support systems (Arnott & Pervan, 2005), a precursor to learning analytics; Buckingham-Shum’s (2012) observation that most LA is based on data already being captured by systems and that analysis of that data will perpetuate existing dominant approaches to learning; and the problem of gaming once people learn what the system wants. Later we added the task-artifact cycle.

    More recently, Macfadyen et al (2014) argue that one of the requirements of learning analytics tools is “an integrated and sustained overall refinement procedure allowing reflection” (p. 12).

  5. The more context sensitive the LA is, the more value it has.

    In talking about the use of the SNAPP tool to visualise connections in discussion forums, Lockyer et al (2013) explain that the “interpretation of visualizations also depends heavily on an understanding of the context in which the data were collected and the goals of the teacher regarding in-class interaction” (p. 1446). The more you know about the learning context, the better the insight you can draw from learning analytics. An observation that brings the reusability paradox into the picture. Most LA – especially those designed into an LMS – have to be designed to have the potential to be reused across all of the types of institutions that use the LMS. This moves the LMS (and its learning analytics) away from the specifics of the context, which reduces its pedagogical value.

  6. Think hard about providing and enhancing affordances for intervention

    Underpinning the IRAC work is the work of Don Norman (1993), in particular the quote in the image of him below. If LA is all about optimising learning and the learning environment then the LA application has to make it easy for people to engage in activities designed to bring that goal about. If it’s hard, they won’t do it. Meaning all that wonderfully complex algorithmic magic is wasted.

    Macfadyen et al (2014) identify facilitating the deployment of interventions that lead to change to enhance learning as a requirement of learning analytics. Wise (2014) defines a learning analytics intervention “as the surrounding frame of activity through which analytics tools, data and reports are taken up and used”, an area of learning analytics that is relatively unexplored (Wise, 2014). I’ll close with another quote from Wise (2014), which sums up the whole point of the IRAC framework and identifies what I think is the really challenging problem for LA:

    If learning analytics are to truly make an impact on teaching and learning and fulfill expectations of revolutionizing education, we need to consider and design for ways in which they will impact the larger activity patterns of instructors and students. (Wise, 2014, p. 203)

    (and I really do need to revisit the Wise paper).

Norman on affordances

References

Arnott, D., & Pervan, G. (2005). A critical analysis of decision support systems research. Journal of Information Technology, 20(2), 67–87. doi:10.1057/palgrave.jit.2000035

Bollier, D., & Firestone, C. (2010). The promise and peril of big data. Washington DC: The Aspen Institute. Retrieved from http://india.emc.com/collateral/analyst-reports/10334-ar-promise-peril-of-big-data.pdf

Buckingham Shum, S. (2012). Learning Analytics. Moscow. Retrieved from http://iite.unesco.org/pics/publications/en/files/3214711.pdf

Hannafin, M., McCarthy, J., Hannafin, K., & Radtke, P. (2001). Scaffolding performance in EPSSs: Bridging theory and practice. In World Conference on Educational Multimedia, Hypermedia and Telecommunications (pp. 658–663). Retrieved from http://www.editlib.org/INDEX.CFM?fuseaction=Reader.ViewAbstract&paper_id=8792

Gery, G. J. (1991). Electronic Performance Support Systems: How and why to remake the workplace through the strategic adoption of technology. Tolland, MA: Gery Performance Press.

Jones, D., Beer, C., & Clark, D. (2013). The IRAC framework: Locating the performance zone for learning analytics. In H. Carter, M. Gosper, & J. Hedberg (Eds.), Electric Dreams. Proceedings ascilite 2013 (pp. 446–450). Sydney, Australia.

Lockyer, L., Heathcote, E., & Dawson, S. (2013). Informing Pedagogical Action: Aligning Learning Analytics With Learning Design. American Behavioral Scientist, 57(10), 1439–1459. doi:10.1177/0002764213479367

Lodge, J., & Lewis, M. (2012). Pigeon pecks and mouse clicks : Putting the learning back into learning analytics. In M. Brown, M. Hartnett, & T. Stewart (Eds.), Future challenges, sustainable futures. Proceedings ascilite Wellington 2012 (pp. 560–564). Wellington, NZ. Retrieved from http://www.ascilite2012.org/images/custom/lodge,_jason_-_pigeon_pecks.pdf

Macfadyen, L. P., Dawson, S., Pardo, A., & Gasevic, D. (2014). Embracing big data in complex educational systems: The learning analytics imperative and the policy challenge. Research and Practice in Assessment, 9(Winter), 17–28.

Norman, D. A. (1993). Things that make us smart: defending human attributes in the age of the machine. Reading, MA: Addison Wesley.

Olmos, M., & Corrin, L. (2012). Learning analytics: a case study of the process of design of visualizations. Journal of Asynchronous Learning Networks, 16(3), 39–49. Retrieved from http://ro.uow.edu.au/medpapers/432/

Reiser, R. (2001). A history of instructional design and technology: Part II: A history of instructional design. Educational Technology Research and Development, 49(2), 57–67.

Siemens, G. (2013). Learning Analytics: The Emergence of a Discipline. American Behavioral Scientist, 57(10), 1371–1379. doi:10.1177/0002764213498851

Swanson, E. B., & Ramiller, N. C. (2004). Innovating mindfully with information technology. MIS Quarterly, 28(4), 553–583.

Wise, A. F. (2014). Designing pedagogical interventions to support student use of learning analytics. In Proceedings of the Fourth International Conference on Learning Analytics And Knowledge – LAK ’14 (pp. 203–211). doi:10.1145/2567574.2567588

Exploring BIM + sentiment analysis – what might it say about student blog posts

The following documents some initial exploration into why, if, and how sentiment analysis might be added to the BIM module for Moodle. BIM is a tool that helps manage and mirror blog posts from individual student blogs. Sentiment analysis is an application of algorithms to identify the sentiment/emotions/polarity of a person/author through their writing and other artefacts. The theory is that sentiment analysis can alert a teacher if a student has written something that is deemed sad, worried, or confused; but also happy, confident etc.

Of course, the promise of analytics-based approaches like this may be oversold. There’s a suggestion that some approaches are wrong 4 out of 10 times. But I’ve seen other suggestions that human beings can be wrong at the same task 3 out of 10 times. So the questions are

  1. Just how hard is it (and what is required) to add some form of sentiment analysis to BIM?
  2. Is there any value in the output?

Some background on sentiment analysis

Sentiment analysis tends to assume a negative/positive orientation (i.e. good/bad, like/dislike): the polarity. There are various methods for performing the analysis/opinion mining, and there are challenges in analysing text (my focus) alone.

Lots of research going on in this sphere.

Of course, there are also folk building stuff, and some selling it, e.g. Indico is one I’ve heard of recently. They all have their limitations and sweet spots; Indico’s sentiment analysis is apparently good for

Text sequences ranging from 1-10 sentences with clear polarity (reddit, Facebook, etc.)

That is perhaps starting to fall outside what might be expected of blog posts, but it may fit with this collection of data. Worth a try in the time I’ve got left.

Quick test of indico

indico provides a REST based API that includes sentiment analysis. Get an API key and you can throw data at it and it will give you a number between 0 (negative) and 1 (positive).
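
A quick sketch of what that looks like in Perl. The endpoint URL and the payload/response shapes below are my recollection of indico’s v2 REST API and should be checked against the current docs before relying on them.

    #!/usr/bin/perl
    # Sketch: send a piece of text to indico's sentiment endpoint.
    # Endpoint URL and payload/response shapes are assumptions; check the docs.
    use strict;
    use warnings;
    use LWP::UserAgent;
    use JSON qw(encode_json decode_json);

    my $api_key = 'YOUR_INDICO_API_KEY';
    my $text    = 'tomorrow is my birthday. I am very happy';

    my $ua       = LWP::UserAgent->new;
    my $response = $ua->post(
        'https://apiv2.indico.io/sentiment',
        'Content-Type' => 'application/json',
        Content        => encode_json( { data => $text, api_key => $api_key } ),
    );
    die $response->status_line unless $response->is_success;

    # Assumed response shape: { "results": 0.99 }
    my $score = decode_json( $response->decoded_content )->{results};
    printf "sentiment: %.4f (0 = negative, 1 = positive)\n", $score;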

You can even try it out manually. Some quick manual tests

  • “happy great day fantastic” generates the result 0.99998833
  • “terrible sad unhappy bad” generates 0.000027934347704855157
  • “tomorrow is my birthday. Am I sad or happy” generates 0.7929542492644698
  • “tomorrow is my birthday. I am sad” generates 0.2327375924840286
  • “tomorrow is my birthday. I am somewhat happy” 0.8837247819167975
  • “tomorrow is my birthday. I am very happy” 0.993121363266806

With that very ill-informed testing, there are at least some glimmers of hope.

Does it work on blog posts? Actually, not that bad. Certainly good enough to play around with some more and to serve as a proof of concept in my constrained circumstances. Of course, indico is by no means the only tool available (e.g. meaningcloud).

But for the purpose of the talk I have to give in a couple of weeks, I should be able to use this to knock up something that works with the more student details script.

Types of e-learning projects and the problem of starvation

The last assignment for the course EDC3100, ICT and Pedagogy was due to be submitted yesterday. Right now the Moodle assignment activity (a version somewhat modified by my institution) is showing that 193 of 318 enrolled students have submitted assignments.

This is a story of the steps I have to take to respond to the story these figures (they’re not as scary as they seem) tell.

It’s also a story about the different types of development projects that are required when it comes to institutional e-learning and how the institutional approach to implementing e-learning means that certain types of these projects are inevitably starved of attention.

Assignment overview

Don’t forget the extensions

193 out of 318 submitted suggests that almost 40% of the students in the course haven’t submitted the final assignment. What this doesn’t show is that a large number of extensions have been granted. It would be nice for that information to appear on the summary shown above. To actually identify the number of extensions that have been granted, I need to

  1. Click on the “View/grade all submissions” link (and wait for a bit).
  2. Select “Download grading worksheet” from a drop down box.
  3. Filter the rows in the worksheet for those rows containing “Extension granted” (sorting won’t work)

This identifies 78 extensions, suggesting that just under 15% (48) of the students appear not to have submitted on time.
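
For what it’s worth, the filtering in step 3 doesn’t have to be done by hand in a spreadsheet. A rough Perl sketch that simply counts worksheet rows containing the “Extension granted” note (the worksheet filename is a placeholder):

    #!/usr/bin/perl
    # Sketch: count extensions in a downloaded Moodle grading worksheet by
    # looking for rows containing the "Extension granted" note.
    use strict;
    use warnings;
    use Text::CSV;

    my $csv = Text::CSV->new( { binary => 1 } );
    open my $fh, '<:encoding(utf8)', 'grading_worksheet.csv' or die $!;

    my $extensions = 0;
    while ( my $row = $csv->getline($fh) ) {
        $extensions++ if grep { defined && /Extension granted/ } @$row;
    }
    close $fh;

    print "$extensions extensions granted\n";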

Getting in contact with the non-submits

I like to get in contact with these students to see if there’s any problem. If you want some support for this practice, the #1 principle of the “7 Principles of Good Practice for Undergraduate Education” is

1. Encourages Contact Between Students and Faculty
Frequent student-faculty contact in and out of classes is the most important factor in student motivation and involvement. Faculty concern helps students get through rough times and keep on working.

Weather from my bedroom window

Since my students are spread across the world (see the image to the right) and the semester ended last week, a face-to-face chat isn’t going to happen. With 48 students to contact I’m not feeling up to playing phone tag with that number of students. I don’t have easy access to the mobile phone numbers of these students, nor do I have access to any way to send text messages to these students that doesn’t involve the use of my personal phone. An announcement on the course news forum doesn’t provide the type of individual contact I’d prefer and there’s a question about how many students would actually see such an announcement (semester ended last week).

This leaves email as the method I use. The next challenge is getting the email addresses of the students who haven’t submitted AND don’t have extensions.

The Moodle assignment activity provides a range of ways to filter the list of all the students. One of those filters is “Not submitted”. The problem with this filter is that there’s no way (I can see) to exclude those that have been given an extension. In this case, that means I get a list of 126 students. I need to ignore 78 of these and grab the email addresses of 48 of them.

Doing this manually would just be silly. Hence I save the web pages produced by the Moodle assignment activity onto my laptop and run a Perl script that I’ve written which parses the content and displays the names and email addresses of the students without extensions.
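
The actual script is a bit of a mess, but a minimal sketch of the idea looks something like the following. The column headings and the wording of the status text are assumptions based on what my institution’s Moodle shows and may differ elsewhere.

    #!/usr/bin/perl
    # Sketch: list students with no submission and no extension, from a saved
    # copy of the submissions page (already filtered to "Not submitted").
    # Column headings and status wording are assumptions that may vary.
    use strict;
    use warnings;
    use HTML::TableExtract;

    open my $fh, '<:encoding(utf8)', 'submissions.html' or die $!;
    my $html = do { local $/; <$fh> };
    close $fh;

    my $te = HTML::TableExtract->new(
        headers => [ 'First name', 'Email address', 'Status' ] );
    $te->parse($html);

    for my $table ( $te->tables ) {
        for my $row ( $table->rows ) {
            my ( $name, $email, $status ) = map { defined $_ ? $_ : '' } @$row;
            next if $status =~ /Extension/;    # skip anyone with an extension
            print "$name <$email>\n";
        }
    }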

Another approach would have been to use the grading worksheet (a CSV file) I used above. But I’ve gone down the HTML parsing route because I’ve already got a collection of Perl scripts parsing Moodle HTML files due to a range of other screen scraping tasks I’ve been doing for other reasons.

Excluding the won’t submits

I now have the list of students who haven’t submitted and don’t have extensions. But wait, there’s more. There are also some students I know who will, for a variety of reasons, never submit. If possible, I’d prefer not to annoy them by sending them an email about them not submitting Assignment 3.

This information is not in any database. It’s mostly a collection of email messages from various sources stored in the massive 2Gb of space the institution provides for email. I have to manually search through those to find the “won’t submits”.

Send the email

Now it’s time to send the email. In a perfect world I would like to send a personalised email message. A message that includes the student’s name and perhaps other details about their participation in the course. Moodle doesn’t appear to provide an email merge facility. In theory Office provides some functionality this way but I use a Mac and the Office stuff never seems to work easily on the Mac (and I’m biased against Office).

So I don’t send out a personalised email. Just the one email message to these specific students but with only generic content. Many still appear to appreciate the practice. For example, this is a response from one of the students who received one of these emails for a prior assignment (emphasis added)

Thank you for contacting me in regards to the submission. You’re the first staff member to ever do that so I appreciate this a lot.

Some questions

Which makes me wonder how many teaching staff do something like this? Why/why not?

Of the staff who don’t do this, is that because

  1. They don’t think it’s important or appropriate for their course?
  2. They’ve never thought of doing it?
  3. It’s too difficult to do?
Norman on affordances

And, if it were easier, would they do it? What impact might this have?

Moodle is an open source project used by huge numbers of institutions across the world. In addition, over the last year or so my institution has spent some time customising the Moodle assignment activity. I’m fairly certain that I’m not the first person using Moodle to have wanted to contact students who haven’t submitted.

So why all the limitations in the affordances of the Assignment activity?

Types of e-learning projects

In some discussions with @beerc and @damoclarky we’ve identified five separate types of e-learning projects that an institution faces.

  1. External/system driven projects.

    Projects that have to be done because of changes in the external environment. e.g. there’s a new version of Moodle and we need to roll that out or we’ll fall behind and be running a non-supported version.

  2. Strategic projects approved by the institution.

    The institution has decided that a project is important and should be funded and used by the entire institution. e.g. the decision my institution made to modify the Moodle assignment activity in order to transition from a locally built system and not lose functionality.

    Note: there’s a line here between these projects and those below. Typically projects above this line are those that will be used by all (or perhaps most) of the institution.

  3. Projects that might scale, but waiting for them to happen creates problems.

    This is where parts of the institution recognise that there is a problem/need (e.g. the story above) that might scale to the entire institution. But the problem/need has not yet made it across the line above into a strategic project. Meaning that there is a period of time when people know they want to do something, but can’t. They have to wait for the scarce resources of the institution to be allocated.

    In these situations, a few people don’t wait. They develop workarounds like the above. If the need is particularly important, everyone develops their workarounds. Leading to large inefficiencies as the solution is re-created in numerous different ways.

  4. Projects that will only ever be of interest to a program or a particular set of courses.

    For example, all the courses in the Bachelor of Education might benefit from a single page application lesson template that is integrated with the Australian Curriculum. This isn’t something that any other set of courses is going to desire. But it’s possibly of great importance to the courses that do.

  5. Course or pedagogical design specific projects.

    These are projects that are specific to a particular pedagogical design. Perhaps unique to a single course. e.g. the “more student details” Greasemonkey script (see more recent screenshot below) that I’ve implemented for EDC3100. The pedagogical design for this course makes use of both Moodle’s activity completion facility and the BIM module.

    I’m willing to bet large amounts of money that my course is currently the only course that uses this particular combination. This specific version of the tool is unlikely to be valuable to other people. It won’t scale (though the principles behind it might). There’s no point in trying to scale this tool, but it provides real benefit to me and the students in my course.

MoreStudentDetails

The problem of starvation

If you ask any University IT Director they will complain about the fact that they don’t have sufficient resources to keep existing systems running and effectively deal with project types #1 and #2 from the list above. The news is even worse for project types 3, 4 and 5.

#5 projects never get implemented at the institutional level. They only ever get done by “Freds-in-the-shed” like me.

#4 projects might get implemented at the institutional level, but typically only if the group of courses the project is for has the funds. If your degree has small numbers, then you’re probably going to have to do it yourself.

#3 projects might get implemented at the institutional level. But that does depend on the institution becoming aware of and recognising the importance of the project. This can take a loooong time, if it happens at all, especially if the problem requires changes to a system used by other institutions. If it’s a commercial system, it may never happen. But even with an open source system (like Moodle) it can take years. For example, Costello (2014) says the following about a problem with the Quiz system in Moodle (p. 2)

Despite the community reporting the issue to the Moodle developers, giving multiple votes for its resolution and the proposal of a fix, the issue had nonetheless languished for years unfixed.

and (p. 1)

Applying this patch to DCU’s own copy of Moodle was not an option for us however, as the University operated a strict policy of not allowing modifications, or local customisations, to its Moodle installation. Tinkering with a critical piece of the institutional infrastructure, even to fix a problem, was not an option.

Ramifications

I suggest that there are at least four broad results of this starvation of project types 3, 4 and 5

  1. The quality of e-learning is constrained.

    Without the ability to implement projects specific to their context, people bumble along with systems that are inappropriate. The systems limit the quality of e-learning.

  2. Feral or shadow systems are developed.

    The small number of people who can, develop their own solutions to these projects.

  3. Wasted time.

    The people developing these solutions are typically completing tasks outside their normal responsibilities. They are wasting their time. In addition, because these systems are designed for their own particular contexts it is difficult to share them with other people who may wish to use them. Either because other people don’t know that they exist, or because they use a unique combination of technologies/practices no-one else uses. This is a particular problem for project types 3 and 4.

  4. Lost opportunity for innovation and strategic advantage.

    Some of these project types have the potential to be of strategic advantage, but due to their feral nature they are never known about or can’t be easily shared.

So what?

My argument is that if institutions want to radically improve the quality of their e-learning, then they have to find ways to increase the capacity of the organisation to support all five project types. It’s necessary to recognise that supporting all five project types can’t be done by using existing processes, technologies and organisational structures.

Responses

I sent the email to the students who hadn’t submitted Assignment 3 at 8:51 this morning. It’s just over two hours later. In that time, 8 of the 36 students have responded.

References

Costello, E. (2014). Participatory Practices in Open Source Educational Software : The Case of the Moodle Bug Tracker Community. University of Dublin. Retrieved from http://www.tara.tcd.ie/handle/2262/71751

Norman, D. A. (1993). Things that make us smart: defending human attributes in the age of the machine. Reading, MA: Addison Wesley.

Reading – Embracing Big Data in Complex Educational Systems: The Learning Analytics Imperative and the Policy Challenge

The following is a summary and ad hoc thoughts on Macfadyen et al (2014).

There’s much to like in the paper. But the basic premise I see in the paper is that the fix for the current inappropriate teleological processes used in institutional strategic planning and policy setting is an enhanced/adaptive teleological process. The impression I take from the paper is that it’s still missing the need for institutions to enable actors within them to integrate greater use of ateleological processes (see Clegg, 2002). Of course, Clegg goes on to do the obvious and develop a “dialectical approach to strategy” that merges the two extremes.

Is my characterisation of the adaptive models presented here appropriate?

I can see very strong connections between the arguments this paper makes about institutions and learning analytics and the reasons why I think e-learning is a bit like teenage sex.

But given the problems with “e-learning” (i.e. most of it isn’t much good in pedagogical terms), what does that say about the claim that we’re in an age of “big data” in education? If the pedagogy of most e-learning is questionable, is the data being gathered any use?

Conflating “piecemeal” and “implementation of new tools”

The abstract argues that there must be a shift “from assessment-for-accountability to assessment-for-learning” and suggests that it won’t be achieved “through piecemeal implementation of new tools”.

It seems to me that this is conflating two separate ideas:

  1. piecemeal; and,

    i.e. unsystematic or partial measures. The shift can’t happen bit-by-bit; instead it has to proceed at the whole-of-institution level. This is the necessary step in the argument that institutional change is (or must be) involved.

    One of the problems I have with this is that if you are thinking of educational institutions as complex adaptive systems, then they are the type of system where a small (i.e. piecemeal) change could potentially (but not always) have a large impact. In a complex system, a few very small, well-directed changes may have a large impact. Or, alternatively, and picking up on ideas I’ve heard from Dave Snowden, implementing large numbers of very small projects and observing the outcomes may be the only effective way forward. By definition, a complex system is one where being anything but piecemeal may be an exercise in futility, since you can never fully understand a complex system, let alone reliably guess the likely impacts of proposed changes.

    The paper argues that systems of any type are stable and resistant to change. There’s support for this argument. I need to look for dissenting voices and evaluate.

  2. implementation of new tools.

    i.e. the build it and they will come approach won’t work. Which I think is the real problem and is indicative of the sort of simplistic planning processes that the paper argues against.

These are two very different ideas. I’d also argue that while these alone won’t enable the change, they are both necessary for it. I’d also argue that institutional change (by itself) is unlikely to achieve the type of cultural change required. The argument presented in seeking to explain “Why e-learning is a bit like teenage sex” is essentially this: institutional attempts to enable and encourage change in learning practice toward e-learning fail because they are too focused on institutional concerns (large scale strategic change) and not enough on enabling elements of piecemeal growth (i.e. bricolage).

The Reusability Paradox and “at scale”

I also wonder about considerations raised by the reusability paradox in connection with statements like (emphasis added) “learning analytics (LA) offer the possibility of implementing real–time assessment and feedback systems and processes at scale”. Can the “smart algorithms” of LA marry the opposite ends of the spectrum – pedagogical value and large scale reuse? Can the adaptive planning models bridge that gap?

Abstract

In the new era of big educational data, learning analytics (LA) offer the possibility of implementing real–time assessment and feedback systems and processes at scale that are focused on improvement of learning, development of self–regulated learning skills, and student success. However, to realize this promise, the necessary shifts in the culture, technological infrastructure, and teaching practices of higher education, from assessment–for–accountability to assessment–for–learning, cannot be achieved through piecemeal implementation of new tools. We propose here that the challenge of successful institutional change for learning analytics implementation is a wicked problem that calls for new adaptive forms of leadership, collaboration, policy development and strategic planning. Higher education institutions are best viewed as complex systems underpinned by policy, and we introduce two policy and planning frameworks developed for complex systems that may offer institutional teams practical guidance in their project of optimizing their educational systems with learning analytics.

Introduction

First para is a summary of all the arguments for learning analytics

  • awash in data (I’m questioning)
  • now have algorithms/methods that can extract useful stuff from the data
  • using these methods can help make sense of complex environments
  • education is increasingly complex – increasing learner diversity, reducing funding, increasing focus on quality and accountability, increasing competition
  • it’s no longer an option to use the data

It also includes a quote from a consulting company promoting FOMO/falling behind if you don’t use it. I wonder how many different fads they’ve said that about?

Second para explains what the article is about – “new adaptive policy and planning approaches….comprehensive development and implementation of policies to address LA challenges of learning design, leadership, institutional culture, data access and security, data privacy and ethical dilemmas, technology infrastructure, and a demonstrable gap in institutional LA skills and capacity”.

But based on the idea of Universities as complex adaptive systems. That “simplistic approaches to policy development are doomed to fail”.

Assessment practices: A wicked problem in a complex system

Assessment is important. Demonstrates impact – positive and negative – of policy. Assessment still seen too much as focused on accountability and not for learning. Diversity of stakeholders and concerns around assessment make substantial change hard.

“Assessment practice will continue to be intricately intertwined both with learning
and with program accreditation and accountability measures.” (p. 18). NCLB used as an example of the problems this creates and mentions Goodhart’s law.

Picks up the on-going focus on “high-stakes snapshot testing” to provide comparative data. Mentions

Wall, Hursh and Rodgers (2014) have argued, on the other hand, that the perception that students, parents and educational leaders can only obtain useful comparative information about learning from systematized assessment is a false one.

But also suggests that learning analytics may offer a better approach – citing (Wiliam, 2010).

Identifies the need to improve assessment practices at the course level. Various references.

Touches on the difficulties in making these changes. Mentions wicked problems and touches on complex systems

As with all complex systems, even a subtle change may be perceived as difficult, and be resisted (Head & Alford, 2013).

But doesn’t pick up the alternate possibility that a subtle change that might not be seen as difficult could have large ramifications.

Learning analytics and assessment-for-learning

This paper is part of a special issue on LA and assessment. Mentions other papers that have shown the contribution LA can make to assessment.

Analytics can add distinct value to teaching and learning practice by providing greater insight into the student learning process to identify the impact of curriculum and learning strategies, while at the same time facilitating individual learner progress (p. 19)

The argument is that LA can help both assessment tasks: quality assurance, and learning improvement.

Technological components of the educational system and support of LA

The assumption is that there is a technological foundation for storing, managing, visualising and processing big educational data. Need for more than just the LMS. Need to mix it all up, hence “institutions are recognizing the need to re–assess the concept of teaching and learning space to encompass both physical and virtual locations, and adapt learning experiences to this new context (Thomas, 2010)” (p. 20). Add to that the rise of multiple devices etc.

Identifies the following requirements for LA tools (p. 21) – emphasis added

  1. Diverse and flexible data collection schemes: Tools need to adapt to increasing data sources, distributed in location, different in scope, and hosted in any platform.
  2. Simple connection with institutional objectives at different levels: information needs to be understood by stakeholders with no extra effort. Upper management needs insight connected with different organizational aspects than an educator. User–guided design is of the utmost importance in this area.
  3. Simple deployment of effective interventions, and an integrated and sustained overall refinement procedure allowing reflection
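To make the first of those requirements a little more concrete, here’s a rough sketch (entirely my own illustration, not anything from the paper) of what a diverse and flexible data collection scheme might look like: hypothetical adapters that normalise events from different platforms into a single stream for later analysis.

```python
# Illustrative sketch only: normalise events from diverse, hypothetical
# data sources into one stream that downstream analytics could consume.
from dataclasses import dataclass
from datetime import datetime
from typing import Iterable, Iterator, List

@dataclass
class LearningEvent:
    student_id: str
    source: str        # e.g. "lms", "blog"
    action: str        # e.g. "viewed", "posted"
    timestamp: datetime

class MoodleLogAdapter:
    """Wraps rows from a hypothetical Moodle activity log export."""
    def __init__(self, rows: Iterable[dict]):
        self.rows = rows

    def events(self) -> Iterator[LearningEvent]:
        for row in self.rows:
            yield LearningEvent(row["userid"], "lms", row["action"],
                                datetime.fromtimestamp(int(row["time"])))

class BlogFeedAdapter:
    """Wraps posts harvested from hypothetical student blog feeds."""
    def __init__(self, posts: Iterable[dict]):
        self.posts = posts

    def events(self) -> Iterator[LearningEvent]:
        for post in self.posts:
            yield LearningEvent(post["author"], "blog", "posted",
                                datetime.fromisoformat(post["published"]))

def combined_stream(adapters) -> List[LearningEvent]:
    """Merge events from every adapter into one time-ordered stream."""
    return sorted((event for adapter in adapters for event in adapter.events()),
                  key=lambda event: event.timestamp)
```

Adding a new data source then becomes a matter of writing another small adapter rather than reworking the whole pipeline, which is roughly what adapting “to increasing data sources, distributed in location, different in scope, and hosted in any platform” would seem to require.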

Some nice overlaps with the IRAC framework here.

It does raise interesting questions about what institutional objectives actually are. Even more importantly, how easy is it (or isn’t it) to identify what they are and what they mean at the various levels of the institution?

Interventions

An inset talks about the sociotechnical infrastructure for LA. It mentions the requirement for interventions (p. 21).

The third requirement for technology supporting learning analytics is that it can facilitate the deployment of so–called interventions, where intervention may mean any change or personalization introduced in the environment to support student success, and its relevance with respect to the context. This context may range from generic institutional policies, to pedagogical strategy in a course. Interventions at the level of institution have been already studied and deployed to address retention, attrition or graduation rate problems (Ferguson, 2012; Fritz, 2011; Tanes, Arnold, King, & Remnet, 2011). More comprehensive frameworks that widen the scope of interventions and adopt a more formal approach have been recently proposed, but much research is still needed in this area (Wise, 2014).

And then this (pp. 21-22) which contains numerous potential implications (emphasis added)

Educational institutions need technological solutions that are deployed in a context of continuous change, with an increasing variety of data sources, that convey the advantages in a simple way to stakeholders, and allow a connection with the underpinning pedagogical strategies.

But what happens when the pedagogical strategies are very, very limited?
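Before moving on, a toy illustration (again my own sketch, not from the paper) of what a small course-level intervention in the above sense might look like: a simple rule that queues a personalised nudge for students who haven’t completed a key activity.

```python
# Toy sketch of a course-level "intervention" rule: queue a personalised
# nudge for each student yet to complete a key activity. All names invented.
from datetime import datetime

def plan_nudges(students, completions, activity, due):
    """Return (student_id, message) pairs for students yet to complete activity.

    students:    dict mapping student_id -> preferred name
    completions: dict mapping student_id -> set of completed activity ids
    """
    nudges = []
    for sid, name in students.items():
        if activity not in completions.get(sid, set()):
            nudges.append((sid, f"Hi {name}, the activity '{activity}' is due "
                                f"{due:%d %B} and doesn't appear complete yet."))
    return nudges

if __name__ == "__main__":
    students = {"s1": "Alex", "s2": "Sam"}
    completions = {"s1": {"curriculum-check"}}
    for sid, message in plan_nudges(students, completions,
                                    "curriculum-check", datetime(2015, 9, 30)):
        print(sid, message)
```

Even this trivial rule assumes the completion data is available, accurate and pedagogically meaningful, which loops back to the question above about what happens when the underlying pedagogical strategies are very limited.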

Then makes this point as a segue into the next section (p. 22)

Foremost among these is the question of access to data, which needs must be widespread and open. Careful policy development is also necessary to ensure that assessment and analytics plans reflect the institution’s vision for teaching and strategic needs (and are not simply being embraced in a panic to be seen to be doing something with data), and that LA tools and approaches are embraced as a means of engaging stakeholders in discussion and facilitating change rather than as tools for measuring performance or the status quo.

The challenge: Bringing about institutional change in complex systems

“the real challenges of implementation are significant” (p. 22). The above identifies “only two of the several and interconnected socio-technical domains that need to be addressed by comprehensive institutional policy and strategic planning”

  1. influencing stakeholder understanding of assessment in education
  2. developing the necessary institutional technological infrastructure to support the undertaking

And this has to be done whilst attending to business as usual.

Hence it’s not surprising that education lags other sectors in adopting analytics. Identifies barriers

  • lack of practical, technical and financial capacity to mine big data

    A statement from the consulting firm, which also just happens to be in the business of selling services to help.

  • perceived need for expensive tools

Cites various studies showing education institutions stuck at gathering and basic reporting.

And of course even if you get it right…

There is recognition that even where technological competence and data exist, simple presentation of the facts (the potential power of analytics), no matter how accurate and authoritative, may not be enough to overcome institutional resistance (Macfadyen & Dawson, 2012; Young & Mendizabal, 2009).

Why policy matters for LA

Starts with establishing higher education institutions as a “superb example of complex adaptive systems” but then suggests that (p. 22)

policies are the critical driving forces that underpin complex and systemic institutional problems (Corvalán et al., 1999) and that shape perceptions of the nature of the problem(s) and acceptable solutions.

I struggle a bit with that observation and even more with this argument (p. 22)

we argue that it is therefore only through implementation of planning processes driven by new policies that institutional change can come about.

Expands on the notion of CAS and wicked problems. Makes this interesting point

Like all complex systems, educational systems are very stable, and resistant to change. They are resilient in the face of perturbation, and exist far from equilibrium, requiring a constant input of energy to maintain system organization (see Capra, 1996). As a result, and in spite of being organizations whose business is research and education, simple provision of new information to leaders and stakeholders is typically insufficient to bring about systemic institutional change.

Now talks about the problems more specific to LA and the “lack of data-driven mind-set” among senior management. Links this to an earlier example of institutional research being used to inform institutional change (McIntosh, 1979) and to a paper by Ferguson applying those findings to LA. From there and other places the factors identified include

  • academics don’t want to act on findings from other disciplines;
  • disagreements over qualitative vs quantitative approaches;
  • researchers and decision makers speak different languages;
  • lack of familiarity with statistical methods;
  • data not presented/explained to decision makers well enough;
  • researchers tend to hedge and qualify conclusions;
  • valorisation of educator/faculty autonomy and resistance to any administrative efforts perceived to interfere with T&L practice.

Social marketing and change management are drawn upon to suggest that “social and cultural change” isn’t brought about simply by giving access to data – “scientific analyses and technical rationality are insufficient mechanisms for understanding and solving complex problems” (p. 23). Returns to

what is needed are comprehensive policy and planning frameworks to address not simply the perceived shortfalls in technological tools and data management, but the cultural and capacity gaps that are the true strategic issues (Norris & Baer, 2013).

Policy and planning approaches for wicked problems in complex systems

Sets about defining policy. Includes this which resonates with me

Contemporary critics from the planning and design fields argue, however, that these classic, top–down, expert–driven (and mostly corporate) policy and planning models are based on a poor and homogenous representation of social systems mismatched with our contemporary pluralistic societies, and that implementation of such simplistic policy and planning models undermines chances of success (for review, see Head & Alford, 2013).

Draws on wicked problem literature to expand on this. Then onto systems theory.

And this is where the argument about piecemeal growth being insufficient arises (p. 24)

These observations not only illuminate why piecemeal attempts to effect change in educational systems are typically ineffective, but also explains why no one–size–fits–all prescriptive approach to policy and strategy development for educational change is available or even possible.

and perhaps more interestingly

Usable policy frameworks will not be those which offer a to do list of, for example, steps in learning analytics implementation. Instead, successful frameworks will be those which guide leaders and participants in exploring and understanding the structures and many interrelationships within their own complex system, and identifying points where intervention in their own system will be necessary in order to bring about change

One thought is whether or not this view strikes “management” as “researchers hedging their bets” – something mentioned as a problem above.

Moves on to talk about “adaptive management strategies” (Head and Alford, 2013), which offer new means for policy and planning that “can allow institutions to respond flexibly to ever-changing social and institutional contexts and challenges” and which involve

  • role of cross-institutional collaboration
  • new forms of leadership
  • development of enabling structures and processes (budgeting, finance, HR etc)

Interesting that notions of technology don’t get a mention.

Two “sample policy and planning models” are discussed.

  1. Rapid Outcome Mapping Approach (ROMA) – from international development

    “focused on evidence-based policy change”. An iterative model. I wonder about this

    Importantly, the ROMA process begins with a systematic effort at mapping institutional context (for which these authors offer a range of tools and frameworks) – the people, political structures, policies, institutions and processes that may help or hinder change.

    Perhaps a step up, but isn’t this still big up front design? Assumes you can do this? But then some is better than none?

    Apparently this approach is used more in Ferguson et al (2014)

  2. “cause-effect framework” – DPSEEA framework

    Driving force, Pressure, State, Exposure, Effect (DPSEEA): a way of identifying linkages between forces underpinning complex systems.

Ferguson et al (2014) apparently show that “apparently successful institutional policy and planning processes have pursued change management approaches that map well to such frameworks”. So those processes were not yet informed by the frameworks? Of course, there’s always the question of the people driving those processes being the ones reporting on their success.

I do like this quote (p. 25)

To paraphrase Head and Alford (2013), when it comes to wicked problems in complex systems, there is no one– size–fits–all policy solution, and there is no plan that is not provisional.

References

Ferguson, R., Clow, D., Macfadyen, L., Essa, A., Dawson, S., & Alexander, S. (2014). Setting Learning Analytics in Context: Overcoming the Barriers to Large-Scale Adoption. Journal of Learning Analytics, 1(3), 120–144. doi:10.1145/2567574.2567592

Macfadyen, L. P., Dawson, S., Pardo, A., & Gasevic, D. (2014). Embracing big data in complex educational systems: The learning analytics imperative and the policy challenge. Research and Practice in Assessment, 9(Winter), 17–28.