The adoption and acceptance of learning analytics

Much earlier this year I was invited to join some folk much cleverer than I on a project exploring the adoption of learning analytics using the Technology Acceptance Model (TAM). Going by the date embedded in the URL of this post, that was way back in August. It’s now December, and I’m finally getting back to this post to capture some of my thinking.

If I had to summarise my thinking now, prior to completing the post below, it would consist of

  1. Based on the experience with business intelligence systems in the broader business world, and with the LMS/e-learning within universities, the adoption of learning analytics is likely to be problematic in terms of both quantity and quality.
  2. The centrality in TAM of an individual’s perceptions of the usefulness and ease of use of an IT innovation in driving adoption panders to my beliefs and prejudices.
  3. I have some qualms (arising from both the literature and my own limitations) about the value of research based on TAM and surveys of intention to use.

And now some random thoughts.

Deja vu all over again

Based on my current observations, my fear is that learning analytics as implemented by universities is going to suffer problems similar to those of most prior applications of ICTs to university learning. For example: Geoghegan’s (1994) identification of the chasm as it applied to instructional technology; the findings 10+ years later that usage of the LMS by academics was limited in terms of both quantity and quality; and more recent reports that understanding the information provided by learning analytics is really hard.

The Technology Acceptance Model

For better or worse, the current research is looking to leverage the Technology Acceptance Model (TAM) to explore the likely acceptance of learning analytics. TAM is one of the “big theories” of the Information Systems discipline and has been widely used. TAM provides an instrument through which predictions can be made about whether or not some new technological tool is going to be adopted within a particular group or organisation. The idea is that, based on the beliefs about the tool held by the individuals within that group, you can make predictions about whether or not the tool will be used. The particular beliefs at the core of the model are perceived usefulness (often the most influential) and perceived ease of use.
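
To make that concrete, here’s a minimal sketch (in Python, with entirely made-up data and item names) of the shape of a TAM-style analysis: Likert-scale items are averaged into construct scores for perceived usefulness and perceived ease of use, and those scores are used to predict stated intention to use. It’s an illustration of the idea, not the validated TAM instrument.

```python
# A minimal, illustrative sketch of the core TAM idea: intention to use
# modelled as a function of perceived usefulness (PU) and perceived ease
# of use (PEOU). All data and item groupings are hypothetical.
import numpy as np

# Hypothetical 7-point Likert responses: one row per respondent.
pu_items = np.array([[6, 7, 6], [3, 2, 3], [5, 5, 4], [7, 6, 7], [2, 3, 2]])
peou_items = np.array([[5, 6, 5], [4, 3, 3], [6, 6, 5], [7, 7, 6], [3, 2, 3]])
intention = np.array([6.5, 2.5, 5.0, 7.0, 2.0])  # stated intention to use

pu = pu_items.mean(axis=1)      # construct score: perceived usefulness
peou = peou_items.mean(axis=1)  # construct score: perceived ease of use

# Ordinary least squares: intention ~ intercept + PU + PEOU
X = np.column_stack([np.ones_like(pu), pu, peou])
coef, *_ = np.linalg.lstsq(X, intention, rcond=None)
print(f"intercept={coef[0]:.2f}, PU weight={coef[1]:.2f}, PEOU weight={coef[2]:.2f}")
```

In the published TAM work the relationships are usually tested with structural equation modelling rather than a simple regression, but the basic logic (beliefs about the tool predicting intention, and intention predicting use) is the same.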

TAM is not without its critics (e.g. Bagozzi, 2007). It has also evolved somewhat, currently standing at TAM3 (Venkatesh & Bala, 2008). One of the criticisms of TAM has been that it doesn’t provide practitioners with “actionable guidance”, i.e. how do you actually increase the likelihood of adoption?

TAM work is traditionally survey based. Venkatesh and Bala (2008) identify three broad areas of TAM research

  1. Replication and testing of the TAM constructs.
  2. Development of theoretical underpinnings for the TAM constructs.
  3. The addition of new constructs as determinants of the TAM constructs.

    This third area has given rise to four different types of determinant: individual differences, system characteristics, social influence, and facilitating conditions.

These determinants arose in the development of TAM2. In developing TAM3, Venkatesh and Bala (2008) suggested the following determinants for the two core beliefs:

  • Perceived usefulness
    • Subjective norm
    • Image
    • Job relevance
    • Output quality
    • Result demonstrability
  • Perceived ease of use
    • Computer self-efficacy
    • Perceptions of external control
    • Computer anxiety
    • Computer playfulness
    • Perceived enjoyment
    • Objective usability

Experience and voluntariness are suggested as potential moderating factors. Perhaps the above illustrates Bagozzi’s (2007) suggestion that

On the other hand, recent extensions of TAM (e.g., the UTAUT) have been a patchwork of many largely unintegrated and uncoordinated abridgements

Bagozzi (2007) points out that there is a “potentially infinite list of such moderators”, with the result that these broadenings of TAM become “both unwieldy and conceptually impoverished”. The advice is that the introduction of such moderating variables should be theory based.
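
To get a feel for how quickly these extensions pile up, here’s a rough, purely illustrative count (in Python) of the terms the TAM3 determinants and moderators could introduce. The construct names come from Venkatesh and Bala (2008); the assumption that every moderator can interact with every determinant is my own simplification, made only to illustrate Bagozzi’s point.

```python
# Illustrative only: a rough count of how the TAM3 determinants and
# moderators could inflate a model. Assumes (as a simplification) that each
# moderator may interact with each determinant, which overstates the actual
# TAM3 structure but illustrates the "unwieldy" concern.
pu_determinants = ["subjective norm", "image", "job relevance",
                   "output quality", "result demonstrability"]
peou_determinants = ["computer self-efficacy", "perceptions of external control",
                     "computer anxiety", "computer playfulness",
                     "perceived enjoyment", "objective usability"]
moderators = ["experience", "voluntariness"]

main_effects = len(pu_determinants) + len(peou_determinants)
possible_interactions = main_effects * len(moderators)
print(f"{main_effects} determinants and up to {possible_interactions} moderated terms,"
      f" before even reaching intention or actual use")
```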

LAAM

As it happens, Ali et al (2012) have taken TAM and done some work around learning analytics, which they describe as

While many approaches and tools for learning analytics have been proposed, there is limited empirical insights of the factors influencing potential adoption of this new technology. To address this research gap, we propose and empirically validate a Learning Analytics Acceptance Model (LAAM), which we report in this paper, to start building research understanding how the analytics provided in a learning analytics tool affect educators’ adoption beliefs. (p. 131)

Factors examined

  1. Pedagogical knowledge and information design skills
  2. Perceived utility of a learning analytics tool
  3. Educators’ perceived ease-of-use of a learning analytics tool

Identifying what influences usefulness and ease of use

A few years back a group of us used TAM to explore perceptions of an online assignment submission system (Behrens et al., 2005). Rather than trying to predict levels of usage of a new system, however, this work explored perceptions of a system that was already being used, with the intent of understanding what was making that particular system successful. The survey was based on TAM1 but also included free-text responses in which respondents could talk about what influenced their perceptions.

Having re-read that work, there’s probably some value in revisiting this research, especially given that the institution has since moved on to another system.

Some thoughts on TAM and learning analytics

I see the need for identifying and exploring the factors that will make learning analytics tools likely to be used. I’m just not sure TAM or its variants are the right approach. Some reasons follow.

Are there large groups of people actually using learning analytics?

How do you measure individual perceptions of something that many people haven’t used yet?

Ali et al (2012) addressed this by getting a group of educators together and having them experiment with a particular tool.

This approach raises a problem of its own.

Is there any commonality between learning analytics tools?

If the aim is to test this at different institutions, is each institution using the same set of learning analytics tools? I think not; at the moment most are doing their own thing.

Running TAM surveys on different tools would generate other problems.

Identifying the factors beforehand

The survey approach is based on the assumption that you can identify the model beforehand: you figure out which factors will influence adoption, incorporate them into a model (in this case by integrating them with TAM), and then test it. Ali et al (2012), for example, included the pedagogical knowledge and information design skills of educators.
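
As a toy illustration of that workflow (and only that; the data, the “pedagogical knowledge” measure and the effect sizes below are all invented), the sketch takes a base TAM-style regression and checks whether a pre-specified extra factor improves the fit.

```python
# A toy sketch of "specify the model up front, then test it": does adding a
# pre-chosen factor (here a hypothetical pedagogical-knowledge score, echoing
# Ali et al, 2012) improve on the base PU + PEOU model? Everything is
# simulated purely for illustration.
import numpy as np

def r_squared(X, y):
    """Fit OLS and return the coefficient of determination."""
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    residuals = y - X @ coef
    return 1 - residuals.var() / y.var()

rng = np.random.default_rng(0)
n = 40
pu = rng.uniform(1, 7, n)              # perceived usefulness
peou = rng.uniform(1, 7, n)            # perceived ease of use
ped_knowledge = rng.uniform(1, 7, n)   # hypothetical extra factor

# Simulated intention scores, loosely driven by all three factors.
intention = 0.6 * pu + 0.3 * peou + 0.2 * ped_knowledge + rng.normal(0, 0.5, n)

base = np.column_stack([np.ones(n), pu, peou])
extended = np.column_stack([base, ped_knowledge])

gain = r_squared(extended, intention) - r_squared(base, intention)
print(f"R^2 gain from adding the pre-specified factor: {gain:.3f}")
```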

You might argue that, given the relative novelty of learning analytics (which is itself arguable), these factors need to be explored a bit more before being baked into a model.

I think this comes back to my humble nature/stupidity and not thinking I can know everything up-front. Hence my preference for emergent/agile development.

Doesn’t offer tool developers/organisations guidance for intervention

The lack of “actionable guidance” mentioned above is one weakness of TAM identified in the literature. But as a wannabe developer of learning-analytics-enhanced tools, TAM appears to be of fairly limited use to me for another reason. As mentioned above, TAM focuses on individuals’ internal beliefs, attitudes and intentions: do you think this tool is easy to use? Do you think it’s useful? Or, picking up on Ali et al (2012): what is your level of pedagogical knowledge or information design skill?

This doesn’t seem to provide me with any insight into how to make learning analytics useful or easy to use, or at least not insight that I couldn’t gain from a bit of user-centred design. As a tool developer, how do I change a user’s perceptions of computer self-efficacy or computer anxiety? An organisation might think it can do this via training etc., but I have my doubts.

Teacher conceptions of teaching and learning

If a factor were to be added to TAM for learning analytics, I do think the work on conceptions of teaching and learning would be a strong candidate. In fact, the introduction to Steel (2009) cites research indicating that “teacher beliefs about the value of technology use are a significant factor in predicting usage”.

Where to now?

Not sure and time to go home. More thinking and reading to do.

References

Ali, L., Asadi, M., Gašević, D., Jovanović, J., & Hatala, M. (2012). Factors influencing beliefs for adoption of a learning analytics tool: An empirical study. Computers & Education, 62, 130–148.

Bagozzi, R. (2007). The legacy of the Technology Acceptance Model and a proposal for a paradigm shift. Journal of the Association for Information Systems, 8(4), 244–254.

Behrens, S., Jamieson, K., Jones, D., & Cranston, M. (2005). Predicting system success using the Technology Acceptance Model: A case study. In 16th Australasian Conference on Information Systems. Sydney.

Geoghegan, W. (1994). Whatever happened to instructional technology? In S. Bapna, A. Emdad, & J. Zaveri (Eds.), (pp. 438–447). Baltimore, MD: IBM.

Venkatesh, V., & Bala, H. (2008). Technology acceptance model 3 and a research agenda on interventions. Decision Sciences, 39(2), 273–315. doi:10.1111/j.1540-5915.2008.00192.x
