The perceived uselessness of the Technology Acceptance Model (TAM) for e-learning

Below you will find the slides, abstract, and references for a talk given to folk from the University of South Australia on 1 October, 2015. A later blog post outlines core parts of the argument.

Slides

Abstract

In a newspaper article (Laxon, 2013), Professor Mark Brown described e-learning as

a bit like teenage sex. Everyone says they’re doing it but not many people are and those that are doing it are doing it very poorly.

This is not a new problem; a long litany of publications spread over decades has bemoaned the limited adoption of new technology-based pedagogical practices (e-learning). The dominant theoretical model used in research seeking to understand the adoption decisions of both staff and students has been the Technology Acceptance Model (TAM) (Šumak, Heričko, & Pušnik, 2011). TAM views an individual’s intention to adopt a particular digital technology as being most heavily influenced by two factors: perceived usefulness and perceived ease of use. This presentation will explore and illustrate the perceived uselessness of TAM for understanding and responding to e-learning’s “teenage sex” problem, using the BAD/SET mindsets (Jones & Clark, 2014) and experience from four years of teaching large, e-learning “rich” courses. The presentation will also offer initial suggestions and ideas for addressing e-learning’s “teenage sex” problem.
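For readers unfamiliar with TAM’s structure, the sketch below is a rough illustration (not part of the talk or its slides) of those two core relationships expressed as simple regressions on made-up survey data: perceived ease of use feeding into perceived usefulness, and both predicting behavioural intention. All variable names, coefficients, and data are invented for illustration; actual TAM studies typically rely on validated survey instruments and structural equation modelling rather than this toy estimate.

```python
# Illustrative sketch of TAM's core relationships (hypothetical data only).
# PEOU = perceived ease of use, PU = perceived usefulness, BI = behavioural intention.
import numpy as np

rng = np.random.default_rng(0)
n = 200                                    # hypothetical survey respondents

peou = rng.normal(size=n)                  # perceived ease of use (standardised)
pu = 0.5 * peou + rng.normal(scale=0.8, size=n)             # PEOU influences PU
bi = 0.6 * pu + 0.2 * peou + rng.normal(scale=0.7, size=n)  # PU and PEOU influence BI

# Estimate the intention regression with ordinary least squares.
X = np.column_stack([np.ones(n), pu, peou])
coef, *_ = np.linalg.lstsq(X, bi, rcond=None)
print("BI ~ intercept, PU, PEOU:", np.round(coef, 2))
```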

References

Bichsel, J. (2012). Analytics in Higher Education: Benefits, Barriers, Progress and Recommendations. Louisville, CO. Retrieved from http://net.educause.edu/ir/library/pdf/ERS1207/ers1207.pdf

Box, G. E. P. (1979). Robustness in the Strategy of Scientific Model Building. In R. Launer & G. Wilkinson (Eds.), Robustness in Statistics (pp. 201–236). Academic Press.

Burton-Jones, A., & Hubona, G. (2006). The mediation of external variables in the technology acceptance model. Information & Management, 43(6), 706–717. doi:10.1016/j.im.2006.03.007

Ciborra, C. (1992). From thinking to tinkering: The grassroots of strategic information systems. The Information Society, 8(4), 297–309.

Corrin, L., Kennedy, G., & Mulder, R. (2013). Enhancing learning analytics by understanding the needs of teachers. In Electric Dreams. Proceedings ascilite 2013 (pp. 201–205).

Davis, F. D. (1986). A Technology Acceptance Model for empirically testing new end-user information systems: Theory and results. Doctoral dissertation, Massachusetts Institute of Technology.

Davis, F. D. (1989). Perceived usefulness, perceived ease of use and user acceptance of information technology. MIS Quarterly, 13(3), 319.

Davis, F. D., Bagozzi, R. P., & Warshaw, P. R. (1989). User acceptance of computer technology: A comparison of two theoretical models. Management Science, 35(8), 982–1003.

Dawson, S., & McWilliam, E. (2008). Investigating the application of IT generated data as an indicator of learning and teaching performance. Canberra: Australian Learning and Teaching Council. Retrieved from http://moourl.com/hpds8

Ferguson, R., Clow, D., Macfadyen, L., Essa, A., Dawson, S., & Alexander, S. (2014). Setting Learning Analytics in Context: Overcoming the Barriers to Large-Scale Adoption. Journal of Learning Analytics, 1(3), 120–144. doi:10.1145/2567574.2567592

Hannafin, M., McCarthy, J., Hannafin, K., & Radtke, P. (2001). Scaffolding performance in EPSSs: Bridging theory and practice. In World Conference on Educational Multimedia, Hypermedia and Telecommunications (pp. 658–663). Retrieved from http://www.editlib.org/INDEX.CFM?fuseaction=Reader.ViewAbstract&paper_id=8792

Holt, D., Palmer, S., Munro, J., Solomonides, I., Gosper, M., Hicks, M., … Hollenbeck, R. (2013). Leading the quality management of online learning environments in Australian higher education. Australasian Journal of Educational Technology, 29(3), 387–402. Retrieved from http://www.ascilite.org.au/ajet/submission/index.php/AJET/article/view/84

Introna, L. (2013). Epilogue: Performativity and the Becoming of Sociomaterial Assemblages. In F.-X. de Vaujany & N. Mitev (Eds.), Materiality and Space: Organizations, Artefacts and Practices (pp. 330–342). Palgrave Macmillan.

Jasperson, S., Carter, P. E., & Zmud, R. W. (2005). A Comprehensive Conceptualization of Post-Adoptive Behaviors Associated with Information Technology Enabled Work Systems. MIS Quarterly, 29(3), 525–557.

Jones, D., & Clark, D. (2014). Breaking BAD to bridge the reality/rhetoric chasm. In B. Hegarty, J. McDonald, & S. Loke (Eds.), Rhetoric and Reality: Critical perspectives on educational technology. Proceedings ascilite Dunedin 2014 (pp. 262–272). Dunedin.

Kay, A. (1984). Computer Software. Scientific American, 251(3), 53–59.

Kunin, V., Goldovsky, L., Darzentas, N., & Ouzounis, C. A. (2005). The net of life: Reconstructing the microbial phylogenetic network. Genome Research, 15(7), 954–959. doi:10.1101/gr.3666505

Laxon, A. (2013, September 14). Exams go online for university students. The New Zealand Herald.

Lee, Y., Kozar, K. A., & Larsen, K. R. T. (2003). The Technology Acceptance Model: Past, Present, and Future. Communications of the AIS, 12. Retrieved from http://aisel.aisnet.org/cais/vol12/iss1/50

Lockyer, L., Heathcote, E., & Dawson, S. (2013). Informing Pedagogical Action: Aligning Learning Analytics With Learning Design. American Behavioral Scientist, 57(10), 1439–1459. doi:10.1177/0002764213479367

Müller, M. (2015). Assemblages and Actor-networks: Rethinking Socio-material Power, Politics and Space. Geography Compass, 9(1), 27–41. doi:10.1111/gec3.12192

Najmul Islam, A. K. M. (2014). Sources of satisfaction and dissatisfaction with a learning management system in post-adoption stage: A critical incident technique approach. Computers in Human Behavior, 30, 249–261. doi:10.1016/j.chb.2013.09.010

Nistor, N. (2014). When technology acceptance models won’t work: Non-significant intention-behavior effects. Computers in Human Behavior, pp. 299–300. doi:10.1016/j.chb.2014.02.052

Stead, D. R. (2005). A review of the one-minute paper. Active Learning in Higher Education, 6(2), 118–131. doi:10.1177/1469787405054237

Sturgess, P., & Nouwens, F. (2004). Evaluation of online learning management systems. Turkish Online Journal of Distance Education, 5(3). Retrieved from http://tojde.anadolu.edu.tr/tojde15/articles/sturgess.htm

Šumak, B., Heričko, M., & Pušnik, M. (2011). A meta-analysis of e-learning technology acceptance: The role of user types and e-learning technology types. Computers in Human Behavior, 27(6), 2067–2077. doi:10.1016/j.chb.2011.08.005

Venkatesh, V., & Bala, H. (2008). Technology acceptance model 3 and a research agenda on interventions. Decision Sciences, 39(2), 273–315. doi:10.1111/j.1540-5915.2008.00192.x

Venkatesh, V., & Davis, F. D. (2000). A theoretical extension of the Technology Acceptance Model: Four longitudinal field studies. Management Science, 46(2), 186–204.

Venkatesh, V., Morris, M., Davis, G. B., & Davis, F. D. (2003). User acceptance of information technology: Toward a unified view. MIS Quarterly, 27(3), 425–478.

It’s not how bad you start, but how quickly you get better

Woods & Hollnagel (2006) start by presenting the Bounded Rationality syllogism:

All cognitive systems are finite (people, machines, or combinations).
All finite cognitive systems in uncertain changing situations are fallible.
Therefore, machine cognitive systems (and joint systems across people and machines) are fallible. (p. 2)

From this they suggest that

The question, then, is not fallibility or finite resources of systems, but rather the development of strategies that handle the fundamental tradeoffs produced by the need to act in a finite, dynamic, conflicted, and uncertain world.

The core ideas of Cognitive Systems Engineering (CSE) shift the question from overcoming limits to supporting adaptability and control. (p. 2)

This has obvious links to my last post, “All models are wrong”.

This is why organisations annoy me with their fetish for developing the one correct model (or system) and their insistence that everyone can and should follow it.