Design of a BAD approach to developing TPACK

Two of my recent publications (Jones & Clark, 2014; Jones, Heffernan & Albion, 2015) seek to explore and identify the problems teachers face in developing the knowledge necessary to teach in e-learning/digital learning environments. Both suggest that there are significant problems with how this is currently done and offer some theoretical suggestions about how it might be improved. Jones and Clark (2014) suggest that the BAD/SET mindsets offer one way to understand the problems with current approaches, and outline some fruitful ways forward.

This post is an attempt to describe, in broad detail, a system that could potentially enable a much greater use of the BAD mindset within what passes for institutional e-learning environments (and perhaps beyond). The system is really a collection of different current technologies, including:

  • Greasemonkey or other approaches to client-side scripting;
  • Smallest Federated Wiki; and,
  • APIs and API-centric architectures.

Some form of screen-scraping web automation might also be added to this list, as might other technologies. But the three above are a start.

See the abstract of this presentation for an explanation of TPACK, and Jones et al (2015) for an argument for “distributed TPACK”.

Basic Aim

The division of responsibility for all the elements of e-learning, and of learning about e-learning, is currently organised and operates as a tree. Software development, instructional design, graphic design, copyright, staff development, teaching, and other related tasks are each located in different parts of the organisational hierarchy. Learning itself is organised as a tree. Students enrol in programs (e.g. a Bachelor of Education), which are divided into years (e.g. first year, second year, etc.), then into courses, and then into semesters and weeks. The end result is a series of separate black boxes that rarely, and not easily, share the knowledge they produce. The hierarchy extends beyond institutions. Many institutions might be using the Moodle LMS, but they will typically be black boxes that rarely, and not easily, share their knowledge with other institutions.

The aim of this “system” is to:

  1. enable the creation and sharing (or not) of new connections within the hierarchy of systems, people, and responsibilities that make up organisational e-learning and beyond;
  2. make it easier to combine disparate nodes and connections into single “black-boxed sub-networks” that can be shared, viewed, modified, and re-shared by anyone;
  3. in order to improve the quality of the knowledge about how best to undertake e-learning.

A metaphor

Most people are familiar with the organisation of species of organisms into evolutionary trees (aka phylogenetic trees), an idea suggested very early on by Darwin.

[Image: Darwin Exhibition @ Gulbenkian by jcraveiro, on Flickr (CC BY-NC-ND 2.0)]

However, more recent work suggests that trees aren’t all that effective at capturing the full complexity of what is going on. Biologists have started talking about the “net of life”. Or, as described here, the tree is still the main way to describe the evolution of species,

but the “tree” now has “vines” that hang across the branches

as shown in the following figure from Kunin et al (2005).

How it might work

The following outlines how this system might work, the technology that might be used to implement it, and the benefits it might bring.

1. Install the “script”

The first step is to install some form of client-side (browser) script (e.g. Greasemonkey). The idea is that the “vines that hang across the branches” will be revealed by this script (aka augmented browsing). Hence the assumption is that participation in this “net of e-learning” depends on using a web browser and this script.

You might also need to configure the script to point to a particular “database” or three.
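
To make this a little more concrete, here is a minimal sketch of what the metadata block for such a userscript might look like. The script name, the URL patterns, and the “database” location are all hypothetical; an actual implementation would depend on the institution’s systems.

```javascript
// ==UserScript==
// @name         e-learning vines (hypothetical)
// @namespace    https://example.edu/vines
// @description  Reveal the "vines" hanging across institutional e-learning pages
// @include      https://moodle.example.edu/*
// @include      https://www.australiancurriculum.edu.au/*
// @grant        GM_xmlhttpRequest
// ==/UserScript==

// Hypothetical: the SFW "database" page this copy of the script has been
// configured to consult (step 6 below looks at using more than one).
const DATABASE_URL = "https://sfw.example.edu/institutional-vines";
```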

2. Use the script

Normally the script doesn’t make any difference to what you see when browsing the web. You just continue using the web browser as you normally do. However, the script has been configured to look out for specific types of URLs. When it sees one of these URLs it will allow you to see the vines; it will augment your browsing.

The types of URLs/web pages it would be on the lookout for are any pages that you and the other members of your network (perhaps within an organisation, but also across different organisations) use for learning and teaching. Some possible examples might be:

  • A discussion forum (or any other component) in Moodle (or some other LMS).
  • The object repository used by your organisation.
  • The dashboard for a WordPress blog (or any other web page).
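
As a rough sketch of the sort of check involved, assuming (hypothetically) that each “database” entry pairs a URL pattern with the names of the vines to show, the script might do something like:

```javascript
// Hypothetical, simplified "database" entries: a URL pattern plus the names
// of the vines available for pages matching that pattern.
const entries = [
  { pattern: "^https://moodle\\.example\\.edu/mod/forum/", vines: ["advice", "helpers", "people"] },
  { pattern: "^https://www\\.australiancurriculum\\.edu\\.au/", vines: ["advice", "forum"] }
];

// Return the entries (if any) that match the page currently being viewed.
function matchingEntries(url) {
  return entries.filter(entry => new RegExp(entry.pattern).test(url));
}

const relevant = matchingEntries(window.location.href);
if (relevant.length > 0) {
  // Only now does the script change anything the user sees (see step 3).
  console.log("Vines available for this page:", relevant);
}
```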

3. View the vines/scaffolds

When such a page is detected, the script will add a user interface element to the page, perhaps in the form of a pop-up window or something embedded in the page. The form itself should eventually be configurable, but the basic aim is to be easily visible without cluttering up the page’s interface.

The interface element should give you a visual overview of all of the “vines” (scaffolds) that have been hung across the tree. These vines might include (examples of some given below):

  1. Collections of helper applications, skins, recipes, and other “code” that can be used to automate, extend, and customise the page being viewed and its related functionality.
  2. Pointers/details of people who have recently or heavily used this particular page (as a potential source of help).
  3. Relevant support resources.
    At some level this mirrors (but extends) the context-sensitive help available in Moodle (and, I assume, other LMS). e.g. rather than sending an email containing a 9-page Word document with instructions on how to create a supplementary assessment, and expecting people to store it ready for use when they have to do this task, embed a link to the information on the assessment activity in Moodle.
  4. A method for asking and answering questions about the specific page.
  5. A method for recording critical incidents.

There is no need for all of these vines to be visible, nor are these the only possible vines.

The script would probably tend to:

  1. Provide a brief summary of the status of each vine that you can view at a glance.
  2. Provide access to a different, expanded interface to perform specific tasks.
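
To illustrate, here is a minimal sketch of how the script might embed such an element. The element names and styling are invented purely for illustration.

```javascript
// Add a small, fixed panel summarising each vine without cluttering the page.
function showVinesPanel(vines) {
  const panel = document.createElement("div");
  panel.id = "vines-panel"; // hypothetical id
  panel.style.cssText =
    "position:fixed; bottom:1em; right:1em; z-index:9999; " +
    "background:#fff; border:1px solid #ccc; padding:0.5em; font-size:0.8em;";

  vines.forEach(vine => {
    const line = document.createElement("div");
    line.textContent = vine.name + ": " + vine.summary; // e.g. "advice: 3 items"
    line.style.cursor = "pointer";
    line.title = "Open the expanded interface for this vine";
    line.addEventListener("click", () => window.open(vine.url, "_blank"));
    panel.appendChild(line);
  });

  document.body.appendChild(panel);
}

// Usage with hypothetical data:
// showVinesPanel([{ name: "advice", summary: "3 items", url: "https://sfw.example.edu/advice" }]);
```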

4. Select and use helper applications

A key vine that I’m interested in is the idea of “helper applications”, i.e. small scripts/applications that add value to the functionality of the current page/service. This idea is linked to the ideas of the Reusability Paradox and the Starvation Problem of e-learning.

These helper applications would potentially cover a wide array of functionality and complexity. The following examples illustrate some initial categories:

  • customise;
    e.g. a couple of lines of jQuery that allow you to change the menu in the institutionally provided course template to something that’s useful (a rough sketch follows this list).
  • extend; and
    e.g. a Greasemonkey script that adds a table of statistics about marker progress to the Moodle assignment activity.
  • automate.
    e.g. rather than providing a 9-page Word document containing instructions on how to perform some task with the LMS, add a wizard that steps the user through the process.
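
As a sketch of the “customise” category, the couple of lines of jQuery mentioned above might look something like the following. The selector and the replacement menu items are entirely hypothetical; they would depend on the institution’s course template (and on jQuery being available on the page).

```javascript
// Hypothetical: hide the template menu items I never use and add links I do use.
jQuery(function ($) {
  const $menu = $("#course-template-menu");   // hypothetical selector
  $menu.find("li").hide();                    // hide the default items
  $menu.append('<li><a href="/mod/forum/view.php?id=12345">Q&amp;A forum</a></li>');
  $menu.append('<li><a href="https://www.australiancurriculum.edu.au/">Australian Curriculum</a></li>');
});
```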

The details of the technical implementation of these helper applications are less important than the intent: any individual end user can decide whether or not to install/use a particular helper application. It’s not a question for an institutional committee to meet and decide what’s best for everyone.

Greasemonkey-like scripts would be one technical approach, as used by the marker progress example above. Automation might be implemented by some form of web scraping.

Implementation would be significantly aided by the increasing prevalence of API-centric architectures.

Pedagogical skins

From a purely selfish perspective, I am personally interested in “helper applications” that act as pedagogical skins, i.e. applications that customise, add to, and combine the basic functionality of most basic online tools into something that is more specifically designed to support particular learning activities. e.g. a pedagogical skin that turns the Moodle discussion forum into a debate forum, or another skin that adds various types of notification and awareness (Carroll et al, 2003) or learning analytics interventions (Wise, 2014).
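
A very rough sketch of one small piece of such a skin, assuming a made-up convention where debate posts declare their side in the subject line; the selectors here are invented for illustration rather than taken from Moodle.

```javascript
// Hypothetical "debate" skin: colour-code forum posts by the side declared in
// their subject line, e.g. "[Affirmative] ..." or "[Negative] ...".
document.querySelectorAll(".forumpost").forEach(post => {  // invented selector
  const subject = post.querySelector(".subject");
  if (!subject) return;
  if (/^\[Affirmative\]/i.test(subject.textContent)) {
    post.style.borderLeft = "4px solid #2a7";  // affirmative team
  } else if (/^\[Negative\]/i.test(subject.textContent)) {
    post.style.borderLeft = "4px solid #a33";  // negative team
  }
});
```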

5. Contribute a helper application

The intent is that not only is the adoption decision up to the individual, so is the development decision. The point here is that, to paraphrase Anton Ego, not everyone is going to be able to code, but the great ideas about what to code can come from anywhere.


The idea is that this system helps anyone make changes to it, not just the people within the IT governance structure. The aim is to move the ability to customise and code from a tree to a network. Or, as more correctly pointed out by @s_palm, to move from a hierarchical network to an open network.

6. Configure/reconfigure the “script”

This approach of providing the individual with control over the system extends to the ability to re-configure the “script” that drives all this, i.e. you can control which URLs are of interest to you. You’re not limited to whatever your organisation has decided.

This is where I think the Smallest Federated Wiki can play a role.

The core “script” for this conception would probably be limited to the following task:

When a URL “of interest” is visited, display something

Meaning that the two key bits of data are:

  1. which URLs are of interest; and
  2. what to display for each of those URLs.
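
A fuller version of the simplified entry sketched back in step 2 might look something like this; the field names are assumptions about one possible format, not an existing standard.

```javascript
// Hypothetical shape of one "database" entry: which URLs are of interest,
// and what to display (the "vines") when one of them is visited.
const exampleEntry = {
  pattern: "^https://moodle\\.example\\.edu/mod/assign/",
  vines: {
    advice:  "https://sfw.example.edu/marking-supplementary-assessments",
    forum:   "https://support.example.edu/forums/moodle-assignment",
    helpers: ["https://github.com/example/moodle-marker-progress"]
  }
};
```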

Traditionally, this type of information would be stored in a database of some description: a database under the control of some person or organisation, which once again brings us back to a tree structure. How do you remove this barrier?

This is where Smallest Federated Wiki (SFW) enters the picture. It’s a Wiki, but not as you know it. It has a number of features that offer great potential for this “system”. @hapgood’s Smallest Federated Wiki as an Alternate Vision of the Web gives much more detail. I’m going to focus on two features particularly relevant to this idea.

Everything on a SFW page is data

Rather than just storing text, each paragraph on a SFW page is a little bit of JSON data plus a link to instructions on how to display that data. This feature of SFW is designed to make it easier to bring data into SFW. I assume this means there are also options for taking data out of SFW?

Only after starting on this post did I stumble across this post from @hapgood that explains, and has a video demonstrating, how to use SFW as a “database”. The video pointed me to something I’d missed before: at the bottom of each SFW page you will find a link to the JSON data for that page, i.e. where an external application could extract the data.

This means that pages in a SFW could be used to store the data about which URLs are of interest and what to display, rather than a database. To add a new URL of interest or change what to show, you would edit the page on SFW.
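
A rough sketch of how the script might pull its configuration out of a SFW page follows. It assumes the page’s JSON is reachable by adding a .json suffix to the page URL, and that each “database” entry has been stored as JSON text in one of the page’s paragraphs; both are assumptions about how the data might be laid out, not claims about how SFW must be used.

```javascript
// Fetch the JSON for a (hypothetical) SFW "database" page and extract entries.
async function loadDatabase(pageUrl) {
  const response = await fetch(pageUrl + ".json");  // assumption: JSON at page URL + ".json"
  const page = await response.json();
  const entries = [];
  (page.story || []).forEach(item => {              // assumption: page content lives in "story" items
    try {
      entries.push(JSON.parse(item.text));          // assumption: one entry per item
    } catch (e) {
      // skip items that aren't "database" entries
    }
  });
  return entries;
}

// Usage (hypothetical URL):
// loadDatabase("https://sfw.example.edu/institutional-vines").then(console.log);
```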

Now this could get quite large and cumbersome. There is a reason why people use databases to store data. This is one of the problems with this approach that will have to be considered.

However, SFW data is managed/displayed via plugins (e.g. this video that illustrates a SFW plugin that is an expenses calculator, and this from @hapgood on SFW as a universal canvas). This offers some potential for writing plugins to help keep a handle on this problem. e.g. one of the plugins might allow a list of URLs to be specified in another data source so that not every URL has to be shown on one page. That other data source might be another SFW page, or it might be a database or any other data store.

But doesn’t that get back to the tree problem? That’s where the next feature of SFW enters the picture.

Every page on SFW has a “fork” button

When you edit a SFW page (whether it’s yours or someone else’s) the original SFW page is forked. Your SFW takes a copy of the data and stores it on your server. There are links back to the original page, and the original author is able to see whether someone has forked their wiki page and make decisions about incorporating those changes back into the original.

The video in this post from @hapgood shows this in action.

Assuming that the “script” for this system allows the configuration of the URL for the SFW page “database”, then anyone with a SFW could fork a “database” page and customise it to their needs, inheriting all the data from the original page and adding their own. They could then configure the script to use this new “database”, either instead of, or as well as, the original. If appropriate, the author of the original “database” might accept the customisations back into the original.

Each person/group involved can make their own decisions about what “vines” they want to see.

If the “script” supported multiple “database” pages, that would allow an individual to support multiple identities. e.g. as a member of my institution I might want to use the institutional “database”; as someone teaching Education students, I would also like to have an Education “database”; and as someone teaching the course EDC3100, I might also like an EDC3100 “database”.
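
A small sketch of how the script might support that, assuming it simply merges the entries from every configured “database” page (all URLs invented):

```javascript
// Hypothetical: consult several "database" pages, one per "identity".
const DATABASE_URLS = [
  "https://sfw.example.edu/institutional-vines",        // my institution's database
  "https://sfw.education.example.org/education-vines",  // the Education community's database
  "https://sfw.example.edu/edc3100-vines"               // the EDC3100 course database
];

// Merge the entries from every configured database (loadDatabase as sketched above).
async function loadAllDatabases() {
  const lists = await Promise.all(DATABASE_URLS.map(loadDatabase));
  return lists.flat();
}
```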

By forking the SFW database anyone can re-configure how this “script” would work.

Of course, this raises questions about spam, duplication, coordination between separate “databases”, etc. But these are questions that can be explored.

An institutional use case

Let’s assume that my institution has built this system and is encouraging its use. The process might go something like this:

  1. Pre-population by the institution.

    Some appropriate central group might start by pre-populating the institution’s SFW “database” page. It would probably focus on URLs within the institutional LMS and other systems. The institution would probably go through various committees, working groups, and related palaver before coming up with the institution’s quality-assured and controlled “database” page.

  2. Dissemination to staff.

    (I’ll focus on teaching staff, but the same could apply to students). Staff get an email with a pointer to the script. The script might be pre-configured to use the URL for the institutional SFW “database”.

  3. Installation and use by my colleagues and me.

    I decide to go with this and install it on my browser. I start using it. A couple of my colleagues in education start using it. Many of us in education make use of the Australian Curriculum website. We want to start sharing stories, advice, and helper applications for using that site. How do we do this?

  4. We fork the SFW page or create our own.

    Using my SFW I might create a new Education database or fork the institution’s. Either way, my education colleagues and I re-configure the script so that it points to the education database. This might be instead of, or as well as, the original institutional SFW database page.

  5. Education academics at other institutions see the value.

    Education academics at all Australian universities would use the Australian Curriculum website. Perhaps some academics at other institutions hear about what we’re doing and are interested in participating. So they grab the script, configure it just to point to our education database page, and start participating.

  6. Participation continues.

    As outlined above, participation potentially includes a range of activities, such as:

    • Reading some advice on how to use a particular page.
    • Adding some advice on how to use a particular page. In some instances it might make sense to store this advice in SFW pages, but in others the advice may be URLs to existing online resources.
    • Participating in a forum talking about a particular page. The implementation of the forum is beyond this particular system; the SFW database page would just contain a URL to the forum, and perhaps the script might know enough to generate summary data from the forum to display.

      In an institution, the forum might be the formal IT helpdesk system.

    • Installing or contributing helper applications. Again, the script would typically only provide pointers to where these operations are performed (e.g. GitHub).

And so on.

The aim of the system is to help create, manage, and traverse the vines that hang across the tree that underpins e-learning. The intent is that this should be generative and highly contextually sensitive.

Would this work? Would people use it? Would you use it?

Lots of questions.

References

Carroll, J. M., Neale, D. C., Isenhour, P. L., Beth Rosson, M., & Scott McCrickard, D. (2003). Notification and awareness: Synchronizing task-oriented collaborative activity. International Journal of Human Computer Studies, 58(5), 605–632. doi:10.1016/S1071-5819(03)00024-7

Jones, D., & Clark, D. (2014). Breaking BAD to bridge the reality/rhetoric chasm. In B. Hegarty, J. McDonald, & S. Loke (Eds.), Rhetoric and Reality: Critical perspectives on educational technology. Proceedings ascilite Dunedin 2014 (pp. 262–272). Dunedin. Retrieved from http://ascilite2014.otago.ac.nz/files/fullpapers/221-Jones.pdf

Jones, D., Heffernan, A., & Albion, P. R. (2015). TPACK as shared practice: Toward a research agenda. In D. Slykhuis & G. Marks (Eds.), Proceedings of Society for Information Technology & Teacher Education International Conference 2015 (pp. 3287–3294). Las Vegas, NV: AACE. Retrieved from http://www.editlib.org/p/150454/

Wise, A. F. (2014). Designing pedagogical interventions to support student use of learning analytics. In Proceedings of the Fourth International Conference on Learning Analytics And Knowledge – LAK ’14 (pp. 203–211). doi:10.1145/2567574.2567588

“All models are wrong, but some are useful” and its application to e-learning

In a section with the heading “ALL MODELS ARE WRONG BUT SOME ARE USEFUL”, Box (1979) wrote

Now it would be very remarkable if any system existing in the real world could be exactly represented by any simple model. (p. 202)

Over recent weeks I’ve been increasingly interested in the application of this aphorism to the practice of institutional e-learning and why it is so bad.

Everything in e-learning is a model

For definition’s sake, the OECD (2005) defines e-learning as the use of information and communications technology (ICT) to support and enhance learning and teaching.

As the heading suggests, I’d like to propose that everything in institutional e-learning is a model. Borrowing from the Wikipedia page on this aphorism you get the definition of model as “a simplification or approximation of reality and hence will not reflect all of reality” (Burnham & Anderson, 2002).

The software that enables e-learning is a model. At some stage, some smart people sat down, generated and analysed a set of requirements, and then developed a model (in the form of the software) that aims to fulfill those requirements.

Instructional design and teaching are essentially the creations of models intended to enable learning. At some stage, some smart people sat down, generated and analysed a set of requirements, and then developed a model to achieve some learning outcome.

Organisational structures are models. At some stage, some smart people sat down, generated and analysed a set of requirements, and then developed a model to achieve some operational and strategic requirements. That same set of smart people probably also worked on developing a range of models in the form of organisational policies and processes, some of which may have been influenced by the available software models.

The theories, tools, and schema used in the generation of the above models, are in turn models.

And following Box, all models are wrong.

But it gets worse.

In e-learning, everyone is an expert model builder

E-learning within an institution – by its nature – must bring together a range of different disciplines, including (but not limited to): senior leadership, middle management, quality assurance (boo) and related; researchers; librarians; instructional designers, staff developers and related learning and teaching experts; various forms of technology experts (software developers, network and systems administrators, user support etc); various forms of content development experts (editors, illustrators, video and various multimedia developers); and, of course the teachers/subject matter experts. I’ll make special mention of the folk from marketing who are the experts of the institutional brand.

All of these people are – or at least should be – expert model builders. Experts at building and maintaining the types of models mentioned above. Even the institutional brand is a type of model.

This brings problems.

Each of these expert model builders suffers from expertise bias.

What do you mean you can’t traverse the byzantine mess of links from the staff intranet and find the support documentation? Here, you just click here, here, here, here, here, here, here, and here. See, obvious……

And each of these experts thinks that the key to improving the quality of e-learning at the institution can be found in the institution doing a much better job at their model. Can you guess which group of experts is most likely to suggest the following?

The quality of learning and teaching at our institution can be improved by:

  • requiring every academic to have a teaching qualification.
  • ensuring we only employ quality researchers who are leaders in their field.
  • adopting the latest version of ITIL, i.e. ITIL (the full straight-jacket).
  • requiring all courses to meet the 30-page checklist of quality criteria.
  • redesigning all courses using constructive alignment.
  • re-writing all our systems using an API-centric architecture.
  • adopting my latest theory on situated cognition, self-regulated learning and maturation.

What’s common about most of these suggestions is that it will all be better if we just adopt this new, better model. All of the problems we’ve faced previously are due to the fact that we’ve used the wrong model. This model is better. It will solve it.

Some recent examples

I’ve seen a few examples of this recently.

Ben Werdmuller had an article on Medium titled “What would it take to save #EdTech?” Ben’s suggested model solution was an open startup.

Mark Smithers blogged recently reflecting on 20 years in e-learning. In it Mark suggests a new model for course development teams as one solution.

Then there is this post on Medium titled “Is Slack the new LMS?”. As the title suggests, the new model here is that embodied by Slack.

Tomorrow I’ll be attending a panel session titled “The role of Openness in Creating New Futures in higher education” (being streamed live). Indicative of how the “open” model is seen as yet another solution to the problem of institutional e-learning.

And going back a bit further Holt et al (2011) report on the strategic contributions of teaching and learning centres in Australian higher education and observe that

These centres remain in a state of flux, with seemingly endless reconfiguration. The drivers for such change appear to lie in decision makers’ search for their centres to add more strategic value to organisational teaching, learning and the student experience (p. 5)

i.e. every senior manager worth their salt does the same stupid thing that senior managers have always done: change the model that underpins the structure of the organisation.

Changing the model like this is seen as suggesting you know what you are doing and it can sometimes be made to appear logical.

And of course in the complex adaptive system that is institutional e-learning it is also completely and utterly wrong and destined to fail.

A new model is not a solution

This is because any model is “a simplification or approximation of reality and hence will not reflect all of reality” (Burnham & Anderson, 2002) and “it would be very remarkable if any system existing in the real world could be exactly represented by any simple model” (Box, 1979, p. 202).

As Box suggested, this is not to say you should ignore all models. After all, all models are wrong, but some are useful. You can achieve some benefits from moving to a new model.

But a new model can never be “the” solution. Especially as the size of the impact of the model grows. A new organisational structure for the entire university is never going to be the solution, it will only be really, really costly.

There are always problems

This is my 25th year working in universities. I’ve spent my entire 25 years identifying and fixing the problems that exist with whatever model the institution has used. Almost my entire research career has been built around this. A selection of the titles from my publications illustrates the point:

  1. Computing by Distance Education: Problems and Solutions
  2. Solving some problems of University Education: A Case Study
  3. Solving some problems with University Education: Part II
  4. How to live with ERP systems and thrive.
  5. The rise and fall of a shadow system: Lessons for Enterprise System Implementation
  6. Limits in developing innovative pedagogy with Moodle: The story of BIM
  7. The life and death of Webfuse: principles for learning and learning into the future
  8. Breaking BAD to bridge the reality/rhetoric chasm.

And I’m not alone. Scratch the surface at any university and you will find numerous examples of individuals or small groups of academics identifying and fixing problems with whatever models the institution has adopted. e.g. a workshop at CSU earlier this year included academics from CSU presenting a raft of systems they’d had to develop to solve problems with the institutional models.

The problem is knowing how to combine the multitude of models

The TPACK (Technological Pedagogical Content Knowledge) framework provides one way to conceptualise what is required for quality learning and teaching with technology. In proposing the TPACK framework, Mishra and Koehler (2006) argue that

Quality teaching requires developing a nuanced understanding of the complex relationships between technology, content, and pedagogy, and using this understanding to develop appropriate, context-specific strategies and representations. Productive technology integration in teaching needs to consider all three issues not in isolation, but rather within the complex relationships in the system defined by the three key elements (p. 1029).

i.e. good quality teaching requires the development of “appropriate, context-specific” combinations of all of the models involved with e-learning.

The reason why “all models are wrong” is that when you get down to the individual course (remember, I’m focusing on university e-learning) you are getting much closer to the reality of learning. That reality is hidden from the senior manager developing policy, the QA person deciding on standards for the entire institution, the software developer working on a system (open source or not), and so on. They are all removed from the context. They are all removed from the reality.

The task of the teacher (or the course design team depending on your model) is captured somewhat by Shulman (1987)

to transform the content knowledge he or she possesses into forms that are pedagogically powerful and yet adaptive to the variations in ability and background presented by the students (p. 15)

The task is to mix all those models together and produce the most effective learning experience for these particular students in this particular context. The better you can do that, the more pedagogical value. The better the learning.

All of the work outlined in the publications listed above represents attempts to mix the various models available into a form that has greater pedagogical value within the context in which I was teaching.

A new model means a need to create a new mix

When a new LMS, a new organisational structure, a new QA process, or some other new model replaces the old model it doesn’t automatically bring an enhancement in the overall experience of e-learning. That enhancement is really only maximised by each of the teachers/course design teams having to go back and re-do all the work they’d previously done to get the mix of models right for their context.

This is where (I think) the “technology dip” comes from. As Underwood and Dillon (2011) put it:

Introducing new technologies into the classroom does not automatically bring about new forms of teaching and learning. There is a significant discontinuity between the introduction of ICT into any educational setting and the emergence of measurable impacts on pedagogy and learning outcomes (p. 320).

Instead, the quality of learning and teaching dips after the introduction of new technologies (new models), as teachers struggle to work out the new mix of models that is most appropriate for their context.

It’s not how bad you start, it’s how quickly you get better

In reply to my comment on his post, Mark asks the obvious question

What other model is there?

Given the argument that “all models are wrong”, how do I propose a model that is correct?

I’m not going to expand on this very much, but I will point you to Dave Snowden’s recent series of posts, including this one titled “Towards a new theory of change”, and his general argument

that we need to stop talking about how things should be, and start changing things in the here and now

For me this means: stop focusing on your new model of the ideal future (e.g. “if only we used Slack for the LMS”) and instead:

  • develop an on-going capacity to know in detail what is going on now (learner experience design is one enabler here);
  • enable anyone and everyone in the organisation to remix all of the models (the horrendously poor way most universities fail to use network technology to promote connections between people currently prevents this);
  • make it easy for people to know about and re-use the mixtures developed by others (too much of the re-mixing that is currently done is manual);
  • find out what works and promote it (this relies on doing a really good job on the first point, not on course evaluation questionnaires); and
  • find out what doesn’t work and kill it off.

This doesn’t mean doing away with strategic projects, it just means scaling them back a bit and focusing more on helping all the members of the organisation learn more about the unique collection of model mixtures that work best in the multitude of contexts that make up the organisation.

My suggestion is that there needs to be a more fruitful combination of the BAD and SET frameworks, and a particular focus on developing the organisation’s distributed capacity to develop its TPACK.

References

Box, G. E. P. (1979). Robustness in the Strategy of Scientific Model Building. In R. Launer & G. Wilkinson (Eds.), Robustness in Statistics (pp. 201–236). Academic Press. ISBN 0-12-438150-2

Holt, D., Palmer, S., & Challis, D. (2011). Changing perspectives: Teaching and Learning Centres’ strategic contributions to academic development in Australian higher education. International Journal for Academic Development, 16(1), 5–17. Retrieved from http://www.tandfonline.com/doi/abs/10.1080/1360144X.2011.546211

OECD. (2005). E-Learning in Tertiary Education: Where do we stand? Paris, France: Centre for Educational Research and Innovation, Organisation for Economic Co-operation and Development. Retrieved from http://www.oecd-ilibrary.org/education/e-learning-in-tertiary-education_9789264009219-en

Underwood, J., & Dillon, G. (2011). Chasing dreams and recognising realities: teachers’ responses to ICT. Technology, Pedagogy and Education, 20(3), 317–330. doi:10.1080/1475939X.2011.610932