“All models are wrong, but some are useful” and its application to e-learning

In a section with the heading “ALL MODELS ARE WRONG BUT SOME ARE USEFUL”, Box (1979) wrote

Now it would be very remarkable if any system existing in the real world could be exactly represented by any simple model. (p. 202)

Over recent weeks I’ve been increasingly interested in the application of this aphorism to the practice of institutional e-learning and why it is so bad.

Everything in e-learning is a model

For definition’s sake, the OECD (2005) defines e-learning as the use of information and communications technology (ICT) to support and enhance learning and teaching.

As the heading suggests, I’d like to propose that everything in institutional e-learning is a model. Borrowing from the Wikipedia page on this aphorism, you get the definition of a model as “a simplification or approximation of reality and hence will not reflect all of reality” (Burnham & Anderson, 2002).

The software that enables e-learning is a model. At some stage, some smart people sat down, generated and analysed a set of requirements, and then developed a model (in the form of the software) that aims to fulfill those requirements.

Instructional design and teaching are essentially the creations of models intended to enable learning. At some stage, some smart people sat down, generated and analysed a set of requirements, and then developed a model to achieve some learning outcome.

Organisational structures are models. At some stage, some smart people sat down, generated and analysed a set of requirements, and then developed a model to achieve some operational and strategic requirements. That same set of smart people probably also worked on developing a range of models in the form of organisational policies and processes, some of which may have been influenced by the software models available.

The theories, tools, and schema used in the generation of the above models are in turn models.

And following Box, all models are wrong.

But it gets worse.

In e-learning, everyone is an expert model builder

E-learning within an institution – by its nature – must bring together a range of different disciplines, including (but not limited to): senior leadership, middle management, quality assurance (boo) and related; researchers; librarians; instructional designers, staff developers and related learning and teaching experts; various forms of technology experts (software developers, network and systems administrators, user support etc); various forms of content development experts (editors, illustrators, video and various multimedia developers); and, of course the teachers/subject matter experts. I’ll make special mention of the folk from marketing who are the experts of the institutional brand.

All of these people are – or at least should be – expert model builders. Experts at building and maintaining the types of models mentioned above. Even the institutional brand is a type of model.

This brings problems.

Each of these expert model builders suffers from expertise bias.

What do you mean you can’t traverse the byzantine mess of links from the staff intranet and find the support documentation? Here, you just click here, here, here, here, here, here, here, and here. See, obvious……

And each of these experts thinks that the key to improving the quality of e-learning at the institution can be found in the institution doing a much better job at their model. Can you guess which group of experts is most likely to suggest the following?

The quality of learning and teaching at our institution can be improved by:

  • requiring every academic to have a teaching qualification.
  • ensuring we only employ quality researchers who are leaders in their field.
  • adopting the latest version of ITIL, i.e. ITIL (the full straight-jacket).
  • requiring all courses to meet the 30-page checklist of quality criteria.
  • redesigning all courses using constructive alignment.
  • re-writing all our systems using an API-centric architecture.
  • adopting my latest theory on situated cognition, self-regulated learning and maturation.

What’s common to most of these suggestions is the belief that it will all be better if we just adopt this new, better model. All of the problems we’ve faced previously are due to the fact that we used the wrong model. This model is better. It will solve them.

Some recent examples

I’ve seen a few examples of this recently.

Ben Werdmuller had an article on Medium titled “What would it take to save #EdTech?” Ben’s suggested model solution was an open startup.

Mark Smithers blogged recently reflecting on 20 years in e-learning. In it Mark suggests a new model for course development teams as one solution.

Then there is this post on Medium titled “Is Slack the new LMS?”. As the title suggests, the new model here is that embodied by Slack.

Tomorrow I’ll be attending a panel session titled “The role of Openness in Creating New Futures in higher education” (being streamed live). It’s indicative of how the “open” model is seen as yet another solution to the problems of institutional e-learning.

And going back a bit further, Holt et al. (2011) report on the strategic contributions of teaching and learning centres in Australian higher education and observe that

These centres remain in a state of flux, with seemingly endless reconfiguration. The drivers for such change appear to lie in decision makers’ search for their centres to add more strategic value to organisational teaching, learning and the student experience (p. 5)

i.e. every senior manager worth their salt does the same stupid thing that senior managers have always done: change the model that underpins the structure of the organisation.

Changing the model like this is seen as suggesting you know what you are doing and it can sometimes be made to appear logical.

And of course in the complex adaptive system that is institutional e-learning it is also completely and utterly wrong and destined to fail.

A new model is not a solution

This is because any model is “a simplification or approximation of reality and hence will not reflect all of reality” (Burnham & Anderson, 2002) and “it would be very remarkable if any system existing in the real world could be exactly represented by any simple model” (Box, 1979, p. 202).

As Box suggested, this is not to say you should ignore all models. After all, all models are wrong, but some are useful. You can achieve some benefits from moving to a new model.

But a new model can never be “the” solution. Especially as the size of the impact of the model grows. A new organisational structure for the entire university is never going to be the solution, it will only be really, really costly.

There are always problems

This is my 25th year working in Universities. I’ve spent my entire 25 years identifying and fixing the problems that exist with whatever model the institution has used. Almost my entire research career has been built around this. A selection of the titles from my publications illustrates the point

  1. Computing by Distance Education: Problems and Solutions
  2. Solving some problems of University Education: A Case Study
  3. Solving some problems with University Education: Part II
  4. How to live with ERP systems and thrive.
  5. The rise and fall of a shadow system: Lessons for Enterprise System Implementation
  6. Limits in developing innovative pedagogy with Moodle: The story of BIM
  7. The life and death of Webfuse: principles for learning and learning into the future
  8. Breaking BAD to bridge the reality/rhetoric chasm.

And I’m not alone. Scratch the surface at any university and you will find numerous examples of individuals or small groups of academics identifying and fixing problems with whatever models the institution has adopted. For example, a workshop at CSU earlier this year included academics from CSU presenting a raft of systems they’d had to develop to solve problems with the institutional models.

The problem is knowing how to combine the multitude of models

The TPACK (Technological Pedagogical Content Knowledge) framework provides one way to conceptualise what is required for quality learning and teaching with technology. In proposing the TPACK framework, Mishra and Koehler (2006) argue that

Quality teaching requires developing a nuanced understanding of the complex relationships between technology, content, and pedagogy, and using this understanding to develop appropriate, context-specific strategies and representations. Productive technology integration in teaching needs to consider all three issues not in isolation, but rather within the complex relationships in the system defined by the three key elements (p. 1029).

i.e. good quality teaching requires the development of “appropriate, context-specific” combinations of all of the models involved with e-learning.

The reason why “all models are wrong” is that when you get down to the individual course (remember I’m focusing on university e-learning) you are getting much closer to the reality of learning. That reality is hidden from the senior manager developing policy, the QA person deciding on standards for the entire institution, and the software developer working on a system (open source or not). They are all removed from the context. They are all removed from the reality.

The task of the teacher (or the course design team depending on your model) is captured somewhat by Shulman (1987)

to transform the content knowledge he or she possesses into forms that are pedagogically powerful and yet adaptive to the variations in ability and background presented by the students (p. 15)

The task is to mix all those models together and produce the most effective learning experience for these particular students in this particular context. The better you can do that, the more pedagogical value. The better the learning.

All of the work outlined in my publications listed above has been an attempt to mix the various models available into a form that has greater pedagogical value within the context in which I was teaching.

A new model means a need to create a new mix

When a new LMS, a new organisational structure, a new QA process, or some other new model replaces the old model, it doesn’t automatically bring an enhancement in the overall experience of e-learning. That enhancement is really only maximised when each of the teachers/course design teams goes back and re-does all the work they’d previously done to get the mix of models right for their context.

This is where (I think) the “technology dip” comes from. Underwood and Dillon (2011) describe it:

Introducing new technologies into the classroom does not automatically bring about new forms of teaching and learning. There is a significant discontinuity between the introduction of ICT into any educational setting and the emergence of measurable impacts on pedagogy and learning outcomes (p. 320)

Instead the quality of learning and teaching dips after the introduction of new technologies (new models) as teachers struggle to work out the new mix of models that are most appropriate for their context.

It’s not how bad you start, it’s how quickly you get better

In reply to my comment on his post, Mark asks the obvious question

What other model is there?

Given the argument that “all models are wrong”, how do I propose a model that is correct?

I’m not going to expand on this very much, but I will point you to Dave Snowden’s recent series of posts, including this one titled “Towards a new theory of change” and his general argument

that we need to stop talking about how things should be, and start changing things in the here and now

For me this means: stop focusing on your new model of the ideal future (e.g. “if only we used Slack for the LMS”). Instead:

  • develop an on-going capacity to know in detail what is going on now (learner experience design is one enabler here);
  • enable anyone and everyone in the organisation to remix all of the models (the horrendously poor way most universities don’t use network technology to promote connections between people currently prevents this);
  • make it easy for people to know about and re-use the mixtures developed by others (too much of the re-mixing currently done is manual);
  • find out what works and promote it (this relies on doing a really good job on the first point, not on course evaluation questionnaires); and,
  • find out what doesn’t work and kill it off.

This doesn’t mean doing away with strategic projects, it just means scaling them back a bit and focusing more on helping all the members of the organisation learn more about the unique collection of model mixtures that work best in the multitude of contexts that make up the organisation.

My suggestion is that there needs to be a more fruitful combination of the BAD and SET frameworks and a particular focus on developing the organisation’s distributed capacity to develop its TPACK.


Box, G. E. P. (1979). Robustness in the Strategy of Scientific Model Building. In R. Launer & G. Wilkinson (Eds.), Robustness in Statistics (pp. 201–236). Academic Press.

Holt, D., Palmer, S., & Challis, D. (2011). Changing perspectives: Teaching and Learning Centres’ strategic contributions to academic development in Australian higher education. International Journal for Academic Development, 16(1), 5–17.

OECD. (2005). E-Learning in Tertiary Education: Where do we stand? Paris, France: Centre for Educational Research and Innovation, Organisation for Economic Co-operation and Development.

Underwood, J., & Dillon, G. (2011). Chasing dreams and recognising realities: teachers’ responses to ICT. Technology, Pedagogy and Education, 20(3), 317–330. doi:10.1080/1475939X.2011.610932

Refining a visualisation

Time to refine the visualisation of students by postcodes started earlier this week. Have another set of data to work with.

  1. Remove the identifying data.
  2. Clean the data.
    I had to remind myself of the options for vim’s substitute command – losing it. The following provide some idea of the mess.

    :1,$s/"* Sport,Health&PE+Secondry.*"/HPE_Secondary/
    :1,$s/"\* Sport, Health & PE+Secondry.*"/HPE_Secondary/
    :1,$s/Health & PE Secondary/HPE_Secondary/
    :1,$s/\* Secondary.*/Secondary/
    :1,$s/\* Secondry.*/Secondary/
    :1,$s/\* Secondy.*/Secondary/
    :1,$s/\* Secdary.*/Secondary/
    :1,$s/\* TechVocEdu.*/TechVocEdu/

  3. Check columns
    Relying on a visual check in Excel – also to get a better feel for the data.

  4. Check other countries
    Unlike the previous visualisation, the plan here is to recognise that we actually have students in other countries. The problem is that the data I’ve been given doesn’t include country information. Hence I have to manually enter that data, giving, for one of the programs, the following.

    4506 Australia
    8 United Kingdom
    3 Vietnam
    3 South Africa
    3 China
    2 Singapore
    2 Qatar
    2 Japan
    2 Hong Kong
    2 Fiji
    2 Canada
    1 United States of America
    1 Taiwan
    1 Sweden
    1 Sri Lanka
    1 Philippines
    1 Papua New Guinea
    1 New Zealand
    1 Kenya
    1 Ireland

And all good.
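As an aside, the clean-up step above could be made repeatable by doing it in code rather than ad hoc vim substitutions. A Python sketch (the rules are illustrative only, mirroring the same misspellings seen in the data):

```python
import re

# Illustrative rules mirroring the vim substitutions above: each
# (pattern, replacement) maps a messy free-text entry to a canonical label.
RULES = [
    (r"Sport,? ?Health ?& ?PE\+?Secondry", "HPE_Secondary"),
    (r"Health & PE Secondary", "HPE_Secondary"),
    (r"\* Sec(ondary|ondry|ondy|dary)", "Secondary"),
    (r"\* TechVocEdu", "TechVocEdu"),
]

def clean(value: str) -> str:
    """Return the canonical label for a messy specialisation string."""
    for pattern, replacement in RULES:
        if re.search(pattern, value):
            return replacement
    return value
```

The advantage over one-off substitutions is that the same script can be re-run unchanged when the next data extract arrives.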

github and the Moodle book module – Step 3

Time to follow up step 2 in connecting github and the Moodle book module.

Current status

  1. Initial Book tool set up and a github repo created.
  2. Identified a PHP client for the github api that looks useful.
  3. Explored how to complete various required tasks with that API from command line php.

To do here

  1. Consider how the status or relationship between github and book are displayed/tracked.
  2. Refine the design of how the book tool will work with the github api.
  3. Some initial implementation.

How might the status be tracked

I haven’t explored the github API enough. What are the ways you might keep track of the relationship between the github and book versions of the file?

  • Create a github repo on the Moodle server and use git. No.
    This isn’t a good idea for a few reasons. I can’t see too many Moodle instances wanting random local repos set up for each book. Plus the current model here is that the book is linked to one file in a repo, meaning you might have to clone the whole repo locally just to get one file.
  • Compare sha.
    Git creates a checksum. In theory a checksum of the local book could be produced and compared. However, it appears you can’t get a file’s sha from github without also getting its content. Calculating the local sha might also be heavyweight (if it’s possible to do in an equivalent way). I don’t want to be doing this each time an author views a book chapter.
  • Commits?
    Keep a track locally of the version/commit that was last imported into the book. Then do a test for later commits. Again this would have to be done each time someone viewed a book chapter.
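On the sha option, one mitigating fact: git’s blob sha can be computed locally without talking to github at all – it is just a SHA-1 over a “blob <length>\0” header plus the file content. A quick sketch (in Python rather than PHP, purely to illustrate the idea):

```python
import hashlib

def git_blob_sha(content: bytes) -> str:
    """Compute the SHA-1 git assigns to a file's content (its "blob" sha).
    Matches `git hash-object <file>` and the blob shas reported by the
    GitHub API."""
    header = b"blob " + str(len(content)).encode() + b"\0"
    return hashlib.sha1(header + content).hexdigest()
```

The heavyweight part remains generating the exact byte-for-byte file content from the book on every view, which is why commits may still be the better option.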

Testing commits

This code

$commits = $client->repos->commits->listCommitsOnRepository( $owner, $repo );

This returns an object of about 50Kb on a fairly small and inactive repository. But it is returning commits for the whole repository. You can refine the query by path.

Specify the file’s path (information the book tool would have) and it’s down to 16.47Kb.

That information includes the sha(s) for all the commits and also the dates when each commit was made (and by whom). The Book module maintains a timemodified field that could be compared against those dates.

Point: This information would be useful to display on the status page.
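Picking the most recent commit out of that response could look something like this (a Python sketch of the JSON structure the GitHub commits API returns; the PHP client wraps the same data):

```python
import json
from datetime import datetime

def latest_commit(commits_json: str):
    """Given the JSON list from GET /repos/{owner}/{repo}/commits?path=...,
    return the (sha, date) of the most recent commit on that path."""
    commits = json.loads(commits_json)

    def when(commit):
        # GitHub dates are ISO 8601, e.g. "2015-10-03T01:02:03Z"
        raw = commit["commit"]["committer"]["date"]
        return datetime.fromisoformat(raw.replace("Z", "+00:00"))

    newest = max(commits, key=when)
    return newest["sha"], when(newest)
```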

Clarifying the design

Assume that the author has just created a book resource on a Moodle install that has the github book tool installed.

  1. Empty book – no github connection.
    Beyond the normal empty book interface, the author also sees something like the “GitHub (off)” link in the Administration block as shown in this image.
  2. Turn the github link on
    Clicking on the GitHub link opens a new page that will show

    • Basic information about the tool and how it works (and pointers to more detailed information).
    • Space to enter the required details, including
      • the author’s github username

        Will need to explore oAuth

      • name of the repo
      • name of the file to link
        Note: this will need to be able to handle specifying an existing file in the repo (which in a perfect world would have a nice GUI interface to do – but time won’t allow that – even the OERPub editor didn’t do that) or choosing to create a new file based on the book.

        There’ll be a different workflow from here depending on which of these. I’ll focus here on connecting to an existing file.

  3. Github link configured and turned on
    Details about the link have been entered correctly, checked and now the tool displays details about the status of the file. The details will need to be entered into a book tool database.
    At this stage I don’t think it will have imported anything. Just display a list of details about the file. At this stage the author has the option to import the file after checking.
    The author should have the following choices

    1. Which file to import
      In most cases there will be multiple versions of files in the repo. The display should show details of all of them and allow the author to choose which to import.
    2. How to import
      There’s the question of how to import, i.e. should the contents of the file be added to the end of the book, added to the start of the book, or overwrite it?
      Of course, this complicates coding. Especially in terms of committing changes back to the repo. Does the whole book (including the stuff that used to be there) get committed, or just the most recent?
      Initially, there may not be any choice how to import. All or nothing.
      What about merging/updating? Purely updating could be done by overwrite, but merging is different. i.e. I’ve made changes in the book and someone else has made changes in github and I’d like to merge those changes into the one file.
      At this stage, I’m leaning towards putting the onus back on github and keeping the book tool dumb. Makes it easier to implement and maintain at the cost of making it harder for the author – they need to know github to handle this case.
  4. File being imported.
    Clicking the “import” button starts the overwriting process (or a choice of import strategy if provided). The following screen will show the outcome of that process. What it shows might include

    • Whether or not the file was in a format that could be imported.
    • If there were any errors in the format. (these first two are related)
    • The number of chapters/sub-chapters etc that were found.

    The book tool table should be updated to store the date associated with the commit that was imported. Perhaps the SHA should also be stored to allow working on old versions of the file.

  5. Return to the normal book view
    Time to check out the imported book. The Book administration block should now display a link to the file on github that was imported and some indication of the relationship of the contents of the book. Options are

    • clean – i.e. the book and github version are the same.
    • ahead – i.e. the book version has been modified.

      This would include a link to push the changes back to the repo

      If the author has chosen to use an old version of the file for the book and has then changed the book, this is going to create issues for github (I believe). The tool may have to detect this and suggest that the author handle this via github. Will need to explore more.

    • out of date – i.e. the github version has been modified.

      This would include a link to pull the changes from the repo and update the book.

    • both ahead and out of date

      The initial design image had this situation having links to both pull and push. Instead, this might need to be a link to “merge”. Where that would be some advice on how to use github to do the merge.

    This would be calculated by using

    • the commit dates for the file from github
      Would need to include the sha of the file to work with a particular version. This will need to be retrieved every time the book is viewed, just in case it’s been changed.
    • The “timemodified” field in mdl_book
      Which I assume is kept up to date.
  6. Initial implementation

    The main aim here is to test some of my assumptions around how the github communication will work. For now, I’m going to ignore broader questions such as the github tool’s database requirements (I’ll hard code specific information for now) and the actual import/export process.

    The focus will be on

    1. Implementing an initial status page.
    2. Getting the link in the Book administration block to change.

    Initial status page

    Aim here is that a click on “GitHub” in the Book administration block will take the author to a page that shows the status and details of the file that’s currently linked to the book. Test out the use of the github api and performance etc.

    And with a bit of kludging a connection is made and the content is displayed.

    Time now to look at the commits, start thinking about the structure of the code, and the HTML.

    Starting to put the github API calls into the lib.php file. Abstract that away hopefully.

    A lot of the data via the API is returned via JSON that is converted into hash arrays in PHP. Wondering if there’s some neat way of transforming those arrays into tables in Moodle? In PHP? Mustache templates are coming, but perhaps a bit too new to use?

    Let’s check out the Output API and renderers – but I can’t figure out how to get the render call to work. And it may not be possible, as the Book module itself doesn’t use a renderer.

    Back to more primitive approaches. After a bit of tinkering and exploring with both Moodle development and the github client, we have a version of the github book tool that is talking to github. It’s only getting some initial information from github, not yet importing anything useful into the book. It looks like this


    The “History” section is all information retrieved from github for a specific file in a specific repository. It shows a list of all the commits on that file. When the commit was made, what the commit message was, a link to the HTML page on GitHub that shows more information on the commit, and the details of the person who made the commit.

    The idea is that the github book tool will eventually

    • If the book resource is linked to one of these commits
      • Highlight which of the commits (if any) is the current link to the book resource.
      • Indicate whether the book is up to date, ahead, or behind the version in github.
      • Provide links to the github book tool to take appropriate action (push, pull etc.) based on the status.
    • If the book resource is not yet officially linked to one of these commits
      • Provide a link to the github book tool to make the connection.
    • If the github book tool couldn’t access information from github
      • Attempt to diagnose the problem and display information about why
      • Ask for github credentials if required

        Raising a whole range of issues to consider (how to store passwords, oAuth?).

    Current focus is to have a largely working prototype to show people, get feedback, and test out major sticking points. Thus, next steps to do include

    • oAuth.
      Will be required if we want people to be able to use private repositories, and should probably be required for commits back to github.
    • Working with any commit.
      Should (yes, I think) and can the github book tool allow the user to work with any commit?
    • Status.
      Have the github book tool be able to discover whether the current book is up to date, ahead, or behind the repo.
    • Responsive administration link
      Have the github book tool link in the Book administration block show the status.
    • Commit.
      Github book tool can commit to the repo. (Initially working with a simple import/export from the Moodle tables)
    • Pull.
      Book tool can pull from the repo. (Initially working with a simple import/export from the Moodle tables)
    • Identify format.
      Need to identify a format for the single file that is stored on github. A single HTML file. Will need to identify chapters and sub-chapters.
    • Implement import/export.
      Allow data to actually flow between github and Moodle.
    • Consider additional modifications.
      A straight dump from Moodle isn’t going to produce a file useful outside of Moodle. e.g. a link from one chapter to another chapter in Moodle, isn’t going to work as expected in the single file. Import/export will need to do some form of translation.

      There are other translations that may also be useful. e.g. the single HTML file might use a standard CSS link to allow display out of Moodle. Going into Moodle remove this, going out of Moodle include it. Identifying other Moodle specific links, perhaps identifying them with a specific CSS class that will make them obvious out of Moodle.
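The clean/ahead/out of date/merge decision described in step 5 ultimately reduces to comparing three timestamps. A minimal sketch (Python, with hypothetical argument names; the real tool would use mdl_book’s timemodified, the stored import time, and the latest commit date from the API):

```python
def book_status(book_modified: int, last_import: int, latest_commit: int) -> str:
    """Classify the book/github relationship from three Unix timestamps."""
    ahead = book_modified > last_import    # local edits since the last import
    behind = latest_commit > last_import   # new commits since the last import
    if ahead and behind:
        return "merge"        # both sides changed: punt to github to merge
    if ahead:
        return "ahead"        # offer a push link
    if behind:
        return "out of date"  # offer a pull link
    return "clean"
```

Whatever form the final implementation takes, keeping this decision in one small pure function should make the administration-block link and the status page easier to keep consistent.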

Visualising locations of students etc

I’ve been set a task (asked nicely, really) by my Head of School: is it possible to produce a map that will allow all and sundry to see the geographic spread of our students?

I vaguely remember doing something like this previously with Google maps, but I didn’t think it “visual” enough. @palbion identified a couple of GIS experts in another school who could probably do it. I still don’t know whether I can do it, but I’m using this as an opportunity to test the adage from Connectivism that

The capacity to know is more critical than what is actually known (Siemens, 2008)

Can I use my “capacity to know” to solve this problem?

Making a connection

Just over an hour ago I tweeted out a plea

Within minutes @katemfd tweeted and introduced me to “the Fresh Prince of Visualisation of Things on Maps”

Who replied very quickly with this advice

Making many more connections

Now all I have to do is to grok @cartodb and produce a map.

But first, perhaps check pricing and functionality. Looks like the free version will work. The small wrinkle is the absence of “private datasets”. In the last week we’ve had a couple of serious emails make the rounds about student privacy. Will have to keep that in mind.

I should filter the data a bit more, but let’s give it a go.

  1. Drag and drop data onto the page
  2. Nice interface to manipulate the data once uploaded.
  3. First problem is geo-referencing the data.
    Postcodes are in the data, but not sure if this is sufficient. Need to look at the support documentation. Looks like I might need to add country details. That’s it.
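Appending the country details could be scripted rather than typed in by hand. A sketch, assuming hypothetical postcode/count column names in the extract:

```python
import csv
import io

def add_country(csv_text: str, country: str = "Australia") -> str:
    """Append a country column so CartoDB can geo-reference the postcodes.
    Column names here are illustrative; real extracts will differ."""
    reader = csv.DictReader(io.StringIO(csv_text))
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=(reader.fieldnames or []) + ["country"])
    writer.writeheader()
    for row in reader:
        row["country"] = country
        writer.writerow(row)
    return out.getvalue()
```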

First version done. Time to filter. At this stage, I’m not going to show the visualisations given the worry about privacy.

Oh nice, the platform automatically creates different visualisations including a heat map and has wizards to modify further.

That’s produced a reasonable first go. Will need to refine it more, but enough to send off to the HoS.

That took no more than 20 minutes.

So which is more important?

The original quote is

The capacity to know is more critical than what is actually known

The above experience is actually a combination of both. The network I’ve built on Twitter – especially the brilliant @katemfd (performing as what Barabasi would call a network hub) – has provided the “capacity to know”. It helped me access someone for whom @cartodb was “actually known”.

But wouldn’t Google have worked just as well?

A couple of weeks ago I had performed a quick Google search and didn’t find @cartodb. I didn’t “actually know” about it and so I had to spend too much time figuring out how “to know”.

But even making the connection with @cbhorley wasn’t sufficient. In order to use @cartodb effectively I used a range of stuff that I already “know”.

Why should a teacher know how to code?

The idea that everyone should know how to code is increasingly dominant and increasingly questioned. In terms of a required skill that everyone should know, I remain sitting on the fence. But if you are currently teaching in a contemporary university where e-learning (technology enhanced learning, digital learning, online learning, choose your phrase) forms a significant part of what you do, then I think you should seriously consider developing the skill.

If you don’t have the skill, then I don’t know how you are surviving the supreme silliness that is the institutionally selected and mandated e-learning environment. And, at the very least, I’ve been able to convince Kate

Which means I think it’s a good step that Alex and Lisa have decided to learn a bit of “coding” as the “as learner” task for netgl. I might disagree a little about whether “HTML” counts as coding (you have to at least bring in Javascript to get there, I think), but as a first step it’s okay.

Why should a (e-)teacher know how to code

(Sorry for using “e-teacher”, but I needed a short way to make clear that I don’t think all teachers should learn to code. Just someone who’s having to deal with an “institutionally selected and mandated e-learning environment” and perhaps those using broader tools. I won’t use it again)

What reasons can I give for this? I’ll start with these

  1. Avoid the starvation problem.
  2. Avoid the reusability paradox.
  3. Actually understand that digital technologies were meant to be protean.
  4. Develop what Shulman (1987) saw as the distinguishing knowledge of a teacher.

The starvation problem

Alex’s reasons for learning how to code touch on what I’ve called the starvation problem with e-learning projects. Alex’s description was

our developers work with the code. This is fine, but sometime……often, when clients request changes to modules they have paid tens-of-thousands of dollars for, I feel the developers’ time is wasted fixing simple things when they could be figuring out how to program one of the cool new interactions I’ve suggested. So, if I could learn some basic coding their time could be saved and our processes more efficient.

The developers – the folk who can actually change the technology – are the bottleneck. If anything needs to change you have to involve the developers and typically most institutions have too few developers for the amount of reliance they now place on digital technologies.

In the original starvation problem post I identified five types of e-learning projects and suggested that the reliance on limited developer resources meant that institutions were flat out completing all of the necessary projects of the first two types. Projects of types 3, 4, and 5 are destined to be (almost) always starved of developer resources. i.e. the changes to technology will never happen.

  1. Externally mandated changes.
  2. Changes arising from institutional strategic projects.
  3. Likely (strategic) projects that haven't yet registered on senior managers' radars.
  4. Projects that only a sub-set of institutional courses (e.g. all of the Bachelor of Education courses) will require.

     How can we be one university if you have different requirements?

  5. Changes specific to a course or pedagogical design.

For a teacher, it's type 4 and 5 projects that are going to be of immediate interest. But these are also the projects least likely to be resourced, especially if the institution is on a consistency/"One University" kick where the inherent diversity of learning and teaching is seen as a cost to be minimised rather than an inherent characteristic.

Avoid the reusability paradox


The question of diversity and its importance to effective learning (and teaching) brings in the notion of the reusability paradox. The Reusability Paradox stems from the idea that the pedagogical value of a learning object (something to learn with/from) arises from how well it has been contextualised, i.e. how well it has been customised for the unique requirements of the individual learner. The problem is that there is an inverse relationship between the pedagogical value of a learning object and the potential for it to be reused in other contexts.

The further problem is that most of the e-learning tools (e.g. an LMS) are designed to maximise reuse. They are designed to be used in many different contexts (the image to the right).

The problem is that in order to be able to maximise the pedagogical value of this learning object I need to be able to change it. I need to be able to modify it so that it suits the specifics of my learner(s). But as we’ve established above, the only way most existing tools can be changed is by involving the developers. i.e. the scarce resource.


Unless of course you can code. If you can code, then you can: write a module for Moodle that will allow students to use blogs outside of Moodle for learning; write a script that will allow you to contact students who haven't submitted an assignment; develop a collection of tools to better understand who and how learners are using your course site; or mutate that collection of tools into something that will allow you to have some idea what each of the 300+ students in your course are doing.

Understand the protean nature of digital technologies

And once you can code, you can start to understand that digital technologies aren't meant to be Procrustean tools "designed to produce conformity by violent or ruthless methods". Instead you can understand the important points made by people such as the gentlemen to the left – Doug Engelbart and Alan Kay. For example, Kay (1984) described software as the "most protean of media" and suggested that it was obvious that

Users must be able to tailor a system to their wants (p. 57)

The knowledge base for teaching

Shulman (1987) suggested that

the key to distinguishing the knowledge base of teaching lies at the intersection of content and pedagogy, in the capacity of a teacher to transform the content knowledge he or she possesses into forms that are pedagogically powerful and yet adaptive to the variations in ability and background presented by the students (p. 15)

If the majority of the teaching you do is mediated by digital technologies, then doesn't the ability to transform those digital technologies count as part of the "knowledge base of teaching"? Isn't coding an important part of the ability to perform those transformations? Shouldn't every teacher have some ability to code?

I'm not ready to answer those questions yet, still some more work to do. But I have to admit that it's easier (and, I believe, more effective) for me to teach with the ability to code than it would be without that ability.


Kay, A. (1984). Computer Software. Scientific American, 251(3), 53–59.

Shulman, L. (1987). Knowledge and teaching: Foundations of the new reform. Harvard Educational Review, 57(1), 1–21.

Understanding learning as network formation

This is a follow on from yesterday’s post weaving in a few posts from netgl participants.

Learning as a (common) journey

Rebecca uses emojis to illustrate a fairly typical journey through netgl (and a few of the other courses I teach), as confirmed by the comment from another Rebecca (there are eight participants in the course and three of them are Rebeccas).

One of the turning points for Rebecca (who wrote the post) was a fairly old-fashioned synchronous session held in a virtual space

But then I attended the online chat session, clarified where I was supposed to be heading

Rebecca links this to

when things are deemed too difficult, people tend to revert to coping strategies. In this case, it was good ol’ face to face talking (OK…admittedly online and not in the ‘true’ sense…) to achieve direction out of the online maze.

Aside: I'm wondering if the journey metaphor is just a bit too sequential. Perhaps it's illustrative of our familiarity and comfort with the sequential, rather than the complexity and inter-connectedness that arise from a network view.

The problem of being disconnected

I think there’s some connection between Rebecca’s struggles with something new and the experience of Lisa’s 11 year-old during a blackout

I found myself with a crazy bored eleven-year-old on my hands who was pacing the house saying ‘when’s the power coming back on, when’s the power coming back on’. His level of anxiety at being disconnected was incredibly sobering.

I also wonder whether the relief Rebecca got from “good ol’ face to face talking” is related to Lisa’s experience of the blackout

It was lovely, not just to switch off from the noise and chaos, but from the words as well – as you say, time for the diffuse mode to kick in and allow moments of quiet reflection

Learning as network formation

As mentioned in yesterday’s post, at some level networked learning is about the idea that what we know is actually (or at least fruitfully represented as) a network. Yesterday’s post pointed to brain research that is based on the brain being a network. It also drew on Downes’ writing on connectivism which has the view

learning is the formation of connections in a network

From this perspective, you might suggest that Rebecca and Lisa's 11 year-old have already formed networks (i.e. learned) for coping with familiar situations like a face-to-face session or a Saturday with electricity. But they haven't yet formed networks to deal with the new and unexpected situation, meaning they have to start forming them. Starting with their existing networks, they need to make new connections to different ideas and practices; figure out whether any existing connections need to be questioned as not necessarily the only option (e.g. spending all day on the computer, learning via traditional modes); and test out some of the nascent connections to see if they work as expected.

This type of network formation is hard, especially as the number and diversity of the new connections you have to make increase. Learning how to learn online in an xMOOC, which consists of lots of small video-taped lectures with a set, sequential syllabus stored in one place, is a lot easier than learning how to learn online in a cMOOC that isn't taking place in one place and expects you to figure out where you want to go.

How do I know? How do I keep up?

In the midst of getting their heads around the different approach to learning taken in netgl, quite a few folk have raised the question "how do I keep up?" I saw it first in another of Rebecca's posts in the form of this question

how, once I graduate from being a formal student and progress into the world of teaching (in whichever form that may take), on Earth do I keep up with all the new programs, networked learning, social media hookups that seem to pop up hourly that I need to contend with?

Charm has shared via the netgl Diigo group a link to and some comments on Kop and Hill (2008), which includes this on connectivism

Connectivism stresses that two important skills that contribute to learning are the ability to seek out current information, and the ability to filter secondary and extraneous information. Simply put, “The capacity to know is more critical than what is actually known” (Siemens, 2008, para. 6).

Rebecca, I think this quote gives you a “network” answer to your question. Your ability to “keep up” (to know, to learn) is what is important, not what you know.

I should also mention this next point from Kop and Hill (2008) which I think is often overlooked

The learning process is cyclical, in that learners will connect to a network to share and find new information, will modify their beliefs on the basis of new learning, and will then connect to a network to share these realizations and find new information once more. Learning is considered a “. . . knowledge creation process . . . not only knowledge consumption.”

“To know” versus “actually known”

In responding to Rebecca’s post, Alex asks

So, given the population is ceasing its reliance on fundamental knowledge and increasing its dependence on immediate information, do you think field-specific academics will remain a valuable entity, as they hold deep information on specific areas?

This touches on the debate between those who believe "to know" is more important and those who believe that "actually known" still matters. A debate that is on-going (for some) and for which I have to admit to not having any links. My inability to provide links to the "actually known" folk is perhaps indicative of my own networks and prejudices.

Implications for teachers?

It is a debate that raises questions about the role of the teacher. Lisa’s search for a metaphor for the teacher role had her pondering: sage, guide, or grandmother. Grandmother being a link to the work of Sugata Mitra (in a comment I pointed Lisa to a critique of Mitra’s work).

In terms of "guide on the side", Lisa writes

the role of the guide on the side becomes less about “being the facilitator who orchestrates the context”, as Alison King described in the nineties, and more about helping students to develop the tools and skills needed to hear and decipher a coherent message from the cacophony of information available to them.

Personally, I have an affinity for McWilliam’s (2009) concept of the “meddler in the middle” which points toward a more

interventionist pedagogy in which teachers are mutually involved with students in assembling and/or dis-assembling knowledge and cultural products

Which could perhaps be re-phrased as “mutually involved with students in the formation of their networks”.

I’ll end with Downes’ slogan that describes what he sees as the teacher and learner roles which seems to align somewhat with that idea

To ‘teach’ is to model and demonstrate. To ‘learn’ is to practice and reflect.

Testing the Lucimoo epub export book tool

There’s movement afoot. The Lucimoo epub export tool for the Moodle book module is going through the process of being tested (and perhaps installed) on my institution’s main Moodle instance. What follows is a bit of testing of that tool in the institution’s test environment.

Verdict: all works, a few changes to practice to leverage it properly.

Import a few books

First step is to import a few books into the bare course site within the test environment. Just a few random books from my main course. Something that’s much easier now that @jonof helped identify some lost knowledge (and my oversight/mistake).

Of course it is never perfect. The default setting on the test environment is to use the GUI editor, which removes links to CSS files. Which is a real pain.

Doing an export

Once in the book select the administration/settings block and hey presto, there’s the option to “Download as ebook”


Select that option and I get the option to download the ePub file or view it in iBooks.

As reported earlier the ePub contains a few errors because apparently the original HTML content in my Book resource doesn’t always meet ePub’s stricter requirements. The bugs I had to fix included

  • Missing ending tag for an image (produced by ImageCodr).
    Of course it appears that the over-reaching default HTML editor in Moodle is automatically removing the /> I'm putting at the end of the <img> tag. I've had to change my preference to the plain text area to get that fixed.

    God I hate tools that assume they know better than I what I want to do and won't let me override their assumptions.

  • It appears that it doesn't like the nbsp entity either.
    There is some blather about this online, but I don't have the time to dig in. For now I'll remove the nbsp entities.
  • “Opening and ending tag mismatch: br line 0 and div”
    Replace <br> with <br />

    And all this so far is largely in code auto-generated by ImageCodr

  • “Opening and ending tag mismatch”
    An issue with the relationship between P and BLOCKQUOTE tags about which I'd been somewhat lazy. Yay, that's the first page.
  • The spacing around the image isn’t great.
  • “Specification mandate value for attribute allowfullscreen”
    A YouTube embed that doesn’t meet expectations.
  • The videos don’t show.
    There is a space for the embedded YouTube video, but it is empty. Will need to figure out a way to fix this, especially in this test book, which has a lot of videos in it.
  • Missing styling.
    In this book I use a bit of CSS to style elements such as activities. The ePub version is currently not showing that styling, though the "Print this book" version does. Ahh, that's caused by the magical CSS-chomping GUI editor. Fixed.
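A possible way to avoid fixing these by hand would be to run each chapter's HTML through PHP's DOM before export, since serialising as XML produces the self-closed tags ePub expects. A rough sketch of the idea (my own, not part of the Lucimoo tool):

```php
<?php
// Rough sketch (not part of the Lucimoo tool): normalise a chapter's HTML so
// it meets ePub's stricter XHTML rules - self-closed <br/> and <img/>, and
// no &nbsp; entities.
function epub_clean($html) {
    // ePub chokes on the nbsp entity, so swap it for a plain space first.
    $html = str_replace('&nbsp;', ' ', $html);

    $doc = new DOMDocument();
    libxml_use_internal_errors(true);   // tolerate the sloppy input HTML
    $doc->loadHTML('<?xml encoding="utf-8"?><div>' . $html . '</div>');
    libxml_clear_errors();

    // saveXML() serialises as well-formed XML: <br> comes back as <br/>.
    $out = '';
    foreach ($doc->getElementsByTagName('div')->item(0)->childNodes as $node) {
        $out .= $doc->saveXML($node);
    }
    return $out;
}

print epub_clean('A&nbsp;video<br>follows');   // prints: A video<br/>follows
```

It wouldn't fix everything above (the YouTube embeds and CSS links are separate problems), but it would catch the unclosed-tag and entity errors automatically.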

You can view the final ePub file and also a PDF produced by “printing the book”.

The layout of the PDF isn’t great. It does at least show some visual evidence of the videos. Though it’s not very useful.

Test the Assessment book

Assessment is of course what is uppermost in the minds of students, so I should test that book. I don't have it in my nice offline format, so will have to explore the Moodle backup and restore process.

Again, a slightly different collection of HTML “strictness” problems. Given the size of the assessment book, there are surprisingly few of them.

The major problem here is that my “macro” approach that relies on jQuery to update due dates and related information obviously won’t work with ePub. Wonder if the filter approach will work with the ePub export tool?

View the results on a mobile device

One of the main benefits of the ePub format is that it is supposed to play nicely with mobile devices, hence testing the files on a mobile device (my phone) seems sensible. Observations and problems include

  • The Flickr image on the front page isn't showing up. There is a link I can click on, but the image is not embedded in the book. Wonder if that's a config option in iBooks?
  • The CSS styling on tables for Assessment doesn’t appear to work.
    It does in iBooks on the laptop, but not on the phone. In much the same way that the images work on laptop, but not phone.
  • Neither does the table of contents. Actually, that appears to be an issue with internal links being added into the ToC and some of these being incorrect.

Problems to be explored at a later date, not show stoppers, just part of learning the details of a new ecosystem.

There’s more to it than the Internet and social software

The following is a bit of reflection and curation of various posts from participants in the netgl course. There'll be a few of these coming. The aim for this post is to suggest that there might be more to the "networked" part of Networked and Global Learning than just the Internet and social media. This is an important point to make because the interventions designed by the folk from last year's offering of the course were a little too limited in their focus on the Internet and various forms of social media.

At some level, the argument here is similar to the one from this post titled “Why everything is a network” i.e. not that everything is a network, or that a network is the only metaphor by which to understand a whole range of situations. It is to suggest, however, that a network is a useful model/metaphor through which to understand and guide interventions in a range of situations.

And this is a view that can trace its origins beyond just learning, teaching and education. Barabasi (2014) writes

Networks are present everywhere. All we need is an eye for them.

and then goes on to show how a network perspective provides ways to understand topics as diverse as: the success of Paul in spreading Christianity; how to cure a disease; and the rise of terrorism. Leading to the

important message of this book: The construction and structure of graphs or networks is the key to understanding the complex world around us. Small changes in the topology, affecting only a few of the nodes or links, can open up hidden doors, allowing new possibilities to emerge (p. 12)

Changes in purchasing books

For example, in thinking about the future of Tertiary education Lisa talks about changes in the publishing industry, including her own behaviour around purchasing books

as this industry seems to be floundering and I only have to look at my own behaviour as a consumer to see why. As a book consumer, I can say I do still read, but I get my books from the places that are cheapest and easiest for me – Amazon and Audible (owned by Amazon). Why would I spend $45.00 on a hard-copy book from a shop when I can listen to it on the way to work by paying an audible credit that costs less than $13.00? Why would I order a book from a retailer that may take months to arrive that I can download to my Kindle app instantly – and cheaply?

Changes that I observe in my own practice. But also more than that. The Barabasi quote from above is from the Kindle version of the book I purchased. I read that book mostly while traveling to and from Wagga Wagga using my phone. Highlighting bits that were relevant to me and making annotations as I went. In writing this post, I’ve started up the Kindle app on my Mac, synced with Amazon, and was able to view all my annotations and highlights. Not only that, I was able to also see the popular highlights from other people.

The experiences of both Lisa and me illustrate how digital books are making it easier to create links or connections between nodes. Both Lisa and I find it much easier to "connect" (i.e. buy) a book via the combination of Amazon and the Kindle apps, not only in terms of price, but also in terms of speed. Having the content in a digital form that can be manipulated also helps make links to specific parts of the book.

Barabasi (2014) writes

Nodes always compete for connections because links represent survival in an interconnected world.

Amazon is currently winning a large part of the publishing “war” because it is making the ability to “link” to a book or other publication much easier. The more links it is able to create, the more likely it will be able to survive.

What if there isn’t a network?

Angela ponders “The challenge of networked learning when there is no Network…” as she enjoys a weekend away from Internet connectivity and apparently no ability to engage in netgl. Of course, Angela has forgotten that she had taken along one of the most complex networks we currently know, her brain.

The Connected Brains website makes prominent use of this quote from Tim Berners-Lee

There are billions of neurons in our brains, but what are neurons? Just cells. The brain has no knowledge until connections are made between neurons. All that we know, all that we are, comes from the way our neurons are connected.

The website then goes on to trace some of the history and research that seeks to understand the brain as a complex network.

In a post titled "Connectivism as a learning theory", Stephen Downes makes the connection between the view of the brain as a network, a weakness in other theories of learning, and how connectivism addresses this by viewing learning as "the formation of connections in a network".

In closing

…much more to come, it’s been a fruitful week for netgl blogging.

But the point here is that the "network" part of netgl is much more than just social software and the Internet. These are perhaps the most visible parts of netgl to the participants, but they aren't the only examples of netgl, nor are they required for it.

github and the Moodle book – Step 2

The continuing story of linking github and the Moodle book module. Following on from step 1, the main aim here is to grok the PHP client for the GitHub API I've currently chosen.

Some additional work to be done includes

  1. Consider use of branches etc
  2. Ponder whether to work only with releases – or more openly as listed below.
    Releases are more directly supported by the PHP client, but working directly with content may be a little more flexible. But releases are perhaps more in line with expectations? Perhaps this is a question to answer by looking at the ways other similar projects work.

    At this stage, I sort of see using the book to modify the repo as something that happens prior to a release.

  3. Looks like storing the sha of the file in a local Moodle database will be necessary to help with checking statuses etc.

How to (if)?

I've got it installed and working from command-line PHP scripts. Now I need to figure out how to use it.

  1. Does the file exist in the repo?
    Getting the content should return a 200 status code and “type: file” if it is a file, but it will also return the content of the file.
  2. Create a new file
    API: PUT /repos/:owner/:repo/contents/:path
    Initial implementation in PHP working.
  3. (fetch) Get the content for the file.
    API – GET /repos/:owner/:repo/contents/:path
    Initial implementation in PHP working.
  4. (push) Update the file with new content.
    API: PUT /repos/:owner/:repo/contents/:path
    Initial implementation in PHP working.
  5. What is the status of the file in the repo?
    What do I actually mean by status? The full history? Still need to find what, if anything in github/git/the API provides this.
  6. What is the relationship between the content/status of the file in the repo and the content in the book?
    Looks like it’s available via the same call.
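To make the first and third items above a little more concrete, here's a hedged sketch of unpacking the JSON the Contents API returns. The `type` and `content` field names come from the API documentation; the sample response body is made up for illustration:

```php
<?php
// Sketch of unpacking a Contents API response. The "content" field is
// base64 with embedded newlines, which base64_decode tolerates by default.
function decode_contents_response($json) {
    $data = json_decode($json, true);
    if (!isset($data['type']) || $data['type'] !== 'file') {
        return null;    // a directory or symlink, not a file
    }
    return base64_decode($data['content']);
}

// An illustrative (made-up) response body:
$sample = '{"type":"file","content":"SGVsbG8g\nd29ybGQ="}';
print decode_contents_response($sample);   // Hello world
```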

How does it work

Would help if I understood the model that it uses. Some of the example code includes something like

$commits = $client->repos->commits->listCommitsOnRepository($owner, $repo);

The question is whether or not there is any pattern in common between this and the github API. I assume there is and grokking that pattern should lead to understanding how to use the API.

The assumption is that the client provides a method to access the API and hence the pattern of methods etc should match.

In the GitHub api is there an equivalent to listCommitsOnRepository? And is it found in something within a hierarchy of repos/commits?

There does appear to be a match. The heading List commits on a repository seems to match, and it's found within repos/commits.

Can I apply this to get the contents of a file?

The GitHub API defines it here

  1. Title – "Get contents", meaning the method is getContents?
  2. Structure is Repositories/Contents
  3. parameters – owner, repo, path

Leading to something like

$commits = $client->repos->contents->getContents($owner, $repo, $path);

Let’s see if I can write code to retrieve the contents of this file from GitHub.

Mmm, getting undefined method for getContent(s).

Let’s dig into the code. GitHubClient class creates the various objects.

What does GitHubRepos contain? There is a link "contents" (GitHubReposContents) as expected. But it apparently only gets the readme!!!

Which does work. But it begs the question: where's the rest?

One fallback would be to call request directly – getReadMe is implemented via

return $this->client->request("/repos/$owner/$repo/readme", 'GET', $data, 200, 'GitHubReadmeContent');

That appears to work. Now the question is whether I can get the content. Yep, there is a method that will do that. But it's still encoded in base64; base64_decode will fix that. The rough code that's working follows.

Code for get a file

The following only works if the repository is open. The major kludge here is the use of GitHubReadmeContent as the last parameter in the request. This parameter appears to define the type of object returned by request. It appears to work (for now) because the Readme is just another file, hence the various members etc. are directly applicable.

A final version should use getType to check that the type of content returned is a file, and not a symlink or directory.

$owner = 'djplaner';
$repo = 'edc3100';
$path = 'Who_are_you.html';

$client = new GitHubClient();

$data = array();
$response = $client->request( "/repos/$owner/$repo/contents/$path", 'GET', $data, 200, 'GitHubReadmeContent'   );

print "content is " . base64_decode( $response->getContent() );

Creating a new file?

At this stage, I'm thinking I'll stick with the approach of using request directly. Mainly because the GitHub API indicates this is part of Contents, and it already appears that the client's contents object doesn't include support for the method. Yep, not there.

PUT /repos/:owner/:repo/contents/:path will do it. But it also lists other required parameters: message (the commit message) and content. Plus committer and branch as optional. Plus this is also likely going to require credentials.

Yep, 404 error. Credentials required. Put in what I think is the required code and get a 422, which indicates an invalid field. The API documentation suggests content is required. Best provide some.

And that appears to work. At least the file was created on GitHub. But it got a 201 back rather than a 200, which is actually what the documentation says should happen. Another quick test.

That’s better and the 2nd file is created. This code is listed below.

An example of the PHP client appears to be using releases as a way to upload (or create) a new file.

Code to create a file

Much the same limitation as above – i.e. is GitHubReadmeContent really the best value for the last parameter?

Will also need to look at handling exceptions (e.g. when the response code is different).

$owner = 'djplaner';
$repo = 'edc3100';
$path = 'A_2nd_new_file.html';
$username = 'djplaner';
$password = 'some password';

$client = new GitHubClient();
$client->setDebug( true ); # this is a nice little view
$client->setCredentials( $username, $password );

$content = "This will be the content in the second file. The 1st time";

$data = array();
$data['message'] = 'First time creating a file';
$data['content'] = base64_encode( $content );

$response = $client->request( "/repos/$owner/$repo/contents/$path", 'PUT', $data, 201, 'GitHubReadmeContent' );

Update a file

Going to stick with the same method. In essence, this should be an almost direct copy of the code above. Ahh, one difference. There is an additional required parameter – sha – “The blob SHA of the file being replaced”. This will be something that needs to be gotten from git – it’s returned by getting content. Wonder if there’s a get status?

That appears to be working

Code to update file

$owner = 'djplaner';
$repo = 'edc3100';
$path = 'A_2nd_new_file.html'; # an existing file
$username = 'djplaner';
$password = 'some password';

$content = "This will be the content in the second file. The 4th time";

$client = new GitHubClient();
#$client->setDebug( true );
$client->setCredentials( $username, $password );

$sha = getSha( $client, $owner, $repo, $path ); # get the content to get sha

$data = array();
$data['message'] = 'First time creating a file - Update 4';
$data['content'] = base64_encode( $content );
$data['sha'] = $sha;
$data['committer'] = array( 'name' => 'David Jones', 
                            'email' => 'some email' );

$response = $client->request( "/repos/$owner/$repo/contents/$path", 'PUT', $data, 200, 'GitHubReadmeContent'   );

print_r( $response );
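An aside on that sha: for a file, the sha the Contents API returns is (as far as I can tell) git's blob SHA, which git computes from a small header plus the raw bytes. Assuming that's right, it could also be computed locally from the content, avoiding an extra GET just to fetch it:

```php
<?php
// Git's blob SHA is sha1 over a "blob <size>\0" header plus the raw bytes.
// (My assumption: the sha field the Contents API returns is this blob SHA.)
function git_blob_sha($content) {
    return sha1("blob " . strlen($content) . "\0" . $content);
}

// Matches `printf 'hello\n' | git hash-object --stdin`
print git_blob_sha("hello\n");   // ce013625030ba8dba906f756967f9e9ca394464a
```

That would also help with the earlier idea of storing the sha in a local Moodle table to compare the book's content against the repo.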


A separate part of the API seems to deal with statuses. It works on the sha.

It seems the PHP client has a repos-based method to access this: listStatusesForSpecificRef.

Mmmm, this doesn’t look like it will do what I want at all. More searching required.

Bringing github and the Moodle book module together – step 1

The following is the first step in actually implementing some of the ideas outlined in an earlier post about bringing github and the Moodle Book module together. The major steps covered here are

  1. Explore the requirements of a book tool.
  2. Name and set up an initial book tool.
  3. Figure out how to integrate github.

A book tool

The Moodle book module is part of core Moodle. Changing core Moodle is (understandably) hard. Recently, I discovered that there is a notion of a Book tool. This appears to be a simple “plugin” architecture for the Book module. People can add functionality to the Book module without it being part of core. The current plan is that the github work here will be implemented as a Book tool.

What does that mean? My very quick search doesn't reveal any specific information. The book tool page within the list of plugin types in the Developer documentation is missing, suggesting that perhaps what follows should be added to that page.

The plugin types page describes book tools as

Small information-displays or tools that can be moved around pages

Which is perhaps not the best description given the nature of the available Book tools.

The tool directory

The book tools appear to reside in ~/mod/book/tool. Each tool has its own directory, apparently with all the fairly common basic requirements in terms of files.


The Book module's lib.php calls get_plugin_list('booktool') in various places

  • book_get_view_actions
  • book_get_post_actions
  • book_extend_settings_navigation

The first two look for matching functions (e.g. book_plugin_get_post_actions) in the book tool’s lib.php which get called and then used to modify operations.

The settings navigation is where the changes to the settings/administration block get made, and that's how the author gets access to the booktool's functionality.
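Based on that pattern, a minimal booktool lib.php might look something like the following. This is a guess inferred from the hooks described above (the booktool_pluginname_* naming and the action lists are my assumptions, not checked against Moodle's developer docs):

```php
<?php
// Hypothetical sketch of a booktool's lib.php hooks, inferred from the
// get_plugin_list('booktool') calls described above.

// Actions the Book module should log as "views" for this tool.
function booktool_github_get_view_actions() {
    return array('view github status');
}

// Actions the Book module should log as modifications.
function booktool_github_get_post_actions() {
    return array('push to github', 'fetch from github');
}
```

The third hook, booktool_github_extend_settings_navigation, would then add the tool's link to the settings/administration block.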

Naming and getting it started

The plan seems to be to

  1. Create a new github repository for the new book tool
  2. Copy and edit an existing book tool to get started.
  3. Figure out how to slowly add github functionality.

Creating the booktool github repository

The repository will need to be called moodle-booktool_pluginname. What should the plugin name be?

I’ll start with github. Existing tools tend to include a verb e.g. print, exportepub, importepub, exportimscp. So this may be breaking a trend, but that can always be fixed later.

And then there was a repository.

Clone a local copy.

Copy the contents from another book tool and start editing

And take a note of work to do on the issues section of the github repository.

Updated the icon. Wonder if that will work as is?

Login to local moodle. It has picked up the new module and is asking to install. That appeared to work. Now what happens when I view a book resource? Woohoo, that works.

It doesn’t do anything useful beyond displaying the availability of GitHub (with the nice icon).
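For the record, the reason the local Moodle "picked up" the new module is the plugin’s version.php. A minimal one for the hypothetical booktool_github might look something like this (the version numbers below are assumptions, not the actual values used):

```php
<?php
// version.php for the hypothetical booktool_github plugin.
// Moodle reads this file to detect the plugin and prompt for installation.

defined('MOODLE_INTERNAL') || die();

$plugin->component = 'booktool_github';  // Full name: plugintype_pluginname.
$plugin->version   = 2014080100;         // Plugin version (YYYYMMDDXX).
$plugin->requires  = 2013111800;         // Minimum Moodle version (2.6, as an assumption).
```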

Early success

Push that code back to the repository.

How to integrate github

Time to actually see if it can start talking to GitHub and how that might be achieved.

Initial plan for this is

  1. Hard code details of the GitHub repository and credentials for a single Book module.
  2. Implement the code necessary to update the link in the settings block based on whether the book is up to date with the repository.
  3. Implement an index.php to display various status information about the current repository and book.
  4. Implement the fetch and push functions.

    From here on a lot more thought will need to be given to the workflow.

  5. Implement the interface to configure the repository/credentials.
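Step 1 of the plan could be as crude as a hard-coded configuration object. Every value below is a placeholder; eventually these would come from a per-book configuration store (step 5).

```php
// Step 1 sketch: hard-coded repository details and credentials for a
// single Book module.  All values are placeholders for illustration.
$config = new stdClass();
$config->owner  = 'someuser';          // GitHub account owning the repo.
$config->repo   = 'somerepo';          // Repository name.
$config->path   = 'book.html';         // File in the repo linked to this book.
$config->branch = 'master';            // Branch to fetch from / push to.
$config->token  = 'OAUTH-TOKEN-HERE';  // Personal access token (never commit this!).
```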

All of which begs the question.

How to talk to the GitHub API

The assumption underpinning all of this is that the tool will use the GitHub API to access its services. Moodle is written in PHP, so I’m looking for a PHP-based method for talking to the GitHub API.

There’s no clear winner, so time to do a comparison

  • Scion: Wrapper – initial impressions good. Uses cURL, but requires other “scion”-based code.
  • KnpLabs API – requires another library for the HTTP requests. Not a plus.
  • tan-tan-kanarek version – looks OK. No mention of other requirements.

Let’s try the last one. Installed, and it’s all working. Now I only need to grok the API and how to use it from PHP.

The focus here is on an individual file: the book will be connected to a single file in the repository.

Most of these requests seem linked to the Contents section of the API, part of Repositories.

Actions required

  1. Does the file exist in the repo?
    Getting the content should return a 200 status code and “type: file” if it is a file, but it will also return the content of the file.
  2. Create a new file.
    API: PUT /repos/:owner/:repo/contents/:path
  3. (fetch) Get the content for the file.
    API: GET /repos/:owner/:repo/contents/:path
  4. (push) Update the file with new content.
    API: PUT /repos/:owner/:repo/contents/:path
  5. What is the status of the file in the repo?
  6. What is the relationship between the content/status of the file in the repo and the content in the book?
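Bypassing any wrapper library for a moment, the fetch and push actions above can be sketched directly against the Contents API with plain cURL. This assumes the hard-coded $config object from earlier (owner, repo, path, token are all placeholders) and omits error handling; the Contents API returns the file content base64 encoded along with a blob sha, which must be supplied when updating the file.

```php
<?php
// Sketch of fetch (action 3) and push (action 4) via the GitHub
// Contents API, using plain cURL.  $config is the hypothetical
// hard-coded configuration; no error handling.

function booktool_github_api($config, $method, $body = null) {
    $url = "https://api.github.com/repos/{$config->owner}/{$config->repo}" .
           "/contents/{$config->path}";
    $ch = curl_init($url);
    curl_setopt_array($ch, array(
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_CUSTOMREQUEST  => $method,
        CURLOPT_USERAGENT      => 'moodle-booktool_github',  // GitHub requires a User-Agent.
        CURLOPT_HTTPHEADER     => array("Authorization: token {$config->token}"),
    ));
    if ($body !== null) {
        curl_setopt($ch, CURLOPT_POSTFIELDS, json_encode($body));
    }
    $response = json_decode(curl_exec($ch));
    curl_close($ch);
    return $response;
}

// Fetch: content comes back base64 encoded, with a blob "sha".
$file    = booktool_github_api($config, 'GET');
$content = base64_decode($file->content);

// Push: updating an existing file requires the current sha;
// creating a new file (action 2) is the same PUT without the sha.
booktool_github_api($config, 'PUT', array(
    'message' => 'Update from Moodle book',
    'content' => base64_encode($newcontent),
    'sha'     => $file->sha,
));
```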

Running out of time. Will have to come back to this another day for Step 2.