Via a very roundabout method I came across an editorial from the journal “Computers in the Schools” (Maddux and Johnson, 2012) titled “External Validity and Research in Information Technology in Education”.
The problem is that “very few educators would argue that information technology has succeeded in bringing about a paradigm shift in instruction” (Maddux and Johnson, 2012, p. 249). This links to the observation by Prof Mark Brown in the image below and to my own perception of university e-learning.
Maddux and Johnson (2012) argue that while there are many complex reasons for this problem, one of the major contributors is limited external validity, defined as “the extent to which results of a study or a given program development project can be assumed to apply to other people in other places and at other times” (p. 250). They suggest that research – especially research around ICTs in education – tends to occur in situations that are not representative of the “average” setting.
They go on to argue:
Specifically, we believe that we continue to design, implement, and test programs and methods that work well when used by master teachers in classrooms and schools where personnel are strongly committed to the success of these programs and methods. Then, practitioners put these programs and methods into place for use by average teachers in settings in which not everyone is highly committed to success and where some individuals or groups are apathetic toward information technology or may even be in opposition to the use of technology in education. (Maddux and Johnson, 2012)
This links to the distinction made by Goodyear (2009) between the “long arc” and “short arc” approaches to nationally funded learning and teaching projects (a close cousin of ICT in education research). Most of these projects focus on the “long arc” and imagine the (master) teacher as someone with the time and insight to proactively plan and design their next course offering, as opposed to the more typical “short arc” approach, which is more reactive or just-in-time.
Leaders and managers as master teachers
This assumption of master teachers, or of the long arc, gets carried across into institutional learning and teaching because the leaders and managers of such operations (many central L&T and e-learning folk fall into this category as well) will tend to see themselves as “master teachers”. They will tend to be experienced, to see themselves as good teachers, and many of them will be. But they will also tend to be very different from their colleagues: what works for them will not work for their colleagues.
The assumption of external validity
Add to this the belief amongst some of them that there must be external validity – that their teaching model, or that of someone else, is applicable across the board. Subsequently, their role as a leader or manager becomes the rolling out of that model. That is, they become like the researchers mentioned by Maddux and Johnson (2012):
Those who are charged with delivering services in grant-supported projects are almost always advocates and experts in the use of the kinds of programs they are using. Such individuals have a tendency to work tirelessly toward proving the efficacy of their programs. (pp. 250-251)
The blame the teacher approach
This discrepancy in outlook can result in the “blame the teacher” approach. Rather than value the difference that means their pet approach doesn’t work, leaders and managers will most often blame the teacher: it failed because they didn’t try hard enough. Even Maddux and Johnson (2012) lean toward this mistake when they characterise the “average teachers” above as not committed to success, apathetic, and even in opposition.
The “blame the teacher” approach goes back at least as far as Pressey and his teaching machines in the 1920s.
Built-in assumptions of external validity
The link between “blame the teacher”, external validity, and educational technology extends (or is perhaps made worse) with the types of enterprise systems being adopted by universities (e.g. the LMS, the eportfolio) and the assumptions universities make when rolling these systems out – for example, that it is more efficient if everyone uses the same system, as in principle number 1 of one Enterprise Architecture policy: “Decisions are made to provide maximum benefit to the enterprise as a whole”. That is, the LMS evaluation process identified this as the best LMS, so it will work well for everyone. It’s the same thinking that underpins consistent website standards.
Which leads to Kaplan’s law of the instrument
If educational technology companies (and Centers for Teaching and Learning) are eager to improve education, rather than searching for problems to apply their solutions, they should focus on identifying problems and designing solutions to those problems. (Veletsianos)
If a university has installed Moodle (insert your favourite tool), then it is only efficient and rational for the people employed to support learning and teaching to rely heavily on Moodle (or that other tool) as the “solution” in search of problems.
Is external validity a good idea?
While external validity may be appropriate for certain types of research project, is it an appropriate concern for institutional e-learning? Veletsianos’ suggestion that the focus be on “identifying problems and designing solutions” points away from external validity and toward context-specific requirements.
Maddux, C. D., & Johnson, D. L. (2012). External Validity and Research in Information Technology in Education. Computers in the Schools, 29(3), 249–252. doi:10.1080/07380569.2012.703605