Category Archives: information systems

Is IT a service industry, or is it “eating the world”?

In an earlier post I wondered whether the way high school classes in Information Technology (IT)/Computer Science (CS) are being taught is turning students off and, if so, whether this is why enrolment numbers are dropping. In the comments on that post Tony suggests some other reasons for this decline, including the observation that IT courses in local schools (both Tony and I live in Queensland, Australia) are primarily seen to serve the needs of students who want to be IT professionals. The further suggestion is that since

IT is a service-based industry, there only needs to be 5%-10% of the population focused on it as a profession

Now I can agree somewhat with this perspective. It matches some of what I observe. It also reminds me of Nicholas Carr's 2003 Harvard Business Review article titled "IT doesn't matter", which included the following

The point is, however, that the technology’s potential for differentiating one company from the pack – its strategic potential – inexorably diminishes as it becomes accessible and affordable to all

Instead of being strategic, Carr sees IT becoming infrastructure, somewhat like electricity.

The rise of the cloud seems to reinforce this perspective. Increasingly there is no strategic advantage in an institution having its own guru systems administrators running servers and managing networks. Instead it can outsource this to "the cloud" or, more often, to service providers. For example, a number of Australian universities have outsourced the hosting of their Learning Management Systems.

Combine this with the nerd image of IT, and you can see why more high school students aren’t taking classes in IT.

But what if software ate the world?

And then comes the recent article from Marc Andreessen on “Why software is eating the world”. In his own words

My own theory is that we are in the middle of a dramatic and broad technological and economic shift in which software companies are poised to take over large swathes of the economy

If true, this sort of shift suggests that having some IT, especially software, knowledge and capability could be a useful thing. The prediction is that in industries with a significant physical component (e.g. oil and gas) the opportunity will lie with existing companies, but in other industries start-ups will have opportunities.

Andreessen argues that this shift is starting to happen now for much the same reason that Carr argued IT didn't matter anymore, i.e. the technology needed to fully harness software has become infrastructure; it's become invisible. Huge numbers of people have smartphones and Internet access. The IT services industry and the cloud make it simple to develop a global software application.

Of course, one of the problems with this argument is confirmation bias, as put in this comment from the Slashdot post on the Andreessen article

THIS JUST IN
An expert of [field of study] believes [field of study] will change the world. Also emphasizes that other people are not taking [field of study] seriously.

What does this mean for high school IT classes?

One of the problems that Andreessen identifies is that

many people in the U.S. and around the world lack the education and skills required to participate in the great new companies coming out of the software revolution.

Given the dearth of enrolments in high school IPT in local schools and universities, I imagine that the same problem exists here in Australia. I believe this is also a major point that Rushkoff makes in his book "Program or be Programmed".

So, obviously more people should enrol in the IT classes in high school.

No, I don't think so. At least not as most courses stand at the moment.

This connects back to a point from my initial post. I believe that the current curriculum and teaching methods for these courses are generally not appropriate for the purpose of preparing people – beyond just the future IT professionals – for this world that software is eating.

The current curriculum appears aimed at producing the service providers, the folk who will keep the infrastructure going. What is needed are curriculum and teaching methods that will prepare the folk who are going to identify opportunities and transform industries or, on a smaller scale, identify opportunities for how the IT infrastructure can be harnessed to improve their own lives.

Dilbert as an expository instantiation

A few recent posts have been first draft excerpts from my Information Systems Design Theory (ISDT) for emergent university e-learning systems. Academics being somewhat pedantic about these things, an ISDT is meant to have a number of specific components. One of these is the expository instantiation, which is meant to act as both an explanatory device and a platform for testing (Gregor and Jones, 2007), i.e. it is meant both to help explain the theory and to provide a means of testing it.

The trouble is that today's Dilbert cartoon is probably as good an explanation as any of what is currently the third principle of implementation for my ISDT.

[Embedded comic: Dilbert.com]

I’m sure that most folk working in a context where they’ve had to use a corporate information system have experienced something like this. A small change – either to fix a problem or improve the system – simply can’t be made because of the nature of the technology or the processes used to make the changes. The inability to make these changes is a major problem for enterprise systems.

The idea from the ISDT is that the development and support team for an emergent university e-learning system should be able to make small scale changes quickly without having to push them up the governance hierarchy. Where possible the team should have the skills, insight, judgement and trust so that “small scale” is actually quite large.

An example

The Webfuse e-learning system that informed much of the ISDT provides one example. Behrens (2009) quotes a user of Webfuse on one example of how it was responsive

I remember talking to [a Webfuse developer] and saying how I was having these problems with uploading our final results into [the Enterprise Resource Planning (ERP) system] for the faculty. He basically said, “No problem, we can get our system to handle that”… and ‘Hey presto!’ there was this new piece of functionality added to the system… You felt really involved… You didn’t feel as though you had to jump through hoops to get something done.

Then this is compared with a quote from one of the managers responsible for the enterprise system

We just can’t react in the same way that the Webfuse system can, we are dealing with a really large and complex ERP system. We also have to keep any changes to a minimum because of the fact that it is an ERP. I can see why users get so frustrated with the central system and our support of it. Sometimes, with all the processes we deal with it can take weeks, months, years and sometimes never to get a response back to the user.

Is that Dilbert or what?

The problem with LMS

Fulfilling this requirement is one of the areas where most LMS create problems. Most universities/organisations are getting into a situation where the LMS (even Moodle) is approaching the "complex ERP system" problem described in the last quote above. Changing the LMS is so fraught with potential dangers that changes can't be made quickly. Most organisations don't try, so we're back to a Dilbert moment.

Hence, I think there are two problems facing universities trying to fulfil principle #3:

  1. Having the right people in the support and development team, with the right experience, insight and judgement, is not a simple thing and is directly opposed to current common practice, which seeks to minimise the number of such people. Instead there's a reliance on helpdesk staff and trainers.
  2. The product problem. i.e. it’s too large and difficult to change quickly and safely. I think there’s some interesting work to be done here within Moodle and other open source LMS. How do you balance the “flexibility” of open source with the complexity of maintaining a stable institutional implementation?

References

Behrens, S. (2009). Shadow systems: the good, the bad and the ugly. Communications of the ACM, 52(2), 124-129.

Gregor, S., & Jones, D. (2007). The anatomy of a design theory. Journal of the Association for Information Systems, 8(5), 312-335.

Principles of form and function

The aim of my thesis is to formulate an information systems design theory for e-learning. Even though I have a publication or two that have described early versions of the ISDT, I’ve never been really happy with them. However, I’m getting close to the end of this process, at least for the purposes of getting the thesis submitted.

The following is a first draft of the "Principles of form and function", one of the primary components of an ISDT as identified by Gregor and Jones (2007). I'll be putting up a draft of the principles of implementation in a little while (UPDATE: principles of implementation now up). These are still just approaching first draft stage; they need a bit more reflection and some comments from my esteemed supervisor. Happy to hear thoughts.

By the way, the working title for this ISDT is now “An ISDT for emergent university e-learning systems”.

Principles of form and function

Gregor and Jones (2007) describe the aim of the principles of form and function as defining the structure, organisation, and functioning of the design product or design method. The ISDT described in Chapter 4 was specifically aimed at the World-Wide Web, as shown in its title, "An ISDT for web-based learning systems". Such technology-specific assumptions are missing from the ISDT described in this chapter in order to avoid technological obsolescence. By not relying on a specific technology the ISDT can avoid a common problem with design research – the perishability of findings – and enable the on-going evolution of any instantiation to continue regardless of the technology.

The principles of form and function for this ISDT are presented here as divided into three groupings: integrated and independent services; adaptive and inclusive architecture; and, scaffolding, context-sensitive conglomerations. Each of these groupings and the related principles are described in the following sub-sections and illustrated through examples from Webfuse. The underlying aim of the following principles of form and function is to provide a system that is easy to modify and focused on providing context-specific services. The ISDT’s principles of implementation (Section 5.6.4) are designed to work with the principles of form and function in order to enable the design of an emergent university e-learning information system.

Integrated and independent services

The emergent nature of this ISDT means that, rather than prescribe a specific set of services that an instantiation should provide, the focus here is on providing mechanisms to quickly add and modify new services in response to local need. It is assumed that an instantiation would provide an initial set of services (see principle 4) with which system use could begin. Subsequent services would be added in response to observed need.

An emergent university e-learning system should:

  1. Provide a method or methods for packaging and using necessary e-learning services from a variety of sources and of a variety of types.
    For example, Webfuse provided two methods for user-level packaging of services – page types and Wf applications – and also used design patterns and object-oriented design for packaging implementation-level services. The types of services packaged through these means included: information stored in databases; various operations on that data; external services such as enterprise authentication services; open source COTS; and remote applications such as blogging tools.
  2. Provide numerous ways to enable different packages to interact and integrate.
    Webfuse provided a number of methods through which the packaging mechanisms described in the previous point could be integrated. For example, Wf applications provided a simple, consistent interface that enabled easy integration from numerous sources. It was through this approach that Wf applications such as email merge, course list, and course photo album were integrated into numerous other services. To allow staff to experience what students see on StudentMyCQU, the ViewStudentMyCQU application was implemented as a wrapper around the StudentMyCQU application.
  3. Provide a packaging mechanism that allows for a level of independence and duplication.
    Within Webfuse, modifications to page types could be made with little or no effect on other page types. It was also possible to have multiple page types providing the same kind of service. For example, there were three different web-based discussion forums, with slightly different functionality, preferred by different users. Similarly, the use of the Model-View-Controller design pattern in Wf applications enabled the same data to be represented in many different forms. For example, class lists could be viewed by campus, with or without student photos, as a CSV file, as an HTML page etc. (a sketch of this multiple-views idea follows this list).
  4. Provide an initial collection of services that provide a necessary minimum of common e-learning functionality covering: information distribution, communication, assessment, and administration.
    The initial collection of services for Webfuse in 2000 included the existing page types and a range of support services (see Section 4.4.3). These provided an initial collection of services that provided sufficient services for academics to begin using e-learning. It was this use that provided the opportunity to observe, learn and subsequently add, remove and modify available services (see Section 5.3).
  5. Focus on packaging existing software or services for integration into the system, rather than developing custom-built versions of existing functionality.
    With Webfuse this was mostly done through the use of the page types as software wrappers around existing open source software as described in Chapter 4. The BAM Wf application (see 5.3.6) integrated student use of existing blog engines (e.g. http://wordpress.com) into Webfuse via standardised XML formats.
  6. Present this collection of services in a way that for staff and students resembles a single system.
    With Webfuse, whether users were managing incidents of academic misconduct, finding the phone number of a student, responding to a student query on a discussion forum, or uploading a Word document they believed they were using a single system. Via Staff MyCQU they could access all services in a way that fit with their requirements.
  7. Minimise disruption to the user experience of the system.
    From 1997 through 2009, the authentication mechanism used by Webfuse changed at least four times. Users of Webfuse saw no visible change. Similarly, Webfuse page types were re-designed from purely procedural code to being heavily object-oriented. The only changes in the user interface for page types were where new services were added.
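
As a rough illustration of the multiple-views idea in principle 3 – a sketch only, not Webfuse's actual Perl implementation, and with hypothetical class and field names – one model can feed several independent renderings:

```python
from dataclasses import dataclass

# Model: one canonical source of class-list data.
@dataclass
class Student:
    name: str
    campus: str
    photo_url: str

class ClassListModel:
    def __init__(self, students):
        self.students = students

    def by_campus(self, campus):
        return [s for s in self.students if s.campus == campus]

# Views: the same model rendered in different forms.
def render_csv(students):
    lines = ["name,campus"]
    lines += [f"{s.name},{s.campus}" for s in students]
    return "\n".join(lines)

def render_html(students, with_photos=False):
    rows = []
    for s in students:
        photo = f'<img src="{s.photo_url}">' if with_photos else ""
        rows.append(f"<li>{photo}{s.name} ({s.campus})</li>")
    return "<ul>" + "".join(rows) + "</ul>"

# Controller: picks a view over the one underlying model.
model = ClassListModel([Student("A. Jones", "Rockhampton", "a.jpg"),
                        Student("B. Smith", "Mackay", "b.jpg")])
print(render_csv(model.by_campus("Mackay")))
print(render_html(model.students, with_photos=True))
```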

Adaptive and inclusive architecture

Sommerville (2001) defines software architecture as the collection of sub-systems within the software and the framework that provides the necessary control and communication mechanisms for these sub-systems. The principles for integrated and independent services described in the previous section are the "sub-systems" for an emergent university e-learning system. Such a system, like all large information systems, needs some form of system architecture. The major difference for this ISDT is that traditional architectural concerns such as consistency and efficiency are not as important as being adaptive and inclusive.

The system architecture for an emergent university e-learning system should:

  1. Be inclusive by supporting the integration and control of the broadest possible collection of services.
    The approach to software wrappers adopted as part of the Webfuse page types was to enable the integration of any external service, at the expense of ease of implementation. Consequently, the Webfuse page types architecture integrated a range of applications using very different software technologies, including a chat room that was a Java application; a page counter implemented in the C programming language; a lecture page type that combined numerous different applications; and three different discussion forums implemented in Perl. In addition to the page types, Webfuse also relied heavily on the architecture provided by the Apache web server for access control, authentication, and other services. The BAM Wf application (Section 5.3.6) used RSS and Atom feeds as a method for integrating disparate blog applications (a sketch of this feed-based integration follows this list). Each of these different approaches embodies a very different architectural model, which increases the cost of implementation but also increases the breadth of services that can be integrated and controlled.
  2. Provide an architecture that is adaptive to changes in requirements and context.
    One approach is the use of an architectural model that provides high levels of maintainability through fine-grained, self-contained components (Sommerville 2001). This was initially achieved in Webfuse through the page types architecture. However, in order to achieve a long-lived information system there is a need for more than this. Sommerville (2001) suggests that major architectural changes are not a normal part of software maintenance, yet as a system that operated for 13 years in a Web environment, Webfuse had to undergo major architectural changes. In early 2000, performance problems arose due to increased demand for dynamic web applications (student quizzes), resulting in a significant change in the Webfuse architecture. This change was aided by Webfuse's reliance on the Apache web server, whose continual evolution provided the scaffolding for the architectural change.
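
As a rough sketch of the kind of feed-based integration BAM used – not BAM's actual code – the following assumes the third-party feedparser library (which normalises both RSS and Atom) and entirely hypothetical feed URLs:

```python
import feedparser  # third-party: pip install feedparser

# Hypothetical register of student blog feeds; the URLs are made up.
student_feeds = {
    "s0123456": "http://example.wordpress.com/feed/atom/",
    "s0654321": "http://example.blogspot.com/feeds/posts/default",
}

def fetch_posts(feed_url):
    """Pull entries from any RSS or Atom feed, regardless of which
    blog engine produced it; feedparser normalises both formats."""
    parsed = feedparser.parse(feed_url)
    return [{"title": entry.get("title", ""),
             "link": entry.get("link", ""),
             "published": entry.get("published", "")}
            for entry in parsed.entries]

# Aggregate every student's posts into one structure that system-side
# tools (e.g. marking support) can work with.
all_posts = {sid: fetch_posts(url) for sid, url in student_feeds.items()}
```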

The perspective of this ISDT is that traditional homogenous approaches to software architecture (e.g. component architectures) offer numerous advantages. However, there are some drawbacks. For example, a component architecture can only integrate components that have been written to meet the specifications of the component architecture. Any functionality not available within that component architecture is not available to the system. To some extent such a limitation closes off possibilities for diversity – which this ISDT views as inherent in university learning and teaching – and for future emergent development. This does not rule out the use of component architectures within an emergent university e-learning system, but it does mean that such a system would also be using other architectural models at the same time to ensure it was adaptive and inclusive.

Scaffolding, context-sensitive conglomerations

The design of e-learning in universities requires the combination of skills from a variety of different professions (e.g. instructional design, web design etc.), and yet is most often performed by academics with limited knowledge of any of these professions. This limited knowledge creates significant workload for the academics and contributes to the limited quality of much e-learning. Adding experts in these fields to help with course design is expensive and somewhat counter to the traditional practice of learning and teaching within universities. This suggests that e-learning in universities needs approaches that allow the effective capture of expertise in a form that can be re-used by non-experts without repeated direct interaction with experts. Such an approach could aim to reduce perceived workload and increase the quality of e-learning.

An emergent university e-learning information system should:

  1. Provide the ability to easily develop – including via end-user development – larger conglomerations of packaged services.
    A conglomeration is not simply an e-learning service such as a discussion forum. Instead it provides additional scaffolding around such services, possibly combining multiple services, to achieve a higher-level task. While many conglomerations would be expert designed and developed, offering support for end-user development would increase system flexibility. The Webfuse default course site approach (Section 5.3.5) is one example of a conglomeration. A default course site combines a number of separate page types (services), specific graphical and instructional designs, and existing institutional content into a course website with a minimum of human input (a sketch of this assembly follows this list). Another form of conglomeration that developed with Webfuse was Staff MyCQU. This "portal" grew to become a conglomeration of integrated Wf applications designed to package a range of services academics required for learning and teaching.
  2. Ensure that conglomerations provide a range of scaffolding to aid users, increase adoption and increase quality.
    There is likely to be some distance between the knowledge of the user and that required to effectively use e-learning services. Scaffolding provided by the conglomerations should seek to bridge this distance, encourage good practice, and help the user develop additional skills. For example, over time an “outstanding tasks” element was added to Staff MyCQU to remind staff of unfinished work in a range of Wf applications. The BAM Wf application was designed to support the workload involved in tracking and marking individual student reflective journals (Jones and Luck 2009). A more recent example focused more on instructional design is the instructional design wizard included in the new version of the Desire2Learn LMS. This wizard guides academics through course creation via course objectives.
  3. Embed opportunities for collaboration and interaction into conglomerations.
    An essential aim of scaffolding conglomerations is enabling and encouraging academics to learn more about how to effectively use e-learning. While the importance of community and social interaction to learning is widely recognised, most professional development opportunities occur in isolation (Bransford, Brown et al. 2000). Conglomerations should aim to provide opportunities for academics to observe, question and discuss use of the technology. Examples from Webfuse are limited to the ability to observe. For example, all Webfuse course sites were, by default, open for all to see. The CourseHistory Wf application allowed staff to see the grade breakdown for all offerings of any course. A better example would have been if the CourseHistory application encouraged and enabled discussions about grade breakdowns.
  4. Ensure that conglomerations are context-sensitive.
    Effective integration with the specific institutional context enables conglomerations to leverage existing resources and reduce cognitive dissonance. For example, the Webfuse default course site conglomeration was integrated with a range of CQU specific systems, processes and resources. The Webfuse online assignment submission system evolved a number of CQU specific features that significantly increased perceptions of usefulness and ease-of-use (Behrens, Jamieson et al. 2005).
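
To make the conglomeration idea in principle 1 concrete, here is a minimal sketch of a default course site assembled from separate packaged services with minimal human input. The services and function names are hypothetical illustrations, not Webfuse's implementation:

```python
# Hypothetical packaged services; each wraps some existing system.
def course_synopsis(course_code):
    return f"<p>Synopsis for {course_code} from the course catalogue.</p>"

def discussion_forum(course_code):
    return f'<a href="/forum/{course_code}">Course discussion forum</a>'

def assignment_submission(course_code):
    return f'<a href="/assess/{course_code}">Online assignment submission</a>'

def build_default_course_site(course_code):
    """Combine separate services and existing institutional content
    into a complete course site with minimal human input."""
    sections = [course_synopsis(course_code),
                discussion_forum(course_code),
                assignment_submission(course_code)]
    return f"<h1>{course_code}</h1>\n" + "\n".join(sections)

print(build_default_course_site("COIS20025"))
```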

References

Behrens, S., Jamieson, K., Jones, D., & Cranston, M. (2005). Predicting system success using the Technology Acceptance Model: A case study. Paper presented at the Australasian Conference on Information Systems’2005, Sydney.

Bransford, J., Brown, A., & Cocking, R. (2000). How people learn: brain, mind, experience, and school. Washington, D.C.: National Academy Press.

Gregor, S., & Jones, D. (2007). The anatomy of a design theory. Journal of the Association for Information Systems, 8(5), 312-335.

Jones, D., & Luck, J. (2009). Blog Aggregation Management: Reducing the Aggravation of Managing Student Blogging. Paper presented at the World Conference on Education Multimedia, Hypermedia and Telecommunications 2009. from http://www.editlib.org/p/31530.

Sommerville, I. (2001). Software Engineering (6th ed.): Addison-Wesley.

How strict a blueprint do ISDTs provide?

Am working on the final ISDT for the thesis. An Information Systems Design Theory (ISDT) is a theory for design and action. It aims to provide general principles that help practitioners design information systems. Design theory provides guidance about how to build an artifact (process) and what the artifact should look like when built (product/design principles) (Walls, Widmeyer et al. 1992; Gregor 2002). Walls et al (1992) see an ISDT as an integrated set of prescriptions consisting of a particular class of user requirements (meta-requirements), a type of system solution with distinctive features (meta-design) and a set of effective development practices (meta-method). Each of these components of an ISDT can be informed by kernel theories, either academic or practitioner theory-in-use (Sarker and Lee 2002), that enable the formulation of empirically testable predictions relating the design theory to outcomes (Markus, Majchrzak et al. 2002).

My question

I'm just about happy with the "ISDT for emergent university e-learning systems" that I've developed. A key feature of the ISDT is the "emergent" bit. This implies that the specific context within which the ISDT might be applied is going to heavily influence the final system. To some extent there is a chance that aspects of the ISDT should be ignored based on the nature of the specific context. Which brings me to my questions:

  1. How far can the ISDT go in saying, “ignore principle X” if it doesn’t make sense?
  2. How much of the ISDT has to be followed for the resulting system to be informed by the ISDT?
  3. If most of the ISDT is optional based on contextual factors, how much use is the ISDT?
  4. How much and what sort of specific guidance does an ISDT have to give to be useful and/or worthwhile?

Class of systems

One potential line of response to this is based on the "class of systems" idea. The original definition provided by Walls et al (1992) for the meta-design component indicates that it "describes a class of artefacts hypothesized to meet the meta-requirements", not a specific instantiation. van Aken (2004) suggests that rather than a specific prescription for a specific situation (an instantiation), the intent should be a general prescription for a class of problems. van Aken (2004) arrives at this idea through the use of Bunge's idea of a technological rule.

van Aken (2004) goes on to explain the role of the practitioner in the use of a technological rule/ISDT

Choosing the right solution concept and using it as a design exemplar to design a specific variant of it presumes considerable competence on the part of practitioners. They need a thorough understanding both of the rule and of the particulars of the specific case and they need the skills to translate the general into the specific. Much of the training of students in the design sciences is devoted to learning technological rules and to developing skills in their application. In medicine and engineering, technological rules are not developed for laymen, but for competent professionals.

This seems to offer some support for the idea that this problem is not really a problem.

Emergent

It appears that the idea of "emergent" then simply places a greater emphasis on context than is generally the case in practice. There is, I believe, a significant difference between emergent/agile development and traditional approaches, so it's probably worthwhile making the distinction in a mild way when introducing the ISDT and then reinforcing it in the artifact mutability and principles of implementation sections.

The first stab

The following paragraph is a first draft of the last paragraph in the introduction to the ISDT. It starts alright, but I'm not sure I've really captured (or understood) what I'm trying to get at with this. Is it just an attempt to signpost perspectives included below? I need to be able to make this clearer, I think.

It is widely accepted that an ISDT – or the related concept of a technological rule – is not meant to describe a specific instantiation, but instead to provide a general prescription for a class of problems (Walls, Widmeyer et al. 1992; van Aken 2004). The ISDT presented here is intended to offer a prescription for e-learning information systems for universities. In addition to addressing this general class of problems, the ISDT also includes in its prescription specific advice – provided in the principles of implementation and artifact mutability components of the ISDT – that makes it somewhat more general again. This is captured in the use of the word "emergent" in the title of the ISDT, intended in the sense adopted by Truex et al (1999) where "organisational features are products of constant social negotiation and consensus building….never arriving but always in transition". This suggests the possibility that aspects of this ISDT may also be subject to negotiation within specific social contexts and subsequently not always be seen as relevant.

References

Gregor, S. (2002). Design Theory in Information Systems. Australian Journal of Information Systems, 14-22.

Markus, M. L., Majchrzak, A., & Gasser, L. (2002). A Design Theory for Systems that Support Emergent Knowledge Processes. MIS Quarterly, 26(3), 179-212.

van Aken, J. (2004). Management research based on the paradigm of the design sciences: The quest for field-tested and grounded technological rules. Journal of Management Studies, 41(2), 219-246.

Walls, J., Widmeyer, G., & El Sawy, O. A. (1992). Building an Information System Design Theory for Vigilant EIS. Information Systems Research, 3(1), 36-58.

Some reasons why business intelligence tools aren’t the right fit

The following started as an attempt to develop an argument that business intelligence/data warehouse tools are not a perfect fit for what is broadly called academic analytics. In fact, as I was writing this post I realised that it's actually an argument that business intelligence tools aren't a perfect fit for what I'm interested in doing – and that what I'm interested in doing is not academic analytics.

The following is by no means a flawless, complete argument, but simply an attempt to make explicit some of the disquiet I have had. Please feel free – indeed I actively encourage you – to point out the flaws in the following. The main reasons for writing this are to:

  1. see what form the final argument might take;
  2. see if I can convince myself of the argument; and
  3. see if others can see some value.

Background – the indicators project

Some colleagues and I are currently involved in some work we’re calling The Indicators Project which

aims to build on and extend prior work in the analysis of usage data from Learning Management Systems

In our first paper we presented some findings that contradicted some established ideas around LMS usage, due to the differences in our student body and the breadth and diversity of the data we used.

The indicators project is especially interested in what we can find out about LMS usage that can help improve learning and teaching. We're especially interested in what analysis across time, across LMSs and across institutions can reveal that we don't currently know.

It's somewhat related to the idea of academic analytics.

Background – academic analytics

According to Wikipedia academic analytics is “the term for business intelligence used in an academic setting”. In turn, business intelligence is described as “skills, processes, technologies, applications and practices used to support decision making”.

In an environment that is increasingly demanding techno-rational approaches to management, especially the requirement for universities to quantitatively prove that they are giving value for money, universities have started to go in for “business intelligence” in a big way. For most, this foray into “business intelligence” means setting up a data warehouse and the accompanying organisational unit to manage the data warehouse.

The “intelligence” unit is often located either within the information technology grouping or directly within the senior management structure reporting to a senior executive (i.e. a Pro or Deputy Vice Chancellor within an Australian setting). This location tendency arguably reveals the focus of such units on either the technology or servicing the needs of the senior executive they report to.

With the increasing use of technology (especially Learning Management Systems – LMS/VLE) to mediate learning and teaching there becomes available an increasing volume of data about this process. Data which may (or may not) reveal interesting insights into what is going on. Data which may be useful in decision making. i.e. Academic Analytics (aka business intelligence for learning and teaching) has arrived.

In those universities where data warehouses exist, an immediate connection is made between analysing and evaluating LMS usage data and the data warehouse. It is assumed that the best way to analyse this data is to put it into the data warehouse and allow the “intelligence” unit to do their thing.

I’m not convinced that this is the best approach and the following is my attempt to argue why.

Business Intelligence != Data Warehouse

van Dyk (2008) describes how "a business intelligence approach is followed in an attempt to take advantage [of] ICT to enable the evaluation of the effectiveness of the process of facilitating learning" and argues that the context that leads to effective data for decision making "can only be created when a deliberate business intelligence approach is followed". The paper also contains a description of a data warehouse model that accomplishes exactly that. The framework is based on the work of Kimball and Ross (2002) and is shown below

[Figure: The business intelligence framework]

As you can see, this business intelligence framework includes a data warehouse as a core, and very important, part. Not surprising, as it is based on a book about data warehouses.

However, drawing on the Wikipedia business intelligence page (my emphasis added)

Often BI applications use data gathered from a data warehouse or a data mart. However, not all data warehouses are used for business intelligence nor do all business intelligence applications require a data warehouse.

I'm trying to develop an argument that a data warehouse – defined as a tool/system that is sold as a data warehouse tool – may not be the best fit for supporting decision making based on LMS usage data. In particular, it may not be the best fit for the indicators project.

But don't you need a data warehouse?

In the early days of the indicators project we developed an image to represent what we were thinking the project would do. It’s shown below.

[Figure: Overview of the indicators project]

There is certainly some similarity between this image and the business intelligence framework above. Both images encapsulate the following ideas (a minimal sketch of such a pipeline follows the list):

  • You take data from somewhere, do some operations on it and stick it into a form you can query.
  • You take some time to develop queries on that standardised data that provide insights of interest to people.
  • You make that information available to folk.
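
A minimal sketch of those three steps – extract and transform the data, load it into a queryable form, then query it – using only the Python standard library; the file, table and column names here are hypothetical:

```python
import csv
import sqlite3

# Extract: read raw LMS click data from a (hypothetical) CSV export
# with student_id, course and timestamp columns.
def extract(path):
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

# Transform: standardise raw clicks into hits per student per course.
def transform(rows):
    counts = {}
    for row in rows:
        key = (row["student_id"], row["course"])
        counts[key] = counts.get(key, 0) + 1
    return counts

# Load: store the standardised form somewhere it can be queried.
def load(counts, db_path="indicators.db"):
    conn = sqlite3.connect(db_path)
    conn.execute("""CREATE TABLE IF NOT EXISTS activity
                    (student_id TEXT, course TEXT, hits INTEGER)""")
    conn.executemany("INSERT INTO activity VALUES (?, ?, ?)",
                     [(s, c, n) for (s, c), n in counts.items()])
    conn.commit()
    return conn

# The insight step: anyone can now ask questions of the data, e.g.
# conn = load(transform(extract("lms_clicks.csv")))
# conn.execute("SELECT course, AVG(hits) FROM activity GROUP BY course")
```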

So you’re doing essentially the same thing. A lot of people have spent a lot of time on the math and the IT necessary to create and manage data warehouse tools. So why wouldn’t you use a data warehouse? What’s the point of this silly argument?

The problems I have with data warehouses

My problem with data warehouses is that the nature of these systems and how they are implemented within organisations are a bad fit for what the indicators project wants to achieve. From above, the indicators project is especially interested in finding out what analysis of LMS usage data across time, LMS and institution can reveal beyond what is currently known.

The nature of data warehouses within universities, and the tools and processes used to implement them, are, from my perspective, heavyweight, teleological, proprietary and removed from the act of learning and teaching. These characteristics get in the way of what the indicators project needs to do.

Heavyweight and expensive

Institutions generally spend a lot of money on the systems, people and processes required to set up their data warehouses. Goldstein (2005) reports key findings from an ECAR study of academic analytics and suggests that more extensive platforms are more costly. When systems cost a lot they are controlled. They are after all an expensive resource that must be effectively managed. This generally means heavyweight processes of design and control.

While a significant amount of work has been done around evaluating LMS usage, there's still a lot to discover. The very act of exploring the data – especially when going cross-institutional, cross-LMS and cross-time – will generate new insights that will require modification to how data is being prepared for the system and how it is being reported.

This level of responsiveness is not a characteristic of heavyweight processes and expensive systems, especially when the system's main aim and use are focused on other purposes.

Not focused on L&T

Goldstein (2005) reports that few institutions have achieved both broad and deep usage of academic analytics with the most active users coming from central finance, central admissions and institutional research. In fact, the research asked respondents to evaluate their use of academic analytics within seven functional areas. None of those seven areas involved teaching and learning.

This seems to suggest that data warehouse use within universities is not focused on L&T. The expensive resources of the data warehouse are focused elsewhere. Which suggests that the resources available to tweak and modify reports and data preparation for learning and teaching purposes will be few and far between.

Proprietary

Due to the expense of these systems universities will sensibly spend a lot of time evaluating which systems to go with. This will lead to differences in the systems chosen for use. Institutional differences will also lead to differences in the type of data being stored and the format in which it is stored.

The indicators project has an interest in going across institutions, in comparing findings at different universities. While a data warehouse approach might work at our current institution, it probably won't be easily transportable to another institution.

This is not to suggest that it wouldn’t be transportable, but that the cost of doing so might exceed what is politically possible within current institutions.

Not located within L&T

It is well known that the two most important factors contributing to the adoption (or not) of a piece of technology are:

  • Ease of use; and
  • Usefulness.

Academic staff at universities are not rewarded (by the institution) for spending more time on their learning and teaching. They do not receive any, let alone significant, encouragement to change their practice. Academics are generally given enough freedom to choose whether or not they use a tool, and always have the freedom to choose how they use a tool. i.e. if they are forced to use a tool that is not easy to use and/or useful, they will not use it effectively.

The reports and dashboards associated with data warehouse tools do not live within the same space that academic staff use when they are learning and teaching. E-learning for most university staff means the institutional LMS. Systems that are not integrated into the every day, existing systems used by academic staff are less likely to be used.

The usefulness of these reports will be governed by how well they are expressed in terms that can be understood by the academic staff. Goldstein (2005) reports on there being a two-part challenge in providing effective training for academic analytics. I’m going to divide those two into three challenges (in the original the last 2 in my list were joined into one):

  1. help users learn the technology;
  2. help users understand the underlying data; and
  3. envision how the analytical tools could be applied.

To me, the existence of these challenges suggests that the technology being used is inappropriate. It is too hard or too different for the users to understand, and the information being presented is too far removed from their everyday experience. i.e. if they need training in how to use it, then the tool is too far removed from their existing knowledge.

Goldstein (2005) found these difficulties in the "sweet spot" of business intelligence (i.e. "business and finance", "budget and planning", "institutional research" etc.). Imagine the difficulties that will arise when attempting to apply the same technology to learning and teaching. Learning and teaching is inherently diverse, and the perspectives on learning and teaching held by the academics doing the teaching are several orders of magnitude more diverse.

The key point here is that the "build it and they will come" approach of putting this data into a data warehouse will not work. The academic staff will not come. A large amount of work is required to develop insights into how to identify and integrate the knowledge that arises out of the LMS data in a form that encourages adoption.

Getting academic staff to meaningfully adopt and use this information to change – hopefully improve – their teaching is much more important, difficult and expensive than the provision of a data warehouse. The wrong tool – e.g. a data warehouse – will significantly limit this much more important task.

So what?

I believe any approach that uses data warehouse tools to provide "dashboards" to coal-face academics, so they can see information about the impact of their teaching and their students' learning, will ultimately fail or, at the very least, be very expensive, difficult and used in limited ways.

Is there any institution doing just this now that can prove me wrong?

What’s the solution?

That's for another post. But what I'm thinking of is (a sketch of the sort of lightweight analysis I have in mind follows the list):

  • Much cheaper/simpler technology.
  • Lightweight methodology.
  • Research and coal-face informed development and testing of useful measures/information.
  • Design of additions to institutional LMS and other systems that leverage this information.
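
As an illustration of the "much cheaper/simpler technology" point, a first cut at a useful measure might be no more than a short script over a simple store like the one sketched earlier. The measure, database and column names here are hypothetical illustrations, not a worked-out proposal:

```python
import sqlite3

# Assumes the "activity" table from the earlier ETL sketch exists.
conn = sqlite3.connect("indicators.db")

# One candidate measure: average hits per student for each course,
# the sort of number a teaching academic can read without BI training.
for course, avg_hits in conn.execute(
        "SELECT course, AVG(hits) FROM activity GROUP BY course"):
    print(f"{course}: {avg_hits:.1f} hits per student on average")
```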

References

Goldstein, P. (2005). Key Findings. Academic Analytics: The Uses of Management Information and Technology in Higher Education. ECAR Key Findings, EDUCAUSE Center for Applied Research: 12.

Kimball, R. and M. Ross (2002). The data warehouse toolkit: The complete guide to dimensional modeling, John Wiley and Sons.

van Dyk, L. (2008). “A data warehouse model for micro-level decision making in higher education.” The Electronic Journal of e-Learning 6(3): 235-244.

The LMS/VLE as a one word language – metaphor and e-learning

I'm currently back from a holiday, restarting work on my thesis and in particular on the process component of the Ps Framework. I'm working on the section that describes the two extremes of process; I'm using Introna's (1996) distinction between teleological design and ateleological design.

The following arises out of re-reading Introna (1996) and picking up some new insights that resonate with some recent thoughts I’ve been having about e-learning and Learning Management Systems (LMSs/VLEs). The following is an attempt to make sense of Introna (1996) – which is not the easiest paper to follow – and integrate it with some of my thinking.

That is, this is a work in progress.

Basic argument

Introna suggests that the dominant metaphor within the design of information systems – like LMSs/VLEs – is that of the system, and that this over-emphasis on the "system" has made systems development a one word language.

Can you imagine holding a conversation in a language with only one word? It's not a great stretch of the imagination to see such a language as hugely limiting. Hence our current conversations about e-learning are also hugely limited, as we're making do with a one word language. Introna (1996) puts it this way

The use of a one word language will lead to the building of systems that are “dead” not alive and profoundly meaningful

The predominance of the one word language in e-learning

The predominance of the LMS or VLE within e-learning in a university context probably doesn't need much of a background. While there are some growing movements away from the LMS (e.g. edupunk, e-learning 2.0 etc.), I still believe the LMS is the dominant answer to the "how do we do e-learning?" question. As I wrote in Jones and Muldoon (2007)

The almost universal approach to the adoption of e-learning at universities has been the implementation of Learning Management Systems (LMS) such as Blackboard, WebCT, Moodle or Sakai. If not already adopted, Salmon (2005) suggests that almost every university is planning to make use of an LMS. Indeed, the speed with which the LMS strategy has spread through universities is surprising (West, Waddoups, & Graham, 2006). Particularly more surprising is the almost universal adoption within the Australian higher education sector of just two commercial LMSs, which are now owned by the same company. Interestingly this sector has traditionally aimed for diversity and innovation (Coates, James, & Baldwin, 2005). Conversely, the mindset in recent times has focused on the adoption of the one-size-fits-all LMS (Feldstein, 2006).

This is even in the light of there being little difference between LMSs. Here’s what Black, Beck et al (2007) had to say

There are more similarities than differences among learning management system (LMS) software products. Most LMSs consist of fairly generic tools such as quiz/test options, forums, a scheduling tool, collaborative work space and grading mechanisms. In fact, the Edutools Web site lists 26 LMSs that have all of these features (2006). Many LMSs also have the means to hold synchronous meetings and some ability to use various templates for instruction. Beyond these standardized features, LMSs tend to distinguish themselves from one another with micro-detailed features such as the ability to record synchronous meetings or the ability to download forum postings to read offline.

In my recent experience, the LMS model has become so endemic that it is mythic and unquestioned. Many folk can’t even envision how you might do e-learning within a University without an LMS.

Even at its best, discussion about e-learning within universities seems to get dragged back to the LMS. The one word language.

What's the problem with this?

Introna (1996) believes that information systems development (I'm going to accept that e-learning information systems are a subset of this) very much involves a social system or three. The development of an information system for use by people is an inherently social process, and communication is essential to such a process.

He connects this with authors in the social sciences who have investigated the connection between symbolism, communication and the construction of social reality. He tends to focus on Pondy (1991) but there are others. He includes the following quote from Pondy (1991)

The central hypothesis is that the use of metaphors in the organizational dialogue plays a necessary role in helping organization participants to infuse their organizational experiences with meaning and resolve apparent paradoxes and contradictions, and that this infusion of meaning or resolution of paradox is a form of organizing. In this sense, the use of metaphors help couple the organization, to tie its parts together into some kind of meaningful whole; that is, metaphors help to organize the objective facts of the situation in the minds of the participants. …That is, metaphors serve both as models of the situation and models for the situation

In looking for the predominant metaphor used in information systems development he identifies the system. Developers perform "systems analysis"; they identify the entities that make up the system, the relationships between them, etc.

The system now becomes a model of, and a model for, the symbol space that needs to be designed.

While accepting that the system metaphor has been beneficial, he also suggests that it is over-utilised and that there are benefits to be accrued from identifying different metaphors. For example, he suggests that the "systems" metaphor works well for the design of a transaction processing system, but perhaps not so well for a website, an electronic meeting or a multimedia education application.

So what?

Most immediately for me is the potential avenue these thoughts might provide for the innovation role I’m meant to be taking on. I can currently see two immediately useful applications of this thinking:

  1. Using metaphor to map the current "grammar of school" at the host institution in order to identify what the current conceptions are and evaluate whether they are limiting what is possible.
    I think it's fairly obvious from what I've said on this blog that I believe this is the case. It also helps – or perhaps increases my pattern entrainment – that there is a connection between this and some work my wife is doing.
  2. Developing different metaphors to develop innovative approaches to e-learning.

More broadly, I think this is another way to show and explain just how limiting and negative an influence the LMS fad has been in e-learning. More broadly again, it highlights some of the disquiet I’ve felt about the direction of the teaching and practice of information systems/technology within organisations.

More to come

Introna (1996) goes on to talk about the role narrative and myth may have to play in information systems development. I need to follow these ideas up as, through Dave Snowden and others, I have a growing interest in applying them to e-learning.

More on that later.

References

Black, E., D. Beck, et al. (2007). “The other side of the LMS: Considering implementation and use in the adoption of an LMS in online and blended learning environments.” Tech Trends 51(2): 35-39.

Coates, H., R. James, et al. (2005). “A Critical Examination of the Effects of Learning Management Systems on University Teaching and Learning.” Tertiary Education and Management 11(1): 19-36.

Feldstein, M. (2006). Unbolting the chairs: Making learning management systems more flexible. eLearn Magazine, 2006.

Introna, L. (1996). "Notes on ateleological information systems development." Information Technology & People 9(4): 20-39.

Jones, D. and N. Muldoon (2007). The teleological reason why ICTs limit choice for university learners and learning. ICT: Providing choices for learners and learning. Proceedings ASCILITE Singapore 2007, Singapore.

Pondy, L.R. (1991), “The role of metaphor and myths in organization and in the facilitation of change”, in Pondy, L.R., Morgan, G., Frost, P. and Dandridge, T. (Eds), Organizational Symbolism, JAI Press, Greenwich, CT, pp. 157-66.

Salmon, G. (2005). “Flying not flapping: a strategic framework for e-learning and pedagogical innovation in higher education institutions.” ALT-J, Research in Learning Technology 13(3): 201-218.

West, R., G. Waddoups, et al. (2006). “Understanding the experience of instructors as they adopt a course management system.” Educational Technology Research and Development.

“An ISDT for e-learning” – Audio is now synchronized

On Friday the 20th of Feb I gave a talk at the ANU on my PhD. A previous post has some background and an overview of the presentation.

I recorded the presentation using my iPhone and the Happy Talk recorder application. I’ve finally got the audio up and synchronised with the Slideshare presentation.

Hopefully the presentation is embedded below, but I've had some problems embedding it in the blog (all the other Slideshare presentations have been ok).

Nope, the embedding doesn’t want to work. Bugger. Here’s a link to the presentation page on Slideshare.

Limitations of Slideshare

In this presentation, along with most of my current presentations, I use an adapted form of the “Lessig” method of presentation. A feature of this method is a large number of slides (in my case 129 slides for a 30 minute presentation) with some of the slides being used for very small time frames – some less than a second.

The Slideshare synchronisation tool appears to have a minimum time allowed for each slide – about 15 seconds. At least that is what I found with this presentation. I think perhaps the limitation is due to the interface, or possibly my inability to use it appropriately.

This limitation means that some of the slides in my talk are not exactly synchronised with the audio.

The Happy Talk Recorder

I'm very happy with it. The quality of the audio is surprisingly good, and I had little or no problem using it. I think I will use it more.

Barriers to innovation in organisations: teleological processes, organisational structures and stepwise refinement

This video speaks to me on so many levels. It summarises many of the problems I have faced trying to implement innovative approaches to e-learning at universities over the last 15-plus years. I'm sure I am not alone.

Today I've spent a lot of time on things not directly related to what I wanted to achieve. Consequently, I had planned not to do or look at anything else until I'd finished. But this video resonates so strongly that I couldn't resist watching it, downloading it and blogging it.

I came across the video from a post by Punya Mishra. Some more on this after the video. I should also link to the blog post on the OpenNASA site. Would your University/organisation produce something similar?

If Nona ever gets around to watching this video, I am sure she will see me in a slightly different role in the video. Until recently I had the misfortune to be in the naysayer role. That’s no longer the case. Who said no good could come of organisational restructures?

[Embedded video: Barriers to innovation and inclusion]

The benefits of being open

Coming across this video provides further evidence to support an earlier post I made today on the value of being open. I became aware of Punya's post because of the following process:

  • Almost a year ago Punya published this post on his blog that openly shares the video of a keynote he and Mat Koehler gave.
  • I came across it not long afterwards through my interest in TPACK (formerly known as TPCK).
  • About two weeks ago I decided to use part of the video in some sessions I was running on course analysis and design.
  • A couple of days ago I blogged on an important part of the presentation (not used in the sessions I ran) that resonated with my PhD work.
  • My blog software told Punya’s blog software about my post and added it as a comment to his blog.
  • This afternoon Google Alerts sent me an email that this page on Punya’s blog was linking to my blog (because of the comment – see the comments section in the right hand menu).
  • Out of interest (some might say in the interest of procrastination) I followed the link and saw the video.

I plan to use parts of this video in future presentations around my PhD research. I believe it will resonate with people so much better than me simply describing the abstract principles.

So while it didn't directly contribute to what I wanted to do today, it has provided me with a great advantage for the future.


Is all diversity good/bad – a taxonomy of diversity in the IS discipline

In a previous post I pointed to and summarised a working paper that suggests IS research is not all that diverse, at least at the conceptual level.

The Information Systems (IS) discipline has for a number of years been having an on-going debate about whether or not the discipline is diverse. Part of that debate has been discussion about whether diversity is good or bad for IS, and for a discipline in general.

Too much diversity is seen as threatening the academic legitimacy and credibility of a discipline. Others have argued that too little diversity could also cause problems.

While reading the working paper titled “Metaphor, meaning and myth: Exploring diversity in information systems research” I began wondering about the definition of diversity. In particular, the questions I was thinking about were

  1. What are the different types of diversity in IS research?
    Based on the working paper I believe there are a number of different types of diversity. What are they?
  2. Are all types of diversity bad or good?
    Given I generally don’t believe in universal generalisations, my initial guess is that the answer will be “it depends” – in some contexts and for some purposes, some types will be bad and some will be good.
  3. Is this topic worthy of a publication (or two) exploring these questions and the implications they have for IS, and also for other disciplines and research in general?
    Other disciplines have had these discussions.
  4. Lastly, what work have IS researchers already done in answering these questions, particularly the first two?
    There’s been a lot of work in this area, so surely someone has provided some answers to these questions.

What different types of diversity exist?

The working paper that sparked these questions talks about conceptual diversity.

It also references Benbasat and Weber (1996) – two of the titans of the IS discipline, whose article is perhaps one of “the” articles in this area – who propose three ways of recognising research diversity:

  1. Diversity in the problems addressed.
  2. Diversity in the theoretical foundations and reference disciplines used to account for IS phenomena.
  3. Diversity of research methods used to collect, analyse and interpret data.

The working paper also suggests that Vessey et al. (2002) added two further characteristics:

  1. Research approach.
  2. Research method.

I haven’t read the Vessey paper, but given this summary I’m a bit confused: these two additional characteristics seem to fit within the third “way” from Benbasat and Weber. Obviously some more reading is required.

In the work on my thesis I’m drawing on four classes of questions about a domain of knowledge from Gregor (2006). They are:

  1. Domain questions. What phenomena are of interest in the discipline? What are the core problems or topics of interest? What are the boundaries of the discipline?
  2. Structural or ontological questions. What is theory? How is this term understood in the discipline? Of what is theory composed? What forms do contributions to knowledge take? How is theory expressed? What types of claims or statements can be made? What types of questions are addressed?
  3. Epistemological questions. How is theory constructed? How can scientific knowledge be acquired? How is theory tested? What research methods can be used? What criteria are applied to judge the soundness and rigour of research methods?
  4. Socio-political questions. How is the disciplinary knowledge understood by stakeholders against the backdrop of human affairs? Where and by whom has theory been developed? What are the history and sociology of theory evolution? Are scholars in the discipline in general agreement about current theories or do profound differences of opinion exist? How is knowledge applied? Is the knowledge expected to be relevant and useful in a practical sense? Are there social, ethical or political issues associated with the use of the disciplinary knowledge?

I wonder if these questions might form a useful basis for, or contribution to, a taxonomy of diversity in IS. At this stage, I think some sort of taxonomy of diversity might indeed be useful.
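
To make that idea slightly more concrete, here is a purely speculative sketch – my own guess, not something proposed in any of the papers above – of how Gregor’s four classes of questions might organise the types of diversity already mentioned:

    # A speculative sketch: Gregor's (2006) question classes as the spine of
    # a diversity taxonomy. The mapping is my guess, not from the papers cited.
    taxonomy = {
        "domain": [
            "diversity in the problems addressed",  # Benbasat & Weber's 1st way
        ],
        "structural/ontological": [
            "diversity in theoretical foundations and reference disciplines",  # their 2nd
            "conceptual diversity",  # the working paper's focus
        ],
        "epistemological": [
            "diversity in research methods",  # their 3rd way
            "diversity in research approach",  # Vessey et al. (2002)
        ],
        "socio-political": [
            "diversity in who develops and applies the knowledge",
        ],
    }

    # Questions 1 and 2 above could then be asked class by class: which types
    # of diversity exist here, and is each good or bad in a given context?
    for klass, types in taxonomy.items():
        print(klass, "->", "; ".join(types))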

The gulf between users and IT departments

Apparently Accenture have discovered “user-determined computing” and associated issues.

The definition goes something like this

Today, home technology has outpaced enterprise technology, leaving employees frustrated by the inadequacy of the technology they use at work. As a result, employees are demanding more because of their ever-increasing familiarity and comfort level with technology. It’s an emerging phenomenon Accenture has called “user-determined computing.”

This is something I’ve been observing for a number of years and am currently struggling with, in a couple of different ways, in my new job. In particular, I’m trying to figure out a way to move forward. In this post I’m going to try and think/comment about the following:

  • Even though “Web 2.0 stuff” seems to be bringing this problem to the fore, it’s not new.
  • The gulf that exists between the different ends of this argument and the tension between them.
  • Question whether or not this is really a technology problem.
  • Ponder whether this is a problem that’s limited only to IT departments.

It’s not new

This problem, or aspects of it, has been discussed in a number of places. For example, CIO magazine has a collection of articles it aligns with this issue (though having re-read them, I’m not sure how well some of them connect).

The third one seems the most complete in its coverage of this topic. I highly recommend a read.

The gulf

Other earlier work has suggested that the fundamental problem is that there is a gap or gulf, in some cases a yawning chasm, between the users’ needs and what’s provided by the IT department.

One of the CIO articles above puts it this way

And that disconnect is fundamental. Users want IT to be responsive to their individual needs and to make them more productive. CIOs want IT to be reliable, secure, scalable and compliant with an ever increasing number of government regulations. Consequently, when corporate IT designs and provides an IT system, manageability usually comes first, the user’s experience second. But the shadow IT department doesn’t give a hoot about manageability and provides its users with ways to end-run corporate IT when the interests of the two groups do not coincide.

One of the key points here is that the disconnect is fundamental. The solution is not a minor improvement to how the IT department works. To some extent the problem is so fundamental that people’s mindsets need to change.

Is this a technology problem?

Can this change? I’m not sure it can, at least in organisations where everything IT is expected to be solved by the IT department. Such a department, especially at the management level, is manned (and it’s usually men, at least for now) by people who have lived within IT departments and succeeded, so that they now reside at the top. In most organisations the IT folk retain final say on “technical” questions (which often aren’t really technical questions at all) because of senior management’s ignorance and fear of anything “technical”. It’s too easy for IT folk to say “you can’t do that” and for senior management not to have a clue that it is a load of bollocks.

Of course, I should take my own advice: look for incompetence before you go paranoid. Senior IT folk, as with most people, will see the problem in the same way they have always seen it, and will seek to solve it with the solutions they’ve used before, because that’s the nature of the problem they see. One of the “technical” terms for this is inattentional blindness.

A fundamental change in approach is not likely. Dave Snowden suggests that the necessary, but not sufficient, conditions for innovation are starvation, pressure and perspective shift. Without that perspective shift, the gulf will continue to exist.

It’s not limited to IT

You can see evidence of this gulf in any relationship between “users” and a service group within an organisation (e.g. finance, human resources, quality assurance, curriculum design etc.) – especially when the service group is a profession. Under pressure from the organisation, the troubles created by the “users”, and the distance (physical, temporal, social, mental etc.) between itself and those “users”, the service group becomes so enamoured of its own problems that it develops its own language, processes and tasks, and starts to lose sight of the organisation’s core business.

The most obvious end result of the gulf is when the service department starts to think it knows best. Rather than respond to the needs, perceived and otherwise, of the “users”, the service department works on what it considers best – generally something that emphasises the importance of the service division and increases its funding and standing within the organisation. You can see this sort of thing all the time with people who are meant to advise academics on how to improve their learning and teaching.

IT is just the easiest and most obvious target for this because IT is now a core part of life for most professions; because most organisations continue to see IT as an overhead to be minimised, rather than an investment to be maximised; and because the on-going development of IT is changing the paradigm for IT departments.