Research Method – Overview

The following is the first part of chapter 3 of my thesis. The aim of this part is to explain the broad view of research that informs the work. The second part will give more specific details about the particular method used. Over the next week I’m re-reading this chapter; when the fixes are done, I will upload a completed version.

Update: The latest version of the complete chapter is available from this page

Introduction

This thesis aims to answer the “how” question associated with the design, development and evolution of information systems to support e-learning in universities. It seeks to achieve this by using an iterative action research process (Cole, Purao et al. 2005) to formulate an information systems design theory (ISDT) (Walls, Widmeyer et al. 1992; Walls, Widmeyer et al. 2004; Gregor and Jones 2007). This chapter aims to situate, explain and justify the nature of the research method adopted in this work. It starts by examining the question of research paradigm and its connection with theory (Section 3.2). In particular, it seeks to explain why the choice of paradigm is seen as secondary to deciding the type of theory to be produced, in terms of selecting a research method. The chapter then uses four questions about a body of knowledge identified by Gregor (2006) to describe the particular perspectives that inform the research method to formulate the ISDT developed in this thesis (Section 3.3).

The formulation of an ISDT is one example of design research (Simon 1996; Hevner, March et al. 2004). At the start of this work, design research was not a dominant research methodology within the field of information systems (Lee 2000). There was a reluctance to accept the importance of this type of knowledge within information systems (Gregor 2002), and to this day there remain diverse opinions and an on-going, evolving understanding of the nature, place and process associated with design research and design theory (Baskerville 2008; Kuechler and Vaishnavi 2008). Consequently, the thinking underlying this thesis, and the content and structure of this chapter, have undergone a number of iterations as understanding has improved throughout the research process. For example, initial descriptions of this work (Jones and Gregor 2004; Jones and Gregor 2006) used the structure of an ISDT presented by Walls, Widmeyer and El Sawy (1992). This thesis uses the improved specification of an ISDT presented by Gregor and Jones (2007), an improvement that arose, in part, from work associated with this thesis. For these reasons, this chapter may delve into greater detail about these issues than is traditional.

Paradigms and theory

It seems traditional at this point to describe the research paradigm that has informed this work, on the assumption that the paradigm embodies a world view providing the fundamental assumptions that guide the research project and its selection of method. This section takes a slightly different approach.

This section argues that the question of research paradigm is of secondary importance to matching the research question with the type of theory that best fits it, and subsequently with the most appropriate research methodology or paradigm. It argues that the aim of research is the generation and evaluation of knowledge (Section 3.2.1) and that this knowledge is typically expressed as different types of theory (Section 3.2.2). Lastly, the section seeks to connect this view with similar views of research paradigms (Section 3.2.3).

What is research?

The sixth edition of the OECD’s (2002) Frascati Manual defines research and experimental development as

creative work undertaken on a systematic basis in order to increase the stock of knowledge, including knowledge of man, culture and society, and the use of this stock of knowledge to devise new applications

Vaishnavi and Kuechler (2004) define research as “an activity that contributes to the understanding of a phenomenon”. Research, in its most conceptual sense, is nothing more than the search for understanding (Hirschheim 1992). Research is systematic, self-critical inquiry: founded in a stable, systematic and sustained curiosity, driven by a desire to understand, and subjected to public criticism and, where appropriate, empirical tests (Stenhouse 1981).

Based on these perspectives it appears that a major aim of research is to generate and evaluate knowledge. Various perspectives exist on the nature of that knowledge, its purpose, validity, novelty, utility and so on. Returning to the OECD (2002), the manual defines research and development as covering three activities:

  1. basic research;
    Experimental or theoretical work, without practical application in view, that aims to acquire new knowledge of the foundations of phenomena and observable facts
  2. applied research; and
    Original investigation aimed at acquiring new knowledge primarily for a specific practical aim or objective.
  3. experimental development.
    Systematic work based on existing knowledge that is directed towards producing new, or improving existing, processes, systems or services.

Even with these differences, a major aim of research appears to be to make a contribution to knowledge. If this is the case, then how is that knowledge represented? In creating and validating knowledge, scientists rely on the clear and succinct statement of theory, theory that embodies statements of the knowledge that has been developed (Venable 2006). Developing theory is what separates academic researchers from practitioners and consultants (Gregor 2006).

The role of theory and method

If an aim of research is to make a contribution to knowledge, should theory be used to represent that knowledge? Theory should be a primary output of research (Venable 2006). Theory development is a central activity in organisational research (Eisenhardt 1989). There is value in theory because it is practical. The practicality of good theory arises because it advances knowledge in a scientific discipline and guides research towards crucial questions (van de Ven 1989). Theories are practical as they enable knowledge to be accumulated in a systematic manner and that accumulated knowledge to be used to inform practice (Gregor 2006).

While there is recognition of the importance of theory, there remain questions about what it is. There has been a long-running search for the meaning of “theory” (Baskerville 2008). DiMaggio (1995) identifies at least three views of what theory should be and suggests that each has some validity and limitations. There is, and has been, disagreement about whether a model and a theory are different, whether or not a typology is a theory, and other questions about theory (Sutton and Staw 1995). Many researchers within information systems use the word theory but fail to give any explicit definition (Gregor 2006). This lack of consensus about what theory is may explain why it is difficult to develop strong theory in the behavioural sciences (Sutton and Staw 1995).

Types of theory

Part of the confusion around theory concerns its purpose and whether or not there are different types of theory. Within the information systems field there have been several different approaches to identifying different types of theory. Iivari (1983) described three levels of theorising: conceptual, descriptive and prescriptive. A number of authors (Nunamaker, Chen et al. 1991; Walls, Widmeyer et al. 1992; Kuechler and Vaishnavi 2008) have used the distinction between kernel and design theories. Taking a broad view of theory, Gregor (2006) identified five inter-related categories of theory based on the primary type of question at the foundation of a research project. These five categories and their questions of interest are summarised in Table 3.1.

Table 3.1 – Gregor’s Taxonomy of Theory Types in Information Systems Research (adapted from Gregor 2006)
Theory type | Distinguishing attributes
I. Analysis | Says “what is”. The theory does not extend beyond analysis and description. No causal relationships among phenomena are specified and no predictions are made.
II. Explanation | Says “what is”, “how”, “why”, “when”, “where”. The theory provides explanations but does not aim to predict with any precision. There are no testable propositions.
III. Prediction | Says “what is” and “what will be”. The theory provides predictions and has testable propositions but does not have well-developed justificatory causal explanations.
IV. Explanation and prediction (EP) | Says “what is”, “how”, “why”, “when”, “where” and “what will be”. Provides predictions and has both testable propositions and causal explanations.
V. Design and action | Says “how to do something”. The theory gives explicit prescriptions (e.g., methods, techniques, principles of form and function) for constructing an artifact.

The taxonomy presented in Table 3.1 is based on little prior work and there exist opportunities for further work and improvement (Gregor 2006). There also remains some disagreement about the designation of Theory Type V as design theory (Venable 2006). However, it does seem to provide a foundation on which to build sound, cumulative, integrated and practical bodies of theory within the information systems discipline (Gregor 2006).

Relationship between theory and method

Gregor (2006) suggests that research begins with a problem to be solved or a question of interest. The type of theory that is to be developed or tested depends on the nature of this problem and the questions the research wishes to address (Gregor 2006). This connection is made on the basis of the primary goals of theory (Gregor 2006). Assuming this image of the research process, it seems logical that the next step is the selection of the research methods or paradigms most appropriate to develop or test the selected theory type. This is not to suggest that there is a one-to-one correspondence between a particular theory type and a particular method or paradigm. Gregor (2006) argues that none of the theory types necessitates a specific method; however, proponents of specific paradigms do favour certain types of theory over others. While there is no necessary correspondence between theory types and methods or paradigms, it is suggested that certain methods or paradigms are better suited to certain types of theory, research problems and researchers.

Recognising different types of theory makes it possible to see the differences as complementary and consequently to integrate them into a larger whole (Gregor 2006). It is possible for research to make a contribution to more than one type of theory. Baskerville (2008) argues that there is clearly more to design research than design theory alone. Kuechler and Vaishnavi (2008) show how a design research project can contribute to both design theory (Gregor’s Type V) and kernel theory (Gregor’s other types). The possibility for a research project to make contributions to different types of theory suggests that a research project may draw upon several different methods or paradigms.

The role of research paradigms

Having briefly summarised the perspective on research, theory and method in previous sections, this section makes some connections between this perspective and the views on research paradigms expressed by Mingers (2001) and the pragmatic view of science/paradigm (Goles and Hirschheim 2000).

Research methodology attempts to approximate a compatible collection of assumptions and goals which underlie methods, the actual methods, and the way the results of performing those methods are interpreted and evaluated (Reich 1995). The assumptions or beliefs about the world, how it works and how it may be understood have been termed a paradigm (Kuhn 1996; Guba 1999). Numerous authors have sought to identify and describe different research paradigms. Lincoln and Guba (2000) identify five major paradigms: positivism, postpositivism, critical theory, constructivism and participatory action. Within the information systems discipline, Orlikowski and Baroudi (1991) identify three broad research paradigms: positivist, interpretive and critical. In connection with the rise of design research, numerous authors (Nunamaker, Chen et al. 1991; March and Smith 1995; Hevner, March et al. 2004) have suggested that it is possible to identify two broad research paradigms within information systems: descriptive and prescriptive research, where descriptive research is seen as traditional research and prescriptive research is design research. There are some who take issue with seeing design research as a separate paradigm (McKay and Marshall 2007).

Just as there are differing views on the number and labels of different research paradigms, there are differences on how to describe them. Guba and Lincoln (1994) describe the beliefs encompassed by a paradigm through three interconnected questions: ontology, epistemology and methodology. Mingers (2001) describes a paradigm as a general set of philosophical assumptions covering ontology, epistemology, ethics or axiology, and methodology. Goles and Hirschheim (2000) use ontology, epistemology and axiology.

Mingers (2001) describes three perspectives on paradigms. These are:

  • isolationism;
    Views paradigms as based on contradictory assumptions which makes them mutually exclusive and consequently a researcher should follow a single paradigm.
  • complementarist; and
    Paradigms are seen as more or less suited to particular problems and selection is based on a process of choice.
  • multi-method.
    Paradigms are seen to focus on different aspects of reality and can be combined to provide a richer understanding of the problem.

Mingers’ (2001) multi-method perspective seems to fit well with a research project seeking to address a research problem through making contributions to different types of theory (as described in Section 3.2.2). Such a perspective suggests that the question of whether a researcher is positivist, interpretivist or critical is of secondary importance to the question of fit between problem, theories and methods.

Such a perspective seems to have connections with the pragmatist perspective of research described by Goles and Hirschheim (2000). Pragmatists consider the research question as more important than the method used or the worldview meant to underpin the method (Tashakkori and Teddlie 1998). Table 3.2 compares four important paradigms, including pragmatism. It has been suggested that pragmatism draws on a philosophical basis of pluralism to undercut the traditional dichotomous battle between conflicting paradigms (Goles and Hirschheim 2000). It facilitates the construction of connections and interplay between conflicting paradigms (Wicks and Freeman 1998).

If a paradigm must be chosen, then pragmatism seems the best fit. This research puts the question of “how to design and support an information system for e-learning within universities” at its focus. The type(s) of theories, the methods to be used and their appropriateness should flow from and align with that question. The following section provides an explanation of this alignment and describes the choices made for this work.

Table 3.2 – Comparisons for four important paradigms used in the social and behavioural sciences (adapted from Tashakkori and Teddlie 1998)
Dimension | Positivism | Postpositivism | Pragmatism | Constructivism
Methods | Quantitative | Primarily quantitative | Quantitative + qualitative | Qualitative
Logic | Deductive | Primarily deductive | Deductive + inductive | Inductive
Epistemology | Objective point of view; knower and known are a dualism | Modified dualism; findings probably objectively “true” | Both objective and subjective points of view | Subjective point of view; knower and known are inseparable
Axiology | Inquiry is value-free | Inquiry involves values, but they may be controlled | Values play a large role in interpreting results | Inquiry is value-bound
Ontology | Naive realism | Critical or transcendental realism | Accept external reality; choose explanations that best produce desired outcomes | Relativism
Causal linkages | Real causes temporally precedent to or simultaneous with effects | There are some lawful, reasonably stable relationships among social phenomena; these may be known imperfectly; causes are identifiable in a probabilistic sense that changes over time | There may be causal relationships, but we will never be able to pin them down | All entities simultaneously shaping each other; it is impossible to distinguish causes from effects

References

Baskerville, R. (2008). "What design science is not." European Journal of Information Systems 17(5): 441-443.

Cole, R., S. Purao, et al. (2005). Being proactive: Where action research meets design research. Twenty-Sixth International Conference on Information Systems: 325-336.

DiMaggio, P. (1995). "Comments on "What theory is not"." Administrative Science Quarterly 40(3): 391-397.

Eisenhardt, K. (1989). "Building theories from case study research." The Academy of Management Review 14(4): 532-550.

Goles, T. and R. Hirschheim (2000). "The paradigm is dead, the paradigm is dead…long live the paradigm: the legacy of Burrell and Morgan." Omega 28: 249-268.

Gregor, S. (2002). "Design Theory in Information Systems." Australian Journal of Information Systems: 14-22.

Gregor, S. (2006). "The nature of theory in information systems." MIS Quarterly 30(3): 611-642.

Gregor, S. and D. Jones (2007). "The anatomy of a design theory." Journal of the Association for Information Systems 8(5): 312-335.

Hevner, A., S. March, et al. (2004). "Design science in information systems research." MIS Quarterly 28(1): 75-105.

Hirschheim, R. A. (1992). Information Systems Epistemology: An Historical Perspective. Information Systems Research: Issues, Methods and Practical Guidelines. R. Galliers. London, Blackwell Scientific Publications: 28-60.

Iivari, J. (1983). Contributions to the theoretical foundations of systemeering research and the Picoco model. Oulu, Finland, Institute of Data Processing Science, University of Oulu.

Jones, D. and S. Gregor (2004). An information systems design theory for e-learning. Managing New Wave Information Systems: Enterprise, Government and Society, Proceedings of the 15th Australasian Conference on Information Systems, Hobart, Tasmania.

Jones, D. and S. Gregor (2006). The formulation of an Information Systems Design Theory for E-Learning. First International Conference on Design Science Research in Information Systems and Technology, Claremont, CA.

Kuechler, B. and V. Vaishnavi (2008). "On theory development in design science research: anatomy of a research project." European Journal of Information Systems 17(5): 489-504.

Kuhn, T. S. (1996). The Structure of Scientific Revolutions. Chicago, University of Chicago Press.

Lee, A. S. (2000). "Irreducibly Sociological Dimensions in Research and Publishing." MIS Quarterly 24(4): v-vii.

March, S. T. and G. F. Smith (1995). "Design and Natural Science Research on Information Technology." Decision Support Systems 15: 251-266.

McKay, J. and P. Marshall (2007). Science, Design and Design Science: Seeking Clarity to Move Design Science Research Forward in Information Systems. 18th Australasian Conference on Information Systems, Toowoomba.

Mingers, J. (2001). "Combining IS Research Methods: Towards a Pluralist Methodology." Information Systems Research 12(3): 240-259.

Nunamaker, J. F., M. Chen, et al. (1991). "Systems development in information systems research." Journal of Management Information Systems 7(3): 89-106.

OECD (2002). Frascati Manual: Proposed standard practice for surveys on research and experimental development. Paris, France, Organisation for Economic Co-operation and Development: 254.

Orlikowski, W. and J. Baroudi (1991). "Studying information technology in organizations: Research approaches and assumptions." Information Systems Research 2(1): 1-28.

Reich, Y. (1995). "The study of design research methodology." Transactions of the ASME.

Simon, H. (1996). The sciences of the artificial, MIT Press.

Stenhouse, L. (1981). "What counts as research?" British Journal of Educational Studies 29(2): 103-114.

Sutton, R. and B. Staw (1995). "What theory is not." Administrative Science Quarterly 40(3): 371-384.

Tashakkori, A. and C. Teddlie (1998). Mixed methodology: combining qualitative and quantitative approaches. Thousand Oaks, California, SAGE.

Vaishnavi, V. and B. Kuechler (2004). "Design Research in Information Systems." Retrieved 20 April 2004, from http://www.isworld.org/Researchdesign/drisISworld.htm.

van de Ven, A. (1989). "Nothing is quite so practical as a good theory." The Academy of Management Review 14(4): 486-489.

Venable, J. (2006). The role of theory and theorising in design science research. First International Conference on Design Science Research in Information Systems and Technology, Claremont, CA.

Walls, J., G. Widmeyer, et al. (2004). "Assessing information system design theory in perspective: How useful was our 1992 initial rendition." Journal of Information Technology, Theory and Application 6(2): 43-58.

Walls, J., G. Widmeyer, et al. (1992). "Building an Information System Design Theory for Vigilant EIS." Information Systems Research 3(1): 36-58.

Wicks, A. and R. E. Freeman (1998). "Organization studies and the new pragmatism: Positivism, anti-positivism and the search for ethics." Organization Science 9(2): 123-140.

Dimensions delimiting conceptions of online teaching – something to guide the indicators and the evaluation of LMS data?

Col Beer has been doing some work around the “indicators” project – an attempt to mine system logs and databases of a course management system (CMS) to generate data of some use.

One of the (many) potential problems with the work, and work of its like, has been attempting to generate some sort of understanding about how you can rank or categorise the type of learning or activity taking place on the CMS.

In the following I wonder if the work on teachers’ conceptions of teaching, particularly that associated with online teaching (e.g. Gonzalez, 2009) might provide a useful solution to this problem.

Research on teachers’ conceptions of teaching

There is a large amount of research, quite a research tradition, around understanding the different conceptions of teaching (and subsequently learning) that academics bring to their practice. Much of this work is premised on the belief that the quality of student learning is directly influenced and constrained by the conceptions of teaching held by teaching staff. (Following from this is the idea that to improve the quality of student learning you have to target teachers’ conceptions of teaching, but that is another story.)

Teachers’ conceptions of online teaching

Gonzalez (2009) extends the work on teachers’ conceptions of teaching to the online environment. One of the contributions of this work is a set of “dimensions delimiting conceptions of online teaching”. The following table is adapted from Gonzalez (2009) and represents these dimensions. I wonder if these dimensions could be used to guide the indicators project? More on this below.

Dimensions delimiting conceptions of online teaching (Gonzalez, 2009: p 310)
Dimension | The web for individual access to learning materials and information, and for individual assessment | The web for learning-related communication (asynchronous and/or synchronous) | The web as a medium for networked learning
Teacher | Provides structured information / directs students to selected web sites | Sets up spaces for discussion / facilitates dialogue | Sets up spaces for communication, discussion and knowledge building / facilitates and guides the process
Students | Individually study materials provided | Participate in online discussions | Share and build knowledge
Content | Provided by lecturer | Provided by the lecturer but students can modify or extend it through online discussions | Built by students using the space set up by the lecturer
Knowledge | Owned by lecturer | Discovered by students within lecturer’s framework | Built by students

The benefit this provides is an existing framework, with some basis in research about what staff already do, to guide the design of statistics/indicators to be drawn from system logs and databases: statistics that could indicate the conception of online teaching held by an academic. This could be useful to identify “good” staff using more advanced pedagogy, identify the traditional ones, use this insight to guide training and interventions, and perhaps, as part of a research project, establish connections between the conceptions identified from the system logs and student outcomes in terms of final results.

For example, some potential indicators

  • A course where all content is provided by the academics indicates that the staff member is at the “lower” end.
  • The use of tools such as wikis, blogs (tools that encourage contributions from students) and which are actively used by students indicates a staff member/courses at the “higher” end.
  • A course site where the site framework is put in place by the academic and can’t be modified by students, indicates low end.
  • A large amount of discussion from students with low levels of interaction indicates someone in the middle. High levels of interaction indicate someone at the higher level.
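
The heuristics above could be sketched in code. The following is a minimal, purely illustrative Python sketch that maps a few invented log-derived counts onto Gonzalez’s three conceptions; every field name and threshold here is hypothetical and would need calibrating against real CMS data before being taken seriously.

```python
# Hypothetical sketch: classifying a course against Gonzalez's (2009) three
# conceptions of online teaching using counts mined from CMS logs.
# All field names and thresholds are invented for illustration only.

from dataclasses import dataclass


@dataclass
class CourseIndicators:
    teacher_content_items: int   # content items posted by staff
    student_content_items: int   # wiki/blog contributions by students
    discussion_posts: int        # forum posts by students
    replies_to_students: int     # posts replying to another student


def classify_conception(c: CourseIndicators) -> str:
    """Rough heuristic mapping indicator counts to a conception of online teaching."""
    # "Higher" end: students actively build content (wikis, blogs).
    if c.student_content_items > c.teacher_content_items:
        return "networked learning"
    # "Middle": discussion occurs and students interact with each other.
    if c.discussion_posts > 0 and c.replies_to_students / c.discussion_posts > 0.5:
        return "learning-related communication"
    # "Lower" end: content provided by staff, little student contribution.
    return "individual access to materials"


# An invented example: lots of staff content, almost no student activity,
# so the course falls at the "lower" end.
print(classify_conception(CourseIndicators(40, 0, 5, 1)))
```

The point of the sketch is not the particular thresholds but the design: once indicators are expressed as a simple record per course, the mapping to conceptions becomes an explicit, inspectable rule that can be argued about and refined.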

Implications and questions

There are probably many more than the simple ones outlined below. But it is getting late.

  • There is mention of the role context plays in limiting or influencing teachers’ conceptions (and thus the quality of student learning); should the nature and affordances of the technology available play a similar role?
    • Do the affordances of a CMS actively get in the way of teachers being able to use, or even be aware of, the “networked learning” (the “good”) approach?
    • Do the affordances of a PLE type approach actively encourage a more “networked learning” approach?
  • Can this work help expand/enhance the evaluation of learning and teaching, which is somewhat limited at most universities?
  • Is there a role in a design theory for e-learning for some of these ideas?

References

Gonzalez, C. (2009). “Conceptions of, and approaches to, teaching online: a study of lecturers teaching postgraduate distance courses.” Higher Education 57(3): 299-314.

Is all diversity good/bad – a taxonomy of diversity in the IS discipline

In a previous post I pointed to and summarised a working paper that suggests that IS research is not all that diverse. At least at the conceptual level.

The Information Systems (IS) discipline has for a number of years been having an on-going debate about whether or not the discipline is diverse. Part of that argument has been discussion about whether diversity is good or bad for IS and for a discipline in general.

Too much diversity is seen as threatening the academic legitimacy and credibility of a discipline. Others have argued that too little diversity could also cause problems.

While reading the working paper titled “Metaphor, meaning and myth: Exploring diversity in information systems research” I began wondering about the definition of diversity. In particular, the questions I was thinking about were

  1. What are the different types of diversity in IS research?
    Based on the working paper I believe there are a number of different types of diversity. What are they?
  2. Are all types of diversity bad or good?
    Given I generally don’t believe in universal generalisations, my initial guess is that the answer will be “it depends”. In some contexts/purposes, some will be bad and some will be good.
  3. Is this topic worthy of a publication (or two) exploring these questions and the implications they have for IS and also for other disciplines and research in general?
    Other disciplines have had these discussions.
  4. Lastly, what work have IS researchers already done in answering these questions, particularly the first two?
    There’s been a lot of work in this area, so surely someone has provided some answers to these questions.

What different types of diversity exist?

The working paper that sparked these questions talks about conceptual diversity.

It also references Benbasat and Weber (1996) – two of the titans of the IS discipline and this article is perhaps one of “the” articles in this area – who propose three ways of recognising research diversity

  1. Diversity in the problems addressed.
  2. Diversity in the theoretical foundations and reference disciplines used to account for IS phenomena.
  3. Diversity of research methods used to collect, analyse and interpret data.

The working paper also suggests that Vessey et al (2002) added two further characteristics

  1. Research approach.
  2. Research method.

I haven’t read the Vessey paper but given this summary, I’m a bit confused. These two additional characteristics seem to fit into the 3rd “way” from Benbasat and Weber. Obviously some more reading is required.

In the work on my thesis I’m drawing on four classes of questions about a domain of knowledge from Gregor (2006). They are

  1. Domain questions. What phenomena are of interest in the discipline? What are the core problems or topics of interest? What are the boundaries of the discipline?
  2. Structural or ontological questions. What is theory? How is this term understood in the discipline? Of what is theory composed? What forms do contributions to knowledge take? How is theory expressed? What types of claims or statements can be made? What types of questions are addressed?
  3. Epistemological questions. How is theory constructed? How can scientific knowledge be acquired? How is theory tested? What research methods can be used? What criteria are applied to judge the soundness and rigour of research methods?
  4. Socio-political questions. How is the disciplinary knowledge understood by stakeholders against the backdrop of human affairs? Where and by whom has theory been developed? What are the history and sociology of theory evolution? Are scholars in the discipline in general agreement about current theories or do profound differences of opinion exist? How is knowledge applied? Is the knowledge expected to be relevant and useful in a practical sense? Are there social, ethical or political issues associated with the use of the disciplinary knowledge?

I wonder if these questions might form a useful basis or a contribution to a taxonomy of diversity in IS. At this stage, I think some sort of taxonomy of diversity might indeed be useful.

It is official – a best publication for IS in 2007

I don’t like to brag, but you don’t get this sort of thing all that often.

Last week I was in Paris for the ICIS’2008 conference. The main reason for going to the conference was to receive an award.

It turns out that The Anatomy of a Design Theory, by Professor Shirley Gregor and myself, has been voted one of the 5 best publications within the Information Systems discipline by a group of senior scholars.

That goes with being the paper of 2007 for the Journal of the Association for Information Systems.

Thanks Shirley.

I was going to include a photo of the little plaque we each received, but that was perhaps taking things a bit too far.

Information Systems Epistemology: An Historical Perspective


This is a summary, review, attempt to understand, and pick tidbits from the following book chapter

Hirschheim, R. A. (1992). Information Systems Epistemology: An Historical Perspective. In R. Galliers (Ed.), Information Systems Research: Issues, Methods and Practical Guidelines (pp. 28-60). London: Blackweel Scientific Publications.

It’s an attempt to start moving on Chapter 3 of my thesis.

Basic summary

Provides a basic overview/introduction to the history of epistemology

The bits I found particularly interesting, given my current state of understanding and work, include

  • quotes about the role of science being the search for understanding
  • Suggestions that it is better to view science as problem solving – a Popper quote that understanding is the same as problem solving. Particularly appropriate for what I understand as design research.
  • Very nice quote about researchers needing to be tool builders
  • Quotes from early anti-positivists of a need to complement positivism, not replace it.
  • Quote from Schutz about the main function of social science being understanding, subjective meaning and action. Does this imply design research type work?

Abstract

The paper aims to take a look at the history of epistemology within the IS discipline and consequently expose hidden assumptions beneath the conception of valid research and research methods.

It is my contention that information systems epistemology draws heavily from the social sciences because information systems are, fundamentally, social rather than technical systems.

The suggestion is that the natural sciences' scientific paradigm is only as appropriate for information systems as it is for the social sciences.

Fundamental aspects of epistemology

epistemology – our theory of knowledge, how we acquire knowledge.

What is knowledge – the author considers it to be roughly synonymous with understanding.

Raises two questions

  1. What is knowledge – a simple problem
  2. How do we obtain valid knowledge – more problematic

What is knowledge?

Mentions the Greeks and their two types of knowledge

  1. doxa – knowledge believed to be true
  2. episteme – knowledge known to be true

This leads into the Sophists. How do we know something is true?

Author suggests it’s a straightforward problem. Since we cannot transcend our language/cultural system there is no chance of obtaining any absolute viewpoint. Hence knowledge must be “asserted”; knowledge claims are conceived in a probabilistic sense. Knowledge is not infallible but conditional, a social convention, relative to both time and place, of societal or group acceptance.

How do we obtain knowledge?

“This is the role of science”. But science itself is related to societal norms and expectations. So it can be argued about. “In its most conceptual sense, it is nothing more than the search for understanding” (p30)

This implies that, given any particular cultural/societal view, just about any “scholarly” attempt at acquiring knowledge can be labeled “science”. The distinction between science and non-science is blurred if you take a “multi-cultural” approach. Any particular culture may well have a fairly well-defined boundary.

“The conventions we agree to are those that have proved successful in the past. If, however, the conventions – and therefore our scientific process – cease to be successful then it would be time to reconsider” (p30-31)

It could be suggested that these are the origins of design research: a pushing of conventions because of perceived limitations.

Many of us are concerned that the present accepted research methods are no longer appropriate for the subject – indeed, they may never have been. What is needed is a fresh look at the field; in particular what is the most appropriate epistemological stance.

Science and method

Begins by laying the groundwork about the limitations/lack of success of the natural science approach. e.g. “yielded many knowledge claims but most do not have widespread community acceptance” (p31). Relates it to similar literature in the social sciences.

Some have suggested that science is better described in terms of problem or puzzle solving. If this is done, then many problems disappear because the emphasis has shifted away from correlations and statistical significance to simply looking for an appropriate way to solve a problem.

This is very similar to some of the underpinnings of the design research work.

Author goes on to quote Popper (1972): “The activity of understanding is, essentially, the same as that of problem solving.” Science, in this view, becomes more about practical solutions to problems.

The following paragraph has some interesting implications for design research.

Some chose to view the process of problem solving as a craft (Pettigrew, 1985). Within this context the researcher should be viewed as a craftsman or a tool builder – one who builds tools, as separate from and in addition to, the researcher as tool users. Unfortunately, it is apparent that the common conception of researchers/scientists is different. They are people who use a particular tool (or a set of tools). This, to my mind, is undesirable because if scientists are viewed in terms of tool users rather than tool builders then we run the risk of distorted knowledge acquisition techniques. As an old proverb states: ‘For he who has but one tool, the hammer, the whole world looks like a nail’. We certainly need to guard against such a view, yet the way we practice ‘science’ leads us directly to that view.

Eventually gets to the point about positivism being the predominant conception of science. Defines it as “an epistemology which posits beliefs (emerging from the search for regularity and causal relationships) and scrutinizes them through empirical testing”.

Positivist science

Seeks to define/understand positivist science, uses 5 points

  1. Unity of the scientific method
    The scientific method – the accepted approach for knowledge acquisition – is universally applicable regardless of the domain of study.
  2. The search for Humean causal relationships
    There is a desire for regularity and causal relationships amongst the elements of the study. The whole is reduced into its constituent parts – reductionism.
  3. The belief in empiricism
    The only valid data are those which are experienced through the senses. Subjective perception, extrasensory experience etc. are not acceptable.
  4. The value-free nature of science (and its process)
    There are no connections between the practice of the scientific method and political, ideological or moral beliefs.
  5. The logical and mathematical foundation of science
    They provide the formal basis for the quantitative analysis in the search for causal relationships.

Ontology of positivism

Ontology – the nature of the world around us and the part of it which the scientist examines

Positivism has a realist ontology. i.e. the universe consists of objectively given, immutable objects and structures that exist as empirical entities and are independent of the observer’s appreciation of them.

Which contrasts with relativism or instrumentalism which holds that reality is a subjective construction of the mind. The names and descriptions of reality that are communicated impact on how reality is perceived and structured….more on this

Positivism has had success in the natural sciences; its record in the social sciences is somewhat more checkered.

Author provides some summaries of the historical development of epistemology and in particular draws on one provided by Ivanov (1984) which is shown in the following image.

Relevant schools of thought for information science (Ivanov, 1984)

History of IS epistemology

Divides up and introduces a history of IS epistemology into 4 stages

  1. The arrival of positivism
    • starts with the dark ages, when the church and the study of God were the only intellectual pursuits, through the emergence of science into the 17th century
    • Descartes a major source of positivism. Mathematics as the sole basis for study. All properties could be reduced to mathematical form. Separation of mind and matter/mind and body.
    • positivism and empiricism came out of the late renaissance period. Bacon and the inductive-experimental method. Galileo – nature is consistent, not random. Newton stressing the need for experimental confirmation. Hobbes – humans could be studied using the same methods as physical phenomena.
    • and more into the 1900s.
  2. The entering of anti-positivism
    • Arriving in the latter part of the 19th century, concerned that positivism was missing the fundamental experience of life.
    • A number talked about the need for something apart from positivism not something to replace it, something to complement it, hence the name: anti-positivism
    • Traces it back to a number of authors and gives summaries of their position. I found the description of Kant, interesting.
      Kant believed you achieve knowledge through a synthesis (which he called ‘transcendental’) of concept (understanding) and experience. The philosophy that arises is called ‘transcendental idealism’ in which there is a difference between theoretical reason (dealing with knowledge of appearances – the realm of nature) and practical reason (moral reasoning – issues).

      Okay, so less interesting towards the end. Is this perhaps the limitation of the short description in the paper? What does Wikipedia say? Ahh, this quote piques my interest

      Kant argues, however, that using reason without applying it to experience will only lead to illusions, while experience will be purely subjective without first being subsumed under pure reason.

      I can see this being applied to a range of issues and problems I’m currently thinking of including: Kaplan’s law of instrument, teleological design etc.

      Similarly interesting is the discussion of Dilthey and a couple of comments on his beliefs. First, the suggestion that life cannot be “understood as a machine, as Hobbes suggested”. This might be useful for Sandy and her work.

      Secondly, that life cannot be understood using the explanatory model and its attempt to classify events according to laws of nature. Connections with Shirley’s theory of theories stuff.

      The wikipedia page on Dilthey has this to say

      Dilthey strongly rejected using a model formed exclusively from the natural sciences (Naturwissenschaften), and instead proposed developing a separate model for the human sciences (Geisteswissenschaften). His argument centered around the idea that in the natural sciences we seek to explain phenomena in terms of cause and effect, or the general and the particular; in contrast, in the human sciences, we seek to understand in terms of the relations of the part and the whole. In the social sciences we may also combine the two approaches, a point stressed by German sociologist Max Weber.

      And lastly, a nice quote for the Ps Framework

      Because individuals do not exist in isolation, they cannot be studied as isolated units; they have to be understood in the context of their connections to cultural and social life.

      which apparently is a quote from Polkinghorne (1983).

  3. The re-entering of positivism (through logical positivism)
    Logical positivism is suggested to be the dominant epistemology of contemporary science. But still rooted in positivism.

  4. The arrival of the contemporary critics
    Everyone’s a critic, so logical positivism didn’t last long. Some of the criticisms

    • Does not separate observable from theory
      What you observe is influenced by the theories. “In fact, it is unlikely that observation can be theory free”.
    • Lack of success in using deductive reasoning to overcome the problem of induction
    • the idea of value-free science
      In the guise of neutrality, the researcher is in fact tacitly supporting the status quo.

    Goes on about a range of others. Interestingly, Schutz influenced by Weber and Husserl got into phenomenology. “Schutz contended that Weber’s concept that the main function of the social scientist was to interpret, did not go far enough. He believed the main characteristics of social science must be ‘understanding’, ‘subjective’ meaning and ‘action'”. This has some interesting implications for the work of Hevner et al that separate out natural/social sciences from design – perhaps.

  5. Post-positivism – being a fifth stage which the author suggests is currently emerging
    Arising out of a growing band of researchers unhappy with positivism. Picks up the line of thought that knowledge is not apodeictic (i.e. a logical certainty, self-evident). Instead knowledge is accepted by some community which accepts it as an improvement on previous understanding.

    Suggested that it is more a belief about knowledge than a school of thought with agreed tenets. The wikipedia page seems to suggest a little differently

    The main tenets of postpositivism (and where it differs from positivism) are that the knower and known cannot be separated, and the absence of a shared, single reality.

    A part of this is a methodological pluralism, i.e. that there is no correct method, simply many that may be contingent on the problem being studied or the ‘kind’ of knowledge desired.

A Paradigmatic Analysis of Information Systems As a Design Science

The following is a summary of and reflection upon

Juhani Iivari, (2007), A Paradigmatic Analysis of Information Systems As a Design Science, Scandinavian Journal of Information Systems, 19(2):39-64

Reflection

This paper is somewhat similar, at a very abstract level, to one I’ve been thinking about. However, it’s told from a different perspective, with a different intent and different outcomes (and probably much better than I could). There is enough difference that I think I can still contribute something.

One aspect of that difference would come from the fact that the foundation of my thoughts will be Shirley’s types of theories which Juhani identifies as being more complete than the framework he developed.

Questions and need for further thinking

In the epistemology of design science section the author outlines a framework to structure IS research. Somewhat equivalent to Shirley’s theory of theories. Does this structure belong in the “epistemology” section or the “ontology” section?

The question of truth value and truthlikeness is something I need to read on further

The 12 theses

The author summarises his view in 12 theses, I’ve listed them below with, where it exists, some early indication of some of my problems and/or thoughts. At least those that currently exist.

  1. Information Systems is ultimately an applied discipline.
    I agree. Juhani mentions the problems with the term “applied science” in the first footnote.
  2. Prescriptive research is an essential part of Information Systems as an applied discipline.
    Agreed. I would add that there has been significantly too much focus on the other forms of research – descriptive and explanatory – at the expense of prescriptive research. A flaw that has negatively impacted on the IS discipline.
  3. The design science activity of building IT artifacts is an important part of prescriptive research in Information Systems.
    I agree, however, I don’t see it as the main output or purpose of prescriptive research in information systems. At least not any more than building a quantitative survey is the main contribution/output of descriptive/explanatory research. For me, building an IT artifact is a method to test the theory being developed.
  4. The primary interest of Information Systems lies in IT applications and therefore Information Systems as a design science should be based on a sound ontology of IT artifacts and especially of IT applications.
    There’s a glimmer of agreement here. Not sure how far that goes. I see IS as having a main interest in how IT applications are used by and impact organisations/groups/people. For me the focus on just IT applications is computer science.
  5. Information Systems as a design science builds IT meta-artifacts that support the development of concrete IT applications.
    Agree, but with meta-artifacts expressed as information systems design theories.
  6. The resulting IT meta-artifacts essentially entail design product and design process knowledge.
    Yes.
  7. Design product and design process knowledge, as prescriptive knowledge, forms a knowledge area of its own and cannot be reduced to the descriptive knowledge of theories and empirical regularities.
    Not certain about this one. Mention a bit more below.
  8. Constructive research methods should make the process of building IT meta-artifacts disciplined, rigorous and transparent.
    Agree.
  9. Explication of the practical problems to be solved, the existing artifacts to be improved, the analogies and metaphors to be used, and/or the kernel theories to be applied is significant in making the building process disciplined, rigorous and transparent.
    Agree, but need more time to think about whether this is complete.
  10. The term ‘design theory’ should be used only when it is based on a sound kernel theory.
    Probably disagree, see more discussion below. Need more thought.
  11. Information Systems as a design science cannot be value-free, but it may reflect means-end, interpretive or critical orientation.
    Yes agree. I wonder if there are any other additional ethical perspectives.
  12. The values of design science research should be made as explicit as possible.
    Yes.

What distinguishes design science from IT development practice

Juhani suggests the use of rigorous constructive research methods as what distinguishes practice from design science. Which leads him to admit that if a practitioner uses a constructive research method, then they are doing research.

I find this vaguely troubling

I would suggest that we need to move to the output. My view assumes that an artifact is not a sufficient output for design science. If you accept that the expected output of research is the generation or testing of theory (knowledge), then the output of design science should be design theory (though I don’t like the phrase design science). An artifact can be part of the design theory, but not the sole output.

An IT practitioner will not (typically) generate design theory. They generate artifacts. A researcher aims to go the next step and generate design theory.

Does DSR have a positivistic epistemology

Juhani argues that action research and design science research are very different in terms of history, practice, ontology and epistemology. As part of this he suggests that DSR (especially from engineering and medicine) is based on a positivistic epistemology, and he argues against Cole et al’s suggestion that it might be possible for some applications of DSR around IS within organisations to have a different epistemology.

This argument is based on his work on the paradigmatic assumptions of systems development approaches, which found that all 7 IS development approaches shared a fairly realist ontology and positivistic epistemology.

However, earlier in the paper he argues that systems development approaches are not a good match for use as constructive research methods. Hence how can an analysis of systems development approaches be used to argue anything about DSR? Yes, there is likely to be some strong overlap, but it doesn’t seem to be strong evidence.

Also, simply because these systems development approaches (and, one assumes, IS developers/researchers) have historically held this particular view, it does not follow that some practice of DSR with a different epistemology is excluded.

Test artifacts in laboratory and experimental situations as far as possible

It is suggested that action research can be used to evaluate artifacts and provide information on how to improve these artifacts. However, Juhani also suggests that design science artifacts should be tested in laboratory studies as far as possible.

I believe this closes off a major fruitful way of developing design theory. An approach that ties very much into Juhani’s first major source of ideas for design science research – practical problems and opportunities. DSR that uses action research as a methodology to not only evaluate but also inform the design of an artifact/ISDT can lead to very fruitful ideas.

Does a design theory need a kernel theory

Juhani says yes. If we do without there is a “danger that the idea of a ‘design theory’ will be (mis)used just to make our field sound more scientific without any serious attempt to strengthen the scientific foundation of the meta-artifacts proposed”.

There is something to this, but I also have some qualms/queries which I need to work through. The queries are

  • Situations where descriptive theory has to catch up with prescriptive theory.
    i.e. physics of powered flight being figured out after the Wright brothers flew.
  • Situations where descriptive theory is closing off awareness or insight.
    Someone deeply aware of descriptive theories will have a set of patterns established in their head which may limit their ability to be aware of the situation or envision different courses of action (i.e. inattentional blindness aka perceptual blindness).

    Awareness of a situation, or the ability to avoid established descriptive theories, may highlight new and interesting solutions (yes, I think this occurrence might be rare).

There is an argument to be had about the difference between the final version of the ISDT and its formulation. It may be that a complete/formal ISDT does need to have a kernel theory or two. However, it may not have been there at the beginning.

For example, the work that forms the basis of my design theory for e-learning started without clearly stated and understood kernel theories based on formal descriptive research. However, a very early paper (Jones and Buchanan, 1996) on that work included the following

It is hoped that the design guidelines emphasising ease of use and of providing the tools and not the rules will decrease the learning curve and increase the sense of ownership felt by academic staff.

It’s not difficult to see in that statement a connection with diffusion theory and TAM. Descriptive knowledge that has informed later iterations of this work and diffusion theory certainly gets a specific inclusion as a kernel theory in the final ISDT.

What’s the kernel theory for the IS development life cycle

In footnote 7 the author writes that Walls et al (1992) “suggest that the information systems development life-cycle is a design theory, although I am not aware of any kernel theory on which it is based.”

I agree, in so much as I’m not aware of a clear statement of the kernel theories that underpin the SDLC. I also think that the absence of such a clear statement is a potential shortcoming.

There is a world view embodied in the SDLC. For example, I believe that the SDLC assumes that the world fits into the simple or complicated fields of the Cynefin Framework and is completely inappropriate when used in other types of systems – even in the complicated field it can be difficult. Agile/emergent development methodologies appear to be a better fit for the complex section of Cynefin.

Which raises the question: is there value in going back and developing an ISDT for the SDLC which makes clear the assumptions that underpin it by providing kernel theories?

Irreducibility of prescriptive knowledge to descriptive knowledge

Juhani states that, since most IT artifacts aren’t strongly based on descriptive knowledge

This makes one wonder whether the IS research community tends to exaggerate the significance of descriptive theoretical knowledge for prescriptive knowledge of how to design successful IT artifacts. In conclusion, in line with Layton (1974) I am inclined to suggest that prescriptive knowledge forms a knowledge realm of its own and is not reducible to descriptive knowledge.

That seems to be a rather large leap to me. The questions it brings to mind include

  • Does the absence of strong links mean it’s irreducible?
    I don’t understand how Juhani has gotten from “most IT artifacts have weak links to descriptive knowledge” to “prescriptive knowledge is not reducible to descriptive knowledge”.

    Not to suggest it’s wrong. It’s just that I’m not smart enough to make the connection, yet.

  • Is there more to this statement than meets the eye?

    Despite this weak reliance on descriptive theories people design reasonably successful IT artifacts.

    • What types of artifacts are reasonably successful? Who says? Why are they successful?
      There’s a large amount of literature about the failure of large scale information systems. Is that failure due to the weak reliance?

      We can all point to systems that are being used by people to perform tasks. But does use mean success? Does it mean that the need of the folk is strong enough that they will adapt and work around the system enough to do the task they wish to achieve? Is success generating the best possible system? How do you evaluate that?

      Perhaps the success of some systems, even with weak reliance on descriptive knowledge, simply proves how adaptable people are.

    • Does weak reliance, mean none?
      The example I give above shows a situation where, without knowledge of a specific type of descriptive knowledge (diffusion theory), a practitioner was already aware of something very similar, a need to go that way. An example of the relevance/rigour gap?

If you haven’t noticed, I lost my way in the above. Need to come back to it. I feel there is more to unpack there.

Summary

Abstract

Discusses the following aspects of design science:

  • ontology – suggests ontology of IT artifacts, draws on Popper’s three worlds as a starting point
  • epistemology – emphasizes the irreducibility of the prescriptive knowledge of IT artifacts to theoretical descriptive knowledge, suggests a 3 level epistemology for IS – conceptual knowledge, descriptive knowledge and prescriptive knowledge
  • methodology – expresses a need for constructive research methods for disciplined, rigorous, transparent building of IT artifacts as outcomes of design science research (so as to distinguish design research from simply developing IT artifacts), also discusses connections between action research and design science research.
  • ethics – points out IS as a design science cannot be value free, distinguishes three ethical positions: means-end oriented, interpretive and critical


Introduction

Computer science has always been doing design science research. Much of the early IS research focused on systems development approaches and methods – i.e. design science research

But the last 25 years of mainstream IS research has lost sight of these origins – due to the “hegemony of the North-American business-school-oriented IS research” over leading IS publication outlets.

The dominant research philosophy has been to develop cumulative, theory-based research to be able to make prescriptions.

A pilot analysis of practical recommendations in MISQ articles between 1996 and 2000 showed they were weak (Iivari et al. 2004).

Current upsurge in interest in design science may change this. Also important that these papers have turned attention onto how to do design science research more rigorously.

IS is increasingly being seen as an applied science, a quote from Benbasat and Zmud (2003)

our focus should be on how to best design IT artifacts and IS systems to increase their compatibility, usefulness, and ease of use or on how to best manage and support IT or IT-enabled business initiatives.

Iivari’s (1991) previous work on applying paradigms to IS development approaches or schools of thought used the Burrell and Morgan (1979) framework but expanded it in two ways to encapsulate his design science background.

  1. Added ethics as an explicit dimension
  2. incorporated constructive research to complement nomothetic and idiographic research

This essay revisits that work and applies it directly to design science research.

Ontology

States design research should be based on a sound ontology. However, does not state explicitly (at least at this stage) why this is the case. I’m not suggesting that it shouldn’t be based on a sound ontology, but I want to know why Juhani thinks it should be.

Talks about Popper’s (1978) three worlds as the basis for this ontology (a lecture delivered by Popper)

  • World 1 – physical objects and events, including biological entities
  • World 2 – mental objects and events
  • World 3 – products of the human mind, includes human artifacts and also covers institutions and theories

Popper notes that World 3 includes “also aeroplanes and airports and other feats of engineering.”

Iivari argues

  • institutions are social constructions that have been objectified (Berger and Luckmann, 1967)
  • truth and ‘truthlikeness’ (Niiniluoto 1999) can be used in the case of theories, but not artifacts
  • Artifacts are only more or less useful for human purposes

The computing disciplines are interested in IT artifacts. Dahlbom (1996) adopts a broad and possibly confusing interpretation of the concept of the artifact, including people and their lives. Coming back to just IT he says

When we say we study artifacts, it is not computers or computer systems we mean, but information technology use, conceived as a complex and changing combine of people and technology. To think of this combine as an artifact means to approach it with a design attitude, asking questions like: Could this be different? What is wrong with it? How could it be improved? (p. 43).

Dahlbom also claims the discipline should be thought of as “using information technology” instead of “developing information systems” (p.34). Need to look at this more to see if there is much more to this claim than the surface interpretation.

Starts thinking about developing a sound ontology for design science. Identifies the need to answer the question about what sort of IT artifacts IS should build, especially if we wish to distinguish ourselves from computer science. In terms of ontology of artifacts mentions

  • Orlikowski and Iacono (2001) – views of the IT artifact
    And their list of views of technology: computational, tool, proxy and ensemble.
  • March & Smith (1995)/Hevner et al (2004) from design research
    And their constructs, models, methods and instantiations. Iivari suggests this is a very general classification; its application is not always straightforward.
  • diffusion of innovations – Lyytinen and Rose (2003), refining Swanson (1994), identify
    • base innovations
    • systems development innovations
    • services – administrative process innovations (e.g. accounting systems), technological process innovations (e.g. MRP), technological service innovations (e.g. remote customer order entry), and technological integration innovations (e.g. EDI).

In my view the primary interest of Information Systems lies in IT applications.

Defines 7 archetypes of IT applications. As archetypes they may not occur in practice in their pure forms.

Role/function | Metaphor | Examples | Connection with Orlikowski & Iacono
To automate | Processor | Many embedded or transaction processing systems | Technology as labour substitution tool
To augment | Tool (proper) | Many personal productivity systems; computer-aided design | Technology as productivity tool
To mediate | Medium | Email, instant messaging, chat rooms, blogs, electronic storage systems (e.g. CDs and DVDs) | Technology as social relations tool
To informate | Information source | Information systems proper | Technology as information processing tool
To entertain | Game | Computer games | –
To artisticize | Piece of art | Computer art | –
To accompany | Pet | Digital (virtual and robotic) pets | –

The above table builds on an interpretation of an information system which

  • is a system whose purpose “is to supply its group of users with information about a set of topics to support their activities” (Gustafsson et al, 1982, p100)
  • implies that an IS is specific to the organisational/inter-organisational context in which it is implemented
  • information content is also a central aspect

Differences between IT artifacts include

  • In design – different design approaches used for different purposes
  • In their diffusion – Swanson (1994) and Lyytinen and Rose (2003)
  • In their acceptance – Iivari’s conjecture

Proposes that IT artifacts have invaded all of Popper’s worlds

  1. IT artifacts are embedded in natural objects, e.g. to measure physical states, and nanocomputing may open up new opportunities. How IT artifacts affect natural phenomena is likely to become a significant research problem.
  2. IT artifacts are influencing our consciousness and mental states, our perceptions.
  3. Significant constituents of organisations and societies – make it feasible to develop more complex theories.

The research phenomena below illustrate how this ontology influences epistemology and methodology

  1. How does the use of a mobile phone affect one’s brain temperature?
  2. How does the use of a mobile phone affect one’s perception of time and space?
  3. How do mobile phones affect the nature of work in organisations?

An ontology for design science

World | Explanation | Research phenomena | Examples
World 1 | Nature | IT artifacts + World 1 | Evaluation of IT artifacts against natural phenomena
World 2 | Consciousness and mental states | IT artifacts + World 2 | Evaluation of IT artifacts against perceptions, consciousness and mental states
World 3 | Institutions | IT artifacts + World 3 institutions | Evaluation of organizational information systems
World 3 | Theories | IT artifacts + World 3 theories | New types of theories made possible by IT artifacts
World 3 | Artifacts: IT artifacts, IT applications, meta IT artifacts | IT artifacts + World 3 artifacts | Evaluation of the performance of artifacts comprising embedded computing

Epistemology of design science

Truth, utility and pragmatism. Argues against the adoption of the idea from pragmatism that truth is seen as practical utility. Artifacts, if theories are excluded, do not have any truth value. Practical action informed by theory may develop some level of truth if it consistently proves to be successful.

Draws on his earlier work in adopting a framework from economics to structure research within IS. It’s again based on the type of knowledge being produced; in his case there are three types

  1. Conceptual knowledge – which has no truth value
    Includes concepts, constructs, classifications, taxonomies, typologies and conceptual frameworks.
  2. Descriptive knowledge – has truth value
    Includes observational facts, empirical regularities and theories/hypotheses, which group under causal laws.
  3. Prescriptive knowledge – which has no truth value
    Design product knowledge, design process knowledge and technical norms.

The author suggests the following mapping between his framework and Shirley’s types of theory

  1. Conceptual – “Theories for analysing and predicting”
  2. Descriptive – theories for explaining and predicting and theories for explaining (as empirical regularities)
    Can include

    • observational facts – who invented what, when.
    • descriptive knowledge – TAM, Moore’s law
    • empirical regularities and explanatory theories identify causal laws that are either deterministic or probabilistic
  3. Prescriptive – theories for design and action
    Relatively speaking, prescriptive knowledge is the least well understood
    form of knowledge in Table 3.

Suggests that theories of explaining, in the form of grand theories such as actor-network theory, do not fit into his framework. But they do in Shirley’s.

On the question of truth value or truthlikeness

  • Conceptual – the goal is essentialist, to identify the essence of the research territory and the relationships. May be more or less useful in developing theories at the descriptive level (quotes Bunge 1967a here).
  • Prescriptive – artifacts and recommendations do not have a truth value. Only statements about their efficiency and effectiveness have such a value

Beckman (2002) identifies four criteria of artifacts

  1. Intentional – the knife is a knife because it is used as a knife
  2. Operational – it is a knife because it works like a knife
  3. Structural – is a knife because it is shaped and has the fabric of a knife
  4. Conventional – is a knife because it fits the reference of the common concept of a ‘knife’

Juhani does not include the conventional criterion in his notion of an artifact, as an artifact may not achieve community acceptance until years after its invention and construction.

Prescriptive knowledge is irreducible to descriptive knowledge

Suggests that most IT systems are built divorced from descriptive knowledge. There is only a weak link between IT artifacts and descriptive knowledge. And yet IT systems are still reasonably successful.

This makes one wonder whether the IS research community tends to exaggerate the significance of descriptive theoretical knowledge for prescriptive knowledge of how to design successful IT artifacts. In conclusion, in line with Layton (1974), I am inclined to suggest that prescriptive knowledge forms a knowledge realm of its own and is not reducible to descriptive knowledge.

Kernel theories

Believes the presence of a kernel theory is the defining characteristic of a “design theory”.

This is seen as difficult and leads to a softening of the requirements for a kernel theory – e.g. Markus et al (2002) allowing any practitioner theory-in-use to serve as a kernel theory, implying the design theory is not based on scientifically validated knowledge.

Methodology of design science

Classifications of IS research methods (Benbasat, 1985; Jenkins, 1985; Galliers and Land, 1987 and Chen and Hirschheim, 2004) do not recognise anything resembling constructive research methods. Iivari (1991) suggested constructive research as the term to denote the research methods required for constructing artifacts.

Positions building artifacts as a very creative task. Hence it is difficult to define an appropriate method for artifact building. Having constructive research methods is essential for the identity of IS as a design science. The rigor of methods distinguishes design science from the practice of building IT artifacts.

Suggests two ways to identify the difference

  1. There are no constructive research methods; instead the difference lies in evaluation. Design science requires scientific evaluation of the artifacts.
    Drawback: this may lead to reactive research where IS as a design science focuses on the evaluation of existing artifacts, rather than building new ones.
  2. Define a rigorous approach for constructive research and use this to differentiate design science from invention in practice.

Iivari didn’t specify the constructive research methods. He discusses Nunamaker et al (1990-1991) and their suggestion that systems development methods could serve this role. Iivari doesn’t appear to think so. Pitfalls include:

  • Do SDMs allow sufficient room for creativity and serendipity which are essential for innovation?
    A significant concern when attempting to make the building process more disciplined, rigorous and transparent.
  • The most serious weakness of the Nunamaker et al suggestion is that it integrates systems development quite weakly with research activities.

Hevner et al (2004) suggest rigor in design science research is derived from the effective use of prior research – using the existing knowledge base. Iivari claims it lies in making the construction process as transparent as possible.

The source of ideas

Iivari suggests four major sources for ideas for design research

  1. Practical problems and opportunities
    Emphasizes the practical relevance of this research. Customers are known to be a significant source of innovations (von Hippel 2005). But practical problems may be abstracted or seen slightly differently. Design science can also create solutions long before a problem is seen/understood.
  2. Existing artifacts
    Most design science research consists of incremental improvements to existing artifacts. Must understand what has gone before, if only to evaluate contribution.
  3. Analogies and metaphors
    Known that analogies and metaphors stimulate creativity.
  4. Theories
    i.e. kernel theories can serve as inspiration

Design science and action research

Many authors have associated design science and action research, since both attempt to change the world. Iivari suggests that they differ in a number of ways

  • Historically
    Action research – socio-technical design movement. Design science – engineering.
  • Practically
    Action research – focused on “treating social illnesses” within organisations and other institutions. Technology change may be part of the treatment, but the focus is more on adopting than building technology.
    DSR – focus on the construction of artifacts, most having a material embodiment. Usually done in laboratories, clearly separated from potential clients.
  • Ontologically,
    DSR – in engineering/medicine adopts a realistic/materialistic ontology
    Action research – accepts a more nominalistic, idealistic and constructivist ontology

    Materialism attaches primacy to Popper’s World 1, idealism to World 2. Action research is also interested in the institutions of World 3.

  • epistemologically, and
    Consequently, design research, especially in engineering and medicine, has a positivistic epistemology in terms of the knowledge applied from reference disciplines and the knowledge produced. Action research is strongly based on an anti-positivistic epistemology. The very idea of AR is anti-positivistic, as each client is unique.
  • methodologically.

Cole et al (2005) take the alternative perspective that design science and AR share important assumptions regarding ontology and epistemology. Cole et al implicitly limit design science to IS in an organisational context; if so, shouldn’t the ontology and epistemology of DSR be different? Juhani is doubtful about this, based on his work evaluating systems development approaches – but he said earlier that systems development approaches aren’t a good match for constructive research, for DSR. Can he make this connection here?

Ethics of design science

Design science shapes the world. “Even though it may be questionable whether any research can be value-free, it is absolutely clear that design science research cannot be.” which suggests that the basic values of research should be expressed as explicitly as possible.

Juhani then uses his own work (1991) to identify three roles (?types of ethics?)

  1. Means-end oriented
    Knowledge is provided to achieve an end without questioning the legitimacy of the ends.
    Evaluation here is interested in how effectively the artifact helps achieve the ends
  2. Interpretive
    The goal is to enrich understanding of action. Goals are not clear; focus on unintended consequences.
    Evaluation seeks to achieve a rich understanding of how an IT artifact is really appropriated and used and what its effects are, without focusing on the given ends.
  3. Critical
    Seeks to identify and remove domination and ideological practice. Goals can be subjected to critical analysis.
    Evaluation focuses on how the IT artifact enforces or removes unjustified domination or ideological practices.
Most DSR is means-end oriented, but it can be critical (e.g. Scandinavian trade-unionist systems development approaches).

Questions the values of IS research – whose values and what values dominate?

Conclusions

Introduces the 12 theses summarised right at the top.


Design Based Research vs. Mixed Methods: The Differences and Commonalities

This post is a summary and some reflection on a discussion paper posted to ITForum. It’s by Goknur Kaplan Akilli and is titled Design Based Research vs. Mixed Methods: The Differences and Commonalities. The author is a PhD candidate with some interesting research runs on the board.

The following contains two main sections

  1. Reflections – my, as yet incomplete, meanderings on the paper.
  2. Summary – an attempt to understand the paper.

Reflections

This paper seems to indicate that the education discipline, like information systems, appears to be struggling with understanding where design research fits. How is it different? How to do it well? Reading this paper should help me, given that one of my current tasks is to re-write chapter 3 of my thesis and essentially set out what I understand about these questions.

Given my background in information systems much of my thoughts on the following are influenced by that work. One representation of that work is given on the design research in information systems page. However, I don’t agree with a number of the points made there.

Questions and points of disagreement or departure

DBR: a methodology or a paradigm

There appears to be some confusion over whether DBR is a methodology or a paradigm, which may come back to a lack of agreement on the difference between the author and her cited sources.

In the last paragraph of the section on DBR the author suggests

Lastly, the immaturity of the methodology is another criticism (Kelly, 2004; Wang & Hannafin, 2005), which consists of methodological challenges that need to be addressed if DBR is to be developed “from a loose set of methods into a rigorous methodology” (Kelly, 2004, p.116).

In the conclusion the author suggests

DBR is more of a generic paradigm rather than a method in the way that mixed methods research is. DBR offers a new worldview of theory development and refinement along with design to construct design sciences of education.

So which is it, paradigm or method?

Dede’s “conditions of success” criticism

In a couple of sentences the author describes Dede’s (2004, 2005) criticisms of DBR related to “conditions of success”.

For instance, Dede (2004) argues that there seems to be hardly any standards to decide whether a design should be dropped or sustained and further explores due to its promising nature by differentiating it from its “conditions of success” (p.109). However this is not possible, since the findings in DBR are strongly bounded with contextual variables shaping the design’s “desirability, practicality and effectiveness” (Dede, 2005, p.7).

At this point in time, I don’t understand (at all) this criticism. Appears I have some more reading to do.

Interventions embody theory

In characterising DBR the author follows (DBRC 2003) that the interventions arising from DBR embody specific theoretical claims about teaching and learning. I’m assuming that one reason for this is that as research, DBR should be purposeful about its interventions and shouldn’t simply be trying any idea that crops up, it needs to be informed by theory. Within the information systems field and its approach to design research the theories which a design embodies are called “kernel theories”, a term coined by Walls et al (1992).

One of the foundations of this work is Simon’s work on the sciences of the artificial, work that underpins the interest in design research in a number of fields, including information systems.

Simon talks about the idea that it is possible to design a successful artifact that makes a contribution without being fully aware of all of the theories/knowledge which underpin the artifact. The example often used is that the folk who built the first airplanes had little understanding of the aeronautical sciences, the physics that explained why their machines flew.

There is also the problem that simply following learning and teaching theories may lead to inattentional blindness (aka perceptual blindness): a situation where the established theories of learning and teaching limit what you can actually see or envision happening in a given situation.

Requiring an embodiment of existing learning and teaching theory in all educational DBR is troubling if you agree that the vast majority of learning and teaching theories have been developed by more “traditional” research – an approach to research which, according to the DBR proponents, has some significant flaws. Hence the patterns embodied by current theories of learning and teaching may have some flaws that limit possibilities.

A later characteristic of DBR – interactive, collaborative, iterative and flexible processes – addresses this somewhat in the recognition that DBR is flexible enough to react to situations where the expected outcomes did not occur in reality, and consequently led to a re-thinking of these understandings of the world.

So perhaps the characteristic re: embodying theory should be understood as meaning that, at the end of the DBR process, the work should embody some sort of theory around learning and teaching. But perhaps, when it started, that theory was not well understood or espoused, or perhaps a new one developed as the iterative process was followed.

The role of theory

Throughout the paper, and the literature it quotes, there appears to be some issues with the definition of theory including:

  • No clear definition of what the author or the literature thinks theory is (or not).
  • Many different terms that are close to theory but seem to indicate a difference, e.g. prototheories, design theory, design principles, “usable knowledge”.

All this seems to point to a lack of agreement on a fairly fundamental building block of this argument, especially if you agree that the ultimate aim of research is to generate knowledge, which should typically aim to be expressed as theory.

This is especially troubling given that one of the stated criticisms of DBR is that it often doesn’t make a significant contribution to theory. This might well be expected if the current understanding of theory within the education discipline is more appropriate to traditional research and somewhat underdone or limiting given the nature of design research.

The nature of theory also raises its head in terms of the problems facing DBR in “universality of findings”.

I also wonder if my connection with Shirley and her work on the nature of theory in information systems colours my perspective. The very limited reading I’ve done of the education-based DBR literature suggests that some of that literature does address this issue, though not with the same outcomes as Shirley’s work.

Misc questions

Much of the design research thinking in information systems is focused on the IT (or IS) artifact. The aim of design research is to construct or develop theory to guide the construction of an IT artifact. What, if any, is the similar aim in education?

Summary

The basic aim of the paper is to establish that there is a difference between DBR and mixed methods as a research methodology. It has two main sections, one each on the respective approaches. One assumes (given that I haven’t read the paper) that these two sections explain the differences and serve the author’s purpose.

In the author’s conclusion she argues

  • Mixed methods is a research methodology – a third methodology that arose from the qualitative/quantitative paradigm wars.
  • DBR is more of a generic paradigm; the author claims it is a “wicked” paradigm since it deals with wicked problems (Rittel & Webber, 1973).
  • DBR offers a new worldview of theory development and refinement.
  • But it also offers a newly-emerging research methodology drawing on different fields of design and education, including mixed methods.
  • DBR produces knowledge that is
    • dynamic,
      It is knowledge that changes in relation to context, shaped by time, place, actors and actions.
    • usable,
      Knowledge that informs theories and real-world practices.
    • glocal.
      Local in that it produces tentative generalizations drawn from initial implementations. Suggested to be global because these generalisations can be “globalised” through studies in similar contexts.

Design-Based Research

First, the author establishes some of the variety in perspectives, or at least terminology, used around DBR

Design Based Research Collective (DBRC) (2003, p.5) characterizes DBR as a research paradigm that “blends empirical educational research with the theory-driven design of learning environments,”

Wang and Hannafin (2005) define it as “a systematic but flexible methodology [italics added] aimed to improve educational practices through iterative analysis, design, development, and implementation, based on collaboration among researchers and practitioners in real-world settings, and leading to contextually-sensitive design principles and theories”

DBR aims to develop and refine theories via closely linked strategies rather than testing intact theories using traditional methodologies (Edelson, 2002).

Suggests that the origins of DBR were to

  • develop a design science of education – a connection back to Simon (1969).
  • develop a methodology to help develop design theory (Collins, 1992)
  • prevent the detachment of educational research (laboratory settings) from problems and issues of everyday practice
  • close the credibility gap (Levin & O’Donnell, 1999)
  • develop more “usable knowledge”.

The main characteristics of DBR are

  • Pragmatic.
    It is based in real-world situations. Attempts to improve those through interventions, but at the same time make a contribution to theory. The value of theory is in its utility to practitioners and other designers.
  • Theory-driven and grounded in real-world contexts.
    The interventions embody specific theoretical claims about teaching and learning.
  • Uses a process that is interactive, collaborative, iterative and flexible.
    It continues to respond to the findings within the real world setting.
  • Is integrative through the richness and variety of theories, methods and procedures utilized to meet research needs.
    Multiple (mixed) methods are used to analyse and refine the intervention.
  • Contextualised.
    It cannot be thought of as independent from context; it must involve authentic settings.

Criticisms of DBR include

  • “Conditions of success” – mentioned above as something I don’t get
  • Absence of theoretical foundation or contribution
    Dede (2004) suggests this is due to different skills for creative designers and rigorous scholars. Also the problem of “innovation fascination” leading to under-conceptualized research in order to try the new toy.

    It is suggested that DBR is over-methodologized and has a tendency towards excessive data collection (Nona, are you reading this? ;) ), which results in only tiny contributions to theory.

  • Generalisation.
    The very contextual nature of DBR is seen as making it difficult to make generalisations to other contexts. Some arguments here about the close interaction between researcher/practitioner necessarily limiting the ability to be rigorous or objective.
  • Immaturity of the methodology
    Suggests DBR is more a loose set of methods than a rigorous methodology (Kelly, 2004). Kelly suggests that DBR studies are described as a set of processes rather than describing the essential underlying conceptual structure.

Mixed methods research

Defined as a methodology that uses multiple approaches in all stages of research.

Theoretical assumptions include

  • Pragmatist philosophy
    Researchers avoid philosophical arguments about research methods and mix approaches based on the utility the approach will give within a particular problem or context.
  • Compatibility thesis
    Assumption that quantitative and qualitative methods are compatible and can be mixed.
  • principle of mixed methods research
    Methods are mixed in a way that uses their complementary strengths and non-overlapping weaknesses.

Four additional criteria

  1. Sequence of data collection approaches.
    Concurrently/sequentially, intra or inter-method mixing. Connected with “data triangulation” and “method triangulation”.
  2. Which method was given priority
  3. Stage of integration – where did the mixing or connecting of methods occur
  4. Theoretical perspectives – the researchers’ personal stances toward the topics.

From these a diverse typology of mixed methods research is outlined

A strength of mixed methods is the ability to answer both exploratory and confirmatory questions at the same time – i.e. verify and generate theory.

A strength of mixed-methods research is the availability of information about how to do such research well.

A major criticism of it is the “incompatibility thesis”, which argues that quantitative and qualitative research paradigms should not be mixed (Onwuegbuzie and Leech, 2005).

References

Dede, C. (2004). If design-based research is the answer, what is the question? Journal of the Learning Sciences, 13(1), 105-114.

Dede, C. (2005). Why design-based research is both important and difficult. Educational Technology, 45(1), 5-8.

Design-Based Research Collective. (2003). Design-based research: An emerging paradigm for educational inquiry. Educational Researcher, 32(1), 5-8, 35-37.

diSessa, A. A., & Cobb, P. (2004). Ontological innovation and the role of theory in design experiments. Journal of the Learning Sciences, 13(1), 77-103.

Kelly, A. E. (2004). Design research in education: Yes, but is it methodological? Journal of the Learning Sciences, 13(1), 115-128.

Onwuegbuzie, A., & Leech, N. (2005). Taking the “Q” out of research: Teaching research methodology courses without the divide between quantitative and qualitative paradigms. Quality and Quantity, 39, 267-296.

Walls, J., Widmeyer, G., & El Sawy, O. A. (1992). Building an Information System Design Theory for Vigilant EIS. Information Systems Research, 3(1), 36-58.