Authors: Lemke, C., Coughlin, E., Reifsneider, D.
Published: 2009
Publisher: Cisco Systems Inc.
Reviewed by Russell Waldron.
A number of educationalists have maintained for some time that technological investment in schools has not produced measurable improvement (Keengwe et al, 2008). Nonetheless, the Digital Education Revolution and similar initiatives overseas are fuelling growth of technology companies such as Cisco Systems.
Over the past decade, Cisco has collated, published and sometimes sponsored research into the educational impact of ICT (Leask & Meadows, 2000), and recently updated its survey of educational research findings (Lemke et al, 2009). The current situation is that “the real potential of technology for improving learning remains largely untapped in schools today” (ibid, p.5). Their findings about specific technologies are summarised at the California Technology Assistance Project (Cradler, 2010) and will not be repeated here.
Rather, this review examines the assertion that, in general, technology uptake in K-12 schooling has been too shallow, too poorly documented, too hurried, too hierarchical, too timid and too weakly resourced. Gaps in the research are discussed below, with reflections on the definitions employed by Lemke et al.
What is Innovative?
Rogers’ definition of innovation notably emphasises an individual’s perception of newness (Rogers 2003, p. 11). The subjective qualifier, ‘innovative’, is therefore applicable only within a specific context or community. For example, some Australian schools were still describing interactive whiteboards as innovations in 2008, even though they had been thoroughly evaluated in UK schools around 2001 (BECTA 2003).
An individual considering something new proceeds through phases of awareness, interest, evaluation, trial, and adoption. Adoption depends on the individual’s perception of relative advantage, compatibility, (lack of perceived) complexity, trialability, and observability (Rogers, 2003, p.16).
Underestimating barriers
Inadequate documentation
Statutory requirements and National Assessment results should not be assumed to provide an adequate baseline for evaluation of educational impact. The empirical approach favoured by Lemke et al requires that potential impacts be identified prospectively, in order to devise instruments for measuring the experimental effect.
Hasty conclusions
Participatory culture
In NSW government schools, a shift to more collaborative project work, with greater independent communication between students, is expected to improve engagement with school and schoolwork. NSW DET predicts a culture that it characterises as more “learner-centered”, “assessment-centered”, “community-centered” and “knowledge-centered”, and predicts that collaboration will yield higher test scores in English and Mathematics (Curriculum K-12 Directorate, 2009).
Scope of change
Omissions
There is a notable absence of evidence for the collaborative potential of several technologies that appear well suited to social, participatory learning: modelling tools, augmented reality, virtual worlds, mobile devices, visualisation tools, and computer-aided instruction. This gap results from the selection of research that addressed traditional outcomes. These families of technology do support participatory learning, as the ‘descriptive’ research shows; however, each listed ‘experimental’ study compared achievement with existing practices, and social outcomes were not measured.
In some cases, Lemke et al found conflicting results. This may reflect a proliferation of research motivated by potentially high implementation costs. For example, one-to-one laptop programs have been extensively trialled, with both vendors and school systems anticipating huge expenditure if the outcomes were positive. [citation]
Further, Lemke et al were confined by their stance that an experimental design is the ‘Gold Standard’ of evidence amongst empirical research methods. Their definitions appear in Table 1.
Table 1. Definitions of the Categories of Research Used as Evidence (Lemke et al 2009, p.7)
| Type of Research | Definition |
| --- | --- |
| Experimental | |
| Descriptive | |
This stance is poorly suited to discovering transformative change, and does not reflect operational priorities that might reasonably be expected to govern decision-making in schools: learning, teaching and managing.
Learning: An experiment conducted on a distinctive sample has questionable validity for students who differ in culture and preferred learning style. To compensate, the student sample should be representative, or at least large and diverse. There are many theories of Multiple Intelligences and Learning Styles (Coffield et al 2004), and it may be difficult to detect the paradigmatic assumptions that would affect application of the research findings. Instructional events (Gagne 1985) are relatively clear targets for experimental research, less subjective than the qualities emphasised in social learning theories.
Teaching: Individual teachers must react to and interpret their observations in ‘real time’, without the benefit of an experimental time-scale. Descriptive studies have value in providing a model for teachers to examine in the light of their own practice. Self-reflection is a key process because each teacher is not only applying a new technology but also reshaping their professional role and identity. Vacirca (2008) describes an example of this process.
Managing: Correlations discovered in multi-campus studies are needed to inform funding and administrative policy. Principals are responsible for overall outcomes from the interaction of all school policies and practices, which could easily subvert the benefits predicted by well-controlled experiments.
These considerations warrant a revision of the model Research Questions.
Table 2. Three categories of educational research reconsidered.
| Research design | Sample | Research question | Who cares? | Follow-up needed |
| --- | --- | --- | --- | --- |
| Descriptive study | > 2 | What do people actually do? What do they say about it? | Practitioners | Why or how did that work? |
| Experimental study | > 20 | Does a theory of learning fail, even in ideal conditions? | Theorists | Will it work in the real world? |
| Correlational study | > 200 | Do action and success go together, in practice? | Managers | Was this cause and effect, or the effect of some other factor? |
Conclusions
Lemke et al have provided a valuable, very readable overview of successful educational experiments. Their report helps justify the application of new technology to existing curricula. Tactfully, it does not address the relevance of existing school syllabuses for guiding and assessing 21st-century learning. The implication of their survey, however, is a call to change objectives, so as to support the development of outcome measures for collaborative and participatory learning.
Educators considering new technology would do well to address Keengwe’s summary of time-honoured recommendations regarding teacher practices, student opportunities, curriculum orientation, and professional development (Keengwe et al, 2008). Teachers employed by the Department of Education and Training in NSW can access resources and services provided for early adopters through the Centre for Learning Innovation (CLI 2009).
References
BECTA (2003) What the research says about interactive whiteboards. British Educational Communications and Technology Agency (Becta), Coventry.
CLI (2009) Overview of the Centre for Learning Innovation. CLI [http://www.cli.nsw.edu.au/about_us/overview.htm]
COAG (2008). National Education Agreement Factsheet, Council of Australian Governments, 29 November 2008, [http://www.mceetya.edu.au/verve/_resources/Data_Standards_Manual_2010-SEC2-NAP_Measuring_and_Rep.pdf]
Coffield, F., Moseley, D., Hall, E., Ecclestone, K. (2004). Learning styles and pedagogy in post-16 learning. A systematic and critical review. London: Learning and Skills Research Centre.
Cradler, J. (2010). Technology in Schools: What the Research Says: A 2009 Update, CTAP, 19 January 2010, [http://www.myctap.org/index.php/administrators-and-data/edtech-research-reviews/191-technology-in-schools-what-the-research-says]
Curriculum K-12 Directorate (2009). One-to-one computing: literature review. NSW DET.
Gagne, R. (1985). The Conditions of Learning (4th ed.). New York: Holt, Rinehart & Winston, cited in
Keengwe, J., Onchwari, G. & Wachira, P. (2008). Computer Technology Integration and Student Learning: Barriers and Promise. Journal of Science Education and Technology, 17(6), 560-565.
Leask, M., and Meadows, J., eds (2000). Teaching and learning with ICT in the primary school. Routledge/Falmer, London
Lemke, C., Coughlin, E., Reifsneider, D. (2009). Technology in schools: what the research says. A 2009 Update. Cisco Systems Inc.
Newhouse, P. (1999). Examining how teachers adjust to the availability of portable computers. Australian Journal of Educational Technology, 15(2), 148-166.
Rogers, E. (2003). Diffusion of innovations. 5th ed. Free Press, New York.
Vacirca, E. (2008). How do teachers develop their technological pedagogical content knowledge in the context of system-wide pedagogical and curriculum reform? AARE Conference, Brisbane, 30 November – 4 December 2008.
Zucker, A.A. (2009). Transforming Schools with Technology, Independent School Magazine, Winter 2009.