Review: Technology in schools: What the research says. A 2009 Update.

Authors: Lemke, C., Coughlin, E., Reifsneider, D.

Published: 2009

Publisher: Cisco Systems Inc.

Reviewed by Russell Waldron.


A number of educationalists have maintained for some time that technological investment in schools has not produced measurable improvement (Keengwe et al, 2008). Nonetheless, the Digital Education Revolution and similar initiatives overseas are fuelling growth of technology companies such as Cisco Systems.

Over the past decade, Cisco has collated, published and sometimes sponsored research into the educational impact of ICT (Leask & Meadows, 2000), and recently updated its survey of research findings (Lemke et al, 2009). The authors conclude that “the real potential of technology for improving learning remains largely untapped in schools today” (ibid, p.5). Their findings about specific technologies are summarised at the California Technology Assistance Project (Cradler, 2010) and will not be repeated here.

Rather, this review examines the assertion that in general, technology uptake in K-12 schooling has been too shallow, undocumented, too hurried, too hierarchical, too timid and too weakly resourced. Gaps in the research are discussed below, with reflections on the definitions employed by Lemke et al.

What is Innovative?

Rogers’ definition of innovation notably emphasises an individual’s perception of newness (Rogers, 2003, p. 11). The subjective qualifier, ‘innovative’, applies only within a specific context or community. For example, some Australian schools were still describing interactive whiteboards as innovations in 2008, even though they had been thoroughly evaluated in UK schools around 2001 (BECTA, 2003).

Various people within a community accept a new practice at different times. Typical roles taken in the introduction of new practices are commonly designated Innovators, Early Adopters, Early Majority, Late Majority and Laggards. (Rogers, 2003, p.22)

An individual considering something new proceeds through phases of awareness, interest, evaluation, trial, and adoption. Adoption depends on the individual’s perception of relative advantage, compatibility, (lack of perceived) complexity, trialability, and visibility. (Rogers, 2003, p.16)

In some contexts, newness attracts interest and excitement, but it generally involves risk and cost, with little evidence yet of benefit. The purpose of technology (in Rogers’ theory) is to reduce uncertainty about achieving an outcome. In that sense, innovations are by definition not yet good technology. Innovation is undertaken by optimists.

This cultural understanding of innovation makes sense of the five typical shortcomings of educational technology projects identified by Lemke et al.

Underestimating barriers

Innovators often underestimate cultural and institutional barriers to change (Lemke et al 2009, p.5). For example, teachers’ attitudes to computers and to change, and institutional failures of vision, administration, training and technical support, were listed among the barriers to classroom integration of technology in a literature review by Keengwe et al (2008, p.562). Tools which support traditional teacher-classroom dynamics, such as interactive whiteboards, can be introduced quickly, while others, even those that are cheaper and more widespread, such as instant messaging, may be resisted strenuously because they open the door to radical changes in communication.

Technology can relax constraints and allow changes in the physical settings for learning, the boundaries of the learning community, the curriculum and the tools of learning (Zucker 2009), but in so doing, it destabilises the tacit contract between the clients and providers of education. Innovators face disappointment if they disregard the cultural fit of their projects.

Inadequate documentation

Lemke et al lament a lack of commitment to measurement in schools. Measurement of school performance is politically sensitive, although Australian governments have committed to “providing an evidence base to support future policy reforms and system improvements, including directing resources to areas of greatest need” (COAG, 2008, p.2).

Statutory requirements and National Assessment results should not be assumed to provide an adequate baseline for evaluation of educational impact. The empirical approach favoured by Lemke et al requires that potential impacts be identified prospectively, in order to devise instruments for measuring the experimental effect.

Hasty conclusion

Lemke et al note that new technology takes longer to introduce and prove than early adopters expect. Successful models typically involve pilots, evaluation, planning and change-management before achieving institutionalisation.

Participatory culture

Learning through engagement in authentic, global collaborations demands boldness: accepting the risks that come with visibility outside the protection of the school, and the liberation of students’ communication. However, allowing Web 2.0 participation deregulates communication and bypasses traditional means of gathering research data about the learning process. The indeterminate state of the Web reduces the predictability of student experiences, weakening the validity of research findings about student practices.


Students working in Web 2.0 culture need a different preparation and climate for participatory learning and authentic assessment. Quite unlike Gagne’s conditions of learning, which have informed now-traditional curricula and assessment of individual achievement, a base level of expertise is a prerequisite for learners in a constructivist learning environment (Moallem, 2001).

In NSW government schools, a shift to more collaborative project work with greater independent communication between students is expected to result in improved engagement with school and schoolwork. NSW DET predicts a culture which they characterise as more “learner-centered”, “assessment-centered”, “community-centered”, and “knowledge-centered”. Collaboration is predicted to yield higher test scores in English and Mathematics. (Curriculum K-12 Directorate, 2009).

Scope of change

Finally, Lemke et al warn that it is difficult to anticipate and resource the profound change which can ensue from the interplay of many innovations. Schools may well flinch at the cost of educational pioneering. There has only rarely been sufficient allowance for the high costs of rapid change, including curriculum redesign, staff development, and technology provisioning.

This problem can become even larger after the researchers finish. Digital tools which may be eschewed at first due to cost can rapidly become cheap and ubiquitous, empowering individuals and facilitating new forms of community, transparently bridging time-lags, distance, media and jurisdictions, as, for example, Facebook does. New technologies are mutable, and often permit, reward or demand more intricate interaction between users, multiple unrelated technologies and traditional media (Zucker 2009). ‘Scaling up’ may require re-architecting software, curricula or an institution.

Omissions

Lemke et al’s assessment of the research is summarised in Table 19 of their report.

There is a notable absence of evidence for the collaboration potential of several technologies which appear well suited to social, participatory learning: modelling tools, augmented reality, virtual worlds, mobile devices, visualisation tools, and computer-aided instruction. This gap resulted from selection of research which addressed traditional outcomes. These families of technology do support participatory learning (as the ‘descriptive’ research shows). However, each listed ‘experimental’ study compared achievement with existing practices, and social outcomes were not measured.

In some cases, Lemke et al found conflicting results. This may reflect a proliferation of research motivated by potentially high implementation costs. For example, one-to-one laptop programs have been extensively trialled, with both vendors and school systems anticipating huge expenditure if the outcomes were positive. [citation]

Further, Lemke et al were confined by their stance that an experimental design is the ‘Gold Standard’ of evidence amongst empirical research methods. Their definitions appear in Table 1.

Table 1. Definitions of the Categories of Research Used as Evidence (Lemke et al 2009, p.7)

Type of Research | Research Question | Research Design
Experimental | Does something cause an effect? | Experimental; Quasi-Experimental
Descriptive | What is happening? | Simple Descriptive
Descriptive | How is something happening? | Comparative Descriptive
Descriptive | Why is something happening? | Correlational

This stance is poorly suited to discovering transformative change, and does not reflect operational priorities that might reasonably be expected to govern decision-making in schools: learning, teaching and managing.

Learning: An experiment conducted on a distinctive sample has questionable validity for students who differ in culture or preferred learning style. To compensate, the student sample should be representative, or at least large and diverse. There are many theories of multiple intelligences or learning styles (Coffield et al 2004), and it may be difficult to detect the paradigmatic assumptions that would affect application of the research findings. Instructional Events (Gagne 1985) are relatively clear targets for experimental research, less subjective than the qualities emphasised in social learning theories.

Teaching: Individual teachers must react to and interpret their observations in ‘real-time’, without the benefit of an experimental time-scale. Descriptive studies have value in providing a model for teachers to examine in the light of their own practice. Self-reflection is a key process because each teacher is not only applying a new technology but also reshaping their professional role and identity. Vacirca (2008) describes an example of this process.

Managing: Correlations discovered in multi-campus studies are needed to inform funding and administrative policy. Principals are responsible for overall outcomes from the interaction of all school policies and practices, which could easily subvert the benefits predicted by well-controlled experiments.

These considerations warrant a revision of the model Research Questions.
Table 2. Three categories of educational research reconsidered.


Research design | Sample | Research question | Who cares | Follow-up needed
Descriptive study | > 2 | What do people actually do? What do they say about it? | Practitioners | Why or how did that work?
Experimental study | > 20 | Does a theory of learning fail, even in ideal conditions? | Theorists | Will it work in the real world?
Correlational study | > 200 | Do action and success go together, in practice? | Managers | Was this cause and effect, or effects of some other factor?

Conclusions

Lemke et al have provided a valuable, very readable overview of successful educational experiments. Their book helps justify the application of new technology to existing curricula. Tactfully, it does not address the relevance of existing school syllabuses for guiding and assessing 21st century learning. However, the implication of their survey is a call for a change in objectives, to support the development of outcome measures for collaborative and participatory learning.

This message is not necessarily welcomed in schools. For example, Zucker (2009), while recognising the whirlwind effect of combined changes in the availability and capability of technology, offers reassurance that the newest is not necessarily best, and that the 21st century does not demand the invention of new learning skills. Rather, we should focus on the aims of schooling and harness the technology that will enable us to deliver them.
However, in Australia, the Digital Education Revolution initiative has mandated a changed mission, injected physical resources, and challenged schools to redevelop their operations, human resources, and culture.

Educators considering new technology would do well to address Keengwe’s summary of time-honoured recommendations regarding teacher practices, student opportunities, curriculum orientation, and professional development (Keengwe et al, 2008). Teachers employed by the Department of Education and Training in NSW can access resources and services provided for early adopters through the Centre for Learning Innovation (CLI 2009).


References

BECTA (2003) What the research says about interactive whiteboards. British Educational Communications and Technology Agency (Becta), Coventry.

CLI (2009) Overview of the Centre for Learning Innovation. CLI [http://www.cli.nsw.edu.au/about_us/overview.htm]

COAG (2008). National Education Agreement Factsheet, Council of Australian Governments, 29 November 2008, [http://www.mceetya.edu.au/verve/_resources/Data_Standards_Manual_2010-SEC2-NAP_Measuring_and_Rep.pdf]

Coffield, F., Moseley, D., Hall, E., Ecclestone, K. (2004). Learning styles and pedagogy in post-16 learning. A systematic and critical review. London: Learning and Skills Research Centre.

Cradler, J. (2010). Technology in Schools: What the Research Says: A 2009 Update, CTAP, 19 January 2010, [http://www.myctap.org/index.php/administrators-and-data/edtech-research-reviews/191-technology-in-schools-what-the-research-says]

Curriculum K-12 Directorate (2009). One-to-one computing: literature review. NSW DET.

Gagne, R. (1985). The Conditions of Learning (4th ed.). New York: Holt, Rinehart & Winston, cited in Moallem (2001).

Keengwe, J., Onchwari, G. & Wachira, P., 2008. Computer Technology Integration and Student Learning: Barriers and Promise. Journal of Science Education and Technology, 17(6), 560-565.

Leask, M., and Meadows, J., eds (2000). Teaching and learning with ICT in the primary school. Routledge/Falmer, London

Lemke, C., Coughlin, E., Reifsneider, D. (2009). Technology in schools: what the research says. A 2009 Update. Cisco Systems Inc.

Newhouse, P. (1999). Examining how teachers adjust to the availability of portable computers. Australian Journal of Educational Technology, 15(2), 148-166.

Rogers, E. (2003). Diffusion of innovations. 5th ed. Free Press, New York.

Vacirca, E. (2008). How do teachers develop their technological pedagogical content knowledge in the context of system-wide pedagogical and curriculum reform? AARE Conference, Brisbane, 30 November – 4 December 2008.

Zucker, A.A. (2009). Transforming Schools with Technology, Independent School Magazine, Winter 2009.