A Research Agenda for Interactive Learning in the New Millennium
Thomas C. Reeves
Department of Instructional Technology
College of Education
The University of Georgia
604 Aderhold Hall,
Athens, GA 30602-7144 USA
Abstract: During the past three decades, hundreds of research studies have been conducted to investigate interactive learning in a variety of forms ranging from the earliest days of mainframe-based computer-assisted instruction to contemporary multimedia learning environments accessible via the World Wide Web. In light of this body of research, some researchers believe that we are on the verge of developing a true instructional science whereas others conclude that we simply cannot pile up generalizations fast enough to adapt our interactive designs to the myriad variables inherent in human learning. In this paper, I summarize what we know and what we don't know about interactive learning, describe the strengths and weaknesses of various approaches to interactive learning research, and conclude by proposing a new developmental research agenda for the first decade of the new millennium.
WHEN I heard the learn'd astronomer;
When the proofs, the figures, were ranged in columns before me;
When I was shown the charts and the diagrams, to add, divide, and measure them;
When I, sitting, heard the astronomer, where he lectured with much applause in the lecture-room,
How soon, unaccountable, I became tired and sick;
Till rising and gliding out, I wander'd off by myself,
In the mystical moist night-air, and from time to time,
Look'd up in perfect silence at the stars.
- Walt Whitman, Leaves of Grass.
Unlike Walt Whitman's "learn'd astronomer," I offer no proofs, no figures, and nary a chart in this paper. Instead, the paper primarily consists of a logical argument. I desire to persuade you with my argument, and although the phenomenon of interactive learning is hardly as wondrous as the stars, I hope to leave you with a greater appreciation for the importance and complexity of researching interactive learning. More specifically, I attempt to outline what we know about interactive learning as well as what we don't know, and to suggest some directions for a renewed research agenda in the early years of the new millennium. Before proceeding, however, I must reveal some of the biases I bring to this argument and also clarify what I mean by interactive learning.
Almost every field of inquiry today is beset with dichotomous controversies. Consider biology where one camp of scientists is laboring mightily to explain the nature of human behavior on the basis of genetic mapping whereas another camp argues that human behavior will ultimately be explained more completely by the effects of nurture and culture. If research on interactive learning can be regarded as a field (or at least a tiny plot) of inquiry, then it too has its controversies. One of the most obvious is between those who view this enterprise as a branch of science or technology and those who regard it as more akin to a type of craft or even art (Clark & Estes, 1998). I must confess that I have grown increasingly skeptical about the science of designing interactive learning and more attracted to the craft or art of this activity. However, my skepticism concerning learning sciences and educational technology does not preclude a strong commitment to developmental research and evaluation as necessary, but insufficient, methods for collecting information to guide the decisions that must be made when designing (crafting) interactive learning environments.
What is my definition of interactive learning? Burdened by a history of failed technology-based innovations (e.g., programmed instruction, teaching machines, and computer-assisted instruction), the latest buzzwords for interactive learning (e.g., interactive multimedia, the World Wide Web (WWW), and virtual reality) attract both ebullient enthusiasm (Perelman, 1992) and serious skepticism (Postman, 1995). Ultimately, all learning is interactive in the sense that learners interact with content to process, tasks to accomplish, and/or problems to solve. However, in this paper, I refer to a specific meaning of interactive learning as involving some sort of technological mediation between a teacher/designer and a learner. In my view, an interactive learning system requires an electronic device equipped with a microprocessor (e.g., a computer) and at least one human being (a learner). The adult school dropout developing basic literacy skills via a multimedia simulation, the high school student surfing the WWW for archival material about indigenous people to prepare a class presentation, and the three-year-old practicing color-matching skills with Big Bird via a Sesame Street CD-ROM program are all engaged in interactive learning.
What We Know and Don't Know
There are two major approaches to using interactive learning systems and programs in education. (Although many of the ideas expressed in this paper may apply within training contexts, this paper will be limited to research from and implications for education.) First, people can learn "from" interactive learning systems and programs, and second, they can learn "with" interactive learning tools. Learning "from" interactive learning systems is often referred to in terms such as computer-based instruction or integrated learning systems (ILS). Learning "with" interactive software programs, on the other hand, is referred to in terms such as cognitive tools and constructivist learning environments.
The foundation for the use of interactive learning systems as "tutors" (the "from" approach) is "educational communications theory," or the deliberate and intentional act of communicating content to students with the assumption that they will learn something "from" these communications. The instructional processes inherent in the "from" approach to using interactive learning systems can be reduced to four simple steps:
1) exposing learners to messages encoded in media and delivered via an interactive technology,
2) assuming that learners perceive and encode these messages,
3) requiring a response to indicate that messages have been received, and
4) providing feedback as to the adequacy of the response.
The findings concerning the impact of interactive learning systems and programs can be summed up as:
- Computers as tutors have positive effects on learning as measured by standardized achievement tests, are more motivating for students, are accepted by more teachers than other technologies, and are widely supported by administrators, parents, politicians, and the public in general.
- Students are able to complete a given set of educational objectives in less time with computer-based instruction (CBI) than needed in more traditional approaches.
- Limited research and evaluation studies indicate that integrated learning systems (ILS) are effective forms of CBI which are quite likely to play an even larger role in classrooms in the foreseeable future.
- Intelligent tutoring systems have not had significant impact on mainstream education because of technical difficulties inherent in building student models and facilitating human-like communications.
- Overall, the differences that have been found between interactive learning systems as tutors and human teachers have been modest and inconsistent. It appears that the larger value of these systems as tutors rests in their capacity to motivate students, increase equity of access, and reduce the time needed to accomplish a given set of objectives.
The foundation for the use of interactive learning systems as "cognitive tools" (the "with" approach) is "cognitive psychology." Computer-based cognitive tools have been intentionally adapted or developed to function as intellectual partners to enable and facilitate critical thinking and higher order learning. Examples of cognitive tools include: databases, spreadsheets, semantic networks, expert systems, communications software such as teleconferencing programs, on-line collaborative knowledge construction environments, multimedia/hypermedia construction software, and computer programming languages. In the cognitive tools approach, interactive tools are given directly to learners to use for representing and expressing what they know (Jonassen & Reeves, 1996). Learners themselves function as designers, using software programs as tools for analyzing the world, accessing and interpreting information, organizing their personal knowledge, and representing what they know to others.
The basic principles that guide the use of interactive software programs as cognitive tools for teaching and learning are:
- Cognitive tools will have their greatest effectiveness when they are applied within constructivist learning environments.
- Cognitive tools empower learners to design their own representations of knowledge rather than absorbing representations preconceived by others.
- Cognitive tools can be used to support the deep reflective thinking that is necessary for meaningful learning.
- Cognitive tools have two kinds of important cognitive effects: those "with" the technology, in terms of intellectual partnerships, and those "of" the technology, in terms of the cognitive residue that remains after the tools are used.
- Cognitive tools enable mindful, challenging learning rather than the effortless learning promised but rarely realized by other instructional innovations.
- The source of the tasks or problems to which cognitive tools are applied should be learners, guided by teachers and other resources in the learning environment.
- Ideally, tasks or problems for the application of cognitive tools will be situated in realistic contexts with results that are personally meaningful for learners.
- Using multimedia construction programs as cognitive tools engages many skills in learners such as: project management skills, research skills, organization and representation skills, presentation skills, and reflection skills.
- Research concerning the effectiveness of constructivist learning environments such as microworlds, classroom-based learning environments, and virtual, collaborative environments shows positive results across a wide range of indicators.
In summary, thirty years of educational research indicates that various interactive technologies are effective in education as phenomena to learn both "from" and "with." Historically, the learning "from" or tutorial approaches have received the most attention and funding, but the "with" or cognitive tool approaches are the focus of more interest and investment than ever before. Preliminary findings suggest that in the long run, constructivist approaches to applying media and technology may have more potential to enhance teaching and learning than instructivist models (Jonassen & Reeves, 1996). In other words, the real power of interactive learning to improve achievement and performance may only be realized when people actively use computers as cognitive tools rather than simply interact with them as tutors or data repositories.
At the same time, there is a paucity of empirical evidence that interactive learning technologies are any more effective than other instructional approaches. This is because most research studies confound media and methods. Sixteen years ago, Richard E. Clark ignited a debate about the impact of media and technology on learning with the provocative statement that "media do not influence learning under any conditions" (Clark, 1983, p. 445). He clarified this challenge by explaining that media and technology are merely vehicles that deliver instructional methods, and that it is instructional methods, the teaching tasks and student activities, that account for learning. Clark maintained that as vehicles, interactive technologies such as computer-based instruction do not influence student achievement any more than the truck that delivers groceries changes our nutrition. Clark (1994) concluded that media and technology could be used to make learning more efficient (enable students to learn faster), more economical (save costs), and/or more equitable (increase access for those with special needs).
Robert Kozma challenged Clark in the debate about the impact of media and technology on learning by arguing that Clark's separation of media and methods creates "an unnecessary and undesirable schism between the two" (Kozma, 1994, p. 16). He recommended that we move away from questions about whether technologies impact learning to questions concerning the ways we can use the capabilities of interactive technology to influence learning for particular students with specific tasks in distinct contexts. Kozma recognized that although interactive technologies may be essentially delivery vehicles for pedagogical dimensions, some vehicles are better at enabling specific instructional designs than others.
Both Clark and Kozma present important ideas. It is evident that the instructional methods students experience and the tasks they perform matter most in learning. In addition, I maintain that the search for unique learning effects from particular interactive technologies is ultimately futile. After all, fifty years of media and technology comparison studies have indicated no significant differences in most instances. Whatever differences are found can usually be explained by differences in instructional design, novelty effects, or other factors. However, even though technologies may lack unique instructional effects, some educational objectives are more easily achieved with interactive learning than in other ways. Revealing effective implementations of interactive learning for various types of learners and discrete learning objectives and content is an important goal for educational researchers and evaluators.
A Renewed Research Agenda
The fact that educational research is not highly valued by educational practitioners is widely recognized. A large part of the problem can be attributed to the fact that the interests of academics who conduct research and those of administrators, teachers, students, parents, and others involved in the educational enterprise are often quite different. Tanner (1998) reminds us that educational research should be focused on the mission of enhancing educational opportunities and outcomes:
Unfortunately, much that is taken for social research serves no social purpose other than to embellish reputations in the citadels of academe and sometimes to even undermine the democratic public interest.... Early in this century, John Dewey warned that educational practices must be the source of the ultimate problems to be investigated if we are to build a science of education. We may draw from the behavioral sciences, but the behavioral sciences do not define the educational problems. The faculties of the professional schools draw on the basic sciences, but their mandate is mission-oriented, not discipline-centered. (pp. 348-349)
As noted in the previous section, research reveals that students learn both from and with interactive learning technology. Computer-based instruction and integrated learning systems have been demonstrated to be effective and efficient tutors, and there is considerable evidence that learners develop critical thinking skills as authors, designers, and constructors of multimedia or as active participants in constructivist learning environments. Unfortunately, the level of our knowledge about interactive learning is somewhat analogous to what health practitioners know about the functions of vitamins and herbs in supporting good health. There is general agreement within the healthcare professions that most vitamins and many herbs have health benefits, but there is considerable disagreement about the proper dosages, regimens, and protocols for using various products. Similarly, in education, while we can and do generally agree that interactive learning is good, we know very little about the most effective ways to implement interactive learning. In fact, the need for long-term, intensive research and evaluation studies focused on the mission of improving teaching and learning through interactive learning technology has never been greater. Both government and commercial interests are pushing interactive learning in various forms from preschool through lifelong learning, and major decisions are being made about these technologies based upon habit, intuition, prejudice, marketing, politics, greed, and ignorance rather than reliable and valid evidence provided by research and evaluation.
As we enter the new millennium, I maintain that our research and evaluation efforts should be primarily developmental in nature, i.e., focused on the invention and improvement of creative approaches to enhancing human communication, learning, and performance through the use of interactive learning technologies. The purpose of such inquiry should be to improve, not to prove. Further, developmental research and evaluation should not be limited to any one methodology. Any approach, quantitative, qualitative, critical, and/or mixed methods, is legitimate as long as the goal is to enhance education.
My recommendation to engage and invest in developmental research and evaluation overlaps somewhat with advice emanating from policy-makers in the USA where the Panel on Educational Technology of the President's Committee of Advisors on Science and Technology (1997) established three priorities for future research:
1. Basic research in various learning-related disciplines and fundamental work on various educationally relevant technologies.
2. Early-stage research aimed at developing new forms of educational software, content, and technology-enabled pedagogy.
3. Empirical studies designed to determine which approaches to the use of technology are in fact most effective. (p. 38)
The second of these priorities reflects my call for developmental research issued above. However, I believe that the President's Committee of Advisors on Science and Technology (1997) has placed too much faith in the ability of large-scale empirical studies to identify the most effective approaches to using interactive learning in schools. In the final analysis, the esoteric and complex nature of human learning may mean that there is no generalizable "best" approach to using interactive learning technology in education. The most we may be able to hope for is more creative application and better informed practice.
Salomon (1991) describes the contrast between analytic and systemic approaches to research that transcends the "basic versus applied" or "quantitative versus qualitative" arguments that so often dominate debates about the relevancy of educational research. Salomon concludes that the analytic and systemic approaches are complementary, arguing that "the analytic approach capitalizes on precision while the systemic approach capitalizes on authenticity" (p. 16). Salomon's critique remains relevant because much of the research and evaluation of the effectiveness of CBI and other forms of interactive learning continues to be plagued by fundamental flaws that render much of this literature little more than pseudoscience (Reeves, 1993).
One reason for this deplorable state of affairs is that there has long been great disagreement about the purpose and value of educational research. For example, Nate Gage, a past president of the American Educational Research Association (AERA), has been a staunch defender of the notion that the goal of basic research in education is simply "more valid and more positive conclusions" (Farley, 1982, p. 12) whereas another past president of AERA, Robert Ebel, proclaimed:
....the value of basic research in education is severely limited, and here is the reason. The process of education is not a natural phenomenon of the kind that has sometimes rewarded scientific investigation. It is not one of the givens in our universe. It is man-made, designed to serve our needs. It is not governed by any natural laws. It is not in need of research to find out how it works. It is in need of creative invention to make it work better. (Farley, 1982, p. 18, Ebel's italics).
Should researchers and evaluators seek to establish immutable laws akin to those found in the harder sciences? Or should we be focused on finding out how to improve education with different types of students in specific places at particular times of their development? These questions reflect an on-going struggle between those who view our field as a science and those who regard it as a craft. The questions also reflect the so-called "paradigm wars" among educational researchers. Despite the increased acceptance in some educational circles of qualitative alternatives to the experimental methods that have dominated educational research in the past, there are signs that some powerful policy-makers are still pushing for more classically empirical approaches. The aforementioned Panel on Educational Technology of the President's Committee of Advisors on Science and Technology (1997) listed as one of its six major strategic recommendations that the government "initiate a major program of experimental research....to ensure both the efficacy and cost-effectiveness of technology use within our nation's schools" (p. 5). I contend that a wiser course would be to support more developmental research (aimed at making interactive learning work better) using a wider range of quantitative, qualitative, critical, and mixed methods and less empirical research (aimed at determining how interactive learning works) using experimental designs.
References
Clark, R. E. (1983). Reconsidering research on learning with media. Review of Educational Research, 53(4), 445-459.
Clark, R. E. (1994). Media will never influence learning. Educational Technology Research and Development, 42(2), 21-29.
Clark, R. E., & Estes, F. (1998). Technology or craft: What are we doing? Educational Technology, 38(5), 5-11.
Farley, F. H. (1982). The future of educational research. Educational Researcher, 11(8), 11-19.
Jonassen, D. H., & Reeves, T. C. (1996). Learning with technology: Using computers as cognitive tools. In D. H. Jonassen (Ed.), Handbook of research for educational communications and technology (pp. 693-719). New York: Macmillan.
Kozma, R. B. (1994). Will media influence learning? Reframing the debate. Educational Technology Research and Development, 42(2), 7-19.
Perelman, L. J. (1992). School's out: Hyperlearning, the new technology, and the end of education. New York: William Morrow.
Postman, N. (1995). The end of education: Redefining the value of school. New York: Alfred A. Knopf.
President's Committee of Advisors on Science and Technology. (1997, March). Report on the use of technology to strengthen K-12 education in the United States (http://www.whitehouse.gov/WH/EOP/OSTP/NSTC/PCAST/k-12ed.html). Washington, DC: The White House.
Reeves, T. C. (1993). Pseudoscience in computer-based instruction: The case of learner control research. Journal of Computer-Based Instruction, 20(2), 39-46.
Salomon, G. (1991). Transcending the qualitative-quantitative debate: The analytic and systemic approaches to educational research. Educational Researcher, 20(6), 10-18.
Tanner, D. (1998). The social consequences of bad research. Phi Delta Kappan, 79(5), 345-349.
1. I would like to thank Betty Collis and Ron Oliver, the Co-Chairs of ED-MEDIA '99, for inviting me to present a keynote address at this important conference. If there is any merit in what I present, they deserve much of the credit for their advice and support in preparing this address. I take full responsibility for the faults that will inevitably be found.
2. This paper reflects my preliminary (and admittedly hurried) thinking about what I shall eventually present at the conference in Seattle. Please don't be surprised if I radically change the ideas presented in this paper when I actually give the keynote address. As Michelangelo said, "I am still learning."