KNOWLEDGE MAP: MONITORING AND EVALUATION

This Knowledge Map is an excerpt from the publication Knowledge Maps: ICTs in Education: What Do We Know About the Effective Uses of Information and Communication Technologies in Education in Developing Countries?, produced by the Information for Development Program (infoDev).

GUIDING QUESTIONS:
What do we know about effective monitoring and evaluation practices and studies related to the uses of ICTs in education?
What large-scale comparative studies of ICT uses in education exist, and what do they tell us about the monitoring and evaluation process?
What do we know about useful indicators related to the uses of ICTs in education?

CURRENT KNOWLEDGEBASE
What we know, what we believe -- and what we don't

Monitoring and evaluation is not receiving the attention it warrants
A consensus holds that insufficient attention is paid to monitoring and evaluation issues and feedback loops during the program design process of most ICT in education initiatives.

The issues are known, but tools and data are missing
In general, many of the issues and challenges associated with ICT in education initiatives are widely known by experts and advanced practitioners in the field (although this general awareness does not appear to extend to most policymakers, donor staff and educators new to ICT use in education). However, data on the nature and extent of these issues remain limited in most places because of the lack of monitoring and evaluation tools and methodologies dealing with the use of ICTs in schools and their impact on teaching and learning.

Much of the work done to date may suffer from important positive biases
Where evaluation data are available and monitoring and evaluation projects have occurred, much of such work is seen to suffer from important biases.

No common set of indicators
There are no common international usage, performance and impact indicators for ICTs in education.
Examples of monitoring and evaluation indicators and data collection methods exist from many countries. The process for the development of ICT in education indicators is the same as the process for the development of indicators in other fields.

Few international comparative evaluations have been done
There have been very few international evaluations of the impact of ICT use in education. Those that exist rely in large part on self-reported data.

Quantitative data related to infrastructure have been the easiest to collect
Quantitative data, typically related to the presence and functionality of ICT-related hardware and software, are seen as the easiest to collect, and most monitoring and evaluation indicators and collection efforts have focused on such data. In general, there has been a greater emphasis on technical infrastructure issues than on program design, monitoring and evaluation, training, and on-going maintenance/upgrade issues.

Data collection methods are varied
Data collection methods are quite varied. The use of the Internet to collect data, and for self-assessment, especially in LDCs, has not been very successful and is seen as problematic.

A reliance on self-reported data
Qualitative indicators have relied to a large extent on self-reported data.

ICTs are not being well used in the M&E process
There is a general belief that the communication potential of ICTs -- to facilitate feedback on the findings of monitoring and evaluation work, to create and sustain communities of interest/practice, and to provide information and communication linkages with other communities -- is being under-utilized.

COMMENTS

General comments
Simply put: a lot of work needs to be done in this area if ICTs are to become effective and integral tools in education, and if impact is to be demonstrated to the donors and communities financing ICT-related initiatives in education.
Bias is a very real issue in most of the monitoring and evaluation work done on ICT in education issues across the board. Such biases are often introduced at the monitoring and evaluation design stage, and include a lack of relevant and appropriate control groups, biases on the part of 'independent evaluators' (who often have a stake in seeing positive outcomes), and biases on the part of those evaluated (who may understandably seek to show that they have made good use of investments in ICTs to benefit education). The opportunity for such biases (which are usually positive biases) is especially acute where there is a great reliance on self-reported data.

There appears to be a lack of institutional and human resource capacity among local organizations in LDCs to carry out independent evaluations of ICT in education initiatives (which increases the cost of such activities and potentially decreases the likelihood that the results will be fed back into program design locally).

A general lack of formal monitoring and evaluation activities inhibits the collection and dissemination of lessons learned from pilot projects, and the formation of the feedback loops necessary for such lessons to become an input into educational policy. Where such activities have occurred, they focus largely on program delivery and are often specific to the project itself.

Applicability to LDC/EFA context
The issues highlighted above are particularly acute in most developing countries. Developing in-country capacity for monitoring and evaluation work will be vital if ICT in education investments are to be monitored and evaluated at less cost. The opportunity costs of monitoring and evaluation work related to ICT in education interventions are potentially great, as there is typically a limited number of people able to do such work, and schools typically have little room in their calendars to participate in such activities.
This is especially true where control groups are needed for interventions in rural and/or hard-to-reach areas -- particular areas of interest for educational investments targeting education-related MDGs. That said, given the potential implications of costly mistakes in this field, especially in countries with severely constrained education budgets, investing in monitoring and evaluation in this field should be considered money well spent.

Attention to equity issues needs to be included in all monitoring and evaluation efforts related to the uses of ICTs in education. While the introduction of ICTs in LDCs is seen as a mechanism to reduce the so-called 'digital divide', in most cases such introductions serve to increase such divides, at least initially.

Some areas for further investigation and research
In general, there is a pressing need for additional work related to performance indicators to monitor the use and impact of ICTs in education.
What would be a useful set of 'core' indicators that could be used across countries?
How have monitoring and evaluation studies related to the uses of ICTs in education been conducted in LDCs, and what can we learn from this?
How should monitoring and evaluation studies of the impact of ICTs in education in LDCs be conducted?

Some Recommended Resources to learn more ...
Assessing the Impact of Technology in Teaching and Learning: A Sourcebook for Evaluators [Johnston 2002]
Comparative International Research on Best Practice and Innovation in Learning [Holmes 2000]
Consultative Workshop for Developing Performance Indicators for ICT in Education [UNESCO-Bangkok 2002]
Developing and Using Indicators of ICT Use in Education [UNESCO 2003]
The Flickering Mind: The False Promise of Technology in the Classroom and How Learning Can Be Saved [Oppenheimer 2003]
Monitoring and Evaluation of Research in Learning Innovations -- MERLIN [Barajas 2003]
The Second Information Technology in Education Study: Module 2 (SITES: M2) [ISTE 2003]
Technology, Innovation, and Educational Change -- A Global Perspective. A Report of the Second Information Technology in Education Study, Module 2 [Kozma 2003]
World Links for Development: Accomplishment and Challenges, Monitoring and Evaluation Reports [Kozma 1999, 2000]

About these Briefing Sheets:
infoDev's Knowledge Maps on ICTs in education are intended to serve as quick snapshots of what the research literature reveals in a number of key areas. They are not meant to be an exhaustive catalog of everything that is known (or has been debated) about the use of ICTs in education on a particular topic; rather, taken together they are an attempt to summarize and give shape to a very large body of knowledge and to highlight certain issues in a format quickly accessible to busy policymakers. The infoDev knowledge mapping exercise is meant to identify key general assertions and gaps in the knowledge base of what is known about the use of ICTs in education, especially as such knowledge may relate to the education-related Millennium Development Goals (MDGs).