Reflections on the Politics of Practicality: Evaluating ICT for community development
From 3C Media: Journal of Community, Citizen’s and Third Sector Media and Communication
http://www.cbonline.org.au
Issue 4 (August) 2008
Evaluation is far from the sexiest topic for a journal issue on community technology. It brings to mind the endless forms and responses required by funders, or writing proposals with a detailed evaluation methodology when one is sitting there thinking ‘If you don’t give me any money there won’t be a damn project to evaluate!’ The word evaluation innately conjures negative affect. Say it and see how it rolls around on your tongue: ‘project evaluation’. Compare it to ‘Web 2.0’ or ‘information superhighway’. It just doesn’t sound like fun.
While my own tawdry presentation skills likely played a role, there’s no doubt that the lukewarm reception my talk received at the 2007 Making Links conference in Sydney, Australia, had something to do with focusing on a topic that is the community media worker’s equivalent of going to the dentist. No inspirational success stories, no rousing polemic about the need to get with the future programme, no fancy technologies, no ideas for new programmes. Maybe that’s what the sector is about? Maybe, I thought, I’d misjudged my audience completely? Maybe I’m just interested in arcane bureaucratic nonsense for my own reasons, and this doesn’t offer anything useful to anyone?
I was slightly relieved at dinner that night when the director of a well-established NGO looked me over with a suspicious eye and asked me why I didn’t talk about any of my projects. I replied that I wasn’t really a practitioner, just a consultant and academic. She wasn’t having any of it. ‘But everything you talked about are the issues we deal with on the ground – it was obvious you’d dealt with them, but the projects themselves weren’t there.’ I admitted that I do actually work on projects, but I don’t like to talk about them: partly because the projects are long-term and small-scale and unsuited to conference presentations, but also because I just don’t know whether talking about them helps the projects (Spivak and Sharpe 2002: 623), and it risks obscuring the more important issues that structure practice but are not drawn from it. Evaluation is one of those topics whose unavoidable importance has not generally been asserted by those working on the ground. It emerges from an international discourse that, I offer, urgently needs to be transformed by those with on-the-ground experience. But in order to change that we have to focus on this discourse and where it comes from, testing it against our experience and making interventions back at that level, rather than expecting that our realities will be understood by those setting the terms of reference for our projects. In this paper I want to highlight some of the challenges to effective evaluation in the sector; look at the lessons drawn from the international discourse on evaluation; and suggest some pragmatic responses that can be made by project workers.
This approach seems appropriate to both my current role as a consultant/academic and my experience in the sector. My master’s study initially looked at rural community development through ICT, but ended up in social theory once I realised that the structuring concepts around the ‘digital divide’ that constrained development possibilities where I lived were not local issues, though they had local impacts. Perhaps more accurately, I realised it was at the definitional level that I could contribute most and that change seemed most urgent, and that level could not be reached through local interventions while the connecting discourses of institutional support remained inadequate. While I still regard the community impacts as the test of my work, it has proven useful to venture into the wild jungles of international policy and development agencies in order to better understand why different local experiences seem to struggle with the same issues.
Since then, I’ve been fortunate to be involved in a number of Asia Pacific ICT-for-development (ICT4D) initiatives where I see the results of the work of many projects. If I were to summarise my experience of the community sector, it would mostly be based on two observations:
Firstly, community sector ICT workers are dealing with a huge range of competing demands, and compared to their colleagues in larger organisations they have to assume many more roles. The upside of this is a level of flexibility and an ability to ‘get things done’ with minimal bureaucracy. However, the downside is time-poverty, where blocks of time allocated to strategic planning evaporate in the face of urgent demands such as keeping the website up or the email working.
The second is the chronic undercapitalisation which affects many community projects. On one telecentre project described by the late Steve Cisler (2007), USAID staff responded to problems by extending the project life cycle and pumping more money into lab maintenance while demanding that the project be ‘self-sustaining by month 18.’ What does it mean to talk about ‘sustainability’ in a setting where there is little money available to the administration or any of the users, and where the costs of fuel, paper, staff time, Internet access, and electricity are very high? Yet without such an impossible exit strategy for the donor, no money will be forthcoming for the project, and it would take a brave or foolhardy person to refuse to keep their mouth shut and say all the right things: communities are in desperate need of resources, and pandering to the unrealistic expectations of funders seems a small price to pay. But is it a small price to pay for a funding environment that evaluates projects according to fantasy? That is a decision only the practitioner can make.
What these two issues suggest to me is that one of the most important determinants of ‘practical’ possibilities lies precisely in the political dimensions of our organisations. Technical workers are historically not very interested in politics; or more correctly, they prefer not to discuss the political aspects of their practices. However, a recent Australian study (Department of Communications, Information Technology and the Arts 2005) highlights three critical factors in realising benefits from a range of ICT projects:
1. Being ICT Aware
2. Being open to Organisational Transformation
3. Being persistent through the time lag.
These are all political issues! As the French saying goes, ‘those who don’t do politics get done in by politics’.
Evaluation in the ICT4D Imaginary
If I sound cynical, it is only because in 15 years of work with technology projects I have too often seen convenient fantasies of results, manufactured to serve the short-term interests of projects, which eventually leave community workers disillusioned and funders dissatisfied when those results cannot be measured or achieved.
Let me take an example: throughout the Asia Pacific, the theme of ‘universal access’ drives ICT4D policy; policy initiatives are given ambitious titles such as ‘Computers for all’ or ‘One laptop per child’. These are worthy ideals, but in the policy setting they become problematic: they are never finally achievable, and they provide little guidance for the tough decision-making required to support the use of ICTs where basic poverty issues such as access to food, water, and healthcare remain unsolved.
These ideals are part of what Iris Marion Young (1990: 18) calls a ‘distributive paradigm’ that ‘defines social justice as the morally proper distribution of social benefits and burdens among society’s members.’ Virginia Eubanks (2007) notes that this paradigm is at the heart of much work in the community informatics sector, but that it restricts the scope of an equity agenda because, among other things, its demographic cast cannot account for the complex inequalities of the information economy. As ICT4D matures as a field, a number of reviews of ICT4D literature are beginning to appear (Wilson 2003; Ekdahl and Trojer 2002). They find recurrent features in ICT4D reports which are in tension with findings in other parts of the development sector.
Firstly, the ICT4D literature’s metaphors of catch-up, progress and leap-frogging present development as a linear pathway and ICT as a positive, or at least neutral, development. As Eubanks suggests, when technology is framed as a commodity to be received, rather than a complex field to enter, we are unable to account for the gap between the normative solutions we seek and the lived experience of unintended effects of technological systems in communities.
Secondly, there are common demands for urgency and the need to act quickly on ICT4D in order not to be excluded from fast-paced developments – even though national human development indicators listed in the United Nations’ Human Development Reports remain remarkably stable over time. I am certain that anyone who works in the community technology sector has used this rhetoric, as the discourses of speed and paradigm-shift are fundamental to how we understand technology in the West.
Thirdly, assumptions are made about what kinds of information are valuable for development, and a category of information-poor peoples is implicitly compared to the knowledge-holders of the developed world, rather than considered in terms of reference drawn from the context of their own lives. We are cast in the role of missionary, bringing the new religion to the people. Or perhaps, if we are more cautious, we are bringing people an understanding of a new power system and structure (ICTs) that they will need to learn to navigate.
It is worth taking a sceptical approach to these ‘articles of faith’ in ICT-enabled development, because as Kerry McNamara points out, there is still a significant gap in evaluation:
Despite a proliferation of reports, initiatives, and pilot projects in the past several years, we still have little rigorous knowledge about ‘what works.’ There are abundant ‘success stories,’ but few of these have yet been subjected to detailed evaluation. There is a growing amount of data about the spread of ICTs in developing countries and the differential rates of that spread, but little hard evidence about the sustained impact of these ICTs on poverty reduction and economic growth in those countries. (McNamara 2003: 1)
The point is not that these articles of faith are wrong per se. It is that they exist within a distributive paradigm which suits the ICT industry – including the ICT4D industry – more than it suits the long-term needs of communities. In Australasia, many in the community sector are not dealing with such a huge gap between our own basic life conditions and those of the people benefiting from our work, but I would say the point still generally holds: much of the time projects occur because one of us thinks they are a good idea, or because we know resourcing might be available, rather than because they grow out of detailed experiences of project success.
We have to question this paradigm more rigorously in our own work if we are to learn from the work of others and not simply promote what ethnographer Eric Michaels (1994) termed ‘well-meaning but ineffective advancement projects, the discarded skeletons of which litter the countryside.’ This paradigm puts us in the producer role and our communities in the consumer role, and causes funders to evaluate development as a product rather than as a relationship. The currency of international ICT4D is the photograph of the rural woman (preferably with child nearby) in front of a computer. The photograph will probably not be taken by a member of the woman’s community, but by an external consultant who is initiating or evaluating the project. The photograph will appear in a project report (or, if it is a good photograph, the funder’s annual report), and the fantasy of rural women entering the information economy will be complete. At this point, the project has been a success for the funder, and their future programme budgets are made more secure. If the community worker remains to try and consolidate this initial success into the fabric of the community in a positive way, they will soon realise that the support required may not dovetail smoothly with the need to produce success stories – they might find that the cycle of intervention and evaluation takes longer. They will then fall out of favour with an evaluation cycle whose political exigencies demand faster results. The community worker might move to a new job or sector to regain their enthusiasm, the funder might shift their programme budgets, and the communities who were promised the dream of the ICT panacea will wait until the next person comes along who sees the ‘potential’ in a ‘project.’
This might be considered a harsh assessment, but it is one that I think is congruent with the experience of many who have worked in this sector for some time. This does not mean that I don’t think anything good comes out of community technology projects, but it does mean that I believe it is vital to begin shifting the terms by which we evaluate projects. This is why I want to focus on evaluation, because to me it is one of the key battlegrounds for the political dimensions of ICT projects in the community sector. My hope is that this discussion will not just shift the way you think about evaluating work for your own purposes, but that it helps those who work with funders, donors, and budgets gain more traction in the realpolitik of resource allocation.
Evaluation is no panacea – it is often an instrument of control, and this is the way that those working in the community technology sector generally experience it. Nigel Norris vividly captures the perspective of those who receive evaluation criteria from above:
‘Mostly executive decision makers do not want to be told that things are complex and open to different interpretations and valuing; they know that. It is the way out of or around the complexity and the plurality of interests and values that they want help with. They want to use evaluation as a resource to solve problems, not pose or redefine them. Some of the problems they want to address are social problems. Other problems are creatures of the politics of government: avoiding embarrassment, displacing blame, deflecting criticism, maintaining reputation, legitimating action or inaction, reordering priorities, justifying budgets. To the governmental frame of mind, beset with accountability, other people’s autonomy is a problem. It is a source of contingency, ambiguity, and unpredictability and a potential for loose cannons. The increasing tendency of governments to prespecify the characteristics of good evaluation by providing guidelines and standards stems from an understandable desire for greater predictability and control over the content and process of evaluation. It is a kind of security blanket.’ (Norris 2005: 584)
However, as I have discussed in the previous section, it is not only governments and funders who can use evaluation criteria to secure their projects. Evaluation can also be a powerful tool for deconstructing the assumptions we hold while working on the ground, particularly if we make the formulation of criteria a collaborative exercise with our communities. In all such cases, it is important to identify the cultural assumptions embedded in the way projects are described. There is a practical reason for this: it helps avoid the unintended consequences that come from using a shared language (‘technology’, ‘development’) while having different understandings and intentions.
Unintended Consequences
Unintended consequences are central to the rationale for evaluation, and there are three stories about them I’d like to share.
The first comes from Ramsay Taum, a Native Hawaiian leader from the University of Hawai’i whom I was fortunate to work with in 2006. He tells the story of a fish and a monkey who had become good friends. The monkey would stand at the edge of the stream and talk to the fish every day. One day, the fish came along and said to the monkey, ‘Friend, I need your help’. The monkey replied, ‘Sure!’, pulled the fish out of the stream, placed it in the most bountiful tree in the forest, and walked off feeling proud of his generosity. That is probably my favourite story about development.
The second concerns the unintended (or perhaps semi-intended) consequences of eGovernment projects in India. According to the World Bank, the government of Andhra Pradesh developed a land registration system in which the land owner can enter details of their property (location, dimensions and other factors) and the system then calculates the value. Before the system, land valuation was performed in an entirely non-transparent manner by assessors and agents, was fraught with corruption, and often took weeks and sometimes required additional payments. After the implementation of the new system, land registration can be completed in a few hours where earlier it took 7-15 days (Parks 2005: 6). However, researcher Solomon Benjamin (2005) has found that such new land regimes can have very uneven effects. In Bangalore, the simplification of titles and the centralisation of records have made land much more accessible to larger purchasers, while smaller local investors are unable to compete. ‘This has allowed very large real estate companies catering to the IT industry to access land in Bangalore, resulting in dramatic changes in land markets’ (Benjamin 2005: 8). Gentrification makes the rights of the poor more tenuous when ICT enables companies and politicians to collaborate on larger ‘real estate development projects’, which may be good for a region’s overall economy but which transfer security away from the poor to the benefit of the wealthy.
A third, more academic story comes from Jonathan Morell (2005), who completed a substantial review of the research on unintended consequences. He notes that explanations of unintended consequences point to the complex nature of systems: multiple cross-linked processes, non-linear interactions, long feedback loops, sensitivity to initial conditions, and the inability to completely specify all relevant variables, among others. He suggests that a lack of information about the environment we are trying to affect is chronic rather than exceptional, and that we often lack vigilance in scouting for environmental changes that would be tell-tale signs that things will not unfold as we expect. Complicating this further is that ‘the nature of planning is such that opportunity for powerful intervention and change exists at only limited times in the life cycle of a policy or a program’ (445).
Morell distinguishes unforeseen consequences, which emerge from ‘the weak application of analytical frameworks and from failure to capture the experience of past research’, from unforeseeable consequences, which ‘stem from the uncertainties of changing environments combined with competition among programs occupying the same ecological niche’ (446). Morell suggests a range of remedies which are too extensive to list here, but which should be read by anyone involved in planning or evaluating projects. They generally involve more rigorous pre-programme evaluation of similar projects, diversifying inputs, and maintaining flexibility in programme direction in response to shifting circumstances.
Evaluation in ICT4D Projects
Regardless of whether one believes in the value of evaluation for programme practitioners, it is now, to a degree, an unavoidable fact of life in the community technology sector. At a time when the ongoing maintenance costs of technology are rising and it is getting harder to gain support for new initiatives, we need a different mindset from the philosophies that served us during the dotcom era. I’d like to finish by offering some personal observations on what I see as useful considerations for practitioners in evaluating their own projects.
Firstly, there are three imperatives that are oriented toward funders and donors:
1. Distinguish what should and shouldn’t be measured, how, and when, before your funder does. This is not always possible, but it is often possible to set evaluation measures around phenomena which you know are real, while also giving a funder something like what they want. For example, a programme which aims to increase educational opportunities for target groups could be measured by participants’ educational participation over a number of years. But if your evaluation cycle is shorter, you don’t want to get sucked into measuring large-scale behaviour change that might not be visible immediately. Instead, one can evaluate attitudinal changes, confidence levels, or other behaviours which will influence the later outcomes the project is trying to achieve.
2. Build in a Quick Win. For example, when instituting a content management system or a blogging platform, ensure that there is a voice in the community that can contribute immediately. It is often especially successful to schedule an event-related initiative early in the project. Small, carefully focussed, time-bounded projects are underrated and have the advantage of gaining visibility when documented well, yet have much more predictable resource demands than ongoing projects.
3. Communicate the Results. Simply: be proactive about setting your own measures for success that are appropriate to the communities you work with (preferably developed in collaboration with them) and continually communicate these to all stakeholders. This is easier said than done under the day-to-day grind of keeping projects on track, but it helps bring upcoming problems into view earlier.
Evaluating the user relationship
There are three other criteria that I use to think about how projects relate to their communities of users.
1. Is the project giving key users what they want? It sounds simple, but a surprising number of projects treat their outcomes as a self-evident universal good. Meanwhile, key users battle low motivation: their goals are not those of the project, though they hope to achieve them through their participation in it. It is important to remain sceptical of our own knowledge of what users want, even when they tell us. User communities may not understand all the implications of their desires, and an important role for project workers is to help them clarify their goals in ways which are true to their impulses yet also achievable.
2. Is the project going through the channels the user expects? The ‘build it and they will come’ era is over. Channels have matured and now carry more specific genre constraints on the kind of content that fits a particular format. Channels to consider include not just email and the web, but SMS, YouTube, Facebook, RSS, and Google. If the channel isn’t under your control, partnering might be required. Many projects have tried to build entirely new platforms (e.g. a ‘local YouTube’) whose sustainability is predicated on participation by a much larger community of users than the initial project participants. This is a very risky strategy and, ultimately, a potential waste of valuable resources when existing platforms are often available and such funds could go into developing user capabilities to navigate and critically assess those platforms.
3. Is it building future capability and flexibility? Whatever we know about technological platforms, we know that they will be different in the future. It is important to ensure flexibility in platforms and content that are generated, and to increase the skills and capacity of communities to adapt to change.
Conclusion
Over time I’ve come to believe that our core skill is not so much our technological expertise, which is rapidly becoming commoditised by Web 2.0 platforms, even if it often prompted our entry into the sector in the first place. It’s our understanding of our stakeholders: our ability to visualise the way they use the internet, to empathise with their needs, and to bring our related organisations around to supporting those needs. Increasingly, this probably requires less of the skills of the technologist and marketer, and more those of the anthropologist and facilitator. Debates on anthropological method are asking critical questions about the value for communities of projects whose benefits were previously seen as self-evident. I suggest that evaluation methodology is one place where we can productively bring some of these insights to our own work, to step outside technological fantasies and increase our value for the communities we represent.
References
Benjamin, Solomon. 2005. ‘Analogue to Digital: Re-Living Big Business’s Nightmare in New Hydras.’ Impressum: Institut für Neue Kulturtechnologien/t0, 20 November 2005 [cited 6 December 2006]. Available from http://static.world-information.org/infopaper/wi_ipcityedition.pdf.
Cisler, Steve. 2007. Re: OLPC presentation in Norway. incom mailing list, http://mail.kein.org/pipermail/incom-l/2007-February/001592.html.
Department of Communications, Information Technology and the Arts. 2005. Achieving Value from ICT: Key Management Strategies. Australian Government. http://www.dcita.gov.au/__data/assets/pdf_file/25466/Achieving_value.pdf (accessed October 2, 2007).
Ekdahl, Peter, and Lena Trojer. 2002. ‘Digital Divide: Catch up for What?’ Gender, Technology and Development 6 (1):20.
Eubanks, Virginia. 2007. ‘Trapped in the Digital Divide: The Distributive Paradigm in Community Informatics.’ The Journal of Community Informatics 3 (2).
McNamara, Kerry. 2003. ‘Information and Communication Technologies, Poverty and Development: Learning from Experience’. In infoDev Annual Symposium. Geneva, Switzerland: infoDev.
Michaels, Eric. 1994. Bad Aboriginal art : tradition, media and technological horizons. St. Leonards, N.S.W.: Allen & Unwin.
Morell, Jonathan A. 2005. ‘Why Are There Unintended Consequences of Program Action, and What Are the Implications for Doing Evaluation?’ American Journal of Evaluation 26 (4):444-463.
Norris, Nigel. 2005. ‘The Politics of Evaluation and the Methodological Imagination.’ American Journal of Evaluation 26 (4):584-586.
Parks, Thomas. 2005. ‘A Few Misconceptions about eGovernment.’ [cited February 2, 2007]. Available from http://www.asiafoundation.org/pdf/ICT_eGov.pdf.
Spivak, Gayatri Chakravorty, and Jenny Sharpe. 2002. ‘A Conversation with Gayatri Chakravorty Spivak: Politics and the Imagination.’ Signs: Journal of Women in Culture and Society 28 (2):609-625.
Wilson, Merridy. 2003. ‘Understanding the International ICT and Development Discourse: Assumptions and implications.’ The Southern African Journal of Information and Communication 3.
Young, Iris Marion. 1990. Justice and the politics of difference. Princeton, N.J.: Princeton University Press.