There is a crack in everything. That’s how the light gets in.
The purpose of this final chapter is to show how the analysis in this book promotes a kind of thinking about evaluation that is an alternative to the present self-understanding of contemporary evaluation practices. Such alternative thinking is not for evaluators alone. Evaluation so deeply involves professionals, managers, and users of services that it is a broad social responsibility to think cleverly about evaluation and use it intelligently.
The central thesis of this book is that if we want to understand many of the norms, values, and expectations that evaluators and others bring to evaluation (sometimes unknowingly), we should understand how evaluation is demanded, formatted, and shaped by the two great principles of social order in modernity, called organization and society.
In the organizational part of the book, it was argued that modern evaluation is deeply embedded in organizational logics. We cannot think of modern evaluation apart from modern organization. Furthermore, thinking about evaluation is formed and shaped by the dominant organizational model; we examined three such models, called in brief the rational, the learning, and the institutional. Although the rational model connotes control and predictability, most evaluators subscribe to the positive vision of the learning organization.
However, it was argued—perhaps provocatively—that learning is not only troublesome and time-consuming; the learning model, in its optimism, also fails to grasp why evaluation processes in organizations are sometimes inconsistent, disconnected, ritualistic, and hypocritical. The institutional model was offered as an alternative that better grasps these apparently peculiar phenomena. A key theoretical insight from the institutional model is that organizations do evaluation—and attend to such components as topics, criteria, designs, and follow-up actions—in ways that reflect norms, values, and expectations in organizational environments, regardless of whether there is any consistency in evaluation and its use as such.
The idea of evaluation as a ritual supported by societal norms is thus central to our analysis. The institutional and managerial support for the idea of evaluation is much stronger than the empirical evidence about the instrumental use of actual evaluations. So, evaluations must be carried out for institutional and normative reasons. In our society, the pressure to do evaluation is stronger than the pressure to do good evaluation.
The full meaning of the idea of evaluation as a ritual mandated by society, however, did not unfold until sociological analysis was carried out, by means of a tripartite distinction among modernity, reflexive modernity, and the audit society. Faith in evaluation has been deep, because it is deeply rooted in fundamental modern beliefs and values. Yet in every epoch of modernity (such as reflexive modernity and the audit society) evaluation is shaped and colored by the dominant myths and values.
Evaluation has been clever in how it has incorporated myths, values, and norms from different epochs through the history of modern societies.
The field of evaluation has never slept. It has constantly come up with new models, approaches, and ways to evaluate in order to compensate for the weaknesses of earlier approaches. In doing so, it has managed to remain in close contact with the dominant ways of thinking characteristic of each sociohistorical phase in society. Evaluation has constantly found new forms, but each form has also left some myths and values of its sociohistorical milieu untouched by evaluation and allowed them to exert influence—perhaps exactly because the ideal of an ever-transparent and ever-self-transforming social reality was unachievable in the first place.
Many value choices in evaluation take place with reference to values taken for granted in a particular era. Many evaluation criteria, indicators, and standards are floating around in a given sociohistorical climate, just waiting to become attached to particular evaluands. Combine this with the general pressure to evaluate, and it becomes less necessary for evaluators and evaluation systems to be always careful, detailed, and mindful with the elaboration of evaluation criteria.
Our history of evaluation in modernity, reflexive modernity, and the audit society is not a story of increasing perfection. Instead, each phase has its own limitations and tensions.
In reflexive modernization, evaluation signals a willingness to listen and to reflect on the side effects of and reactions to one’s practice. Evaluation becomes popular in an unprecedented way. It is now a fashionable practice, signifying the emergence of reflexivity as a social norm. In the name of reflexivity, evaluation opens up to contingency and to multiple perspectives, not only those of various stakeholders but also those of a number of paradigms and approaches within evaluation itself. At the same time, however, evaluation thereby gives way to relativity, contingency, perspectivity, arbitrariness, and unpredictability.
These very problems in evaluation—as well as in the organizations that host them—pave the way for new alliances between evaluation and organization in terms of evaluation culture, capacity building, systems, and policies defining evaluation.
The most radical response, antithetical to the subjectivism, contingency, and apparent arbitrariness of reflexively modern evaluation, is what I call evaluation machines, an ideal type describing the preferred form of evaluation in the audit society.
Steering, control, accountability, and predictability return to the throne. The purpose of evaluation is no longer to stimulate endless discussion in society, but to prevent it. Instead of human subjectivity, a style of coolness, objectivity, and technocracy becomes the dominant ideal in evaluation.
There is a particular social investment in evaluation machines that can be understood only if we mobilize a combination of theoretical elements: the organizational legitimacy that organizations attain through consistency with broader themes, norms, values, and expectations in society; experiences with faulty, inconsistent, subjectivist, unpredictable, and overly complex evaluation processes in reflexive modernization; a new social imaginary that makes fear and risk management central cultural themes and that celebrates a new rigorism concerning what cannot be up for critique and discussion; and a strong comeback of rational, bureaucratic, management-oriented organization that is believed to guarantee predictability, nonsubjectivism, and order.
Reflexive modernization encouraged reflexivity, which itself encourages multiperspectivity and fragmentation. It left a vacuum with regard to how a perspective on society as a whole can be represented—a vacuum that evaluation machines try to fill on the basis of an imagery of objectivity, rational measurement, and depersonalized procedure.
It is highly debatable, however, whether evaluation machines represent social reality in a way that allows society to handle its complex problems adequately. Evaluation machines tend to lock evaluees into micro-oriented accountability structures. Evaluation machines often overfocus on defensive, reassuring measurement of microquality at the expense of attention to complex macroproblems that call for offensive, experimenting, and future-oriented views of quality.
Among the many problems evaluation machines have are their hidden costs, the rigidity of their control procedures, degradation of the work of professionals, and constitutive effects.
Constitutive effects are complex, relational, and contextual. They emerge through definitions, categorizations, and measurements that enhance some social constructions of reality but not others. Indicators do not merely communicate about an activity; they also meta-communicate about the identities and roles of human beings and their relations. Indicator systems produce socially relevant labels that stick to practices and people, and that help organize social interaction in particular ways. To categorize constitutive effects of evaluation as unintended, negative, pathologic, and dysfunctional is to assume a technical, pure, and unrealistic model of how knowledge can be applied to social realities. There is an open-ended nondeterminism in the concept of constitutive effects. It suggests that the finality of evaluation is not determined by the data as such, nor by intentions behind evaluations, but by complex processes of social construction.
One of the things we have learned from the analysis in this book is that although evaluation is formed and shaped by two large forms of social order to which it belongs (organization and society), the evaluation field itself has been active, dynamic, and almost self-transformative in its responses to challenges and problems. One must admire the flexibility and dynamic creativity in the field. On the other hand, forms of evaluation that are meaningful at one time as a response to a particular problem or societal situation may, even if they were conceived with the best of intentions, turn out to create a new set of problems in the next phase of history.
For example, the emphasis that evaluators put on learning may have contributed to an emptying of the concept, to time-consuming and frustrating initiatives seeking to foster eternal personal and organizational development, and to evaluation fatigue. Another example: the fragmentation of perspectives in reflexive modernization may have led to unpredictable evaluation and a loss of common perspective, paving the way for rigid evaluations of a much more controlled and predictable nature. And finally: the closer alliance of evaluation and management and the integration of evaluation systems into organizations may at first sight have been meaningful from a utilization perspective, but they turn out to be problematic for the democratic debatability of evaluation.
Only from a broad historical and sociopolitical perspective can we discover the ambiguities and tensions inherent in innovations that appear perfectly reasonable when viewed in specific, isolated situations. A more modest evaluation industry would be aware of such ambiguities and tensions.
An important task is to attend to the constitutive effects of evaluation machines, such as performance indicator systems, in practice. Compared to the widespread belief in the value of evaluation machines as political and administrative regimes, relatively little is known about their actual impact on social realities. Although this book, like Modell (2004), suggests that performance measurement does not work according to its own myth, much remains to be learned about what else is happening. It would be helpful to know more about the socially constructed nature of the validity of indicators and standards. Evaluation criteria and indicators can be developed with more or less advanced substantive understanding of the relationships among policy, programs, and outcomes in particular fields (Courtney, Needell, and Wulczyn 2004). It would also be informative to map some of the wash-back effects of evaluation machines on larger cultural, organizational, and democratic systems. Of special interest is how knowledge flowing from indicator systems is handled in media discourse in the public arena. Too often, evaluation in general and evaluation machines in particular are introduced on the basis of values and a belief in evaluation as a general good, without any substantial demonstration of the actual effects of evaluation on social realities.
Perhaps evaluators see these processes without seeing them. Perhaps we have reached a point where the design and use of evaluation machines are determined more by the architects and guardians of evaluation machines than by any other sector or agent in society. Perhaps the contemporary philosophy of surveillance serves surveyors, monitors, and auditors more than other groups. Through evaluation systems and evaluation machines, the burden of data collection may decrease, or be streamlined, but only from the viewpoint of an inspector or evaluator.
This does not necessarily mean that evaluation systems are not required to produce enormous amounts of data. The point is that the burden placed on the inspector or evaluator is relieved, either by requiring the inspected organization to produce the data necessary for inspection or by raising the level of abstraction at which the inspector or evaluator operates. Whether “the system is in place” or not can be turned into an operational question in the eyes of an inspector.
Evaluation machines are based on abstraction and “de-skilling,” meaning an operator of a mechanically functioning evaluation machine may have quite limited insight into the practices under evaluation. The operators of evaluation machines may have less insight into the practice under evaluation than designers of evaluation machines do, and designers may know less about the practice under evaluation than practitioners themselves. Evaluation machines in schools may incorporate some knowledge about quality management, but very little about teaching. Through this abstraction—inspection systems inspecting systems—complexity is reduced considerably. Evaluators no longer need substantial insight into what is evaluated; they can rely on broad and fairly vague assumptions about the virtues of particular organizational recipes (Meyer 1994; Røvik 1998). The legitimacy of the evaluation industry depends on the extent to which citizens and other audiences buy into the taken-for-grantedness of these fashionable organizational recipes.
In evaluation, however, not too much should depend on belief. The time has come to compare the enormous costs of operating evaluation machines with the actual social effects of evaluation, negative as well as positive. Some of the self-congratulatory rhetoric of the evaluation industry may be unwarranted. It is time to consider, as this book has argued, whether the marginal utility of evaluation may be decreasing, and whether there are sometimes good reasons for evaluation fatigue.
If the analysis presented in this book is correct, evaluators should make only modest promises. A rhetoric of endless development, learning, and improvement may be too disconnected from reality to be a trustworthy promise on behalf of evaluation. So, too, may a rhetoric of total quality management, control, and predictability. There is no such thing as total quality (Weick 2000). Management is never total, and neither is quality in a complex world. Although many evaluation machines emphasize defensive quality and prospective control, they not only reduce risk but also reallocate risks (and create them anew) because evaluation machines have their own effects on those who are under evaluation, and because many of their wider effects are not predictable or not consistent with the official purposes of evaluation machines.
Evaluators should be more aware of the social imagery they transmit when they make promises about the wonderful benefits of evaluation. Evaluators should also be aware that today they are no longer just working on the fringes of organizations to create reflexivity. Instead, they come in with mandatory evaluation backed up by the full force of institutionalized bureaucratic machinery. They should admit that, and they should behave with care.
One way to make the evaluation industry more realistic would be to break down some of the barriers between evaluation and the practices under evaluation. Evaluation machines may stand in between evaluators and their understanding of the practices they evaluate.
Good professionals have always evaluated their work, at least informally or tacitly. In reflexive modernization, a central idea was to reconnect users and professionals, programs and effects, planning and reflection, in order to break with the self-containment found among all of them. Reflexivity was to be expanded through feedback loops and dialogue. In the audit society, however, the design of evaluation machines is increasingly separated from the doing of work, and evaluation procedures are abstracted and generalized as they move up the organizational hierarchy toward the managerial layers. Handbooks, manuals, indicators, and evaluation systems constitute a large and complicated social interface between evaluators and the realities they evaluate. For this reason, evaluators become specialists who are not held accountable for the consequences of their evaluations and who never confront those consequences firsthand, because too much social interface in time and space lies between evaluation and its consequences. In an era in which we are losing faith in rational knowledge as such, God-like rationality per se cannot guarantee the good social consequences of evaluation. Evaluators must therefore be confronted, in new, socially embedded ways, with the consequences of their products.
Evaluators should recognize that they are not performing evaluation on isolated evaluands in an empty world. Before evaluation, there was not a social desert without values, norms, and control mechanisms. There was a complex social world with relational and embedded practices (Schwandt 2002, 2003), with historical traditions and expectations, and sometimes imperfect institutions—but there was not nothing. A teacher without formal external evaluation is not necessarily an isolated, anomic, irresponsible, and nonproductive teacher. However, the benefits of evaluation are easier to see if the benefits of all other values, norms, and social regulations are ignored.
Instead, evaluation must pay new attention to qualities that are hard to demonstrate in quality measurement. Some qualities may be local, situational, historic, embedded, or simply unique (Stake 2004). Especially in the era of evaluation machines, evaluation should be more attentive to the value of practices and work. The meaning of work is an important ingredient in the culture of the West, and it is also an important ingredient in well-functioning public organizations. There is in our time practical interest in recruiting well-qualified staff in teaching, health, social work, and research, and in fact in all areas of public organization, and there is sociological and philosophical interest in restoring pride in work and a sense of craftsmanship (Sennett 2008). Following Arendt’s distinction among types of human activity, labor is biologically necessary, and work is purely instrumental; action, however, is where we realize ourselves as human beings in the light of our own aspirations and the recognition of others. Using this distinction, we cannot afford evaluation machines that reduce the meaningfulness of action in what is done by teachers, nurses, policemen, social workers, and all others who carry out important tasks on behalf of society. Evaluation machines in their present form bear too many similarities to the surveillance techniques of early industrialization: time-oriented, abstract, dehumanized, and too separated from the meaning of work itself. We already know the consequences of that philosophy for the meaningfulness of industrial work; there is no need to let contemporary evaluation machines repeat the same mistakes with respect to human services and work in public organizations. It would be counterproductive to undermine responsibility and honor in the name of accountability and effectiveness.
Instead, there is a need for a new philosophy in evaluation working together with the values of good craftsmanship, pride in work, and a sense of recognition.
In our time of evaluation machines, there is no clear functional distinction among accreditation, auditing, quality assurance, and evaluation. I see no particular need in society to carve out and defend the identity of evaluation and evaluators in a narrow sense. That is, at best, a project for evaluators alone.
By choosing evaluation and attending to its changing definitions, I have been able to carve out a particular object of organizational and sociological analysis, while also showing that its boundaries and forms have changed considerably over time. One of the main reasons I have nonetheless retained the term evaluation as an overall headline, however, is the democratic dimension that has been inherent in evaluation from its beginning. The democratic issue and mandate is both inherent and debatable for all the practices I have dealt with under the headline of evaluation, regardless of whether these practices like to identify themselves as evaluation, audit, accreditation, or quality assurance.
“The political” is a particular opening that appears in societies that have discovered that their destiny lies in their own hands—the discovery of societal autonomy, if you will. The political is where society works on itself (Rosanvallon 2009). The political is the self-constitution of society.
Democracy is an answer to how the political can be handled. Democracy is the way in which society handles its self-appropriation. Evaluation therefore fits into a democracy, that is, a society appropriating itself through deliberate and self-conscious inquiry into the effects of its own actions and initiatives. Whether they like it or not, whether they know it or not, evaluators and others like them are embedded in the terrain where society works with itself, and works on itself. This is what democracies do.
Evaluation is thus inherently political. It aspires to help democracy work. Evaluation is—not the least in its constitutive sense—“societing” (Woolgar 2004, 454), because it helps define social realities, and not only describe them. The political and the democratic concern is an ever-present aspect of evaluation (Karlsson Vestman and Conner 2006). It is not a “dark side of life” that should be “expelled” from the “rational, well-planned and well-intended” noble art of evaluation.
However, as Rosanvallon (2009) shows, democratic ambition is full of tensions and ambiguities. It is thus a problematic solution to the question of the political. The dream of the good society is full of inherent vagueness, and it must be so (Rosanvallon 2009, 19). For this reason—and because of the difficulties of establishing any rational and legitimate authority, of representativeness, and of demarcating “the people” who qualify for political participation—there is instability in the very essence of democracy. This analysis is thus very close to the ambiguities pointed out by Castoriadis regarding the tension between rationality and autonomy as key modern principles. There is a necessary element of fiction in the ideas of “people,” “progress,” “justice,” “equality,” etc., which are candidates to be the structuring principles of the democratic constitution. In Rosanvallon’s view, there can be no fixed normative determination of these principles; they will remain “contested concepts” (Gallie 1955). It is necessary to clarify them and develop them according to the moment in the history of democratic nations and according to the situation.
Exactly the same is true for key terms in evaluation such as “means,” “ends,” “effectiveness,” “stakeholders,” “indicators,” “quality,” “values,” “evaluands,” and “use of evaluation”—not to mention evaluation itself! All are ambiguous structuring principles with democratic relevance. However, the meaning of each of these terms is itself a democratic riddle rather than a fixed technological or methodological given. What evaluators do as they work with these terms in practice is therefore also inherently a democratic undertaking, whether they think of it this way or not.
What is true in democracy is also true in evaluation: it is the most fundamental terms that are hardest to define (Rosanvallon 2009, 45). In this sense, an evaluation takes a particular standpoint in relation to a democratic problem; it cannot be merely the imposition of an ideal order (Rosanvallon 2009, 39).
If we understand that democracy is a regime instituting itself, time is an active and constructive variable (Rosanvallon 2009, 40). Evaluation must thus be an unstable democratic striving or ambition; it cannot be a nonnegotiable solution. This is because there is no origin of origins that helps keep the standpoint of standpoints fixed. Democracy is an opening that must remain open for questioning. Any evaluation must reflect this.
Yet there are various simultaneous forms of democracy coupled to different temporalities (Rosanvallon 2009, 44). In evaluation, we have some constitutions that are several hundred years old, we have policies and programs and goals that are years old, and we have various claims and demands for legitimacy that are uncoordinated with each other. And there are multiple relevancies in one particular evaluand in a practical context. Evaluation is full of examples of conflicting heritages and legitimacies.
In a democratic light, however, we should not fall for the temptation to install just one principle of legitimacy or authority, and neither should we replace democratic concerns with technocratic evaluation machines. In fact, it is the resonance between our experiences and our society that we should listen for; it is the resonance between evaluation and our concern for democracy that turns evaluation into a truly democratic phenomenon.
Democracy is alive, though, only to the extent that democratic responsibilities are taken up by people. The political depends on the notion of people living together, sharing a sense of common social destiny, and acting on this feeling. Evaluators have a role to play in highlighting the democratic character of public initiatives, showing that there is society in public activities, because what is done by one part of society is never really isolated from the rest. Evaluation is societing.
Each of the phases analyzed in this book—modernity, reflexive modernity, and the audit society—poses its own specific set of democratic challenges to evaluation. In modernity, a strong emphasis on rationality and progress leads to technocratic mentalities that tend to suppress the political element in evaluation. In reflexive modernity, evaluators give voice to a cacophony of perspectives that do not add up to a common agenda, and a societal perspective vanishes.
Evaluators under reflexive modernity should remember to connect local feedback loops with broader societal concerns. They should remember to connect particular voices with an obligation toward a larger democratic picture. They should not escape into entirely local contexts in order to carry out responsive and participatory evaluation, but return to the public arena to enlighten democratic audiences about their results. They should also remember to take part in discussion about policies, and not only their local ramifications.
The audit society and its evaluation machines present another set of complicated democratic problems. Evaluation criteria that are democratically problematic are often applied in a robotlike manner. Evaluation machines break accountability down into its most atomized, measurable, and manageable parts, but they cannot motivate different partners in society to work together to solve complex and dynamic problems. They sacrifice macroquality for microquality. They also overemphasize defensive quality at the expense of offensive quality. Because they are mandatory and work repetitively in alliance with powerful institutional structures, evaluation machines transform the whole classical utilization problem in evaluation. The most pressing problem may no longer be nonutilization, but the constitutive effects of evaluation machines on a variety of social and professional practices. How evaluation prestructures the reality it claims to measure is an important and often ignored democratic problem, one that becomes even more pressing in the audit society.
As evaluators help build up evaluation capacities, cultures, policies, and machines that regulate evaluation, they should be aware that they are also building institutional structures. In a democratic light, however, institutions support identities (Wildavsky 1987), and democracy presupposes institutional structures that favor identities having a preference for democracy and attention to the democratic perspective on problems in society.
A good thing about democracy, however, is exactly what Rosanvallon calls its historicity, or what Arendt (1950) might call the principle of natality. It is unfinished business, never complete. It never works according to its ideals; it can always be improved. And there are no guarantees.
If we take Rosanvallon’s view of democracy as unfinished business seriously, a new skepticism about evaluation systems, machines, capacity, and evaluation policies should be aired. Although advocates of these ways of streamlining and structuring evaluation are correct in pointing out that it may too often be left to chance and to the caprices of subjective individuals (which may be relatively true under reflexive modernization), there are also dangers in too strict planning of evaluation. If it is inherently democratic and political, and democracy constantly renews itself according to the principles of historicity and natality, then evaluation should also sometimes cut across, neglect, or challenge official definitions of what needs to be counted, measured, and controlled. A policy of evaluation that denies such opening of democracy toward itself sounds too much like organizing in advance what can be said in the democratic debates of the future. For this reason, evaluation may serve the political better if it is not tightly described and regulated in an evaluation policy, and not made automatic by an evaluation machine.
Although I have described in this book the many organizational and social forces that help shape evaluation, it is my hope that I have also portrayed all these forces as social constructions, that is, as the products of human beings. And since my analysis has suggested that different organizational and social principles sometimes overlap, and that the environment in which evaluation unfolds is often inconsistent, ambiguous, or fragmented, there is often space for evaluators to make an argument. In fact, evaluation and its use have a lot to do with making good arguments (Valovirta 2002).
Perhaps to the surprise of some evaluators, these arguments can best be seen in the context of what some philosophers call weak thinking.
Increased awareness of the democratic role of evaluators must, as already mentioned, go hand in hand with a new modesty among evaluators. Evaluators should acknowledge that there is no incontestable anchor in “truth,” “method,” or “objectivity,” nor in any particular normative vision of “learning,” “development,” “effectiveness,” “risk management,” or “quality assurance.”
To be sure, some forms of knowledge are more qualified, elaborate, systematic, and verifiable than others. But the means of controlling, checking, and discussing knowledge are social and contingent rather than God-given. No knowledge is independent of perspective; knowledge is itself contingent and uncertain (Morin 1992). In addition, no knowledge can guarantee its own utilization.
Vattimo (2004) offers a way of talking about these uncertainties. To him, as well as to other philosophers with roots in hermeneutics (Schwandt 2000), interpretation is not a special activity based on a particular algorithm or procedure, but something we do in general as human beings. Our interpretation of the world does not begin with some external objectivity, but with our “thrownness” into particular historical, institutional, cultural, relational, and biographical realities. Our interpretations are bound to our radical historicity (Vattimo 2004, 76). For this reason, there is no absolute list of fundamental categories describing how our being should be understood. The origins of the institution of being turn out to be relative to changing epochs (Vattimo 2004, 13). We must recognize that any interpretive perspective is partial, unstable, and temporary.
Modernity has created a quite particular social space for the making of experiences. It has brought division of labor, specialization, segmentation of our spheres of values and of existence, fragmentation of our sense of reality, and many discontinuous forms of life. On top of this social order comes an increased pace of social change (Vattimo 2004, 86–87).
If we take modernity seriously, we can see that perspectives are partial; we can see that our interpretations of earlier epochs and of societies other than our own are tied to our own perspectives. If we cannot see this, we need only consult ethnography, anthropology, philosophy, and the sociology of knowledge. In other words, there is in modernity an inherent pluralism and antifoundationalism that is not coincidental. It is in fact the very modernist belief in clarity and transparency that has had this consequence. A mythical belief it turned out to be.
Philosophical hermeneutics, says Vattimo, cannot respond to this situation by insisting on a metaphysical foundation that can be used as a critical platform against modernity. If philosophical hermeneutics is to take its own theory about the historicity of interpretations seriously, it must step into modern conditions or, perhaps more precisely, recognize that it is already there and accept the loss of metaphysical absolutes. Postmetaphysical thinking, a thinking without reference to absolutes, must recognize that it has no legitimation outside of its sociohistorical being. It must thus see itself as both a result of and a response to modernity. On the other hand, hermeneutics occupies a winning position precisely in how it reflects on its own historicity and on the pluralism of modern society.
The departure from the ideals of progress and the continued perfection of human life in fact amounts simply to recognizing that modernity was not a departure from metaphysics, as it believed itself to be, but really the high point of metaphysics. Even if modernity claimed to cancel the relevance of all metaphysical imaginaries for collective social life (Schanz 1990), it in fact inaugurated a grand new metaphysical tale of autonomy, rationality, and progress that was merely the “last great objectivist metaphysics” (Vattimo 2004, 154). The fall of this metaphysics should not be bemoaned, however. It opens up new social opportunities without an imposing master principle or project.
Thinking under these conditions must be without neurosis and remorse. It must be “weak,” refusing to insist on its own truth. This is why Vattimo’s philosophy has been called weak thinking. This type of philosophy has itself democratic propensities, and it does not attempt to fall back onto stronger, more comforting, more authoritarian, and more threatening versions of the real (Vattimo 2004, 20).
Weak thinking is thus thinking against neorigorism. Weak thinking takes away the charms of authoritarian tendencies, be they ethnic, military, social, ideological, or managerial. This is no minor accomplishment. With Vattimo, ethics, politics, law—and, may I add, evaluation!—cannot be legitimized by means of a particular content or procedure. Vattimo does not accept Habermasian rules for a dominance-free discourse (which would lead one to nullify the statements of others if they were made under nondominance-free rules). Following Vattimo, the rules of politics, law, ethics, and evaluation are earthly, pedestrian, context-bound, and historical, much like Rosanvallon’s democracy. But they do not embody one particular ideal procedure or content. Nor do they impose their own image of social reality on those realities in violent or authoritarian ways.
Truly, evaluators can help do good things. They can increase the sensitivity of practitioners and decision makers to the effects and side effects in society of what they do. They can question political, organizational, and ideological assumptions and demonstrate how initiatives work in practice. They can increase society’s sensitivity to the macroquality of social interventions. They can connect what is otherwise disconnected in hypocritical organizations (such as goals and activities, promises and deeds). They can remind society of its own ambitions and its mistakes in achieving them.
But all this should take place in the context of weak thinking. What would weak thinking mean for evaluation? It would mean that any approach in a given evaluation is aware that it has no metaphysical anchor and cannot subscribe to one pregiven form of legitimacy only. It is also aware that it embodies one perspective among several possible perspectives. Weak thinking in evaluation also implies modesty about promises that evaluation will have only good consequences. An evaluator knows that results are debatable and that the uses of evaluation are in the hands of an imperfect human collective. A hermeneutically conscious evaluator also knows that evaluation helps shape reality in unforeseen ways, and that the perspectives of others are democratically respectable. Weak thinking also implies respect for those social realities that are difficult to represent within a given evaluation. Furthermore, evaluators with weak thinking would incorporate a healthy dose of hesitation, care, modesty, inconsistency, and lack of total dominance in any evaluation. Responsible evaluators would question the democratic legitimacy and use of evaluation machines as a necessary dimension in the very construction of such machines. They would also lay bare those aspects of evaluation ideologies that are derived from myths inherent in modern societal and organizational ideals.
A good evaluator knows that although under present social conditions the demands for evaluation are increasing, the need for wisdom and modesty in evaluation is greater than ever before.