Bright and early at the Presidential Strand, 'Are We Part of the Problem or the Solution?: Using Systems Approaches to Speak Truth(s) to Power(s) for Better or Worse.' #Eval18
Bob Williams: How could we make a system that speaks truth to power? We are trying to do that in real time this morning. We need to focus on boundaries. Setting the boundary is what speaks truth to power.
Many people think holism means thinking about and including everything. That is not a system; that is cosmology (the history and study of the universe), and even cosmologists have boundaries.
Systems thinking is about being incredibly thoughtful about what's in and what's left out.
For that, we turn to a specific type of systems thinking, Critical Systems Heuristics. Four questions about the boundary area:
What is the purpose of the evaluation?
Who controls the evaluation?
What knowledge is honored (or rejected)?
How does the evaluation gain legitimacy?
Robin Lin Miller on PURPOSE, when is it appropriate to examine the broader sociopolitical systems and issues that influence the evaluand:
Short answer: When wouldn't you?

Longer answer: We have to do so whenever we are working in situations of contested human rights, or when the evaluand (what is being evaluated) is about social or structural change.
After this presentation we as audience members were asked to consider this question in light of someone next to us. I spoke with Tom Schwandt about this question.
Miles McNall on CONTROL: 'When should and when shouldn't the evaluator/evaluation team share decision-making power with others? With whom, and why?'
Scenario A: external control of an international development program:
1. Defining questions: rarely up to the evaluator; you may never know where they came from.
2. Approach to evaluation: tends to rest mainly with the client.
3. Deciding methods: sometimes specified in the RFP, but mostly up to the evaluator.
4. Analysis: offloaded onto the evaluator; if you try to introduce co-analysis, that is mind-blowing for many clients.
5. Dissemination: almost always the client. Products go out, and sometimes you don't know where they go; even the evaluation manager might not know.
Scenario B: a small local non-profit needs an evaluation for funder compliance:
1. Questions: the client.
2. Approach: the evaluator.
3. Methods: mostly the client; there often isn't enough money.
4. Analysis: mostly the evaluator, which is more efficient.
5. Dissemination: all the client.
What ought to be? Rarely am I involved in decision-making at either end of the evaluation. Sharing power is more probable in Scenario B, but we need to move away from being purely external.
On to KNOWLEDGE: What knowledge matters most for the evaluation? How is this decided?
Speaking of a specific configuration: each person was paired with a senior evaluator, with additional graduate and undergraduate students; knowledge was distributed, and all had something to learn from each other.
To say you need one specific type of knowledge is not true. The knowledge needed constantly morphs.
Heather Britt on LEGITIMACY: 'How can an evaluation/evaluator consider the negative consequences of the evaluation itself?'
Drawn from the most ethically challenging evaluations I have been involved with: one evaluation was experienced as biased and unjust, and the evaluation's role was to gather evidence against those being evaluated.
(This!) "To throw a cloak of truth and transparency in a process in which the decision had already been made."
This had consequences for the evaluation process, but also evaluation with a capital E.
Gather info about political landscape.
Advocate for participatory approaches, from start to finish.
Take appreciative approaches, especially in situations where there is already a negative bias.
As external evaluators we come to this knowledge at a very inconvenient point in the process, often when in country, thousands of miles from home, when it is hard to pull out. We can still make the choice to pull out of an evaluation.
Ask about consequences at the inception stage. We need to ask aloud who the winners and losers are. That signals we are not up for rubber-stamping. You may lose the evaluation opportunity, but that might not be a bad thing.
Bob bounds the Q&A by grounding it in the four questions of setting boundaries in an evaluation system. How do we get those in control of a system to speak truth to power; how do we keep them honest? What needs to be in and out to inform decision-makers and achieve speaking truth to power?
Who ought to be in the system itself in order to keep legitimacy? This rests on the notion that underpins this approach to systems: those inside the system cannot legitimize the system. The evaluation cannot legitimize itself; only those affected by it can.
Dominica McBride asks about the role of 'love' for evaluators who question boundaries in systems...
Miles McNall: On deciding what and whether to make your opinion known: I moved from external to internal, and I realized that my love as a professional is the elucidation of empirical information. I love that. So I could contribute to critical questions about that.
Heather: I have a relationship with recommendations…there is a lot I don't know about as an external evaluator. Here is an example: civil war had broken out after the start of a program, and this had more than a few implications for the program....
...I was passionate about what grantees and sub-grantees were doing to protect and expand civic space (referring back to love). However, the ruler being used to measure them was the original plan of the project...
...They were being measured against the plan made before their country erupted in civil war. What I did as an evaluator was to shift the design of the evaluation to include emergent outcomes. What were they doing in this dynamic space?
...This may seem technical, but it was moved by love. Did I make recommendations? I can't remember, but I recall I made sure stories were shared.
Tom Schwandt closed us out and helped us consider how these questions invite us to think differently about boundary choices and their ethical implications:
1. We have an ethical responsibility. Decisions about boundaries deal with the limits of understanding, which means we are dealing with ethics. We can't abdicate responsibility under the premise of 'expertise'.
2. Knowledge and power go together and should be thought of as knowledge/power. Dominant models are connected to the kinds of knowledge and power of their designers and coupled to valuing schemes. We rarely examine the valuing schemes in which those things are conceived.
At large multinational agencies, many never examine the water in which they swim, which is Results-Based Management. How do we intervene in that situation to speak truth to power there?
(Let me take live-tweeter privilege to point toward a group of practitioners, scholars, and evaluation professionals who have been wrestling with this question. The @bigpushforward convened some years ago and published a book in 2015...)
(It's called 'The Politics of Evidence and Results in International Development: Playing the game to change the rules?' and it is an essential read for evaluators who want to work in this space and ask critical questions about boundaries.)
Here's the link: amazon.com.au/Politics-Evide…
3. What knowledge? Not about tools or approach; it should be about evaluative reasoning: how to intervene, and how to think about arguments for and against something. How do we help others evaluate the merits of arguments in discussion?
4. Whose knowledge matters? We tend to think of knowledge production divorced from knowledge producers. Is the knowledge we are producing from the privileged position of the tenured academic or self-employed consultant, or from the position of those who live the realities of the intervention?
We have to come to terms with privileging the knowledge of one group vis-à-vis another. This doesn't always mean we privilege only the views of those subject to the intervention.
(Another live-tweet privilege: to this end, one of the most prolific thought leaders in sustainable development, based at @IDS_UK, Robert Chambers, published an amazing open-access book in 2017 titled "Can We Know Better?" and you can find it here: developmentbookshelf.com/doi/book/10.33… )
5. What does it mean to establish the legitimacy to engage in critique? When we talk about legitimacy we talk about competencies, or social advocacy, or (so-called) value-neutral social sciences.
How do we critique the very system in which we intervene, and on what basis do we establish the legitimacy of our critique? Can we grasp and seize the professional imperative that evaluation is a critique and not simply an appraisal and assurance?
(Again, to this question: the title of the book I shared alludes to this, but evaluators should consider their work not just as determining the merit, worth, and value of interventions, but recognize that the process is itself an intervention in the system.)
(With that, evaluators actually hold a great deal of power in brokering the process of speaking truth to power. #Eval18)
(On the so-called value-free social science that Tom referred to, implicit in the RBM and EBM agendas, there is a great article as it pertains to peacebuilding by @rogermacginty, one of the principal researchers with whom I work at @everydaypeacein.)
(It's called 'Routine Peace: technocracy and peacebuilding' and you can find it here: journals.sagepub.com/doi/abs/10.117… )
(In it we learn that RBM and EBM arise from the 'New Public Management' movement, which is an “elision of an international institutional and ideological system that has valorized efficiency and the world-view of business organizations”)
(This chimes with a line in the 'Politics of Evidence' book that expounds that NPM comprises “corporate sector practices designed to maximize shareholder profit and eschewing any explicit ideological commitment” )
(The reason I am pointing to this is to follow the point Tom Schwandt makes about not questioning the water we swim in, something we should consider as we ruminate on the political project that @MQuinnP reminds us of in framing Evaluation as a Science)
(That is, the terms 'evaluation science' and 'systems science' convey a certain appeal to the authority of the natural sciences, or some uncovering universal truths about the natural world, and with that the attendant notion of 'objectivity')
(Evaluators who want to apply Critical Systems Heuristics should note that systems thinking is often the hegemonic paradigm that uses natural sciences to import many values that are often unquestioned. This is important to consider when speaking truth to power.)
(Prof. Chris Mowles speaks to this phenomenon "...managerialism derives much of its potency from the way that it borrows extensively from systems theories and presents management as a technical, rational discipline based on scientific principles....)
(...Systems theories have proved highly effective in biology and engineering science, and in adducing such scientific heritage...)
(...managerialist discourse lays claim to rationality and effectiveness. It provides the conceptual underpinnings for the promise of wholesale transformation.")
In this, evaluators who employ Critical Systems Heuristics can use the 'discourse capital' of systems and complexity sciences as a Trojan Horse to question taken-for-granted knowledges and change the dominant systems of evaluation when speaking truth to power. #Eval18
At the end of this session we were given a call to action to employ all 12 questions in Critical Systems Heuristics in our respective evaluation practices. You can learn more here: betterevaluation.org/en/plan/approa…
Thread by Zach Tilton.